The Center for the Neural Basis of Cognition (CNBC) at Carnegie Mellon University (CMU) maintains its own computer facilities, providing faculty, staff, and students with support, access to state-of-the-art equipment, and high-speed network access.
The Center maintains multiple servers located in the CNBC machine room. All servers have redundant power supplies and Uninterruptible Power Supplies (UPS), ensuring continued operation for several hours without utility power. A 70 TB, enterprise-grade, RAID 6 disk storage system provides space for users to store their data; files can be accessed from users' local computers over CMU's network. The files are incrementally backed up to a separate server and retained for three months. A server with 40 TB of enterprise-grade RAID 5 disk storage was installed in October 2014 to provide the CNBC with incremental desktop/laptop backup using CrashPlan PROe software, a multi-platform (macOS, Windows, Linux, Solaris) enterprise solution. In June 2016, a 100 TB, enterprise-grade disk storage system using ZFS RAID 6 was built to store data for the MICrONS project, funded by the Intelligence Advanced Research Projects Activity (IARPA).
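To illustrate the kind of snapshot-based protection ZFS provides, the sketch below shows typical snapshot commands. The pool and dataset names (`tank/data`) and the file name are hypothetical, not the CNBC's actual configuration:

```shell
# Hypothetical dataset name; real pool/dataset names vary per server.
# Create a read-only, point-in-time snapshot of the data partition.
zfs snapshot tank/data@2016-06-01

# List existing snapshots on the pool.
zfs list -t snapshot

# Recover a single file by copying it out of the hidden snapshot directory,
# without rolling back the whole dataset.
cp /tank/data/.zfs/snapshot/2016-06-01/results.csv /tank/data/results.csv
```

Because snapshots are copy-on-write, they consume space only as the live data diverges from the snapshot, which is what makes frequent snapshots practical alongside the incremental backups described above.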
The CNBC Cluster (pictured above) is shared with CMU's Psychology Department and is located in the CMU School of Computer Science (SCS) machine room. The SCS facility has 24/7 support and a team of HPC experts that works closely with the CNBC computing administrator. The cluster currently occupies two 19″ racks, with power distribution units (PDUs) and an Uninterruptible Power Supply (UPS) protecting the essential components from loss of the primary power source. The cluster consists of 27 nodes, 392 CPUs, and 12 NVIDIA GeForce Titan X 12 GB cards used for GPU processing. These resources are used for research, imaging, modeling, and simulations. The cluster was initially built in July 2010 and upgraded in July 2012. In June 2015 it was reconfigured, upgraded, and merged with the CMU Psychology Department's cluster. The most recent upgrade occurred in June 2016, when a second rack, PDU, and Ethernet and InfiniBand switches were added to accommodate the expanding cluster. The upgrade also included three new nodes with four GPU cards each. The combined system now includes a total of 85 terabytes of shared disk space and 1.6 terabytes of RAM. Each node is connected via Quad Data Rate (QDR) InfiniBand, and the cluster is accessible through the Carnegie Mellon University network. The core operating system of the cluster is ROCKS+, a CentOS-based distribution with specialized packages for parallel and distributed computing that provides a flexible, stable working environment for the machine's core applications. The Portable Batch System (PBS) is used for job scheduling and to allocate computational tasks (i.e., batch jobs) among the available computing resources. Users' home directories are incrementally backed up to SCS's tape backup system. The data partition is configured using RAID 6 and the ZFS file system, with snapshots enabled for additional data protection.
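To give a sense of how work is submitted through PBS, a minimal batch script might look like the following. The job name, resource requests, and program are illustrative placeholders, not CNBC-specific settings (actual queue names and limits are defined by the cluster administrators):

```shell
#!/bin/bash
#PBS -N example_job           # job name (hypothetical)
#PBS -l nodes=1:ppn=4         # request 1 node with 4 processors
#PBS -l walltime=01:00:00     # one-hour wall-clock limit
#PBS -j oe                    # merge stdout and stderr into one file

cd $PBS_O_WORKDIR             # start in the directory the job was submitted from
./my_simulation --input data.txt   # placeholder program and arguments
```

The script would be submitted with `qsub script.pbs`, and `qstat` reports the job's position and status in the queue while the scheduler allocates it to available nodes.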
Software packages used on the cluster include:
The cluster is kept up to date and maintained by SCS Facilities. David Pane, Manager of Computational Resources, is on hand to answer any questions or concerns that users may have about the cluster.
If you are interested in using the CNBC cluster for computational research, please contact David at email@example.com. A small maintenance fee is associated with access to the cluster; it helps the CNBC make software and hardware improvements as technology advances. The cluster application form, along with policies and instructions, can be found on our twiki.