Center for Integrated Research Computing
CIRC offers University researchers state-of-the-art computing technology and software, along with support for the use and implementation of these resources. CIRC currently maintains about 240 TeraFLOPS of high-performance computing systems, including the University of Rochester's flagship IBM Blue Gene/Q system.
IBM Blue Gene/Q: 1,024 nodes, 16,384 CPU cores, 16 TB RAM, 209 TeraFLOPS
The Blue Gene/Q system at the University of Rochester consists of one rack of the 209 TFLOPS IBM Blue Gene/Q massively parallel processing (MPP) supercomputer, one IBM System p (POWER7) front-end node, one IBM System p (POWER7) service node, and 4 IBM System x I/O nodes connected to a 400 TB IBM GPFS System Storage solution. In total, the system provides 1,024 nodes, 16,384 CPU cores, 16 TB of RAM, and 400 TB of storage. Each node contains a 16-core IBM A2 processor with 32 MB of cache and access to 16 GB of RAM.
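The aggregate core and memory figures follow directly from the per-node numbers quoted above; a minimal Python sketch of that arithmetic, using only values from this page:

```python
# Blue Gene/Q aggregates derived from the per-node figures quoted above.
nodes = 1024              # one rack of Blue Gene/Q
cores_per_node = 16       # 16-core A2 processor per node
ram_gb_per_node = 16      # 16 GB of RAM per node

total_cores = nodes * cores_per_node           # 16,384 CPU cores
total_ram_tb = nodes * ram_gb_per_node / 1024  # 16 TB of RAM

print(f"{total_cores:,} cores, {total_ram_tb:.0f} TB RAM")
```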
IBM Blue Gene/P: 1,024 nodes, 4,096 CPU cores, 2 TB RAM, 14 TeraFLOPS
The Blue Gene/P system at the University of Rochester consists of one rack of the 13.9 TFLOPS IBM Blue Gene/P massively parallel processing (MPP) supercomputer, one IBM System p front-end node, one IBM System p service node, and 8 IBM System x I/O nodes connected to a 200 TB IBM System Storage solution. In total, the system provides 1,024 nodes, 4,096 CPU cores, 2 TB of RAM, and 180 TB of storage. Each node contains a quad-core PPC450 processor with 8 MB of cache and access to 2 GB of RAM.
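For a rough sense of how the two Blue Gene generations compare, dividing each system's peak performance by its core count gives an approximate per-core figure. The sketch below uses only the peak and core counts quoted above; the per-core numbers are approximations, not measured results:

```python
# Approximate per-core peak performance, derived only from the figures above.
systems = {
    "Blue Gene/Q": {"peak_tflops": 209.0, "cores": 16_384},
    "Blue Gene/P": {"peak_tflops": 13.9,  "cores": 4_096},
}

for name, s in systems.items():
    gflops_per_core = s["peak_tflops"] * 1000 / s["cores"]
    print(f"{name}: ~{gflops_per_core:.1f} GFLOPS per core")
# Blue Gene/Q: ~12.8 GFLOPS per core
# Blue Gene/P: ~3.4 GFLOPS per core
```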
BlueHive: 152 nodes, 1,496 CPU cores, 3.3 TB RAM, 16 TeraFLOPS
The BlueHive cluster is CIRC's primary Linux cluster for demanding computations. It is an IBM BladeCenter/iDataPlex solution consisting of one front-end node, one management node, and 152 compute nodes housed in multiple server racks. Ten of these nodes are equipped with a total of 20 GPU cards, providing an additional 10 TeraFLOPS of computational performance.
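A small sketch of how the GPU figures quoted above break down per node and per card (illustrative arithmetic only, using just the counts on this page):

```python
# BlueHive GPU breakdown derived from the counts quoted above.
gpu_nodes = 10     # nodes equipped with GPUs
gpu_cards = 20     # total GPU cards across those nodes
gpu_tflops = 10.0  # additional performance from the GPUs

cards_per_node = gpu_cards / gpu_nodes    # 2 GPU cards per GPU-equipped node
tflops_per_card = gpu_tflops / gpu_cards  # ~0.5 TFLOPS per card

print(f"{cards_per_node:.0f} cards/node, ~{tflops_per_card:.1f} TFLOPS per card")
```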
The NX Cluster provides an environment for research computing tasks that do not have high performance demands, including software development, instruction, experimentation, and training. It also provides users with a resumable graphical interface to CIRC systems.
In addition to the systems described above, CIRC maintains over 640 TB of GPFS storage for computational output.

CIRC's services to researchers include:
- Access to high-performance computing systems
- Research collaboration
- Assistance in grant writing
- Individual and group training
- Monthly symposia
- Contributed hardware for priority access