Intel 14 has arrived!

The Institute for Cyber-Enabled Research and the High Performance Computing Center are pleased to announce the installation of our latest computer cluster, "Intel 14." This purchase includes 220 nodes with 4,400 CPU cores and between 64 GB and 256 GB of memory per node. Based on the next-generation Intel Xeon Ivy Bridge E5-2670v2 processors, these new compute cores are 2-3 times faster than our existing system, significantly shortening time to discovery. All of the compute nodes are tied together with the latest low-latency 56 Gb/s FDR InfiniBand connections.

In addition to the more powerful CPUs and the high-speed network interface, the new system includes 80 NVIDIA K20 GPUs and 56 Xeon Phi 5110P accelerator cards. Taken together, this new hardware is theoretically 10 times more powerful than all previous HPCC systems combined, and it would put the HPCC's overall compute capability within the top 500 fastest supercomputers in the world. This extra capacity will significantly improve throughput, and the scheduling system is being adjusted to allow for bigger jobs and shorter queue times. The updated HPCC system is specifically designed to target the following research workflows:

Large Memory Jobs (Bioinformatics, Big Data Analytics)

Augmenting the HPCC's existing set of 2 TB compute nodes, all nodes in the new Intel 14 cluster have at least 64 GB of memory. In fact, 24 nodes have 256 GB of memory and 64 nodes have 128 GB of memory.

CPU Intensive Jobs

The new Intel Xeon Ivy Bridge processors support the AVX instruction set, which allows for significant improvements in the vectorization of compiled code. It is not uncommon to see 2-3 times speedups over intel10 hardware from simply recompiling a program. Additionally, the new cluster more than doubles the total number of cores available on the HPCC.
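
As a minimal sketch (the function and compiler flags below are illustrative, not HPCC-specific settings), a simple loop like this can be auto-vectorized for AVX just by recompiling with an AVX-enabled optimization flag:

    #include <stddef.h>

    /* Illustrative only: recompiling with an AVX flag, e.g.
     *   gcc -O3 -mavx ...   or   icc -O3 -xAVX ...
     * lets the compiler pack eight single-precision values into each
     * 256-bit vector register when it vectorizes this loop. */
    void saxpy(size_t n, float a, const float *x, float *y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }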

Large Shared Memory Jobs (ex. OpenMP)

Each new node has 20 CPU cores, allowing for higher core-count shared memory jobs. Additionally, users familiar with OpenMP may also be able to take advantage of offloading some of the work to the new Intel Xeon Phi accelerator cards.
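
For readers new to OpenMP, the following is a minimal sketch (the loop and sizes are illustrative) of a shared memory job whose work is spread across the cores of a single node:

    #include <stdio.h>
    #include <omp.h>

    /* Minimal OpenMP sketch: loop iterations are divided among the
     * threads on one node, all sharing the same memory. Compile with an
     * OpenMP flag such as -fopenmp (GCC) or -openmp (Intel compiler). */
    int main(void)
    {
        const int n = 1000000;
        double sum = 0.0;

        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += 1.0 / (i + 1);

        printf("threads available: %d, sum = %f\n",
               omp_get_max_threads(), sum);
        return 0;
    }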

Large Shared Network Jobs (ex. MPI)

The low-latency 56 Gb/s FDR InfiniBand connections enable faster inter-node communication. Also, with the larger total number of CPUs in the system, the HPCC is increasing the maximum number of computing cores that an HPC user can utilize from 144 cores to 384 cores. Priority is given to larger jobs that take full advantage of the new hardware and are not easily run on other systems.
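
As a minimal sketch of this kind of distributed memory job (the launch command and per-rank work are illustrative), each MPI rank below computes a partial result, and the results are combined across nodes over the InfiniBand fabric:

    #include <stdio.h>
    #include <mpi.h>

    /* Minimal MPI sketch: launched across many cores with a command
     * such as "mpirun -np 384 ./a.out" (illustrative). Each rank does a
     * piece of the work; MPI_Reduce combines the pieces on rank 0. */
    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = (double)rank;   /* stand-in for real per-rank work */
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("ranks: %d, total: %f\n", size, total);

        MPI_Finalize();
        return 0;
    }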

General Purpose Graphical Processing Unit (GPGPU) Jobs

The new NVIDIA K20 cards are significantly faster than the NVIDIA C1060 cards that are still available in the gfx10 hardware, offering more than 16 times the double precision performance.

Hybrid Parallelization

For the first time, the HPCC can accommodate our most advanced users who want to run hybrid jobs that combine accelerators, shared memory, and/or shared network parallelization. This type of software is on the cutting edge of research and will likely be required for research code to take advantage of the world's biggest computers now and into the future.
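
A minimal sketch of one common hybrid pattern is shown below, assuming one MPI rank per node with OpenMP threads filling the cores within each node; accelerator offload (GPU or Xeon Phi) would typically be layered inside the threaded region and is omitted here for brevity:

    #include <stdio.h>
    #include <mpi.h>
    #include <omp.h>

    /* Hybrid MPI + OpenMP sketch (illustrative): ranks communicate over
     * the network while threads share memory within a node. */
    int main(int argc, char *argv[])
    {
        int provided, rank;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local = 0.0;
        #pragma omp parallel for reduction(+:local)
        for (int i = 0; i < 1000000; i++)
            local += 1.0 / (i + 1 + rank);   /* stand-in for real work */

        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("combined result: %f\n", total);

        MPI_Finalize();
        return 0;
    }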

In 2010, Central Michigan University was the first institution to invest in the Michigan State University HPCC, establishing iCER as a regional center in scientific computing. With this current hardware purchase, iCER has established partnerships with two additional institutions: Kettering University and the USDA. All three institutions contribute funds to the HPCC, which allows us to construct a larger and more efficient system than any individual organization could build on its own. iCER is in negotiations with two other investor institutions and is actively seeking other partnerships to expand our regional influence and research capabilities.