High Performance Computing (HPC)

The High Performance Computing systems for structural calculations consist of two symmetric multiprocessor (SMP) systems, one Tesla GPU system, and two Beowulf clusters, all running the Linux operating system.

SMP Systems

Executor:  This 48-core computer consists of 4 12-core 2.6GHz Xeon CPUs and has 256GB of shared memory for jobs that need to use large blocks of memory at once.  The machine also has 2TB of local SSD space for high-speed local processing.

Chimera:  This 48-core computer consists of 4 12-core 2.1GHz Opteron 6172 CPUs and has 192GB of shared memory for jobs that need to use large blocks of memory at once.  The machine also has 400GB of local disk space.
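
Because these machines run Linux, the core and memory figures above are easy to verify on the node itself. A minimal sketch in Python (illustrative only; it assumes a standard Linux /proc filesystem):

```python
import os

# Logical CPU count as seen by the OS (counts hyperthreads if enabled,
# so it may exceed the physical core counts quoted above).
print("CPU cores:", os.cpu_count())

# Total installed memory, read from /proc/meminfo on Linux.
with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith("MemTotal:"):
            kib = int(line.split()[1])  # value is reported in KiB
            print(f"Total RAM: {kib / 1024**2:.1f} GiB")
            break
```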

GPU Systems

Archer:  This 48-core computer consists of 2 24-core Intel Xeon CPUs and has 256GB of shared memory for jobs that need large amounts of memory and GPU-accelerated resources.  It has two Tesla K40 GPUs, for a total of 5760 CUDA cores running at 875MHz and 24GB of GPU memory.  The machine also has 1TB of fast solid-state local scratch space.
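
The 5760-core figure follows directly from the published per-board Tesla K40 specification of 2880 CUDA cores and 12GB of GDDR5; as a quick check:

```python
# Archer GPU totals, derived from the published Tesla K40 spec
# (2880 CUDA cores and 12GB GDDR5 per board).
K40_CORES = 2880
K40_MEM_GB = 12
BOARDS = 2

print(BOARDS * K40_CORES)   # 5760 CUDA cores
print(BOARDS * K40_MEM_GB)  # 24GB GPU memory
```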

Beowulf Clusters

Ultron:  Ultron consists of 10 1U compute nodes and a 3U head node with solid-state storage.  Compute nodes have dual 14-core 2.4GHz Intel Xeon E5-2680 v4 CPUs with 256GB of RAM.  All compute nodes have 512GB SSD drives as system/scratch space.  Two nodes contain NVIDIA Tesla K80 GPUs for CUDA-accelerated applications.  Cluster communication is via a 56Gb/s FDR InfiniBand fabric.  In total there are 280 Xeon CPU cores with 2.56TB of RAM, and 9984 GPU/CUDA cores with 48GB of GDDR5 memory.  The cluster has a theoretical peak performance of 28.212 TFLOPS for CPUs and GPUs combined.
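
The 28.212 TFLOPS figure can be reproduced from the standard peak formula, peak = cores x clock x FLOPs per cycle. Two inputs below are assumptions rather than numbers stated on this page: 16 double-precision FLOPs per core per cycle for the Broadwell Xeons (AVX2 with FMA), and NVIDIA's 8.73 TFLOPS single-precision boost peak per K80 board.

```python
# Ultron theoretical peak, reproducing the quoted 28.212 TFLOPS.
# ASSUMPTIONS (not stated above): 16 DP FLOPs/cycle/core on the
# Xeons (AVX2 FMA), and 8.73 TFLOPS single-precision boost peak
# per Tesla K80 board (NVIDIA's published figure).

cpu_cores = 10 * 2 * 14                      # 10 nodes x dual 14-core = 280
cpu_tflops = cpu_cores * 2.4 * 16 / 1000     # 10.752 TFLOPS
gpu_tflops = 2 * 8.73                        # 17.46 TFLOPS
print(f"{cpu_tflops + gpu_tflops:.3f} TFLOPS")  # 28.212
```

Note that the sum mixes a double-precision CPU peak with a single-precision GPU peak; a K80's double-precision peak is considerably lower.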

DS2 (Deathstar II):  DS2 consists of 15 1U compute nodes and a 3U head node.  Ten compute nodes have dual 12-core 2.6GHz Opteron 6344 CPUs with 64GB of RAM, while the other five have quad-socket 12-core 2.1GHz Opteron 6172 CPUs with 192GB of RAM.  All compute nodes have 400GB hard drives as system/scratch space.  Two of the 24-core nodes also contain one GTX 670 GPU card for CUDA calculations.  The cluster is tied together with 10 Gigabit Ethernet running at full duplex with jumbo frames enabled.  In total there are 480 Opteron cores and 1.6TB of RAM.  It has a theoretical peak performance of 2.256 TFLOPS for the CPUs alone.
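
The 2.256 TFLOPS figure is consistent with the same formula at 2 FLOPs per core per cycle (one multiply plus one add), again an assumption rather than a documented number:

```python
# DS2 theoretical CPU peak, reproducing the quoted 2.256 TFLOPS.
# ASSUMPTION: 2 FLOPs/cycle/core (scalar multiply + add); counting
# the Opterons' vector units would give a higher figure.

cores_6344 = 10 * 2 * 12    # dual-socket Opteron 6344 nodes: 240 cores
cores_6172 = 5 * 4 * 12     # quad-socket Opteron 6172 nodes: 240 cores

peak_gflops = cores_6344 * 2.6 * 2 + cores_6172 * 2.1 * 2
print(peak_gflops / 1000, "TFLOPS")  # 2.256
```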