The HPC Advisory Council also serves as a community-driven support center for HPC end users, providing the following capabilities:

High-Performance Center Overview

The HPC Advisory Council High-Performance Center offers an environment for developing, testing, benchmarking, and optimizing products based on clustering technology. The center, located in Sunnyvale, California, provides on-site technical support and enables secure sessions on site or remotely.

The High-Performance Center provides unique access to the latest system, CPU, and InfiniBand/Ethernet networking technologies, often before they reach public availability, and serves as a development, testing, and tuning environment for applications.

The clusters use a 'Fat Tree' (or Constant Bisectional Bandwidth, CBB) network architecture to construct non-blocking switch configurations. A Fat Tree is a switch topology in which non-blocking crossbar switch elements with a relatively low port count are combined into a larger fabric that remains non-blocking while supporting a much greater number of endpoints. Full Fat Tree networks are a key ingredient in delivering non-blocking bandwidth for high-performance computing and other large-scale compute clusters.
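How far such a fabric scales follows directly from the crossbar radix. Below is a minimal sizing sketch in Python (the function name and printed summary are illustrative only, not part of any High-Performance Center tooling) for a two-level non-blocking Fat Tree built from fixed-radix crossbar elements such as the 36-port Switch-IB 2 SB7800 and SwitchX SX6036 switches used in the clusters below.

    def two_level_fat_tree(radix):
        """Size a two-level non-blocking Fat Tree built from identical
        crossbar switches that each expose `radix` ports.

        Every leaf switch splits its ports evenly: radix/2 down to hosts
        and radix/2 up to spine switches.  Every spine switch dedicates
        one downlink per leaf, so at most `radix` leaves fit, and the
        fabric retains full bisectional bandwidth (non-blocking).
        """
        hosts_per_leaf = radix // 2
        leaf_switches = radix
        spine_switches = radix // 2
        return {
            "max_hosts": leaf_switches * hosts_per_leaf,
            "leaf_switches": leaf_switches,
            "spine_switches": spine_switches,
            "total_switches": leaf_switches + spine_switches,
        }

    if __name__ == "__main__":
        # 36-port crossbars, e.g. the SB7800 / SX6036 class of switches
        print(two_level_fat_tree(36))
        # -> {'max_hosts': 648, 'leaf_switches': 36, 'spine_switches': 18, 'total_switches': 54}

With 36-port elements, a two-level Fat Tree tops out at 648 non-blocking endpoints, well above the node counts of the clusters listed here.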


Currently Available Systems


Venus


  • Supermicro AS-2023US-TR4 8-node cluster
  • Dual Socket AMD EPYC 7551 32-Core Processor @ 2.00GHz
  • Mellanox ConnectX-5 EDR 100Gb/s InfiniBand/Ethernet adapters
  • Mellanox Switch-IB 2 SB7800 36-Port 100Gb/s EDR InfiniBand switches
  • Memory: 256GB DDR4 2666MHz RDIMMs per node
  • 240GB 2.5" SSD per node

Helios


  • Supermicro SYS-6029U-TR4 / Foxconn Groot 1A42USF00-600-G 32-node cluster
  • Dual Socket Intel® Xeon® Gold 6138 CPUs @ 2.00GHz
  • Mellanox ConnectX-5 EDR 100Gb/s InfiniBand/VPI adapters with Socket Direct
  • Mellanox Switch-IB 2 SB7800 36-Port 100Gb/s EDR InfiniBand switches
  • Memory: 192GB DDR4 2666MHz RDIMMs per node
  • 1TB 2.5" SSD per node

Telesto


  • IBM S822LC POWER8 8-node cluster
  • Dual Socket IBM POWER8 10-core CPUs @ 2.86 GHz
  • Mellanox ConnectX-4 EDR 100Gb/s InfiniBand adapters
  • Mellanox Switch-IB SB7700 36-Port 100Gb/s EDR InfiniBand switch
  • Memory: 256GB DDR3 PC3-14900 RDIMMs per node
  • 1TB 7.2K RPM 6.0 Gb/s SATA 2.5" hard drive per node
  • GPU: NVIDIA Kepler K80 GPUs

Thor


  • Dell™ PowerEdge™ R730/R630 36-node cluster
  • Dual Socket Intel® Xeon® 16-core CPUs E5-2697A V4 @ 2.60 GHz
  • Mellanox ConnectX-5 EDR 100Gb/s InfiniBand adapters
  • Mellanox Switch-IB 2 SB7800 36-Port 100Gb/s EDR InfiniBand switches
  • Mellanox Connect-IB® Dual FDR 56Gb/s InfiniBand adapters
  • Mellanox SwitchX SX6036 36-Port 56Gb/s FDR VPI InfiniBand switches
  • Memory: 256GB DDR4 2400MHz RDIMMs per node
  • 1TB 7.2K RPM SATA 2.5" hard drives per node

Odin


  • Colfax CX2660s-X6 2U 4-node cluster
  • Dual Socket Intel® Xeon® 14-core CPUs E5-2697 V3 @ 2.60 GHz
  • Mellanox ConnectX-4® EDR InfiniBand and 100Gb/s Ethernet VPI adapters
  • Mellanox Switch-IB SB7700 36-Port 100Gb/s EDR InfiniBand switches
  • GPU: NVIDIA Kepler K80 GPUs
  • Memory: 64GB DDR4 2133MHz RDIMMs per node

Heimdall


  • HP™ Apollo™ 6000 10-node cluster
  • Dual Socket Intel® Xeon® 14-core CPUs E5-2697 V3 @ 2.60 GHz
  • Mellanox ConnectX-3® FDR 56Gb/s InfiniBand and Ethernet VPI adapters
  • Mellanox SwitchX SX6036 36-Port 56Gb/s FDR VPI InfiniBand switches
  • Mellanox ConnectX-4 EDR 100Gb/s InfiniBand adapters
  • Mellanox Switch-IB SB7700 36-Port 100Gb/s EDR InfiniBand switches
  • Memory: 64GB DDR4 2133MHz RDIMMs per node

Dell InfiniBand-Based Lustre Storage


  • Storage for MDS: Dell PowerVault MD3420
    • 24x 500GB 7200RPM 6.0Gbps SAS drives
  • Storage for OSS: 2x Dell PowerVault MD3460
    • 60x 1TB 7200RPM 6.0Gbps SAS drives on each MD3460
  • MDS: 2x Dell PowerEdge R620 servers
    • Dual Socket Intel® Xeon® 10-core E5-2660 v2 CPUs @ 2.20 GHz
    • Mellanox ConnectX®-3 56Gb/s FDR InfiniBand and Ethernet VPI HCA
    • Memory: 128GB DDR3 1866MHz dual-rank DIMMs
  • OSS: 2x Dell PowerEdge R620 servers
    • Dual Socket Intel® Xeon® 10-core E5-2660 v2 CPUs @ 2.20 GHz
    • Mellanox ConnectX®-3 56Gb/s FDR InfiniBand and Ethernet VPI HCA
    • Memory: 128GB DDR3 1866MHz dual-rank DIMMs
  • Management: Dell PowerEdge R320 server
    • Intel® Xeon® 6-core E5-2430 CPUs @ 2.20 GHz
    • Memory: 48GB DDR3 1600MHz dual-rank DIMMs
  • Intel Enterprise Edition for Lustre (IEEL)
  • Mellanox SwitchX SX6036 36-Port 56Gb/s FDR InfiniBand switch

Ops


  • Colfax CX1350s-XK5 1U 4-node cluster
  • Based on Supermicro SYS-1027GR-TRF
  • Dual Socket Intel® Xeon® 10-core E5-2680 V2 CPUs @ 2.80 GHz
  • NVIDIA Tesla P100 PCIe-3 x16, 16GB HBM2
  • Mellanox ConnectX-4 EDR 100Gb/s InfiniBand adapters
  • Mellanox Switch-IB SB7700 36-Port 100Gb/s EDR InfiniBand switches
  • Mellanox ConnectX®-3 56Gb/s FDR InfiniBand and Ethernet VPI HCA
  • Mellanox SwitchX SX6036 36-Port 56Gb/s FDR InfiniBand switch
  • 500GB 7.2K RPM SATA 2.5" 6Gbps hard drive
  • Dual Rank 32GB DDR3 1600MHz DIMMs memory

Jupiter

  • Dell™ PowerEdge™ R720xd/R720 32-node cluster
  • Dual Socket Intel® Xeon® 10-core CPUs E5-2680 V2 @ 2.80 GHz
  • Mellanox ConnectX-4 EDR 100Gb/s InfiniBand adapter
  • Mellanox Switch-IB SB7700 36-Port 100Gb/s EDR InfiniBand switches
  • Mellanox ConnectX®-3 VPI 56Gb/s FDR InfiniBand adapters
  • Mellanox Connect-IB® FDR InfiniBand adapters
  • Mellanox SwitchX SX6036 36-Port 56Gb/s FDR InfiniBand switch
  • R720xd: 24x 250GB 7.2K RPM SATA 2.5" hard drives per node
  • R720: 16x 250GB 7.2K RPM SATA 2.5" hard drives per node with 1 GPU
  • Memory: 64GB DDR3 1600MHz RDIMMs per node
  • GPU: NVIDIA Kepler K40, K20x and K20 GPUs


Mercury


  • Dell™ PowerEdge™ C6145 6-node cluster
  • Quad-socket AMD Opteron 6386 SE (Abu Dhabi), 64 Cores per node
  • Mellanox ConnectX®-3 InfiniBand VPI adapter
  • Mellanox 36-Port 40Gb/s InfiniBand Switch
  • Memory 128 GB, 1600 MHz DDR3 memory per node
  • HIC (Host Interface Card) to Dell™ PowerEdge C410x PCIe expansion chassis for GPU computing


InfiniBand-based Storage (Lustre)


  • Two Intel Core i7 920 CPUs (2.67GHz)
  • DDR3-1333MHz memory (6GB total)
  • Seagate Cheetah 15K 450GB SAS Hard Disk
  • OS: RHEL 5.2
  • Mellanox ConnectX-2 40Gb/s QDR InfiniBand adapter

Janus

  • Dell™ PowerEdge™ M610 38-node cluster
  • Six-Core Intel® Xeon® processor X5670 @ 2.93 GHz
  • Intel Cluster Ready certified cluster
  • Mellanox ConnectX®-2 40Gb/s InfiniBand mezzanine card
  • Mellanox M3601Q 36-Port 40Gb/s InfiniBand Switch
  • Memory: 24GB memory per node

Juno

  • 1x GIGABYTE R270-T64 Chassis
    • 2 x Cavium ThunderX 48-core ARM processors
    • Memory: 64GB DDR4 2400 MHz
    • Mellanox ConnectX-4 EDR 100Gb/s InfiniBand/VPI adapter
    • SSD 480GB SATA 3
  • 2x GIGABYTE MT30-GS0 Chassis
    • 1x Cavium ThunderX 32-core ARM Processor
    • Memory: 128GB DDR4 2400 MHz
    • Mellanox ConnectX-5 EDR 100Gb/s InfiniBand/VPI adapter
    • SSD 1TB SATA 3
  • Switch: Mellanox Switch-IB 2 SB7800 36-Port 100Gb/s EDR InfiniBand switches

The HPC Advisory Council would also like to thank the following equipment providers for their generous donations throughout the High-Performance Center's history.