Rhodes College High-Performance Computing


Research Computing at Rhodes College

 

Welcome to the documentation site for the Rhodes College High-Performance Computing (HPC) facilities.

Our principal resource for research computing is the Lotus HPC Cluster. The cluster was procured as the result of a National Science Foundation Campus Cyberinfrastructure grant (NSF CC*: Compute award 2018758) in July 2020. Lotus was designed and built by Penguin Computing for both high-performance (parallel) and high-throughput (many sequential) workloads submitted by Rhodes College faculty, students, and collaborators. Lotus was integrated into the Open Science Grid OSPool in 2021.

Each of the cluster's compute nodes has two 24-core AMD EPYC 7352 processors, 256 GB of memory, and 512 GB of local SSD scratch space, and runs Rocky Linux 9.5. Lotus comprises 44 compute nodes, each connected to a 100 Gb/s Mellanox EDR InfiniBand interconnect network. Every compute node has access to 504 TB of total system storage on our storage server.

The Lotus cluster provides access to all 2,112 CPU cores and 11 TB of system memory through the SLURM workload manager; cluster management is handled by OpenHPC and Warewulf 4.5.
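
Work is submitted to SLURM as batch scripts. The following is a minimal sketch of such a script; the job name, resource requests, and command are illustrative only, and any Lotus-specific partition names and limits are omitted (see the User Guide for those):

    #!/bin/bash
    #SBATCH --job-name=example        # illustrative job name
    #SBATCH --nodes=1                 # request a single compute node
    #SBATCH --ntasks=1                # one task
    #SBATCH --cpus-per-task=4         # four cores for that task
    #SBATCH --mem=8G                  # 8 GB of memory
    #SBATCH --time=00:10:00           # ten-minute wall-clock limit

    srun hostname                     # replace with your own program

A script like this is queued with sbatch (for example, sbatch example.sh), and squeue -u $USER shows the status of your jobs; both are standard SLURM commands.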

 

Getting Started

If you have not used a command-line Linux system before, you will need to pick up some basics before using Lotus. Start with the Getting Started page.

📕 User guide

If you are already familiar with cluster computing systems, start by reading the User Guide for instructions on requesting access and using the system; it also contains detailed system information.