Rhodes College High-Performance Computing


Research Computing at Rhodes College


Welcome to the documentation site for high-performance computing (HPC) facilities at Rhodes College.

The principal resource for research computing is the Lotus HPC Cluster, procured in July 2020 through a National Science Foundation Campus Cyberinfrastructure grant (NSF CC*: Compute, award 2018758). Lotus is a cluster designed and built by Penguin Computing for both high-performance (parallel) and high-throughput (many sequential) workloads, serving Rhodes College faculty, students, and collaborators. Lotus will be integrated into the Open Science Grid later in 2021.

Lotus comprises 44 compute nodes running CentOS 7.9 Linux. Each node is powered by two 24-core AMD EPYC 7352 processors and has 256 GB of memory and 512 GB of local SSD scratch space, with all nodes connected by a 100 Gb/s Mellanox EDR InfiniBand interconnect network. Every compute node has access to 504 TB of shared system storage.

The Lotus cluster provides access to all 2,112 CPU cores and 11 TB of system memory through the SLURM workload manager; cluster management is handled by Penguin Scyld ClusterWare.
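As a sketch of what SLURM job submission looks like, here is a minimal batch script. The job name, resource requests, and time limit are illustrative assumptions, not Lotus site defaults; consult the User Guide for the partitions and limits actually configured on the cluster.

```shell
#!/bin/bash
#SBATCH --job-name=hello        # job name shown in the queue (example value)
#SBATCH --nodes=1               # request one compute node
#SBATCH --ntasks=4              # four tasks (CPU cores) for this job
#SBATCH --mem=8G                # memory for the whole job (example value)
#SBATCH --time=00:10:00         # wall-clock limit of ten minutes

# srun launches the command once per allocated task
srun hostname
```

Saved as `hello.sh`, the script would be submitted with `sbatch hello.sh`, and `squeue -u $USER` shows its place in the queue.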


Getting Started

If you have not used a command-line Linux system before, you will need to pick up some basics before using the Lotus system. Start with the Getting Started page.
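For a taste of those basics, the commands below are standard on any Linux system, not Lotus-specific: they create a working directory, write a small file, and inspect it.

```shell
pwd                        # print the current working directory
mkdir -p myproject         # create a directory for your work
cd myproject               # move into it
echo "hello" > notes.txt   # write a small text file
cat notes.txt              # print the file's contents
ls -l                      # list files with details
```

Each of these commands is covered, along with many more, in any introductory Linux tutorial.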

📕 User guide

If you are familiar with cluster computing systems, start by reading the User Guide, which covers requesting access, using the system, and detailed system information.


Information on acquiring the system and installing it on campus is also available.

