Technical Details

System Component          Configuration

Compute Nodes
  CPU Type                AMD EPYC 7352
  Sockets                 2
  Cores/socket            24
  Clock speed             2.3 GHz
  Memory                  256 GB RAM
  Local Storage           512 GB Micron 1300 SSD (/scratch)
  Memory Bandwidth        409.6 GB/s

System
  Total Compute Nodes     44
  Total Compute Cores     2,112
  Total Memory            11.6 TB
  Total Storage           342 TB

Interconnect
  Type                    Mellanox InfiniBand EDR
  Link bandwidth          100 Gb/s
  MPI Latency             1.64 µs

Systems Software Environment

Software                  Description
Operating System          CentOS Linux 7.9
Cluster Management        Scyld ClusterWare 11.0
Compilers                 AOCC, GCC, Clang, Go
Parallel Frameworks       Open MPI, MVAPICH2, Sandia OpenSHMEM

System Access

To use the Lotus cluster, you must first request an account on the system. Rhodes College faculty, students, and staff may send an email to helpdesk@rhodes.edu requesting access to the cluster. Non-Rhodes users must have a guest researcher account that is sponsored by a Rhodes faculty or staff member.

Access to Lotus is via Secure Shell (SSH) to:

lotus.rhodes.edu

For on-campus users:

From a terminal window, type the following at the prompt to log in (do not include the $, and replace "user" with your username):

$ ssh user@lotus.rhodes.edu

For off-campus users:

Direct SSH access is not permitted from off campus. Users may either use a VPN to access the cluster (and then SSH in), or log in to a virtual desktop at http://desktops.rhodes.edu and use PuTTY to access the cluster. For more information on using these resources, see the Getting Started information.

Notes:

When you log in to lotus.rhodes.edu you will be directed to either lotus-login01 or lotus-login02. These machines are identical in hardware and software configuration.

You may add your SSH public key to ~/.ssh/authorized_keys to enable password-less login using the ECDSA, RSA, and Ed25519 key types. Please ensure that your private keys are secured with a strong passphrase. You can use ssh-agent to avoid having to repeatedly type your private key passphrase.
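For example, to generate an Ed25519 key pair and install the public key on Lotus (a minimal sketch; replace "user" with your username and choose a strong passphrase when prompted):

$ ssh-keygen -t ed25519

$ ssh-copy-id user@lotus.rhodes.edu

To avoid retyping the passphrase during the current shell session, start an agent and add your key:

$ eval $(ssh-agent)

$ ssh-add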

Hosts which attempt to connect very frequently (many times per second) may be blocked temporarily in order to improve system security. If you are blocked, wait 15 minutes and try again.

Modules

The cluster provides the modules system for loading specific software packages and environments. Module commands can update your shell environment to automatically find optional tools, compilers, and libraries that you may need to support your application. Modules also provide a flexible mechanism for maintaining several versions of the same software or specific combinations of dependent software packages. New modules can be added upon request.

To list all of the available modules on the system, use the following command:

module avail

To load a specific module, use the load command:

module load mvapich2

This would load the MVAPICH2 MPI library into your environment, replacing any other version of MPI that was previously configured. Running a module command only affects the current shell. You may wish to add specific module commands to batch files for submitting jobs, or add them to shell configuration files that are read on login (typically .bashrc or .zshrc), as shown below.
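For example, to have a compiler and an MPI library configured automatically on every login, you could append lines like these to your ~/.bashrc (module names are illustrative; confirm them with module avail):

module load gcc
module load mvapich2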

Other useful module commands are listed below:

Command                         Description
module list                     List the modules that are currently loaded
module avail                    List the modules that are available to be loaded
module display <module_name>    Show the environment variables modified by the <module_name> module
module load <module_name>       Load the module <module_name> into the environment
module unload <module_name>     Remove the module <module_name> from the environment
module swap <mod1> <mod2>       Replace <mod1> with <mod2> in the environment
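For example, to replace the MVAPICH2 environment with Open MPI (the openmpi module name here is an assumption; check module avail for the exact name on Lotus):

module swap mvapich2 openmpi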

Job Charging and Queue Limits

Currently, the cluster is operating under a free use billing model. There are no explicit time allocations for the cluster or enforced limits on overall usage of the system. This use model is subject to change depending on how usage evolves over time.

This resource is a shared, campus-wide resource. We ask that you use the system in a manner that is consistent with campus community standards and respect the shared nature of the system.

Jobs are subject to the following limits:

  • Maximum wall clock time for a single job is 48 hours

  • Jobs may request up to the max number of cores on the system (2,112)

  • Jobs may request up to the max number of nodes on the system (44)

  • Users may have at most 128 jobs queued at a time

  • Queued jobs may be preempted to support priority jobs (e.g. a paper deadline) or for emergency maintenance.
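As an illustration of how these limits map onto a job request, here is a minimal batch script sketch. It assumes a Slurm-style scheduler, which is an assumption on our part; see the Running Jobs on Lotus section for the actual submission procedure on the cluster.

#!/bin/bash
#SBATCH --time=48:00:00        # must not exceed the 48-hour wall clock limit
#SBATCH --nodes=4              # up to 44 nodes may be requested
#SBATCH --ntasks-per-node=48   # 2 sockets x 24 cores = 48 cores per node

module load mvapich2           # configure an MPI environment
mpirun ./my_program            # my_program is a placeholder executable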

Compiling

All hosts in the cluster have access to the GNU, AOCC (AMD), and Clang compilers, along with multiple MPI implementations (Open MPI and MVAPICH2). The default compiler is GCC 10.2.0, built with AMD Rome-specific optimizations (-march=znver2). The GCC and AOCC compilers can be configured to generate Advanced Vector Extensions 2 (AVX2) instructions, with which up to eight floating point operations can be executed per cycle per core. AVX2 is not enabled by default; enable it by setting the appropriate compiler flags.

Using GCC

The GNU GCC compiler family can be loaded with the module system (it is loaded by default):

module load gcc

To compile a program with the GNU toolchain, use the following commands:

            Serial      MPI       OpenMP               MPI+OpenMP
Fortran     gfortran    mpif90    gfortran -fopenmp    mpif90 -fopenmp
C           gcc         mpicc     gcc -fopenmp         mpicc -fopenmp
C++         g++         mpicxx    g++ -fopenmp         mpicxx -fopenmp

To compile your programs with AVX2 extensions, compile with the -march=core-avx2 flag. You will probably want to use this in conjunction with the usual optimization flags (e.g. -O3), as shown below.
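For example, a C program could be compiled with AVX2 and full optimizations as follows (mycode.c is a placeholder source file):

$ gcc -O3 -march=core-avx2 -o mycode mycode.c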

For more information on the GNU compilers, check the manual pages:

man gcc or man g++ or man gfortran

Using AOCC (AMD compiler)

The AMD Optimizing C/C++ Compiler (AOCC) is available and can be loaded with the module system:

module load aocc

To compile a program with the AMD toolchain, use the following commands:

            Serial      MPI       OpenMP               MPI+OpenMP
Fortran     flang       mpif90    flang -fopenmp       mpif90 -fopenmp
C           clang       mpicc     clang -fopenmp       mpicc -fopenmp
C++         clang++     mpicxx    clang++ -fopenmp     mpicxx -fopenmp
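For example, an OpenMP-enabled C program could be built with AOCC as follows (mycode.c is a placeholder source file; -march=znver2 targets the cluster's AMD Rome CPUs):

$ clang -O3 -fopenmp -march=znver2 -o mycode mycode.c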

Running Jobs on Lotus

Storage Considerations

Software
