...
System Component | Configuration
---|---
**Compute Nodes** |
CPU Type | AMD EPYC 7352
Nodes | 4
Sockets | 2
Cores/socket | 24
Clock speed | 2.3 GHz
Memory | 256 GB RAM
Local Storage | 512 GB Micron 1300 SSD (`/scratch`)
Memory Bandwidth | 409.6 GB/s
**System** |
Total Compute Nodes | 44
Total Compute Cores | 2,112
Total Memory | 11.6 TB
Total Storage | 342 TB
Interconnect | Mellanox InfiniBand EDR
Link bandwidth | 100 Gb/s
MPI Latency | 1.64 µs
...
Direct SSH access is not permitted from off-campus. Users may either connect to the campus VPN and then SSH to the cluster, or log in to a virtual desktop at http://desktops.rhodes.edu and use PuTTY to reach the cluster. For more information on using these resources, see the Getting Started information.
Notes:

- When you log in to lotus.rhodes.edu you will be directed to either lotus-login01 or lotus-login02. These machines are identical in hardware and software configuration.
- You may add your SSH public key to ~/.ssh/authorized_keys to enable password-less login using the ECDSA, RSA, and ed25519 key types. Please ensure that your private keys are secured with a strong local passphrase. You can use ssh-agent to avoid having to repeatedly type your private key passphrase.
- Hosts that attempt to connect very frequently (many times per second) may be blocked temporarily in order to improve system security. If you are blocked, wait 15 minutes and try again.
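The key setup described above can be sketched as follows. The key path and the username are placeholders for this demo, not cluster-specific values, and the `ssh-copy-id`/`ssh-agent` steps are shown commented out because they require an interactive session:

```shell
# Generate an ed25519 key pair (file path is an example; the default
# is ~/.ssh/id_ed25519). -N '' sets an empty passphrase for this demo
# only -- use a strong passphrase in practice.
ssh-keygen -t ed25519 -f /tmp/lotus_demo_key -N '' -q

# Copy the public key to the cluster (username is a placeholder):
# ssh-copy-id -i /tmp/lotus_demo_key.pub yourname@lotus.rhodes.edu

# Load the key into ssh-agent so the passphrase is asked only once:
# eval "$(ssh-agent -s)" && ssh-add /tmp/lotus_demo_key

cut -d' ' -f1 /tmp/lotus_demo_key.pub   # prints the key type
```

The public key file's first field names the key type (here `ssh-ed25519`), which is a quick way to confirm what you are about to append to `authorized_keys`.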
Modules
The cluster provides the modules system for loading specific software packages and environments. Module commands can update your shell environment to automatically find optional tools, compilers, and libraries that you may need to support your application. Modules also provide a flexible mechanism for maintaining several versions of the same software or specific combinations of dependent software packages. New modules can be added upon request.
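A typical module session on a login node might look like the following; the module name is illustrative, and `module avail` shows what is actually installed on the cluster:

```shell
module avail            # list all available modules
module load gcc         # load the default GCC toolchain
module list             # show currently loaded modules
module unload gcc       # remove one module from the environment
module purge            # unload everything and start clean
```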
...
Job Charging and Queue Limits
Currently, the cluster operates under a free-use billing model: there are no explicit time allocations for the cluster and no enforced limits on overall usage of the system. This model is subject to change depending on how usage evolves over time.
This resource is a shared, campus-wide resource. We ask that you use the system in a manner that is consistent with campus community standards and respect the shared nature of the system.
Jobs are subject to the following limits:

- Maximum wall clock time for a single job is 48 hours.
- Jobs may request up to the maximum number of cores on the system (2,112).
- Jobs may request up to the maximum number of nodes on the system (44).
- Users may have at most 128 jobs queued at a time.
- Queued jobs may be preempted to support priority jobs (e.g. a paper deadline) or for emergency maintenance.
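As an illustration only — assuming a Slurm-style batch scheduler, which is an assumption of this sketch, not something this section states — a job request that stays within the limits above might look like:

```shell
#!/bin/bash
#SBATCH --job-name=demo          # illustrative job name
#SBATCH --nodes=2                # well under the 44-node maximum
#SBATCH --ntasks-per-node=48     # all 48 cores on each node
#SBATCH --time=48:00:00          # the 48-hour wall-clock limit

srun ./my_mpi_app                # placeholder application name
```

Consult the job-submission section of this guide for the scheduler actually in use and its exact directives.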
Compiling
All hosts in the cluster have access to the GNU, AOCC (AMD), and Clang compilers, along with multiple MPI implementations (OpenMPI and MVAPICH2). The default compiler is GCC 10.2.0, which has been built with AMD Rome-specific optimizations (-march=znver2). The GCC and AOCC compilers can be configured to generate Advanced Vector Extensions 2 (AVX2) instructions; with AVX2, up to eight floating-point operations can be executed per cycle per core. AVX2 is not enabled by default and must be enabled by setting the appropriate compiler flags.
Using GCC
The GNU GCC compiler family can be loaded with the module system (it is loaded by default):
module load gcc
To compile a program with the GNU toolchain use the following commands:
| | Serial | MPI | OpenMP | MPI+OpenMP |
|---|---|---|---|---|
| Fortran | `gfortran` | `mpif90` | `gfortran -fopenmp` | `mpif90 -fopenmp` |
| C | `gcc` | `mpicc` | `gcc -fopenmp` | `mpicc -fopenmp` |
| C++ | `g++` | `mpicxx` | `g++ -fopenmp` | `mpicxx -fopenmp` |

(The MPI commands shown are the standard compiler wrappers provided by OpenMPI and MVAPICH2.)
To compile your programs with AVX2 extensions, compile with the -march=core-avx2 compiler flag. You will probably want to use this in conjunction with the normal optimization flags (e.g. -O3).
For more information on the GNU compilers, check the manual pages: `man gcc`, `man g++`, or `man gfortran`.
Using AOCC (AMD compiler)
The AMD Optimizing C/C++ Compiler (AOCC) is available and can be loaded with the module system:
module load aocc
To compile a program with the AMD toolchain use the following commands:
| | Serial | MPI | OpenMP | MPI+OpenMP |
|---|---|---|---|---|
| Fortran | `flang` | `mpif90` | `flang -fopenmp` | `mpif90 -fopenmp` |
| C | `clang` | `mpicc` | `clang -fopenmp` | `mpicc -fopenmp` |
| C++ | `clang++` | `mpicxx` | `clang++ -fopenmp` | `mpicxx -fopenmp` |

(The MPI commands shown are the standard compiler wrappers; they invoke the AOCC compilers when an AOCC-built MPI module is loaded.)
Running Jobs on Lotus
Storage Considerations
...