High Performance Computing (HPC)
What is High Performance Computing?
High Performance Computing refers to the practice of aggregating computing power to deliver much higher performance than a typical desktop computer or workstation can provide, in order to solve large problems in science, engineering, or business.
It is commonly referred to as HPC.
Where do I start?
As a researcher at Griffith University, you have access to a number of HPC systems including (but not limited to):
Euramoo at QRIS
- Euramoo is optimised for multiple serial jobs as opposed to large parallel ones
Gowonda at Griffith University
- This cluster, based at Griffith University, is similar to Euramoo but will be decommissioned in the future
FlashLite at RCC UQ
- FlashLite is a large-memory cluster; its physical (non-virtualised) nodes have up to 512GB of RAM
Depending on your needs, one option may suit you better than another. Each system is a separate cluster with its own software and licences. There are also other options, such as Virtual Machines from Nectar.
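Work on these clusters is typically submitted through a batch scheduler rather than run interactively. As an illustration only, a minimal PBS-style job script for a single serial task might look like the sketch below; the job name, resource requests, and program name are all placeholders, and the exact directives and queue names vary between systems, so check each cluster's own documentation:

```shell
#!/bin/bash
#PBS -N serial_task               # job name (placeholder)
#PBS -l select=1:ncpus=1:mem=2gb  # one core and 2 GB of memory
#PBS -l walltime=01:00:00         # one hour of run time

# Run from the directory the job was submitted from
cd "${PBS_O_WORKDIR:-.}"

./my_analysis input.dat           # placeholder for your own program
```

You would typically submit such a script with `qsub` and monitor it with `qstat`.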
Where can I learn?
- Beginner HPC course at Intersect
- Intermediate HPC course at Intersect
- Advanced HPC course at Intersect
- Introduction to machine learning at Intersect
- Pawsey User Training material – parallelise your code and more… (please note, this tutorial is written for SLURM, not PBSPro or Torque, but it can still provide you with a starting point)
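Because the Pawsey material uses SLURM, its `#SBATCH` directives need translating before they will work on a PBSPro or Torque cluster. A rough correspondence for the most common directives is sketched below (the job name and resource values are placeholders, and some options differ between PBSPro and Torque):

```shell
# SLURM (as in the Pawsey examples)   # PBSPro equivalent (approximate)
#SBATCH --job-name=myjob              #PBS -N myjob
#SBATCH --time=01:00:00               #PBS -l walltime=01:00:00
#SBATCH --ntasks=4                    #PBS -l select=1:ncpus=4
#SBATCH --mem=8G                      #PBS -l select=1:mem=8gb

# Submitting and monitoring:
# sbatch job.sh                       # qsub job.sh
# squeue -u $USER                     # qstat -u $USER
```

These mappings are approximate; consult the target cluster's documentation for its exact resource syntax.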
Where can I go for help?
You can contact Indy Siva, our HPC Systems Engineer, or attend one of our Hacky Hours.