Slurm Introduction
Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. Slurm requires no kernel modifications for its operation and is relatively self-contained. As a cluster workload manager, Slurm has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.
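To illustrate the workflow described above, here is a minimal Slurm batch script (a sketch; the job name and resource values are illustrative, not taken from the appliance defaults). Slurm allocates the requested node, runs the job steps on it, and tracks the job in its queue:

```bash
#!/bin/bash
#SBATCH --job-name=hello      # name shown in the queue (squeue)
#SBATCH --nodes=1             # request exclusive or shared access to one node
#SBATCH --ntasks=1            # one task (process) for this job
#SBATCH --time=00:05:00       # wall-clock time limit

# srun launches the task on the node(s) Slurm allocated to this job
srun hostname
```

On a deployed cluster you would submit it with `sbatch hello.sh` and monitor it with `squeue`; `sinfo` shows the state of the worker nodes.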
We provide two appliances to easily deploy a Slurm cluster: Slurm Controller and Slurm Worker.
Since the Slurm daemons (slurmctld and slurmd) are very lightweight, the minimum hardware requirements are low.
- OpenNebula version: >= 6.4
- Slurm Controller
  - Minimum CPU: 2 CPUs
  - Minimum Memory: 2 GB
  - Minimum Disk: 10 GB
- Slurm Worker
  - Minimum CPU: 1 CPU
  - Minimum Memory: 1 GB
  - Minimum Disk: 7 GB
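As a sketch of how the controller's minimum requirements above map onto an OpenNebula VM template (the image name is a placeholder, not necessarily the name under which the appliance is imported):

```
# Slurm Controller sizing: 2 CPUs, 2 GB RAM (MEMORY is in MB)
CPU    = "2"
VCPU   = "2"
MEMORY = "2048"
DISK   = [ IMAGE = "Slurm Controller" ]  # assumed image name
```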
Details for each release are available on the release page. The Slurm appliances are based on Ubuntu 24.04 LTS (x86-64).
| Component | Version |
|---|---|
| Slurm | 23.11.4 |
Next: Slurm Quick Start