
“Slurm’s open-source foundation offers safeguards such as transparent code, forking ability, and community governance, but SchedMD’s control gives Nvidia soft power rather than hard lock-in,” said Manish Rawat, semiconductor analyst at TechInsights. Rawat said Nvidia could subtly shape the roadmap, prioritising GPU-aware scheduling and topology optimisations that favour its own hardware, and that integration timelines already showed faster support for the CUDA ecosystem compared to alternatives such as AMD’s ROCm or Intel’s oneAPI – creating what he described as a “best-supported path effect.”
What is Slurm, and why does it matter?
Slurm, originally developed at Lawrence Livermore National Laboratory, runs on roughly 60% of the world’s supercomputers. The software is in active use at major AI companies, including Meta Platforms, French AI startup Mistral, and Anthropic for elements of AI model training, Reuters reported.
Government supercomputers used for weather forecasting and national security research also depend on it. Nvidia acquired Slurm developer SchedMD in December 2025 and described the deal as a push to strengthen its open-source ecosystem and help users adopt newer AI techniques alongside traditional supercomputing work.
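In practice, users interact with Slurm by submitting batch scripts that request compute resources. The following is a minimal sketch of such a script; the partition name `gpu`, the GPU count, and the `train.py` script are illustrative assumptions, not details from any specific deployment:

```shell
#!/bin/bash
# Request resources from the Slurm scheduler via #SBATCH directives.
#SBATCH --job-name=train-model      # name shown in the queue (squeue)
#SBATCH --nodes=2                   # number of compute nodes
#SBATCH --ntasks-per-node=1         # one task (process) per node
#SBATCH --gres=gpu:4                # 4 GPUs per node (hypothetical allocation)
#SBATCH --partition=gpu             # assumed partition name; site-specific
#SBATCH --time=04:00:00             # wall-clock limit (HH:MM:SS)

# srun launches the job step across the allocated nodes.
# train.py is a placeholder for the user's workload.
srun python train.py
```

A script like this is submitted with `sbatch script.sh`; Slurm then queues the job, allocates the requested nodes and GPUs when they become free, and launches the workload, which is the scheduling role the article describes for AI training and supercomputing sites.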

