HPC Cluster Slurm Partitions

general

  • Partition name: general (will eventually be renamed "epyc")
  • Partition nodes: epyc[00-17]
  • Max run time: 5 days
  • Max CPUs per node: 120
  • Max GPUs per node: 0
  • Min GPUs per node: 0
  • If you run a job here and request a GPU, the job will not start
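
For example, a minimal CPU-only batch script for this partition might look like the following sketch (the job name, output file, and program are placeholders):

    #!/bin/bash
    #SBATCH --partition=general
    #SBATCH --job-name=cpu-job          # placeholder job name
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=32          # up to 120 CPUs per node here
    #SBATCH --time=2-00:00:00           # 2 days; partition max is 5 days
    #SBATCH --output=cpu-job-%j.out

    ./my_program                        # placeholder executable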

gpu

  • Partition name: gpu (will eventually be renamed "epyc-gpu")
  • Partition nodes: epyc[00-15]
  • Max run time: 5 days
  • Max CPUs per node: 8
  • Max GPUs per node: 2
  • Min GPUs per node: 1
  • If you run a job here and don't request a GPU, the job will not start
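
A single-GPU batch script for this partition might look like this sketch; the GPU request is the important part, since jobs without one will not start (the program name is a placeholder):

    #!/bin/bash
    #SBATCH --partition=gpu
    #SBATCH --job-name=gpu-job          # placeholder job name
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4           # up to 8 CPUs per node here
    #SBATCH --gres=gpu:1                # required: 1-2 GPUs per node
    #SBATCH --time=1-00:00:00           # 1 day; partition max is 5 days
    #SBATCH --output=gpu-job-%j.out

    ./my_gpu_program                    # placeholder executable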

epyc-full

  • Partition name: epyc-full
  • Partition nodes: epyc[00-15]
  • Max run time: 5 days
  • Max CPUs per node: 128
  • Max GPUs per node: 2
  • Min GPUs per node: 1
  • If you run a job here and don't request a GPU, the job will not start
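
To claim an entire node (all 128 CPUs and both GPUs), a script along these lines should work; this is a sketch, and the program name is a placeholder:

    #!/bin/bash
    #SBATCH --partition=epyc-full
    #SBATCH --job-name=full-node        # placeholder job name
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=128         # the whole node
    #SBATCH --gres=gpu:2                # both GPUs; at least 1 is required
    #SBATCH --time=5-00:00:00           # partition max
    #SBATCH --output=full-node-%j.out

    ./my_program                        # placeholder executable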

epyc-long-jobs

  • Partition name: epyc-long-jobs
  • Partition nodes: epyc[00-15]
  • Max run time: 14 days
  • Max CPUs per node: 128
  • Max GPUs per node: 2
  • Min GPUs per node: 0
  • Max nodes per job: 2
  • Max jobs in the partition: 1
  • Your account must be explicitly given access to use this partition
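
Assuming your account has been granted access, a long-running job could be submitted with a script along these lines (a sketch; the 14-day limit and 2-node cap come from the list above):

    #!/bin/bash
    #SBATCH --partition=epyc-long-jobs
    #SBATCH --job-name=long-job         # placeholder job name
    #SBATCH --nodes=1                   # at most 2 nodes per job
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=64
    #SBATCH --time=14-00:00:00          # partition max: 14 days
    #SBATCH --output=long-job-%j.out

    ./my_program                        # placeholder executable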

ts

  • Partition name: ts
  • Partition nodes: ts[01-32]
  • Max run time: 5 days
  • Max CPUs per node: 26
  • Max GPUs per node: 0
  • Min GPUs per node: 0
  • If you run a job here and request a GPU, the job will not start
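
For interactive work on a ts node, an srun allocation like this sketch is one option (the CPU count and time limit are examples):

    # Request an interactive shell: 1 task, 8 CPUs, 4 hours
    srun --partition=ts --ntasks=1 --cpus-per-task=8 \
         --time=04:00:00 --pty bash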

ts-gpu

  • Partition name: ts-gpu
  • Partition nodes: ts[01-32]
  • Max run time: 5 days
  • Max CPUs per node: 2
  • Max GPUs per node: 1
  • Min GPUs per node: 1
  • If you run a job here and don't request a GPU, the job will not start
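
An interactive session here must include a GPU request; a sketch:

    # Interactive shell with the node's single GPU and both CPUs
    srun --partition=ts-gpu --ntasks=1 --cpus-per-task=2 \
         --gres=gpu:1 --time=02:00:00 --pty bash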

ts-long-jobs

  • Partition name: ts-long-jobs
  • Partition nodes: ts[01-32]
  • Max run time: 14 days
  • Max CPUs per node: 28
  • Max GPUs per node: 1
  • Min GPUs per node: 0
  • Max nodes per job: 2
  • Max jobs in the partition: 1
  • Your account must be explicitly given access to use this partition
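
To confirm the limits above and see which accounts are allowed, you can inspect the partition definition (the exact fields shown depend on the site configuration):

    # Show the partition's time limit, node list, and allowed accounts
    scontrol show partition ts-long-jobs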

firefly

  • Partition name: firefly
  • Partition nodes: firefly[00-03]
  • Max run time: 5 days
  • Max CPUs per node: 72
  • Max GPUs per node: 0
  • Min GPUs per node: 0
  • If you run a job here and request a GPU, the job will not start
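
A multi-node CPU job on firefly might be sketched as follows, assuming an MPI program; the module and program names are placeholders:

    #!/bin/bash
    #SBATCH --partition=firefly
    #SBATCH --job-name=mpi-job          # placeholder job name
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=72        # up to 72 CPUs per node here
    #SBATCH --time=3-00:00:00
    #SBATCH --output=mpi-job-%j.out

    module load openmpi                 # placeholder module name
    srun ./my_mpi_program               # placeholder executable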

firefly-gpu

  • Partition name: firefly-gpu
  • Partition nodes: firefly[00-03]
  • Max run time: 5 days
  • Max CPUs per node: 8
  • Max GPUs per node: 4
  • Min GPUs per node: 1
  • If you run a job here and don't request a GPU, the job will not start
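
firefly-gpu allows up to four GPUs on a node; a multi-GPU sketch (the program name is a placeholder):

    #!/bin/bash
    #SBATCH --partition=firefly-gpu
    #SBATCH --job-name=multi-gpu        # placeholder job name
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=8           # all 8 CPUs available here
    #SBATCH --gres=gpu:4                # all 4 GPUs; at least 1 is required
    #SBATCH --time=1-00:00:00
    #SBATCH --output=multi-gpu-%j.out

    ./my_multi_gpu_program              # placeholder executable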

firefly-full

  • Partition name: firefly-full
  • Partition nodes: firefly[00-03]
  • Max run time: 5 days
  • Max CPUs per node: 80
  • Max GPUs per node: 4
  • Min GPUs per node: 1
  • If you run a job here and don't request a GPU, the job will not start

firefly-long-jobs

  • Partition name: firefly-long-jobs
  • Partition nodes: firefly[00-03]
  • Max run time: 14 days
  • Max CPUs per node: 80
  • Max GPUs per node: 4
  • Min GPUs per node: 0
  • Max nodes per job: 2
  • Max jobs in the partition: 1
  • Your account must be explicitly given access to use this partition
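
One way to check whether your account has been granted access to the restricted long-jobs partitions is to query your Slurm associations; how much this shows depends on how the accounting database is set up:

    # List your account associations, including partition-specific ones
    sacctmgr show associations user=$USER format=Account,Partition,MaxJobs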

lookout

  • Not yet available (as of May 6, 2024)
  • Partition name: lookout
  • Partition nodes: lookout[00-03]
  • Max run time: 5 days
  • Max CPUs per node: 144
  • Max GPUs per node: 0
  • Min GPUs per node: 0
  • If you run a job here and request a GPU, the job will not start
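
Since lookout is not yet available, you can watch for it to appear with sinfo:

    # Check whether the lookout partition exists and which nodes are up
    sinfo --partition=lookout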

lookout-gpu

  • Not yet available (as of May 6, 2024)
  • Partition name: lookout-gpu
  • Partition nodes: lookout[00-03]
  • Max run time: 5 days
  • Max CPUs per node: 16
  • Max GPUs per node: 4
  • Min GPUs per node: 1
  • If you run a job here and don't request a GPU, the job will not start

lookout-full

  • Not yet available (as of May 6, 2024)
  • Partition name: lookout-full
  • Partition nodes: lookout[00-03]
  • Max run time: 5 days
  • Max CPUs per node: 160
  • Max GPUs per node: 4
  • Min GPUs per node: 1
  • If you run a job here and don't request a GPU, the job will not start