The information below describes the partitions available on the MocsHPC cluster.
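These limits are enforced by the cluster's scheduler. Assuming the cluster runs Slurm (the scheduler whose term "partition" is used throughout this page), you can check the live partition list and limits yourself:

    sinfo --summarize                  # one line per partition: time limit, node counts, node list
    scontrol show partition epyc-gpu   # full limit details for a single partition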
epyc nodes
epyc-cpu
- Partition name: epyc-cpu
- Partition nodes: epyc[00-28] (epyc[16-28] will be used first)
- Max run time: 5 days
- Max CPUs per node: 120
- Max GPUs per node: 0
- Min GPUs per node: 0
- Notes:
- If you run a job here and request a GPU, the job will not start
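A minimal batch-script sketch for a CPU-only job on this partition, again assuming Slurm; the executable name and resource numbers are illustrative, not required values:

    #!/bin/bash
    #SBATCH --partition=epyc-cpu
    #SBATCH --time=5-00:00:00      # partition maximum: 5 days
    #SBATCH --cpus-per-task=32     # any value up to the 120-CPU-per-node cap
    # Do not add --gres=gpu:... here; a GPU request keeps the job from starting.
    srun ./my_cpu_program          # hypothetical executable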
epyc-gpu
- Partition name: epyc-gpu
- Partition nodes: epyc[00-15]
- Max run time: 5 days
- Max CPUs per node: 8
- Max GPUs per node: 2
- Min GPUs per node: 1
- Notes:
- If you run a job here and don't request a GPU, the job will not start
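The same sketch for a GPU job; the GPU request is the one mandatory line. The ts-gpu and firefly-gpu partitions below follow the identical pattern with their own CPU and GPU caps:

    #!/bin/bash
    #SBATCH --partition=epyc-gpu
    #SBATCH --time=5-00:00:00      # partition maximum: 5 days
    #SBATCH --cpus-per-task=8      # CPU cap on this partition
    #SBATCH --gres=gpu:1           # mandatory: request 1 or 2 GPUs, or the job never starts
    srun ./my_gpu_program          # hypothetical executable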
epyc-full
- Partition name: epyc-full
- Partition nodes: epyc[00-15]
- Max run time: 5 days
- Max CPUs per node: 128
- Max GPUs per node: 2
- Min GPUs per node: 1
- Notes:
- Your account must be explicitly given access to use this partition
- If you run a job here and don't request a GPU, the job will not start
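For accounts that have been granted access, a whole-node sketch (again assuming Slurm; the numbers shown are this partition's caps, and the executable name is hypothetical):

    #!/bin/bash
    #SBATCH --partition=epyc-full
    #SBATCH --time=5-00:00:00      # partition maximum: 5 days
    #SBATCH --nodes=1
    #SBATCH --cpus-per-task=128    # all CPUs on the node
    #SBATCH --gres=gpu:2           # at least 1 GPU is mandatory here; 2 claims the whole node
    srun ./my_hybrid_program       # hypothetical executable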
epyc-long-jobs
- Partition name: epyc-long-jobs
- Partition nodes: epyc[00-15]
- Max run time: 14 days
- Max CPUs per node: 128
- Max GPUs per node: 2
- Min GPUs per node: 0
- Max nodes per job: 2
- Max jobs in the partition: 1
- Notes:
- Your account must be explicitly given access to use this partition
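A sketch for a 14-day job, again assuming Slurm and access to the partition; the ts-, firefly-, and lookout-long-jobs partitions below follow the same pattern. GPUs are optional here, and note the one-job-at-a-time limit:

    #!/bin/bash
    #SBATCH --partition=epyc-long-jobs
    #SBATCH --time=14-00:00:00     # partition maximum: 14 days
    #SBATCH --nodes=2              # at most 2 nodes per job
    #SBATCH --ntasks-per-node=1
    srun ./my_long_program         # hypothetical executable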
ts nodes
ts-cpu
- Partition name: ts-cpu
- Partition nodes: ts[01-32]
- Max run time: 5 days
- Max CPUs per node: 26
- Max GPUs per node: 0
- Min GPUs per node: 0
- Notes:
- If you run a job here and request a GPU, the job will not start
ts-gpu
- Partition name: ts-gpu
- Partition nodes: ts[01-32]
- Max run time: 5 days
- Max CPUs per node: 2
- Max GPUs per node: 1
- Min GPUs per node: 1
- Notes:
- If you run a job here and don't request a GPU, the job will not start
ts-full
- Partition name: ts-full
- Partition nodes: ts[01-32]
- Max run time: 5 days
- Max CPUs per node: 28
- Max GPUs per node: 1
- Min GPUs per node: 1
- Notes:
- Your account must be explicitly given access to use this partition
- If you run a job here and don't request a GPU, the job will not start
ts-long-jobs
- Partition name: ts-long-jobs
- Partition nodes: ts[01-32]
- Max run time: 14 days
- Max CPUs per node: 28
- Max GPUs per node: 1
- Min GPUs per node: 0
- Max nodes per job: 2
- Max jobs in the partition: 1
- Notes:
- Your account must be explicitly given access to use this partition
firefly nodes
firefly-cpu
- Partition name: firefly-cpu
- Partition nodes: firefly[00-03]
- Max run time: 5 days
- Max CPUs per node: 72
- Max GPUs per node: 0
- Min GPUs per node: 0
- Notes:
- If you run a job here and request a GPU, the job will not start
firefly-gpu
- Partition name: firefly-gpu
- Partition nodes: firefly[00-03]
- Max run time: 5 days
- Max CPUs per node: 8
- Max GPUs per node: 4
- Min GPUs per node: 1
- Notes:
- If you run a job here and don't request a GPU, the job will not start
firefly-full
- Partition name: firefly-full
- Partition nodes: firefly[00-03]
- Max run time: 5 days
- Max CPUs per node: 80
- Max GPUs per node: 4
- Min GPUs per node: 1
- Notes:
- Your account must be explicitly given access to use this partition
- If you run a job here and don't request a GPU, the job will not start
firefly-long-jobs
- Partition name: firefly-long-jobs
- Partition nodes: firefly[00-03]
- Max run time: 14 days
- Max CPUs per node: 80
- Max GPUs per node: 4
- Min GPUs per node: 0
- Max nodes per job: 2
- Max jobs in the partition: 1
- Notes:
- Your account must be explicitly given access to use this partition
lookout nodes
lookout-cpu
- Partition name: lookout-cpu
- Partition nodes: lookout[00-05]
- Max run time: 5 days
- Max CPUs per node: 160
- Max GPUs per node: 0
- Min GPUs per node: 0
- Notes:
- If you run a job here and request a GPU, the job will not start
lookout-long-jobs
- Partition name: lookout-long-jobs
- Partition nodes: lookout[00-05]
- Max run time: 14 days
- Max CPUs per node: 160
- Notes:
- Your account must be explicitly given access to use this partition
Additional Support
- Open an IT Helpdesk request ticket.
- Send an email to ITHelp@utc.edu.