HPC Cluster Slurm Partitions
Tags: gpu, epyc, cluster-node, slurm
general
Partition name: general (will eventually be renamed to "epyc")
Partition nodes: epyc[00-17]
Max run time: 5 days
Max CPUs per node: 120
Max GPUs per node: 0
Min GPUs per node: 0
If you run a job here and request a GPU, the job will not start
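A minimal sketch of a batch script for a CPU-only job on this partition (the job name, CPU count, time, and executable are illustrative placeholders, not values taken from this article):

  #!/bin/bash
  #SBATCH --partition=general       # CPU-only partition; do not request GPUs here
  #SBATCH --job-name=cpu-example    # placeholder job name
  #SBATCH --nodes=1
  #SBATCH --ntasks=1
  #SBATCH --cpus-per-task=8         # any value up to the 120-CPU-per-node limit
  #SBATCH --time=5-00:00:00         # at or below the 5-day maximum
  #SBATCH --output=%x-%j.out

  srun ./my_program                 # placeholder executable

Submit with "sbatch script.sh"; adding a --gres=gpu request to this script would keep the job from ever starting.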
gpu
Partition name: gpu (will eventually be renamed to "epyc-gpu")
Partition nodes: epyc[00-15]
Max run time: 5 days
Max CPUs per node: 8
Max GPUs per node: 2
Min GPUs per node: 1
If you run a job here and don't request a GPU, the job will not start
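A comparable sketch for a GPU job on this partition, again with placeholder names; the --gres=gpu syntax is standard Slurm, and whether this cluster also expects a GPU type string is not covered here:

  #!/bin/bash
  #SBATCH --partition=gpu           # at least one GPU must be requested
  #SBATCH --job-name=gpu-example    # placeholder job name
  #SBATCH --gres=gpu:1              # 1 or 2 GPUs per node on this partition
  #SBATCH --cpus-per-task=8         # up to the 8-CPU-per-node limit
  #SBATCH --time=2-00:00:00         # within the 5-day maximum
  #SBATCH --output=%x-%j.out

  srun ./my_gpu_program             # placeholder executable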
epyc-full
Partition name: epyc-full
Partition nodes: epyc[00-15]
Max run time: 5 days
Max CPUs per node: 128
Max GPUs per node: 2
Min GPUs per node: 1
If you run a job here and don't request a GPU, the job will not start
epyc-long-jobs
Partition name: epyc-long-jobs
Partition nodes: epyc[00-15]
Max run time: 14 days
Max CPUs per node: 128
Max GPUs per node: 2
Min GPUs per node: 0
Max nodes per job: 2
Max jobs in the partition: 1
Your account must be explicitly given access to use this partition
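For accounts that have been granted access, a two-node, 14-day job could be sketched roughly as follows; the job name and executable are placeholders:

  #!/bin/bash
  #SBATCH --partition=epyc-long-jobs
  #SBATCH --job-name=long-example    # placeholder job name
  #SBATCH --nodes=2                  # at most 2 nodes per job in this partition
  #SBATCH --ntasks-per-node=1
  #SBATCH --cpus-per-task=128        # up to the 128-CPU-per-node limit
  #SBATCH --time=14-00:00:00         # 14-day maximum
  #SBATCH --output=%x-%j.out

  srun ./my_long_program             # placeholder executable

Because the partition allows only one job at a time, additional submissions will wait in the queue.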
ts
Partition name: ts
Partition nodes: ts[01-32]
Max run time: 5 days
Max CPUs per node: 26
Max GPUs per node: 0
Min GPUs per node: 0
If you run a job here and request a GPU, the job will not start
ts-gpu
Partition name: ts-gpu
Partition nodes: ts[01-32]
Max run time: 5 days
Max CPUs per node: 2
Max GPUs per node: 1
Min GPUs per node: 1
If you run a job here and don't request a GPU, the job will not start
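As an alternative to a batch script, an interactive GPU session on this partition can be requested with srun; the time limit below is just an example:

  srun --partition=ts-gpu --gres=gpu:1 --cpus-per-task=2 --time=02:00:00 --pty bash

The --pty bash option opens a shell on the allocated node; the session ends when you exit the shell or the time limit is reached.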
ts-long-jobs
Partition name: ts-long-jobs
Partition nodes: ts[01-32]
Max run time: 14 days
Max CPUs per node: 28
Max GPUs per node: 1
Min GPUs per node: 0
Max nodes per job: 2
Max jobs in the partition: 1
Your account must be explicitly given access to use this partition
firefly
Partition name: firefly
Partition nodes: firefly[00-03]
Max run time: 5 days
Max CPUs per node: 72
Max GPUs per node: 0
Min GPUs per node: 0
If you run a job here and request a GPU, the job will not start
firefly-gpu
Partition name: firefly-gpu
Partition nodes: firefly[00-03]
Max run time: 5 days
Max CPUs per node: 8
Max GPUs per node: 4
Min GPUs per node: 1
If you run a job here and don't request a GPU, the job will not start
firefly-full
Partition name: firefly-full
Partition nodes: firefly[00-03]
Max run time: 5 days
Max CPUs per node: 80
Max GPUs per node: 4
Min GPUs per node: 1
If you run a job here and don't request a GPU, the job will not start
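To claim a whole firefly node (all 80 CPUs and all 4 GPUs), a script along these lines could be used; the job name and executable are placeholders:

  #!/bin/bash
  #SBATCH --partition=firefly-full
  #SBATCH --job-name=firefly-example  # placeholder job name
  #SBATCH --nodes=1
  #SBATCH --gres=gpu:4                # all 4 GPUs on the node
  #SBATCH --cpus-per-task=80          # all 80 CPUs on the node
  #SBATCH --time=5-00:00:00           # 5-day maximum
  #SBATCH --output=%x-%j.out

  srun ./my_whole_node_program        # placeholder executable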
firefly-long-jobs
Partition name: firefly-long-jobs
Partition nodes: firefly[00-03]
Max run time: 14 days
Max CPUs per node: 80
Max GPUs per node: 4
Min GPUs per node: 0
Max nodes per job: 2
Max jobs in the partition: 1
Your account must be explicitly given access to use this partition
lookout
(Not yet available as of May 6, 2024)
Partition name: lookout
Partition nodes: lookout[00-03]
Max run time: 5 days
Max CPUs per node: 144
Max GPUs per node: 0
Min GPUs per node: 0
If you run a job here and request a GPU, the job will not start
lookout-gpu
(Not yet available as of May 6, 2024)
Partition name: lookout-gpu
Partition nodes: lookout[00-03]
Max run time: 5 days
Max CPUs per node: 16
Max GPUs per node: 4
Min GPUs per node: 1
If you run a job here and don't request a GPU, the job will not start
lookout-full
(Not yet available as of May 6, 2024)
Partition name: lookout-full
Partition nodes: lookout[00-03]
Max run time: 5 days
Max CPUs per node: 160
Max GPUs per node: 4
Min GPUs per node: 1
If you run a job here and don't request a GPU, the job will not start
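The current limits and node states for any of these partitions can be checked on the cluster itself with standard Slurm commands, for example:

  sinfo --partition=gpu                    # node states and time limit for one partition
  scontrol show partition epyc-long-jobs   # full partition settings, including node and job limits
  squeue --partition=general -u $USER      # your queued and running jobs in a partition

These commands only read scheduler state, so they are safe to run at any time.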