(cui)=
# CUI, Restricted Data Projects

[//]: # (12 compute nodes, i.e. 3x Apollo 2000 Gen10 Plus 2U quad-node chassis,)
[//]: # (each with:)
[//]: # (  ▪ 4x HPE XL225n Gen10+ 1U nodes, each with:)
[//]: # (    ▪ 2x AMD EPYC 7542 2.9GHz/32-core/225W CPUs)
[//]: # (    ▪ 16x 32GB dual-rank x4 DDR4-3200 DIMMs, 512GB memory, 8GB/core)
[//]: # (    ▪ 1x 240GB SATA 6G RI SSD)
[//]: # (    ▪ 4x 1GbE ports)
[//]: # (    ▪ 1x HDR100 IB port)
[//]: # ()
[//]: # (3 GPU nodes, 3x Apollo 6500 Gen10 Plus,)
[//]: # (each with:)
[//]: # (  ▪ 2x AMD EPYC 7542 2.9GHz/32-core/225W CPUs)
[//]: # (  ▪ 16x 128GB DDR4-3200 DIMMs, 2048GB memory)
[//]: # (  ▪ 1x 240GB SATA 6G RI SSD)
[//]: # (  ▪ 1x 2-port 10G SFP+ card, one port with a 1GbE SFP module for the 1G admin net)
[//]: # (  ▪ 1x HDR100 IB port)
[//]: # (  ▪ 8x NVIDIA A100 80GB SXM4 air-cooled GPUs)
[//]: # (There is also a login and admin node.)
[//]: # (Storage is on a VAST Flash storage system with 656 TB of storage.)
[//]: # ()

## Overview ##

The Controlled Unclassified Information (CUI) cluster is a CPU and GPU system resulting from a partnership between ARC and the Hume Center. It came online in October 2021 and provides a total of 15 nodes: three are dense GPU nodes matching those on Tinkercliffs and twelve are 64-core CPU nodes. Together, they provide strong scalability for a wide variety of workloads. Technical details are below:

| Node type | CPU | GPU |
| ------------ | ------------ | ------------ |
| Manufacturer | HPE | HPE |
| Chassis | HPE XL225n | HPE Apollo 6500 Gen10 Plus |
| Chip | [AMD EPYC 7542](https://en.wikichip.org/wiki/amd/epyc/7542 "AMD EPYC Rome 7542") | [AMD EPYC 7542](https://en.wikichip.org/wiki/amd/epyc/7542 "AMD EPYC Rome 7542") |
| Nodes | 12 | 3 |
| Cores/Node | 64 | 64 |
| GPU Model | - | [Nvidia Ampere A100-80GB](https://www.nvidia.com/en-us/data-center/a100/) |
| GPU/Node | - | 8 |
| Memory (GB)/Node | 512 | 2048 |
| Total Cores | 768 | 192 |
| Total Memory (GB) | 6,144 | 6,144 |
| Local Disk | 240GB SSD | 240GB SSD |
| Interconnect | HDR-100 IB | HDR-100 IB |

A VAST Flash storage system with 656 TB of capacity is connected to the cluster to provide network-based storage.

## Access ##

The CUI system is set up to host projects which require some computational scale but are subject to controlled-access restrictions such as the International Traffic in Arms Regulations (ITAR). Access to the CUI system requires a technology control plan (TCP) established with the Office of Export and Secure Research Compliance (OESRC) and a consultation with ARC personnel to set up access and provide instructions for use.

### Networks from which CUI is accessible

The login node for the CUI system, `cui1.arc.vt.edu`, accepts connections only from secured hosts on VT networks. However, connections from the VT VPN are not allowed, since they could originate from arbitrary locations. If you do not have access to a secured host on a VT network, you will likely need to connect from OESRC's [COMPASS](https://www.research.vt.edu/oesrc/ResearchSecurity/export-server.html) system. COMPASS is also accessible from off-campus US locations by first connecting to OESRC's Barracuda VPN.

[//]: # (## Software ##)
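
For example, once your TCP is in place and your account has been provisioned, a login session would typically be opened over SSH from a secured host on a VT network. This is a minimal sketch, assuming the standard SSH login used on ARC's other clusters; the username below is a hypothetical placeholder for your own VT PID.

```bash
# Connect to the CUI login node from a secured host on a VT network.
# Replace "yourpid" with your VT username.
ssh yourpid@cui1.arc.vt.edu
```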