Using VS Code on ARC Clusters
This page describes how to use Visual Studio Code’s Remote-SSH extension (or similar IDEs such as Cursor) with ARC systems without violating login-node policies.
The core idea is:
Use login nodes only as a lightweight gateway and for editing / job management.
Run all real computation on compute nodes through Slurm jobs (batch or interactive), not directly on login nodes.
Relevant ARC documentation:
Acceptable Use Policy (including restrictions on login nodes)
Video tutorials (including VS Code examples)
1. Why login-node abuse via VS Code is a problem
ARC’s Acceptable Use Policy explains that heavy or long-running jobs must not run on login nodes:
Login nodes are shared gateways, not compute resources.
Allowed on login nodes:
Editing code and text files.
Light compilation (with limited threads).
Staging data and transfers.
Submitting and monitoring Slurm jobs.
Not allowed on login nodes:
CPU- or memory-intensive computations.
GPU jobs.
Large I/O or long-running interactive analysis.
Treating VS Code / Remote-SSH as a way to run training or production workflows on the login node.
ARC may terminate offending processes and may suspend accounts that repeatedly misuse login nodes.
Using VS Code safely means: edit on the login node, compute only inside Slurm jobs on compute nodes.
2. Prerequisites
Before using VS Code with ARC, you should have:
Network access
Be on the on-campus network, or connected to the VT VPN (Pulse Secure) using the “VT Traffic over SSL VPN” profile.
From off-campus, both login and compute nodes require VT VPN.
ARC account and allocations
An ARC account and at least one project/allocation you can charge jobs to.
SSH configuration
SSH keys set up and tested (passwordless or with passphrase) following Setting up and using SSH Keys; a minimal key-setup sketch follows this list.
VS Code and Remote-SSH
VS Code installed on your laptop.
The Remote – SSH extension installed in VS Code.
(Cursor users: use its built-in remote SSH support with the same SSH config.)
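If you have not yet created SSH keys, the sketch below shows the usual OpenSSH workflow; the key type and file name are just common defaults, and the Setting up and using SSH Keys guide remains the authoritative reference:
# Generate an ed25519 key pair on your laptop (set a passphrase when prompted)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
# Copy the public key to the ARC login node so key-based login works
ssh-copy-id -i ~/.ssh/id_ed25519.pub <your_VT_PID>@tinkercliffs1.arc.vt.edu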
3. Configure SSH to an ARC login node
First, configure your local SSH client (on your laptop).
Edit ~/.ssh/config and add an entry for the login node of the cluster you use. For example, for Tinkercliffs:
Host tinkercliffs
    HostName tinkercliffs1.arc.vt.edu
    User <your_VT_PID>
    IdentityFile ~/.ssh/id_ed25519   # or your private key path
You can use any friendly alias (tinkercliffs, arc-tc, etc.).
Test from a local terminal:
ssh tinkercliffs
If you can log in normally, you are ready to use VS Code Remote-SSH.
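As a quick sanity check, you can run a single command over SSH and confirm that it executes on the login node (the exact hostname reported may differ slightly):
ssh tinkercliffs hostname
# expected: something like tinkercliffs1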
Note: Always use a login node for the same cluster where you plan to run jobs (e.g., tinkercliffs1/tinkercliffs2 for tc* nodes).
4. Standard workflow: VS Code on login node, compute via Slurm jobs
This is the recommended workflow for most users. It keeps all heavy work on compute nodes while still giving you a full-featured IDE.
4.1 Connect VS Code to the login node
Open VS Code on your laptop.
Use the Remote-SSH extension:
Command Palette → Remote-SSH: Connect to Host...
Select the host alias you configured (e.g., tinkercliffs).
When prompted, choose Linux as the remote platform.
Once connected, use File → Open Folder… and choose a directory on ARC:
e.g. /home/<your_VT_PID> or /projects/<your_project>/...
At this point:
The VS Code file explorer shows your ARC files.
The integrated terminal is running on the login node.
Any commands you run there must be light and short-lived.
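For example, commands like the following are fine on the login node because they are short-lived and light (the module name is only illustrative):
squeue -u $USER        # check your queued and running jobs
sinfo -s               # summary of partitions and node states
module avail python    # see which software modules exist
git status             # light version-control operations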
4.2 Submit batch jobs from VS Code
Use Slurm for non-interactive workloads:
In the VS Code terminal (on the login node), create a job script such as job.sh:

#!/bin/bash
#SBATCH --job-name=test-job
#SBATCH --account=<account>
#SBATCH --partition=<partition>
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00

module load python
python my_script.py
Submit the job:
sbatch job.sh
Monitor the job with tools like squeue and sacct (example commands below).
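For example, these standard Slurm commands can be run from the login node (they are light and short-lived; the sacct fields shown are just a common selection):
squeue -u $USER        # list your queued and running jobs
sacct -j <jobid> --format=JobID,JobName,State,Elapsed,MaxRSS   # accounting details for a job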
All heavy computation now happens on compute nodes, not on the login node.
4.3 Start an interactive job for debugging or REPL
For interactive debugging or exploratory work, request an interactive job:
interact --account=<account> --partition=<partition> --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 --time=02:00:00
or:
srun --account=<account> --partition=<partition> --nodes=1 --ntasks-per-node=1 --cpus-per-task=4 --time=02:00:00 --pty /bin/bash
Once the job starts:
hostname
will show a compute node name (e.g., tc006). Run Python, R, C/C++ binaries, etc. inside this interactive shell only.
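To confirm that your shell really is inside a Slurm job with the resources you asked for, you can check the environment variables Slurm sets for every job (values shown match the 4-CPU request above):
echo $SLURM_JOB_ID         # ID of your interactive job
echo $SLURM_CPUS_PER_TASK  # should print 4 for this request
nproc                      # CPUs visible to your shell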
When done:
exit
to end the interactive job and free resources.
The VS Code server itself still runs on the login node in this workflow. Only your interactive shell (and the processes it launches) run on the compute node, which is acceptable as long as heavy work is contained inside Slurm jobs.
5. Advanced: wildcard ProxyJump to connect VS Code directly to compute nodes
In some cases, you may want the VS Code server itself to run on the compute node (for example, to move language-server CPU/memory load off the login node). This is an advanced workflow and should only be used while you have an active job on that node.
5.1 Start an interactive job and note the node name
From the login node, request an interactive job (as in §4.3), then run:
hostname
Example:
tc006
This is the short name of the compute node where your job is running.
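If you forget which node a job landed on, you can also look it up from the login node; squeue's %N format specifier prints the node list assigned to each of your jobs:
squeue -u $USER -o "%i %j %N"
# columns: job ID, job name, node(s) assigned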
5.2 Configure wildcard ProxyJump in ~/.ssh/config
Instead of editing your SSH config for each compute node, you can add host patterns with a single ProxyJump per cluster.
Below is an example covering several ARC clusters; adjust the patterns or login nodes if needed:
# Tinkercliffs compute nodes (advanced)
# Automatically jump through a Tinkercliffs login node when you ssh to any compute node
# like "tc006", "tc-xe003", etc.
Host !tc1 !tc2 tc-intel* tc0* tc1* tc2* tc3* tc-lm* tc-gpu* tc-dgx* tc-xe*
    ProxyJump tinkercliffs2.arc.vt.edu

# Falcon compute nodes
Host fal0* fal1*
    ProxyJump falcon2.arc.vt.edu

# Owl compute nodes (excluding the owl1 login node itself)
Host !owl1 owl0* owl1* owl-hm* owlmln*
    ProxyJump owl3.arc.vt.edu
What this does:
Any hostname matching one of the patterns (e.g., tc006, tc-xe001, fal012, owl-hm03) will automatically:
SSH to the appropriate login node (tinkercliffs2, falcon2, owl3), then
ProxyJump into the compute node.
Negative patterns like !tc1 and !tc2 ensure that the login nodes themselves do not use ProxyJump, so you can still run ssh tinkercliffs2.arc.vt.edu directly.
After adding these entries, a typical workflow is:
Start an interactive job and note the node name (e.g., tc006).
From your laptop:
ssh tc006
SSH automatically jumps through the appropriate login node and places you on tc006.
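To check how your SSH client will resolve a compute-node name before connecting, OpenSSH can print the effective configuration for a host; this only evaluates ~/.ssh/config and does not open a connection:
ssh -G tc006 | grep -i proxyjump
# expected: proxyjump tinkercliffs2.arc.vt.edu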
Important: This does not bypass Slurm. You should only SSH into compute nodes where you currently have an active interactive job (or other legitimate reason), and you must still respect allocation and time limits.
5.3 Connect VS Code to the compute node
With the wildcard ProxyJump configuration in place:
In VS Code, open Remote-SSH → Connect to Host….
Select the compute node name directly (e.g., tc006).
When prompted, choose Linux as the remote platform.
VS Code’s remote server now runs on the compute node instead of the login node.
Open your project directory (e.g. /home/<your_VT_PID> or /projects/...).
This connection is only valid as long as your interactive job on that node is running.
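Before (re)connecting, it is worth confirming that the job is still running and how much walltime remains; from the login node (squeue format specifiers: %i job ID, %N node list, %L time remaining):
squeue -u $USER -o "%i %N %L"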
6. Common mistakes and how to avoid them
Mistake 1 – Running heavy jobs directly in the login node terminal in VS Code
Example: python train_model.py that runs for hours, GPU jobs, or multi-process workloads on the login node.
Fix: Use Slurm:
Batch jobs with sbatch.
Interactive jobs with interact or srun --pty.
Run heavy commands only inside those job shells.
Mistake 2 – Trying to reach ARC systems from off-campus without VT VPN
From off-campus, both login and compute nodes require VT VPN.
Fix: Connect to VT VPN (Pulse Secure, “VT Traffic over SSL VPN”) before using ssh or VS Code Remote-SSH.
Mistake 3 – Using a login node from a different cluster in ProxyJump
Example of incorrect setup:
HostName tc006 (Tinkercliffs compute node)
ProxyJump owl3.arc.vt.edu (Owl login node)
Fix: Use a login node from the same cluster as the compute node:
e.g., tinkercliffs1/tinkercliffs2 for tc* nodes.
Mistake 4 – Forgetting to end interactive jobs
Leaving interactive sessions running idle wastes resources and may be canceled by ARC staff.
Fix: exit from job shells when finished and close any associated VS Code remote sessions.
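If you suspect you have left an interactive session behind, you can list and cancel your jobs from the login node (the job ID is a placeholder):
squeue -u $USER        # list all of your running and pending jobs
scancel <jobid>        # cancel a specific job by its ID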
7. Summary
When using VS Code or Cursor with ARC:
Always connect first to a login node via Remote-SSH.
Use the login node only for:
Editing files.
Managing jobs.
Light, short-running commands.
Run all real computation via Slurm on compute nodes:
Batch jobs (sbatch).
Interactive jobs (interact / srun --pty).
For advanced users:
Use SSH wildcard ProxyJump patterns to connect VS Code directly to the compute node while you have a job on that node, instead of editing your SSH config for each new node.
Following this workflow keeps you within ARC’s Acceptable Use Policy and provides a safe, efficient remote-development experience.