Quickstart
The simplest command creates a cluster named default with one controller and one worker:
sind create cluster
This is equivalent to using this configuration:
kind: Cluster
name: default
nodes:
  - role: controller
  - role: worker
sind will:
- Set up mesh infrastructure (network, DNS, SSH) if not already present
- Create a cluster network and volumes
- Generate munge keys and Slurm configuration
- Start all node containers
- Wait for all nodes to become ready
List your clusters:
sind get clusters
NAME      NODES (S/C/W)   SLURM   STATUS
default   2 (0/1/1)       25.11   running
View individual nodes:
sind get nodes
NAME                 ROLE         STATUS
controller.default   controller   running
worker-0.default     worker       running
For detailed health information:
sind status
Open an interactive shell on the controller (or submitter, if configured):
sind enter
Or run one-shot commands without an interactive session:
sind exec -- sinfo
sind exec -- srun hostname
Create a batch script in your working directory:
cat > job.sh << 'EOF'
#!/bin/bash
#SBATCH --job-name=hello
echo "Hello from $(hostname)"
sleep 30
EOF
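Batch scripts can use any standard Slurm directives; nothing here is sind-specific. A slightly richer sketch (the --ntasks=2 line assumes the cluster has at least two task slots, e.g. after adding workers):

```shell
# Write a batch script that names its output file per job ID
# and launches two tasks. All #SBATCH lines are plain Slurm options.
cat > multi.sh << 'EOF'
#!/bin/bash
#SBATCH --job-name=multi
#SBATCH --output=multi-%j.out
#SBATCH --ntasks=2
srun hostname
EOF
```

Submit it the same way: sind exec -- sbatch multi.sh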
Submit and monitor:
sind exec -- sbatch job.sh
Submitted batch job <JOBID>
Check the job queue:
sind exec -- squeue
View the output after the job completes:
cat slurm-<JOBID>.out
Hello from worker-0
By default, sind create cluster bind-mounts the current directory as /data inside all nodes. Batch scripts and output files are shared across the cluster and accessible directly on your host.
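Because of that default mount, files created on the host are immediately visible to jobs. A minimal host-side sketch (the results directory name is just an example):

```shell
# Host side: create a directory next to your batch scripts.
# With the default bind mount described above, the same files
# appear under /data inside every node, e.g. /data/results/note.txt.
mkdir -p results
echo "shared between host and cluster" > results/note.txt
# From inside the cluster you could then read it with:
#   sind exec -- cat /data/results/note.txt
```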
You can also SSH directly into individual nodes:
sind ssh controller
sind ssh worker-0
Add more workers to the running cluster:
sind create worker --count 3
Delete the cluster:
sind delete cluster default
Or delete all clusters at once:
sind delete cluster --all
When the last cluster is deleted, sind automatically cleans up the shared mesh infrastructure.
Pipe a configuration directly into sind create cluster:
sind create cluster << 'EOF'
kind: Cluster
name: dev
defaults:
  cpus: 2
  memory: 1g
nodes:
  - controller
  - submitter
  - worker: 3
EOF
Or use a configuration file:
sind create cluster --config cluster.yaml
Multiple clusters can run side by side — each gets its own network, volumes, and DNS namespace. See Cluster Configuration for all available options.
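Since each cluster is namespaced by its name, a second configuration with a different name can run alongside the first. A minimal sketch using only the fields shown above (the name "staging" is just an example):

```yaml
# staging.yaml -- hypothetical second cluster alongside the first
kind: Cluster
name: staging
nodes:
  - role: controller
  - role: worker
```

Create it with sind create cluster --config staging.yaml; sind get clusters will then list both.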