How Does Kubectl SSH Work?

Yesterday, I wrote about using luksa’s kubectl ssh node plugin to gain shell access to a Kubernetes node.

I was curious how it worked, so I looked at the script. It’s simpler than you’d think.

It creates this pod:

apiVersion: v1
kind: Pod
metadata:
  generateName: ssh-node-
  labels:
    plugin: ssh-node
spec:
  nodeName: $node
  containers:
  - name: ssh-node
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["chroot", "/host"]
    tty: true
    stdin: true
    stdinOnce: true
    securityContext:
      privileged: true
    volumeMounts:
    - name: host
      mountPath: /host
  volumes:
  - name: host
    hostPath:
      path: /
  hostNetwork: true
  hostIPC: true
  hostPID: true
  restartPolicy: Never
  tolerations:
    - operator: "Exists"
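
The script fills in $node and submits this manifest for you, but you can reproduce the flow by hand. A rough sketch, assuming the manifest above is saved as ssh-node.yaml with nodeName set to a real node (the plugin’s own script differs in its details):

pod=$(kubectl create -f ssh-node.yaml -o name)   # prints something like pod/ssh-node-abc12
kubectl wait --for=condition=Ready "$pod"        # wait for the kubelet to start it
kubectl attach -it "$pod"                        # attach to the chroot'ed shell on the node
kubectl delete "$pod"                            # clean up when you're done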

At a high level, it:

  • sets the nodeName so the pod runs on the node you want to shell into.
  • creates a pod that you can exec into.
  • mounts the host’s filesystem inside the pod at /host.
  • changes the root directory to /host with chroot, so commands run against the node’s real filesystem.

Combined, these steps give you a shell on any node, as long as you’re a cluster admin.
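
Since the whole trick depends on being allowed to create a privileged, host-mounting pod, you can check up front whether your credentials are enough. A quick sanity check with kubectl auth can-i:

kubectl auth can-i create pods     # can you create pods in the current namespace?
kubectl auth can-i '*' '*'         # rough check for full cluster-admin rights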

