Wait for DaemonSet Pods Before Starting Pods

I’m not sure why I assumed otherwise, but I recently learned that Kubernetes DaemonSet pods aren’t guaranteed to be up before regular pods are scheduled on a node.

I had a pod that couldn’t be scheduled because no node had enough free capacity, which caused the cluster-autoscaler to kick in and spin up a new node.

When the new node came online, the pod started immediately, before kube-proxy and our Datadog agent pods were running. We noticed this behavior when the job completed successfully but no logs were forwarded to Datadog.

As far as I know, there’s no good, ready-made solution to this problem. I came across this Stack Overflow question on the topic.

We haven’t implemented a fix because it’s a rare edge case. However, the strategy I’d go with is to taint new nodes by default and have a process remove the taint once the Datadog pods come up; a sketch of that follows below.
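
One way to wire that up: register new nodes with a custom NoSchedule taint (the kubelet’s `--register-with-taints` flag applies taints at registration time), give the DaemonSets a toleration for it, and run a small controller that strips the taint once the agent is Running on the node. Here’s a minimal sketch of that controller using the official Kubernetes Python client; the taint key, namespace, and label selector are assumptions you’d swap for your own setup.

```python
# Minimal sketch: watch for Datadog agent pods and untaint their node once
# the agent is Running. Assumes nodes register with the (hypothetical) taint
# "startup.example.com/daemonsets-pending:NoSchedule" and that the agent
# DaemonSet tolerates it. Requires the official `kubernetes` client.
from kubernetes import client, config, watch

TAINT_KEY = "startup.example.com/daemonsets-pending"  # hypothetical key
AGENT_NAMESPACE = "datadog"                           # assumption
AGENT_SELECTOR = "app=datadog-agent"                  # assumption


def remove_startup_taint(v1: client.CoreV1Api, node_name: str) -> None:
    node = v1.read_node(node_name)
    taints = node.spec.taints or []
    remaining = [t for t in taints if t.key != TAINT_KEY]
    if len(remaining) != len(taints):
        # A strategic merge patch replaces the taints list wholesale.
        v1.patch_node(node_name, {"spec": {"taints": remaining}})


def main() -> None:
    config.load_incluster_config()  # this controller runs inside the cluster
    v1 = client.CoreV1Api()
    # Watch agent pods; when one reaches Running, untaint its node.
    for event in watch.Watch().stream(
        v1.list_namespaced_pod,
        namespace=AGENT_NAMESPACE,
        label_selector=AGENT_SELECTOR,
    ):
        pod = event["object"]
        if pod.status.phase == "Running" and pod.spec.node_name:
            remove_startup_taint(v1, pod.spec.node_name)


if __name__ == "__main__":
    main()
```

Note that DaemonSet pods only get automatic tolerations for the built-in `node.kubernetes.io/*` taints, so the Datadog (and kube-proxy) DaemonSets would need an explicit toleration for the custom startup taint, or they’d be blocked from the new node too.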

