Kubernetes: Frequently Asked Questions

Jake Sanders | 09 November 2015

Getting Kubernetes up and running isn't the easiest thing in the world. We see a few common issues coming up, so we thought we'd summarise them in a series of blog posts.

Bear in mind that Kubernetes is constantly evolving, so if something we post seems broken, hit us (or any other members of the community) up in the Kubernetes users Slack room, or leave a comment on this post!

What's the easiest way to jump right in?

Seeing as Kubernetes is all about running containers, I would recommend setting up a CoreOS-based installation. CoreOS is a lightweight operating system with no package manager, designed explicitly for running containers, and it ships with its own built-in clustered key-value store: etcd.

Locally, I would recommend a Vagrant-based environment such as this one. If you're ready to roll your own, write your own cloud-config files to start the major Kubernetes components on CoreOS - check the CoreOS documentation for details.
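
To give a flavour of what that looks like, here's a heavily trimmed cloud-config sketch. The discovery token, master address and unit contents below are placeholders, not a working configuration - consult the CoreOS documentation for a complete, current example:

#cloud-config

coreos:
  etcd2:
    # generate a fresh token for each cluster at https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/<token>
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start
    - name: kubelet.service
      command: start
      content: |
        [Service]
        # point the kubelet at your API server; exact flags vary by Kubernetes version
        ExecStart=/usr/bin/kubelet --api-servers=https://<master-ip>
        Restart=always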

SSL/Authentication Issues When Setting up a Cluster From Scratch

If you're in the middle of setting up your own cluster, you should secure your Kubernetes API server and your master and worker nodes. Here's how to generate the appropriate keys using OpenSSL:

Generate a new root CA with which you'll sign the rest of the keys:


openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 1826 -out ca.pem -subj "/CN=kube-ca"
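
You can sanity-check the resulting CA before going any further:

# confirm the subject and validity period look right
openssl x509 -in ca.pem -noout -subject -dates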

Now for the API server. Generate a new OpenSSL config file:

openssl.cnf


[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
IP.1 = ${K8S_SERVICE_IP}
IP.2 = ${MASTER_IP}

Replace ${K8S_SERVICE_IP} with the first IP address of your service IP range, and ${MASTER_IP} with the address that you will be interacting with via kubectl.
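
As a sketch, assuming a 10.3.0.0/24 service range and a master at 172.17.4.101 (both made-up values), you could render the placeholders with envsubst from the gettext package:

# made-up values - substitute your own
export K8S_SERVICE_IP=10.3.0.1
export MASTER_IP=172.17.4.101
envsubst < openssl.cnf > openssl.cnf.tmp && mv openssl.cnf.tmp openssl.cnf

With the config in place, generate and sign the API server keypair: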


openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
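
It's worth confirming the subject alternative names actually made it into the certificate:

openssl x509 -in apiserver.pem -noout -text | grep -A 1 "Subject Alternative Name"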

Finally, the worker and cluster admin keypairs:


openssl genrsa -out worker-key.pem 2048
openssl req -new -key worker-key.pem -out worker.csr -subj "/CN=kube-worker"
openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out worker.pem -days 365
openssl genrsa -out admin-key.pem 2048
openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"
openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365
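
The admin keypair is the one you'll hand to kubectl. As a sketch (the cluster, user and context names here are arbitrary):

kubectl config set-cluster my-cluster --server=https://${MASTER_IP} --certificate-authority=ca.pem
kubectl config set-credentials my-admin --client-certificate=admin.pem --client-key=admin-key.pem
kubectl config set-context my-context --cluster=my-cluster --user=my-admin
kubectl config use-context my-context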

Once you have all the key files, you can set up your Kubernetes units to use them via command-line parameters. For the API server:


--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
--tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
--client-ca-file=/etc/kubernetes/ssl/ca.pem
--service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem

If you like, you can also generate some bearer tokens (any valid string will do) and store them in the known_tokens.csv file, passed to the API server with --token-auth-file. The format of this file is a 3-column CSV in the form "token, user name, user id".
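
For example, a one-line token file (the token and user below are made up):

# format: token,user name,user id
echo 'Zm9vYmFyCg,kube-admin,1' > /etc/kubernetes/known_tokens.csv

# then add to the API server flags:
# --token-auth-file=/etc/kubernetes/known_tokens.csv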

I Appear to Have a Working Cluster, but Pods Are Stuck in "Pending"

This is an extremely common problem, with many possible causes. The easiest place to start looking is to run kubectl get events and watch for any obvious-looking error messages.

Here are some common causes:

  • You don't have enough resources to run the pod.
  • A network issue means your workers are unreachable; check your network and SSL certs.
  • Your worker nodes are having issues pulling from your registry; check the Docker options.
  • You've specified a nodeSelector or similar constraint that can't be satisfied.
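
To narrow it down, kubectl describe usually gives more context than the raw event stream (my-pod is a hypothetical pod name):

# scheduling failures show up in the Events section at the bottom
kubectl describe pod my-pod

# check each node's capacity and conditions
kubectl describe nodes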

More Debugging FAQs

If you have no idea what's going wrong, a good place to start is to turn up the logging verbosity on your kubelets by adding --v=10 to the command-line options, then checking the kubelet logs. These could be located somewhere under /var/log, but a lot of "dockerised" Kubernetes installations run with --logtostderr=true, meaning you will find the logs via journalctl or docker logs.
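
Depending on how your kubelet is run (the unit and container names here are assumptions), that looks something like:

# kubelet managed by systemd
journalctl -u kubelet.service -f

# kubelet running as a Docker container named "kubelet"
docker logs -f kubelet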

As of Kubernetes v1.1, you can also use the new kubectl run command to drop into a shell directly inside a container on your cluster, where you can run debugging commands yourself. The official documentation for kubectl run has the details.
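
For example, to get a throwaway busybox shell (the image and name are just examples), something like:

# starts a new pod and attaches an interactive shell to it
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh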

If you're having an issue with Kubernetes, try the Kubernetes Slack room. We (and everyone else in the community) hang out there and try to answer questions.

Thank you for reading

Do you need help with a Cloud Native or Kubernetes implementation? Get in touch and let's work together.


At LiveWyer Labs we innovate through research and development, see what else we've been working on lately.

If you want to stay up to date and be notified when we post new and exciting content, make sure to follow us on LinkedIn and Medium.