Kubernetes installation with kubeadm

In my previous post, I presented an easy way to deploy a Kubernetes cluster with Rancher. Kubernetes is hard to install without third-party tools, but luckily there is now an official tool for simple deployments: kubeadm. Please note that kubeadm is still in alpha and not ready for production use, but it is good enough to play with in development environments. Kubernetes installation with kubeadm really simplifies the deployment procedure and it is easy to use. I also found it very stable during my testing. kubeadm has been part of the Kubernetes distribution since the 1.4.0 release, but it does not follow the same release process at the moment. I expect it to be ready with Kubernetes 1.6.

In this post, I will show you how to use the kubeadm tool, along with some tips for installing Kubernetes in a Vagrant environment. Everything is done manually for a better understanding of the process, but it could easily be automated with a configuration management tool like SaltStack. Here is the Vagrantfile I used to run 3 VMs:
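The original Vagrantfile did not survive here, so below is a minimal sketch of what it could look like based on the description that follows: three VMs, a second host-only interface for cluster traffic, and a hosts-file update during provisioning. The box name and IP range are my assumptions.

```ruby
Vagrant.configure("2") do |config|
  # Box and IPs are assumptions; any recent Linux box should work
  config.vm.box = "ubuntu/xenial64"

  nodes = { "master" => "10.0.15.10", "agent1" => "10.0.15.11", "agent2" => "10.0.15.12" }

  # Build the /etc/hosts entries once, shared by every VM
  hosts_script = nodes.map { |n, i| "echo '#{i} #{n}' >> /etc/hosts" }.join("\n")

  nodes.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = name
      # Second NIC: the first one is NATed and identical on all VMs,
      # so Kubernetes cluster traffic goes over this host-only network
      node.vm.network "private_network", ip: ip
      # Make all nodes resolvable by hostname during provisioning
      node.vm.provision "shell", inline: hosts_script
    end
  end
end
```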

I also added an additional network interface for Kubernetes cluster networking and a simple shell command to update the hosts file during provisioning. This is required for Kubernetes networking to work on Vagrant, because the first interface is NATed and all nodes have the same IP address on it. After all VMs are up and running, the first step is to add the official Kubernetes repo and install the required packages:
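The package installation commands were lost here; a sketch of what they looked like in the kubeadm alpha era, assuming Ubuntu (the repo URL and package list follow the official install docs of the time):

```shell
# Run on all three VMs as root
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" \
  > /etc/apt/sources.list.d/kubernetes.list
apt-get update
# docker.io comes from the Ubuntu repos; the rest from the Kubernetes repo
apt-get install -y docker.io kubelet kubeadm kubectl kubernetes-cni
```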

Repeat the above step on all three VMs. Once the packages are installed, we can start cluster initialization on the master node:
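The init command itself is missing here; reconstructed from the three parameters explained below, it would be roughly (the host-only IP is a placeholder, and 10.244.0.0/16 is flannel's default pod CIDR):

```shell
# Run on the master only; <master_host_only_ip> is the address of enp0s8
kubeadm init \
  --api-advertise-addresses=<master_host_only_ip> \
  --pod-network-cidr=10.244.0.0/16 \
  --token 8c2350.f55343444a6ffc46
```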

A short explanation for all three command parameters:

  • --api-advertise-addresses selects the address the API server advertises, i.e. which interface to use
  • --pod-network-cidr is required for the flannel network
  • --token 8c2350.f55343444a6ffc46 sets the cluster token; if omitted, a token will be auto-generated

A note from official docs on networking with kubeadm:

You must install a pod network add-on so that your pods can communicate with each other. It is necessary to do this before you try to deploy any applications to your cluster, and before kube-dns will start up. Note also that kubeadm only supports CNI based networks and therefore kubenet based networks will not work.

Flannel networking without Etcd

In this example, I will use an experimental Flannel feature that uses the Kubernetes API as a datastore instead of etcd. The options used to start the Flannel containers:
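The options block was lost here; based on the flags discussed below, the flanneld invocation would look something like this (the binary path inside the container is an assumption):

```shell
# --kube-subnet-mgr is the experimental mode that stores subnet leases
# in the Kubernetes API instead of etcd
KUBERNETES_SERVICE_HOST=<master_host_only_ip> \
KUBERNETES_SERVICE_PORT=6443 \
/opt/bin/flanneld --kube-subnet-mgr --iface=enp0s8
```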

The --iface=enp0s8 parameter is required if you have multiple NICs. I also added the KUBERNETES_SERVICE_HOST= and KUBERNETES_SERVICE_PORT=6443 environment variables so that flannel can reach the Kubernetes API. Otherwise, flannel tries to reach the API at its default in-cluster address, which for some reason is not accessible from the agents. I’m not quite sure if this is a bug.

Now download and start Flannel network CNI:
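The manifest URL below is an assumption based on where the flannel repo kept its Kubernetes manifest at the time:

```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```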

After some time you should see all services running. The DNS service should also be up at this point:
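A quick way to check, assuming kubectl is configured on the master:

```shell
# All pods in kube-system, including kube-dns, should reach Running
kubectl get pods --all-namespaces
```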

Adding Kubernetes nodes

Adding a new Kubernetes node is easy. Just one command:
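Reconstructed with the token from the init step (the master IP is a placeholder):

```shell
# Run on each agent; pre-1.6 kubeadm join took the bare master address
kubeadm join --token 8c2350.f55343444a6ffc46 <master_host_only_ip>
```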

After a few minutes you should see that the node is in the Ready state:
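You can check with:

```shell
# The new node should show STATUS "Ready"
kubectl get nodes
```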

Adding Kubernetes dashboard

Run the command below to start the Kubernetes dashboard service:
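The dashboard manifest location has moved around over the years; at the time it was something like the URL below (an assumption on my part):

```shell
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
```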

Accessing the Kubernetes dashboard, or any other service, from your localhost is easy. You just need to find the NodePort of the service and create a forwarding rule in VirtualBox. Find the NodePort for the kubernetes-dashboard service:
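One way to do it, assuming the service lives in the kube-system namespace:

```shell
kubectl describe svc kubernetes-dashboard --namespace=kube-system | grep NodePort
```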

An easy way to create a forwarding rule is with the VBoxManage CLI tool. List all running VMs:
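```shell
VBoxManage list runningvms
```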

Forward port 31531 from the above command to port 9090 on your localhost. The rule can be created on any agent:
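Using VBoxManage's natpf rule syntax; the VM name placeholder is whatever `list runningvms` printed for the agent:

```shell
# Rule format: name,protocol,host-ip,host-port,guest-ip,guest-port
VBoxManage controlvm <vm_name> natpf1 "dashboard,tcp,127.0.0.1,9090,,31531"
```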

At this point you should be able to access the Kubernetes dashboard at http://localhost:9090.

If you have any questions, please leave a comment. Also, please check the limitations of the kubeadm tool.

Updates with Kubernetes v1.6.1:

kubeadm is in beta now and some commands have changed.

Cluster initialization:
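In v1.6 the advertise flag was renamed, so the updated init would be roughly:

```shell
# --api-advertise-addresses became --apiserver-advertise-address in v1.6
kubeadm init \
  --apiserver-advertise-address=<master_host_only_ip> \
  --pod-network-cidr=10.244.0.0/16 \
  --token 8c2350.f55343444a6ffc46
```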

You will also get the message:
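The exact wording varies by version, but the tail of the kubeadm init output looks roughly like this (abridged and illustrative, not verbatim):

```shell
# To start using your cluster, you need to run (as a regular user):
#   sudo cp /etc/kubernetes/admin.conf $HOME/
#   sudo chown $(id -u):$(id -g) $HOME/admin.conf
#   export KUBECONFIG=$HOME/admin.conf
#
# You can now join any number of machines by running the following
# on each node:
#   kubeadm join --token <token> <master-ip>:6443
```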
Flannel RBAC:
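With RBAC enabled by default in v1.6, flannel needs its RBAC manifest first (the URL is an assumption based on the flannel repo layout at the time):

```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
```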
Flannel config:
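Then the flannel config itself (same caveat on the URL):

```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```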
To join the agents, you also need to specify the port:
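With the token from the init step, that becomes:

```shell
# 6443 is the default secure port of the API server
kubeadm join --token 8c2350.f55343444a6ffc46 <master_host_only_ip>:6443
```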
Alen Komljen

  • Harsh Desai

    Hello Alex,

    Would you know a way to deploy the above setup using a private kubernetes build? I have a clone of the kubernetes repository and have made some changes to it. Now I want to deploy those changes to a new cluster using the above approach.

    • Hi,

      Sorry, but I never did that and I don’t know if it is possible or not.

  • Traiano Welcome

Hi Alex, great article, thanks! One issue I’m having though: I’m at the step where I want to expose the dashboard to the Mac computer I’m running VirtualBox on, however the networking of the dashboard service seems different from what you have in the article (so I’m not sure how to expose it):

    $ kubectl describe services kubernetes-dashboard --namespace=kube-system
    Name: kubernetes-dashboard
    Namespace: kube-system
    Labels: k8s-app=kubernetes-dashboard
    Selector: k8s-app=kubernetes-dashboard
    Type: ClusterIP
    Port: 80/TCP
    Session Affinity: None

    How would I expose/access the dashboard service in this case?

  • Traiano Welcome


    I’ve set up a Kubernetes cluster using your guide, however I notice that on one agent a virtual ethernet interface is not present, and this causes services on that node not to be accessible:

    Master:

    $ ifconfig | egrep "Link encap"
    docker0 Link encap:Ethernet HWaddr 02:42:16:ff:3c:4d
    enp0s3 Link encap:Ethernet HWaddr 02:4c:96:80:3a:3c
    enp0s8 Link encap:Ethernet HWaddr 08:00:27:c0:7f:60
    lo Link encap:Local Loopback
    veth7bf0140 Link encap:Ethernet HWaddr 5e:94:fb:82:d1:ab

    Agent #1:

    $ ifconfig | egrep "Link encap"
    docker0 Link encap:Ethernet HWaddr 02:42:06:b8:3f:b4
    enp0s3 Link encap:Ethernet HWaddr 02:4c:96:80:3a:3c
    enp0s8 Link encap:Ethernet HWaddr 08:00:27:61:4d:6e
    lo Link encap:Local Loopback
    vetheb9dc07 Link encap:Ethernet HWaddr 1e:ec:9b:cc:ab:46

    Agent #2:

    $ ifconfig | egrep "Link encap"
    docker0 Link encap:Ethernet HWaddr 02:42:21:08:44:bf
    enp0s3 Link encap:Ethernet HWaddr 02:4c:96:80:3a:3c
    enp0s8 Link encap:Ethernet HWaddr 08:00:27:c5:da:fb
    lo Link encap:Local Loopback

    An IP address that is reachable from the master and Agent 1 is not reachable from Agent 2:

    $ ping -c 1
    PING ( 56(84) bytes of data.
    From icmp_seq=1 Destination Host Unreachable

    --- ping statistics ---
    1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

    $ ping -c 1
    PING ( 56(84) bytes of data.
    64 bytes from icmp_seq=1 ttl=64 time=0.063 ms

    --- ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms

    $ ping -c 1
    PING ( 56(84) bytes of data.
    From icmp_seq=1 Destination Host Unreachable

    --- ping statistics ---
    1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

    Is there anything I could check to fix this?