ELK stack on Kubernetes

Having a good logging solution is crucial for almost any project, and it makes debugging from application logs much easier. The ELK (Elasticsearch / Logstash / Kibana) stack is popular across platforms and is often the choice for an in-house logging solution. Unlike Docker Compose or Swarm, Kubernetes gives us no way to specify a logging driver for each container individually. We could set the logging driver at the Docker engine level, but that is not a pretty solution. Since all container logs are stored as files under /var/log/containers, we can instead deploy an agent as a DaemonSet that reads those files on each worker node and ships them to Logstash.

Filebeat agent

For the agent, we will use the Filebeat daemon, which is the replacement for the Logstash forwarder. There is no official image available, but I created one, and it is available here: https://hub.docker.com/r/komljen/filebeat.

This is the Filebeat configuration file, ready for Kubernetes, that is baked into the image:
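A sketch of what that configuration looks like (the exact file ships inside the image; the Logstash host, port, and exclude patterns shown here are assumptions):

```yaml
filebeat:
  prospectors:
    - paths:
        # Pick up every container log on the node
        - "/var/log/containers/*.log"
      # Skip Filebeat's own logs and kube pod logs
      exclude_files:
        - "filebeat.*"
        - "kube-.*"
      document_type: kube-logs
output:
  logstash:
    # Assumed Logstash service name and Beats port
    hosts: ["logstash:5044"]
```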

From this config, we can see that Filebeat will pick up all logs from the /var/log/containers directory, skipping its own logs and the logs from kube pods, in case you want to keep those separate.

Also, to override this file, you can create a Kubernetes ConfigMap resource and mount it into the container. Instructions for that are here: https://github.com/komljen/docker-filebeat.
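As an illustration, the override could look like this (the ConfigMap name and mount path are assumptions, not taken from the repository):

```yaml
# ConfigMap holding a custom filebeat.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
data:
  filebeat.yml: |
    filebeat:
      prospectors:
        - paths:
            - "/var/log/containers/*.log"
# Then, in the pod spec, mount it over the default config:
#   volumeMounts:
#     - name: filebeat-config
#       mountPath: /etc/filebeat
#   volumes:
#     - name: filebeat-config
#       configMap:
#         name: filebeat-config
```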

The Filebeat container will be deployed as a DaemonSet, which means it will run on each worker node. Here is the Filebeat DaemonSet config:
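A minimal sketch of such a DaemonSet (the repository's actual manifest may differ; note that /var/log/containers holds symlinks into /var/lib/docker/containers, so both host paths are mounted):

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
spec:
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
        - name: filebeat
          image: komljen/filebeat
          volumeMounts:
            # Host log paths must be visible inside the container
            - name: varlogcontainers
              mountPath: /var/log/containers
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
```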

Create the Kubernetes resource:
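Assuming the DaemonSet manifest is saved as filebeat-daemonset.yml (the filename is an assumption):

```shell
kubectl create -f filebeat-daemonset.yml

# Verify that one Filebeat pod is running per worker node
kubectl get pods -l app=filebeat -o wide
```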

ELK stack

Because there are a lot of files for the ELK stack, you can find all of them here: https://github.com/komljen/kube-elk-filebeat.

All images are built from the official Elastic images (Alpine based), with small changes in the configs, and all of them are available on Docker Hub. Clone this repository and create all Kubernetes resources for the ELK stack:
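The steps above can be sketched as follows (creating every manifest in the repository root is an assumption; the repository's README is authoritative):

```shell
git clone https://github.com/komljen/kube-elk-filebeat.git
cd kube-elk-filebeat

# Create all ELK stack resources in one go
kubectl create -f .

# Wait until Elasticsearch, Logstash, and Kibana pods are running
kubectl get pods
```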

Kibana should be running on port 30000 and be reachable from any worker node. To configure it, open a web browser, replace the default index name logstash-* with filebeat-*, choose the time-field name, and click Create. All logs should then be visible in the Discover menu.

NOTE: This will not work if your Kubernetes cluster is managed by Rancher, because Kubernetes logs will not be available at the usual location, /var/log/containers.

Alen Komljen

Building and automating infrastructure with Docker, Kubernetes, kops, Helm, Rancher, Terraform, Ansible, SaltStack, Jenkins, AWS, GKE and many others.