For anyone who has used docker-compose and similar tools, environment files are often crucial for keeping the docker-compose.yaml file organized and easy to use and understand. When you move from docker-compose to Kubernetes, you will probably look for the same feature. Prior to Kubernetes 1.6, it was not possible to load an environment file into a ConfigMap and simply reference it from a container spec when needed. This is really useful when your app has a lot of properties, and especially when you want to share them across multiple apps: instead of specifying each environment variable individually, you can reference the whole ConfigMap.
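A minimal sketch of how this looks in Kubernetes 1.6+; the names `app.env`, `app-config`, and `my-app` are placeholders, not from any specific project:

```yaml
# Create the ConfigMap from an env file of KEY=VALUE lines:
#   kubectl create configmap app-config --from-env-file=app.env
#
# Then pull in the whole ConfigMap from the container spec
# instead of listing each variable under `env`:
spec:
  containers:
    - name: app
      image: my-app:latest
      envFrom:
        - configMapRef:
            name: app-config
```

Every key in `app.env` becomes an environment variable inside the container, so adding a new property no longer requires touching the deployment manifest.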
In my previous post, I presented an easy way to deploy a Kubernetes cluster with Rancher. Kubernetes is hard to install without third-party tools, but luckily an official tool for simple deployment, kubeadm, has been released. Please note that kubeadm is still in alpha and not ready for production use, but it is good enough to play with in development environments. Installing Kubernetes with kubeadm really simplifies the deployment procedure, and it is easy to use; I also found it very stable during my testing. kubeadm has been part of the Kubernetes distribution since the 1.4.0 release, but it does not follow the same release process at the moment. I expect it to be ready with Kubernetes 1.6.
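As a rough sketch of the kubeadm workflow (the token and master address below are placeholders; exact flags may differ between alpha releases):

```shell
# On the master node: initialize the control plane
kubeadm init

# On each worker node: join the cluster using the token
# printed by `kubeadm init`
kubeadm join --token <token> <master-ip>
```

Two commands replace what used to be a long manual setup of etcd, the API server, and the kubelet configuration.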
When it comes to Docker containers and orchestration, there are a lot of options available. For a while, a new Docker orchestration tool seemed to appear every few months. Most of these tools are open-source projects, but of course there are some enterprise orchestration tools as well. However, Google's Kubernetes is the most widely used and popular. Like many Google products, it is also complicated to install and manage. Google recently released the kubeadm deployment tool, but it is still in alpha and not ready for production environments. In this post, I will show you how to deploy Kubernetes on top of Rancher, which is my favorite approach. With Rancher, you can choose which Docker orchestration tool to use: Cattle (Rancher's own), Kubernetes, Mesos, or Docker Swarm.
When it comes to Docker and proxies, you will mostly not need them for running things locally or just testing something. However, we at Cron spend a lot of time managing production environments at corporations where everything sits behind a proxy. In this post, I will share the basics and a few tips on how to configure the Docker daemon, build images, and finally run Docker containers behind a proxy that doesn't use authentication.
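For example, on a systemd-based host the daemon's proxy can be set with a drop-in file, and the proxy can be passed to builds and containers as environment variables. The proxy address `proxy.example.com:3128` below is a placeholder:

```shell
# /etc/systemd/system/docker.service.d/http-proxy.conf
#   [Service]
#   Environment="HTTP_PROXY=http://proxy.example.com:3128"
#   Environment="HTTPS_PROXY=http://proxy.example.com:3128"

# Reload systemd and restart the daemon to pick up the drop-in
systemctl daemon-reload
systemctl restart docker

# Pass the proxy to image builds
docker build --build-arg http_proxy=http://proxy.example.com:3128 .

# Pass the proxy to a running container
docker run -e http_proxy=http://proxy.example.com:3128 my-image
```

The daemon-level setting covers `docker pull`; the build and run variables cover anything the image itself downloads at build time or runtime.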
A few months ago, a Node.js project I was working on switched from Express to Meteor. The project was running on AWS Elastic Beanstalk, and the continuous delivery procedure we had in place relied heavily on Elastic Beanstalk and other AWS services. My first step was to look for the fastest and least painful way to accommodate Meteor in that procedure (i.e. the fewer changes, the better). The first thing I discovered was that Elastic Beanstalk's native Node.js stack doesn't support Meteor out of the box, and there was no straightforward way to make it work.
SaltStack can be used to provision almost anything, and I also find it useful for provisioning and orchestrating Docker containers. With a large number of Salt states, however, I noticed a lot of duplication: the states for building images are almost the same for every Docker image. The best solution I found is to use SaltStack macros to create templates, which then serve as reusable Salt states for building any Docker image.
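A rough sketch of the idea using the `dockerng` state module; the macro name `build_image`, the file `macros.sls`, and the paths are hypothetical examples, not the actual states from the post:

```yaml
{# macros.sls: one reusable macro instead of a near-identical
   state per image #}
{% macro build_image(name, tag, build_path) %}
{{ name }}-image:
  dockerng.image_present:
    - name: {{ name }}:{{ tag }}
    - build: {{ build_path }}
{% endmacro %}

{# In any other state file, import the macro and call it
   once per image: #}
{% from "macros.sls" import build_image %}
{{ build_image("my-app", "latest", "/srv/docker/my-app") }}
```

Adding a new image then becomes a one-line macro call rather than a copy-pasted state block.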
The most commonly recommended way to persist data in Docker is to create a data-only container. To simplify things, however, it is also possible to just mount a directory from the host and use that location as persistent storage. This approach also makes it easy enough to dockerize existing Postgres installations.
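A minimal sketch with the official `postgres` image; the host path `/srv/postgres/data` is an arbitrary example:

```shell
# Mount a host directory as Postgres's data directory so the
# database survives container removal and upgrades
docker run -d \
  --name postgres \
  -v /srv/postgres/data:/var/lib/postgresql/data \
  postgres
```

Because the data lives on the host, the container can be deleted and recreated (e.g. to upgrade the image) while the database files stay in place.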