Terraform is a great tool for cloud provisioning, and if you are not already using it, I highly suggest looking into it. When cloud resources are provisioned with Terraform, we sometimes need to reprovision them, ideally every time we run terraform apply. That is not supported in Terraform, at least not yet. However, it has a great feature that lets you reprovision when the infrastructure changes. An example would be when a new node is added to AWS and needs to be joined to the other nodes in the cluster. We can do this with null_resource, which acts like any other resource but supports triggers. The trigger needs to be a value that changes; otherwise, the provisioner will not run every time.
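A minimal sketch of the idea: the trigger value changes whenever the set of node IDs changes, so the provisioner re-runs on the next apply. The resource names and the join-cluster.sh script are illustrative, not from an actual configuration.

```hcl
# Re-runs the provisioner whenever the list of node IDs changes,
# e.g. when a new node is added to the cluster.
resource "null_resource" "cluster_join" {
  triggers {
    node_ids = "${join(",", aws_instance.node.*.id)}"
  }

  provisioner "local-exec" {
    command = "./join-cluster.sh"
  }
}
```

If the trigger were a static value, Terraform would consider the null_resource unchanged and skip the provisioner on subsequent applies.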
Having a good logging solution is crucial for almost any project; it makes debugging with application logs much easier. The ELK (Elasticsearch / Logstash / Kibana) stack is popular across different platforms and is often the choice for an in-house logging solution. Unlike Docker Compose or Swarm, Kubernetes gives us no way to specify a logging driver for each container individually. We could set the logging driver at the Docker engine level, but that is not a pretty solution. Since all container logs are stored as files under /var/log/containers, we can instead deploy an agent as a DaemonSet that reads those files on each worker node and ships them to Logstash.
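To illustrate the DaemonSet approach, here is a minimal sketch using Filebeat as the log-shipping agent; the names, image tag, and mount are illustrative assumptions, and a real deployment would also need a Filebeat config pointing its output at Logstash.

```yaml
# Runs one agent pod per worker node and mounts the host's
# container log directory so the agent can tail the log files.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.8.0
        volumeMounts:
        - name: containerlogs
          mountPath: /var/log/containers
          readOnly: true
      volumes:
      - name: containerlogs
        hostPath:
          path: /var/log/containers
```

Because it is a DaemonSet, Kubernetes schedules exactly one agent pod on every worker, so logs from new nodes are picked up automatically.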
For all those who have used docker-compose and similar tools: environment files are sometimes crucial for keeping the docker-compose.yaml file organized and making it easier to use and understand. When you move from docker-compose to Kubernetes, you will probably look for this kind of feature. Prior to Kubernetes version 1.6, it was not possible to load an environment file into a ConfigMap and simply reference it from a container spec when needed. This is really useful when your app has a lot of properties, and especially when you want to share them between multiple apps. So, instead of specifying each environment variable individually, we can reference the whole ConfigMap.
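As a sketch of how this looks (app-config and app.env are hypothetical names): first load the env file into a ConfigMap, then pull the whole map into the container with envFrom.

```yaml
# Create the ConfigMap from an env file (KEY=value lines):
#   kubectl create configmap app-config --from-env-file=app.env
#
# Then reference the whole map instead of listing variables one by one:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    envFrom:
    - configMapRef:
        name: app-config
```

Every key in app-config becomes an environment variable in the container, and multiple apps can reference the same ConfigMap.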
In my previous post, I presented an easy way to deploy a Kubernetes cluster with Rancher. Kubernetes is hard to install without third-party tools, but luckily there is now an official tool for simple deployment: kubeadm. Please note that kubeadm is still in alpha and not ready for production use, but it is good enough to play with in development environments. Installing Kubernetes with kubeadm really simplifies the deployment procedure, and it is easy to use. I also found it very stable during my testing. kubeadm has shipped with the Kubernetes distribution since the 1.4.0 release, but it does not follow the same release process at the moment. I expect it to be ready with Kubernetes 1.6.
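The basic flow boils down to two commands, sketched below; the token and master IP placeholders are stand-ins for the values kubeadm prints during initialization.

```shell
# On the master node: initialize the control plane
kubeadm init

# On each worker node: join the cluster using the token
# printed by `kubeadm init` (<token> and <master-ip> are placeholders)
kubeadm join --token <token> <master-ip>
```

That is essentially the whole bootstrap; networking add-ons and kubectl configuration come on top of it.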
When it comes to Docker containers and orchestration, there are a lot of options available. A new Docker orchestration tool seemed to appear every few months, at least in the beginning. Most of those tools are open-source projects, but of course there are some enterprise orchestration tools as well. However, Google's Kubernetes is the most used and most popular. Like many Google products, it is also complicated to install and manage. They recently released kubeadm, a Kubernetes deployment tool, but it is still in alpha and not ready for production environments. In this post, I will show you how to deploy Kubernetes on top of Rancher, which is my favorite. With Rancher, you can choose which Docker orchestration tool to use: Cattle (Rancher's own), Kubernetes, Mesos, or Docker Swarm.
When I set out to deploy New Relic's excellent and free Server Monitoring agent on a couple of Elastic Beanstalk environments, I was expecting an "easy-peasy, copy/paste a couple of commands and it magically works" type of thing, like it usually is when dealing with New Relic. What surprised me was that not only was there not a lot of official documentation for it, but I also couldn't find a ready-made solution that did what I needed (crazy, I know). And what I needed seemed pretty simple: automatically deploy and configure the nrsysmond agent while having it read some info about the environment.
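On Elastic Beanstalk, this kind of automation typically lives in an .ebextensions config file. The sketch below is a hypothetical example of that approach, assuming a yum-based Amazon Linux environment, a New Relic repository that is already set up, and a NEW_RELIC_LICENSE_KEY environment property; none of these names come from the original post.

```yaml
# Hypothetical .ebextensions/newrelic.config sketch: install and
# configure nrsysmond during each deployment.
commands:
  01_install_agent:
    # assumes the New Relic yum repository is already available
    command: yum install -y newrelic-sysmond
  02_set_license_key:
    # read the license key from an environment property, not hard-coded
    command: nrsysmond-config --set license_key=$NEW_RELIC_LICENSE_KEY
  03_start_agent:
    command: /etc/init.d/newrelic-sysmond start
```

Commands in .ebextensions run on every deployment, which is what makes the setup survive instance replacement and environment rebuilds.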
When it comes to Docker and proxies, you will mostly not need them when running things locally or just testing something. However, we at Cron spend a lot of time managing production environments at corporations where everything sits behind a proxy. In this post, I will share some basics and a few tips on how to set up the Docker daemon, build images, and finally run Docker containers behind a proxy that doesn't use authentication.
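As a quick preview of all three cases, here is a sketch for a systemd-based host; proxy.example.com:3128 is a placeholder for your actual proxy address.

```shell
# 1. Daemon: point dockerd at the proxy via a systemd drop-in
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker

# 2. Builds: pass the proxy as build args so RUN steps can reach out
docker build --build-arg http_proxy=http://proxy.example.com:3128 \
             --build-arg https_proxy=http://proxy.example.com:3128 \
             -t myimage .

# 3. Containers: export the proxy into the container environment
docker run -e http_proxy=http://proxy.example.com:3128 myimage
```

The daemon setting covers image pulls, while the build args and run-time variables cover what happens inside builds and containers.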
Ruby is a programming language with a focus on simplicity and productivity. However, it lags behind other programming languages in performance, especially in applications that make a lot of I/O operations like database and network calls. Concurrency can help with this, even though Ruby (MRI, at least) doesn't support true parallel multithreading. The problem is that Ruby doesn't provide many synchronization primitives, unlike some other languages designed with concurrency in mind (e.g. Go). This post will show an implementation of a class based on lower-level synchronization mechanisms: TimedSemaphore.
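A minimal sketch of what such a class could look like, built only on Ruby's Mutex and ConditionVariable: it allows at most a fixed number of acquisitions per time period, and callers that exceed the limit block until the next period starts. This is my own illustrative version, not necessarily the implementation from the post.

```ruby
# A TimedSemaphore sketch: at most `limit` acquires per `period` seconds.
class TimedSemaphore
  def initialize(limit, period)
    @limit = limit
    @count = 0
    @lock  = Mutex.new
    @cond  = ConditionVariable.new
    # Background timer: reset the counter once per period and
    # wake up every thread blocked in #acquire.
    @timer = Thread.new do
      loop do
        sleep period
        @lock.synchronize do
          @count = 0
          @cond.broadcast
        end
      end
    end
  end

  def acquire
    @lock.synchronize do
      # Block until the timer resets the counter for the next period.
      @cond.wait(@lock) while @count >= @limit
      @count += 1
    end
  end
end
```

For example, with `TimedSemaphore.new(2, 0.2)` the first two calls to `acquire` return immediately, while a third call blocks until the 0.2-second period elapses, which is exactly the shape you want for rate-limiting outbound API calls.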
A few months ago, a Node.js project I was working on switched from Express to Meteor. The project was running on AWS Elastic Beanstalk, and the continuous delivery procedure we had in place relied pretty heavily on Elastic Beanstalk and other AWS services. My first step was to look for the fastest and least painful way to accommodate Meteor in that procedure (i.e. the fewer changes the better). The first thing I discovered was that Elastic Beanstalk's native Node.js stack doesn't support Meteor out of the box, and there was no straightforward way to make it work.