docker


Start Small and Keep it Simple: Overview

Keep it simple, stupid (KISS) in software development isn’t always easy. After 10 years of maintaining and evolving a little search engine, it’s time for a review. The project started with a question from a former colleague: could we build a search engine about planned topics in journals, magazines, and newspapers? As a developer, I have an instant desire to answer such a question with a clear yes: greenfield projects are great playgrounds, places to learn and hone one’s skills.

Running Docker LinuxKit on Linode

This post will show you the steps it takes to run a custom LinuxKit-based image on Linode. But let’s start with the basics: LinuxKit is a tool to create immutable and minimal operating system images. As you might expect, it heavily leverages Docker images and uses containers to run services via containerd. Thanks to its simplicity, you can configure your OS images with simple YAML files. Many examples are already available in the LinuxKit GitHub repository.
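To give a rough idea of such a YAML description, here is a minimal, hypothetical sketch loosely based on the examples in the LinuxKit repository; the image tags are placeholders and should be taken from the current examples there:

    # hypothetical minimal.yml; image tags below are placeholders
    cat > minimal.yml <<'EOF'
    kernel:
      image: linuxkit/kernel:4.9.x
      cmdline: "console=ttyS0"
    init:
      - linuxkit/init:v0.6
      - linuxkit/runc:v0.6
      - linuxkit/containerd:v0.6
    services:
      - name: getty
        image: linuxkit/getty:v0.6
    EOF

    # build bootable image artifacts from the YAML description
    linuxkit build minimal.yml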

Zero Downtime Deployment with Docker Stack and Spring Boot

Playing around with Docker Swarm mode is really fun: you only need a single command to initialize a cluster on your local machine. The following line configures your Docker Engine as a Docker Swarm manager: docker swarm init. The command output already tells you how to add additional worker nodes and looks similar to this: docker swarm join \ --token SWMTKN-1-somecharacters-andmorecharacters \ 192.168.47.11:2377. You don’t necessarily have to add additional worker nodes, because the manager node can also be used as a worker.
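Put together, the whole setup boils down to these few commands; the join token and address are just the placeholders from above:

    # on the machine that should become the swarm manager
    docker swarm init

    # on any additional worker, run the join command printed by "swarm init"
    docker swarm join \
      --token SWMTKN-1-somecharacters-andmorecharacters \
      192.168.47.11:2377

    # back on the manager: list all nodes in the swarm
    docker node ls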

Blue/Green Deployment with Docker InfraKit

Docker InfraKit is a tool to create and manage your infrastructure in a declarative and self-healing manner. InfraKit itself more or less only consists of plugins communicating via unix domain sockets. There are different types of plugins, namely group, instance, and flavor, each with a focus on a different layer of infrastructure management. Instance plugins are no surprise here: they manage your physical resources. Whether a physical resource is actually some machine or only a virtual concept is an implementation detail of the instance plugin.
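As a rough, hypothetical sketch of how these plugin types fit together, a group configuration might look something like the following; the plugin names and property fields here are assumptions for illustration, not copied from the InfraKit documentation:

    # group.json: the group plugin delegates to an instance plugin (the resources)
    # and a flavor plugin (what should run on them)
    cat > group.json <<'EOF'
    {
      "ID": "workers",
      "Properties": {
        "Allocation": { "Size": 3 },
        "Instance": { "Plugin": "instance-vagrant", "Properties": {} },
        "Flavor": { "Plugin": "flavor-vanilla", "Properties": {} }
      }
    }
    EOF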

On Library Dependencies and API Evolution

As a maintainer of publicly available libraries, one sometimes has to answer feature requests or handle feedback about missing features of the library. How does one prepare for different requirements? I’d like to tell you how and why I opted to implement yet another Docker HTTP client on my own, and I’ll also try to explain where I see its benefits and disadvantages. Not invented here syndrome? You have probably seen several Docker remote API client libraries for different environments or programming languages.
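To get a feeling for what such a client talks to: the Docker remote API is plain HTTP over the Docker daemon’s unix socket, so a minimal request can be sketched with curl (the socket path assumes a default Docker installation):

    # ping the Docker daemon
    curl --unix-socket /var/run/docker.sock http://localhost/_ping

    # list running containers as JSON
    curl --unix-socket /var/run/docker.sock http://localhost/containers/json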

Determine whether a Docker container is running

You might want to find out if a Docker container named “elasticsearch” is currently running. There is a docker ps command to list all running containers in a table-like view. Let’s assume there are two containers currently running; the result would look like this: Ignoring the fact that it’s quite a wide table, you might want to take the chance and use some good old tools. Yes, the classics like grep, awk, sed and the other usual suspects.
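A sketch of that classic approach, assuming the container name “elasticsearch” from above; note that a plain grep would also match image names containing the same string:

    # grep the docker ps output for the container name
    docker ps | grep -q "elasticsearch" && echo "elasticsearch is running"

    # a stricter alternative using docker ps filters
    docker ps --filter "name=elasticsearch" --filter "status=running" -q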

Deploying with Gradle and Docker: have fun

Deploying products to production is quite an individual process for every application. Building deployment pipelines has already been described on a high level of abstraction, e.g. by the ThoughtWorks guys Jez Humble, Dan North and Chris Read in their paper The Deployment Production Line, not to mention the more general theme of Continuous Delivery, described in its own book by Jez Humble and David Farley. You might search for tools implementing the corresponding patterns, and you’ll find some like ThoughtWorks Go (it’s free!

Gradle Summit 2014 follow-up

Probably a bit late, but private life is more important :) You can find the slides of my talk for the Gradle Summit 2014 online at http://gesellix.github.io/gradle-summit-2014/. The video is available on YouTube along with the other Gradle Summit videos. In case you’re interested in details regarding our implementation of a deployment pipeline using Gradle and Docker (2nd part of the Gradle Summit talk), please follow this blog or my employer’s IT blog.

Gradle-Docker-Plugin and Docker-Client available

In line with our deployment pipeline written in Gradle and using Docker, we currently use Groovy’s process execution methods to talk to a command line Docker client. That way we make ourselves dependent on a Docker client being installed on our CI servers. Since we don’t want to provide a bunch of specific CI servers, I started to implement an HTTP client for Docker, written in Groovy. The reason not to use existing Java implementations of such a Docker client was simply timing: some months ago the now completely rewritten Java Docker API Client wasn’t as well maintained as it is today… and, well, I like to play with new tools and wanted to explore the Docker remote API for myself.

Gradle Summit 2014

Great news: I’m going to speak at the Gradle Summit 2014 in Santa Clara. This will be the very first talk I’ll give in public, so I’m quite excited! My talk will have two parts: the first one is about how we migrated a Maven multi-module project of 300 pom.xml files to Gradle. The second part will show you how we currently use Gradle in combination with Docker to continuously deploy a product of the EUROPACE 2 platform to production.

Facter, Docker and the public ip address

Using Docker on a Puppet-managed host influences Facter when it tries to find the host IP address. Which network interface is the best? Facter collects so-called facts about the system and provides them to Puppet modules. When using a fact like :ipaddress you’ll see that Facter only uses the output of the native ifconfig command, sorts all existing interfaces by name and takes the first non-local interface as the result.
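A quick way to see this behaviour on a Docker host is to compare the generic fact with the interface-specific ones; the interface name eth0 is an assumption for this sketch:

    # the interfaces Facter knows about (docker0 shows up once the Docker daemon runs)
    facter interfaces

    # the generic fact, which simply picks the first non-local interface by name
    facter ipaddress

    # interface-specific facts are more predictable
    facter ipaddress_eth0
    facter ipaddress_docker0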

Docker and IPv6 on Ubuntu 13.10 (Saucy Salamander)

After making myself familiar with Docker I wanted to use it on the more recent Ubuntu 13.10 release. I still haven’t installed Docker natively on my PC, but use a Vagrant box to play with fresh releases. After downloading a fresh Ubuntu 13.10 image, creating a VirtualBox image and installing the current Docker release 0.8.0, I tried to use my other little toys, CouchDB and Elasticsearch, in Docker containers. Sadly, I couldn’t connect to the exposed ports anymore.
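A sketch of the kind of check that makes the problem visible; the image name is only a placeholder and CouchDB’s default port 5984 is assumed:

    # run a container with CouchDB's port published to the host
    docker run -d -p 5984:5984 --name couchdb some/couchdb-image

    # try to reach the service from the host
    curl -v http://localhost:5984/

    # check whether the port has only been bound to IPv6 (look for "tcp6")
    sudo netstat -tlnp | grep 5984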

Vagrant Forwarding Ports Exposed by Docker

Playing around with Docker running inside a Vagrant VM and trying to use some services exposed via HTTP ports makes you wonder how to automatically forward the exposed Docker ports through Vagrant to the host system. Phew, I found an already merged pull request which looked like what I wanted. So, after a peek into the Docker Vagrantfile I tried to expose a CouchDB port to the host system like this:
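As a hypothetical sketch of such a forwarding rule, assuming CouchDB’s default port 5984 (Vagrant merges multiple configure blocks, so appending works for a quick test, though you’d normally edit the Vagrantfile directly):

    # add a forwarded_port rule for CouchDB to the Vagrantfile
    cat >> Vagrantfile <<'EOF'
    Vagrant.configure("2") do |config|
      config.vm.network :forwarded_port, guest: 5984, host: 5984
    end
    EOF

    # apply the changed configuration
    vagrant reload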