Keeping it simple, stupid (KISS) isn’t always easy in software development. After 10 years of maintaining and evolving a little search engine, it’s time for a review.

Overview

The project started with a question from a former colleague: could we build a search engine about planned topics in journals, magazines, and newspapers? As a developer, I have an instant desire to answer such a question with a clear “Yes”: greenfield projects are great playgrounds, places to learn and hone one’s skills.
This post will show you the steps it takes to run a custom LinuxKit-based image on Linode.

About LinuxKit

Let’s start with the basics: LinuxKit is a tool to create immutable and minimal operating system images. As you might expect, it heavily leverages Docker images and uses containers to run services via containerd. Thanks to its simplicity, you can configure your OS images with plain YAML files. Many examples are already available in the LinuxKit GitHub repository.
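To give a rough idea of the shape of such a YAML file, here is a minimal sketch. The top-level sections (`kernel`, `init`, `onboot`, `services`) are LinuxKit’s own, but the image tags are placeholders and the selection of services is illustrative, not taken from the post:

```yaml
# Minimal LinuxKit config sketch — replace <tag> placeholders
# with current image versions from the LinuxKit repository.
kernel:
  image: linuxkit/kernel:<version>
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:<tag>
  - linuxkit/runc:<tag>
  - linuxkit/containerd:<tag>
onboot:
  # one-shot containers, run sequentially at boot
  - name: dhcpcd
    image: linuxkit/dhcpcd:<tag>
services:
  # long-running containers, supervised by containerd
  - name: getty
    image: linuxkit/getty:<tag>
```

Such a file is then turned into a bootable image with `linuxkit build`, which produces output artifacts in the formats you request.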
Playing around with Docker Swarm mode is really fun: you only need a single command to initialize a cluster on your local machine. The following line configures your Docker Engine as a Docker Swarm manager:

    docker swarm init

The command output already tells you how to add additional worker nodes and looks similar to this:

    docker swarm join \
        --token SWMTKN-1-somecharacters-andmorecharacters \
        192.168.47.11:2377

You don’t necessarily have to add additional worker nodes, because the manager node can also be used as a worker.
The Docker InfraKit is a tool to create and manage your infrastructure in a declarative and self-healing manner. InfraKit itself more or less consists only of plugins, communicating via Unix domain sockets. There are different types of plugins, namely group, instance, and flavor, each with a focus on a different layer of infrastructure management. Instance plugins are no surprise here: they manage your physical resources. Whether a physical resource is actually some machine or only a virtual concept is an implementation detail of the instance plugin.
As the maintainer of publicly available libraries, one sometimes has to answer feature requests or handle feedback about missing features of the library. How does one prepare for different requirements? I’d like to tell you how and why I opted to implement yet another Docker HTTP client on my own, and I’ll also try to explain where I see its benefits and disadvantages.

Not invented here syndrome?

You have probably seen several Docker remote API client libraries for different environments or programming languages.
You have heard about Continuous Delivery or Continuous Deployment, haven’t you? Apart from abstract definitions, we showed you how one can perform deployments with the help of Gradle, Ansible, and Docker. A quite complete example project has been published on GitHub. Most deployment pipelines contain the steps in the figure below. The order of creating deployables and performing tests doesn’t really matter; both steps might even be performed asynchronously. But do deployments end with the newly released application being available on the production server?
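The pipeline shape can be sketched in shell. The step commands here are hypothetical placeholders (a real pipeline would call Gradle, Ansible, and so on); the point is only the ordering — the two independent steps run asynchronously, and deployment waits for both:

```shell
# Sketch of a deployment pipeline; the step bodies are placeholders,
# not commands from the example project.
create_deployable() { echo "assembling artifact"; }
run_tests()         { echo "running tests"; }
deploy()            { echo "deploying to production"; }

# Creating deployables and performing tests are independent,
# so they may run asynchronously:
create_deployable &
run_tests &
wait

# Deployment only happens after both steps have finished.
deploy
```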
You might want to find out if a Docker container named “elasticsearch” is currently running. There is a docker ps command to list all running containers in a table-like view. Let’s assume there are two containers currently running; the result would look like this: Ignoring the fact that it’s quite a wide table, you might want to take the chance and use some good old tools. Yes, the classics like grep, awk, sed, and the other usual suspects.
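As a concrete sketch, here is what such a check could look like with awk. The `docker ps` output below is made up for illustration (container ids, images, and the second container are invented), and the NAMES column happens to be the last field of each row:

```shell
# Simulated `docker ps` output — the rows are made up for illustration.
docker_ps_output='CONTAINER ID  IMAGE              COMMAND                  STATUS         NAMES
d1e3f29c2a1b  elasticsearch:5   "/docker-entrypoint"     Up 2 hours     elasticsearch
9f8a7b6c5d4e  nginx:latest      "nginx -g daemon off"    Up 10 minutes  webserver'

# NAMES is the last column, so $NF matches the container name exactly
# and $1 yields its id:
printf '%s\n' "$docker_ps_output" \
  | awk '$NF == "elasticsearch" { print $1 }'
# prints: d1e3f29c2a1b
```

For completeness: docker itself can do the filtering nowadays, e.g. `docker ps --filter "name=elasticsearch" -q`, but the classic tools work on any table-shaped output.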
Sometimes you need to apply a change to all of your CouchDB documents. I actually needed to remove an old and unused property from all of them. CouchDB conceptually doesn’t allow you to update a document without knowing its revision, so you would end up reading all your documents, modifying them, and updating them one by one. Sounds nice, eh?

Update handlers

There’s a better way, though. CouchDB has a concept of document update handlers, which are saved in the database’s design document and are accessible through the HTTP API.
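An update handler lives in a design document like the following sketch. The design document name, handler name, and the `obsoleteProperty` field are hypothetical stand-ins, not taken from the post:

```json
{
  "_id": "_design/maintenance",
  "updates": {
    "remove-prop": "function(doc, req) { if (doc) { delete doc.obsoleteProperty; return [doc, 'updated']; } return [null, 'missing']; }"
  }
}
```

The handler can then be invoked per document through the HTTP API, e.g. `curl -X PUT http://localhost:5984/mydb/_design/maintenance/_update/remove-prop/<docid>`. The nice part is that CouchDB fetches the current document (and revision) server-side, so the client never needs to read the document first.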
Deploying products to production is quite an individual process for every application. Building deployment pipelines has already been described at a high level of abstraction, e.g. by the ThoughtWorks guys Jez Humble, Dan North, and Chris Read in their paper The Deployment Production Line, not to forget the more general theme of Continuous Delivery, described in its own book by Jez Humble and David Farley. You might search for tools implementing the according patterns, and you’ll find some like ThoughtWorks Go (it’s free!
Probably a bit late, but private life is more important :) You can find the slides of my talk for the Gradle Summit 2014 online at http://gesellix.github.io/gradle-summit-2014/. The video is available on YouTube along with the other Gradle Summit videos. In case you’re interested in details regarding our implementation of a deployment pipeline using Gradle and Docker (the 2nd part of the Gradle Summit talk), please follow this blog or my employer’s IT blog.