ABIS Infor - 2017-09

Deliver your application through Docker

Gie Indesteege (ABIS) - June 30th 2017

Abstract

Applications are no longer monolithic chunks of code, but are decomposed into distributed services, accessed via well defined APIs. The development and deployment, but especially the runtime, of each individual service requires specific infrastructure and configuration. Controlling that environment can be simplified by using Docker, a lightweight form of virtualisation.

What is Docker?

"Docker containers wrap up a piece of software in a complete file system that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in." (ref. 1)

Or, in other words: an active (virtualised) runtime environment, including the right configuration and files, is provided via a docker container, which can be obtained through a docker image. Containers can move between physical, virtual or cloud foundations without requiring any modification.

Hence: define the Linux O.S. you need, add the necessary libraries and applications on top of it, wrap the result into an image, and provide that image to the user, who can start containers from it. (Note: non-Linux users work with a virtualised Linux environment to run the containers.)

Installation and preparation of Docker

You start by installing Docker.

  • Use yum (or curl or apt or ...) on Linux based systems, as sketched below,
  • or rely on Docker for Windows (10) and Microsoft Hyper-V,
  • or use the Docker Toolbox for Windows or Mac.
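
On a yum-based Linux distribution, for instance, the installation could look as follows (a sketch; package and repository names may differ per distribution and Docker version):

	# install the docker package from the distribution repositories
	sudo yum install -y docker
	# alternative: the Docker convenience script
	curl -fsSL https://get.docker.com/ | sh
	# verify the installation
	docker --version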

Activate the docker service, and start the docker daemon:

	systemctl enable docker.service
	systemctl start docker

This daemon binds to a Unix socket and listens for docker command requests. The daemon can be configured by specifying flags and options in a drop-in file such as mydocker.conf in the /etc/systemd/system/docker.service.d directory.
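
A minimal drop-in could look like this (the file name mydocker.conf, the daemon path and the chosen flag are purely illustrative; adapt them to your own installation):

	mkdir -p /etc/systemd/system/docker.service.d
	cat > /etc/systemd/system/docker.service.d/mydocker.conf <<-'EOF'
	[Service]
	# the empty ExecStart= clears the default command before overriding it
	ExecStart=
	ExecStart=/usr/bin/dockerd --log-level=warn
	EOF
	# make systemd pick up the new configuration and restart the daemon
	systemctl daemon-reload
	systemctl restart docker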

Note: the systemctl enable command shown above also makes the docker daemon start automatically at boot time, the next time the host is started.

Running an application in Docker

Once the docker daemon is active, applications can be started by running an image:

	docker run image:version-tag command arguments

The run command contacts the daemon, which pulls the image from the docker hub. The version tag can be used to indicate a specific version of the image. If the image is already available on the local machine, it will not be downloaded again.

Next, the command is started in a new container instantiated from the image.
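
The local image cache can be inspected, and populated in advance, with two related commands:

	docker images            # list the images already present on this machine
	docker pull centos:6.8   # download an image without starting a container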

Images can be found in the official docker store (store.docker.com), or can be created and published in another (private or public) hub.

Some examples to illustrate the ideas:

  • Demo application from official docker store/hub
	docker run --rm hello-world

The hello-world (demo) image, latest version, will be downloaded, and the (default) application started. The docker daemon streams the output to the docker client, which sends it to the terminal. As a result, you will see the message "Hello from Docker!".

The --rm option automatically cleans up the container and removes the file system when the container exits.

  • Interactive service on Linux image (available on docker store)
	docker run -it centos:6.8 /bin/bash

The centos image, tagged 6.8, will be used. The -it flags are a way of telling docker that you want to attach to this container and directly interact with it via a terminal. This will be done via the bash shell.

There's no additional Linux kernel, no background processes, no services. Just the one process we need. A container typically runs a single Linux process. While the container doesn't have a kernel of its own, it does have all the files, libraries, and utilities necessary for the process to run.
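
You can see this for yourself from within the interactive container started above:

	echo $$    # prints 1: the bash shell is process 1, the only process in this container
	ps -ef     # if ps is present in the image, it lists only bash and ps itself
	exit       # leaving the shell stops the container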

  • Using a self-defined custom image, for instance a DB environment based on MariaDB

The image can be created as follows:

  • Create a Dockerfile (respect the format specifications)
	# Dockerizing MyDB: Dockerfile for building a MariaDB image
	# Based on centos:latest, installs MariaDB
	FROM       centos:latest
	MAINTAINER mydockername <training@abis.be>
	# Installation (default port 3306; the server is started via mysqld_safe):
	RUN yum -y update && yum -y install mariadb-server && yum clean all
	# initialise the default database files (simplified set-up, no passwords set)
	RUN mysql_install_db --user=mysql
	EXPOSE 3306
	ENTRYPOINT ["/usr/bin/mysqld_safe"]
  • Build a docker image (from the Dockerfile)
	# Format: docker build --tag/-t <user-name>/<repository> .
	docker build --tag mydockername/repo .
  • Push the image to the (docker) hub
	# Format: docker push <user-name>/<repository>
	docker push mydockername/repo
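
Note that pushing to the docker hub requires an authenticated session, so a login will normally be needed first:

	# authenticate against the (default) Docker registry
	docker login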

Using the image is now possible with

	docker run -p 3306:3306 --name mariadb_inst -d mydockername/repo
	mysql --host 192.168.59.103 ...

which will download the image (if not yet done), start the MariaDB server in a background container, and let you connect to it with a regular mysql client.
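
To check that the container is really up and running before connecting, you can for instance look at its status and log output:

	docker ps                   # the mariadb_inst container should be listed as 'Up'
	docker logs mariadb_inst    # shows the start-up messages of the MariaDB server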

Additional considerations

Containers are only active as long as the service or command they run. When that command or service completes, the container stops; unless it was started with --rm, the stopped container remains on the system. So, applications should be defined/created as fine grained services: web applications, micro services, ... Docker activity, including stopped containers, can be shown by the command

	docker ps -a
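
Stopped containers, and the images they were created from, stay on disk until they are removed explicitly, for example:

	docker stop mariadb_inst        # stop a running container
	docker rm mariadb_inst          # remove the stopped container
	docker rmi mydockername/repo    # remove the image itself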

Docker containers make extensive use of Linux namespaces and, on supporting systems, of SELinux. Hence the services and processes are well isolated and secured on the system they are running on. If you want to exchange information with a docker container/application, you may need additional setup or configuration in your base system. For instance

  • if you need packet forwarding in an Ubuntu UFW firewall, change the default policy to ACCEPT (see the sketch below)
  • for a web application, you need to map ports between the host and the container (see also the MariaDB usage example).
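
For the UFW case mentioned above, a possible approach is to change the default forward policy in /etc/default/ufw and reload the firewall (a sketch; file locations may differ per Ubuntu release):

	# allow forwarded packets, as required for traffic to and from containers
	sudo sed -i 's/DEFAULT_FORWARD_POLICY="DROP"/DEFAULT_FORWARD_POLICY="ACCEPT"/' /etc/default/ufw
	# activate the new policy
	sudo ufw reload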

Docker containers have to be controlled, combined, tuned, orchestrated. To that end you need additional tools such as Docker Swarm or Kubernetes (ref. 2).
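
As a very first taste of such orchestration, Docker's built-in swarm mode can already turn a single host into a one-node cluster and scale a service across it (a sketch, using the public nginx image as an example):

	docker swarm init                # make this host a (one-node) swarm manager
	docker service create --name web --replicas 3 -p 80:80 nginx
	docker service ls                # shows the 'web' service running with 3 replicas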

Conclusion

In this article we introduced the basic concepts of Docker containers, a lightweight replacement for machine virtualisation. However, understanding the full potential of this containerised environment is beyond the scope of this article. You are welcome to read the follow-on article in our next ABIS Infor.

References

  1. Docker: http://www.docker.com
  2. Kubernetes: https://kubernetes.io
  3. eWeek: http://www.eweek.com/virtualization/docker-at-4-the-container-revolution-continues