Understanding Docker

A beginner’s guide to understanding Docker concepts and common Docker commands.

Shenali Jayakody
10 min read · Aug 11, 2021

Why Docker?

Developers have to work with different languages, frameworks and architectures when working on a project. Building software is not just about writing code. It is about handling all the frameworks and libraries effectively so that the developer is able to deliver the best out of a project.

Docker simplifies the tedious task of managing and choosing between different frameworks, libraries and all the other dependencies. With Docker, developers can easily pack the code along with its dependencies into a container. This container can be run on any host that has Docker installed, without needing to download or update dependencies manually.

Once you get your head around the basic concepts of Docker, you will be able to get a container up and running by executing a few commands.

Docker containers vs Virtual Machines.

Even though both docker containers and VMs provide isolated environments and can be used to package applications, there are prominent differences between these two.

Image source: https://www.nakivo.com/blog/docker-vs-kubernetes/

VMs run on a hypervisor, and each VM has a separate operating system inside it. Docker containers, on the other hand, do not have separate operating systems inside them. Containers utilize the host operating system and the host OS kernel.

Due to the above reason, containers are far more lightweight than VMs. Hence the process of containerization is much easier and faster than virtualization.

Now let’s try to understand different concepts of docker.

Images

A docker image is a read-only template. It is a blueprint to create a container.

Once you have your application with its dependencies, you can create an image from that application. Anyone can use that image to create a container. Many containers can be created from one single image. You will learn more about this in the latter part of this article.

An image has a layered architecture. Most often a docker image is based on an existing image.

For those who are coming from an OOP background, you can think of images and containers as classes and objects.

NOTE: All the containers created from an image should be removed before removing the image.
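
For example, assuming a placeholder image name of myimage, the cleanup could look like this (a sketch, not the only way to do it):

    # Remove every container created from the image, then remove the image itself
    docker ps -a -q --filter ancestor=myimage | xargs docker rm -f
    docker rmi myimage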

Containers

A container is a runnable instance of an image. A container has its own isolated environment with a separate file system and network. A container behaves much like a separate machine. However, all containers share the same host OS kernel.

We will learn more about containers in a while.

Docker registries

Docker images are stored in places called docker registries.

There are two types of registries: public registries and private registries. Docker Hub is an example of a public Docker registry. It is the default registry that Docker refers to when running commands, unless another registry is specified.

We can pull images from Docker registries, and we can run containers from those images. Whenever an image is needed, Docker checks whether that image exists locally. If the image does not exist locally, Docker pulls it from the registry. Since images have a layered architecture, Docker pulls the image layer by layer. If parts of the image already exist locally, only the missing layers are downloaded.

You can check all the available tags of an image in the Docker registry where the image is stored.

The image below shows the currently available tags of the redis image on Docker Hub.

If the tag is not specified, the image with the “latest” tag will be pulled.
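
As a quick illustration, pulling the redis image with and without a tag might look like this (6.2 is just an example tag):

    docker pull redis        # pulls redis:latest
    docker pull redis:6.2    # pulls the image with the specified tag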

Containers in depth

You can run a container simply by using the docker run command. Moreover, different flags can be used with the run command in order to adjust the way in which the container should run according to your requirements.

Check here to learn more about the docker run command and its flags.

If you do not specify a name for the container, Docker will automatically generate a name.

You can later access a container using its name or ID.

Use the docker ps and docker ps -a commands to see the names and IDs of containers.

Following are some useful commands related to containers.
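
Here is a rough set of such commands; the container name my-redis and the redis image are just examples:

    docker run -d --name my-redis redis   # run a container in the background with a chosen name
    docker ps                             # list running containers
    docker ps -a                          # list all containers, including stopped ones
    docker stop my-redis                  # stop a running container
    docker start my-redis                 # start a stopped container
    docker logs my-redis                  # view the container's logs
    docker exec -it my-redis sh           # open a shell inside the running container
    docker rm my-redis                    # remove a stopped container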

Port Mapping

Since containers have their own environment, we cannot directly access an application running on a port inside a container. In order to do so, we need to map the port on which the application is running inside the container to a free port on the Docker host.

What is docker host?

A Docker host is a physical computer system or a virtual machine running Linux. This can be your laptop, a server or virtual machine in your data center, or a computing resource provided by a cloud provider.

This is known as port mapping, and the -p flag is used for it.
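
For instance, the following maps port 80 inside the container to port 8080 on the Docker host (nginx is just an example image):

    docker run -d -p 8080:80 nginx
    # The application is now reachable at http://<docker-host>:8080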

Volumes and Bind Mounts

Each container has a separate file system inside it. Data stored inside the file system of a container is completely lost when we delete the container. This is not what we expect. In order to achieve data persistence, we need to map a directory inside the container to a directory outside the container.

We can use either volumes or bind mounts to accomplish data persistence, and the -v, --volume and --mount flags are used with the docker run command.

Volumes are stored inside Docker’s storage directory on the host machine, and Docker manages the contents of that directory. Non-Docker processes should not modify that part of the host’s file system.

Unlike volumes, bind mounts can be stored anywhere on the host. Hence, bind mounts are specified with the full path on the host. Moreover, non-Docker processes can modify the contents of a bind mount.

You have to be careful when using bind mounts, as processes running inside containers can change the host filesystem. Unlike bind mounts, volumes are isolated from the core functionality of the host.

There are two types of volumes: named volumes and anonymous volumes.

Comma-separated key-value pairs are used with the --mount flag.

For anonymous volumes, the source field can be omitted. source can also be specified as src, while target can also be specified as destination or dst. Additionally, a readonly field and volume-opt fields, which consist of key-value pairs with an option name and value, can be used. Note that volume-opt can be used more than once.
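
A minimal sketch of mounting volumes with --mount and -v; the volume name app-data and the image name myimage are placeholders:

    docker volume create app-data                                  # create a named volume
    docker run -d --mount source=app-data,target=/data,readonly myimage
    docker run -d --mount target=/data myimage                     # anonymous volume: source omitted
    docker run -d -v app-data:/data:ro myimage                     # roughly equivalent short form with -v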

NOTE: Bind mount commands work the same way, but you need to specify the full path of the directory on the host machine.

Furthermore, if you use --mount to bind mount a file/directory that does not exist, it will throw an error.
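
A bind mount sketch along the same lines; /home/user/app is a placeholder host path and must already exist when --mount is used:

    docker run -d --mount type=bind,source=/home/user/app,target=/app myimage
    docker run -d -v /home/user/app:/app myimage    # -v creates the host directory if it is missing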

Networking

In order to provide networking functionality, Docker uses several network drivers, and each container has an internal IP address of its own.

  • bridge - Default network driver.

If you do not specify a network driver, this is the one that Docker is going to use. Containers on the same bridge network can communicate with each other. You can either use the default bridge network named “bridge” or you can create user-defined bridge networks.

  • host - this driver uses the host’s network

If you use the host network, the container will not get its own IP address. This means that if an application is running on a port inside such a container, you will be able to access that application on the same port of the host (see the sketch after this list). This can be really helpful in certain situations.

  • overlay - connects multiple Docker daemons together and enables Docker swarm services to communicate with each other.

Using an overlay network can come in handy when containers running on different hosts need to communicate with each other.

  • macvlan - allows assigning a MAC address to a container.

Using a macvlan network makes the container look like a physical device on the network. Hence, this will be useful when dealing with legacy applications and applications that monitor network traffic, which need such behavior.

  • none - this disables all networking, so the container will not be attached to any network.
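
The --network flag of docker run selects which network a container joins; a rough illustration (nginx and alpine are just example images):

    docker run -d --network bridge nginx              # default bridge network
    docker run -d --network host nginx                # share the host's network stack; no port mapping needed
    docker run -d --network none alpine sleep 3600    # no networking at all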

Furthermore, you can use third party network plugins with docker.

To learn more about different network drivers in docker, visit here.

When creating a network using the docker network create command, you can optionally use the --driver, --subnet, --ip-range and --gateway flags.
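
For example, a user-defined bridge network with an explicit address range might be created like this; the name my-net and the addresses are illustrative:

    docker network create --driver bridge --subnet 172.28.0.0/16 --ip-range 172.28.5.0/24 --gateway 172.28.5.254 my-net
    docker run -d --name web --network my-net nginx    # attach a container to the new network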

Moreover, you can get information about the network of a specific container by checking the “NetworkSettings” section of the JSON output returned when the docker inspect containerName command is executed.
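
For instance, assuming a container named containerName:

    docker inspect containerName                                                  # full JSON, including "NetworkSettings"
    docker inspect --format '{{json .NetworkSettings.Networks}}' containerName    # only the network details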

Make sure to disconnect all the containers attached to a user defined bridge network before removing the network.
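
Continuing the my-net example from above, the cleanup could look like this:

    docker network disconnect my-net web    # detach the container "web" from the network
    docker network rm my-net                # now the network can be removed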

Check here to learn more about networking in docker.

Containers on the same user-defined bridge network can resolve each other using their names. But with the default bridge network, you need to connect containers to each other using their internal IP addresses or the --link flag. Note that --link is a legacy feature in Docker, and it is better to avoid using it.

Unlike with user-defined bridge networks, you need to stop a container and recreate it with different network options in order to remove that container from the default bridge network.

Docker Compose

Docker Compose is a tool that allows you to run multiple containers at once. All you need to do is create a YAML file with all the services your application needs. Furthermore, ports, volumes and networks can be specified in that compose file. With the docker compose up command, all the services/containers can be started.

There are three main versions of the Docker Compose file format, and there are some distinguishing differences between them. For example, from version 2 onwards the containers/services must be listed under a “services” section. Also, in version 1 we were able to use only the default bridge network, but from version 2 onwards Docker by default creates a dedicated bridge network for the compose project. Moreover, version 2 introduced a useful “depends_on” option, which allows you to specify that one container depends on another. This is really helpful when you need to start a container before a specific container.

The following image shows a sample docker compose file.
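
A minimal sketch of what such a file might look like; the service images, ports and network/volume names here are illustrative assumptions:

    version: "3"
    services:
      frontend:
        image: example/frontend      # placeholder image
        ports:
          - "8080:80"
        depends_on:
          - backend
        networks:
          - app-net
      backend:
        image: example/backend       # placeholder image
        volumes:
          - backend-data:/data
        networks:
          - app-net
    volumes:
      backend-data:
    networks:
      app-net: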

Reference: https://github.com/compose-spec/compose-spec/blob/master/spec.md

In the above compose file, “frontend” and “backend” are two containers.

The naming convention for a Docker Compose file is to name it “docker-compose.yml”, but you can use another name if you prefer.

With the docker compose down command, all the services will be stopped and removed, including the network that was created.
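
Starting and tearing everything down is then just two commands:

    docker compose up -d    # create and start all the services in the background
    docker compose down     # stop and remove the services and the created network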

Dockerizing your application.

You can dockerize your application by simply creating an image. In order to create an image, first you need to create a Dockerfile. Upon successful creation of the image, you can push it to a Docker registry, and anyone can pull it and run containers on a Docker host.

Dockerfile

A Dockerfile is a script of instructions for creating a Docker image. It does not have a specific extension like other normal files. Just name it “Dockerfile” and you have yourself a Dockerfile.

In a Dockerfile, you need to specify all the commands that you would normally execute on the command line when assembling an image.

As mentioned in the “Images” section of this article, an image consists of layers. Each instruction in a Dockerfile adds a layer to the image.

The following image shows a sample Dockerfile for an application written in Go Lang.
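
A Dockerfile along those lines might look like the sketch below; the Go version, file names and binary name are assumptions:

    # Base image with the Go toolchain
    FROM golang:1.16-alpine

    # Directory inside the image where the following commands run
    WORKDIR /app

    # Copy the dependency manifests and download dependencies
    COPY go.mod go.sum ./
    RUN go mod download

    # Copy the source code and build the binary
    COPY *.go ./
    RUN go build -o /my-app

    # Command executed when a container starts from this image
    CMD ["/my-app"]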

Image source: https://docs.docker.com/language/golang/build-images/

Now let’s try to understand some common components in Dockerfiles.

  • FROM - specifies the base image from which the new image will be created
  • WORKDIR - creates a directory inside the image and sets it as the working directory; all subsequent commands are executed from there. (In the above Dockerfile, the name of that directory is “app”)
  • COPY - copies the specified files into the working directory
  • RUN - runs a specific command while the image is being built
  • CMD - specifies the command to run when a container is started from the image

The structure of the Dockerfile will differ according to the language and the application. However, you can use the above information to create your own Dockerfile.

Check this to learn more about creating language specific Dockerfiles.

Building an image

Once you have a Dockerfile, you can create a docker image from it.
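
For example, both of these build an image from the Dockerfile in the current directory (the name myapp and the tag 1.0 are placeholders):

    docker build .                # build an image without a name
    docker build -t myapp:1.0 .   # build and tag the image at the same time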

NOTE: Both these commands use the current directory as the build context (specified by the “.” at the end), thus they should be executed from the directory in which the Dockerfile exists.

If you need to push the image to a docker registry, the image name should be of a specific format. Check the below section for more details.

Pushing an image to a docker registry.

You can push your Docker image to a public Docker registry like Docker Hub, or to a private registry like Amazon ECR, Google Container Registry or Azure Container Registry.

You need to prefix the image name with the name (ID) of your Docker registry account to be able to push it to the registry. You can use this format to name your Docker image when creating it, or you can rename (re-tag) the image later.
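
A rough sketch of that workflow, where myaccount stands for your registry account name and myapp is a placeholder image:

    docker build -t myaccount/myapp:1.0 .        # name the image correctly at build time
    docker tag myapp:1.0 myaccount/myapp:1.0     # or re-tag an existing image
    docker login                                 # authenticate to the registry
    docker push myaccount/myapp:1.0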

In most Docker registries, a separate repository will be created for each image, and all the versions (with different tags) of the image will be stored in the relevant repository.

References

https://docs.docker.com/
