
Learn Docker Now: A Personal Guide for Beginners

By Irem Ebenstein

Over the years, many blog posts and YouTube videos about Docker have appeared. I remember seeing a demo at a conference about 7 to 8 years ago that introduced Docker as a new technology. They described how easy it was to create complex systems with it, and I thought: ‘What? Will we be building structures like Lego from now on?’ The idea of assembling environments and applications like building blocks fascinated me. Today, working as a technician on projects in various fields, I find that Docker has indeed become indispensable for me.

Do we still need Docker today?

Of course! Docker is like a reliable friend I can count on in every project phase. Whether I want to test a new database, try out a custom search engine, or quickly set up a service backend – Docker helps me get started easily without much effort. Many tech beginners ask if Docker is still relevant in 2024. From my perspective: Absolutely.

It saves me time and offers the flexibility I need without cluttering my computer. But why should Docker be exciting for you too? Let’s dive in step by step and find out together.

What is Docker, actually?

Docker is a software solution to run applications in a simulated operating system environment. This means that not only the software but the entire operating system and all dependencies are simulated. Instead of just installing the correct software version, you can use Docker to create a complete operating system environment perfectly suited for the software, running independently and isolated from your actual operating system.

For example, during the development of a website, a web server can be started in a virtualized Linux/Ubuntu environment on a Windows computer. The software and the simulated operating system run in a so-called container, and you can create any number of containers and start them only when they are needed.

This might sound very dry, but the best way to understand it is to start using Docker and experience the magic firsthand.

Step 1: Installing Docker – it is really easy!

Installation for Windows

The first step is the installation.

For Windows users, this means downloading Docker including Docker Desktop from the official website and installing it. The installation is quick – a few clicks and you’re ready to go.

Installation for Linux

For Linux users, it’s even easier. They can install Docker with a few commands in the terminal:

sudo apt update
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker  # so that Docker starts automatically when the computer boots
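
On many Linux systems it is also worth adding your user to the docker group, so that Docker commands work without sudo. This is an optional but common extra step:

# optional: allow your user to run Docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER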

The graphical interface Docker Desktop can optionally be installed on Linux as well. You can find the instructions here.

Verifying the Installation

After the installation, you should enter docker --version in the command prompt or PowerShell to ensure everything is working. If that works, we can get started!

docker --version
# Example output: Docker version 20.10.14, build a224086
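
If you want to go one step further, the small hello-world test image is a nice check: it downloads a tiny image, starts a container from it, and prints a confirmation message if everything is set up correctly.

# runs a tiny test container that prints a confirmation message
docker run hello-world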

Docker Desktop for Easy Management

If you have installed Docker Desktop (on Windows it is included automatically), you have a graphical interface available for managing Docker. This makes it even easier to create and manage containers.

A screenshot of Docker Desktop with one active container.

I was very excited when I installed Docker for the first time. I used to spend hours setting up servers and installing software. Today, thanks to Docker, it’s suddenly just a few lines of code.

How to Use Docker Images?

The most important concept when working with Docker is Docker images. A Docker image is like a recipe for a dish. It describes step by step what ingredients (or software dependencies) you need and how to combine them. Once you have a recipe, you can “cook” the same dishes over and over again – or in our case, create containers. Docker images allow you to do exactly that.

A simple example: With the “recipe” for a web server, you can set up a functioning server in a few seconds without having to delve into the details of the installation.

# download the official recipe (image) for the NGINX web server
docker pull nginx

# start the web server
docker run -p 8080:80 nginx

# The server is now running on your computer and is accessible at http://localhost:8080

Now you can open localhost:8080 in your browser, and voilà, there’s your web server! Docker has taken care of everything else for you.

The Hello-World page of nginx is running on localhost:8080 thanks to Docker.

An incredible number of images are available on Docker Hub. Here you can find everything from databases to web servers to machine learning frameworks. It’s like a huge supermarket where you can find everything you need. Want to try out a new framework? No problem, just download the appropriate image and get started.
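
If you are curious which images are already stored on your computer, one command is enough:

# list all images that have been downloaded to your computer
docker images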

Containers: A Tupperware for Your Application

While the image is like the recipe, the container is the finished dish in your Tupperware – ready to take and consume. The container is the live instance created from the image. The best part? You can stop, delete, and recreate a container at any time, and it will always look exactly as defined in the recipe.

A common example: databases. If I quickly need a MySQL database, I can simply run it in a Docker container without tedious installations.

docker run -d -p 3307:3306 -e MYSQL_ROOT_PASSWORD=<PASSWORD> --name test-database mysql

The database is now running and reachable on localhost:3307. It used to take me hours to perform installations and configurations – today it takes just a few minutes.

The container can be stopped just as easily. Since we gave the container a name with --name, we can simply reference this name.

docker stop test-database

The best part is that you can create a separate container for each project or client. Depending on the project you are working on, you start the appropriate container (e.g., a database container) and stop it when you are finished. This way, only what you currently need is running.
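
For the database container from above, the typical lifecycle looks roughly like this:

# stop the running container (it keeps its data and can be started again)
docker stop test-database

# start the same container again later
docker start test-database

# when you are completely done with it: stop and remove it
docker stop test-database
docker rm test-database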

Arguments and Port Forwarding – Easy Customization

When you start a container, the defined recipe can be further customized and configured to your needs with arguments. This can be done either through arguments like “-p 3306:3306” for port forwarding or through environment variables for configuration. Depending on the image, there are different arguments you can use.

Here are some important arguments you should know:

-d: The container is started “detached,” meaning in the background. You can close the terminal, and it will continue running.
-p: Defines port forwarding from your laptop to ports in the container. For example, -p 8080:80 forwards your port 8080 to the web server port 80 in the container.
--name: Gives the container a name. If no name is specified, Docker invents a random name.
-v: Binds directories from your laptop into the container. This way, you can copy configuration files into the container. The respective image you use usually provides good examples for this (see the sketch below).
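
As a small sketch of how -v is used: here a local folder with HTML files is mounted into the NGINX container from earlier, so the web server serves your own pages. The folder name website is just an example; /usr/share/nginx/html is the directory where the NGINX image expects its pages.

# serve the files from ./website with NGINX on localhost:8080
docker run -d -p 8080:80 --name static-site -v "$(pwd)/website":/usr/share/nginx/html nginx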

Many images can also be configured with environment variables. This is especially useful if you, for example, start a database and want to configure the password or username. Which variables you can set can be found in the documentation of the respective image. In the command above, for example, we used the environment variable MYSQL_ROOT_PASSWORD to configure the password of the database.

docker run -d -p 3307:3306 -e MYSQL_ROOT_PASSWORD=<PASSWORD> --name test-database mysql

The image for WordPress requires, for example, the variables WORDPRESS_DB_USER, WORDPRESS_DB_PASSWORD, etc., to automatically set up the connection to the database at startup. This allows you to set up a complete WordPress installation and connect it to an existing database with a single command.
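
As a rough sketch, such a WordPress container could be started like this. It assumes that a MySQL container named db is already running in a shared Docker network called mein-netzwerk (how to create such a network is shown a bit further down); the port 8081 and the container name test-wordpress are just examples.

docker run -d -p 8081:80 \
  --network=mein-netzwerk \
  -e WORDPRESS_DB_HOST=db:3306 \
  -e WORDPRESS_DB_USER=<USER> \
  -e WORDPRESS_DB_PASSWORD=<PASSWORD> \
  -e WORDPRESS_DB_NAME=wordpress \
  --name test-wordpress wordpress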

Port Forwarding – Looking inside the Container

Port forwarding might seem complicated at first. It’s like opening a door from your host computer to your container. It makes your container accessible to applications on your computer.

Suppose I want to create a test database:

docker run -d -p 3307:3306 -e MYSQL_ROOT_PASSWORD=<PASSWORD> --name test-database mysql

With this command, I can access and use the database via localhost:3307. Port 3307 on my computer is forwarded to port 3306 in the container. This way, I can easily reach the database from a database client or from my own application.
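
If you have a MySQL client installed on your computer, you can quickly test the connection from the terminal. This is optional; any database tool that can connect to localhost:3307 works just as well.

# connect to the container's MySQL server through the forwarded port
mysql -h 127.0.0.1 -P 3307 -u root -p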

Connecting to a Container – The Direct Way In

Sometimes I want to “look into the heart of the container” and interact with it directly, for example, to check logs or change configurations. Docker allows you to access the container directly:

docker exec -it <container-id> /bin/bash

Now you are inside the container and can work directly as if you were on a remote server. This feature has saved my day countless times when I needed to quickly troubleshoot an issue.

Here is an explanation of the arguments:

exec: Execute a command in the container. In this case, we want to run /bin/bash in the container to connect to it.
-it: Starts the command in interactive mode. This means /bin/bash does not just run once and exit; instead, you can interact with the container directly through /bin/bash.
<container-id>: The ID of the container you want to interact with.
/bin/bash: The program to be started in the container. In this case, a classic Bash shell.

How do you find the container ID? Run “docker ps” in the terminal to see all active containers. With “docker ps -a” you can also see all stopped containers.

> docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS         PORTS                               NAMES
9778f606e398   mysql     "docker-entrypoint.s…"   6 minutes ago   Up 4 seconds   33060/tcp, 0.0.0.0:3307->3306/tcp   test-database
58014af4d7d1   nginx     "/docker-entrypoint.…"   9 minutes ago   Up 3 minutes   0.0.0.0:8080->80/tcp                nginx-test
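
By the way, instead of the ID you can also use the container name. And if you only want to check what is happening inside, docker logs is often all you need:

# open a shell in the database container, this time using its name
docker exec -it test-database /bin/bash

# show the log output of the container without connecting to it
docker logs test-database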

Insider Tip: Linking Containers Together

Here is a somewhat lesser-known but powerful feature: linking containers. Suppose you have an application that needs to communicate with a database. Docker networks make this easily possible.

  1. Create a network:
docker network create mein-netzwerk
  2. Start both containers within this network:
docker run -d -e MYSQL_ROOT_PASSWORD=<PASSWORD> --network=mein-netzwerk --name db mysql
docker run -d --network=mein-netzwerk --name app myapp

By specifying --network, your app container can now access the database by simply using db as the hostname – it couldn’t be easier!
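
You can test the connection with a temporary container in the same network. The mysql image also contains the MySQL command-line client, so a one-off container can connect to the hostname db (this assumes the db container was started with a root password, as in the examples above):

# start a throwaway container in the same network and connect to the db container
docker run -it --rm --network=mein-netzwerk mysql mysql -h db -u root -p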

Docker-Compose - Lego for Developers

Creating containers with a single command is already a great relief. However, connecting multiple containers and remembering all commands and environment variables can quickly become confusing.

Docker has a great solution for this. With Docker-Compose, you can create a recipe for multiple containers at once: a recipe made up of other recipes, so to speak.

This recipe is defined in a YAML file and contains all the containers and settings needed. This way, you can relatively easily create a complete system with a web server, backend software, and database server all at once and run it with a single command.

For example, WordPress requires a web server and MySQL as a database – so at least two containers. With the following simplified YAML file, Docker-Compose can start, configure, and connect both containers:

version: '2'

services:

  wordpress:
    depends_on:
      - db
    image: wordpress
    environment:
      WORDPRESS_DB_HOST: db:3306  # The name of the database container is used as hostname
      WORDPRESS_DB_USER: <USER>
      WORDPRESS_DB_PASSWORD: <PASSWORD>
      WORDPRESS_DB_NAME: wordpress
    ports:
      - 8082:80  # Forwarding the web server port 80 to localhost:8082 on your laptop
    networks:
      - wpNetwork  # Connection to the same network as the database

  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: <ROOT-PASSWORD>
      MYSQL_DATABASE: wordpress
      MYSQL_USER: <USER>
      MYSQL_PASSWORD: <PASSWORD>
    networks:
      - wpNetwork  # Connection to the same network as WordPress

networks:
  wpNetwork:

This YAML file is very simple, but there are more options to configure WordPress. Here you can find a detailed example.
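
One more thing worth knowing: in the simplified file above, the database files are not stored in a named volume, so after removing and recreating the containers you usually start with an empty database. A named volume makes the data survive explicitly. As a sketch, you would extend the db service and the top level of the file like this (the volume name dbData is just an example):

  db:
    # ... same settings as above ...
    volumes:
      - dbData:/var/lib/mysql  # MySQL stores its data files in this directory

volumes:
  dbData: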

If you are in the same directory as the YAML file in the terminal, you can start the system with a single command:

docker-compose up

Now WordPress should be accessible at localhost:8082 and connected to the database. If you open localhost:8082 in your browser, the WordPress installation should start.

A screenshot of the started WordPress installation after launching with docker-compose.

The entire system can also be stopped just as easily:

docker-compose down
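
Two more Docker-Compose commands that are useful in everyday work: starting everything in the background and following the logs of a single service.

# start all containers in the background (detached)
docker-compose up -d

# follow the log output of a single service, e.g. wordpress
docker-compose logs -f wordpress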

Conclusion: Docker – More Than Just a Tool

Today, Docker is like the reliable toolbox I carry with me for every project. The ease with which I can set up new environments and experiment makes Docker indispensable. I wish this experience for everyone – whether developer, data scientist, or anyone technically inclined.

That’s exactly why I wrote this article: Docker seems intimidating at first, but once you take the first steps, it’s like magic.

Would you like to use Docker in your company or have questions about integrating it into your existing infrastructure? We are happy to help. Contact us for a free initial consultation.

