Introduction
Every developer has hit this wall at least once. You build an application on your laptop, it works perfectly, you hand it off or deploy it to a server — and everything breaks. Different OS, missing libraries, wrong runtime version, conflicting dependencies.
Docker solves this problem permanently.
It packages your application along with everything it needs — code, runtime, libraries, environment variables, config — into a single standardised unit called a container. If it runs in Docker on your machine, it runs identically everywhere else.
In this Docker tutorial for beginners, we will cover:
- What containers are and how Docker works
- The difference between images and containers
- Installing Docker and running your first container
- Writing a Dockerfile from scratch
- Building and running your own image
- Working with volumes and environment variables
- Managing multi-container apps with Docker Compose
Containers vs Virtual Machines
Before Docker, the standard way to isolate an application was a virtual machine (VM). A VM runs a full operating system on top of your host OS using a hypervisor. It is isolated and consistent — but heavy. Each VM requires gigabytes of storage and takes minutes to boot because it spins up an entire OS.
Containers take a completely different approach. Instead of virtualising hardware, containers share the host operating system’s kernel but isolate each application in its own filesystem and process space. This makes them:
- Lightweight — images are often just tens to a few hundred MB, vs several GB for a VM
- Fast — start in seconds, not minutes
- Portable — run identically on any machine with Docker installed
- Efficient — you can run dozens of containers on a machine that would only support a few VMs
Core Concepts: Images, Containers, Registry
Three terms you need to understand before writing any Docker commands:
- Image — a read-only blueprint for a container. It contains the OS layer, your app code, dependencies, and run instructions. Think of it as a recipe.
- Container — a running instance of an image. You can create many containers from the same image, each isolated from the others. Think of it as a dish cooked from the recipe.
- Registry — a place to store and share images. Docker Hub is the default public registry, with thousands of official images for databases, languages, web servers, and more.
Step 1 — Install Docker
Download and install Docker Desktop from the official Docker website:
- macOS / Windows — use Docker Desktop
- Linux (Ubuntu) — follow the official Ubuntu install guide
Verify the installation:
docker --version
# Docker version 27.x.x
docker run hello-world
# Pulls the hello-world image, runs it, confirms Docker is working
Step 2 — Run Your First Container
Run an Nginx web server in a container with a single command:
docker run -d -p 8080:80 --name my-nginx nginx
Breaking down the flags:
- -d — detached mode, runs the container in the background
- -p 8080:80 — maps port 8080 on your machine to port 80 inside the container
- --name my-nginx — gives the container a readable name
- nginx — the image to use (pulled from Docker Hub automatically if not cached)
Open http://localhost:8080 in your browser. You will see the Nginx welcome page — a full web server running inside a container, with zero installation on your host machine.
Step 3 — Essential Docker Commands
These are the commands you will use every day:
# List running containers
docker ps
# List ALL containers including stopped ones
docker ps -a
# Stop a container
docker stop my-nginx
# Start a stopped container
docker start my-nginx
# Remove a container (must be stopped first)
docker rm my-nginx
# List all downloaded images
docker images
# Remove an image
docker rmi nginx
# View logs from a container
docker logs my-nginx
# Open an interactive shell inside a running container
docker exec -it my-nginx bash
# Remove all stopped containers, dangling images, and unused networks
docker system prune
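A detail worth knowing about docker stop: it sends SIGTERM to the container's main process and, after a grace period (10 seconds by default), follows up with SIGKILL. A long-running app can use that window to shut down cleanly. A minimal sketch in Node, assuming a server object with a close(callback) method such as the one Express's app.listen() returns later in this tutorial:

```javascript
// Close the server cleanly when the container receives SIGTERM
// (which is what `docker stop` sends first).
// `proc` is injectable so the handler can be exercised without real signals.
function registerGracefulShutdown(server, proc = process) {
  proc.on("SIGTERM", () => {
    console.log("SIGTERM received, closing server...");
    server.close(() => {
      console.log("Server closed, exiting.");
      proc.exit(0);
    });
  });
}

module.exports = { registerGracefulShutdown };
```

Call registerGracefulShutdown(server) right after starting the server, and docker stop will let in-flight requests finish instead of killing them mid-response.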
Step 4 — Write Your First Dockerfile
A Dockerfile is a plain text file with step-by-step instructions for building your image. Let us containerise a simple Node.js app.
Create a new folder my-app with these two files:
package.json:
{
  "name": "my-app",
  "version": "1.0.0",
  "main": "index.js",
  "dependencies": {
    "express": "^4.18.0"
  }
}
index.js:
const express = require("express");
const app = express();

app.get("/", (req, res) => {
  res.send("Hello from Docker! 🐳");
});

app.listen(3000, () => {
  console.log("App running on port 3000");
});
Now create a Dockerfile (no extension) in the same folder:
# Start from the official Node.js Alpine image (lightweight)
FROM node:18-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy package files first (enables Docker layer caching)
COPY package*.json ./
# Install only production dependencies
RUN npm install --omit=dev
# Copy the rest of the application code
COPY . .
# Tell Docker which port the app uses
EXPOSE 3000
# Command to run when the container starts
CMD ["node", "index.js"]
Also create a .dockerignore file to exclude unnecessary files:
node_modules
.env
*.log
Step 5 — Build and Run Your Image
Build the image:
# Build and tag it as "my-app:1.0"
# The dot tells Docker to look for the Dockerfile in the current directory
docker build -t my-app:1.0 .
Run a container from your image:
docker run -d -p 3000:3000 --name my-app-container my-app:1.0
Open http://localhost:3000 — you should see Hello from Docker! 🐳.
Your application is now running inside a container. Anyone with Docker installed can pull this image and run it with the exact same result, regardless of their OS or what is installed locally. That is the entire point of Docker.
Step 6 — Volumes and Persistent Data
Containers are ephemeral — when you delete a container, all data inside it is gone. Volumes fix this by keeping data outside the container: a bind mount maps a folder from your host machine into the container, while a named volume is storage that Docker manages for you.
# Mount the current directory into the container
# Local changes appear inside the container immediately (great for development)
# Caveat: the bind mount hides the node_modules installed in the image,
# so make sure dependencies are also installed in the mounted folder
docker run -d \
  -p 3000:3000 \
  -v $(pwd):/app \
  --name my-app-dev \
  my-app:1.0
For databases, use named volumes — Docker manages the storage location for you:
# PostgreSQL with a named volume for data persistence
docker run -d \
  --name my-postgres \
  -e POSTGRES_PASSWORD=secret \
  -v postgres-data:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:15
# Data in postgres-data survives even if you delete and recreate the container
Step 7 — Environment Variables
Pass configuration into containers with the -e flag or a .env file:
# Pass a single variable
docker run -d -p 3000:3000 -e NODE_ENV=production my-app:1.0
# Pass multiple variables from a .env file
docker run -d -p 3000:3000 --env-file .env my-app:1.0
Access them inside your application the normal way:
const port = process.env.PORT || 3000;
const env = process.env.NODE_ENV || "development";
console.log(`Running in ${env} on port ${port}`);
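Pulling those env reads into one place is a common pattern, since it keeps every tunable setting visible at a glance. A small sketch of such a config module (the DATABASE_URL entry is illustrative — only PORT and NODE_ENV appear in the code above):

```javascript
// Collect all environment-driven settings in one object,
// with defaults suitable for local development.
// `env` is injectable so the loader is easy to test.
function loadConfig(env = process.env) {
  return {
    port: parseInt(env.PORT || "3000", 10),
    nodeEnv: env.NODE_ENV || "development",
    databaseUrl: env.DATABASE_URL || null,
  };
}

module.exports = { loadConfig };
```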
Step 8 — Docker Compose
Real applications rarely run as a single container. A typical web app has a Node.js server, a PostgreSQL database, and maybe a Redis cache. Docker Compose lets you define and run all of them together using a single YAML file.
Create docker-compose.yml in your project root:
version: "3.9"

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://postgres:secret@db:5432/myapp
    depends_on:
      - db

  db:
    image: postgres:15
    environment:
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=myapp
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

volumes:
  postgres-data:
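Note the hostname in DATABASE_URL: it is db, the service name, not localhost. Compose puts all services on a shared network with internal DNS, so containers reach each other by service name. On the app side, that connection string can be unpacked with Node's built-in URL class; a minimal sketch (most Postgres client libraries also accept the raw string directly):

```javascript
// Split a connection string like the DATABASE_URL above into parts.
// Inside the Compose network the hostname resolves to the "db" service.
function parseDatabaseUrl(raw) {
  const u = new URL(raw);
  return {
    user: u.username,
    password: u.password,
    host: u.hostname,
    port: parseInt(u.port, 10),
    database: u.pathname.slice(1), // strip the leading "/"
  };
}

module.exports = { parseDatabaseUrl };
```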
Manage your full stack with simple commands:
# Start all services in the background
docker compose up -d
# Follow logs from all services
docker compose logs -f
# Stop all services
docker compose down
# Stop and delete volumes (resets the database)
docker compose down -v
# Rebuild images after code changes
docker compose up -d --build
Final Thoughts
Docker is one of the most valuable tools a developer can add to their workflow. Once you grasp the core ideas — images, containers, Dockerfiles, Compose — everything else builds naturally on top.
Here is a quick recap from this Docker tutorial for beginners:
- Containers share the host kernel — they are lightweight, fast, and portable, unlike VMs
- Images are blueprints; containers are running instances of those blueprints
- docker run, docker ps, docker stop, and docker rm are your daily commands
- A Dockerfile defines exactly how to build your image
- Volumes keep data alive when containers are deleted
- Docker Compose manages multi-container apps with a single YAML file
From here, the natural next steps are pushing your image to Docker Hub, learning multi-stage builds to keep production images small, and eventually deploying containers to the cloud. See our AWS S3 tutorial or our guide on setting up CI/CD with GitHub Actions to keep building.
The “works on my machine” problem is officially solved. If it runs in Docker, it runs everywhere.