Docker for Beginners: Build, Ship, and Run Apps with Ease

When an app works on your machine but fails on a teammate's laptop, Docker turns chaos into boring reliability. This practical guide shows how to containerize a small Python API, create small images, use Docker Compose, push to Docker Hub, and avoid common pitfalls.

☕ Quick mental model

Think of Docker as a time capsule for your app:

  • Image = recipe (OS libs + dependencies + code)
  • Container = a running instance of that image
  • Dockerfile = instructions to bake the image
  • Registry = library of images (Docker Hub, GHCR, ECR)

1) Start simple: a tiny Python API

Create a folder called hello-docker/ with three files: app.py, requirements.txt, and Dockerfile.

app.py

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.get("/ping")
def ping():
    return jsonify(ok=True, message="pong")

@app.post("/echo")
def echo():
    data = request.get_json(force=True, silent=True) or {}
    return jsonify(received=data)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)

requirements.txt

flask==3.0.0

Note: we bind to 0.0.0.0 so the app is reachable from outside the container.

2) Your first Dockerfile

Keep it readable and safe: run as a non-root user, and copy requirements.txt before the source code so the dependency layer stays cached when only your code changes.

# 1) Choose a lightweight base image
FROM python:3.11-slim AS base

# 2) Create a non-root user (safer)
RUN useradd -m appuser

# 3) Set working directory
WORKDIR /app

# 4) Install Python deps
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# 5) Copy source
COPY app.py .

# 6) Expose the app port
EXPOSE 8000

# 7) Run as non-root
USER appuser
CMD ["python", "app.py"]

Build & run

# Build an image
docker build -t hello-docker:1.0 .

# Start a container on port 8000
docker run --rm -p 8000:8000 hello-docker:1.0

Test:

curl http://localhost:8000/ping
# {"ok":true,"message":"pong"}

curl -X POST http://localhost:8000/echo -H "content-type: application/json" -d '{"name":"Docker"}'
# {"received":{"name":"Docker"}}

3) Keep images small with multi-stage builds

If you compile dependencies or build frontend assets, multi-stage builds let you separate build tooling from runtime.

# Stage 1: build layer (has compilers)
FROM python:3.11-slim AS build
WORKDIR /build
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
    && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

# Stage 2: runtime (clean & small)
FROM python:3.11-slim AS runtime
WORKDIR /app
COPY --from=build /wheels /wheels
RUN pip install --no-cache-dir --no-index --find-links=/wheels /wheels/*
COPY app.py .
EXPOSE 8000
CMD ["python", "app.py"]

Why it matters: smaller images = faster deploys, lower bandwidth, fewer vulnerabilities.

4) Develop faster with Docker Compose

Compose spins up multiple services (API + DB) with a single command.

# The top-level "version" key is obsolete in Compose v2 and can be omitted
services:
  api:
    build: .
    ports:
      - "8000:8000"
    environment:
      - FLASK_DEBUG=1  # FLASK_ENV was removed in Flask 2.3+; use FLASK_DEBUG instead
    volumes:
      - .:/app
    command: python app.py

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

Run:

docker compose up --build

Now the API and Redis share a Compose network: Redis is reachable from the api container at the hostname redis, and both services are also published on localhost.
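A minimal sketch of building the connection URL from environment variables, defaulting to the Compose service name (the helper name is illustrative; actually talking to Redis would also require adding the redis package to requirements.txt):

```python
import os

def redis_url() -> str:
    """Build a Redis URL from environment variables.

    Inside the Compose network, REDIS_HOST defaults to the service
    name "redis"; outside Docker you can override it to localhost.
    """
    host = os.environ.get("REDIS_HOST", "redis")
    port = os.environ.get("REDIS_PORT", "6379")
    return f"redis://{host}:{port}/0"

# In app.py you might then do (requires the redis package):
# r = redis.Redis.from_url(redis_url())
# r.incr("hits")
```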

5) Push to Docker Hub (share your work)

  1. Log in: docker login
  2. Tag & push:
    docker tag hello-docker:1.0 YOURUSER/hello-docker:1.0
    docker push YOURUSER/hello-docker:1.0

Anyone can run your image:

docker run --rm -p 8000:8000 YOURUSER/hello-docker:1.0

Use GHCR or AWS ECR as alternatives for private or organization-centric registries.

6) Real-world add-ons (quick wins)

Healthchecks

# python:3.11-slim does not ship curl, so use a stdlib one-liner instead
HEALTHCHECK --interval=30s --timeout=3s \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/ping')" || exit 1

.dockerignore

__pycache__
*.pyc
.env
.git
node_modules
.wheels

Environment config (Compose)

environment:
  - APP_ENV=prod
  - APP_SECRET=${APP_SECRET}
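On the Python side, the app reads these values with os.environ. A small sketch (the function name is illustrative) that fails fast when a required secret is missing, rather than booting a misconfigured production app:

```python
import os

def load_config() -> dict:
    """Read runtime configuration from environment variables."""
    env = os.environ.get("APP_ENV", "dev")
    secret = os.environ.get("APP_SECRET")
    if env == "prod" and not secret:
        # Crash at startup instead of failing mysteriously later
        raise RuntimeError("APP_SECRET must be set when APP_ENV=prod")
    return {"env": env, "secret": secret}
```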

7) Common pitfalls (and how to dodge them)

  • “Works on my machine” issues: pin dependency versions and use immutable tags (avoid latest).
  • Huge images: slim bases, multi-stage builds, and .dockerignore.
  • Permissions: use non-root users and be careful with mounted volumes.
  • Dangling images and stopped containers: prune periodically with docker image prune, or docker system prune -f (note the latter also removes stopped containers and unused networks).
  • Secrets: don’t bake them into images—use env vars, secret managers, or runtime mounts.
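On the last point, Docker can mount secrets as files under /run/secrets/ at runtime instead of passing them as environment variables. A hedged sketch of reading one, falling back to an env var for local development (the helper and secret names are illustrative):

```python
import os
from pathlib import Path
from typing import Optional

def read_secret(name: str, secrets_dir: str = "/run/secrets") -> Optional[str]:
    """Return a secret mounted as a file, or fall back to an env var."""
    path = Path(secrets_dir) / name
    if path.is_file():
        # Docker mounts secrets as plain files; strip the trailing newline
        return path.read_text().strip()
    return os.environ.get(name.upper())
```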

8) Where to go next

  • Add a production WSGI server (e.g., gunicorn).
  • Introduce Nginx as a reverse proxy in Compose.
  • Build a multi-service stack (web + api + db + queue).
  • Deploy to AWS ECS/Fargate, Google Cloud Run, or Azure Container Apps.
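As a starting point for the first bullet, the runtime stage's CMD can swap Flask's development server for gunicorn (this assumes gunicorn is added to requirements.txt; the worker count is illustrative):

```dockerfile
# Replace the dev-server CMD with a production WSGI server
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "2", "app:app"]
```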

Final thoughts

Docker turns "works on my machine" into "works anywhere." Start small: containerize a tiny API, learn Dockerfile basics, then evolve to multi-stage builds, Compose, and cloud deploys. If this helped, drop a comment and follow for the rest of the 30-day series.
