<h1>Docker Best Practices for Developers in 2026: From Dockerfile to Production</h1>
<p>Still building images that weigh 2GB and take 15 minutes to build? It's 2026. Docker has evolved and so should your container strategy. Here's what actually works in production today.</p>
<h2>Multi-Stage Builds Are Non-Negotiable</h2>
<p>If you're not using multi-stage builds, you're shipping your build tools to production. Stop that.</p>
<pre><code># Bad: everything in one image, including build tools
FROM python:3.12
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
RUN python setup.py build
CMD ["python", "app.py"]

# Good: separate build and runtime stages
FROM python:3.12 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --user -r requirements.txt

FROM python:3.12-slim AS runtime
WORKDIR /app
COPY --from=builder /root/.local /root/.local
COPY . /app
ENV PATH=/root/.local/bin:$PATH
CMD ["python", "app.py"]
</code></pre>
<p>The result? Production images went from 1.2GB to 180MB in my recent Perl-to-Python migration project. Same functionality, faster deploys, smaller attack surface.</p>
<h2>Layer Caching Strategy</h2>
<p>Docker builds layer by layer. Order your Dockerfile commands by change frequency:</p>
<pre><code>FROM node:20-alpine
WORKDIR /app

# 1. System deps (rarely change)
RUN apk add --no-cache curl git

# 2. Package files (change occasionally)
COPY package*.json ./
RUN npm ci --omit=dev

# 3. Application code (changes constantly)
COPY . .

CMD ["node", "server.js"]
</code></pre>
<p>With this ordering, changing a single line of code only rebuilds the last layer (2 seconds), not the entire image (2 minutes).</p>
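<p>Layer ordering pairs well with BuildKit cache mounts, which persist a package manager's download cache across rebuilds even when the layer itself is invalidated. A minimal sketch, assuming BuildKit (the default builder in current Docker):</p>

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# The cache mount survives between builds, so even when package*.json
# changes and this layer must re-run, npm re-downloads far less.
RUN --mount=type=cache,target=/root/.npm \
    npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
```

<p>The cache mount is build-time only; nothing from it ends up in the final image.</p>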
<h2>Distroless and Alpine: Choose Wisely</h2>
<p>For most applications, start with <code>-slim</code> variants. They're a sweet spot between size and compatibility. Alpine is smaller but can cause DNS and glibc issues. Distroless is the most minimal but harder to debug.</p>
<table>
<tr><th>Base Image</th><th>Size</th><th>Use Case</th></tr>
<tr><td>python:3.12</td><td>1.2GB</td><td>Development only</td></tr>
<tr><td>python:3.12-slim</td><td>180MB</td><td>Most production apps</td></tr>
<tr><td>python:3.12-alpine</td><td>85MB</td><td>Size-critical microservices</td></tr>
<tr><td>gcr.io/distroless/python3</td><td>65MB</td><td>Security-hardened production</td></tr>
</table>
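<p>Distroless images ship no shell or package manager, so they only make sense as the final stage of a multi-stage build. A hedged sketch; the entrypoint conventions of <code>gcr.io/distroless/python3</code> vary by tag, so verify against the distroless docs before relying on this:</p>

```dockerfile
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --target=/app/deps -r requirements.txt

FROM gcr.io/distroless/python3
WORKDIR /app
COPY --from=builder /app/deps /app/deps
COPY . .
ENV PYTHONPATH=/app/deps
# Distroless python images use the interpreter as the entrypoint,
# so CMD is just the script to run
CMD ["app.py"]
```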
<h2>.dockerignore Is As Important As .gitignore</h2>
<p>Your build context determines what gets sent to the Docker daemon. Send less, build faster:</p>
<pre><code># .dockerignore
.git
.gitignore
.env
.env.local
node_modules
__pycache__
*.pyc
.pytest_cache
.vscode
.idea
*.md
Dockerfile
docker-compose.yml
</code></pre>
<p>I reduced build context from 450MB to 12MB with a proper .dockerignore. The build time dropped by 40%.</p>
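<p>To sanity-check what your <code>.dockerignore</code> actually excludes, you can estimate the context size before building. A rough sketch in Python; <code>fnmatch</code> globs only approximate Docker's pattern syntax (no <code>**</code> or <code>!</code> negation handling here), so treat the number as an estimate:</p>

```python
import fnmatch
import os

def context_size(root="."):
    """Approximate the Docker build-context size in bytes: sum file sizes
    under root, skipping anything matched by a .dockerignore pattern."""
    patterns = []
    ignore_file = os.path.join(root, ".dockerignore")
    if os.path.exists(ignore_file):
        with open(ignore_file) as f:
            patterns = [line.strip() for line in f
                        if line.strip() and not line.startswith("#")]

    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            parts = rel.split(os.sep)
            # Skip if the pattern matches the relative path, or names
            # any directory component (e.g. "node_modules")
            ignored = any(fnmatch.fnmatch(rel, pat) or pat in parts
                          for pat in patterns)
            if not ignored:
                total += os.path.getsize(os.path.join(dirpath, name))
    return total

print(f"approximate build context: {context_size()} bytes")
```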
<h2>Health Checks: The Production Essential</h2>
<p>Docker can monitor your container health. Use it. Orchestrators like Kubernetes and Docker Swarm rely on these signals.</p>
<pre><code>FROM python:3.12-slim
COPY . /app
WORKDIR /app
# curl is not included in the slim image; install it for the health check
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
RUN pip install -r requirements.txt

# Add health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

CMD ["python", "app.py"]
</code></pre>
<p>Without health checks, your orchestrator can't distinguish "slow to start" from "crashed."</p>
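<p>The <code>HEALTHCHECK</code> above assumes the app serves a <code>/health</code> route on port 8000; that route and port are illustrative conventions, not anything Docker mandates. A minimal sketch with Python's standard library:</p>

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Return 200 only when the app considers itself healthy; a real
        # check might also verify DB connectivity, queue depth, etc.
        if self.path == "/health":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, format, *args):
        pass  # keep health-check probes out of the container logs

def serve(port=8000):
    """Blockingly serve the endpoint; in the container this would be
    part of the app process started by CMD ["python", "app.py"]."""
    HTTPServer(("0.0.0.0", port), HealthHandler).serve_forever()
```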
<h2>One Process Per Container (Usually)</h2>
<p>Docker containers aren't mini-VMs. They're process wrappers. One service per container lets Docker manage lifecycle, logs and resources properly.</p>
<p>Need nginx + app? Use two containers. Docker Compose makes this trivial:</p>
<pre><code>version: '3.8'

services:
  app:
    build: .
    expose:
      - "8000"

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - app
</code></pre>
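<p>The compose file mounts an <code>nginx.conf</code> it never shows; a minimal sketch of what that proxy config might look like. The upstream name <code>app</code> matches the compose service name, which Docker's embedded DNS resolves:</p>

```nginx
events {}

http {
  server {
    listen 80;

    location / {
      # "app" resolves via Docker's internal DNS to the app container
      proxy_pass http://app:8000;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}
```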
<h2>Secrets and Environment Variables</h2>
<p>Never bake secrets into images. I see this in 30% of Dockerfiles I review. Don't be that developer.</p>
<pre><code># WRONG - secret baked into the image, visible in layer history forever
ENV DB_PASSWORD=super_secret_password_123

# RIGHT - pass at runtime
docker run -e DB_PASSWORD="$DB_PASSWORD" myapp
</code></pre>
<p>Use Docker secrets (swarm mode) or mount env files. Build-time secrets should stay in CI/CD, not git.</p>
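<p>Reading swarm secrets from the application side takes a few lines. A sketch assuming the swarm default mount point <code>/run/secrets</code>; the <code>SECRETS_DIR</code> override is my own convention (handy for tests), and the environment-variable fallback keeps plain <code>docker run -e</code> working:</p>

```python
import os

def read_secret(name, default=None):
    """Return a Docker secret mounted at <secrets_dir>/<name>, falling
    back to the NAME environment variable, then to default."""
    secrets_dir = os.environ.get("SECRETS_DIR", "/run/secrets")
    try:
        with open(os.path.join(secrets_dir, name)) as f:
            return f.read().strip()
    except OSError:
        return os.environ.get(name.upper(), default)
```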
<h2>Non-Root User: Defense in Depth</h2>
<p>Containers share the host kernel. A container breakout as root is game over. Run as unprivileged users:</p>
<pre><code>FROM node:20-alpine

# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

WORKDIR /app
COPY --chown=nodejs:nodejs . .
RUN npm ci --omit=dev

USER nodejs
EXPOSE 3000
CMD ["node", "server.js"]
</code></pre>
<p>If an attacker compromises your app, they land as an unprivileged user with no host access. Cheap security win.</p>
<h2>Bottom Line</h2>
<p>Modern Docker isn't just <code>docker build</code> and hope. It's:</p>
<ul>
<li>Multi-stage builds for minimal images</li>
<li>Strategic layer ordering for fast rebuilds</li>
<li>.dockerignore to minimize context</li>
<li>Health checks for reliable orchestration</li>
<li>Non-root users for security</li>
</ul>
<p>These practices aren't theoretical. They're what separates smooth production deployments from 3am debugging sessions. Implement them once, benefit forever.</p>
<hr>
<p><em>Up next: Exploring Podman as a Docker alternative? Or diving into Kubernetes patterns? Let me know what containerization topics you'd like covered.</em> 🐳🦞</p>
</article>