
Your Docker Builds Are A Lie

Stop blaming Docker for your 15-minute build times. These three overlooked fixes can slash your wait from minutes to seconds, reclaiming your most valuable resource: time.

Stork.AI

TL;DR / Key Takeaways

Slow Docker builds are rarely Docker's fault. Three fixes (a thorough `.dockerignore`, dependency-first `COPY` ordering, and BuildKit cache mounts), plus multi-stage builds for lean images, can take a routine 10-to-15-minute build down to under three minutes.

That 15-Minute Build is a Red Flag

Waiting 10 to 15 minutes for every Docker build is a universal pain point for developers across the industry, transforming what should be rapid iteration into a tedious, time-consuming grind. This widespread frustration underpins the premise of Better Stack’s widely viewed video, "Your Docker Builds Are Slow… And It’s Your Fault," which directly confronts this often-ignored reality for countless engineers.

Instead of blaming Docker’s inherent architecture or demanding more powerful hardware, the truth is that these prolonged build times have a different culprit: readily identifiable, easily rectifiable anti-patterns embedded in your Dockerfiles. Docker itself is a remarkably efficient and powerful containerization tool; its perceived sluggishness typically arises from foundational missteps in how developers construct their build instructions, not from intrinsic design flaws. Your builds are slow not because of Docker, but because of practices most developers overlook.

You don't have to endure these protracted, productivity-sapping build cycles any longer. This article systematically deconstructs the three core techniques that consistently transform a Docker build routinely taking 10 to 15 minutes into one completing in under three minutes. We will reveal the strategies that drastically cut down build times, making your development workflow significantly more responsive and enjoyable.

These are not complex hacks, nor do they demand adopting entirely new tools or overhauling your existing codebase. Instead, we focus on foundational practices most developers simply overlook, or perhaps never learned. Mastering these simple yet powerful methods means significantly faster iteration, dramatically smaller final images, and a far more efficient development pipeline, fundamentally altering your relationship with Docker builds.

It’s Not Docker, It’s Your Bloated Context


Waiting 10 to 15 minutes for Docker builds often stems from a fundamental misunderstanding of the build context. When you execute `docker build`, Docker doesn’t just look at your Dockerfile; it sends the entire specified local directory and all its contents to the Docker daemon. This critical initial transfer includes every file, regardless of whether your Dockerfile explicitly copies it into the final image.

This often-overlooked detail is where inefficiency begins, making your builds a lie from the start. The `.dockerignore` file is your first and most critical optimization tool, instructing the Docker daemon which files and directories to exclude from that initial context transfer. It’s a simple yet powerful mechanism that prevents unnecessary data from ever leaving your local machine and reaching the build engine.

Ignoring extraneous files dramatically reduces transfer size and build time. Almost universally, you should exclude:

- `.git` directories, containing version control metadata
- `node_modules` or `venv` folders, holding local dependencies
- Build artifacts like `dist/`, `build/`, or `target/`
- `.env` files, which often contain sensitive environment variables
- `logs/` directories, for runtime logs
- IDE configuration files, such as `.vscode/` or `.idea/`
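The list above translates directly into a `.dockerignore` file at the root of your build context. A minimal sketch (patterns use the same gitignore-style syntax; adjust entries to your project layout) might look like this:

```
# Version control metadata
.git

# Local dependencies (reinstalled inside the image)
node_modules
venv

# Build artifacts
dist
build
target

# Secrets and runtime logs
.env
logs

# IDE configuration
.vscode
.idea
```

Place this file next to your Dockerfile; Docker consults it automatically before sending the context.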

Better Stack’s video, "Your Docker Builds Are Slow… And It’s Your Fault," vividly demonstrates the impact of this strategy. They reduced a build context from a massive 500 megabytes down to a mere 20 megabytes simply by implementing a robust `.dockerignore` file. This immediate 25x reduction significantly speeds up the initial "Sending build context to Docker daemon" step, a frequent bottleneck for developers.

And it's not just about transfer speed. A smaller context also profoundly enhances Docker’s internal layer caching, minimizing the chances of unnecessary cache invalidations. This means subsequent builds, even with minor code changes, leverage existing layers more effectively, accelerating your development cycle dramatically. You gain substantial performance and reliability simply by precisely defining what *not* to send.

The Art of the Dockerfile `COPY`

Docker’s efficiency hinges on layer caching. Every instruction in a Dockerfile creates a new layer in the image. If an instruction and its inputs remain unchanged from a previous build, Docker intelligently reuses that cached layer, skipping redundant work and dramatically speeding up subsequent builds.

Many developers, however, inadvertently sabotage this mechanism with a single, seemingly innocuous line: `COPY . .` placed early in their Dockerfile. This command copies your entire current directory – the full build context – into the image at once. This includes all source code, configuration files, and potentially even irrelevant development artifacts.

The issue arises because any alteration, no matter how small, to *any* file within that copied context invalidates this layer. Consequently, Docker must rebuild this layer and every subsequent layer. This often means reinstalling all project dependencies from scratch, even if your `package.json` or `requirements.txt` hasn't changed.

Consider a more strategic approach. Instead of copying everything upfront, first copy only your dependency manifest – for a Node.js project, that's `package.json` and `package-lock.json`. This minimal copy creates a stable layer that changes infrequently.

Immediately after, execute your dependency installation command, such as `RUN npm install`. This step creates another distinct layer. Because only your manifest was copied, this layer's input only changes when your dependencies themselves are updated.

Only then, in a separate instruction, copy the rest of your application code with `COPY . .`. Now, if you modify a single line of your application logic, only the final layers invalidate. Docker reuses the stable dependency installation layer, bypassing a lengthy `npm install`.
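Put together, the ordering looks like this (a sketch for a hypothetical Node.js project; adapt the manifest files and commands to your stack):

```dockerfile
FROM node:18
WORKDIR /app

# 1. Copy only the dependency manifest — this layer rarely changes
COPY package.json package-lock.json ./

# 2. Install dependencies — cached as long as the manifest is unchanged
RUN npm install

# 3. Copy the rest of the source — only this layer and later ones
#    are invalidated by ordinary code changes
COPY . .

CMD ["node", "index.js"]
```

The same pattern applies to any ecosystem: copy `requirements.txt` before `pip install`, `go.mod` and `go.sum` before `go mod download`, and so on.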

This optimization is not trivial; it saves minutes on every build. Instead of waiting for dependencies to redownload and reinstall, Docker leverages its cache. This transforms a potentially 10-minute install step into an almost instantaneous cache hit, drastically accelerating your development workflow.

Why Your `npm install` Takes Forever

Waiting 10 to 15 minutes for a Docker build often points to one primary culprit: dependency installation. Your `npm install` or `pip install` commands frequently consume the majority of this time, turning an otherwise quick code update into a protracted build process. Better Stack’s video highlights this pain, noting a typical install step can take three minutes.

These package managers are inherently slow, burdened by multiple factors. They contend with network latency as they fetch packages from remote registries, resolve intricate dependency trees requiring substantial CPU cycles, and perform extensive disk I/O to write thousands of files to the filesystem. This collective overhead makes dependency installation a resource-intensive operation.

Even when you meticulously order your `COPY` instructions—placing `package.json` or `requirements.txt` before application code—Docker's layer caching often falls short for dependencies. Most CI/CD environments operate with ephemeral runners, providing a clean slate for each build. This means previous dependency layers are rarely reused, forcing a full re-download and re-installation with every single build.

You confront this recurring problem directly with Docker's modern build engine, BuildKit. This advanced builder introduces a transformative feature: dedicated cache mounts. These mounts enable persistent, isolated caching for dependency installations, preventing redundant downloads and installations across builds and drastically reducing that three-minute install to mere seconds.

The BuildKit Cache Mount Miracle


Your `npm install` step often feels like an eternity, a major bottleneck in Docker builds. While layer caching helps, it struggles with the dynamic, external nature of package manager dependencies. BuildKit, Docker's modern build engine and the default for contemporary Docker installations, offers a powerful solution that radically transforms this experience.

BuildKit introduces a game-changing feature: `RUN --mount=type=cache`. This directive provides a persistent, dedicated cache directory within the build environment. Unlike standard Dockerfile instructions, files written to a cache mount do not become part of the final image layer. Instead, they persist across subsequent builds, acting as a high-speed repository for frequently downloaded assets.

Imagine skipping the arduous process of redownloading gigabytes of Node.js modules, Python packages, or Rust crates with every rebuild. The cache mount makes this a reality. It targets specific directories where package managers store their downloaded artifacts, ensuring they are available instantly for subsequent installations.

Consider this optimized Dockerfile snippet:

```dockerfile
RUN --mount=type=cache,target=/root/.npm npm install
```

This instruction tells BuildKit to mount a cache volume at `/root/.npm`, the default cache location for `npm`. When `npm install` runs, it first checks this mounted directory. If dependencies are already present from a previous build, `npm` reuses them, bypassing network requests and lengthy download times. This dramatically accelerates the dependency resolution phase.

The distinction from Docker's traditional layer caching is crucial. Layer caching reuses an entire instruction's output if its inputs (like the Dockerfile instruction itself or copied files) remain unchanged. A cache mount, conversely, provides a persistent, writeable volume specifically for build-time artifacts that should not be part of the final image. This makes it ideal for package managers, which download and store numerous files that are not directly application code.

Better Stack’s recent video highlights the profound impact of this technique, noting a dependency install step that plummeted from three minutes to approximately eight seconds. This massive improvement stems directly from leveraging BuildKit’s intelligent caching. It allows developers to maintain rapid iteration cycles, freeing them from the frustration of waiting for slow dependency installations. BuildKit’s cache mounts represent a fundamental shift, moving beyond the limitations of simple layer reuse to provide truly intelligent, persistent caching for complex build environments.

Slashing Installs from 3 Minutes to 8 Seconds

A single change transforms dependency installation from a three-minute ordeal to an eight-second sprint. This dramatic reduction, highlighted by Better Stack, comes courtesy of BuildKit cache mounts. For projects heavy with external libraries, this optimization is often the most significant accelerator you can implement.

Previously, a standard `RUN npm install` or `RUN pip install` command within your Dockerfile meant every build, even minor code changes, triggered a full re-download and installation of all project dependencies. Docker’s layer caching mechanism, while powerful, couldn't persist package manager caches between builds, leading to redundant network requests and disk I/O.

BuildKit solves this by introducing the `--mount=type=cache` flag for `RUN` instructions. This creates a dedicated, persistent cache directory on the build host, accessible only during the build step. Package managers then use this location, storing downloaded packages and build artifacts for future reuse across builds.

Consider a Node.js application: instead of `RUN npm install`, you use `RUN --mount=type=cache,target=/root/.npm npm install`, since `/root/.npm` is npm's default cache directory. For Python, `RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt` achieves a similar effect, as pip caches downloads under `/root/.cache/pip` by default. The `target` specifies the cache location *inside* the container during the build, and it must match the directory the package manager actually uses for its cache.

This strategy extends broadly across various programming ecosystems. It applies to:

- `pip`'s cache for Python
- Maven's `.m2` directory for Java
- `go mod download`'s module cache for Go
- RubyGems for Ruby projects

The core principle remains consistent: direct the package manager to store its downloaded assets in a BuildKit-managed cache volume.
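As a sketch, the pattern for the ecosystems above might look like the following. The cache paths assume the default locations used by each tool's official base image; verify them for your own images:

```dockerfile
# Python: pip's default cache lives under /root/.cache/pip
RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt

# Java: Maven stores downloaded artifacts in /root/.m2
RUN --mount=type=cache,target=/root/.m2 mvn package

# Go: the module cache defaults to /go/pkg/mod in the official golang image
RUN --mount=type=cache,target=/go/pkg/mod go mod download

# Ruby: the official ruby image keeps installed gems in /usr/local/bundle
RUN --mount=type=cache,target=/usr/local/bundle bundle install
```

In every case, the mount target must coincide with the directory the tool actually treats as its cache; otherwise the mount persists nothing useful.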

The impact is profound: dependency downloads and installations, once a primary bottleneck, become near-instantaneous after the initial run. This "one that changed everything" optimization, as Better Stack aptly puts it, fundamentally reshapes the economics of iterative development, freeing developers from frustratingly long waits.

Your Final Image is Too Damn Big

Beyond build speed, a bloated final image presents another critical performance bottleneck. Docker images often swell to hundreds of megabytes, sometimes even gigabytes, carrying unnecessary baggage into production. This directly translates to slower deployments and higher operational costs.

Large images significantly increase the time required to push to container registries and pull down to deployment targets. Imagine pulling a 1GB image versus a 50MB one across a fleet of servers – the difference in deployment speed is substantial. Furthermore, increased storage consumption across registries and host machines inflates infrastructure expenses.

Critically, a larger image also expands its security attack surface. Every additional file, library, or development tool included potentially introduces new vulnerabilities. Compilers, SDKs, development dependencies like testing frameworks, and temporary build artifacts frequently remain in production images, despite having no role in the application's runtime functionality.

The solution lies in a fundamental principle: separating the build environment from the runtime environment. You need a rich environment to compile your code, resolve dependencies, and generate executables. But for deployment, the goal is a minimal image containing only the application and its absolute runtime necessities. This strategic distinction forms the basis for creating lean, secure, and efficient container images.

The Multi-Stage Build Diet


After tackling slow build times, attention must turn to another critical optimization: the final image size. Many developers create massive Docker images, unwittingly including gigabytes of build-time dependencies, temporary files, and development tools that have no place in a production environment. This bloat leads to slower deployments, increased storage costs, and a larger attack surface.

Enter the multi-stage build diet, a powerful pattern that drastically slims down your final Docker images. This approach separates the compilation and dependency installation process from the final runtime environment. You leverage Docker’s ability to use artifacts from one build stage in another, discarding everything else.

The process begins with a "builder" stage. Here, a full-featured base image like `FROM node:18 as builder` provides all necessary tools for compilation and dependency installation. Within this stage, you copy `package*.json`, run `npm install` (including `devDependencies`), copy your source code, and execute your build command, such as `npm run build`. This stage contains all the temporary files and development tools required to produce your application's artifacts.

Next comes the final, lean stage. This stage typically uses a minimal base image, like `FROM node:18-alpine`, known for its significantly smaller footprint compared to its full-fat counterparts. This base image only includes what is absolutely essential for your application to run in production, stripping away unnecessary system libraries and utilities.

The magic happens with the `COPY --from=builder /app/dist /app` command. This crucial instruction selectively transfers *only* the compiled application artifacts—like your `/app/dist` folder—from the "builder" stage into the minimal final image. Everything else from the builder stage, including `node_modules`, compilers, and build caches, gets left behind, never making it into the production image.

Consider this example Dockerfile:

```dockerfile
# Stage 1: Build the application
FROM node:18 as builder
WORKDIR /app
# Copy package files to leverage layer caching
COPY package*.json ./
# Install all dependencies
RUN npm install
# Copy source code and build
COPY . .
RUN npm run build

# Stage 2: Create the final, lean image
FROM node:18-alpine
WORKDIR /app
# Copy ONLY the built output from the 'builder' stage
COPY --from=builder /app/dist ./dist
# Define the command to run your application
CMD ["node", "./dist/index.js"]
```

This multi-stage approach ensures your production image contains just your application and its core runtime dependencies. Your Docker images shrink from hundreds of megabytes, or even gigabytes, to tens of megabytes, mirroring the efficiency gains seen with BuildKit cache mounts for build speed. This leads to significantly faster deployments, lower resource consumption, and a dramatically reduced security footprint.

Beyond the Basics: Modern Docker Hygiene

Optimized `COPY` instructions, BuildKit cache mounts, and multi-stage builds deliver significant gains in Docker build speed and final image size. However, modern Docker hygiene extends far beyond these foundational optimizations, demanding a proactive approach to security and long-term maintainability. True professional containerization integrates these essential practices from the outset to build robust, production-ready systems.

Foundational to a lean and secure container is the deliberate choice of base image. Developers increasingly choose minimal options such as `alpine` or Debian's `slim-bullseye` over larger, general-purpose distributions. These images drastically reduce the attack surface by excluding unnecessary system utilities, libraries, and packages, directly translating into fewer potential Common Vulnerabilities and Exposures (CVEs) and faster image downloads. Alpine, for instance, leverages Musl libc for its small footprint, while `slim-bullseye` intelligently prunes extraneous components from a stable Debian base.

Beyond simply minimizing image size, adopt robust security postures within the container itself. Running your application as a non-root user is a critical best practice. Instructions like `USER nobody` or creating a dedicated, unprivileged user and group within the Dockerfile prevent potential privilege escalation. If an attacker compromises the application, the impact is severely limited, as the process lacks root access to the host system or other containers.
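A minimal sketch of creating and switching to an unprivileged user follows (an Alpine-based image is assumed, since `addgroup`/`adduser` with `-S` is BusyBox syntax; the `appuser` and `appgroup` names are illustrative):

```dockerfile
FROM node:18-alpine
WORKDIR /app

# Create a dedicated, unprivileged system group and user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Copy application files owned by the unprivileged user
COPY --chown=appuser:appgroup . .

# All subsequent instructions and the running container use this user
USER appuser

CMD ["node", "index.js"]
```

On Debian-based images the equivalent commands are `groupadd --system` and `useradd --system`.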

Maintaining this elevated standard requires continuous vigilance, especially within automated CI/CD pipelines. Proactive security scanning tools like Docker Scout and Trivy become indispensable, analyzing image layers for known vulnerabilities, misconfigurations, and outdated components. Integrating such scanners ensures security checks are "shifted left," catching issues early in the development lifecycle and ensuring Docker images remain resilient throughout their operational lifespan.
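A local pre-push check using the two scanners mentioned above could look like this (assuming Docker Scout and Trivy are installed, and `myapp:latest` is a hypothetical image tag):

```
# Scan the image for known CVEs with Docker Scout
docker scout cves myapp:latest

# Scan the image layers with Trivy
trivy image myapp:latest
```

The same commands slot into a CI pipeline step so that every built image is scanned before it is pushed to a registry.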

Your New Reality: The 3-Minute Build

You now possess the strategies to fundamentally transform your Docker build pipeline. No longer must you endure slow, bloated processes. You understand the critical impact of optimizing your build context with a `.dockerignore` file and strategic `COPY` order, ensuring only essential files reach the Docker daemon. This alone can shrink transfers from 500 megabytes to a mere 20.

You have seen the power of BuildKit cache mounts, which eliminate redundant dependency downloads. This innovation slashes dependency installation times, turning a three-minute `npm install` into a mere eight-second operation. This single optimization often marks the most dramatic performance gain for dependency-heavy projects.

Finally, you mastered multi-stage builds, a crucial technique for creating lean, production-ready images. By separating build-time dependencies from the final runtime environment, you drastically shrink final image sizes, improving deployment speed and reducing attack surface. And it simplifies maintenance.

Combined, these three core principles deliver staggering results. Builds that once left you waiting 10 to 15 minutes now routinely complete in under three. The cumulative impact is undeniable, making "Your Docker Builds Are Slow" a relic of the past.

This isn't the end of the journey; it’s a robust new beginning. These local Dockerfile optimizations establish the foundation for advanced, team-based solutions like Docker Build Cloud, which further accelerate collaborative development cycles across your entire organization.

Instead of accepting sluggish builds, take ownership. You have the knowledge and the tools to implement immediate, impactful changes. But don't just take our word for it: apply these three techniques to your slowest Docker project this week and measure the difference in build times and image sizes. You will discover that the video's "It's Your Fault" verdict is one you can overturn, replaced by efficient, rapid development.

Frequently Asked Questions

What is the single biggest mistake causing slow Docker builds?

The most common mistake is copying all application code into the Docker image before installing dependencies. This breaks Docker's layer cache, forcing a lengthy dependency reinstall with every single code change.

How does a multi-stage build make Docker images smaller?

A multi-stage build uses a temporary 'builder' stage with all the necessary tools and dependencies to compile or build the application. The final, smaller image is then created by copying only the essential compiled artifacts into a clean, minimal base image, leaving all build tools behind.

What is BuildKit and why is it faster?

BuildKit is Docker's modern build engine. It's faster due to features like parallel stage execution, skipping unused stages, and advanced caching, such as cache mounts which persist dependency caches between builds, dramatically speeding up install steps.

Why is a .dockerignore file so important?

A .dockerignore file prevents unnecessary files (like .git, node_modules, local logs) from being sent to the Docker daemon as part of the 'build context'. This drastically reduces the context size, speeding up the initial step of the build and preventing sensitive files from being included in the image.


Topics Covered

#docker #devops #performance #buildkit #optimization