This Linux Trick Stops Next.js Crashes

Stop upgrading your cloud server just to fix failing Next.js builds. A single, forgotten Linux command can stabilize your deployments and save you hundreds.


The Silent Killer of Your Next.js Deployments

You run `next build` on a fresh Ubuntu droplet and everything looks fine for about 10 seconds. Then the load average explodes, CPU pins at 100%, and your SSH session starts dropping characters like a bad Zoom call. A minute later the terminal snaps back with a single, infuriating word: “Killed.”

No stack trace, no neat error code, just a dead build. Run it again and you get the same pattern: fans spin up, `top` shows Node and a swarm of workers chewing through CPU, then silence. If you check your provider’s graphs afterward, you see a sharp spike in CPU and memory, followed by a cliff where your process disappeared.

This failure mode hits hardest on the cheap seats of the cloud. Those $5–$7/month, 1 GB RAM instances from DigitalOcean, Vultr, Linode, or Lightsail are perfect for a basic Node app—until you ask them to run a modern Next.js build. During that build window, your “tiny but mighty” droplet suddenly behaves like a Raspberry Pi trying to compile Chrome.

Developers usually meet this with the most expensive reflex in cloud computing: assume the hardware is the problem. The story goes the same way every time. The build dies, the server feels frozen, and the gut reaction is, “This box just can’t handle Next.js. I need 2 GB, maybe 4 GB.”

Cloud dashboards even nudge you there. You see memory pegged, CPU maxed, and a red “out of memory” event in the logs. The upgrade button sits one click away, promising that a bigger instance tier will make the problem vanish. For many teams, that click becomes part of the deployment playbook.

Reality is less glamorous and much cheaper. Those crashes usually don’t mean your app needs a permanently larger machine; they mean your build step briefly needs more memory than your droplet has on tap. And on a default Ubuntu image with swap disabled, the kernel has only one reliable way to cope with that spike: kill your build.

Why Your 1GB Server Hates `next build`


Next.js looks like a single `node` process on `htop`, but `next build` behaves more like a small distributed system crammed into one binary. Under the hood, Next.js orchestrates multiple Node workers, a TypeScript toolchain, a bundler, and asset pipelines, all competing for the same cramped 1 GB of RAM.

Start with Node itself. The main process launches several worker threads or child processes to parallelize page compilation. Each worker loads its own chunk of the dependency graph, V8 heaps, and build metadata. Instead of one 200–300 MB process, you briefly get several, and their peaks stack.

Next, the TypeScript story. When you run `next build` on a TypeScript project, the toolchain loads the TypeScript compiler, parses your entire codebase, and performs type checking. That means multiple large ASTs, symbol tables, and caches living in memory at once, often spiking hundreds of megabytes on mid-sized projects.

On top of that, Next.js invokes a bundler (Webpack or Turbopack) to generate both client and server bundles. Each target needs its own dependency graph, optimization passes, and source maps. Large component libraries, UI frameworks, and design systems balloon these graphs, so a project that runs fine at 300–400 MB in production can hit 800–900 MB or more during a build.
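
If you want to see how close your own project gets, GNU time (the `time` package on Ubuntu, not the shell builtin) can report a build's peak resident memory. Note that it measures the largest single process, so concurrent workers can stack above this figure:

```bash
# -v prints resource usage; grep out the peak RSS line (reported in kbytes)
/usr/bin/time -v npx next build 2>&1 | grep "Maximum resident set size"
```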

Then come images and static assets. When you enable `next/image` or process large media, the build pipeline decodes, resizes, and recompresses files. Image operations are memory-hungry: a few 4K hero images or sprite sheets can briefly occupy tens or hundreds of megabytes per worker before being written back to disk.

All of this happens in a tight window of a few seconds to a few minutes. Imagine a 20-seat coffee shop that suddenly hosts a 60-person flash mob. Normal traffic works fine, but that short, chaotic burst blocks the doorways, overwhelms staff, and leaves regulars stuck outside. `next build` creates exactly that kind of temporal overload on a 1 GB droplet.

On a 1 GB Ubuntu server with no swap, that flash mob pushes memory usage past the physical limit. The kernel starts reclaiming aggressively, caches vanish, and when it still cannot find enough RAM, the OOM killer steps in and terminates the heaviest processes. Your `next build` dies with a one-word epitaph: `Killed`.

The Kernel's Brutal Last Resort: OOM Killer

OOM killer sounds dramatic because it is. When a Linux system runs out of physical RAM, the kernel’s Out-Of-Memory (OOM) killer steps in as a last-resort safety valve, scanning every running process and deciding which one to sacrifice so the entire machine does not lock up. Without it, a 1 GB Ubuntu droplet under memory pressure would simply freeze, dropping SSH sessions and leaving you with a dead terminal and a forced reboot.

Next.js builds make perfect targets. During `next build`, Node spawns multiple worker processes, loads compilers like TypeScript, crunches bundles, and sometimes processes images, easily pushing memory use hundreds of megabytes above the baseline in a short spike. To the kernel, that looks like a large, recent, and nonessential process that can be killed with minimal “system-wide” impact.

Linux uses a heuristic called `oom_score` (and `oom_score_adj`) to rank victims. Large processes that recently allocated a lot of memory, do not run as root, and do not belong to core system services float to the top. A Next.js build on a 1 GB droplet, already sitting next to `nginx`, `sshd`, and maybe a small database, often becomes the fattest, most disposable thing in RAM.
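
You can watch this ranking live: every process exposes its score under `/proc` (the PID here is hypothetical):

```bash
cat /proc/1234/oom_score       # higher score = more attractive victim
cat /proc/1234/oom_score_adj   # -1000 (never kill) through 1000 (kill first)
```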

Running without swap raises the stakes completely. When RAM hits 100% and there is zero swap configured, the kernel has only two options: stall while it desperately reclaims pages, or invoke the OOM killer. That binary choice explains why your terminal hangs for a bit and then spits out a single word, “Killed”, with no stack trace, no Next.js error, and no helpful hint from Node.

That “Killed” line is not npm being rude; it is the kernel’s signature. You will see it on failed `pnpm install`, `npm install`, or `next build` runs when the OOM killer terminates the process mid-flight. System logs (`dmesg` or `journalctl -k`) usually reveal the smoking gun with entries like “Out of memory: Kill process 1234 (node) score 900 or sacrifice child.”
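
To confirm it was the kernel and not npm, search the kernel log right after a failed run:

```bash
sudo dmesg -T | grep -iE "killed process|out of memory"      # human-readable timestamps
sudo journalctl -k --since "10 minutes ago" | grep -i oom    # same log via journald
```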

Swap gives the kernel another move. With even a 1–2 GB swap file, the system can push cold pages—idle daemons, cache pages, infrequently used libraries—to disk, freeing RAM so the build can finish instead of getting culled. For step-by-step guidance, resources such as How to create swap file to deploy NextJS and Docker app on Ubuntu VPS walk through a production-friendly setup.

Meet Your Server's Secret Weapon: The Swap File

Swap space acts like a pressure valve for your tiny server’s memory. Instead of immediately killing `next build` when RAM hits 100%, the Linux kernel can spill excess data into a dedicated chunk of disk and keep going. That chunk is your swap file.

Think of swap as “overflow RAM” that lives on storage instead of silicon. Linux carves out a file or partition on disk and treats it as an extension of physical memory, measured in the same 4 KB pages that regular RAM uses.

When your 1 GB droplet runs out of RAM during a Next.js build, the kernel starts triage. Pages that belong to idle daemons, old caches, or background services move from RAM into the swap file, freeing real memory for the build’s hot code paths.

The kernel does this automatically through its virtual memory manager. You do not rewrite your app or touch Node flags; once swap exists and is active, the system quietly shuffles less-used pages away and reserves the fastest memory for the task currently doing the most work.

Disk is slow compared to RAM (microseconds on SSDs, milliseconds on spinning disks, versus nanoseconds), so using swap always adds latency. For a short-lived build, though, stability beats speed: a `next build` that finishes in 90 seconds on a swapping server is far better than one that dies after 20 seconds with a single word, “Killed”.

On a server with swap configured, that same brutal memory spike looks boring. CPU still climbs, fans still spin, but your SSH session stays responsive, `top` shows swap usage ticking up, and the build grinds through instead of detonating the process table.
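
You can watch the pressure valve working from a second SSH session while the build runs:

```bash
watch -n 2 'free -h; echo; swapon --show'   # refresh memory and swap usage every 2 s
```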

On a server without swap, the kernel has no escape hatch. Once RAM fills, it either stalls as it thrashes for reclaimable pages or invokes the OOM killer, terminating Node, your package manager, or whatever else looks expendable just to stay alive.

That contrast is stark: with swap, builds feel heavy but reliable; without it, the same workload can freeze your shell, trash your deploy, and force you to babysit a rebooted droplet.

Step-by-Step: Forging Your Swap File on Ubuntu


Start by confirming whether your Ubuntu box already has swap. Run `sudo swapon --show`. An empty result means no active swap devices. Follow up with `free -h` to see total RAM and current swap, then `df -h` to check disk usage. On a typical 25 GB droplet, you’ll usually see under 20% used, which leaves plenty of room for a 2 GB swap file.
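
All three checks in one copy-paste block:

```bash
sudo swapon --show   # empty output means no active swap
free -h              # total RAM plus any existing swap
df -h /              # free space on the root filesystem
```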

With disk space confirmed, allocate the swap file. For a 1 GB RAM server, a 2 GB file gives Next.js builds real breathing room without thrashing the disk. Use `sudo fallocate -l 2G /swapfile`. This reserves 2 GB instantly without actually writing zeros. Verify it with `ls -lh /swapfile` and you should see `2.0G` in the size column.
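
`fallocate` is the fast path, but a fallback is worth knowing: on filesystems where fallocated swap files are unsupported (some XFS and btrfs configurations), writing the file out with `dd` works everywhere:

```bash
sudo fallocate -l 2G /swapfile
ls -lh /swapfile   # expect 2.0G

# Fallback if fallocate fails or mkswap later complains about holes:
# sudo dd if=/dev/zero of=/swapfile bs=1M count=2048 status=progress
```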

Right now `/swapfile` is just a regular file that anyone might poke at. Lock it down so only root can read or write it. Run `sudo chmod 600 /swapfile`. Check permissions again with `ls -lh /swapfile` and you should see `-rw-------` at the start of the line, which confirms the file is private to root.

Next, turn that plain file into real swap space. Use `sudo mkswap /swapfile`. Ubuntu will respond with something like `setting up swapspace version 1, size = 2 GiB`. That message means the kernel now recognizes `/swapfile` as a valid swap area, but it is still inactive until you explicitly enable it.

Activate the swap with a single command: `sudo swapon /swapfile`. Run `sudo swapon --show` again and you should now see a table listing `/swapfile`, its size (about `2G`), and its priority. `free -h` will also show `Swap: 2.0Gi` in the summary, confirming the kernel can now offload memory pages when Next.js builds spike.
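
The full activation sequence, condensed into one block you can run top to bottom:

```bash
sudo chmod 600 /swapfile   # readable and writable by root only
sudo mkswap /swapfile      # mark the file as swap space
sudo swapon /swapfile      # activate it immediately
sudo swapon --show         # verify /swapfile is listed at ~2G
free -h                    # the Swap row should now read ~2.0Gi
```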

One caveat: everything so far lasts only until the next reboot. To make the swap file permanent, you need to register it in `/etc/fstab`, and that file deserves care; the next section walks through the whole procedure, including the backup you should take first.

Surviving a Reboot: Making Your Swap Permanent

The swap you just created survives only as long as the current boot. The `swapon /swapfile` command flips a runtime switch; as soon as the kernel restarts, that state vanishes and your Next.js builds are back to dying with `Killed`.

To keep swap online across reboots, you must register it in `/etc/fstab`, the file system table Linux reads at boot. That file tells the kernel which disks, partitions, and swap areas to mount automatically.

Before touching it, make a backup. A broken `/etc/fstab` can stop the server from booting at all, leaving you scrambling for a recovery console on your cloud dashboard.

Run:

- `sudo cp /etc/fstab /etc/fstab.bak`

Now open the file with a root-capable editor, for example:

- `sudo nano /etc/fstab`

Scroll to the bottom and add this exact line:

- `/swapfile none swap sw 0 0`

Every field matters. `/swapfile` is the path to your swap file. `none` stands in for a mount point, because swap does not mount into the directory tree.

`swap` declares the filesystem type, which tells the kernel this entry is virtual memory, not ext4 or xfs. `sw` is the mount option set, shorthand for “treat this as swap with default behavior.”

The last two zeros control `dump` and `fsck` behavior. `0 0` means the system will not try to dump or run filesystem checks on this file, which is exactly what you want for swap.

After saving, validate your work. `sudo mount -a` catches general `/etc/fstab` syntax errors, but `mount` skips swap entries, so follow it with `sudo swapon -a`, which activates every swap area listed in the file (entries already in use are silently skipped):

- `sudo mount -a`
- `sudo swapon -a`

No output usually means success. Reboot with `sudo reboot`, then confirm persistence using `free -h` or `swapon --show`. For deeper tuning and background on swap performance, see Supercharge Your Linux System with Swap Space - Kite Metric.
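
If you want extra confidence before rebooting, a common trick is to cycle swap off and back on purely from `/etc/fstab`, which exercises the new entry end to end:

```bash
sudo swapoff /swapfile   # deactivate the running swap
sudo swapon -a           # re-enable strictly from /etc/fstab
swapon --show            # /swapfile should reappear
```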

Swap's Double-Edged Sword: When It Helps vs. When It Hurts

Swap behaves like a pressure valve for memory, not a free performance upgrade. Used carefully, it keeps a tiny 1 GB droplet alive long enough for Next.js builds to finish. Used carelessly, it can grind a production app into the ground.

When to use swap: short, sharp memory spikes. A `next build` that runs for 2–5 minutes, briefly pushing usage from 700–800 MB to 1.3–1.5 GB, is perfect. The kernel can evict cold pages to disk, free a few hundred megabytes, and your build finishes instead of getting “Killed.”

Those same rules apply to other bursty tasks that run rarely and don’t serve live traffic. Good candidates include:

- `npm install`, `pnpm install`, or `yarn install`
- Database migrations or one-off data imports
- Occasional admin or maintenance scripts
- Deploy-time steps in containers or CI agents

In these cases, your app idles well under physical RAM—say 300–500 MB on a 1 GB server—and only needs extra headroom during builds or installs. You trade a bit of speed for reliability: a build might run 20–40% slower touching swap, but it actually completes. For many teams, staying on a smaller droplet offsets that cost immediately.

When not to use swap: steady-state memory pressure from your core app. If your Next.js server and database together want 1.4 GB on a 1 GB instance all day, the kernel constantly shuffles memory pages between RAM and disk. That thrashing destroys performance because disk, even SSD, is orders of magnitude slower than RAM.

You can spot harmful swapping with a few concrete symptoms (the commands below show where to look):

- High disk I/O (`iostat`, `iotop`) even at low request volume
- Sluggish HTTP responses or timeouts with only a handful of users
- `free -h` showing hundreds of megabytes of swap used and barely dropping at idle
- Load average spiking while CPU usage stays oddly modest
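
A quick diagnostic pass, assuming standard Ubuntu tooling (`vmstat` ships with `procps`; `iostat` needs the `sysstat` package and `iotop` its own):

```bash
free -h          # swap usage high and steady at idle is a red flag
vmstat 5         # nonzero si/so columns mean active swap-in/swap-out
iostat -x 5      # sustained disk utilization at low request volume
sudo iotop -o    # which processes are actually generating the I/O
```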

If those red flags appear, swap behaves like a band-aid on a bullet wound. The real fix is more RAM or tighter memory budgets: trim Node workers, reduce cache sizes, split services, or move the database off-box. Swap should catch rare spikes, not carry your app 24/7.
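
One kernel knob helps keep swap in that rare-spike role: `vm.swappiness` controls how eagerly the kernel swaps (Ubuntu defaults to 60), and a lower value is a common choice for servers that should prefer RAM until they genuinely run out:

```bash
sudo sysctl vm.swappiness=10                                   # apply immediately
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swap.conf  # persist across reboots
```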

The Final Piece: Taming Node.js Memory Limits


Swap buys you breathing room, but one more silent saboteur can still take your build down: Node.js itself. If Node decides it “needs” 8 GB of heap on a 1 GB server, no amount of swap wizardry will save you from the OOM killer. That’s where a single, obscure flag changes everything.

Node’s `--max-old-space-size` flag controls how much memory the V8 engine may use for its main JavaScript heap, measured in megabytes. When this limit sits too high, Node lets the heap grow toward a ceiling your machine cannot back with real memory, and the kernel kills the process once RAM plus swap run dry.

Next.js projects often hide this landmine inside `package.json`. Buried in the `scripts` section, you might see a build command like:

- `"build": "NODE_OPTIONS='--max-old-space-size=8192' next build"`

On a 1 GB droplet, that 8192 MB heap target is fantasy. Node will happily try to climb there, your 1 GB of RAM and maybe 1–2 GB of swap will evaporate, and your build will exit with the same blunt `Killed` message you started with.

First step: open your project’s `package.json` and inspect every build-related script. Look for anything that sets `NODE_OPTIONS` or directly passes `--max-old-space-size` to `node` or `next`, for example:

- `"build": "NODE_OPTIONS='--max-old-space-size=4096' next build"`
- `"build": "node --max-old-space-size=6144 node_modules/next/dist/bin/next build"`

Then align that number with your actual budget: physical RAM + swap, minus overhead for the OS, database, and background services. On a 1 GB server with a 2 GB swap file (roughly 3 GB total), a `--max-old-space-size=2048` cap usually makes sense and leaves headroom for everything else.
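
A minimal sketch of the corrected command, assuming the 1 GB RAM plus 2 GB swap budget above (tune the number to your own machine):

```bash
# Cap V8's old-generation heap at 2 GB; in package.json this becomes
#   "build": "NODE_OPTIONS='--max-old-space-size=2048' next build"
NODE_OPTIONS='--max-old-space-size=2048' npx next build
```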

Update the script, reinstall or redeploy, and run `next build` again. With swap enabled and Node’s heap capped to something realistic, your builds stop pretending they run on a 64 GB workstation and start behaving like they live on a cramped, cheap droplet.

Beyond Builds: Other Times Swap Will Save You

Swap quietly fixes more than flaky Next.js builds. Any workload that occasionally spikes memory on a small VPS or dev box benefits from having a few extra gigabytes of virtual memory to fall back on instead of face-planting into the OOM killer.

Package managers are repeat offenders here. A single `npm install`, `pnpm install`, or `yarn install` on a modern monorepo can spin up dozens of Node workers, unpack thousands of tarballs, and compute dependency trees in memory. On a 1 GB server, that can easily push usage past 90–100% RAM for several minutes.

Heavy data import and migration scripts behave the same way. ETL jobs that slurp a few hundred megabytes of JSON or CSV into memory, Prisma or TypeORM migrations that hydrate large schemas, or ad-hoc admin scripts that batch-process user records all create short, brutal memory spikes. With swap enabled, those spikes slow down instead of detonating your SSH session.

Database tooling also leans on RAM. Running `pg_restore` on a multi-gigabyte PostgreSQL dump, importing a MySQL snapshot, or running Elasticsearch reindexing can briefly allocate hundreds of megabytes of buffers and caches. A 1–2 GB swap file gives the kernel room to park inactive pages while the hot code paths stay in real RAM.

Containerized environments add another layer of fragility. A Docker container building a Next.js app, compiling native modules, or running tests might hit its cgroup memory limit long before the host does. Host-level swap space often acts as the last buffer that keeps the kernel from killing the container mid-build.
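
A hypothetical sketch using Docker's `--memory` and `--memory-swap` flags; `--memory-swap` sets the combined RAM-plus-swap ceiling, so this grants the build container 1 GB of each (host swap must exist, or the two limits collapse into one):

```bash
docker run --rm --memory=1g --memory-swap=2g \
  -v "$PWD":/app -w /app node:20 \
  sh -c "npm ci && npx next build"
```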

Local development is not immune either. Running `next dev`, Storybook, a database, and a browser full of tabs on an 8 GB laptop can chew through memory quickly. Pairing a small swap file with practices from How to optimize your local development environment - Next.js keeps your machine responsive while everything compiles.

Stop Overpaying: A Smart Developer's Guide to Scaling

Memory problems on small servers rarely need a bigger credit card. They need a diagnosis. Before you jump from a $5/month 1 GB droplet to a $20/month instance, check whether Next.js builds are blowing up because of short, spiky memory use or a constant, slow bleed.

A simple mental flowchart keeps you honest:

- Does the crash only happen during short, infrequent tasks (Next.js builds, `npm install`, migrations)? → Add swap first, then rerun the task.
- Does the server feel slow during normal traffic, with high swap usage and elevated latency? → Profile memory, optimize queries and caches, or upgrade RAM.
- Does swap usage stay high even when the app is “idle”? → You are masking a real capacity problem, not solving it.

For bursty workloads, a 1–2 GB swap file on a 1 GB RAM box often stops the OOM killer from nuking your build. You trade a few extra seconds or minutes of build time for a build that actually finishes. That’s a good deal when deploys happen a few times a day, not thousands of times per second.

Cost math makes the argument brutal and clear. Staying on a $5/month instance instead of jumping to $15 saves $10 every month, or $120 per year, per server. Scale that across 5 small services and you keep $600/year in your pocket for the price of a one-time `fallocate` and a line in `/etc/fstab`.

Smart scaling means stacking tools, not reflexively buying bigger boxes. Use swap to handle rare spikes, tune Node.js memory flags when builds still misbehave, and only then move up a tier when your steady-state usage proves you actually need it.

You end up with infrastructure you understand instead of infrastructure you fear. When a build dies with “Killed,” you know to check memory, swap, and Node.js limits before opening your cloud provider’s pricing page. That knowledge turns scaling from a panic move into a deliberate, cost-effective choice.

Frequently Asked Questions

Why do Next.js builds use so much memory?

Next.js builds are memory-intensive because they spawn multiple worker processes to compile TypeScript, bundle client and server code, and process assets like images simultaneously. This short but intense burst of activity can easily overwhelm servers with limited RAM, like 1GB cloud droplets.

Is using a Linux swap file bad for performance?

Swap is significantly slower than RAM, so it can harm performance if your application constantly relies on it for daily operations. However, for short, infrequent memory spikes like a build process, the slight slowdown is a worthwhile trade-off for stability, as it allows the build to complete successfully instead of crashing.

How much swap space should I add for a Next.js build?

A good rule of thumb for a small server (1-2GB RAM) is to add swap space equal to or double the amount of physical RAM. For a 1GB droplet, creating a 1-2GB swap file is often sufficient to handle the memory spikes during a Next.js build.

Can I use swap instead of upgrading my server's RAM?

You can use swap to avoid upgrading if your memory issues are caused by temporary spikes (like builds or package installs). If your application's day-to-day memory usage consistently exceeds your server's RAM, you should upgrade your RAM, as relying on swap for production traffic will lead to poor performance.

Tags

#Next.js #Linux #DevOps #Ubuntu #Server Optimization
