Django's Celery Killer Has Arrived
Django 6.0 introduces a native background tasks framework, directly challenging Celery's long-held dominance. Discover if this new tool is the simple, built-in solution your project has been waiting for.
The 20-Year Wait for Native Tasks Is Over
For almost 20 years, Django developers have hacked around one glaring omission: no native way to run background work. Sending emails, processing uploads, crunching reports, or calling third‑party APIs all depended on third‑party tools, with Celery becoming the de facto standard as early as the late 2000s.
That unofficial standard came with a cost. Teams had to bolt on a separate broker, a worker pool, and a monitoring stack just to avoid blocking a single HTTP request, even for trivial jobs like “send a welcome email” or “recalculate a counter.”
The community has complained about this gap for over a decade in tickets, blog posts, and conference talks. What people actually wanted sounded simple: a standardized, built‑in API for asynchronous work that did not force Redis, RabbitMQ, or a specific worker implementation on every project.
Instead, the ecosystem fragmented. Some teams went all‑in on Celery; others picked Huey, RQ, or custom homegrown queues. Swapping one out meant invasive rewrites, because each library shipped its own decorators, result objects, and retry semantics baked directly into application code.
That pressure finally crystallized into a formal Django Enhancement Proposal from core contributor Jake Howard. His DEP 0014, "Background workers," proposed a minimal, pluggable Django Tasks Framework that would live in core, define the public API, and let backends compete on implementation details.
Howard’s design drew a sharp line: Django would own how you declare and enqueue tasks, while third‑party backends would own storage, workers, and scaling. That split mirrors how Django treats databases or caches, and it immediately resonated with maintainers tired of coupling business logic to a single queue library.
Django 6.0, released in December 2025 after an alpha in September, beta in October, and RC1 in November, is where that proposal lands for real. The new tasks module ships as part of the framework, not as an add‑on, and every fresh project now has a first‑class background jobs story from day one.
Architecturally, this marks a turning point on the scale of Django’s async views or migrations framework. Background work is no longer “some external system you glue on later,” but a core capability that other features, third‑party apps, and future DEPs can build on directly.
Django's New Weapon: The Pluggable API Layer
Django 6.0’s new Django Tasks Framework draws a hard line between definition and execution. Core Django now standardizes how you declare a task and push it onto a queue, but it refuses to dictate what runs that queue, how workers look, or where jobs get stored. That separation turns tasks into a pluggable API layer rather than a monolithic background worker baked into the ORM or request stack.
At the heart of the system sits the @task decorator. You slap it on a plain function, and Django registers it as a task with metadata like queue name, priority, and whether it needs a task context. Tasks must live at module scope, take JSON-serializable arguments and return values, and can opt into `takes_context=True` to receive a context object exposing attempt count and the `TaskResult` ID.
Once a function becomes a task, you hand work to the framework through the task object itself. From sync code you call `my_task.enqueue(*args, **kwargs)`; async code awaits `my_task.aenqueue()` with the same arguments, mirroring Django's sync/async split elsewhere. Under the hood, Django passes the call to the configured backend, which decides whether that means pushing a message into Redis, writing to Postgres, or doing something entirely custom.
Every call to `enqueue()` returns a TaskResult object, Django's single abstraction over "what happened to my job." That object carries a unique ID you can stash in your database or send back to the client, then later reload through the backend's `get_result()` to inspect status, error details, and the return value. Framework consumers never touch backend internals directly; they talk to TaskResult and let the backend map that onto its own storage.
Deliberate incompleteness defines this design. Django ships no production worker, no retry engine, no cron-style scheduler, and no chaining API; all of that belongs in backends or third-party libraries. Core only promises that `@task`, `enqueue()`, and `TaskResult` stay stable so the ecosystem can build workers, dashboards, and bridges to Celery, RQ, or Redis on top.
Initial backends stay minimal on purpose. The `immediate` backend executes tasks synchronously as soon as you call `enqueue()`, which makes unit tests and local debugging trivial because stack traces and breakpoints behave exactly like normal function calls. The `dummy` backend discards every task without running anything, ideal for environments where you must keep task calls in place but cannot allow actual execution, such as certain staging or dry-run setups.
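As a sketch of how a project might select one of these backends, the `TASKS` setting mirrors `DATABASES` and `CACHES`; the backend dotted paths below are my best reading of the built-in backends and worth verifying against the 6.0 docs:

```python
# settings.py sketch: choosing a task backend, in the same spirit as
# DATABASES and CACHES. Paths assume Django 6.0's built-in backends.
TASKS = {
    "default": {
        # Run tasks inline at enqueue() time; ideal for tests and local dev.
        "BACKEND": "django.tasks.backends.immediate.ImmediateBackend",
        # Swap in for environments where tasks must never execute:
        # "BACKEND": "django.tasks.backends.dummy.DummyBackend",
    }
}
```

Because code only ever talks to `@task` and `enqueue()`, moving from the immediate backend to a queue-backed one should be a settings change, not a refactor.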
The '80% Solution' You've Been Waiting For
Background work in Django mostly means the same three chores, over and over: send emails, process user uploads, and talk to external APIs. That “80% use case” rarely needs fan-out workflows, distributed routing, or per-queue rate limiting. It just needs to stop blocking the request thread while an SMTP server, S3 bucket, or payment provider wakes up.
Django Tasks Framework targets that band directly. You decorate a function with @task, call `enqueue()` (or `aenqueue()`), and hand off work like “send this password reset email” or “generate thumbnails for this image.” For many apps, that single pattern covers password resets, onboarding sequences, webhook dispatch, PDF generation, and cache warmers.
Historically, teams pulled in Celery for those jobs and ended up running a full queue stack for a handful of functions. You got a broker, a result backend, a worker pool, beat scheduling, and a forest of settings before your first email left the building. Django Tasks Framework removes that requirement for simple cases by standardizing the API while staying agnostic about the heavy machinery behind it.
Think of it as an entry ramp: you commit to one way of declaring and enqueuing tasks, long before you commit to RabbitMQ, Redis, or a Kubernetes-scale worker farm. Early-stage projects can start with the immediate backend, then swap in a real queue-backed worker when traffic and latency demand it. No mass refactor, no ripping out Celery-specific decorators across the codebase.
Django’s own backends stay intentionally minimal: an immediate backend that runs work inline, and a dummy backend that never executes. That keeps dependencies low, keeps everything feeling Django-native, and avoids pulling in extra daemons for CRUD apps or internal tools. For details on backends, context, and result handling, the Django 6.0 Tasks Framework Documentation spells out the contract third-party workers plug into.
Anatomy of a Modern Django Task
Background work in Django 6.0 starts with a plain Python function wrapped in the new @task decorator. A minimal example looks like this:
```python
from django.tasks import task


@task(
    priority=5,
    queue_name="notifications",
    takes_context=True,
)
def notify_user(context, user_id, message):
    """
    Send a notification to a user.

    context: injected when takes_context=True
    user_id: database ID of the user
    message: text body of the notification
    """
    # Access metadata about this specific execution
    attempt = context.attempt  # 1 on first try, increments on retries
    result_id = context.task_result.id  # stable ID for tracking

    # Your real logic goes here (email, push, SMS, etc.)
    # You can log attempt/result_id for observability
    return {"status": "sent", "attempt": attempt, "result_id": result_id}
```
Those decorator arguments are the core knobs Django exposes. `priority` is a numeric hint to the backend; higher numbers mean "run sooner" relative to other jobs on the same queue. `queue_name` routes work to a specific queue like `"notifications"` or `"image-processing"`, while `takes_context=True` injects a context object as the first parameter.

That context turns a fire‑and‑forget function into something you can actually observe. `context.attempt` exposes how many times this task has run so far, which becomes critical once your backend adds retries. The attached task result gives you a stable identifier (`context.task_result.id`) you can stash in your database, logs, or analytics so you can reconnect to this run later.
Enqueuing work happens through the task's `enqueue()` method in sync code. From a view or signal handler, you might do:
```python
from django.http import JsonResponse


def create_order(request):
    # ... create order, commit transaction ...
    result = notify_user.enqueue(
        user_id=request.user.id,
        message="Your order is on its way!",
    )
    # result.id is the stable TaskResult identifier
    return JsonResponse({"task_id": result.id})
```
Async code uses `aenqueue()` instead, mirroring Django’s async views and consumers:
```python
async def async_view(request):
    result = await notify_user.aenqueue(user_id=1, message="Hi")
    return JsonResponse({"task_id": result.id})
```
Once you have a result ID, status checks go through the backend's `get_result()`:
```python
from django.http import JsonResponse
from django.tasks import TaskResultStatus, default_task_backend
from django.views.decorators.http import require_GET


@require_GET
def task_status(request, task_id):
    result = default_task_backend.get_result(task_id)
    payload = {
        "id": result.id,
        # Status values per the docs: READY, RUNNING, FAILED, SUCCESSFUL
        "status": result.status,
    }
    if result.status == TaskResultStatus.SUCCESSFUL:
        payload["value"] = result.return_value  # notify_user's return value
    elif result.status == TaskResultStatus.FAILED:
        payload["errors"] = [error.traceback for error in result.errors]
    return JsonResponse(payload)
```
That trio — `@task`, `enqueue()`/`aenqueue()`, and `get_result()` — forms the minimal, standardized surface every Django Tasks Framework backend must honor.
The Missing Pieces: What Django Leaves Out
No surprise twist here: Django 6.0 ships no built-in worker. You can define and enqueue tasks with the Django Tasks Framework, but production setups still need a separate long-running process to pull jobs from a backend and execute them. The official docs and early talks hammer this home: a “Django tasks worker” or equivalent is mandatory once you leave the dev shell.
Django’s core API also skips the niceties Celery users take for granted. There is no first-party support for automatic retries when an email provider times out, no built-in cron-style schedules, and no native task chains or groups. If you want “run this every 5 minutes” or “only start task C after A and B succeed,” you will not find that logic in django.tasks.
Those features live entirely in whatever backend you bolt on. A Redis-backed worker, an RQ bridge, or a future Celery adapter can decide how to implement retries, backoff strategies, periodic jobs, and fan-out/fan-in pipelines. Django only promises a stable surface: `@task`, enqueue calls, and a TaskResult abstraction that backends can extend.
Designers of the Django Tasks Framework made this constraint explicit from day one. Core Django ships only an immediate backend (runs tasks inline) and a dummy backend (never runs them), both intentionally non-production. Anything that actually persists jobs, coordinates workers, or manages distributed execution comes from third-party packages.
That split creates a sharp line of responsibility. Django owns the interface: how you declare tasks, how you pass arguments, how you inspect results. The community owns the implementation: how tasks serialize, where they queue, how workers scale across containers or regions.
For teams used to Celery, this feels almost bare-metal. You gain a standardized, framework-blessed entry point but lose the batteries-included scheduler, inspector, and retry orchestration until your chosen backend reintroduces them. Early experiments already target Redis, RQ, and even Celery itself as backends that speak the Django Tasks Framework API.
Long term, this mirrors Django’s existing pattern: ORM, cache, and email layers define contracts, while Postgres, Redis, and SMTP servers do the heavy lifting. Tasks now join that list, on purpose, incomplete.
Celery's Kingdom Under Siege?
Celery built its kingdom on one promise: industrial-grade background jobs for Django long before Django cared. Now Django 6.0 ships a Django Tasks Framework that bakes task declarations into core, and suddenly Celery is no longer the default assumption, just the biggest player on the board.
Side‑by‑side, the models look very different. Django Tasks defines a standard API: `@task`, enqueue functions, and `TaskResult` objects, while delegating storage and execution to pluggable backends. Celery ships a full stack: broker, workers, schedulers, result stores, and a decade of ecosystem tooling.
A quick comparison makes the split obvious:
- Django Tasks: part of Django, minimal configuration, JSON‑only payloads, backends required for real queues; great for emails, uploads, API calls, and on‑commit hooks
- Celery: separate package, requires a broker like Redis or RabbitMQ, supports complex serialization, includes workers and a beat scheduler; tuned for distributed clusters, high throughput, and multi‑service architectures
Celery still dominates where scale and complexity matter. You get built‑in retries with exponential backoff, periodic tasks via beat, and workflow primitives like chains, groups, and chords that orchestrate dozens of jobs across many workers. Large deployments routinely push thousands of tasks per second through Celery backed by Redis or RabbitMQ.
Django Tasks pushes hard on ergonomics instead. You stay inside Django, import from `django.tasks`, decorate a function, and call `enqueue()` without touching a broker directly. For roughly “80% use cases” — transactional emails, thumbnail generation, cache warmers, webhook fan‑outs — that zero‑dependency feel inside Django removes a major adoption barrier.
Power users lose some niceties if they ditch Celery today. No official worker, no native retry policies, no built‑in scheduling, and no battle‑tested monitoring UI ship with Django 6.0. Those pieces live in community backends and third‑party dashboards, which still trail Celery’s long‑mature ecosystem.
Strategically, Django Tasks changes the default question teams ask. New projects will start with the core framework and only reach for Celery when requirements clearly demand distributed workflows, advanced routing, or strict SLAs. Celery becomes an escalation path, not the starting point.
So does Django 6.0 kill Celery? No — it narrows Celery’s territory. Background jobs now belong to Django by default, while Celery defends the high‑end, multi‑node, “never drop a job” frontier. For deeper technical details, the Django 6.0 Release Notes spell out exactly how the new Django Tasks Framework plugs into real backends.
When You Absolutely Still Need Celery
Celery still wins any time your background work stops looking like a side quest and starts looking like its own distributed system. When you need hundreds of workers, multiple queues spread across regions, and predictable behavior under spikes of 10,000+ jobs per second, Celery’s decade-plus of battle scars matters more than Django’s new shine.
Use Celery when you need advanced routing logic instead of "fire-and-forget." That means features like:

- Multiple queues per service with fine‑grained routing keys
- Per‑task rate limiting and concurrency caps
- Hard and soft time limits, and automatic circuit‑breaker behavior
Serious workflows still rely on Celery’s AMQP support and broker flexibility. If your architecture already runs on RabbitMQ, Redis, or even more exotic brokers, you get durable queues, message acknowledgements, dead‑letter exchanges, and backpressure semantics that Django Tasks Framework intentionally does not define.
Complex pipelines also push you back to Celery. Chords, groups, chains, and canvases let you orchestrate fan‑out/fan‑in jobs, multi‑step ETL pipelines, and long‑running data science workloads that can span dozens of tasks and machines. Django’s task API can enqueue work, but it does not model that kind of workflow graph.
Monitoring is another dividing line. Teams that live inside Flower, Prometheus dashboards, and custom Grafana boards built on Celery’s events stream will not accept “check the TaskResult” as an alternative. Celery exposes per‑worker metrics, queue depths, retry storms, and task latency histograms that SREs use to keep SLAs intact.
High‑volume SaaS platforms, fintech backends, and marketplaces processing millions of jobs per day still treat Celery as core infrastructure. For that tier, Celery’s proven retry semantics, result backends, and operational tooling remain unmatched by Django’s young, pluggable layer.
The Ecosystem Awakens
Shockwaves hit Django land almost as soon as the Django Tasks Framework merged. Within days of the 6.0 release candidates, community repos started popping up with “experimental” Redis backends, task runners, and admin integrations trying to turn the bare API into something you can actually ship to production.
Early adopters targeted the obvious gap: a production-ready worker that speaks the new interface. Packages surfaced that wrap Redis lists or streams, spin up a `worker` management command, and map `priority` and `queue_name` from `@task` directly onto Redis data structures, effectively recreating a lightweight RQ-style runner behind Django Tasks Framework.
Redis quickly became the default playground. One family of adapters focuses on dead-simple setups—single Redis instance, FIFO queues, no sharding—optimized for that “80% use case” of emails, image processing, and webhook calls. Another wave experiments with more advanced features like delayed jobs, backoff-based retries, and per-queue rate limits, all exposed via plain decorator kwargs.
Bridge packages now aim to connect existing ecosystems rather than replace them. You can already find prototypes that let you reuse Celery-style broker URLs while routing all task definitions through `@task`, so your code looks native but your infrastructure still leans on Celery’s hardened workers and monitoring tools.
Most ambitious are early sketches of a “Celery backend” that would let Celery act as a drop-in worker behind the standard Django Tasks API. The idea: tasks stay framework-native, but a backend adapter translates `enqueue()` calls into Celery tasks, maps result IDs, and proxies status checks so you can migrate incrementally instead of rewriting years of task code.
Discussions on Django’s mailing list, GitHub issues, and Twitter threads cluster around three missing pillars: observability, lifecycle hooks, and polished admin UX. People want structured events for “task started / retried / failed,” pluggable monitoring backends that can stream those events to tools like Better Stack, and first-class dashboards that show queues, workers, and hot paths directly inside Django’s admin.
If the pattern holds, expect a Darwinian ecosystem over the next 6–12 months: half a dozen Redis workers, at least one Celery bridge, and a few opinionated “batteries-included” distributions fighting to become the de facto standard backend for Django 6.0 tasks.
Practical Hurdles and Gotchas
JSON hits first. Django Tasks serializes arguments and return values using JSON, which means anything you pass into a task must be JSON-friendly: strings, numbers, booleans, lists, dicts. Hand it a `datetime`, `Decimal`, or a model instance and you’ll get serialization errors or silent loss once backends start enforcing strict types.
You can work around this, but you have to be explicit. Convert complex objects to primitive representations (IDs, ISO 8601 strings, plain dicts) and reconstruct them inside the task. A good rule: if it can’t survive a `json.dumps()` / `json.loads()` round trip, don’t send it as a task argument or return value.
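A quick sketch of that discipline, using hypothetical helper names rather than anything Django ships:

```python
import json
from datetime import datetime, timezone
from decimal import Decimal


def to_task_args(order_id, placed_at, total):
    """Flatten rich objects into JSON-safe primitives before enqueueing."""
    args = {
        "order_id": order_id,                # pass the ID, not the model instance
        "placed_at": placed_at.isoformat(),  # datetime -> ISO 8601 string
        "total": str(total),                 # Decimal -> string, no float rounding
    }
    # The litmus test: must survive a dumps/loads round trip unchanged.
    assert json.loads(json.dumps(args)) == args
    return args


def from_task_args(args):
    """Reconstruct rich values inside the task body."""
    return (
        args["order_id"],
        datetime.fromisoformat(args["placed_at"]),
        Decimal(args["total"]),
    )


args = to_task_args(42, datetime(2025, 12, 3, tzinfo=timezone.utc), Decimal("19.99"))
oid, placed_at, total = from_task_args(args)
```

The same rule applies to return values: whatever a task returns ends up stored by the backend, so keep it to primitives as well.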
Database writes introduce another sharp edge. If you enqueue a task during a view that runs inside `transaction.atomic()`, the task might run before the transaction commits, see stale data, or fail on missing rows. Django’s on_commit hook exists exactly to avoid that race.
Pattern to remember: do the write, then schedule the task from `on_commit`. For example, after creating an `Order`, use `transaction.on_commit(lambda: send_order_email.enqueue(order_id=order.id))` so the worker only sees committed state. Skipping this will create heisenbugs that only appear under load or with slow databases.
Workers still live outside your main process. You must run a dedicated worker process (or several) under something like systemd, Supervisor, or Kubernetes Jobs. That means extra deployment manifests, health checks, logging, and restart policies, just like you already do for Celery.
Operationally, this new stack shrinks dependencies but not responsibilities. You still need to think about concurrency limits, queue backpressure, and graceful shutdowns so tasks can finish before processes die. For many teams, that’s a new operational layer, not a free feature.
Monitoring lands in the “some assembly required” bucket. Celery has Flower, Prometheus exporters, and years of dashboards; Django Tasks ships with none of that. Early backends expose basic status via `TaskResult`, but you will likely wire custom admin views, logs, or APM integration first.
Source-level spelunking will help. The Django Tasks Framework Source Code shows exactly how results, statuses, and errors flow, which you will lean on while waiting for richer third-party UIs and metrics to catch up.
Your Next Move with Django Tasks
Existing Django projects should treat Django Tasks Framework as an incremental upgrade, not a flag day migration. If you already run Celery, RQ, or Huey in production and they work, keep them for critical paths and introduce Django Tasks only for new, low‑risk flows. Full rewrites of complex Celery setups rarely pay off in v1 of a new API.
New projects starting on Django 6.0 can safely default to Django Tasks for the “80%” jobs: emails, thumbnail generation, cache warmers, and simple third‑party API calls. Reach for Celery only when you know you need cross‑service workflows, scheduled jobs at scale, or thousands of tasks per second across multiple queues.
Teams stuck on Django 5 can still experiment today using the official backport. Install the backport package, wire it into `INSTALLED_APPS`, and configure the immediate or dummy backend so you avoid standing up Redis or RabbitMQ on day one. You get the same `@task` decorator and `enqueue()` API as Django 6.0 without a framework‑wide upgrade.
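A sketch of that Django 5 setup, assuming the `django-tasks` backport package (package and backend names taken from its README; check the current release before copying):

```python
# First: pip install django-tasks

# settings.py
INSTALLED_APPS = [
    # ...
    "django_tasks",  # provides the @task decorator and enqueue API on Django 5
]

TASKS = {
    "default": {
        # Inline execution: no Redis or RabbitMQ needed on day one.
        "BACKEND": "django_tasks.backends.immediate.ImmediateBackend",
    }
}
```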
Safe experimentation means picking a non‑critical feature and swapping it to Django Tasks behind a feature flag. Ideal candidates include:
- Password reset or signup confirmation emails
- “Thanks for registering” or “we received your order” notifications
- Low‑volume webhooks to analytics or logging services
Start by moving a single password reset email to a background task that uses `on_commit()` to enqueue after the user model saves. Use the immediate backend in development, then flip to a simple Redis backend and a single worker process in staging. Measure request latency and task failure rates before you touch anything revenue‑adjacent.
Treat JSON‑only serialization as a forcing function rather than a limitation: refactor task arguments down to IDs and primitives instead of passing full model instances. That discipline makes it easier to swap between Celery, a Redis backend, or any future Django Tasks Framework adapter.
Strategically, Django Tasks marks a maturity point for Django as a batteries‑included web framework. Background work now speaks a common, core API, which lowers dependency bloat and makes Django more competitive and self‑contained for the majority of web apps that do not need industrial‑grade Celery orchestration.
Frequently Asked Questions
What is the new Django Tasks Framework?
It's a built-in API in Django 6.0 for defining and queuing background tasks. It standardizes how tasks are created but requires a separate backend and worker process for execution.
Does Django 6.0 completely replace Celery?
No. The framework is designed for simpler, common use cases (the '80% solution') like sending emails. For complex, distributed workflows with built-in retries and chains, Celery remains the more powerful tool.
Does the Django Tasks Framework include a worker?
No, it does not. This is by design to keep the Django core lean. You must configure a third-party backend and run a separate worker process to execute the tasks.
Can I use the new tasks API with older Django versions?
Yes, a backport library allows you to use the new tasks API with Django 5, making it easier to experiment and prepare for migration to Django 6.0.