Your Database is Now a Film Studio
Stop struggling to explain complex data with static charts and reports. This groundbreaking workflow transforms raw SQL tables into engaging explainer videos, 100% automatically.
The Data Communication Bottleneck
Rows and columns do not tell a story. A 30-column table of financial metrics, analyst ratings, and production forecasts might encode billions of dollars in risk, yet to most people it looks like static noise. Even seasoned analysts need hours to interrogate a SQL result set before they can explain what actually matters to an executive, a client, or a customer.
Traditional tools pretend to fix this. Spreadsheets, BI dashboards, and ad hoc charts surface KPIs and pretty graphs, but they rarely answer the question non-experts actually have: “So what?” A dashboard can show that free cash flow yields differ across segments; it will not explain why oil-heavy producers are beating gas-focused firms or what that should change in next quarter’s strategy.
Narrative usually arrives as an afterthought, bolted on in a slide deck or a rushed email summary. Someone exports CSVs, pastes screenshots into PowerPoint, and writes a script or speaking notes to walk stakeholders through the numbers. Context lives in the analyst’s head, not in the data product itself, and it disappears the moment they move on to the next request.
Manual reporting pipelines do not scale. Turning one complex dataset into a coherent 4-minute explainer video requires:
- Deep analysis to find real insights
- Scriptwriting to translate stats into a story
- Visual design to build charts and diagrams
- Video production to record, edit, and polish
Each step burns time and attention. A single explainer for a dense dataset—like North American energy futures with 30+ columns—can easily swallow several days of work across analysts, designers, and video editors. Repeat that for weekly updates, multiple regions, or dozens of product lines, and the math collapses.
Meanwhile, data volume keeps climbing. Companies log every click, trade, sensor reading, and support ticket, then rely on static reports to communicate what changed. Business intelligence teams become bottlenecks, fielding endless “Can you walk me through this?” requests from stakeholders who cannot—or will not—parse raw dashboards.
The real challenge now is not storing or querying information; databases and cloud warehouses solved that. The hard problem is communication: how to move from raw SQL output to clear, engaging explainer videos or narratives that anyone can understand, on demand, without hiring an army of analysts and video producers.
From Raw Data to Final Cut, Automatically
Imagine pointing an automation workflow at a crusty SQL table and getting back a fully narrated explainer that feels like it came from a motion-graphics studio. That is the pitch behind a Data-to-Video pipeline built entirely in n8n, where your database becomes the scriptwriter, storyboard artist, and video editor in one shot.
Data flows in from PostgreSQL or any SQL backend, hits a chain of AI Agents, and exits as a 4-minute whiteboard-style explainer video. No one writes a script, no one opens a timeline, and no one records a voiceover; n8n orchestrates every step, from query to final MP4.
The demo runs on a real North American energy-sector dataset: 30+ columns of futures pricing, analyst ratings, production forecasts, and financial metrics. From that mess, the workflow surfaces insights like oil-heavy producers making up 55% of the sample, delivering 11.2% free cash flow yields versus 7.1% for gas-focused firms, and trading at 7.9x P/E versus 10.5x.
All of that ends up in a clean, human-sounding narration over hand-drawn-style visuals. You see a split-screen chart of oil-heavy vs. gas-focused companies, yield differentials called out in bold labels, and valuation multiples sketched as simple bar stacks that move in sync with the voiceover.
A short 30-second clip in the tutorial shows this in action: the narrator explains how returns diverge by production mix while animated whiteboard diagrams draw themselves on-screen. No jump cuts, no awkward pauses, just a continuous explainer that feels scripted by an analyst and produced by a studio.
Behind the scenes, n8n pulls the data, routes it through OpenRouter-hosted models like GPT, Grok, or Gemini for analysis and narrative generation, and then passes text prompts to Google Gemini Nano Banana Pro for image creation. A final assembly step stitches visuals and audio into a single video file.
You press run once. Data updates, the workflow executes, and a new explainer drops out the other end—ready for your tutorial library, internal resources hub, or public-facing explainer videos page.
The Four-Phase Automation Blueprint
Four distinct phases turn n8n from a workflow tool into a fully automated Data-to-Video studio. Each phase handles a specific creative job that humans usually do manually: analyst, scriptwriter, illustrator, and video editor. Together, they convert raw SQL rows into a four-minute explainer that looks deliberately produced, not auto-generated.
Phase 1 is Data Analysis. An AI agent connects directly to your SQL database (PostgreSQL in the tutorial, but any SQL source works) and pulls from a defined table, like an energy pricing table with 30+ financial columns. It inspects schema, column types, row counts, and distributions, then produces a structured 1,200-word analytical report with sections like executive summary, methodology, key findings, and recommendations.
This agent does more than summarize. It compares cohorts (for example, oil-heavy vs. gas-focused producers) and surfaces metrics like free cash flow yield (11.2% vs. 7.1%) and valuation gaps (7.9x vs. 10.5x P/E). That report becomes the single source of truth every later phase consumes, so the narrative, visuals, and audio all stay aligned with the same underlying data.
Phase 2 is Story Generation. A separate AI storyteller ingests the analytical report and restructures it into a five-part narrative: setup, context, key insight, implications, and takeaway. Instead of paragraphs of statistics, you get human-friendly beats that sound like a voiceover script for a YouTube breakdown.
This narrative agent enforces pacing and clarity. It decides which numbers matter for a general audience, which comparisons to highlight, and how to frame risk, upside, or anomalies. The result is a script that feels editorial, not like a database dump read aloud.
Phase 3 is Visual Generation. For each of the five narrative segments, the workflow calls Google Gemini Nano Banana Pro to generate whiteboard-style illustrations. Prompts include specific entities (oil-heavy producers, gas-focused firms), metrics, and relationships, so each frame directly reflects the underlying SQL insights.
These images act as visual anchors for the viewer. The system outputs five coherent, stylistically consistent frames that match the tone of a hand-drawn explainer. If you want to explore alternative tools, the official n8n integration library lists additional image and AI services you can swap in.
Phase 4 is Video Assembly. n8n stitches together:
- The five narrative text segments
- Text-to-speech audio for each segment
- The five whiteboard-style images
Audio and visuals synchronize into a single MP4 explainer, typically around four minutes, ready to publish. No timeline scrubbing, no manual rendering—just a fully automated video pipeline from SQL query to finished file.
Phase 1: Your Autonomous AI Data Analyst
Phase 1 turns your n8n workflow into something that looks suspiciously like a small data team. At the center sits an Orchestrator Agent, a high-level controller that decides what questions to ask, when to fetch more data, and how many analysis passes to run. It doesn’t touch SQL directly; it delegates that grunt work to a specialized partner.
That partner is the Data Retrieval Agent, a purpose-built worker whose only job is to talk to your PostgreSQL instance. Inside n8n, this agent gets wired to a database tool that exposes a live connection, so it can generate and execute its own SQL on demand. No pre-baked queries, no static dashboards—just dynamic prompts turning into real database calls.
Instead of hardcoding “SELECT * FROM energy,” the workflow hands the agent a tool description and lets the model decide which tables, columns, and filters matter. It can start broad, sampling the schema, then narrow down to specific joins, time ranges, or subsegments. That means one workflow can adapt to a 30-column energy dataset today and a marketing funnel table tomorrow without a single manual edit.
The orchestration logic matters. The Orchestrator Agent receives a mission: extract every meaningful trend, anomaly, and pattern from this database slice. It then instructs the Data Retrieval Agent when to:
- Inspect schema (columns, data types, row counts)
- Pull descriptive statistics
- Segment by key dimensions like sector, product, or region
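To make that dynamic-query behavior concrete, here is the kind of cohort comparison the Data Retrieval Agent might generate on its own and hand to its Postgres tool. The table and column names (`energy_futures`, `oil_production_pct`, `fcf_yield`, `pe_ratio`) are illustrative assumptions, not the tutorial's actual schema.

```typescript
// Illustrative only: a cohort-comparison query the Data Retrieval Agent
// might emit for execution by n8n's Postgres tool. Names are assumed.
const cohortQuery = `
  SELECT
    CASE WHEN oil_production_pct >= 50
         THEN 'oil_heavy' ELSE 'gas_focused' END AS cohort,
    COUNT(*)                          AS companies,
    ROUND(AVG(fcf_yield)::numeric, 1) AS avg_fcf_yield_pct,
    ROUND(AVG(pe_ratio)::numeric, 1)  AS avg_pe_ratio
  FROM energy_futures
  GROUP BY 1
  ORDER BY cohort;
`;
```

A result set shaped like this is exactly what lets the agent write sentences such as "oil-heavy producers deliver 11.2% free cash flow yields versus 7.1% for gas-focused firms" in its report.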
Once the raw data comes back, the system message pushes the Data Retrieval Agent into analyst mode. It must output a structured, ~1,200-word report, not a loose bullet list. The brief forces an executive summary, methodology, data profile, key findings, and recommendations, all written in clear, professional language.
Derek Cheung’s demo uses a Supabase-hosted energy table as the sandbox, but the pattern generalizes to any PostgreSQL-compatible backend. In his example, the agent automatically surfaces splits like oil-heavy vs. gas-focused producers, free cash flow yield gaps (11.2% vs. 7.1%), and valuation differences (7.9x vs. 10.5x P/E). Those aren’t canned insights; they emerge from the agent’s own queries and comparisons.
Because the objective is a narrative-ready report, the agent doesn’t stop at “what” the numbers say. It must translate trends into actionable insights: which segment outperforms, which metric drives that outperformance, and what a decision-maker should do next. That 1,200-word artifact becomes the master script Phase 2 will later chop into story beats and, eventually, frames of your explainer video.
Phase 2: Turning Dry Statistics Into a Compelling Story
Phase 2 hands the baton from spreadsheet-brain to storyteller-brain. The workflow promotes a new specialist: the Master Data Storyteller agent, whose entire job is to turn a 1,200-word analytical report into something that feels like a tightly directed whiteboard explainer, not a quarterly earnings call. Instead of tweaking charts, this agent thinks in beats, scenes, and visual metaphors.
Its persona is wired like a lead director for a high-density whiteboard animation studio. That means it assumes constraints you’d expect in production: 4-minute runtime, fast pacing, no wasted shots, and visuals that must read clearly in a single frame. Every decision it makes—what to emphasize, what to cut, how to transition—serves that production mindset.
Structurally, the agent outputs a five-part JSON array. Each element represents a segment of the final video and contains two payloads: a narration script and a detailed visual prompt. n8n doesn’t see “story,” it sees an ordered data structure that downstream nodes can consume without guessing.
Narration segments read like mini-scenes: 30–45 seconds each, tuned to hit one primary insight. For the energy dataset, one segment might focus entirely on oil-heavy vs. gas-focused producers, spelling out that oil-heavy companies (55% of the sample) deliver 11.2% free cash flow yields vs. 7.1% for gas players, and trade at 7.9x P/E vs. 10.5x. Every number the analyst surfaced in Phase 1 becomes dialogue the audience can follow.
Visual prompts go deep into shot design. Instead of “draw energy companies,” the JSON might specify: “wide whiteboard scene, split-screen; left side labeled ‘Oil-heavy (55%)’ with bold ‘11.2% FCF yield’ and ‘7.9x P/E’; right side ‘Gas-focused (45%)’ with ‘7.1% FCF yield’ and ‘10.5x P/E’; simple icons for oil rigs vs. gas wells; clean black line art, high contrast.” That level of specificity lets Google Gemini Nano Banana Pro generate consistent, on-brand frames.
Smooth transitions glue these five segments into a single narrative arc. The agent explicitly scripts connective tissue: callbacks to previous stats, foreshadowing of the next segment, and verbal handoffs like “zooming out from valuations, the real story appears in production mix.” Those transition lines live in the JSON alongside each segment, so when n8n later assembles audio and visuals, the final cut feels intentional—more studio pipeline than spreadsheet export.
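As a rough sketch of what one element of that JSON array might look like (field names are assumptions; the tutorial's exact keys may differ), each segment carries its narration, its visual prompt, and its transition:

```typescript
// A minimal sketch of one story segment, under assumed field names.
interface StorySegment {
  index: number;         // 1–5, order in the final cut
  narration: string;     // 30–45 second voiceover script
  visualPrompt: string;  // whiteboard-style brief for Gemini Nano Banana Pro
  transition: string;    // verbal handoff into the next segment
}

const segment: StorySegment = {
  index: 2,
  narration:
    "Oil-heavy producers make up 55% of the sample and deliver 11.2% free cash " +
    "flow yields, versus 7.1% for their gas-focused peers.",
  visualPrompt:
    "Wide whiteboard split-screen: left 'Oil-heavy (55%)' with bold '11.2% FCF yield', " +
    "right 'Gas-focused (45%)' with '7.1% FCF yield'; clean black line art, high contrast.",
  transition:
    "Zooming out from cash flow, the valuation gap tells the same story.",
};
```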
Phase 3: AI Artistry for Data Visualization
Phase 3 is where the workflow stops sounding like back-office analytics and starts looking like a production studio. The five narrative segments from the story phase each become a visual brief, and n8n fans them out into parallel jobs against Google’s Gemini Nano Banana Pro model, accessed through the Higgsfield API.
Each brief arrives as a tightly formatted JSON payload: segment title, 2–3 sentence description, key data points, and explicit “whiteboard-style, black marker on clean background” art direction. The prompt also encodes constraints like “no UI chrome,” “no logos,” and “no photorealism,” so Nano Banana Pro behaves like a storyboard artist, not a stock-photo generator.
n8n’s HTTP Request node handles the Higgsfield call. For each segment, it sends a POST to the Nano Banana Pro endpoint with a body that includes:
- The text prompt generated by the Master Data Storyteller
- Output format set to PNG
- Resolution set to 1920×1080
- Aspect ratio locked to 16:9 for video-safe framing
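A hedged sketch of that call looks roughly like the snippet below; the endpoint path, field names, and model identifier are assumptions, not Higgsfield's documented contract.

```typescript
// Assumption: endpoint path, body fields, and model identifier are illustrative.
async function submitImageJob(visualPrompt: string): Promise<string> {
  const res = await fetch("https://cloud.higgsfield.ai/v1/images", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.HIGGSFIELD_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gemini-nano-banana-pro", // assumed model identifier
      prompt: visualPrompt,            // text from the Master Data Storyteller
      output_format: "png",
      width: 1920,
      height: 1080,
      aspect_ratio: "16:9",
    }),
  });
  const { job_id } = await res.json(); // async job: poll for completion next
  return job_id;
}
```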
Higgsfield treats image generation as an async job, so the workflow does not block and hope. The first call returns a job ID, which n8n stores on the item and passes into a dedicated “job status” loop. That loop uses a Wait node configured for a 5–10 second delay between checks to avoid hammering the API.
Each pass through the loop triggers another HTTP Request node that hits the Higgsfield status endpoint with the job ID. The response exposes a simple state machine: queued, running, completed, or failed. A Switch node branches on that field so only completed jobs exit the loop into downstream processing.
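In code form, that Wait-plus-Switch pattern reduces to a small polling loop; the status endpoint path and response field names below are assumptions.

```typescript
// Poll the assumed status endpoint until the image job resolves.
async function waitForImage(jobId: string): Promise<string> {
  for (;;) {
    await new Promise((r) => setTimeout(r, 7_000)); // ~5–10 s between checks
    const res = await fetch(`https://cloud.higgsfield.ai/v1/jobs/${jobId}`, {
      headers: { Authorization: `Bearer ${process.env.HIGGSFIELD_API_KEY}` },
    });
    const { status, image_url } = await res.json();
    if (status === "completed") return image_url; // hand the URL to assembly
    if (status === "failed") throw new Error(`Image job ${jobId} failed`);
    // "queued" and "running" fall through and poll again
  }
}
```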
Once a job hits “completed,” n8n pulls down the image URL or binary payload, normalizes file names like `segment-03-cashflow.png`, and stores them in a predictable location for the assembly phase. For deeper implementation details on HTTP nodes, looping, and binary data handling, the official n8n documentation walks through the exact configuration patterns this pipeline uses.
Phase 4: The Final Assembly Line
Phase 4 turns a pile of assets into a finished, watchable file. By this point, n8n holds five narrative segments, matching whiteboard images, and structured metadata. Video assembly wires those into a strict timeline so an external renderer can treat your data like a storyboard, not a guessing game.
Everything starts with a JSON manifest. n8n maps each story beat into an ordered array of scenes, where every scene includes:
- `text` (narration line or paragraph)
- `image_url` (Gemini Nano Banana Pro output)
- `duration_seconds`
- `voice_id` or style
- `scene_index`
That manifest also stores global settings: target resolution (typically 1080p), frame rate, background color, and audio mix levels. By standardizing this schema, you can swap out render engines later without touching the upstream Agents.
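Sketched as a type, the manifest might look like this; the key names are assumptions drawn from the fields listed above, not a fixed contract of the render API.

```typescript
// Assumed manifest shape; the render service defines the real contract.
interface Scene {
  scene_index: number;
  text: string;             // narration line or paragraph
  image_url: string;        // Gemini Nano Banana Pro output
  duration_seconds: number;
  voice_id?: string;        // TTS voice or delivery style
}

interface VideoManifest {
  resolution: string;       // e.g. "1920x1080" for 1080p
  fps: number;              // frame rate
  background_color: string;
  audio_mix: { narration_db: number; music_db: number };
  scenes: Scene[];          // ordered by scene_index
}
```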
Heavy lifting moves to a custom video generation API, deployed on a service like Railway or a Hostinger VPS. The API handles:
- Text-to-speech synthesis for each segment
- Waveform alignment so visuals match spoken emphasis
- Final MP4 rendering with transitions and background track
Under the hood, the service accepts the JSON payload over HTTPS, queues a render job, and returns a `job_id` plus a status endpoint. n8n sends this request from an HTTP node, passing the full manifest as raw JSON, including all image URLs and narration blocks.
From there, the workflow enters a polling loop. A simple Wait node pauses for 10–20 seconds, then an HTTP node checks `/status/{job_id}` until the API reports `completed` or `failed`. On success, the response includes a signed `video_url` pointing to cloud storage.
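Put together, the render round-trip might look like the sketch below. The base URL is a placeholder for wherever the custom API is deployed, and the status field names are assumptions.

```typescript
// Submit the manifest, then poll the status endpoint until the render resolves.
async function renderVideo(manifest: object): Promise<string> {
  const base = "https://video-api.example.com"; // hypothetical Railway/VPS deployment
  const submit = await fetch(`${base}/render`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(manifest), // full manifest: scenes, narration, image URLs
  });
  const { job_id } = await submit.json();

  for (;;) {
    await new Promise((r) => setTimeout(r, 15_000)); // 10–20 s between checks
    const status = await (await fetch(`${base}/status/${job_id}`)).json();
    if (status.state === "completed") return status.video_url; // signed URL
    if (status.state === "failed") throw new Error(`Render job ${job_id} failed`);
  }
}
```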
n8n finishes by downloading that URL to local disk or S3, attaching the file to an email, or posting it directly to Slack or YouTube. Your SQL query has quietly become a 4-minute explainer video.
The Engine Room: A Look at the Tech Stack
Every automated explainer in this system runs on a compact but opinionated stack: n8n for orchestration, OpenRouter for AI brains, PostgreSQL for truth, and a Higgsfield–Gemini combo for the visuals. Each piece slots into a specific phase of the pipeline, from SQL query to rendered frame.
At the center sits n8n, running self-hosted on a VPS such as Hostinger’s KVM2 plan. That setup matters: instead of bumping into SaaS rate limits, you get effectively unlimited workflow executions and AI Agents, full root access, and one-click n8n with queue mode enabled for parallel runs.
Self-hosting on a VPS also keeps latency and control in your hands. You decide when to scale CPU and RAM, how to handle secrets, and which regions your automation stack lives in—critical for teams pushing hundreds of data-to-video jobs per day.
AI logic flows through OpenRouter, which acts as a meta-layer over models like GPT, Grok, and Gemini. The workflow can route different tasks—data analysis, narrative structuring, visual prompts—to different models without changing the surrounding n8n nodes.
Because OpenRouter abstracts vendors, you can A/B test models on live workloads. Swap GPT for Grok on the “Master Data Storyteller” agent, or move the data analyst to a cheaper model tier, all via API keys and model names in n8n, not a full pipeline rewrite.
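Because OpenRouter exposes an OpenAI-compatible chat endpoint, swapping models for one agent is mostly a matter of changing the model string. A minimal sketch follows; the model IDs are examples, so check OpenRouter's catalog for current names.

```typescript
// Route one agent's prompt through OpenRouter; only the model string changes
// when you A/B test GPT vs. Grok vs. Gemini for this role.
async function runStoryteller(report: string, model: string): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model, // e.g. "openai/gpt-4o" or "x-ai/grok-2" — example IDs
      messages: [
        { role: "system", content: "You are the Master Data Storyteller..." },
        { role: "user", content: report },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```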
Underneath the Agents, PostgreSQL anchors everything as the source of truth. n8n’s native Postgres nodes run SQL queries against tables like the 30+ column energy dataset, returning structured rows that the AI agents consume directly.
That tight Postgres integration means the same automation can pivot from energy pricing to SaaS metrics or user logs by changing a query, not the architecture. Views, materialized views, and scheduled refreshes give the agents clean, pre-modeled data to work with.
Visuals come from a two-part stack: Higgsfield as the image-generation API layer, and Google Gemini Nano Banana Pro as the underlying model. n8n sends narrative segments and scene descriptions to Higgsfield’s endpoint, which calls Gemini to render those stark whiteboard-style frames.
Because Higgsfield exposes a simple HTTP API, the workflow can request five or fifty images per video, tweak prompt templates, and enforce consistent style across episodes. The result: a database-driven film studio where SQL, Agents, and Gemini co-direct every frame.
Case Study: Unlocking Insights in Energy Sector Data
Energy markets generate exactly the kind of dense, multi-factor data that usually dies in a spreadsheet. Derek Cheung's data-to-video tutorial workflow attacks that problem with a real dataset: North American energy companies, futures pricing, and more than 30 columns of financial and operational metrics. Analyst ratings, production forecasts, balance sheet stats, and valuation ratios all sit in a single PostgreSQL table wired into n8n.
Instead of hand-built models, an AI agent in n8n pulls that table and generates a 1,200-word analytical report. It inspects column structure, row counts, and distributions, then segments the universe by production mix. From there, the pipeline identifies two clear cohorts: oil-heavy producers and gas-focused firms.
Those cohorts unlock the headline finding: oil-heavy producers are winning. They represent 55% of the sample yet post meaningfully stronger cash generation and more attractive valuations. The agent doesn’t just say “oil looks better” — it quantifies exactly how much better.
Free cash flow yield becomes the first anchor metric. Oil-heavy companies deliver 11.2% free cash flow yields, compared to just 7.1% for gas-focused peers. That 4.1 percentage point gap signals materially higher cash returns to equity, without anyone touching Excel or a BI dashboard.
Valuation flips the intuition further. Despite stronger cash generation, oil producers trade at cheaper P/E multiples: 7.9x versus 10.5x for gas names. The automated script calls this out explicitly, framing oil-heavy firms as both higher-yield and lower-multiple — a classic mispricing story that would resonate with portfolio managers.
All of those numbers feed straight into the narrative engine. The “Master Data Storyteller” agent turns 11.2% vs 7.1% yields and 7.9x vs 10.5x P/E into a five-part script about capital efficiency, risk, and market perception. Each segment gets a whiteboard-style visual from Google Gemini Nano Banana Pro: bar charts for yield spreads, side-by-side comparisons for multiples, and callouts for the 55% oil-heavy share.
The final 4-minute explainer video stitches narration, visuals, and on-screen metrics into a self-contained briefing on energy equity positioning. No human writes a word or draws a frame. For readers who want to inspect or fork the underlying automation, the n8n workflow and community ecosystem start with the official n8n GitHub Repository, which pairs cleanly with OpenRouter models and a PostgreSQL backend.
Build Your Own Data-to-Video Pipeline Today
Your database is already sitting on a backlog of explainer content. This n8n workflow turns those forgotten rows into a continuous stream of explainer videos without editors, motion designers, or voiceover artists in the loop. One SQL query in, one 4-minute whiteboard video out.
Start by cloning the core idea from the tutorial: four phases, one pipeline. Wire n8n to your SQL source—PostgreSQL, Supabase, or whatever powers your dashboards—and have an AI agent pull a single, well-scoped dataset: one product line, one geography, one quarter. Ship something small before you dream about a 10,000-row content factory.
From there, replicate the agent stack. Use OpenRouter to reach models like Grok, GPT, or Gemini for analysis and scripting, then pipe prompts into Google Gemini Nano Banana Pro for whiteboard-style images. Keep the same five-part narrative structure from the tutorial so every video feels like a tight, executive-ready story instead of a meandering data dump.
You don’t have to reinvent the infrastructure either. Self-host n8n on a Hostinger VPS (Derek uses the KVM2 plan) so you can run unlimited executions without SaaS rate limits. One box, one workflow, thousands of automated runs pulling from your data warehouse.
To get moving fast, hit the official resources and links:
- n8n docs: https://n8n.io
- OpenRouter: https://openrouter.ai
- Higgsfield / Gemini Nano Banana Pro: https://cloud.higgsfield.ai
- Derek’s AI Automation Skool community: https://www.skool.com/ai-automation-engineering-3014
Most important: plug in your own metrics. Marketing attribution tables, churn logs, sales funnels, sensor data—run them through the same four-phase blueprint. Adapt the prompts, tweak the visuals, and your database stops being a reporting chore and starts behaving like a fully automated video studio.
Frequently Asked Questions
What is n8n and why is it used in this workflow?
n8n is a workflow automation tool that acts as the central orchestrator. It connects different services like your SQL database, AI models, and video generation APIs to create a seamless, automated pipeline.
Do I need advanced coding skills to build this?
No, this is a low-code approach. While some familiarity with APIs and data structures is helpful, n8n's visual interface allows you to build most of the workflow by connecting nodes without writing extensive code.
Can I use a different database besides PostgreSQL?
Yes. The workflow is adaptable. n8n supports various SQL databases, so you can connect it to your database of choice, such as MySQL or Microsoft SQL Server, with minor adjustments to the connection nodes.
What makes AI Agents critical for this process?
AI Agents automate the cognitive tasks. Instead of a human analyzing data and writing a script, the agents autonomously query the database, identify key insights, and then structure those findings into a compelling narrative.