This AI App Builds Itself
A creator built a fully automated AI that writes viral posts for LinkedIn and YouTube. The shocking part? He didn't write a single line of code to do it.
Escaping the Content Treadmill
Content never sleeps. Brands, solo creators, and one-person SaaS shops now feel pressure to ship a steady stream of posts, scripts, and thumbnails tuned to each platform, each audience segment, and even each individual lead. Manual workflows—jumping between ChatGPT tabs, Notion docs, and half-finished drafts—collapse under that demand long before you hit 5 posts a day across 3 networks.
Brendan Jowett’s answer is something he calls a content machine: a self-running app that turns raw ideas and videos into finished, publishable assets. Built on a canvas of draggable nodes and chat agents, it orchestrates models, prompts, and data sources into a single surface that behaves less like a tool and more like a production line.
Instead of typing a new prompt every time inspiration hits, Jowett wires his LinkedIn history, YouTube transcripts, and company backstory directly into the system. One chat agent holds several example LinkedIn posts plus the full script from his latest YouTube upload; a single instruction—“create a new LinkedIn post for this video”—spits out copy he says he would “directly copy-paste into LinkedIn.” Hyper-personalization stops being a chore and becomes the default.
The same canvas hosts a dedicated zone for YouTube descriptions powered by multiple AI models, including Gemini 2.0 and others tuned for speed. Jowett feeds it prior descriptions as examples, then points it at a new transcript; the system returns keyword-optimized blurbs with structured paragraphs and calls to action, ready for upload without touching ChatGPT’s UI.
Further down, a more “intense” LinkedIn engine pulls from:
- Detailed company context
- His personal backstory
- Formatting rules and tone guidelines
- Archived sample posts
All of that funnels into one agent capable of producing posts that sound like him, reference his products, and follow his preferred structure on command.
A separate track even handles visuals. Using a model he calls Nano Banana Pro, the app reads a finished LinkedIn post, extracts a key line, and auto-generates a matching graphic—logo included—so text and imagery ship as a pair. The result functions less as a single AI feature and more as an integrated workflow environment for every repetitive step in content creation.
Inside the AI Command Center
Step onto the canvas and you don’t see a chat box; you see a control room. Brendan Jowett’s app opens on a sprawling, zoomable grid dotted with colored blocks, each one a self-contained function: a LinkedIn writer here, a YouTube description engine there, an image generator further down. It feels closer to Figma or Miro than ChatGPT, but every block ultimately feeds an AI model.
Each block is a draggable module. Some hold raw text: example LinkedIn posts, company backstory, formatting rules. Others store media: full YouTube video uploads with transcripts attached. A third category hosts live chat agents wired to models like Gemini 2.0 and a custom Nano Banana Pro image model, ready to generate on command.
Connections between these modules form a visible graph of context. A LinkedIn post node, for example, pulls from:
- A cluster of past LinkedIn posts
- A direct upload of the latest YouTube video
- A formatting-instructions block
Those arrows on the canvas literally define what the AI “knows” when it writes. Change a connection, and you change the brain.
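To make that concrete, here is a minimal TypeScript sketch of how a canvas like this might represent nodes and edges, and how an agent’s visible context could be assembled from its incoming connections. All names are illustrative assumptions, not taken from Jowett’s actual app.

```typescript
// Hypothetical node-graph model: every name here is illustrative.
type NodeKind = "context" | "media" | "agent";

interface CanvasNode {
  id: string;
  kind: NodeKind;
  label: string;
  content?: string; // raw text, a transcript, or example posts
}

interface Edge {
  from: string; // source node id
  to: string;   // target node id, usually an agent
}

// Everything wired into an agent becomes part of what it "knows".
function contextFor(agentId: string, nodes: CanvasNode[], edges: Edge[]): string {
  return edges
    .filter((e) => e.to === agentId)
    .map((e) => nodes.find((n) => n.id === e.from))
    .filter((n): n is CanvasNode => n?.content !== undefined)
    .map((n) => `## ${n.label}\n${n.content}`)
    .join("\n\n");
}
```

Rewire one edge and `contextFor` returns a different prompt context, which is exactly the “change a connection, change the brain” behavior the canvas makes visible.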
For YouTube descriptions, another region of the canvas repeats the pattern. Example descriptions flow into a chat agent, alongside the new video’s transcript node. One prompt—“Create a video description based on the examples I have provided”—produces a multi-paragraph, keyword-optimized blurb tailored to that specific upload, no manual copy-paste into a generic chat window.
Further down, a denser LinkedIn system layers even more context. Jowett drags in nodes for his companies, personal backstory, and highly specific style rules. All of them funnel into a single chat agent, turning one short instruction into a post that sounds uncannily like him, because the graph literally encodes who he is and what he cares about.
The image section on the canvas pushes the metaphor further. A text-only LinkedIn post node connects to an image-prompt node, three example graphics, and an uploaded logo node. The Nano Banana Pro model reads the full post, extracts a key line, overlays it as text, and drops the logo into place—no Photoshop, just rearranged blocks.
Viewed as a whole, the interface turns prompt engineering into a kind of LEGO system. Instead of wrestling with a 30-line mega-prompt, you rearrange nodes, drag new context into the graph, and visually debug what the AI sees.
Automating Viral LinkedIn Posts
Brendan Jowett’s canvas has a dedicated lane for LinkedIn, and it behaves less like a prompt box and more like a production line. One cluster of nodes holds 5–10 of his highest-performing posts, another ingests the full transcript from his latest YouTube upload, and a central chat agent fuses them into a single, always-on writing partner.
Those past posts aren’t just dumped in as raw text. He feeds the agent examples that encode his structure—hooky first line, short paragraphs, a clear CTA in the last sentence—alongside tone notes and formatting rules. That context lives as a persistent knowledge source, so the agent “remembers” his style without him rewriting instructions every time.
When a new video goes live, the workflow starts with a transcript upload directly into the canvas. A YouTube node pipes the full script into the same chat agent that already knows his LinkedIn voice, while source settings confirm exactly which examples and transcripts the model can see. One click, and the system wires all of that into a single request.
The actual prompt is almost insultingly simple: “Please create a new LinkedIn post based on the examples for the new video I have provided you with.” Behind that sentence, multiple models spin up; for this LinkedIn workflow, Jowett routes through Gemini 2.0 Pro to keep responses fast and consistent. The agent uses every attached source—examples, transcript, formatting notes—without any manual copy-paste.
Output arrives as something that looks suspiciously human: a punchy opener, a one-sentence thesis, 3–5 scannable lines pulling key points from the video, and a comment-bait CTA pointing to the full YouTube link. In his demo, the post references LiveKit, no-code agents, and “building with no code” because the system lifted those phrases directly from the transcript.
Crucially, Jowett doesn’t need to edit more than a word or two. He skims for accuracy, copies the block, pastes it into LinkedIn, and moves on. Compared to bouncing between ChatGPT, docs, and LinkedIn’s editor, he cuts a 15–20 minute task to under 60 seconds.
Anyone can recreate this kind of workflow with a canvas-based agent builder like Voiceflow, wiring together the pieces below; a sketch of the fused request follows the list:
- Example post libraries
- Transcript ingests
- Style and formatting instructions
- Multi-model routing for speed and cost control
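Under those assumptions, a single lane might reduce to something like this TypeScript sketch. The `callModel` helper and the model id are placeholders for whatever client the builder wires in, not a real SDK.

```typescript
// Hedged sketch: how a LinkedIn lane might fuse its sources into one request.
// callModel and the model id are placeholders, not a real SDK.

interface LinkedInLane {
  examplePosts: string[]; // 5-10 high performers
  styleRules: string;     // hooky first line, short paragraphs, closing CTA
  transcript: string;     // full script of the latest YouTube upload
}

async function draftLinkedInPost(lane: LinkedInLane): Promise<string> {
  const prompt = [
    "You are my LinkedIn ghostwriter. Match the examples' structure and tone.",
    "## Style rules\n" + lane.styleRules,
    "## Example posts\n" + lane.examplePosts.join("\n---\n"),
    "## New video transcript\n" + lane.transcript,
    "Create a new LinkedIn post based on the examples and this transcript.",
  ].join("\n\n");

  // Route drafts to a fast model; swap the id to trade speed for quality.
  return callModel("gemini-2.0-flash", prompt);
}

// Stand-in for whatever model client the no-code builder provides.
declare function callModel(modelId: string, prompt: string): Promise<string>;
```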
Conquering the YouTube Algorithm
Conquering YouTube starts on the same canvas, but with a different module snapped into place. Instead of thinking in posts and hooks, Brendan Jowett’s system pivots to video files, transcripts, and metadata, treating each upload as raw material for a full YouTube package. One node ingests the transcript, another stores formatting examples from past high-performing descriptions, and a chat agent stitches them together.
Feed the agent a new transcript and it doesn’t just summarize; it recreates your signature style line by line. The system copies paragraph structure, line breaks, emoji habits (if you use them), and call-to-action placement from your previous descriptions. Combined with keyword-rich phrasing pulled from the transcript, you get SEO-optimized blurbs tuned for YouTube search and Google’s crawler without touching ChatGPT or a docs file.
Jowett scrolls further down the canvas and things get more ambitious. A separate cluster of nodes hooks into transcripts from a whole library of past uploads—dozens of videos wired into a single chat agent. With that context, the agent can generate:
- New video ideas that don’t repeat topics
- Title variations optimized for click-through
- Rough script outlines that mirror your pacing and tone
Because the agent sees everything you have already published, it avoids duplicate angles and suggests adjacent topics instead. A creator with 50 uploads effectively hands YouTube “memory” to an AI that understands what has worked and what feels on-brand. That same memory powers title ideation, spinning 10–20 variants that lean into curiosity, numbers, or problem/solution formulas in seconds.
All of this sits on the same drag-and-drop canvas: upload a video, attach its transcript, route it through the YouTube description node, then into ideation and outline nodes. Raw footage turns into a thumbnail-ready title, SEO description, and script scaffold with minimal human edits. The app doesn’t just help you publish faster; it quietly learns your channel, then builds the next video for you.
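Here is a hedged sketch of how an ideation node could use that channel memory to avoid duplicate angles; the prompt wording and field names are assumptions, not Jowett’s actual setup.

```typescript
// Illustrative only: one way an ideation node could steer away from
// topics the channel has already covered. Field names are hypothetical.

interface ChannelMemory {
  pastTitles: string[];      // every published video title
  pastTranscripts: string[]; // deeper context, optional to include
}

function ideationPrompt(memory: ChannelMemory, count: number): string {
  return [
    `Suggest ${count} new video ideas for this channel.`,
    "Do NOT repeat or closely paraphrase any of these published titles:",
    memory.pastTitles.map((t) => `- ${t}`).join("\n"),
    "Prefer adjacent topics that extend what has already performed well.",
  ].join("\n\n");
}
```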
The Shocking Reveal: It's All No-Code
Forget a secret team of engineers hiding behind this canvas. Brendan Jowett’s entire “content machine” — chat agents, transcript search, image generation, LinkedIn and YouTube modules — runs on no-code. No Python scripts, no React front end, no database migrations; just drag-and-drop blocks and natural language instructions.
The engine behind it is Replet, a “vibe coding” environment that turns prompts into working software. Instead of scaffolding a project, wiring APIs, and gluing models together, Jowett describes what he wants in plain English: “a canvas with movable modules,” “a chat agent wired to my last 20 YouTube transcripts,” “an image generator that knows my logo.” Replet compiles that intent into a live app.
This flips the usual AI story on its head. Most people use ChatGPT, Claude, or Gemini as standalone tools — you paste text in, you copy results out. Here, Jowett prompts an AI to build the tool itself: the interface, the data flows, the model routing, even the reusable prompts that sit behind each node.
Instead of a developer translating requirements into code, the prompt becomes the spec, the backend, and the UX blueprint at once. When he wants a new workflow — say, “turn my latest video into a 3-paragraph SEO-optimized description using my past examples” — he doesn’t open VS Code. He adds a node, connects a transcript source, references his example bank, and describes the behavior.
Replet’s canvas abstracts away the stack that normally scares non-technical creators off. Under the hood you still have:
- Multiple AI models (Gemini 2.0, Nano Banana Pro)
- File and knowledge stores for transcripts, logos, and example posts
- Routing logic deciding which model handles which task
On the surface, you just see labeled blocks and arrows.
For non-developers, this is a power unlock on the level of early Squarespace or Webflow, but for AI-native software. A solo creator can spin up a private “content OS” tuned to their voice, their assets, their channels, without waiting for a SaaS startup to ship the exact feature they need.
Jowett’s system hints at where this goes next. If you can describe your workflow in enough detail — “how I ideate, how I write, how I repurpose” — tools like Replet can turn that description into a custom app. Not using AI, but commissioning software from it.
How to Prompt Your Own AI App
Brendan Jowett doesn’t just prompt an AI app; he prompts an AI app builder. Before he ever drags a node on Replet’s canvas, he opens ChatGPT and asks it to write a master prompt that describes the app he wants in painful detail. That meta-prompt becomes the blueprint Replet uses to assemble his entire “content machine” without a single line of custom code.
His master prompt reads more like a product spec than a casual chat. Jowett describes a canvas system with a zoomable workspace, draggable modules, and connectable nodes that pass data between each other. He calls out that each module should support text inputs, chat-style interfaces, and file uploads like YouTube transcripts, because those are the raw materials for his LinkedIn posts and YouTube scripts.
Structure matters. He breaks the app into clearly labeled regions: one cluster of nodes for LinkedIn copy, another for YouTube descriptions, another for image generation. Each region gets explicit instructions about what sources it should use (examples of past posts, video transcripts, company background) and how to format outputs. That specificity lets Replet wire up a system where a single chat agent can see his past content, current video, and style rules all at once.
Jowett also names technologies directly inside the prompt. He asks for a React Flow-style graph editor for the canvas, so modules appear as draggable nodes with connections, not static forms. He specifies AI model slots that can switch between providers like Gemini 2.0 or custom models such as Nano Banana Pro, mirroring how tools like Make let you swap integrations without rebuilding the whole workflow.
He doesn’t stop at UI and models. The prompt defines behaviors: how a LinkedIn node should read example posts, how a YouTube description node should optimize for keywords, how an image node should extract a single key line from a long post and overlay it on a branded graphic. He even tells the system that one node’s output must become another node’s input, enforcing a pipeline from transcript to post to image.
Quality scales with detail. A vague “build a content app” request yields a toy. Jowett’s multi-paragraph, component-by-component prompt yields a full control center: reusable chat agents, knowledge banks of transcripts, and image generation tuned to his logo and layout. The more you think like a product designer in the prompt, the more your no-code AI app behaves like a real, opinionated tool.
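As a rough illustration of “prompt as product spec,” here is what that component-by-component structure might look like if you pinned it down as data first. Every field name and value below is invented for illustration, not lifted from Jowett’s prompt.

```typescript
// A sketch of the structure a master prompt can encode when you treat it
// as a product spec. Every field name here is invented for illustration.

interface RegionSpec {
  name: string;         // e.g. "LinkedIn copy"
  sources: string[];    // what the region's agent is allowed to see
  modelSlot: string;    // swappable provider/model id
  outputFormat: string; // how results should be shaped
}

interface AppSpec {
  canvas: { zoomable: boolean; draggableNodes: boolean; edges: boolean };
  regions: RegionSpec[];
}

const spec: AppSpec = {
  canvas: { zoomable: true, draggableNodes: true, edges: true },
  regions: [
    {
      name: "LinkedIn copy",
      sources: ["example posts", "company background", "new transcript"],
      modelSlot: "text-model",
      outputFormat: "hooky first line, short paragraphs, closing CTA",
    },
    {
      name: "YouTube descriptions",
      sources: ["past descriptions", "new transcript"],
      modelSlot: "text-model",
      outputFormat: "keyword-rich paragraphs plus calls to action",
    },
  ],
};
```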
Generating On-Brand Visuals Instantly
Content automation usually falls apart the moment you need visuals. Brendan Jowett’s canvas dodges that failure mode with a dedicated image generation lane wired directly into his LinkedIn workflow, so graphics appear as fast as the posts themselves.
Once the system finishes a LinkedIn post, it pipes the entire text into an image agent. That agent doesn’t just slap the whole paragraph onto a slide; it scans the copy, extracts a single key insight, and turns that into a bold headline-style graphic designed to stop scrolling thumbs.
Jowett preloads three example image layouts into the canvas: clean, text-first cards with a single statement, generous padding, and his logo anchored in the bottom corner. The image agent uses those examples as a rigid template, so every new asset matches his existing style without him touching Figma or Canva.
Brand consistency comes from more than vibes. Jowett literally uploads his logo, drops it into the node graph, and gives the model explicit instructions on placement, color usage, and negative space. The system then references that logo object on every render, so alignment, sizing, and contrast stay identical across dozens of posts.
Under the hood, the visuals run on the new Nano Banana Pro model, which quietly solves one of generative AI’s most annoying problems: mangled text. Where typical diffusion models hallucinate letterforms or misspell simple words, Nano Banana Pro can render crisp, legible typography across multiple lines.
That matters because these aren’t abstract concept images; they’re text-led social cards. A single typo inside a graphic forces a manual redo. With Nano Banana Pro, Jowett can trust the model to keep the headline intact, the logo untouched, and the layout on-brand, turning image creation into just another automated hop in the content assembly line.
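Conceptually, the image lane reduces to a two-step pipeline: post text in, branded card out. This sketch assumes hypothetical `extractKeyLine` and `renderCard` helpers standing in for the text and image model calls; neither is a real API.

```typescript
// Sketch of the image lane as a two-step pipeline. Both helpers are
// hypothetical stand-ins for the text and image model calls.

async function postToGraphic(post: string, logoUrl: string): Promise<Uint8Array> {
  // Step 1: a text model picks the single most quotable line.
  const headline = await extractKeyLine(post);

  // Step 2: the image model renders it against the example layouts.
  return renderCard({
    headline,
    logoUrl,
    layout: "text-first card, generous padding, logo anchored bottom corner",
  });
}

declare function extractKeyLine(post: string): Promise<string>;
declare function renderCard(opts: {
  headline: string;
  logoUrl: string;
  layout: string;
}): Promise<Uint8Array>;
```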
The 4 Pillars of a Powerful AI Agent
Step back from the flashy canvas and Brendan Jowett’s system starts to look almost boringly systematic. He reduces every “AI agent” to four moving parts: Model, Instructions, Knowledge, and Tools. Miss any one of them and your automation either hallucinates, stalls, or spams out generic content.
The AI model is the brain. In Jowett’s content machine, that brain is Gemini 2.0 for text plus a separate Nano Banana Pro model for images, each wired into different nodes on the canvas. Swap the model and you change the personality, speed, and quality of everything downstream without touching the interface.
Instructions define what that brain is supposed to do. Those are the long, structured prompts that tell Gemini 2.0 “you are my LinkedIn ghostwriter,” or “you are my YouTube description optimizer.” Jowett bakes tone, formatting rules, and constraints directly into these instructions so every output sounds like him, not a generic AI demo.
Knowledge supplies the memory. Jowett dumps full YouTube transcripts, past LinkedIn posts, company backstory, and style examples into the system as persistent context. When he asks for a new post, the agent pulls from dozens of prior scripts and posts, not a single 20-line prompt he typed 30 seconds ago.
Tools give the agent hands, not just a mouth. On his canvas, tools show up as app functions and data pipes: upload a new video file, parse the transcript, generate a LinkedIn post, create an image with his logo, push text into a YouTube description field. Each node is a discrete capability the model can call, chained into one-click workflows.
Together, these four pillars explain why Jowett’s setup feels like software, not a chat window. Gemini 2.0 (model) follows tightly written prompts (instructions), grounded in transcripts and past content (knowledge), then triggers canvas actions (tools) to ship finished posts and graphics. That mental model scales from a solo creator to a marketing team stitching together Notion, Webflow, and Zapier.
Anyone building AI automations can steal this blueprint. Start by explicitly defining the four pieces below; a configuration sketch follows the list.
- Which model you use
- What instructions govern it
- What knowledge it can see
- Which tools it can actually operate
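Here is the four-pillar blueprint expressed as one data structure. This is a sketch, not Jowett’s actual schema; the model id, knowledge labels, and tool names are placeholders.

```typescript
// The four pillars as one data structure. A sketch, not Jowett's schema;
// the model id and tool names are placeholders.

interface AgentConfig {
  model: string;        // the brain: which model does the thinking
  instructions: string; // what that brain is supposed to do
  knowledge: string[];  // persistent context it can draw on
  tools: string[];      // node capabilities it can actually trigger
}

const linkedInGhostwriter: AgentConfig = {
  model: "gemini-2.0-pro",
  instructions: "You are my LinkedIn ghostwriter. Hook first, CTA last.",
  knowledge: ["past-posts", "company-backstory", "latest-transcript"],
  tools: ["generate-post", "generate-image", "push-to-draft"],
};
```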
The End of Manual Marketing Tasks?
Manual marketing work starts to look fragile once an app like Brendan Jowett’s content machine exists. One canvas, a handful of AI agents, and suddenly LinkedIn posts, YouTube descriptions, thumbnails, and ideation loops run on autopilot instead of willpower and late nights.
No-code AI platforms turn that canvas into a gateway drug for software creation. You stop being just a “user” of tools like ChatGPT and become the architect of bespoke workflows that match exactly how your business thinks, sells, and publishes.
Jowett’s setup quietly exposes a bigger shift: subject matter experts now sit one prompt away from their own internal SaaS. A marketer with deep knowledge of their audience can drag in transcripts, brand guidelines, and past posts, then wire them into a reusable agent that never forgets context and never gets tired.
That’s the core promise of AI-native no-code stacks such as Replet, Voiceflow, and Make.com. They let non-developers combine models like Gemini 2.0, custom knowledge bases, and APIs into production systems that rival what a small engineering team might have built five years ago.
Agencies like Inflate AI, Jowett’s own shop, already productize this shift at scale, selling prebuilt and custom automations that pull data from CRMs, ad platforms, and content libraries, then push results back into the tools teams already live in.
For businesses, the question stops being “Can we afford to build software?” and becomes “Can we afford not to?” When a sales leader can prompt an app that drafts outbound sequences from call transcripts, or a founder can spin up an agent that turns webinars into a week of content, the old build-vs-buy calculus collapses.
Look beyond marketing and the canvas gets crowded fast. Imagine prompted-into-existence apps for:
- Sales call summarization and CRM hygiene
- Customer support triage and reply drafting
- Product research from user interviews and NPS data
- Internal training content from existing SOPs and docs
So the real provocation isn’t whether manual marketing dies; it’s how much of your current workload only exists because you lacked the tools to automate it. If a few structured prompts can summon an app that eats your most hated task, what else in your business is quietly waiting to be replaced?
Your First Automated Workflow Awaits
Start with one workflow, not a grand vision. Pick a content task you repeat at least 3–5 times a week: rewriting LinkedIn posts from videos, drafting YouTube descriptions, turning newsletters into threads, or summarizing sales calls. If it already lives in a checklist, Notion doc, or your brain as “ugh, this again,” it qualifies.
Get specific. For your chosen workflow, write down:
- Inputs (e.g., YouTube URL, transcript, past posts)
- Outputs (e.g., 220–260 word LinkedIn post, 3 title options)
- Constraints (tone, brand rules, forbidden topics, formatting)
Now hand that to ChatGPT as a meta-task. Your prompt is not “write my post,” it is “design my app.” Ask it to generate a structured spec for an automation canvas: modules, data flow, prompts, and models. Reference Brendan Jowett’s system: multiple chat agents, transcript ingestion, and reusable prompt templates that turn one video into posts, descriptions, and images.
Be explicit about models and context. Tell ChatGPT you want (see the sketch after this list):
- A primary text model (e.g., GPT-4, Gemini 2.0)
- A knowledge layer for examples and transcripts
- Clear input/output fields for each node
- Error states and guardrails for off-brand content
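Pinning those answers down as data before you prompt keeps the spec honest. A minimal sketch, with purely illustrative values you would replace with your own:

```typescript
// Purely illustrative: pin the workflow down as data before prompting
// a builder. Every value here is an assumption to replace with your own.

const firstWorkflow = {
  inputs: ["YouTube URL", "transcript", "3-5 past posts"],
  outputs: ["220-260 word LinkedIn post", "3 title options"],
  constraints: ["first-person tone", "no forbidden topics", "one CTA max"],
  models: { text: "gemini-2.0", images: "none for v1" },
  guardrails: ["flag off-brand tone", "require human review before posting"],
};
```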
Take that spec into Replet (or a similar no-code agent builder) and start dragging blocks instead of writing code. Recreate one lane of Jowett’s canvas: for example, “YouTube URL → transcript → LinkedIn post + description.” Wire in your own sample posts, logos, and brand rules so the agent stops sounding like a generic growth hacker.
Treat this as a 90-minute experiment, not a six-month platform build. When you get something that reliably saves you 15–30 minutes per day, ship it into your real workflow and stress-test it for a week.
Then share the result. Post screenshots of your canvas, prompt snippets, and before/after content on LinkedIn or X, tag the tools you used, and ask for feedback. Your first automated workflow will not be perfect—but it will be real, and it will be running while you sleep.
Frequently Asked Questions
What is the 'Content Machine' described in the article?
It's a custom, no-code application built to automate content creation. It uses a visual canvas to connect data sources like video transcripts and writing examples to AI models for generating personalized posts, scripts, and images.
Do I need coding experience to build a similar application?
No. The entire system was built using a 'vibe coding' tool called Replet, where you describe the application you want in plain English, and the AI builds it for you.
How is this different from just using a tool like ChatGPT?
This method creates a permanent, customized system tailored to your specific workflow. It saves time by pre-loading all your context (brand voice, examples, data sources) so you don't have to manually paste them into a prompt every time.
What AI models can this system use?
The system is flexible, allowing users to swap different AI models. The video mentions using Google's Gemini for text generation and a model called 'Nano Banana Pro' for high-quality image generation.