AI Designs an App in 36 Seconds. You In?

A new AI tool powered by Gemini can generate a full mobile app design from a single text prompt in under a minute. This isn't just a prototype—it's a revolution that could change the design industry forever.

The 36-Second Disruption

Thirty-six seconds is not just a flex; it is a direct assault on the timeline of how software gets made. In Moritz’s viral demo, a user types “food delivery app,” toggles Design Max to tap Gemini 3, hits Generate, and Compos.ai spits out full mobile app screens before a YouTube pre-roll would finish buffering.

Traditional app design moves at a completely different speed. A typical product cycle spends 1–2 weeks on stakeholder workshops, user journeys, and wireframes, then another 2–4 weeks on high-fidelity mockups in Figma or Sketch, with rounds of review stretching that to months for anything ambitious.

Design teams usually involve:
- 1–3 product designers
- 1 product manager
- 1–2 engineers for feasibility checks

All of them bill hours while basic layouts crawl from whiteboard to prototype.

Moritz’s 36-second workflow compresses that entire front-loaded phase into a single prompt box. No component libraries to curate, no auto-layout fiddling, no color tokens to define—Gemini 3 infers patterns from millions of prior interfaces and outputs something that looks suspiciously close to a first client presentation.

For designers, the gut reaction often lands somewhere between awe and existential dread. If a prompt can generate 10–20 reasonably coherent screens in under a minute, what happens to the days they spend polishing navigation hierarchies, empty states, and onboarding flows?

Developers feel the ground shift too. UI that used to justify multi-sprint front-end schedules now appears instantly, pushing them toward integration, performance, and edge cases instead of pixel placement. Founders, meanwhile, see a pitch deck problem vanish: idea today, demo-ready visuals before lunch.

This is not a parlor trick stitched together in post. Tools like Compos.ai, Cursor, and CopyCoder already chain models so that one system designs, another writes code, and a third refines copy—turning “build an app” into a multi-agent workflow running at machine speed.
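
A minimal sketch of that chaining idea, in TypeScript. The three agent functions are hypothetical stand-ins, not real Compos.ai, Cursor, or CopyCoder APIs; in a real pipeline each would wrap a model call.

```typescript
// Toy sketch of a design -> code -> copy agent chain. All function names
// and return shapes here are invented for illustration.
type Screen = { name: string; layout: string; copy: string };

async function designAgent(idea: string): Promise<Screen[]> {
  // A text-to-UI model would return real screen specs here.
  return [{ name: "Home", layout: `layout for ${idea}`, copy: "" }];
}

async function copyAgent(screens: Screen[]): Promise<Screen[]> {
  // A language model would refine the microcopy on each screen.
  return screens.map((s) => ({ ...s, copy: `Welcome to ${s.name}` }));
}

async function codeAgent(screens: Screen[]): Promise<string[]> {
  // A code model would translate each spec into components.
  return screens.map((s) => `<Screen name="${s.name}" />`);
}

async function buildApp(idea: string): Promise<string[]> {
  return codeAgent(await copyAgent(await designAgent(idea)));
}

buildApp("food delivery app").then(console.log);
```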

What Moritz shows is a visible breakpoint: ideation and first-pass design no longer belong to calendar time. They now live on GPU time, and that shift will not stay confined to mockups for long.

Inside the 'Magic Box': Compos.ai

Compos.ai sits at the center of Moritz’s 36‑second stunt. It is a browser-based AI design tool that turns a single sentence into a full set of mobile app screens, no Figma skills or design system knowledge required. Moritz does not touch a canvas; he only touches a prompt box.

The workflow looks almost offensively simple. You open Compos.ai, type something like “food delivery app” into the prompt field, toggle a setting called Design Max, and hit Generate. Within seconds, the interface fills with multi-screen layouts that resemble something you could hand straight to a front-end engineer.

Design Max is the crucial switch. Moritz calls out that it “uses Gemini 3,” which implies Compos.ai routes that mode to Google’s most capable Gemini 3 model instead of a cheaper tier. Higher-end models typically deliver better spatial reasoning, visual consistency, and copywriting, which translates into cleaner layouts, more coherent navigation flows, and on-brand microcopy.

Under the hood, Design Max likely trades cost and latency for fidelity. A powerful model can infer design patterns—tab bars, filters, cart summaries—from a vague prompt like “modern food delivery app for busy parents.” It can decide that you probably need onboarding, a home feed, restaurant detail pages, a checkout flow, and an order tracker, then generate all of them in one pass.
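
If that reading is right, Design Max is essentially model routing. Here is a speculative sketch of the idea; the model names and trade-off numbers are invented for illustration, not Compos.ai internals.

```typescript
// Speculative sketch of what a "Design Max" toggle might do internally:
// route the same prompt to a costlier, higher-fidelity model.
type ModelTier = { model: string; relativeCost: number; relativeLatency: number };

const STANDARD: ModelTier = { model: "fast-tier", relativeCost: 1, relativeLatency: 1 };
const DESIGN_MAX: ModelTier = { model: "gemini-3-class", relativeCost: 10, relativeLatency: 3 };

const pickTier = (designMax: boolean): ModelTier =>
  designMax ? DESIGN_MAX : STANDARD;
```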

Text-to-UI is the real paradigm shift here. Instead of dragging rectangles and tweaking hex codes, users describe intent in language: “dark theme, minimalist, focus on photos, add promo banners.” The AI translates that description into layout, hierarchy, color, and typography decisions that once required a designer’s eye and a design system’s constraints.
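
To make that translation concrete, here is an invented schema for what a text-to-UI response might look like as data. This is one plausible encoding of layout, hierarchy, color, and typography decisions, not Compos.ai's actual format.

```typescript
// Invented schema for a text-to-UI response -- illustration only.
interface ScreenSpec {
  name: string;                 // e.g. "Checkout"
  navigation: "tab-bar" | "stack";
  components: string[];         // ordered, top-to-bottom hierarchy
}

interface DesignResponse {
  theme: { mode: "dark" | "light"; accent: string; fontScale: number };
  screens: ScreenSpec[];        // inferred flow, generated in one pass
}

const example: DesignResponse = {
  theme: { mode: "dark", accent: "#E23744", fontScale: 1.0 },
  screens: [
    { name: "Onboarding", navigation: "stack", components: ["hero-photo", "value-prop", "cta"] },
    { name: "Home", navigation: "tab-bar", components: ["search", "promo-banner", "restaurant-list"] },
    { name: "Checkout", navigation: "stack", components: ["cart-summary", "address", "pay-button"] },
  ],
};
```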

That shift radically broadens who can participate in product design. A solo founder, restaurant owner, or student can sketch an entire app concept before lunch, then iterate by editing sentences instead of wireframes. Democratization here is not about replacing designers; it is about pulling more people into the earliest, messiest stage of ideation where speed and volume matter more than pixel perfection.

Once text becomes the primary design surface, tools like Compos.ai stop being novelties and start looking like new defaults.

The Engine Behind the Speed: Gemini 3

Google’s Gemini family sits at the center of this 36‑second trick. Gemini isn’t just a text model; it is multimodal from the ground up, trained to understand and generate text, images, and even higher‑level concepts about layout, flow, and interaction. That matters because app design is less about pretty screens and more about how those screens relate to each other.

Gemini 3, the version Moritz’s Compos.ai leans on, likely pushes harder into visual reasoning. Instead of just labeling an image as “home screen,” it can infer hierarchy: which element is a primary call‑to‑action, which components repeat across screens, how navigation should persist, and where a user’s eye will land first. That makes “Design Max” sound less like a style toggle and more like a UX brain.

Earlier generative models, like the first wave of Stable Diffusion and DALL·E, could spit out a single Dribbble‑ready mockup. They struggled with:
- Consistent navigation across 5–10 screens
- Logical state changes (logged‑in vs. logged‑out)
- Edge cases like empty states, errors, and loading flows

You got a poster, not a product.

Gemini‑class models aim to generate multi‑screen experiences that actually hang together. Ask for a “food delivery app” and you don’t just get a hero shot; you get a restaurant list, menu detail, cart, checkout, and order tracking that reference each other’s components and data. That coherence is the difference between concept art and something a developer can wire up in a day.

None of this happens without brutal amounts of training data. To understand what makes a “good” app, Gemini needs exposure to thousands or millions of mobile flows, design systems like Material Design and Human Interface Guidelines, and real‑world UI patterns from Figma, Sketch, and production apps. It has to internalize that a bottom nav bar should not randomly relocate, that contrast ratios affect readability, and that spacing and typography signal hierarchy.
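
Some of those internalized rules are fully mechanical. Contrast is a hard formula: the WCAG 2.x ratio below is exactly the kind of constraint a generated palette has to satisfy, and it doubles as a quick way to audit AI output.

```typescript
// WCAG 2.x contrast ratio between two sRGB colors:
// (L1 + 0.05) / (L2 + 0.05), where L is relative luminance.
function channelToLinear(c8: number): number {
  const c = c8 / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance([r, g, b]: [number, number, number]): number {
  return (
    0.2126 * channelToLinear(r) +
    0.7152 * channelToLinear(g) +
    0.0722 * channelToLinear(b)
  );
}

function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number],
): number {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// WCAG AA requires at least 4.5:1 for normal body text.
console.log(contrastRatio([255, 255, 255], [18, 18, 18]).toFixed(1)); // ~18.7
```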

If you want a sense of the broader ecosystem racing toward this, Top 6 AI Mobile App Design Tools & Trends for 2025 shows how quickly these Gemini‑style capabilities are becoming table stakes.

From Vague Idea to Viable Prototype

Product teams usually burn days in workshops just to get from a vague idea to a rough sketch. With Compos.ai wired into Gemini 3, that fuzzy “we should build a food delivery app” turns into a tappable prototype in under a minute, ready to drop into Figma or a usability test.

Brainstorming shifts from whiteboards and sticky notes to rapid-fire visualization. You can type “habit tracker for ADHD users, calming, low cognitive load, dark mode first” and watch entire flows appear: onboarding, streak views, notification settings, paywalls. Each iteration becomes a prompt tweak, not a fresh design sprint.

Wireframing also stops being a specialized bottleneck. Non-designers can generate multiple layout directions and interaction patterns without touching a grid or a component library. Designers then move up a level, curating, correcting, and enforcing brand systems instead of drawing every button from scratch.

For A/B testing, this speed is brutal in the best way. Instead of 1–2 variants per week, a team can spin up 10–20 screen sets in a day, run quick user tests with 5–10 people per variant, and kill weak concepts before they reach engineering. That compresses the classic “double diamond” into something closer to a rapid feedback loop.

Prompts become the new design spec, and quality matters. Effective prompts tend to be:
- Goal-oriented (“increase checkout conversion by 10% on mobile”)
- User-specific (“for first-time investors, age 25–35, anxious about risk”)
- Constraint-heavy (“iOS only, bottom nav, no carousels, WCAG AA contrast”)

Weak prompts sound like: “cool app for everyone,” “make it modern and clean,” or “social network for pets.” These force Gemini 3 to guess business goals, target users, and platform rules, which usually yields generic, Dribbble-core layouts that collapse under real-world requirements.

A strong prompt might read: “Subscription meditation app for burned-out software engineers, Android, focus on 5-minute sessions, no sign-up wall, prioritize session discovery and streaks, use muted blues, material design.” That gives the AI a product brief, not a vibe check, and the resulting prototype is something a PM could actually ship.
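
One way to operationalize that: treat the brief as structured data and render the prompt from it. A toy sketch follows; the field names are our own invention, not a Compos.ai API.

```typescript
// Hypothetical prompt-brief template -- one way to keep prompts
// goal-oriented, user-specific, and constraint-heavy.
interface DesignBrief {
  product: string;       // what you are building
  platform: string;      // e.g. "Android", "iOS"
  audience: string;      // who it is for
  goals: string[];       // behaviors to prioritize
  constraints: string[]; // hard rules the layout must respect
}

function toPrompt(b: DesignBrief): string {
  return [
    `${b.product} for ${b.audience}, ${b.platform}.`,
    `Prioritize: ${b.goals.join(", ")}.`,
    `Constraints: ${b.constraints.join(", ")}.`,
  ].join(" ");
}

console.log(toPrompt({
  product: "Subscription meditation app",
  platform: "Android",
  audience: "burned-out software engineers",
  goals: ["5-minute sessions", "session discovery", "streaks"],
  constraints: ["no sign-up wall", "muted blues", "Material Design"],
}));
```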

Beyond Speed: Is the Output Actually Good?

Speed is easy to measure. Quality is not. When Compos.ai and Gemini 3 spit out a full set of app screens in 36 seconds, the obvious question is whether those pixels can stand next to something a human product team would ship.

On the upside, AI-generated UIs crush anything human in raw throughput. A single prompt can produce 10–20 coherent screens, with consistent typography, color tokens, and spacing rules that would take a designer hours to wire up in Figma. For basic flows—login, onboarding, item lists, checkout—the layouts often look indistinguishable from what a junior designer might draft on day one.

That speed also kills the classic “blank page” problem. Instead of staring at an empty canvas, product teams get a concrete starting point: nav patterns, card layouts, button hierarchies, and placeholder copy. For internal tools, MVPs, and CRUD-heavy apps, this design scaffolding is usually “good enough” to move straight into prototyping and usability testing.

AI also enforces ruthless consistency. Because models lean on pattern matching, they rarely forget to align components, maintain spacing scales, or reuse UI primitives. Style drift across screens—one of the most common sins in early-stage products—basically disappears when a single model generates everything from splash screen to settings.

The catch: pattern matching cuts both ways. These designs often feel generic, like a remix of the top 50 Dribbble shots from 2022. You see the same rounded cards, frosted-glass headers, pill buttons, and bottom nav bars, regardless of whether you are building a mental health app or an industrial IoT dashboard.

Where AI stumbles hardest is user empathy. World-class UX work starts with deep research: contextual inquiry, diary studies, segmentation, and behavioral data that expose subtle anxieties and motivations. A language model trained on public screens cannot intuit the fear of tapping “Submit,” the relief of a confirmation state, or the need to slow users down before an irreversible action.

That gap shows up in microcopy, edge cases, and emotional pacing. Human-led products deliberately modulate friction—adding extra steps around payments, privacy, or safety-critical actions. Current AI-generated flows tend to optimize for shortest path, not most humane path, and that is where seasoned designers still run laps around the bots.
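
For a concrete instance of that deliberate friction, consider the type-to-confirm pattern used around irreversible actions; a minimal sketch:

```typescript
// Deliberate friction, sketched: a destructive action that refuses the
// shortest path. The user must retype the resource name to proceed,
// the pattern GitHub uses before deleting a repository.
async function confirmDestructive(
  resourceName: string,
  promptUser: (message: string) => Promise<string>,
): Promise<boolean> {
  const typed = await promptUser(
    `This cannot be undone. Type "${resourceName}" to confirm deletion:`,
  );
  return typed.trim() === resourceName;
}
```

A model optimizing for task completion would never add that step on its own; a designer adds it on purpose.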

The New Job Title: AI Design Director

AI-generated wireframes landing in your Figma in 36 seconds trigger an obvious fear: if Gemini 3 and Compos.ai can blast out every screen of a food delivery app on command, what happens to UI/UX designers? The short answer is they stop being pixel labor and start being AI design directors.

Instead of manually nudging every button and card, designers now orchestrate systems. They choose which models to trust, how to chain them, and when to override them. The job shifts from drawing rectangles to directing behavior, tone, and standards across dozens of AI-driven flows.

Prompt engineering stops being a meme and becomes a core design craft. A strong AI design director knows how to encode brand, accessibility, motion language, and platform conventions into prompts like “iOS-first, WCAG AA, thumb-reachable nav, focus on reorder flow for power users.” That prompt becomes the new design spec.

New baseline skills emerge fast:
- Prompt engineering for design across Gemini, Midjourney, and proprietary tools
- AI tool curation and evaluation, from Compos.ai to Stitch (Design with AI)
- Systematic critique and refinement of AI output
- Deep research into edge cases, trust, and accessibility that models routinely miss

AI handles the what: onboarding screens, checkout flows, empty states, dark mode variants. It can generate 40 layout options in under a minute, each on-brand, each pixel-perfect enough for a usability test. Humans move upstream to own the why: which journeys matter, which trade-offs hurt users, which flows align with business risk.

Complex UX problems do not vanish. Consent design for health data, multimodal interfaces for neurodivergent users, cross-platform ecosystems spanning watch, car, and TV—these remain unsolved by pattern-matching models. AI can sketch options, but it cannot negotiate stakeholder politics or synthesize conflicting user needs.

Designers who cling to being the sole “makers” lose leverage. Designers who act like directors—writing prompts as briefs, building reusable prompt libraries, and stress-testing AI output against real users—gain it. The portfolio of 2026 will show less Dribbble polish and more evidence of how you steered an AI stack into a coherent, humane product.

A Turbo-Boost for the No-Code Tsunami

No-code already turned millions of people into accidental software makers; AI design tools like Compos.ai now threaten to remove the ugliest part of that revolution: cookie-cutter interfaces. Instead of scrolling through the same 40 templates in Webflow, Bubble, or Adalo, you type “food delivery app for college campuses” and get a tailored UI system in seconds.

No-code and low-code platforms solved data models, workflows, and deployment, but front-end aesthetics stayed stuck in drag-and-drop purgatory. Builders either lived with generic templates or paid a designer to clean things up later, creating a bottleneck that slowed down otherwise rapid development.

AI-generated design acts as the missing link, automating both the visual language and the structural layout that templates only approximate. Compos.ai does not just spit out a hero screen; it generates full screen flows, component hierarchies, and consistent design tokens that map cleanly to modern UI frameworks.

Moritz | AI Builder has built an entire channel on this premise: non-technical founders can ship real software by chaining specialized AI tools. His videos routinely show end-to-end builds—Chrome extensions, SaaS dashboards, mobile apps—assembled with AI copilots instead of IDEs and hand-written code.

A plausible 2025 workflow looks brutally simple. You ideate flows on paper, then open Compos.ai, type prompts like “subscription fitness tracker app,” and let Gemini 3 generate a multi-screen design system in under a minute.

Next, you export those screens as Figma files or production-ready components aligned with frameworks that no-code tools already understand. Many no-code platforms now accept imports via Figma plugins or React-like component schemas, so the jump from pixels to logic shrinks dramatically.
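
As an illustration of that “React-like component schema” idea, an exported screen might reduce to typed props a builder can bind to real data. The shape below is hypothetical, not a documented Compos.ai export format.

```tsx
// Hypothetical exported screen component -- illustrative only. The point:
// generated layouts become props that no-code logic can wire to a database.
import React from "react";

type Restaurant = { id: string; name: string; etaMinutes: number };

export function RestaurantList(props: {
  restaurants: Restaurant[];
  onSelect: (id: string) => void;
}) {
  return (
    <ul>
      {props.restaurants.map((r) => (
        <li key={r.id} onClick={() => props.onSelect(r.id)}>
          {r.name} · {r.etaMinutes} min
        </li>
      ))}
    </ul>
  );
}
```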

Then you move into a builder such as Bubble, FlutterFlow, or Framer and wire up:
- Authentication and user accounts
- Database models and CRUD workflows
- Integrations with Stripe, Twilio, or third-party APIs

Instead of wrestling with layout, you spend your time on pricing, onboarding, and growth loops. No-code promised to democratize software; AI design plugged the last major gap between a napkin sketch and something users will not immediately uninstall.

Mapping the AI Design Ecosystem

AI design is already a crowded neighborhood, and Compos.ai is just one address on the block. Zoom out and you see a fast-forming stack of tools that all promise roughly the same thing—fewer clicks, more screens—but attack the problem from different angles.

Google quietly ships its own AI-native design tooling with Stitch, a Google Labs experiment that auto-generates production-ready UI for Android and web from high-level specs. Paired with Gemini, Stitch aims less at Dribbble aesthetics and more at shipping code that aligns with Material Design and accessibility rules by default.

On the other end of the spectrum, Uizard targets non-designers. Type “fitness coaching dashboard” and it outputs multi-screen wireframes, themes, and component variants, plus “autodescribe” features that turn screenshots or sketches into editable layouts. Uizard reported more than 1 million users by 2023, a signal that AI-first design tools already resonate far beyond Figma power users.

Figma, predictably, is not sitting this out. Its AI features—announced in 2024—promise instant wireframe generation from prompts, automatic layer renaming, content rewriting, and style clean-up inside existing files. The pitch is not a new tool, but an AI co-pilot embedded where 4+ million designers already work every day.

Then there are pure-play AI design engines like Galileo AI, which focuses on high-fidelity marketing and product UIs from text prompts. Galileo generates polished screens with copy, imagery suggestions, and component structure, then exports to Figma for serious editing, positioning itself as the “top of funnel” for visual exploration.

Different tools optimize different layers of the stack:
- Wireframes and flows: Uizard, Figma AI wireframes
- High-fidelity mockups: Galileo AI, Compos.ai
- Design-to-code and systems: Stitch, Anima, Locofy

The direction of travel is clear: AI will not live in a separate tab for long. Every major design surface—Figma, Adobe XD’s successors, Webflow, Framer, even Notion and Miro—is racing to make generation, refactoring, and handoff natively AI-driven, so “draw this screen for me” becomes as standard as “Cmd+Z.”

The Unfair Advantage for Startups

Startups just got a new kind of leverage: time compression. When tools like Compos.ai can spit out a multi-screen app design in under a minute, the old two-month “UX sprint” morphs into a 20-minute prompt session. That shift rewrites early-stage strategy more than any pitch-deck tweak ever could.

For founders, the biggest impact hits the MVP and fundraising loop. A solo entrepreneur can walk into a weekend with only a problem statement and walk out with:
- A clickable prototype
- A polished mobile UI
- Screen flows for onboarding, payments, and settings

That used to require hiring a designer, waiting weeks, and burning $5,000–$20,000 in agency or contractor fees. Now the marginal cost of another version approaches zero, so the rational move is to ship five variations and test them all.

Investor decks change too. Instead of abstract wireframes and feature bullet points, founders can drop in near-production-quality screens generated by Gemini 3-powered tools. A pre-seed deck can show three competing product directions, localized variants, and dark mode — all created in an afternoon. The story stops being “we will build this” and becomes “we already explored these six options.”

Solo founders gain something closer to a design department in their browser. They can iterate through onboarding flows, pricing pages, and referral screens at a pace that historically required a product manager, a UX designer, and a visual designer. That means more experiments, faster abandonment of bad ideas, and less emotional attachment to any single design.

Competitive pressure spikes accordingly. If your rival can visualize a new feature in 10 minutes and ship a prototype to users the same day, a three-month design cycle is not just slow, it is negligent. Markets where speed matters — consumer social, fintech, creator tools — will see “idea-to-interface” time become a core KPI.

Founders now have a growing menu of AI-native toolchains. Compos.ai sits alongside platforms in guides like 12 Best AI App Builder Tools for 2025, turning design and build into a blended, continuous process. The startups that survive will treat this as infrastructure, not a party trick.

Your First Move in AI-Powered Design

Start small, but start now. AI design only shifts from hype to muscle memory when you put a real idea through it and feel where it shines and where it breaks.

Head to Compos.ai and create a free account. In the prompt box, type a clear request: “Design a mobile app for tracking my personal reading habits. Include onboarding, a home dashboard, book detail pages, and monthly stats.”

Keep your first experiment scoped and specific. A reading-tracker app hits all the basics—navigation, data display, empty states, and simple interactions—without drowning you in edge cases.

Ask the AI for multiple variations. Generate a first pass, then refine with prompts like “make this more minimalist,” “optimize for one-handed use,” or “prioritize typography over imagery.”

Treat the output like a junior designer’s first draft, not a finished product. Export the screens, then run a quick critique: Are tap targets large enough? Is hierarchy clear? Do repeated patterns feel consistent?

Layer in another tool to see how this stacks with your existing workflow. Import the designs into Figma or Penpot, and manually tweak spacing, color, and motion to understand where AI accelerates you and where you still add the most value.

Document what works. Keep a short log of:
- Prompt patterns that produced usable layouts
- Failure modes (confusing flows, odd components)
- Time saved versus your usual process

Share the experiment with one friend or teammate. Ask them to complete a task—“log a finished book and see your reading streak”—and watch where they hesitate or get lost.

Expect this to feel normal very soon. Over the next 12–24 months, AI copilots will sit inside every major design and product tool, from Figma to GitHub to Webflow, quietly generating flows, components, and copy by default.

Your advantage comes from building that collaboration muscle early. The sooner you learn how to speak “prompt” fluently and critique AI output ruthlessly, the more leverage you bring to every digital product you touch.

Frequently Asked Questions

What is Compos.ai?

Compos.ai is an AI-powered platform that uses advanced models like Google's Gemini to automatically generate complete mobile app design screens from a simple text prompt.

How does this AI app design process work?

Users input a natural language description, such as 'food delivery app'. The AI interprets the request and generates a full set of UI/UX screens, including layouts, components, and color schemes.

Is AI replacing human app designers?

Currently, AI tools like this augment the design process by automating initial mockups and wireframes. This allows designers to focus on higher-level strategy, user experience refinement, and creative problem-solving.

What AI model does Compos.ai use for its best designs?

According to the video, the 'Design Max' feature in Compos.ai is powered by Gemini 3, leveraging Google's advanced multimodal AI for high-quality visual generation.

Tags

#composai #gemini #generative-ai #app-design #no-code
