The Robot That Ditched Its Legs to Win the AI Race

A secretive startup just unveiled a home robot that completely ignores the bipedal humanoid trend. Here's why its clever bet on wheels and a $200 data glove is a genius move that could change everything.


The Humanoid Hype Hits a Practical Wall

Humanoid robots have spent the past two years learning to strut. Tesla’s Optimus and Figure’s Figure 01 now walk, balance, and climb stairs with a confidence that would have looked like science fiction in 2018. Slick demo reels show slow-motion strides, one-legged poses, and careful ascents of industrial staircases, all backed by swelling synth soundtracks.

Those videos rack up millions of views because bipedal motion still feels like the holy grail of robotics. A machine that moves like a human must be just a software update away from working like one, the logic goes. But outside lab floors and staged factory sets, that promise keeps colliding with the mess of real homes.

Domestic environments punish complexity. Kitchens pack tight corners, reflective surfaces, and cluttered countertops into a few square meters. Cables snake under tables, kids leave toys on the floor, pets dart through pathways that a 170-pound humanoid cannot safely share.

Despite the hype, no bipedal system reliably runs long-horizon chores in unstructured apartments: clearing a table, sorting dishes, loading a dishwasher, and wiping surfaces without constant supervision. Even recent humanoid clips that show light pick-and-place actions usually involve constrained setups, carefully arranged objects, and heavily curated takes. Walking looks solved; useful autonomy does not.

The core problem is not whether a robot can stand on one leg. It is whether a general-purpose platform can manipulate fragile glassware, soft textiles, and irregular objects for 20–30 minutes straight without dropping, jamming, or colliding with people. That means robust perception, fine-grained force control, and reliable planning under uncertainty, all running on hardware that will share space with kids and pets.

Safety raises the bar even higher. A misstep on a stair in a lab is a blooper; a misstep near a toddler is a lawsuit. Until bipedal systems can prove they fail gracefully—no sudden swings, no hard falls, no runaway joints—deploying them in everyday homes remains a hard sell.

Against that backdrop, a new wave of companies is asking a heretical question: do useful home robots actually need legs? Wheelbases, vertical lifts, and manipulator-first designs are quietly challenging the assumption that walking must come before working.

Meet Memo: The Robot That Can't Climb Stairs


Sunday Robotics wants a home robot that does dishes before it does parkour. Founded in 2024 by Tony Zhao, the US startup is ignoring the humanoid arms race over backflips and stair climbs and going straight for repetitive domestic work: cleaning tables, loading dishwashers, sorting laundry, and wrangling socks.

Its robot, Memo, looks less like Tesla Optimus and more like a kitchen appliance that grew a spine. A sturdy wheeled base supports a telescoping central column—the “Z‑axis” lift—that lets Memo reach from the floor up to high cabinets without taking a single step.

That vertical spine is the core of the design. Memo can lower its torso to grab a sock off the floor, then extend to place fragile glassware on the top rack of a dishwasher or into an overhead cupboard, all while its base stays planted.

Instead of fragile, tendon‑driven five‑fingered hands, Memo uses robust, task‑oriented grippers. They clamp plates, mugs, and utensils with consistent force, but also modulate grip to handle thin wine glasses and other breakable items without shattering them.
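Sunday has not published its gripper control loop, but force-modulated grasping of fragile objects is typically done with a simple closed loop: ramp grip force while a tactile sensor reports slip, and cap it below a breakage threshold. A minimal sketch of that idea, with all function names and thresholds hypothetical:

```python
def grip_force_step(current_force, slipping, max_safe_force, step=0.2):
    """One control tick: increase grip force while the object slips,
    but never exceed the material's safe limit (e.g. thin glass)."""
    if slipping and current_force < max_safe_force:
        return min(current_force + step, max_safe_force)
    return current_force

def grasp(max_safe_force, slip_signal):
    """Ramp force over successive ticks until slip stops.
    slip_signal(force) -> bool stands in for a tactile slip detector."""
    force = 0.0
    for _ in range(100):  # bounded control loop
        if not slip_signal(force):
            break
        force = grip_force_step(force, True, max_safe_force)
    return force
```

The point of the cap is the asymmetry of the failure modes: gripping a mug too loosely drops it; gripping a wine glass too firmly shatters it.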

Those hardware choices come with a very explicit trade. Memo cannot climb stairs, and Sunday Robotics does not try to hide that; the company treats it as an intentional constraint to simplify everything else: balance, power, maintenance, and cost.

By skipping legs, Memo gains a low center of gravity and rock‑solid stability on flat floors, the surfaces that dominate kitchens and living rooms. That stability, plus efficient wheels, helps deliver over 4 hours of runtime per charge instead of burning batteries on dynamic balancing.

Mechanical simplicity also means fewer actuators, fewer failure points, and easier servicing. Sunday can focus engineering effort on manipulation, perception, and long‑horizon planning instead of debugging ankle joints.

The payoff shows up in early capability demos. Memo autonomously clears messy dinner tables, sorts cutlery and dishes, and packs them into a dishwasher with consistent placement.

It also tackles softer, less rigid tasks. Memo picks up crumpled T‑shirts and socks from the floor, smooths and folds them on a table, and stacks them into neat piles—slowly for now, but without frame‑by‑frame teleoperation.

Solving Robotics' Billion-Dollar Data Deadlock

Robotics hits a wall long before it hits the stairs: data. Training a robot to wipe a table, load a dishwasher, or fold socks demands millions of examples of precise motion and force, not just pretty footage. That creates a brutal “data deadlock” where you need capable robots to collect good data, but you need that data first to make robots capable.

Traditional teleoperation tries to brute-force through. Companies strap expert operators into $20,000-plus VR and haptic rigs, then stream their motions onto a robot arm. Those sessions burn time, money, and human attention, producing data at a trickle that cannot match the scale large models need.

Simulation promised a shortcut. Teams spin up endless virtual kitchens and train policies in physics engines, hoping to transfer them to reality. But the sim-to-real gap—tiny mismatches in friction, sensor noise, or object geometry—still causes brittle behavior when a robot meets a real greasy plate or a warped countertop.

Human video looks like an answer because YouTube, TikTok, and internal datasets offer billions of frames of people doing chores. Yet a camera feed rarely captures the critical quantities: contact forces, fingertip slip, joint torques, micro-adjustments when a glass starts to tilt. For delicate manipulation, missing that data is like training a self-driving car on dashcam footage without steering angle or throttle.

Sunday’s bet, and the industry’s emerging consensus, is that whoever cracks cheap, high-fidelity data collection wins the race. A scalable pipeline that records full 6-DoF motion, force profiles, and object state in thousands of real homes would feed a robot foundation model the way ImageNet fed vision and Common Crawl fed modern large language models.

Analysts already frame this as the core moat in humanoid and home robotics; reports like Humanoid robots 2025: The race to useful intelligence argue that data, not hardware, will separate the winners from the demo reels.

A $200 Glove to Dethrone a $20,000 Rig

Sunday’s answer to the data deadlock is not a $200,000 humanoid lab, but a $200 Skill Capture Glove. Instead of parking a six‑figure robot in a research bay and driving it like a puppet, Sunday ships these gloves to ordinary people and records how they actually clean, sort, and cook in the wild. Each glove logs fine‑grained motion and force data directly from human hands, frame by frame, as someone wipes a counter or loads a dishwasher.

Traditional teleoperation rigs look like something out of a motion‑capture studio: multi‑camera setups, VR headsets, haptic controllers, and a dedicated robot on the other end. A single high‑quality teleop station can run $20,000 or more once you add hardware, space, and an operator. Sunday’s glove costs roughly $200 to manufacture and ship, so the same capital that buys one rig can seed a hundred homes.

Capital efficiency translates straight into scale. Instead of one expert operator driving a robot in a lab, hundreds of people can quietly generate data in parallel while living their lives. Sunday calls these people “Memory Developers”—not roboticists, just regular users wearing a glove while they do chores.

A Memory Developer might run a dozen “episodes” in a single evening: clearing a messy dinner table, scraping plates, sorting utensils, and racking fragile glassware. Each episode becomes a labeled sequence of hand poses, contact forces, and object interactions, matched to the robot’s own perception of the scene. Over time, these sequences form a sprawling library of “how humans actually do chores” instead of idealized lab demonstrations.
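Sunday has not published its episode format, but a recording like the one described above reduces to a timestamped stream of hand poses and fingertip forces, grouped under a task label. A rough sketch of such a schema, with every field name illustrative rather than Sunday's actual format:

```python
from dataclasses import dataclass, field

@dataclass
class GloveFrame:
    """One glove sample: hand pose plus fingertip forces.
    Field names are hypothetical, not Sunday's real schema."""
    t: float                # seconds since episode start
    hand_pose: list         # e.g. wrist position + finger joint angles
    fingertip_forces: list  # per-finger normal force, in newtons

@dataclass
class ChoreEpisode:
    """A labeled chore recording: motion and force data paired
    with a task label and the home it came from."""
    task: str               # e.g. "clear_dinner_table"
    home_id: str
    frames: list = field(default_factory=list)

    def duration(self):
        return self.frames[-1].t - self.frames[0].t if self.frames else 0.0
```

Ten million episodes of this shape, each a few minutes long, is what starts to resemble internet-scale data rather than lab data.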

Sunday has already shipped more than 2,000 Skill Capture Gloves to early Memory Developers. Those gloves are active in roughly 500 homes, from cramped apartments to sprawling suburban kitchens, giving the dataset a diversity no single test house can match. Every home adds new layouts, lighting conditions, and weird edge cases—junk drawers, kids’ cups, chipped plates—that a robot must handle gracefully.

All that activity rolls up into scale that starts to look more like internet data than robotics data. Sunday says it has logged around 10 million chore episodes so far, each one a structured recording of a real task in a real home. That volume underpins the company’s claim that its ACT-1 model trains on “zero robot teleop data”—the robot learns from human hands first, then transfers those skills onto Memo’s manipulators.

From Human Motion to Robot Brain: The ACT-1 Model


Glove data does not just teach Memo how to move; it becomes the backbone of Sunday’s ACT-1 foundation model. Every time a “memory developer” wipes a counter or loads a dishwasher while wearing the $200 Skill Capture Glove, the system records precise joint trajectories and force profiles at high frequency, paired with RGB-D video of the scene.

Sunday routes this multimodal stream into ACT-1 as if it were training a large language model, but for physical actions instead of words. The model learns a vocabulary of motions—grasp, scrub, sort, stack—and how humans sequence them over minutes, not seconds, across roughly 500 real homes and more than 2,000 shipped gloves.

Crucially, ACT-1 trains on zero robot teleop data. No engineer puppeteers Memo with a joystick, no expensive motion-capture rig wraps the robot; the entire pretraining phase happens in human space, then a learned mapping translates human hand and arm motions to Memo’s kinematics.

That translation layer handles the ugly details: different limb lengths, joint limits, and the fact that Memo rolls instead of walks. ACT-1 outputs high-level action plans and continuous control signals, while a lower-level controller enforces safety, contact forces, and collision avoidance on the actual hardware.
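Sunday describes this mapping as learned, and has not published it. At its simplest, though, retargeting human motion to a robot with different geometry comes down to rescaling each joint channel and respecting the robot's joint limits. A deliberately naive sketch of that idea (a learned model would replace the per-joint scale factors):

```python
def retarget(human_angles, joint_limits, scale):
    """Map human joint angles onto robot joints: rescale each channel
    for the robot's geometry, then clamp to its joint limits.
    A hand-rolled stand-in for the learned mapping described above."""
    robot_angles = []
    for q, (lo, hi), s in zip(human_angles, joint_limits, scale):
        q_robot = q * s                      # account for different limb geometry
        robot_angles.append(max(lo, min(hi, q_robot)))  # respect joint limits
    return robot_angles
```

The clamping step is where "the ugly details" live: a human wrist can reach poses Memo's manipulator simply cannot, and the translation layer has to produce the nearest feasible motion instead.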

Long-horizon autonomy sits at the center of Sunday’s bet. ACT-1 does not just learn how to pick up a plate; it learns the full routine of clearing a messy dinner table, sorting dishes, opening the dishwasher, loading racks, and closing the door without a human stepping in.

Those routines involve dozens of substeps: navigating around chairs, avoiding stacked glassware, choosing where to put each item. ACT-1 encodes these as temporal plans, allowing Memo to recover when something shifts—a plate in a new spot, a chair slightly out of place—without restarting the entire task.
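The recovery behavior described above can be pictured as a step executor that retries a failed substep in place instead of restarting the whole routine. A toy sketch, with the step names and retry policy purely illustrative:

```python
def run_routine(steps, execute, max_retries=2):
    """Execute an ordered chore routine step by step. A failed step
    (a shifted plate, a moved chair) is retried in place rather than
    restarting the entire task from scratch."""
    log = []
    for step in steps:
        for attempt in range(max_retries + 1):
            if execute(step, attempt):        # True = substep succeeded
                log.append((step, attempt))
                break
        else:
            raise RuntimeError(f"gave up on step: {step}")
    return log
```

In the real system the "steps" are not a fixed script but a temporal plan the model generates and revises; the sketch only shows why local recovery beats full restarts for 20-minute tasks.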

Zero-shot generalization makes this viable outside a lab. Because ACT-1 sees varied layouts, lighting conditions, and clutter patterns during training, Sunday claims Memo can roll into a new kitchen and perform tasks like:

  • Wiping counters
  • Clearing a table
  • Loading a dishwasher

all without task-specific fine-tuning, just a short calibration and a natural-language command.

Is the Bipedal Dream an Expensive Distraction?

Humanoid robotics right now looks like a fork in the road. Sunday Robotics is unapologetically taking the function-over-form path: wheels, a telescoping spine, and arms that actually clear tables, sort clutter, and load dishwashers. Tesla Optimus, Figure 01, and Unitree’s G1 are doubling down on legs, betting that a human-like silhouette and stair-climbing will matter more than raw task throughput.

In a typical apartment or single-level home, wheels quietly win. A wheeled base gives Memo long battery life, high stability, and fewer failure modes than a 30+ actuator biped constantly fighting gravity. You trade away stairs, but you gain hours of runtime and a much simpler maintenance story.

Domestic usefulness today lives at the end of the arm, not the end of the leg. Clearing a messy dinner table, loading a dishwasher, or folding a pile of socks are long-horizon manipulation problems: perception, planning, and dexterous control. Whether the robot walked or rolled to the kitchen matters far less than whether it can recognize a fragile glass, grasp it safely, and not shatter it.

Most early customers for a $20,000–$80,000 home robot will not ask, “Can it climb?” so much as, “Can it handle my kitchen without supervision?” Sunday is betting that once a robot can robustly manipulate in hundreds of real homes—500 homes’ worth of glove data and counting—locomotion becomes a solvable engineering upgrade, not the core differentiator. You can redesign a base; you cannot retrofit a mature foundation model overnight.

Critics argue that buyers at this price point are likely to live in multi-story houses with stairs, precisely the demographic Memo's wheeled base cannot fully serve. Sunday’s counter-bet is that those same buyers will tolerate a first-generation robot that owns the main floor and ignores the upstairs, especially if it reliably does dishes and laundry. Stairs become either a future hardware revision or a niche add-on, not a blocker to initial market capture.

For anyone tracking how this strategic split plays out across hardware, data, and deployment timelines, Humanoid Robots: From Demos to Deployment offers a useful backdrop to Sunday’s wheels-first gamble.

China's Parallel Universe of Practical Robots

China runs a kind of parallel universe for practical robots, and it looks very different from glossy Bay Area humanoid sizzle reels. Instead of heroic stair climbs, you get fleets of boxy machines quietly scrubbing streets at 3 a.m. and humanoids that skip parkour to master laundry. The focus shifts from “Can it walk?” to “Can it clean, sort, and reset a room faster than a human?”

On the streets of Shenzhen, that pragmatism turns into competition. A recent sanitation robot contest lined up dozens of autonomous cleaners from different vendors and turned them loose on actual city roads, not lab mockups. Organizers measured coverage, obstacle avoidance, and uptime across long routes with real traffic, pedestrians, and grime.

These sanitation bots lean hard into specialization. Most ride on low-slung wheeled platforms with big water tanks, rotating brushes, and lidar domes, tuned for curbs, bike lanes, and crosswalks. Instead of chasing acrobatics, teams optimize routing algorithms, battery swaps, and remote fleet management dashboards that can dispatch or reroute dozens of units in seconds.
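The fleet-dispatch logic those dashboards implement is, at its core, an assignment rule: pick the nearest available robot with enough battery for the job. A minimal sketch under that assumption (robot records, field names, and the battery threshold are all illustrative):

```python
def dispatch(robots, job_location, min_battery=0.2):
    """Assign a cleaning job to the nearest idle robot with enough
    battery -- the kind of rule a fleet dashboard might apply.
    Returns None if no robot qualifies."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    candidates = [r for r in robots if r["idle"] and r["battery"] > min_battery]
    if not candidates:
        return None
    return min(candidates, key=lambda r: dist(r["pos"], job_location))
```

Production systems layer route optimization, battery-swap scheduling, and teleop failover on top, but the competitive pressure in Shenzhen is exactly about sharpening rules like this one.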

Shenzhen’s streets effectively become a living benchmark suite. City officials do not care if a robot looks human; they care about square meters cleaned per hour, incident reports, and maintenance costs. That pressure rewards systems that can run 8–12 hours a day, tolerate rain and dust, and gracefully fail over to teleoperation when a delivery truck blocks the lane.

Indoors, a different Chinese demo shows how far imitation learning can push humanoids once you solve the “what should it do?” problem. Robotics company Mindon took a stock Unitree G1—a relatively low-cost humanoid that sells in the $16,000–$20,000 range—and turned it into a shockingly capable housekeeper. No exotic hardware, no custom exoskeleton, just smarter training.

Mindon’s clips show the G1 tearing through household chores at almost unsettling speed. The robot wipes counters with smooth, continuous motions, sorts clutter into bins, opens cabinets, and manipulates bottles and boxes with two-handed coordination that looks closer to a sped-up human than a cautious industrial arm. The demo runs in real apartments, not staged fake kitchens.

Under the hood, Mindon leans on high-quality human demonstrations and advanced policy learning to compress complex multi-step routines into a single control stack. Instead of scripting “pick up plate → walk → open dishwasher,” the system learns trajectories and force profiles from humans doing the job end-to-end. The result: a humanoid that behaves less like a motion-capture puppet and more like an overcaffeinated intern.
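Mindon has not published its training stack, but learning trajectories from demonstrations end-to-end is the territory of behavior cloning: fit a policy to (state, action) pairs recorded from humans. A miniature version with a linear policy and plain gradient descent, purely to illustrate the principle (real systems learn deep policies over images and force profiles):

```python
def clone_policy(demos, lr=0.05, epochs=500):
    """Behavior cloning in miniature: fit action = w * state + b to
    demonstrated (state, action) pairs by minimizing squared error."""
    w, b = 0.0, 0.0
    n = len(demos)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for s, a in demos:
            err = (w * s + b) - a       # prediction error on one demo sample
            grad_w += 2 * err * s
            grad_b += 2 * err
        w -= lr * grad_w / n            # gradient step toward the demos
        b -= lr * grad_b / n
    return w, b
```

The gap between this toy and Mindon's demo is scale and representation, not concept: the same loss, applied to rich sensor streams, is what turns recorded human routines into a single control stack.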

Taken together, Shenzhen’s sanitation fleets and Mindon’s chore-optimized G1 underline a global shift. Real progress clusters around narrow but valuable domains—street cleaning, kitchen resets, laundry cycles—while the industry still argues about legs versus wheels. Whether the chassis looks like a person matters less than whether the foundation model and control stack on top can turn raw demonstrations into reliable, fast, repeatable work.

The Open-Source Challenger You Can Build Yourself


Sourcey arrives as the anti-Memo: an open-source home robot that trades polished hardware and a closed ecosystem for hackability and transparency. Where Sunday’s Memo hides its brain and firmware behind NDAs and invite-only betas, Sourcey ships with a GitHub repo, documentation, and an expectation that you will pop the hood.

Built as a “personal home robot,” Sourcey focuses on the same category of chores—cleaning, organizing, simple household routines—but exposes every layer of the stack. Users can train it by demonstration: show Sourcey how you want towels stacked or toys binned, and its AI models refine behavior over multiple sessions instead of relying on a giant centralized dataset.

Under the shell, Sourcey leans on the Larot framework, an open-source robotics library that handles motion, perception, and control. Full access to source code, APIs, and configuration files turns the robot into a living lab for:

  • Robotics programming
  • Machine learning
  • Real-world human–robot interaction

Price might be Sourcey’s most radical feature. Starting at around $1,500, it undercuts humanoids like Unitree’s G1 or 1X’s Neo, which sit closer to $16,000–$20,000 or subscription-style pricing around $500 per month. That cost shift moves home robotics from research labs and well-funded startups to makerspaces, classrooms, and serious hobbyists.

You feel the trade-off immediately. Sourcey does not chase the sleek, appliance-like finish that Memo or Tesla Optimus aim for; it behaves more like a dev kit on wheels. But for educators and indie developers, that roughness is a feature, not a bug: every sensor, behavior, and failure case becomes teachable.

Framed broadly, Sourcey vs Memo looks like the Linux vs Windows split for domestic robots. Memo bets on a vertically integrated, tightly controlled experience; Sourcey bets that a messy, community-driven platform will move faster, break more things, and ultimately teach more people how home robots actually work.

Placing Bets: The Race to a Robot in Every Home

Sunday, Figure, Tesla, and Sourcey all talk about “general purpose” robots, but their roadmaps could not look more different. Sunday wants Memo in real homes by 2026 through an invite-only beta of roughly 50 households, after already shipping over 2,000 Skill Capture Gloves and gathering data from about 500 homes. Sourcey is shipping now as a $1,500 open-source platform for hobbyists, prioritizing experimentation over polished autonomy.

Industrial humanoid players follow a slower, factory-first arc. Figure has inked deals with BMW and others to pilot Figure 01 on tightly controlled manufacturing lines, a path similar to Tesla’s Optimus ambitions. Products like 1X Neo and Unitree G1 also lean into enterprise deployments and research labs before even talking about a consumer timeline, despite splashy coverage like Figure 03 – Best Inventions of 2025 (Time).

Sunday’s bet: skip the factory and go straight for the kitchen. A 2026 home beta would put Memo in front of paying or near-paying users years before most bipedal robots have UL certifications, service contracts, or consumer support playbooks. That home-first stance forces Sunday to solve messy realities now—crumbs under tables, cluttered counters, weird lighting, kids and pets—rather than the neatly fenced work cells of a car plant.

Who delivers tangible value first depends on which “job” you care about. A factory robot that can reliably move parts 24/7 already has a clear ROI and a buyer with a budget line. But a home robot that can actually clear a dinner table, load a dishwasher, and fold laundry—even slowly—solves a daily pain point for millions of people, not just a few OEMs.

Hardware alone will not decide that race. Sunday’s Skill Capture Glove attacks the data deadlock, building a proprietary pipeline of real-home manipulation trajectories that feed its ACT-1 model. Figure, Tesla, and others lean on teleoperation, synthetic data, and large video corpora, but still need scalable annotation, safety layers, and deployment tooling.

Whoever nails the end-to-end loop—data collection, model training, on-robot intelligence, plus a go-to-market that actually gets machines into homes or factories—wins. Legs, wheels, or tracks are just the chassis for that software and data engine.

Your Future Chore Assistant Isn't What You Imagined

Your first real robot roommate probably rolls. Not because engineers gave up on legs, but because wheels get you to the dishwasher faster than a bipedal strut. In a race judged by chores, a stable base and long battery life beat acrobatics every time.

Humanoid robots like Tesla Optimus, Figure 01, and Unitree G1 still chase biomimicry: knees, ankles, and a carefully tuned human-like gait. Sunday Robotics’ Memo ignores all of that and bolts a telescoping torso onto a wheeled platform that glides between countertops and cabinets. It trades stairs for four-plus hours of runtime and fewer ways to fall on your glassware.

Success metrics are quietly changing. Instead of flexing stair climbs or parkour, the new bragging rights look like:

  • Messy dinner table cleared end-to-end
  • Plates, bowls, and cutlery sorted and loaded into a dishwasher
  • Fragile glasses handled without a single crack
  • Piles of socks folded into neat stacks

Memo’s entire stack orbits around those outcomes. Sunday ships a Skill Capture Glove that costs a few hundred dollars, not a $20,000 motion-capture rig, and has already sent more than 2,000 of them into the wild. Around 500 homes now stream real chore data back to Sunday, turning everyday routines into a training set for the company’s ACT-1 foundation model.

That data-first strategy lets ACT-1 attempt “zero-shot” generalization: see a new kitchen, infer where the dishes live, and still pull off a multi-step clean-up cycle. No one cares if the robot’s gait looks uncanny when it can wipe down a table, sort leftovers, and pack the dishwasher while you’re on a Zoom call. Form becomes a UI detail; function becomes the product.

Sunday plans an invite-only beta of about 50 households in 2026, a conservative number that still dwarfs how many full humanoids any lab has quietly installed in real homes. If that rollout works, wheels-and-torso might leapfrog the legged prototypes still perfecting their walk. When you finally buy a robot to share your kitchen, will you pick the one that looks most like you—or the one that simply gets more done?

Frequently Asked Questions

What is Sunday Robotics' Memo robot?

Memo is a general-purpose home robot designed for domestic chores. It uses a wheeled base and a vertical lift for stability and reach, rather than humanoid legs.

How is Memo different from robots like Tesla Optimus or Figure 01?

The key differences are its wheeled form factor for home environments and its training method. Memo is trained using data from a low-cost 'Skill Capture Glove' worn by humans, not from expensive teleoperation rigs.

What is the 'Skill Capture Glove'?

It's a low-cost device Sunday Robotics sends to users to record motion and force data while they perform chores. This data is then used to train the Memo robot's AI foundation model, ACT-1.

When will the Sunday Memo robot be available to buy?

The company has announced an invite-only beta program planned for 2026. A broader consumer release is expected to follow, but no specific date has been set.

Tags

#robotics #humanoid robots #AI #home automation #Sunday Robotics

