Figure AI's Robot Cover-Up

A fired safety chief claims Figure AI ignored warnings that its robots could fracture human skulls. This explosive lawsuit exposes the dark side of the race to deploy humanoids in our homes.


The Robot That Could Fracture a Skull

Rob Gruendel says a humanoid robot at Figure AI hit a refrigerator so hard it carved a three-quarter-inch gouge into stainless steel. In a lawsuit filed in California, the company’s former Head of Product Safety claims that same class of robot generates enough force to “fracture a human skull” — and that when he pushed CEO Brett Adcock to confront those risks, he was pushed out instead.

According to the complaint, Gruendel was recruited in late 2024 as principal robotic safety engineer, reporting directly to Adcock and tasked with building Figure AI’s global safety strategy from scratch. He alleges that once his warnings started to slow the company’s march toward commercialization, leadership treated him less like a safeguard and more like an obstacle.

Figure AI sits at the front of the humanoid robot race, showing off its F.02 and F.03 models in polished demo videos and raising money from giants like Nvidia, Microsoft, and Jeff Bezos–backed funds at valuations reportedly hitting tens of billions of dollars. The lawsuit lands as the company tries to convince investors and partners that its robots can leave the lab and enter factories — and eventually homes — without turning into workplace hazards.

Gruendel’s central allegation is stark: he says he was fired in September 2025 in retaliation for escalating “catastrophic” safety failures to Adcock and chief engineer Kyle Edelberg. Figure AI, he claims, cited a vague “change in business direction” and later “poor performance,” despite previously giving him strong reviews and a raise less than a year into the job.

Court filings describe a company that celebrated speed over guardrails. Gruendel says Figure AI had no formal incident reporting, no standardized risk assessments, and no dedicated employee health and safety staff, even as engineers worked inches from high-torque limbs driven by non-deterministic AI control systems.

That clash between move-fast ambition and methodical safety engineering now sits at the heart of one of the first major whistleblower fights in humanoid robotics. As rivals like Tesla and Agility Robotics race toward human-adjacent machines on factory floors, the case raises a blunt question: how safe is “safe enough” when a software bug can swing the arm of a 150-pound robot with skull-cracking force?

Silicon Valley's Most Dangerous Mantra


Silicon Valley has a mantra problem, and Figure AI engraved it into company DNA. Its self-declared core values, quoted in Rob Gruendel’s lawsuit, are to “move fast and be technically fearless” and “bring a commercially viable humanoid to market.” That is less a value system than a shipping order, and it frames safety as friction, not foundation.

Facebook’s old motto, “move fast and break things,” broke ad systems and privacy promises. Figure AI builds 1.6‑meter‑tall, 60‑kilogram humanoids that can allegedly generate forces “powerful enough to fracture a human skull.” When a malfunctioning F.02 punched a stainless-steel refrigerator and left a gash up to three‑quarters of an inch deep, it nearly hit an employee standing beside it.

Gruendel claims that behind the glossy demo videos sat a company with almost no written guardrails. During his first week, he allegedly found no formal safety procedures, no incident reporting system, and no structured risk assessment process for robots under development. Figure AI also had no dedicated employee health and safety staff, despite workers operating around high-powered machines.

He drafted a global safety roadmap, which CEO Brett Adcock and chief engineer Kyle Edelberg initially approved. But the lawsuit says both men “expressed a dislike of written product requirements,” a stance Gruendel flagged as “abnormal” in machinery safety. In an industry governed by ISO and OSHA paperwork, rejecting documentation looks less like efficiency and more like liability avoidance.

According to the complaint, that anti-paper-trail mindset bled into culture. Safety meetings with Adcock allegedly slid from weekly to quarterly. Gruendel’s detailed warnings about robot impact forces, AI unpredictability, and the need for employee training went unanswered or were delayed as the company chased investor milestones.

Growth pressure only heightened the tension. Figure AI’s valuation reportedly hit $39 billion after a September 2025 funding round backed by Nvidia, Microsoft, and Jeff Bezos-linked capital. Gruendel alleges executives “gutted” his safety roadmap after the money cleared, stripping out key commitments that investors had seen on paper.

“Move fast and be technically fearless” sounds heroic in a pitch deck. Around a robot that can cave in a skull, it starts to read like a confession.

The Refrigerator Incident: A Metal-Bending Warning

Momentum inside Figure AI’s lab allegedly snapped the day an F.02 unit lost control and swung at a kitchen appliance. According to Rob Gruendel’s lawsuit, the humanoid robot unexpectedly punched a stainless-steel refrigerator door, driving its metal fist up to three-quarters of an inch into the steel skin.

An employee reportedly stood just inches away, close enough that a slightly different trajectory could have connected with a torso, shoulder, or skull. The complaint frames this as a textbook “near miss” — not a hypothetical risk model, but a physical demonstration of what a misaligned F.02 strike can do to hardened steel.

Gruendel cites this incident as proof that Figure AI’s machines were, in his words, “powerful enough to fracture a human skull.” Impact tests already suggested forces more than double the threshold needed to break an adult skull; the refrigerator gash turned those lab numbers into a gouged, razor-edged scar across a consumer-grade appliance.

Details in the filing describe a chaotic scene: an F.02 running in a development mode, an employee within arm’s reach, and no robust interlocks or exclusion zones to keep people out of the robot’s strike radius. No one suffered physical injury, but the refrigerator door absorbed a blow that, redirected a few inches, could have slammed into bone.

For Gruendel, this was the tipping point. He had already flagged the absence of formal incident reporting, lack of dedicated employee safety staff, and resistance to written product requirements; the refrigerator punch convinced him the test floor exposed workers to unacceptable risk.

After the incident, he pushed harder for structured safety training, stricter operating envelopes, and mandatory logging of all malfunctions and near misses. The lawsuit claims those demands clashed with Figure AI’s “move fast and be technically fearless” culture and its rush to impress investors tracking demos on Figure AI’s official website.

Gruendel’s narrative casts the refrigerator not as collateral damage, but as a bent-metal warning that leadership allegedly chose to ignore.

A Trail of Ignored Red Flags

Red flags started piling up almost as soon as Rob Gruendel arrived. According to the lawsuit, Figure AI had no formal safety procedures, no incident reporting system, and no structured risk assessments for its F.02 humanoid robots, despite their ability to generate forces allegedly more than double what’s needed to fracture a human skull. Gruendel responded by drafting a comprehensive safety roadmap, building incident-tracking spreadsheets, and proposing training modules for anyone working near the robots.

Leadership initially signed off. CEO Brett Adcock and chief engineer Kyle Edelberg approved the roadmap on paper, but the complaint says they balked when it turned into actual process. They “expressed a dislike of written product requirements,” a stance Gruendel flagged as abnormal for machinery safety, where documentation underpins compliance with ISO and OSHA-style standards.

Communication channels allegedly broke down just as the stakes rose. Weekly safety briefings with Adcock reportedly slipped to monthly, then quarterly, before stopping altogether. Gruendel’s Slack messages about near-misses and uncontrolled robot motions, including references to the refrigerator strike that gouged a 0.75-inch gash in stainless steel, allegedly went unanswered by the CEO.

Those ignored messages weren’t theoretical. Employees began reporting close calls directly to Gruendel because there was no Employee Health & Safety (EHS) office or formal near-miss process, according to the filing. He became a de facto safety hotline, logging incidents ranging from unexpected arm sweeps to unplanned contact with workstations.

The lawsuit describes this as a systemic failure, not a paperwork oversight. In a lab with multi-kilowatt actuators and non-deterministic AI control, there was no dedicated EHS staffer, no anonymous hazard-reporting channel, and no standardized investigation procedure. Instead, Gruendel allegedly relied on ad hoc Google Sheets and one-off Slack threads to track risks that could maim workers.
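None of this requires exotic tooling; even a flat file with fixed columns counts as a formal process. Below is a minimal sketch of the kind of structured near-miss record the complaint says was missing, with hypothetical field names rather than anything taken from Figure AI's actual systems:

```python
import csv
from datetime import datetime, timezone

# Hypothetical columns for a standardized near-miss / incident log.
# Field names are illustrative, not taken from Figure AI's tooling.
FIELDS = [
    "occurred_at", "robot_id", "operating_mode", "severity",
    "persons_within_1m", "description", "corrective_action",
]

# Example entry modeled loosely on the refrigerator strike described in the filing.
entry = {
    "occurred_at": datetime.now(timezone.utc).isoformat(),
    "robot_id": "F02-dev-unit",        # hypothetical identifier
    "operating_mode": "development",
    "severity": "property_damage",     # e.g. near_miss / property_damage / injury
    "persons_within_1m": 1,
    "description": "Uncommanded arm motion struck appliance door; gouged stainless steel.",
    "corrective_action": "",           # filled in after investigation
}

# Create a new log with one entry; a real system would append and audit entries.
with open("incident_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow(entry)
```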

One of his key proposals, an E‑Stop initiative to standardize emergency stop hardware, software behavior, and response drills, reportedly ran headfirst into leadership resistance. The program aimed to define how many emergency stops each test cell needed, where to place them, and how the robot should behave electrically and mechanically when triggered. According to the complaint, Adcock and Edelberg ultimately canceled or froze the E‑Stop effort, calling it unnecessary friction for development speed.
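In conventional machinery-safety terms, an E-Stop program like the one described boils down to a small, testable specification: how many stops each test cell needs, how far any point in the cell can be from one, and what state the robot must reach, and how quickly, once any stop is pressed. Here is a minimal sketch under those assumptions; the names and numbers are illustrative, not Figure AI's actual requirements:

```python
from dataclasses import dataclass
from enum import Enum, auto


class EStopState(Enum):
    RUN = auto()
    STOPPED_SAFE = auto()   # actuators de-energized or torque-limited, brakes engaged


@dataclass
class TestCellEStopSpec:
    """Illustrative emergency-stop requirements for one robot test cell."""
    min_buttons: int = 2               # e.g. one at the operator station, one at the cell entry
    max_reach_distance_m: float = 1.5  # no point in the cell farther than this from a button
    max_stop_time_s: float = 0.5       # press-to-safe-state time budget


def on_estop_pressed(spec: TestCellEStopSpec, measured_stop_time_s: float) -> EStopState:
    """Model the core requirement: any press must reach a safe state within the budget."""
    if measured_stop_time_s > spec.max_stop_time_s:
        raise RuntimeError(
            f"Stop took {measured_stop_time_s:.2f}s, exceeding the "
            f"{spec.max_stop_time_s:.2f}s budget"
        )
    return EStopState.STOPPED_SAFE


if __name__ == "__main__":
    spec = TestCellEStopSpec()
    print(on_estop_pressed(spec, measured_stop_time_s=0.3))  # EStopState.STOPPED_SAFE
```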

By early 2025, the pattern was clear in Gruendel’s telling: safety plans got approved for pitch decks, then quietly sidelined. Near-misses accumulated in his personal logs, not in any corporate system. And the person tasked with building a safety culture found himself increasingly talking into a void.

The $39 Billion Investor Deception


Forty billion dollars buys a lot of confidence. According to the lawsuit, Rob Gruendel’s job in mid‑2025 was to manufacture that confidence on paper: a detailed safety whitepaper designed to calm the nerves of Jeff Bezos, Nvidia, Microsoft, and other deep‑pocketed backers circling Figure AI’s Series C round.

Gruendel alleges he pulled together a full‑spectrum safety roadmap: formal risk assessments, incident logging, employee training plans, and guardrails for deploying the F.02 humanoid around humans. The document, he says, did not mince words about forces “powerful enough to fracture a human skull” and the refrigerator‑punching malfunction that gouged steel inches from a worker.

According to the complaint, that rigor became a fundraising asset. Prospective investors allegedly received a version of the plan that portrayed Figure AI as a company maturing past “move fast and be technically fearless” into one that respected ISO machinery‑safety norms and AI risk management. By September 2025, the round closed at a reported $39 billion valuation, a roughly 15x jump from early 2024.

Then, Gruendel claims, the safety plan was quietly hollowed out. Engineering leadership, including chief engineer Kyle Edelberg, allegedly “gutted” and downgraded core commitments once the money cleared—paring back written requirements, sidelining incident tracking, and shelving elements that slowed development of the F.02 platform.

That allegation moves the story from messy startup culture into potential securities fraud. Raising capital on the strength of specific safety controls, then abandoning them without disclosure, can look less like “iterating” and more like misleading investors about operational risk and time‑to‑market.

Regulators and litigators would ask hard questions: Were Bezos, Nvidia, and Microsoft given safety materials that no longer reflected reality after the round? Did any board‑level oversight committees sign off on a shift away from the advertised roadmap? Were internal risk assessments updated and shared?

Capital‑intensive robotics startups live or die on their ability to project an image of industrial‑grade discipline. Humanoid platforms like F.02 demand nine‑figure checks for actuators, custom silicon, data centers, and test labs long before meaningful revenue appears.

That dynamic incentivizes glossy narratives about reliability and “enterprise‑ready” systems, even when the shop floor still runs on ad hoc processes and unwritten rules. Gruendel’s account suggests safety can become another slide in the pitch deck—dialed up for due diligence, dialed down once the term sheets are signed.

By the Numbers: 20x the Pain Threshold

Superhuman speed is not a metaphor in Rob Gruendel’s complaint; it is a measurement. According to the lawsuit, internal impact tests on Figure AI’s F.02 humanoid logged end-effector velocities above typical human punching speeds, with far more mass behind each strike and high-torque actuators driving the motion.

Those tests allegedly referenced ISO 15066, the collaborative robot safety standard that defines a “pain threshold” for human contact. Gruendel claims F.02’s impacts hit forces roughly 20x higher than that threshold, meaning contact would not just hurt—it would overwhelm the level regulators assume as the upper bound for acceptable human-robot interaction.

Gruendel goes further, offering an expert estimate that those forces exceeded by more than 2x what is required to fracture an adult human skull. Skull fracture literature typically cites ranges around 3–10 kN depending on impact area and direction; the lawsuit asserts F.02’s peak forces lived comfortably above that band.
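The fracture arithmetic in that claim is simple to restate. A rough sketch, assuming the roughly 3–10 kN skull-fracture band quoted above and the lawsuit's "more than 2x" multiplier (Figure AI's actual test values are not public):

```python
# Rough restatement of the fracture arithmetic above. Both inputs are taken
# from the article text, not from Figure AI's internal test reports.

SKULL_FRACTURE_RANGE_KN = (3.0, 10.0)  # ~3-10 kN, depending on impact area and direction
CLAIMED_MULTIPLE = 2.0                 # lawsuit: forces "more than 2x" the fracture threshold

low_kn, high_kn = SKULL_FRACTURE_RANGE_KN

# Reading "more than 2x" against each end of the band gives the implied floor
# on peak impact force: at least ~6 kN, and ~20 kN if measured against the
# high end of the literature range.
print(f"Implied peak force if 2x the low end:  >= {CLAIMED_MULTIPLE * low_kn:.0f} kN")
print(f"Implied peak force if 2x the high end: >= {CLAIMED_MULTIPLE * high_kn:.0f} kN")
```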

These numbers connect directly to the refrigerator incident already documented in the filing. When F.02 allegedly malfunctioned and punched a nearby stainless-steel fridge, it left a gash up to three-quarters of an inch deep, physical damage consistent with high-energy impact and with Gruendel’s fracture calculations.

Impact testing, as described, did not occur in a vacuum. Gruendel reportedly tied the data to real operating modes—arm swings, reaches, and mis-trajectory events—rather than purely synthetic lab strikes, arguing that similar force profiles could occur during routine development work or future commercial deployments.

Those internal measurements matter because they undercut any argument that these are abstract, sci-fi hypotheticals. Figure AI’s own numbers, if accurate, document a machine that can out-punch a human boxer, ignore established pain limits, and cross into bone-breaking territory.

For readers tracking the broader collision between robotics, regulation, and investor hype, coverage on outlets like CNBC has already highlighted how fast capital is flowing into humanoid platforms that still lack mature safety regimes.

From Star Performer to Persona Non Grata

Recruited in October 2024, Rob Gruendel arrived at Figure AI as a star hire. With more than two decades in robotics safety and a direct reporting line to CEO Brett Adcock, he was tasked with building the company’s global safety strategy from scratch. Early reviews, according to the lawsuit, praised his work and culminated in a $10,000 raise less than a year into the job.

That raise matters because it anchors a clean before-and-after. For most of his tenure, Gruendel’s push for formal safety processes, incident tracking, and employee training did not trigger open hostility. Leadership approved his safety roadmap, and weekly check-ins with Adcock signaled, at least superficially, that safety held a seat at the table.

Everything shifted after the impact tests on the F.02 robot. Gruendel’s data-backed warning that the machine could generate forces well above skull-fracture levels landed in executives’ inboxes alongside his most direct written alerts about risk to employees and future customers. Within days of those messages, the company terminated him.

Figure AI’s official line: “poor performance.” No performance improvement plan, no prior written warnings, and no documented demotion appear in the complaint to support that narrative. Instead, the paper trail shows the opposite arc—strong feedback, a raise, growing responsibilities, then a sudden reversal immediately after he quantified how dangerous the robots could be.

That pattern maps almost perfectly onto classic whistleblower retaliation cases in high-pressure tech environments. An employee initially advances the mission, then becomes a friction point when their expertise collides with aggressive timelines and investor expectations. Once safety requirements threaten the schedule, the same traits that earned praise—insistence on documentation, resistance to shortcuts—get rebranded as “obstruction” or “not a fit.”

Context around Figure AI’s funding amplifies the suspicion. By September 2025, the company had closed a round at a $39 billion valuation with backers like Jeff Bezos, Nvidia, and Microsoft, while executives allegedly gutted Gruendel’s detailed safety roadmap once the money cleared. In that light, labeling the head of product safety a poor performer days after he raised red-flag impact data looks less like routine HR housekeeping and more like removing the last internal check on an accelerated launch.

The Great Humanoid Arms Race


Humanoid robotics now looks less like research and more like a land grab. Companies such as Figure AI, Tesla, Agility Robotics, Apptronik, and Sanctuary AI are racing to ship autonomous, general-purpose machines into warehouses, factories, and eventually homes. Whoever plants the first credible humanoid at scale could lock in platform power for decades.

Figure AI has set one of the most aggressive targets in the field: deploy 200,000 robots by 2029. That implies thousands of units leaving the line every month in the back half of the decade, operating in close proximity to human workers. Gruendel’s lawsuit lands squarely in the middle of that sprint, alleging that safety became a negotiable detail, not a gating requirement.

Analysts have poured gasoline on the race. Morgan Stanley projects humanoid and general-purpose robots could drive a $5 trillion market by 2050, a number executives now quote as casually as daily active users. Internal decks at robotics startups increasingly frame humanoids as the “next smartphone” or “next cloud,” with similar winner-take-most dynamics.

Those incentives reshape internal priorities. Every quarter spent hardening fail-safes or rewriting procedures is a quarter a rival can post a new demo, sign a pilot, or announce a mega-deal with an automaker or logistics giant. In that environment, a head of product safety who slows a launch can look less like a guardian and more like a drag coefficient.

Humanoid systems also straddle multiple regulatory gray zones. They are part industrial robot, part consumer device, part AI system, and existing standards for machine safety, functional safety, and workplace health often do not cleanly apply. Startups can interpret that ambiguity as permission to move first and argue about compliance later.

Gruendel’s complaint essentially alleges that Figure AI embraced that logic. He describes a culture driven by “move fast and be technically fearless,” while Morgan Stanley and similar projections hang over the sector like a scoreboard. When investors talk about a multi-trillion-dollar prize, every safety meeting starts to feel like lost market share.

Powerful Tools or Unpredictable Threats?

Power tools already surround us with lethal potential. A 3,500‑pound car can kill at 25 mph, a midrange table saw spins a blade at 3,500 rpm, and a cheap kitchen blender hides razor‑sharp steel behind a plastic lid. We tolerate that risk because their behavior is predictable and our safety systems—guards, training, regulations—assume that predictability.

Humanoid robots like Figure AI’s F.02 sit in a different category. They combine industrial‑grade actuators—impact tests in Gruendel’s lawsuit allege forces capable of fracturing a human skull—with AI control that does not always behave the same way twice. That non-determinism breaks the mental model that underpins how we treat dangerous tools.

Traditional robots and machine tools follow deterministic code: given input X, they do Y, every time. AI systems like Figure AI’s Helix instead generate outputs from probabilistic models, which means:

- They can “hallucinate” actions or misinterpret sensor data
- They can make unexplainable decisions that defy simple debugging
- They can fail in edge cases that designers never anticipated
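The contrast is easy to see in a toy example, which is purely illustrative and has no relation to how Helix actually works: a classical proportional controller returns the same command for the same state every time, while a policy that samples from a distribution can emit a different command on each call.

```python
import random


def deterministic_controller(error_m: float) -> float:
    """Classic proportional controller: identical input -> identical output, every time."""
    kp = 2.0
    return kp * error_m


def sampled_policy(error_m: float, rng: random.Random) -> float:
    """Toy stand-in for a probabilistic policy: the command is drawn from a
    distribution, so the same input can produce different motions on each call."""
    mean = 2.0 * error_m
    return rng.gauss(mean, 0.5)


rng = random.Random()  # unseeded, like a policy sampled at runtime
state = 0.10           # 10 cm position error

print([deterministic_controller(state) for _ in range(3)])       # [0.2, 0.2, 0.2]
print([round(sampled_policy(state, rng), 3) for _ in range(3)])  # three different commands
```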

A table saw never “decides” to lunge sideways; its failure modes are mechanical and mappable. An AI-controlled humanoid can, in principle, choose a motion sequence that no engineer explicitly programmed, then repeat a different motion the next time. That variability complicates everything from emergency‑stop design to insurance underwriting.

Risk tolerance usually rests on a clear cost-benefit trade. We accept 40,000+ annual U.S. car deaths because cars unlock commuting, logistics, and economic activity that society deems indispensable. For general-purpose humanoids, the promised upside is huge—labor substitution in warehouses, elder care, domestic chores—but still speculative.

So the real question becomes: how much unpredictable risk will people accept in their kitchens, warehouses, and nursing homes in exchange for a robot that can unload a truck or empty a dishwasher? Regulators can set exposure thresholds and certification regimes, but public tolerance will hinge on early incidents, viral videos, and whether failures look like freak accidents or systemic design choices. Investors already model these scenarios; Morgan Stanley Research and similar shops increasingly treat safety, explainability, and liability as core to the humanoid business case, not an afterthought.

A Legal Battle to Define the Future of Robotics

Courtrooms rarely decide how emerging technologies evolve, but Gruendel v. Figure AI could become one of those outliers. A senior safety engineer with 20+ years in robotics claims he was fired in 2025 for warning that Figure AI’s F.02 humanoid could “fracture a human skull” and had already carved a three-quarter-inch gash into a steel fridge door. If a jury treats those warnings as protected activity, every robotics startup racing toward humanoids will have to rethink how it handles internal dissent.

At stake is whether existing whistleblower laws—built around finance, healthcare, and defense—stretch cleanly to robots powered by non-deterministic AI. Gruendel says he reported missing incident tracking, no dedicated employee safety staff, and executives who “disliked written product requirements.” A ruling against the company over his firing could turn safety engineers into de facto compliance officers for autonomous systems, not just nagging internal voices.

Precedent here would land right as humanoids leave labs for warehouses and, eventually, homes. If a court finds Figure AI retaliated after impact tests allegedly showed forces more than 2x skull-fracture thresholds, plaintiffs’ lawyers will cite that in every future case involving industrial arms, delivery bots, or home assistants. Companies might face legal exposure not only when robots injure people, but when they sideline the person who said they might.

Regulators are watching. Today, robot safety leans on standards like ISO 10218 and ISO/TS 15066 for collaborative robots, but nothing fully anticipates AI-driven humanoids that learn and improvise. A high-profile verdict could accelerate:

- New OSHA guidance for human–robot workplaces
- Updated ISO standards for AI-powered motion planning and force limits
- Mandatory incident logging and third-party audits for general-purpose robots

For Figure AI, the lawsuit hits at a fragile moment: a reported $39 billion valuation, backing from Jeff Bezos, Nvidia, and Microsoft, and a public narrative of graceful bipedal machines powered by Helix AI. A loss could mean intrusive discovery, investor skittishness, and a forced pivot to slower, standards-first development. A win might embolden the “move fast and be technically fearless” crowd—until the first serious injury makes Gruendel’s warnings look less like a lawsuit and more like prophecy.

Frequently Asked Questions

What is the Figure AI lawsuit about?

It's a wrongful termination and whistleblower retaliation lawsuit filed by Robert Gruendel, Figure AI's former head of product safety. He alleges he was fired for raising critical safety concerns about the company's humanoid robots.

What are the main safety allegations against Figure AI?

The lawsuit claims Figure's robots are powerful enough to fracture a human skull, that a robot malfunctioned and punched a refrigerator, and that the company ignored safety protocols and misled investors about its safety plans.

Who is Robert Gruendel?

Robert Gruendel is a highly experienced robot safety engineer who was recruited by Figure AI to be its head of product safety. He has over two decades of experience in human-robot interaction and safety compliance.

How did Figure AI respond to the allegations?

A company spokesperson denied the allegations, stating Gruendel was terminated for 'poor performance' and that his claims are 'falsehoods' that will be refuted in court.

Tags

#robotics #figure-ai #whistleblower #lawsuit #ai-safety
