Forget AI Doom. The Real Danger Is Us.

Everyone is debating whether AI will replace us, but they're missing the point. The real threat isn't the technology; it's the powerlessness we've been practicing for decades.


We've Been Asking the Wrong Question About AI

Ask people what scares them about AI and you usually get a single answer: jobs. Polls from Pew and Gallup show majorities worried that automation will replace workers or slash wages. The script casts humans as passive objects on an assembly line, waiting to see whether the robot arm swings our way.

That question sounds practical but behaves like a trap. It assumes AI arrives as weather, not infrastructure, and that our only role is to endure it. We reduce ourselves to variables in someone else’s spreadsheet, then act surprised when we feel expendable.

The more interesting claim is that this fear is a red herring. Our crisis didn’t start with GPT‑4 or Midjourney; it has been compounding for decades as we quietly offloaded choices to software, experts, and institutions. AI just throws a harsh, 4K spotlight on a loss of agency we already normalized.

Scroll back 20 years. Autoplay feeds, GPS turn‑by‑turn, one‑click buying, auto‑recommendations on Netflix and Spotify—each removed tiny frictions, and with them tiny decisions. Studies on social platforms show people now spend over 2.5 hours per day inside algorithmic feeds, most of it in passive consumption.

That passivity hardens into identity. When you let opaque ranking systems decide what you read, watch, and even who you date, you start to see yourself as a consumer of reality, not a participant in it. AI enters that landscape as just a bigger, faster system to surrender to.

So AI doesn’t invent our powerlessness; it mirrors it. A large language model only looks godlike if you’ve spent years training yourself to wait for instructions—from your calendar, your manager, your notifications. The technology reflects a pattern of “tell me what to do” that predates ChatGPT by a generation.

This framing shift matters. If the danger lives solely in AI, salvation must come from regulators, CEOs, or some future technical safety breakthrough. If the deeper danger lives in our eroded participation, then the leverage point moves back to how we design, adopt, and resist systems in the first place.

The Decades-Long Habit of Outsourcing Your Brain


Call it outsourced agency. For at least two decades, we have quietly handed decision-making to machines, experts, and institutions we barely understand, from credit-scoring systems to opaque ad auctions that decide which politician’s face fills your screen first.

Autoplay, infinite scroll, and “Up Next” queues mean your next video, song, or article arrives before you even form a desire. YouTube reports that 70% of watch time comes from recommendations, not search; TikTok’s For You page and Instagram Reels work the same way, turning curiosity into a conveyor belt.

GPS finished what recommendation feeds started. Waze and Google Maps don’t just suggest routes; they dictate them, to the point where drivers blindly follow directions into lakes or down closed roads. When every trip becomes “follow the blue line,” you stop building your own mental maps of cities, streets, or even your daily commute.

Social platforms reinforce the same passivity. Facebook, X, and TikTok optimize for engagement, not agency, tuning feeds via thousands of behavioral signals: dwell time, likes, rewatches, rage comments. You don’t choose what matters; the ranking algorithm reverse-engineers what keeps you there 30% longer and serves more of it.

Over time, that conditioning rewires expectations. You wait for Netflix to recommend what to watch, Spotify to auto-generate what to hear, Amazon to surface what to buy. Creation tools sit one tap away on every phone, yet most people spend hours per day as passive consumers, not active makers, scrolling through other people’s output.

So when large language models and image generators arrive, the helplessness feels familiar. AI just scales up the same pattern: a system you didn’t design, trained on data you didn’t choose, proposing your emails, your code, your art. It feels less like a toolbelt and more like another opaque layer between you and reality.

Fear of AI “taking over” grows from this long-running habit of giving up the steering wheel. The doom isn’t new; it’s the logical endpoint of decades spent training ourselves to hit Play, accept the suggestion, and assume the machine knows best.

Your Reality Is a Feedback Loop, Not a Movie

Most of us still treat reality like a Netflix stream: fixed, buffered, and playing regardless of what we do. That’s the purely objective model—Reality with a capital R exists “out there,” and we’re passive spectators while big forces (markets, models, governments) run the show. Believe that long enough and you get fatalism: if AI, climate, and politics are just giant scripts, your choices barely register.

Swing hard the other way and you get the purely subjective model: everything is perception, mindset is destiny, “thoughts become things.” It feels empowering, but pushed to its limit it collapses into delusion—if you don’t land the job or raise, you simply “didn’t manifest hard enough.” Systems, power, and material constraints vanish behind self-help slogans.

Both frames break down in a world where your behavior trains the systems that shape you. Recommendation engines, credit scores, and large language models all adapt to your clicks, queries, and pauses. Treat reality as static and you miss how your patterns feed those systems; treat it as purely mental and you ignore who owns the infrastructure and data.

Cognitive scientist John Vervaeke calls a third option “participatory knowing.” Reality is not just out there or just in your head; it emerges in the loop between what you do and how the world answers back. You walk into a meeting convinced you’re useless, stay silent, get no feedback, and leave with “proof” you had nothing to add. The belief wasn’t objectively true, but your behavior helped make it functionally real.

Researchers describe this as reciprocal opening or closing. Engage, and new options surface: people respond, algorithms recalibrate, opportunities appear that were previously latent. Withdraw, and the loop tightens: no outreach, no response, no evidence that action matters.
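If you think in code, the asymmetry is easy to see. Below is a deliberately toy simulation of the loop; every number in it is invented for illustration, not measured from any study:

```python
# Toy model of reciprocal opening vs. closing.
# All parameters are invented for illustration, not empirical.

def run_loop(engagement: float, steps: int = 10) -> float:
    """Each step, the world 'answers back' in proportion to engagement,
    and the visible options nudge future engagement in turn."""
    opportunity = 1.0
    for _ in range(steps):
        # The environment responds: engagement above the 0.5 midpoint
        # surfaces more options; below it, options quietly shrink.
        opportunity *= 1.0 + 0.2 * (engagement - 0.5)
        # What you can see feeds back into what you're willing to try.
        engagement = min(1.0, max(0.0, engagement + 0.05 * (opportunity - 1.0)))
    return opportunity

print(f"withdrawn start (0.3): {run_loop(0.3):.2f}x options")
print(f"engaged start   (0.7): {run_loop(0.7):.2f}x options")
```

Run it and the two trajectories diverge quickly. The parameters are arbitrary; the shape is the point: engagement and opportunity feed each other, in both directions.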

Human–AI systems already encode this dynamic. Studies like "The human–AI interaction continuum: The impact of automation on human agency" show that how actively people participate in automated workflows measurably changes outcomes. Reality, especially in a networked, AI-mediated world, behaves less like a prewritten movie and more like a live co-production that depends on whether you step onto the stage at all.

The Upward Spiral vs. The Doom Scroll

Powerlessness rarely arrives as a single catastrophic event; it accretes through what Vervaeke calls reciprocal closing. You withdraw a little, engage a little less, and the world quietly stops offering you as much to work with. That shrinking field of possibility confirms your hunch that you don't matter, so you pull back further.

Reciprocal closing now runs at industrial scale through feeds and notification systems. You doomscroll, autoplay the next video, and refresh dashboards you don’t control while recommendation engines optimize for watch time, not wisdom. Each passive swipe trains your nervous system that problems get solved elsewhere, by someone—or something—else.

That passivity creates a measurable loop. U.S. adults now average over 4.5 hours of mobile screen time per day, much of it in algorithmic feeds. When you spend that window consuming rather than creating, you’re not just losing time; you’re rehearsing a story about yourself as a spectator inside systems too complex to touch.

Reciprocal opening is the opposite dynamic: a compounding return on agency. You take a small, deliberate action, however trivial it seems, and the environment responds with new information, relationships, or options you literally could not see before. That feedback doesn’t just change outcomes; it changes what you believe you can attempt next.

Consider walking into a meeting convinced you have nothing to offer. You sit near the wall, avoid eye contact, and never unmute. Colleagues route questions around you, your name never appears on follow-up threads, and you leave with “proof” that you were dead weight.

Now rerun the same room with a participatory stance. You ask one clarifying question, sketch a quick diagram, or share a concrete metric from your team. Someone riffs on your point, another person DMs you for the slide, and suddenly you’re on the invite list for the next planning session. Same job title, different reciprocal opening.

Digital life amplifies both spirals. Email, Slack, and large language models like Claude or ChatGPT can either deepen closing—auto-completing your thoughts, templating your decisions—or they can extend your reach if you treat them as instruments rather than autopilot. The crucial variable is not the tool, but whether you show up as a co-author of your environment.

Upward spirals start obscenely small: one outbound message instead of lurking, one prototype instead of another tab of research, five minutes of intentional writing before touching a feed. Each act reclaims a sliver of participation, and those slivers add up faster than any doom scroll ever will.

Win the Day Before You've Had Breakfast


Win the first hour and you usually win the day. Neuroscientists call the transition out of sleep the hypnopompic state: cortisol surges (the cortisol awakening response), your prefrontal cortex boots up, and your attentional system looks for a script to follow. Hand that script to TikTok's "For You" page, and you've effectively delegated your agency to an ad-optimized ranking function before you've had breakfast.

Grabbing your phone within 5 minutes of waking is now so common that surveys put it at roughly 60–70% of adults. That reflex isn’t neutral; it hands your scarce, high‑leverage morning attention to notification systems tuned for engagement, not your long‑term goals. You start the day reacting to badges, outrage headlines, and algorithmically juiced micro‑dramas you cannot affect.

Contrast that with a deliberately boring, five‑minute practice. No biohacking stack, just:
- 5 minutes of breath‑focused meditation
- 1 page of handwritten notes about what matters today
- A short walk or stretch without headphones

Those tiny acts flip the script: you choose the first object of attention, not an opaque feed. Behavioral research on “implementation intentions” shows that even simple, repeated morning rituals significantly increase follow‑through on goals across the day.

This isn’t a productivity hack; it’s daily training in either powerlessness or agency. When you wake into a slot machine of variable rewards, you rehearse being a node in someone else’s optimization problem. When you wake into a self‑authored routine, you rehearse being a creator who can shape context, not just cope with it.

Attention sits at the root of every higher‑order capacity you care about: focus, judgment, creativity, even ethics. Where you aim it first thing doesn’t just color your mood; it biases which possibilities you even notice. Train your morning on autopilot, and outsourced agency becomes muscle memory. Train it on deliberate engagement, and reciprocal opening stops being a philosophy and starts becoming how you move through Reality.

Your Job Is a Craft, Not a Transaction

Most AI job panic assumes work is a vending machine: you put in hours, you get out money, and a robot might soon do it cheaper. That transactional mindset almost guarantees anxiety, because if your value is just “time in chair,” a model that runs 24/7 will always win. You’ve already ceded the game before the tools even load.

Treat the same job as a craft, and the equation flips. Craft says your real output isn’t hours, it’s skill, judgment, and problem selection. Two people can share a title, but the one who treats it as a craftsperson’s studio rather than a time clock lives in a different reality.

Transactional workers ask, “What do they want from me?” Craft workers ask, “What problem actually matters here?” That shift in orientation is pure agency: you move from reacting to tickets and pings to actively shaping what gets built, shipped, or fixed. AI then becomes a power tool on your bench, not a foreman replacing you.

Meaning at work does not arrive baked into job descriptions or mission statements. You create it through how you participate: the questions you ask, the standards you enforce, the experiments you run. A 2023 Gallup report found only 23% of workers worldwide feel engaged; that’s not a metaphysical crisis, it’s a participation crisis.

Treating work as craft looks like:
- Turning vague requests into clear problem statements
- Instrumenting your work with metrics and feedback loops
- Using AI to prototype 10 options, then exercising taste to pick 1

That last step is where humans still dominate. Large language models can draft 50 marketing variations in seconds, but deciding which aligns with brand, ethics, and long-term strategy is a human craft move. You are training your pattern-recognition, not just shipping copy.
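For the code-inclined, that move has a simple shape. In this sketch, `generate` is a hypothetical stand-in for whatever LLM client you actually use, and `craft_score` is the part no model supplies: your taste, encoded:

```python
# "Generate many, choose one": the LLM drafts, the human judges.

def generate(prompt: str, n: int) -> list[str]:
    """Hypothetical placeholder: call your LLM of choice, return n drafts."""
    return [f"draft {i}: ..." for i in range(n)]  # stub for illustration

def craft_score(draft: str) -> float:
    """The human craft move: encode your own standards. Swap in whatever
    brand, ethics, or strategy checks actually matter to you."""
    checklist = ["concrete", "honest", "specific"]
    return sum(word in draft.lower() for word in checklist)

drafts = generate("Write a product update announcement", n=10)
best = max(drafts, key=craft_score)  # taste is the selection pressure
print(best)
```

The design choice worth noticing: the model's job is volume, yours is selection. Whoever writes the scoring function owns the output.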

Agency compounds. When you consistently bring craft to even mediocre roles, people notice: you get pulled into higher-leverage projects, given more autonomy, or spin out on your own. Structural forces still matter; Shoshana Zuboff's The Age of Surveillance Capitalism maps how platforms harvest our attention. But within those constraints, craft is how you claw back leverage.

AI might compress certain tasks to near-zero cost. Workers who show up as craftspeople, not time-sellers, stay valuable anyway.

Stop Waiting for an Invitation

Stop waiting for someone to add you to the group chat, the calendar invite, or the “inner circle” Slack channel. In relationships and networking, that same outsourced-agency script shows up as social inertia: you scroll, admire, and lurk instead of participating. In a world where a single DM can cross continents in 200 milliseconds, silence is still the default.

“I’d love to reach out, but they’re too busy. I have nothing to offer.” That story feels humble, but it functions as a self-fulfilling filter on your reality. You pre‑reject yourself, so you never send the message, never ask the question, never show up in the room where anything could actually happen.

That pattern guarantees exactly one outcome: nothing. No reply, no mentorship, no collaboration, no weak tie that later turns into a job lead. Sociologist Mark Granovetter showed decades ago that “weak ties” drive a huge share of career opportunities, yet this story stops you from forming any.

There’s a different move: reach out with genuine curiosity and a concrete offer of value. Not “pick your brain?” but “I loved your piece on AI agency—here’s a dataset, tool, or small prototype that builds on it; would 10 minutes of feedback be useful?” You can always offer attention, synthesis, or legwork, even early in your career.

Two outcomes exist. They ignore you, and you still win because you practiced agency, not fantasy. Or they respond, and your reality now contains a relationship, however small, that literally did not exist yesterday. That is reciprocal opening in social form.

Opportunities rarely arrive as formal invitations; they sit latent in people, projects, and half-finished ideas. Proactive engagement (DMs, pull requests, thoughtful comments, small collaborations) collapses those possibilities into actual paths. You don't wait for everyone to notice you; you behave like someone already worth noticing, and the world updates accordingly.

You Are a Node in the System


Systems like climate change, platform monopolies, or political polarization feel untouchable because they operate at scales measured in gigatons, billions of users, and national electorates. That scale tricks you into seeing yourself as background noise, not a participant. When your mental model is “only presidents, billionaires, or AGI matter,” you’ve already surrendered.

Power looks broken if you define it as “single-handedly fixing global CO₂” or “personally ending polarization.” That fantasy sets the bar at superhero or nothing. In practice, complex systems run on distributed agency: lots of small, locally rational actions that add up to global behavior.

Network science backs this up. In a graph, you are a node: a person with edges to friends, coworkers, group chats, and feeds. Change at the node level rarely spreads linearly; it spreads through network effects, where each additional participant amplifies impact instead of just adding to it.

Online movements show this every day. A single Reddit post can trigger a 10,000-comment thread, a GitHub repo can attract 5,000 contributors, a local protest can scale to millions across 150 countries. None of those start as “solve everything”; they start as one node deciding to act in public.
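A toy cascade model shows why one node acting can matter; all parameters here are invented for illustration, but the structure is the standard branching-process intuition from network science:

```python
# Toy cascade: one node acts, and the action spreads along edges
# with some probability. Numbers are illustrative, not empirical.
import random

random.seed(42)
N, DEGREE, P_SPREAD = 1000, 5, 0.3

# Random graph: each node gets DEGREE random neighbors.
edges = {i: random.sample(range(N), DEGREE) for i in range(N)}

active, frontier = {0}, [0]  # a single node decides to act in public
while frontier:
    node = frontier.pop()
    for neighbor in edges[node]:
        if neighbor not in active and random.random() < P_SPREAD:
            active.add(neighbor)  # the action propagates one edge further
            frontier.append(neighbor)

print(f"1 initial actor -> {len(active)} nodes reached")
```

With an average branching factor above 1 (here, 5 neighbors times a 0.3 spread chance gives 1.5), a single actor reaches a large fraction of the graph; below 1, the same action fizzles out. The node-level decision to act is what picks the regime.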

You see the same pattern offline. One person organizing a neighborhood heat-mapping project can feed data into city planning tools and influence where trees, cooling centers, or transit routes go. That doesn’t “fix climate change,” but it measurably shifts resilience, block by block, in a way models can detect.

Mentoring is another high-leverage edge. Guiding one 19-year-old into climate tech, civic data, or open-source tooling can compound across decades of their work, the teams they build, and the products they ship. You don’t control their path, but you alter the probability distribution of what becomes possible.

Small, targeted builds matter too. A dev who ships a tool that helps 200 local organizers track air-quality sensors or mutual-aid logistics changes what those 200 people can execute. That’s not “viral” in social-media terms, but in systems terms it’s a structural upgrade to a subnetwork.

You can’t be the system, but you are always in it. Once you treat yourself as a node with edges you can strengthen, rewire, or create, “too big to matter” stops being an excuse and starts looking like bad math.

AI Is the Ultimate Test of Agency

AI lands as a stress test for everything you’ve been doing with your attention, your time, and your sense of control. It doesn’t arrive in a vacuum; it plugs straight into decades of outsourced agency to feeds, search rankings, and black-box recommendation engines.

Treat AI like Netflix on steroids and it will happily deepen that passivity. One-click article summaries, auto-generated emails, synthetic meetings, infinite TikTok clones—each convenience shaves off a sliver of participation until you mostly supervise what machines decide for you.

Use the same tools as creative multipliers and the curve bends the other way. A solo developer with GitHub Copilot, Claude, and Midjourney already approximates a 3–5 person team from 2015, shipping prototypes, pitches, and content in days instead of months.

That divergence is brutal and simple: consumer orientation vs. creator orientation. Both groups use AI; only one uses it to actually change their environment rather than just anesthetize it.

Narratives of inevitability—“AGI will run everything,” “regulation can’t keep up,” “big models are too expensive to matter to individuals”—smuggle in a quiet assumption: your role is to adapt, not to shape. That story does more damage to agency than any single model card or GPU cluster.

History keeps contradicting that script. Open-weight LLMs like Llama, Mistral, and Phi bring near state-of-the-art capabilities to laptops and $20 cloud instances, while small teams fine-tune domain models for law, medicine, and logistics without a hyperscaler's budget.

The real variable is how you show up to this stack every day. Do you ask AI to:
- Entertain you
- Spare you from thinking
- Or extend what you can build, learn, and negotiate

Researchers framing algorithmic governance argue exactly this: power shifts when systems automate choices about what you see and can do. The Oxford Handbook chapter "Algorithmic Governance and the Crisis of Agency" reads like a manual for how easily people surrender that power.

AI will not decide whether you live in a doom loop or an upward spiral. Your ongoing decision—consume, or participate—sets the gradient for everything that follows.

Your First Act of Defiance: Begin Today

Forget AI doom scenarios running on trillion‑parameter models for a moment. Powerlessness is not a sci‑fi threat; it is a daily habit, trained by years of push notifications, autoplay feeds, and one‑click everything. That habit is a pattern, and like any pattern, you can overwrite it with practice.

Every doom scroll, every “I’ll just see what the algorithm recommends,” rehearses reciprocal closing. You act as if nothing you do matters, so fewer things end up mattering. Agency works the same way in reverse: small, repeated choices compound into a different identity and a different Reality.

Start obscenely small. Pick one domain you already control:
- Your first 15 minutes after waking
- One recurring work task you can own end‑to‑end
- One conversation you will have today

Then engage with it on purpose, not by default. No optimization hacks, no 30‑day challenges. Just a clear, deliberate act.

For your morning, that might mean 10 minutes without a screen: write three sentences, stretch, or walk around the block. For work, rewrite that status report as if it were a product brief, or document a process so someone else could run it. For a conversation, prepare one hard question or one honest compliment and actually deliver it.

Treat each move as a micro‑experiment in participatory knowing. Notice how people respond when you show up slightly differently. Notice how your own attention shifts when you choose what to look at before your phone chooses for you. Track the feedback loop, not your feelings about it.

AI will accelerate whatever pattern you are already running. If your default is “wait to be told,” large language models just make waiting more efficient. If your default is “act, observe, adjust,” the same tools become leverage instead of fate.

You do not need more threads, more think pieces, or another 200‑page policy PDF to feel less afraid. You need one concrete, repeatable act of agency today, and another tomorrow. Stop refreshing the discourse and change one thing you actually touch. Start now.

Frequently Asked Questions

What is the core argument of this article?

The main argument is that the real danger associated with AI is not the technology itself, but our pre-existing, learned helplessness and tendency to outsource our agency to systems we don't control.

What is 'participatory knowing'?

A term used by cognitive scientist John Vervaeke, it's the idea that reality isn't just objective or subjective, but is co-created by how we engage with it. Our actions and mindset shape the possibilities that become available to us.

How can I start reclaiming my agency today?

Start with small, intentional acts. For example, create a deliberate morning routine instead of checking your phone, approach your work as a craft rather than a transaction, and actively reach out to people you admire.

Why is this perspective important for the future of AI?

It shifts the focus from being a passive victim of technological change to an active participant. By cultivating agency, we can learn to use AI as a tool for empowerment rather than seeing it as a force of replacement.

Tags

#AI #Agency #Psychology #FutureOfWork #JohnVervaeke
