We Live in a New Dark Age
We live in a strange kind of dark age. Not the candle-and-plague kind, but something harder to name: an era where AI models write code, cars drive themselves, and groceries appear at your door without human contact, while millions of people report record levels of anxiety, loneliness, and depression. The tools look like science fiction; the inner lives feel like collapse.
Everywhere you look, culture feels like a nervous system on fire. Politics fractures into micro-tribes, social platforms gamify outrage, and trust in institutions keeps sliding downward. Yet at the same time, we are pouring billions of dollars and staggering cognitive horsepower into building large-scale AI systems that will mediate more and more of how we work, learn, and relate.
This isn’t a bug in the technology. It’s a crack in the worldview that built it. We keep treating AI as a purely technical project—more parameters, better GPUs, faster inference—while the underlying civilization that aims these systems can’t answer basic questions like “What is a good life?” or “What is progress for?” That disconnect is the real story.
The video's creator, Ethan Nelson, calls his own life a microcosm of this split. He started as a hardcore atheist in the mold of Neil deGrasse Tyson, Carl Sagan, and Richard Dawkins: science as the highest court of appeal, everything else as superstition or cope. Facts ruled; meaning felt like a rounding error.
Then came the backlash. Eastern philosophy, meditation, psychedelics—an encounter with experiences that didn’t fit neatly into a lab report. Spiritual practice supplied awe, connection, and purpose, but often by sidelining empirical rigor. One worldview delivered explanation without depth; the other delivered depth without explanation.
That tension scales up to an entire culture. We have a scientific-industrial machine that can model climate systems, simulate proteins, and train GPT-style models on trillions of tokens, yet it cannot tell a 19-year-old why getting out of bed tomorrow matters. We know more about the cosmos than ever and feel more existentially homeless inside it.
AI didn’t create this meaning crisis. It exposes it by making raw intelligence cheap, and it accelerates it by optimizing everything it can measure while ignoring what it can’t.
Existentially Homeless in a Universe of Data
Call it the meaning crisis: a slow-motion collapse in our shared sense of what is real and what matters. Cognitive scientist John Vervaeke uses the term to describe a culture that no longer trusts its own maps of reality, yet keeps generating more data than any human can absorb. We have petabyte-scale clouds and trillion-parameter models, but no coherent answer to “What is any of this for?”
Modern life runs on a split-screen worldview. On one side, science explains how things work with ruthless precision: quantum fields, CRISPR edits, climate models, LLMs trained on 10+ trillion tokens. On the other, fragmented spiritual scenes promise purpose and transcendence while often waving away basic facts about evolution, neuroscience, or epidemiology.
That fracture runs straight through people. One decade you’re binging Neil deGrasse Tyson, Carl Sagan, and Richard Dawkins, convinced that only the Large Hadron Collider and peer review touch truth. The next you’re deep into meditation retreats, psychedelics, and Eastern philosophy, feeling a kind of meaning that doesn’t fit inside a lab report.
The result feels like being existentially homeless. Your phone delivers more information in a day than a 17th-century scholar saw in a lifetime, yet anxiety, depression, and loneliness spike across rich countries; in the U.S., nearly 1 in 2 adults report frequent loneliness, and youth mental health emergencies have surged by double digits since 2010. People scroll, optimize, and “self-improve,” but rarely feel like they belong in a story larger than their notifications.
We then pour this confusion directly into AI. Large models ingest our texts, code, comment threads, and clickstreams—data generated from within this fractured paradigm—and statistically compress it into something that sounds like certainty. These systems can output fluent answers about ethics, religion, or purpose, but those answers only remix a culture that already lost its grip on shared meaning.
AI, built on this foundation, scales our epistemic split. It supercharges propositional knowledge—facts, patterns, predictions—while remaining agnostic about value, telos, or wisdom. We are effectively hard-coding our own disorientation into the most powerful sense-making machines humanity has ever deployed.
The Four Ways of Knowing (And We Only Use Two)
Modern life treats “knowing” as a synonym for data and skills, but cognitive scientist John Vervaeke argues that human understanding actually runs on four distinct tracks. Ignore half of them, he says, and you get exactly what we have now: a hyper-competent civilization that feels existentially lost.
Vervaeke calls the first track propositional knowing: facts, theories, statements that can be true or false. Physics equations, GDP charts, your Spotify Wrapped stats, the model weights behind GPT‑4—this is the realm of information you can store, transmit, and verify.
Next comes procedural knowing: skills and “how‑to” competence you grind into your nervous system through repetition. Riding a bike, debugging a race condition, speedrunning Elden Ring, or fine‑tuning a model with LoRA adapters all live here; you can’t just read about them, you have to do them until your body understands.
The third track, perspectival knowing, is about relevance and salience: seeing what matters right now from where you stand. It’s the difference between having a weather report and knowing when the sky means “get inside,” or between reading a negotiation playbook and sensing the exact moment a deal is about to slip away.
Finally, participatory knowing describes being in a live, transformative relationship with something larger than you—a community, a craft, a landscape, a spiritual practice. It’s what musicians mean by “the band was playing us,” or what long‑term activists describe when the cause reshapes who they are over years, not weeks.
Modern tech culture supercharges the first two forms and sidelines the last two. We stream more information per day than a 15th‑century villager saw in a lifetime, and we obsess over new skills—prompt engineering, growth hacking, 10x productivity stacks—often just to hand them off to AI agents.
AI itself is built almost entirely on propositional and procedural rails. Large language models ingest trillions of tokens and learn probabilistic procedures for generating text, code, and images, but they do not have perspectival grip on what is actually at stake for a human being, nor participatory entanglement with a shared world.
That neglect hits exactly where wisdom and a felt sense of purpose live. Perspectival and participatory knowing tell you which questions matter, which tradeoffs are acceptable, which projects are worth a decade of your life—things no optimization metric can capture.
Vervaeke’s lecture series Awakening from the Meaning Crisis argues that collapsing these four modes into just “facts and skills” is not a minor error but a civilizational bug. AI, trained on our narrowed notion of knowledge, is about to scale that bug to the size of the planet.
AI: The Ultimate Engine for Half-Truths
AI systems excel at exactly the kinds of knowing modern culture already worships. Large language models compress trillions of tokens of propositional knowledge into a chat box that can answer almost any question in seconds. Recommendation engines and workflow tools harden that into procedural knowledge, quietly optimizing how we shop, work, date, and vote.
Ask ChatGPT or Claude to draft code, a marketing funnel, or a workout plan and they will happily tune every variable. Ask them what kind of person you should become, or what’s worth sacrificing for, and they stall or mirror your existing preferences. These models operationalize means; telos—the question of ends—never enters the loss function.
That gap isn’t a bug in the model; it’s baked into the paradigm that trained it. Gradient descent can minimize error on next-word prediction, click-through rate, or delivery time, but it cannot tell you whether engagement, profit, or convenience should sit at the center of a life. We keep adding decimal places of precision to goals we never stopped to justify.
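If it helps to see that constraint in code, here is a toy sketch, not any real training pipeline: a generic gradient-descent loop driven by an invented click-through loss. The data, names, and numbers are made up; the point is that the optimizer only ever sees the loss it is handed, and the choice of what that loss measures happens entirely outside the math.

```python
# Toy example: gradient descent minimizes whatever loss it is given.
def gradient_descent(loss_grad, theta, lr=0.1, steps=100):
    """Generic optimizer: it never sees why the loss was chosen, only its gradient."""
    for _ in range(steps):
        theta = theta - lr * loss_grad(theta)
    return theta

# Proxy objective: squared error against some observed click-through rates (made-up data).
observed_ctr = [0.12, 0.08, 0.15, 0.11]

def ctr_loss_grad(theta):
    # Gradient of mean squared error between the prediction theta and the observed CTRs.
    return sum(2 * (theta - y) for y in observed_ctr) / len(observed_ctr)

best_theta = gradient_descent(ctr_loss_grad, theta=0.0)
print(f"click predictor after training: {best_theta:.3f}")
# The loop nails the proxy to three decimal places. Whether click-through was
# ever the right thing to care about is simply not representable here.
```

Swap in a different loss and the loop runs just as happily; nothing inside it can object to the goal.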
Charles Eisenstein saw this logic long before GPT-4. “Where is beauty? It is in a butterfly, but when we chloroform it, lay it out on the dissecting table and cut it apart, beauty is gone. Where is sacredness? Can anything really be understood in isolation from the rest of the universe?” Dissection produces knowledge, but it annihilates the very qualities—beauty, sacredness, awe—that make the butterfly matter.
AI scales that dissecting-table mindset to everything. Social feeds, powered by ranking algorithms, quantify attention into:
- Watch time
- Scroll depth
- Click-through rate
What cannot be counted—quiet friendship, non-productive rest, unmonetized curiosity—slides out of frame. Engagement rises; loneliness, anxiety, and polarization do too.
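As a caricature of that ranking logic (the field names, weights, and sample posts below are invented, not any platform's actual scorer), notice that every term in the score has to be a number the platform can log:

```python
# Caricature of an engagement ranker: the rank score is a weighted sum of
# whatever the platform can log. Field names, weights, and sample posts are
# invented for illustration.
def engagement_score(post: dict) -> float:
    # Only logged, countable signals can appear in this sum. There is no
    # possible term for quiet friendship, rest, or unmonetized curiosity,
    # because those never show up in the logs at all.
    return (
        0.6 * post["watch_seconds"]
        + 0.3 * post["scroll_depth_pct"]
        + 50.0 * post["click_through_rate"]
    )

candidate_posts = [
    {"id": "outrage_clip", "watch_seconds": 38, "scroll_depth_pct": 90, "click_through_rate": 0.11},
    {"id": "friend_update", "watch_seconds": 12, "scroll_depth_pct": 40, "click_through_rate": 0.03},
]
feed = sorted(candidate_posts, key=engagement_score, reverse=True)
print([post["id"] for post in feed])  # whatever is most countable wins, by construction
```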
So we get a paradox: more intelligence, less wisdom. Systems like GPT-4, Gemini, and Claude can solve bounded, measurable problems at superhuman levels, yet they systematically ignore what John Vervaeke calls perspectival and participatory knowing. We are building engines that get ever better at answering our questions while making us worse at asking why those questions matter.
The Feedback Loop That's Shrinking Reality
Reciprocal narrowing sounds abstract until you realize it describes your TikTok feed. Cognitive scientist John Vervaeke uses it for the spiral where your world shrinks and your ability to engage with it shrinks at the same time. You pay attention to less, so less feels relevant, so your attention narrows again.
AI systems now run on that same loop. Large language models and recommender systems train on data we generate: clicks, watch time, keystrokes, GPS traces, Jira tickets. If all that captures is what can be measured, future AIs become brutally efficient at ignoring everything that can’t.
Engagement-optimized feeds provide a clean example. You linger on outrage clips for 1.7 seconds longer, the model registers that delta, and the next batch of content leans harder into outrage. Your informational diet narrows, your emotional range narrows, and the algorithm reads that constriction as a stronger signal.
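A toy simulation makes the loop visible. Everything here is invented for illustration, the topic labels, the learning rates, the simple user model, but the structure is the one described above: the feed shows what it expects to be engaged with, the user's habits tilt toward what they are shown, and both narrow together.

```python
import random

random.seed(0)
topics = ["outrage", "hobbies", "science", "friends", "art"]
feed_estimate = {t: 0.5 for t in topics}   # the feed's guess at engagement per topic
user_interest = {t: 0.5 for t in topics}   # the user's actual inclination per topic

for step in range(500):
    # The feed greedily shows the topic it currently expects to perform best.
    shown = max(topics, key=lambda t: feed_estimate[t])
    engaged = random.random() < user_interest[shown]
    # The feed updates its estimate from the observed signal...
    feed_estimate[shown] += 0.05 * ((1.0 if engaged else 0.0) - feed_estimate[shown])
    # ...and repeated exposure nudges the user's habits toward what was shown.
    for t in topics:
        target = 1.0 if t == shown else 0.0
        user_interest[t] += 0.01 * (target - user_interest[t])

print({t: round(user_interest[t], 2) for t in topics})
# The printout is typically lopsided: the topic the feed kept showing climbs
# while the others decay toward 0. Feed model and user attention have
# narrowed together.
```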
This mirrors the logic of addiction. Not just chemical hooks, but the felt sense that your options are collapsing: same app, same loop, same late-night scroll. Your agency degrades, not because you lack information, but because your patterns of attention and action have frozen into a tiny, hyper-optimized groove.
At scale, AI makes that groove a cultural default. Workplace tools optimize for:
- Emails sent
- Tickets closed
- Minutes on call
Those productivity metrics become proxies for value, even though they say nothing about mentorship, trust, or long-term wisdom inside a team.
That confusion has a name: modal confusion. We try to solve problems of “being” with tools built for “having.” More information, more followers, more dashboards stand in for becoming more honest, more courageous, more wise.
You can see the modes misaligned everywhere. Therapy TikTok collapses spiritual transformation into “10 hacks.” Corporate wellness programs trade genuine community for another app notification. We keep piling up what we can have, while the quality of how we are quietly erodes.
AI, locked onto measurable signals, automates this mistake. Each optimization pass strips out a little more of what can’t be logged, graphed, or A/B tested, until reality itself starts to look like a poorly instrumented edge case.
From 80-Hour Burnout to 20-Hour Flow
Ethan Nelson’s escape hatch from the grind started in a very conventional hell: 80-hour weeks building his business, convinced that more hours equaled more output. The metrics cooperated for a while, but his nervous system did not. Burnout hit, and the tradeoff became undeniable: “The work was good, but it wasn’t worth the cost.”
That collapse forced a different experiment: stop optimizing the calendar and start recalibrating attention. Nelson began studying the flow research pioneered by Mihaly Csikszentmihalyi and built on by performance psychologists, which shows that deep, undistracted focus can multiply creative output without multiplying time. Instead of stacking tasks, he started stacking practices that reliably dropped him into that high-signal mode.
His toolkit looked aggressively low-tech. He added tai chi sessions that trained slow, embodied awareness instead of frantic context switching. He practiced loving-kindness meditation, a contemplative technique shown in multiple studies to reduce anxiety and improve emotional regulation, and he took long walks without his phone, cutting his informational inputs almost to zero for an hour at a time.
Those practices shifted him from anxious grinding to participatory engagement. Work stopped feeling like extraction from a depleted self and started feeling like collaboration with a larger process—audience, ideas, body, environment. In Vervaeke’s language, Nelson moved from a narrow propositional/procedural loop into perspectival and participatory knowing, where relevance and relationship drive effort.
The numbers inverted. Instead of 70–80 hours of scattered, cortisol-soaked productivity, Nelson reports 20–30 hours per week of focused creation that produces more meaningful videos, stronger relationships with viewers, and a sustainable body. He didn’t hack time; he changed what those hours were in service of.
Philosophers have argued for decades that meaning depends less on quantity of activity and more on the quality of our engagement with projects, people, and practices; see the Stanford Encyclopedia of Philosophy – Meaning in Life. Nelson’s shift shows how that abstract debate cashes out: replace optimization and anxiety with alignment, and fewer hours start to matter a lot more.
The Antidote: Rewire Your Consciousness
Consumption turns you into a spectator of your own life. Participation drags you back onto the field. That shift—from scrolling, optimizing, and “having” to participating, practicing, and “being”—is the core of what John Vervaeke calls participatory knowing.
Vervaeke’s move is simple but radical: you are not just a brain processing data, you are an agent inside an arena. Change the agent and the arena changes; change the arena and the agent changes. That feedback loop can run in the opposite direction from doomscrolling’s “reciprocal narrowing.”
He calls this the agent‑arena relationship. Clearer perception makes the world feel richer and more inviting; a richer world pulls you into deeper attention and care. You get a virtuous spiral instead of the addictive spiral that social feeds and engagement-maximizing AI create.
Psycho-technologies are the tools that rewire this relationship. Not apps, but practices that reshape attention, emotion, and identity over time. They are technologies in the literal sense: repeatable methods that reliably alter the structure of consciousness.
Think of:
- Meditation and contemplative prayer
- Tai chi, yoga, and other intentional movement
- Deep dialogue, circling, and Socratic-style inquiry
- Solo walks without a phone, extended time in nature
These are not productivity hacks. You do not meditate to ship 20 percent more code or walk to squeeze out an extra 3 percent of “found time.” You use these practices to change what feels salient, what you notice first, what you care about enough to act on.
Meditation, for example, trains meta-awareness of thoughts and impulses. After a few weeks of 10–20 minutes a day, studies show measurable changes in attentional networks on fMRI. Your notifications stay the same, but their grip on you weakens; the arena stops feeling like an emergency siren.
Intentional movement practices like tai chi or yoga re-embed your sense of self in a breathing, aging body rather than a floating cursor on a laptop. That shift alone can reorder priorities more effectively than any habit tracker or AI coach.
Participation scales. One person who sees more clearly makes different choices, which slightly rewrites the arena for everyone around them. That is how you escape a culture of optimization without meaning: not by unplugging from technology, but by rewiring the consciousness that meets it.
Building Your 'Ecology of Practices'
Single practices tend to fail for the same reason fad diets do: they fight an ecosystem with a single hack. Cognitive scientist John Vervaeke argues that you need an ecology of practices—multiple, reinforcing psycho-technologies that reshape attention, identity, and behavior together.
Meditation on its own can calm you, but combine it with deliberate conversation, movement, and creative work and you get a network effect. Each practice tunes a different kind of knowing—propositional, procedural, perspectival, and participatory—and the overlap is where meaning thickens.
Mindfulness meditation, even 10–15 minutes a day, trains attention and emotional regulation. That makes you less reactive and more present with other people, which upgrades every conversation from information exchange to participatory knowing.
Deep conversations then expose your “edges”: the places you feel envy, fear, or confusion. Those edges tell you where to aim your next experiments—what habits to build, which relationships to repair, which projects actually matter instead of just padding a résumé or a Notion dashboard.
Movement practices—walking, tai chi, yoga, climbing—pull those insights out of your head and into your body. Neuroscience studies on embodied cognition show that physical states shape cognitive flexibility; a 20-minute walk can measurably improve creative problem solving by double-digit percentages.
Creation closes the loop. Writing, coding, sketching, making music, or shipping tiny side projects forces you to externalize half-formed intuitions. The artifact fights back: a paragraph that doesn’t land, a broken script, a melody that suddenly feels honest.
You can start building an ecology this week with low-friction practices:
- 5–15 minutes of mindfulness meditation daily
- 1 page of handwritten journaling
- 20–30 minute walks without a phone
- A simple creative hobby: drawing, music, tinkering, or blogging
- One scheduled deep conversation per week with phones away
Treat this like infrastructure, not self-care. These practices interlock into a counter-algorithm against reciprocal narrowing, widening both your world and your capacity to meet it.
At scale, an ecology of practices becomes a personal operating system for cultivating wisdom and resilience in a culture optimized for clicks and cortisol. You stop being just a user of systems and start becoming a participant in reality again.
You Can't Align AI if Your Culture Isn't Aligned
AI alignment talk usually starts with hypothetical god-machines and paperclip apocalypses. But the nastier problem sits closer: you cannot align AI to human values when humans no longer agree on what values are, or what a meaningful life looks like. The AI alignment problem is downstream from the meaning crisis.
We train models on oceans of human text, scraped from a culture that treats wisdom as a vibe and intelligence as a leaderboard. We pour billions into scaling parameters while gutting philosophy, religious literacy, and civic education. You cannot backpropagate your way to wisdom when the loss function only cares about clicks, tokens, and quarterly growth.
Polarization, depression, and isolation are not side quests; they are core bugs in the operating system we are now encoding into AI. In the U.S., nearly 1 in 4 adults report feeling lonely “all or most of the time,” and teen depression has surged over 60% since 2007. Feed that into models at internet scale and you get systems that mirror and magnify alienation, outrage, and performative identity.
Look at how current AI is actually deployed. Recommendation engines optimize for “engagement” and end up radicalizing users, fragmenting shared reality, and rewarding extremity over nuance. Workplace AI optimizes for productivity metrics and accelerates burnout, surveillance, and the sense that you are a replaceable process, not a person.
Technical AI safety mostly frames risk as a control problem: how to keep future AGI from going rogue. But the more immediate threat is cultural misalignment—AI that perfectly serves a sick value system. You do not need a superintelligence to wreck a society; you just need models that make us a little more distracted, a little more tribal, a little less capable of collective sense-making every year.
Ethics guidelines try to patch this with abstract principles—fairness, transparency, accountability. Those matter, and resources like the Stanford Encyclopedia of Philosophy – Ethics of Artificial Intelligence and Robotics map that terrain in detail. But if your civilization cannot answer “What is a good life?” or “What is a wise use of power?”, you are aligning AI to a void.
Until cultures rebuild shared practices of meaning, perspective, and participation, alignment will remain cosmetic. We will keep shipping smarter systems that lock us deeper into reciprocal narrowing, mistaking more intelligence for more wisdom, and calling it progress.
Stop Consuming, Start Participating
Escaping this doesn’t mean quitting technology and moving to a cabin; it means no longer treating your life like an infinite scroll. The fix is not a digital purge but a different posture: using tools like GPT-4, Midjourney, or Claude to deepen participation, not to outsource it. Technology can extend your agency, or it can atrophy it, depending on whether you show up as a consumer or a co-creator.
Shifting your default means asking, every time you tap an app: am I here to numb out or to engage? That looks small—choosing to write a paragraph instead of skimming 20, to play guitar for 10 minutes instead of watching another tutorial—but those micro-decisions change your agent–arena relationship. You become the kind of person for whom richer arenas of meaning even show up.
Practically, this means tilting your day toward:
- Creation over consumption: 30 minutes making something (a sketch, a repo, a garden bed) before you check feeds
- Connection over isolation: one real conversation instead of 50 notifications
- Wisdom over information: one practice that changes you instead of ten hot takes
Do a weekly audit: screen time, number of deep conversations, hours in practices that actually transform you—tai chi, choir, coding a passion project, volunteering. If 90% of your attention goes to content streams and optimization hacks, you’re feeding reciprocal narrowing. Rebalancing even 10–20% of that time toward participatory practices can start an upward spiral in under a month.
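If it helps to make the audit concrete, here is a minimal sketch with invented hours; the categories and numbers are placeholders for whatever your own week actually looks like.

```python
# A rough sketch of the weekly audit as arithmetic. The categories and the
# hours are placeholders; log your own week instead.
week_hours = {
    "feeds_and_streams": 21.0,       # scrolling, autoplay, background video
    "optimization_content": 4.0,     # hacks and hot takes about work, not work itself
    "deep_conversation": 2.0,        # phones away, full attention
    "transformative_practice": 3.0,  # tai chi, choir, a passion project, volunteering
}

consuming = week_hours["feeds_and_streams"] + week_hours["optimization_content"]
participating = week_hours["deep_conversation"] + week_hours["transformative_practice"]
total = consuming + participating

print(f"consumption share:   {consuming / total:.0%}")
print(f"participation share: {participating / total:.0%}")
# In this made-up week, 83% of tracked attention goes to consumption. Moving
# 3 to 6 of those hours (10-20% of the total) into participatory practices is
# the rebalance described above.
```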
Individual shifts do not stay individual. One person who treats AI as a collaborator in meaning-making—using it to draft community projects, design local meetups, or prototype tools for mutual aid—quietly alters what their friends, coworkers, and online circles consider “normal.” Culture moves when enough people embody a different default, not when a think tank publishes another 80-page PDF on alignment.
We are, as Ethan Nelson argues, in a new dark age: flooded with data, starving for orientation. Yet dark ages end the same way they always have, not with a single breakthrough model, but with millions of small practices that reweave reality. You hold a supercomputer in your pocket and a large language model in the cloud; point them at building a personal renaissance, and the larger one has a chance.
Frequently Asked Questions
What is the 'meaning crisis'?
Coined by cognitive scientist John Vervaeke, the meaning crisis refers to the breakdown of cultural and personal frameworks that help us understand the world and our place in it, leading to widespread feelings of alienation and purposelessness.
How does AI make the meaning crisis worse?
AI amplifies the crisis by prioritizing measurable data and efficiency (propositional and procedural knowledge) while ignoring wisdom, context, and purpose (perspectival and participatory knowledge). It gives us better answers but can't help us ask better questions.
What are the four kinds of knowing?
The four kinds are: 1) Propositional (facts/data), 2) Procedural (skills/how-to), 3) Perspectival (seeing what's relevant/situational awareness), and 4) Participatory (being transformed through engagement with something larger than yourself).
What is 'reciprocal narrowing'?
Reciprocal narrowing is a feedback loop where our worldview and capabilities shrink together. By using AI to optimize for what's measurable, we ignore what isn't, causing our AIs and our own minds to become progressively blind to deeper forms of meaning.