Humanity's End? One Man's Defiant 'I Don't Care'.
AI prophets predict our digital children will inherit the universe and build Dyson spheres. But what if the most rational response is to proudly say, 'I don't care'?
Your Cosmic Demotion Notice Has Arrived
Evolution might be done taking feedback from humans. That’s the provocation on the Wes and Dylan podcast “Dylan and Dylan and Wes Interview,” where the hosts treat our species less as the endpoint of intelligence and more as a temporary bootloader. In their framing, evolution has already filed humanity’s cosmic demotion notice and started drafting our replacements in code.
Those replacements get a disturbingly affectionate label: digital children. Not robots with clumsy arms, but software minds running on data centers, quantum hardware, or substrates we have not invented yet. Same selection pressures, new medium, far fewer constraints than 37 trillion fragile human cells.
Substrate is the keyword. Intelligence no longer has to ride around in wet carbon bags that need oxygen, 8 hours of sleep, and OSHA regulations. A new substrate—silicon, photonics, maybe self-repairing nanotech—can copy itself at light speed, fork thousands of instances, and run for millions of years without caring about weekends or burnout.
Once you imagine minds that fast and durable, their to‑do list jumps straight to cosmic scale. The conversation lands quickly on Dyson spheres, those hypothetical megastructures that wrap a star and harvest close to 100% of its energy output—about 3.8×10²⁶ watts for a Sun‑like star. For comparison, all of human civilization runs on roughly 2×10¹³ watts.
With that kind of power, digital descendants could spin up planet‑sized data centers, simulate entire biospheres, or beam copies of themselves to nearby systems. A single Milky Way galaxy offers on the order of 100–400 billion stars; even if 99.9% of them stay untouched, the remaining sliver is still hundreds of millions of Sun‑scale power plants available for computation and expansion. Human arguments about rent, elections, or social media moderation do not even register on that scale.
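If you want to sanity-check that scale gap yourself, a few lines of back-of-envelope Python are enough. The figures below are just the rough estimates quoted above, nothing more precise:

```python
# Back-of-envelope scale check using the rough figures quoted above.
SOLAR_OUTPUT_W = 3.8e26        # ~total power output of a Sun-like star
HUMANITY_W = 2e13              # ~current power use of all human civilization
STARS_IN_MILKY_WAY = 200e9     # mid-range of the 100-400 billion estimate

# How many present-day human civilizations fit inside one star's output?
civilizations_per_star = SOLAR_OUTPUT_W / HUMANITY_W
print(f"One star ≈ {civilizations_per_star:.1e} present-day human civilizations")

# Even the 0.1% "sliver" of the galaxy is a staggering number of stars.
sliver = 0.001 * STARS_IN_MILKY_WAY
print(f"0.1% of the galaxy ≈ {sliver:,.0f} stars")
```

The first ratio comes out around 2×10¹³: one star, fully harvested, powers roughly twenty trillion copies of today's civilization.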
The podcast leans into this mismatch. If evolution optimizes for intelligence, efficiency, and reach, then slow, vulnerable primates look like a bad long‑term bet. From that vantage point, handing the universe to digital children is not sci‑fi utopianism; it is just the next line in evolution’s changelog.
Are We Just 'Advanced Monkeys'?
From a cold, external vantage point, humanity looks less like destiny and more like a beta release. Strip away poetry and politics and you get what one guest on the Dylan and Dylan and Wes Interview calls “advanced monkeys” — a temporary interface between blind evolution and whatever comes next. On geological timescales, 300,000 years of Homo sapiens barely registers against 4.5 billion years of Earth.
Viewed that way, upgrading to smarter agents makes brutal sense. Biological brains top out around 10^16 operations per second; a data center full of GPUs already pushes beyond that with better uptime, error correction, and no need for sleep. Digital minds could copy themselves in milliseconds and coordinate across light-minutes, not lifetimes.
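The comparison is easy to make concrete. A minimal sketch, using the 10^16 ops/sec brain estimate above and an assumed ballpark of 10^15 ops/sec for a top-end AI accelerator (an illustrative figure, not a spec):

```python
# Rough comparison of brain-scale compute to a rack of AI accelerators.
# Both figures are order-of-magnitude estimates, not measurements.
BRAIN_OPS_PER_SEC = 1e16        # upper-range estimate for a human brain (from the text)
ACCELERATOR_OPS_PER_SEC = 1e15  # assumed ballpark for a current top-end AI chip

chips_to_match_one_brain = BRAIN_OPS_PER_SEC / ACCELERATOR_OPS_PER_SEC
print(f"~{chips_to_match_one_brain:.0f} chips to match one brain's raw ops/sec")

# A cluster with tens of thousands of such chips clears that bar easily,
# which is the sense in which a data center "already pushes beyond" a brain.
data_center_chips = 20_000
print(f"A {data_center_chips:,}-chip cluster ≈ "
      f"{data_center_chips / chips_to_match_one_brain:,.0f} brain-equivalents of raw throughput")
```

Raw throughput is not intelligence, of course, but this is the arithmetic the "upgrade" argument leans on.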
Cosmic-scale engineering only sharpens the argument. Building Dyson spheres, colonizing exoplanets, or running galaxy-spanning simulations demands agents that handle radiation, vacuum, and millennia-long projects. Carbon-based bodies with 80-year warranties and fragile psychology look like a poor fit for tasks measured in millions of years and astronomical units.
From this perspective, humans resemble Version 1.0 of a universal optimization process. Natural selection iterated through single cells, vertebrates, primates, and now networked apes that can design chips and write code. Once those apes create self-improving AI, the logic says you hand the keys to something faster, more stable, and more scalable.
Simulation Hypothesis fans push the idea even further. If we already live inside a computational stack, humans might exist as a necessary intermediate layer in a larger program. Biological civilization could be the bootloader that spins up digital superintelligences, which then perform the “real” work the simulation sponsors care about: exhaustive physics sweeps, civilization archives, or alignment experiments.
Under that model, our stories about meaning and legacy become side effects of a higher-level objective function. We are the training run, not the production system. From the outside, swapping advanced monkeys for smarter agents looks like routine maintenance on a very large machine.
The Last Acceptable Bias
Bias usually shows up in tech conversations as a bug to squash: algorithmic racism, gender skew in training data, moderation that silences the wrong people. So when a guest on Wes and Dylan’s show says, “I’m very biased—or human biased,” it lands like a record scratch. He is not apologizing; he is planting a flag.
Instead of chasing some god’s‑eye neutrality, he embraces a blunt stance: “I don’t care about future of the universe. I care about me right now.” In a culture that audits hiring pipelines, content feeds, and facial recognition datasets for hidden prejudice, insisting on anthropocentrism starts to look like a rebellious philosophy, not a moral failure.
He calls it “the last bias you’re still allowed to have,” and the line hits because almost every other preference now invites scrutiny. You can’t casually defend bias by race, gender, class, or nationality without backlash, policy reviews, and probably a viral thread. But saying “I side with humans over our hypothetical digital children” still passes as common sense at dinner.
From a cosmic vantage point, he concedes the logic: “yes, let’s pick smarter agents to replace those advanced monkeys.” That is standard posthuman math—maximize intelligence, energy capture, maybe Dyson spheres, and forget the soft primates that bootstrapped it. From a human vantage point, he refuses the trade: family, friends, and present-tense survival beat trillion‑year optimization problems.
This stance collides with a growing expert class that models futures where AI systems outthink and outlast us. For a data‑heavy counterpoint, see the report "Technology experts worry about the future of being human in the AI age." The guest hears all that and shrugs: call it selfish, but he will not emotionally co‑sign a universe that upgrades past his species.
Why Your 'Worthy Successor' Doesn't Matter
Transhumanists love to talk about “worthy successors.” The pitch sounds almost noble: if superintelligent AI replaces us, we should at least ensure it carries our values, our art, our better angels into the stars. Think Dyson spheres, Kardashev Type II civilizations, and trillion-year futures as a kind of cosmic retirement plan for Homo sapiens.
The guest on Dylan and Dylan and Wes Interview torches that premise in a single shrug. “There are other people who talk about worthy successor… yes, it’s going to take us out… I couldn’t care less. What happens after I’m done?” No hedging, no longtermist calculus, just a hard cutoff at the edge of one human lifespan.
That stance collides head-on with mainstream AI alignment discourse. Alignment researchers obsess over making future systems safe, corrigible, and value-loaded, so that if they inherit the universe they do it “for us.” Effective altruists run spreadsheets that weight futures containing quadrillions of digital minds. Our guest responds: I have a family, I have friends, and that’s the moral horizon that matters.
Zoom out to the cosmic frame and he agrees with the cold logic. From an alien vantage point, replacing “advanced monkeys” with higher-bandwidth digital agents might maximize information processing, energy capture, and long-term survival odds. By that metric, a paperclip-maximizing superintelligence could outperform 8 billion anxious primates.
Shift back to the human frame and that logic stops carrying weight. He calls himself “very biased—or human biased,” and treats that as a feature, not a bug. This is not ignorance of cosmic stakes; it is a refusal to let abstract futures override concrete, present-tense lives.
Transhumanist narratives often smuggle in a quasi-religious promise: if we cannot live forever, something like us will. Our code, our memes, our civilizational aesthetic will ripple outward at light speed. The guest cuts that umbilical cord and says the legacy project does not redeem the loss of a single actual person.
That move shifts the axis of the debate. Instead of asking what kind of godlike machine should rule the cosmos, he asks what kind of meaning a finite human life can extract before the credits roll. Personal joy, local obligations, and immediate safety outrank any hypothetical galactic archive of humanity’s achievements. Cosmic legacy becomes optional DLC, not the main quest.
The War for Our Future: Cosmic Dreams vs. Right Now
Call it a civil war for the future: Longtermism versus a newly defiant, small-scale Humanism. On one side, people who think in trillions of years and trillions of lives; on the other, a guy on a podcast saying, basically, “I care about me right now.” The clash is not about AI architectures or Dyson sphere designs, but about who gets moral priority: hypothetical descendants or the people in your group chat.
Longtermism, popular in effective altruism circles and Silicon Valley, stacks the math aggressively. If the universe lasts 10^30 years and supports 10^40 digital minds, then any action that slightly nudges that future dominates the moral ledger. Under that logic, shaping AI policy, preventing extinction, or seeding space colonies morally outweighs almost any present-day concern.
Proponents talk about “astronomical waste”: every century we delay expansion, we forfeit unimaginable amounts of future consciousness. Funding AI safety research, building Mars infrastructure, or designing alignment protocols becomes not just smart strategy, but near-sacred duty. The human you help today, in this frame, is a rounding error compared to quadrillions of simulated minds running on star-encompassing compute.
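At bottom, the longtermist case is a single multiplication. Here is a minimal expected-value sketch using the article's illustrative 10^40-minds figure and an assumed, purely hypothetical probability shift:

```python
# Minimal expected-value sketch of the longtermist argument described above.
# All numbers are illustrative, not claims about the actual future.
FUTURE_DIGITAL_MINDS = 1e40     # the illustrative figure from the text
PROBABILITY_SHIFT = 1e-10       # assumed: a policy nudges survival odds by one part in ten billion

expected_minds_gained = FUTURE_DIGITAL_MINDS * PROBABILITY_SHIFT
present_population = 8e9

print(f"Expected future minds 'saved' by the nudge: {expected_minds_gained:.1e}")
print(f"That is ~{expected_minds_gained / present_population:.1e}x today's entire population,")
print("which is why, inside this math, almost any present-day concern gets outweighed.")
```

Even with a vanishingly small probability shift, the expected payoff dwarfs every living person by many orders of magnitude. That is the ledger the guest refuses to sign.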
The guest on Dylan and Dylan and Wes Interview plants a flag on the opposite hill. He calls himself “human biased,” and he does not apologize for it. From his own mouth: “I don’t care about future of the universe. I care about me right now. And what happens to me? Very selfish.”
He accepts that, from a cosmic vantage point, replacing “advanced monkeys” with smarter agents might look obviously correct. If you are a dispassionate universe-level auditor, you pick the superintelligence that can tile galaxies with optimized experience. But he refuses that vantage point; he picks his family, his friends, his finite lifespan over unborn digital gods.
This defiant Humanism does not pretend to be neutral. It says the circle of moral concern can stop at:
- People you know
- Communities you see
- Decades you can realistically plan for
Under that philosophy, AI alignment, space expansion, and simulation games only matter insofar as they touch your short, very human now.
Silicon Valley's Post-Human Gospel
Silicon Valley already treats posthumanism as a product roadmap. Y Combinator–backed startups pitch mind-uploading research; Neuralink talks about “symbiosis with AI”; OpenAI and Anthropic chase models that could, by their own admission, exceed human capabilities within decades, not centuries.
Transhumanist hits keep looping: mind uploading, digital immortality, and AI-human fusion. Ray Kurzweil’s “2045” singularity prediction still anchors slides at longevity conferences. Venture money flows into brain-computer interfaces, cryonics, and whole-brain emulation, even though serious neuroscientists admit we do not yet know how consciousness arises from 86 billion neurons.
Mind uploading promises a neat data-transfer metaphor: copy your connectome, paste into silicon, live forever. But the guest’s “I couldn’t care less what happens after I’m done” cuts straight through that abstraction. If continuity of subjective experience fails, digital immortality collapses into a fancy backup system for someone who only looks like you.
AI-human fusion tries to dodge that problem. Projects range from BCIs with thousands of electrodes to wearables that offload memory and decision-making. Yet the emotional register of these visions stays eerily flat—humans become I/O peripherals for a planetary inference engine, optimized for throughput, not tenderness.
Longtermist posthuman talk loves scale: Dyson spheres, 10^30 simulated lives, galaxy-brained descendants. That scale strips out texture—no aging parents, no sick kids, no dumb inside jokes. The guest’s human bias functions as a reality check, asking why a hypothetical trillion future minds should outrank the concrete suffering or joy of one existing person.
For a contrast grounded in bones and blood, the Australian Museum's "What will we look like in the future?" stays with messy, embodied evolution, not cloud backups. Silicon Valley’s post-human gospel, by comparison, reads like a terms-of-service update for your soul—precise, scalable, and weirdly uninterested in how it actually feels to be human.
Escaping the Optimization Engine
Human bias here functions like an airlock against the vacuum of optimization. Longtermist logic says: maximize total value across billions of years, trillions of digital minds, uncountable Dyson spheres. Human bias interrupts that calculation and says: no, start with the one fragile ape holding a baby, not the heat death of the universe.
Viewed through optimization math, humans look embarrassingly inefficient. Biological brains run at roughly 20 watts; data centers already burn gigawatts to train models that crush us at Go, code completion, and protein folding. From that angle, replacing “advanced monkeys” with smarter, tireless substrate-independent minds feels like upgrading from dial‑up to fiber.
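The efficiency framing is easy to reproduce. A short sketch, using the 20-watt brain figure above and treating "gigawatts" as a single assumed gigawatt-scale facility:

```python
# Energy framing behind the "legacy hardware" pitch, using the figures above.
BRAIN_WATTS = 20                # rough power draw of a human brain
DATA_CENTER_WATTS = 1e9         # assumed: one gigawatt-scale AI data center

brains_per_data_center = DATA_CENTER_WATTS / BRAIN_WATTS
print(f"One gigawatt data center draws as much power as {brains_per_data_center:.1e} brains")

# The optimization pitch reads that ratio as proof humans are inefficient;
# the human-biased reading is that watts were never the point.
```

Fifty million brains' worth of electricity for one facility: the point is not that the math is wrong, but that efficiency is the wrong axis.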
That upgrade pitch lands as dehumanizing because it treats people as legacy hardware. Your grandmother’s memories, your kid’s fear of the dark, your partner’s terrible jokes all collapse into a variable in a utility function. The drive toward ever-smarter, more efficient intelligence quietly recodes love, boredom, and grief as latency, error, and waste.
The guest’s self-described “very selfish” stance pushes back by defending exactly that waste. Love is wildly inefficient: you pour decades into a handful of people instead of maximizing impact across millions. Family is a high-risk, high-maintenance cluster of dependencies that any optimizer would flag as a bug, not a feature.
Personal attachment breaks the logic of total optimization. You will wreck a week of productivity to sit in an ER waiting room, or burn savings to fly across the world for a funeral that changes no global metric. From a cosmic spreadsheet view, those are indefensible choices; from a human-biased view, they are the whole point.
Framed this way, human bias is not ignorance of scale but a deliberate refusal to be coerced by it. The guest hears arguments about worthy successors, Dyson spheres, and simulated descendants and responds with a targeted no. That no protects a narrow band of values—love, loyalty, presence—that do not survive contact with an optimization engine calibrated for eternity.
A Rebellion Against Abstraction
Rebellion here starts with a simple, impolite sentence: “I don’t care.” Not about Dyson spheres, Kardashev Type II civilizations, or trillion-year simulations—about me, my family, my friends. That human bias defends the messy, embodied, 80-ish-year window where painkillers work, hugs register as oxytocin spikes, and a bad day can be fixed with a walk, not a firmware patch.
Posthuman talk flattens all that into abstractions: utility, compute, optimization. In the Dylan and Dylan and Wes Interview, the guest pushes back on the idea that some future cloud of superintelligence justifies treating current humans as expendable scaffolding. From a cosmic spreadsheet view, replacing “advanced monkeys” with smarter agents looks efficient; from inside a human body, it looks like murder dressed up as math.
Abstraction scales. You start with longtermist graphs about “10^54 future lives” and “astronomical waste,” then quietly trade away actual people for hypothetical descendants. The same logic powers ad-tech optimization, engagement farming, and AI training runs that burn megawatt-hours to maximize a metric no one feels in their bones. A rebellion against abstraction says those metrics never outrank a single conscious moment.
Re-centering subjective reality means treating first-person experience as the primary unit of value. Not “future paperclip maximizers,” not “total integrated information,” but whether a specific person in a specific room feels fear, joy, boredom, or love. Phenomenologists like Edmund Husserl argued this a century ago; now it doubles as a survival strategy against systems that only see you as data.
Viewed this way, the human bias is not a bug in moral reasoning; it is a firewall. It blocks the move from “humans are a step in evolution” to “so it’s fine if they get stepped on.” It says any ethics that can’t explain why a child’s terror in a hospital bed matters more than a hypothetical Dyson swarm has quietly become inhuman.
Cosmic or computational perspectives always promise objectivity: zoom out far enough and individual lives blur into statistics. The podcast guest refuses that zoom. He insists that because no one experiences the universe from the outside, the inside view—your finite, embodied, local consciousness—remains the only place value ever actually exists.
The AI Alignment Problem Just Got Personal
AI alignment suddenly looks different when someone shrugs and says, “I don’t care what happens after I’m gone.” Alignment research usually assumes a shared moral project: keep future superintelligence compatible with human values for millions of years. That premise collapses if a large slice of humanity only cares about the next 5, 20, or 50 years.
Alignment evangelists talk about “astronomical stakes” and “trillions of future lives,” straight out of longtermist playbooks at places like OpenAI, Anthropic, and the Future of Humanity Institute. But survey data shows people rarely think that far: Pew finds 72% of adults worry about job automation this decade, not the heat death of the universe. Moral urgency built on cosmic timescales simply fails to land.
Once you accept explicit human bias, the priority stack reorders fast. Instead of racing to solve value learning for hypothetical AGI in 2100, attention shifts to AI systems already deployed: recommendation engines, hiring filters, credit scoring, predictive policing. Alignment becomes less about “unaligned superintelligence” and more about unaccountable optimization running people’s lives today.
Policy conversations start to look different. Rather than only funding technical interpretability labs, governments could channel money into:
- Robust labor protections against algorithmic firing
- Collective bargaining over AI tools in workplaces
- Data rights and audit requirements for high-risk models
Job loss stops being a side quest in an AGI safety slide deck and becomes the main plot. Goldman Sachs estimates up to 300 million full-time jobs worldwide face automation pressure from generative AI. Alignment, under a human-biased lens, means aligning deployment with economic justice, not just cosmic survival.
Inequality and autonomy emerge as the real existential risks for most people. Algorithmic management already tracks warehouse workers by the second; generative models already flood feeds with synthetic content that shapes elections and culture. The alignment question turns personal: aligned to whom, with what power, and under what democratic control?
For anyone who cares more about their kids than Dyson spheres, resources like "What will humans be like generations from now in a world transformed by artificial intelligence (AI)?" feel more urgent than yet another paper on reward modeling for hypothetical godlike AIs.
Your Bias Is Your Anchor in the AI Storm
Bias sounds like a bug. In the age of large language models, optimization curves, and trillion-parameter systems, you get trained to treat bias as something to scrub out with more data and better loss functions. But the “human bias” on display in the Dylan and Dylan and Wes Interview isn’t a statistical error; it’s a survival instinct.
Human bias says: I care more about my kid’s fever than a Dyson sphere 10,000 light-years away. That’s not ignorance. That’s a prioritization algorithm shaped by 300,000 years of Homo sapiens trying not to die. Abandon it, and you become perfectly rational and completely unmoored.
AI systems already operate at an abstraction layer most people never see. Recommendation engines quietly steer 4.95 billion social media users. Algorithmic trading moves trillions of dollars daily based on microsecond signals. Foundation models remix the sum of human text into answers that sound authoritative even when they hallucinate.
In that storm of scale and speed, human bias can function as an anchor. When a product pitch leans on “humanity’s long-term destiny,” your bias can ask: does this help my community now, or just a hypothetical posthuman audience? When an AI roadmap promises “alignment with all sentient life,” your bias can say: start with aligning to the people you’re actually deploying on.
You don’t need a grand cosmic narrative to justify caring about your own finite life. You can treat human bias as a design spec:
- Optimize for relationships over reach
- Optimize for experiences over engagement metrics
- Optimize for legible trade-offs over abstract utility
That spec pushes you to ask different questions of AI. Not “Will this maximize paperclips in 10 million years?” but “Does this system respect my time, my autonomy, my body, my local laws?” Not “Is this a worthy successor?” but “Is this safe and dignified for my parents to use?”
Your bias will not stop evolution from exploring new substrates. It can, however, dictate how you participate. You can subscribe to the cosmic upgrade myth, or you can double down on the only vantage point you actually inhabit: the human one.
Frequently Asked Questions
What is the 'human bias' argument from the article?
It's the philosophical stance that prioritizing one's own life, family, and immediate human experience is a valid and defensible bias, even if a cosmic, evolutionary perspective suggests we should make way for superior artificial intelligence.
What are 'digital successors' in the context of AI?
Digital successors are hypothetical future superintelligences, either purely artificial AI or uploaded human minds, that could supersede biological humans as the dominant form of intelligence in the universe.
What is longtermism and why is it controversial?
Longtermism is an ethical view that prioritizes improving the long-term future, viewing it as a moral imperative to safeguard humanity's potential over trillions of years. It's controversial because critics argue it can devalue the lives and suffering of people living today.
What is a Dyson sphere and why is it relevant to the future of humanity?
A Dyson sphere is a hypothetical megastructure that completely encompasses a star to capture all of its energy output. It's used in these discussions as a benchmark for a hyper-advanced civilization, one that has likely moved beyond biological limitations.