Anthropic's Dirty Secret: AI for Wall Street
The AI lab built on safety is now creating tools for Wall Street's biggest players. This isn't just a business move—it's a sign that AI is being captured by the elite.
The 'Safety' Darling's Shocking New Friend
Anthropic built its reputation on constitutional AI, a safety-first framework that bakes rules and values directly into its models. The company’s public story centers on alignment research, existential risk, and preventing runaway systems from turning into Skynet-grade disasters. Now that same lab is quietly rolling out sophisticated financial tooling for Wall Street, targeting hedge funds, banks, and quant shops that move trillions of dollars daily.
Early deployments go far beyond a chatbot that explains earnings calls. Anthropic is pitching agent-like systems that can ingest real-time market feeds, parse 10-Ks, and help construct DCF models and trading strategies at machine speed. Internal demos, according to people familiar with the product, emphasize automating work traditionally handled by junior analysts and quants, the exact layer that feeds institutional finance its edge.
That pivot collides head-on with Anthropic’s carefully curated safety image. A company founded in 2021 after a safety-driven split from OpenAI now wants a slice of the same high-frequency, alpha-hunting ecosystem that helped turn machine learning into an engine for rent seeking. Finance is where algorithmic opacity, asymmetric information, and “move fast, break markets” incentives already dominate.
Critics see a glaring contradiction: a lab that warns about AI amplifying systemic risk is wiring its models directly into one of the most systemically critical sectors on earth. Wall Street is not using Claude to write poetry; it is using Claude to find basis points, front-run slower players, and squeeze inefficiencies out of already brittle markets. Safety language starts to sound like branding when your biggest customers sit on trading floors.
David Shapiro, a longtime alignment commentator, captured the unease in his video "Anthropic SELLS OUT to Wall Street!" He opens by asking whether a company "supposed to be pro-safety" is now "getting in bed with Wall Street" and whether that is "a sign of elite capture." His framing cuts through Anthropic’s careful PR and goes straight to the core question: has the safety darling simply decided that, when the money gets big enough, all that alignment talk can coexist with building the next generation of quant infrastructure?
From Saving Humanity to Maximizing Profits?
Anthropic’s defenders have a ready-made answer for the Wall Street pivot: when you’re “safety-pilled,” anything that keeps Skynet at bay becomes morally acceptable. In this worldview, partnering with hedge funds and banks is not selling out; it is a necessary compromise so long as it funds alignment research and keeps Anthropic in the room when the real decisions about AGI deployment happen. If you sincerely believe misaligned AI could kill billions, then routing your models through Goldman Sachs looks like a rounding error on the cosmic ledger.
That logic leads to a stark, almost cartoonish trade-off: cyberpunk hellscape or extinction. Safety absolutists argue that a “cyberpunk” future—AI-enhanced corporations, militarized governments, ubiquitous biometric tracking, full-spectrum financial automation—is ugly but survivable. What cannot be tolerated, they say, is a world where some unaligned system goes full Skynet and wipes out humanity because the “good guys” refused to work with unsavory partners.
Yet the cyberpunk scenario no longer lives in sci-fi concept art. AI already powers:
- High-frequency trading and financial arbitrage
- Predictive policing and border surveillance
- Algorithmic management that squeezes workers by the minute
Plug Anthropic’s most capable models into that stack, and you accelerate a trajectory of concentration, not liberation.
So the question stops being abstract philosophy and starts sounding like corporate strategy: is this a principled moral trade-off or a convenient justification for chasing power and profit? Safety rhetoric frames Wall Street deals as a shield against catastrophe; cap tables and enterprise contracts suggest something closer to elite alignment. When the same models that could help with climate science or pandemic response get tuned first for quant desks, the priorities speak louder than the blog posts.
Anthropic still markets Claude as a tool “for All,” a democratizing force for universal benefit. Behind the scenes, reality tracks David Shapiro’s description of elite capture: stratified access, bespoke features for finance and defense clients, and regulatory influence that locks in incumbents. The gap between public mission and private deployment keeps widening, and each new Wall Street integration pushes Anthropic’s “for everyone” promise further into PR fiction.
The Playbook of Power: What is 'Elite Capture'?
Power doesn’t just hoard money; it hoards new tools. Elite capture describes the moment a technology that arrives wrapped in utopian language quietly routes its biggest gains to a small, well-connected class—corporate giants, financiers, and state security agencies—while everyone else gets the marketing slogans and throttled access.
Railroads in the 19th century promised to knit countries together and open markets to all. Instead, a handful of barons used control over tracks and freight rates to crush competitors and dictate who could participate in commerce. Lawmakers eventually had to invent antitrust law just to keep the system from calcifying into permanent monopoly.
Early commercial internet rhetoric in the 1990s sold a story of decentralization and permissionless innovation. Two decades later, a few platforms—Google, Amazon, Meta, Apple—dominate search, shopping, ads, and mobile distribution, taking a cut of nearly every transaction and attention stream. The same pattern shows up repeatedly: open frontier, rapid consolidation, then regulatory capture.
Elite capture follows a recognizable playbook:
- Control over infrastructure bottlenecks (rail lines, fiber, data centers, chips)
- Vertical integration across the stack
- Preferential access and pricing for major financial and government clients
- Heavy influence over the rules that govern everyone else
AI now sits squarely in that danger zone. Training frontier models already requires billions of dollars in compute, proprietary data deals, and cloud contracts that only a few firms can sign. Public or open alternatives exist, but they run at smaller scales and lag on capabilities, mirroring how “independent” ISPs faded against telecom incumbents.
Viewed through this lens, Anthropic’s quiet pivot into high-end financial tooling for Wall Street is not an isolated business decision; it is a data point in a broader consolidation story. The company’s own messaging on its official website lives alongside this emerging reality, and that tension is exactly where the stakes of AI’s future are being decided.
Six Red Flags That AI is Being Hijacked
Control is the first tell. When a handful of firms own the bottlenecks—advanced chips, hyperscale cloud, data centers, and cheap power—everyone else rents reality from them. Training a frontier model already runs into the hundreds of millions of dollars; that price tag alone locks most universities, startups, and public institutions out of the game.
Vertical integration tightens the vise. The same companies now race to own every layer of the stack: silicon design, cloud platforms, foundation models, app stores, and consumer interfaces. xAI’s Musk-branded data centers, Nvidia’s end-to-end AI "platform," and hyperscalers bundling models with proprietary tooling all push toward a world where switching providers becomes almost impossible.
Stratified access is where the hierarchy goes explicit. The most capable, highest‑context models run behind closed doors for defense contractors, hedge funds, and megabanks, while the public gets throttled, safety‑wrapped versions. Even OpenAI has admitted it withholds its strongest systems from broad release, citing compute limits; Shapiro’s point is that those limits mysteriously disappear when a $200 million defense contract lands.
Regulatory and narrative capture cement that power. Frontier labs help draft the very rules that govern them, then show up in front of Congress or the EU as neutral “experts.” When Sam Altman can walk from a Senate hearing to a private dinner with lawmakers, or Anthropic executives brief regulators on “responsible scaling,” the story about existential risk doubles as a lobbying framework that conveniently freezes smaller rivals out.
Rent-seeking optimization shows up in where AI actually gets deployed. Instead of curing cancer first, the big money flows into:
- High-frequency trading and financial arbitrage
- Hyper-targeted advertising and engagement hacking
- Worker automation and algorithmic management
Those use cases don’t expand the economic pie so much as re-slice it upward, turning AI into a tollbooth for every transaction, click, and gig.
Underpowered public alternatives complete the picture. Open‑source models like Stable Diffusion or LLaMA derivatives exist, but they trail closed systems by orders of magnitude in scale, training data, and inference budget. Without state‑backed compute, shared data infrastructure, or serious funding, “public AI” risks becoming the digital equivalent of a crumbling library across the street from a gleaming private research park.
The Unseen Chains: An Infrastructure of Control
Infrastructure is where AI’s lofty rhetoric about openness collides with hard economic reality. A handful of companies own the physical stack that everyone else must rent. That first bottleneck—who controls the compute—defines who even gets to play.
Start with cloud. AWS, Azure, and Google Cloud command roughly 65% of the global cloud market, according to most analyst estimates. If you want to train or deploy serious models, odds are you are paying one of those three gatekeepers by the GPU-hour.
Go a layer down and the grip tightens. TSMC manufactures around 90% of the world’s most advanced chips, the bleeding-edge nodes that power AI training clusters, hyperscale data centers, and high-frequency trading rigs on Wall Street. If TSMC catches a cold—geopolitics, earthquakes, export controls—the entire AI ecosystem gets pneumonia.
Then comes Nvidia, which effectively owns the accelerator tier. Nvidia controls close to 90% of AI accelerators in use today, from H100s in frontier labs to A100s quietly crunching risk models at hedge funds. Its CUDA software stack and proprietary networking turn raw chips into an ecosystem competitors struggle to escape.
Economists call this structure a natural monopoly: markets where enormous fixed costs and network effects push everyone toward a few dominant providers. No law forbids Nvidia from being the default accelerator, or TSMC from being the only viable fab for 3 nm wafers. The physics of semiconductor manufacturing and the capital costs of hyperscale cloud simply punish smaller entrants into irrelevance.
That matters because “competition” in AI is no longer about who has the cleverest algorithm in a GitHub repo. It is about who can secure tens of thousands of GPUs, priority access to TSMC’s advanced nodes, and favorable long-term contracts with AWS, Azure, or Google Cloud. Without that, your breakthrough model is a demo, not a product.
So when Anthropic builds tools for Wall Street, it does so on a hardware stack already captured by a microscopic club. Access to that stack is the real moat—and the real unseen chain—locking everyone else out.
The $300 Billion Moat Big Tech Is Building
Capital now functions as the sharpest instrument of elite capture in AI. Whoever can burn the most cash on chips, data centers, and talent sets the rules for everyone else. In 2023, Big Tech quietly bankrolled the future: by some estimates, large platforms supplied roughly 67% of all generative AI startup funding, turning “ecosystem support” into a dependency pipeline.
Startups pitch themselves as disruptors, but their cap tables tell a different story. When Microsoft, Google, Amazon, and Nvidia write most of the checks, they don’t just buy equity; they buy leverage over product roadmaps, infrastructure choices, and who gets premium access to frontier models. Dependency becomes structural, not temporary.
Now zoom out to 2025, where industry forecasts point to around $300 billion in AI-related spending in a single year. That includes hyperscale data centers, GPU clusters, undersea cables, and the power infrastructure to keep them running. For context, $300 billion rivals or exceeds the annual GDP of countries like Chile, Pakistan, or Finland.
No university, non-profit lab, or small nation-state can credibly match that burn rate. A well-funded academic consortium might scrape together a few hundred million dollars over several years; Big Tech treats that as a rounding error on a single GPU contract. The result is a capital moat so wide that “open alternatives” exist mostly as branding, not as peers.
This spending spree does not just buy hardware; it buys agenda-setting power. Whoever owns $300 billion of AI infrastructure can decide:
- Which languages and regions get good models
- Which industries receive tailored tools
- Which regulators get “help” drafting rules
Coverage in outlets like the Financial Times frames this as a race for innovation, but the structure looks more like enclosure. When a handful of firms control the capital stack, elite capture stops being a risk and starts looking like the default operating system for AI.
Writing Their Own Rules: The Capture of Governance
Regulatory power does not emerge in a vacuum; financial and structural dominance converts directly into political leverage through regulatory capture. Agencies that are supposed to police AI end up taking their cues from the companies they regulate, because those firms control the expertise, the jobs pipeline, and often the funding narratives that justify weak rules.
In Washington and Brussels, AI policy has turned into an invite-only conversation where the same half-dozen CEOs keep showing up. Sam Altman’s tour of Congress set the template: hours of private briefings, repeated hearings, and direct line access that no labor union or privacy NGO can match.
Lawmakers in the US and Europe now default to “frontier labs” as their primary advisors on AI safety and competition. When staffers don’t understand model weights, GPUs, or foundation models, they call Anthropic, OpenAI, Google, or Meta—and those firms happily draft the guardrails they will later be asked to obey.
That access disparity is not hypothetical. During negotiations over the EU AI Act, industry lobbyists reportedly filed thousands of pages of amendments, many copy-pasted into compromise texts, while civil society organizations struggled to get meeting slots or translation support for their proposals.
The UK AI Safety Summit at Bletchley Park made the imbalance explicit. Of roughly 100 invited organizations, a large share were corporations or industry-backed institutes, while trade unions, worker cooperatives, and grassroots digital rights groups occupied a token slice of the room.
Governments framed that summit as a global democratic conversation, but the agenda centered on what frontier corporations already wanted to talk about: model evaluations, compute thresholds, and voluntary safety commitments that lock in today’s giants as permanent stewards of “responsible” AI.
That is where narrative capture kicks in. By dominating podiums, press briefings, and closed-door roundtables, these companies define what AI is “really” about—existential risk, rogue systems, and sci-fi Skynet scenarios—while pushing issues like algorithmic wage theft or eviction scoring to the margins.
When existential risk monopolizes oxygen, present-tense harms become negotiable line items. Bias in credit scoring, automated union-busting, and mass white-collar displacement look like secondary concerns instead of core questions about who AI serves and who pays the price.
The Sound of Silence: What's Missing From the Boom
Silence can be data. When you zoom out from the AI gold rush headlines, the quiet gaps in the story say more about who this technology is actually for than any triumphant launch event or keynote.
Start with infrastructure. For all the rhetoric about AI as “like electricity,” there is no public equivalent of a power grid. The US National AI Research Resource (NAIRR) pilot has a proposed six-year budget that analysts peg in the low single-digit billions, while Meta reportedly spends that kind of money on GPUs in a single year just to keep its own models fed.
That asymmetry matters. Publicly funded compute remains a rounding error next to private AI buildouts measured in tens of billions from Microsoft, Google, Amazon, and Nvidia. If you are a university lab, a civic group, or a city government, you are effectively begging for scraps from the same clouds that sell priority access to hedge funds and defense contractors.
Governance looks just as lopsided. Boards at Anthropic, OpenAI, Google DeepMind, and xAI contain investors, founders, and former regulators—but not a single elected worker representative, community delegate, or independent civil-society director. The people most exposed to AI-driven layoffs, surveillance, and disinformation have zero formal voice in how frontier systems roll out.
Missing, too, are serious experiments in shared governance. No major lab has set up binding veto power for affected communities, worker councils with control over deployment decisions, or city-level oversight boards with access to model audits. Instead, “safety” lives inside internal red teams and advisory councils that can be thanked in blog posts and ignored in boardrooms.
Then there is the financial architecture quietly sketched behind closed doors. OpenAI’s CFO reportedly floated the idea of explicit government backstops for frontier AI—public guarantees if things blow up, private capture of upside if they don’t. That is textbook “socialized risk, privatized gain,” the same logic that turned 2008’s subprime implosion into a taxpayer-funded bailout.
Taken together, these absences form a pattern. No public-scale infrastructure, no shared governance, and early calls for state-backed insurance for private bets all point in one direction: AI built as a critical system, owned and steered by a narrow, well-capitalized elite.
Is 'AI Safety' Just a Trojan Horse?
Anthropic’s most ardent defenders insist that selling high-end models to Wall Street quants, hedge funds, and defense contractors is a necessary evil on the road to “alignment.” This is the provocative core of David Shapiro’s critique: long-term existential risk talk functions as a moral blank check for near-term power grabs. If you convince yourself Skynet is looming, almost any partnership starts to look like responsible stewardship rather than capture.
Effective altruists and rationalists inside labs like Anthropic frame their work as a literal survival project. In that mindset, cutting deals with militaries, intelligence agencies, and the biggest funds on Wall Street becomes not a compromise but a sacrifice for the greater good. A cyberpunk surveillance dystopia, they argue, still beats a paperclip-maximizer wiping out humanity.
That worldview quietly rewrites the ethics of AI deployment. Once you accept “we are the only ones who can stop Skynet,” then:
- Exclusive contracts with governments become “containment”
- Preferential access for megabanks becomes “risk testing”
- Secrecy and closed models become “security measures”
All of those also happen to entrench the incumbents holding the purse strings.
Control over the safety narrative then turns into a policy weapon. Frontier labs warn of rogue open-source models, bio-risk, and model exfiltration, and then propose safety rules that conveniently require billion-dollar compute clusters, dedicated red teams, and compliance departments. Startups, universities, and public labs cannot clear that bar; hyperscalers and frontier labs already have.
You can see the outline in calls for licensing regimes tied to FLOP thresholds, mandatory monitoring of training runs, and centralized incident reporting to government-vetted entities. On paper, those measures target “frontier” systems. In practice, they lock the frontier inside a small club of players who can afford the audits, lawyers, and custom silicon. Safety becomes a moat, not a guarantee.
Pro-safety elites often frame the tradeoff as binary: accept a tightly controlled, corporate-dominated AI landscape or roll the dice on extinction. That framing erases a third option: a democratically governed, publicly accountable AI ecosystem with strong labor protections, antitrust enforcement, and real public compute. Anthropic’s published alignment research showcases genuine technical progress, but who owns and governs those aligned systems remains a political choice, not a law of physics.
Your Move: Can We Reclaim AI's Promise?
Anthropic’s Wall Street pivot exposes how concentrated AI power already is, but it also clarifies where counter-pressure can come from. Open-weight projects like Llama 3, Mistral, and Stable Diffusion prove you do not need a $300 billion valuation and a hyperscale data center to run and adapt capable systems. You can fine-tune a 7B-13B model on a single high-end GPU and ship something useful, as the sketch below illustrates.
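To make that claim concrete, here is a minimal sketch of the single-GPU path, assuming the Hugging Face transformers, peft, bitsandbytes, and accelerate libraries. The model name and hyperparameters are illustrative choices, not recommendations, and the actual training loop (data loading, optimizer, evaluation) is omitted.

```python
# Minimal sketch: attach LoRA adapters to an open ~7B model so it can be
# fine-tuned on one high-end GPU. Model name and settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "mistralai/Mistral-7B-v0.1"  # any open ~7B checkpoint

# Load the frozen base model in 4-bit so it fits in roughly 24 GB of VRAM
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Train only small low-rank adapter matrices, not the full model
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

The design point is that only the adapter weights are trained; the quantized base model stays frozen, which is what keeps the hardware bill within reach of a lab, a co-op, or a hobbyist rather than a hyperscaler.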
Open source still hits a hard ceiling. Training frontier models demands tens of thousands of Nvidia H100s, megawatts of power, and data-center footprints that only Amazon, Microsoft, Google, and their closest friends can afford. Even the most radical open weights model still rents time on someone else’s racks.
Real pluralism in AI needs public compute that rivals private clouds. That means national or regional supercomputing facilities explicitly earmarked for academic labs, nonprofits, startups, and municipalities, not just defense contractors and Fortune 100s. Think “public library,” but for TPUs and GPUs, with transparent allocation rules and mandatory publication of results.
Governments already spend hundreds of billions on digital infrastructure; redirecting even 5–10% toward shared AI stacks would matter. Public research agencies could fund open training runs, require open data documentation, and back community audit teams that tear into foundation models the way security researchers probe operating systems. Without this, “AI for All” is just a slogan stapled to a cloud invoice.
Individual users are not powerless spectators. You can:
- Support digital rights groups like EFF, Fight for the Future, and AlgorithmWatch that track AI abuses and lobby against regulatory capture
- Email or call lawmakers to demand disclosure of AI lobbying meetings, compute subsidies, and government AI contracts
- Back open-source tools with your usage, bug reports, and donations instead of defaulting to whatever your cloud provider bundles
Critical conversation matters too. Ask vendors who owns your data, who gets early access to premium models, and what happens when the subsidy money runs out. Ask friends and coworkers who actually benefits from “AI productivity” at their job and who absorbs the risk. Reclaiming AI’s promise starts with refusing to pretend that Wall Street’s version is the only possible future.
Frequently Asked Questions
What is 'elite capture' in the context of AI?
Elite capture refers to the process where the development, benefits, and governance of AI technology become concentrated in the hands of a small group of powerful corporations, investors, and government entities, rather than serving the public good.
Why is Anthropic, an AI safety company, working with Wall Street?
Critics argue it's a sign of elite capture, where financial incentives and the need for capital override the original mission. The company's defenders might claim that working within powerful systems is necessary to guide AI development safely and secure resources to compete.
What are the main signs of AI centralization?
Key signs include control over essential infrastructure (chips, cloud data centers) by a few firms, vertical integration, preferential access for high-paying clients, and dominant corporate influence over government regulation.
Can open-source AI compete with companies like Anthropic and OpenAI?
While open-source models are advancing rapidly, they face significant hurdles. They often lag behind frontier models and require expensive hardware, which reinforces the dominance of large corporations that control the underlying infrastructure.