Knowledge workers spent the 20th century believing credentials meant stable income. Get a degree, accumulate expertise, get paid for it; a simple exchange.

The anomalies are everywhere now. Junior lawyers with top credentials watching their document review skills become worthless overnight. Creative workers with genuine taste losing income while the platforms that control distribution get rich. Companies hitting record valuations, then laying off the people who actually built the product.

The productivity gains are flowing to capital holders instead of the workers generating them.

The old framework said credentials lead to knowledge, and knowledge leads to value; a tidy, linear career progression. But that’s not what’s happening. Value is concentrating in weird corners now. Places where someone has to be sued (accountability barriers). Places where platforms control access to customers (distribution chokepoints). Places where you need tacit knowledge you can’t just extract from text.

Kuhn called these moments “crisis periods”: stretches when enough anomalies pile up that you need a completely new framework to make sense of what’s happening. Most professionals are still navigating by the old map.

Expertise is a commodity

ChatGPT launched in November 2022. By February 2025 it had fundamentally changed how knowledge workers operate. Legal AI handling document review that used to require armies of junior associates. Medical diagnosis systems matching specialist accuracy on scans.

I keep hearing the same thing from practitioners. “I don’t write code anymore. I review it.” “Document review used to be 60% of my job. Now it’s 5%.” “My entire job is knowing when the model’s wrong.” That last one stuck with me because it captures something important about what’s left after information processing becomes commodity work. Judgment about when the model fails and deciding what to trust.

But there’s a slight difference this time round: the diffusion rate is unprecedented. The printing press took 50 years to reach mass adoption. Calculators took 20. ATMs took 15. ChatGPT → 18 months.

These comparisons are rhetorically clean but methodologically messy. We’re comparing physical infrastructure rollout (printing presses) to software distribution (ChatGPT). Different adoption curves, different enabling conditions. The 18-month number measures account creation, not actual sustained use or economic impact. But the directional point holds: software-based technologies diffuse faster than hardware-based ones.

Technology diffusion usually follows predictable patterns: infrastructure dependency, capital barriers, and regulatory friction all slow things down. Physical infrastructure takes decades to build, but cloud services scale instantly. Networks that cost billions to build slow adoption; freemium distribution accelerates it. Regulated industries crawl, unregulated spaces sprint.

Which tells you which sectors transform fastest. Creative work moves at maximum speed (digital delivery, low capital requirements, minimal regulation). Healthcare slowest (physical presence required, FDA approval gates, heavy regulation). Legal work somewhere in the middle with its mixed constraints.

The question is no longer whether the professions will transform (they will). The question is whether the people within them can adapt faster than their jobs disappear. For knowledge workers operating in unregulated domains, I’m increasingly convinced the answer is no.

The old economic models no longer hold

David Autor’s research at MIT helped me understand the split that’s happening. AI automates routine cognitive tasks; document review, code templates, the stuff junior people used to do. But it struggles with non-routine judgment (partner-level decisions) and physical manipulation (elder care work).

So value concentrates at extremes. Judgment at the top, manual work at the bottom. The middle just disappears. Those junior lawyers who spent years mastering document review? That skill is suddenly worthless. Junior developers who learned CRUD apps discovering the ladder they climbed doesn’t reach anywhere useful anymore.

The Piketty Paradox: When AI Validates What Economists Rejected

Markets hitting records while companies lay off the people who built them. Productivity growth flowing to capital, not labor. The numbers are stark: Top 1% U.S. wealth share went from 32% in 2006 to 37% in 2021 (Federal Reserve’s Survey of Consumer Finances). Bottom 50% went from 2.5% to 2%. AI adoption is accelerating this divergence.

Thomas Piketty predicted exactly this pattern back in 2014 with Capital in the Twenty-First Century. His argument was straightforward: when return on capital (r) exceeds economic growth (g), inequality increases indefinitely. The rich save more, get higher returns, wealth concentrates automatically across generations.
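A back-of-the-envelope version of that mechanism, using illustrative rates rather than anything Piketty committed to precisely: if private wealth compounds at $r$ while national income grows at $g$, the wealth-to-income ratio evolves as

$$\frac{W_t}{Y_t} = \frac{W_0}{Y_0}\left(\frac{1+r}{1+g}\right)^{t}.$$

With $r = 5\%$ and $g = 1.5\%$ (roughly the historical magnitudes Piketty cites), the ratio grows by a factor of about 2.8 over a 30-year generation. No expropriation required; compounding alone does the work.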

Economists rejected this pretty much immediately. On theoretical grounds. Their reasoning went like this: capital faces diminishing marginal returns. Invest in more tractors, each one adds less value because you eventually run out of farmers to operate them. Labor and capital complement each other. You can accumulate all the hammers you want, but hammers lose value without hands to use them. Capital accumulation lowers interest rates, limiting r. Meanwhile labor productivity rises as capital becomes plentiful, which supports wages.

The whole argument rested on labor-capital complementarity.

AI could break this. If it achieves complete labor displacement.

Dwarkesh Patel and Philip Trammell make the case that if AI completely displaces human labor (not just automating specific tasks but actually replacing judgment, tacit knowledge, and physical manipulation), then capital returns stop diminishing, because human labor adds zero marginal value.

This is a conditional argument though, not a certainty.

With historical automation (tractors, assembly lines), you replaced specific tasks but still required complementary human labor. Someone had to operate the tractor, manage the factory, handle exceptions. Labor maintained marginal productivity. Diminishing returns on capital held.

But if AI achieves complete displacement, the dynamics change. AI handles routine tasks AND exceptions AND judgment calls. Each additional AI inference costs pennies. Training costs get amortized across billions of queries. Human labor’s marginal product approaches zero in automated domains. Capital compounds without requiring proportional labor scaling. The diminishing returns mechanism breaks down.

If this scenario plays out, Piketty may have been wrong about the past but could be right about the future. The same mechanism economists used to refute him (labor-capital complementarity) only works if labor maintains positive marginal value. Remove that assumption and his r > g inequality spiral becomes structurally inevitable.
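Here’s the textbook version of that pivot, stylized rather than anything Piketty or Patel and Trammell wrote down exactly. With Cobb-Douglas production $Y = K^{\alpha}L^{1-\alpha}$, the return on capital is its marginal product,

$$r = \frac{\partial Y}{\partial K} = \alpha K^{\alpha-1}L^{1-\alpha} = \alpha\left(\frac{L}{K}\right)^{1-\alpha},$$

which falls as $K$ grows relative to $L$; that is the diminishing-returns refutation in one line. If AI makes labor dispensable and production is better approximated by $Y = AK$ (effective compute doing the work of both factors), then

$$r = \frac{\partial Y}{\partial K} = A,$$

a constant that never declines no matter how much capital accumulates. The complementarity brake on $r$ is simply gone.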

There’s a second mechanism I didn’t appreciate until recently. The shift to private markets. By the time AI companies go public, exponential gains have already accrued to early investors. Ordinary citizens hit three barriers: accreditation requirements keep regular investors out of private markets. Information asymmetry makes intangible valuations (model capabilities, training data quality) opaque until deployment. And capital concentration means wealth managers control allocation while median households get locked out.

The data tells the story. The share of US corporate capital held privately rose from 8% in 2000 to 19% in 2024 (NVCA/PitchBook data). Compare this to the 1960s tech boom, when IBM and Xerox went public early and the middle class participated in the gains through pension funds.

The modern pattern looks completely different. OpenAI raises at a $157B valuation in private markets. By the time it goes public (if it ever does), early capital holders will have already captured orders-of-magnitude returns. High capital requirements for training ($100M+ for frontier models) create natural barriers favoring incumbent wealth.

Top 10% now hold 67.2% of total U.S. household wealth (Federal Reserve, 2024). That number keeps growing.

Why this time is different

Look, every technological transition faces similar concerns. And every time, new work categories emerge. People have been predicting technological unemployment for 200 years and been wrong every time. So why should AI be different?

Three things, if the trajectory continues:

First: speed of displacement outpaces adaptation. Previous transitions took 40-60 years. AI adoption compressed to 18 months. Human institutions (education, retraining, social safety nets) operate on decade timescales. The gap between displacement speed and adaptation capacity is unprecedented.

Second: cognitive automation, not just physical. Past automation replaced physical labor, which created cognitive work. AI automates cognition itself. New work categories might exist, but absorbing the scale of displaced cognitive workers… I don’t see it.

Third: winner-take-all economics at AI scale. Network effects and economies of scale in AI are extreme. Training GPT-4 cost $100M, but marginal inference costs pennies. First mover with best model captures market, second place gets scraps. AI market structure trends toward natural monopolies in a way previous technologies didn’t.
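A toy calculation of why scale wins, using made-up but order-of-magnitude numbers (the $100M training figure from above, plus an assumed fraction-of-a-cent inference cost):

```python
def avg_cost_per_query(training_cost: float, marginal_cost: float, queries: int) -> float:
    """Average cost per query: fixed training cost amortized over total query
    volume, plus the (tiny) marginal cost of running one inference."""
    return training_cost / queries + marginal_cost

TRAINING_COST = 100_000_000  # ~$100M to train a frontier model (rough figure)
MARGINAL_COST = 0.002        # assumed ~0.2 cents of compute per query

for queries in (1_000_000, 100_000_000, 10_000_000_000):
    cost = avg_cost_per_query(TRAINING_COST, MARGINAL_COST, queries)
    print(f"{queries:>14,} queries -> ${cost:,.4f} per query")

# 1M queries:  ~$100 per query, the fixed cost dominates
# 10B queries: ~$0.012 per query, essentially just the marginal cost
```

Whoever serves the most traffic has the lowest unit cost, which is the winner-take-all dynamic in miniature.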

The most telling signal I’ve seen: AI assistants now negotiate directly with business services. Your assistant books appointments, compares prices, handles refunds. No human required. And here’s the kicker: more AI suppliers just make workers more replaceable. Platforms don’t even need to own the agents. They own the marketplace where transactions happen.

Reasons for Optimism

The Piketty-validated-by-AI argument makes logical sense under certain conditions. But I’m not convinced it’s inevitable. A few optimistic paths for the future:

Everyone Benefits in Abundance

Advanced AI sounds incredible for everyone, even if inequality persists. If AI generates massive productivity gains, even those without capital ownership could live materially better lives than previous generations.

Think about how abundance works. When AI makes medical care, education, entertainment, food production cheaper, even those at the bottom benefit materially. Housing costs might stay high, but most consumption goods drop toward free. You feel poor relative to AGI billionaires, sure. But you live better than 20th century millionaires in objective terms.

That’s the optimistic case anyway.

Complete Displacement Seems Implausible

I’m skeptical that AI achieves such perfect substitutability for human labor that humans become completely obsolete. That scenario requires AI mastering cognitive tasks, physical manipulation, judgment under uncertainty, multi-objective optimization, and creative synthesis simultaneously across all domains.

Seems unlikely. More plausible scenario: AI handles routine cognitive work while humans retain value in edge cases, physical domains requiring dexterity, and situations requiring accountability (someone to sue). This partial automation still disrupts labor markets but maintains some labor-capital complementarity.

Historical precedent supports this view. ATMs didn’t eliminate bank tellers. Spreadsheets didn’t eliminate accountants. Automation creates new categories of human work we can’t predict in advance.

Humans Create New Value Categories

Entirely new kinds of work get created when previous categories automate. Agriculture went from 90% of labor in 1800 to 2% today. Manufacturing dominated mid-1900s, now a fraction. Humans didn’t sit idle. Different work emerged.

Professional podcaster didn’t exist thirty years ago. Same with content creator, social media manager, newsletter writer. All viable careers through digital platforms. AI will eliminate current knowledge work while creating categories we can’t yet imagine.

But here’s the catch (there’s always a catch). These new categories won’t absorb the scale of displaced workers. If AI eliminates 40% of current jobs while creating new categories employing 10%, the math doesn’t work. And adoption speed matters. Previous transitions took decades. AI adoption compressed to 18 months. Even if long-run equilibrium looks positive, the transition period could be catastrophic.

What still resists?

When information is free and processing automated, what’s left? The usual answer is taste, good judgment, synthesis ability. But that’s too broad. Plenty of skills resist automation yet command no market value.

Where AI Hits Walls (and how those walls might move)

Legal systems need someone to sue. AI can’t be jailed or held liable. This accountability gap creates work that resists automation regardless of capability. MIT economist Tavneet Suri did research on Kenyan entrepreneurs and found high performers succeeded by knowing which AI advice to ignore. Judgment about consequences, not pattern matching.

But accountability barriers aren’t as fixed as they look. Insurance companies already back professional services. Corporate liability wrappers can shield AI-generated work the same way they shield human work. If an insurer-backed AI service provides legal advice and gets it wrong, you sue the insurer and the company, not the AI. The accountability requirement gets satisfied without requiring a human in the loop.

This restructuring could happen faster than this essay’s main argument assumes. We’re already seeing early versions: companies offering AI-generated code with liability coverage, AI medical diagnostics with professional oversight structures, AI financial advice with E&O insurance. The accountability barrier might be temporary scaffolding, not permanent protection.

Some decisions still require balancing competing values that can’t be reduced to a single metric. Safety or innovation? Privacy or convenience? AI optimizes for one objective at a time. It fails when the choice involves genuinely competing priorities. Judgment matters.

Then there’s knowledge that never made it into training data. The mechanic who hears an engine misfire and just knows it’s the fuel injector. That pattern recognition comes from embodied experience. Non-verbal, built from thousands of micro-exposures. You can’t automate what was never written down.

And here’s something counterintuitive: the better AI gets at pattern recognition, the more valuable human judgment becomes. As AI handles routine decisions, remaining decisions involve higher stakes, harder trade-offs, deeper tacit knowledge.

The Human Premium

Ben Thompson argues for a fourth category the three barriers don’t capture. Humans want humans. Not because AI can’t produce quality content or art, but because authenticity has inherent value separate from technical execution.

Bill Simmons’ “50 Most Rewatchable Movies” podcast drew massive engagement not because AI couldn’t analyze films, but because listeners valued Simmons’ specific human perspective. His lived experiences with these movies. The way his taste developed over decades. The podcast worked because you’re getting Bill Simmons, not just film analysis.

So work survives automation where the human identity of the creator is intrinsic to the value.

You’ll pay more for a meal cooked by a Michelin chef even if an AI robot replicates the molecular structure perfectly. You’ll value advice from a mentor who’s lived through challenges versus an AI that pattern-matches solutions. You’ll prefer art from a human artist who struggled with the medium over AI-generated images that match the same aesthetic.

Why this works: Trust builds through repeated interaction with a known human, not algorithmic consistency. Context matters; you understand advice differently when you know the advisor’s background, failures, biases. People want what someone they respect recommends, not what an optimization function surfaces. And there’s social proof; “I worked with [named expert]” carries status. “I used Claude” doesn’t.

But (again with the caveats) this only works for those who already have platform, reputation, or audience. Bill Simmons built his following over 25 years. New entrants face the cold start problem. How do you build reputation when AI floods the zone with cheap content? The human premium exists but access to it concentrates among incumbents.

Distribution Trumps Everything

The three barriers suggest creative work should resist commoditization. But music creators are facing major revenue losses according to CISAC’s 2024 global study. Platforms control recommendation algorithms. Thousands of tracks flood Spotify daily. Quality becomes undiscoverable in volume.

Distribution power beats skill quality. Every time.

Automation-resistant skills command no market value if platforms control distribution. Same pattern in legal work, journalism, software. Skill quality matters less than platform position.

This extends to wealth too. If AI generates value while eliminating labor, who benefits? By the time redistribution becomes necessary, those who control the AI economy may have already structured things to evade taxation. History repeats. Power concentrates, the powerful reshape the rules.

The Psychology of AI Inequality

The Relative Deprivation Trap

What if material abundance makes inequality feel worse, not better?

Louis C.K. did this bit back in October 2008 on Late Night with Conan O’Brien. One of the most incisive observations about technology and human happiness I’ve ever heard. He talked about flying on airplanes with WiFi: “The guy next to me goes, ‘This is bullshit!’ I’m like, how quickly does the world owe you something you knew existed only ten seconds ago?”

You’ve probably seen this clip. Louis C.K. focuses on the miracle of flight itself (sitting in a chair in the sky), but the deeper insight is about relative expectations outpacing absolute improvements. Everything is amazing and nobody’s happy. Technology makes life objectively better while making people subjectively more miserable.

Technological innovations, by conferring their benefits broadly and quickly, actually increase the feeling of inequality. When iPhones cost $1000 and only the wealthy have them, you don’t feel deprived. When iPhones drop to $400 and everyone except you has one, you feel poor. The democratization of access paradoxically amplifies relative deprivation.

Social media demonstrates this at scale. We have unprecedented material prosperity coexisting with epidemic mental health crises. The connection is constant comparison. You don’t compare yourself to medieval peasants. You compare to peers’ curated feeds. And algorithmic feeds surface exactly what triggers your status anxiety.

Why AI Amplifies This Pattern

AI will distribute benefits faster than any previous technology. ChatGPT reached 100M users in 2 months versus 4.5 years for the internet. Within 18 months, AI assistants moved from impossible to commoditized. Everyone gets access to baseline AI capabilities almost simultaneously.

This should reduce inequality concerns. Instead, it amplifies them.

When everyone has AI assistants, status differentials between those with basic AI and those with proprietary superintelligence feel more acute than the gap between pre-AI haves and have-nots. You’re not comparing yourself to someone without AI. You’re comparing your constrained AI to their unconstrained version that knows more, reasons better, has priority compute access.

Louis C.K. identified something fundamental. Human happiness is determined by relative position, not absolute circumstances. You feel rich or poor, successful or unsuccessful, based on comparison to your peers, not objective measures.

Which creates a problem none of the proposed solutions really address.

Progressive taxation (Piketty’s model) addresses material inequality but not status anxiety. Universal basic income provides material security but might worsen relative deprivation. “I’m living on UBI while capital owners command AGI empires.” You can’t tax away the psychological experience of being lower status.

Predistribution (worker ownership, AI bonds) works better for status reasons. You’re a participant, not a dependent. But if ownership distributes broadly while control concentrates, you own equity while others make decisions. Psychological benefit exists but has limits.

The human premium (Thompson’s optimism) works for those who successfully build audience, reputation, authentic voice. Doesn’t address the 80% who lack platform or incumbency advantage. Creates a new status hierarchy: recognized humans versus anonymous humans versus AI agents.

None of these models directly tackle the core problem. Algorithmic amplification of relative deprivation in an era of material abundance.

When does felt inequality trigger instability?

Most inequality discussions focus on material distribution. Who owns what, who earns what, how to redistribute or predistribute. The psychological dimension gets overlooked.

If Louis C.K. is right that relative position drives happiness more than absolute circumstances, and if AI makes relative position more visible while distributing benefits broadly, then policies that reduce material inequality might fail to reduce felt inequality. Everyone has AI, everyone’s materially comfortable, everyone’s miserable because they’re constantly comparing to those with marginally better access.

This doesn’t mean abandoning predistribution or redistribution efforts. Material security matters. But focusing solely on economic distribution misses something deeper. How do humans maintain psychological well-being in a world of algorithmic comparison and exponential capability gaps?

At what point does felt inequality trigger instability regardless of material conditions? Traditional inequality metrics only capture distribution, not experience.

Building New Models From First Principles

Kuhn observed that these shifts don’t happen through persuasion. Conversion happens at the edges, among people willing to abandon the collapsing framework. New models emerge by asking different questions, not patching old ones.

So here’s what I keep coming back to. Three barriers protect certain work from automation: accountability (someone has to be sued), trade-offs between competing values, and tacit knowledge from lived experience. But there’s a catch. Distribution power beats skill quality every time.

Do work AI can’t: take decisions where you’re personally liable for the outcome. Balance competing priorities that can’t be reduced to a single metric. Build expertise through repetition in high-stakes situations.

Own the customer relationship. If a platform sits between you and your customers, you’re competing on the platform’s terms. Either become the platform, build direct relationships, or accept you’re playing a rigged game.

Look for what becomes scarce. When AI makes something abundant, the adjacent scarcity becomes valuable. Knowledge gets cheaper, so judgment about which knowledge to trust becomes more expensive. Code generation gets easier, so knowing when generated code will fail in production becomes the bottleneck.

The Social Choice: Ownership by Design, Not Policy

These transformations are never purely technical. They’re social. Who defines the new model? Whose problems get solved?

The shift from knowledge scarcity to abundance could go two ways. Platforms capture distribution and capital captures infrastructure. Or ownership gets distributed before inequality locks in.

Traditional predistribution focuses on policy: AI sovereign wealth funds, AI bonds, worker ownership legislation. These approaches face the same problem. By the time you convince legislators to act, the powerful have restructured the rules to protect their position.

But there’s a more interesting path.

Labor reclassified as capital:

Think about what this actually means. Right now, we treat labor and capital as separate categories. You either own the means of production or you sell your time. But AI enables a third category.

Micro-equity in agentic workflows. You don’t just deploy an AI agent. You own a stake in the automations you create and deploy. Every workflow you build, every prompt chain you optimize, every dataset you curate becomes equity you own, not labor you sell.

Revenue share attached to data and relationship assets. Your customer relationships, your domain expertise, your historical data; these become assets that generate ongoing revenue streams, not one-time compensation. The mechanic who knows engine sounds doesn’t just get paid per repair. They own a stake in the diagnostic system trained on their expertise.

Personal brands as durable income streams. This is basically human IP equity. Bill Simmons doesn’t just have a following. He owns the value of his perspective, his taste, his accumulated knowledge. That ownership generates income independently of his time.

Co-ops and guilds that bundle liability, distribution, and reputation. Individual freelancers can’t compete with platforms. But cooperatives can. Pool accountability through shared insurance structures. Pool distribution through collective recommendation algorithms. Pool reputation through verified guild membership.
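None of this has a standard implementation yet, so treat the following as a minimal sketch of what “workflow equity” might look like as a data structure: the names, the percentages, and the ledger itself are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Stake:
    holder: str   # who holds the stake: builder, domain expert, guild, platform
    share: float  # fraction of workflow revenue this stake entitles them to


def split_revenue(revenue: float, stakes: list[Stake]) -> dict[str, float]:
    """Distribute one period's revenue from an automated workflow pro rata."""
    total = sum(s.share for s in stakes)
    return {s.holder: revenue * s.share / total for s in stakes}


# Hypothetical example: a diagnostics workflow built on a mechanic's tacit knowledge.
stakes = [
    Stake("workflow_builder", 0.40),  # built and maintains the automation
    Stake("domain_expert", 0.30),     # the mechanic whose expertise trained it
    Stake("guild_pool", 0.20),        # co-op covering liability, distribution, reputation
    Stake("platform", 0.10),          # infrastructure take
]
print(split_revenue(10_000.0, stakes))
# {'workflow_builder': 4000.0, 'domain_expert': 3000.0, 'guild_pool': 2000.0, 'platform': 1000.0}
```

The point isn’t the code. It’s that the split is defined inside the tool people use to work, before any legislature gets involved.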

The broader idea isn’t hypothetical. Early versions and proposals already exist: Substack letting writers invest in the platform’s growth, creator DAOs bundling audience relationships into tradeable assets, ongoing proposals to share Copilot-style revenue with the open-source maintainers whose code trained it. These are experiments in designing ownership into the tools, not hoping policy arrives on time.

The critical insight: If the next decade’s fight is labor versus capital, you want to reclassify labor as capital before the lines get drawn. Not through legislation (too slow, too easily captured). Through product design. Build ownership structures directly into the tools people use to work.

Timing matters because of the feedback spiral. AI adoption concentrates wealth. Concentrated wealth buys political influence. Influence blocks redistribution. History shows when inequality crosses certain thresholds (French Revolution at 60% wealth concentration, Gilded Age at 45%, Roaring Twenties at 50%), structural reform becomes nearly impossible.

The top 1% share of US wealth currently sits at 37%, three percentage points below the level where that feedback loop becomes self-reinforcing.

Two patterns are emerging. Stability AI’s open models and Barcelona’s Decidim participatory platform show one path. Faster adoption with distributed ownership.

Platform-captured models show another. Gains concentrate, workers lose leverage. Adoption speed matters less than ownership structure. Who owns the tools, who controls the infrastructure, whether users can take their data elsewhere.

The old model required decades of education, institutional access, capital for training. The new one doesn’t have to replicate that structure. But the default path replicates the same concentration under different mechanisms.

Those who navigate this transition won’t be those who accumulated the most knowledge. They’ll be those who rebuilt ownership structures from first principles before the 40% threshold locked in the new hierarchy.


Key Sources