Haunted by Intelligence: AGI, Growth, and the Ghosts of Progress
Who decides the exception when machine imperatives collide with human ones?
TLDR version - We keep mistaking cognition for power. You can build a system that reasons; that doesn’t make it sovereign. Intelligence without bodies, institutions, and energy is cleverness on stilts—astonishing on benchmarks, brittle at the margins where rules break and stakes are real.
Labor: If AI substitutes, we’ll talk dividends; if it complements, we must redesign work around judgment, care, and taste. Either way, meaning—not just income—goes scarce. Ownership sets the slope of inequality.
Growth: Abundance meets two hard facts—demand and atoms. Without new distribution and new infrastructure (compute, energy, materials), “smart” accelerates old limits. Deployment beats invention.
Sovereignty: Chips are oil with pins. Expect export controls, nationalizations, and brand‑network monopolies—phantom sovereigns that rule by dependency more than decree.
Safety: Not a wrapper but a constitution: who can halt, reroute, or break the rules on purpose? That answer defines citizenship in the machine century.

The provocation under all the charts: Progress isn’t automatic; it’s a project. We can get godlike tools and still build godforsaken societies. The question isn’t can we make AGI—it’s who decides what flourishing means, and who gets to flourish, when we do.
Dwarkesh Patel and Noah Smith open their conversation by grappling with the very definition of artificial general intelligence (AGI). Is it an economic phenomenon—machines performing any task a human can for the same cost? A cognitive one—machines that think like us? Or something almost theological—a “godlike” superintelligence that defies human comparison? The a16z hosts frame AGI as one of tech’s “biggest questions”, yet from the outset a tension emerges: even if we build a machine that reasons, does that alone capture the full spectrum of intelligence? Patel and Smith note that “reasoning alone isn’t enough”, echoing the thesis that raw cognition, in isolation, cannot carry the weight of real-world agency or understanding. A purely cerebral AI might ace logic puzzles yet stumble in the street. In their studio chat, they marvel at large models’ feats but admit these systems still lack something—common sense, embodiment, emotional depth, the survival instinct born of having skin in the game.
This gap between computation and comprehension recalls the warning that intelligence is not enough on its own. Even as silicon minds score ever higher on benchmarks, thinking cannot be divorced from a material context: real intelligence is enmeshed in infrastructure, institutions, bodies. The podcast’s definitional debate hints at this. Smith, an economist, toys with an “economic AGI” – machines so generally capable they spark a productivity explosion. Patel muses on the “godlike,” an AI so powerful it borders on myth. Yet already you, dear reader, might chuckle: every age casts its newest powers in the image of its old gods, and every such idol has feet of clay. Today’s godlike AI, like yesterday’s omnipotent algorithm, may just be a mirror of our hopes and fears, not a savior. Reason alone will not save us if it remains blind to its own limits.
“Sovereign is he who decides on the exception,” wrote Carl Schmitt nearly a century ago. By that measure, no AI today is sovereign; none can decide when rules break. Patel and Smith note how even advanced models falter outside the guardrails of their training. They are brilliant idiot savants: able to compose symphonies or code but baffled by a toddler’s puzzle or the nuance of a joke. True intelligence reveals itself at the margins, in the exceptions and emergencies that no algorithmic training set fully anticipates. It is in those unruly moments of life—the political crisis, the ethical dilemma, the personal heartbreak—that machines remain naive. Patel, the ever-curious provocateur, asks how close we really are to AGI; Smith hedges that despite rapid progress, the frontier keeps moving. The cognitive horizon is hazy. Each breakthrough in reasoning unveils new layers of what machines cannot do – at least not yet. The closer our machines come to mimicking human thought, the more we appreciate the dark matter of intelligence: tacit knowledge, intuition, the embodied know-how that doesn’t fit neatly into code. Their dialogue, for all its optimism about GPT-style models, underscores a critical point: AGI will not be a single eureka moment of logic, but a gradual enmeshing of mind with world. It will force us to ask what kind of intelligence we truly value – disembodied problem-solving, or the situated wisdom that arises from struggle, culture, and lived experience.
Labor in the Age of Automation: Complement or Replacement?
If defining intelligence is hard, predicting its impact on jobs and work is harder. Patel and Smith dive into the debate over whether AI will complement human labor or outright substitute for it. Will generative AI be a tool that makes each of us more productive (as spreadsheets and search engines once did), or a competitor that renders whole professions obsolete? History offers dueling analogies. Smith, wearing his economist hat, points to past technological revolutions: the tractor displaced farmhands but created new jobs in manufacturing; the computer eliminated clerical roles but spawned programmers and IT specialists. AI might simply continue that pattern – painful disruptions followed by new opportunities. In this optimistic scenario, human labor remains vital, just refocused: teachers become coaches overseeing AI tutors, doctors become interpreters of AI diagnoses, artists harness AI as a medium. Patel is not so sure. He probes the darker possibility: what if this time is different? An AGI that truly lives up to its name could, in theory, do anything a person can, and do it faster and cheaper without sleep or salary. In that case, how does a human worker compete? The classic economic answer is we don’t – we move on to doing other things that machines can’t. But that circle is tightening. The promise that “new jobs will save us” sounds hollow if each new task we take on can itself be automated in turn. There is a latent anxiety in their conversation, an understanding that even if AI creates new work, it might not be meaningful work.
Here I invite the quarrelsome Republic of Letters chorus to chime in. JSG, representing muscular PRC state ideology, cheerfully notes that capitalism has always been an engine of creative destruction, “incessantly destroying the old… incessantly creating a new one”. Others demur. RJT points out that the scale and speed of AI-driven disruption might exceed our social capacity to adapt. If all sectors are hit at once – from truck drivers to radiologists, copywriters to code monkeys – where do tens of millions of displaced workers go? Education and retraining take time; people cannot be refashioned overnight for brand-new industries. And if those industries themselves employ far fewer people (a handful of AI engineers replacing thousands of clerks), the arithmetic of employment looks grim. Around the 31-minute mark, Smith mentions the idea of a universal basic income, and of sovereign wealth funds fed by AI-driven profits. These once-radical ideas are entering mainstream discussion precisely because the old link between productivity and jobs is fraying. Patel wonders if an AI-saturated economy could generate so much abundance that work becomes optional – a post-scarcity utopia where robots till the fields and humans, freed from toil, pursue higher ideals.
Korean HBC offers a melancholic rebuttal: “The society of laboring and achievement is not a free society… the dialectic of master and slave does not yield a society where everyone is free and capable of leisure… rather, it leads to a society of work in which the master himself has become a laboring slave.” Even without work, we might invent new internal pressures to exploit ourselves. A future of mass UBI might liberate us from wage labor, yet condemn us to aimless consumption or depression, absent any structure of purpose.
In HBC’s view, our neoliberal era has already turned workers into self-exploiting “achievement-subjects”; without a job, one might simply hustle in other ways, or languish in boredom. The meaning of work, of having a role in society, cannot be replaced by a stipend alone. Patel and Smith acknowledge this when they touch on “the evolving meaning of work” – a nod to the psychological and social dimension.
Still, their discussion glimmers with hope that human creativity will find new outlets. Smith suggests that truly human jobs – those requiring empathy, ethical judgment, or the personal touch – may flourish even if algorithms handle the drudgery. Caretakers, mentors, entertainers, and artisans could thrive in an AI economy precisely because they are authentically human. But this may be cold comfort: relegating humans to inherently “human” jobs sounds a lot like a consolation prize if all the productive, high-paying roles go to AI. There is a whiff of the Elizabethan poor laws in some tech visions of the future – a world where a few elite humans design and own the machines, while the masses survive on guaranteed income and maybe gig work providing “human experiences.”
Patel voices concern about power: Who owns the AI? If the gains flow to a tiny group of tech oligarchs or to great powers, inequality could skyrocket. This leads them to float an intriguing idea: what if nations treated AGI as a public utility? Hence the talk of sovereign wealth funds capturing AI’s bounty for the people. It’s a moonshot policy, but it signals a recognition that without systemic intervention, AI-driven capitalism might hollow out the middle class. This is not new, folks! Similar ideas were proposed in past automation waves and went nowhere; entrenched interests have little incentive to share their new spoils. The gulf between imagining UBI in a podcast studio and implementing it in the real world is, as always, a political one.
Abundance or Scarcity? The Growth Dilemma
One striking theme in the Patel–Smith conversation is growth: Will AGI bring an economic boom or a bizarre stagnation? Techno-optimists proclaim that smart machines will unlock unprecedented productivity – curing diseases, designing new materials, running efficient supply chains, maybe even “galaxy-colonizing robots” in some far future. Smith, who often writes about boosting economic dynamism, leans into the optimistic scenario: an AI-saturated economy might break us out of the long stagnation and sluggish productivity growth of recent decades. Perhaps annual GDP growth could surge into the high single digits or beyond, as automated research labs churn out innovations and armies of robot workers build, deliver, and serve at near-zero marginal cost. Patel, playing devil’s advocate, notes a paradox: if AI systems do everything, who will have money to consume the fruits of this robotic labor? It’s a twist on the classic underconsumption problem. The duo muses about consumer demand in an AI-driven economy – if most people aren’t earning wages, aggregate demand could crash unless wealth is redistributed. They even toy with the sci-fi vision that the economy might evolve beyond human consumers altogether (imagine self-sufficient AI swarms mining asteroids and building colonies with no need for human input or enjoyment). At that point, growth becomes almost a meaningless concept from a human perspective. Smith is quick to clarify we’re far from that; instead, he suggests managed abundance: using policies like UBI or public dividends to ensure humans benefit from AI-driven growth. It’s a noble idea, but terribly utopian in the current political climate.
Enter the ghosts of scarcity. JSG – the specter of sovereign power – cautions that talk of abundance often overlooks the brutal politics of distribution. Even if AGI could generate wealth beyond measure, someone will decide who gets what. And history shows those decisions can be contentious, even violent. As Schmitt famously declared, the political at its core is about the friend-enemy distinction; translate that to economics, and in a world of AI abundance we might still see new forms of class division: those aligned with the machines and those left redundant. States will not sit idly by while tech companies or supranational AIs control the commanding heights of the economy – at some point, sovereignty reasserts itself. Indeed, Patel and Smith note the possibility of nationalizing AI or at least heavily regulating it. They cite how geopolitics and national competition could lead governments to declare AGI a strategic asset, much like nuclear technology. After all, if AGI is as transformative as they suspect, any nation that falls behind could see its industries and military eclipsed. This race for AI superiority might force even laissez-faire societies to adopt a more dirigiste approach: government-funded labs, state equity in AI firms, or outright takeovers. Growth then becomes not just an economic issue but a security imperative. The optimistic view of shared global prosperity from AI starts to look naïve when even vaccines and energy supplies saw cutthroat nationalism in recent crises. Why would nations treat a general intelligence any differently?
Yet, ironically, growth is not guaranteed even with advanced AI. If institutions and infrastructure lag, if politics or social fractures prevent effective deployment, we could get a weird scenario of technological potential without economic payoff. Imagine hyper-smart AIs hamstrung by legal battles, or productivity gains eaten up by inequality and precarity. This is not far-fetched – one might recall how the productivity boost from computers since the 1970s often failed to translate into broad prosperity in many countries. Without the ethical and institutional bandwidth to integrate AI into society, intelligence alone may just concentrate wealth in a few hands, leading to economic feudalism rather than utopia. Patel and Smith touch on this when they discuss redistribution and the role of policy. Their underlying assumption is that growth is desirable and likely; the ghosts urge caution that growth for its own sake can also accelerate planetary boundaries (climate change doesn’t pause for Moore’s Law) and social upheaval. More machines churning out more goods means more resource extraction, more energy use – unless AGI also miraculously solves sustainability. We barely touch on material constraints: but every exponential curve in tech meets the reality of a finite world. One hopes any AGI-led boom would be harnessed to heal the earth, not just exploit it faster. But in the podcast, climate and ecology barely get a mention amid the excitement. That blind spot – the assumption that progress will be benign or at least manageable – is the hubris of intelligentism: the belief that smartness can solve all problems, even the ones smart solutions created in the first place. The specter of limits, both physical and moral, lurks behind the shiny graphs of infinite GDP.
Geopolitics and the New Great Game
As their discussion moves to geopolitics, Patel and Smith acknowledge that AGI is as much a strategic concern as an economic one. They reflect on global power shifts: Could mastering AI make a nation a superpower? Could it reorder the current balance between the U.S., China, and others? The subtext is a modern echo of the nuclear age – just as the atomic bomb forced a new geopolitical calculus, so too might AGI. Here, frankly, their analysis is weak, and there are far better China watchers on AI. Smith and Patel frame it as a race – a “global AI race” – language that is itself shaping reality. Call it a race, and it may become one, with all the zero-sum thinking and fear that entails.
Geopolitics also bleeds into ideology. China already sets the pace at the policy frontier, and there is policy isomorphism – a fancy way of saying that other societies converge on China’s policy choices. We should listen more to HBC: “Big Data has announced the end of the person who possesses free will… Big Data is making it possible to predict human behavior.” Under AI-powered governance, the line between protecting society and dominating it can vanish. HBC’s critique is that in pursuing total security and efficiency, we risk a world where individuals are transparent, predictable, and manipulated – the death of the subject. Patel and Smith don’t go fully down that dystopian rabbit hole on-air, but their mention of AI reshaping society hints at it. Perhaps they assume liberal values will somehow prevail, that the West’s use of AI will be tempered by law and ethics. I’m not so sure. Technologies of control, once available, tend to be used; the history of the 21st century so far – from the Patriot Act to ubiquitous social media tracking – suggests that convenience and fear often override liberty.
That raises a final geopolitical specter: technocratic feudalism. If, say, an American company’s AGI becomes the world’s essential service, does the U.S. effectively rule the world by proxy? Or could a rogue AGI itself become a de facto global power, obeying no flag? Patel half-jokes about galaxy-colonizing robots – but if we take that seriously, an autonomous AI that expands beyond Earth would introduce a non-human actor in geopolitics. It sounds far-fetched, yet the fact that we’re even contemplating non-human power shows how unsettling the terrain has become. These are phantom sovereigns – potential entities that upend our traditional notion of who holds power. Be it a super-corporation, a super-intelligence, or a superpower with super-AI, we face the erosion of human agency on the global stage unless we actively assert our values.
Who Decides the Future? (A Provocation)
The conversation between Dwarkesh Patel and Noah Smith ends on a thoughtful, cautiously optimistic note – they envision possibilities of human adaptation, policy ingenuity, perhaps even flourishing alongside our intelligent creations. But in the cold silence after the podcast, a more somber chorus sets in. The ghosts assembled here – from Schumpeter to Schmitt, HBC to the unnamed multitudes – pose a question that the technocrats cannot easily answer: Who decides what flourishing means in the age of AGI? And who gets to flourish? The thesis of “Intelligence Is Not Enough” hangs in the air: cognition alone, absent the hard constraints of infrastructure, institutions, and ethics, guarantees neither stability nor justice. We could achieve the dream of godlike AI, only to find ourselves living through a nightmare of godforsaken societies.
In Patel and Smith’s hopeful scenarios, there is a conspicuous absence of conflict. But politics has not evaporated; class struggle has not been debugged by code. The material world – that messy realm of hungry bellies, oil and lithium, water and land, guns and ballots – remains as real as ever. Sovereign decisions will still be made, with or without AGI, and likely they will be made by those with power to protect that power. As I’ve argued before in the future premium, the working class, rendered surplus, might not sit forever in quiet resignation on their UBI stipends; political movements could arise that make the tumult of the 20th century seem tame. Or, conversely, as HBC fears, the masses might simply sink into digital delirium, a burnout society anaesthetized by cheap thrills and algorithmic propaganda, too exhausted to revolt. Both outcomes are profoundly melancholic: either new upheaval or a hollow stagnation of the human spirit.
Perhaps the most jarring ghost voice is one we haven’t yet named openly: call it the ghost of reality. This is the reminder that beyond all our debates, the world moves. Climate change accelerates; great power tensions simmer; inequality deepens. All the intelligence in the world may not be enough to steer these titanic forces if we do not also show wisdom, restraint, and solidarity. Progress is not a law of nature; it is a project – one that can fail. Civilizational flourishing is a choice, not an inevitability, even with AGI. Do we deploy AI to mitigate scarcity and strengthen the common good, or just to maximize the profits of a few and the power of the state? Do we use the productivity gains to give people dignified lives and time to care for each other, or to pile up more gadgets and distractions in a planet of loneliness? These are questions no machine can answer for us. They are political questions, ethical questions – questions of sovereignty in the deepest sense: what values shall rule?
We are rushing into an age of intelligent machines, but we have yet to prove we are an intelligent civilization. We entrust our fate to algorithms while our institutions crumble, our communities fray. We speak of godlike AI even as we lose faith in any gods, even the humanistic ones that fueled our hopes in the last century. In the end, the question isn’t can we build AGI – it’s can we build a world around it that is worth living in? Patel and Smith believe we can, with reforms and creativity. The ghosts remain skeptical, their chorus a cautionary elegy.
Nataraja the divine dancer, Chola bronze
Who or what will shape the post-AGI order? Someone or something will decide – and if it’s not us, it will be decided for us, by code or by decree. The future, haunted by our intelligent creations, sits before us like Nataraja the divine dancer, demanding an answer. In the gathering twilight of the old order, as machines hum in the background, we are left once again with a quiet warning: intelligence alone is not enough to save us; the ghost of our humanity must speak, must act, or else it will simply fade into the great silence of history. The provocation stands. Who among us is ready to decide on the exception, to seize the hard task of guiding fate, before the machines – or those who own them – decide for us? Who decides?

