It has been a bewildering time lately to attempt to follow AI discourse. On one hand, I’ve encountered a number of strongly worded arguments from people I respect as to why people who aren’t taking AI seriously enough are sleepwalking into the most momentous change since the industrial revolution or even the dawn of homo sapiens. On the other hand, I’ve also come across a good deal of pushback to this sort of argument from people I also take seriously. At the same time, I’ve been seeing some commentary on the launch of GPT-5, which seems to have roundly disappointed expectations. Regardless of whether AGI is a coherent concept at all (I don’t think it is, but I’ll save that for another time), it seems fair to say that OpenAI is still a good way away from being able to market its products as that.
One aspect of the AI debate that should be refreshing is that it’s a rare topic that doesn’t map neatly onto culture-war battle lines. To be sure, the left-liberal journalistic hivemind, as represented by writers like Jia Tolentino, seems to be of the view that AI is very bad but also very fake, while many AI accelerationists seem to now be in the MAGA camp, in part because they have good reason to think the blue-state regulatory machine (informed as it is by the Tolentinos of the world) will put the kibosh on the whole thing if given the chance. But I find I can’t reliably predict someone’s views on AI based on partisan alignment, which makes it slightly less tedious than many other topics. It’s partly for this reason that I’ve waded into it again lately.
But alas, I do still find it tedious for slightly different reasons. The main one is that it rests on premises that are deeply faulty but that no one participating seems to want to examine. The first such premise isn’t even really a premise so much as a fetishistic displacement in which political questions are reformulated as technological ones. Hence, the question of whether most human beings will be demoted to a vast impoverished underclass is treated not as something to be determined by democratic struggles over wealth and power, but by technical decisions made by corporate leaders and maybe regulators. Want to avoid apocalyptic mass immiseration? Well, the terms of the AI debate tell us, you need to sign petitions to persuade Sam Altman to slow things down, or maybe get President Newsom into office in ’28 so he can pursue some sort of “AI safety” agenda. Notice what happened here: a political question that long pre-existed “AI” is converted into a somewhat abstract question for “the experts.”
At the same time, the AI debate is symptomatic of what technocratic politics looks like after the collapse of expertise as a legitimating strategy for authority. This is evident on all sides of the conflict. Consider, first, a basic tenet of the pro-AGI camp: In order to enjoy fully automated luxury [insert preferred descriptor for desired political system of the future], we must create a “superintelligence” that will figure out how to do stuff we mere mortals can’t manage: cure cancer, generate infinite energy abundance via nuclear fusion, colonize space, etc. Underlying the conviction that we need superintelligence to achieve these and other aims is what my colleague Ashley Frawley calls a “misanthropology”—specifically, the rationalist view that our innate cognitive biases are holding us back from realizing the true potentialities that might otherwise be enabled by the human species’ high IQ vis-à-vis non-human animals. If you could only extract IQ in its pure form, the thinking goes, there’s no limit to what it could do. In other words, the promised future in which we are all watched over by AGIs of loving grace is a fantasy of technocracy in which the “experts” are no longer all-too-human.
The anti-AI camp is even more straightforwardly reflective of the constraints of post-expertise technocracy. Drawing on the (at least) half-century tradition of “limits to growth” pessimism, it sees in all technological development a hubristic overreach that will produce inevitable blowback. Hence, popular varieties of AI doomerism don’t differ greatly in their rhetoric from climate doomerism, positing a point of no return in the near future and a desperate need to pull the brakes or face utter annihilation. “AI safety” and “alignment,” like the discourses of “sustainability” that preceded them, are visions of minimal technocracy in which the “experts” limit themselves to adjusting the controls to avoid outright disaster, promising little more than the survival of “bare life” (although the Covid years showed us that this vision is paradoxically capable of morphing into a maximalist authoritarian agenda).
Langdon Winner’s classic Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought should dispel any notion that there is anything novel in any of the varieties of contemporary AI discourse, other than perhaps the peculiar idiom of grating obscurantism much of it has adopted from gurus like Eliezer Yudkowsky. Winner published his book in 1977, a high point of the initial backlash to 20th-century Promethean modernism. Reading it is a reminder that one didn’t need to have AGI supposedly around the corner to have the same hopes and/or fears that it inspires today. Indeed, Winner shows that the theme of “autonomous technology” could be traced back much further. He quotes from Henry Adams’s reaction to the 1900 Great Exposition in Paris: “Man has mounted science and is now run away with. I firmly believe that before many centuries more, science will be the master of men. The engines he will have invented will be beyond his strength to control.”
As Winner demonstrates, Adams was not the first modern thinker to harbor this vision of the future and even the present, which was framed by many earlier thinkers as already under the sway of technology unconstrained by human technicians. Indeed, such visions of autonomous technology were articulated over the course of many centuries in both optimistic and pessimistic modes. But, Winner argues, “whether taken in a positive or negative light, all theories of technoevolution suffer from the same basic flaw. Their major discovery—the eclipse of mankind—turns out to be something they had assumed in the first place.” That is, they “begin with the adoption of abstract categories which do not include a role for free, conscious human agents.” If history ends with humans as mere appendages to their machines, which now possess the only meaningful form of agency, that would seem to be because humans never possessed meaningful agency in the first place.
The current prominence of the AI debate—in which the prospect of a radically transformed future is being entertained, detailed, and contested from various angles—might give the impression that we have at last escaped the condition Franco Berardi called “the slow cancellation of the future” (a phrase subsequently popularized by Mark Fisher). But this appearance is deceptive. Beneath all of its confident predictions, AI discourse reveals our continued incapacity for hope. The future, if by that we mean a future that is built as a political project by free human beings, remains out of reach.
This week in Compact
Philip Cunliffe on the mirage of Palestinian statehood
Sam Kahn on Catherine Lacey’s The Möbius Book
John McMillian on Democrats’ delusions about crime
Emma Ashford and Peter Slezkine on the Trump-Putin summit
Daniel Kishi on how trade agreements have hurt workers
Clifford Ando on the crisis at the University of Chicago
Michael Reynolds on the DC foreign-policy blob’s culpability in Ukraine
On the topic of AGI, Leif Weatherby’s “Language Machines” makes a good case for why it’s an incoherent concept. AI is fundamentally a cultural technology, meaning that the parts we’d like to weed out of it, like cognitive biases, are actually baked into the technology. This is why you can’t stop an LLM from hallucinating: what makes it work, its immersion in our semiotic environment of language and symbols, is also what makes it hallucinate. The concept of “intelligence” is likewise culturally specific, but people who drone on about IQ and AGI don’t seem to realize this.
Something people call AGI might be achieved, but the idea that it would be objective rather than culturally determined is ultimately a political fantasy.
I mostly agree with this, Geoff, but I also think that to properly assess the doomer argument you have to not just critique it culturally, but also address the substantive arguments and predictions it makes about the world. When I take its points seriously, I find them hard to dismiss. The only way I can really see them being proven wrong is if we simply never achieve AGI/ASI at all, which is a pretty big bet to stake human existence on.