Discussion about this post

Derek Neal

On the topic of AGI, Leif Weatherby’s “Language Machines” makes a good case for why it’s an incoherent concept. AI is fundamentally a cultural technology, which means the parts we’d like to weed out of it, like cognitive biases, are baked into the technology itself. This is why you can’t stop an LLM from hallucinating: the same immersion in our semiotic environment of language and symbols that makes it work is what makes it hallucinate. The concept of “intelligence” is also culturally specific, though people who drone on about IQ and AGI don’t seem to realize this.

Something people call AGI might be achieved, but the idea that it would be objective rather than culturally determined is ultimately a political fantasy.

Leighton Woodhouse

I mostly agree with this, Geoff, but I also think that to properly assess the doomer argument you have to do more than critique it culturally; you have to address the substantive arguments and predictions it makes about the world. When I've taken those points seriously, I've found them hard to dismiss. The only way I can really see them being proven wrong is if we simply never achieve AGI/ASI at all, which is a pretty big bet to stake human existence on.

