Fahrenheit 98.6
Turns out there is a fire alarm for AI and if I can't get it to stop going off I'm going to take the batteries out.
Like most people, I like the way Matt Levine writes. Here’s something he wrote recently:
…will someone ask ChatGPT a question like “what companies are frauds,” and ChatGPT will cheerfully and confidently say “oh Company Y is a total fraud, their revenues are super inflated,” and then people will short the stock, and then it will turn out that Company Y is fine and ChatGPT is just really good at confidently making stuff up? Is that securities fraud? By whom? (Or: Ask “what stocks are good,” ChatGPT says “Company Z is great, they have found a cure for cancer and their revenue will double,” people buy the stock, it was all made up.) Will sell-side analysts or journalists use ChatGPT to do their work, and will ChatGPT introduce market-moving factual mistakes?
It’s interesting to speculate about the potential impact of AI on corporate finance, but the thing that struck me about this paragraph was its form, not its content. Look at what is going on in the language of this one paragraph, chosen pretty much at random. When Levine writes that Company Z found a cure for cancer we are not confused, even for a second, about what he is saying. We know that Company Z isn’t a real company (even though there is, in fact, an actual company called Company Z). The quotation marks make it clear that Levine himself isn’t making this statement; he’s reporting that someone else said it (although, honestly, we would be able to tell this even without the quotation marks). Furthermore, we know that he isn’t, in fact, reporting that someone else said it, but instead speculating about what might happen if someone did say it.
This simple, unremarkable paragraph is a densely layered construction of nested references, with varying degrees of fictionality and imagined scenes, made from a series of questions, each operating at some nebulous position along the spectrum from literal to rhetorical, and yet we have absolutely no trouble interpreting it correctly, instantly and effortlessly.
This quality of language, its infinite plasticity and our capacity to navigate its mercurial meanings, is one of the reasons I’m not all that worried about the impact of deep fakes. Our world is heavily mediated by language; it governs our social, practical, institutional, and personal interactions, and it is trivially easy for anyone to use it to generate illusions that are perfectly indistinguishable from reality. I can make up something from whole cloth and tell you that I saw it with my own eyes, or heard it from a friend, or read it in a paper, and there is absolutely no way to tell, from the text itself, that it is fake. I can put quotation marks around any statement and tell the world you said it, and this illusion will have perfect fidelity - there are no forensic tools that could ever, simply by looking at the text itself, show it to be fake. This is the world we already live in, and we do OK.
The tools we use to do OK in this world - our sensitivity to context and tone, our complex, redundant, recursive trust protocols, our general sense of skepticism and caution, our bullshit detectors - are robust but imperfect. Occasionally they fail us and we look for bargains at a store that isn’t really going out of business, buy a box that says video camera but contains a brick, give our heart to someone who says they won’t break it but then does, or hand Derren Brown our watch. But given language’s ubiquity and hyper-plasticity it’s impressive the degree to which these failures remain the exception and not the rule.
Another reason that deep fakes won’t, in my opinion, cause that much trouble is the fact that, for the most part, humans don’t reason by looking at evidence and drawing conclusions from it. Mostly, we start with conclusions, based on what feels right, and then use our reason to construct plausible explanations for our beliefs and actions. The idea that we would look at a really convincing, high-res picture of William Shatner shoplifting and then conclude that he was a thief is based on a naïve theory of how our minds work that doesn’t bear close scrutiny. We are far more likely to arrive at that conclusion if a good friend mentions it casually as a well-known fact.
Even then, imagine hearing that statement. Don’t imagine a hypothetical “poor helpless stupid internet person” hearing it; picture famous smart person you, yourself, hearing a friend say “William Shatner is a thief”, and think about what your reaction might be…
I misheard them
They’re joking
They’re talking about a movie or something
They’re over-simplifying some complicated situation
Maybe he really is a thief, when I get a chance I’ll check with some established, venerable, trusted authority, like cracked.com
What’s for lunch?
When you think about it, the whole system is a real mess, but it sort of works. It could definitely be improved, but overall it’s probably working better now than it was a few thousand years ago when complete nonsense was even more rampant. And even those dark times were probably better than 150 thousand years ago, when, limited to pointing and grunting, we couldn’t lie at all.
This realization that we already live in the worst possible world, relative to some specific problem, is good to keep in mind when confronting current warnings about imminent disaster, like this Economist article from Yuval Harari arguing that AI has hacked the operating system of human civilization.
You might think that alarming headline was imposed by some click-hungry editor, and that Harari’s argument is more nuanced. But here’s an excerpt from the first paragraph:
William Shatner is a thief!
Just kidding, here’s the real excerpt:
…over the past couple of years new AI tools have emerged that threaten the survival of human civilisation from an unexpected direction. AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.
Harari goes on to make the same basic observation that I made above: that language is a ubiquitous ingredient that mediates all of culture. But where I find some comfort in this, and see it as a reason to expect that, whatever weirdness AI has in store for us, we will deal with it using the same kinds of kludgy methods we’ve been using all along, Harari sees a radical inflection point that will crash humanity by ruthlessly exploiting the weaknesses of our cultural code.
In a recent Atlantic piece Ian Bogost looks at how teachers and students are adapting to an academic world that has been utterly transformed by large language models. It’s a mess, but it’s a real mess that real people are awkwardly figuring out how to make their way through.
Harari has little patience for such things:
…this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults.
Oh no! Not the presidential race!
How can I say this? Let me just put it bluntly, and please forgive me, but I genuinely think this is true - the vast majority of “political content” is already perfidious trash. Speeches are formulaic nonsense - a slurry of clichés, anecdotes, tropes, and memes designed by committee to send just the right constellation of signals with just the right combination of vagueness and precision to tactically maneuver the candidate through a space of policy positions which are, themselves, only distantly related to the issues that matter, to the important problems of collective action and their potential solutions. We suffer through them, inspired by a vague memory of hearing something once, long ago, that genuinely stirred our hearts, hoping against hope to encounter such inspiration again, and instead making do, over and over, with the shallow satisfaction of watching the guy from our team score a few points from the foul line.
The commentary and analysis surrounding this empty verbiage is, arguably, worse. Cheap, gossipy entertainment that wraps itself in a sanctimonious cloak of public service. Day in and day out we are force-fed this lukewarm gruel and we grimly choke it down out of a sense of civic duty, desperate for some shred of half-remembered meaning. Or we guzzle it happily, fully engrossed in the shared hallucination, fascinated by the technical details of the competition or carried away by mob energy, swept up in torrents of fear, pride, outrage, greed, contempt, and cruelty. And this isn’t the catastrophe of a failed democracy, this is just the standard operating procedure of an ordinary, working democracy.
Adding machine generation into this mix is, in my view, mostly more of the same. Maybe, in fact, it will improve things, by giving us a shared baseline of rock-bottom trash and forcing us to occasionally ask ourselves “what the hell am I looking at?” But if, as I agree seems likely, it makes things even worse, oh well, that sucks, but it’s probably just more turbulence, not critical engine failure.
Harari continues…
On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually AI.
Hi. Again, let me say something that seems obvious to me but is, perhaps, scandalously controversial - if you find yourself conducting lengthy online discussions about abortion, climate change, or Russia with anyone, regardless of the carbon content of their functional substrate, something has already gone terribly wrong. Does Harari really think that AI threatens the important work of Facebook Uncle discourse? Does anyone think such conversations should be protected from AI incursion rather than just, you know, generally discouraged?
Is Harari himself engaged in such conversations? No, of course not! It’s the poor helpless stupid internet people he’s worried about. He himself is too busy giving TED talks, appearing on 60 Minutes, and having filmed conversations with everyone’s favorite Facebook Uncle Slavoj Žižek. Harari’s interactions with language take place in a domain in which authenticity is protected by multiple layers of institutional authority, trusted channels, and celebrity identity. In such a lofty domain the prospect of lengthy online conversations between simulated people is just an amusement, not a genuine threat. It doesn’t occur to Harari that similar protections are available to everyone, that it is a matter of basic literacy to understand them and use them properly, and that spreading this literacy is simply the ongoing project of teaching everyone to read and write.
Harari also worries about what an AI-powered version of QAnon might achieve. And again, I’m thinking - what? Is the constraint on this kind of addle-brained conspiracy cult the quality or quantity of the text? Aren’t existing “Q drops” exactly as good as they need to be to achieve the surprising level of success they have? Isn’t the limiting factor on the spread of QAnon its inherent stupidity? If some advanced AI were to attempt to improve QAnon’s reach and influence wouldn’t it start by making the whole thing slightly less idiotic?
And he also worries about AI-powered religion. A worry to which you will no doubt by now be able to predict my response. To wit - just wait ‘til he finds out about regular old-fashioned, non-AI-powered religion, a world-conquering, mind-eating, civilization-transforming megaforce that even most of us non-believers consider a mixed bag at worst.
Harari wraps up by invoking Descartes’ demon (the one who could be keeping us as a floating brain-pet whom he feeds a steady stream of illusory sensations), Plato’s allegory of the cave, and the Buddhist concept of Maya, the seductive illusion of the material world:
The AI revolution is bringing us face to face with Descartes’ demon, with Plato’s cave, with the Maya. If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away—or even realise is there.
If we are not careful? Um, no. These famous concepts are not warnings about what might happen if we are not careful. They are attempts to describe as accurately as possible the existing world that we already find ourselves in. We are, according to these perspectives, already trapped behind a curtain of illusions, and we must find some way to deal with it. Which is, as far as I can tell, what we’ve been doing and, in my estimation, what we will continue to do in the age of AI.
Plato did once issue a warning, one not unlike the one Yuval Harari is making here. In the Phaedrus, it was about writing:
If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only its semblance, for by telling them of many things without teaching them you will make them seem to know much, while for the most part they know nothing, and as men filled, not with wisdom but with the conceit of wisdom, they will be a burden to their fellows.
Maybe Plato was right to worry. Maybe writing was a mistake. Maybe writing unleashed, into a world of illusions, the machinery for producing ever more complicated and seductive illusions. But I don’t think so. I’m glad we invented writing, and if, as I suspect, AI turns out to be the same kind of mixed bag, full of the same kind of peril and promise, well, here we go again.
This one caught me off guard. "If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away—or even realise is there." He's a student of Buddhism; he must know the veil is one of the five hindrances to awakening. Good luck throwing that off in this life or any other. I really like Harari's work, but I think I've got to disagree with some of his hot takes on A.I. Thanks for sharing, great article!
"And even those dark times were probably better than 150 thousand years ago, when, limited to pointing and grunting, we couldn’t lie at all."
I know this is just a throwaway line, but whoa, a time before lying, talk about prelapsarian! But of course lying is possible even with only points and grunts. Anytime there's a signal there can be a false signal. Maybe lying implies conscious intent? We don't normally think of the cuckoo's egg as lying, after all. But I mean honest-to-goodness real lying, and it's a far older part of our evolutionary history than walking upright.
How do I know that? I could be lying. I have come by this knowledge not in the normal manner of economists and theorists (that is, imagining what I would do if *I* were a caveman, and finding I could maximize my utility by lying), but rather by reading empirical studies of an analogous case: other apes and monkeys, all of whom have been documented lying! (cf. https://doi.org/10.1098/rspb.2009.0544, but there are tons of these). Among the best studied are alarm calls in the wild, which monkeys and apes will consciously make to warn friends of approaching danger. But they'll ALSO fake those cries to get friends to scatter, so they can get more foraging time themselves while their relatives run and hide. Never trust a hominid!
The amazing thing about this isn't the use of deliberate deception itself. The amazing thing is that language and communication as we know them were even able to evolve at all, when there's such a large payoff for betrayal. The costly signaling of honest communication is under constant assault by noise, and one of our main evolutionary superpowers appears to be a capacity for detecting the lying smiles of a cheat. Exactly to your point!