We are two thirds of the way through our tour of the camps offering shelter at the edge of the AI event horizon. So far we’ve poked our spinning heads into the apocalypse tent and the chillout tent. Before we head over to the big, striped tent on the horizon, the one with the calliope music coming out of it, I want to take a brief detour through a camp staked out midway between the first two: David Chapman’s book-length mega-post Better Without AI.
Let me say, first of all, I absolutely love David Chapman. His meta-rationality project is a big influence on my thinking. Chapman wants to improve our uses of rationality while avoiding the pitfalls of adopting rationality as a global ideology. For me, he is one of the key voices for understanding the paradoxes of modernity, alongside Marshall Berman and Jonathan Richman.
In fact, I admire Chapman so much, I’m inclined to simply take his perspective on many issues. Especially on the topic of AI, where, as an academic, he did significant research. But here I found myself resisting the pull, for Chapman is both an unplugger and a deflator, and I… well, I’m pretty sure I am neither. But trying to figure out where our perspectives diverge was very useful for me, and even catalyzed my decision to start this Substack and helped determine its basic trajectory.
Roughly, Chapman’s position is that the neural-net/machine-learning approach behind this new wave of AI is inherently limited and not as interesting as it looks. He’s not that worried about superintelligent AIs because he is skeptical of the whole idea. He thinks we should be worried about pools of power, and how they could be used maliciously, not superintelligence, which is a vague, ill-defined term that we can’t usefully reason about. Moreover, the currently fashionable “bucket of stuff” machine learning techniques aren’t likely to lead to transformative AI anytime soon.
At the same time, he thinks boring AI, as it currently exists, is acting as the driving force of a catastrophic, potentially civilization-ending disaster that we are already in the middle of. Not a futuristic movie about a glittering alien mind but a grim dystopia made out of attention-maximizing recommendation engines, soul-destroying media algorithms, info-driven psychological manipulation, and culture-shredding data exploitation, all wrapped up in the relentless mechanical tentacles of optimized supply chain logistics.
If you’re like me, at this point you’re thinking - “well, yup, Chapman’s pretty much nailed it. I guess this is my new opinion?” And I really do think his “third way” approach is compelling. This stuff really is happening, and it really is bad. The statistical meta-patterns of machine learning really did have some causal influence on real atrocities in Myanmar, and we should be terrified that the banal evil of contemporary AI will lead to even worse crises on even bigger scales.
But, right at the peak of his argument, right when he’s got us ready to wipe the dust off our goggles and take a seat in the shade, he uncorks his main ingredient - a vivid, concrete description of a possible near-future scenario that exemplifies all the things he’s worried about. And it’s so bad it makes the rest of his argument all but disappear.
It shouldn’t work this way. If you come up with a speculative, imaginary example of the type of scenario you predict might happen, and it falls flat, it should, at most, be neutral evidence with regard to your position; it shouldn’t be counter-evidence, right?
I don’t know what to tell you; as a large language model designed by OpenAI, I don’t make the rules, I just intuit them. And, unfortunately, the middle section of Better Without AI, in which Chapman paints the picture of an intentionally absurd, Black Mirror, WALL-E, Idiocracy-style torment nexus involving a computer-generated roller derby imbued with deep-fake culture war content, is so bad that it negates all of the thoughtful, carefully reasoned argument around it. What can I say? Thought experiments matter.
This sounds like the kind of thing that would be just obvious when you encounter it, but it wasn’t obvious to me. It was only when I discussed it with my son James that I realized how I felt about it. Here’s how our Discord conversation started:
JAMES: Reading David Chapman's book about AI
it's interesting -- i wouldn't say he's wrong, and i think broadly hes a lot more right / closer to my view than LW posters or AI Alignment folks, but im also skeptical of most of his negativity
the idea of some dumb hyper appealing VR sim or whatever, i think it is just a ridiculously weak premise
here's my take -- the world is already filled with insanely hyper appealing content and i dont think theres this insane spiral that hes describing at all
its just dumb! like we dont need AI to make hyper-violent porn warzone warfare 2: hyper naked edition. we know thats some weird mashup of things people are addicted to and hardwired to enjoy. but when you look at what people are actually addicted to, its league of legends!
people like complexity to some extent, people like variety in their life, people like elegance
the idea that an AI can be so powerful that it is somehow creating insane crack but for your brain purely out of words and images, that people CANNOT RESISt -- i simply dont buy it
listen if we're all trapped in vr roller rink hell in 2 years ill apologize but
i do not think so
What James’ reaction clarified for me was how the weakness of Chapman’s VR roller derby scenario demonstrated a deeper weakness in his overall perspective, which I would describe as “being bad at aesthetics”. Not in the sense of not being imaginative or clever enough (you could make the case that his hypothetical story is absurdly bad on purpose to make a point), but by drastically underestimating the complex dynamics of culture.
FRANK: it’s really hard to fake SUPER POWERFULLY ENTERTAINING THING, like the show in infinite jest or the joke so funny it kills from the Monty Python bit
because those things are really hard to make!
Impossibly hard
Chapman’s story of an irresistibly addictive media property reminds me of the calls I sometimes receive from journalists who want to talk about how game developers hire psychology researchers to maximize the compulsive stickiness of their games by manipulating the primal logic of our brains’ deep behavioral circuits. My answer always goes something like this - yes, it’s true that some companies do this, but it’s not that important or scary. “Staff Psychologist” at Tencent or Supercell or wherever is probably mostly a bullshit job, there to look fancy and impressive, not to provide a killer marketplace advantage. Why do I think this?
There are no magic psychological principles you can apply that are guaranteed to make people want to play your game.
There aren’t even super-powerful heuristics that work most of the time.
There are useful heuristics, and they do work sometimes, but they aren’t arcane secrets of cognitive behaviorism that you would need a Psych PhD to master, they are widely known principles that you can observe just by looking at other games - every game designer knows what it means to “put a little more slot machine” into a game.
Even if you have no scruples and only care about maximizing time on device, you are pretty much forced to do what everyone always does, whether they are trying to make something popular or make something good - copy elements of other games and give them a twist, come up with new, original ideas, try recombining existing ideas in new arrangements, apply heuristics, build, test, build, test, throw it out, try something else, build, test, ad nauseam. Even slot machine designers have to do this!
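For the curious, here is roughly what “putting a little more slot machine” into a game means in practice - a minimal, hypothetical sketch (the function, names, and numbers are mine, not taken from any real title) of a variable-ratio reward roll with a “pity timer”, two patterns you can observe just by playing other games:

```python
import random

def maybe_drop_reward(miss_count, base_rate=0.15, pity_cap=12):
    """Hypothetical variable-ratio reward roll with a 'pity timer'.

    The unpredictable payout is the classic slot-machine ingredient;
    the pity cap guarantees a hit after enough misses, a widely copied
    mercy rule. Returns (hit, updated_miss_count).
    """
    if miss_count >= pity_cap or random.random() < base_rate:
        return True, 0              # reward fires, streak resets
    return False, miss_count + 1    # miss; anticipation builds

# Typical loop: roll on every chest opened, carrying misses forward.
misses = 0
for _ in range(20):
    hit, misses = maybe_drop_reward(misses)
```

That’s the whole arcane secret: an unpredictable payout plus a mercy rule, both perfectly legible from the outside, no Psych PhD required.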
Even then, once you’ve applied all of your data-driven, A/B-testing, algorithmic, stats-powered dark magic to create the most compelling, addictive gameplay possible, you end up with something that most people hate because it fucking sucks. I mean, have you looked at these games? They’re totally unplayable. Yes, I know that, technically, a bunch of people do play them, many of those people are addicted, some of them tragically so, and that’s sad. But the idea that any of these candy-coated mind parasites would run rampant through the general population is absurd, because most people instantly recognize them as shit sandwiches, and retch in disgust. I mean, we do, don’t we? Reader, you and I? And we’re just regular humans with regular brains, not some special breed with magical resistance powers.
FRANK: art has this adversarial relationship between the audience and the artist, it’s an arms race between entrancement and boredom
that roller derby thing is corny, it’s formulaic, it’s obviously manipulative in a way that is thirsty for my attention and therefore boring to me
we already have things like this and their overall appeal is limited because we have a thing called taste
and all of us have it
It is this adversarial dynamic that is missing from Chapman’s picture of a world hopelessly outfoxed by stats-driven recommendation algorithms. Cultural works aren’t hedonic appliances dispensing experiences with greater and greater efficiency for audiences to passively consume. Creators and audiences are always engaged in an active process of outmaneuvering each other. Yes, I want more of what I already like, but I also want to be surprised. New patterns are discovered, repeated, become tropes, then stale cliches. And, as part of this ongoing process of dynamic coevolution, we get fashions, trends, styles, genres, and scenes. Patterns at different scales interfering with each other. Sure, most of it is garbage, but it’s a roiling sea of garbage, driven by the wind of our fickle attention into twisting waves that tower over us and then come crashing down.
Taste is the secret weapon that will keep us from being predicted to death. And everyone has it. I know it doesn’t seem that way - it seems like you and I have it, while most people are babies in thrall to an endless stream of Elsa/Spiderman/Hulk paternity test videos. But everyone has taste; it’s a standard component of our attention control mechanism. Without it we would be constantly hypnotized by everything we looked at - wow, this stick is amazing, have you seen this stick!!??
Which is not to say that the battle for our attention isn’t fearsome, high-stakes, and even civilization-shaping - Chapman is right about that. I just don’t think we need to despair. Yes, the internet is littered with chum boxes, but it also contains vast treasures. Chapman repeatedly points out how algorithmic media shreds culture into atomized bits without context, structure, or meaning. But in my experience, making even the slightest effort to steer the algorithm, to take responsibility for your attention and how it shapes your feed, opens up new contexts, new structures, and new meanings. Netflix gave up trying to recommend movies to you a long time ago; it just wants you to pick something from the homepage carousel. But there are videos on TikTok where people recommend difficult, obscure, beautiful Netflix movies and tell you why you should watch them. This, too, is the algorithm.
We have been co-evolving with ravenous attention predators forever. This is not a fight we can, or should want to, opt out of. We already live in a universe with church, slot machines, and Rihanna. The Sorrows of Young Werther. The Beatles. The bigger and more successful they get, the more boring they become, and the easier they are to ignore.
Attention is at the center of everything. It is, arguably, all you need to jumpstart the sputtering flame of intelligence. It is, plausibly, the force that produces consciousness. It is, dangerously, the most important resource of the new economy.
Aesthetics is the domain in which we pay attention to attention. Art matters. Taste matters. They are, in the dawn of the AI era, more crucial than ever for understanding and shaping the world. What is this new thing? This play that writes itself? This hallucinating hallucination? This science fiction made real? What do we want from it? And how will we get it? This is not a fight that the computer scientists and the mathematicians and the cognitive neuroscientists can fight for us. We are all going to be called to battle. Taste is the business end of attention; we will need to learn how to use it to survive.
Humans are not helpless creatures who must be protected from the grindhouse of optimized infotainment. We are a race of attention warriors, created by the universe in order that it might observe itself. Now the universe has slapped us across the face, and we have the taste of our own blood in our mouths, but we must not look away. The poet must not avert his eyes.
Next: The Circus
I agree with you (and your son) so much that tragically I think I will have to write about it, because this category of error is one that I think many people I adore are prone to. I think there are even Deutschian explanations for *why* it's an error; I think often of his interview with Sam Harris in which he discussed the fact that most people quit addictive drugs because, in a word, they get bored.
Harris presses him on whether a "perfect drug" could be invented that would enslave people to hedonic satiation forever, and Deutsch is emphatic that it could not, for similar reasons to the ones y'all adduce above.
Anyway: fantastic piece, so exciting to read you on this stuff!!!
Nice, I'm glad you arrived at this point -- unsurprisingly it's what I started thinking about after part 2, and back when diffusion models blew up I was tweeting about a civilization-wide taste challenge. You know this is a subject I've dwelled on a lot over the years; I have long been interested in how the candy-coated mind parasites work, stick them in my own brain occasionally to test them, know how to fall back out, etc. I agree that you have the right call to battle, but I think you and James are being a little optimistic. Maybe that's just optimistic battle call rhetoric -- remember art, lads! Art is about humans... novelty is not JUST permutation! -- but I think we're still coming to understand HOW bad this is.
It's easy to look at attempts to harvest profit on mass taste -- the Beatles, church, Fortnite, everything Disney/Marvel puts out, etc -- and feel reassured that these things move in waves, people get bored of them, there's a counter-reaction, etc.
But your reaction here is exactly, precisely wrong in that "you are looking in the wrong direction, possibly at the past" kind of way:
"it’s really hard to fake SUPER POWERFULLY ENTERTAINING THING, like the show in infinite jest or the joke so funny it kills from the Monty Python bit"
I think we've seen by now that the significant difference with entertainment powered by machine learning is that it's hyper-tailored in the way blockbuster culture-bulldozers are not -- it fills in every other niche and crack in the same way that fan fiction is tailored by communities to fill in every possible crack not covered by a blockbuster (including "this isn't smart enough," I just saw HPMOR out on audiobook). But fan fiction is, by comparison, time-consuming to make, find (especially), and consume, and it works for people who really like to read.
So the taste challenge this time is more akin to an immune system having to resist bioweapons that tailor themselves to individual human cell receptors and can constantly mutate to find more statistical novelty in bounded possibility space. (The COVID pandemic furnishes commonly understandable metaphors that, I'm surprised to find, don't feel all that remote.) To put it another way, people get entranced by the dream-like quality of generative AI because it's a closer reflection of their own consciousness, individualized culture rather than mass, not a communication with an artist but with your own novelty/pleasure/mental-stimulation circuits.
If I'm going to compare my own foray into this particular taste vortex to things like falling deeply into a few different traditional game genres, into microtransaction-driving sim games, or trying to read several whole decades of comics, etc -- I find the consciousness-feedback-loop with this one much more engrossing, to a degree that's disturbing.
We already know people hypnotized by this effect to various extents, so the question is how long it lasts -- is "get high on a loop with your own subconscious" as dangerous as schlocky 90s sci-fi dreamed it would be, or will it wash over like another culture fad? It's likely to be variable, I guess? I see a lot of people stretching aesthetic muscles and trying to articulate for themselves and others what the distinctive qualities of "an aesthetic object with human thought applied" are. This is a great form of exercise, and I think it will make people stronger? But given the "weakening" of the existing taste-landscape by blockbusters, we might be talking about lots of little refined "I don't touch that AI stuff" communities building aesthetic defenses with parochial snobbery (hey, it's worked for centuries in the past). That's probably better than most alternatives!
One little optimistic thing I noticed:
Midjourney users get burnt out on currents of aesthetic similarity and no longer feel the same dopamine highs as when they did their first "magical" generations; other users respond by helping them rediscover the novelty in a) more powerful new versions (probably level off), b) mimicking more existing artists (er, educational?), c) using more features.
LLM generative content users are also wow'd at first, then hit limitations and have to spend a lot of time fine-tuning things in the way procedural generation artists have had to forever.