8 Comments

I agree with you (and your son) so much that tragically I think I will have to write about it, because this category of error is one that I think many people I adore are prone to. I think there are even Deutschian explanations for *why* it's an error; I think often of his interview with Sam Harris in which he discussed the fact that most people quit addictive drugs because, in a word, they get bored.

Harris presses him on whether a "perfect drug" could be invented that would enslave people to hedonic satiation forever, and Deutsch is emphatic that it could not, for similar reasons to the ones y'all adduce above.

Anyway: fantastic piece, so exciting to read you on this stuff!!!


It's great to see good writers coming to similar, good conclusions on this stuff. The artists have their courage and risks; the audience has its taste. I don't see AI managing to stick its nose in that relationship any time soon.


Nice, I'm glad you arrived at this point -- unsurprisingly, it's what I started thinking about after part 2, and back when diffusion models blew up I was tweeting about a civilization-wide taste challenge. You know this is a subject I've dwelled on a lot over the years; I have long been interested in how the candy-coated mind parasites work -- I stick them in my own brain occasionally to test them, know how to fall back out, etc. I agree that you have the right call to battle, but I think you and James are being a little optimistic. Maybe that's just optimistic battle-call rhetoric -- remember art, lads! Art is about humans! Novelty is not JUST permutation! -- but I think we're still coming to grips with HOW bad this is.

It's easy to look at attempts to harvest profit on mass taste -- the Beatles, church, Fortnite, everything Disney/Marvel puts out, etc -- and feel reassured that these things move in waves, people get bored of them, there's a counter-reaction, etc.

But your reaction here is exactly, precisely wrong in that "you are looking in the wrong direction, possibly at the past" kind of way:

"it’s really hard to fake SUPER POWERFULLY ENTERTAINING THING, like the show in infinite jest or the joke so funny it kills from the Monty Python bit"

I think we've seen by now that the significant difference with entertainment powered by machine learning is that it's hyper-tailored in the way blockbuster culture-bulldozers are not -- it fills in every other niche and crack, the same way fan fiction is tailored by communities to fill in every possible crack not covered by a blockbuster (including "this isn't smart enough" -- I just saw HPMOR is out on audiobook). But fan fiction is, by comparison, time-consuming to make, to find (especially), and to consume, and it works only for people who really like to read.

So the taste challenge this time is more akin to an immune system having to resist bioweapons that tailor themselves to individual human cell receptors and can constantly mutate to find more statistical novelty in a bounded possibility space. (The COVID pandemic furnishes commonly understandable metaphors that, I'm surprised to find, don't feel all that remote.) To put it another way, people get entranced by the dream-like quality of generative AI because it's a closer reflection of their own consciousness: individualized culture rather than mass culture, not a communication with an artist but with your own novelty/pleasure/mental-stimulation circuits.

If I'm going to compare my own foray into this particular taste vortex to things like falling deeply into a few different traditional game genres, into microtransaction-driving sim games, or trying to read several whole decades of comics, etc -- I find the consciousness-feedback-loop with this one much more engrossing, to a degree that's disturbing.

We've already seen people hypnotized by this effect to various extents, so the question is how long it lasts -- is "get high on a loop with your own subconscious" as dangerous as schlocky 90s sci-fi dreamed it would be, or will it wash over like another culture fad? It's likely to be variable, I guess? I see a lot of people stretching aesthetic muscles and trying to articulate, for themselves and others, what the distinctive qualities of "an aesthetic object with human thought applied" are. This is a great form of exercise, and I think it will make people stronger? But given the "weakening" of the existing taste-landscape by blockbusters, we might be talking about lots of little refined "I don't touch that AI stuff" communities building aesthetic defenses with parochial snobbery (hey, it's worked for centuries in the past). That's probably better than most alternatives!

One little optimistic thing I noticed:

Midjourney users get burnt out on currents of aesthetic similarity and no longer feel the same dopamine highs as with their first "magical" generations; other users respond by helping them rediscover the novelty in a) more powerful new versions (probably levels off), b) mimicking more existing artists (er, educational?), c) using more features.

LLM generative content users are also wowed at first, then hit limitations and have to spend a lot of time fine-tuning things, the way procedural generation artists have had to forever.

author

Thanks Naomi. Yes, I was thinking a lot about your interest in the topic when writing this. I hope I'm not being overly optimistic, but I also think this is one of those situations where we are in the process of imagining the future and I want to find a positive, optimistic version that isn't naive metaverse/web3.0 booster/huckster-ism. I'm well aware that this is one of those situations where someone says "Hey, you know what's *really* needed here? more [thing the speaker does]!" But that's what I've got. I'm not trying to suggest that we're going to succeed, but I do want to express that, for me, art/taste/aesthetics feel vital and important in a way that they haven't since the last century, and the stakes, and the danger of failure, are part of that feeling.

Mar 16, 2023 · edited Mar 16, 2023

Great article, Frank -- this is my favorite of this series yet! I think there are kind of two thrusts here -- one is your characterization of "taste" and how we all have it and how it will be our main weapon against any sort of super AI attention-weapon.

The only gripe I have is that I feel like you are overly rejecting of Chapman based, I guess, *entirely* on the bad thought experiment. Sure, thought experiments matter, but they're not everything. And we know enough about the world to know that his overall point -- that the real concern is pools of power -- is very likely the thing we ought to be concerned about. My feeling is that it's just a less sexy take. We gotta find a way to make talking about those boring old political battles -- about democracy, or climate change -- exciting. Maybe this is the real way in which "taste" is failing us.

author

I agree. I think there is something to like and learn from in all of the positions I've looked at, especially Chapman's. But I can't help seeing things from my own perspective, which is, ultimately, mostly, game design. I am not going to reject this new technology, I am doing my best to figure out how to make games with it, because that's my job. So I'm mostly trying to explain how I think about that.


It's the hallmark of lots of science fiction (and therefore, unfortunately, lots of thought from technocrats who grew up loving science fiction) that it extrapolates some current trend to absurd degrees, assuming humans will never notice or react to what's happening. A lot of it falls apart if you stop assuming we're all dumb frogs content to sit and never notice the water rising to a boil around us.

Kind of funny that people who want us to scale back and people who want us to forge ahead, technologically, can sometimes see humans in the same reductive way. Horseshoe Theory, but for Malthusianism!

author

Yeah, Kevin Kelly makes a similar point in a recent EconTalk interview. There's a widespread attitude of "Look, I'm not worried about myself, I'm capable of seeing through all this bullshit; I'm worried about all these other poor slobs who are sitting ducks."

Whereas, in fact: 1) the speaker *isn't* capable of seeing through bullshit, and *should* be more worried about themselves, and 2) other people aren't dummies, they're approximately as smart and capable as you are. Come on!
