6 Comments

I used to work as an engineer in Silicon Valley where utilitarianism was the underlying name of the game for everyone around me. But I think this engendered an ultimately useless moral philosophy I call "utilitarian nihilism". It goes like this:

Person gets job at FAANG because they want to have a positive impact on as many people as possible. But then it's pointed out (especially at Facebook, where I was) that it seems like the impact we're having on people is actually bad. "Well I'm really just a cog in this enormous machine and it's not really up to me whether we produce bad stuff or good stuff."

And so people would just shed themselves of all agency and not attempt to change anything. If utilitarianism means the greatest good for the greatest number, but I can't impact a great number of people, then I guess it doesn't really matter what I do one way or the other -- hence the nihilism. I think the problem here is that the universality of utilitarianism encourages you to keep thinking bigger and bigger even as your ability to impact events at that scale gets smaller.

There's a similar utilitarian hand-washing that comes with eating meat. It goes like this: whether or not I eat this bacon, this pig is already dead. And so is the next one, because the spreadsheets they use to decide how to confine and slaughter pigs at scale would just tell them to charge less for bacon rather than to confine and kill fewer pigs. So since my actions won't make a difference, I might as well enjoy this bacon.

But if your moral philosophy can only be used to rationalize selfish decisions instead of good ones then it's clearly a bad moral philosophy -- I don't need any philosophy at all if I just want to do whatever I feel like!

For the curious, the moral philosophy I have developed for myself turns utilitarianism inside-out: I don't want to benefit from harm done to others. Regardless of whether my action prevents that harm, I don't want to gain from it. So I quit working at Facebook even though someone else took my place. And I don't eat meat even though animals still spend their lives in cages. I still benefit from a lot of harm done around the world, but at least it's a work in progress, and I've regained agency in a system that discourages me from thinking that I have any.

author

Yeah, the great danger of discovering that no moral system, that no system of any kind, can do what it seems to promise - to convincingly justify itself, to displace all other competing systems, to provide a global and universal solution framework - is the temptation to turn to nihilism. "Well it's all bullshit." It's not all bullshit. Or, at least, not to the same degree. But figuring out how and when to apply different solution frameworks is a hard problem for which there is no cheat sheet, no lookup table, and people are lazy, myself included. I like your rule of thumb. Mine is "don't be a dick."

Nov 16, 2023 · Liked by Frank Lantz

My current short version of "why utilitarianism doesn't work": expected value (or utils) is always a bad model of the world. In reality, non-transitivity is everywhere: maximizing Rock will fail against maximizing Paper, which will fail against maximizing Scissors.

It's impossible to define a universal metric: you can either have a well-behaved utility function that only some people will agree with, or a pathological utility function that produces values that are incomparable and do not have a "maximum".

(This argument extends to singularitarian AI too. There's an assumption that an AI can be "aligned" with itself, but I think that might actually be impossible. Humans often fail at being aligned with themselves. Corporations fail worse.)
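
To make the rock-paper-scissors point concrete, here's a tiny Python sketch (my own toy illustration, nothing from the post) of how a cyclic beats-relation means no single fixed "maximize this" strategy can come out on top:

    # A minimal sketch of the non-transitivity point: in rock-paper-scissors the
    # beats-relation is cyclic, so there is no pure strategy that maximizes
    # payoff against every opponent.
    BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

    def payoff(mine, theirs):
        """+1 for a win, -1 for a loss, 0 for a draw (from my point of view)."""
        if mine == theirs:
            return 0
        return 1 if BEATS[mine] == theirs else -1

    # Every pure "maximizer" is strictly beaten by some other pure strategy,
    # so pairwise comparison never settles on a global maximum to aim at.
    for move in BEATS:
        rival = next(m for m in BEATS if BEATS[m] == move)
        print(f"{move} loses to {rival}: payoff {payoff(move, rival)}")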

author

I would say expected value is *sometimes* a bad model of the world, and sometimes a very good one!

re. transitivity, yes, and furthermore, pick-random will fail sometimes too, hence the name of this blog!


"The way that something like the St. Petersburg game works is that an enormous amount of value is packed into an infinitesimal slice of the probability distribution." Very good point and excellent quotes from SBF. Since a population is finite, his strategy (risk neutrality) would lead to extinction. There are good evolutionary reasons to think that risk aversion is a somewhat prevalent attitude exactly because of this problem.

author

Would *almost certainly* lead to extinction. Or would lead to extinction *in almost all universes*. But, presumably, there's one universe, or a tiny handful, just swimming in value, one absolute banger of a universe floating like a particle in a vast ocean of ruined universes. Or something like that? It's deeply confusing, and it's not at all obvious to me how best to think about it. Given how unsettled these questions are, caution seems wise, but I wish there were a stronger principle to sharpen my intuition in cases like this.
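
Here's a toy model of that picture (mine, not anything in the post): a risk-neutral agent keeps accepting a 50/50 bet that either triples the bankroll or wipes it out. Each bet has positive expected value, so risk neutrality says always take it, yet the surviving universes become vanishingly rare while holding essentially all the value:

    # Toy model (my numbers): repeatedly take a bet that triples wealth with
    # probability 0.5 and zeroes it otherwise. EV per bet is x1.5, so the
    # risk-neutral agent never stops -- but survival probability collapses.
    for rounds in (1, 5, 10, 20, 40):
        p_survive = 0.5 ** rounds        # fraction of universes not yet ruined
        ev = 1.5 ** rounds               # expected wealth averaged over all universes
        survivor_wealth = 3.0 ** rounds  # wealth in a universe that won every bet
        print(f"after {rounds:>2} bets: EV = {ev:,.1f}, "
              f"P(survive) = {p_survive:.1e}, survivor holds {survivor_wealth:,.0f}")

After 40 bets the average across all universes is over ten million, but the chance of being in a universe that still has anything at all is about one in a trillion.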
