Idle side-note, probably not original, just a pleasant thought to me:
When reading the remarks about sci-fi stories that turn bad once they move beyond relatable human characters, it struck me how games actually seem better suited for these scenarios than stories, since the player is their own "relatable" agent interacting with a foreign system, obviating the need for such characters. It then struck me that this is exactly what Universal Paperclips does, and well, that was a nice mental loop back to the article.
Thanks for the transcriptions/summaries, useful and entertaining, especially the sic burn. The idea of partiality across species will stick with me, I think.
The core of Robin's argument is intriguing.
The ways in which cultures have evolved over time are a topic I have found very interesting to ponder for a while. The contingency of our current moral intuitions, and the genuine diversity of possible ways of being, are easy to leave unexplored. Two particular people come to mind when I consider this topic. One is of course Nietzsche, but another is the historian Tom Holland, who makes the case that much of the cultural water we swim in (in the West) is a consequence of Christianity: in particular, the very notion of universal human dignity, or even the distinction between the secular and the non-secular. It is interesting to consider the argument that Christianity's universalism may have given it some sort of evolutionary advantage, allowing it such influence in the West.
So to me, the idea that the worldviews and morals of people in the West before Christianity were in some sense fundamentally different from our own seems rather plausible. That said, I also think it reasonable that a lot has stayed the same. They still had families that they loved, enjoyed friends, felt envy and shame and pride and lust and heartbreak and fear, et cetera.
Robin is getting at a fundamental difficulty of universalism when taken to its limit. At what point do we stop extending the reach of moral value? If every possible configuration of being is worthy, do we end up with a kind of nihilism where the idea becomes meaningless?
I like the idea of extending the moral value of beings beyond human. To animals and plants and things. Are they still morally less than human? Why? Because we are conscious? Do we only extend moral worth to AIs that are also conscious?
My hunch is that morality is an imperfect generalisation of the types of relationships people form with people they know. People make it up (and just because we make it up doesn't mean it's unjustified; it is just that people bear the responsibility for the justification, not some absolute truth). So we shouldn't expect this stuff to generalise too well, and I don't try to think too hard about the precise universal conditions for moral valuehood.
And to some extent it can be healthy to have a limit to how general one's moral universe is. It is healthy to have some notion of what one values.
Whoo! That was a lot to digest. You know how one of the problems with LLMs is confidently uttering bad information? All of these speakers utter their positions with supreme confidence. LOL
I don't know enough to agree or disagree. There seems to be a disconnect, however, between macro and micro lenses. People who are scared of losing their jobs to AI are hardly concerned about what AI might do 50 years from now. Conversely, the "big picture" thinkers operate on a time scale longer than economic cycles. It's a dichotomy between lifestyle and culture.
Hanson's arguments contain at least two logical fallacies. The first is that "the ancients didn't care as much about happiness as we." There's not a scrap of historical evidence to support this claim. In fact, despite the wealth of anecdote behind the "humanity sure has changed a lot since the Greatest Generation saved the world" assertions that are common folk wisdom, what evidence we do possess--from personal letters and diaries thousands of years old to DNA samples even older--suggests a remarkably stable state among humankind, both in physical development and in temperament.

His second fallacy is that our descendants will populate the stars. The odds of that actually happening before we recede into a new Dark Age (or worse) are infinitesimal. In 50 years we haven't even put astronauts on the moon again, much less built colonies or mines in our own solar system. We have no space elevators, nor spaceships capable of reaching the outer planets in under half a lifetime, to say nothing of the ability to build generation arks. We have nothing to offer except video fantasy on the subject of FTL science, nor have we solved any of the effects of radiation on humans beyond the Van Allen Belts. No, if anyone is left from our civilization to colonize the stars, it will be AI robots. Animals are too frail.

This elegiac thinking taints all of his other interviews as well. The fact is that AIs are an enthusiasm of the moment, reassuring the educated classes that they are somehow shaping and participating in a narrative that will render them unemployed during a gradual economic collapse. AIs will never replace farmers, plumbers, engineers, or blue-collar repair workers, because of the mammoth weaknesses inherent in a collectivist intelligence based on data agglomeration, 90% of which is inaccurate, outright worthless, or biased. The other great weakness of AIs is that their apparent sentience has far outpaced advances in robotics.
"Androids" that can actually fool people into believing in their humanity will only be a rich man's hobby, and none of us will live to actually see them. Not even rich men. All the AI replacement in the service industries even in countries like Japan which are desperate for them, will collapse as hotels close, banks fail, and communications black out. Economic life will revert to its most basic forms again and nothing is cheaper than human labor. We'll be lucky if no slavery but wage- and sex- slavery exists in another 50 years.