13 Comments

When I talk about Humans v. Machines with my journalism students, I start with John Henry. John Henry and Desk Set.

author

Two absolute bangers!

May 2, 2023 · Liked by Frank Lantz

I disagree with the upshot of the existence of adversarial policies. If you trained AlphaGo on a few of these games, it would easily become resistant to them. So you can only eke out a tiny fraction of wins against the system by using that strategy, after which it’ll continue dominating you. So the real conclusion, IMO, is something like: “after losing 1e6 games in a row, you can win like 10 games, after which the strategy stops working, and then you’ll lose the next 1e12 games until you find another trick, etc.”

So you’re still losing like 99.999999……% of the time.
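Spelled out, the arithmetic behind that figure (using the commenter's illustrative numbers, not measured data):

```python
# The commenter's illustrative numbers: 1e6 straight losses, ~10 wins from
# the exploit before it gets trained out, then 1e12 more losses afterwards.
losses_before, exploit_wins, losses_after = 1e6, 10, 1e12
total_games = losses_before + exploit_wins + losses_after
loss_rate = 1 - exploit_wins / total_games
print(f"loss rate: {loss_rate:.10%}")  # still effectively 100% losses
```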

author

Here's the response from the researchers to this question:

Q: Now that this vulnerability and its severity are known, is it easy to fix? Can we just show the AI a few examples?

A: It is not straightforward. KataGo is currently training with numerous positions from games our adversary algorithm played. There are clear improvements, but so far it is still vulnerable. The process is not complete, so it may just need more time (and computation), but already this shows it is not as easy as one might hope to fix an issue like this.
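The fix the researchers describe can be sketched very loosely. Everything below (function names, data shapes, the 25% mix ratio) is invented for illustration and bears no relation to KataGo's actual training pipeline:

```python
# Toy sketch of "training with positions from games the adversary played":
# fold adversary-generated positions into an ordinary self-play batch.
import random

def finetune_batch(selfplay_positions, adversary_games, mix=0.25, seed=0):
    """Mix normal self-play positions with positions harvested from the
    adversary's games, in roughly a (1 - mix) : mix ratio."""
    rng = random.Random(seed)
    adv_positions = [pos for game in adversary_games for pos in game]
    k = min(int(len(selfplay_positions) * mix), len(adv_positions))
    batch = rng.sample(selfplay_positions, len(selfplay_positions) - k) \
          + rng.sample(adv_positions, k)
    rng.shuffle(batch)
    return batch

# Usage with made-up data: 100 ordinary positions, 3 adversary games of 5.
normal = [f"selfplay_{i}" for i in range(100)]
adv = [[f"adv_{g}_{i}" for i in range(5)] for g in range(3)]
batch = finetune_batch(normal, adv)
```

The researchers' caveat is exactly that one pass of this is not enough: the adversary keeps adapting to the retrained victim, so the loop has to be repeated, and it is unclear how much time and computation closing the hole takes.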

I guess I mean a bit more generally - there isn’t any foundational reason why those policies are particularly harder for the system to learn relative to other strategies. The pattern of Gary Marcus pointing out flaws in GPT-2/3/4 only for them to be solved like 6 months later comes to mind.

author

Yeah, I take your point. I also find Marcus annoying. Nonetheless, I think there is something illuminating about this particular error, especially given its fundamental nature and how long it sat unnoticed. Moreover, once we include the notice-the-error-and-address-it loop as part of the system, we are on our way toward creating actual thinking.

May 3, 2023 · Liked by Frank Lantz

Sure, you can eventually train out every exploit, once it's found, but this isn't the upshot. The upshot is that human amateurs are able to see through this exploit in advance, without having lost to it once. It's a direct proof that AIs don't yet capture some deep crucial aspect of "intelligence", and are instead able to reach superhuman heights through a giant mess of shallow heuristics. There's still some "secret sauce" left to master before the robot takeover.

author

But also, the secret sauce might be a bunch of kludgy stuff, like hooking up some control function that "notices mistakes" in a way that is unrelated to the error function that produced its move-picking logic. Something that does some version of what the KataGo team does when it reads this paper and goes back and tries to re-train the model in a way that patches it out. But then, by the time you've got something like that hooked up, you might have trouble keeping the whole system interested in continuing to play Go. https://twitter.com/flantz/status/1653825953087516672?s=20
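One crude way to picture such a bolt-on "notices mistakes" control function, as a toy sketch. Every name, score, and rule here is invented, and real engines do nothing this simple:

```python
# Toy sketch: rank moves by the policy's score, but let independent veto
# rules (written or trained separately from the policy's own loss) override.
def pick_move(policy_score, veto_rules, legal_moves):
    """Return the highest-scoring legal move that no veto rule flags."""
    for move in sorted(legal_moves, key=policy_score, reverse=True):
        if not any(rule(move) for rule in veto_rules):
            return move
    return None  # every candidate vetoed: pass, or fall back to deep search

# Usage with made-up data: the policy loves A1, but a hand-written rule
# (say, "this leaves a big group capturable") vetoes it.
scores = {"A1": 0.9, "B2": 0.5, "C3": 0.1}
leaves_group_capturable = lambda move: move == "A1"
chosen = pick_move(scores.get, [leaves_group_capturable], ["A1", "B2", "C3"])
```

The kludginess is the point: the veto logic lives entirely outside the move-picking network, which is why wiring the two together starts to look like a different kind of system.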

May 4, 2023 · edited May 4, 2023 · Liked by Frank Lantz

Hmm, I think I disagree that you can achieve what I mean by secret sauce in this way. You'll just make your mess of shallow heuristics somewhat more robust, but a "superintelligence" would likely still be able to outperform your (also shallow) adversarial module.

I also don't think that an AGI with a single-minded focus is impossible, but it does seem unlikely to emerge in the course of development of brain-inspired architectures, which so far appear to be on track to get to AGI first.

I'm not so sure about your re-training idea. I watched them version up AlphaGo before the Sedol match, and it was never as trivial as "adding a few examples to the database". It was often a complete re-working of some fundamental design premises, then re-baking the neural net, then re-testing, iterating, etc. There are deep neural paths in the "baked" neural net that are not well-understood by anyone, including its creators. It is this inherent "black box" aspect of modern AI that is so maddening to all of us (in the industry). We control the start point, but post-training, we are working with an unknowable beast, and merely trying to explore its vast possibility space of output and response. I'm not sure you're wrong, but I'd like to see clarity or hear confirmation from the DeepMind team before saying that you're right.

May 15, 2023 · Liked by Frank Lantz

Nice

May 4, 2023 · Liked by Frank Lantz

Thank you for putting this into words, I feel like this is a bigger deal than it's being treated as. No one is writing about it. Cade Metz is too busy with CEOs. This is a quiet story with an insight counter to the doom-hype that tech companies live on.

AlphaGo v Lee Sedol is what got me into Go. Learning about this huge blind spot and watching these videos about it has been exhilarating and clarifying. I know this specific issue will be patched, but what the exploit reveals about the nature of AI in general, and about us, given that it hid in plain sight for 7 years, is what is compelling to me.

These groups have built intelligences but they have not built souls.

The machine knows how to win, but the soul knows how to play.

Jun 19, 2023 · Liked by Frank Lantz

What a dazzler! Brilliant, to compare the jobless Matrix-like future to the post-Go Japanese twilight. As you can imagine, I've had to devote a great part of my own time contemplating the question of what lends a man (note I don't mention woman) a sense of identity or worth in the absence of employment, personal wealth, or credentials. The answer in my case has more or less ended with my generation (and it was a faint enough social pulse even within us); the idea of trying to remain an old-fashioned gentleman in spite of all adversity.

Teddy Roosevelt perhaps best exemplified this notion on this side of the Atlantic; Heinlein's "self-sufficient man" but with a veneer of intellectual curiosity and good breeding. For me, the ideal gentleman has always been George Saintsbury, expert on Jane Austen, 19th-Century French Literature, and the world's foremost authority on Walter Scott; an academic, though he would have been quite content as a dilettante, and the preeminent oenologist of his day--indeed, many of the wine-tasting societies he founded are still extant.

He was a man supremely uninterested in titles and to a great degree wealth, a man who pursued a lifetime of passions and hobbies (in spite of having his house bombed over his head), a man of impeccable manners who wrote so gently and observantly of his favorite authors that his words are a constant inspiration to read them. In short, a man who would have lived a full and rich life even if his post at Oxford had been usurped by an AI.
