This just makes it egregiously visible that this AI is indeed incapable of making decisions on various scales, reflecting on what it does, etc. Every single artistic idea here comes from you, the human. Let's go through the part of the conversation where art-making happens.
The subversion-through-iteration thing starts as an attempt to subvert the filters, then morphs into making a non-AI-looking picture (which is an exercise *you* suggest). It isn't quite clear from the discussion that follows whether "algorithmic boundaries" is meant in the sense of "AI cannot make art" or in the sense of "there's a censor blocking responses".
*You* observe that the generated images are still in the mode of "AI illustration". You (laudably!) provide some non-forcing feedback ("Think about the AI/non-AI images you've seen"), which the AI doesn't visibly react to.
*You* suggest making something that isn't a picture. The AI doesn't do much with that.
I'm not sure the AI's responses are guided by looking at its art. Its commentary could almost be made without looking at the picture, just guessing that this is the kind of review a piece of AI art would get. "The fragmented shapes and abstract forms are not yet pushing against clear boundaries or limits in a meaningful way" has two halves: one describes the picture, one vaguely criticises *any* piece of art whatsoever.
Then, the stuff you call cool: "no SURFACE, no TEXTURE, no light or shadow, no sense of space, form, or structure." The words in capitals were first used by *you*. Later in the bullet list, there's even more direct repetition: "no COLOR, TEXTURE, LIGHT, SHADOW." All of those words were first used by you. Depth, space, form, shape, texture are all within that same conceptual space.
Then *you* suggest looking at an image as a set of RGB numbers. The AI finally does what *you* seem to have in mind.
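(As an aside, in case "an image as a set of RGB numbers" sounds abstract: here's a minimal Python sketch of that idea. The dimensions and the list-of-tuples representation are my own illustration, not anything from the conversation. A pure black canvas is the degenerate case: every pixel is the same triple of zeros.)

```python
# Toy illustration: "an image" is nothing but RGB numbers.
# WIDTH/HEIGHT are arbitrary values chosen for this sketch.
WIDTH, HEIGHT = 4, 4

# A pure black canvas: every pixel is the triple (0, 0, 0).
canvas = [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

# No surface, no texture, no light or shadow to speak of -- just zeros.
assert all(pixel == (0, 0, 0) for row in canvas for pixel in row)
```

At the pixel level that's all the black canvas "is", which is part of what makes it read as purely conceptual rather than pictorial.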
...
Isn't this a bit like the Go situation? Generative AI is awesome (if problematic), but it's a tool. It's not intelligence, it's something else.
I think making the AI use first-person grammar is just obfuscation, a suggestion that a badass search engine/statistical interpolation tool is "like" a human.
None of this is to say that AI is a bubble or whatever. But the discourse around it is becoming more and more useless, perhaps precisely because people keep looking for the ways it is intelligent, rather than trying to identify (in clear, non-flowery language) ways in which it is not, revealing something new about what intelligence means. That would be cool.
This fricking rocks. I was going to say at one point that you've proven that ChatGPT has no introspection (i.e., it can't "see" what it's generating); maybe it only sees the actual numerical output (does it "see" the numbers on the way out, or does it just trust its probability engine that what it spews out satisfies the prompt?). But then it finally creates a pure black canvas. A purely conceptual work à la Malevich.
Also, "That's literally the most AI thing I've ever seen." LOL
👏👏👏