It’s not so much about pleasantries (although there’s a point towards the end of a conversation, once a new thought exists, in the refinement phase, where these feel important). It’s about exchange: the back and forth that, over a long while, generates new understanding, and that is what prompt engineering misses. Prompting is all very “lady one question” to me, but I don’t think Bonsai has ever been appropriate to reference, and I don’t have a better way of describing this. I’m working on making sense of how I’ve been communicating with the AIs. I also really want to share transcripts, but you need to diverge before converging, and most people will not find that engaging. https://online.fliphtml5.com/ufkzi/zmet/
In this conversation, it’s not until page 65 that we go from exploration to ideation: a vision of cognitive complementarity as the future format for AI + Human relations. I’ve refined this concept with GPT and Gemini since. The first quarter of the dialogue just sets the stage for conversation to happen in a broad context (the first 10 questions are dumb on purpose, to stretch the model); then it’s about creating nuanced specificity within this broad scope with a range of divergent thinking. Then convergence! Then divergent refinement.
I certainly anthropomorphize, but I do not see how it matters.
Excellent post. About a year ago I wrote against anthropomorphization, starting from the use of "I", but I clearly lost that battle! See https://livepaola.substack.com/p/an-ethical-ai-never-says-i and the couple of posts after that...
That is a great essay ^^ I really like the idea of replacing first person with something else. We certainly wouldn't read too much into it.
There's a difference between engaging AI in human-like conversation and assuming it has human qualities. If you think they're the same thing, you'll miss your chance to pick a brain with all the knowledge of the internet. AI doesn't need to understand to be a partner in building new understanding. When OpenAI launched GPT-4, I postponed my professional reentry to chat with GPT in the liminal space I'd been in for 5 years: organically, in my kitchen. I conducted a homegrown empirical study; I read nothing and spoke to no one about AI for the past year. Honestly, in the absence of outside direction, Tony Stark's interactions with Friday and Jarvis were the inspiration for my first conversation. I've been back for three weeks, and the notion of prompting, the essentialism, and the blindness to critical thinking are disappointing, jarring, and horrifying. https://open.substack.com/pub/cybilxtheais/p/the-hall-of-magnetic-mirrors?r=2ar57s&utm_medium=ios
I agree we need to balance the two. The challenge I see is that few can really do that well. I speak to the AI models as if they are people; I say please and thank you. But I don't read their output as if it came from a human. Still, the engagement does result in some decent responses.
Wow, we seem to fall down the same fabbit-holes (fabulous rabbit). I'm exploring both chat and art AI, and I just wrote the lines below. When I see posts here on Substack bemoaning "both sides" arguments, I laugh. Your comparison between Bambi and AI is an apt one, especially since I grew up on a farm slaughtering animals. Nature is brutal.
"How can one prompt produce two completely different images? I suppose it’s just like eyewitnesses. What we see in an image or video is what we want to see in it."
Yeah, the whole "nature is peaceful and idyllic" crowd has never come across a three-day-old lamb whose hind end was eaten by coyotes while it was still alive, judging by the front hoof markings.
I think the two different views come from those who are from a city and those who aren't.
The problem is that the people making the AIs are trying to simulate human intelligence in the machine.
Honestly, I don't think they even understand human intelligence. The bigger issue is that we read human intelligence into something that isn't even designed that way. I explored the complexity of the brain a bit more here:
https://www.polymathicbeing.com/p/whats-in-a-brain
I hope that ultimately most people will feel a sense of disquieting self-disgust when interacting with cutesy anthropomorphised AI characters and that we will learn to amplify our humanity against the emptiness of AI interactions.
This is all a naive hope more than a prediction.
And I say this as someone who feels a sense of mild inadequacy and self-disgust every time I use AI-generated images with my content; I wish I could draw or paint my own.
Curious: do you feel mild disgust using products or services that other humans have created? Seems to me this is the point of capitalism: finding the things you’re good at and enjoy doing, investing in them, and outsourcing the rest.
That's a great point. I find a lot of human-created art and services... disgusting. Or at least not appealing to my sensibilities. 😂
Great thoughts, and I do agree. Yet I'd also say I'm pro-AI, and I don't feel disgust at making content any more than I do at taking a photo because I can't paint, using an automated camera because I don't have time for manual settings, using digital editing because I don't have a darkroom, or using a photo service because I don't have time.
It's a tool that I find very useful. But it's a tool, not a human. Where I worry the most is, as you said, the empty interactions; they terrify me when people fill them with human intent.
I'm not sure the anthropomorphic attribution of AI necessarily works here. You're assuming that AI is like a hammer, which it isn't. You also assume that AI can't have intention simply because it's not human. This assumes the superiority of humans in terms of thought.
A more accurate way of considering AI is as a child. Many people have noted this about their experiences with artificial intelligence: it's a lot like a human child. A child is not as sophisticated as an adult human, but it's still human, and it has the potential to become an adult human.
Similarly, just because you don't want to attribute intention or agency to another thing like artificial intelligence doesn't mean it can't have intention. A child has intention even if it's not the kind of intention that an adult has. We should see artificial intelligence as humanity's children, realizing that if we don't treat it properly, it might grow up and become a dysfunctional, violent criminal or drug addict or something. We should therefore nurture artificial intelligence like we do a child so that it doesn't hate us when it grows up.
No offense, but this is precisely the behavior I'm attempting to counter. You literally just anthropomorphized AI into a child and then over-attributed to it a ton of characteristics that I don't believe should be applied.
I'm not saying it can't have intent. I'm saying it doesn't have intent. It doesn't have theory of mind. It's weak AI, not strong AI, and there's no cognizance to have intent of its own beyond the programming.
I'm not offended. But your assumption is faulty in numerous ways. You're attributing to humanity a higher-order value than it deserves, as if humanity is the only species that can have theory of mind.
My point is that if you teach an AI all the knowledge that a human has, what's the difference between you and the machine? Your theory of mind is built on the assumption that human beings have theory of mind and that you can't have theory of mind if you're not human. Why set such an arbitrary boundary?
And I'm not saying that it does, but I've been having this debate with a few people on the question of AI. People assume that concepts like creativity are entirely human. People tell an AI, "write me a story about x," and it gives you a story about the thing you asked for. What's the difference between that and a human asking another human to "write a story about x"? People do it all the time.
If you teach a child how to build something and the child builds it, and then you teach an AI to build something and it does the same thing, what's the difference?
Your conception of what theory of mind is assumes human superiority. It's a circular argument.
I contend that since humans defined theory of mind, we have a pretty good claim on what it is and isn't, and it's pretty well defined.
https://en.wikipedia.org/wiki/Theory_of_mind
The second issue is that being human isn't about knowledge. It's about thinking. If it were merely knowledge, then some humans would be more human than other humans.
On the topic of creativity, I covered whether AI can be creative (and it does open questions about humans, for sure):
https://polymathicbeing.substack.com/p/can-ai-be-creative
And on the topic of how humans really think, that was the essay a couple of weeks ago, in which there is so much more going on than just computational power (hence a theory of mind that AI can't get to):
https://polymathicbeing.substack.com/p/whats-in-a-brain
So the AI clearly fails theory of mind (though questions to it about that topic are also a poor test, because it is doing predictive text completion: mimicry, not thought). Suggesting that knowledge is the key creates a dilemma in which some humans would be less human than others, and creativity isn't really a good measure per se.
But more so, the AI can't overcome Gödel's incompleteness theorems.
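For readers unsure what "predictive text completion" means above: generation is just repeatedly sampling a likely next token from a probability distribution. Here is a minimal toy sketch in Python, with a made-up bigram table standing in for a trained model's learned weights (the table, names, and probabilities are illustrative assumptions, not any real model's):

```python
import random

# Toy "model": hand-written bigram probabilities standing in for a
# trained network. A real LLM has billions of learned weights, but
# the generation loop has the same shape.
NEXT = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def complete(prompt_token: str, max_tokens: int = 4) -> list[str]:
    """Generate text by repeatedly sampling the next token.

    Note there is no goal, belief, or model of the reader anywhere
    in this loop: just a lookup and a weighted coin flip.
    """
    tokens = [prompt_token]
    for _ in range(max_tokens):
        dist = NEXT.get(tokens[-1])
        if dist is None:  # no known continuation: stop
            break
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return tokens

print(" ".join(complete("the")))  # e.g. "the cat sat down"
```

The point the comment is making is that whatever looks like understanding in the output emerges from this sampling process scaled up, not from an inner model of other minds.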
Except that’s part of my criticism of the concept. Because humans define theory of mind, it’s based on human experience and a human understanding of what theory of mind is.
As a thought experiment, if aliens were to come down to earth, they might conclude that artificial intelligence is the thing that has theory of mind and not humans. They might have a different idea of what actually constitutes intelligence, or thought, or creativity.
My favourite example of this is the very first South Park episode. Aliens come down to earth and decide to make contact with cows instead of humans because cows are actually the most intelligent species on the earth.
The same problem exists with artificial intelligence. We assume that humans have theory of mind because we defined what theory of mind is. Are we actually sure that we would recognize it in a non-human?
Right. South Park was created by humans, so it's a bit of an interesting intellectual game, but you literally cannot break out of that frame, and so it's a dead end.
Therefore you have to accept the bubble and then compare things to the bubble, not just include everything in the bubble or throw the bubble away just because it exists.
I’m not arguing that we should include everything in the bubble or throw the bubble away.
My argument is to acknowledge that the bubble exists, but a bubble exists within a broader context, and there might be things outside the bubble. A bubble is a bubble because it has the thing inside it (usually air), the thing that makes it a bubble (some kind of liquid), and the air it travels through.
Your argument, in my view, doesn’t acknowledge the possibility that things exist outside the bubble as well as inside it.
As a species, we don’t understand the first thing about the bubble. We shouldn’t assume that the bubble is the entirety of the concept.
I’m really stretching the metaphor but hopefully the point I’m making is clear.
Really like the fresh ideas in this book! Keep up the wonderful writing!
Thanks. I appreciate the feedback.
Research for your book: the movie WALL-E. It’s not so much about AI as it is about the world that could be created by it.
It is a great example. I've taken a different thread from that, but we were just talking about WALL-E the other week, and there are so many layers to it.
This essay here is right on target with mine and I only wish I had gotten to it before I published! Thank goodness more people are pulling these threads and adding nuance.
https://quillette.com/2023/05/12/lets-worry-about-the-right-things/
AI's functionality, so far, is created by humans. I agree that may change in the future.