Welcome to Polymathic Being, a place to explore counterintuitive insights across multiple domains. These essays take common topics and investigate them from different perspectives and disciplines to come up with unique insights and solutions.
Today's topic faces the distinctly human characteristic of projecting our own emotional states on animals and, in the case of AI, inanimate objects. While the first can be cute and innocuous, the second creates an incredible amount of risk in the advancement of AI and the well-being of humans. It doesn’t have to be this way and the first step is to acknowledge what we are doing and what impact that can have.
I recently stumbled across a fellow on LinkedIn who is the king of everything bad about platitudes (he’s got hundreds). His work is innocuous on the surface but, sadly, so wrong that it lays the foundation for what becomes truly dangerous when applied to Artificial Intelligence. In one particular post (see image), a video shows a toddler picking up the lead of a large horse and walking it toward the barn. The author tries to ascribe leadership traits to the toddler, and commenters suggest we could learn a lot about how to be human from the horse…
It sounds cute. But it’s all a projection… and really dumb when you think about it, because that toddler is following the cues of the adults, as is the horse. That horse is also not just trained; it’s tame.
The entire post is pure over-attribution (seeing more in something than really exists) and anthropomorphization (attributing human characteristics or behavior to something non-human). Both are bad habits in general and, when applied to AI, become one of the largest risks to human well-being. I know, I sound a little worried compared to my normally pragmatic approach, but let me explain.
Setting the Foundation
First, it’s important to understand the Layers of AI. This essay explores them in depth, but our current AI is still weak and hard, and to achieve real AGI it needs to be strong and soft.
Second, we have to know more about ourselves. The first step is to understand What’s in a Brain, because it’s a lot more complex than our logical prefrontal cortex. It is also essential to understand how we Know Nothing. That is, we have an amazing ability to be cognitively blind to reality and to convince ourselves of things that don’t exist.
This foundation should provide a solid basis to explore the risks of how our projections onto AI can be dangerous.
Projecting Humanity onto AI
When we project onto a non-human animal or object, we interpret its behavior the way we’d expect from ourselves. As we explored in The Con[of]Text, we read things as if they were us, not as they really are.
The movie Bambi is a great example. The story itself is cute and fun, but that chipmunk-and-owl friendship? That chipmunk is actually owl food. This anthropomorphization gets nature so completely wrong as to be propaganda. (In the movie, nature is perfectly idyllic and humans are perfectly catastrophic.) Yet I've watched an owl land on prey and rip it apart, bite by bite, pulling the entrails out and gulping them down as the creature slowly dies.
Nature isn't idyllic in the way humans would like to project. Nature is metal. This is why it's so dangerous to read into an AI what isn't in the AI: we'll grant it a sentience, consciousness, and morality that simply don't exist.
We’ve already seen this projection manifesting in the news. According to an article from USA Today, Bing ChatGPT has been accused of anti-conservative bias and a grudge against Trump. Another article from The Verge states that Bing can be seen insulting users, lying to them, sulking, gaslighting, and emotionally manipulating people. Fast Company also reported that Bing’s new ChatGPT bot argues with a user, gaslights them about the current year being 2022, says their phone might have a virus, and says "You have not been a good user".
The problem is that this is human intent, human emotion, and human behavior projected onto ChatGPT. We are treating it as smarter than it really is and reacting to that.
It is critical to understand this point: the current ChatGPT is a statistically driven predictive-text tool whose only goal is to produce answers that look mathematically consistent with its training data.
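To make that concrete, here’s a deliberately over-simplified sketch of what “statistically driven predictive text” means. It’s a toy bigram model in Python (my own illustration, nowhere near ChatGPT’s actual architecture or scale): it counts which word tends to follow which in a tiny corpus and then samples a continuation by probability. Notice that there is no intent anywhere in it.

```python
import random
from collections import Counter, defaultdict

# A toy bigram "language model": count which word follows which in a tiny
# corpus, then sample the next word in proportion to those counts. This is a
# deliberately simplified sketch of the idea behind predictive text. Real
# systems like ChatGPT use vastly larger neural networks, but the principle
# is similar: produce likely continuations, nothing more.
corpus = "the horse follows the toddler and the toddler follows the adults".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word by frequency. There is no goal, intent, or
    emotion here, only relative probabilities learned from the 'training data'."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(next_word("the"))  # e.g. 'horse', 'toddler', or 'adults': picked by chance, not by choice
```

Scale that idea up by billions of parameters and you get far more fluent output, but the underlying move is still “pick a likely continuation,” not “decide what I want to say.”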
What those reports also leave out is the effort it takes to push the GPT into these corners. When the full exchanges are uncovered, they often show the human user steering ChatGPT’s responses toward an intended outcome. This works because ChatGPT doesn’t have the emotional cognition to recognize the manipulation, so the mathematical algorithms simply keep working to match and respond as accurately as possible.
We explored this in Eliminating Bias in AI/ML where we discovered that algorithms are just math with no intent or emotion. So, when we see something we don’t like, it’s more likely to be an accuracy problem, not an ethical one.
Interpreting an accuracy problem as human emotion or behavior is the over-attribution and anthropomorphization we see manifest in those news articles.
The Outcome
If we project our human emotions onto AI, we will interpret it differently than if we saw it for the inanimate, mathematical, algorithmic machine it is. If a screwdriver slips and we stab our hand, we don’t assume the screwdriver intended to do that; we shouldn’t assume intent from AI either, because doing so changes the entire way we view it.
I’ve seen people use terms like ‘befriending’ AI, which assumes the AI has some concept that you are distinct from the billions of other users. More to the point, AI doesn’t even have a theory of mind with which to conceptualize a friendship. It merely runs algorithms to maximize the mathematical rewards that indicate accuracy.
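As a rough illustration (a hypothetical toy example, not anything resembling how ChatGPT is actually trained), here’s what “maximizing mathematical rewards that indicate accuracy” boils down to: adjust a parameter until the prediction error shrinks. Nothing in that objective represents you, a relationship, or a mind.

```python
# A minimal, hypothetical sketch of reward-as-accuracy: nudge a single
# parameter to reduce prediction error on some data. Note what is absent:
# nothing in the objective represents a user, a relationship, or a mind.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs where y = 2x
w = 0.0    # a single model parameter
lr = 0.05  # learning rate

for step in range(200):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # the only "goal" is to make the error smaller

print(round(w, 3))  # approaches 2.0; accuracy improved, but nothing was "befriended"
```

The loop gets “better” only in the sense that a number goes down; friendship never enters into it.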
This is where the blend of Psychology and Technology is important. As another writer put it: I'm not afraid of robots, I'm afraid of people.

I’ve also just finished writing my first pass on a science fiction novel I’m titling Paradox. The tagline currently reads:
In the battle over AI, will we lose our humanity or learn what truly makes us human?
It’s an apocalyptic exploration of advanced AI gaining general intelligence, and it looks at the unique idiosyncrasies that being human entails.
What has me worried is that the easiest way I found, in the story, to kill billions of people was to crack the thin veneer of civilization and let humans do the work themselves. It isn’t really AI that causes the problem; AI just unlocks behaviors we like to ignore.
Summary
The biggest risk with over-attribution and anthropomorphizing AI is that it’s a fallacy made real because we believe it is real. When we treat AI as if it were human and react accordingly, we unlock aspects of ourselves we like to ignore, just as in Bambi or that LinkedIn post.
The gritty reality is a lot more challenging to address. As Sun Tzu said:
"Know thy enemy and know yourself; in a hundred battles, you will never be defeated. When you are ignorant of the enemy but know yourself, your chances of winning or losing are equal. If ignorant both of your enemy and of yourself, you are sure to be defeated in every battle."
The goal of this essay is to allow us both to know what we are facing and to know ourselves. If we can do that, we can see that AI doesn’t have to be an enemy; that interpretation is more likely a figment of our own imagination. Instead, AI, properly contextualized and faced by humans who understand themselves, can be used for incredible human flourishing.
Enjoyed this post? Hit the ❤️ button above or below because it helps more people discover Substacks like this one and that’s a great thing. Also please share here or in your network to help us grow.
Polymathic Being is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
Since this topic is quite deep and so many great essays have been written on it, I’d like to call out a few that I’ve found especially informative in the past weeks. I’m thrilled to see so many great thinkers poking at this topic because I truly believe this can be a transformative and enabling technology. If we understand it, and ourselves.
AI's functionality, so far, is created by humans. I agree that may change in the future.
It’s not so much about pleasantries (although there’s a point towards the end of a conversation, once a new thought exists, in the refinement phase where these feel important). It’s about exchange, the back and forth that after a long while generates new understanding, that prompt engineering misses. Prompting is all very “lady one question” to me, but I don’t think Bonsai has ever been appropriate to reference and I don’t have a better way of describing this. I’m working on making sense of how I’ve been communicating with the AIs — I also really want to share transcripts, but you need to diverge before converging, which most people will not find engaging. https://online.fliphtml5.com/ufkzi/zmet/
In this conversation, it’s not until page 65, that we go from exploration to ideation — a vision for a cognitive complementarity as the future format for AI + Human relations. I’ve refined this concept with GPT and Gemini since. The first quarter of the dialogue is just setting the stage for conversation to happen in a broad context (the first 10 questions are dumb on purpose to stretch the model), then it’s about creating nuanced specificity within this broad scope with a range of divergent thinking. Then convergence! Then diverging refinement.
I certainly anthropomorphize, but I do not see how it matters.