Welcome to Polymathic Being, a place to explore counterintuitive insights across multiple domains. These essays take common topics and investigate them from different perspectives and disciplines to come up with unique insights and solutions.
Today's essay wrestles with my love/hate relationship with Artificial Intelligence (AI). On the one hand, I feel like the interwebs are chock full of absolute rubbish about AI, snake-oil salesmen selling AI, and people who should know better but are still freaking out about AI. On the other hand, AI makes a fantastic literary foil that accentuates the qualities of what it means to be human.
I’ve always been a fan of technological advancements and have been studying, developing, and applying AI solutions to complex problems for a decade. It makes up a core element of my portfolio of advanced technology innovations and is a crucial enabler for my professional career.
But I hate it.
To be clear, I hate how much misunderstanding there is out there. We've covered a few of these topics already:
Like when people claim AI has biases but never pause long enough to think about what that means:
Or when people claim sentience or superintelligence without understanding the layers of AI:
Or the fact that most of the issues that will ever manifest from AI are due to how HUMANS react to it:
Or the panic about AI replacing artists without understanding what makes us unique:
Or the discussions about AI intelligence without first even understanding how our brains work:
And lest you wonder if this is just me being an 'old man yelling at clouds,' as the meme goes, I'm merely standing, often belatedly, on the shoulders of other giants in the field. One such giant is
who wrote this wonderful piece synthesizing many of these topics.
Clearly, a pattern is emerging here... To be honest, I'm kind of tired of writing on the topic because it keeps sucking me in, and I feel like it distracts from so many other cool Polymathic conversations.
But AI is Useful!
The thing is, AI does make a great literary foil:
In any narrative, a foil is a character who contrasts with another character, typically, a character who contrasts with the protagonist, in order to better highlight or differentiate certain qualities of the protagonist. A foil to the protagonist may also be the antagonist of the plot.
In this case, the protagonist is us: the human.
And counterintuitively, the antagonist is also us.
We are the ones who are both creating and panicking about AI. We are the ones who desperately look for 'sparks of intelligence' so that we can justify reading human-level intelligence into it instead of having an honest conversation about who we are and how AI is limited in that regard.
It makes the perfect literary foil, which is why I use both the protagonist and antagonist roles in my book Paradox, whose tagline reads:
In the battle over Advanced AI, will we lose our humanity, or learn what truly makes us human?
It's an apocalyptic exploration of advanced AI gaining general intelligence, one that looks at the unique idiosyncrasies that being human entails.
What I like about AI is that it taps into the popular zeitgeist to explore psychology and sociology in a way that captivates the human imagination. It provides amazing examples of real humans reacting to the unknown and allows us to explore what that really means for us.
AI is exceptionally useful, not just for the technological revolution that can boost human thriving (and yes, there are innumerable good things!), but also for the psychological and sociological introspection it can foster.
(For an in-depth analysis of the psychological frameworks, I recommend
and his essay entitled "The Impending AI Culture War and How to Avoid It.")
What to do:
Technology has evolved continuously since time immemorial. AI is one more step in that evolution, yet given its association with human intelligence (from which we derive the concept of 'artificial'), it creates a schism in how we think about it.
Like other topics we've explored, AI requires that we first look inside ourselves and know who we are before we can fully understand how to maximize the benefits and intelligently reduce the harms.
Truly, in the battle over Advanced AI, will we lose our humanity, or learn what truly makes us human?
This is the question that needs to be addressed. My book Paradox doesn't have the answers, though. In fact, I hope that readers will walk away still curious about which side they'd pick!
And that's the key that I hope you take away today: that we remain curious, that we stop believing we know the full system, and that we constantly shift our perspectives to ensure we fully account for all the implications.
That’s the foundation of Systems Thinking and it’s also what I believe to be the solution to stemming any apocalyptic ending to AI. There’s so much potential, it’d be a shame to lose our humanity.
Enjoyed this post? Hit the ❤️ button above or below because it helps more people discover Substacks like this one, and that's a great thing. Also, please share it here or in your network to help us grow.
Further Reading from Authors I Really Appreciate
I highly recommend the following Substacks for their great content and complementary explorations of topics that Polymathic Being shares.
Goatfury Writes: All-around great daily essays
Never Stop Learning: Insightful Life Tips and Tricks
Cyborgs Writing: Highly useful insights into using AI for writing
One Useful Thing: Practical AI
Great all-around tech topics
Right there with you. I'm to the point that when the topic comes up, as it always does, I pinch the bridge of my nose and take a deep breath. It is so useful. I love your conclusion:
"And that’s the key that I hope you take away today; that we remain curious, that we stop believing we know the full system, and that we constantly shift our perspectives to ensure we fully account for all the implications.
That’s the foundation of Systems Thinking and it’s also what I believe to be the solution to stemming any apocalyptic ending to AI. There’s so much potential, it’d be a shame to lose our humanity."
As an Engineer, I only hate the vapid nature of the term "Artificial Intelligence" and all the inherent 'pseudo-intellectual sciolism' of consumer opinion about it. That's the most annoying aspect of all Information Technology users: the arrogant ignorance of never reading the 'bleeping' manual, then pretending to know.