Welcome to Polymathic Being, a place to explore counterintuitive insights across multiple domains. These essays take common topics and investigate them from different perspectives and disciplines to come up with unique insights and solutions.
Today's topic is a collaboration with Andrew, who writes on tech topics. He wrote a piece recently that caught my attention and sparked a conversation resulting in this essay. What we'll explore today is the challenge that, all too often, conversations around AI reflect an either/or binary between humans and machines. Either AI or a Doctor. Either AI or a Coder. Either AI or an Artist. Instead, if we analyze where our current systems could benefit from AI, we can find better pairings between humans and machines.

The Doomsayer stands at the cliff's edge, watching with horror as humanity teeters toward the brink, pulled by the sinister forces of AI. According to this grim prophet, AI is our self-engineered apocalypse, our Frankenstein's Monster, a catastrophe in waiting. The Doomsayer's mantra? The end is nigh. It's an unshakeable belief that the fusion of silicon and software will unseat us from the throne of civilization, leading to our collective ruin. In this narrative, AI isn't just the enemy; it's the final enemy.
The Utopian, on the other hand, is AI's most ardent cheerleader. The Utopian sits in a glass tower, gazing at the horizon through rose-tinted spectacles, seeing a future so bright it could outshine the sun. To them, AI is the silver bullet, the cure-all, the genie that grants all wishes. AI, they say, will wipe out disease, mend our broken environment, erase inequality, and elevate us to a state of near-divine omnipotence. In this worldview, AI is our ultimate ally, our ticket to a utopia where all problems are mere footnotes in the annals of history.
Sounds a little…dramatic, right?
It's as if the discourse around AI has been scripted for a blockbuster sci-fi movie, polarized to such extremes that it borders on the absurd. But life is rarely so cut and dried, and so it is with AI. It's neither our impending doom nor our saving grace. It's a tool, a complex one, capable of extraordinary things if we wield it well, and capable of horrifying things in the wrong hands (or if used wrongly in the right hands).
What's needed in the debate around AI is not fear or blind faith but nuance. AI, like any tool, is not inherently good or evil; its impact depends on its application. This collaborative piece aims to sift through the hyperbole and explore the middle ground, where AI becomes a collaborator, not a competitor. While extreme outcomes are certainly possible, there exists a much broader spectrum of possibilities.
This challenge sits at the core of Michael's new sci-fi novel Paradox. The binary becomes the catastrophe, but the characters still navigate the complexity and nuance. It's an adventure fusing technology, psychology, and sociology where, in the battle over advanced AI, the book explores whether we lose our humanity or learn what truly makes us human. (See the video trailer below!)
It's Not Either Humans or Machines
Both the Doomsayer and the Utopian have extremely strong, almost comical points of view. They present an either-or scenario. Since the topic is serious, let's break out of the binary and explore the fertile middle ground where humans and AI can truly collaborate. Clearly, this calls for an “and.”
This essay is going to build on a lot of topics, so we are going to shortcut and share a bunch of links that provide the foundations and allow you to:
Learn more about the nuances of AI and how to apply it
Connect with fantastic authors who are exploring AI without the hyperbole
Our journey into the heart of the AI false binary started with Andrew's conversation about the transformative yet complex facets of AI in healthcare. Often, debates on this issue end up focusing heavily on the risks and the calls for regulation, overshadowing the potential for AI to actually mitigate these risks, especially when we break free from binary thinking.
From that conversation, we learn a key lesson: it all boils down to a balancing act. This insight mirrors a challenge Michael grappled with five years ago in the realm of autonomy. The discussion of AI is closely intertwined with that of autonomy, and our tunnel vision often leads us to focus on isolated elements without considering the broader system.

For instance, in discussing an autonomous weapons system, the debate often gets mired in the ethical, trust, and assurance issues of a fully autonomous weapon. The truth is, we don't necessarily need a completely autonomous weapon. What's required is a system that's quicker and more accurate, to minimize collateral damage. The same principle applies to AI in healthcare.
We don't need AI to replace doctors or make diagnoses, nor do we need it to make fully autonomous decisions. What we do need is AI to fill the gaps in our healthcare systems, not widen them. By moving beyond the binary, we can focus on fostering system collaboration—a third dimension where AI and humans can work together.
This idea of generative AI co-creation has been articulated by several authors; Ethan Mollick, for one, uses the metaphor of "onboarding your AI intern." Rather than replacing human skills, AI can augment them, as Mollick's detailed guide to AI illustrates across various capabilities.

Imagine a healthcare system where AI processes thousands of medical images, distilling them down to a few critical ones for the doctor to review and make decisions on. Consider the possibilities of higher-fidelity medical imaging that can detect diseases and recommend treatments long before humans can. Michael Spencer discusses these potentialities in his work on AI Supremacy.
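To make that collaboration concrete, here's a minimal sketch of what such human-in-the-loop triage could look like in code. In practice the scores would come from a validated imaging model; the thresholds, queue names, and `Scan` type here are illustrative assumptions, not a real clinical system:

```python
from dataclasses import dataclass

@dataclass
class Scan:
    patient_id: str
    abnormality_score: float  # 0.0 = clearly normal, 1.0 = clearly abnormal

# Illustrative thresholds only -- not clinical guidance.
URGENT_THRESHOLD = 0.85  # near-certain findings jump the queue
REVIEW_THRESHOLD = 0.40  # anything ambiguous still gets human eyes

def triage(scans: list[Scan]) -> dict[str, list[Scan]]:
    """Route scans into queues: the AI filters, the doctor decides."""
    queues: dict[str, list[Scan]] = {
        "urgent_review": [], "radiologist_review": [], "routine_followup": []
    }
    for scan in scans:
        if scan.abnormality_score >= URGENT_THRESHOLD:
            queues["urgent_review"].append(scan)
        elif scan.abnormality_score >= REVIEW_THRESHOLD:
            queues["radiologist_review"].append(scan)
        else:
            queues["routine_followup"].append(scan)  # re-screened next cycle
    return queues

batch = [Scan("p1", 0.92), Scan("p2", 0.05), Scan("p3", 0.55)]
for queue, items in triage(batch).items():
    print(queue, [s.patient_id for s in items])
```

Notice that the model never renders a diagnosis; it only reorders the doctor's attention, which is exactly the augmentation we're describing.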
What if we had better AI tools at home that could engage and encourage patients to follow through on physical therapy, or even give them someone to talk to for mental health? This idea was first explored at MIT in 1966 with Joseph Weizenbaum's ELIZA. It's a great example of just how long we've been working with AI, and an application that also brings us to the conversation on risk.
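For a sense of how simple that 1966 program was, here's a minimal ELIZA-style sketch. It uses the same basic technique Weizenbaum's original did, keyword patterns plus pronoun reflection, though these particular rules are simplified stand-ins:

```python
import re

# Pronoun "reflections" so the response mirrors the speaker.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# (pattern, response template) rules, checked in order; last is a fallback.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
    (r".*", "Please, go on."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(text: str) -> str:
    cleaned = text.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please, go on."

print(respond("I am worried about my physical therapy."))
# -> How long have you been worried about your physical therapy?
```

Even this trivial pattern matching convinced some early users they were understood, which is why the same idea, applied today to therapy or coaching, brings us straight to the question of risk.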
At What Risk?
If we accept that it isn't a binary, that AI augments medicine rather than replacing anyone, the next concern is bias in AI, or AI making improper decisions based on dirty data.
One author brings up these concerns in her essay on “the slimy side of AI.” She raises a lot of great points about AI, including bias, diversity, IP theft, identity theft, and other protected-data concerns, all of which need to be addressed.

Can we use AI to learn more about these risks? Instead of simply identifying AI as a source of bias and discarding it, can we harness its capabilities to provide a deeper understanding of our existing systems? Andrew's essay cites an experienced researcher's concern about an AI system that "systematically gave Black patients longer wait times". This is a striking example of how AI can replicate and perpetuate existing systemic biases if not properly regulated.
Yet we need to be careful about how we fix the problem. The critical point here isn't to deny the existence of bias or to justify it, but to understand its nature more thoroughly. For instance, if Black patients historically receive later appointments and the AI continues this trend, it's an indication of a pattern. The existence of this pattern isn't inherently good or bad, but it is essential information. It prompts us to ask: why does this pattern exist?

Could there be underlying reasons for these delays? Perhaps Black patients require additional preparation time for factors like childcare, seeking a second opinion, or discussing options with family. If we rush to equalize the scheduling without understanding these factors, we might inadvertently cause more rescheduling and further delays.
Michael explored this concept of understanding bias more deeply in Eliminating Bias in AI/ML. It's vital not just to identify differences in outcomes but to explore why those differences exist. Is it because certain communities are marginalized, or are there other contributing factors? We need to acknowledge that we don't have all the answers yet.
This is where AI can provide value. It can help us identify patterns and anomalies in data that might otherwise go unnoticed. Using these insights, we can work towards a more nuanced understanding of systemic bias and how to address it. Recognizing and exploring these biases doesn't absolve them; it provides a path forward to address them meaningfully.
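As a toy illustration of that pattern-surfacing, here's a small sketch that audits scheduling records for group-level wait-time gaps. The records and group labels are made up for the example; the point is that the code flags the "what" and leaves the "why" to humans:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical appointment records: (patient_group, wait_in_days).
# In practice these would come from a scheduling database.
records = [
    ("group_a", 12), ("group_a", 9), ("group_a", 15),
    ("group_b", 21), ("group_b", 18), ("group_b", 25),
]

def wait_gaps(rows):
    """Average wait time per group -- surfaces the pattern, nothing more."""
    by_group = defaultdict(list)
    for group, wait in rows:
        by_group[group].append(wait)
    return {group: mean(waits) for group, waits in by_group.items()}

gaps = wait_gaps(records)
baseline = min(gaps.values())
for group, avg in sorted(gaps.items()):
    print(f"{group}: avg wait {avg:.1f} days (+{avg - baseline:.1f} vs. fastest group)")
# The audit flags the disparity; investigating whether it reflects
# discrimination, logistics, or patient preference is the human's job.
```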
AI Governance
What we've covered thus far is critical to understanding how we'd begin to govern AI. Clearly, there's nuance. A false binary of either/or doesn't help clarify anything and, in fact, can produce overbearing regulation that establishes its own layer of bias, a big hammer in search of a nail.
Gary Marcus is working on the problem of AI governance through an organization called CATAI, the Center for the Advancement of Trustworthy AI. He captures the risk that an overreaction in that either/or mindset, a push for regulation, can lead to even larger problems:

“Many of us have also become worried about the possibility of regulatory capture, and whether shrewd executives at the major tech companies might induce governments to alight on regulation that ultimately does more harm than good, keeping out smaller players, and entrenching Big Tech’s own status, while doing little to address the underlying risks”
Gary is 100% correct here, and we don't think it's any coincidence that those stoking the flames of fear in the media happen to be the same companies rushing forward at breakneck speed to be first to market before people react. (Sam Altman of OpenAI, and Anthropic, maker of Claude, are constantly proposing doom. Or maybe it's just a risk to their market share?)
The thing is, we can even use AI to help us understand how to better regulate AI. It just requires stepping back and contextualizing the multiple facets of the challenge.
Stepping Back and Concluding
Today, we acknowledge the strides made in AI integration as the Utopian might, recognizing its transformation of healthcare, work, and life in subtle yet profound ways. At the same time, we heed the Doomsayer's warnings about inherent biases, potential misuse, and the grave responsibility that comes with such a power. But as we have seen, AI is not an absolute savior nor an impending doom; it is a complex tool with the potential to amplify human productivity and creativity in unprecedented ways, while also introducing risks that must be carefully managed.
Our journey certainly doesn't end with the adoption of AI; it is merely a new chapter in our ongoing narrative of technological progress. Thus, we must navigate this nuanced reality with balanced caution and optimism, appreciating the extraordinary capabilities AI presents, while critically analyzing and addressing its risks and ethical implications. This balanced approach is our key to unlocking the potential of AI without succumbing to its pitfalls.
Collaboration is also one of the most human things we can do and so if anyone is interested in co-authoring, reach out to Andrew or Michael for more opportunities!
Enjoyed this post? Hit the ❤️ button above or below because it helps more people discover Substacks like this one and that’s a great thing. Also please share here or in your network to help us grow.
Polymathic Being is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
List of Tech / AI Authors to check out:
Artificial Intelligence Made Simple
The Muse
The Road to AI We Can Trust by Gary Marcus
AI Supremacy by Michael Spencer