Welcome to Polymathic Being, a place to explore counterintuitive insights across multiple domains. These essays take common topics and explore them from different perspectives and disciplines, to come up with unique insights and solutions.
Today's topic continues a thread we started exploring in Don’t Trust AI, Entrust It which provides a solid framing for how to consider the ethics of advanced technology. Today we’ll do a thought experiment involving the ethical quandary where we have incredible forgiveness for human accidents but zero forgiveness for technology.
Intro
Let's focus on Autonomous Vehicles (AVs) and their underlying AI systems. A theme emerges: any mistake, any risk from an autonomous vehicle is treated as a failure of the technology and paints the developers as irresponsible.
This LinkedIn post is a great example. In fact, Philip's entire feed is full of one autonomous vehicle mishap after another: everything from the mundane errors of bumping into a light pole, taking a wrong turn, or driving slightly erratically, to more serious collisions. Nothing is insignificant enough to go unreported.
I'd like to point out a little psychological lesson here too. This example uses subtle keywords intended to trigger a response, like we learned in Clickbait Priming. Did you notice it's a "Viral video"? That suggests many people have watched it. Then Philip uses the language of drunk driving, priming you to bucket the concept into illegal irresponsibility, and his hashtag implies oversight.
There’s also the newness of autonomous vehicles which makes them perfect targets for the evening news. In Phoenix, where Waymo had the issue above, there are about 105 human-caused car accidents a day. That’s not to mention the thousands of near accidents or close calls. Human accidents are so common as to be mundane and so, for the News, Waymo makes a way more compelling story and social media post.
Thought Experiment
But what if that car wasn't autonomous and was instead being driven by a 15-year-old in driver's training? It might be the first time that student has ever been behind the wheel. Even if it is, they've had only a week of classroom education on laws and theory, and there are no simulations: they're put straight into a car for hands-on practical training.
This isn't a hypothetical either. This is driver's training today, and it's the same as when I went through it after my freshman year in high school. While I had practiced on backroads with my dad, many of the kids in our class hadn't. The instructor would pile three kids in the car and let us drive. One of my peers was BAD! We almost hit the gate going out of the school parking lot, forcing the instructor to slam on his brake. Those first miles were a nerve-wracking hell: he was all over the road with inconsistent speed and nearly out of control. The instructor finally turned us around and had him practice in the parking lot for a few hours. Thankfully, my friend and I could get out and watch from the sidelines as he almost hit every light post.
This fellow wasn't an outlier either; the other 30-odd students were similar, especially when we were taken into the larger cities to practice. Let's just say I've seen worse driving from student drivers than the AV Philip was appalled about in that post.
Therefore, my thought experiment is this: why are we okay with tossing 15-year-old kids behind the wheel of a car to learn as they drive, yet panic when autonomous vehicles learn the same way and call it completely and utterly irresponsible?
Now, I’m not trying to say it’s the best way to train an autonomous vehicle. This thought experiment is just to explore how we train our human children. From there, we have a baseline of what we might expect in other technological pursuits. It provides a pretty solid context to look at the data.
Data Analysis
Right now, analysis shows that autonomous vehicles are about as bad at driving as a 16-19-year-old. They are slightly worse on minor accidents but much better on fatalities. The statistics are messy and hard to turn into an apples-to-apples comparison, but here's how it breaks down:
The National Law Review reported that for every 1 million miles driven, there are 9.1 self-driving car crashes. In comparison, conventional human-driven vehicles have a crash rate of 4.8, with 16-19-year-olds closer to 7.
However, drivers 16 to 19 years old were involved in 4.8 fatal crashes per 100 million travel miles, while self-driving cars were involved in fewer than 1 per 100 million miles.1
Narrowing down a little tighter and comparing robo-taxis like the Waymo above to equivalent human-driven ride-sharing, one study found that "human drivers caused 0.24 injuries per million miles (IPMM) and 0.01 fatalities per million miles (FPMM), while self-driving cars caused 0.06 IPMM and 0 FPMM."
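To make the comparison a bit more apples-to-apples, here's a minimal sketch (my own illustration, not from any of the cited studies) that normalizes the figures quoted above onto a single basis of incidents per 100 million miles:

```python
# A rough normalization of the statistics cited above to a common basis
# (incidents per 100 million miles). The rates are the ones quoted in the
# text; the helper and layout are purely illustrative.

MILLION = 1_000_000
HUNDRED_MILLION = 100 * MILLION

def per_100m_miles(rate: float, basis_miles: float) -> float:
    """Convert a rate reported per `basis_miles` into a per-100-million-mile rate."""
    return rate * HUNDRED_MILLION / basis_miles

cited_rates = [
    # (label, rate, miles basis the rate was reported against)
    ("All crashes, self-driving",    9.1,  MILLION),
    ("All crashes, human drivers",   4.8,  MILLION),
    ("All crashes, drivers 16-19",   7.0,  MILLION),
    ("Fatal crashes, drivers 16-19", 4.8,  HUNDRED_MILLION),
    ("Fatal crashes, self-driving",  1.0,  HUNDRED_MILLION),  # "less than 1"
    ("Injuries, ride-share humans",  0.24, MILLION),
    ("Injuries, robo-taxis",         0.06, MILLION),
]

for label, rate, basis in cited_rates:
    print(f"{label:32s} {per_100m_miles(rate, basis):8.1f} per 100M miles")
```

Laid out on one scale, the pattern described above is easier to see: more fender-benders per mile, but far fewer serious outcomes.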
Edit 8/26:
recently published some good statistics here.2
AVs have more crashes but fewer injuries, which makes sense since they typically drive in highly complex urban environments like downtown Phoenix and San Francisco and travel at lower speeds. The data becomes more skewed when we consider that only very experienced and confident human drivers navigate those areas. We don't recommend inexperienced drivers go into downtown San Francisco or Phoenix, and plenty of experienced adult drivers avoid it too.
Autonomous vehicles, like the 15-year-old student drivers, are also learning. The companies making these vehicles collect an incredible amount of data each day and regularly deploy updates. Tesla's auto-drive has improved significantly over the years and will keep getting better over time (like we hope a human driver does).
We anticipate and accept mistakes by human drivers. It’s why every driver is required to carry insurance and why insurance is so much more expensive when you are younger. With humans, we call these mistakes accidents and we have a ton of forgiveness. But we have vastly different expectations from our technology.
Zero Forgiveness
For decades, we've expected our technology to have highly robust verification and validation. Verification means the machine does exactly what we told it to; validation means it actually gets the intended job done. This all makes sense for technology performing very specific and controlled tasks that we don't want breaking and killing people.
This paradigm is challenged when we build an artificial intelligence to perform autonomous tasks in a chaotic world, much as a human does. It's actually more challenging than rocket science, because a rocket doesn't have many decisions to make. It's a deterministic system: it can be verified because, no matter the circumstances, if A happens, then B will happen.
Advanced AI driving autonomy isn’t deterministic. Heck, it’s not even explainable. We literally don’t know precisely how an AI model processes the inputs to give the outputs. We can only validate that it’s doing what we want, not how. (For a deeper primer check out The Layers of AI)
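To make that distinction concrete, here's a toy sketch, entirely hypothetical and not drawn from any real AV stack: a deterministic rule can be verified against its specification input by input, while an opaque, learned policy can only be validated statistically by measuring its outcomes.

```python
# Hypothetical illustration of verification vs. validation.
# Nothing here comes from a real autonomous-driving system.
import random

# --- Deterministic component: verifiable ---
def brake_command(obstacle_distance_m: float, speed_mps: float) -> bool:
    """Rule-based: if A (obstacle within stopping distance), then B (brake)."""
    stopping_distance = speed_mps ** 2 / (2 * 7.0)  # assume ~7 m/s^2 max deceleration
    return obstacle_distance_m <= stopping_distance

# Verification: for any input we choose, the rule does exactly what we specified.
assert brake_command(obstacle_distance_m=5.0, speed_mps=20.0) is True
assert brake_command(obstacle_distance_m=100.0, speed_mps=10.0) is False

# --- Opaque component: only validatable ---
def learned_policy(scene_features: list) -> bool:
    """Stand-in for a learned model whose internal reasoning we cannot inspect."""
    score = sum(f * random.uniform(0.9, 1.1) for f in scene_features)
    return score > 2.0

# Validation: we can't trace *how* it decides, so we measure *whether* its
# behavior meets the bar we set across many sampled scenarios.
scenes = [[random.random() for _ in range(4)] for _ in range(10_000)]
brake_rate = sum(learned_policy(s) for s in scenes) / len(scenes)
print(f"Learned policy chose to brake in {brake_rate:.1%} of simulated scenes")
```

The second half is the situation described above: we can only judge the system by what it does, not by tracing why it did it.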
And guess what? That’s the same with human intelligence. We allow ourselves to make decisions between A and B where the goal isn’t to eliminate any negative outcome but to reduce the negative outcome and, hopefully, learn from that experience.
In the essay Don't Trust AI... Entrust It I shared a framework for trust that balances ethics and assurance to tease apart this nuance. It's why we need to consider whether our technologies need a human in-the-loop controlling them, on-the-loop overseeing their actions, or off-the-loop, letting them be fully autonomous.
Back to the driving example: autonomous vehicles currently have a human on-the-loop. In a Tesla, the driver is still supposed to be in control, and in a Waymo, there's typically a human command center overseeing everything and able to take control. In our student driver example, we have the instructor and the parent as the human on-the-loop until the teen achieves their final validation through a driver's test. And yes, we take that validation as a proxy for verification.
Not swayed by this argument, Philip, in a totally separate engagement, accused me of whataboutism when I brought up our tolerance for human mistakes and our refusal to tolerate an autonomous vehicle making a mistake.
Yet it appears he mostly agrees, as he elucidates in this paper he wrote: “We argue that blaming automated vehicle technology for each crash a competent human driver might reasonably have avoided – regardless of net safety outcomes – is a safety promoting feature, not a public policy mistake.”
So it's not whataboutism to bring up student drivers; it's that he's holding AVs to a higher level of expectation than we hold the vast majority of our human drivers. Competent is a pretty high bar. For comparison, courts of law use a "reasonable person" standard, which settles closer to the lowest common denominator than to competence. What this means for driving is that a reasonable person will allow a 15-year-old to get behind the wheel of a car and drive so that, by the time they're 20, they're borderline competent.
Moreover, Philip does accept part of my critique, recommending we improve driver's education, which aligns with his earlier argument but still misses mine. He has zero tolerance for technology making a mistake while overlooking the legions of mistakes we, mere humans, make every day.3 He demands perfection while we are still learning.
It's the classic paternalistic view that we can never be safe enough. However, as we learned in Risk Compensation, the more risk we eliminate, the more danger we often face. We are demanding such perfection of AVs that we allow thousands of humans to continue to die in car accidents out of fear that an AV might make a mistake.
Summary
I largely understand where Philip is coming from with autonomous vehicles. We do need responsible and ethical designs. But we also need to step back and consider why we have so much forgiveness for humans and zero forgiveness for technology.
Again, this isn’t to say we should blithely accept risks with technology just because we do so with humans. It’s just helpful to pause and consider that we let 15-year-olds learn to drive on the highways by tossing them in a car but panic about an AV making mistakes on a complex highway.
Fundamentally, it strikes me as a great example of "perfection being the enemy of progress," to paraphrase Winston Churchill. Right now there's an illogical drive to measure every fault against a desire for zero mistakes, without the context of the faults and mistakes we accept every day from humans driving without that technology.
I hope this creates a less panicked reaction to autonomy and a more informed and balanced discussion. It’s too easy to moralize on either side. It’s much harder to critically look at what we accept and forgive with human accidents, and compare that to the zero forgiveness we have with technology.
What are your thoughts on autonomous vehicles, trust, ethics, and forgiveness?
The frustration with this number is that it's challenging to derive, since we have a good many millions of miles per human driver but not yet for AVs. Also, the fatalities and accidents for AVs include all levels of autonomous support, from lane-keep assist and adaptive cruise control to full self-driving. Here are some additional sources:
From the essay: “What’s even more surprising: According to Swiss Re, one of the world's leading reinsurance companies, autonomous cars already outperform human drivers in terms of real-world safety. In over 3.8 million miles driven without a human being behind the steering wheel, the autonomous driving company Waymo incurred zero bodily injury claims in comparison with the human driver baseline of 1.11 claims per million miles. The Waymo Driver also significantly reduced property damage claims to 0.78 claims per millions miles in comparison with the human driver baseline of 3.26 claims per millions miles. So, it seems fair to say that the state-of-the-art of level 4 automated driving is already 3-4 times safer than human driving.”
I try to avoid directly poking at a person and try to argue their points, but Philip's behavior has been poor from the start. For example, when I first brought the idea of our acceptance of 15-year-old drivers to his original LinkedIn post, he immediately blocked me and another person who agreed with me. I had to reach out from my academic e-mail and call him out for poor behavior before he unblocked me. I tried to have a respectful conversation over e-mail, but he refused to even acknowledge a different view. Finally, we have his accusation of whataboutism, a clear logical fallacy and a deflection.
It's a classic glitch of cognitive dissonance on his side, and it's frustrating because it doesn't help move the conversation forward. I do understand that he's got a plush gig pearl-clutching about autonomy. He is regularly asked by the media to comment on autonomous driving mishaps, so I understand his reluctance to consider data that would challenge his reputation. It's understandable to resist, and I'm not too worried because, as Max Planck is often paraphrased, "Science progresses one funeral at a time."
Edit 8/21 - It appears Phil blocked me again on LinkedIn without engaging on the topic. It certainly shows the fragility of his position and his inability to engage with gentle critique.
My own perspective, and why I am keen on autonomous vehicles, is that they are capable of fleet-wide self-improvement. To use Nassim Taleb's terms, fleets of autonomous vehicles are "antifragile": they gain from disorder. One autonomous vehicle has an accident, and the data recordings and the records of the vehicle's decision-making are available to the company and regulators, who can identify the cause of the error and correct for it with software updates, making all deployed vehicles safer. This is like airplane crashes, where each crash gets a follow-up investigation and new regulations and design changes that improve the safety of air travel for everyone. This is obviously not perfect and as prone to corruption as any human institution, but it's a hell of a lot better than our current system, where a 16-year-old drives too fast and flips their parents' SUV, killing themselves, their friends, and possibly one or two innocent bystanders. A complete deadweight loss, with no system-wide improvement.
In our current, human-driven vehicle paradigm, system-wide improvements only happen when the manufacturer is at fault, or after years of study and advocacy that lead to changes in road construction, seatbelt rules, airbags, speed limits, and other things. You can't just broadcast a software change to all vehicles in the field.
My first impulse is to say that it's a category mistake to think of forgiveness with respect to AI, but that impulse tells us a lot about what we think AI is.
To start, forgiveness is interpersonal, whether at the social level or the personal, and with AI there is nothing there with which forgiveness could occur. Thus, the reaction toward the failures of AI is more exacting because we perceive that it is not a person.
By contrast, consider that we are more patient with animals because we understand that they are dissimilar from humans in manifold ways and that communication and expectation between humans and non-humans don't happen in a straightforward fashion. We are not exacting with them because it would be inappropriate to be so. And yet, this is not how we approach AI: we expect more of it.
That we would consider tolerance (but not forgiveness) toward AI indicates that we think it is both something more than nonhuman and yet incapable of the interactive relation that we associate with humans.