My own perspective, and why I am keen on autonomous vehicles, is that they are capable of fleet-wide self-improvement. To use Nassim Taleb's term, fleets of autonomous vehicles are "antifragile": they gain from disorder. One autonomous vehicle has an accident, and the data recordings and the records of the vehicle's decision-making are available to the company and regulators, who can identify the cause of the error and correct for it with software updates, making all deployed vehicles safer. This is like airplane crashes, where each crash gets a follow-up investigation and new regulations and design changes that improve the safety of air travel for everyone. This is obviously not perfect, and as prone to corruption as any human institution, but it's a hell of a lot better than our current system, where a 16-year-old drives too fast and flips their parents' SUV, killing themselves, their friends, and possibly one or two innocent bystanders. A complete deadweight loss, with no system-wide improvement.
In our current, human-driven vehicle paradigm, system-wide improvements only happen when the manufacturer is at fault, or after years of study and advocacy that lead to changes in road construction, seatbelt rules, airbags, speed limits, and other things. You can't just broadcast a software change to all vehicles in the field.
This is a fantastic point and one I'm hoping to build out further in the future. If we could upgrade our road systems to be more AV-friendly, we'd reduce the complexity for the technology while dramatically improving efficiency and safety. I love your thinking here.
Thanks. I look forward to reading more as you build out the idea.
This is a great point! I agree on the joint learning, but to some degree I can also see it as an argument for higher safety standards for AVs. Every safety improvement scales fleet-wide. So I do want the update for all cars if one makes a mistake, whereas there is no real point in giving a human driver a required retest because another human driver has made a mistake somewhere.
My first impulse is to say that it’s a category mistake to think of forgiveness with respect to AI, but that impulse tells a lot about what we think AI is.
To start, forgiveness is interpersonal, whether at the social level or the personal, and with AI there is nothing there with which forgiveness could occur. Thus, the reaction toward the failures of AI is more exacting, because we perceive that it is not a person.
By contrast, consider that we are more patient with animals, because we understand that they are dissimilar from humans in manifold ways, and that communication and expectation between humans and non-humans don’t happen in a straightforward fashion. We are not exacting with them because it would be inappropriate to be so. And yet, this is not how we approach AI: we expect more of it.
That we would consider tolerance (but not forgiveness) toward AI indicates that we think it’s something more than nonhuman, yet incapable of the interactive relation that we associate with humans.
Tolerance is a good word. It's actually a better word!
First off, AI isn't intelligent. It's a glorified algorithm. Second, 15-year-olds should be practicing on back roads and in low-traffic areas because it's safer for them too. Trusting an algorithm to do the right thing isn't very smart.
In his early conceptions of AI, Turing insisted that, with the exception of philosophers, no one questions whether we can actually think, and instead, we proceed to have polite conversations where it is assumed that everyone thinks.
The most technophobic among us and the Luddites insist on demanding much more from technology than they would from any human, for example when assessing whether a self-driving car is safe or whether a device is truly intelligent. That’s why I have written about the "Turing courtesy," advocating that we extend that same level of consideration to machines.
Great reflection. Thanks.
That's a great point. I've poked at whether AI can be creative and ended up uncovering just how limited human creativity is compared to what we imagine.
Great post - I wonder whether any AV company has actually tried (for show) to pass a human driver's license exam? My sense is Waymo would easily pass in most locations.
That's an obvious question, and I don't know. It would be an interesting way to alleviate concerns. The irony is we put Waymo in much more complicated driving situations than we do most human drivers.
I am considering the implications of liability. Insurance coverage is based on the idea of having a "responsible party." If a self-driving car is in an accident with a human being, who is going to be held "responsible" for the accident? If the human is held responsible and goes to court, who stands in for the self-driving car? Is the passenger or owner of the car held responsible for the actions of the car? Is the human "driver" able to override the car and take control when things aren't going well, and if so, is that the expectation of the insurance company?
Great questions. I'd say, in the case of a Tesla, it's the owner using self-driving. In the case of Waymo, it's Waymo, the company, who is liable. This isn't really any different than an automotive company being liable for faulty equipment today, like Toyota and its sticky gas pedals.
Another example is the driverless trams all over airports (especially DFW and Orlando). These have a chance of issues, and the liability is held by the airport running them.
Adding self-driving to our mix of legal liability doesn't seem like much of a stretch.
Great post! "Quandary" is definitely the correct word.
Yet I think forgiveness is a dangerous framing. These quandaries fundamentally come down to one simple thing: what are the moral terms of engagement when two different types of moral agents share the same moral arena?
Instead of designing a *new* system optimized for the moral capacities of self-driving AI, we're attempting to adapt them to our *existing* system, one that was entirely optimized for the moral natures of human beings.
This becomes obvious if you imagine a system for self-driving AI designed from scratch. It would look nothing like our human road system. It would be designed to be purely deterministic, because it would be relatively easy to ensure that no novelty could ever exceed the training data. It would be an arena optimized for the moral agency of self-driving AIs, not humans.
But instead, we're trying to adapt them to the existing road system. This means we will need to define the moral terms of engagement beyond the quantifiable. A human will always "forgive" another human for certain types of moral tragedies, a forgiveness they would never extend to an *artificial* agent causing the same tragedy.
So we'll have to grapple with how words like *forgiveness* (and agency, responsibility, culpability, etc.) can "translate" between these two very different moral systems. Otherwise the overall quantitative story of reduced fatalities will never be as compelling as you'd like it to be.
Great points and I'd say tolerance is a better word than forgiveness.
Your point on the road systems is spot on and one I've made as well. We could significantly reduce the complexity if we designed roads for AVs, instead of designing AVs for our roads.
For whatever reason, the public has lost the ability to ask “compared to what?” They demand absolute perfection and zero risk from new technology, even if doing so traps us in an even riskier status quo.
I’ve had this argument before. Many have told me that autonomous vehicles must be “perfect” before being deployed on the road. Each time I counter, “No, they will never be perfect; that’s an impossible ask. All they need to be is better than your average driver, and then there is a moral imperative to deploy them.”
Without fail, each and every time I say this, it falls on deaf ears. It’s so bizarre that even when I point out that their position condemns thousands to early deaths, they still hold firm that nothing less than absolute perfection is ever acceptable in new technology.
I’ve run into that before as well. I mean, the main example in the essay clearly seems to be that kind of person. There is literally nothing mundane enough for them not to criticize. Worse, there’s never a suggestion for how to make it better except more accountability.
All we need to do is invest in road infrastructure and we could significantly reduce the complexity of an AV. Instead of having to sense everything, if intersections, speed limits, and even other vehicles announced themselves, the pool of deltas shrinks dramatically.
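For illustration, here is a minimal sketch of the kind of beacon I'm imagining; every field name and the wire format are hypothetical, not drawn from any real V2X standard:

```python
# Hypothetical sketch of a road-infrastructure "announcement" beacon.
# Field names and the JSON wire format are invented for illustration;
# this is not any real V2X standard.
from dataclasses import dataclass, asdict
from typing import Optional
import json
import time

@dataclass
class RoadAnnouncement:
    source_id: str                         # e.g., "intersection-5th-and-main"
    kind: str                              # "intersection", "speed_limit", or "vehicle"
    latitude: float
    longitude: float
    speed_limit_mph: Optional[int] = None  # set for speed-limit beacons
    signal_state: Optional[str] = None     # "red"/"yellow"/"green" for intersections
    timestamp: float = 0.0

    def to_wire(self) -> bytes:
        """Stamp the message and serialize it for broadcast."""
        self.timestamp = time.time()
        return json.dumps(asdict(self)).encode("utf-8")

# An intersection announcing its own signal state, so a nearby AV can
# consume the fact directly instead of inferring it from cameras:
beacon = RoadAnnouncement(
    source_id="intersection-5th-and-main",
    kind="intersection",
    latitude=32.7767,
    longitude=-96.7970,
    signal_state="red",
)
print(beacon.to_wire())
```

The format itself doesn't matter; the point is that every fact announced this way is one fewer thing the car has to infer from noisy sensors.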
Technology is, by nature, a way to offload responsibilities that we would otherwise all have, the same way that volunteer and professional firemen learn to fight fires so the rest of us don't have to, and police and soldiers serve as the designated shooters when we need them. Naturally, since we rely on them, train them, and equip them so much, we are very angry when things don't go to plan.
https://argomend.substack.com/p/responsaintbility
That's a great point.
Great perspective Michael.
There are a lot of things we take for granted today... if we had condemned those technologies based on their first attempts... we'd be living in a much different world.
Automobiles and Airplanes both fall into that bucket.
Airplanes are a great example. The number of people who crashed and died over the years to break through and learn how to fly is insane. If we had held aviation then to the exact standards we have now, we'd never be flying.
I'm really glad you circled back to talk about how terrible of an idea it is for a fifteen year old to drive. Sure, folks out there might disagree with me here, but the driver I'm thinking of is me.
Autonomous vehicles will only get safer from here. It is well past time they began being integrated into more cities.
Agreed. I do think we need to do more road-system integration to reduce the complexity for the cars.
Yeah, we have the kludge we built over all these years to contend with. I'm still optimistic that the current car I own will be the last one I have to drive.
Good summary.
I don't know if it's that we have a higher standard. It's probably because we're giving away our personal autonomy to a machine, so we want to feel comfortable knowing that it won't go wrong.
True, but we do give away our personal autonomy to any number of humans: cab drivers, bus drivers, airline pilots, etc. At least with airline pilots we know their testing was more rigorous than a cab driver's. But an Uber driver only has to pass the same basic driver's test that our 15-year-old does. (And I've had much sketchier Uber rides than my worst Waymo ride.)
Yes, but there’s a difference between giving your autonomy away to another human and giving it away to a non-thinking, programmed machine. You can understand a human being making a mistake because we make mistakes ourselves. We don’t understand a machine.
There’s an example of this which isn’t necessarily about autonomous vehicles or AI. You don’t have an emotional reaction to a regular car until it doesn’t work. Then you get angry, because you don’t understand why it doesn’t work anymore. But a human can say “My leg hurts” or “I’m sick.” That tells you why, and you can feel something other than anger.
Love the intention of your post: “I hope this creates a less panicked reaction to autonomy and a more informed and balanced discussion.” And the framing of the conversations around this challenging societal shift. TY
Thanks for the kind feedback. It's a messy conversation to have but one that certainly needs to happen.
I'd say it's tolerance over forgiveness, but I get your point and completely agree. Forgiveness does catch the eye well, but it's a bit too human-related.
People have "zero forgiveness" with self-driving cars because we´re rightly wary of the downside of supposed technological advancements. Imagine how much better off we might be if we´d only been more suspicious of our smart phones.
I find it interesting that the downside to cell phones for Gen Z isn't driven by the technology, but by their parents, Gen X, who were never raised with that technology. Gen X doesn't trust their kids and so they became helicopter parents and forced them to stay home where, coupled with cell phones, you've got a mess.