The Relative Risk Trade-offs Of AI Autonomous Cars

For some endeavors, the risks are clear. Critics of current roadway tryouts for self-driving cars are concerned we are allowing a grand experiment to take place without knowing the risks. (Sammie Vasquez on Unsplash)

By Lance Eliot, the AI Trends Insider

When you get up in the morning, you are taking a risk.

Who knows what the day ahead has in store for you?

Of course, you were even at risk while sleeping, since an earthquake could occur during your slumber and endanger you, or perhaps a meteor from outer space might plummet to earth and ram into your house.

I’m sorry if mentioning those possibilities will cause you to lose sleep, and please know that the odds of the earthquake occurring are presumably relatively low (depending too on where you live), and the odds of the meteor striking you are even lower.

When referring to risk, it is important to realize that we experience risk throughout our daily lives.

Some people joke that they won’t leave their house because it is too risky to go outside, but this offhand remark overlooks the truth that there is risk while sleeping comfortably inside your home.

I don’t want to seem like a doom-and-gloom person, but my point hopefully is well-taken, namely that the chance of an adverse or unwelcome loss or injury is always present and ready to occur.

You absorb risk by being alive and breathing air.

Risk is all around you and you are enveloped in it.

Those who think they only incur risk when they, say, go for a walk or otherwise take action are sadly mistaken.

No matter what you are doing, asleep or awake, inside or outside a building, even if locked away in a steel vault trying to hide from risk, it is still there on your shoulder; at any moment you could suddenly suffer a heart attack, or the steel vault might topple and you’d get hurt as the occupant inside it.

This brings us to the equally important point that there is absolute risk and there is relative risk.

We often fall into the mental trap of talking about absolute risk and scare ourselves silly.

It is better to discuss relative risk, providing a sense of balance or tradeoff about the risks involved in a matter.
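To make the distinction concrete, here’s a minimal sketch in Python of relative risk as a ratio of two absolute risks; the activities and numbers are placeholder assumptions on my part, not real statistics:

```python
# Minimal sketch of relative risk as a ratio of two absolute risks.
# The numbers below are placeholders for illustration only, not real statistics.

def relative_risk(risk_a: float, risk_b: float) -> float:
    """Relative risk of activity A compared to activity B (a simple ratio)."""
    return risk_a / risk_b

# Hypothetical absolute risks, expressed as incidents per million hours of exposure.
risk_activity_a = 0.5  # placeholder value
risk_activity_b = 2.0  # placeholder value

ratio = relative_risk(risk_activity_a, risk_activity_b)
print(f"Activity A carries {ratio:.2f}x the risk of activity B")  # 0.25x in this toy example
```

The point is simply that a ratio against some baseline tells you far more than a standalone scary-sounding number.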

Consider that some pundits say that going for a ride in today’s self-driving autonomous cars, which are being tried out experimentally on our public roadways, carries high risk.

But we don’t know for sure what they mean by “high” risk.

Is the risk associated with being inside a self-driving car considered less or more than, say, flying in an airplane or taking a ride on a boat?

By describing the risk in terms of its relative magnitude or amount as it relates to other activities or matters, you can get a more realistic gauge of the risk that someone else is alluding to.

I’d like to bring up three measures for this discussion about risk (one way to quantify them is sketched just after this list):

  • R1: Risk associated with a human driving a conventional car
  • R2: Risk associated with AI driving a self-driving autonomous car
  • R3: Risk associated with a human and AI co-sharing the driving of a car
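As a minimal sketch of how these measures might be quantified, here’s a bit of Python; the per-million-miles metric and the numbers plugged in are purely illustrative assumptions on my part, not actual crash data:

```python
def risk_per_million_miles(incidents: int, miles_driven: float) -> float:
    """Express a risk measure as incidents per million miles of driving.

    This is just one plausible way to quantify R1, R2, and R3; real values
    would have to come from actual crash data, which we don't have here.
    """
    return incidents / (miles_driven / 1_000_000)

# Hypothetical placeholder inputs, NOT actual statistics:
r1 = risk_per_million_miles(incidents=12, miles_driven=10_000_000)  # human-driven (R1)
r2 = risk_per_million_miles(incidents=3, miles_driven=10_000_000)   # AI-driven (R2)
r3 = risk_per_million_miles(incidents=7, miles_driven=10_000_000)   # co-shared (R3)

print(f"R1={r1:.2f}, R2={r2:.2f}, R3={r3:.2f} incidents per million miles")
```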

For my framework explaining the nature of AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/

For my indication about how achieving self-driving cars is akin to a moonshot, see this link: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/

It is useful to consider a Linear No-Threshold (LNT) model in the case of autonomous cars, see this link: https://www.aitrends.com/ai-insider/linear-no-threshold-lnt-and-the-lives-saved-lost-debate-of-ai-self-driving-cars/

A grand convergence of technologies is enabling the possibility of true self-driving cars, see my explanation: https://aitrends.com/ai-insider/grand-convergence-explains-rise-self-driving-cars/

Unpacking The Risks Involved

We can use R1 as a baseline, since it is the risk associated with a human driving a conventional car.

Wherever you go for a drive in your conventional car, you are incurring the risk associated with you making a mistake and crashing into someone else, or, despite your best driving efforts, someone crashing into you. Likewise, when you get into someone else’s car, such as ridesharing via Uber or Lyft, you are absorbing the risk that the ridesharing driver is going to get into a car accident of one kind or another.

Consider R2, the risk associated with the AI driving a true self-driving autonomous car.

Most everyone involved in self-driving cars, and those who care about the advent of driverless cars, are hoping that autonomous cars are going to be safer than human-driven cars, meaning that there will presumably be fewer deaths and injuries due to cars, fewer car crashes, and so on.

You could assert that the risk associated with self-driving cars is hoped to be less than the risk associated with human driven conventional cars.

I’ll express this via the notation of:  R2 < R1

This is aspirational and indicates that we are all hoping that the risk R2 is going to be less than the risk R1.

Indeed, some would argue that it should be this:  R2 << R1

This means that the R2 risk, involving the AI driving of a driverless car, would be a lot less, substantially less than the risk of a human driving a conventional car, R1.

You’ve perhaps heard some pundits who have said this: R2 = 0

Those pundits are claiming that there will be zero fatalities and zero injuries once we have true driverless self-driving cars.

I’ve debunked this myth in many of my speeches and writings. There is no reasonable way to get to zero. If a self-driving car comes upon a situation whereby a pedestrian unexpectedly leaps in front of the driverless car while it is in motion, the physics preclude being able to stop or avoid hitting the person, and so there will be at least a non-zero chance of fatalities and injuries.

In short, here’s what I’m suggesting so far in this discussion:

  • R2 = 0 is false and misleading; it won’t happen
  • R2 < R1 is aspirational for the near-term
  • R2 << R1 is aspirational for the long-term

Some believe that we will ultimately have only true self-driving cars on our roadways, and we will somehow ban conventional cars, leading to a Utopian world of exclusively autonomous cars. Maybe, but I wouldn’t hold your breath about that.

The world is going to consist of conventional cars and true self-driving cars, for the foreseeable future, and thus we will have human driven cars in the midst of AI-driven cars, or you could say we’ll have AI-driven cars in the midst of human driven cars.

For the notion of autonomous cars as an invasive species, see my indication here: https://aitrends.com/ai-insider/invasive-curve-and-ai-self-driving-cars/

For the importance of Occam’s razor when it comes to self-driving cars, see my explanation: https://aitrends.com/ai-insider/occams-razor-ai-machine-learning-self-driving-cars-zebra/

For my comments about how software neglect is going to raise risks of AI autonomous cars, see this link: https://aitrends.com/ai-insider/software-neglect-will-impede-ai-self-driving-cars/

Risks Of Co-Sharing The Driving Task

There’s an added twist that needs to be included, namely the advent of Level 3 cars, consisting of Advanced Driver-Assistance Systems (ADAS), which provide AI-like capabilities that are utilized in a co-sharing arrangement with a human driver. The ADAS augments the capabilities of a human driver.

To clarify, Level 3 requires that a licensed-to-drive human driver must be present in the driver’s seat of the car. Plus, the human driver is considered the responsible party for the driving task. You could say that the AI system and the human driver are co-sharing the driving effort.

Keep in mind that this does not allow for the human driver to fall asleep or watch videos while driving, since the human driver is always expected to be alert and active as the co-sharing driver.

I have forewarned that Level 3 is going to be troublesome for us all. You can fully anticipate that many human drivers will be lulled into relying upon the ADAS and will therefore let their guard down while driving. When the ADAS suddenly tries to hand the driving controls back, the human driver will be mentally adrift of the driving situation and will not take appropriate evasive action in time.

In any case, I’m going to use R3 to reflect the risk of the human and AI co-sharing the driving task.

Most everyone is hoping that the co-sharing arrangement is going to make human drivers safer, presumably because the ADAS is going to provide a handy “buddy driver” and overcome many of today’s human solo driving issues.

Here’s what people assume:

  • R3 < R1
  • R3 << R1

In other words, the co-sharing effort will be less risky than a conventional car with a solo human driver, and maybe even a lot less risky.

Down-the-road, the thinking is that true driverless cars, ones that are driven solely by the AI system, will be less risky than not only conventional cars being driven by humans, but even less risky than the Level 3 cars that involve a co-sharing of the driving task.

Thus, people hope this will become true:

  • R2 < R3
  • R2 << R3

Overall, this is the aim when you consider all three of these driving risk categories:

  • R2 < R3 < R1
  • R2 << R3 << R1

Thus, this is an assertion that ultimately AI-driven autonomous cars (R2) are going to be less risky than co-shared driven cars (R3), which in turn are less risky than conventional human-driven cars (R1), aiming to be a lot less risky throughout.

Here then is the full annotated list of these equation-like aspects (a small illustrative check of these orderings follows the list):

  • R2 = 0 – a false claim that AI autonomous cars won’t have any crashes
  • R2 < R1 – aspirational near-term, AI-driven cars less risky than human-driven cars
  • R2 << R1 – aspirational long-term, AI-driven cars a lot less risky than human-driven cars
  • R3 < R1 – aspirational near-term, co-shared driven cars less risky than human-solo
  • R3 << R1 – aspirational long-term, co-shared driven cars a lot less risky than human-solo
  • R2 < R3 – aspirational near-term, AI-driven cars less risky than co-shared driven cars
  • R2 << R3 – aspirational long-term, AI-driven cars a lot less risky than co-shared driven cars
  • R2 < R3 < R1 – aspirational near-term, AI car less risky than co-shared, which is less risky than human-solo
  • R2 << R3 << R1 – aspirational long-term, AI car a lot less risky than co-shared and human-solo
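If we eventually had defensible estimates for R1, R2, and R3, these aspirational orderings could be checked mechanically. Here’s a minimal Python sketch; the MUCH_LESS_FACTOR threshold is my own arbitrary assumption for what “a lot less risky” (<<) might mean, and the inputs reuse the hypothetical placeholder values from the earlier sketch:

```python
# Sketch: checking the aspirational orderings, given (hypothetical) risk estimates.
# MUCH_LESS_FACTOR is an arbitrary assumption for interpreting "<<" (a lot less risky).
MUCH_LESS_FACTOR = 10.0

def much_less(a: float, b: float, factor: float = MUCH_LESS_FACTOR) -> bool:
    """True if risk a is at least `factor` times smaller than risk b."""
    return a * factor <= b

def summarize(r1: float, r2: float, r3: float) -> None:
    print("R2 < R1  (near-term hope):", r2 < r1)
    print("R2 << R1 (long-term hope):", much_less(r2, r1))
    print("R3 < R1  (near-term hope):", r3 < r1)
    print("R3 << R1 (long-term hope):", much_less(r3, r1))
    print("R2 < R3  (near-term hope):", r2 < r3)
    print("R2 < R3 < R1 (overall aim):", r2 < r3 < r1)

# Reusing the hypothetical placeholder values from the earlier sketch:
summarize(r1=1.2, r2=0.3, r3=0.7)
```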

For an indication about bifurcating the levels of self-driving, see my indication here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

For more about the base levels of autonomous cars, see my explanation: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/

For the outside-scope aspects of off-road driving and self-driving cars, see my remarks: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/

Ascertaining Today’s Relative Risks

My equations indicate the aspirational goals of automating the driving of cars.

We aren’t there yet.

When you go for a ride in a self-driving car that has a human back-up driver, you are somewhat embracing the R3 risk category, but not quite.

The human back-up driver is not per se acting as though they are in a Level 3 car, one in which they would be actively co-sharing the driving task, and instead is serving as a “last resort” driver in case the AI of the self-driving car seems to need a “disengagement” (industry parlance for when a human driver takes over from the AI during a driving journey).

It is an odd and murky position.

You aren’t directly driving the car. You are observing and waiting for a moment wherein either the AI suddenly hands you the ball, or you, of your own volition, suspect or believe that it is vital to take over for the AI.

Some might say that I should add a fourth category to my list, an R4, which would be akin to the R3, though it is a co-sharing involving the human driver being more distant from the driving task.

Another approach would be to delineate differing flavors of the R3.

For example, some automakers and tech firms are putting into place a monitoring capability that tries to track the attentiveness of the human driver who is supposed to be co-sharing the driving task.

This might involve a facial recognition camera pointed at the driver that alerts if the driver’s eyes don’t stay focused on the road ahead, or it could be a sensory element on the steering wheel that makes sure the human co-driver has their hands directly on the wheel, etc.
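As a rough illustration of the kind of attentiveness-monitoring logic being described, here’s a minimal sketch; the thresholds and signal names are my own illustrative assumptions, not any automaker’s actual implementation:

```python
# Illustrative thresholds only; real systems tune these carefully.
MAX_EYES_OFF_ROAD_SECONDS = 2.0
MAX_HANDS_OFF_WHEEL_SECONDS = 5.0

def check_driver_attentiveness(eyes_off_road_s: float, hands_off_wheel_s: float) -> list:
    """Return alert messages based on how long the driver has been inattentive.

    The inputs stand in for signals from a driver-facing camera and a
    steering-wheel touch sensor; this is a sketch, not a production design.
    """
    alerts = []
    if eyes_off_road_s > MAX_EYES_OFF_ROAD_SECONDS:
        alerts.append("Eyes off the road too long: look ahead")
    if hands_off_wheel_s > MAX_HANDS_OFF_WHEEL_SECONDS:
        alerts.append("Hands off the wheel too long: grip the wheel")
    return alerts

# Example: the driver glanced away for 3 seconds but kept hands on the wheel.
print(check_driver_attentiveness(eyes_off_road_s=3.0, hands_off_wheel_s=0.0))
```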

If you have those kinds of monitors, it would presumably decrease the risk of R3, though we don’t really know as yet how much it does so.

Another factor that seems to come into play with R3 is whether there is another person in the car during a driving journey. A solo human driver who is co-sharing the driving task with the ADAS is seemingly more likely to become adrift of the driving task when alone in the car. If there is another person in the car, perhaps one also watching the driving and urging or sparking the human driver to be attentive, it seems to prompt the human driver toward safer driving.

Rather than trying to overload the R3 or attempt to splinter the R2, let’s go ahead and augment the list with this new category of the R4:

  • R1: Risk associated with a human driving a conventional car
  • R2: Risk associated with AI driving a self-driving autonomous car
  • R3: Risk associated with a human and AI co-sharing the driving of a car
  • R4: Risk associated with AI driving a self-driving car with a human back-up driver present

This leads us to these questions:

  • R4 < R1? – is an AI self-driving car with a human back-up driver less risky than a human-driven car
  • R4 << R1? – is an AI self-driving car with a human back-up driver a lot less risky than a human-driven car

Or, if you prefer:

  • R1 < R4? – is a human-driven car less risky than an AI self-driving car with a human back-up driver
  • R1 << R4? – is a human-driven car a lot less risky than an AI self-driving car with a human back-up driver

We don’t yet know the answer to those questions.

Indeed, some critics of the existing roadway tryouts involving self-driving cars are concerned that we are allowing a grand experiment for which we don’t know the comparative risks. They would assert that until more simulations are done, along with closed-track or proving-ground efforts, these experimental self-driving cars should not be on the public roadways.

The counterargument usually voiced is that without having self-driving cars on our public roadways, the advent of self-driving cars will likely be delayed, and each day of delay allows, by default, the conventional car to continue its existing injury and death rates.

For the Uber self-driving car crash and fatality, see my coverage here: https://www.aitrends.com/selfdrivingcars/ntsb-releases-initial-report-on-fatal-uber-pedestrian-crash-dr-lance-eliot-seen-as-prescient/

For ride-sharing and autonomous cars, see my analysis: https://aitrends.com/ai-insider/ridesharing-services-and-ai-self-driving-cars-notably-uber-in-or-uber-out/

For my details about the role of the back-up or safety drivers, see this link: https://aitrends.com/ai-insider/human-back-up-drivers-for-ai-self-driving-cars/

Conclusion

When someone tells you that you are taking a risk by going for a ride in a self-driving car, and assuming that there is a human back-up driver, the question is how much of a difference in risk there is between riding in a conventional car that has a human driver versus a self-driving car that has a human back-up driver.

Since you presumably are willing to accept the risk associated with being a passenger in a ridesharing car, you’ve already accepted some amount of risk about going onto our roadways as a rider in a car, albeit one being driven by a human.

How much more or less risk is there once you set foot into that self-driving car that has the human back-up driver?

What troubles many critics is that the risk is not just for the riders in those self-driving cars on our public roadways.

Wherever the self-driving car roams, it radiates risk out to any nearby pedestrians and any nearby human-driven cars. You don’t see this imaginary radiation with your eyes; it occurs simply because you happen to end up near one of the experimental self-driving cars on our public streets.

Are we allowing ourselves to absorb too much risk?

I’ll be further contemplating this matter while ensconced in my steel vault that has protective padding and a defibrillator inside it, just in case there is an earthquake, or I have a heart murmur, or some other calamity arises.


Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]
