Early Monday morning, an Uber-owned Volvo in autonomous mode struck and killed a pedestrian. The vehicle had a human driver behind the wheel when the tragedy occurred.

This incident brings a sudden urgency to the moral quandary that ethicists have been discussing since before self-driving cars were a reality: Whose fault was this death?

This is believed to be the first pedestrian fatality caused by a self-driving car.

The closest thing we have to a precedent is an incident in 2016, when a Tesla operating on Autopilot crashed, killing its driver. After an eight-month investigation, federal auto-safety regulators found no defects in the vehicle's system and determined that Tesla did not need to recall the model.

This incident, however, is different. Tesla's system is technically not autonomous, so its driver assumes more responsibility. And the death of a pedestrian is different from the death of a driver, raising a different set of questions.

We asked experts what some of those questions are, and what the answers might be.

Who's going to court?

Uber, Volvo, suppliers, the driver, the pedestrian... who's liable?

It turns out that's the wrong question.

In fact, it's a question that a 2015 Stanford Law School article called "specific, but unhelpful," noting that "asking 'Who is liable in tomorrow’s automated crashes?' in the abstract is like asking 'Who is liable in today’s conventional crashes?' The answer is: 'It depends.'"

Anyone from the driver to Uber to Volvo to the suppliers of individual car parts could face charges — it depends on the results of the ongoing police investigation.

The car's "driver," 44-year-old Rafael Vasquez, could be in trouble if police find that he "deceived" the car.

Many collision-avoidance algorithms require specific information to function, such as the number of people in your car and where they are sitting, as well as the number of people in cars around you, according to Dr. Ali Abbas, Professor of Industrial and Systems Engineering and Public Policy at the University of Southern California, and director of the university's Neely Center for Ethical Leadership and Decision Making. Some of these algorithms could be gamed; for example, by claiming to be transporting a falsely high number of passengers, a driver could potentially trick his car, or other cars, into prioritizing his own life over others', depending on how the algorithm weighs crisis decisions.

At the moment, Abbas says, there are no laws against doing this, but if driver deception took place here, it could certainly "open up legal consideration in the future."
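
To make the gaming scenario concrete, here is a minimal, purely hypothetical sketch of an occupancy-weighted crash decision of the kind Abbas describes. Every name and number in it is an assumption for illustration; it is not Uber's, Volvo's, or any manufacturer's actual logic.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    reported_occupants: int  # self-reported and not independently verified

def crash_cost(own: Vehicle, other: Vehicle, swerve: bool) -> int:
    """Toy utilitarian cost: harm is proportional to the reported
    occupant count of whichever vehicle absorbs the impact."""
    return other.reported_occupants if swerve else own.reported_occupants

def choose_action(own: Vehicle, other: Vehicle) -> str:
    # Pick whichever maneuver the algorithm believes harms fewer people.
    stay_cost = crash_cost(own, other, swerve=False)
    swerve_cost = crash_cost(own, other, swerve=True)
    return "swerve" if swerve_cost < stay_cost else "stay"

# Honest report of one occupant: the car absorbs the impact itself.
print(choose_action(Vehicle(reported_occupants=1), Vehicle(reported_occupants=2)))  # stay

# A false report of five occupants flips the decision onto the other car.
print(choose_action(Vehicle(reported_occupants=5), Vehicle(reported_occupants=2)))  # swerve
```

The point of the sketch is only that any decision rule fed self-reported numbers can be steered by lying about those numbers, which is exactly the deception scenario investigators would have to consider.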

The recently released video footage also indicates the driver may be at fault, according to Bryant Walker Smith, author of the aforementioned Stanford Law article.

Smith is an assistant professor at the University of South Carolina School of Law and a member of the U.S. Department of Transportation's Advisory Committee on Automation in Transportation.

"If I pay close attention, I notice the victim about two seconds before the video stops," Smith told me. "This is similar to the average reaction time for a driver. That means an alert driver may have at least attempted to swerve or brake."

The footage also shows Vasquez looking away from the road on multiple occasions.

By contrast, Uber is in trouble if it's found to have knowingly released faulty self-driving cars onto open streets.

The question for investigators isn't whether Uber's algorithms are perfect, but how much Uber knew about their shortcomings before it put the cars on the street.

"The algorithms that we have today are not going to be the algorithms that we have 10-15 years from today," Abbas told me. "How accurately did the company know that their algorithm would perform before they introduced it in the streets?

"On average there're a fatality about once every 100 million miles in the U.S., so while this incident is not statistically determinative, it is uncomfortably soon in the history of automated driving," Smith adds.

What happens after?

Regardless of who (if anyone) goes to court, what should consumers take away from this tragedy, and what are its long-term implications for Uber and for autonomous vehicles at large?

Smith suggests the incident shouldn't lead to greater fear of self-driving cars.

"It's important to remember that new technologies won't be perfect — but also that the status quo is terribly imperfect," he says. "On the same day this tragedy happened, 100 other people died in crashes in the United States alone."

Every death is tragic, of course, but autonomous vehicles may still prove statistically safer than human drivers. In fact, research has suggested that driverless cars could prevent up to 90 percent of traffic accidents.

"We should be concerned about automated driving," Smith says, "but terrified about conventional driving."

This incident is probably the first of many that will bring heightened scrutiny to autonomous driving as it inches closer to the open market. But Smith and Abbas both hope that consumers will demand more transparency from manufacturers rather than moving away from self-driving cars altogether.

"We have opened up a new can of ethical issues," says Abbas of the Uber's accident. "If I’m driving a car and I’m going to collide, do I save the most number of people, or do I save my family? Who is making those tradeoffs on our behalf? How are they making them and how are the numbers entered?" These are questions consumers deserve the answers to, he says, for any car they might ride in.

For example, a majority of respondents to a 2016 survey claimed that self-driving cars should be utilitarian; that is, they should save the greatest number of lives possible in emergency situations, even at the expense of their drivers. However, most respondents also claimed they would prefer to purchase a car that protected its own passengers at all costs. The issue is further complicated by child passengers and valuable cargo, not to mention pets.

At the moment, there's little publicly available information about how algorithms like Uber's behave, or how these decisions are made.

Abbas hopes incidents like this will pressure companies to put more information about their ethics into the public eye, so consumers can better make decisions in line with their values.

"When a company wants to introduce a vehicle, they’re going to have to answer a lot of questions about the algorithm, the performance, which, as consumers, we don’t get to hear about today."

Source: Mashable