Imagine a self-driving car that makes ethical and moral decisions the way humans do, steering its way into this automated era of ours. It may sound far-fetched, but according to researchers it is not impossible.

The researchers used virtual reality experiments to investigate human behavior and moral judgments in simulated road traffic scenarios.

Participants drove a car through a typical suburban neighborhood in foggy weather and encountered unavoidable obstacles, including animals, humans, and inanimate objects; they then had to decide which to spare.

The outcomes of the study, published in the journal Frontiers in Behavioral Neuroscience, were described by statistical models that yield rules explaining the observed behavior.

The earlier assumption was that moral decisions are strongly context-dependent and therefore could not be modeled algorithmically.

“But we found quite the opposite. Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object,” the study’s first author, Leon Sütfeld, said in a statement.
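
To illustrate the idea (this is a hypothetical sketch, not the authors' actual model), a value-of-life rule can be pictured as assigning each potential obstacle a numeric value and, when a collision is unavoidable, choosing the course that sacrifices the obstacle with the lowest assigned value. The categories and numbers below are purely illustrative assumptions.

```python
# Hypothetical sketch of a value-of-life-based decision rule.
# Categories and numeric values are illustrative assumptions,
# not figures from the study.

VALUE_OF_LIFE = {
    "adult": 10.0,
    "child": 10.0,
    "dog": 3.0,
    "deer": 2.5,
    "trash_can": 0.1,
}

def choose_lane(obstacles_by_lane):
    """Given a mapping of lane -> obstacle category, return the lane
    whose obstacle has the lowest attributed value of life, i.e. the
    collision the rule predicts a driver would accept."""
    return min(
        obstacles_by_lane,
        key=lambda lane: VALUE_OF_LIFE[obstacles_by_lane[lane]],
    )

# Example: a two-lane dilemma with a dog on the left and an adult on
# the right; the rule predicts swerving into the dog's lane.
print(choose_lane({"left": "dog", "right": "adult"}))  # -> "left"
```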

The results have major implications for the debate surrounding the behavior of self-driving vehicles.

The German Federal Ministry of Transport and Digital Infrastructure has defined 20 ethical principles for self-driving vehicles.

One example concerns behavior during unavoidable accidents, where the principles rest on the critical premise that human moral behavior cannot be modeled or programmed.

Prof. Gordon Pipa, a senior author of the study, argued that a public debate is needed now that it appears possible to program machines to make moral decisions the way humans do.

“We need to ask whether autonomous systems should adopt moral judgments, if yes, should they imitate moral behavior by imitating human decisions, should they behave along ethical theories and if so, which ones and critically, if things go wrong who or what is at fault?” Pipa said in a statement.

For instance, under the new German ethical principles, a child running onto the road would be classified as significantly involved in creating the risk, and thus less qualified to be saved than an adult standing on the footpath as an uninvolved party.

“Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” explained another senior author, Prof. Peter König.

“Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, whether machines should act just like humans,” König added.

With artificial intelligence systems becoming increasingly common, such as robots in hospitals, the authors warn that autonomous vehicles are just the beginning of a new era in which machines will make decisions without us unless we establish clear rules.

 
