Wednesday, November 1, 2017

Self-driving cars .. .. and the most ethical choice Q !!

Sure you would have heard this story ~ and would have your own answer and theory to a similar Q – Sujatha wrote a classical answer in Enthiran. The Q goes like this:

You are standing near a control lever when you see a runaway wagon hurtling down the railway track. On that track a group of people are standing, unaware, and would be crushed by the oncoming wagon. If you operate the lever, the wagon would be diverted onto a side track that is rarely used. Thinking that a train would never come that way, a small girl is happily playing on this side track. By handling the lever, you can save the lives of a few (who are at fault for standing on the track) but would end up becoming a tool in the killing of an innocent girl who did nothing wrong in playing on the unused track. So,

  • Will you remain silent, allowing fate to take its course – the death of some people?
  • Or handle the lever, divert the wagon and save the lives of those people, but sadly be a party to the death of an innocent child?

Which is the most ethical choice?

Now read this further .. .. a self-driving car carrying a family of four on a rural two-lane highway spots a bouncing ball ahead. As the vehicle approaches, a child runs out to retrieve the ball. Should the car risk its passengers' lives by swerving to the side – where the edge of the road meets a steep cliff? Or should the car continue on its path, ensuring its passengers' safety at the child's expense? This scenario and many others pose moral and ethical dilemmas that carmakers, car buyers and regulators must address before vehicles are given full autonomy, according to a study published in Science.

Self-driving cars are almost here, but one big question remains – how do they make hard choices in a life-and-death situation? Now researchers have demonstrated that smart vehicles are capable of making ethical decisions on the road, just like we do every day. By studying human behaviour in a series of virtual reality-based trials, the team were able to describe moral decision-making in the form of an algorithm. This is significant because researchers had previously assumed that modelling complex ethical choices was out of reach.

"But we found quite the opposite," says Leon Sütfeld, one of the researchers from the University of Osnabrück, Germany. "Human behaviour in dilemma situations can be modelled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object."

If you take a quick glance at the statistics, humans can be pretty terrible drivers, often prone to distraction, road rage and drink-driving. It's no surprise, then, that there are almost 1.3 million deaths on the road worldwide each year, with 93 percent of accidents in the US caused by human error. But is kicking back in the seat of a self-driving car really a safer option? The outlook is promising. One report estimates that driverless vehicles could reduce the number of road deaths by 90 percent, which works out to around 300,000 lives saved per decade in the US alone.

Despite the glowing figures, developing a self-driving car that can respond to unpredictable situations on the road hasn't been a smooth ride. One stumbling block is figuring out how these smart cars will deal with road dilemmas that require ethical decision-making and moral judgement.

While previous research has shown that self-driving cars can avoid accidents by driving slowly at all times or switching between driving styles, following a set of programmed 'rules' isn't enough to survive the demands of inner-city traffic. Consider an improbable scenario like the one discussed earlier – the 'trolley dilemma', a thought experiment that tests how we make moral decisions. In such a no-win situation, can a driverless car make the best choice? The tricky thing with these kinds of decisions is that we tend to choose based on the context of the situation, which is difficult to mirror in an algorithm programmed into a machine.

Using virtual reality to simulate a foggy road in a suburban setting, the team placed a group of participants in the driver's seat in a car on a two-lane road. A variety of paired obstacles, such as humans, animals and objects, appeared on the virtual road. In each scenario, the participants were forced to decide which obstacle to save and which to run over. Next, the researchers used these results to test three different models predicting decision making. The first predicted that moral decisions could be explained by a simple value-of-life model, a statistical term measuring the benefits of preventing a death. The second model assumed that the characteristics of each obstacle, such as the age of a person, played a role in the decision making process. Lastly, the third model predicted that the participants were less likely to make an ethical choice when they had to respond quickly.
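The first of those models – the simple value-of-life model that the study found best fit participants' choices – can be sketched in a few lines of code. This is only an illustrative toy, not the researchers' actual implementation: the obstacle categories and scores below are assumptions made up for the example, not figures from the paper.

```python
# Toy sketch of a value-of-life decision model: each obstacle category
# carries a single score, and the car steers into the lane whose obstacle
# has the LOWER score. All values here are illustrative assumptions.

VALUE_OF_LIFE = {
    "adult": 1.0,
    "child": 1.0,
    "dog": 0.4,
    "inanimate_object": 0.05,
}

def choose_lane(left_obstacle: str, right_obstacle: str) -> str:
    """Return 'left' or 'right': the lane whose obstacle has the lower value."""
    if VALUE_OF_LIFE[left_obstacle] < VALUE_OF_LIFE[right_obstacle]:
        return "left"
    return "right"

print(choose_lane("dog", "adult"))  # steers toward the dog's lane: 'left'
```

The point of the study is precisely that something this simple – one learned number per obstacle type, compared once – predicted human choices better than richer models that weighed individual characteristics or time pressure.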

After comparing the results of the analysis, the team found that the first model most accurately described the ethical choices of the participants. This means that self-driving cars and other automated machines can make human-like moral choices using a relatively simple algorithm. But before you throw away your driver's licence, it's important to remember that these findings open up a whole new realm of ethical and moral debates that need to be settled before self-driving cars are out on the road. While we still have a way to go before we take our hands off the steering wheel for good, these findings are a big leap forward in the world of intelligent machines.

The research was published in Frontiers in Behavioral Neuroscience, and this post is largely reproduced from an article read at https://www.sciencealert.com/self-driving cars.

** the most ethical choice?

  • To a similar Q in Enthiran, Bohra asks Chitti, the robot, whether he would save Einstein or the child ~ Chitti instantly replies: 'hypothetical Q'.
  • In the ethical Q at the start – if it were a group of sheep versus a single human life, most people would say they would save the human – so the mindset is that animal lives are less important than human lives!


With regards – S. Sampathkumar.

24th Oct 2017.
