I, Robot: the future?
Self-driving cars are constantly in the headlines at the moment: from the fatal crash in Arizona earlier this year involving an autonomous car, to Elon Musk’s claim three years ago that Teslas would be able to drive themselves without any human interaction by 2017.
Within the chauffeur industry, these developments are clearly causing concern. Yet there seems to be very little information on how the artificial intelligence that governs self-driving cars avoids dangerous situations.
If you are driving in heavy, fast-moving traffic and someone darts into the road, you have a split second to make a decision and react. But what oversees the decision that artificial intelligence will make? Someone has to write the programming that defines the reaction, and is that programming biased by the writer, by statistics, or by manufacturer policy?
In the 2004 film I, Robot, Will Smith’s character has a profound dislike of artificial intelligence: the robot that saved him from a car accident made the logical but arguably unethical decision to save him instead of a 12-year-old girl, because her survival was statistically less probable.
A study conducted by the Massachusetts Institute of Technology (MIT), based on an online survey of 2 million participants from 200 countries, concluded that there are several universal preferences: sparing the lives of humans over the lives of animals; sparing the lives of many people rather than a few; and preserving the lives of the young rather than the old.
However, the degree to which respondents agreed varied. For example, countries in “the East”, while still preferring to save younger people over older people, were far less pronounced in that preference than Western countries.
The results became murkier when it came to saving the lives of passengers over pedestrians: 40% favoured protecting the passenger over the pedestrian, and the picture grew even less decisive as the scenarios became more complex. In a choice between killing one person crossing the road legally and killing two people crossing illegally, the results were fairly evenly split.
One vehicle manufacturer even went as far as saying its vehicles should prioritise saving their own passengers; this understandably caused a scandal, but it raises the question of computer-programming ethics versus human reactions.
How does this affect chauffeurs? With the increased discussion around autonomous vehicles in the industry, one thing is clear: chauffeurs are responsible for the lives of their passengers, and for making snap decisions that minimise the long-term effects on those around them.
Would a chauffeur choose to run their vehicle into a bollard, injuring their passenger in a minor way, rather than hit a child? If reaction time permits, it is likely that they would. But if the passenger were an infirm heart patient on the way back from hospital with a newborn child, would that govern the reaction?
If A.I. were governing the decision, would the manufacturer be inclined to protect the passenger? Would that override ethical decision-making in the face of an imminent accident? In the modern age of transportation, these questions are yet to be answered conclusively, and, just as importantly, legislation has yet to be adjusted to clearly allocate liability.
Only time will tell.
Keep up to date with all the recent news and offers at LCH by following us on social media: