Science Fiction meets real life - Teaching Autonomous Cars Ethics

You’re driving down a dual carriageway, one lane of which is closed off with cones, when a child suddenly runs out into the road. Human instinct dictates that the right course of action is to swerve through the cones in an effort to miss the child. In a brave new world that is exploring autonomous technology, how would a self-driving vehicle act when faced with the same situation?



The technology utilised by autonomous vehicles has come on in leaps and bounds in recent years. It is now at the point where working concept cars can make better decisions than many of the human drivers they share the road with. Yet whilst the technology has moved on, there is still a question as to how an autonomous vehicle would react in a situation like the one above – would it swerve through the cones and break the law to save the child, or simply obey the law and continue on its course? Chris Gerdes and his team have set out to answer this question and to complete a task that feels like the plot of a sci-fi movie: teaching robot cars ethics.


Chris Gerdes is a professor of Mechanical Engineering at Stanford University and Director of the Center for Automotive Research at Stanford (CARS). Gerdes and his team have already spent years working with autonomous cars, figuring out how our thought processes can be built into the cars of the future to make them better.

Possibly the team’s best-known achievement is the major part they played in programming ‘Shelley’ – the autonomous Audi racing car that powered its way through the 153 turns of the infamous Pikes Peak course. Gerdes is also well known for his TED talk on the future of racing cars and his study of the brainwaves of professional racing drivers.

After years spent as some of the driverless car’s biggest advocates, it would seem that if anyone is up to the task of instilling a sense of ethics in the car of the future, it is Gerdes and his team.

The journey on which Chris and his team have embarked sees them trying to programme a computer with a complex sense of ethics. After all, what they are trying to do is anything but black and white – it is very rare that two scenarios will be the same, so how do you programme the car to make a complicated ethical decision? Talking about the undertaking, Gerdes said:

“We need to take a step back and say, ‘Wait a minute, is that what we should be programming the car to think about? Is that even the right question to ask?’ … We need to think about traffic codes reflecting actual behaviour to avoid putting the programmer in a situation of deciding what is safe versus what is legal.”

Programming ethics into a machine becomes even more difficult in the case of an unavoidable accident. When barrelling towards a collision that is impossible to avoid, should an autonomous vehicle sacrifice its occupant in order to save more lives, or should it be programmed to protect its occupant and aim for the smallest obstacle – and what if that smaller obstacle is something like a pushchair?

Regardless of whether it is a human or a robot making this decision, mistakes are more than likely going to be made. The difference is that a human makes a heat-of-the-moment decision with, in most cases, no previous experience of a situation of this magnitude, whereas an autonomous car has been programmed in advance how to deal with it. That raises the question: what should the machine be pre-programmed to think?

Chris and his team have decided that the best way to figure out problems like this is to throw autonomous cars headfirst into these situations and see how they react. The experiments that Gerdes and his team present the cars with are close replicas of situations they could face out on real roads.

One recent experiment saw a car pointed directly at a simulated road crew working behind cones. The decision here is whether to swerve, crossing a double yellow line and breaking the law, or to continue on course, taking a direct route through the cones and the unlucky road crew. The result? The autonomous car opted for a last-minute swerve, avoiding the cones and the road crew.
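One way to picture the choice the car is making is as a cost trade-off in which breaking a traffic rule carries a small, ‘soft’ penalty while harming a person carries an overwhelmingly large one, so the legal line gets crossed whenever that is what keeps people safe. The short Python sketch below illustrates that idea only; the manoeuvre names, weights and risk numbers are made-up assumptions for the example, not Stanford’s actual software.

```python
# A minimal, purely illustrative sketch of weighing "legal" against "safe"
# when scoring candidate manoeuvres. All names, weights and risk values are
# assumptions for this example, not the real research code.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    crosses_double_yellow: bool   # does it violate a traffic rule?
    collision_risk: float         # estimated probability of hitting the road crew

# Illustrative weights: harming people dominates breaking a lane rule.
COST_LANE_VIOLATION = 1.0
COST_COLLISION = 1_000.0

def cost(m: Manoeuvre) -> float:
    """Lower is better: legality is a soft penalty, a collision a huge one."""
    return (COST_LANE_VIOLATION * m.crosses_double_yellow
            + COST_COLLISION * m.collision_risk)

options = [
    Manoeuvre("stay in lane", crosses_double_yellow=False, collision_risk=0.95),
    Manoeuvre("swerve across double yellow", crosses_double_yellow=True, collision_risk=0.01),
]

best = min(options, key=cost)
print(best.name)  # -> "swerve across double yellow"
```

With any sensible weighting, this crude comparison reproduces what the real car did in the experiment: the swerve’s small legal penalty is dwarfed by the cost of driving into the crew.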

Despite successful real-world demonstrations from companies like Google, Volvo and Tesla, Chris still believes it will be a long time before driverless vehicles are wholly accepted by the wider public:

“With any new technology, there’s a peak in hype and then there’s a trough of disillusionment. We’re somewhere on that hype peak at the moment. The benefits are real, but we may have a valley ahead of us before we see all of the society transforming benefits of this sort of technology.”

Whilst this research programme is still very much in its infancy, the work that Chris’s team and Stanford’s philosophy department are doing will have a huge impact on the safety and general acceptance of the technology in the upcoming autonomous age. Thanks to this research, it certainly seems that autonomous cars instilled with ethical programming will make smarter decisions going forward.