Autonomous cars trained to respond to danger more like human drivers cause fewer injuries in road accidents, according to a study that shows how driverless vehicles might be made safer.
Vulnerable groups such as cyclists, pedestrians and motorcyclists saw the biggest gains in protection when driverless cars used “social sensitivity” in assessing the collective impact of multiple hazards.
The study, published in the US journal Proceedings of the National Academy of Sciences, highlights growing efforts to balance the efficient operation of autonomous vehicles (AVs) with the need for them to minimise damage in collisions.
The research comes as leading tech companies such as Tesla, Alphabet’s Waymo and Amazon’s Zoox push to roll out AVs — which use a range of sensors and automated software to drive without human intervention — around the world. Manufacturers must train AVs to respond instantly to real-world dilemmas, such as what to collide with if a crash becomes unavoidable.
The issue of AV ethics is attracting increasing attention as growing use of the cars offers the prospect of eliminating driver problems such as spatial misjudgments and fatigue.
The study suggests human behavioural methods could “provide an effective scaffold for AVs to address future ethical challenges”, said its China- and US-based authors, led by Hongliang Lu of The Hong Kong University of Science and Technology.
“Based on social concern and human-plausible cognitive encoding, we enable AVs to exhibit social sensitivity in ethical decision-making,” they said. “Such social sensitivity can help AVs better integrate into today’s driving communities.”
“Social sensitivity” included being attuned — like human drivers — to the vulnerabilities of specific road users and being able to judge who was likely to be hurt most seriously in a crash.
The researchers drew on evidence from neuroscience and behavioural science that humans navigate using a “cognitive map” to interpret the world and adapt accordingly.
The scientists based their instructions for the AV on a concept known as “successor representation”, which encodes predictions of how different elements in an environment will interrelate across time and space.
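The paper’s exact formulation is not reproduced in this article, but the successor representation is a well-established construct in reinforcement learning, and a minimal tabular sketch conveys the idea. The number of states, learning rate and discount factor below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Tabular successor representation (SR): M[s, s_next] estimates the expected,
# discounted number of future visits to s_next when starting from state s.
# All parameters here are illustrative assumptions, not taken from the paper.
n_states = 5          # e.g. coarse cells of the road scene
gamma = 0.95          # discount factor: how far ahead predictions reach
alpha = 0.1           # learning rate
M = np.eye(n_states)  # initially, each state predicts only itself

def sr_update(M, s, s_next):
    """One temporal-difference update of the SR after observing s -> s_next."""
    onehot = np.eye(n_states)[s]
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    return M

# Feeding in observed trajectories makes M encode how states interrelate
# across time — a predictive "cognitive map" of the environment.
for s, s_next in zip([0, 1, 2, 3], [1, 2, 3, 4]):
    M = sr_update(M, s, s_next)
```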
They examined the results of coupling their model to EthicalPlanner, a planning system that makes driving decisions by weighing various risk considerations. The researchers modelled 2,000 benchmark scenarios, measuring the total risk of each by assessing the probability of collision and the likely severity of harm for the people involved.
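EthicalPlanner’s internals are not described in this article, so the following Python sketch is only a hypothetical illustration of the risk measure outlined above: collision probability multiplied by expected harm severity, summed over road users, with an assumed extra weight for vulnerable users to reflect “social sensitivity”. The Hazard structure and the weight value are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    """One road user the AV might collide with (illustrative structure)."""
    p_collision: float  # estimated probability of a collision
    severity: float     # expected harm if the collision occurs, 0..1
    vulnerable: bool    # pedestrian, cyclist or motorcyclist

VULNERABILITY_WEIGHT = 2.0  # assumed up-weighting; not from the paper

def total_risk(hazards):
    """Sum expected harm across road users for one candidate trajectory,
    weighting vulnerable users more heavily."""
    return sum(
        (VULNERABILITY_WEIGHT if h.vulnerable else 1.0)
        * h.p_collision * h.severity
        for h in hazards
    )

# A planner would score each candidate manoeuvre this way and choose the
# trajectory with the lowest total risk.
scene = [Hazard(0.10, 0.9, True), Hazard(0.25, 0.3, False)]
print(total_risk(scene))  # 0.255
```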
The scientists found that using their human-inspired model with EthicalPlanner cut overall risks to all parties by 26.3 per cent and by 22.9 per cent for vulnerable road users, compared with using EthicalPlanner alone.
In crash scenarios, all road users suffered 17.6 per cent less harm, rising to 51.7 per cent for vulnerable users. The occupants of the AV were also better off, experiencing 8.3 per cent less harm.

An independent group of experts tasked by the European Commission has called for AVs to be programmed to ensure a “fair distribution of risk and the protection of basic rights, including those of vulnerable users”.
The latest research paper addresses a crucial question of “how to model AV behaviour that is safe and socially sensitive”, said Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology.
“The proposed framework . . . offers a potential path towards AVs that can navigate complex, multi-agent scenarios with an awareness of differing levels of vulnerability among road users,” Rus added.