Luckily, no one was injured when one of Google’s self-driving cars recently crashed into a bus as it pulled out at a junction. The car was only travelling at 2mph, after all. The company has admitted it bore “some responsibility” for the accident because the test driver (and presumably the car) thought the bus would slow down to allow the car to pull out.

Google has now redesigned its algorithms to account for this, but the incident raises the key question of just who is responsible in the eyes of the law for accidents caused by driverless cars. Is it the car’s owner, its manufacturer or the software maker? Who would be taken to court if charges were brought? And whose insurance company would have to pay for the damage?

Most modern cars have some technology that operates without human intervention, from airbags and anti-lock brakes to cruise control, collision avoidance and even self-parking. But very few cars have full autonomy in the sense that they make their own decisions. A human driver is usually still in control – although this assumption is becoming harder to maintain as advanced driver assistance technologies, such as electronic stability control, keep drivers in control of the vehicle when otherwise they might not be.

Driver and company negligence

As things stand, the law still aims car-specific regulations at human drivers. The international Vienna Convention on Road Traffic places responsibility for the car with the driver, stating that “[e]very driver shall at all times be able to control his vehicle”. Drivers must also have the physical and mental ability to control the car, and reasonable knowledge and skill to prevent it harming others. Similarly, in UK law, the person using the car is generally liable for its actions.

But following an accident, legal liability can still depend on whether a collision is due to the negligence of the human driver or a defect in the car. And sometimes, it could be due to both. For example, it may be reasonable to expect a driver to take due care and look out for potential hazards before engaging a self-parking function.

Driverless car technologies come with a warning that they are not insulated from software or design faults. But manufacturers can still be held liable for negligence if there is evidence that an accident was caused by a product defect. Legal precedent for corporate negligence has existed in the UK since 1932, when, in Donoghue v Stevenson, a woman successfully sued the maker of a bottle of ginger beer containing a dead snail after she drank from it and fell ill.

We have come a long way since the 1930s. Legislation such as the Consumer Protection Act 1987 now provides a remedy for people who buy defective products. In the case of driverless vehicles, this can extend beyond the car manufacturer to the company that programs the autonomous software. Consumers don’t need to prove the company was negligent, just that the product was defective and that the defect caused harm.

However, while proving this for components such as windscreen wipers or locks isn’t too hard, it is more complicated to show that software is defective and, more importantly, that the defect led to injury or harm. Establishing liability can also be difficult if there is evidence the driver interfered with the software or overrode a driver-assistance function. This is particularly problematic where advanced technologies allow driving to be effectively shared between the car and the driver. Product manufacturers also have specific defences, such as that the state of scientific knowledge at the time meant the defect could not have been discovered.

Duty of care

When it comes to the driver’s responsibility, current law requires drivers to take the same amount of care regardless of how technologically advanced the car is or how familiar they are with that technology. Drivers are expected to show reasonable competence, and if they fail to monitor the car or create a foreseeable risk of damage or harm, they are in breach of their duty of care. This implies that, without a change in the law, self-driving cars won’t allow us to take our eyes off the road or nap at the wheel.

Under the current law, then, if a self-driving car crashes, responsibility lies with whichever party was negligent, whether that’s the driver for not taking due care or the manufacturer for producing a faulty product. It makes sense for the driver to still be held responsible when you consider that autonomous software follows a set of rational rules and still isn’t as good as humans at dealing with the unexpected. In the case of the Google crash, the car assumed that the bus driver was rational and would give way. A human would (or should) know that this won’t always be the case.

Joseph Savirimuthu, Senior Lecturer in Law, Director of Postgraduate Studies, University of Liverpool.

This article was first published on The Conversation.