The driverless cars of the future that will be constantly spying on us

Opponents of seat belt laws have often used a macabre thought experiment to justify their position. What if, they say, a sharp stake was mounted in the centre of the steering wheel, pointing directly at the driver’s heart to make crashes fatal? Cases of speeding and drunk driving would surely plummet.

The roads would become safer for pedestrians, cyclists and passengers, perhaps even for drivers, if their appetite for risk changed enough. No car manufacturer has followed through on this suggestion. But as vehicles become increasingly capable of driving themselves, safety experts have come to fear the reverse.

While fully driverless cars are still believed to be many years away, manufacturers are rapidly adding semi-autonomous features to their cars that, little by little, pass control of the car over to software. Tesla’s Autopilot, General Motors’ Super Cruise and Ford’s upcoming BlueCruise system all offer variations of systems that provide automatic cornering, braking and lane changes. The companies claim the systems improve safety and lead to fewer accidents per mile than human driving.

But campaigners fear that they could invert the hypothetical stake in the steering wheel, making roads more dangerous as drivers recklessly put their faith in flawed systems.

According to Tesla, its Autopilot cars are much safer than the average car

The “unsafe valley of assistance”

Driver assistance features do go wrong. US regulators are investigating more than 20 crashes involving Tesla’s Autopilot, and the majority of fully driverless systems currently being tested require human oversight. The most high-profile death involving a driverless car, when an Uber test vehicle hit a pedestrian in 2018, was blamed partly on the safety driver not paying attention.

Uber’s self-driving cars could not recognise jaywalking pedestrians when one of the vehicles hit and killed a woman in 2018

Germany’s Fraunhofer FKIE, an institution that examines the risk of new technologies, has said the coming years could lead to an “uncanny and unsafe valley of assistance” in which accidents counter-intuitively increase as safety systems get better, leading drivers to place too much faith in them.

Cars may be able to drive themselves on the motorway but not as they come off the slip road, and the assistance systems may simply fail, unable to spot animals or temporary lane markings. For several years yet, humans will have to be prepared at all times to take over from the software, and evidence suggests drivers are not ready to do this.

“Humans are terrible overseers of highly automated systems. We’re really not capable of it,” says Bryan Reimer, a researcher at the Centre for Transportation and Logistics at the Massachusetts Institute of Technology. A study of Autopilot users by researchers at MIT found that drivers spent more than a third of their time not looking at the road when the software was engaged.

The solution, according to Reimer and companies developing the technology, comes at a price: constant surveillance of drivers through video cameras that ensure they are paying attention.


Driver monitoring will soon become a requirement

While Tesla relies on sensors in a steering wheel to check drivers are in control, the likes of GM and Ford are turning to always-on infrared cameras that monitor a driver’s gaze and head position to ensure they are continually paying attention. If drivers fail to look at the road, a series of escalating warnings will sound until the vehicle comes to a stop. The systems have won safety plaudits, but motorists may be put off by the idea of a camera pointed at the driver’s seat.
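The escalating-warning behaviour described above can be sketched in outline. Everything in this sketch — the function names, the sample threshold, the order of the escalation steps — is illustrative only, and is not drawn from any carmaker’s actual implementation.

```python
# Hypothetical sketch of an escalating driver-attention warning loop.
# Assumption: the camera delivers a stream of booleans, True meaning
# the driver's gaze is on the road. None of this reflects real vendor code.

ESCALATION = ["visual alert", "audible alert", "seat vibration", "bring car to stop"]

def monitor(gaze_samples, threshold=3):
    """Return the warnings issued for a stream of eyes-on-road samples.

    gaze_samples: booleans, True = driver looking at the road.
    threshold: consecutive off-road samples tolerated before escalating.
    """
    warnings = []
    off_road = 0   # consecutive samples with eyes off the road
    level = 0      # current rung on the escalation ladder
    for on_road in gaze_samples:
        if on_road:
            off_road = 0
            level = 0          # attention restored: reset escalation
            continue
        off_road += 1
        if off_road >= threshold and level < len(ESCALATION):
            warnings.append(ESCALATION[level])
            level += 1
            off_road = 0       # restart the countdown at the new level
    return warnings
```

With a threshold of three samples, twelve consecutive off-road samples would walk through all four rungs, while a single glance back at the road resets the ladder.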

Last month, the Chinese government stopped Teslas from entering some military and state-owned facilities because of the company’s own driver-facing cameras. Tesla said the cameras are deactivated outside the US, and that even there they are not used for monitoring but to record video of crashes and other incidents. Nick DiFiore, the head of automotive at Seeing Machines, an artificial intelligence company that supplies the technology for GM, said that most companies do not store or transfer video data.

Its software, which can detect signs of drowsiness such as a lolling head or drooping eyelids, acts in real time, meaning the footage does not need to leave the car. “The [manufacturers] are going to pretty great lengths. None of the systems [Seeing Machines] designed store data.”

The remains of a Tesla vehicle are seen after it crashed in The Woodlands, Texas. Credit: Scott J. Engle/Reuters

He admits this could change at carmakers’ discretion, however, should drivers demand more from their cameras, such as making Zoom calls from their vehicle.

Security breaches that expose the cameras also cannot be ruled out. If motorists find being watched uncomfortable, they may have little choice: from next year, cars sold in the EU will require driver monitoring systems if they have semi-autonomous features.

The UK is consulting on its own measures, which include proposals for drivers to be checked at least every 30 seconds to see if they are paying attention. Proponents of driver monitoring believe any concerns will be easily outweighed by greater convenience and safety. “We want to move society forward and use automation,” says Reimer, of MIT. “That is not risk free. We need to manage those risks much more effectively than we are today.”