Who’s Afraid of Self-Driving Cars?

Who gets punished when self-driving cars kill people?

Dr Tomaž Slivnik

"Who goes to jail if a self-driving car kills a pedestrian?" There is no good answer to this question. If it's the car's owner, nobody will buy them. If it's the manufacturer's CEO, nobody will build them. If it's the software writer, nobody will write the software. The inevitable conclusion is: nobody will go to jail. Self-driving car advocates suggest accidents will be purely an insurance (financial) matter.

Unfortunately, this is the worst possible answer to the question. The prospect of going to jail creates an incentive to drive carefully that is far stronger than any financial cost, and one that cannot be delegated. The absence of this incentive creates a moral hazard.

Insurance-based solutions will lead to perverse incentives. Razor-and-blades markets teach us that buyers prefer products which cost less upfront even if later recurring costs are higher, especially when the upfront cost is high. Initially, regulators won't allow self-driving cars on the road until their safety record exceeds that of human-driven cars. Eventually, human-driven cars will be displaced and only self-driving cars will remain. With human-operated cars, driving skills and non-financial incentives (jail time) gone, market forces will favour cheaper cars over lower insurance premiums (that is, better safety). The safety record of self-driving cars will then deteriorate. Incentives always win out in the end, even if it takes time, and regulation is no substitute for private actors' incentives.

Self-driving cars are underpinned by AI. AI has had its successes: games, financial markets, search, and so on. Its greatest success (and main raison d'être) is big data analysis and mass surveillance. What do all the areas where AI has been successful have in common?

1) they are non-mission-critical: no single wrong decision has catastrophic consequences;

2) in a single iteration, the AI algorithm beats other solutions on average;

3) the outcome is measured by running the algorithm many times.

AI, being a black box solution, is not such a good fit for mission-critical applications.

Will self-driving AI be "tweaked" in the same way search AI was "tweaked" to downgrade or cancel political opponents?

Current self-driving cars are electric. The average UK household has 2.4 members and consumes 4000kWh of electricity p.a. A typical Tesla consumes 0.24-0.30kWh per mile. The average UK car travels 7500-9000 miles p.a. In 2020, the UK had 32,697,408 licensed cars and 67,886,011 inhabitants, or roughly 1.2 cars per household. If every car were replaced with a Tesla, electricity consumption would increase by 2000-3000kWh p.a. per household, a 50%-75% increase. The grid is already stretched. Where are the power stations and distribution lines to generate and deliver all this additional electricity?
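
For readers who want to check the arithmetic, here is a minimal back-of-envelope sketch using only the figures quoted above; the cars-per-household ratio is derived from the car, population and household-size numbers, and the result is approximate.

```python
# Back-of-envelope estimate of the extra household electricity demand
# if every UK car were replaced with a Tesla, using the figures above.

HOUSEHOLD_SIZE = 2.4             # average members per UK household
HOUSEHOLD_KWH = 4000             # average annual household consumption, kWh
KWH_PER_MILE = (0.24, 0.30)      # typical Tesla consumption range
MILES_PER_YEAR = (7500, 9000)    # average UK annual car mileage range
CARS = 32_697_408                # licensed UK cars, 2020
PEOPLE = 67_886_011              # UK population, 2020

households = PEOPLE / HOUSEHOLD_SIZE
cars_per_household = CARS / households            # roughly 1.16

extra_low = cars_per_household * KWH_PER_MILE[0] * MILES_PER_YEAR[0]
extra_high = cars_per_household * KWH_PER_MILE[1] * MILES_PER_YEAR[1]

print(f"Extra demand per household: {extra_low:,.0f}-{extra_high:,.0f} kWh p.a.")
print(f"Increase over current use:  {extra_low / HOUSEHOLD_KWH:.0%}-{extra_high / HOUSEHOLD_KWH:.0%}")
# Prints roughly 2,100-3,100 kWh p.a. extra, i.e. around a 50-80% increase.
```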

Tesla started as an electric car company but, perhaps realising the above problem, diversified into solar energy and switched strategy to "local generation and charging". A fast 400kW charger refuels an electric car in about 11 minutes. A 7kW domestic charger (which, on a sunny day, requires a roughly 70m^2 solar panel installation dedicated entirely to it) takes 10.5 hours to do the same. That's longer than overnight, and overnight there is no sunshine. How's that going to work?
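
A similar sketch for the charging arithmetic is below. The battery capacity is not stated above; the ~73.5kWh value is inferred from the quoted 7kW x 10.5 hour home charge, and the ~100W per m^2 of usable solar output in the final comment is likewise an assumption.

```python
# Charging-time arithmetic implied by the figures above.
# The battery capacity is inferred from the quoted 7 kW x 10.5 h home charge;
# it is an assumption for illustration, not a figure from the article.

HOME_CHARGER_KW = 7
HOME_CHARGE_HOURS = 10.5
FAST_CHARGER_KW = 400

implied_battery_kwh = HOME_CHARGER_KW * HOME_CHARGE_HOURS   # ~73.5 kWh
fast_charge_minutes = implied_battery_kwh / FAST_CHARGER_KW * 60

print(f"Implied battery capacity: {implied_battery_kwh:.1f} kWh")
print(f"Fast charge at 400kW:     ~{fast_charge_minutes:.0f} minutes")
print(f"Home charge at 7kW:       {HOME_CHARGE_HOURS} hours")
# A 70m^2 solar array at an assumed ~100W/m^2 of usable output delivers
# about 7kW, which is why the domestic charger needs the whole installation
# to itself on a sunny day.
```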

Dr Tomaž Slivnik is a technology entrepreneur and angel investor.

Improving safety without punishment

Dr Anton Howes

I do not think it would ever be as simple as the CEO of a car manufacturer going to jail, and nor should it be. There are better approaches. 

We have a potential model in the aviation industry, for example, where systems do occasionally fail and cost lives, but where the CEOs of manufacturers and airlines do not end up in jail. Instead, aviation has developed a culture of learning from each failure, in which faults and shortcomings are openly reported without fear of punishment.

A different example is factory safety. Before the 1890s it was widely regarded as a worker's own fault if they got hurt operating dangerous machinery, and not just by managers and owners, but by workers themselves. If they really thought the injury was their employer's fault, they would have to sue, which few could afford to do. This changed with the creation of workers' compensation: workers received automatic compensation for injuries, according to a fixed schedule of rates, but in exchange they lost the right to sue for further damages. This arrangement spared both employers and employees costly lawsuits, and it refocused employers from shifting blame towards preventing injuries. The change, and its benefits, are well documented.

So I think it is a mistake to assume that only the threat of punishment, whether jail or financial penalties, provides enough incentive to promote safety in self-driving vehicles. There are other options.

I think Dr Slivnik’s view of a cost-safety trade-off is also mistaken. Airline customers certainly demand a (high) minimum level of safety, or at least of safety regulation. Though airline prices have fallen markedly over the last 70 years, safety has risen just as dramatically. Nor do I accept that AI will never be safe for "mission-critical" applications. Indeed, the whole point of using AI is to go beyond very simple robotic decisions and learn to deal with a growing multitude of less expected events, and it is getting better all the time.

Lastly, on the power needs of electric cars, we need to be aware of price signals. If the grid becomes over-stretched by electric cars, then higher electricity prices, or even the expectation of them, will strongly incentivise the development of a better supply. It sounds like a big job, but we have gone through much more capital-intensive technological changes before. Just think of all those Victorian plumbing, sewerage and gas networks!

Dr Anton Howes is a historian and author of ‘Age of Invention’.
