As we wrote last week, there are two huge problems with programming driverless cars. First, they may be safer than human drivers in the long term, but until then they are considerably more dangerous than humans. Second, we want to program them to make decisions just like a human, yet we don’t want them to drive just like a human!
Like a human
If the goal is to match human performance, then the cars have to match the best of human performance. That is a surprisingly demanding goal. Even chatbots are still not capable of holding convincingly human-like conversations with customers.
According to the CEO of May Mobility, there is, globally, one death per 100 million miles driven. He also asserts that, judging by the current failure rate of driverless cars, humans are still 10,000 times better drivers. It seems an almost impossible goal to achieve.
Following Moore's Law, performance doubles every 16 months, so it would take around 16 years just to match human performance. (Even Moore's Law sounds very optimistic here.) Meanwhile, self-driving cars must also improve on the human rate of injury accidents, not just deaths. Is it really possible to outdo human driving skill in 16 years?
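As a rough back-of-the-envelope check on that timescale, here is a minimal sketch assuming the 10,000-fold gap and the 16-month doubling period quoted above (figures taken from the claims in this article, not measured data):

```python
import math

# Back-of-the-envelope check of the article's timescale.
# Assumptions (taken from the claims above, not measured data):
#   - driverless cars are currently ~10,000x worse than human drivers
#   - performance doubles every 16 months (a Moore's-Law-style rate)
gap = 10_000
doubling_period_months = 16

doublings_needed = math.log2(gap)                            # ~13.3 doublings
years_needed = doublings_needed * doubling_period_months / 12

print(f"{doublings_needed:.1f} doublings -> about {years_needed:.1f} years")
# Prints roughly 17.7 years, in the same ballpark as the 16-year figure above.
```

Even under those generous assumptions, closing the gap takes well over a decade of uninterrupted doubling.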
Do we still need humans?
MIT scientists claim we will always need humans to intervene. Air travel, for example, is highly automated but still needs human intervention. Driverless cars should be treated at least like aircraft. There must always be a human operator. MIT says automation simply changes the nature of the work that operator does.
It is also true that no driverless car will behave exactly the same way as the next one. This is because different programmers will use their own algorithms. In fact, no programmer will know exactly how that vehicle will behave in every circumstance. Funnily enough, that behaviour is starting to sound just like a human!
The social aspect of driverless cars is also largely ignored. When you get into a bus or car, you acknowledge that the person at the wheel is in charge. This social contract helps us put our trust in that person. Where is the social contract when riding in a shared robot taxi? If the robot taxi has no driver, who helps a customer in a wheelchair?
Driverless cars are still a fantasy
As we said in Part 1, carmakers are no longer aiming for fully driverless cars. They are working on levels 3 and 4. But these levels are the most dangerous of all. They rely on having a human present, and that person has to be ready to pay attention the moment it is needed. This is unlikely, given our propensity to distract ourselves with something else when bored.
One of the biggest road safety problems today is driver inattention. Yet with level 3 and 4 cars, driver inattention at the wrong moment will be catastrophic. It may not be possible to go halfway with autonomy. Only perfect level 5 autonomy could create the level of safety required.
A second problem is the need to dramatically change the entire infrastructure of cities and towns to accommodate self-driving cars. The scale of this task is monumental, especially when you consider how long it has taken to create our roads and railways. One engineer argued that if you need to make massive infrastructure changes to make AVs work, you have failed at the task.
Ultimately, there will always be a gulf between the engineers and regulators who demand safety, and the visionaries and upstarts with their bold ideas. Companies may need to adjust their ambitions and create less automated vehicles that can drive anywhere, anytime. That may be more practical than building sci-fi vehicles that demand we change the world.
Waymo's CEO made a revealing statement about building driverless cars: “I think the things that humans have challenges with, we’re challenged with as well.” It seems carmakers have discovered they are human after all.