Self-Driving Accidents

Self-driving cars have been a dream for generations, and nominally self-driving cars (in reality they're often remote-driven) are on the road today. But self-driving cars react to problems in ways fundamentally alien to how humans react, and that leads to my theory: regardless of their accidents-per-mile or accidents-per-hour statistics, the accidents they get into will be ones that humans would have trivially avoided, and so self-driving accidents will provoke additional outrage. The solution going forward is better driver-assistance rather than self-driving.

A cartoon of a depressed-looking self-driving car from Futurama
"Project Satan" from Futurama epsiode "The Honking"

My self-driving journey

When I first ran into the idea of self-driving cars decades ago, the expected "realistic" path required huge investments in infrastructure. Machine vision was progressing far too slowly to be a realistic primary sensor, and so roads would need electronic "rails" embedded in them to tell self-driving cars the directions and ways to go. Ideally, all cars would have transponders which indicated their location, speed/direction, and intentions. At the time I didn't think too much about pedestrians: the problems of how cars dealt with other cars would be hard enough.

Ten or twenty years ago, I had reversed my position: machine vision and neural networks had made amazing strides, Google Maps and similar efforts had created precision maps of nearly all roads in the US (and other countries), and suddenly the idea of a "no-infrastructure" self-driving car was plausible. They could use the same indicators that people used (like street signs and lane markings) and wouldn't require a massive rebuilding of our roads.

Now that progress in driver-assistance and self-driving cars seems to have stalled, and edge cases have proliferated, I am back to being a self-driving-car skeptic. I love my driver-assistance features and they have definitely made my driving safer, but self-driving in chaotic environments (e.g. city streets with pedestrians, neighborhoods with children playing, snowstorms) may never be possible.

How self-driving cars react

Self-driving cars will make (and have made!) mistakes that no competent human would make. Those come from a combination of how they perceive what's around them and how they respond to that model of the world. I'm trying to avoid words like "see" or "think" because those give the systems too much credit: the systems are already marvels of computing, but treating them like human or even animal sensing and thinking hurts our understanding far more than it helps.

Perception

Self-driving cars and driver-assistance features have a wide range of sensors available. These include a collection of visible-light cameras (i.e. sensors roughly equivalent to the capabilities of the human eye), radar, sonar, and lidar (detection-and-ranging sensors using radio, sound, and light, respectively), satellite navigation systems, and a host of own-status sensors like accelerometers and wheel-speed sensors.

Although people have learned to use their eyes for the majority of driving situational-awareness tasks, machine vision is fooled more easily. Most cars use the other detection-and-ranging sensors for anti-collision purposes like dynamic cruise control and automatic emergency braking. Tesla is an outlier here by relying exclusively on cameras.

Side note: this was an awful idea by Elon Musk (against the advice of his engineering team and despite demonstrated accidents) driven purely by cost concerns.

Machine vision continues to struggle with depth perception, and accidents have happened because cameras didn't recognize an obstacle (e.g. a tractor-trailer) and the car kept driving at full speed. Computers recognize objects through statistics: "this group of pixels has a 92% probability of being another car." Humans also work in probabilities with our senses, but we also filter for context in a way that computers don't: "that blob looks like a penguin, but I'm driving in Nevada so it must be something else." In addition, if the intent is to improve safety over human limits (and it should be), then why would we accept the same dangers in low-visibility situations that routinely cause fatal accidents? Radar prevents fog-related pile-ups! Any of the sensors I listed can be fooled by odd situations, but they are fooled by different odd situations: if the sensors "vote", the system will be right far more often than any one type of sensor.
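The sensor "vote" idea can be sketched as a simple majority rule. This is an illustrative toy, not how any production system works: real fusion weighs confidence, sensor physics, and time, and safety-critical systems typically treat any high-confidence obstacle report as a reason to brake.

```python
from collections import Counter

def fuse_detections(detections):
    """Majority vote across independent sensors.

    detections: dict mapping sensor name -> classification,
    e.g. {"camera": "clear", "radar": "obstacle"}.
    Because different sensors are fooled by different odd
    situations, a majority vote is right more often than
    any single sensor alone.
    """
    votes = Counter(detections.values())
    label, _count = votes.most_common(1)[0]
    return label

# The camera misses the tractor-trailer, but radar and lidar see it.
readings = {"camera": "clear", "radar": "obstacle", "lidar": "obstacle"}
print(fuse_detections(readings))  # obstacle
```

The key assumption that makes voting work is that sensor failures are mostly independent; a camera-only design throws that property away.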

Another side note: I haven't read anything about sound sensors in self-driving cars. When I drive, I listen to the engine and tire noise of cars around me as another indicator of driver intentions. I can hear cars in my blind spots and a suddenly-revving engine means it's time for me to ensure I'm not in the way of that other driver.

Processing

The big advantage that people have over computers in processing is context. A self-driving car can react faster and never gets tired or drunk, but it cannot infer like people can. A human driver may see a group of children kicking a ball and slow down without seeing anything crossing their path because the human knows that a ball and/or child could run into the road with no warning.

Similarly, a person can process the various off-nominal road conditions more easily than a computer. That includes flaggers or police directing traffic, and confusing construction zones with contradictory lane markings.

Finally, that processing must lead to two-way communication in some cases. A pedestrian crossing the road often checks whether the driver has seen them, and drivers negotiate with each other at a merge. I imagine that future self-driving cars will need an equivalent to visible eyes and hands for the purpose of signaling people around the car.

In current "self-driving" cars, these edge cases are handled by human drivers: either relinquishing control to the onboard driver (hopefully they were paying attention and/or have time to react appropriately!) or a remote driver in a call center somewhere.

Differences and the way forward

Because self-driving cars perceive things differently and cannot contextualize, they will get into different accidents than people. Rather than getting tired or distracted and hitting the brakes late, self-driving cars will plow into children stepping off a school bus or crash into emergency vehicles. Even if the accident rate is lower than for human drivers (given Tesla's frequent mis-characterization of "self-driving" we are probably a long way off from knowing the answer), the accidents will tend to be ones that a human driver could avoid.

The way forward as I see it is better driver-assistance rather than driver-replacement. My 2023 car has lane-keeping assistance, dynamic cruise control, and automatic braking, and all of those are helpful and make me a safer driver. A great example of assistance vs. replacement is that the car warns me when the car in front is pulling away. For driver-assistance, that ping is great because it says "look up and maybe start driving!". False positives and false negatives are low-impact (false negatives are probably preferred because it's better for someone to wait longer than absentmindedly rear-end the car in front of them). In self-driving, that decision would need to be accurate with both false positives and false negatives being high-impact. I can also imagine a camera-driven warning that says "red light ahead" or "traffic light just turned green". Wonderful driver-assistance, and if they get it wrong sometimes that's not a huge deal.
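The asymmetry between assistance and replacement can be made concrete with expected costs. The error rates and costs below are invented purely for illustration, not measurements:

```python
def expected_cost(p_fp, p_fn, cost_fp, cost_fn):
    """Expected cost per decision, given false-positive and
    false-negative rates and the cost of each kind of error."""
    return p_fp * cost_fp + p_fn * cost_fn

# Driver-assistance ping: a spurious ping mildly annoys (cost 1),
# and a missed ping just means waiting a bit longer (cost 1).
assist = expected_cost(p_fp=0.05, p_fn=0.05, cost_fp=1, cost_fn=1)

# Driver-replacement: the same decision moves the car, so a false
# positive (phantom launch) and a false negative (blocking traffic)
# are both far more expensive (cost 100 each, again illustrative).
replace = expected_cost(p_fp=0.05, p_fn=0.05, cost_fp=100, cost_fn=100)

print(assist, replace)  # identical error rates, 100x the expected cost
```

The point is that the same error rates can be perfectly acceptable for a warning and completely unacceptable for an action.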

Driving-assistance also saved my life at least once. I was driving home on the highway from the airport after dropping off some family and was very tired. I have no memory of closing my eyes, but suddenly my lane-departure warning beeped at me and my eyes snapped open. I got the car back under control and the adrenaline kept me awake the rest of the drive home, but I could have easily hit another car or the wall without that warning.

The other driver-replacement technology I should have talked about is public transport. No matter how energy-efficient private cars get, buses and trains will always be more efficient and take up less road space. A human driver with driver-assistance features will be far safer than an unassisted human or a self-driving car.
