I've felt for quite a while that full self-driving (automated driving without human supervision through arbitrary road systems) is a deceptively hard problem. Yes, current systems can map a route and navigate a road mesh, follow lanes, and even avoid obstacles. With LIDAR and well-trained avoidance systems, something like Waymo can operate in constrained urban environments.

But as soon as the training wheels are off and the environment becomes unconstrained, the problem stops being just "can we design an agent with driving capabilities?" and becomes "can we make a vehicle that can predict agent-agent dynamics?" If we think about the full range of human road behaviors, we must consider adversarial attacks on the system such as:

  • Blocking it from entering a lane
  • Boxing it in and forcing it off the road into obstacles
  • Throwing paint/eggs/rocks at its vision systems
  • Using deceptive tactics (e.g. pretending to be a road worker) to vandalize it and/or steal from its cargo
  • Intentionally standing in its path to delay it
  • Making blind turns in front of it
  • Running into traffic

In addition to agent-agent problems, we must also consider road hazards:

  • Poorly maintained roads with damaging potholes
  • Sinkholes which have disabled the road
  • Eroded road sides with dangerous falls
  • Road debris from landslides
  • Road debris from other vehicles

In these situations, a perfectly rule-following automaton behaves well below human level in preventing delay or damage to itself. Do these scenarios require AGI for a Level 5 autonomous vehicle to reach human level? Are the benefits of above-average performance in normal traffic enough to offset the risk of subhuman performance at the extremes?

Comments

Generally, for every ethical problem of the type "is it better to do X or Y?", we can imagine a traffic situation where a barrier with two gates suddenly appears in front of a fast-moving car, one gate inscribed with "if you go through this gate, X will happen", the other inscribed with "if you go through this gate, Y will happen".

(Just kidding.)

I think this is a good analogy for those attempts to transfer the trolley problem to self-driving cars.

Practical problems still exist, however. I was talking with a woman who grew up in Karachi, and she said that the custom over there is that if there aren't many cars on the road, you are waiting at a red light, and a motorcycle tries to stop next to you, you automatically start driving forward. That's a strategy to avoid being mugged in Karachi.

A driverless car has some advantages in a situation like that, because it can simply ignore a gun if the motorcycle driver pulls one. On the other hand, there might be other strategies for stopping a driverless car in order to mug it, and you likely want to make it robust against those.

New crime strategies will probably appear soon after self-driving cars become common.

For example, a group of people may block the entire road, forcing the car to stop. A human might recognize this as a criminal attack and choose to just keep going, but a self-driving car will stop. (That is, a strategy that would be "too expensive" against humans may be profitable against self-driving cars.)

You could also block the road using dummies or cardboard silhouettes or whatever the car's algorithm would recognize as "a human". You could even place them strategically to make the car crash into a wall, giving the algorithm a dilemma between killing the one or two humans inside and killing dozens of "humans" on the road.

EDIT: Ah, I see this is the point the article makes.

  • Blocking it from entering a lane
  • Boxing it in and forcing it off the road into obstacles
  • Throwing paint/eggs/rocks at its vision systems
  • Using deceptive tactics (e.g. pretending to be a road worker) to vandalize it and/or steal from its cargo
  • Intentionally standing in its path to delay it
  • Making blind turns in front of it
  • Running into traffic

I'm not sure that existing NGIs (Natural General Intelligences, a.k.a. humans) handle any of these scenarios especially well. If someone stood in front of your car to delay you, what would you do?

In these situations, a perfectly rule-following automaton behaves well below human level in preventing delay or damage to itself.

The solution here isn't AGI; it's to notice that the rules people actually follow differ from the rules that are written, and to design the car's AI to do the human thing rather than the strictly legal thing.
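
As a toy sketch of what that might look like (all names, costs, and thresholds here are made-up illustrations, not any real AV planner), you could give "stay stopped" a cost that grows the longer the car is being deliberately blocked, so that a cautious, normally rule-bending maneuver eventually becomes the least-bad option:

```python
# Hypothetical illustration only: a planner that trades off written-rule
# compliance against the cost of remaining blocked indefinitely.

from dataclasses import dataclass

@dataclass
class Situation:
    seconds_blocked: float           # how long the path has been blocked
    blockers_look_deliberate: bool   # e.g. people standing still around the car
    safe_exit_available: bool        # a slow reverse/detour with no collision risk

def rule_violation_cost(action: str) -> float:
    """Fixed penalty for bending a written rule (slow reversing, leaving the lane, ...)."""
    return {"wait": 0.0, "creep_around": 5.0, "reverse_out": 8.0}[action]

def blocking_cost(s: Situation) -> float:
    """Cost of staying put grows with time, and faster if the blocking looks deliberate."""
    rate = 3.0 if s.blockers_look_deliberate else 0.5
    return rate * (s.seconds_blocked / 60.0)

def choose_action(s: Situation) -> str:
    candidates = ["wait"]
    if s.safe_exit_available:
        candidates += ["creep_around", "reverse_out"]
    # Waiting dominates at first, but a prolonged, apparently deliberate block
    # eventually tips the balance toward a normally forbidden maneuver.
    return min(candidates,
               key=lambda a: rule_violation_cost(a) + (blocking_cost(s) if a == "wait" else 0.0))

if __name__ == "__main__":
    print(choose_action(Situation(20, False, True)))   # -> "wait"
    print(choose_action(Situation(300, True, True)))   # -> "creep_around"
```

The point is only that "do the human thing" can be framed as an ordinary cost trade-off rather than as something requiring general intelligence.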

A car only needs to make the kinds of decisions that humans make in a few seconds, so it does not have to handle many of the tasks that a full AGI could.

On the other hand, as far as I remember, Elon Musk said that they needed to make their car intelligence a lot more general than they first expected. I think he said in the Optimus announcement that Optimus will basically run on the same AI as the car, so the car software has to be very general.