Honda ‘Traffic Jam Pilot’ Offers Standby Self-Driving By March 2021
Before March 2021, the Honda Legend will ship with "Traffic Jam Pilot" -- actual full self-driving
A traffic jam system is one that can handle everything in a traffic jam. Such products normally work only on freeways, and only at slow speeds. Most importantly, they allow you to take all attention off the road — you can do e-mail or read a book. When traffic opens up again, they alert you and you resume driving. While Honda doesn’t say, most plans for such systems don’t want you to fall asleep, but since people do that anyway, typically when traffic opens up the car will just slow to a stop — hardly desirable, but a safe thing to do at the end of a traffic jam.
We thus see the first and important leap in a commercial car towards a true robocar — a switch in who is responsible for driving, and thus in who will be liable for a mistake. Systems like GM Super Cruise and Tesla
Autopilot and FSD are known as ADAS, the DA standing for “Driver Assist.” The human driver is in control, watching, and responsible for what goes on. If there is a crash, it’s the fault of the human driver almost no matter what the system did. That’s why the famous Tesla Autopilot fatalities have not created legal blame for Tesla (not that this stops lawsuits).
While the technical differences between driver assist and robocar are so large that it was a mistake to imagine they were different “levels” of some base technology, the legal difference is just as striking. I believe Honda must, in selling this product, take legal liability for any mistake it makes. Would you turn on a system and then pick up a book, knowing that if it hit somebody, you would be liable? Would you get in an Uber if you were liable for any accident the driver had? Probably not.
This is fine. I am confident Honda has validated the safety of the product enough to know that incidents will be rare and, of course, always low speed with minimal injury. As such they can insure against their risk just fine, and build that cost into the cost of the system. In theory, a driver could save money on insurance if their own personal policy reduced premiums for the time they are not driving; no insurer does that at present, though it has been discussed.
The confusion over “Level 3”
When the National Highway Traffic Safety Administration first proposed “Levels” (later adopted by the Society of Automotive Engineers) it was very early in the history of robocars. Their set of levels was misguided, based mostly on what part of the driving task a human would perform — a bit like defining classes of motorcar based on the role of the horse. Their 2nd level described fancy “driver assist” and their 4th was a full robocar. Level 3 acted mostly like a full robocar, but there might come a time when, with at least 10 seconds of warning (i.e. not an emergency situation), a human would need to take over. That human could do other tasks, but needed to be on standby, and probably not asleep, since you can’t take over in 10 seconds if asleep.
A full robocar, though, doesn’t need a human on standby, and thus can even run empty. However, it is allowed to operate over only a subset of roads and conditions, and indeed all cars do — the later-added “Level 5” (which can handle all roads a human can) is more of a science fiction goal, and not worth doing from a commercial standpoint.
This means that if a car can safely come to a stop before it leaves its domain, it is a full robocar, because it does not need that standby human. From a technical standpoint, the 3rd level is really a special form of Level 4 — it’s a robocar that has to move in and out of its domain at speed. What this really means is the freeway. It’s not acceptable to have a human drive onto the freeway, stop, and turn on a driving system, and it’s also not workable for a vehicle approaching somewhere it can’t drive (like an off-ramp) to just stop on the freeway. You need a live human to take the car out of the driving domain while moving. That’s what Level 3 turned out to be.
Early experiments with this concept were done at Google Chauffeur (now Waymo) near the end of my tenure there. We found that the idea of doing a transition back to human driving at speed was risky. You couldn’t easily trust people to stay awake and otherwise always be on standby, ready to handle things. It isn’t just sleep. Somebody will be watching Darth Vader tell Luke who his father is (sorry, spoilers!) right when the car alerts them to take over, and not be quite ready to grab the controls and fully understand the situation.
This is not a problem in a traffic jam. A traffic jam is the easiest thing in the world to drive. It’s slow, and all you have to do is follow the leader. Even if the lane lines are gone or repainted, if there isn’t a leader to follow it’s not a traffic jam. If you are not sure of what to do, you can always just stop — that’s why it’s called stop-and-go. Stopping distances are very short, so you can maintain a safe distance and react quickly. It’s not too surprising Honda and the Japanese regulators think they can pull this off.
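The point about stopping distances can be made concrete with a little arithmetic. The sketch below compares total stopping distance (reaction plus braking) at jam speed versus freeway speed; the 1.5-second reaction time and 0.7g braking figure are illustrative assumptions, not numbers from Honda:

```python
# Rough stopping-distance comparison: traffic-jam speed vs. freeway speed.
# Assumed values (illustrative, not Honda's): 1.5 s reaction time,
# braking at 0.7 g (fairly hard but controllable).
G = 9.8            # gravity, m/s^2
REACTION_S = 1.5   # seconds before braking begins
DECEL = 0.7 * G    # braking deceleration, m/s^2

def stopping_distance_m(speed_mph: float) -> float:
    """Distance covered during reaction time plus braking distance, in meters."""
    v = speed_mph * 0.44704          # convert mph to m/s
    return v * REACTION_S + v ** 2 / (2 * DECEL)

for mph in (10, 65):
    print(f"{mph} mph -> {stopping_distance_m(mph):.1f} m to stop")
```

At 10 mph the car needs only around 8 meters to stop under these assumptions, versus over 100 meters at 65 mph — roughly an order of magnitude less margin required, which is why the jam case is so much easier to handle safely.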
Not everybody thinks that. Audi actually built a traffic jam pilot for their cars, put the hardware in to support it, and never turned the feature on in software. That’s because of the second part of the equation — the switch in legal liability. It also isn’t just legal responsibility; any crash is a risk in the world of public opinion and can tarnish a valuable brand, even if the cost of the repair is easily afforded. While pedestrians are forbidden on freeways, they do show up around the accidents that cause traffic jams, and hitting a pedestrian, even at 10mph, can cause something much worse than a bent bumper. Honda’s example may prompt Audi to finally deploy it.
Indeed, traffic jam driving is easy enough that it doesn’t need LIDAR and maps the way full speed driving does — you just follow the leader, and the leader and the cars around you are not very far away. Tesla could probably produce such a product as well and release it as a software update.
It is a useful, though not world-changing product. Stop and go traffic is everybody’s least favorite part of driving. Being able to catch up on email or watch a video during jams is very nice. In fact, Tesla Autopilot is valuable just for making traffic jam driving a bit less stressful, though you can’t get things done and must watch the road.
Making traffic jams worse
It is world-changing in one bad way. As such products proliferate, they make it more tolerable to get into a traffic jam. Right now, many people work hard to avoid driving at rush hour or during heavy traffic, and it’s important that they do. With a full traffic jam pilot, the time in jams is no longer wasted — you can get things done, even do office work (once we start working in offices again). That means there is no big reason to avoid jams, which could make them much worse. With gasoline cars, traffic jams also generate a great deal of pollution.
The good news is that the era of the traffic jam pilot should be short. It should not be too many years from that to the full robocar that can go faster. Yes, that makes the bad commute even more palatable, but it also offers a number of worthwhile tools for fixing traffic, like frictionless shared vans, and for abating the problem it causes.