Will You Let Your Car Drive Itself?

by E.V. Rhodes

"Wanna see something weird?" is not a question I usually ask passengers when I'm driving, but in February 2022, as we headed north for a ski weekend, I explained to my two companions how I'd previously noted some odd, even alarming, behavior when using the cruise control feature on my new Tesla Model Y, and I asked, did they want to see if it would do it again?

In my previous experience, the cruise control had properly maintained the car's speed for long stretches of driving.  It also accurately kept a set distance back from any vehicle ahead.  But several times it had suddenly and dramatically slowed, and I could not tell if it had detected a threat, for it gave no indication of why it was slamming on the brakes.  After a few of these episodes, I simply stopped using the feature.

This was not Tesla's much-touted "Full Self-Driving" software (at the time a $10,000 upgrade that was an easy "no thanks"), but simply their standard "Traffic-Aware Cruise Control," which they say "is designed to slow down Model Y as needed to maintain a selected time-based distance from the vehicle in front, up to the set speed... primarily intended for driving on dry, straight roads, such as highways."  I told my passengers that I wanted their consent before trying it again, and asked for their observations and insights should anything happen.  With their agreement, I engaged the cruise control, set the speed to the posted limit, and removed my foot from the accelerator.  The day was sunny and clear, the highway traffic was light, and the car continued carrying us towards the distant mountains.  I remained in the right-hand lane, alert and driving as usual.

But not 20 minutes later, it happened again!
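Before going further, it's worth sketching what the system is supposed to be doing.  The core time-gap idea behind any adaptive cruise control fits in a few lines of Python.  To be clear, this is only my own illustrative model - every name and number below is invented, and it has nothing to do with Tesla's actual software:

    # Illustrative sketch of time-gap cruise control logic.
    # All names and values are hypothetical; this is not Tesla's code.

    def target_speed(set_speed, gap_m, lead_speed, time_gap_s=2.0):
        """Return a target speed in meters per second.

        set_speed  -- driver-selected cruise speed (m/s)
        gap_m      -- measured distance to the vehicle ahead (m), or None
        lead_speed -- that vehicle's speed (m/s)
        time_gap_s -- selected following distance, in seconds
        """
        if gap_m is None:                 # nothing detected ahead
            return set_speed
        # The speed at which our current gap equals the selected time gap.
        gap_limited = gap_m / time_gap_s
        # Follow the slower of the lead car and the gap-limited speed,
        # but never exceed the driver's set speed or go below zero.
        return min(set_speed, max(0.0, min(lead_speed, gap_limited)))

    # Example: set to 29 m/s (about 65 mph), with a car 40 m ahead
    # doing 25 m/s and a two-second gap selected -> target is 20 m/s.
    print(target_speed(29.0, 40.0, 25.0))

The arithmetic is the easy part.  The hard part is the perception system that has to produce that "distance to the vehicle ahead" from cameras and sensors in the first place - and a false detection there is exactly what sudden, unexplained braking looks like from the driver's seat.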

I'm not someone who resists technological progress.  Years ago, I built a ZX81 computer kit and used it to control a simple robot arm.  In college, I worked at a Fortune 500 company writing "expert systems" software to optimize manufacturing processes, and later I helped to develop an autonomous robot which could locate and navigate to its recharging station, and stay "alive" for weeks at a time.  So when it comes to software for self-driving cars, I appreciate the challenges, and have great respect for the programmers and the results they've demonstrated.

In 2014, Tesla began offering limited self-driving capability on some of their vehicles.  With frequent, incremental software updates, development proceeded rapidly.  By January 2016, Tesla's CEO stated that their autonomous driving system was "probably better" than most human drivers.  Of course, "probably" is difficult to quantify.  The real world presents autonomous systems with incredible complexity: ever-changing weather, illumination, and surroundings, not to mention the unpredictable behaviors of people, animals, and other vehicles.  Self-driving cars must reliably and accurately generalize from highly variable data, and be prepared for an enormous number of situations that occur rarely, if ever.  The fact that self-driving cars can travel on public roads at all represents an astounding technical achievement.  But they are only safe until they are not.

Back on that February highway, this is what happened: without warning, the car slammed on its brakes and decelerated rapidly!  The driver behind us swerved to avoid a collision and sounded their horn.  Why had we slowed?  There were no obstacles or vehicles ahead of us.  The road was straight and clear!

I immediately stepped on the accelerator, disengaging the cruise control and resuming our speed.  End of experiment!  One of my passengers thought a section of the road may have been resurfaced, and perhaps looked slightly darker than the rest.  Did that register as a threat to the software?  (Unfortunately, I did not capture a dash cam recording of the event.)

I have never enjoyed being an unpaid software beta tester, and I'm even more reluctant to be a guinea pig when the bugs could result in injury or death.  I have not used the cruise control since that day, but I recently learned that our alarming event was not unique.  Many other Tesla cruise control users have also experienced sudden, inexplicable braking.  On May 4, 2022, the U.S. National Highway Traffic Safety Administration (NHTSA) issued a letter to Tesla stating "This office has received (758) seven hundred and fifty-eight reports of unexpected brake activation in certain (MY) 2021-2022 Model 3 and Y vehicles."  With that many people concerned enough to actually file a report with a government agency, how many others (like myself) had not reported their experiences?  Thousands more, I suspect.

Now don't get me wrong.  The Tesla Model Y is an excellent car with great performance, comfort, and tons of amazing features.  The very same NHTSA gives it five out of five stars for overall safety.  Rising gas prices make owning an electric car increasingly economical, and if you have solar panels you can easily produce all the fossil-free energy it needs right at home, making it good for you, your wallet, and the planet.  (End of EV plug.)

But if adding machine intelligence to a fairly standard feature like cruise control (first offered on a Chrysler production car in 1958) presents such mysterious and life-threatening difficulties, what about the much greater challenges facing fully self-driving cars?  They have already been involved in many reported injuries and deaths, from drivers stupidly defying important operating instructions to innocent individuals tragically struck on roadsides.  Self-driving vehicles pose risks not only to the drivers who knowingly accept them, but to potentially anyone in their presence: other drivers, passengers, pedestrians, motorcycle and bicycle riders, highway workers, police and emergency responders - in short, almost everyone.

It surprised me to learn that the USA currently has no federal laws governing self-driving cars.

In 2016, the NHTSA did publish the "Federal Automated Vehicles Policy," a set of guidelines which they say provides "A proactive approach to providing safety assurance and facilitating innovation."  This lets developers move quickly, with fewer legal obstacles; however, it can also be seen as placing profits before people, since nothing legally requires them to hold my safety as their highest concern.  How can we know, in an objective, fact-based way, when self-driving vehicles are actually able to increase the overall safety of our roadways?  Perhaps we must simply accept that automobiles are dangerous, and that some amount of injury and death must be expected.  We already tolerate that with human drivers - why not machines as well?

Well, because the goal of self-driving vehicles is to make our roads safer, not less safe.  Determining if and when they actually are safer will require an army of highly trained, detail-oriented investigators, drilling deep into vast amounts of real-world self-driving vehicle data, verifying their findings, and standing behind their conclusions.

Interestingly, we have just such an army: the worldwide auto insurance industry, valued at over US $700 billion in 2019.

Insurance actuaries assess the risks existing at the dynamic intersection of human behavior, government regulation, and automotive technology.  The field is overseen by government agencies which monitor insurance rates, coverage, and incentives.  At present (in California, anyway; your state or country may vary), pricing discounts are offered for good driving, good student grades, being away at school, and having multiple vehicles on the same policy.

Insurers currently offer no financial incentive for using self-driving vehicles.  Such adjustments can only come after a long period of rigorous statistical study objectively proves that such systems actually help reduce accidents and save lives.  Those studies would then support legislation authorizing insurance companies to offer such discounts.  This presents a chicken-or-egg safety conundrum: there can be no incentives for self-driving cars without extensive real-world studies, and there can be no extensive real-world studies without putting lots of self-driving cars on the roads before they are definitively proven safe.
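To make the statistical question concrete, the core comparison the actuaries would run is simple in outline: crashes per million miles with the system engaged versus without, with enough data behind it to rule out chance.  Here is a rough sketch in Python - the crash counts and mileages are entirely made up for illustration, and real actuarial work would go far beyond this crude test:

    # Hypothetical comparison of crash rates with and without an
    # autonomous system engaged.  All numbers are invented.
    import math

    def crash_rate(crashes, miles):
        """Crashes per million miles driven."""
        return crashes / miles * 1_000_000

    def difference_is_significant(c1, m1, c2, m2, z=1.96):
        """Rough two-sample Poisson rate comparison at ~95% confidence.

        Treats each crash count as Poisson and compares the rates with
        a normal approximation - a sketch, not actuarial practice.
        """
        rate1, rate2 = c1 / m1, c2 / m2
        std_err = math.sqrt(c1 / m1**2 + c2 / m2**2)
        return abs(rate1 - rate2) > z * std_err

    # Invented numbers: 300 crashes in 400 million miles with the
    # system engaged, versus 900 crashes in 600 million miles without.
    print(crash_rate(300, 400e6))    # 0.75 per million miles
    print(crash_rate(900, 600e6))    # 1.50 per million miles
    print(difference_is_significant(300, 400e6, 900, 600e6))  # True

Even then, a real study would have to control for the fact that the two fleets do not drive the same roads, weather, or drivers - which is exactly why trained investigators, and not just raw mileage counts, are needed.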

For an inexact comparison, look at the history of seat belts which, starting in the 1930s, were clearly shown to save lives, but which did not become mandatory equipment in U.S. cars until 1966.  Even then, their actual use was not enforced until much later; New York passed the first mandatory seat belt use law in 1984, and the other states followed suit by 1995 - except for New Hampshire which, at present, only requires seat belt use by persons under 18 years of age.  (As it says on their license plates: "Live Free or Die.")

A long road lies ahead before self-driving cars achieve widespread acceptance, and until then the path will be paved with varying degrees of danger and uncertainty.  Should major insurance companies someday offer me cash discounts for using autonomous systems, I will take that as a solid indicator that self-driving cars have finally delivered real safety and reliability improvements.

Until that time, I'm keeping my cruise control disengaged, and my foot on the pedals.

Special thanks to Alex K. for insights into how the insurance industry contends with new developments.
