Tesla, Autopilot, and the Future of Self-Driving Cars

Tesla Autopilot

From the first press release outlining Tesla's Autopilot technology, potential customers have wondered how the system works, what its limitations are, and whether it will be welcomed or shunned. Since Joshua Brown's fatal crash while using Autopilot in a Tesla Model S, these questions have grown larger and more pointed. Public opinion has undoubtedly shifted toward the negative. But should it have?

Despite Tesla's blog post on the event, in which the company laid out the context of the crash and compared its vehicles' safety record to overall crash statistics, it's certainly harder to put faith in the Autopilot system now than it was before the accident. Tesla is correct that Autopilot is statistically safer than human-controlled driving: worldwide, a fatal accident occurs roughly once every 60 million miles driven, whereas Autopilot had logged one fatality in more than 130 million miles. But that framing ignores a psychological element that leads people to undervalue Autopilot's record: we all believe we're safer when we're the ones in control.
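As a rough back-of-the-envelope check, here's what those two figures imply on a per-mile basis (a minimal sketch using only the numbers Tesla quoted; real exposure and crash data are far more nuanced than a two-number comparison):

```python
# Back-of-the-envelope comparison of the fatality rates quoted above.
# Figures are the ones cited in Tesla's blog post: roughly one fatality
# per 60 million miles driven worldwide, versus one fatality in roughly
# 130 million miles driven with Autopilot engaged.

human_rate = 1 / 60_000_000       # fatalities per mile, all driving worldwide
autopilot_rate = 1 / 130_000_000  # fatalities per mile, Autopilot-assisted

print(f"Human driving: {human_rate:.2e} fatalities per mile")
print(f"Autopilot:     {autopilot_rate:.2e} fatalities per mile")
print(f"Ratio:         {human_rate / autopilot_rate:.2f}x higher for human driving")
```

By this arithmetic, the worldwide fatality rate is a little over twice Autopilot's, though a single fatal event is a very small sample from which to estimate a rate.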

Now here comes the interesting part: if handled correctly, the fatal Autopilot crash can help Tesla clear this hurdle. Autonomous driving is coming; at this point it is an inevitability, because major automakers have simply invested too much money in researching and developing self-driving cars to abandon the effort. Autopilot, however, was never intended for fully autonomous operation, and in responding to the fatal accident, Tesla can make that clear. Autopilot is not meant to take vehicle operation away from the driver; it's designed to augment the driving experience.

Tesla's co-founder and CEO, Elon Musk, has explained since the crash that Autopilot is considered a beta technology, and that this designation, intended to keep drivers aware of the system's shortcomings, won't be lifted until Autopilot has logged 1 billion miles of driving.

In essence, an alpha test is an internal trial in which mistakes and errors are expected, while a beta release puts software in real users' hands with the understanding that bugs may still surface, all with the end goal of production software that can be trusted not to fail. Even though Tesla labeled Autopilot "beta," many drivers treated it as finished software, a system expected not to fail. Partly because of this (and partly because early adopters love to view new products as the be-all and end-all of their field), drivers often used the system in ways it wasn't intended to be used.

A fatal crash is never a frivolous issue, which is precisely why Tesla was careful to market Autopilot as a beta technology. The accident affords Tesla an opportunity to improve both the product and the messaging around it, and it serves as a reminder to shoppers and Tesla owners of the current limitations of self-driving cars.

Autonomous vehicles are still on the way, and while the fatal Autopilot crash might look to some like a harbinger of this technology's failure, it will more likely come to represent a huge step forward in the public's understanding of the technology and its appropriate uses. Both BMW and Volvo have announced plans to bring fully autonomous vehicles to market by 2021, with BMW partnering with Intel and Mobileye, a tech company focused on vision-based driver-assistance systems similar to the technology that supports Tesla's Autopilot. The Autopilot crash marked a significant step backward for the system's public relations. Now, however, it is poised to take an even greater leap forward.

How do you think Tesla will rebound from the fatal Autopilot crash?

-Matt Smith
