What Smart Tesla Fans Get Wrong About FSD

The only folks more prone to rants than myself are Tesla fans talking about neural nets. So I thought, why not combine the power of the rants? I'll rant about Tesla fans ranting about the power of neural nets!
To get myself good and wound up, I listened to a recent Tesla Daily Podcast about Full Self Driving (FSD) and Neural Nets. Go ahead, listen to it.  I'll wait. It's actually not that bad.
"Jimmy D" is the guest and he talks with some authority about neural nets. He doesn't get anything terribly wrong on the basic tech. Everything he says about how Tesla plans to exploit it seemed reasonable to me. If you are a Tesla skeptic, it's a good way to understand Tesla bull thinking.

Of course, like everyone else who's familiar with the AV space and not named Elon, I think Jimmy is 100% wrong about Tesla's prospects. He's a typical Elon fanboy: smart, technically savvy, and without any important domain knowledge. He's drawn in by Elon's surface logic and technical talk, but isn't expert enough to know that Tesla's approach to AV is wrong.

In the podcast, Jimmy gets five things fundamentally wrong:
  1. Neural nets aren't good enough. 99% accuracy simply isn't good enough for AV.
  2. Lidar isn't used just because Waymo started long ago.
  3. Tesla "training data" is virtually worthless.
  4. Partial autonomy isn't safer.
  5. Teslas aren't safer.
  6. Llamas don't know how to count.
But before we dig into the details, let's talk about... 

The Cult of Elon

No wait. Before we talk about Elon, let's talk about why I'm

Bullish on Autonomy

Don't take anything I say here to mean I don't believe in autonomous vehicles. I do. They are coming, and when they are fully mature, they will upend society as much as the Model T did.

I also happen to think Waymo is way ahead of the competition (Cruise, Uber, Lyft, etc.), perhaps by 2-3 years or more. I might be a Google partisan as a former employee, but if you look at what each company has actually demonstrated, Waymo is clearly in front. As we'll discuss below, the hardest part is getting high reliability.

Virtually anyone can throw together a demo which mostly works. But those demos require a human behind the wheel in case of failure. Getting rid of the human is the hardest part. Really it's the only hard part. Only Waymo has demonstrated this ability in real-world conditions.

In short: I believe in AVs, I think they are coming perhaps sooner than people expect, and I think Elon is a liar and none of the Teslas on the road today will ever be capable of FSD.

Now let's discuss:

The Cult of Elon

It's worth reflecting on why so many are fooled by Elon. Jimmy seems like a smart, competent guy. He clearly understands neural nets and the recent history of breakthroughs. So why is Jimmy so misinformed about the nature of the AV problem?

The answer is Elon Musk. 

Elon's true gift is his ability to exploit smart, trusting engineers. He makes them think he has all the answers, when in fact almost everything he says has been debunked by experts in the field. But his special blend of smarts, technical knowledge, charisma and chutzpah cuts through many engineers like a hot knife through butter.

In the engineering world, there's a premium placed on honesty. We expect other engineers to tell us the truth, or at least what they honestly believe. When we encounter someone who knows the jargon and seems informed, we naturally assume what they say is true. This generally works fairly well. Engineering truths can often be quickly tested- an engineer who goes around making outlandish claims will shortly be outed.

Elon has taken this norm ("engineers speak honestly") and exploited it to perfection. Elon's smart enough to know the jargon. If you yourself are not an expert, Elon will speak competently enough that you assume he's a good engineer. From there, Elon tells lies, almost always about future products. He tricks you into thinking he's a smart engineer having an honest conversation, when in fact he's the slick marketing guy selling you vaporware. Elon's charismatic and smart enough that he gets away with it. He's helped by the timescale: one to two years later, everyone's moved on and forgotten his promises.

This is what Elon's done with full self driving, and Jimmy has bought into most of Musk's lies.  This is why there's such a dichotomy of opinion about Tesla's AV efforts.  Fans think Tesla is doing great, AV experts think Tesla is a joke.

So why do most experts think Tesla's approach won't work?

Neural Nets Aren't Good Enough

If any fanboys read this, let me try to defuse your anger by saying neural nets are Great! I work with them all the time. The technology has made AMAZING strides. We can do things now that seemed like science fiction 6 years ago. Image recognition in particular has undergone a revolution.

NNs are incredibly useful to autonomous vehicles, and they have grown much more accurate in recent years. However, they are not nearly good enough for FSD. This is the core problem that Jimmy glosses over. It's why AV experts think Tesla is a joke.

NNs can be incredible, and great for a ton of applications, while still not being nearly good enough for safety-related driving decisions. Software that's right 99% of the time isn't good enough when deciding if a biker is in your lane.

NNs and Image Recognition

Advances in Deep Learning have made various image processing and recognition tasks possible.  Smartphones now do extensive computational photography using neural nets. Nest cams use neural nets to identify people, as do Facebook and other social networks.

These tools generally work well. However, they don't work 100% of the time. In fact, they are not that close to 100%. If Portrait mode fails once every fifty photos, who cares? If Facebook suggests the wrong person in a photo, does it really matter? These tools are commercially viable because they mostly work, and 98% accuracy is far more than needed for most use cases.

98% isn't nearly enough for autonomous vehicles.

In fact, 99.9% isn't good enough. I'd hazard that AV safety engineers probably want five or more 9s of reliability: 99.999%. That might seem excessive, but even that allows a 1-in-100,000 chance of misidentifying a pedestrian in your path. Given what we know about existing solutions and the difficulty of the problem, it's unlikely Tesla's NNs even reach 99.5% accuracy on many safety-critical classification tasks, and 99.5% would be a major achievement. To use a phrase familiar to Tesla fans, Tesla is orders of magnitude away from a viable FSD solution. They need their system to be at least 100x more accurate.
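
To make the gap concrete, here's a minimal back-of-envelope sketch. The decision rate and accuracy figures are my own illustrative assumptions (one safety-relevant classification per second, independent errors), not numbers from Tesla or the podcast:

```python
# Back-of-envelope sketch with assumed numbers: how per-decision accuracy
# compounds over a single hour of driving. Assumes one safety-relevant
# classification per second and independent errors -- both simplifications.

CALLS_PER_HOUR = 3600  # one safety-relevant decision per second (assumed)

for accuracy in (0.98, 0.995, 0.999, 0.99999):
    p_miss_in_an_hour = 1.0 - accuracy ** CALLS_PER_HOUR
    print(f"per-call accuracy {accuracy}: "
          f"P(at least one miss per hour) ~= {p_miss_in_an_hour:.3f}")
```

In this toy model, anything below three 9s misses essentially every hour, and even five 9s still misses something every few dozen hours of driving. The point is the exponent, not the exact numbers.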

This is why every other company pursuing AVs is using lidar. Lidar is extremely reliable and accurate. If your lidar sensor says there's nothing in your path, then there's nothing in your path, especially when you have two independent sensors looking in the direction of travel.  That's what's needed to get to 99.999% reliability. For all the talk about NN advances, the fact of the matter is that error rates for critical decisions are still way too high. 
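
The same arithmetic shows why redundancy matters. The miss rates below are made up purely for illustration: two independent sensors covering the direction of travel only fail together when both fail on the same decision.

```python
# Illustration only: the failure rates below are assumptions, not measured
# sensor specs. Independent sensors fail together at roughly the product of
# their individual miss rates.

camera_miss = 5e-3   # assumed per-decision miss rate for a vision-only stack
lidar_miss = 1e-4    # assumed per-decision miss rate for one lidar unit

both_lidars_miss = lidar_miss * lidar_miss   # two independent units: ~1e-8

print(f"camera alone:           {camera_miss:.0e}")
print(f"single lidar:           {lidar_miss:.0e}")
print(f"two independent lidars: {both_lidars_miss:.0e}")
```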

No one in the field has any idea how to lower those error rates another 10x, let alone 100x. It's going to take a major breakthrough (perhaps more than one) to get vision systems reliable enough to depend on for driving.

So next time someone starts talking about using neural nets for FSD, ask if they think those systems can get to 99.999% accuracy. Ask if anyone anywhere has ever demonstrated a vision system that accurate on a real-world task.

Lidar Isn't Ancient

One way Tesla fans explain away the ubiquity of lidar in AV, and its absence in Teslas, is by saying that lidar is old: it was needed before the Deep Learning revolution, and since Waymo's efforts started before DL, they built their architecture around lidar.

This is sort of true. The original DARPA program was before the recent revolution in neural nets. And Google's program started before this as well. However, many other programs were started well after Google. Cruise started in 2013. Anthony Levandowski formed Otto in 2016 (2 years after the first Inception paper). In fact, Google & Uber fought a long legal battle over lidar. Seems weird that these two would fight over something supposedly made obsolete by tech developments 2 years beforehand.

There have been nearly a dozen significant new entrants to the AV space over the past 4 years.  Every single one of them is using lidar. That alone should tell you how the experts in the space feel about Elon's approach.

Now about that training data...

Stop It With the Fleet Training Already

Tesla fans incessantly talk about all the data Tesla gathers from its fleet. While I don't doubt that Tesla gets some valuable mapping data (though probably 100-1000x less than Google gets from Maps/Waze users), the visual data the fleet gathers is virtually worthless. Given its size and lack of utility, it's not even clear how much of it is actually being collected.

When Jimmy talks about the fleet data, he's aware that the raw data isn't that useful. In order to be used, the data must be labelled. Humans must curate the videos, labelling roads, cars, bikes, pedestrians, etc... It turns out that labelling is way more expensive than just collecting data. 

Think about it. Anyone can put a few cameras on a car and drive thousands of miles. It's not expensive and it doesn't take that long. What's hard is having a human operator go through every second and frame of imagery and accurately label the video data. This is painstaking work, even with the best assistance software.

Tesla's fleet advantage is no advantage at all. You can easily collect road imagery for less than $1/mile. Getting that mile accurately labelled by human operators probably costs 100x that. So congrats to Tesla, they save 1% on training data costs. Of course, it's worse than that. If you pay to collect data, you control exactly where the car drives, under what conditions, with what sensors. Tesla's data is a random hodgepodge of wherever their customers happen to drive. Even before they curate, they have to pay someone to sort through this data and figure out which bits to use. 
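
A rough cost sketch makes the point. The $1/mile and 100x figures are the rough estimates above; the curation cost for uncontrolled fleet footage is my added assumption:

```python
# Back-of-envelope training-data economics. The collection and labelling
# figures come from the rough estimates above; the curation cost for
# uncontrolled fleet footage is an added assumption.

MILES = 100_000
COLLECTION_PER_MILE = 1.0    # "< $1/mile" to drive your own instrumented car
LABELLING_PER_MILE = 100.0   # "~100x" the collection cost
CURATION_PER_MILE = 10.0     # assumed: sorting random customer footage for usable bits

purpose_built = MILES * (COLLECTION_PER_MILE + LABELLING_PER_MILE)
fleet_sourced = MILES * (CURATION_PER_MILE + LABELLING_PER_MILE)

print(f"collect it yourself, then label:           ${purpose_built:,.0f}")
print(f"'free' fleet data, curated then labelled:  ${fleet_sourced:,.0f}")
# The 'free' collection saves ~1% of the total, and curation can eat that and more.
```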

My guess is, between the cost of curating, the low quality of random drives, and the costs of uploading terabytes of data from end users, Tesla probably doesn't even upload much video data at all. I'm guessing almost all their real-life video training data is based on explicit driving they've done rather than customer drives.

Now that we've explained the primary reasons Tesla's approach won't work, let's take a quick break for an aside about Elon.

Is Elon Smarter or Just Willfully Ignorant?

When Elon makes a grand claim, it's often useful to ask: why isn't anyone else already doing this? Sometimes people have great, new ideas. Usually when that happens, there's a big reveal and everyone else is like "oh yeah, that's a great idea". Mostly when Elon does this, experts are like "that makes no sense". See for instance: Solar Roof, FSD, Semi Truck, $35K M3, 10K/wk, "buying a trucking company", the torpedo rescue. None of that made any sense to people familiar with the space, and as it's turned out, all of these were lies, at least as far as we can know now.

So let's talk about the fleet training. If fleet data were so valuable, why isn't anyone else doing this? GM spent $1B on Cruise, Waymo has had partnerships with various OEMs for years, and all the other auto companies are clearly pursuing AVs aggressively. So why has no one else put cameras on a large fleet of vehicles?

The answer is "because it's not that useful". And this is the classic Elon lie pattern. To the non-expert, it sounds great. Crowd-source the data! It's genius. Except all the experts know it's not the video that's expensive, it's the labelling. So Elon sounds like a genius to lay people, and an idiot to people with domain knowledge.

And now back to our regularly scheduled Elon-bashing.

About That Partial Autonomy

It's true that Waymo decided to move directly to FSD, skipping partial autonomy. Jimmy, however, is misinformed about why Waymo made this decision.

Unlike Elon, Waymo uses data and testing to make decisions. Five years ago, they let employees try out the tech for a few weeks. What Waymo found was extremely troubling. At first, users were nervous and didn't trust the car.  However, that quickly reversed and they came to trust the car too much. It's very hard for humans to pay attention when not making decisions. Waymo found that users simply were not able to take over control of the car in a reliable fashion. When partial autonomy failed, it became dangerous because the human was not prepared to take over control.

Tesla has learned this the hard way. Rather than testing, they just released the software on their customers. After a series of fatal and near-fatal accidents involving AutoPilot, they've made various efforts to ensure the driver stays engaged.  The cars are not properly equipped to do this, so it's both annoying and ineffective.

This is why most of the energy in the space is directed towards FSD. It's widely believed now that partial autonomy is fool's gold. The repeated failures of AutoPilot have only underlined this. If the system is relying on a human for its ultimate safety, then it is not a safe system. This is another example of Elon fooling his fans. He sounds like a genius, but everyone with knowledge of the space knows what he's doing is wrong and dangerous.

About Those Safety Stats

Jimmy mentions that AutoPilot is already saving lives. This is clear nonsense. He quotes Tesla's stats, which he admits are not great, but as an Elon fan he doesn't fully grasp how ridiculous the comparisons Elon makes are.

Comparing safety data for new luxury cars with the whole auto fleet is absurd. New cars are safer than old cars (the average American car is 11 years old), and luxury car buyers are older, more cautious drivers. We can see how absurd Tesla's comparison is if we try to reconstruct similar stats for other luxury brands (BMW, Mercedes). Fortunately, someone has already done this, so I don't have to do any actual work.

The data shows that Teslas are probably 2-3x more dangerous than other new luxury cars. Some of that is due to Teslas being (very) fast cars, but most is due to AutoPilot being dangerous, since many of the reported deaths can be attributed to AP. Modern luxury cars are very safe, so even the small number of known AP-related deaths is significant for a luxury car fleet.
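
For what it's worth, the normalization that comparison requires looks something like the sketch below. Every number in it is a made-up placeholder chosen only to show the arithmetic, not a real statistic:

```python
# All inputs below are hypothetical placeholders, not real safety data.
# The point is the method: compare fatality *rates* per mile within the
# same vehicle class, not raw counts against the entire US fleet.

def per_billion_miles(fatalities: int, fleet_miles: float) -> float:
    """Fatality rate normalized to one billion vehicle miles."""
    return fatalities / (fleet_miles / 1e9)

tesla_rate = per_billion_miles(fatalities=30, fleet_miles=10e9)  # hypothetical
peer_rate = per_billion_miles(fatalities=10, fleet_miles=10e9)   # hypothetical

print(f"hypothetical Tesla rate: {tesla_rate:.1f} per billion miles")
print(f"hypothetical peer rate:  {peer_rate:.1f} per billion miles")
print(f"relative risk:           {tesla_rate / peer_rate:.1f}x")
```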

In short, Tesla isn't saving any lives, Jimmy. If anything, it's risking lives in ways that other, more sober companies have intentionally avoided.

To Sum Up

It's been fun bashing Elon and Tesla. In a few years, we'll all look back on this and think, "Wow, how did Elon fool so many people for so long?" He'll go down as one of the great con men of the 21st century for sure.

If you've just skipped over the whole article looking for a conclusion, damn you. I spent a long time poring over my prose, even if it doesn't show. I even proofread it, for god's sake. Can't you just go back and read it? No? OK, here's a quick summary of what we've learned:
  1. Jimmy is a nice guy and pretty knowledgeable about neural networks.
  2. Elon is a lying sociopath.
  3. Elon fools guys like Jimmy with tech talk that obscures the real challenges.
  4. Neural nets have made great strides.
  5. Neural nets are not nearly accurate enough for safety decisions.
  6. Neural nets need at least 100-1000x more accuracy before they alone can be used.
  7. Every other player in the AV space uses lidar, regardless of when they started.
  8. Partial Autonomy (aka AutoPilot) has been rejected by most others in the space because it's not safe. Either the car can reliably drive itself or it cannot. Depending on a human for backup is not safe.
  9. The data from Tesla's fleet is not valuable. Labelled data is valuable. Random videos of driving are not.
  10. Tesla safety stats are very misleading. Best guess is Teslas are 2-3x less safe than other luxury cars.
  11. None of the Tesla vehicles on the road will ever have FSD.


Comments

Sierra Wells said…
Excellent piece! I'm always amazed that people argue w/me when I try to tell them this, especially about LIDAR. Well, that & Muskrat being the PT Barnum of our generation. I live 18 miles from the NV Gigafactory & am livid about the whole project for so many reasons. Thanks for giving me something else Elon to get worked up about.

I have one small request, though. Please proofread again: It's Teslas not Tesla's/Llamas not Llama's/NNs not NN's. Please don't use apostrophe S to indicate plurals. I get more worked up about that than I do Elon! 😃
Jon Wingfield said…
TL;DR that the author *should* have written:

1. Full disclosure: I (Josh) am short Tesla and almost all of my posts on Quora and this site reflect my financial position.
2. I'm bullish on autonomy because I am an ex-googler and CTO of a company that is built on AI.
3. I'm not an expert on vehicle-based autonomy, but I know neural nets and I read some articles on the internet so I can make predictions that are better than people who work in that industry.
4. I like to quote other sources who are also short Tesla (even hedge fund owners!), while ignoring academic sources who see Tesla as just another competitor approaching the problem from a different viewpoint. (Lex Fridman, et.al.).
Unknown said…
AV tech does not have to achieve a specific set of 9s, it need only be safer than a human...just ask any P&C underwriting expert. the inflection point will come when AVs are proven to be safer than a car with a human behind the wheel. I've saved this link and will follow-up in 10 years to see how accurate your predictions are....
