Elon Musk has long promised a revolution in autonomous driving through Tesla’s Full Self-Driving (FSD) system. Since its public beta launch in October 2020, FSD has remained a bold centerpiece of Tesla’s roadmap and a major selling point for its vehicles. The system is touted as the future of driving: a hands-free, safer, smarter replacement for human control behind the wheel.
Tesla owners pay up to $12,000 upfront or $199 monthly for this premium feature. The marketing machine behind FSD is relentless, but the cold, hard data — from regulators, safety reports, lawsuits, and field performance — paints a far less flattering picture.
Tesla’s FSD has not only fallen dramatically short of its promises but also continues to operate in a legal and technological gray area, putting drivers and pedestrians at serious risk.
As competitors like Waymo and Cruise roll out true Level 4 autonomous fleets in tightly controlled environments, Tesla's system remains stuck in Level 2 autonomy, still requiring full human supervision. What’s worse, Tesla markets it in a way that often misleads consumers, causing them to overestimate its capabilities, a pattern regulators and watchdogs are now beginning to challenge.
At its core, FSD is little more than an advanced driver-assistance system (ADAS), rebranded with exaggerated ambition. Despite its name, Full Self-Driving is not "full" in any meaningful sense. Under SAE International's levels of driving automation, Level 2 requires the driver to remain attentive at all times, hands on the wheel and eyes on the road, ready to take over without warning.
Tesla’s system fits squarely in this category. But its name and Musk’s repeated predictions that Teslas would be capable of full autonomy "next year" — dating back to 2016 — have grossly inflated public expectations.
The distinction between what Tesla claims and what the data supports couldn’t be clearer. Tesla vehicles equipped with FSD have been involved in numerous crashes, some of them fatal. According to the National Highway Traffic Safety Administration (NHTSA), as of late 2023, Teslas accounted for the vast majority of reported crashes involving driver-assistance systems: nearly 74% of all such incidents reported since 2021.
Although Tesla has more vehicles on the road using these systems than most manufacturers, the discrepancy is still alarming. Many of these accidents appear to stem from drivers misunderstanding the capabilities of FSD, placing too much trust in it and believing the vehicle can operate safely without their oversight.
The NHTSA has opened multiple investigations into FSD and Autopilot, especially after a spate of crashes involving emergency vehicles. In one particularly damning case, a Tesla on Autopilot slammed into a parked fire truck in California, killing the driver. Other incidents involve Teslas failing to recognize road barriers, construction zones, or pedestrians — all basic challenges that a system claiming to be “full self-driving” should easily handle.
In sharp contrast, Waymo, a subsidiary of Alphabet Inc., has been quietly and methodically deploying actual driverless cars, without a human behind the wheel, in Phoenix, San Francisco, and Los Angeles.
These cars operate at Level 4 autonomy, meaning they require no human intervention within a predefined operational design domain.
Waymo's data is publicly shared through its Voluntary Safety Self-Assessment, and regulators are closely involved in every phase of deployment. According to Waymo’s 2023 safety report, its autonomous fleet had driven more than 10 million miles on public roads and an additional 20 billion miles in simulation, with an exceptionally low crash rate; most incidents were minor fender benders caused by other human drivers.
Cruise, owned by General Motors, is another major player that, despite its own setbacks and recalls, has achieved far more tangible progress toward safe, genuinely driverless operation than Tesla.
Cruise cars operate without drivers in specific city zones, with rigorous monitoring and testing data shared regularly with the California DMV and other authorities.
Tesla, on the other hand, does not submit safety assessments to NHTSA’s AV Test Initiative, a voluntary program meant to increase transparency around autonomous vehicle performance.
Tesla has offered no official public explanation for why it does not participate, but based on the company’s communications and actions, several likely reasons can be inferred.
Firstly, Tesla now argues that FSD and Autopilot are not autonomous systems, but ADAS. Because NHTSA’s AV Test Initiative is primarily focused on Level 3 and above autonomy (which do not require constant human supervision), Tesla maintains that FSD, which is officially Level 2, doesn’t qualify for inclusion.
Tesla markets FSD in a way that implies full autonomy even though it is functionally Level 2, and this disconnect between marketing and classification allows the company to avoid regulatory transparency while still reaping the benefits of the hype.
Secondly, Tesla has a history of resisting government oversight, particularly when it comes to data sharing and safety transparency. Participating in the AV Test Initiative would require Tesla to submit voluntary safety self-assessments; disclose testing methodology, disengagement rates, crash data, and more; and align with practices that companies like Waymo and Cruise already follow. Tesla likely sees this as unnecessary exposure to public and regulatory scrutiny.
Thirdly, Tesla tends to keep its development data and methodologies confidential, possibly to maintain a competitive edge or avoid comparisons with other AV companies that publish detailed safety metrics. Unlike Waymo’s extensive safety reports and Cruise’s DMV filings, Tesla reveals very little.
Elon Musk has repeatedly criticized regulatory bodies, including the NHTSA and SEC, and prefers a minimal-regulation approach to innovation. Not participating in voluntary transparency programs aligns with Musk’s libertarian-leaning philosophy that the market, not regulators, should decide whether a technology is safe or acceptable.
In other words, Tesla plays by its own rules, largely avoiding the scrutiny other self-driving companies accept as the price of doing business safely.
Much of Tesla’s approach relies on collecting data from the real world by pushing updates to customers who act as de facto beta testers. This contrasts sharply with the practices of Waymo and Cruise, which use safety drivers during development and limit their driverless operations to well-mapped, geo-fenced areas. Tesla’s decision to release unproven code into uncontrolled environments has drawn alarm from regulators and researchers alike.
Adding to the controversy is the mounting number of lawsuits involving Tesla’s self-driving features.
In one high-profile case, the family of a man killed in a 2019 Tesla crash claims that the company falsely advertised FSD capabilities, leading to the driver’s overreliance on the system. Internal Tesla emails and leaked documents suggest engineers were aware of system limitations yet continued marketing FSD aggressively. Whistleblowers have alleged that Tesla underreports bugs and brushes off internal concerns about the software’s readiness.
Perhaps most damning is the disconnect between Musk’s public optimism and internal engineering realities. While Musk regularly touts “mind-blowing” advancements and claims that Tesla’s vehicles are on the verge of full autonomy, engineers behind the scenes reportedly struggle with core challenges such as lane changes, left turns, and detecting non-vehicle objects.
According to a 2023 report by The Washington Post, Tesla’s Autopilot team saw high turnover, with many senior staff members expressing frustration over Musk’s timelines and refusal to consider adding LiDAR or radar, technologies that competitors like Waymo use to enhance accuracy and safety.
Tesla’s camera-only “vision” system continues to show major limitations in poor weather and lighting conditions.
The lack of transparency only compounds these issues. Unlike other AV developers, Tesla provides no clear documentation about its testing protocols, disengagement rates, or system limitations. Its software updates roll out without third-party auditing or regulatory vetting, which means users often receive potentially dangerous code with no real oversight.
In February 2023, Tesla initiated a recall affecting over 360,000 vehicles equipped with FSD Beta, following an NHTSA probe that found the software could cause vehicles to disobey traffic laws in certain conditions. The recall, however, was executed through an over-the-air update, allowing Tesla to avoid the traditional scrutiny that would follow a mechanical or hardware-related recall.
Critics argue that Tesla’s marketing strategy for FSD borders on fraud.
In California, the Department of Motor Vehicles accused Tesla of false advertising, citing the misleading names “Autopilot” and “Full Self-Driving.” The case is ongoing, but it reflects a growing awareness of the gap between branding and capability. Meanwhile, Germany banned Tesla from using the term “Autopilot” in its advertising altogether, citing public confusion.
In Asia, China’s Ministry of Industry and Information Technology (MIIT) recently banned the use of terms like “smart driving” and “autonomous driving” in vehicle advertisements, including references to systems as “Full Self-Driving” (FSD).
In April 2025, the MIIT held a meeting with nearly 60 automakers and explicitly prohibited such marketing terms to curb misleading claims, especially after a fatal crash involving Xiaomi’s ADAS system.
In China, Tesla responded by dropping “FSD” (Full Self-Driving) from its software label, leaving the more accurate “智能辅助驾驶” (“Intelligent Assisted Driving”), in line with these regulatory changes.
These moves are driven by China’s desire to ensure that marketing reflects actual capabilities and to prioritize public safety and consumer clarity over hype or inflated claims.
Even loyal Tesla customers have started to push back. Forums once filled with FSD fanfare now include threads questioning the value of the $12,000 upgrade, with many noting the system’s unpredictability in urban environments.
While FSD may impress in staged YouTube demos, users report inconsistent performance in the wild — hesitating at intersections, veering unpredictably, or braking unnecessarily. In one viral video, a Tesla using FSD plows through a child-sized mannequin during a test conducted by safety advocates. Tesla dismissed the test as a stunt, but offered no evidence to counter the outcome.
Despite all this, Musk continues to promise robotaxi fleets and an autonomous future. At Tesla’s AI Day events, he unveiled new chips, Dojo supercomputers, and computer vision breakthroughs, presenting a dazzling future just around the corner. But each passing year brings missed deadlines, more lawsuits, and more investigations, all while other companies quietly make incremental, safer progress.
If Tesla were a startup promising full autonomy in a few years, the overpromising might be understandable. But this is a publicly traded company with millions of vehicles on the road, many of them in the hands of drivers who mistakenly believe they’ve bought into a future that doesn’t yet exist. That’s not innovation; that’s a bait-and-switch.
The irony is that Tesla could have achieved something remarkable by developing a robust Level 2 or Level 3 system with clearer communication and more responsible deployment. Instead, its decision to brand its ADAS product as “Full Self-Driving” without regulatory approval or full technological readiness has created a dangerous illusion of safety. And illusions, especially on public roads, kill people.
Until Tesla aligns its branding with reality, submits to the same regulatory scrutiny as its peers, and stops using customers as unpaid guinea pigs, the Full Self-Driving feature remains not a marvel of modern engineering, but a masterclass in hype over substance. The data proves it: Tesla's FSD is not just overhyped; it’s a reckless gamble disguised as innovation.
And that’s what makes it a scam.