What makes Tesla’s automation feel different
Vision-first, end-to-end learning. Starting with v12, Tesla shifted toward larger neural networks trained end to end on driving video, mapping camera input directly to control outputs: fewer hand-coded rules, more learned behavior.
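To make "end to end" concrete, here is a minimal sketch of a video-to-controls network in PyTorch. Everything about it (layer sizes, the two-value control output, even the choice of framework) is my own illustrative assumption; Tesla has not published its production architecture.

```python
import torch
import torch.nn as nn

class EndToEndDrivingNet(nn.Module):
    """Toy video-to-controls network: frames in, steering/accel out.
    Purely illustrative; the production stack is far larger and its
    details are not public."""
    def __init__(self, frames: int = 8):
        super().__init__()
        # Treat a short clip as stacked channels (frames x 3 RGB planes).
        self.encoder = nn.Sequential(
            nn.Conv2d(frames * 3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # No hand-coded lane or sign rules: the head maps learned
        # features straight to control targets imitated from human
        # driving video.
        self.head = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 2),  # [steering_angle, acceleration]
        )

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(clip))

net = EndToEndDrivingNet()
clip = torch.randn(1, 8 * 3, 96, 96)  # batch of one 8-frame RGB clip
controls = net(clip)                  # tensor of shape (1, 2)
```

The point of the sketch is the shape of the problem, not the layers: pixels go in, control values come out, and everything in between is learned rather than written by hand.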
Fleet learning + OTA updates. Your car keeps improving as updates refine perception, planning, and drive profiles.
A copilot, not a pilot. Tesla’s docs are explicit: today’s features require active driver supervision and don’t make the vehicle autonomous.
Safety context
Tesla’s Vehicle Safety Report for Q2 2025 cites one crash per 6.69 million miles with Autopilot engaged, versus roughly one per 702,000 miles for the U.S. average. I treat those figures as directional rather than conclusive, and I weigh them against my own experience: smoother merges, better gap selection, and staying ready to intervene for construction, odd signage, or chaotic drivers.
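For scale, the headline ratio works out like this. Both inputs come from Tesla's own report, which is part of why I only treat the result as directional:

```python
autopilot_miles_per_crash = 6_690_000  # Tesla Vehicle Safety Report, Q2 2025
us_avg_miles_per_crash = 702_000       # U.S. average cited in the same report

ratio = autopilot_miles_per_crash / us_avg_miles_per_crash
print(f"~{ratio:.1f}x more miles per crash with Autopilot engaged")  # ~9.5x
```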
Why this is still “weak AI”
Weak AI = expert at a scoped task (perception → planning → control). Strong AI = human-like understanding across domains—we’re not there. FSD (Supervised) is specialized automation under my supervision, which is exactly how safety tech should mature.
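To pin down what "scoped" means, here is a toy perception-to-control pipeline in Python. The types, numbers, and gains are all invented for illustration; the point is that every input and output is narrow and driving-specific, which is the hallmark of weak AI.

```python
from typing import NamedTuple

class Percept(NamedTuple):
    lead_gap_m: float      # distance to the car ahead, meters
    lane_offset_m: float   # lateral error from lane center, meters

class Command(NamedTuple):
    accel_mps2: float      # throttle/brake request
    steer_rad: float       # steering request

def perceive(camera_frame: bytes) -> Percept:
    # Stand-in for the learned perception stage.
    return Percept(lead_gap_m=42.0, lane_offset_m=-0.2)

def plan_and_control(p: Percept) -> Command:
    # A narrow, well-defined mapping: keep a gap, hold lane center.
    # Nothing here generalizes outside of driving.
    accel = 0.5 if p.lead_gap_m > 30.0 else -1.0
    steer = -0.1 * p.lane_offset_m
    return Command(accel, steer)

cmd = plan_and_control(perceive(b"frame"))
```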
Responsible use (my routine)
- Stay engaged: hands on, eyes up—let the stack reduce workload, not responsibility.
- Tune drive profiles to conditions (follow distance, speed offset, assertiveness); see the sketch after this list.
- Know the edges: construction zones, weird lane markings, unprotected turns.
- Keep software current; OTA updates have brought smoother lane changes and merges, plus refined attention checks.
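On the drive-profile item: I think of each profile as a small bundle of settings chosen per conditions. The class and field names below are hypothetical stand-ins; Tesla exposes these knobs in the car's UI, not through any public API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DriveProfile:
    # Hypothetical names and units, chosen only to mirror the
    # settings mentioned above.
    follow_distance: int   # headway setting; higher = larger gap
    speed_offset_mph: int  # offset relative to the posted limit
    assertiveness: str     # lane-change/merge style, e.g. "chill"

# Looser profile for rain or dense traffic, tighter one for clear highway.
RAIN = DriveProfile(follow_distance=7, speed_offset_mph=0,
                    assertiveness="chill")
CLEAR_HIGHWAY = DriveProfile(follow_distance=3, speed_offset_mph=5,
                             assertiveness="standard")
```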
Regulatory reality
NHTSA keeps a close eye on FSD behavior. Recently, the agency said it’s seeking information about reports of a higher-speed “Mad Max” assistance mode. The key line from regulators remains: the human is fully responsible.
Sources
- Tesla Support — Full Self-Driving (Supervised) (supervision required; not autonomous).
- Tesla — Vehicle Safety Report (Q2 2025 miles-per-crash figures).
- Reuters — NHTSA asks Tesla about ‘Mad Max’ mode (drivers remain fully responsible).