Tesla Full Self-Driving (FSD) V13 starts rollout
Tesla has officially started rolling out FSD (Supervised) v13.2, a major milestone in the company's push toward autonomous driving and a significant upgrade to its self-driving stack. Here’s what’s new, why it matters, and what we’re still waiting for.
Key Updates in FSD v13.2
The latest version of FSD (Supervised) revamps Tesla’s end-to-end driving network. Here are the standout upgrades:
- 36 Hz, Full-Resolution AI4 Video Inputs
  - Tesla is now processing video data at a much higher resolution and faster rate, providing richer and more detailed input for the neural network.
  - Why it matters: Better video input means sharper decision-making and improved driving performance, especially in complex environments.
- 4.2x Data Scaling and 5x Training Compute Scaling
  - The neural network is now trained on 4.2 times more data, and the new Cortex cluster provides 5 times more training compute.
  - Why it matters: Scaling data and compute allows Tesla to train more robust models that generalize better across different scenarios.
- Reduced Photon-to-Control Latency by 2x
  - This means faster reaction times from detecting an object to executing a driving decision.
  - Why it matters: Lower latency is critical for handling dynamic, fast-changing driving environments like urban traffic.
- Integrated Parking Features
  - You can now start FSD (Supervised) from Park with a single button, and unpark, reverse, and park capabilities are integrated into the same flow.
  - Why it matters: Parking is a common pain point for drivers, and this feature aims to make the experience seamless and intuitive.
- Dynamic Routing Around Road Closures
  - The system now detects road closures reported by the fleet, displays them along your route, and adjusts navigation accordingly (a toy sketch of the idea follows this list).
  - Why it matters: This is a significant usability improvement, especially in construction-heavy or urban areas.
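Tesla hasn’t published how fleet-reported closures feed into the planner, but behaviorally the feature resembles re-running a shortest-path search with the closed road segments excluded. Here is a minimal, purely illustrative sketch; the road graph, travel times, and closure below are invented, not Tesla data or APIs:

```python
import heapq

# Toy road graph: node -> list of (neighbor, travel_time_minutes).
# All names and weights here are invented for illustration.
ROADS = {
    "home":    [("main_st", 4), ("side_st", 6)],
    "main_st": [("bridge", 5), ("side_st", 2)],
    "side_st": [("bridge", 9)],
    "bridge":  [("office", 3)],
    "office":  [],
}

def shortest_route(graph, start, goal, closed_edges=frozenset()):
    """Dijkstra over the road graph, skipping segments reported as closed."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, minutes in graph[node]:
            if (node, neighbor) in closed_edges:
                continue  # route dynamically around the reported closure
            heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
    return None  # no route left

print(shortest_route(ROADS, "home", "office"))
# -> (12, ['home', 'main_st', 'bridge', 'office'])

# The fleet reports the main_st -> bridge segment closed; the planner re-routes.
print(shortest_route(ROADS, "home", "office", closed_edges={("main_st", "bridge")}))
# -> (18, ['home', 'main_st', 'side_st', 'bridge', 'office'])
```

The hard part in practice isn’t the search itself but sourcing the closures: per the release notes, they are detected by the fleet and displayed along the affected route.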
Improvements on the Horizon
Tesla also gave us a glimpse of what’s coming next:
- Audio Inputs for Emergency Vehicles: This will improve FSD’s ability to respond to sirens and other auditory cues (a simplified sketch of siren detection follows this list).
- Enhanced Parking and Navigation: Pulling into a driveway, parking in a garage, or identifying spots will soon be more precise.
- Reduced False Braking: False braking, a common frustration in parking lots and around stationary objects, is set to improve significantly, along with overly slow driving in parking lots.
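Tesla hasn’t described its audio pipeline, so purely as a thought experiment, here is one common, very simplified way to flag a siren: look for a loud tone that stays in the typical siren band and sweeps up and down over time. Every threshold and parameter below is an assumption for illustration only, not anything Tesla has disclosed:

```python
import numpy as np

SAMPLE_RATE = 16_000  # Hz; an assumed audio spec, not Tesla's

def dominant_frequencies(audio, frame_len=2048, hop=1024):
    """Return the strongest frequency in each short analysis frame."""
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / SAMPLE_RATE)
    peaks = []
    for start in range(0, len(audio) - frame_len, hop):
        spectrum = np.abs(np.fft.rfft(audio[start:start + frame_len]))
        peaks.append(freqs[np.argmax(spectrum)])
    return np.array(peaks)

def looks_like_siren(audio):
    """Crude heuristic: a dominant tone that stays in a rough siren band and sweeps."""
    peaks = dominant_frequencies(audio)
    in_band = (peaks > 500) & (peaks < 1800)              # rough wail/yelp range (assumed)
    sweeping = np.ptp(peaks[in_band]) > 300 if in_band.any() else False
    return in_band.mean() > 0.8 and sweeping

# Synthetic 2-second "wail": a tone sweeping between roughly 700 and 1500 Hz.
t = np.linspace(0, 2, 2 * SAMPLE_RATE, endpoint=False)
sweep = 1100 + 400 * np.sin(2 * np.pi * 0.5 * t)          # instantaneous frequency
siren = np.sin(2 * np.pi * np.cumsum(sweep) / SAMPLE_RATE)

print(looks_like_siren(siren))                             # True
print(looks_like_siren(np.random.randn(2 * SAMPLE_RATE)))  # almost surely False
```

A production system would rely on a learned classifier over much richer features (and would need to localize the siren, not just detect it), but the sketch shows why microphone input gives the planner a signal the cameras alone cannot.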
What’s Exciting About This Update
Tesla is clearly leveraging massive computational power and vast data to advance FSD’s capabilities. The integration of dynamic routing and parking solutions highlights the company’s focus on user experience. The reduced latency and smoother controller also suggest that Tesla is closing the gap between “good enough” and truly intuitive driving.
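One way to make the latency claim concrete is to translate delay into distance travelled before the car can act on what it just saw. Tesla only quotes a 2x reduction, not absolute figures, so the 200 ms and 100 ms numbers below are assumptions chosen purely for illustration:

```python
# Back-of-envelope: how far a car travels during the perception-to-control delay.
# The latency figures used here are illustrative assumptions, not published numbers.

def distance_during_latency(speed_kph: float, latency_ms: float) -> float:
    """Metres travelled while the system is still reacting to an older frame."""
    speed_mps = speed_kph / 3.6
    return speed_mps * (latency_ms / 1000)

for speed_kph in (50, 100, 130):
    before = distance_during_latency(speed_kph, latency_ms=200)  # assumed baseline
    after = distance_during_latency(speed_kph, latency_ms=100)   # assumed after the 2x cut
    print(f"{speed_kph:>3} km/h: {before:.1f} m -> {after:.1f} m of travel per decision")
```

Under these assumed numbers, halving the delay buys back a few metres of reaction distance at highway speeds, which is where "smoother and more human-like" behavior tends to show up.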
One of the most interesting items is the mention of native AI4 inputs and neural network architectures. Tesla doesn’t share full technical details, but the implication is that newer, larger models are now paired with input processing built natively for the AI4 hardware, which could bring FSD closer to human-level driving behavior.
Some Thoughts and Questions
As exciting as these updates are, a few things stand out:
- Supervised Status
  - Despite the improvements, FSD remains in a supervised mode, meaning drivers must stay engaged. This keeps Tesla legally and practically in the realm of advanced driver assistance rather than full autonomy.
  - Will Tesla ever make the leap to unsupervised autonomy?
- Audio Inputs
  - The upcoming audio input feature is a step forward, but it raises the question: why now? Emergency vehicles have been a long-standing challenge, and this capability feels overdue for a system as advanced as Tesla’s.
- Dynamic Routing and Parking
  - These features are excellent quality-of-life improvements, but it’s interesting to see Tesla doubling down on practical usability while still working toward broader autonomy goals.
- Skepticism Around Long-Tail Scenarios
  - While the updates are promising, challenges remain around rare edge cases, complex multi-agent interactions (e.g., pedestrian-heavy areas), and extreme weather.
The Big Picture
FSD v13.2 is undoubtedly a major step forward, and it reflects Tesla’s ambition to lead the self-driving revolution. The reduced latency, improved neural networks, and integrated parking features all point to a future where Tesla’s vehicles feel more natural and human-like in their driving.
That said, the supervised label and ongoing updates highlight that we’re still in the early stages of full autonomy. Tesla’s iterative approach—releasing improvements as they come while refining the tech—keeps it ahead of competitors but also under constant scrutiny.
Official Release Notes:
2024.39.10
FSD (Supervised) V13.2
FSD (Supervised) v13 upgrades every part of the end-to-end driving network.
Includes:
- 36 Hz, full-resolution AI4 video inputs
- Native AI4 inputs and neural network architectures
- 4.2x data scaling
- 5x training compute scaling (enabled by the Cortex cluster)
- Reduced photon-to-control latency by 2x
- Speed Profiles on both City Streets and Highways
- Start FSD (Supervised) from Park with the touch of a button
- Integrated unpark, reverse, and park capabilities
- Improved reward predictions for collision avoidance
- Improved camera cleaning
- Redesigned controller for smoother, more accurate tracking
- Dynamic routing around road closures, which displays them along an affected route when they are detected by the fleet
Upcoming Improvements:
- 3x model size scaling
- 3x model context length scaling
- Audio inputs for better handling of emergency vehicles
- Improved reward predictions for navigation
- Improvements to false braking and slower driving in parking lots
- Support for destination options including pulling over, parking in a spot, driveway, or garage
- Efficient representation of maps and navigation inputs
- Improved handling of camera occlusions