Tesla CEO Elon Musk has shared intriguing insights into the potential synergies between OpenAI’s Sora and his own autonomous driving ambitions following the unveiling of OpenAI’s text-to-video model.

In response to a video titled ‘What does @OpenAI’s Sora have to do with @Tesla’s FSD (Full Self-Driving) v12?’ on X, Musk stated, “Tesla has been able to do real-world video generation with accurate physics for about a year.”

However, Musk noted that the generated videos were not especially interesting, since the training data came exclusively from Tesla's camera-equipped fleet. He elaborated, "It wasn't super interesting, because all the training data came from the cars, so it just looks like video from a Tesla, albeit with a dynamically generated (not remembered) world."

The video in question compares research papers from both companies, arguing that their different approaches to video generation ultimately converge on the same solution.

Musk also acknowledged that Tesla's training compute for FSD has been limited, saying the company hasn't yet trained on video data from outside its fleet but plans to do so later in the year when spare capacity becomes available.

Meanwhile, Tesla has rolled out FSD version 12 to customers, promising significant advancements in the company's self-driving software.
