In a recent interview with the Wall Street Journal, Mira Murati, Chief Technology Officer at OpenAI, provided an update on their latest technology, Sora, along with details on its public release and future developments.
What exactly is Sora?
Well, it’s the brainchild of OpenAI, following on from their popular ChatGPT chatbot. Sora takes things a step further by generating videos up to a minute long from text descriptions. Impressively, it can create intricate scenes featuring multiple characters, specific motions, and detailed backgrounds. Sora doesn’t stop there; it can even produce multiple shots within a single video, showcasing its versatility. OpenAI has released sample videos generated by Sora, which drew high praise for their quality.
When can we expect Sora to be available?
Currently, it’s accessible to a select group of visual artists, designers, and filmmakers. However, Murati revealed that Sora will be rolled out to the public later this year, possibly within just a few months.
User control
In terms of user control, Murati emphasized OpenAI’s goal of allowing users to edit the content generated by Sora. This means users could tweak the videos to their liking or correct any inaccuracies.
How Sora was trained
Now, onto the nitty-gritty of how Sora was trained. While specifics weren’t divulged, Murati said the training data comes from publicly available or licensed sources, and OpenAI’s partnership with Shutterstock also played a role in sourcing content for the model.
Concerns regarding misuse
Addressing concerns about misuse, particularly in the realm of deepfakes, Murati gave assurances that Sora won’t generate videos featuring public figures, mirroring the restrictions already in place for images generated by DALL-E. Additionally, to maintain transparency, Sora-generated videos will be watermarked to indicate their AI origin.