Watch: OpenAI’s new text-to-video AI model Sora creates stunning videos from text inputs

ChatGPT maker OpenAI has launched an AI model called Sora that can create videos up to 60 seconds long from text inputs. The model is able to generate complex scenes with accurate details “while maintaining visual quality and adherence to the user’s prompt.”
OpenAI CEO Sam Altman has shared a number of examples on X (formerly Twitter) showing the prowess of the model. However, the company says the current model has weaknesses and may “struggle with accurately simulating the physics of a complex scene, and may not understand specific instances of cause and effect.”
Despite that, several examples shared by Altman suggest Sora can create multiple shots within a single generated video. Here are some examples:

Prompt: A wizard wearing a pointed hat and a blue robe with white stars casting a spell that shoots lightning from his hand and holding an old tome in his other hand

Prompt: Two golden retrievers podcasting on top of a mountain
Cred co-founder Kunal Shah also tried the new model.

Prompt: A bicycle race on ocean with different animals as athletes riding the bicycles with drone camera view
“We are also granting access to a number of visual artists, designers, and filmmakers to gain feedback on how to advance the model to be most helpful for creative professionals,” the company added.
OpenAI highlighted that Sora is being made available to red teamers to assess critical areas for harms or risks. The company is also building tools to help detect misleading content.
“In addition to us developing new techniques to prepare for deployment, we’re leveraging the existing safety methods that we built for our products that use DALL·E 3, which are applicable to Sora as well,” the Microsoft-backed company added.


