Creators of ChatGPT Introduce Sora: AI for Crafting Short Videos
OpenAI, a US-based company, is broadening its product range with Sora, an application purportedly capable of generating high-quality videos using brief text prompts or a single still photograph.
OpenAI has yet to make the new AI tool public. © Lionel Bonaventure/AFP
On Thursday, the US company known for ChatGPT and DALL-E introduced a new AI tool capable of generating short, realistic videos. Named Sora, the application can produce videos up to a minute in length from brief text prompts, according to OpenAI. It can also transform photos into videos or extend existing short clips.
Although Sora is not yet accessible to the public, OpenAI CEO Sam Altman said on X, the platform formerly known as Twitter, that the company is currently giving a select group of creators access for testing. Altman also invited users to suggest prompts and promptly shared the results, including videos of two golden retrievers hosting a podcast atop a mountain and a creature described as "half duck, half dragon" soaring through a picturesque sunset with a hamster in adventure gear on its back.
Does Sora stand out from other AI tools?
Sora is not the first text-to-video AI tool; several other companies, including Google, Meta, and the startup Runway ML, have demonstrated comparable technology in the past.
However, the exceptional quality of the videos generated by OpenAI's Sora has astonished onlookers, sparking discussions about its ethical and societal implications.
Moreover, the lack of transparency regarding the image and video sources used to train Sora has raised questions. Notably, OpenAI has faced legal challenges from entities like the New York Times and certain authors for allegedly utilizing copyrighted materials to train ChatGPT.
Testing the Tool for Safety
OpenAI, headquartered in San Francisco, cautioned that their current model exhibits weaknesses, such as occasional confusion between left and right or inconsistencies in maintaining visual coherence throughout video sequences.
"We are committed to collaborating with policymakers, educators, and artists globally to address their apprehensions and explore beneficial applications of this innovative technology," stated OpenAI.
Emphasizing the importance of safety, the company announced plans for rigorous evaluation, including adversarial testing, also known as red-teaming, in which designated users attempt to expose flaws, generate inappropriate content, or provoke unexpected behavior from the tool.