Video generated using OpenAI's Sora garners attention for unexpected glitch
OpenAI recently revealed its new video AI model, Sora, which has caught people's attention for its ability to create lifelike videos from text prompts. The model has produced some impressive samples, ranging from a dog lounging on a windowsill to a monkey playing chess and woolly mammoths trudging through a snowy landscape. However, a few clips show that the AI still has some kinks to work out before it's ready for everyone to use.
Sora's very first sample is flawed
The first sample clip for Sora, embedded on the company's official website, shows a fashionable woman strolling down a neon-lit Tokyo street lined with glowing city signage. Though the clip seems impressive at first, a closer look reveals that the woman's legs swap places at the 16-second and 31-second marks, a serious flaw in motion rendering. The mistake suggests the AI doesn't quite grasp human anatomy yet, and it raises the question of whether OpenAI even noticed the mix-up before publishing the clip.
Progress in AI video generation and future considerations
Even with its noticeable issues, Sora is a big step up from older AI-generated videos, like last year's unsettling clip of Will Smith eating spaghetti. Many viewers have described Sora-generated samples as "photorealistic." Sure, there are problems now, but what really matters is where this technology is headed and what guardrails need to be in place as it matures.