Video generated using OpenAI's Sora garners attention for unexpected glitch
What's the story
OpenAI recently revealed its new video AI model, Sora, which has caught people's attention for its ability to create lifelike videos from text prompts.
The model has produced some impressive samples, from a dog lounging on a windowsill to a monkey playing chess and woolly mammoths charging through a snowy landscape.
However, a few clips show that the AI still has some kinks to work out before it's ready for everyone to use.
Imperfections spotted
Sora's very first sample is flawed
The first sample clip for Sora, embedded on the company's official website, shows a fashionable woman strolling down a neon-lit Tokyo street with cool city signs.
Though the clip seems impressive at first, a closer look reveals that the woman's legs swap places at the 16-second and 31-second marks, a serious error in motion rendering.
The error suggests the model has yet to master human anatomy, and raises the question of whether OpenAI noticed the slip before publishing the clip.
Twitter Post
Take a look at the clip
.@OpenAI unveiled their new AI model Sora, which creates video from text.
The way that AI video has improved over the last year is 🤯
A huge technological leap.
Here’s the prompt used for the AI video below:
“A stylish woman walks down a Tokyo street filled with warm glowing… pic.twitter.com/hu9Bvu7eNr
— Kezhal Dashti (@KezhalDashti) February 16, 2024
Scenario
Progress in AI video generation and future considerations
Even with these noticeable issues, Sora is a big step up from older AI-generated videos, such as last year's unsettling clip of Will Smith eating spaghetti.
A majority of users have described Sora-generated samples as "photorealistic."
There are problems now, but what matters more is where this technology is headed, and what guardrails must be put in place to protect people from AI's misuse.