Facebook owner Meta said on Friday that it has developed a new artificial intelligence model called Movie Gen.
The company claims the model can produce realistic-looking video and audio clips in response to user prompts and can compete with tools from leading media-generation firms such as OpenAI and ElevenLabs.
Videos of animals swimming and surfing were among the examples of Movie Gen's work that Meta supplied. Other clips used people's real photos to show them performing everyday activities, such as painting on a canvas.
Movie Gen can also be used to edit existing videos and to generate sound effects and background music synchronized with the video content, according to a blog post by Meta.
In a couple of those videos, Meta used the tool to insert pom-poms into the hands of a man running alone in the desert, and to change the dry ground of a parking lot where a man was skateboarding into pavement covered by a splashing puddle.
Movie Gen allows for up to 16-second videos and up to 45-second audio files, according to Meta.
Blind-test data released by Meta showed that the model outperforms products from companies including Runway, OpenAI, ElevenLabs, and Kling.
The announcement comes as Hollywood has been wrestling with how to use generative AI video technology this year, after Microsoft-backed OpenAI demonstrated how its product Sora could produce feature-film-like videos in response to text prompts.
While some entertainment industry technologists are eager to use these tools to improve and speed up filmmaking, others are concerned about adopting systems that appear to have been trained on copyrighted works without authorization.
Lawmakers have also raised concerns about the use of AI-generated fakes, or "deepfakes," in elections around the globe, including in the United States, Pakistan, India, and Indonesia.