Movie Gen is an AI-powered video generator

A look at Movie Gen, Meta's AI-powered video generator, which isn't currently available to the public

Meta announced a new AI-powered video generator today that produces high-definition footage complete with sound. Movie Gen isn't currently available for public access, even though the announcement comes several months after OpenAI unveiled Sora.

In addition to generating new clips from scratch, Meta says Movie Gen can create custom videos from images or take an existing video and change different elements of it. In one example shared by the company, a still photo of a woman is transformed into a video of her sitting in a pumpkin patch.

Meta's chief product officer, Chris Cox, wrote on Threads that the company isn't ready to release Movie Gen as a product anytime soon because video generation still takes too long and costs too much.

What was Movie Gen trained on? The specifics aren't clear in Meta's announcement post, which says only: "We've trained these models on a combination of licensed and publicly available data sets." It's rarely disclosed exactly what text, video, or audio clips were used to create any of the major models, and how training data is sourced remains a contentious issue.

The Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) has gone on strike in part over the use of artificial intelligence in Hollywood and beyond, reflecting performers' concerns about how the technology could affect their livelihoods.

ElfYourself on steroids: melting into puddles and other tricks from AI video generators

To demonstrate its capabilities, the company shared multiple 10-second clips generated with Movie Gen, including one of a Moo Deng-esque baby hippo swimming around. The announcement comes just after Meta's Connect event, which showcased new and refreshed hardware and the latest version of its large language model.

Movie Gen can also generate audio to accompany the clips. In the sample videos, a man standing by a waterfall is backed by splashing water and the hopeful swell of a symphony; an engine roars and tires screech as a car speeds around a track; and a snake slides along the jungle floor.

Meta gave more details about Movie Gen in a research paper. Movie Gen Video contains 30 billion parameters, while Movie Gen Audio contains 13 billion. (A model's parameter count roughly corresponds to how capable it is; by comparison, the largest variant of Llama 3.1 has 405 billion parameters.) Movie Gen can produce high-definition videos up to 16 seconds long, and Meta claims it outperforms competing models in overall video quality.

CEO Mark Zuckerberg demonstrated a similar idea earlier this year when he posted an AI-generated image of himself draped in gold chains on Threads, letting users imagine themselves in scenarios they'd never actually film. A video version of that kind of feature is possible with the Movie Gen model; think of it as ElfYourself on steroids.

Considering Meta's legacy as a social media company, it's possible that tools powered by Movie Gen will eventually start popping up inside Facebook, Instagram, and WhatsApp. In September, competitor Google shared plans to make aspects of its Veo video model available to creators inside YouTube Shorts sometime next year.

For now, the larger tech companies are holding off on fully releasing their video models to the public, but you can experiment with tools from smaller, up-and-coming companies like Runway and Pika. If you've ever been curious what it might be like to be crushed by a press or to melt into a puddle, give Pikaffects a try.
