Startup Stability AI is taking generative AI up a notch. While the technology has largely been used for images and voice replication, Stability is making waves with a new tool for advanced AI-generated videos.
The feature is called Stable Video Diffusion and is described by the company as “a latent video diffusion model for high-resolution, state-of-the-art text-to-video and image-to-video generation.” What that essentially means is it’s a foundational model for creating videos, based on a similar model used to make AI imagery.
Unlike most AI models, Stable Video Diffusion doesn’t only churn out videos based on text prompts. It’s also capable of transforming a single image into a video of between 14 and 25 frames, shown at frame rates between 3 and 30 frames per second.
In addition to announcing the tool, which is still in the testing phase, Stability also published a paper outlining its vision for the future of generative video and shared the code for its model publicly on GitHub. In the future, the startup plans to expand the model for a variety of user needs.