Content creation company Runway recently
revealed the next step in its toolset for AI-generated videos. The suite can generate short animated clips from text prompts, static images, a combination of the two, and other inputs. One example uses a text prompt to create a clip of New York City seen through an apartment window. Another animates a still photo under entirely different lighting.
Another tool combines an image and a video, applying the visual aesthetic of one to the other. Runway's website demonstrates a theoretical application in which the tech transforms rudimentary real-world and 3D mockups into animated renders. The new tech, dubbed Gen-2, is the second generation of the company's generative AI tools, as detailed in its whitepaper.
The first generation synthesized videos using diffusion models and pre-existing structures, combining the visual styles of videos with unrelated images. The resulting videos don't look all that realistic, and the AI can't yet create long videos from scratch, but the clips could make for creative art shorts.