Adobe Launches Generative ‘Firefly’ AI, Where Users Can Simply Type To Edit Images
A private beta of the advanced AI premiered on Tuesday.
Adobe has long used AI to bolster the functionality of its applications. A few years ago, the company rolled out its Sensei AI system, which uses machine learning to speed up monotonous tasks such as tagging images, picking out subjects from a photo and summarizing text.
As Adobe continues to lean into AI, it has now unveiled a fleet of “generative AI models” dubbed Firefly. Currently in beta, Firefly, similar to other image-generating tools, allows creators to use everyday language to conjure images.
Samples provided by Adobe show how Firefly can do just that, but also how it offers seemingly endless opportunities to personalize one’s content when editing photos or videos and designing graphics. In one example, Adobe shows how a video editor can type out a given “mood, atmosphere or even the weather,” and the lighting and coloring of the video will change to match it.
Another sample shows Firefly’s “context-aware image generation,” where AI-produced images respond to the setting of a given photo. For example, if a user wanted to add “animals” to a photo of the ocean, the AI would know to specifically render sea mammals.
In the future, Adobe wants to use Firefly to help creators with more intricate AI-generated compositions, such as generating a given texture from a text prompt, and with 3D objects, letting them easily transform 3D compositions into photorealistic images.
Firefly will launch first as a private beta, but will eventually be available across Adobe programs, including Photoshop and Premiere Pro.