Adobe’s Project Music AI Uses Text Prompts To Generate Songs
Users can also use the AI to edit their own music.
Adobe has been consistently building on its generative AI over the past year. For its latest tool, the company has developed an experimental way to create music.
Project Music GenAI Control is described as “an early-stage generative AI music generation and editing tool” that allows creators to generate audio from text prompts. Its workflow resembles that of existing text-to-image and text-to-video AI models.
Adobe said Project Music GenAI Control uses the same underlying technology as its Firefly models. Sample prompts the company suggested include “powerful rock,” “happy dance” and “sad jazz.”
Besides generating audio from scratch, users can edit musical elements of the output: adjusting tempo and structure, looping a section of the song, or extending the length of a given clip.
“With Project Music GenAI Control, generative AI becomes your co-creator. It helps people craft music for their projects, whether they’re broadcasters, or podcasters, or anyone else who needs audio that’s just the right mood, tone, and length,” said Nicholas Bryan, Senior Research Scientist at Adobe Research.
Project Music GenAI Control remains in development and isn’t yet available to the public.