The Future of AI Music Generation

Technology

TechCrunch looks at Dance Diffusion, an AI music generator:

The emergence of Dance Diffusion comes several years after OpenAI, the San Francisco-based lab behind DALL-E 2, detailed its grand experiment with music generation, dubbed Jukebox. Given a genre, an artist, and a snippet of lyrics, Jukebox could generate relatively coherent music complete with vocals. But the songs Jukebox produced lacked larger musical structures such as repeating choruses, and they often contained nonsense lyrics.

Google’s AudioLM, detailed for the first time earlier this week, shows more promise, with an uncanny ability to generate plausible piano music given a short snippet of playing. But it hasn’t been open-sourced.

Dance Diffusion aims to overcome the limitations of previous open-source tools by borrowing technology from image generators such as Stable Diffusion. The system is what’s known as a diffusion model, which generates new data (e.g., songs) by learning to reverse a process that gradually destroys training samples with noise. As it’s fed existing samples (say, the entire Smashing Pumpkins discography), the model gets better at recovering clean audio from corrupted versions, and that same denoising ability is what lets it synthesize new works from pure noise.
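
That “destroy and recover” loop is easier to see in code. Below is a minimal, hypothetical sketch of the denoising-diffusion idea in PyTorch: a fixed schedule progressively noises toy audio clips, a small network learns to predict the injected noise, and sampling runs the process in reverse starting from pure static. Everything here (the sine-wave “songs”, the tiny MLP, the 200-step schedule) is an illustrative assumption, not Harmonai’s actual Dance Diffusion code; real systems train far larger networks on actual waveforms.

```python
# A toy denoising-diffusion sketch -- illustrative only, not Dance Diffusion's code.
import torch
import torch.nn as nn

T = 200                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative signal retained at step t

CLIP_LEN = 64                              # toy clip length, in samples

def toy_audio(batch):
    """Stand-in training data: random-frequency sine waves as 'songs'."""
    t = torch.linspace(0, 1, CLIP_LEN)
    freq = torch.randint(2, 8, (batch, 1)).float()
    phase = torch.rand(batch, 1) * 6.2831853
    return torch.sin(6.2831853 * freq * t + phase)

# A small MLP standing in for the much larger network a real system would use.
model = nn.Sequential(
    nn.Linear(CLIP_LEN + 1, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, CLIP_LEN),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):                   # "destroy": add noise, learn to undo it
    x0 = toy_audio(64)
    t = torch.randint(0, T, (64,))
    eps = torch.randn_like(x0)
    ab = alpha_bars[t].unsqueeze(1)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * eps   # clip corrupted to step t
    t_in = (t.float() / T).unsqueeze(1)           # crude timestep embedding
    pred = model(torch.cat([xt, t_in], dim=1))    # predict the injected noise
    loss = ((pred - eps) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# "Recover": start from pure static and denoise one step at a time.
with torch.no_grad():
    x = torch.randn(1, CLIP_LEN)
    for t in reversed(range(T)):
        t_in = torch.full((1, 1), t / T)
        eps_hat = model(torch.cat([x, t_in], dim=1))
        a, ab = alphas[t], alpha_bars[t]
        x = (x - (1 - a) / (1 - ab).sqrt() * eps_hat) / a.sqrt()
        if t > 0:                          # re-inject a little noise except at the end
            x = x + betas[t].sqrt() * torch.randn_like(x)
    print("generated clip:", x.shape)      # a brand-new 64-sample "song"
```

Training the network to predict the noise, rather than the clean signal, is the standard objective for this family of models; it keeps the regression target at a consistent scale across timesteps, which is part of why the “recover” half of the loop works at all.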