In the last few months, the music generation landscape has changed dramatically with the introduction of a number of new AI models. Most prominently, Udio, Suno, and Riffusion have captured the imagination with novel ways to produce music. Riffusion recently launched its model, highlighting the exciting possibilities of AI in sound and music production. These companies are only a small slice of a fast-growing space. It's no surprise that, alongside other generative AI advances such as ChatGPT, the technology behind music generation is progressing rapidly.
Udio lets users craft distinctive musical creations through an interactive, intuitive platform. Suno takes a similar approach, providing powerful yet intuitive tools for generating music on the fly from user input. In parallel, Riffusion has drawn attention for its unusual approach of generating music with AI from visual representations of sound. Taken together, these developments mark an exciting milestone for how music will be created and experienced around the world.
The companies and projects focused on music-generating AI aren't limited to the ones listed here. A host of other players is getting into the game, each putting its own spin on the underlying technology and big ideas. This expansion is part of a wider trend across the tech industry, in which artificial intelligence is being woven ever more deeply into the creative fields.
Kyle Wiggers, TechCrunch’s AI Editor, has been leading the charge in covering these developments. Based in Manhattan, Wiggers brings a New Yorker’s incisive perspective to the ongoing meeting of technology and artistry. His partner, a music therapist, adds a fascinating dimension to his insights. Together they navigate the streets of their city, their lives entangled with algorithms and beats.
Wiggers is well placed to explore the broader implications of these advances, examining them from both technological and humanistic perspectives. The intersection of AI and music therapy, in particular, opens up genuinely innovative possibilities: these technologies hold real power to enrich human experiences and support emotional well-being through sound.
There’s no doubt that music-generating AI models will significantly shape how music is created in the future. It is the collaborative intersection of technologists’ craft and musicians’ artistic sensibilities that can spark the breakthroughs that transform how audiences experience music. And the more players that enter this space, the greater the potential for innovation and experimentation.