Zubin Kanga on composing with AI: ‘a creative tool, rather than a means to replace human creativity’

Zubin Kanga
Friday, October 4, 2024

Artificial intelligence is understandably a source of consternation in the music world but, argues composer Zubin Kanga, musicians have always adapted to the expanding arsenal of tools at their disposal, especially when AI is used for creation rather than imitation

Alexander Schubert has explored the use of AI-generated images as a base layer to build elaborate interactive visuals © Quentin Chevrier

Much of the recent discussion about artificial intelligence and music has focused on its negative impact on musicians and the music industry: how AI models are being built from huge databases of copyrighted music, how AI might take musicians’ jobs or make entire types of musician obsolete, how AI could be used to exploit us – or replace us.

The history of the music industry is one of major technological change, from the invention of recording to the advent of sampling and the development of sequencing software. These changes undoubtedly took away some jobs, but they also created others. AI is a particularly disruptive technology, and a growing number of online AI systems have demonstrated an impressive ability to imitate a range of musical styles. Yet AI is still limited by the data it is trained on: true musical creativity builds on the work of the past but adds something genuinely new and original, something that even the most sophisticated AI cannot achieve.

Rather than focus on AI’s imitative abilities, many contemporary classical composers (including some I’ve collaborated with) have focused on its capacity to be uncanny, strange and unpredictable. In using AI as a creative tool, musicians are exploring its changing role in our lives, as well as its potential to generate sonic and visual materials that are fascinating in their very failure to imitate human creativity.

Jennifer Walshe and Memo Akten’s ULTRACHUNK is a human-AI duet © Anne Tetzlaff

One work that has been particularly inspirational for me is Jennifer Walshe and Memo Akten’s ULTRACHUNK. The work is performed as a live improvisation between Walshe (as vocalist) and an AI-generated doppelgänger. By recording herself singing over many months, Walshe created a dataset used to train a neural network that matches her note for note as she sings live. The result is a strange duet, in which each note she sings prompts the AI system to draw on thousands of video excerpts to match it.
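For readers curious about the mechanics, the sketch below gives a deliberately simplified flavour of this kind of note-matching. It is a minimal illustration in Python, not Akten’s actual system (which used a neural network trained on Walshe’s video dataset), and it assumes a hypothetical folder of recorded clips: each clip is analysed for its pitch, and each incoming live note retrieves the closest-matching clip.

    # Illustrative sketch of live note-matching (not the actual
    # ULTRACHUNK system, which used a trained neural network and video).
    # Assumes a hypothetical folder of short recorded clips: 'clips/*.wav'.
    import glob
    import numpy as np
    import librosa

    def median_pitch(path):
        # Estimate a clip's representative pitch in Hz using pYIN.
        y, sr = librosa.load(path, sr=None, mono=True)
        f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz('C2'),
                                     fmax=librosa.note_to_hz('C7'), sr=sr)
        return np.nanmedian(f0[voiced]) if np.any(voiced) else None

    # Build the 'dataset': one representative pitch per recorded clip.
    dataset = {p: median_pitch(p) for p in glob.glob('clips/*.wav')}
    dataset = {p: hz for p, hz in dataset.items() if hz is not None}

    def match(live_hz):
        # Return the stored clip whose pitch is nearest the live note.
        return min(dataset, key=lambda p: abs(dataset[p] - live_hz))

    print(match(440.0))  # e.g. the clip closest to a sung A4

A real system would run frame by frame in real time, matching timbre and image as well as pitch, but the underlying retrieval idea is similar.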

Neural networks trained on bespoke datasets have also been used extensively by Dr Emily Howard and other researchers at the Centre for Practice and Research in Science and Music (PRiSM) at the Royal Northern College of Music. In recent works like DEVIANCE (2023) – which she wrote for me – Howard has explored neural networks trained on her own orchestral works, creating dialogues between her compositions and AI transformations of them. In collaboration with PRiSM, I explored a similar AI transformation of the past in my own work, Metamemory. Using piano recordings I made in my student years of canonical piano works by J.S. Bach, Maurice Ravel, Claude Debussy, Alban Berg and many others, I created AI-generated fragments of my own playing. I then used a sampler keyboard to ‘play’ these AI-generated sounds, alongside quotes from the canonical works themselves, which I played on an accompanying synthesizer.
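The sampler-keyboard stage of a piece like Metamemory can be pictured as a simple mapping from MIDI keys to pre-rendered audio fragments. Here is a minimal Python sketch of that idea, using the mido, soundfile and sounddevice libraries; the folder name, the starting key and the one-fragment-per-key mapping are all illustrative assumptions rather than details of the actual concert patch.

    # Minimal sketch: trigger pre-rendered AI-generated fragments from a
    # MIDI keyboard (an illustration, not the actual Metamemory patch).
    import glob
    import mido
    import soundfile as sf
    import sounddevice as sd

    # Map successive MIDI notes upward from C3 (note 48) to fragments.
    files = sorted(glob.glob('fragments/*.wav'))
    keymap = {48 + i: path for i, path in enumerate(files)}

    with mido.open_input() as port:  # default MIDI input port
        for msg in port:
            if msg.type == 'note_on' and msg.velocity > 0:
                path = keymap.get(msg.note)
                if path:
                    data, sr = sf.read(path)
                    sd.play(data, sr)  # start the fragment immediately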

Zubin Kanga performing DEVIANCE (2023) by Emily Howard at Kings Place, London © Robin Clewley

The recent revolution in publicly available generative AI has allowed anyone to experiment with creating video, images and sounds using only text prompts. Nina Whiteman’s cybird cybird combines several of these, playing with the absurd and uncanny qualities of AI-generated visuals. Although the results are humorous (with AI-generated avatars giving the performer instructions, and AI-generated bird pictures and birdsong revelling in their artificiality), Whiteman’s piece is a serious exploration of how birdsong is being influenced by the urban landscape.

In many recent works, composers don’t highlight their AI-generated content but use machine learning as one of many tools. Alexander Schubert has explored the use of AI-generated images as a base layer on which to build elaborate interactive visuals. Caitlin Rowley has explored the use of text-based AI to write code for interactive electronic music software, allowing AI to assist with an otherwise difficult task (a sketch of the kind of program this might produce follows below). Alex Wastnidge and Cagri Erdem have experimented with AI-generated beats and melodic patterns for use by producers, and Patrick Hartono has trained an AI on the hand gestures used by puppeteers, allowing him to create virtual shadow-puppet audio-visual performances.
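To give a sense of the kind of small interactive-electronics program such text-based AI assistance might help a composer produce, here is a simple real-time ring modulator in Python (my own illustrative example, not Rowley’s actual software), built on the sounddevice library; the sample rate and modulator frequency are arbitrary choices.

    # Illustrative real-time ring modulator: the live input signal is
    # multiplied by a sine wave, a classic electronic-music effect.
    import numpy as np
    import sounddevice as sd

    SR = 44100       # sample rate in Hz (assumed)
    MOD_HZ = 220.0   # modulator frequency (assumed; tune to taste)
    phase = 0        # running sample counter for a continuous sine

    def callback(indata, outdata, frames, time, status):
        global phase
        t = (np.arange(frames) + phase) / SR
        outdata[:] = indata * np.sin(2 * np.pi * MOD_HZ * t)[:, None]
        phase += frames

    with sd.Stream(samplerate=SR, channels=1, callback=callback):
        input('Ring modulator running - press Enter to stop...')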

I believe that the future of musicians’ engagement with AI will look like these examples, with AI becoming a creative tool rather than a means to replace human creativity. Musicians will use it to produce base textures, to explore the uncanny effects of transformed sound and visuals, to create digital duetting partners, to generate a variety of rhythmic patterns, as an inspiration for melody writing, to programme interactive music software and (most importantly) to take care of mundane administrative tasks, giving them more time to create.

Nina Whiteman’s cybird cybird plays with the absurd and uncanny qualities of AI-generated visuals

There are ethical considerations with any use of AI. Although earlier neural networks were trained on datasets provided by the composers themselves (including Walshe, Howard and me), the generative systems now widely available were trained on content from across the internet, in many cases without the informed consent of the copyright holders. I’m optimistic that future AI systems could be trained on content provided with artists’ consent, and that the training of AI systems could become a source of income for musicians specialising in this work. Whether to use AI systems at all, and how to weigh the ethics of one system against another, will be an individual choice for each musician.

Although AI will increasingly become part of classical music, audiences’ desire to experience live music is as strong as ever, with UK classical audience numbers hitting a record high in the last year. With the ubiquity of digital music, the human connection to real musicians – creating music that reflects their life experience, time and place – is still important to audiences. Although it will continue to become more sophisticated, AI-generated music will not be able to replace the experience of seeing a great pianist bringing a new interpretative perspective to Beethoven’s late sonatas. Nor will it replace the experience of hearing a dynamic ensemble performing a new work by an innovative composer creating bold new soundworlds. Human creativity can be imitated by AI, but it can never be surpassed.


Zubin Kanga’s upcoming concerts include performances in Manchester, London and Huddersfield. You can find his full concert diary here.