The idea of this project is to create an artificial intelligence model that can support a group of electronic music performers by adjusting their instruments remotely.
The challenge here is not just to create an AI model, but to build it into an interactive system that might be useful for a team of musicians!
We recently developed a system for synchronising musical instruments (code here) over a network. We have a basic synth instrument online and a web server that can exchange information during a performance. The missing element is an AI system to act as a “conductor” for the performers as well as to observe their actions.
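To make the “observer/conductor” role concrete, here is a minimal sketch of how such an agent might sit in the message loop. All field names (`performer`, `velocity`, `param`) and the simple loudness rule are hypothetical stand-ins, not the lab's actual protocol or AI model:

```python
import json

def conductor_response(event: str) -> str:
    """Toy stand-in for an AI 'conductor': observes one performer
    event (as a JSON string) and returns a remote instrument
    adjustment (also JSON). A real system would replace the rule
    below with a learned model."""
    msg = json.loads(event)
    # Hypothetical rule: if a performer plays very loudly,
    # dial back their synth's gain remotely.
    if msg.get("velocity", 0) > 100:
        value = 0.5
    else:
        value = 1.0
    return json.dumps({"target": msg["performer"],
                       "param": "gain",
                       "value": value})

# Example: a loud note from one networked synth
event = json.dumps({"performer": "synth-1", "note": 60, "velocity": 120})
print(conductor_response(event))
```

In a deployed version, these messages would travel over the existing web server rather than being passed as local strings.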
As part of this project, you will conceptualise, create, and evaluate a musical system. You’ll need to be comfortable learning new languages and should enjoy working with physical hardware. It would be advantageous to have taken Sound and Music Computing.
You can take inspiration from some of our previous music tech projects that you can see here.
For an Honours/Masters project we would expect you to create a working prototype that includes an AI model and enables interactive sound or music to be created. You would need to complete some type of formal evaluation. This project could also be the basis of a wider PhD project.
Please read information about joining the Sound, Music, and Creative Computing Lab before applying for this project.
How to Apply
To apply for this project, contact Charles Martin and include:
- your CV
- your unofficial transcript (if you are an ANU student)
Make sure you specify the skills and accomplishments you have that would help you complete this project.
Useful Papers and Resources:
- Composing Interface Connections for a Networked Touchscreen Ensemble
- Intelligent Agents and Networked Buttons Improve Free-Improvised Ensemble Music-Making on Touch-Screens
- Can Machine Learning Apply to Musical Ensembles?
- Deep Models for Ensemble Touch-Screen Improvisation
- A Small-Data Mindset for Generative AI Creative Work (Vigliensoni et al.)