Sound and Music Computing
Dr Charles Martin
The last two weeks of the course will help you directly with planning your final performance. These are not extra topics or extensions.
This week is interfaces and expression.
Next week is composition and improvisation.
These are crucial topics for turning the technical and ensemble knowledge you have developed into a convincing performance.
We’ve already talked about interfaces, so why should we do more?
Previous discussion focussed on technical aspects.
Now we need to go deeper into design.
This is an important research topic for the New Interfaces for Musical Expression conference!
What is a musical interface for?
Enabling performers to express themselves; that is, giving them creative control over the sounds that occur in a performance.
Expression can be defined as “conveying feelings”, but maybe to avoid unwanted romance we can define it as “conveying complex information”.
So: effective musical interfaces give performers creative control and let them convey complex information.
Electronic instruments have an important separation between the control parts and the sound making parts (not so for most acoustic instruments).
Many different types of interfaces (or controllers) can control many types of synthesiser design.
Working out how to map different signals from a controller to the parameters of a synthesiser is an important design decision.
Hunt, Wanderley, and Paradis (2003) explored mapping and determined that simply having one slider for each synth parameter is probably not a good idea.
We now know that interface mappings need to be designed. The same interface and synth can have different levels of success depending on how the mapping design supports expression.
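A hedged sketch of a designed mapping (names here are made up for illustration, not from any real library): instead of one slider per parameter, a single control dimension can drive several synthesiser parameters at once.
# Hypothetical one-to-many mapping: one "pressure" value shapes
# several synth parameters at the same time.
def map_controls(pressure, position):
    """pressure and position are normalised controller values in [0, 1]."""
    return {
        "amplitude": pressure ** 2,              # more pressure, more sound
        "filter_cutoff": 200 + 5000 * pressure,  # effort also adds brightness
        "pitch": 48 + 24 * position,             # position chooses the note (MIDI)
    }

print(map_controls(pressure=0.8, position=0.5))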
Imagine a one-button interface that triggers all the sequences and completes the whole performance. What is wrong with this?
It’s very user-friendly! Easy to learn! Low-effort! High likelihood of success!
But it’s missing something! The risk-free design means we don’t get complex information from the performer (just from the button designer).
Imagine a beautiful synth design with 106 parameters, with each one connected to an individual slider. The designer has created a custom hardware interface with all 106 sliders available at the performer’s fingertips. What is wrong with this?
It’s very accurate! Any sound configuration is available to the performer; they can explore the full creative range of this instrument.
It’s missing something! The system doesn’t have any constraints and would be impractical to learn. This design means much of the information conveyed by the user will be arbitrary (random) and meaningless.
Sasha Leitman (2017) says: “use continuous sensors”! Why?
Charles’ advice: give performers “somewhere to go” in every moment. With a button you have nowhere to go (after hitting it). With a continuous controller you can go up, go down, stop moving, change speed, change direction, etc.
Sasha Leitman. 2017. Current Iteration of a Course on Physical Interaction Design for Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 127–132. http://doi.org/10.5281/zenodo.1176197
Continuous data from a sensor should be filtered before use. But how and why?
Think of it like an audio signal:
Do you want to detect slow changes? Use a low pass filter.
Do you want to detect fast changes? High-pass filter.
If you have control data in Pd, send it through sig~ to turn it into audio, then you can use lop~ and hip~ as usual.
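The same idea can be sketched outside Pd; a minimal Python version (assumed, not course code) uses a one-pole low-pass for the slow changes and subtracts it from the raw signal to get the fast changes.
# One-pole filter over a stream of control values (sketch).
class OnePole:
    def __init__(self, alpha=0.1):
        self.alpha = alpha  # smaller alpha = smoother, slower response
        self.y = 0.0

    def process(self, x):
        self.y += self.alpha * (x - self.y)  # low-passed value (slow changes)
        return self.y

lowpass = OnePole(alpha=0.2)
for x in [0.0, 0.2, 0.9, 0.85, 0.1]:  # pretend sensor readings
    slow = lowpass.process(x)          # slow changes (low-pass)
    fast = x - slow                    # fast changes (crude high-pass)
    print(round(slow, 3), round(fast, 3))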
Some sensors have lots of dimensions:
The easiest and fastest way to get one signal from all those dimensions is to take the magnitude of the vector:
sqrt(x*x + y*y + z*z)
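In Python that could look like this (assuming a 3-axis sensor such as an accelerometer):
from math import sqrt

def magnitude(x, y, z):
    """Collapse a 3-axis reading into a single control signal."""
    return sqrt(x * x + y * y + z * z)

print(magnitude(0.1, -0.4, 9.8))  # roughly 9.8 for a phone at rest (gravity)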
Two questions:
There are lots of interfaces that work something like “move the thingo to make more sound”.
But what about doing the opposite? (“move the thingo to make LESS sound”).
Seems wild, but can work! What other ways could you use inverse mappings?
Alexander Refsum Jensenius (ARJ) has thought about this a lot, including in the “standstill” performances Charles was involved in.
Charles Patrick Martin, Alexander Refsum Jensenius, and Jim Torresen. 2018. Composing an Ensemble Standstill Work for Myo and Bela. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 196–197. http://doi.org/10.5281/zenodo.1302543
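A minimal sketch of one possible inverse mapping (hypothetical names, not the actual standstill code): the less you move, the more sound you get.
def inverse_amplitude(motion):
    """motion is a normalised activity level in [0, 1]; stillness gives full volume."""
    motion = min(max(motion, 0.0), 1.0)  # clamp to [0, 1]
    return 1.0 - motion

print(inverse_amplitude(0.05))  # nearly still -> loud
print(inverse_amplitude(0.9))   # lots of movement -> quiet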
A synth with one button is a bit silly (or is it?). Is one knob enough to make an expressive interface?
The trick is to not have it sound all the time (rhythm is important).
Map knob position to pitch/timbre or a chosen “sound quality” parameter.
Map movement to start an envelope.
Speed could be mapped to the maximum volume of the envelope.
Charles uses this technique extensively: e.g., PhaseRings app, EMPI synth, etc.
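A rough sketch of that one-knob idea (all names hypothetical): knob position chooses the pitch, a change in position triggers an envelope, and the speed of the change sets the envelope’s peak level.
last_position = 0.0

def on_knob(position, threshold=0.02):
    """position is the knob value in [0, 1]; call this on every new reading."""
    global last_position
    speed = abs(position - last_position)
    last_position = position
    if speed > threshold:              # only make sound when the knob moves
        pitch = 36 + 48 * position     # position chooses pitch (MIDI note)
        peak = min(1.0, speed * 3)     # faster movement = louder envelope
        trigger_envelope(pitch, peak)  # stand-in for a real synth call

def trigger_envelope(pitch, peak):
    print(f"note {pitch:.1f}, envelope peak {peak:.2f}")

for p in [0.10, 0.11, 0.30, 0.31, 0.90]:  # pretend knob readings
    on_knob(p)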
Please think about expression.
Don’t just control volume.
Don’t just trigger samples/sequences.
This week: think about creative control, complex information, and how your ensemble will use your interfaces.
Gibber code:
// Open a WebSocket connection to a local server from Gibber
const socket = new WebSocket("ws://localhost:9080")
socket.addEventListener("open", (event) => {
  console.log("connected to websocket")
})
// Log any messages received from the server
socket.addEventListener("message", (event) => {
  console.log(event.data)
})
#!/usr/bin/env python
# Python equivalent of the Gibber/WebSocket code above, using the websockets library.
from websockets.sync.client import connect

def hello():
    with connect("ws://localhost:9080") as websocket:
        websocket.send("Hello world!")

hello()