At New York University to work on sensors for music performance – pt. 2: Making sense of IMU Motion Data

I’m currently in New York, and over the past few weeks I have designed a set of Max objects that use the motion data obtained from 9DoF IMUs for musical purposes. Antonio Camurri and his colleagues at InfoMus – Casa Paganini have made extensive use of various motion descriptors over the years, and I have tried to adapt their concepts to the data obtained from the IMUs. In this paper you can find an interesting overview of some of the techniques they employ for analysing movement expressivity.

Since at the moment I’m mostly using Thalmic Labs’ Myo, I have also further developed part of the MuMyo Max patch that Kristian Nymoen, Mari Romarheim Haugen, and Alexander Refsum Jensenius from fourMs (University of Oslo) presented at NIME this year. For example, I added a way to centre the yaw orientation value in Max, as shown in the video below. Being able to centre the yaw value quickly is also useful because the orientation data of the Myo is affected by yaw drift. I haven’t experienced a massive amount of drift when using the device, so periodically re-centring seems like an acceptable solution in my case. However, it might be worth implementing algorithms that dynamically compensate for yaw drift, such as Madgwick’s filter.
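The actual re-centring happens inside the Max patch, but the idea is simple enough to sketch in a few lines of Python (the class, value ranges and names below are my own illustration, not part of MuMyo): store the current yaw as an offset when the performer triggers a “centre” message, then subtract it from incoming values and wrap the result back into [-180, 180) degrees.

```python
# Minimal sketch of the yaw re-centring idea (not the actual Max patch).
# Assumes yaw arrives in degrees, roughly in the [-180, 180) range.

class YawCentring:
    def __init__(self):
        self.offset = 0.0  # degrees

    def centre(self, current_yaw):
        """Call this when the performer triggers re-centring."""
        self.offset = current_yaw

    def process(self, yaw):
        """Return yaw relative to the stored offset, wrapped to [-180, 180)."""
        centred = yaw - self.offset
        return (centred + 180.0) % 360.0 - 180.0


# Example: the Myo reports a yaw of 95 degrees while the performer faces forward.
yaw_filter = YawCentring()
yaw_filter.centre(95.0)
print(yaw_filter.process(95.0))    # 0.0  (now centred)
print(yaw_filter.process(-170.0))  # 95.0 (wrapped back into range)
```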

Andrew (my collaborator here at NYU) is working on a real-time DSP/synthesis engine that we will control through musicians’ movements sensed by the Myo. I look forward to trying it myself and with other musicians I’ve met here at NYU. I have also involved Rodrigo Schramm, with whom I have had the pleasure of working several times before. He has recently completed his brilliant PhD thesis on computational analysis of music-related movements and I’m very happy to collaborate with him again.

I used some simple maths to convert the orientation data into an XY position in a 2D space, which comes in handy when using some sort of XY pad, like Max’s [nodes] object, to control musical parameters. In addition to the orientation, I also mapped two subsets of the EMG data to the size of the nodes, which creates some interesting global effects when the effort in a movement increases.
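The mapping itself lives in the Max patch, but here is a rough Python sketch of one plausible version of the “simple maths” (the yaw and pitch ranges and the [0, 1] output range are my own assumptions for illustration): yaw drives the horizontal axis and pitch the vertical one, each clipped and rescaled to the range expected by an XY pad such as [nodes].

```python
# Illustrative orientation-to-XY mapping (not the actual patch).

def scale(value, in_min, in_max):
    """Clip value to [in_min, in_max] and rescale it to [0, 1]."""
    value = max(in_min, min(in_max, value))
    return (value - in_min) / (in_max - in_min)

def orientation_to_xy(yaw, pitch, yaw_range=(-90.0, 90.0), pitch_range=(-45.0, 45.0)):
    """Map (yaw, pitch) in degrees to (x, y) coordinates in [0, 1] x [0, 1]."""
    x = scale(yaw, *yaw_range)
    y = scale(pitch, *pitch_range)
    return x, y

# Example: arm pointing slightly to the right and slightly upwards.
print(orientation_to_xy(30.0, 15.0))  # (0.666..., 0.666...)
```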

I also built a patch dedicated to recording the sensor data in sync with audio sources, which will be very useful for research and analysis. The recorder will also come in handy when using various machine learning techniques to recognise certain movements. I’m particularly interested in recording whole performances and comparing a recording with the real-time data stream during a live performance. To do so I’ll use Baptiste Caramiaux’s Gesture Variation Follower, which is available both for Max and C++.
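The recorder is a Max patch, but the underlying idea can be sketched in Python (the file format, column names and the choice of CSV are purely illustrative): every incoming sensor frame is written with a timestamp relative to the moment the audio recording starts, so the two streams can be aligned later for analysis or for training gesture models.

```python
# Minimal sketch of the synced-recording idea (the real recorder is a Max patch).

import csv
import time

class SensorRecorder:
    def __init__(self, path):
        self.file = open(path, "w", newline="")
        self.writer = csv.writer(self.file)
        self.writer.writerow(["time_s", "yaw", "pitch", "roll"])
        self.start = None

    def start_recording(self):
        """Call this at the same moment the audio recording starts."""
        self.start = time.monotonic()

    def write_frame(self, yaw, pitch, roll):
        """Write one sensor frame, timestamped relative to the recording start."""
        if self.start is None:
            return  # not recording yet
        self.writer.writerow([time.monotonic() - self.start, yaw, pitch, roll])

    def close(self):
        self.file.close()


# Example usage (hypothetical file name and sensor values):
rec = SensorRecorder("take_01_myo.csv")
rec.start_recording()             # start the audio recording at the same time
rec.write_frame(12.3, -4.5, 0.8)  # called for every incoming IMU frame
rec.close()
```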

Enough with the technicalities for this post; let’s talk about music. I’ve been going to The Stone every week since I arrived here. It is a unique venue for amazing, mind-blowing, genre-defying music and a constant source of inspiration for what I’m doing. Every Sunday at 3pm different musicians perform a selection of new compositions by John Zorn called “The Bagatelles”. In small venues such as The Stone it is possible to hear (or should I say “feel”) every single detail of the performance, and to appreciate the texture of the sound, the presence of the performers, their movements and their interplay. Highly recommended.

I will soon post some more “musical” tests, also involving other musicians!

This project is supported by Santander Universities and is a collaboration between Federico Visi, who is currently carrying out his doctoral research at the Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University (UK), under the supervision of Prof Eduardo Reck Miranda, and Andrew Telichan Phillips, who is currently carrying out his doctoral research at NYU under the supervision of Dr Tae Hong Park.
