At New York University to work on sensors for music performance – pt. 1

For the next few weeks I will be in New York, working on a collaborative project with the Music and Audio Research Laboratory (MARL) at NYU Steinhardt School of Music and Performing Arts Practice.

The goal of this project is to develop software tools that harness wearable sensor technologies for body movement research and interactive music performance. In particular, the project will focus on the use of 9 Degrees of Freedom Inertial Measurement Units (9DoF IMUs) coupled with a form of muscle sensing, such as electromyography (EMG) or mechanomyography (MMG).

The project has a twofold purpose. The first is to develop dedicated applications that process the motion data from the sensors in real time, allowing performers to interact with music and with each other through their movements and extending the possibilities of their musical instruments. The second is to give researchers a useful tool for studying body motion and collecting sensor data for analysis.

During the first week we focused on obtaining a stable stream of data from the Myo armband. The Myo features a 9DoF IMU that provides three-dimensional acceleration and angular velocity, along with orientation data obtained through sensor fusion, in both Euler angle and quaternion formats. In addition to the IMU data, the Myo provides 8-channel EMG data, which is a unique feature of the device. I will work with other sensors in the near future, since I don't want to limit the software I'm working on to the Myo. I've already tried other IMUs, and I look forward to working with Marco Donnarumma's new version of the Xth Sense, which is currently being tested and will soon be available through xth.io. For the moment, however, the Myo provides a good hardware platform for prototyping algorithms and trying out ideas, since it is a fairly well-engineered and compact device.
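To give a rough idea of what handling this stream can look like, here is a minimal Python sketch that listens for IMU and EMG messages sent over OSC, which is a common way of routing sensor data into music software. The OSC addresses, port, and message layouts below are placeholders I'm using for illustration, not the Myo's official interface.

# Minimal sketch of receiving Myo-style sensor data over OSC.
# The addresses (/myo/imu, /myo/emg) and port 9000 are assumptions
# for illustration, not part of any official Myo API.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer


def handle_imu(address, *values):
    # Assumed layout: 3 acceleration + 3 angular velocity + 4 quaternion values.
    print(f"{address}: {values}")


def handle_emg(address, *values):
    # Assumed layout: 8 EMG channels.
    print(f"{address}: {values}")


dispatcher = Dispatcher()
dispatcher.map("/myo/imu", handle_imu)
dispatcher.map("/myo/emg", handle_emg)

server = BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher)
server.serve_forever()  # blocks; stop with Ctrl+C

Once the data arrives in a handler like this, it can be logged for analysis or forwarded to whatever is generating or controlling the sound.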

I started working on real-time implementations of movement descriptors traditionally used with optical motion capture systems, such as Quantity of Motion and Smoothness. These descriptors make it possible to extract expressive features from the movement data, which are then useful for interactive music applications and movement analysis. The main challenge here is to adapt the ideas behind these descriptors to the data provided by the wearable sensors, which is completely different from the data obtained by optical devices such as the Kinect and marker-based MoCap systems.
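As a rough sketch of the kind of computation involved, the snippet below derives a simple Quantity of Motion proxy (a leaky-integrated acceleration magnitude) and a jerk-based smoothness measure from raw accelerometer samples. The smoothing coefficient and the exact formulas are illustrative assumptions on my part, not the canonical definitions of these descriptors.

# Sketch of simple per-frame movement descriptors computed from IMU
# acceleration, as a stand-in for camera-based Quantity of Motion.
# The smoothing coefficient and formulas are illustrative assumptions.
import numpy as np


def quantity_of_motion(accel, alpha=0.1):
    """Leaky-integrated acceleration magnitude as a rough QoM proxy.

    accel: (N, 3) array of accelerometer samples (gravity removed).
    alpha: smoothing coefficient of the leaky integrator (assumed value).
    """
    magnitude = np.linalg.norm(accel, axis=1)
    qom = np.zeros_like(magnitude)
    for i in range(1, len(magnitude)):
        qom[i] = (1 - alpha) * qom[i - 1] + alpha * magnitude[i]
    return qom


def jerk_based_smoothness(accel, sample_rate):
    """Negative mean squared jerk: higher values indicate smoother movement.

    Jerk is approximated as the first difference of acceleration.
    """
    jerk = np.diff(accel, axis=0) * sample_rate
    return -np.mean(np.sum(jerk ** 2, axis=1))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_accel = rng.normal(size=(200, 3))  # stand-in for real sensor data
    print(quantity_of_motion(fake_accel)[-5:])
    print(jerk_based_smoothness(fake_accel, sample_rate=50))

In a real-time setting the same logic would run sample by sample on the incoming stream rather than on a stored array, but the underlying idea is the same.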

In addition to this rather technical work, I will test the software in actual music performances, collaborating with other musicians. I believe this is a vital and essential part of the research, without which the project might steer too far away from what it is actually all about: music. While in New York, I will also try to take advantage of the vibrant and inexhaustible live music scene this city has always had, which is a great source of inspiration for what I'm doing. I've already been to a few excellent concerts and performances of various kinds, and observing the behaviours of the performing musicians has already led to some ideas I want to try in the coming days.

I will try to write more posts about our progress if time allows, possibly including videos and pictures. Alright, now back to work.

Read pt.2 here.

This project is supported by Santander Universities and is a collaboration between Federico Visi, who is carrying out his doctoral research at the Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University (UK), under the supervision of Prof. Eduardo Reck Miranda, and Andrew Telichan Phillips, who is carrying out his doctoral research at NYU under the supervision of Dr. Tae Hong Park.
