At New York University to work on sensors for music performance – pt. 3: Machine Learning
In the past couple of weeks we used two Myos at the same time to evaluate higher-level features of movement such as symmetry, contraction, and even full-body weight shifting. Combining and comparing the orientation data from the IMUs inside the two Myos worked surprisingly well for this.
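As a rough illustration of the kind of feature we mean, here is a minimal Python sketch (not the actual Max patch) of a symmetry index computed from the two arms' orientation data. The Euler-angle convention and the mirroring of roll and yaw are assumptions for the example, not details from the project:

```python
import math

def symmetry(left_euler, right_euler):
    """Crude symmetry index from two arms' (roll, pitch, yaw) in radians.

    Mirrors the right arm's roll and yaw so that a mirrored posture
    matches the left arm exactly, then measures the mean angular
    difference: 1.0 = perfectly mirrored posture, 0.0 = maximally
    asymmetric. This is an illustrative sketch, not the Myo pipeline.
    """
    mirrored = (-right_euler[0], right_euler[1], -right_euler[2])
    # mean absolute angular difference, wrapped to [-pi, pi]
    diffs = [abs((l - m + math.pi) % (2 * math.pi) - math.pi)
             for l, m in zip(left_euler, mirrored)]
    return 1.0 - sum(diffs) / (3 * math.pi)
```

A perfectly mirrored pose, e.g. `symmetry((0.5, 0.2, -0.3), (-0.5, 0.2, 0.3))`, scores 1.0; as the arms diverge the index falls toward 0.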
In addition, we used machine learning in Max to map the data from the Myos to a complex granular sampler with resonators. We used the excellent ml.lib library for Max to quickly map arm postures and muscular efforts to multiple parameters of the granular engine, controlling real-time processing of the audio signal coming from an electric guitar. The cool thing about this approach is that you don’t have to spend time hand-mapping and rescaling individual control values: you can map complex expressive gestures to multiple synthesis parameters at once. Check the video above for a demo.
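To give a flavour of this many-to-many mapping outside of Max, here is a toy Python sketch using inverse-distance-weighted k-nearest-neighbour regression. The class name, features, and parameter names are all hypothetical, and this is not the algorithm ml.lib uses internally — just one simple way to interpolate between recorded posture/parameter pairs:

```python
import math

class GestureMapper:
    """Toy many-to-many regression: maps a posture feature vector
    (e.g. orientation angles plus averaged EMG) to several synthesis
    parameters by interpolating between recorded training examples
    with inverse-distance-weighted k-NN. Illustrative sketch only."""

    def __init__(self, k=3):
        self.k = k
        self.examples = []  # list of (features, params) pairs

    def record(self, features, params):
        """Store one training example: a posture and its target params."""
        self.examples.append((list(features), list(params)))

    def map(self, features):
        """Interpolate synthesis parameters for an unseen posture."""
        nearest = sorted(
            (math.dist(features, f), p) for f, p in self.examples
        )[: self.k]
        weights = [1.0 / (d + 1e-9) for d, _ in nearest]
        total = sum(weights)
        n_params = len(nearest[0][1])
        return [
            sum(w * p[i] for w, (_, p) in zip(weights, nearest)) / total
            for i in range(n_params)
        ]
```

In use, you would `record()` a handful of postures paired with desired grain-size/resonance settings during a training phase, then call `map()` on the live feature stream to drive the synth continuously.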
Speaking of inspiring music I’ve been listening to lately, check out this amazing performance by Colin Stetson and Sarah Neufeld. They are great musicians, and the way they interact and move on stage is really compelling. I listened to their record pretty much every day for a couple of weeks on my subway commute between Manhattan and Greenpoint.
This project is supported by Santander Universities. It is a collaboration between Federico Visi, who is currently carrying out his doctoral research at the Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University (UK), under the supervision of Prof Eduardo Reck Miranda, and Andrew Telichan Phillips, who is currently carrying out his doctoral research at NYU under the supervision of Dr. Tae Hong Park.