Month: September 2015

  • At New York University to work on sensors for music performance – pt. 5: tests with musicians

    Some experiments I did together with Andrew Telichan Phillips and some very nice and talented musicians at NYU Steinhardt and at The Sweatshop.
    We used Myo sensor armbands and Machine Learning to adapt control parameters to the movements of musicians playing different musical instruments.

    Credits:
    Alto Sax: Ana Garcia
    Drums: Kim Deuss
    Tenor Sax: Timo Vollbrecht
    Flute: Rachel Bittner

    Related posts: pt. 1, pt. 2, pt. 3, pt. 4.
    This project is supported by Santander Universities and is a collaboration between Federico Visi, who is currently carrying out his doctoral research at the Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University (UK), under the supervision of Prof. Eduardo Reck Miranda, and Andrew Telichan Phillips, who is currently carrying out his doctoral research at NYU under the supervision of Dr. Tae Hong Park.

  • At New York University to work on sensors for music performance – pt. 4: Talk at NYU Steinhardt

    Tomorrow I am going to deliver a talk at the NYU Music and Audio Research Laboratory about my research at the Interdisciplinary Centre for Computer Music Research (ICCMR) in Plymouth.

    Click on the poster below to learn more.
    [Poster: Poster_20150910_Visi]


    Related posts: pt. 1, pt. 2, pt. 3.

    This project is supported by Santander Universities and is a collaboration between Federico Visi, who is currently carrying out his doctoral research at the Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University (UK), under the supervision of Prof. Eduardo Reck Miranda, and Andrew Telichan Phillips, who is currently carrying out his doctoral research at NYU under the supervision of Dr. Tae Hong Park.

  • At New York University to work on sensors for music performance – pt. 3: Machine Learning

    [Embedded video: demo]

    In the past couple of weeks we used two Myos at the same time to evaluate higher-level movement features such as Symmetry, Contraction, and even full-body weight shifting, which worked surprisingly well when combining and comparing the orientation data from the IMUs inside the two armbands.
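
    The post doesn't spell out how these descriptors are computed; purely as a rough illustration, here is a minimal numpy sketch of how symmetry- and contraction-like indices could be derived from the accelerometer and orientation data of two armbands. The function names and formulas are my own assumptions, not the actual patch used in the project.

    ```python
    import numpy as np

    def symmetry_index(acc_left, acc_right):
        """Frame-wise symmetry: close to 1 when both forearms accelerate with
        similar magnitude, approaching 0 when one arm moves alone.
        acc_left, acc_right: (n, 3) accelerometer samples."""
        mag_l = np.linalg.norm(acc_left, axis=1)
        mag_r = np.linalg.norm(acc_right, axis=1)
        return 1.0 - np.abs(mag_l - mag_r) / (mag_l + mag_r + 1e-9)

    def quat_to_forward(q):
        """Rotate the unit x-axis by quaternions q = (w, x, y, z), giving the
        forearm pointing direction in world coordinates. q: (n, 4)."""
        w, x, y, z = q.T
        fx = 1 - 2 * (y**2 + z**2)   # first column of the rotation matrix
        fy = 2 * (x * y + w * z)
        fz = 2 * (x * z - w * y)
        return np.stack([fx, fy, fz], axis=1)

    def contraction_index(quat_left, quat_right):
        """Crude contraction/expansion proxy: the angle between the two forearm
        directions, normalised to [0, 1] (0 = arms parallel, 1 = opposite)."""
        fwd_l = quat_to_forward(quat_left)
        fwd_r = quat_to_forward(quat_right)
        cos_angle = np.clip(np.sum(fwd_l * fwd_r, axis=1), -1.0, 1.0)
        return np.arccos(cos_angle) / np.pi
    ```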

    In addition to that, we used Machine Learning in Max to map the data from the Myos to a complex granular sampler with resonators. We used the excellent ml.lib library for Max to quickly map arm postures and muscular effort to multiple parameters of the granular engine, controlling real-time processing of the audio signal coming from an electric guitar. The cool thing about this approach is that you don't have to spend time manually mapping and rescaling control values, since you can easily map complex expressive gestures to multiple synthesis parameters at once. Check the video above for a demo.
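
    The mapping itself lives in ml.lib objects inside a Max patch, so there is no text code to show; purely as an analogue of the idea, here is a small Python sketch of the same regression-style mapping using scikit-learn. The feature layout, parameter names, and example values are invented for illustration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical training examples: each row pairs a posture/effort feature
    # vector (8 averaged EMG channels + pitch/roll/yaw from one armband) with
    # the granular-engine parameters it should produce.
    X_train = np.array([
        # emg x8 .............................., pitch, roll,  yaw
        [0.1, 0.2, 0.1, 0.0, 0.1, 0.3, 0.2, 0.1,  0.0,  0.2, -0.1],  # relaxed arm
        [0.8, 0.7, 0.9, 0.6, 0.8, 0.7, 0.9, 0.8,  1.2, -0.4,  0.3],  # raised, tense arm
    ])
    y_train = np.array([
        # grain_size, density, resonator_freq, dry_wet
        [0.08, 0.2, 220.0, 0.1],
        [0.01, 0.9, 880.0, 0.8],
    ])

    # A small regression model plays the role of the ml.lib object here: record
    # a handful of examples, train, then map every new sensor frame to all the
    # synthesis parameters at once, with no hand-tuned scaling.
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    model.fit(X_train, y_train)

    # At performance time each incoming frame yields a full parameter set.
    live_frame = np.array([[0.5, 0.4, 0.6, 0.3, 0.5, 0.5, 0.6, 0.5, 0.6, -0.1, 0.1]])
    grain_size, density, resonator_freq, dry_wet = model.predict(live_frame)[0]
    ```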

    Speaking of inspiring music I've been listening to lately, check out this amazing performance by Colin Stetson and Sarah Neufeld. They are great musicians, and the way they interact and move on stage is really compelling. I listened to their record pretty much every day for a couple of weeks on my subway commute between Manhattan and Greenpoint.

    Related posts: pt. 1, pt. 2.

    This project is supported by Santander Universities and is a collaboration between Federico Visi, who is currently carrying out his doctoral research at the Interdisciplinary Centre for Computer Music Research (ICCMR), Plymouth University (UK), under the supervision of Prof. Eduardo Reck Miranda, and Andrew Telichan Phillips, who is currently carrying out his doctoral research at NYU under the supervision of Dr. Tae Hong Park.