Author: _FV

  • Physically distant: online talks on telematic performance

    Wednesday 3 June 2020, 13:30 – 21:00 CEST

    Performing live with physically distant co-performers and audiences through audio, video, and other media shared via telematic means has been part of the work of artists and researchers for several decades. Recently, the restrictions put in place to cope with the COVID-19 pandemic required performing artists to find solutions to practise their craft while maintaining physical distance between themselves, their collaborators, and their audience. This scenario brought many questions related to telematic performance to the fore: What are the opportunities and challenges of telematic performance? What are the implications for how performing arts are conceived, developed, and experienced? How are research and practice being reconfigured? How is telematic performance suggesting different understandings of the role of instruments, gesture, and acoustic spaces? How might telematic performance contribute to reconfiguring our understanding of music in societal and political perspectives?

    The GEMM))) Gesture Embodiment and Machines in Music research cluster at the School of Music in Piteå, Luleå University of Technology, has invited a group of artists, researchers, and scholars to instigate an open, interdisciplinary discussion on these themes. The talks will take place online on Wednesday 3 June 2020.

    The sessions will be organised in 1-hour time slots. Each slot will include two 15-minute presentations; the remaining time will be dedicated to questions and discussion. After each slot there will be a 30-minute break in order to avoid “Zoom fatigue.” There will be a plenary at the end of the day, during which we will discuss issues and opportunities that emerged during the other sessions.

    Schedule (all times are CEST):
    13:30 – 14:00 Session 0 : Welcome and introduction
    14:00 – 15:00 Session 1 : Roger Mills; Shelly Knotts
    15:00 – 15:30 Break
    15:30 – 16:30 Session 2 : Gamut inc./ Aggregate; Randall Harlow
    16:30 – 17:00 Break
    17:00 – 18:00 Session 3 : Alex Murray-Leslie; Atau Tanaka
    18:00 – 19:00 Dinner break (1 hr)
    19:00 – 20:00 Session 4 : Chris Chafe; Henrik von Coler
    20:00 – 21:00 Plenary

    Moderators: Federico Visi, Stefan Östersjö

    Anyone can join upon registration using this online form: https://forms.gle/1goB2TcjGKjL6nkT8  
    We will send you a link to join a Zoom meeting on the day of the talks.
    NOTE: the talks will be recorded.

    An additional networked performance curated by GEMM))) is taking place on Tuesday 2 June, followed by a short seminar and discussion. Everyone is welcome to join this event as well; we will circulate details to the registered email addresses and via social media.

    Programme (all times are CEST):
    14:00 – 14:15 networked performance with the Acusticum Organ: Robert Ek, Mattias Petersson, Stefan Östersjö
    14:15 – 14:30 Vong Co: networked performance with The Six Tones: Henrik Frisk, Stefan Östersjö & Nguyen Thanh Thuy
    14:40 – 15:00 Paragraph – a live coding front end for SuperCollider patterns: Mattias Petersson
    15:00 – 15:20 Discussion

    A follow-up event is planned for the 2020 Piteå Performing Arts Biennial, taking place online on 26–27 October 2020.

    Further info: write me.

  • Towards Assisted Interactive Machine Learning

    In a sentence: Assisted Interactive Machine Learning (AIML) is an interaction design method based on deep reinforcement learning that I started developing for the purpose of exploring the vast space of possible mappings between gesture and sound synthesis.

    I am presenting a research paper and a live multimedia performance on AIML at ICLI 2020 – the fifth International Conference on Live Interfaces taking place at the Norwegian University of Science and Technology in Trondheim, Norway.

    The paper (PDF)

    We present a sonic interaction design approach that makes use of deep reinforcement learning to explore many mapping possibilities between input sensor data streams and sound synthesis parameters. The user can give feedback to an artificial agent about the mappings proposed by the latter while playing the synthesiser and trying the new mappings on the fly. The design approach we adopted is inspired by the ideas established by the interactive machine learning paradigm, as well as by the use of artificial agents in computer music for exploring complex parameter spaces.
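
    To make the loop described above more concrete, here is a deliberately simplified sketch of an interactive mapping-exploration cycle. It is an illustrative assumption on my part, not the actual AIML implementation (which uses a deep reinforcement learning agent): an agent proposes candidate gesture-to-sound mappings, the user plays with them and rates them, and the ratings steer where the agent searches next.

        # Toy interactive mapping-exploration loop (illustrative sketch only,
        # not the AIML implementation described in the paper).
        import numpy as np

        N_SENSORS, N_PARAMS = 8, 4   # e.g. EMG/IMU channels -> synthesis parameters

        rng = np.random.default_rng(0)
        best_mapping = rng.uniform(-1, 1, size=(N_PARAMS, N_SENSORS))
        step_size = 0.5

        def apply_mapping(mapping, sensor_frame):
            """Map one frame of sensor data to synthesis parameters in [0, 1]."""
            return 1.0 / (1.0 + np.exp(-mapping @ sensor_frame))

        def get_user_feedback():
            """Placeholder: in practice the user tries the proposed mapping on the
            synthesiser and rates it (-1 = reject, 0 = neutral, +1 = promising)."""
            return float(input("Rate this mapping (-1/0/1): "))

        for episode in range(20):
            candidate = best_mapping + step_size * rng.normal(size=best_mapping.shape)
            # ...the user plays the synthesiser with `candidate` applied here...
            reward = get_user_feedback()
            if reward > 0:
                best_mapping = candidate   # keep exploring around mappings the user liked
                step_size *= 0.9           # refine the search locally
            elif reward < 0:
                step_size *= 1.1           # widen the search after a rejection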

    About the performance (PDF)

    “My phone beeps. A notification on the home screen says “You have a new memory”. It happens at times, unsupervised learning algorithms scan your photos and videos, look at their features and metadata, and then you get a nice slideshow of that trip to South America, or those shows you went to while you were in Hamburg or London. There is something ridiculous about this (the music they put on the slideshows, for example) as well as something eerie, something even slightly distressing perhaps.”

    “You Have a New Memory” (2020) makes use of the AIML interaction paradigm to navigate a vast corpus of audio material harvested from messaging applications, videos, and audio journals recorded on the author’s mobile phone. This corpus of sonic memories is then organised using audio descriptors and navigated with the aid of an artificial agent and reinforcement learning.
    The title of the piece – “You Have a New Memory” – refers to the notifications that a popular photo library application occasionally sends to mobile devices to prompt their users to check an algorithmically generated photo gallery that collects images and videos related to a particular event or series of events in their lives.
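
    As a rough illustration of how such a corpus could be organised by audio descriptors (an assumption on my part, using librosa and scikit-learn rather than the tools actually used in the piece), one can describe each audio file with a handful of descriptors and then move through the corpus by nearest-neighbour jumps in descriptor space:

        # Hypothetical sketch: describe audio snippets and hop between similar ones.
        # Library choice and descriptor set are assumptions, not those used in the piece.
        import glob
        import numpy as np
        import librosa
        from sklearn.neighbors import NearestNeighbors

        def describe(path):
            """Summarise one file with a few coarse audio descriptors."""
            y, sr = librosa.load(path, mono=True)
            centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
            flatness = librosa.feature.spectral_flatness(y=y).mean()
            rms = librosa.feature.rms(y=y).mean()
            return np.array([centroid, flatness, rms])

        files = sorted(glob.glob("corpus/*.wav"))
        features = np.array([describe(f) for f in files])
        features = (features - features.mean(0)) / features.std(0)   # normalise

        index = NearestNeighbors(n_neighbors=5).fit(features)

        # Starting from one sonic memory, list similar ones to jump to next.
        _, neighbours = index.kneighbors(features[:1])
        print([files[i] for i in neighbours[0]])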

    I started developing these concepts in Summer 2019 in Berlin after a few informal meetings with Atau Tanaka, then Edgard-Varèse guest professor at TU Berlin. Development took place during a 1-month postdoc at Goldsmiths, University of London, in September 2019, and continued with Stefan Östersjö and the GEMM))) Gesture Embodiment and Machines in Music research cluster at the School of Music in Piteå, Luleå University of Technology, Sweden.

    Paper presentation at ICLI 2020, Trondheim, Norway.

  • NIME 2019 Music Proceedings

    As one of the NIME 2019 Music co-chairs, I promoted the establishment of Music Proceedings:

    Since NIME began nearly two decades ago, this is the first event where composers and creators of the music pieces in the concert programme have been invited to publish an extended abstract of their work. These documents, describing the aesthetic and technical characteristics of the music pieces, are collected here, in the Music Proceedings.

    We believe Music Proceedings are an important step towards a consistent and richer means of documenting the performances taking place at NIME. This will be a useful resource for researchers, and provides an alternative voice for contributors to speak about their artistic practice in NIME research.

    Download the PDF here.

  • SloMo study #2

    This piece was performed at NIME 2018 (both at Virginia Tech’s Moss Arts Center and at the NIME performance night organised by the University of Virginia in Charlottesville) and at MOCO 2018, held at InfoMus – Casa Paganini.

    I composed SloMo study #2 to explore the use of slow and microscopic body movements in electronic music performance, and the role of rhythmic visual cues and breathing in the perception of movement and time. To do so, I used wearable sensors (the EMG sensors and IMUs found in Myo armbands), variable-frequency stroboscopic lights, an electronic stethoscope, and a body-worn camera for face tracking.

    Here is a short video excerpt that I used to accompany my NIME and MOCO submissions. Unfortunately the effects of slowly changing the frequency of the strobes cannot be captured in videos with standard frame rates.

    Speaking of NIME, I’m going to be a Music co-chair for NIME 2019 and I’m really looking forward to seeing what NIME artists have come up with this year.


  • New modosc objects for EMG & MoCap processing in Max

    During November and December 2018, I had the opportunity to spend 5 weeks as a visiting researcher at RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, an amazing centre of excellence recently inaugurated at the University of Oslo. In mid November, at the beginning of my stay, Luke Dahl and I presented modosc, our Max library for real-time motion capture analysis, to the attendees of the RITMO International Motion Capture Workshop. The library is the result of a collaboration between Luke and myself, and has been presented at various conferences in 2018, including NIME (paper) and MOCO (paper).

    While in Oslo, I had the chance to spend time in the RITMO Motion Capture lab and use their Delsys Trigno wireless EMG system synchronised with their Qualisys cameras. With that gear, I coded three new modosc objects for real-time processing of EMG signals synchronised with MoCap:

    • mo.qtmSig: binds data from QTM analog boards to new signal addresses in the modosc namespace (under /modosc/signals);
    • mo.zcr: calculates the zero crossing rate of a signal (a useful feature for classification tasks);
    • mo.tkeo: calculates the Teager-Kaiser energy-tracking operator (TKEO) of a signal, useful for onset detection among other things (to learn more, check out Eivind Kvedalen’s PhD thesis: http://folk.uio.no/eivindkv/ek-thesis-2003-05-12-final-2.pdf). I got the idea of implementing this interesting feature from Geert Roks, a student at Goldsmiths, University of London, currently collaborating with Prof Atau Tanaka. A rough sketch of both descriptors follows below.
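
    Outside Max, the two descriptors boil down to a few lines of NumPy. This is just the underlying maths, not the modosc objects themselves:

        # Zero-crossing rate and Teager-Kaiser energy operator, sketched in NumPy.
        import numpy as np

        def zcr(x):
            """Zero-crossing rate: fraction of consecutive samples that change sign."""
            signs = np.sign(x)
            return np.mean(signs[1:] != signs[:-1])

        def tkeo(x):
            """Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
            return x[1:-1] ** 2 - x[:-2] * x[2:]

        # Quick check on a synthetic, EMG-like decaying burst:
        t = np.linspace(0, 1, 1000)
        burst = np.sin(2 * np.pi * 50 * t) * np.exp(-5 * t)
        print(zcr(burst), tkeo(burst).max())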

    Here are some video tutorials to get you started with modosc.

  • modosc: Mocap & Max video tutorials

    These are some introductory video tutorials about processing motion capture data in real time in Max using the modosc library.

    Modosc is a set of Max abstractions designed for computing motion descriptors from raw motion capture data in real time.


  • Building a swarm poly synth using Max 8’s new MC objects

    I just downloaded the new Max 8 and here is a simple synth I built using the new MC (multichannel) objects. Each voice has 32 sawtooth oscillators, so with 6-voice polyphony you can get up to 192 oscillators playing at the same time. The dials control pitch spread and “deviated” release (meaning that each oscillator inside each voice will have a slightly different release time).
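
    The patch itself is in Max, but the idea behind the two dials is easy to sketch numerically. The function and values below are illustrative assumptions, not taken from the patch: each oscillator in a voice gets a slightly different pitch around the base note and a slightly different release time.

        # Illustrative sketch of "pitch spread" and "deviated" release for one voice
        # (made-up values, not taken from the Max patch).
        import numpy as np

        N_OSC_PER_VOICE = 32

        def voice_settings(base_freq_hz, spread_semitones, release_s, release_dev,
                           rng=np.random.default_rng()):
            """Per-oscillator frequencies and release times for one voice."""
            # Spread each oscillator's pitch uniformly around the base note.
            detune = rng.uniform(-spread_semitones, spread_semitones, N_OSC_PER_VOICE)
            freqs = base_freq_hz * 2.0 ** (detune / 12.0)
            # "Deviated" release: every oscillator releases at a slightly different time.
            releases = release_s * (1.0 + rng.uniform(-release_dev, release_dev,
                                                      N_OSC_PER_VOICE))
            return freqs, releases

        freqs, releases = voice_settings(220.0, spread_semitones=0.3,
                                         release_s=1.5, release_dev=0.4)
        print(freqs.round(1), releases.round(2))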

    Since a few people on social media asked me to share the patch, I made it available for download here. EDIT: I moved the files to GitHub: https://github.com/federicoVisi/max_mc_swarm_polysynth

    NOTE: the patch is a quick and dirty experiment I did to try out the sound synthesis capabilities of the MC objects in Max 8. It is not a finished instrument and has some inconsistencies that should be fixed. You’re very welcome to edit the patch and get in touch to share ideas, although be aware that I might not have the time to provide technical support.

  • Workshop and Performance at Harvestworks, New York City

    I recently ran a workshop and performed at Harvestworks in New York City. The workshop was done in collaboration with Andrew Telichan Phillips from the Music and Audio Research Laboratory at NYU Steinhardt. The amazing Ana García Caraballos performed my piece 11 Degrees of Dependence with me on alto sax, Myo armbands, and live electronics. Here’s a video:

     

  • Testing the XTH Sense with Physical Models and Machine Learning

    I recently had the chance to play with a prototype version of the new XTH Sense. I met up with Marco Donnarumma and Balandino Di Donato at Integra Lab in Birmingham and we spent a couple of days experimenting with this interesting, as yet unreleased device. It is a small, wireless, wearable unit that comprises a Mechanomyogram (MMG) sensor for capturing the sounds produced by muscular activity and a 9DoF IMU, which returns various motion features such as acceleration, angular velocity, and orientation.

    I had already been working with 9DoF IMU data during my research collaboration at NYU Steinhardt in New York and for previous performances, so I knew what I could expect in that department. However, one of the main peculiarities of the XTH Sense is the MMG sensor. While in New York, I had worked with Thalmic Labs’ Myo, which employs Electromyogram (EMG) for muscle sensing. I won’t go too deep into the technical differences between MMG and EMG; suffice it to say that EMG senses the electrical impulses sent by the brain to cause muscle contraction, while MMG consists of the sounds that your muscles produce during contraction and extension. (If you want to learn more, Marco covered these topics thoroughly in this article written with Baptiste Caramiaux and Atau Tanaka, and here is another article that compares the two technologies from a biomedical point of view.) In terms of expressive interaction, what I find interesting about the MMG sensor of the XTH Sense is the distinctive way it responds to movements and gestures. Unlike EMG, the control signals obtained from the XTH Sense peak at movement onsets and remain relatively low if you keep your muscles contracted. This is neither better nor worse than EMG; it’s different.

    While adapting my code, I started noticing how the response of the XTH Sense made me interact differently with the machine learning and physical modelling patches I had previously built using the Myo. I guess that with a fair amount of signal processing I could make the two devices behave in a virtually similar way, but in my opinion this would be rather pointless. One of the exciting things about having to deal with a new device is to embrace its interface idiosyncrasies and explore their expressive potential. As a simple example, in the physical modelling patch I built for the rain stick demo we filmed in Birmingham, the amount of excitation sent to the model depended on one of the MMG control features. Had I used EMG, I would have obtained a steady excitation signal by firmly squeezing the stick, while the response of the MMG required me to perform a more iterative gesture, like repeatedly tapping my fingers on the stick, if I wanted to obtain a louder sound. This somehow reminded me of the gestures involved in playing a wind instrument, and this idea influenced the whole interaction design I eventually implemented.
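
    To illustrate that difference in behaviour, here is a hypothetical sketch (not code from the actual patch) of an envelope follower used as the excitation amount for a physical model. Fed with a bursty, MMG-like signal it only produces energy around movement onsets, whereas a sustained, EMG-like contraction keeps it high:

        # Hypothetical illustration of why MMG invites iterative gestures:
        # the same envelope follower driven by a bursty (MMG-like) signal
        # versus a sustained (EMG-like) one.
        import numpy as np

        def excitation_envelope(x, attack=0.2, release=0.01):
            """One-pole envelope follower used here as the excitation amount."""
            env = np.zeros_like(x, dtype=float)
            for n in range(1, len(x)):
                coeff = attack if abs(x[n]) > env[n - 1] else release
                env[n] = env[n - 1] + coeff * (abs(x[n]) - env[n - 1])
            return env

        t = np.linspace(0, 2, 2000)
        mmg_like = (np.sin(2 * np.pi * 40 * t)              # muscle "sound"
                    * (np.sin(2 * np.pi * 3 * t) > 0.95))   # present only around onsets
        emg_like = np.where(t > 0.5, 0.8, 0.0)               # sustained contraction

        print("mean excitation, MMG-like:", excitation_envelope(mmg_like).mean().round(3))
        print("mean excitation, EMG-like:", excitation_envelope(emg_like).mean().round(3))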

    I will soon be back in New York for a workshop and a performance at Harvestworks on May 8th, where I’ll show some of the tools and methodologies I use in my research and practice, including those I experimented with while playing with the new XTH Sense for the first time. If you’re in the area and want to attend, register here; if you just want to know more about it, drop me a line.

  • Performances at Peninsula Arts Contemporary Music Festival 2016

    Very excited to be performing two pieces at this year’s Peninsula Arts Contemporary Music Festival.

    The super talented Esther Coorevits will once again join me to perform an updated version of Kineslimina at the Gala Concert on Saturday night; it will feature some of the technologies I started working on while I was in New York last summer.

    On Sunday, the amazing Dr. Katherine Williams will play soprano sax and motion sensors for my new piece 11 Degrees of Dependence. Her movements will control the parameters of a synthetic flute.

    Check out the rest of the programme; there are some very exciting works you won’t be able to hear anywhere else.