Physically Distant #3: the network, the pandemic, and telematic performance

Posted on Oct 15, 2020 in Uncategorized

PD#3 will be part of Ecology, Site and Place – Piteå Performing Arts Biennial 2020. Participation in the conference is free, but registration is compulsory. Register by sending an email to piteabiennial@gmail.com

After the two previous editions in June and July, the third Physically Distant Talks will take place on 26 and 27 October 2020. The talks will be part of the online event of Ecology, Site and Place – Piteå Performing Arts Biennial.

The format will be different this time: there will be more telematic performances, and the talks will be structured in three panels. Each panel member is invited to prepare a 3-minute provocation/reflection related to the topic; this collection of provocations will set the tone for an open discussion in the style of the previous Physically Distant talks. As in the previous editions, Stefan Östersjö and I, Federico Visi, will be moderating the discussion.

Programme (all times are CET)

Monday, 26 October 2020

17:30 Introduction. Stefan Östersjö and Federico Visi
17:40 Simon Waters and Paul Stapleton: Musicking online: your technical problem is actually a social problem. A performative conversation.

18:00-19:00 Panel I. Instrumentality in Networked Performance
Panelists: Nela Brown, Nicholas Brown, Juan Parra Cancino, Franziska Schroeder, Henrik von Coler.

19:00-19:45 Telematic Performance: A concert hall organ in the network.
Live-streaming from Studio Acusticum. Telematic performances with the University Organ remotely controlled from several locations.
Robert Ek, clarinet, performing in Piteå (SE)
Mattias Petersson, live-coding, performing in Piteå (SE)
Federico Visi, electronics, electric guitar, performing in Berlin (DE)
Scott Wilson, live coding, performing in Birmingham (UK)
Stefan Östersjö, electric guitar, performing in Stockholm (SE)

19:45-20:00 Break

20:00-21:00 Panel II. Network ecology: Communities of practice for the digital arts
Panelists: Shelly Knotts, Thor Magnusson, Mattias Petersson, Rebekah Wilson, Scott Wilson.

Tuesday, 27 October 2020

17:45-18:00 Marcin Pączkowski: rehearsing music online: possibilities and limitations

18:00-19:00 Panel III. The network as place
Panelists: Ximena Alarcón Díaz, David Brynjar Franzson/Angela Rawlings/Halla Steinunn Stefánsdóttir, Chicks on Speed (Melissa Logan, Alex Murray-Leslie), Maja Jantar, Marcin Pączkowski, Roger Mills, Luca Turchet.

19:00-19:30 Telematic Performance: iða
David Brynjar Franzson, technical concept and streaming (US)
Maja Jantar, performer and composer of visual score (BE)
Angela Rawlings, performer and composer of visual score (IS/CA)
Halla Steinunn Stefánsdóttir, performer and composer of visual score (SE)

19:30-20:00 Break

20:00-21:00 Where do we go from here? (plenary discussion)

For more details on the Ecology, Site And Place – Piteå Performing Arts Biennial 2020 online event, download the book of abstracts.

Physically Distant #2: more online talks on telematic performance

Posted on Jul 23, 2020 in Events, Research

Tuesday 28 July 2020, 14:00 – 19:00 CEST
Anyone can join upon registration using this online form: https://forms.gle/zzLV46NbvgqAtJ7t7

Performing live with physically distant co-performers and audiences through audio, video, and other media shared via telematic means has been part of the work of artists and researchers for several decades. Recently, the restrictions put in place to cope with the COVID-19 pandemic have required performing artists to find solutions to practice their craft while maintaining physical distance between themselves, their collaborators, and their audience. In this second edition of Physically Distant, we wish to continue discussing telematic performance from perspectives suggested by the following questions: 

What are the opportunities and challenges of telematic performance?
What are the implications for how performing arts are conceived, developed, and experienced?
How are research and practice being reconfigured?
How is telematic performance suggesting different understandings of the role of instruments, gesture and acoustic spaces?
How might telematic performance contribute to reconfiguring our understanding of music in societal and political perspectives?

We wish to highlight two threads from the previous discussions. First, how telematic performance can be conceived of as protest, and second, the potential for telematic performance to expand the artistic and social potential in intercultural arts. Both of these threads imply a discussion of accessibility.

Once again, the GEMM))) Gesture Embodiment and Machines in Music research cluster at the School of Music in Piteå, Luleå University of Technology, has invited a group of artists, researchers, and scholars to instigate an open, interdisciplinary discussion on these themes. The talks will happen online on Tuesday 28 July 2020.
The sessions will be organised in 1-hour time slots. Each slot will include two 15-min presentations; the remaining time will be dedicated to questions and discussion.

We are very happy to host a telematic performance by the Female Laptop Orchestra (FLO). The practice of this group is discussed in the talks by Franziska Schroeder and Nela Brown. A presentation of the conceptual backdrop for the performance can be found below.

The structure of the event includes short breaks in between the sessions in order to avoid Zoom fatigue and allow for informal chats and continued discussion over a drink (not provided by the organisers). There will be a plenary at the end of the day, during which we will be discussing issues and opportunities that have emerged during the other sessions.

28 July 2020 schedule (all times are CEST):

  • 14:00 Session 0: Introduction, results of the survey that followed Physically Distant #1
  • 14:30 Session 1: Ximena Alarcón, Franziska Schroeder
  • 15:30-15:50 — Performance by FLO (Female Laptop Orchestra) —
  • 15:50-16:00 — 10-min Break —
  • 16:00 Session 2: Nela Brown, Rebekah Wilson
  • 17:00 — 30-min Break —
  • 17:30 Session 3: OvO, Kaffe Matthews
  • 18:30 Session 4: Plenary
  • 19:00 — END —

Moderators / instigators: Federico Visi, Stefan Östersjö

Anyone can join upon registration using this online form: https://forms.gle/zzLV46NbvgqAtJ7t7 

We will send you a link to join a Zoom meeting on the day of the talks.
NOTE: the talks will be recorded.

A follow-up event is planned for the 2020 Piteå Performing Arts Biennial, taking place online on 26-27 October 2020.

Further info: mail@federicovisi.com

Telematic performance by the Female Laptop Orchestra (FLO)

Absurdity (concept by Franziska Schroeder and Matilde Meireles)
A distributed performance using LiveSHOUT with members from the Female Laptop Orchestra (FLO).

“Absurdity” is based around a short excerpt from one of Portugal’s most mysterious, elusive and peculiar writers, Fernando Pessoa. Pessoa’s multiplicities and his ways of thinking about life, engendering ideas that can feel manic-depressive, filled with buckets of self-pity, while being able to scratch the innermost parts of one’s soul, lie at the heart of this distributed performance.

Members of FLO will stream sounds from several distributed places, including Crete, Italy, Brazil and the UK, while Franziska and Matilde deliver fragmented excerpts (in both English and Portuguese) alongside the LiveSHOUT streams. The idea of distributed creativity (combining sounds from several sites), inspired by Pessoa’s plurality of thoughts and philosophies, his multiplicities, his fictionality and his self-alienation, will lead to a performance that aims to be absurd, dispersed, fragmented and multiple.

“I’ve always belonged to what isn’t where I am and to what I could never be”.  (Pessoa In: Ciuraru, 2012).

The FLO performers are:
Franziska Schroeder – LiveSHOUT and Pessoa reading (English)
Matilde Meireles – LiveSHOUT and Pessoa reading (Portuguese)
Maria Mannone – LiveSHOUT streams of piano improv from Palermo
Maria Papadomanolaki – LiveSHOUT streams of sounds from Crete
Anna Xambó – LiveSHOUT streams of sounds from Sheffield
Nela Brown – LiveSHOUT streams of sounds from London
Ariane Stolfi – LiveSHOUT streams of sounds from Porto Seguro and playsound.space

Female Laptop Orchestra (FLO), a music research project established in 2014 by Nela Brown, connects female musicians, sound artists, composers, engineers and computer scientists globally through co-located and distributed collaborative music creation. Each FLO performance is site-specific and performer-dependent, mixing location-based field recordings, live coding, acoustic instruments, voice, sound synthesis and real-time sound processing using Web Audio APIs and VR environments with audio streams arriving from different global locations (via the internet and mobile networks). From stereo to immersive 3D audio (and everything in between), FLO is pushing the boundaries of technology and experimentation within the context of ensemble improvisation and telematic collaboration.

Female Laptop Orchestra: https://femalelaptoporchestra.wordpress.com/

LiveSHOUT: http://www.socasites.qub.ac.uk/distributedlistening/liveSHOUT/

Locus Sonus soundmap: https://locusonus.org/soundmap/051/

Presenters Bios:

Ximena Alarcón Díaz is a sound artist researcher interested in listening to in-between spaces: dreams, underground public transport, and the migratory context. She creates telematic sonic improvisations using Deep Listening, and interfaces for relational listening. She has a PhD in Music, Technology and Innovation from De Montfort University (2007), and is a Deep Listening® certified tutor. Her project INTIMAL is an “embodied” physical-virtual system for relational listening in telematic sonic performance (RITMO-UiO, 2017-2019, Marie Skłodowska Curie Individual Fellowship). She is currently a Senior Tutor in the online Deep Listening certification program offered by the Center for Deep Listening (RPI), and works independently in the second phase of the INTIMAL project that involves: an “embodied” physical-virtual system to explore sense of place and presence across distant locations; and a co-creation laboratory for listening to migrations with Latin American migrant women.
http://ximenaalarcon.net

Franziska Schroeder is an improviser and Reader, based at the Sonic Arts Research Centre, Queen’s University Belfast where she mainly teaches performance and improvisation.
In 2007 she was the first AHRC Research Fellow in the Creative/Performing Arts to be awarded a 3-year grant to carry out research into virtual/network performance environments. Her writings on distributed creativity have been published by Routledge, Cambridge Scholars, and Leonardo. In 2016 she co-developed the distributed listening app LiveSHOUT.
Within her research group “Performance without Barriers”, which she founded in 2016, Franziska currently designs VR instruments with and for disabled musicians.
https://pure.qub.ac.uk/en/persons/franziska-schroeder

Rebekah Wilson is an independent researcher, technologist and composer. Originating from New Zealand, she studied instrumental and electroacoustic music composition and taught herself computer technology. In the early 2000s she was artistic co-director at STEIM, Amsterdam, where her passions for music, performance and technology became fused. Since 2005 she has been co-founder and technology director of Chicago’s Source Elements, developing services that exploit the possibilities of networked sound and data for the digital sound industry while continuing to perform and lecture internationally. She holds a master’s degree in the field of networked music performance, and her current research on the topic can be found on the Latency Native forum.
https://forum.latencynative.com

Nela Brown is an award-winning Croatian sound artist, technologist, researcher and lecturer living in London, UK. She studied jazz and music production at Goldsmiths, University of London, followed by a BA (Hons) in Sonic Arts at Middlesex University London. Since graduating in 2007, she has worked as a freelance composer and sound designer on award-winning international projects including theatre performances, dance, mobile, film, documentaries and interactive installations. In 2014, she started the Female Laptop Orchestra (FLO). In 2019, as part of the prestigious Macgeorge Fellowship Award, she was invited to join the Faculty of Fine Arts & Music at the University of Melbourne, Australia, to deliver talks and workshops about collaborative music-making, laptop orchestras and hack culture, as well as a number of performances with FLO. She is currently doing a PhD in Human-Computer Interaction and lecturing at the University of Greenwich in London.
http://www.nelabrown.com/

Italian noise-rock duo OvO has been at the center of the worldwide post-rock, industrial-sludge, and avant-doom scenes for nearly two decades. Their “always-on-tour” mentality, coupled with a DIY ethic, fearless vision, and pulverizing live shows, has made them the Jucifer of Europe: impossible to categorize, but always there, appearing in your hometown like a ghostly omnipresence. OvO’s fiercely independent ethos and grinding live schedule have earned the band a significant worldwide fanbase that has come to expect nothing but the most daring and innovative dark music presentations.
OvO were on the road for their 20th anniversary European tour when the COVID-19 pandemic hit the continent. The band was forced to cancel the remaining gigs of the tour and drive back to their home country, which was suffering one of the worst health emergencies of its recent history. In the midst of the lockdown, they performed live on the stage of the Bronson club in Ravenna, Italy, and professionally live-streamed the entire concert on DICE.fm
http://ovolive.blogspot.com

Kaffe Matthews is a pioneering music maker who works live with space, data, things, and place to make new electroacoustic compositions. The physical experience of music for the maker and listener has always been central to her approach, and to this end she has also invented some unique interfaces (the sonic armchair, the sonic bed, and the sonic bike) that not only enable new approaches to composition for makers but also give wide-ranging audiences immediate ways into unfamiliar sound and music.
Kaffe has also established the collectives Music for Bodies (2006) and The Bicrophonic Research Institute (2014) where ideas and techniques are developed within a pool of coders and artists using shared and open source approaches, publishing all outcomes online.
During COVID times, Kaffe has produced new music by collaborating with other music makers through streaming platforms and has hosted live-streaming parties in her apartment in Berlin.
https://www.kaffematthews.net


Physically distant: online talks on telematic performance

Posted on May 29, 2020 in Events, Research

Wednesday 3 June 2020, 13:30 – 21:00 CEST

Performing live with physically distant co-performers and audiences through audio, video, and other media shared via telematic means has been part of the work of artists and researchers for several decades. Recently, the restrictions put in place to cope with the COVID-19 pandemic required performing artists to find solutions to practice their craft while maintaining physical distance between themselves, their collaborators, and their audience. This scenario brought many questions related to telematic performance to the fore: What are the opportunities and challenges of telematic performance? What are the implications for how performing arts are conceived, developed, and experienced? How are research and practice being reconfigured? How is telematic performance suggesting different understandings of the role of instruments, gesture, and acoustic spaces? How might telematic performance contribute to reconfiguring our understanding of music in societal and political perspectives?

The GEMM))) Gesture Embodiment and Machines in Music research cluster at the School of Music in Piteå, Luleå University of Technology, has invited a group of artists, researchers, and scholars to instigate an open, interdisciplinary discussion on these themes. The talks will happen online on Wednesday 3 June 2020.

The sessions will be organised in 1-hour time slots. Each slot will include two 15-min presentations; the remaining time will be dedicated to questions and discussion. After each slot, there will be a 30-min break in order to avoid “Zoom fatigue.” There will be a plenary at the end of the day, during which we will discuss issues and opportunities that have emerged during the other sessions.

Schedule (all times are CEST):
13:30 – 14:00 Session 0 : Welcome and introduction
14:00 – 15:00 Session 1 : Roger Mills; Shelly Knotts
15:00 – 15:30 Break
15:30 – 16:30 Session 2 : Gamut inc./ Aggregate; Randall Harlow
16:30 – 17:00 Break
17:00 – 18:00 Session 3 : Alex Murray-Leslie; Atau Tanaka
18:00 – 19:00 Dinner break (1 hr)
19:00 – 20:00 Session 4 : Chris Chafe; Henrik von Coler
20:00 – 21:00 Plenary

Moderators: Federico Visi, Stefan Östersjö

Anyone can join upon registration using this online form: https://forms.gle/1goB2TcjGKjL6nkT8  
We will send you a link to join a Zoom meeting on the day of the talks.
NOTE: the talks will be recorded.

An additional networked performance curated by GEMM))) is taking place on Tuesday 2 June, followed by a short seminar and discussion. Everyone is welcome to also join this event; we will circulate details to the registered email addresses and via social media.

Programme (all times are CEST):
14:00 – 14:15 networked performance with the Acusticum Organ: Robert Ek, Mattias Petersson, Stefan Östersjö
14:15 – 14:30 Vong Co: networked performance with The Six Tones: Henrik Frisk, Stefan Östersjö & Nguyen Thanh Thuy
14:40 – 15:00 Paragraph – a live coding front end for SuperCollider patterns: Mattias Petersson
15:00 – 15:20 Discussion

A follow-up event is planned for the 2020 Piteå Performing Arts Biennial, taking place online on 26-27 October 2020.

Further info: write me.

Towards Assisted Interactive Machine Learning

Posted on Mar 08, 2020 in Research

In a sentence: Assisted Interactive Machine Learning (AIML) is an interaction design method based on deep reinforcement learning that I started developing for the purpose of exploring the vast space of possible mappings between gesture and sound synthesis.

I am presenting a research paper and a live multimedia performance on AIML at ICLI 2020 – the fifth International Conference on Live Interfaces taking place at the Norwegian University of Science and Technology in Trondheim, Norway.

The paper (PDF)

We present a sonic interaction design approach that makes use of deep reinforcement learning to explore many mapping possibilities between input sensor data streams and sound synthesis parameters. The user can give feedback to an artificial agent about the mappings proposed by the latter while playing the synthesiser and trying the new mappings on the fly. The design approach we adopted is inspired by the ideas established by the interactive machine learning paradigm, as well as by the use of artificial agents in computer music for exploring complex parameter spaces.
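To make the interaction loop concrete, here is a minimal Python sketch of the general idea: an agent proposes gesture-to-synthesis mappings, the user rates each proposal while playing, and the estimated value of each candidate is updated from that feedback. Note that this uses a simple epsilon-greedy bandit in place of the deep reinforcement learning model described in the paper, and all names, sizes, and parameters are illustrative, not the actual AIML implementation.

```python
import random

def random_mapping(n_inputs=3, n_params=4, rng=random):
    # A mapping is a matrix of weights from sensor streams to synth parameters.
    return [[rng.uniform(-1, 1) for _ in range(n_params)] for _ in range(n_inputs)]

def explore_mappings(user_feedback, n_candidates=8, n_rounds=200, epsilon=0.2, seed=1):
    """Epsilon-greedy loop: the agent proposes mappings, the user rates them
    (e.g. +1 "keep it", -1 "discard"), and value estimates are updated."""
    rng = random.Random(seed)
    candidates = [random_mapping(rng=rng) for _ in range(n_candidates)]
    values = [0.0] * n_candidates
    counts = [0] * n_candidates
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            i = rng.randrange(n_candidates)            # explore a new proposal
        else:
            i = max(range(n_candidates), key=values.__getitem__)  # exploit best so far
        reward = user_feedback(candidates[i])          # user tries the mapping on the fly
        counts[i] += 1
        values[i] += (reward - values[i]) / counts[i]  # incremental mean of feedback
    best = max(range(n_candidates), key=values.__getitem__)
    return candidates[best], values[best]
```

When testing the loop, a synthetic "user" can stand in for real feedback, for instance one who keeps any mapping whose first weight is positive.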

About the performance (PDF)

“My phone beeps. A notification on the home screen says “You have a new memory”. It happens at times, unsupervised learning algorithms scan your photos and videos, look at their features and metadata, and then you get a nice slideshow of that trip to South America, or those shows you went to while you were in Hamburg or London. There is something ridiculous about this (the music they put on the slideshows, for example) as well as something eerie, something even slightly distressing perhaps.”

“You Have a New Memory” (2020) makes use of the AIML interaction paradigm to navigate a vast corpus of audio material harvested from the messaging applications, videos, and audio journals recorded on the author’s mobile phone. This corpus of sonic memories is then organised using audio descriptors and navigated with the aid of an artificial agent and reinforcement learning.
The title of the piece – “You Have a New Memory” – refers to the notifications that a popular photo library application occasionally sends to mobile devices to prompt their users to check an algorithmically generated photo gallery that collects images and videos related to a particular event or series of events in their lives.

I started developing these concepts in Summer 2019 in Berlin after a few informal meetings with Atau Tanaka, then Edgard-Varèse guest professor at TU Berlin. Development took place during a 1-month postdoc at Goldsmiths, University of London, in September 2019, and continued with Stefan Östersjö and the GEMM))) Gesture Embodiment and Machines in Music research cluster at the School of Music in Piteå, Luleå University of Technology, Sweden.

Paper presentation at ICLI2020, Trondheim, Norway:

NIME 2019 Music Proceedings

Posted on Jun 02, 2019 in Research

As one of the NIME 2019 Music co-chairs, I promoted the establishment of Music Proceedings:

Since NIME began nearly two decades ago, this is the first event where composers and creators of the music pieces in the concert programme have been invited to publish an extended abstract of their work. These documents, describing the aesthetic and technical characteristics of the music pieces, are collected here, in the Music Proceedings.

We believe Music Proceedings are an important step towards a consistent and richer means of documenting the performances taking place at NIME. This will be a useful resource for researchers, and provides an alternative voice for contributors to speak about their artistic practice in NIME research.

Download the PDF here.

SloMo study #2

Posted on Jan 19, 2019 in Uncategorized

This piece was performed at NIME 2018 (both at Virginia Tech’s Moss Arts Center and at the NIME performance night organised by the University of Virginia in Charlottesville) and at MOCO 2018, held at InfoMus – Casa Paganini.

I composed SloMo study #2 to explore the use of slow and microscopic body movements in electronic music performance, and the role of rhythmic visual cues and breathing in the perception of movement and time. To do so, I used wearable sensors (the EMG sensors and IMUs found in Myo armbands), variable-frequency stroboscopic lights, an electronic stethoscope, and a body-worn camera for face tracking.

Here is a short video excerpt that I used to accompany my NIME and MOCO submissions. Unfortunately the effects of slowly changing the frequency of the strobes cannot be captured in videos with standard frame rates.

Speaking of NIME, I’m going to be a Music co-chair for NIME 2019 and I’m really looking forward to seeing what NIME artists have come up with this year.


New modosc objects for EMG & MoCap processing in Max

Posted on Jan 11, 2019 in Research, Tools

During November and December 2018, I had the opportunity to spend 5 weeks as a visiting researcher at RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, an amazing centre of excellence recently inaugurated at the University of Oslo. In mid-November, at the beginning of my stay, Luke Dahl and I presented modosc, our Max library for real-time motion capture analysis, to the attendees of the RITMO International Motion Capture Workshop. The library is the result of a collaboration between Luke and myself, and has been presented at various conferences in 2018, including NIME (paper) and MOCO (paper).

While in Oslo, I had the chance to spend time in the RITMO Motion Capture lab and use their Delsys Trigno wireless EMG system synchronised with their Qualisys cameras. With that gear, I coded three new modosc objects for real-time processing of EMG signals synchronised with MoCap:

  • mo.qtmSig: binds data from QTM analog boards to new signal addresses in the modosc namespace (under /modosc/signals);
  • mo.zcr: calculates the zero crossing rate of a signal (useful feature for classification tasks);
  • mo.tkeo: calculates the Teager-Kaiser energy-tracking operator (TKEO) of a signal (useful for onset detection and other things; to learn more, check out Eivind Kvedalen’s PhD thesis: http://folk.uio.no/eivindkv/ek-thesis-2003-05-12-final-2.pdf). I got the idea of implementing this interesting feature from Geert Roks, a student at Goldsmiths, University of London currently collaborating with Prof Atau Tanaka.

Here are some video tutorials to get you started with modosc.

modosc: Mocap & Max video tutorials

Posted on Jan 11, 2019 in Research, Tools

These are some introductory video tutorials about processing motion capture data in real time in Max using the modosc library.

Modosc is a set of Max abstractions designed for computing motion descriptors from raw motion capture data in real time.


Building a swarm poly synth using Max 8's new MC objects

Posted on Oct 01, 2018 in Max, Research, Tools

I just downloaded the new Max 8 and here is a simple synth I built using the new MC (multichannel) objects. Each voice has 32 sawtooth oscillators, so with 6-voice polyphony you can get up to 192 oscillators playing at the same time. The dials control pitch spread and “deviated” release (meaning that each oscillator inside each voice will have a slightly different release time).
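For readers without Max, the idea behind one voice can be sketched offline in Python. This is a simplified illustration (naive, non-band-limited sawtooths and a plain linear release envelope), not a port of the patch, and all parameter names are invented for the example.

```python
import random

def swarm_voice(freq, n_osc=32, spread=0.01, base_release=0.5,
                release_dev=0.2, dur=1.0, sr=44100, seed=0):
    """Render one voice: n_osc sawtooths detuned around freq by +/- spread
    (as a ratio), each with its own slightly different release time."""
    rng = random.Random(seed)
    oscs = []
    for _ in range(n_osc):
        f = freq * (1.0 + rng.uniform(-spread, spread))            # pitch spread
        rel = base_release * (1.0 + rng.uniform(-release_dev, release_dev))
        oscs.append((f, rel))
    n = int(dur * sr)
    out = [0.0] * n
    for f, rel in oscs:
        phase_inc = f / sr
        phase = 0.0
        for i in range(n):
            env = max(0.0, 1.0 - (i / sr) / rel)                   # linear release
            out[i] += (2.0 * phase - 1.0) * env / n_osc            # normalised saw
            phase = (phase + phase_inc) % 1.0
    return out
```

Summing six such voices at different pitches mirrors the 6-voice polyphony of the patch, for 192 oscillators in total.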

Since a few people on social media asked me to share the patch, I made it available for download here. EDIT: I moved the files to GitHub: https://github.com/federicoVisi/max_mc_swarm_polysynth

NOTE: the patch is a quick and dirty experiment I did to try out the sound synthesis capabilities of the MC objects in Max 8. It is not a finished instrument and has some inconsistencies that should be fixed. You’re very welcome to edit the patch and get in touch to share ideas, although be aware that I might not have the time to provide technical support.

Workshop and Performance at Harvestworks, New York City

Posted on May 31, 2016 in Events, NYC, Photo/Video

I recently ran a workshop and performed at Harvestworks in New York City. The workshop was done in collaboration with Andrew Telichan Phillips form the Music and Audio Research Laboratory at NYU Steinhardt. The amazing Ana García Caraballos performed with me my piece 11 Degrees of Dependence on alto sax, myo armbands, and live electronics. Here’s a video: