Author: _FV

  • Performing Swirling Qubits by Eduardo Miranda at ECDF and CTM Festival, Berlin

    Two gigs in Berlin this coming week: I am performing Swirling Qubits by Eduardo Reck Miranda at the Einstein Center Digital Future on Tuesday 30 January and at Radialsystem on Friday 2 February for CTM Festival. Here are some pictures from the rehearsals with Eduardo, Maria Aguado, and Paulo Itaborai at the Berlin Open Lab. I am going to use the Tensile wearable that we have developed at the UdK Wearable Computing group for the Interwoven Sound Spaces project. From the CTM programme notes:

    “A quantum computer deals with information encoded as quantum bits – or qubits. The qubit is to a quantum computer what a bit is to a digital one: it is a basic unit of information. In hardware, qubits live in the subatomic world. They are subject to the laws of quantum mechanics. Thus, qubits process information in fundamentally different and potentially more powerful ways than digital bits. Quantum computing is bound to open new and exciting opportunities for creative practices. For the performance, a state-of-the-art quantum computer, developed by IQM, will be used to run generative music algorithms and to synthesise sounds. Performers will control the quantum computer based in Finland with gestures, brain signals and live coding through the Internet. Vintage analogue synthesisers in Brazil will produce sounds transmitted to Berlin in real time.”
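
    As a loose illustration of the qubit idea in the notes above (not the actual system used in the performance, which runs on IQM hardware), here is a minimal Python sketch that simulates measuring a qubit prepared in an equal superposition and maps the outcomes to pitches. The function names and the pitch mapping are made up for illustration.

    ```python
    import random

    def measure_superposed_qubit():
        """Simulate measuring a qubit in the state (|0> + |1>)/sqrt(2):
        each outcome occurs with probability 0.5."""
        return 0 if random.random() < 0.5 else 1

    def qubit_melody(length=8, low_pitch=60, high_pitch=67):
        """Map simulated qubit measurements to MIDI pitches (hypothetical mapping)."""
        return [low_pitch if measure_superposed_qubit() == 0 else high_pitch
                for _ in range(length)]

    print(qubit_melody())  # e.g. [60, 67, 67, 60, 60, 67, 60, 67]
    ```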

    The programmes include more Quantum Computer Music pieces by Eduardo Miranda, Paulo Itaborai, Dino Vicente, Pete Thomas, and Colin Harrington.

    I did my PhD with Eduardo at the Interdisciplinary Centre for Computer Music Research (ICCMR), University of Plymouth, from 2013 to 2016. It’s great to work with him again here in Berlin after quite a few years. 

  • AI in Music symposium

    I look forward to taking part in the AI in Music – Agency, Performance, Production and Perception symposium organised by members of the KISS project at the University of Music Trossingen. The event will run for two days, 15 and 16 December 2023. The programme looks very interesting, with keynotes, panels, and concerts addressing different aspects of the use of AI in music. I will contribute to the panel on AI in performance together with Thor Magnusson and Anna Xambó. Here is the panel abstract:

    The panel discusses how Artificial Intelligence can offer novel possibilities for music performance. The panel examines the utilization of algorithms as co-performers and machine learning as a means of enhancing the interface between human bodies and sound production. Furthermore, the panel considers how machine learning itself can become a central element of performance and how bodies of data can be made performatively perceptible. It gathers viewpoints aimed at understanding the impact of creative AI on our interactions with technology, social dynamics, and knowledge creation.

    Talking with Anna and Thor about these topics is going to be such a treat.

  • Interwoven Sound Spaces

    Interwoven Sound Spaces is an interdisciplinary project which brought together telematic music performance, interactive textiles, interaction design, and artistic research. A team of researchers collaborated with two professional contemporary music ensembles based in Berlin, Germany, and Piteå, Sweden, and four composers, with the aim of creating a telematic distributed concert taking place simultaneously in two concert halls and online. Central to the project was the development of interactive textiles capable of sensing the musicians’ movements while playing acoustic instruments, and generating data the composers used in their works. Musicians, instruments, textiles, sounds, halls, and data formed a network of entities and agencies that was reconfigured for each piece, showing how networked music practice enables distinctive musicking techniques.
    https://www.interwovensoundspaces.com

  • Photos from Machined Music

    A few photos from the Machined Music event at Berlin Science Week: a talk with Moisés Horta Valenzuela, Tom Burgert, and myself, moderated by Claudia Schnugg, followed by a performance by Moisés, AKA Hexorcismos, using Semilla. The event was curated by Michael Fowler.

    Photo credit: Berlin Science Week/Ole Spata, 2022.

  • Machined Music: panel discussion at Berlin Science Week

    Tonight, 4 Nov 2022, at Berlin Science Week I’m going to have a conversation with Claudia Schnugg, Tom Burgert, and Moisés Horta Valenzuela about using algorithms in music research and practice.
    8:00 PM – 9:30 PM, Invalidenstraße 43, Berlin
    From the Berlin Science Week webpage about the event:

    “Algorithms have been used in music and sound art extensively within the past three decades.
    From the use of evolutionary models, L-systems, generative neural networks and machine learning techniques, the automation of musical processes using a computer has been variously exploited for aesthetic ends. ‘Machined Music’ is a panel discussion conceived as a deep dive into these techniques that brings together experts in both the artistic and scientific domains.
    A 45-minute podium discussion will be followed by a 45-min performance by electronic musician Moisés Horta Valenzuela utilizing some of the techniques discussed.”
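
    As a toy example of one of the techniques mentioned in the blurb above, the following Python sketch generates a pitch sequence with a simple L-system (string rewriting). The rewrite rules and pitch mapping are invented for illustration and are not taken from any of the panelists’ work.

    ```python
    def expand(axiom, rules, depth):
        """Apply L-system rewrite rules to the axiom `depth` times."""
        for _ in range(depth):
            axiom = "".join(rules.get(symbol, symbol) for symbol in axiom)
        return axiom

    # Hypothetical rules and pitch mapping: each symbol stands for a MIDI note.
    rules = {"A": "AB", "B": "A"}
    pitch_of = {"A": 60, "B": 64}

    sequence = expand("A", rules, depth=5)           # "ABAABABAABAAB" (13 symbols)
    melody = [pitch_of[symbol] for symbol in sequence]
    print(melody)
    ```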

  • Successful funding application

    The news is in: our proposal titled “Music of the Indeterminate Place: telematic performance and composition intersecting physical and network spaces” was awarded a 3-year artistic research grant from the Swedish Research Council! This will support a substantial amount of hyperorgan and networked music performance work at Luleå University of Technology, with international collaborations involving other organ halls and research institutions. We will build upon the work we’ve been doing at the GEMM))) Gesture Embodiment and Machines in Music research cluster since 2019, including the performances of the TCP/Indeterminate Place global hyperorgan quartet and the gesture-organ interactions developed in collaboration with Opera Mecatronica. There will be networked organ sounds!

  • Hyperorgan interactions, suspended choirs, and The Tale of the Great Computing Machine

    Back in Berlin after a few days of work down in the R1 Reaktorhallen at KTH Royal Institute of Technology for a unique opera piece: The Tale of the Great Computing Machine, a project led by Åsa Unander-Scharin and Carl Unander-Scharin. I took care of designing the gestural interactions with the Skandia pipe organ inside R1, the interactions with a set of speakers mounted on motorised winches (which we call “the Suspended Choir”), as well as the interactions between the organ and the robots that will perform alongside humans. It’s all going to be live, and there are several other talented collaborators taking care of live visuals, lights, sound, and more. We had to network quite a few computers in order to make everything work in such a big and unique space.

    The opera is based on the novel “The Tale of the Big Computer” written by Olof Johannesson in the 1960s. The book describes the rise of an intelligent network of computers and its relationship with humans. Olof Johannesson is actually a pseudonym of Hannes Alfvén, a physicist who would win the Nobel Prize for his work on magnetohydrodynamics just a few years after the book was published.

    Premiere on the first of December, and apparently many shows are already sold out!

  • Talk at the “Mapping Social Interaction through Sound” symposium, Humboldt University, Berlin

    I was invited to participate in the Mapping Social Interaction through Sound symposium on 27–28 November 2020. The symposium is organised by Humboldt University, Berlin and – as is customary these days – will take place on Zoom.

    This is the abstract of my talk.

    Building and exploring multimodal musical corpora:
    from data collection to interaction design using machine learning

    Musical performance is a multimodal experience, for performers and listeners alike. A multimodal representation of a piece of music can contain several synchronized layers, such as audio, symbolic representations (e.g. a score), videos of the performance, physiological and motion data describing the performers’ movements, as well as semantic labelling and annotations describing expressivity and other high-level qualities of the music. This delineates a scenario where computational music analysis can harness cross-modal processing and multimodal fusion methods to shift the focus toward the relationships that tie together different modalities, thereby revealing the links between low-level features and high-level expressive qualities.

    I will present two concurrent projects focussed on harnessing musical corpora for analysing expressive instrumental music performance and designing musical interactions. The first project is centered on a data collection method – currently being developed by the GEMM research cluster at the School of Music in Piteå – aimed at bridging the gap between qualitative and quantitative approaches. The purpose of this method is to build a data corpus containing multimodal measurements linked to high-level subjective observations. By applying stimulated recall (a common qualitative research method in education, medicine, and psychotherapy), the embodied knowledge of music professionals is systematically included in the analytic framework. Qualitative analysis through stimulated recall is an efficient method for generating higher-level understandings of musical performance. Initial results suggest that this process is pivotal in building our multimodal corpus, providing insights that would be unattainable using quantitative data alone.

    The second project – a joint effort with the Computing Department at Goldsmiths, University of London – consists of a sonic interaction design approach that makes use of deep reinforcement learning to explore many mapping possibilities between large sound corpora and motion sensor data. The design approach adopted is inspired by the ideas established by the interactive machine learning paradigm, as well as by the use of artificial agents in computer music for exploring complex parameter spaces. We refer to this interaction design approach as Assisted Interactive Machine Learning (AIML). While playing with a large corpus of sounds through gestural interaction by means of a motion sensor, the user can give feedback to an artificial agent about the gesture-sound mappings proposed by the latter. This iterative process results in an interactive exploration of the corpus, as well as in a way of creating and refining gesture-sound mappings.

    These projects are representative of how the development of methods for combining qualitative and quantitative data, in conjunction with the use of computational techniques such as machine learning, can be instrumental in the design of complex mappings between body movement and musical sound, and contribute to the study of the multiple facets of embodied music performance.
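
    To make the AIML feedback loop described in the abstract more concrete, here is a deliberately simplified Python sketch: a bandit-style agent that proposes candidate gesture-sound mappings and updates its preferences from user ratings. The class, the preset names, and the reward scheme are hypothetical stand-ins, not the deep reinforcement learning agent used in the actual project (see the ICLI 2020 paper below for that).

    ```python
    import random

    class MappingExplorer:
        """Toy agent that proposes gesture-sound mappings and learns from user
        feedback; a much-simplified stand-in for the deep RL agent described above."""

        def __init__(self, candidate_mappings, epsilon=0.2):
            self.mappings = candidate_mappings             # e.g. synthesis parameter presets
            self.values = [0.0] * len(candidate_mappings)  # running average rating per mapping
            self.counts = [0] * len(candidate_mappings)
            self.epsilon = epsilon                         # exploration rate

        def propose(self):
            """Mostly exploit the best-rated mapping, sometimes explore a random one."""
            if random.random() < self.epsilon:
                return random.randrange(len(self.mappings))
            return max(range(len(self.mappings)), key=lambda i: self.values[i])

        def feedback(self, index, reward):
            """Incorporate a user rating (+1 keep / -1 discard) for a proposed mapping."""
            self.counts[index] += 1
            self.values[index] += (reward - self.values[index]) / self.counts[index]

    # Hypothetical usage: the user tries each proposed mapping and rates it.
    explorer = MappingExplorer(["preset_a", "preset_b", "preset_c"])
    for _ in range(10):
        i = explorer.propose()
        rating = random.choice([1, -1])   # stand-in for real user feedback
        explorer.feedback(i, rating)
    print(explorer.values)
    ```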

    Further reading

    Visi, F. G., Östersjö, S., Ek, R., & Röijezon, U. (2020). Method development for multimodal data corpus analysis of expressive instrumental music performance. Frontiers in Psychology, 11(576751), doi: 10.3389/fpsyg.2020.576751
    Download PDF (pre-print)

    Visi, F. G., & Tanaka, A. (2021). Interactive Machine Learning of Musical Gesture. In E. R. Miranda (Ed.), Handbook of Artificial Intelligence for Music: Foundations, Advanced Approaches, and Developments for Creativity. Springer Nature, forthcoming.
    View on arXiv.org
    Download PDF (pre-print)

    Visi, F. G., & Tanaka, A. (2020). Towards Assisted Interactive Machine Learning: Exploring Gesture-Sound Mappings Using Reinforcement Learning. In ICLI 2020 – the Fifth International Conference on Live Interfaces.
    Download PDF

    Presentation slides
    Download PDF

  • Physically Distant #3: the network, the pandemic, and telematic performance

    PD#3 will be part of Ecology, Site And Place – Piteå Performing Arts Biennial 2020. Participation in the conference is free, but registration is compulsory. Register by sending an email to piteabiennial@gmail.com

    After the two previous editions in June and July, the third edition of the Physically Distant talks will take place on 26 and 27 October 2020, as part of the online event of the Ecology, Site And Place – Piteå Performing Arts Biennial.

    The format will be different this time, as there are going to be more telematic performances and the talks will be structured in three panels. Each panel member is invited to prepare a 3-minute provocation/reflection related to the topic. This collection of provocations from the panelists will set the tone for an open discussion in the style of the previous Physically Distant talks. As in the previous editions of the talks, Stefan Östersjö and I, Federico Visi, will be moderating the discussion.

    Programme (all times are CET)

    Monday, 26 October 2020

    17:30 Introduction. Stefan Östersjö and Federico Visi
    17:40 Simon Waters and Paul Stapleton: Musicking online: your technical problem is actually a social problem. A performative conversation.

    18:00-19:00 Panel I. Instrumentality in Networked Performance
    Panelists: Nela Brown, Nicholas Brown, Juan Parra Cancino, Franziska Schroeder, Henrik von Coler.

    19:00-19:45 Telematic Performance: A concert hall organ in the network.
    Live-streaming from Studio Acusticum. Telematic performances with the University Organ remotely controlled from several locations.
    Robert Ek, clarinet, performing in Piteå (SE)
    Mattias Petersson, live-coding, performing in Piteå (SE)
    Federico Visi, electronics, electric guitar, performing in Berlin (DE)
    Scott Wilson, live coding, performing in Birmingham (UK)
    Stefan Östersjö, electric guitar, performing in Stockholm (SE)

    19:45-20:00 Break

    20:00-21:00 Panel II. Network ecology: Communities of practice for the digital arts
    Panelists: Shelly Knotts, Thor Magnusson, Mattias Petersson, Rebekah Wilson, Scott Wilson.

    Tuesday, 27 October 2020

    17:45-18:00 Marcin Pączkowski: rehearsing music online: possibilities and limitations

    18:00-19:00 Panel III. The network as place
    Panelists: Ximena Alarcón Díaz, David Brynjar/Angela Rawlings/Halla Stefánsdóttir, Chicks on Speed (Melissa Logan, Alex Murray-Leslie), Maja Jantar, Marcin Pączkowski, Roger Mills, Luca Turchet.

    19:00-19:30 Telematic Performance: iða
    David Brynjar Franzson, technical concept and streaming (US)
    Maja Jantar, performer and composer of visual score (BE)
    Angela Rawlings, performer and composer of visual score (IS/CA)
    Halla Steinunn Stefánsdóttir, performer and composer of visual score (SE)

    19:30-20:00 Break

    20:00-21:00 Where do we go from here? (plenary discussion)

    For more details on the Ecology, Site And Place – Piteå Performing Arts Biennial 2020 online event, download the book of abstracts.

  • Physically Distant #2: more online talks on telematic performance

    Tuesday 28 July 2020, 14:00 – 19:00 CEST
    Anyone can join upon registration using this online form: https://forms.gle/zzLV46NbvgqAtJ7t7

    Performing live with physically distant co-performers and audiences through audio, video, and other media shared via telematic means has been part of the work of artists and researchers for several decades. Recently, the restrictions put in place to cope with the COVID-19 pandemic have required performing artists to find solutions to practice their craft while maintaining physical distance between themselves, their collaborators, and their audience. In this second edition of Physically Distant, we wish to continue discussing telematic performance from perspectives suggested by the following questions: 

    What are the opportunities and challenges of telematic performance?
    What are the implications for how the performing arts are conceived, developed, and experienced?
    How are research and practice being reconfigured?
    How is telematic performance suggesting different understandings of the role of instruments, gesture and acoustic spaces?
    How might telematic performance contribute to reconfiguring our understanding of music in societal and political perspectives?

    We wish to highlight two threads from the previous discussions: first, how telematic performance can be conceived of as protest, and second, its potential to expand artistic and social possibilities in intercultural arts. Both of these threads imply a discussion of accessibility.

    Once again, the GEMM))) Gesture Embodiment and Machines in Music research cluster at the School of Music in Piteå, Luleå University of Technology has invited a group of artists, researchers, and scholars to instigate an open, interdisciplinary discussion on these themes. The talks will happen online, on Tuesday 28 July 2020.
    The sessions will be organised in 1-hour time slots. Each slot will include two 15-min presentations; the remaining time will be dedicated to questions and discussion.

    We are very happy to host a telematic performance by the Female Laptop Orchestra (FLO). The practice of this group is discussed in the talks by Franziska Schroeder and Nela Brown. A presentation of the conceptual backdrop for the performance can be found below.

    The structure of the event includes short breaks in between the sessions in order to avoid Zoom fatigue and allow for informal chats and continued discussion over a drink (not provided by the organisers). There will be a plenary at the end of the day, during which we will be discussing issues and opportunities that have emerged during the other sessions.

    28 July 2020 schedule (all times are CEST):

    • 14:00 Session 0: Introduction, results of the survey that followed Physically Distant #1
    • 14:30 Session 1: Ximena Alarcón, Franziska Schroeder
    • 15:30–15:50 — Performance by FLO (Female Laptop Orchestra) —
    • 15:50–16:00 — 10-min Break —
    • 16:00 Session 2: Nela Brown, Rebekah Wilson
    • 17:00 — 30-min Break —
    • 17:30 Session 3: OvO, Kaffe Matthews
    • 18:30 Session 4: Plenary
    • 19:00 — END —

    Moderators / instigators: Federico Visi, Stefan Östersjö

    Anyone can join upon registration using this online form: https://forms.gle/zzLV46NbvgqAtJ7t7 

    We will send you a link to join a Zoom meeting on the day of the talks.
    NOTE: the talks will be recorded.

    A follow-up event is planned for the 2020 Piteå Performing Arts Biennial taking place online on 26–27 October 2020.

    Further info: mail@federicovisi.com

    Telematic performance by the Female Laptop Orchestra (FLO)

    Absurdity (concept by Franziska Schroeder and Matilde Meireles)
    A distributed performance using LiveSHOUT with members from the Female Laptop Orchestra (FLO).

    “Absurdity” is based around a short excerpt from one of Portugal’s most mysterious, elusive and peculiar writers, Fernando Pessoa. Pessoa’s multiplicities and his ways of thinking about life, engendering ideas that can feel manic-depressive, filled with buckets of self-pity, while being able to scratch the innermost parts of one’s soul, lie at the heart of this distributed performance.

    Members of FLO will stream sounds from several distributed places, including Crete, Italy, Brazil and the UK, while Franziska and Matilde will be delivering fragmented excerpts (in both English and Portuguese) alongside the LiveSHOUT streams. The idea of distributed creativity, where we combine sounds from several sites, inspired by Pessoa’s plurality of thoughts and philosophies; his multiplicities, his fictionality and his self alienation, will lead to a performance that aims to be absurd, dispersed, fragmented and multiple.

    “I’ve always belonged to what isn’t where I am and to what I could never be”.  (Pessoa In: Ciuraru, 2012).

    The FLO performers are:
    Franziska Schroeder – LiveSHOUT and Pessoa reading (English)
    Matilde Meireles – LiveSHOUT and Pessoa reading (Portuguese)
    Maria Mannone – LiveSHOUT streams of piano improv from Palermo
    Maria Papadomanolaki – LiveSHOUT streams of sounds from Crete
    Anna Xambó – LiveSHOUT streams of sounds from Sheffield
    Nela Brown – LiveSHOUT streams of sounds from London
    Ariane Stolfi – LiveSHOUT streams of sounds from Porto Seguro and playsound.space

    Female Laptop Orchestra (FLO), a music research project established in 2014 by Nela Brown, connects female musicians, sound artists, composers, engineers and computer scientists globally, through co-located and distributed collaborative music creation. Each FLO performance is site-specific and performer-dependent, mixing location-based field recordings, live coding, acoustic instruments, voice, sound synthesis and real-time sound processing using Web Audio APIs and VR environments with audio streams arriving from different global locations (via the internet and mobile networks). From stereo to immersive 3D audio (and everything in between), FLO is pushing the boundaries of technology and experimentation within the context of ensemble improvisation and telematic collaboration.

    Female Laptop Orchestra: https://femalelaptoporchestra.wordpress.com/

    LiveSHOUT: http://www.socasites.qub.ac.uk/distributedlistening/liveSHOUT/

    Locus Sonus soundmap: https://locusonus.org/soundmap/051/

    Presenters’ Bios:

    Ximena Alarcón Díaz is a sound artist researcher interested in listening to in-between spaces: dreams, underground public transport, and the migratory context. She creates telematic sonic improvisations using Deep Listening, and interfaces for relational listening. She has a PhD in Music, Technology and Innovation from De Montfort University (2007), and is a Deep Listening® certified tutor. Her project INTIMAL is an “embodied” physical-virtual system for relational listening in telematic sonic performance (RITMO-UiO, 2017-2019, Marie Skłodowska Curie Individual Fellowship). She is currently a Senior Tutor in the online Deep Listening certification program offered by the Center for Deep Listening (RPI), and works independently in the second phase of the INTIMAL project that involves: an “embodied” physical-virtual system to explore sense of place and presence across distant locations; and a co-creation laboratory for listening to migrations with Latin American migrant women.
    http://ximenaalarcon.net

    Franziska Schroeder is an improviser and Reader based at the Sonic Arts Research Centre, Queen’s University Belfast, where she mainly teaches performance and improvisation.
    In 2007 she was the first AHRC Research Fellow in the Creative/Performing Arts to be awarded a 3-year grant to carry out research into virtual / network performance environments. Her writings on distributed creativity have been published by Routledge, Cambridge Scholars, and Leonardo. In 2016 she co-developed the distributed listening app LiveSHOUT.
    Within her research group “Performance without Barriers”, which she founded in 2016, Franziska currently designs VR instruments with and for disabled musicians.
    https://pure.qub.ac.uk/en/persons/franziska-schroeder

    Rebekah Wilson is an independent researcher, technologist and composer. Originating from New Zealand, she studied instrumental and electroacoustic music composition and taught herself computer technology. In the early 2000s she held the role of artistic co-director at STEIM, Amsterdam, where her passions for music, performance and technology became fused. Since 2005 she has been co-founder and technology director for Chicago’s Source Elements, developing services that exploit the possibilities of networked sound and data for the digital sound industry while continuing to perform and lecture internationally. Holding a master’s degree in the field of networked music performance, she shares her current research on the topic on the Latency Native forum.
    https://forum.latencynative.com

    Nela Brown is an award-winning Croatian sound artist, technologist, researcher and lecturer living in London, UK. She studied jazz and music production at Goldsmiths, University of London, followed by a BA (Hons) in Sonic Arts at Middlesex University London. Since graduating in 2007, she has worked as a freelance composer and sound designer on award-winning international projects including theatre performances, dance, mobile, film, documentaries and interactive installations. In 2014, she started the Female Laptop Orchestra (FLO). In 2019, as part of the prestigious Macgeorge Fellowship Award, she was invited to join the Faculty of Fine Arts & Music at the University of Melbourne, Australia to deliver talks and workshops about collaborative music-making, laptop orchestras and hack culture, as well as a number of performances with FLO. She is currently doing a PhD in Human-Computer Interaction and lecturing at the University of Greenwich in London.
    http://www.nelabrown.com/

    Italian noise-rock duo OvO has been at the center of the worldwide post-rock, industrial-sludge, and avant-doom scenes for nearly two decades. Their “always-on-tour” mentality, coupled with a DIY ethic, fearless vision, and pulverizing live shows, has made them the Jucifer of Europe: impossible to categorize, but always there, appearing in your hometown, like a ghostly omnipresence. OvO’s fiercely independent ethos and grinding live schedule have earned the band a significant worldwide fanbase that has come to expect nothing but the most daring and innovative dark music presentations.
    OvO were on the road for their 20th anniversary European tour when the COVID-19 pandemic hit the continent. The band was forced to cancel the remaining gigs of the tour and drive back to their home country, which was suffering one of the worst health emergencies of its recent history. In the midst of the lockdown, they performed live on the stage of the Bronson club in Ravenna, Italy, and professionally live-streamed the entire concert on DICE.fm.
    http://ovolive.blogspot.com

    Kaffe Matthews is a pioneering music maker who works live with space, data, things, and place to make new electroacoustic compositions. The physical experience of music for the maker and listener has always been central to her approach, and to this end she has also invented some unique interfaces – the sonic armchair, the sonic bed and the sonic bike – that not only enable new approaches to composition for makers but also give immediate ways in to unfamiliar sound and music for a wide-ranging audience.
    Kaffe has also established the collectives Music for Bodies (2006) and The Bicrophonic Research Institute (2014) where ideas and techniques are developed within a pool of coders and artists using shared and open source approaches, publishing all outcomes online.
    During COVID times, Kaffe has produced new music by collaborating with other music makers through streaming platforms and has hosted live-streaming parties in her apartment in Berlin.
    https://www.kaffematthews.net
