black box fading
(2020-2021)

-

concept

black box fading is a performance for human, sensor-augmented flute (chaosflöte), and improvisation machine (AIYA), recorded in 360° video and 3D audio for VR. The immersive experience draws upon the performative interplay between human and machine to craft a narrative that manipulates perceptions of human-machine agency and human-machine interaction in a neosurrealist environment. The final version of this work will be completed in Spring 2021.

Initially appearing as a basic documentation of a performance, the video gradually transforms into a documentation of something between a performance and an installation, eventually becoming a multifaceted piece of its own. Modulating sound behaviors, reactive visual projections, shifting perceptions of space and scale, and unconventional 360° editing techniques contribute to the sensation of continuously negotiable dynamics between human and machine, as well as to the disruption of traditional performance hierarchies. Drawing upon Clark and Chalmers’ concept of the extended mind, where the external environment plays an active role in one’s cognitive processes, black box fading gradually blurs the dichotomy between human and machine and suggests both as – in the spirit of posthumanism – extended minds of each other.

Working with these concepts, black box fading imagines the improvisation machine not only as a piece of software contained within a computer, but also as a shifting assemblage of decentralized materials (e.g. curtains, tracking systems, mixer, light switch, cameras, speakers, microphones, sensors, etc.). The human, traditionally placed at the center of attention, moves away from this role and becomes increasingly integrated into the framework of the machine, both in audio/visual performance aesthetic and in physical presence. The scenography of the performance is intentionally dark, allowing the skeleton of the projection visuals to create shifting architectural forms and to reveal that the machine is also the space itself.

The technical production of black box fading makes use of a 3D tracking system (OptiTrack) in conjunction with the local sensors of the chaosflöte, both of which feed into the improvisation machine and into the framework for generating live sounds and visuals. The machine operates on the basis of recorded buffers – recalling snippets of the human player’s past playing at various points in time and outputting them in various transformations and combinations, as determined by the programmed interaction rules. Combined with a creative use of spatialization in ambisonics, the result is a closely shared sound aesthetic between human/improvisation machine that negotiates the boundaries between their perceived identities.
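As an illustration of the buffer-recall principle only (AIYA's actual implementation differs and is not reproduced here), a minimal sketch in Python, with hypothetical names and parameter values:

```python
import random

import numpy as np

SAMPLE_RATE = 48000  # assumed sample rate; the project's actual rate is not stated


class BufferRecall:
    """Toy model of the improvisation machine's core idea: record the
    player into a circular buffer, then replay transformed snippets."""

    def __init__(self, max_seconds=60.0):
        self.buffer = np.zeros(int(max_seconds * SAMPLE_RATE))
        self.write_pos = 0
        self.filled = 0  # number of valid samples recorded so far

    def record(self, block):
        """Write an incoming block of flute audio into the circular buffer."""
        idx = (self.write_pos + np.arange(len(block))) % len(self.buffer)
        self.buffer[idx] = block
        self.write_pos = int((self.write_pos + len(block)) % len(self.buffer))
        self.filled = min(self.filled + len(block), len(self.buffer))

    def recall(self, seconds=1.0):
        """Pull a snippet from a random point in the past and apply one
        simple transformation, standing in for the interaction rules."""
        n = min(int(seconds * SAMPLE_RATE), self.filled)
        start = random.randrange(0, max(self.filled - n, 1))
        snippet = self.buffer[start:start + n].copy()
        transform = random.choice([
            lambda x: x[::-1],  # play the snippet backwards
            lambda x: x * 0.5,  # attenuate it
            lambda x: x[::2],   # crude octave-up via decimation
        ])
        return transform(snippet)
```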

-

form/media choice

[ presentation format ]

> 360° recorded video of a performance with myself and AIYA
> The video is uploaded to YouTube and can be viewed in VR or as a normal 360° video with interactive panning (using the phone's gyroscope)

[ signal flow ]
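In outline (reconstructed from the descriptions elsewhere in this text):

> chaosflöte (audio + local sensors) and OptiTrack (position/rotation of the performer) feed the improvisation machine (AIYA), together with the infrared image from the Intel RealSense camera
> AIYA generates the live electronic sound, spatialized in ambisonics over the I.A. Space system, and drives the reactive projection visuals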

-

key elements

How can agency (and perceived agency) be given to improvisation machines, and how does working with such a machine impact my performance practice as an improvising musician?

[sound]: feedback

I view audio feedback as a palpable exemplification of machine agency, and one that the performer (me) negotiates throughout this work.

[sound]: latency and delay

As the machine amplifies the sound of the flute, latency becomes audibly apparent and is a persistent reminder of the machine's presence as an external embodiment of the performer. The delay (both the inherent latency and the delays artificially produced) extends the performer's presence in time. However, a sufficiently large delay can also reinforce the opposite: that the machine is a separate improvisation partner, as the human performer can use the delay line as a musical counterpoint to her own playing.
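A minimal sketch of such a delay line in Python; the parameter values are illustrative, not those used in the piece:

```python
import numpy as np


def delay_line(signal, delay_samples, feedback=0.4, mix=0.5):
    """Feedback delay: short delays read as latency (an extension of
    the player's body), long delays as a separate counter-voice."""
    out = np.zeros(len(signal))
    buf = np.zeros(delay_samples)  # circular delay buffer
    pos = 0
    for i, x in enumerate(signal):
        delayed = buf[pos]
        buf[pos] = x + delayed * feedback  # feed the output back in
        out[i] = (1 - mix) * x + mix * delayed
        pos = (pos + 1) % delay_samples
    return out
```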

[sound]: buffer recall

Buffer recall is a more sophisticated development of the delay: the improvisation machine recalls snippets of my playing from various points in the past and replays them based on the programmed interaction rules (which, as mentioned previously, is not something that the performer/me is actively keeping track of during the performance). This reinforces the machine as a distinct improvisation partner. Because this sound material literally consists of samples from my flute, it can be difficult in the recording, from the audience's perspective, to distinguish which flute sounds come from the performer/me and which come from the machine. This ambiguity paradoxically reinforces the concept of external embodiment for the listener (the performer's body is perceived as being augmented by the machine) while, in the live setting, reinforcing the machine as a separate improvisation partner from the perspective of the human performer.
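Building on the BufferRecall sketch above, one hypothetical way an interaction rule could decide when the machine speaks; the probabilities and thresholds are invented for illustration:

```python
import random

machine = BufferRecall()  # from the sketch in the concept section


def process_block(flute_block, agitation):
    """agitation in [0, 1], e.g. derived from chaosflöte sensor data.
    Returns a recalled snippet when the rule fires, otherwise None."""
    machine.record(flute_block)
    if random.random() < 0.1 + 0.4 * agitation:  # denser replies when agitated
        return machine.recall(seconds=random.uniform(0.3, 2.0))
    return None
```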

[visuals]: shifting grid

The oscillation type and frequency of the grid projection are programmed to rudimentarily reflect the state of the improvisation (from least agitated to most agitated). The grid also serves as a structural reference in the darkness of the recording/space, giving form and architecture to the space. In this case, the perceived embodiment of the machine also modulates, as the grid creates varying perceptions of this form.
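One plausible shape for such a mapping, in Python; the frequency range and jitter term are guesses, not the project's actual values:

```python
import math


def grid_phase(t, agitation):
    """Map improvisation state to grid motion: a calm state gives slow
    sinusoidal drift, an agitated state faster, jittery oscillation."""
    freq = 0.1 + 2.9 * agitation                   # oscillation frequency in Hz
    jitter = 0.2 * agitation * math.sin(17.0 * t)  # irregularity when agitated
    return math.sin(2 * math.pi * freq * t + jitter)
```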

[visuals]: the recreated shadow

The visual projections display virtual shadows of my body, taken from an Intel RealSense camera positioned in the performance space. Depending on my distance from the camera, the shadows become larger or smaller. The shifting scale of these shadows is meant to modulate the perceived hierarchies between human and machine, but also to reinforce the connection between the two in a form of external embodiment.
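A sketch of one possible distance-to-scale mapping; the direction (closer reads larger) and the ranges are assumptions, as the piece's actual mapping is not documented here:

```python
def shadow_scale(distance_m, near=0.5, far=4.0, max_scale=3.0):
    """Scale the projected shadow by the player's distance from the
    RealSense camera, here with closer positions reading as larger."""
    t = min(max((distance_m - near) / (far - near), 0.0), 1.0)  # normalize distance
    return max_scale - (max_scale - 1.0) * t  # near -> max_scale, far -> 1.0
```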

human-machine interaction

In this regard, I am not restricting my interactions with the machine to my actions on the chaosflöte alone; I also count my interactions with the curtains, light switches, mixer, camera, etc.

between fixed and live

It feels strange to make a work about improvisation in the form of fixed media, but (aside from current pandemic realities) it also opens up an interesting conversation concerning the authenticity of the improvisation: do the behaviors taking place within the work appear to be the result of the editing (as in, the exact order/combination/qualities come to be because of the editing) or the result of the original live performance (in an extreme case, trying to produce the closest 1:1 documentation possible...even though what counts as 1:1 can also be debated)? In my view, it is always a combination, and this combination is inescapable. Still, it impacts the perceived authenticity of the work.

between improvisation and composition

The above issue already appears when the audience evaluates the behavior of the improvisation machine within the live performance: Was that really live, or could that have been a recording? What makes me think it is one or the other? How much does the performer/creator of the improvisation machine already know going into the improvisation - is it actually improvisation or is it more like a performance of an indeterminate composition, where the machine/software/setting is a form of compositional score?

between concert and interactive installation

Two things:

- The physical scene of the performance already behaved like an installation to me, especially with regard to my relationship to that space. As the OptiTrack system tracked the position and rotation of my body and the Intel RealSense camera captured an infrared representation of my body, every movement or sound elicited a degree of response from the projections and electronically produced sounds. During breaks, it was interesting to see people working at the I.A. Space walking around within the tracked performance setting and behaving as if it were a type of interactive installation (even though their bodies were not tracked via OptiTrack, the RealSense camera still picked up their movements, serving as their interaction point).
- The nature of the 360° video format is that one cannot fully guarantee the field of vision the audience will choose to experience. One can only influence it: by predicting which elements the audience will find most interesting to pay attention to at a given point in time, and using that information to organize the dramaturgy of the video, while acknowledging that there will be variations on that dramaturgy depending on how the audience chooses to interact with it. This is, to me, much like an interactive installation, where you have a general idea of what the audience might choose to perceive, but there is still an element of indeterminacy.

-

progress

(18 Apr. 2021) The box is evolving, and I am continuing to make new recordings for the final work. Below is a (non-360°) preview from another entry in the black box series:

(12 Apr. 2021) The second version of black box fading (360°) can be seen at chua.ch/blackbox-NYT. The final version of this work will be released in Spring 2021.

(7 Feb. 2021) The first version of black box fading (360°) can be seen at chua.ch/blackbox. The final version of this work will be released in Spring 2021.

-

credits

Valentin Huber: 360-video filming, stitching, and denoising
Eric Larrieux: Technical support with I.A. Space Ambisonics system & sound recording
Sébastien Schiesser: Technical support with motion capture system and projections
Martin Fröhlich: Technical assistance with I.A. Space projection system
Kristina Jungic: I.A. Space coordination/planning

Scope of my work: concept, performance, programming of improvisation machine behavior (incl. connections with motion capture system, chaosflöte, sound and visual output), sound design, projection visuals/scenography, video editing