PHILHARMONICA. International Music Journal

Digital Cybernarrative: "The Brain Opera" by Tod Machover

Bezmenov Vadim

ORCID: 0000-0003-3981-9649

Head of the pop and jazz vocal studio "Triumph", Zelenograd Cultural Center

124482, Russia, Moscow, ul. Central Square, 1, office 252

bezmenov333@yandex.ru

DOI: 10.7256/2453-613X.2022.4.38622

EDN: VKYPVZ

Received: 15-08-2022

Published: 23-08-2022


Abstract: The article is devoted to the application of new digital-era technologies in modern opera. It analyzes the phenomenon of the cybernarrative, which creates special forms for representing a conditional plot, or a central idea, in the opera theater. As a particular form of cybernarrative, the article examines the instrumentation and underlying ideas of the Brain Opera by the American composer and inventor Tod Machover. This experimental composition combines conceptual attempts to grasp the foundations of human thinking with attempts to present brain impulses as musical material that forms the stochastic, variable fabric of the opera. Special attention is paid to interactivity: before the performance the listener communicates with various interfaces, electronic "toys" that allow them to develop their creative energy and recognize it as an element of the composition in the Brain Opera. The study concludes that the idea of the cybernarrative became fundamental for the project. The treatise of the well-known cognitive scientist Marvin Minsky, "The Society of Mind", together with the creative energy contributed by the recipient, the visitor to the sound installation, serves as the basis of the narrative, while the plot of the libretto itself is conditional and can hardly be read through the journey through the so-called "wilds of the mind". Modern opera uses the cybernarrative as a communication system into whose field all the latest technologies are drawn. At the same time the content of the opera becomes blurred and variable. This has both positive and negative sides: the involvement of the public in the creative process, which Tod Machover strongly encourages in his concept of the Brain Opera, helps to overcome passive perception, but the composition ceases to be a finished form and turns into an open work, a work in progress.


Keywords:

Tod Machover, cybernarrative, modern opera, new technologies, modern theater, Brain Opera, hypercello, biofeedback, biocontrol, cyberspace


Opera as an art form, intertwined with social processes and dependent on shifts in cultural orientation, is undergoing a global transformation in the 21st century.

In the present period, defined by incredibly rapid social, economic and cultural metamorphoses brought about by the accomplished "digital revolution", by the ubiquity of the Internet and of all kinds of network devices and personal gadgets, a new subgenre of opera has become a discursive space built around the relationship between humans and technology in the digital age. In foreign studies this offshoot of the opera genre is called "cybernarrative" opera [1, 2]. The cybernarrative defines the specifics of a new branch of the genre that is inextricably linked with digital technologies, which in this context acquire thematic significance and come to the fore.

In the 21st century this direction has received a powerful impetus for development, but its foundations were laid at the end of the twentieth century, and one of the important sources of new ideas was the work of the American composer and inventor Tod Machover. For many years Machover has led an experimental group called "Hyperinstruments/Opera of the Future" at the MIT Media Lab. Machover's multifaceted creative activity, which combines the scientist and the composer, flows into the direction defined as the "Global Science Opera", which unites science, art, technology and education through a global network of scientific, artistic and educational institutions and projects. At the heart of this digital interaction lies the idea of creating collective operas based on remote digital technologies.

In 1996 Machover and his team at the MIT Media Lab presented a project called Brain Opera, the first experience of its kind of interaction between the listeners and the creators of an opera performance, involving both online participants and a live audience. The premiere of the Brain Opera took place at Lincoln Center in New York between July 23 and August 3, 1996.

At the Media Lab, together with his creative team, Machover developed not only the concept of the Brain Opera but also the idea of the so-called "Toy Symphony", a project in which the potential of children's creative energy reached its apogee. The composer and inventor proposes treating musical ability as a universal human capacity that exists a priori, in the same sense as linguistic ability. To date Machover has created the following operas, listed in chronological order: "VALIS", an opera in two acts (1987) based on the novel by Philip K. Dick; "Brain Opera" (1996), an interactive musical opera-installation based on the book by Marvin Minsky; "Resurrection" (1999), an opera based on the novel by Leo Tolstoy; "Skellig" (2008), an opera based on the novel of the same name by David Almond; "Death and the Powers" (2010), an opera with live electronics and robotics developed at the MIT Media Lab, with a libretto by Robert Pinsky; and "Schoenberg in Hollywood" (2018), an opera with cinematic fragments and live electronics, with a libretto by Simon Robson.

Machover has also developed many innovative music technologies, including hyperinstruments, which expand the possibilities of musical virtuosity, and Hyperscore, software that allows amateurs to create complex, original music from graphic elements: lines and colors.

Among his later works is the opera "Death and the Powers", which includes various specially created devices: an interactive musical chandelier, animatronic walls and a whole army of robots. Machover's creative group subsequently focused on combining musical technologies with the ideas of transhumanism.
Working with patients in institutions such as Tewksbury Hospital near Boston, the group aims to create targeted musical events that adapt to the skills, problems and needs of each individual.

"Brain Opera" has become an entertaining bright and unusual interactive project). This interactive performance toured Europe, Asia, the USA and South America from 1996 to 1998, and in 2000 its interactive installation was permanently installed at the House of Music in Vienna. The creative collaboration of the "Opera of the Brain" was located between different cities and countries: communication was carried out through the use of Skype and Team Viewer combinations.

The project was originally addressed not only to adult music lovers but also to the youngest listeners, children under the age of eight. Perhaps the reason for this popularity was the absence of behavioral barriers and the desire to be included in a social game, a creative process involving people of all ages. Machover sought to engage these different age groups directly, by creating special instruments built on modern technologies.

The idea of creating an interactive composition whose basis can be a physical gesture, or a musical idea formulated by the recipient under specially arranged conditions, attracted many composers of the 20th and 21st centuries. Realizing this idea inevitably raises the question of building a musical system grounded in human physiology. Biofeedback systems have traditionally been a means of analyzing evoked reactions; since then the concept of biocontrol has been developed quite broadly. More recently, the development of special interfaces has become widespread and holds potential for various musical applications. Analog biofeedback systems have been used in music since the 1960s. David Tudor, composer and collaborator of John Cage, applied biofeedback to control sound directly. "Music for Solo Performer" by Alvin Lucier became the most famous composition based on this kind of interactivity. Lucier wrote: "I realized the value of the EEG [electroencephalogram] situation, presenting it as a performative element of theater. I was also touched by the image of a motionless, if not paralyzed person who, simply by changing states of visual attention, can activate percussion" [10, 5].

David Rosenboom created a number of hybrid works with biofeedback and defined the field as follows: "The term 'biofeedback' will be used to refer to the presentation to an organism, through sensory input channels, of information about the state and/or course of change of a biological process in that organism, for the purpose of achieving some regulation or performance control over that process, or simply for the purpose of internal exploration and heightened self-awareness" [2, 47].

The use of EEG, the recording of brainwave signals, has fascinated many performance artists and musicians. Since the time of Lucier and Rosenboom, the most "advanced" multichannel electrode systems have allowed far more detailed monitoring of human brain activity, while consumer electronics have made the concept of biofeedback accessible to a wide range of people. Interfaces such as Masahiro Kahata's Interactive Brainwave Visual Analyzer (IBVA) were used by the New York musician Miya Masaoka to expand instrumental performance practice. Such interfaces work with a low-amplitude, noisy signal, and extracting musically significant information from the brain's biopotentials remains a problem in artistic and performative practice.

The use of medical electrode systems made it possible to study musical ideas and their representation in the EEG thoroughly. The Dutch scientist Peter Desain studied physical responses to rhythmic stimuli in the EEG, exploring whether a rhythm imagined by the subject could be detected by comparing the EEG output with the traces recorded while listening to that rhythm. Developments in brain-computer interfaces (BCI) have since been extended to musical interaction. The Brazilian composer Eduardo Miranda proposes brain-computer musical interfaces (BCMI) in which continuous EEG readings and biosignals activate generative musical algorithms, also addressing the difficulty of modulating musical dynamics. Andrew Brouse created a meditative installation using these technologies [3, 100].

The idea of biomusic has existed and developed since the beginning of the twentieth century. As early as 1934 the psychophysiologists E. Adrian and B. Matthews attempted to convert the brain's electroencephalogram (EEG) into sound: the alpha rhythm, a steady oscillation at a frequency of about 10 Hz, was recorded with the encephalograph and sonified. Alvin Lucier's famous performance "Music for Solo Performer" (1965), one of whose first performers was John Cage, grew out of a collaboration with Edmond Dewan, a scientist engaged in research on the brain's biopotentials.

The advent of digital signal processing methods in the 1980s made the control of interactive elements reproducible and at the same time more reliable than analog technologies allowed. Thus there was a fundamental shift from the artistic use of biofeedback toward the concept of biological control. While biofeedback allows the physiological state to be monitored and transferred to other media through visualization or sonification, biocontrol seeks to create reproducible interaction using physiological biosignals. Teresa Marrin Nakra of the MIT Media Lab used the Delsys active dry-electrode system for electromyographic (EMG) tracking of an orchestra conductor's gestures. Yoichi Nagashima built home-made circuits for simulating musical interaction based on EMG even before the DIY movement emerged [4].
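
The biocontrol idea described here can be illustrated with a minimal sketch: a raw biosignal (for example EMG) is rectified and smoothed, and its envelope is mapped to a 7-bit control value of the kind a MIDI controller expects. The sensor data, smoothing constant and mapping below are illustrative assumptions, not the code of any of the systems named above.

```python
# Minimal sketch of biocontrol: rectify a biosignal, follow its envelope,
# and scale the result to 0-127 control values. All parameters are
# illustrative assumptions rather than any original implementation.
import numpy as np

def biosignal_to_controller(raw: np.ndarray, fs: float, smooth_hz: float = 5.0) -> np.ndarray:
    """Return 0-127 control values derived from a raw biosignal."""
    rectified = np.abs(raw - np.mean(raw))               # full-wave rectification
    alpha = 1.0 - np.exp(-2 * np.pi * smooth_hz / fs)    # one-pole low-pass coefficient
    envelope = np.empty_like(rectified)
    acc = 0.0
    for i, x in enumerate(rectified):                    # simple envelope follower
        acc += alpha * (x - acc)
        envelope[i] = acc
    env_max = envelope.max() or 1.0
    return np.clip(envelope / env_max * 127, 0, 127).astype(int)

if __name__ == "__main__":
    fs = 1000.0
    t = np.arange(0, 2.0, 1 / fs)
    fake_emg = np.random.randn(t.size) * (0.2 + 0.8 * (t > 1.0))  # "tension" rises after 1 s
    cc = biosignal_to_controller(fake_emg, fs)
    print(cc[::200])   # control values rise as the simulated muscle tension rises
```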

All these revolutionary technologies became the basis of the cybernarrative in modern opera. The ground for the development of these ideas was prepared by the work of the American composer Tod Machover and, in particular, his project Brain Opera.

Brain Opera is an interactive musical journey into the "wilds of the mind", presented simultaneously in physical space and in cyberspace. It was the first example of the opera genre in which a performer, wearing a wireless EEG headset, used brain biopotentials to launch the video, the sound environment and the libretto, all in close connection with the activity of those biopotentials.

In Machover's work the idea of using brain biopotentials, previously explored by Lucier, receives a new perspective of creative development. Biopotentials, the impulses of brain activity, are displayed in real time: the performer interacts with the audience within a 360-degree immersive theater, and a dramatic story is told in the space between performer and viewer.

The fundamental idea was to try to answer the question: is there a place in human consciousness that could not be represented through the mediation of modern technologies, projected onto image and sound? Machover created a preliminary libretto based on an oral narrative, the true life story of Noor Inayat Khan, a Sufi Muslim princess who was at the same time a secret British agent in Nazi-occupied France and was later murdered at Dachau.

The main mechanisms of the opera's plot are memory and self-knowledge, built upon invasive and complex technologies of observation and self-monitoring. The project involved tactile channels of mutual influence between performers and participants through a feedback system in which the performer interacted with the audience through movement, gaze, touch and speech, which noticeably changed the alpha rhythms of the performer's brain activity.

The plot is based on the figure of a young woman, Noor Inayat Khan, whose father Hazrat Inayat Khan brought Sufism to the West in the early twentieth century. Noor was born in Moscow to an American mother and an Indian father; she grew up in Great Britain and France and, coming from an aristocratic Indian family, was the great-great-granddaughter of Tipu Sultan, ruler of the principality of Mysore; she spoke both English and French fluently. During the Second World War the British Special Operations Executive (SOE) took advantage of this, and in 1943 she was sent into occupied Paris to work as a radio operator for the resistance. Noor was the first female radio operator dispatched to Nazi-occupied Europe; her underground work lasted only three months before she was exposed. She was tortured by the Gestapo and then sent to a concentration camp. In 1944 Noor was shot at Dachau. Throughout her imprisonment she never divulged information. This fact, the inner work of consciousness that modern technical means could turn into an object of observation, became the main idea of the opera, directly related to the central theme of Machover's work: the possibility of observing human consciousness through biometric indicators, above all those of the human brain.

Work on the opera took two years. As soon as Machover received proof of his concept, that brain waves really are capable of triggering visual effects and shaping sound and speech, he entered into a creative collaboration with the Max/MSP programmer Tommy Martinez, who developed a patch triggered by different brainwave patterns corresponding to four different mental states. He then turned to the sound artist Taras Mashtaler. The text was based on Noor's biography, which first appeared in materials by the Indian writer Shrabani Basu, initially published online and then in English under the title "Spy Princess" [5], and, in a second version, in the documentary "Enemy of the Reich" directed by Robert H. Gardner.

The sound score, the pre-recorded libretto and the video were held in four electronic data banks containing sounds corresponding to particular emotions: excitement, meditative states and feelings of frustration. These data banks were triggered by criteria established and selected by the performers during rehearsals. For example, at the moments when certain states of the heroine Noor were depicted, Machover sought to represent meditativeness and concentration; when a certain threshold state was reached, harmonious and calm music appeared. The ninety-nine names of God are heard in the libretto, almost as in Stockhausen's famous vocal sextet "Stimmung".

A sequence of sounds, images and pre-recorded words could be launched randomly from each individual data bank, but only at the moment when the predetermined threshold for measuring a particular emotion was reached. This threshold was necessary to avoid triggering incorrect chains of reactions through excessive emotional stress. The data banks corresponding to emotions of arousal produced short, intermittent sounds. Individual key phrases of the libretto, capable of switching the emotional plane, were correlated with other emotions, such as disappointment, and followed similar but carefully considered trajectories.
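
A small sketch can clarify the triggering logic described here: a data bank fires only when the measure associated with its emotional state crosses a preset threshold, so that noisy fluctuations do not launch the wrong chain of reactions. The state names, threshold values and the single "analysis frame" input are assumptions for illustration, not the production Max/MSP patch.

```python
# Hedged sketch of threshold-gated data banks with simple hysteresis,
# so a bank is not re-triggered on every analysis frame.
THRESHOLDS = {"meditative": 0.6, "excited": 0.7, "frustrated": 0.65}   # assumed values

class DataBank:
    def __init__(self, state: str):
        self.state = state
        self.armed = True                      # re-armed only after the level falls back

    def update(self, level: float) -> bool:
        """Return True when this bank should launch its sounds, words and video."""
        threshold = THRESHOLDS[self.state]
        if self.armed and level >= threshold:
            self.armed = False                 # avoid re-triggering while the level stays high
            return True
        if level < threshold * 0.8:            # hysteresis margin before re-arming
            self.armed = True
        return False

banks = {state: DataBank(state) for state in THRESHOLDS}
frame = {"meditative": 0.72, "excited": 0.31, "frustrated": 0.40}       # one analysis frame
for state, level in frame.items():
    if banks[state].update(level):
        print(f"launch {state} bank")          # e.g. calm, harmonious material for 'meditative'
```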

In addition to the story of Noor Inayat Khan, another important source is Marvin Minsky's widely known book "The Society of Mind", which has even been translated into Russian.

Marvin Minsky is one of the fathers of artificial intelligence. In 1959, together with John McCarthy, he founded the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology, where Tod Machover would also later work. Minsky's research led to theoretical and practical advances in artificial intelligence and influenced the development of cognitive psychology, neural networks, and the theory of Turing machines and recursive functions; he was also one of the pioneers of intelligent robotics and telepresence. A proponent of the symbolic approach in AI, Minsky suggested in 1974 that the human mind interprets each new object, and language in particular, through special memory structures that he called frames. In 1986 he published "The Society of Mind" [6], which served, among other things, as the thematic basis of Machover's opera. The book contains a theory of thinking that touches on nearly everything, from the origins of human speech to the claim that a computer, which undoubtedly has thinking potential, may in some cases not obey pure logic alone. Minsky, like Machover, was fascinated by the idea of tools designed for studying the laws of thinking. The fundamental thesis of his concept is that thinking rests on the complex interaction of many simple programs.

Minsky saw in music a special potential for uncovering the secrets of the human mind. Asking "Why do we love music?", he wrote: "Our culture immerses us in it for hours each day, and everyone knows how it affects our emotions, but few people think about how music affects other thoughts. It is amazing how little curiosity we have about such a pervasive 'ecological' influence. What could we discover if we studied musical thinking? Do we have the tools for this kind of work? (...) I feel that music theory has been stuck trying to find universals for too long. Of course, we would like to study Mozart's music the way scientists analyze the spectrum of a distant star. Indeed, we find almost universal practices in every musical era. But we must treat them with suspicion, for they may show no more than what composers of the time considered universal. In that case the search for truth in art becomes a parody in which the practice of each epoch merely parodies the prejudices of its predecessors" [6, 39]. The problem with searching for universal laws of thinking, from Minsky's point of view, is that memory and thinking interact and develop together.

We don't just learn about something, we learn to think; and then we can learn to think about thinking itself. Soon our ways of thinking become so complex that we cannot expect to understand their details in terms of their superficial functioning, but we can understand the principles that guide their growth [6, 39].

Minsky's studies of the process of music-making recognize a number of important commonalities between composed and freely improvised music. In a 1985 interview George Lewis recalled that Minsky was interested in the relationship between free improvisation and musical composition: "I remember talking to Marvin Minsky and Maryanne Amacher one day in Soho [...], and I said that I wanted to buy a computer and build an interactive improvisation system on it. [...] They seemed to think it was a good idea and that it should be discussed" [7, 25]. Thus there are "objective" cues (themes, forms of movement, and so on) and, by contrast, "subjective" signals with essentially psychodynamic functions (for example, for development or initiation), which can be perceived differently depending on the listener and are not necessarily identifiable in the score [7, 26].

The evidence for the importance of these "subjective" signals echoes Minsky's description of musical composition: "Music, of course, does not necessarily have to meet the expectations of every listener; every sound plot requires novelty. Intentions are not so important: intention, control or novelty sometimes turns into nonsense. [...] Composers can have different goals: to comfort and calm, to surprise and shock, to tell fairy tales, to amaze with artistic originality, to teach new things or to destroy previous ideas about art. [...] When expectations are confirmed too often, a musical style may seem very boring. [...] Every performer must anticipate and direct in advance the listener's attention in order to hold it" [7, 39].

From the point of view of the psychoacoustics researcher Irène Deliège, performers engaged in freely improvised music are capable of interaction that could serve as an objective signal, but perhaps the more specialized skill is the deft handling of the very "psychodynamic functions" of the subjective signals [10, 250]. Their involvement (here it is worth recalling Tod Machover's interactive instruments, his hyperinstruments) entails, at least in part, an active switching of the listener's attention through continuous coordination [8, 27]. In improvisation, "meaning is created in real-time performance, as a collision or agreement of different sets of meanings: [...] what individual performers perceive and/or mediate is what the audience expects and what it eventually receives" [8, 29].

The Brain Opera is divided into three parts. "The Wilds of the Mind" is an interactive space in which the audience explores and creates music related to the Brain Opera through six new interfaces. "Net Music" is a virtual interactive space in which participants on the Internet explore and create music related to the Brain Opera through Java applets. The performance itself involves three groups of performers who use new interfaces to perform the written music and, at the same time, to bring material contributed by the public through the online archive into the piece.

At the end of the last century and the beginning of this one, psychologists and psychiatrists hypothesized that human consciousness is somehow "controlled" by one, or a small number, of highly intelligent "control centers". The world of human thought and consciousness is so unlike any other phenomenon that many considered the mind inexplicable from a scientific point of view. Back in the late 1970s Minsky presented a concept according to which human intelligence differs far less from artificial intelligence than had previously been assumed. In "The Society of Mind" Minsky also proposes the theory that the human mind has no "control center": intelligent thought is in fact a collection, or "society", of individually meaningless "agents". Minsky draws a metaphor between the human brain and the so-called "wilds" of these agents. This is the basic concept behind the wilds of the mind in the Brain Opera, and interfaces such as the singing trees, talking trees, rhythm tree, melodic easels and gesture walls are nothing other than an accumulation of various agents that interact with the Brain Opera through six different interfaces.

The interactive section of the Brain Opera, called "The Wilds of the Mind" or "The Lobby", opened in the marble lobby of the Juilliard School in July 1996 at the first Lincoln Center Festival. It consisted of 29 installations run by some 40 networked PCs and workstations.

During the Brain Opera events these interactive stations were open to the general public, who could explore and master them. The stations fell into five main types, each using different methods of gesture recognition and multimedia display. Some of these stations made it possible to shape the structure of the sound, others recorded samples of users' voices, and still others allowed various themes of the Brain Opera to be manipulated parametrically. After about an hour in the lobby, the audience was led into the theater space, where three musicians performed compositions in the style of the Brain Opera on a variety of hyperinstruments. Machover's hyperinstruments are custom-designed instruments such as the hypercello, the hyperviolin and the hyperbow. They are intended for thoroughly virtuosic musicians, so there is no need to learn new performing techniques; their unusual appearance and additional options can only extend the performer's capabilities.

The Brain Opera project focused on integrating diverse, often unrelated sound sources originating from different participants in the interactive process in the theater lobby. All of this was merged into a single collective artistic experience which is, in essence, emergent: something disproportionately greater than the sum of its parts. Our mind likewise transforms fragmented experiences into rational thinking [6, 308].

Through the system of interactive instruments, the contribution of each visitor recreated a new shape of the musical composition. These analogies with thought processes and the mysteries of human thinking represented the concept of the artificial intelligence pioneer Marvin Minsky. Machover's use of uncorrelated, even stochastic audience participation (simulating neural stimulation) followed Minsky's theory. The Brain Opera strives to involve a non-specialist audience actively in the artistic environment, creating new possibilities for interactive music that until then had seemed impossible [6, 309].

Brain Opera as an interactive installation is based on a variety of different instruments and interactive stations developed specifically for this project in the laboratory of the Massachusetts Institute of Technology. At the same time, it should be emphasized that this project was by no means a fixed or purely experimental installation: the components had to work in many real environments and interact with a variety of people. As a result, the technologies created specifically for this project have demonstrated intuitiveness, reliability and the absence of excessive sensitivity to changes in background conditions, noise and disorder.

Clearly, the Brain Opera employed many different bulky devices and installations, which ruled out presenting it outside a specially organized space.

Let us now briefly describe these devices, to clarify how they operate and how they supply information to the creative field of the Brain Opera.

The simplest and most numerous stations in the theater lobby were the "talking trees". Each of these interfaces had a dedicated PC, a pair of headphones, a microphone, a color LCD screen and a ProPoint mouse, a handheld device that let the thumb move the cursor by shifting the center of pressure on a fingertip-sized force-sensitive resistor. Clicks were registered by a button placed within reach of the index finger. A switch was installed under the "mat" of each talking tree: when a listener stepped under the tree, the mat switch closed, signaling the PC through its port. A Macromind Director sequence was then launched, with video clips of Minsky, whose "The Society of Mind" inspired both the libretto and the overall concept of the Brain Opera. Throughout the "dialogue", the image of Minsky appearing on the monitor asked users several questions; their answers were recorded and indexed on the PC and then transmitted over the network to the "sample bank" for playback during subsequent performances. There were about 15 talking trees in all. Although the dialogue with Minsky struck visitors as both interesting and amusing, it was only one of the possible uses available at each talking tree.
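
The flow of a talking-tree station, as described above, reduces to a simple loop: the mat switch closes, a Minsky prompt plays, the visitor's answer is recorded, indexed and queued for the shared sample bank. The sketch below is a plain-Python paraphrase of that flow; all function names and the in-memory queue are hypothetical placeholders, not the station's actual Director/PC software.

```python
# Hedged sketch of the talking-tree interaction loop: presence -> prompt ->
# record -> index -> hand off to the networked sample bank.
from queue import Queue

sample_bank: Queue = Queue()          # stands in for the networked sample bank

def mat_closed() -> bool:             # placeholder for the floor-mat switch input
    return True

def record_answer() -> bytes:         # real station would capture microphone audio here
    return b"visitor-answer-audio"

def run_talking_tree(tree_id: int, questions: list[str]) -> None:
    if not mat_closed():
        return
    for index, question in enumerate(questions):
        print(f"[tree {tree_id}] Minsky asks: {question}")
        audio = record_answer()
        sample_bank.put({"tree": tree_id, "question": index, "audio": audio})

run_talking_tree(1, ["What is common sense?", "Can machines be creative?"])
print(f"{sample_bank.qsize()} answers queued for playback in the performance")
```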

The "Singing Trees" were similar in design. Having no tactile interface, they respond exclusively to the singing voice, which is analyzed on the basis of ten dynamic functions. The same parameters controlled the mechanism of algorithmic composition, which effectively re-synthesized the participant's voice on the Kurzweil K2500 synthesizer. "Singing trees" sought to create uniformity as it happens in a singing voice; the longer the interaction with the interface, the more tonal and "euphonious" the final result of vocal resynthesis became [6].

The derived parameters were also used to control the playback of animation on the LCD screen. At a certain moment a ballerina appeared and began to dance, and when the voice broke off, the animation dissolved into a set of simpler images. The reciprocal relationship between visual and sound factors was evident: sound and visual stimuli prompted the participant to find suitable sounds, which in turn drove the visualization. The installation had three singing trees, each working with a different sequence of images.
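
The "reward for evenness" behaviour of the singing trees can be sketched as follows: the steadier and longer the sung tone, the more consonant the intervals chosen for the resynthesized accompaniment. This is an illustrative model only; the interval table, the steadiness measure and the ten-feature analysis are assumptions, not the Kurzweil K2500 patch itself.

```python
# Illustrative mapping from vocal steadiness and duration to accompaniment
# intervals: an unsteady, short tone gets dissonant material, a long steady
# tone gets mostly consonant intervals.
import numpy as np

CONSONANCE_ORDER = [6, 10, 3, 4, 8, 9, 5, 7, 12, 0]   # semitone intervals, roughly dissonant -> consonant

def choose_intervals(pitch_track_hz: np.ndarray, held_seconds: float) -> list[int]:
    """Map vocal steadiness and duration to a set of accompaniment intervals."""
    cents = 1200 * np.log2(pitch_track_hz / np.median(pitch_track_hz))
    steadiness = 1.0 / (1.0 + np.std(cents) / 50.0)     # 1.0 = perfectly steady pitch
    score = min(1.0, steadiness * min(held_seconds / 10.0, 1.0))
    n = 1 + int(score * (len(CONSONANCE_ORDER) - 1))
    return CONSONANCE_ORDER[-n:]                        # more consonant choices as the score grows

steady_voice = 220 * 2 ** (np.random.randn(200) * 0.02 / 12)   # ~2-cent wobble around A3
print(choose_intervals(steady_voice, held_seconds=8.0))         # mostly fourths, fifths, octaves
```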

Another interface used in the Brain Opera was the melodic easel. It consisted of networked monitors embedded in a suspended "table". The monitors were equipped with pressure-sensitive touch screens (IntelliTouch from ELO TouchSystems). Users controlled a parametric sequence by performing one of the Brain Opera-themed tasks, moving a finger across the screen. Synthesized voices created on the Kurzweil K2500 sampler and the Korg Prophecy synthesizer reacted to pressure and speed. The video sequence played on the monitor was likewise determined by finger position and pressure, using various kinds of real-time video processing.

Each of these melody easels used a pair of computers, one for the music and one for generating the video and various visual effects. Position and pressure data (physical and acoustic) were logged over time and in a raster form in which the values determined a radius; the pressure dropped to zero when the finger left the glass. IntelliTouch screens use propagating surface acoustic waves: the finger's coordinates are determined through the glass of the touch screen from the timing of the acoustic absorption peak, which depends on the finger's position.
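
The kind of parametric mapping the melodic easels used can be illustrated with a short sketch: finger position and pressure on the touch screen become pitch, brightness and loudness values. The concrete ranges and the mapping targets below are assumptions for illustration, not the actual Kurzweil K2500 / Korg Prophecy patches.

```python
# Hedged sketch of a touch -> synth-parameter mapping for a melodic easel.
from dataclasses import dataclass

@dataclass
class TouchFrame:
    x: float          # 0.0-1.0 across the screen
    y: float          # 0.0-1.0 up the screen
    pressure: float   # 0.0 (finger lifted) to 1.0

def map_touch(frame: TouchFrame) -> dict[str, int]:
    if frame.pressure <= 0.0:
        return {"gate": 0, "note": 0, "brightness": 0, "velocity": 0}   # finger off the glass
    note = 48 + int(frame.x * 24)                   # two octaves left-to-right
    return {
        "gate": 1,
        "note": note,
        "brightness": int(frame.y * 127),           # vertical position -> filter/brightness control
        "velocity": max(1, int(frame.pressure * 127)),
    }

print(map_touch(TouchFrame(x=0.5, y=0.8, pressure=0.6)))
```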

In the Harmonic Driving interface the user "drove" an animated car through a graphical and musical environment. Instead of a conventional steering wheel or joystick, which could evoke associations with computer games, the user steered with a special interface built around a large, bendable spring (2 inches in diameter and 15 inches long), which created a completely different feel better suited to the graphics. Musical parameters were selected both graphically (pointing at different tracks or hitting musical objects) and continuously (joystick-like actions mapped directly onto musical effects). The bending angles along the two coordinate axes were measured with a capacitive sensor that detected the displacement between the spring's coils near its middle. Four pickup sensors were installed outside the coils at 90° intervals. A transmitting electrode of similar design, driven with a 50 kHz sine wave, was wound around the coil above the pickups. As the spring bent, the pickups drew closer and the capacitive coupling between transmitter and receiver changed. Shielded cables ran from these electrodes to a nearby amplifier, after which a "Fish" electric-field measurement board digitized the four proximity signals into 7-bit MIDI values.

The twisting of the spring was also measured, by a potentiometer that tracked the relative angle between the top and bottom of the spring. A participant's presence was detected by a light beam directed across the seat of the chair on which the performer sat: the beam was broken while a participant was present and restored when they left the interactive musical game. At that moment the software reset automatically, and the incoming potentiometer and photodetector signals were digitized.
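
The digitization step described above amounts to scaling sensor voltages into the 7-bit range carried over MIDI and deriving a signed bend estimate from the opposing sensor pairs. The sketch below assumes 0-5 V sensor ranges purely for illustration; it is not the Fish board firmware.

```python
# Minimal sketch: four capacitive proximity readings plus the twist
# potentiometer are scaled to 0-127, and bend direction is estimated
# from opposing sensor pairs. Voltage ranges are assumed values.
def to_midi7(value: float, v_min: float, v_max: float) -> int:
    """Clamp a sensor voltage into the 0-127 range expected over MIDI."""
    norm = (value - v_min) / (v_max - v_min)
    return max(0, min(127, int(norm * 127)))

def harmonic_driving_frame(proximity_volts: list[float], twist_volts: float) -> dict:
    left, right, up, down = (to_midi7(v, 0.0, 5.0) for v in proximity_volts)
    return {
        "bend_x": right - left,          # signed bend estimate from opposing sensors
        "bend_y": up - down,
        "twist": to_midi7(twist_volts, 0.0, 5.0),
        "raw": [left, right, up, down],  # the four 7-bit proximity values sent as MIDI
    }

print(harmonic_driving_frame([1.2, 3.9, 2.5, 2.4], twist_volts=2.0))
```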

All three Harmonic Driving stations work on the same principle. Each uses a PC to play the music (generated by an E-Mu Morpheus synthesizer) and an IBM RS/6000 workstation to generate the graphics. An array of eight LEDs under MIDI control is built into the joystick. The Rhythm Tree, in turn, is an electronic drum installation consisting of 320 drum pads.

The pads are interconnected like a garland of Christmas lights. When a peak exceeding the threshold value was detected, a set of features was extracted from the following 0-15 microseconds, a remotely programmable interval. These parameters included the polarity of the initial PVDF peak, the number of significant zero crossings detected, and the total integrated amplitude of the signal.

The pads used an efficient circuit that transmitted the data with minimal delay.

Almost all drum-pad parameters (trigger thresholds, integration time, LED modes, and so on) were programmable. A Visual Basic program was written specifically to allow individual pads and groups of pads to be configured quickly with these parameters. The resulting data files could be downloaded and combined into a single bank by the main music software, and the parameter list was continually resent for reprogramming. All pads also shared a connection to an analog-to-digital input, providing common functions for direct audio synthesis.
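
The per-pad feature extraction described above (peak polarity, zero-crossing count, integrated amplitude within a programmable window after the trigger) can be sketched offline as follows. The window length, threshold and test signal are illustrative assumptions, not the pad firmware.

```python
# Sketch of rhythm-tree-style hit analysis: once a peak exceeds the trigger
# threshold, extract polarity, zero crossings and integrated amplitude from
# the following window of samples.
import numpy as np

def analyze_hit(signal: np.ndarray, threshold: float, window: int = 64) -> dict | None:
    above = np.flatnonzero(np.abs(signal) > threshold)
    if above.size == 0:
        return None                                   # no hit on this pad
    start = above[0]
    seg = signal[start:start + window]
    signs = np.signbit(seg).astype(np.int8)
    return {
        "polarity": 1 if seg[0] > 0 else -1,          # polarity of the initial peak
        "zero_crossings": int(np.count_nonzero(np.diff(signs))),  # how 'ringy' the hit is
        "integral": float(np.sum(np.abs(seg))),       # total integrated amplitude
    }

pad_signal = np.concatenate([np.zeros(100), np.hanning(64) * np.sin(np.linspace(0, 20, 64))])
print(analyze_hit(pad_signal, threshold=0.2))
```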

The next installation, the "Gesture Wall", uses transmit-mode electric field sensors to measure the position and movement of the user's hands and body in front of a projection screen. A brass transmitter plate on the floor was driven with a low-frequency sinusoidal signal (in the range of 50-100 kHz, with each gesture wall tuned to a different frequency). When the performer stepped onto the transmitter platform, this signal was coupled through the performer's shoes into the body. A set of four antennas installed around the perimeter of the screen used synchronous demodulation to receive the transmitted signal and reject out-of-band background. The amplitude of the received signals, which corresponded to the strength of the capacitive coupling (and hence to proximity to the body), was detected, routed through logarithmic amplifiers to bring it into a convenient linear voltage range, then digitized and sent via MIDI to a PC. An LED built into each sensor brightened as the performer approached; these LEDs could also be driven directly via MIDI to light in any color.

The connected PC determined the positions of the hands in the plane of the receivers, and their distance from this plane, from linear combinations of the four sensor signals. The weighting coefficients were determined by calibration with the hands held in the plane of the sensors. In its principle of operation the gesture wall partly recalled both the theremin and the later terpsitone created by Leon Theremin, which consisted of a platform equipped with antennas monitoring the space above it, with which a dancer controlled musical material through movement. Of the three terpsitones built, only the last, made in 1978, survives to this day.

In the same way, the gesture walls had to produce an adequate audiovisual response across a wide range of body types and of the poses and movements a performer might adopt. A further device was later developed on the basis of a scanning laser rangefinder capable of determining the exact positions of several hands in the plane, regardless of body size or pose. The "back end" of each gesture wall consisted of a pair of PCs (one running the music and sensor-analysis code, the other the graphics code), a Kurzweil K2500 synthesizer and a video projector. The musical mappings formed sequences whose amplitude grew as the body approached the sensor plane (with silence as the zero point when the participant stood far from the sensors). The register changed as the arms and body moved vertically: low notes sounded when the hands were near the lower sensors, high notes when they were near the upper ones, while the timbre of the instrument changed as the hands or body were scanned from right to left. The visual mappings produced changes in the video: as a person approached the sensors, effects appeared that responded to the position of the hands and body.
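
The gesture-wall mapping just described can be condensed into a small sketch: the four antenna amplitudes are combined into a rough hand position, and proximity, height and left-right position drive amplitude, pitch register and timbre. The calibration weights and musical ranges below are assumptions, not the original analysis code.

```python
# Hedged sketch: four-antenna amplitudes -> rough position -> musical controls.
def estimate_position(top: float, bottom: float, left: float, right: float):
    total = top + bottom + left + right or 1e-9
    x = (right - left) / total            # -1 (left edge) .. +1 (right edge)
    y = (top - bottom) / total            # -1 (low) .. +1 (high)
    proximity = total                     # overall coupling grows as the body approaches
    return x, y, proximity

def gesture_to_music(x: float, y: float, proximity: float) -> dict:
    return {
        "amplitude": min(1.0, proximity / 4.0),       # silence when far from the sensor plane
        "midi_note": int(48 + (y + 1) * 18),          # low notes near lower sensors, high near upper
        "timbre_index": int((x + 1) * 3.5),           # timbre scanned across a bank of presets
    }

x, y, prox = estimate_position(top=0.8, bottom=0.2, left=0.3, right=0.5)
print(gesture_to_music(x, y, prox))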

Another interactive instrument was the "sensor chair", likewise based entirely on transmit-mode electric field measurement. Research on audio-tactile displays is relatively new; its main goal is to enhance tactile perception by processing several audio-tactile channels simultaneously, since increasing the number of channels can effectively enrich the information received. A similar principle guided the creators of the Emoti-Chair interface in the 2000s, a sensory sound-substitution system that offers the hearing-impaired an audio-tactile version of music [8]. It assumes that music can be perceived in the tactile modality by identifying vibrations from different instruments and sounds, covering the frequencies of the spectrum, presented at several points on the body.
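
The sensory-substitution idea behind systems like the Emoti-Chair cited above can be illustrated briefly: the audio spectrum is split into bands and each band's energy drives a vibration channel at a different point on the body. The band edges and channel count below are assumptions for the example, not the published system's parameters.

```python
# Illustrative band-splitting for a multi-channel audio-tactile display.
import numpy as np

BAND_EDGES_HZ = [20, 80, 200, 500, 1200, 3000, 8000]   # six assumed vibro-tactile channels

def band_energies(audio: np.ndarray, fs: float) -> list[float]:
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(audio.size, 1 / fs)
    energies = []
    for lo, hi in zip(BAND_EDGES_HZ[:-1], BAND_EDGES_HZ[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        energies.append(float(spectrum[mask].sum()))
    return energies                                     # one vibration intensity per body channel

fs = 16000
t = np.arange(0, 0.25, 1 / fs)
mix = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
print([round(e, 1) for e in band_energies(mix, fs)])    # low and mid-high channels light up
```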

Machover's sensor chair resembles the gesture wall, except that the performer sits on a chair with a transmitter electrode attached to the seat, coupling the signal into the performer's body. Since the performer is seated, the legs can move freely and are also tracked by a pair of measuring electrodes mounted on the chair's platform (with indicator lights under them showing the position of the feet, as before). A pair of footswitches was also provided as a hard, sensor-independent trigger for switching display modes and the like. The chair system is used widely in the Brain Opera, for example to trigger and cross-fade several evolving sound sources through movements of the arms and legs. The third performance instrument is based on an entirely different set of technologies.

These are the so-called "batons", controller-based interfaces that have been relatively popular in electronic music research and in various kinds of sound production. Some earlier batons were optical trackers, most of them based on a CCD array: a camera picked up an infrared light source at the tip of the baton, while others used a segmented photodiode detector. Machover's baton is a multimodal handheld input device that measures several types of user activity with three different sensor systems. As with earlier batons, the position of the infrared LED at the tip of the baton is tracked precisely. A set of five force-sensitive resistor strips mounted along the handle measures the continuous pressure of the thumb, the index finger, the middle finger, the last two fingers together, and the palm. The system is completed by a set of three orthogonal micromachined accelerometers.

To ensure a reliable and fast response in theatrical conditions, with a large, unpredictable background from incandescent stage lighting, conventional video tracking was not used; instead, a dedicated synchronous-demodulation tracker based on a 2D position-sensitive photodiode (PSD) was built. The tracker was placed a few meters from the performer so that the whole performance area remained in view. An infrared filter over the detector, together with the narrow bandwidth of the demodulation filter, completely suppressed interference from static and dynamic stage lighting while preserving a prompt response to the baton's rapid movements. The pressure data also served as a source of gestural information with which several musical parameters could be controlled.
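
The multimodal baton data described above (a 2D position from the infrared tracker, five finger and palm pressure strips, and a three-axis accelerometer) can be fused into a handful of musical control values, as in the sketch below. The specific mappings are illustrative assumptions only.

```python
# Hedged sketch of fusing baton sensor streams into musical controls.
from dataclasses import dataclass
import math

@dataclass
class BatonFrame:
    x: float                  # 0-1, from the position-sensitive photodiode tracker
    y: float                  # 0-1
    pressures: tuple          # five force-sensitive resistor readings, 0-1 each
    accel: tuple              # (ax, ay, az) in g

def baton_controls(frame: BatonFrame) -> dict:
    grip = sum(frame.pressures) / len(frame.pressures)
    jerk = math.sqrt(sum(a * a for a in frame.accel))       # magnitude of acceleration
    return {
        "pan": int(frame.x * 127),                          # left/right placement on stage
        "register": int(frame.y * 127),                     # baton height -> pitch register
        "intensity": int(min(1.0, grip) * 127),             # how hard the baton is squeezed
        "accent": jerk > 2.0,                               # a sharp gesture triggers an accent
    }

print(baton_controls(BatonFrame(0.7, 0.4, (0.2, 0.5, 0.4, 0.1, 0.3), (0.1, 2.4, 0.3))))
```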

The "Magic Carpet" interface consisted of two subsystems: a carpet that determined the location and pressure of the legs and a pair of microwave motion sensors that responded to the speed of movements of the upper body. Ten seconds of actual data received from this system, reacting to a person who usually walks diagonally across the carpet, and then retreats, became the main material for subsequent sound transformations. For each sound event, special designations were applied-circles with a proportional radius of speed, transforming into MIDI material. The data showed movement and reflected the dynamics of steps. It was quite expected that with steps in the opposite direction, possibly higher pressure, they could also be more tightly grouped in time, since the "trampling" was essentially instantaneous, compared to the nature of standard steps forward. A wider variation in the data could be a consequence of the fact that heavy footsteps could shake the floor tiles on which the carpet was laid. In the event that two people were moving along the Magic Carpet, their position could be estimated using simple clustering algorithms and filters that were consistent according to the principle of matching wires x and y, within one or two scanning intervals with a frequency of 60 Hz. The movements of the upper body of the body were monitored by two microwave sensors.  The raw output signal was amplified by a rectified low-pass filter in order to reproduce signals with a voltage proportional to the total number of sensed movements.

A projection wall with five rangefinders that detected people interacted with the ensemble. Three signals (amount of movement, speed, and trigger direction) were sampled and converted into continuous MIDI controllers. A fully digital version of this signal converter was later developed, since the beat frequency was low enough to be handled by a simple microprocessor. Steps produced low humming tones whose timbre was determined by foot pressure and whose pitch was determined by the location of the step, while upper-body movement generated high "ringing" arpeggios.
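
The Magic Carpet mapping just described, step location and pressure driving low humming tones while the microwave motion level drives high ringing arpeggios, can be sketched as follows. The grid size and musical ranges are assumptions for illustration.

```python
# Hedged sketch of the Magic Carpet's two mapping layers.
def step_to_drone(grid_x: int, grid_y: int, pressure: float, grid_size: int = 16) -> dict:
    position = (grid_y * grid_size + grid_x) / (grid_size * grid_size - 1)
    return {
        "midi_note": 24 + int(position * 24),        # step location sets the (low) pitch
        "brightness": int(min(1.0, pressure) * 127), # foot pressure shapes the timbre
    }

def motion_to_arpeggio(motion_level: float) -> dict:
    return {
        "rate_hz": 1.0 + motion_level * 7.0,         # faster upper-body motion -> faster arpeggio
        "register": "high",
    }

print(step_to_drone(grid_x=5, grid_y=9, pressure=0.8))
print(motion_to_arpeggio(motion_level=0.6))
```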

A sensor system developed specifically for the Brain Opera was intended to measure the space and how it was filled: the overall presence and position of people in various parts of the hall. With the data it received and processed, it functioned as a multi-channel sonar rangefinder.

Clearly, then, the Brain Opera project presented a wealth of new and unconventional interfaces created specifically for interaction with a musical environment. Their forms differed from traditional interfaces such as keyboards and mice, aiming instead to create a flexible environment built from "smart objects" and sensing spaces that capture and register any kind of physical activity or movement and turn it into a complex multimedia reaction and an interesting sonic result. On first seeing the new interfaces, visitors to the Brain Opera tended to expect a musical response from every nearby object. The musical mappings of the interactive instruments were intuitive and encouraged the viewer's creative curiosity.

The design of this interactive musical environment supported a process of dynamic adaptation to the skill level and style of the participants, and observing these processes is of genuine scientific interest. The musical mappings and parametric sequences in the Brain Opera operated independently on each instrument. Although this satisfied individual performers and viewers (many of whom were acoustically isolated by headphones or by standing near the appropriate loudspeakers), the overall sound field of the Brain Opera approached a stochastic process only weakly controlled by the composer, and the creative energy of chaos became an integral artistic goal. Machover had initially sought a collective musical expression, and in the end it was realized precisely by granting this creative freedom.

It is equally clear that such a process created the need for special integration mechanisms capable of balancing the interactive space between collective and individual artistic expression. Ways had to be found to coordinate the networked audio responses while maintaining deterministic musical feedback at the level of action and response to meet the individual needs of participants. Even so, the concepts of performative interaction and the methods of compositional control in the Brain Opera remained limited to the material recorded through the various interfaces: the talking trees and the other devices.

Speaking of the cybernarrative in opera, we can state that it was precisely this idea that became fundamental for Machover's project. The libretto and the story of Noor Inayat Khan, a Sufi Muslim princess and at the same time an intelligence agent, are here only the starting point of a plot that receives a cybernarrative representation. Two lines, Minsky's treatise "The Society of Mind" and the element of creative energy contributed by the recipient, the visitor to the sound installation, serve as the basis of the narrative, while the plot itself is barely legible through the journey through the "wilds of the mind".

The conclusions of this study are as follows: modern opera uses the cybernarrative as a communication system into whose field all the latest technologies are drawn. At the same time the content of the opera becomes blurred and variable. This has both positive and negative sides: the involvement of the public in the creative process, which Tod Machover strongly encourages in his concept of the Brain Opera, helps to overcome passive perception, but the composition ceases to be a finished form and turns into an open work, a work in progress.

References
1. Barrettara R. Cyber-Narrative in Opera: Three Case Studies. PhD diss. New York, 2019. 275 p.
2. Rosenbaum D. Action, Mind, and Brain: An Introduction. Cambridge: The MIT Press. 302 p.
3. Ribas V., Ribas R., Martins H. The Learning Curve in Neurofeedback of Peter Van Deusen: A Review Article // Dementia & Neuropsychologia. 2016. Vol. 10, No. 2. P. 98-103.
4. Tanaka A. Sensor-Based Musical Instruments and Interactive Music // The Oxford Handbook of Computer Music / Dean R. T. (ed.). Oxford: Oxford University Press, 2011. P. 233-257.
5. Basu S. Spy Princess: The Life of Noor Inayat Khan. London: Omega Publications, 1st ed. 267 p.
6. Minsky M. The Society of Mind (Russian edition: Soobshchestvo razuma). Moscow: AST, 2018. 592 p.
7. Machover T. Brain Opera Update, January 1996. Internal Document, MIT Media Laboratory, 1996.
8. Roads C. Improvisation with George Lewis // Composers and the Computer / Roads C. (ed.). William Kaufmann, Inc., 1985. 260 p.
9. Karam M., Branje C., Nespoli G., Thompson N., Russo F. A., Fels D. The Emoti-Chair: An Interactive Tactile Music Exhibit // CHI '10 Extended Abstracts on Human Factors in Computing Systems (CHI EA '10). New York: Association for Computing Machinery, 2010. P. 3069-3074.
10. Deliège I., Sloboda J. Perception and Cognition of Music. Hove: Psychology Press, 1997. 480 p.

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

The subject of the research in the present article is a fairly new kind of modern musical art, the digital cybernarrative. Its immersion in experimental musical genres at the junction of academic opera, electronic music, digital technologies and medical research determines the relevance of the work. The scientific novelty of the research lies in introducing information about digital cybernarrative technologies in general, and about Tod Machover's "Brain Opera" in particular, into Russian art scholarship. Relying on foreign sources, the author of the article significantly expands what is known about these phenomena for the Russian-speaking audience. The article is quite informative. The author describes the concept of the "digital cybernarrative" in detail and examines its origins in the use of biofeedback mechanisms in musical art, including the work of Tod Machover's predecessors: David Tudor, Alvin Lucier and others. The author also describes in sufficient detail the workings of the technologies employed by Machover himself in the "Brain Opera". Of particular interest is the description of the installations of the opera's interactive section: the "talking trees", "singing trees", "melodic easels", "Harmonic Driving", the "gesture walls" and others; the author analyzes the mechanism of these technologies in detail. On the whole, this "section" of the opera represents collective co-creation, collective musical expression, and interactive engagement with the audience. Turning to the libretto, the author points to two sources. One of them is the story of Noor Inayat Khan (a Sufi Muslim princess and at the same time a secret British agent). Note that the author mentions two sources of Noor's biography ("Spy Princess" and the film "Enemy of the Reich") but does not give their full details (authors, dates of creation, and so on). The plot line built on Noor's biography allows us to find points of contact between the "Brain Opera" and the traditional libretto. The other source of the libretto, as the author points out, is Marvin Minsky's book "The Society of Mind" (note that its title is rendered inconsistently in the text of the article). How this source is integrated directly into the libretto remains rather vague. In general, some "semantic gaps" remain in understanding when and how the switch occurs from the plot of the libretto to immersion in the observation of "brain wave" reactions. While delving into the characteristics of the cyber technologies, the author characterizes the role of the "opera performers" rather sparingly, mentioning only that "the performance itself involves three groups of performers who use new interfaces to perform the written music and, at the same time, to bring material contributed by the public through the online archive into the piece." Given the specifics of the new genre, the question nevertheless remains open as to the performers in relation to the traditions of academic opera, that is, whether vocalists and orchestral musicians as such are involved. The author examines in sufficient detail the technologies through which the digital cybernarrative is embodied, using the example of the "Brain Opera". However, the abundance of information provided is not structured clearly enough, which hinders its perception.
Semantic breaks occur throughout the article: the exposition shifts from describing technology to describing the plot, switches unexpectedly to the experience of predecessors and back, and so on. Among other things, this leads to the repetition of certain information, which more logical structuring could have avoided. The author also repeatedly mentions that Tod Machover's work became one of the sources of the new direction, but gives no information (even for reference) about Machover's other works or about his followers. Separately, it may be noted that the digital cybernarrative under consideration intersects in some respects with the phenomenon of the Science Opera. In the reviewer's opinion, the role of Marvin Minsky's book for the libretto of the "Brain Opera" is not clearly revealed, and, as noted above, the structure of the libretto and the role of the performers are not fully disclosed. Despite the experimental character of the new genre, the study lacks reliance on traditional methods of analyzing opera performances and opera librettos. In the reviewer's view, a clearer sequence of exposition in the analysis of the "Brain Opera" (treating separately the role of the predecessors, the libretto, the performers, the structure, and the mechanisms by which the cybernarrative is realized) would aid the reader's perception of the study. On the whole, it should be emphasized that the work is of interest to the readership and makes a significant contribution to the analysis of new subgenres of modern art, in particular the digital cybernarrative.