Thesis: reactive visuals
abi
Created on March 28, 2024
Transcript
MULKI: மூழ்கி
(Tamil for "immerse") is a concept for an immersive live performance experience that targets the different interaction pathways in concert settings to enhance artist identity and audiences' social, emotional, and cognitive responses, agency, and attachment to the artist.
View the reactive-visuals
Read the paper documentation
Each of the visuals was developed for a specific segment of a performance. Click through the different reactive visuals below for a demo of the product, details about its functionality, and suggested types of songs to use it with.
This set of products was developed as part of a creative exploration of reactive-visuals for NYU Tandon School of Engineering's Master of Science in Integrated Design and Media.
Jump to a specific section of the documentation or click the purple arrow on the bottom right to be guided through it page by page.
Plug-and-play audio-reactive
Vocal-reactive
MIDI-reactive
MIDI-reactive with webcam feed
Table of contents
Introduction, background & literature review
Preliminary research
Methods
Mulki: மூழ்கி
Discussion
Full paper with references
Use the arrows at the top of every page to navigate to the next or previous page
And the home button at the bottom of every page to come back home!
Introduction
"Music has always pushed the envelope of what defines interaction” (Tanaka, 2006). Live music settings are widely believed to be social and artistic activities where relationships are established in real time, mediated by psychological, social, musical and environmental factors (Serra, 2015). These relationships are often actualized through different kinds of reciprocal interactions that occur within the performance setting. As an artist, I continually try to find ways to enhance audience engagement when performing live, exploring different ways to engage the different pathways of interaction presented by the performance setting, which I think of as a system of interactions. Figure 1 that outlines the system of interactions often found in small/medium-sized performance settings (see here ). In the innermost setting, artists interact with each other on stage to express themselves physically, emotionally and creatively. There is often a lead artist, accompanied by instrumentalists. These interactions engage an audience, who consume the expressions of the artists and also express themselves in ways that influence the artists expression of themselves and their art. In the outermost setting, audio and visual engineers pay careful attention to both the other settings to manipulate the wider environment and enhance the performance setting through audio and light.Through technological innovation, designers and artists have been exploring ways to capitalize on the many different interaction pathways present at such settings for decades. In applying social computing to artistic creativity, the belief is that “technological evolution can be assimilated directly in cultural production, ultimately leading to possible new forms of musical content” (Tanaka, 2006). One example of this application is Sensorband. Since the 1990s, Sensorband has been using virtual reality gloves to use arm muscle tension to generate music (Sensorband, 1994). Digitally produced sound has become an integral part of contemporary musical performance. But, while scholars have highlighted the potential for such technological innovation to elevate the experience and produce novel sounds, they have also warned of the loss of audiences’ familiarity with acoustic instruments and may disorient listeners (Tanaka, 2006). Through technological innovation, designers and artists have been exploring ways to capitalize on the many different interaction pathways present at such settings for decades. In applying social computing to artistic creativity, the belief is that “technological evolution can be assimilated directly in cultural production, ultimately leading to possible new forms of musical content” (Tanaka, 2006). One example of this application is Sensorband. Since the 1990s, Sensorband has been using virtual reality gloves to use arm muscle tension to generate music (Sensorband, 1994). Digitally produced sound has become an integral part of contemporary musical performance. But, while scholars have highlighted the potential for such technological innovation to elevate the experience and produce novel sounds, they have also warned of the loss of audiences’ familiarity with acoustic instruments and may disorient listeners (Tanaka, 2006).
In parallel, scholars have flagged another potential source of disorientation for audiences: audio-visual stress. Behavioral scientists have long applied cognitive perspectives to artistic expression, emphasizing how settings saturated with sensory information may induce negative experiences such as sensory overload or overstimulation, wherein participants may experience "a more passive exposure to concurrent or competing stimuli" or "instead must process information to produce an appropriate response" (Vass, 2023). The integration of human agency through corporeal gestures, however, can soften the disorienting effect of digital music technology. As performers manipulate virtual interfaces, their physical movements provide a visual and tactile connection for audiences, reminding them that behind the electronic sounds there is a human presence guiding the musical journey.
This project was therefore motivated by an interest in technological innovation that harnesses the pathways of interaction in performance settings in a coherent, intuitive manner that enhances audience and/or artist agency. The initial motivation was to engage the audience-to-visual-environment pathway (see Figure 2 here). As an artist who has performed across diverse contexts over the past 10 years, I am on an eternal exploration of different ways to engage audiences. The active participation of audiences has the potential to fundamentally alter the atmosphere of a performance, infusing it with vitality and dynamism. This realization emerged during an extensive series of performances in New York City throughout 2023, prompting a fundamental question: what if spectators could visually perceive the transformative influence of their collective energy? While this was the initial motivation for the thesis, as the creative exploration of reactive visuals progressed, the focus shifted, for a multitude of factors described in the methods section of this paper, towards the expression of instrumentalists through real-time visual art (see Figure 3 here).
Delving into this inquiry led me to explore the domain of interactive experiences within concert settings. Developers and designers of music technology have applied the idea of idiomatic writing, an approach to musical technological innovation applied to communication infrastructures and often rooted in the theoretical underpinnings of interaction, agency and experience, to curate coherent performance experiences (Tanaka, 2006).
VJing is one example of idiomatic writing, founded in a history of experimentation that has sought to underscore the "connections between musicians, the excitement of using computers to define a new social context for music making, as well as exploring the possibilities of systems too complex for direct control" (Gresham-Lancaster, 1998). Within the contemporary milieu of musical performance, characterized by dynamic interactions extending beyond mere auditory exchanges between performer and audience, the integration of visual elements, notably through video jockeying (VJing), has emerged as a pivotal dimension through which such experiences are elevated (Kim, 2007). Over time, the concerted efforts of interaction designers and engineers have yielded interfaces facilitating real-time synthesis of visuals and audio. For example, Hook et al. (2009) produced a documentary that delves into the creative processes which shape VJ artists' performances; through a participatory design workshop, they developed a personal interactive tool for one of their participants to better support the unique needs of VJs in live performance. As Hook et al.'s research emphasizes, generating real-time visuals while performing requires tools and interfaces that uniquely serve the needs of those interested in performing in the audio and visual realms at the same time. Similarly, Jung et al. (2014) documented the process of creating a DJing and VJing tool that allows artists to simultaneously make use of interactive elements for their performances, emphasizing the advantages of real-time interaction between these two forms of artistic expression. In a study introducing a multi-touch tabletop surface application designed to revolutionize VJing, Taylor et al. (2009), too, explore the benefits of creating more dynamic and immersive visual experiences for live music performances through reactive-visuals.
Time and again, scholars have suggested that music has an ancestral, biological function that serves the purpose of emotional communication (Chabin et al., 2021). A multitude of studies have explored this through lenses of object-reactivity, social theory, or aesthetics (Serra, 2015), asking whether "pleasure felt at the individual level [is] transmitted on a collective level" (Chabin et al., 2021). However, most studies of music in such settings have lacked a transdisciplinary lens that acknowledges the complex nature of performance settings (Serra, 2015). The preliminary research that informed this project aimed to adopt such a transdisciplinary lens, capturing the social, emotional, and cognitive features of live performance settings that influence the experience of them, in addition to how this translates into investment and attachment to the artist. Amidst this transformative landscape, pertinent questions emerged: What impact do interactive visuals exert on audience experiences during live musical performances? How does investment in audience engagement enhance the dynamism of live performances beyond traditional concert formats? These inquiries laid the groundwork for the research conducted between September and December 2022 that informed the development of the reactive-visuals presented in this paper.
Preliminary research
Motivated by these considerations, the primary objective of the preceding semester's work was to explore the influence of pre-programmed versus audio-interactive visuals on audience attachment to artists and investment in them. In an A/B test that split participants into two mini-concert experiences (one in which reactive visuals were employed and another in which non-reactive visuals were used), participants first experienced the concert and then completed a survey and diary entries. The survey and diary entry data collection tools were developed based on the theoretical underpinnings of a body of research that has highlighted the influence of factors such as cognitive and emotional response and agency on interactions within performance settings (Chabin et al., 2021; Serra, 2015). The subscales in the survey reflected factors researchers have assessed in relation to attachment to artists in live concert experiences: Cognitive response, Agency, Attachment, and Value. Schramm & Hartmann's Parasocial Interaction Scale (PSI; 2019) was initially developed to assess the interaction between a member of an audience and a television persona (either a character or a real person), and was later adapted by O'Neill et al. (2019) for musical concert experiences. In this study, the goal was to assess and compare audience members' cognitive and affective responses to non-interactive versus interactive visuals. To assess audience members' sense of agency, Tapal et al.'s (2017) Sense of Agency scale was adapted to measure participants' sense of agency in a musical environment. Finally, attachment to the artist was measured using a shortened version of Leisewitz et al.'s (2022) measure of fan-artist attachment. The scale items were pulled from these sources and adapted to best suit the context in which the survey was being administered (see here for the subscales and items in the survey administered, and here for the diary entry questions presented in the survey).
Following a band rehearsal in Brooklyn, a projector was set up in the rehearsal room to project visuals. Interested participants were invited to attend two performances. The artist performed a one-song show to two participants at a time. Each participant experienced the show with both types of visuals (interactive and non-interactive), and the order of the visuals was switched between participants to account for order effects.
Results
A case-study approach was then employed to examine individual differences and see whether any patterns emerged. Through this, the case of Kyle vs Kole emerged, with Kyle being the participant who reported the highest scores across all subscales and Kole the participant who scored the lowest across almost all subscales (see here for differences in their summed survey scores). Kyle reported the highest investment in the artist at the end of the concert experience and Kole the lowest (see here).
The figure here displays the summed scores for each subscale. While the sample size was limited (N = 4), the subscales in which the biggest differences in scores between reactive and non-reactive visuals emerged were the Attachment and Value subscales. Although the small sample limits our ability to generalize further, it is important to note that those who engaged in the study had significant experience working in music production and/or attending concerts.
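For readers interested in how such summed scores can be derived from item-level ratings, the following is a minimal sketch in Python: each participant's Likert ratings are summed per subscale for each condition. The item names and ratings here are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: summing Likert item ratings into subscale scores per condition.
# Item names and ratings are hypothetical placeholders, not the study's data.

SUBSCALES = {
    "Cognitive response": ["attention_1", "attention_2"],
    "Agency": ["agency_1", "agency_2"],
    "Attachment": ["attach_1", "attach_2"],
    "Value": ["value_1", "value_2"],
}

def summed_scores(responses: dict) -> dict:
    """Sum a participant's item ratings (e.g., 1-5 Likert) into one score per subscale."""
    return {name: sum(responses[item] for item in items)
            for name, items in SUBSCALES.items()}

# One participant's ratings under each condition (placeholder values).
reactive = {"attention_1": 5, "attention_2": 4, "agency_1": 3, "agency_2": 4,
            "attach_1": 5, "attach_2": 5, "value_1": 4, "value_2": 5}
non_reactive = {k: max(1, v - 1) for k, v in reactive.items()}

for condition, resp in [("reactive", reactive), ("non-reactive", non_reactive)]:
    print(condition, summed_scores(resp))
```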
Preliminary research - results (cont.)
Qualitative insights from the diary entries, however, highlighted that while the reactive visuals may account for Kyle reporting the highest scores across all subscales in the survey, Kyle also had the least experience with live concerts (see here), which might have led him to over-value the experience as a whole. Kyle had been to 2 open mics in the last year, whereas Kole had been to a variety of shows, including ones where the audio-visual experience was likely of much higher quality than what this experience offered. While acknowledging the limitations imposed by a small sample size of four, qualitative reflections gleaned from participant diary entries underscored the significance of interpersonal connections forged among attendees (see here), which aligns with research highlighting how concert participants "value and enjoy the concerts not only for their music but especially for the relationships that are created between musicians and audiences during the concert" (Serra, 2015).
Building on these insights, the thesis project described below chronicles the journey of creating an immersive audio-visual concert experience wherein artists and audiences are able to shape the visual terrain. Conceptually akin to conventional concert settings, this experience sought to generate and manipulate visuals in real time in response to auditory inputs from the artist.
The goals of this project: milestones and requirements. For the build-out of the reactive-visuals, key milestones included the refinement of interactive visual elements to align more closely with contemporary concert standards, facilitated through collaboration with faculty and student resources. Additionally, identifying suitable venues conducive to immersive experiences, conducting comprehensive testing with lighting and sound engineers, and adjusting the products to better account for aesthetic and logistical considerations were paramount. The ultimate objective was twofold: in the short run, to present findings through scholarly channels while showcasing the culmination of research efforts in a demonstrative showcase; and in the long run, to use the reactive visuals during live performances.
Methods
The reactive-visuals were created and revised through a series of systematic tests. The development of the products was largely guided by YouTube tutorials. Draft products were first developed in remote settings, before being tested in The Garage (a performance venue at NYU Tandon that is well equipped to facilitate immersive experiences). During each test, the visuals were displayed on a projector, with audio routed to two stage monitors and a surround-sound speaker system. As each of the products was tested, detailed notes were journaled to record revisions that needed to occur for functional, logistical, and aesthetic purposes. Several test cycles were conducted, each occurring over a 1-2 week period. The four cycles in which the most significant changes occurred during this creative exploration are described below.
Test 4
Test 3
Test 2
Test 1
In preparation for piloting in a live performance setting in March 2024, the primary objective was to assess the clarity and coherence of the visuals' functionality. Clarity, in this context, refers to the distinctness and legibility of the visual elements, considering factors such as pixelation, spatial arrangement, color contrast, and potential visual clutter. Coherence pertains to the alignment between on-stage actions, such as keyboard playing or vocal performances, and the corresponding visual displays projected behind the performer. This involved testing for latency issues and ensuring that the visuals responded accurately to real-time inputs, including touch sensitivity. In addition, the multiple rounds of testing highlighted the significance of song choice to the clarity and coherence of the visuals. This period of final revisions therefore also included set testing with a diversity of songs, including those from instrumentation-heavy genres such as R&B, rap and EDM, as well as more acoustic songs.
During an attempted pilot at a medium-sized venue in New York City, where the visuals were set up, a projector was available on site, and there was capacity to project sound and visuals, a seemingly minor consideration inhibited the ability to fully pilot the products: the lack of an HDMI extension long enough to connect the laptop driving the MIDI device on stage to the visual controls at the back of the room. Moving forward, collaborative engagements with audio and lighting technicians at The Garage informed ongoing refinements to the visual components to align with the envisioned setup, and efforts were focused on streamlining the transition between individual visuals during live performances while maintaining audience engagement. At this stage, challenges primarily revolved around logistical considerations and presenting the products in a way that clearly articulated their purpose and function. To facilitate comprehensive documentation and presentation, the products were filmed in a rehearsal ahead of the final showcase (see images of live demonstrations next!).
Mulki: மூழ்கி
Discussion
The set of products created has the potential to be exceedingly valuable to independent artists who would like to elevate their performance experiences. One significant barrier to exploring audio-visual experiences concurrently is that doing so often requires notable investment: in graphic design for the elements that go into the visuals, in software, and in audio/lighting technicians to support in person, to name a few. A crew also often needs to be present to orchestrate the visuals at the right time, paying careful attention to how songs progress during a set. TouchDesigner is free software and, while the learning curve may be steep, there is sufficient documentation available online to support independent artists in developing tailored visuals that do not require as much of an investment (apart from time). This is valuable not just to artists themselves, but also to managers supporting artists, DJs, VJs, event coordinators, and others looking to curate bespoke experiences that align with an artist's or brand's image.
The motivation for this project was to create products that allow audiences to more actively manipulate the environment. While this was not achieved during the project period due to the constraints mentioned above, what was achieved is a set of products that allow artists to visually display the often unnoticed energy they put into their instruments, with little additional effort. Even so, their value to artists, designers, and those in the performing arts remains stark. In a qualitative study on the relationships in live music, Serra (2015) observed and interviewed orchestra players and audiences to better understand their role in the generation of emotional content and meaning: "The critical attention and the emotional participation of the audience pushes the orchestra members to give their best and to play better for the sake of their listeners" (Serra, 2015). The findings of that study highlighted how the energy artists put into their instruments, beyond the physical inputs required to generate sound, influences audiences' emotional states and perceptions of the experience. One participant reported, "I'm interested in the body-language of the musicians when they play, I have never seen it like this, they dance… express a lot more... there's no comparison." Further, the results emphasize the vitality of emotions in live performance settings, in addition to the challenges of verbally expressing these emotions: "What listeners experience or perceive is not just what the music or its interpretation expresses, but also the emotional impact of the visual, relational and environmental aspects of the activity."
The set of products developed for this project allows for the visual expression of the often invisible emotional energy that artists place in their instruments, expressing it in the visual environment in real time. The goal is to use these products as a foundation from which to build an album of products, with plans to pilot them in a performance setting in late 2024. In addition, this interactive report was developed to better communicate what the products are and why they were developed, and to share the TouchDesigner files for other artists who may be interested in exploring tailored reactive-visuals. In documenting the project, I am also hoping to hear from a wider community of artists and designers on how to continually hone these products to make them as streamlined as possible.
Time and resources permitting, user testing comparing these products with non-reactive visuals, studying key factors that influence the live music experience such as audiences' cognitive response, sense of agency, attachment, and value (akin to the study that informed this project), will help concretely identify the benefits of reactive-visuals and further room for improvement.
MIDI-reactive w. webcam feed
This product was developed and refined for use with slower songs, with the webcam feed inviting the audience on stage with the artist. Similar to the MIDI-reactive product above, it allows sensitivity to touch to be visualized and graphics to be built up or broken down through audio buses.
Test 4
Significant revisions were made in the form of refinements to the overall aesthetic so that the visuals complemented each of the song choices. At an in-class demonstration to faculty and students, I received a wealth of valuable feedback, encompassing various aspects of the visual presentation:
- Experimentation: Attendees appreciated my willingness to experiment with different types of reactive visuals and the thoughtful consideration given to the different segments of a performance and types of songs. The variety offered by the different types of visuals was remarked as engaging and intriguing.
- Dramatic Impact: The visuals were acknowledged to have made a "dramatic impact" on the overall performance, effectively capturing attention and enriching the audience's experience.
- Curiosity Sparked: There was a notable sense of curiosity among the audience regarding specific types of visuals, particularly the vocal visualizer and the motion capture, indicating a strong interest in their potential.
- Positive Reactions to MIDI-Reactive Visuals: The MIDI-reactive visuals (excluding motion capture) received significant positive feedback, contributing positively to viewers’ overall experience.
- Creative Tailoring: Recommendations were made to take a more significant creative leap in tailoring the visuals to align with the genre of the music. While the visuals were appreciated, there was a consensus that further enhancement could be achieved by aligning them more closely with the mood, style, or theme of the music being presented.
- Plug-and-play audio-reactive: This visual component demonstrated effectiveness in dynamically responding to a diverse range of musical inputs, particularly favoring compositions characterized by robust bass, kick, and snare counts.
- Microphone input: Following feedback received from class peers and thesis instructors, the microphone-reactive visuals were deemed ready for performance, ensuring seamless alignment between audio input and visual output.
- MIDI keyboard input: Although progress was made in aligning graphics with desired parameters, further adjustments were deemed necessary to refine color tones and eliminate distracting elements, such as intense background webcam footage, which detracted from visual coherence.
- Motion input: Challenges persisted in developing motion-reactive visuals that effectively engaged the audience without compromising clarity or coherence, particularly in the pixelated output generated by the audience's movement.
Figure 2: Audience to visual environment pathway
Vocal-reactive
This product’s functionality is most clear and coherent when used with a direct microphone input and light instrumentation.
Agency
- I was in full control of what I did during the show.
- I am just an instrument in the hands of somebody or something else
- My actions just happened without my intention
- I was the author of my actions
- The consequences of my actions felt like they logically followed my actions
- My movements were automatic - my body simply made them
- The outcomes of my actions generally surprised me
- Things I did were subject only to my free will
- The decision whether and when to act was within my hands
- Nothing I did was actually voluntary
- While I was in action, I felt like a remote controlled robot
- My behavior was planned by me from the very beginning to the end
- I was completely responsible for everything that resulted from my actions
Attachment
- I feel personally connected to this artist.
- This artist gives me the feeling that I am loved and cared for.
- This artist reminds me of persons who are important to me.
- This artist symbolises a bond with friends or family.
- This artist reminds me of important things I’ve done or places I’ve been
- My thoughts and feelings toward the artist are often automatic, coming to mind seemingly on their own.
- My thoughts and feelings toward the artist come to my mind naturally and instantly.
- I would feel distressed if this artist stopped performing.
- I could easily imagine a life without this artist.
Value
- How frequently would you listen to all new music this artist releases?
- How frequently would you purchase all new music this artist releases?
- How frequently do you intend to go to all concerts this artist plays in your area?
- How frequently do you intend to buy all new merchandise this artist releases?
- What price (in $) are you willing to pay for a vinyl record of this artist?
- What price (in $) are you willing to pay for a concert ticket to see this artist?
- What price (in $) are you willing to pay for this artist’s merchandise t-shirt?
Cognitive response
- The artist repeatedly attracted my entire attention
- I watched closely how the artist behaved
- I didn’t really notice the artist
- I rarely paid attention to the artist
- Every once in a while during the performance, I thought about whether the artist is similar or dissimilar to me
- I have considered what unites me with, and what distinguishes me from the artist.
- I have rarely thought about whether I personally would have acted in the same way as the artist.
- I did not compare myself to the artist.
- I actually never wondered whether the artist has something to do with me.
- The artist's behavior had a strong influence on my own mood.
- I occasionally reacted very emotionally towards the artist
- What the artist said or did did not trigger any emotions in me
- I reacted rather matter-of-factly and emotionally unfazed towards the artist
Figure 5: Kyle vs Kole - across subscales
Figure 3: Artist to visual environment pathway
Figure 7: Kyle vs Kole - concert experiences
Prompts
Please respond to at least one of the prompts below about your thoughts, feelings, and reflections during the concert.
- Please describe your emotional state during the concert. How did the music and interactive visuals make you feel?
- Were there any specific moments that particularly moved you or evoked strong emotions? Please describe these moments and what about them moved you.
- How did the visuals contribute to your overall concert experience? Were there any specific visual elements that stood out to you?
- Did you feel a sense of connection or engagement with the artist or the audience during the concert? If yes or no, please explain.
How about after the concert? Please respond to any one of the prompts:
- Reflecting on the concert, how did it influence your mood or emotions? Did your emotional state change after the event? How? If not, that is very much ok, too!
- Do you find yourself reminiscing about specific aspects of the concert, such as particular songs, visuals, or moments? Please describe these aspects.
- Did the concert experience enhance your attachment to the performing artist or group? If so, in what ways?
- If you were to attend a similar concert in the future, what aspects of this experience would you hope to recreate or avoid?
Test 3
Vocal reactive: While the initial iteration (Acrylicode, 2022a) suffered from ambient noise interference, by limiting the input signal to a single handheld device and adding gain control to get rid of the extraneous noise, a final version that could be used during slower songs was developed.
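To illustrate the gain-gating idea used to suppress ambient noise, the following is a minimal sketch in plain Python (sounddevice and numpy), not the actual TouchDesigner network; the threshold and gain values are placeholders to be tuned per room and microphone.

```python
# Minimal sketch of the gain-gating idea behind the vocal-reactive visual:
# only microphone blocks whose level clears a noise floor drive the visual.
# Illustrative Python (sounddevice + numpy), not the TouchDesigner patch;
# GATE_THRESHOLD and GAIN are placeholder values.
import numpy as np
import sounddevice as sd

GATE_THRESHOLD = 0.02   # RMS level below which input is treated as ambient noise
GAIN = 4.0              # boost applied to signal that passes the gate

def audio_callback(indata, frames, time_info, status):
    rms = float(np.sqrt(np.mean(indata**2)))      # block loudness
    level = rms * GAIN if rms > GATE_THRESHOLD else 0.0
    # 'level' would be mapped to a visual parameter (e.g., particle size or brightness)
    print(f"level driving visual: {level:.3f}")

# Open a single mono input (the handheld mic would be selected as the input device).
with sd.InputStream(channels=1, samplerate=44100, blocksize=1024,
                    callback=audio_callback):
    sd.sleep(5000)  # listen for 5 seconds in this demo
```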
Plug-and-play audio-reactive: this reactive-visual product was built to be used particularly with backing tracks (see here).
Motion-reactive: Experimentation with audience motion capture, particularly via webcam and slitscan techniques (Tschepe, 2023), aimed to facilitate audience participation in shaping visual outputs. While, functionally, this product did what it was meant to, several challenges made it untenable for further testing. The primary challenge was picking up refined audience stimuli. Different input thresholds were tested to identify what would work best, but in a small/medium-sized venue, where lighting changes throughout the show and audiences move around, this was difficult. In the absence of a dedicated camera crew who could move through the space and capture input at the right level, the kinds of visuals produced by this product did not meet the professional quality standards the others did. In redirecting the camera towards the artist, however, the potential for it to offer audiences a more personal experience of the performance emerged.
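As a rough illustration of the slitscan idea, the sketch below (OpenCV/numpy, not the TouchDesigner network used in the project; the slice position and scrolling direction are arbitrary choices) takes one vertical slice from each webcam frame and accumulates the slices into a composite image over time.

```python
# Minimal sketch of a webcam slitscan: each new frame contributes one vertical
# slice to a composite image that scrolls over time. Illustrative OpenCV/numpy
# code, not the project's TouchDesigner network.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)            # default webcam
ok, frame = cap.read()
if not ok:
    raise RuntimeError("No webcam frame available")

height, width = frame.shape[:2]
canvas = np.zeros_like(frame)        # the slitscan composite being built up

while True:
    ok, frame = cap.read()
    if not ok:
        break
    slice_col = frame[:, width // 2 : width // 2 + 1]   # centre column of the frame
    canvas = np.roll(canvas, -1, axis=1)                 # scroll composite left by 1 px
    canvas[:, -1:] = slice_col                           # append the new slice on the right
    cv2.imshow("slitscan", canvas)
    if cv2.waitKey(1) & 0xFF == ord("q"):                # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```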
MIDI-reactive: Transitioning to MIDI-based reactivity (Acrylicode, 2022b) provided cleaner input signals, resulting in a more cohesive visual experience aligned with the musical performance and empowering performers with enhanced control. Apart from graphics generated through MIDI input, I also added (1) touch sensitivity, so that the more pressure the instrumentalist puts on the keyboard, the larger and brighter the graphics get, and (2) visual control through MIDI buses, to allow for drastic dimensional changes that build on the base visual to align with changes in the song. The motivation for this came primarily from personal experience observing audiences responding to the overt actions of all artists on the stage, not just the singer, and how this, combined with the visuals, can lead to a more connected experience.
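The velocity and bus mapping can be sketched as follows. This is illustrative Python using the mido library rather than the TouchDesigner MIDI In CHOP; the port name, CC number, and scaling constants are placeholders.

```python
# Minimal sketch of the MIDI-reactive mapping: note velocity drives graphic size
# and brightness, while a control-change "bus" drives a larger dimensional change.
# Illustrative Python using mido, not the TouchDesigner network; the port name,
# CC number and scaling constants below are placeholders.
import mido

PORT_NAME = "MIDI Keyboard"   # hypothetical port name; list ports with mido.get_input_names()
BUS_CC = 1                    # hypothetical CC number assigned to the "build up/down" bus

size, brightness, dimension = 1.0, 0.0, 0.0

with mido.open_input(PORT_NAME) as port:
    for msg in port:
        if msg.type == "note_on" and msg.velocity > 0:
            # Harder key presses (higher velocity) -> larger, brighter graphics.
            size = 1.0 + (msg.velocity / 127.0) * 3.0
            brightness = msg.velocity / 127.0
        elif msg.type == "control_change" and msg.control == BUS_CC:
            # The bus fader builds the base visual up or breaks it down.
            dimension = msg.value / 127.0
        print(f"size={size:.2f} brightness={brightness:.2f} dimension={dimension:.2f}")
```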
Figure 8: Kyle vs Kole - diary entries
Test 1
Version 1 (Colorful Coding, 2021; The Coding Train, 2016) of the audio-reactive visuals was developed using P5JS, which is a JavaScript library for creative coding (Codecademy, n.d.). The emphasis here was placed on achieving satisfactory audio-reactivity, which was successfully implemented (a minimal sketch of this kind of amplitude-to-visual mapping follows the list below). Recognizing the potential enhancement that motion capture could bring to the visual experience, the subsequent iteration, Version 2 (Colorful Coding, 2021; Tidwell, 2022), incorporated this feature while addressing the deficiencies in audio-reactivity noted in Version 1, including lag in the generation of visuals and discrepancies between the audio and visuals. Although motion capture functionality was successfully integrated, this test cycle highlighted two key challenges:
- The P5JS platform, while appropriate for initial prototyping and capable of supporting the development of audio-reactive multimedia, was not suitable for the required live performances. Consequently, a transition was made to developing audio-reactive visuals using TouchDesigner, a visual programming software that allows for the creation of multimedia products and applications and their display in real time (The NODE Institute, n.d.).
- The integration of motion capture led to undesirable glitches. The initial idea of a single multimedia product that would be audio and motion-reactive was scoped down to first establish the audio-reactivity to a point of professional quality, and then integrate other types of audio inputs such as vocal and MIDI.
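For context, the core of such an audio-reactive sketch is simply mapping a loudness reading to a drawing parameter on every frame. The snippet below is a minimal Python/numpy illustration of that mapping over a synthetic waveform; it is not the original p5.js code, and the block size and radius scaling are placeholder choices.

```python
# Minimal illustration of the amplitude-to-visual mapping at the heart of an
# audio-reactive sketch: each audio block's loudness sets the size of a shape.
# Plain Python/numpy with a synthetic waveform, not the original p5.js code.
import numpy as np

SAMPLE_RATE = 44100
BLOCK = 1024                      # samples per animation frame

# Synthetic stand-in for a song: a tone with a slow amplitude swell.
t = np.linspace(0, 5, 5 * SAMPLE_RATE, endpoint=False)
audio = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 0.5 * t))

for start in range(0, len(audio) - BLOCK, BLOCK):
    block = audio[start:start + BLOCK]
    rms = float(np.sqrt(np.mean(block**2)))   # block loudness
    radius = 20 + rms * 200                   # map loudness to a circle radius
    # In a drawing loop this radius would size a shape on every frame;
    # here we simply report it.
    print(f"t={start / SAMPLE_RATE:5.2f}s  radius={radius:6.1f}")
```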
MIDI-reactive
This product is most coherent and clear when used with indie-pop songs that shift between down- and up-beat rhythms, allowing for changes in the instrumentalists’ pressure on the keys to be visualized and graphic dimensions to be built up and broken down through the audio buses in the MIDI-keyboard.
Figure 1: Common interactions in performance settings
Figure 4: Summed scores per subscale: reactive versus non-reactive visuals
Test 2
During the second phase of testing, a concerted effort was made to evaluate the prototype's viability within live performance settings over the course of a week. Given that the transition to TouchDesigner meant learning a new piece of software, the primary step was to build an audio-reactive visual that would generate visuals in real time, but not necessarily one reactive to real-time stimuli. Therefore, a visual was built in which the dimensions of the graphics are manipulated by the kicks, snares, highs, lows and mid ranges of the audio bandwidth of any song I could drag and drop into the software (Acrylicode, 2023):
The kick (also known as the bass drum) sits at the low end of the audio spectrum, compared to the snare, which is a crisper sound that adds a snap to it. In order for the visual to react to these, in addition to the high, low and mid ranges of the audio input, the audio was first passed through an analyzer, after which the thresholds at which the visual needs to react were identified (Tschepe, 2020). In testing this, I realized the potential for such visuals to be used with backing tracks. Encouragingly, the audio-reactivity feature demonstrated visual appeal, as displayed in the provided demonstrations. However, this testing phase also brought to light a series of challenges associated with scaling the prototype to larger venues, prompting a focused two-week period to address these emergent issues.
The next step was to develop products that responded to live stimuli. Contrary to initial assumptions, vocal input yielded suboptimal results, necessitating a pivot towards synthesizer-generated sounds. Moreover, the observed variability in pitch changes and background noise during testing underscored the need for refinement in input selection and visualization mapping. Extensive experimentation across diverse musical genres revealed that simpler arrangements, particularly those featuring synthesized sounds and vocals, elicited more effective visual responses. I then began to concentrate efforts on isolating synth-generated inputs and accurately mapping speed and pitch changes to visual elements.
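To illustrate the band-splitting and thresholding logic described above, the following is a minimal numpy sketch over one block of samples; it is not the project's TouchDesigner network, and the band edges and threshold values are placeholders to be tuned per song.

```python
# Minimal sketch of the band-splitting and thresholding idea: an FFT of each
# audio block is summed into low/mid/high bands, and a visual parameter fires
# when a band crosses its threshold. Illustrative numpy code, not the project's
# TouchDesigner network; band edges and thresholds are placeholder values.
import numpy as np

SAMPLE_RATE = 44100
BANDS = {            # rough frequency ranges in Hz (placeholders)
    "low (kick)": (20, 150),
    "mid": (150, 2000),
    "high (snare/hats)": (2000, 8000),
}
THRESHOLDS = {"low (kick)": 5.0, "mid": 3.0, "high (snare/hats)": 2.0}

def band_energies(block: np.ndarray) -> dict:
    """Return summed FFT magnitude per band for one block of mono samples."""
    spectrum = np.abs(np.fft.rfft(block))
    freqs = np.fft.rfftfreq(len(block), d=1 / SAMPLE_RATE)
    return {name: float(spectrum[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in BANDS.items()}

# Demo block: a 60 Hz "kick" plus a little high-frequency noise.
t = np.arange(2048) / SAMPLE_RATE
block = np.sin(2 * np.pi * 60 * t) + 0.2 * np.random.randn(len(t))

for name, energy in band_energies(block).items():
    triggered = energy > THRESHOLDS[name]
    # A triggered band would scale the corresponding dimension of the graphic.
    print(f"{name:18s} energy={energy:8.1f} triggered={triggered}")
```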
Figure 6: Kyle vs Kole - Value subscale
Plug-and-play audio-reactive visual
For electronic-dance music with heavier bass and drums, this visual adjusts to pivot between the slow, ambient and upbeat parts of the song.