All posts by joowonpark

Learning Aphasia

Mark Applebaum’s Aphasia (2009) for solo performer is a difficult but rewarding piece. I have been practicing it intermittently since 2021, but I have yet to reach a satisfactory level. Knowing that today’s run of the piece is better than any before gives me joy and energy to keep practicing. I wanted to share that delight at last month’s Wayne State Faculty Recital Series. Despite many mistakes, the performance was well received by students and guests. Aphasia attracts and affects audiences and performers like no other electronic piece I know. I can think of two, perhaps very personal, reasons.

Soulslike 

Aphasia is the Elden Ring of electroacoustic music. The unforgiving difficulty is part of the fun. In practicing Aphasia, I had innumerable “YOU DIED” moments. Learning small sections required focus and discipline, and still, the chance of succeeding was slim. But as in a soulslike game, the process of achieving goals was fun. I came to understand more about the piece’s structural relationships, sound design techniques, and choreographic thinking through repeated reading, listening, and failing.

When I finally memorized the piece, I felt a joy similar to the moment I beat the final boss in Elden Ring after hundreds of hours of playtime. I learned to work consistently toward seemingly impossible goals. Musicians, of course, acquire this trait, but I needed a reminder and reinforcement. Practicing Aphasia during the COVID quarantine semesters did that job.

The audience watching Aphasia immediately gets its pleasant-but-difficult aspect. It’s fun but serious. It’s confusing, but there’s order. And it’s inviting – instead of “I can’t do what I see on stage,” the audience may feel “I want to try it one day.” Aphasia seems to be an excellent introduction to studying the performance aspect of electronic music. Once a person has experienced performing Aphasia, watching a performance takes on new meaning. It’s like watching a PVP match while knowing the mechanics and strategies of the video game.

Notation

Aphasia is a fully notated piece composed in 2009. Hundreds of performers, mainly percussionists, played it in the 2010s. In contrast to many graphically notated works, the score is not to be freely interpreted. There is a clear notion of a mistake when the performer misses the timing or makes a wrong hand gesture. This aesthetic contrasts with my solo performance practice in the 2010s, in which I freely improvised on stage with found objects and MIDI controllers. The improvisatory nature of my performance made the audience expect the unexpected, at the expense of replicability – I was the only person who could perform my solo electroacoustic works. In the words of Aphasia’s composer, I was the best and the worst performer of my piece.

While only the improviser could perform those pieces, hundreds of percussionists have performed Aphasia as the composer intended. Aphasia’s technology barrier is low, and the piece is a hit in every concert. When done correctly, it is indistinguishable from many high-tech compositions using motion sensors or controllers. Performers can create such an experience by studying notation, just like the many other pieces they have studied before.

Aphasia taught me the effectiveness of low technology and low dependence on improvisational skill. With well-written notation, such a composition travels far, invites more performers, and lasts long. Many mixed pieces (i.e., those with an instrument and prerecorded electronics) have the same strength, but Aphasia stands out by featuring the most accessible instrument. My recent electronic ensemble pieces reflect this approach. They are built to be transferable and non-virtuosic, with low technological and financial barriers.

Summary

Aphasia reinforces the music fundamentals I often forget – practicing and overcoming challenges is an essential part of musicianship, and great notation makes performers change and improve. I forgot these, perhaps as a byproduct of pursuing the new and the cutting-edge as a profession. 

Learning Aphasia also rekindled my role as a student. To create and teach, I need to study and practice. Consistently and continuously. The result of doing so does not have to be perfect, but sharing and explaining what I experienced is what teacher-artists do. Knowing me, I will probably forget the lessons. When that happens, I will relearn Aphasia (and/or play Elden Ring) to remember it again.

Breathing Land (2025)

I wrote music for Breathing Land, a dance film published in 2025. It is now available to watch online.

I worked with talented choreographers and dancers of Artlab J, as well as filmmaker Dae Won Kim. Detailed credits can be found in Artlab J’s Instagram post.

The choreography and the film are amazing, and I am happy to have worked with such talented artists. Musically speaking, there are some experimental moments I like quite a bit: in several parts, I featured no-input mixer sounds to carry particular emotions and messages. I am also satisfied with the climactic buildup at the end.

Solo Electronic Improvisation

Since 2009, I have been presenting a solo set of live electronic music. Among the many electronic performance techniques, I specialize in creating electronic sounds on stage without pre-recorded samples. I use a combination of digital effect processors coded with SuperCollider to improvise a uniquely electronic soundscape in concerts and recordings. For more than a decade, I have marketed myself as an expert in that specific style. It is represented as a yellow rectangle in the diagram below. 

The categorization may not be meaningful to anyone else, but it was a useful research goal for me in the 2010s. Below, I share three representative pieces of my solo electronic improvisation for listening and analysis purposes.

Three Examples 

100 Strange Sounds (2012-2014) is a set of one hundred short video recordings featuring my live electronic music techniques. Each piece pairs a sound-making object with my SuperCollider code that processes its sound. I invite viewers to notice and enjoy the unexpected relationship between what they see and what they hear. For example, the sound of a cabbage becomes something else entirely through a chain of effect processors in 100 Strange Sounds #77.

Large Intestine (2013) is a piece I made after 100 Strange Sounds #42. As described in the blog on style analysis, the no-input mixer improvisation enhanced with SuperCollider has been my favorite electronic instrument for more than a decade. Large Intestine, as the title suggests, epitomizes my interest in noise, digital signal processing, and improvisation. I plan to play this work in as many concerts as possible in the future.

Touch (2014) is my kitchen-sink piece that pairs multiple sound objects with multiple effects. It’s a summary of 100 Strange Sounds, in which I bring random objects on stage and improvise the combination and sequence of sounds. The piece opened many doors to career opportunities in the 2010s as an electronic music improviser. The techniques and technologies I learned in performing and refining Touch became a source for future non-improvisational compositions for electronic ensembles. 

Technology

All three pieces mentioned above use a variation of a single SuperCollider patch, available for download at this link. A linked PDF explains the hardware and software setup for performing the pieces (warning: it is a little outdated).

When I run the patch, it creates a GUI with multiple buttons that trigger customized effects. I control the number and timing of the effects’ on/off states with mouse clicks – no MIDI controllers or control surfaces. A few clicks, probably unnoticed by the audience, are enough, because I want the listeners to focus on my interaction with the non-electronic objects on stage.
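To illustrate the control model (this is a Python sketch, not my actual SuperCollider patch; the class and effect names here are hypothetical, apart from Looping Pitch Var, which is a real effect in the patch): a handful of named effects, each toggled on or off by a single button click, and nothing else.

```python
class EffectRack:
    """A minimal model of a GUI patch: named effects toggled by clicks."""

    def __init__(self, names):
        # Every effect starts in the "off" state.
        self.state = {name: False for name in names}

    def click(self, name):
        """One mouse click flips one effect on or off."""
        self.state[name] = not self.state[name]
        return self.state[name]

rack = EffectRack(["LoopingPitchVar", "Granular", "LongReverb"])
rack.click("Granular")      # on
rack.click("LongReverb")    # on
rack.click("Granular")      # off again
print(rack.state)           # only LongReverb remains on
```

The point of such a sparse interface is that almost all of the performance happens at the objects and microphones, not at the laptop.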

As for the hardware, I use a couple of microphones for Touch, one audio interface, and a laptop. This article explains the gear I have used over the past 11 years.

Technique

Like other improvisations, the key technique in performing solo live electronic music is listening. I listen for variations that the computer part adds to the acoustic instruments, then respond with another instrument or effect. Because I cannot play a scale or harmony with the instrument (like a cabbage), the listen-and-react decisions are often non-musical and raw. “The current sound is long, so I’ll play short sounds next.” “I will go from a simple to a complex texture.” “The sound is very high in pitch. I’ll complement it with a very low rumble.” I also ask questions and try to come up with the best answer on stage. “What happens if I granularize the chattering teeth sound?” “The plastic block sounds harsh. Can I make it harsher?” “What is common between a slinky and a coin sound?”

Free improvisation focusing on reactions and questions is fun, but I can quickly lose control of length and form. So I plan a specific gesture or sound combination for transitions. The Extension and Connection blog linked earlier has such an example from Touch.

Anecdote

More than fifteen years of experience improvising with live electronics forms the foundation of my musicianship. I identified myself as a composer after earning a PhD in composition in 2008, but that identity did not lead to gigs or collaborations when I moved to Philadelphia for my first job as a music technology professor. The dire situation led me to develop a solo set I could prepare and present quickly in any situation. The strategic change, fortunately, worked, giving me ample opportunity to refine my performance and improvisation techniques.

These days, I am comfortable identifying myself as a composer-performer of electronic music. My sound may not be fresh or cutting-edge at this point, but I think I have a bit more to contribute to the current solo setup. Perhaps the contribution is documentation and theorization. Perhaps it is just one more new piece!

More electronic music composition/performance/practice articles can be found at the Computer Music Practice project.

Extension and Connection – Computer Music Composition Method

Music technology extends an instrument’s acoustic capacity. An amplifier makes a guitar louder than an orchestra. A pitch shifter expands the range of a snare. Singing through a delay or harmonizer creates a thick harmony that one cannot make as a soloist. Computer music compositions, then, feature digital tools that overcome the physical limitations of acoustic instruments. They also use electronic sound to connect one section to the next, or one story to another.

Sonic metaphors built with digitally processed sounds, which I call extension and connection, are observable in many electroacoustic compositions. My favorite example is at around the 2:00 mark in Paul Riker’s Cubicle (2007). There, typing sounds multiply to become rain, signaling the beginning and ending of a section. The telephone and dog-barking sounds in the piece also go through uniquely electronic transformations and serve as signals of a section or scene change. Listen to the whole piece to see if you agree with my interpretation.

Another favorite example is in Paul Koonce’s Breath and the Machine (1999). In the first three minutes of the piece, some overtones of a two-note violin motif linger longer than the rest. Those extended and exposed overtones train the listener’s ears to focus on them, so that the same technique can be noticed throughout the piece. Once the ears are tuned to search for the extended overtones, sections with seemingly random choices of different sounds make sense – the series of lingering overtones, for me, forms a melodic line. I listen for counterpoint and common-tone modulation between sounds that are difficult to notate traditionally. Again, listen to the whole piece to see if you agree.

Paul Koonce was my doctoral advisor, and I have known Paul Riker as an inspiring colleague. I learned electroacoustic extension and connection from exemplary teachers and colleagues during my graduate school years, and I have refined it since. My take on the typing-to-rain sound is in the first and last movements of Dubious Toppings (2019). At the 1:00 mark, the electronic ensemble collectively creates rain-like typing sounds, stating the potential and limits of the featured digital instrument and its relationship to the piano. This opening gesture repeats in the last movement at the 8:30 mark, but with pitched tones. I wanted to end the piece like a movie’s final scene, where a changed protagonist returns home after an adventure.

I learned to thread different sounds with common effects from Breath and the Machine, and I apply the idea in electronic improvisation. In the linked 2015 video, many sound-making objects are processed with a SuperCollider patch containing a fixed set of effects. The common effects bind seemingly random objects. A chattering-teeth toy and a spinning coin can connect if both go through the Looping Pitch Var effect in my patch.

As for creating a form, I reserve a specific combination of effects and objects for transitions. At around the 11:50 mark, I use a slinky combined with a granular processor and long reverb to signal a new section. Planning a soundmark like this helps me develop and pace the improvisation. The videos of other improvisations from 2015 and 2017 show the same slinky technique occurring in the latter half of the performance.

The extension and connection demonstrated in this article operate within a single composition, but the concept can be applied on a larger scale. Sampling and remixing are about connecting and extending sounds from existing songs. Audio coding can start by extending existing code and connecting it to another module to create a product. On a personal level, I extend and connect what I learn from my peers and teachers by applying it in different contexts, formats, and technologies. Below is the current practice of my extend-and-connect project.

Computer Music Practice

Scale – Computer Music Composition Method

Control and presentation of sound at different scales is a distinguishing feature of computer music. In this context, scale does not refer to a group of notes at different pitches, like a C major scale. It instead refers to proportions, as in big vs. small, long vs. short, and few vs. many. Music technology is capable of rendering a single musical idea in extreme proportions, and a collection of such sounds can become a composition.

I will demonstrate a scale-based electronic music composition process with Control Click, a sound installation composed in 2016. The piece is an 11-minute site-specific work for eight or more computers, creating an arcade-like environment with electronic blips and blinks. The computers are networked to play the same SuperCollider file, each functioning as both a performer and a lighting device. The video below is a version of Control Click presented at the 2016 Third Practice Electroacoustic Music Festival.

Sound Design With Proportions

Featuring various scales/proportions in computer music means applying different values to a control parameter. If one can control the pitch of an electronic instrument, experiment with very low and very high frequencies. If the duration of a note can be programmed, make very short and very long sounds. The keyword here is extreme. A computer is capable of following laborious or precise instructions that are difficult or impossible for humans to execute.
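A minimal sketch of the idea (in Python rather than SuperCollider; the function name and values are hypothetical): the same one-tone idea rendered at extreme proportions of duration and pitch.

```python
import math

SR = 44100  # sample rate in Hz

def render_tone(freq_hz, dur_sec, sr=SR):
    """Render a single sine tone as a list of float samples."""
    n = int(dur_sec * sr)
    return [math.sin(2 * math.pi * freq_hz * i / sr) for i in range(n)]

# The same musical material -- one tone -- at extreme proportions.
grain = render_tone(880.0, 0.005)   # 5 ms: a click-like grain
drone = render_tone(55.0, 20.0)     # 20 s: a low drone
print(len(grain), len(drone))       # 220 samples vs. 882000 samples
```

The 5 ms version is no longer heard as a pitch at all, while the 20-second version foregrounds details a normal note length would hide.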

In Control Click, each computer algorithmically generates a melodic line based on a chord. I cannot control the exact sequence of pitches, but I can control the chord type, note duration, and tempo. The range of note durations and playback paces is wider than that of acoustic instruments, and thus capable of creating different timbres and moods. The audio example below plays the melodic line with normal, slightly longer, and very short note durations.
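This kind of constrained randomness can be sketched as follows (a Python illustration, not the actual Control Click SuperCollider code; the chord table and function names are hypothetical): the exact pitch sequence is left to chance, while the chord type constrains what the chance can pick.

```python
import random

# Chord types as semitone offsets from a root note.
CHORDS = {
    "major": [0, 4, 7],
    "minor7": [0, 3, 7, 10],
}

def melodic_line(chord="major", root=60, length=16, seed=None):
    """Generate a melodic line of random chord tones (MIDI note numbers).
    The sequence is unpredictable, but every note belongs to the chord."""
    rng = random.Random(seed)
    pool = [root + step + octave for step in CHORDS[chord] for octave in (0, 12)]
    return [rng.choice(pool) for _ in range(length)]

line = melodic_line("minor7", root=57, length=8, seed=1)
print(line)  # every note is an A minor 7 chord tone
```

Note duration and tempo would be controlled the same way: the composer sets the range, and the algorithm chooses within it.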

By playing the melodic line heard above with very long note durations and a decelerating tempo, I could create the sound below. Note that the tremolo of individual notes becomes more apparent as the note duration grows longer. Long, stacked notes with different tremolo rates create the sense of a chord with a long reverb.

The sound heard above was inspired by the FFT time-stretching technique, which lets composers discover hidden sounds too short to be heard and appreciated in an audio file. The technique can also make a long audio phrase so short that one cannot identify its pitch. In other words, time-stretching scales the duration parameter in extreme proportions. But the idea is applicable beyond FFT. The audio below is how I applied the duration/tempo scaling to the percussion sound.

Composition With Proportions

The idea of applying different proportions extends beyond parameter changes. In Control Click, the example sounds in the previous section are meant to be played by multiple computers. Because the piece is site-dependent and uses random number generators, each computer emits a distinguishable note sequence at a different physical location. My goal was to recreate the sonic environment of an arcade from my childhood – chaotic, overwhelming, and delightful.

Links below point to the moments in the piece that use the previously mentioned scaling examples in an ensemble format.

  • The normal melodic line with percussion (1:30)
  • Long note duration (2:30-2:50)
  • Short note duration (5:30-6:00)
  • Extreme extension of note duration and tempo (8:50-10:00)

In the second link, Long note duration, the melodic line is detuned by a random amount at synced timings. The effect of one computer doing so is not very noticeable, but when multiple computers go out of tune together in a large space, it creates an impact that I cannot recreate in a concert hall.

Notation of Proportions

The concept of controlling the range and scope of musical parameters, rather than instructing specific notes to play, is transferable to human performance. Notation that asks a performer to play an electronic instrument within a limited range can be considered proportional control of choices. Seven Bird Watchers (2019) for drum machine ensemble is an example.

Seven Bird Watchers uses drum machines with customized sync tracks, and the sync track defines the form – the piece is simply seven sections with increasing tempo and sonic range. While the composed sync track holds the Korg Volca Beats’ tempi together, the human performers change the drum machines’ parameters according to the score. The score depicts the range of parameters within which performers can improvise.


For example, an early section has limited parameter changes and choices. It lasts about 35 seconds with a moderate increase and decrease in tempo. As shown in the score above, the performers have a very limited choice of parameter changes – the dark areas of the Time/Depth/Pitch/Decay knobs and of the instrument choices mark where the performers may move knobs and press buttons on the Volca Beats.

A later section, in contrast, has a bigger range of tempo changes and an extended duration of 85 seconds. The performers are free to use the entire range of the knobs with almost all available sounds, so the proportion of choices and resulting sounds is more varied. For example, the tempo gets so fast that the sixteenth-note runs of some percussion instruments lose their sense of rhythm and start to sound like bird chirping.
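Formalized, the score is a table of allowed sub-ranges per section. The sketch below is a hypothetical Python rendering of that idea (the section numbers, knob names, and ranges are illustrative, not transcribed from the actual score): each 0.0-1.0 knob is constrained to its section's "dark area."

```python
# Hypothetical section table: each section allows a sub-range
# of each normalized 0.0-1.0 knob (the score's "dark areas").
SECTIONS = {
    1: {"Time": (0.0, 0.3), "Depth": (0.2, 0.4)},   # early: narrow choices
    7: {"Time": (0.0, 1.0), "Depth": (0.0, 1.0)},   # late: full range
}

def clamp_knob(section, knob, value):
    """Constrain an improvised knob value to the section's allowed range."""
    lo, hi = SECTIONS[section][knob]
    return max(lo, min(hi, value))

print(clamp_knob(1, "Time", 0.9))  # 0.3 -- outside the early dark area
print(clamp_knob(7, "Time", 0.9))  # 0.9 -- the late section allows it
```

In performance the constraint lives in the notation rather than in code: the performer improvises freely, but only within the printed range.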

References

For further study, read Curtis Roads’s Microsound; I learned the musical application of scale and proportion from this book. Research scale and proportion in visual art as well. There are ample examples of how different scales make ordinary events extraordinary. Watching a movie on a big screen feels different from watching it on a phone screen. A slow-motion video effect is fun. Similarly, a sound with varying time scales and contrasting parameter values fascinates me.

Computer Music Composition Method has other related entries. Read them if interested.