The choreography and the movie are amazing, and I am happy to work with such talented artists. Musically speaking, there are some experimental moments I liked quite a bit. In some parts, I featured no-input mixer sounds to convey certain emotions and messages. I am also satisfied with the climactic buildup at the end.
Music technology extends an instrument’s acoustic capacity. An amplifier makes a guitar louder than an orchestra. A pitch shifter expands the range of a snare. Singing through a delay or harmonizer creates thick harmonies that a soloist cannot produce alone. Computer music compositions, then, feature digital tools that overcome the physical limitations of acoustic instruments. They also use electronic sound to connect one section to the next, or one story to another.
Sonic metaphors made with digitally processed sounds, which I call extension and connection, are observable in electroacoustic compositions. My favorite example is at around the 2:00 mark in Paul Riker’s Cubicle (2007). There, typing sounds multiply to become rain, signaling the beginning and ending of a section. The telephone and dog-barking sounds in the piece also go through uniquely electronic transformations and serve as signals of a section or scene change. Listen to the whole piece to see if you agree with my interpretation.
Another favorite example is in Paul Koonce’s Breath and the Machine (1999). In the first three minutes of the piece, some overtones of a two-note violin motif linger longer than the rest. Those extended and exposed overtones of the violins train the listener’s ears to focus on them so that they can notice the same technique throughout the piece. Once the ears are tuned to search for the extended overtones, sections with seemingly random choices of different sounds make sense – the series of lingering overtones, for me, form a melodic line. I listen for counterpoint and common tone modulation between sounds that are difficult to notate traditionally. Again, listen to the whole piece to see if you agree.
Paul Koonce was my doctoral advisor, and I have known Paul Riker as an inspiring colleague. I learned electroacoustic extension and connection from exemplary teachers and colleagues during my graduate school years, and I have refined it since then. My take on the typing-to-rain sound is in the first and last movements of Dubious Toppings (2019). At the 1:00 mark, the electronic ensemble collectively creates rain-like typing sounds, stating the potential and limits of the featured digital instrument and its relationship to a piano. This opening gesture repeats in the last movement at the 8:30 mark, but with pitched tones. I wanted to end the piece like a movie’s final scene, where a changed protagonist returns home after an adventure.
I learned to thread different sounds with common effects from Breath and the Machine, and I apply the technique in electronic improvisation. In the linked 2015 video, many sound-making objects are processed with a SuperCollider patch that has a fixed set of effects. The common effects bind seemingly random objects together. A chattering-teeth toy and a spinning coin can connect if both go through the Looping Pitch Var effect on my patch.
As for creating a form, I reserve a specific combination of effects and objects for transitions. At around the 11:50 mark, I use a slinky combined with a granular processor and long reverb to signal a new section. Planning a soundmark like this helps me develop and pace the improvisation. The videos of other improvisations from 2015 and 2017 show the same slinky technique occurring in the latter half of the performance.
The extension and connection demonstrated in this article operate within a single composition, but the concept can be applied on a larger scale. Sampling and remixing are about connecting and extending sounds from existing songs. Audio coding can start by extending existing code and connecting it to another module to create a product. On a personal level, I extend and connect what I learn from my peers and teachers by applying it in different contexts, formats, and technologies. Below is the current practice of my extend-and-connect project.
Control and presentation of sound at different scales is a distinguishing feature of computer music. In this context, scale does not refer to a group of notes at different pitches, like a C major scale. It instead refers to proportions, as in big vs. small, long vs. short, and few vs. many. Music technology is capable of rendering a single musical idea in extreme proportions, and a collection of those sounds can become a composition.
I will demonstrate a scale-based electronic music composition process with Control Click, a sound installation composed in 2016. The piece is an 11-minute site-specific work for eight or more computers, creating an arcade-like environment with electronic blips and blinks. The computers are networked to play the same SuperCollider file, each functioning as both a performer and a lighting device. The video below is a version of Control Click presented at the 2016 Third Practice Electroacoustic Music Festival.
Sound Design With Proportions
Featuring various scales/proportions in computer music means applying different values to a control parameter. If one can control the pitch of an electronic instrument, experiment with extremely low and high frequencies. If the duration of a note can be programmed, make very short and very long sounds. The keyword here is extreme. A computer is capable of following laborious or precise instructions that are difficult or impossible for humans to execute.
In Control Click, each computer algorithmically generates a melodic line based on a chord. I cannot control the exact sequence of pitches, but I can control the chord type, note duration, and tempo. The range of note durations and playback pace is wider than that of acoustic instruments, and thus capable of creating different timbres and moods. The audio example below plays the melodic line in normal, slightly longer, and very short note durations.
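As a rough illustration of this kind of generator, not the actual SuperCollider code, a chord-constrained random melody with composer-controlled duration can be sketched in Python. The function name, chord values, and durations below are my own assumptions:

```python
import random

def chord_melody(chord_hz, n_notes, dur_s, seed=None):
    """Pick random pitches from a chord; pair each with a fixed duration.

    A toy model of the idea: the exact pitch sequence is random,
    but the chord, note duration, and (implied) tempo stay under
    the composer's control.
    """
    rng = random.Random(seed)
    return [(rng.choice(chord_hz), dur_s) for _ in range(n_notes)]

# One musical idea, rendered at extreme proportions of note duration:
c_major = [261.63, 329.63, 392.00]                      # C major triad, Hz
normal = chord_melody(c_major, 8, dur_s=0.25, seed=1)   # ordinary pace
blips  = chord_melody(c_major, 8, dur_s=0.005, seed=1)  # arcade-like clicks
drones = chord_melody(c_major, 8, dur_s=12.0, seed=1)   # overlapping drone
```

With the same seed, the three renderings share one pitch sequence; only the duration parameter moves to an extreme.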
By playing the melodic line heard above with very long note durations and a decelerating tempo, I could create the sound below. Note that the tremolo of individual notes becomes more apparent as the note duration grows longer. Long, stacked notes with different tremolo rates create the sense of a chord with long reverb.
The sound heard above was inspired by the FFT time-stretching technique, which has led composers to discover hidden sounds too short to be heard and appreciated in an audio file. The technique can also make a long audio phrase so short that one cannot identify its pitch. In other words, time-stretching scales the duration parameter in extreme proportions. But the idea is applicable beyond FFT. The audio below shows how I applied the duration/tempo scaling to a percussion sound.
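A full FFT phase vocoder is involved to implement, but the duration-scaling idea can be sketched with naive overlapping grains in Python/NumPy. This is my own illustration of the concept, not the technique used in any of the pieces mentioned; the grain and hop sizes are arbitrary assumptions:

```python
import numpy as np

def granular_stretch(signal, stretch, grain=1024, hop=256):
    """Naively time-stretch by re-reading overlapping grains.

    Not a phase vocoder: Hann-windowed grains are overlap-added
    while the read position advances at hop/stretch, so the output
    buffer is int(len(signal) * stretch) + grain samples long.
    """
    window = np.hanning(grain)
    out_len = int(len(signal) * stretch) + grain
    out = np.zeros(out_len)
    read_hop = hop / stretch
    pos, write = 0.0, 0
    while int(pos) + grain < len(signal) and write + grain < out_len:
        out[write:write + grain] += signal[int(pos):int(pos) + grain] * window
        pos += read_hop
        write += hop
    return out

tone = np.sin(2 * np.pi * 440 * np.arange(4410) / 44100)  # 0.1 s at 44.1 kHz
slow = granular_stretch(tone, stretch=8.0)  # hidden detail stretched out
fast = granular_stretch(tone, stretch=0.1)  # too short to read the pitch
```

The same signal, pushed to extreme proportions in both directions, yields two very different musical objects.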
Composition With Proportions
The idea of applying different proportions extends beyond parameter change. In Control Click, the example sounds in the previous section are meant to be played by multiple computers. But because the piece is site-dependent and uses random number generators, each computer emits a distinguishable note sequence at a different physical location. My goal was to create the sonic environment of an arcade from my childhood: chaotic, overwhelming, and delightful.
The links below point to moments in the piece that use the previously mentioned scaling examples in an ensemble format.
Extreme extension of note duration and tempo (8:50-10:00)
In the third link, Long note duration, the melodic line is detuned by a random amount at synced timings. The effect of one computer doing so is hardly noticeable. But when multiple computers go out of tune in a large space, it creates an impact that I cannot recreate in a concert hall.
Notation of Proportions
The concept of controlling a range and scope of musical parameters, rather than instructing specific notes to be played, is transferable to human performance. A notation that asks performers to play an electronic instrument within a limited range can be considered proportional control of choices. Seven Bird Watchers (2019) for drum machine ensemble is an example.
Seven Bird Watchers uses drum machines with customized sync tracks, and the sync track defines the form: the piece is simply seven sections with increasing tempi and sonic range. While the composed sync track holds the Korg Volca Beats’ tempo together, the human performers change the drum machines’ parameters according to the score. The score depicts the range of parameters within which performers can improvise.
For example, the early section has limited parameter changes and choices. It lasts about 35 seconds with a moderate increase and decrease in tempo. As shown in the score above, the performers have a very limited choice of parameter changes: the dark areas of the Time/Depth/Pitch/Decay knobs, as well as the dark areas of the instrument choices, mark where the performers can move knobs and press buttons on the Volca Beats.
The latter section, in contrast, has a bigger range of tempo changes and an extended duration of 85 seconds. The performers are free to use the entire range of the knobs with almost all available sounds. The proportion of choices, and the resulting sounds, are more varied. For example, the tempo gets so fast that the sixteenth-note runs of some percussion instruments lose their sense of rhythm and start to sound like birds chirping.
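One way to picture this range-based notation is as data: each section grants a permitted region of the parameter space rather than exact notes. The sketch below is hypothetical; the real score uses dark areas on knob diagrams, and every name and value here is invented for illustration:

```python
# Hypothetical encoding of the Seven Bird Watchers idea: a section is a
# set of allowed ranges, and performers improvise anywhere inside them.
SECTIONS = [
    {"name": "early", "dur_s": 35, "knob_range": (0.0, 0.3),
     "instruments": ["kick", "snare"]},
    {"name": "late", "dur_s": 85, "knob_range": (0.0, 1.0),
     "instruments": ["kick", "snare", "hat", "tom", "clap", "crash"]},
]

def allowed(section, knob_value, instrument):
    """Is a performer's choice inside the section's dark (permitted) area?"""
    lo, hi = section["knob_range"]
    return lo <= knob_value <= hi and instrument in section["instruments"]
```

The same knob gesture can be legal in the late section and out of bounds in the early one, which is exactly the proportional control the score exercises.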
References
For further study, read Curtis Roads’s Microsound, from which I learned the musical application of scale and proportion. Research scale and proportion in visual art as well; there are ample examples of how different scales make ordinary events extraordinary. Watching a movie on a big screen feels different from watching it on a phone screen. A slow-motion video effect is fun. Similarly, a sound rendered at varying time scales and contrasting parameter values fascinates me.
Computer Music Composition Method has other related entries. Read them if interested.
Create a patch, make different sounds with it, and arrange them in order: this is my go-to method for computer music composition. Instead of a theme, as in the theme-and-variations form, a computer musician begins a composition by making an electronic instrument or an audio app patch. Then the composer explores the instrument’s different sonic possibilities. The sounds created with the instrument are then presented in a particular order. This article demonstrates the process with one of my old compositions. I will also point to more recent practices of the method in the Computer Music Practice entries.
Tool and Variations in Decrescendo (2003)
Step 1. Make a software instrument
A computer music composer’s first job is often to design a digital instrument or a patch. A patch in this context is a specific connection of features/modules in an audio programming environment, such as Max, Csound, or SuperCollider. In Decrescendo, a fixed-media piece published in 2003, I wrote a Csound patch that generates a series of sine tones according to an adjustable overtone series. The formula for making the pitch series is as simple as the one below, but I could control tempo, note duration, and pan to my taste.
Note of a scale = fundamental frequency × (overtone number × detune value)
Here are two sound examples generated from the Decrescendo instrument.
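For readers who want to try the formula themselves, here is a minimal Python sketch of the pitch-series calculation. The function name and example values are mine, not from the original Csound patch, which also controlled tempo, note duration, and pan:

```python
def overtone_scale(fundamental_hz, n_notes, detune=1.0):
    """Pitch series from the formula:
    note = fundamental frequency * (overtone number * detune value).
    detune=1.0 gives the pure overtone series; other values bend
    the whole series sharp or flat.
    """
    return [fundamental_hz * (n * detune) for n in range(1, n_notes + 1)]

pure    = overtone_scale(110.0, 6)              # 110, 220, 330, 440, 550, 660
detuned = overtone_scale(110.0, 6, detune=1.02) # the same series, bent sharp
```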
Step 2. Make variations with the instrument
A customized instrument has the potential to generate sounds of various timbres with different, sometimes randomized, settings of its parameters. The second step of the tool and variations method is to experiment with and document as many parameter settings as possible that yield distinct sounds. In Decrescendo, I adjusted the fundamental frequency, note duration, scale direction, pan position, and detune value to create different, but related, sounds. Some variations are created with duplication and overlap (more on this in another article). Below are some audio examples.
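One way to picture this step in code is to randomize the parameters and keep only the settings that differ. The Python sketch below is a generic illustration, not the Decrescendo patch; every parameter name and value range is an assumption:

```python
import random

def random_setting(rng):
    """One randomized parameter setting for a hypothetical instrument."""
    return {
        "fundamental_hz": rng.choice([55.0, 110.0, 220.0, 440.0]),
        "note_dur_s": rng.choice([0.01, 0.1, 1.0, 10.0]),
        "direction": rng.choice(["up", "down"]),
        "detune": round(rng.uniform(0.95, 1.05), 3),
    }

def gather_presets(n_trials, seed=0):
    """Experiment and document: keep each distinct setting as a preset."""
    rng = random.Random(seed)
    presets, seen = [], set()
    for _ in range(n_trials):
        setting = random_setting(rng)
        key = tuple(sorted(setting.items()))
        if key not in seen:          # only settings that differ are kept
            seen.add(key)
            presets.append(setting)
    return presets

library = gather_presets(50)         # a small library of documented presets
```

The resulting library is exactly what the next section calls presets: documented variations waiting to be sequenced.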
Documented variations of parameter settings in a digital instrument are called presets. Featuring presets of an instrument is a distinct characteristic of electronic music compositions. Here’s an article about presets for further study.
Step 3. Sequence the variations
The next step after gathering a library of presets is to decide when to play which sounds. The decision-making and its documentation involve selecting a few of the many sounds from Step 2. The deciding factors depend on context and personal taste. In Decrescendo, the piece opens with an unaltered overtone series, followed by slightly detuned scales. The second section (00:30) contrasts the opening by presenting a few descending overtone series. The third section (00:50) reminds the listener of the opening gesture while further exploring detuning and tempo variation. The preset choices for the rest of the piece are my answers to the question, “What makes sense based on what we have heard so far?”
Sequencing, the act of ordering the various sounds made with an instrument, is not formulaic. There is no right answer; the choices are based on the context, experience, and taste of the creator.
Computer Music Practice
Tool and variations is a method that can be applied to many digital music formats. Here are my recent applications of the method in installation, fixed media, and electronic ensemble works. The entries are part of the Computer Music Practice project.
Control Click (2016): In this installation for multiple desktop computers, every computer plays the same SuperCollider patch. The instrument is designed to generate randomized rhythms and timbres at pre-scored, fixed timings. In other words, the instrument randomizes timbre over a fixed sequence of changes. I saved the most surprising preset I found for the climax.
Seven Bird Watchers (2019): In this electronic ensemble piece, I did not design the instrument but made a specific drum pattern for the Korg Volca Beats. The score displays seven variations of button combinations and gestures that performers create with that drum pattern at specific timings. The variable tempo is composed/sequenced with SuperCollider.
RMHS (2020): RMHS is a drone generator made with SuperCollider. A user can download the patch, set parameters, press a button, and create a drone of microtonal harmonies. The RMHS album consists of eight examples of such drones. The sequence portion of this project is the track order, which reflects my interpretation of consonant and dissonant harmonies.
Four Hit Combo (2024): The preset variation and sequence creation process is similar to that of Seven Bird Watchers – I notated different instances of presets for performers to interpret. But the instrument in Four Hit Combo does not have a set sound. Instead, it is a platform that processes any incoming audio files with a set of gestures based on granular synthesis. It is possible to create an instrument without sound in computer music!
A preset is a parameter configuration of a digital electronic instrument. A preset can make one synthesizer sound like a drum, a string, or anything else. It can also make one reverb unit imitate the acoustics of a stadium, a bathroom, or any other environment. Unlike an analog instrument’s patch, a preset can be saved and loaded. In a digital modular synth, I can accurately recreate favorite or project-specific sounds instantly. And there can be thousands of presets for one instrument. The ability to access a large quantity of presets instantly and accurately is, in my opinion, digital instruments’ most distinguishing advantage over analog or acoustic instruments.
Much electronic music production starts with browsing through presets. A Logic Pro user can choose and play a sound such as Eerie Strings or Wormhole Lead. These two sound different, but both are made with the same Retro Synth software instrument. In other words, Eerie Strings and Wormhole Lead are presets of Retro Synth.
A producer could complete a piece with 100 different sounds in a DAW, but that does not mean there were 100 different instruments. There could have been 10 instruments with 100 saved variations/presets. Of those 100 presets, some could be the creator’s original creations or modifications. Some presets could even change their parameters within a piece via automation.
The idea of using multiple configurations of an instrument and then dynamically changing them applies beyond factory presets. Preset-changing is a uniquely electronic composition technique; some electroacoustic compositions feature one or two electronic instruments with sequenced presets. For example, in Armor+2 (2015), I cue digital instruments’ parameter changes according to the score. I used SuperCollider to achieve this, but any app with cue features could do the same.
In Armor+2, the computer randomizes the parameters of a digital instrument wherever a boxed word is notated in the score (FM, AM, Stutter, Ticks). I think of this process as a random preset change that yields expected but different effects at every performance. For example, one can hear the stuttering effect in different rhythmic patterns in measures 12 (0:30 in the recording) and 20 (0:50 in the recording). The same randomization happens with the FM and AM effects throughout the piece. The dynamic yet well-timed changes make the computer part function like a jazz accompanist: a jazz pianist plays the notated chord progression but improvises how those chords are voiced. Similarly, the computer part of Armor+2 changes the clarinetist’s sound as notated, but the resulting sounds vary at every performance.
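A minimal sketch of this cue-plus-randomization idea, in Python rather than the SuperCollider used in the piece: only the boxed effect names and the Stutter measures (12 and 20) come from the text above; the FM measure, parameter names, and value ranges are all hypothetical.

```python
import random

# Hypothetical cue list modeled on the Armor+2 idea: each boxed word in
# the score triggers a freshly randomized version of a named effect.
SCORE_CUES = [("Stutter", 12), ("Stutter", 20), ("FM", 28)]  # (effect, measure)

def randomize_effect(effect, rng):
    """Return a randomized preset: the expected effect type, but with
    different parameter values at every performance."""
    if effect == "Stutter":
        return {"effect": effect, "rate_hz": rng.choice([4, 8, 16]),
                "pattern": [rng.randint(0, 1) for _ in range(8)]}
    return {"effect": effect, "mod_freq_hz": rng.uniform(20, 200),
            "index": rng.uniform(0.1, 2.0)}

def performance(seed=None):
    """One run-through: the cues are fixed, the presets are not."""
    rng = random.Random(seed)
    return [(measure, randomize_effect(eff, rng)) for eff, measure in SCORE_CUES]

night_one = performance(seed=1)
night_two = performance(seed=2)  # same cues, different parameter values
```

Like the jazz accompanist analogy: the chord chart (cue list) is fixed, the voicings (parameter values) change nightly.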
Another example is in Save Point By The Lake (2024) for laptop ensemble. Every performer in this piece plays piano samples according to the score. For example, performers press the A, F, and J keys on the computer keyboard in measures 2, 3, and 4 with the notated rhythm.
But unlike an acoustic piano, the laptop will not always play an F major chord. I designed the instrument so that at every keystroke, SuperCollider randomizes the pitch, dynamics, and detuning amount. Pressing the F key in measure 1 and repeating the action in measure 2 yields different notes. This way, the ensemble plays the piece in the notated rhythm with computer-assisted interpretations.
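Here is a hedged Python sketch of that keystroke behavior; the key-to-pitch mapping, the randomized fields, and their ranges are my assumptions, not the piece's actual SuperCollider code:

```python
import random

# Hypothetical mapping in the spirit of Save Point By The Lake: each
# computer key names a base pitch, but every press is re-randomized.
KEY_TO_MIDI = {"a": 57, "f": 65, "j": 74}  # assumed base pitches

def press(key, rng):
    """One keystroke: base pitch plus randomized octave, detune, dynamics."""
    base = KEY_TO_MIDI[key]
    return {
        "midi": base + 12 * rng.choice([-1, 0, 1]),   # random octave shift
        "detune_cents": rng.uniform(-30, 30),
        "velocity": rng.randint(40, 110),
    }

rng = random.Random(2024)
bar1 = press("f", rng)
bar2 = press("f", rng)  # same key, almost surely a different note
```

The performers keep full control of the rhythm; the computer interprets everything else.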
A preset is a recordable and recallable variation of a digital instrument. It is the equivalent of a save file or a snapshot of an app, and it is a powerful tool for expressing originality and creativity. Anyone can use Logic Pro’s Retro Synth or SuperCollider, but a customized preset can sound unique. If those original sounds are put together in order or layered with other sounds, the result can be a composition. Lastly, if we expand the definition of an instrument, other creative processes can be thought of as preset changes. If a recording studio is an instrument, what are its presets? If an orchestra is an instrument, what are its preset changes and randomizations? These wonderings are delightful and provocative.