
Scale – Computer Music Composition Method

Control and presentation of sound at different scales is a distinguishing feature of computer music. In this context, scale does not refer to a group of notes at different pitches, like a C major scale. It instead refers to proportions, as in big vs. small, long vs. short, and few vs. many. Music technology can render a single musical idea in extreme proportions, and the collection of those sounds can become a composition.

I will demonstrate a scale-based electronic music composition process with Control Click, a sound installation composed in 2016. The piece is an 11-minute site-specific work for eight or more computers, creating an arcade-like environment with electronic blips and blinks. The computers are networked to play the same SuperCollider file, each functioning as both a performer and a lighting device. The video below is a version of Control Click presented at the 2016 Third Practice Electroacoustic Music Festival.

Sound Design With Proportions

Featuring various scales/proportions in computer music means applying different values to a control parameter. If you can control the pitch of an electronic instrument, experiment with low and high frequencies. If you can program the duration of a note, make very short and very long sounds. The keyword here is extreme. A computer is capable of following laborious or precise instructions that are difficult or impossible for humans to execute.

In Control Click, each computer algorithmically generates a melodic line based on a chord. I cannot control the exact sequence of pitches, but I can control the chord type, note duration, and tempo. The range of note durations and playback paces is wider than that of acoustic instruments, and is thus capable of creating different timbres and moods. The audio example below plays the melodic line with normal, slightly longer, and very short note durations.
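The logic described above can be sketched as follows. This is an illustrative Python rendering, not the actual SuperCollider code of Control Click; the chord, seed, and parameter values are hypothetical. The pitch sequence is random, while chord pool, note duration, and tempo stay under the composer's control:

```python
import random

C_MAJOR7 = [60, 64, 67, 71]  # hypothetical chord pool as MIDI pitches

def melodic_line(chord, n_notes, duration, tempo, seed=None):
    """Return (midi_pitch, seconds) pairs drawn at random from a chord."""
    rng = random.Random(seed)
    beat = 60.0 / tempo  # seconds per beat
    return [(rng.choice(chord), duration * beat) for _ in range(n_notes)]

# Same controls, three duration scales: normal, longer, and very short.
normal = melodic_line(C_MAJOR7, 8, duration=1.0, tempo=120, seed=1)
longer = melodic_line(C_MAJOR7, 8, duration=4.0, tempo=120, seed=1)
tiny   = melodic_line(C_MAJOR7, 8, duration=0.05, tempo=120, seed=1)
```

Because the three calls share a seed, they play the same random pitch sequence at three different duration scales, which is how one idea can be heard in extreme proportions.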

By playing the melodic line heard above with a very long note duration and a decelerating tempo, I could create the sound below. Note that the tremolo of individual notes becomes more apparent as the note duration grows longer. Long, stacked notes with different tremolo rates create the sense of a chord with a long reverb.

The sound heard above was inspired by the FFT time-stretching technique, which lets composers discover hidden sounds too short to be heard and appreciated in an audio file. The technique can also make a long audio phrase so short that one cannot identify its pitch. In other words, time-stretching scales the duration parameter in extreme proportions. But such an idea is applicable beyond FFT. The audio below demonstrates how I applied the duration/tempo scaling to a percussion sound.
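To make the stretching idea concrete, here is a minimal granular time-stretch sketch in Python. It is illustrative only: the piece itself scales note duration and tempo in SuperCollider rather than stretching audio files, and the grain/hop values are arbitrary. Overlapping grains are read from the source at one rate and written at another, lengthening or shortening the result:

```python
def time_stretch(samples, factor, grain=256, hop=64):
    """Naive granular stretch: read grains every hop/factor samples,
    write them every hop samples, overlap-adding the result."""
    out_len = int(len(samples) * factor)
    out = [0.0] * (out_len + grain)
    overlap = grain / hop          # number of grains covering each output sample
    read, write = 0.0, 0
    while int(read) + grain <= len(samples) and write + grain <= len(out):
        for i in range(grain):
            out[write + i] += samples[int(read) + i] / overlap
        read += hop / factor       # slower source advance = longer output
        write += hop
    return out[:out_len]

stretched = time_stretch([0.5] * 2048, factor=4.0)   # 4x longer
squeezed  = time_stretch([0.5] * 2048, factor=0.25)  # 4x shorter
```

A production phase vocoder would also preserve phase across grains; this sketch only shows how the same source yields extremely long or extremely short results from one duration parameter.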

Composition With Proportions

The idea of contrasting proportions extends beyond parameter change. In Control Click, the example sounds in the previous section are meant to be played by multiple computers. Because the piece is site-dependent and uses random number generators, each computer emits a distinguishable note sequence from a different physical location. My goal was to create the sonic environment of an arcade from my childhood: chaotic, overwhelming, and delightful.

The links below point to the moments in the piece that use the previously mentioned scaling examples in an ensemble format.

  • The normal melodic line with percussion (1:30)
  • Long note duration (2:30-2:50)
  • Short note duration (5:30-6:00)
  • Extreme extension of note duration and tempo (8:50-10:00)

In the second link, Long note duration, the melodic line is detuned by a random amount at synced timings. The effect of one computer doing so is not very noticeable. But when multiple computers are out of tune in a large space, it creates an impact that I cannot recreate in a concert hall.
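The synced-but-independent detuning can be sketched in Python (the piece itself does this in SuperCollider; seeds, event counts, and the detune range here are hypothetical). Every machine switches detune at the same moments, but each draws its own random amount:

```python
import random

def detune_track(machine_seed, n_events, max_cents=50):
    """One machine's random detune amount (in cents) at each synced switch."""
    rng = random.Random(machine_seed)
    return [rng.uniform(-max_cents, max_cents) for _ in range(n_events)]

# Eight networked computers share the switch timings but not the amounts,
# so the ensemble drifts out of tune together while staying in sync.
tracks = [detune_track(seed, n_events=4) for seed in range(8)]
```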

Notation of Proportions

The concept of controlling a range and scope of musical parameters, rather than instructing specific notes to play, is transferable to human performance. Notation that asks performers to play an electronic instrument within a limited range can be considered a proportional control of choices. Seven Bird Watchers (2019) for drum machine ensemble is an example.

Seven Bird Watchers uses drum machines with customized sync tracks, and the sync track defines the form: the piece is simply seven sections with an increasing tempo and sonic range. While the composed sync track holds the Korg Volca Beats' tempo together, the human performers change the drum machine's parameters according to the score. The score depicts the range of parameters within which performers can improvise.


For example, the early section has limited parameter changes and choices. It lasts about 35 seconds with a moderate increase and decrease in tempo. The performers, as shown in the score above, have a very limited choice of parameter changes: the dark areas of the Time/Depth/Pitch/Decay knobs, as well as the dark areas in the instrument choices, mark where the performers can move knobs and press buttons on the Volca Beats.
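The score's logic of "permitted ranges per section" can be represented as data. The sketch below is a hypothetical Python model (the section names and normalized knob ranges are illustrative, not the actual score): each section grants a range per knob, and an improvised setting is valid only if it stays inside every range.

```python
# Normalized knob positions 0.0-1.0; these ranges are invented for illustration.
SECTIONS = {
    "early": {"time": (0.0, 0.3), "depth": (0.0, 0.2), "pitch": (0.4, 0.6)},
    "late":  {"time": (0.0, 1.0), "depth": (0.0, 1.0), "pitch": (0.0, 1.0)},
}

def allowed(section, settings):
    """True if every knob setting stays inside the section's permitted range."""
    return all(lo <= settings[knob] <= hi
               for knob, (lo, hi) in SECTIONS[section].items())
```

The "early" section constrains most of each knob's travel; the "late" section opens the whole range, which is the proportional growth of choices the piece is built on.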

A later section, in contrast, has a bigger range of tempo changes and an extended duration of 85 seconds. The performers are free to use the entire range of the knobs with almost all available sounds. The proportion of choices and resulting sounds is more varied. For example, the tempo gets so fast that the sixteenth-note runs of some percussion instruments lose their sense of rhythm. They start to sound like birds chirping.

References

For further study, read Curtis Roads's Microsound, the book from which I learned the musical application of scale and proportion. Research scale and proportion in visual art as well; there are ample examples of how different scales make ordinary events extraordinary. Watching a movie on a big screen feels different from watching it on a phone screen. A slow-motion video effect is fun. Similarly, a sound with varying time scales and contrasting parameter values fascinates me.

Computer Music Composition Method has other related entries. Read them if interested.


Tool and Variations – Computer Music Composition Method

Create a patch, make different sounds with it, and arrange them in order. This is my go-to method for computer music composition. Instead of starting with a theme, as in the theme and variations form, a computer musician begins a composition by making an electronic instrument or an audio app patch. Then, the composer explores the different sonic possibilities of the instrument. The sounds created with the instrument are then presented in a particular order. This article demonstrates the process with one of my older compositions. I will also point to more recent applications of the method in the Computer Music Practice entries.

Tool and Variations in Decrescendo (2003)

Step 1. Make a software instrument

A computer music composer’s first job is often to design a digital instrument or a patch. A patch in this context is a specific connection of features/modules in an audio programming environment, such as Max, Csound, or SuperCollider. In Decrescendo, a fixed-media piece published in 2003, I wrote a Csound patch that generates a series of sine tones according to an adjustable overtone series. The formula for making the pitch series is as simple as the one below, but I could control tempo, note duration, and pan to my taste.

Note of a scale = fundamental frequency × (overtone number × detune value)

Here are two sound examples generated from the Decrescendo instrument.
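The formula above is easy to render in code. The original instrument was a Csound patch; the Python sketch below only illustrates the pitch math, with invented example values (110 Hz fundamental, eight partials, a 1.02 detune):

```python
def overtone_series(fundamental, n_partials, detune=1.0):
    """Frequencies following: note = fundamental * (overtone number * detune)."""
    return [fundamental * (n * detune) for n in range(1, n_partials + 1)]

pure    = overtone_series(110.0, 8)               # unaltered overtone series
detuned = overtone_series(110.0, 8, detune=1.02)  # slightly stretched series
```

With detune = 1.0 the result is a pure overtone series; nudging the detune value stretches or compresses every partial proportionally, which is the source of the "slightly detuned scales" heard in the piece.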

Step 2. Make variations with the instrument 

A customized instrument has the potential to generate sounds of various timbres with different, sometimes randomized, settings of its parameters. The second step in the tool and variations method is to experiment with and document as many different parameter settings as possible that yield distinct sounds. In Decrescendo, I adjusted the fundamental frequency, note duration, scale direction, pan position, and detune value to create different, but related, sounds. Some variations are created with duplication and overlap (more on this in another article). Below are some audio examples.

Documented variations of parameter settings in a digital instrument are called presets. Featuring presets of an instrument is a distinct characteristic of electronic music compositions. Here’s an article about presets for further study.  

Step 3. Sequence the variations 

The next step, after gathering a library of presets, is to decide when to play which sounds. The decision-making and its documentation involve selecting a few of the many sounds from Step 2. The deciding factors depend on context and personal taste. In the case of Decrescendo, the piece opens with an unaltered overtone series, followed by slightly detuned scales. The second section (00:30) contrasts the opening by presenting a few descending overtone series. The third section (00:50) reminds the listener of the opening gesture with further exploration of detuning and tempo variation. The preset choices for the rest of the piece are my answers to the question, “What makes sense based on what we have heard so far?”

Sequencing, the act of ordering the various sounds made with an instrument, is not formulaic. There is no right answer, but the choices are based on the context, experience, and taste of the creator.
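Steps 2 and 3 together can be sketched as a preset library plus a timed sequence. This Python sketch is hypothetical, not the actual Decrescendo data; the preset names and parameter values are invented, though the start times mirror the timings mentioned above:

```python
# Step 2: documented parameter settings (presets). Values are illustrative.
PRESETS = {
    "opening": {"fundamental": 110.0, "detune": 1.00, "direction": "up"},
    "descend": {"fundamental": 220.0, "detune": 1.00, "direction": "down"},
    "detuned": {"fundamental": 110.0, "detune": 1.03, "direction": "up"},
}

# Step 3: the composition as (start time in seconds, preset name) pairs.
SEQUENCE = [(0, "opening"), (30, "descend"), (50, "detuned")]

def settings_at(t):
    """Return the preset active at time t (the latest one already started)."""
    current = SEQUENCE[0][1]
    for start, name in SEQUENCE:
        if t >= start:
            current = name
    return PRESETS[current]
```

Separating the preset library from the sequence keeps the two composition decisions independent: which sounds exist, and when each is heard.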

Computer Music Practice

Tool and variations is a method that can be applied to many digital music formats. Here are my recent applications of the method in installation, fixed media, and electronic ensemble works. The entries are part of the Computer Music Practice project.

Control Click (2016): In this installation for multiple desktops, every computer plays the same SuperCollider patch. The instrument is designed to generate randomized rhythms and timbres at pre-scored, fixed timings. In other words, the instrument randomly generates timbre over a fixed sequence of changes. I saved the best and most surprising preset setting I found for the climax.

Seven Bird Watchers (2019): In this electronic ensemble piece, I did not design the instrument, but made a specific drum pattern for the Korg Volca Beats. The score displays seven variations of button combinations and gestures that performers need to create with that drum pattern at specific timings. The variable tempo is composed/sequenced with SuperCollider.

RMHS (2020):  RMHS is a drone generator made with SuperCollider. A user can download the patch, set parameters, press a button, and create a drone of microtonal harmonies. The RMHS album consists of eight examples of such drones. The sequence portion of this project is the track order, which reflects my interpretation of consonant and dissonant harmonies.

Four Hit Combo (2024): The preset variation and sequence creation process is similar to that of Seven Bird Watchers – I notated different instances of presets for performers to interpret. But the instrument in Four Hit Combo does not have a set sound. Instead, it is a platform that processes any incoming audio files with a set of gestures based on granular synthesis. It is possible to create an instrument without sound in computer music!

Slump Recovery Project

I was unable to complete a single composition over the summer despite multiple attempts. The unproductivity led to a chain of negative thoughts about the impact and relevance of my work and effort. I lost the will to create by July. A slump phase is not new for me or for any creative, but getting over it is never easy. To recover, I tried methods that worked in the past, but I also did something new.

The usual remedy for a slump for me is working on related projects in small bites. My main project is music composition, and related projects are any other music activities. I focused on relearning and appreciating fundamental musicianship. 

  • I reread books that inspired me. When reading became tiring, I listened to audiobooks. 
  • I wrote blog posts on electronic music practice. Writing helped me organize and reflect. 
  • I listened to any and all music. Summer is a great time to catch up on listening. I heard new things in old songs and rediscovered time-tested techniques in new music. 
  • I practiced bass guitar, an instrument I wanted to play but did not have time to learn. You know, every other electroacoustic musician plays bass.

Engaging in musical activities kept me from completely letting go of being a musician. Unfortunately, this was not enough to get me back to composing. I had to make a specific plan with concrete tasks to return to a creative routine.

  • Write a 1-2-minute piece with the most familiar tool. In my case, it would be SuperCollider.
  • Start and finish a piece in one sitting. The goal is to remind myself of the joy and necessity of completing a piece.
  • Quality does not matter. Do not evaluate or self-criticize the piece. 
  • Pieces do not have to have a new idea. An old idea presented in a different context is good enough.

This attempt worked, kind of. I finished a piece on July 22 but was not able to produce another until July 28. From that day, though, I completed a 1-1.5-minute piece every day until July 31. It was a small win with an impact. Now, I am comfortable sitting at a computer for a few hours per day, enduring the tedious or negative sides of the creative process.

As a record of this summer’s slump and post-slump, I share Slump Recovery Project, a 6-minute piece in 5 sections.

Lastly, here’s the log of my composition failures and successes in May-July 2025.

  • 5/19 Stopped working on a duet for plastic tube squeakers and computer after two weeks. A simulation of the performance was disappointing, so I stopped.
  • 7/10 Attempted to write a new laptop ensemble piece. Again, a simulation of the performance sounded too much like a piece I wrote a few weeks ago.
  • 7/13 Jotted an idea for a no-input mixer duet. I thought writing for my favorite instrument would motivate me. It didn’t. I got more discouraged. 
  • 7/16-21 I could not do anything.
  • 7/22 Made a 1.5-minute piece on SuperCollider titled I-IV-V-vi
  • 7/28 Made a 1.5-minute piece on SuperCollider and Logic Pro titled A Note of Happy
  • 7/29 Made a 1.5-minute piece on SuperCollider titled Decrescendo Revisited
  • 7/30 Made a 1-minute piece on SuperCollider and Logic Pro titled Riff
  • 7/31 Made a 50-second piece on SuperCollider and Logic Pro titled Elastic Drum

Computer Music Practice – Learning

The articles in the Learning section of CMP cover computer musicianship. They are examples of a music technologist’s work and efforts that the audience does not see. But they are essential steps for artistic improvement. Every musician has routines to refine themselves, and the Learning section shares my version of thoughts, actions, and reflections on computer music practice.

There are four subsections; the first three are listen, think, and act. The first and most fundamental step in musicianship is learning to listen. Conscious and analytical listening then connects to thinking. Thinking means analyzing and imagining sounds and techniques to enhance a piece, organizing and comparing past compositions to identify creative patterns, and articulating those thoughts into words for reference. These thoughts become tangible results through actions. The results could be a composition, a concert, a career move, an idea, or another sound that circles back to the listen-think-act process.

Listening, thinking, and acting are necessary steps in composing, coding, or improvising, as the repetition of those steps refines one’s skills. This refining process is essential but often overlooked in music technology. There are always more instruments and techniques to learn in music technology. Rather than chasing the newest tools, I choose a few that interest me the most and spend time and energy improving at them. Performers of non-electronic instruments have resources and historical references for the refinement process, such as etudes and method books for orchestral instruments. Computer music does not seem to (or rather, is not designed to) have a standard practice routine, but I can at least share my practice routine specific to computer musicianship.


Computer Music Practice (CMP) is an interactive and personal example of computer musicianship. Click each entry in the chart to read and listen to Joo Won Park’s computer music research.

Sans Trou Ni Fin (2025)

Sans Trou Ni Fin (without hole or end) is a collaborative work with Biba Bell. It premiered on June 26th, 2025, as a site-specific movement piece. I made the music and designed the playback system. The show was about 45 minutes long and had a total of four performances on June 26th and 27th, 2025.

Form 

The show has six dancers, one reader, and a Detroit house with a remarkable design. Biba’s program notes below describe the experiences of audience members during the show. 

Sound Design and Composition

I visited the site a few months before the premiere, took notes and photos, and composed the music. I made three different but correlated 10-minute pieces to be played simultaneously on three sides of the center garden. I also added a fourth sound played back from a portable speaker. During the performance, a dancer walked with the portable speaker and visited the three sites.

Below is a diagram showing a customized playback system for the show. It consists of three portable speakers, one subwoofer, a multi-channel audio interface, and a PC running Ableton Live. 

The tracks in the soundtrack album match the labels in the diagram.

The three places, the Library, Living Room, and Kitchen, each have their own music (tracks 3, 4, and 5), and the Traveler intermittently visits them with a fourth sound (track 6, an abbreviated version of the original 30-minute file). The first track, named Sans Trou Ni Fin – Part I, is a simulation of all of the sounds playing together. Sans Trou Ni Fin – Part II is played during the last 10 minutes of the piece, when all dancers gather in the center garden. The windows to the garden are open for the last movement, allowing sound to travel with fewer obstructions. Part II also uses a subwoofer to add a low-frequency thump. During the shows, I cued the sound from a storage room, doors closed and hidden from the audience.

The composition process combined new and old techniques. SuperCollider code used in End Credits, Save Point By The Lake, Tree Breezes, and Hold Drum became the starting point for the pieces. I edited the code so that the resulting sounds are in the same key, tempo, and duration. For the ending, I added a slow version of Mellotron 7. During the rehearsal, the team wanted a 60-second transition sound to be played while the audience moved to another room. I quickly assembled it from a drone in Living Room. I did not upload the transition music.

Remarks

Sans Trou Ni Fin was my second collaboration with Biba (info about the first collaboration is here). Biba brings out the beauty of an immobile space with mobile human bodies. Her work has the best “here-and-now” experience a live performance can create. And the performers, Hunter, Ta’Rajee, Matthew, Elizabeth, Aaron, and Chris, delivered it at 200%. I am grateful to have worked with the crew.

Update 11/25/2025

One of the artists, Matthew Piper, wrote a beautiful article about the performance. Read about it here!

https://www.matthewjpiper.com/post/haunting-the-house-notes-on-dance-and-space

As for the documentation, the two performances are now available on Vimeo.