Category Archives: Computer Music Practice

Overundertone (2015)

All compositions contain aspects of the creator’s thoughts and life at a particular time. Overundertone, a 2015 album consisting of eight electroacoustic tracks, is a reflection of me a decade ago. Listening to the album feels like reading an old diary. The me of 2015 is unfamiliar to the me of 2025: he was passionate and curious about the world and people, and he had the thoughts and emotions I wish to have now. Below is what I learned about myself while listening to the tracks.

  • Eyelid Spasm: I liked high frequencies, so I made a piece using them. I played with my then 5-year-old and 1-year-old sons all the time, so that playfulness is in the piece. I even used a picture of me mimicking an animal (I think, I hope) for the kids as the cover photo. I don’t think I can hear many of the frequencies featured in this piece anymore.
  • Cross Rhythms: I wrote it as a class project example. I asked Oberlin’s TIMARA students to pick a page in Tom Johnson’s Imaginary Music and make an electroacoustic piece about it.  I chose Cross Rhythms and composed a scene where two different rhythms overlap. The teacher-composer identity is in the piece. 
  • Three Corn Punch: It’s a recording of a live performance. It is probably my last piece that does not involve electronic sound. It uses a Disklavier, though. There are no new techniques here. I learned to accept that I don’t have to develop a new concept for every composition. A good idea, whether from others or from myself, needs repetition, reinterpretation, and refinement.
  • Cornfields and Cicadas: This is one of the soundscape works using original field recordings and synthesized sounds. I have been creating a series using this instrumentation since my graduate student years. I remember writing it with less struggle and stress, but the quality was about the same. It is a sonic diary of a vacation to a farm in Pennsylvania, where I went with my family and friends. 
  • Beft: I wrote it because I was a dad reading Dr. Seuss to the kids. The Beft is a creature in Oh, the Thinks You Can Think! that only moves to the left. It contains sounds and techniques I loved then – Shepard tones, 8-channel spatialization, overtones, etc. Like Cross Rhythms, it was also part of a class project example. My teacher-composer-dad identities are all represented in Beft.
  • Snake and Ox: It is a recording of an improvisation using the instruments I used in solo shows: a no-input mixer, SuperCollider, and a custom synth. The no-input mixer sound was the most exciting thing to me. I remember dancing along with the no-input mixer noises while practicing.
  • 10M to Fairmount: It is a sonic diary of a park in Philadelphia, where I lived for six years. Philly has felt like a hometown since I started my family there. I must have been interested in visuals in addition to field recordings and synthesizers then; the piece has a video version. Like Cornfields and Cicadas, it is a diary-like piece.
  • Sky Blue Waves: It’s a piece from 100 Strange Sounds, a project I thought would be my magnum opus. The track has simple instrumentation (celesta and a field recording of a beach) but carries the not-so-happy aspects of my life at that time. As a contrast to Eyelid Spasm, it works well as the closing piece of the album.

These songs are forgotten but still significant to me. Overundertone is an archive of emotions, efforts, and life in audio, the format I love the most. The album reminds me to strive (용써라) like I did in 2015. The jaded, slumped me of 2025 needs that.

jwp in 2015

JNNJ (2016) – for percussion duo and computer

Program Notes

JNNJ was commissioned and premiered by Hunter Brown and Louis Pino in 2016. The piece is inspired by the life and dynamics of my family. The title is a combination of the first letters of the names of mom, dad, and our two sons.

Technical Needs

  • One computer with a DAW or Max. A Logic Pro X session is provided, but any DAW will work. The tape part can also be played with the provided Max patch. 
  • Stereo sound system 
  • Two pairs of headphones for the click tracks
  • An audio interface with four separate outputs:
  • TapeL.aif should be routed to output 1, connected to the left speaker 
  • TapeR.aif should be routed to output 2, connected to the right speaker 
  • ClickTrackL.aif should be routed to output 3, connected to Perc1’s click track 
  • ClickTrackR.aif should be routed to output 4, connected to Perc2’s click track 

Performance Needs

  • Two percussionists, each with a snare drum and a large cymbal.
  • Both performers use brushes for the entire piece. 
  • Perc1 stands close to the left speaker, and Perc2 stands close to the right speaker.

Performance Instruction

  • Each performer gets his/her own click track, which runs at various tempi over the entire piece. Each performer should follow only his/her own click track.
  • Interpret the score like a jazz chart. Improvise in the notated style (funk and swing). 
  • Pay attention to the pitch of the click track to hear the section changes. 
  • Section-specific notes: 
    • S1: The tape part will fade in at around the 20-second mark.
    • S2: Perc1 transforms the rhythmic pattern to a swing (indicated as “target rhythm”) while slowing down. 
    • S3: Perc2 transforms the rhythmic pattern to a more energetic and busy funk rhythm while slowing down. Listen for the white noise cue for the next section. 
    • S4: Perc1 and Perc2 trade solos while speeding up. The trade-offs will gradually overlap with each other. Listen for the white noise cues to play uneven brush sweeps on the cymbals.
    • S5: Both parts will get significantly faster. When the tempo becomes too fast, freely improvise with great energy. Increase the use of cymbals throughout the section. 
    • S6: Both Perc1 and Perc2 play energetic cymbal improvisation while slowing down. Accompany the tape part after the click track stops. At the end of the fixed part (7:00), create a quiet, windy sound by swinging the brush in the air.

Composing the Click Track

A computer is excellent at doing precise tasks, which makes it a good tool for creating music that needs precise control. I can ask a computer to make a click track that accelerates from 120 BPM to 150 BPM over two minutes, and it will do so without a hitch. In JNNJ, I used the precision of a computer to create a percussion duet featuring continuously changing tempi.
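
To make that concrete, here is a minimal SuperCollider sketch of such a tempo ramp. It is not the actual JNNJ code: the \click SynthDef, the linear interpolation, and every value in it are my own assumptions for illustration.

// A minimal sketch (not the JNNJ code): a click track that accelerates
// linearly from 120 BPM to 150 BPM over two minutes.
// Boot the server first with s.boot.
(
SynthDef(\click, { |freq = 1000, amp = 0.3|
    var env = EnvGen.kr(Env.perc(0.001, 0.05), doneAction: 2);
    Out.ar(0, SinOsc.ar(freq) * env * amp ! 2);
}).add;
)

(
Routine {
    var startBPM = 120, endBPM = 150, totalDur = 120; // total duration in seconds
    var elapsed = 0;
    while({ elapsed < totalDur }, {
        var bpm = startBPM + ((endBPM - startBPM) * (elapsed / totalDur)); // linear ramp
        var beatDur = 60 / bpm;
        Synth(\click);
        beatDur.wait; // at the default clock tempo, this wait is in seconds
        elapsed = elapsed + beatDur;
    });
}.play;
)

The click tracks in JNNJ do more than this basic ramp: they add the pitch cues and count-ins described below and were rendered to audio files (ClickTrackL.aif and ClickTrackR.aif) for the performers.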

JNNJ requires each percussionist to follow their assigned click track. The tempi of the click tracks change constantly and asynchronously with each other. The performers are asked to follow the click track while improvising according to the score. To realize this idea, I made click tracks with the following distinct features:

  • The tempo of the click track changes gradually but precisely over a given duration. Performers listening to the click track should feel comfortable adjusting their tempo while reading the score.
  • To cue the section changes aurally, the pitch of the click changes. The notes are in harmony with the fixed media part.
  • The click tracks play unpitched count-in beats for the parts where a sudden or fast change is needed. 
  • The performers also read the click tracks in their scores. The unstemmed quarter notes in the computer part indicate the pitch of the click tracks. Most of the fixed media part is described in words in the score.

The click tracks and fixed media parts are coded and rendered in SuperCollider. The audience does not get to hear them, but I think the most distinguishing feature of JNNJ is the click tracks. The fixed media part is simple in terms of timbre so that the audience can focus on the duel of percussionists marching to different beats.

Refinement

The palindromic approach to tempo change (one part goes from X BPM to Y BPM while the other goes from Y BPM to X BPM at the same time) was previously explored in a fixed media piece called Cross Rhythms. Prior to that piece, I also explored palindromic timbre in a few other fixed media pieces; the third movement of Sound Mobile sounds exactly the same when played forward and backward. Multiple and varied attempts at expressing an idea through sound are necessary for refining and redefining it.

Seoseok Bell – Brief Analysis

Seoseok Bell is a track in Dot Zip, an album of 22 generative music tracks. The album’s purpose is to demo uniquely electronic sounds rendered with code. Each track has downloadable SuperCollider code that a listener can render and modify. Listen to Seoseok Bell on Bandcamp and download the SuperCollider code from here.

The following paragraphs analyze the form, code, and musical aspirations behind Seoseok Bell. They show how to start and develop a composition from a single synthesized sound. The learning is most effective if the reader has SuperCollider installed on their computer. Please watch a tutorial video on how to run the SuperCollider code written for Dot Zip.

Program

Seoseok (서석) is a small town in the mountainous region of Korea. The sound of the bell in a chapel in the town reminds me of peace and love. The piece recreates (or interprets) the bell sound using an additive synthesis-like process and then presents it in an ambient-like style. 

Form

Seoseok Bell creates a bell-like tone by adding multiple sine waves. The bell tones and a simple bass line then make three-part contrapuntal music. The resulting music has many variations due to the randomization of overtone frequencies, note sequences, and rhythms. The SuperCollider code SeoSeokBell_DotZip.scd does this through the following steps.

  • Step 1: Make two sine waves detuned from each other by a randomized frequency difference, creating a single tone with a pulse (a brief illustration follows this list).
  • Step 2: Create an overtone series. The notes in the overtone series are randomly detuned.
  • Step 3: Play the sound multiple times with short, randomized time intervals.
  • Step 4: Generate soprano and tenor parts by randomly choosing notes in a scale. At the same time, generate a bass part with simpler, in-tune overtones.
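
Step 1 can be heard in isolation with the short sketch below. It is an illustration only, not an excerpt from SeoSeokBell_DotZip.scd; the frequency and the detuning range are assumed values.

// Illustration of Step 1 only: two slightly detuned sine waves beat against
// each other, producing a single tone with a slow amplitude pulse.
// Assumes the server is booted (s.boot).
(
{
    var freq = 440;
    var diff = rrand(0.5, 3.0); // randomized frequency difference in Hz
    (SinOsc.ar(freq) + SinOsc.ar(freq + diff)) * 0.1 ! 2
}.play;
)

The smaller the random difference, the slower the pulse.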

Code

SeoSeokBell_DotZip.scd has the following sections. Watch a tutorial video on how to use the code.

  • SynthDef(“SingleB”): synthesizes the sound described in Step 1
  • ~bell: makes the sound described in Step 2
  • ~shake: makes the sound described in Step 3
  • ~sop, ~tenor, and ~bass: make the sounds described in Step 4
  • SynthDef(“NiceB”): synthesizes the bass tone described in Step 4
  • SystemClock.sched: schedules the start and stop times of ~sop, ~tenor, and ~bass

SynthDef(“SingleB”) and SynthDef(“NiceB”)

The two SynthDefs use simple waveform generators (SinOsc.ar and LFPulse.ar) as audio sources. SynthDef(“SingleB”) uses a percussive amplitude envelope with randomized attack and release times. The envelope also includes a transient generated with LFNoise2.ar. SynthDef(“NiceB”) has an envelope on the filter frequency of RLPF.ar.
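
Below is a rough sketch of how a SingleB-like SynthDef could be put together. The name \SingleBsketch, its parameters, and the way the LFNoise2.ar transient is mixed in are my assumptions; the published code in SeoSeokBell_DotZip.scd will differ in its details.

// A sketch of a SingleB-like SynthDef (assumed shape, not the published code):
// a detuned sine pair under a percussive envelope, plus a short noise transient.
(
SynthDef(\SingleBsketch, { |freq = 440, detune = 1.003, amp = 0.2, pan = 0, atk = 0.01, rel = 3|
    var env = EnvGen.kr(Env.perc(atk, rel), doneAction: 2);
    var tone = (SinOsc.ar(freq) + SinOsc.ar(freq * detune)) * 0.5;
    var transient = LFNoise2.ar(2000) * EnvGen.kr(Env.perc(0.001, 0.03)) * 0.3;
    Out.ar(0, Pan2.ar((tone + transient) * env * amp, pan));
}).add;
)

// One strike, with the attack and release times randomized at creation:
Synth(\SingleBsketch, [\freq, 660, \atk, rrand(0.001, 0.05), \rel, rrand(2.0, 4.0)]);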

~bell

In the ~bell function, SynthDef(“SingleB”) is duplicated using a Routine. The formulas below determine the frequencies of the duplicated Synths.

pitch = (freq * count) * rrand(0.99, 1.01);
pitch2 = pitch * (interval.midiratio) * rrand(0.99, 1.02) * rrand(0.99, 1.02);

where the argument count increases by 1 at every iteration of a .do loop.

Once defined, the ~bell function generates a sound using the following arguments:

~bell.(fundamental frequency, amplitude, duration, pan position, interval value of overtones)
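
A sketch of how such a function might be organized is below. It reuses the \SingleBsketch SynthDef from the earlier sketch and applies the two pitch formulas to an assumed series of eight partials; the partial count, amplitude scaling, and names are my assumptions, not the published ~bell code.

// A sketch of a ~bell-like function (assumed structure; reuses \SingleBsketch):
// duplicate the Synth over an overtone series inside a Routine.
(
~bellSketch = { |freq = 200, amp = 0.1, dur = 3, pan = 0, interval = 7|
    Routine {
        (1..8).do { |count|
            var pitch = (freq * count) * rrand(0.99, 1.01);
            var pitch2 = pitch * (interval.midiratio) * rrand(0.99, 1.02) * rrand(0.99, 1.02);
            [pitch, pitch2].do { |p|
                Synth(\SingleBsketch, [
                    \freq, p, \amp, amp / count, \pan, pan,
                    \atk, rrand(0.001, 0.05), \rel, dur * rrand(0.8, 1.2)
                ]);
            };
        };
    }.play;
};
)

~bellSketch.(220, 0.1, 3, 0, 7); // fundamental, amplitude, duration, pan, interval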

~shake

~shake duplicates the ~bell function using a Routine with randomized .wait times, creating slight delays between the Synth instances. Once defined, the ~shake function generates a sound using the following arguments:

~shake.(fundamental frequency, amplitude, duration, interval value of overtones, delay time)
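
One possible shape for this wrapper is sketched below. The three repetitions and the randomization range around the delay time are assumptions, and it builds on ~bellSketch from the previous sketch.

// A sketch of a ~shake-like wrapper (assumed; requires ~bellSketch from above):
// retrigger the bell a few times with short, randomized waits in between.
(
~shakeSketch = { |freq = 200, amp = 0.1, dur = 3, interval = 7, delay = 0.06|
    Routine {
        3.do {
            ~bellSketch.(freq, amp, dur, rrand(-0.5, 0.5), interval);
            rrand(delay * 0.5, delay * 1.5).wait;
        };
    }.play;
};
)

~shakeSketch.(220, 0.1, 3, 7, 0.08); // fundamental, amplitude, duration, interval, delay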

~sop, ~tenor, and ~bass

The three functions ~sop, ~tenor, and ~bass are Routines that play ~shake or Synth(“NiceB”) with frequencies picked from the arrays ~scale or ~scalebass. The global variables ~bpm and ~beat determine the wait times. The three Routines receive .play and .stop messages according to the timings set by SystemClock.sched.
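
The sketch below shows the idea for a single voice. The scale, tempo, rhythm choices, and timings are assumptions (the real ~scale, ~scalebass, ~bpm, and ~beat values live in SeoSeokBell_DotZip.scd), and it reuses ~shakeSketch from the previous sketch.

// A sketch of one voice and its scheduling (assumed values; reuses ~shakeSketch):
// a soprano-like Routine picks pitches from a scale and waits a few beats,
// while SystemClock.sched starts and stops it at fixed times.
(
~scale = [60, 62, 65, 67, 70].midicps; // assumed scale, MIDI notes converted to Hz
~bpm = 54;
~beat = 60 / ~bpm;

~sopSketch = Routine {
    loop {
        ~shakeSketch.(~scale.choose, 0.08, 4, 7, 0.06);
        ([1, 2, 3].choose * ~beat).wait;
    };
};

SystemClock.sched(0, { ~sopSketch.play; nil });   // start immediately
SystemClock.sched(30, { ~sopSketch.stop; nil });  // stop after 30 seconds
)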

Uniquely Electronic

In electronic music, a sound design process is often the starting point of a composition. Seoseok Bell began as an exercise inspired by the Risset bell. I wanted to create a bell sound using additive synthesis. However, such an exercise should not end as sound design only; the composer or researcher should present the findings in a musical context.


What I Remember About My First Gig

I scanned a photo of my first electronic music improvisation gig in 2002.

It counts as the first gig, for it was the first performance in front of an audience that did not include anyone I knew. It was also the first time I played a complete set with an electronic instrument. The concert was probably in late 2001 or early 2002, and I don’t remember much of it other than bits of incidents and happenings. A personal keyword unifying the gig is “uncomfortable.”

  • It was the second time in my life traveling to Brooklyn.  When I arrived at the performance space, everybody except me seemed confident about what they were doing. I was forcing myself not to show my newbieness. It felt weak to show how impressed I was with others’ art and sound.
  • The event organizer introduced himself as Doc. Doc provided a place to hang out in an apartment and food for all the performers. He made a soup (a chili?) with too much ginger. After hurrying to eat the soup among people I didn’t know, I stayed in the apartment’s hallway.  In hindsight, everybody was nice to me. I just did not know how to react to kindness from strangers.
  • I don’t remember much about the performance. From the looks of the picture, I was performing nervously and seriously. I had the attitude of playing in a college recital hall, but the stage was a folding table in a dark basement with DIY lighting. I did not make eye contact or interact with the audience.
  • I felt I did not belong at the event or in the culture around it. So I went to sleep early in the room of a person I did not know, woke up at dawn, and hurried to the bus stop. I didn’t say my thank-yous or goodbyes.

That was my queasy first gig. The quality of the music I presented was OK, but the quality of my social performance was abysmal. I could have made friends and fans, but I ran away. Now that 20+ years have passed since that first gig, I feel comfortable socializing with strangers (if needed). It took me a while to get there. Perhaps teaching helped. I share this experience with my students, who are younger than the 2001 me, to let them know that it is OK to feel bad after a gig. A career does not end there. Just do more performances, make a few more mistakes, and find a way to feel comfortable showing what you love in front of people you do not know.

I wish I had audiovisual documentation of the performance, but I only had a Motorola cell phone at the time. However, I found a backup of a video demonstration Luis and I made a few weeks after the Brooklyn performance. It is delightful to see how much my musical practice has changed, and how much it has stayed the same, since 2002.

I used a loaned Radio Baton, and my friend Luis Maurette used my Phat-boy MIDI controller. We built a Max patch for the setup and ran it on my very first iBook. The video was shot in an ensemble practice room at Berklee College of Music. Luis and I were Electronic Production and Design students (back then, the major was called Music Synthesis). Ableton Live had been released just a few months earlier.