All posts by joowonpark

847 Twins – Brief Analysis

The production of 847 Twins, the title track of the album Fan Art, is documented in four sections. The first section, Program, is a one-paragraph description of the music written for a concert booklet or album promotion. I share information and thoughts that may help listeners enjoy the music. The second section, Form, is for creators who want to learn how I used electronic sounds in the composition. The third section, Code, is for technologists who want to learn how I designed the piece in SuperCollider, a code-based audio app. Links to the code are available here. The last part, Anecdote, offers extra narrative relevant to 847 Twins but is optional for enjoying the piece.

If preferred, read this article in PDF format.

Program

847 Twins is a two-movement piece based on the harmonic progressions of Prelude & Fugue in C Minor by J.S. Bach. Electronic remakes of Bach are a well-known practice, pioneered by Wendy Carlos (Switched-On Bach) and Pierre Schaeffer (Bilude). I learned so much from reading about and listening to their works, and J.S. Bach is my hero composer. It therefore seemed appropriate to dedicate a song to my musical cornerstones in an album about fandom.

Listen to the tracks linked below before reading the next sections.

The tracks are available on other major platforms at  https://noremixes.com/nore048/

Form

Mvt I. Pluck

Pluck and Blip, the two movements of 847 Twins, are generated by algorithms written in SuperCollider that use the harmonic progression of the Prelude in BWV 847. The downloadable code, 847_Pluck.scd, generates randomized voicing patterns played by a guitar-like synth. Below is a step-by-step explanation of the composition process.

  1. Design an electronic string instrument. Each note of this instrument is detuned at a different ratio every time the string is “plucked.” The note’s duration, dynamic, string stiffness, and pan position also vary randomly. 
  2. Using the instrument in Step 1, strum a chord with the notes of a measure in BWV 847. Unlike a guitar, a strum of a chord can have multiple pan positions, accents, and note durations due to the randomization in Step 1.
  3. Each measure of BWV 847 is played four times before advancing to the next measure.
  4. Add a bass part with gradually increasing loudness. It plays the lowest note in the corresponding measure. 
  5. Add the intro and the outro for a better form. They are not quoted from BWV 847.
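The five steps above can be sketched in plain Python (the actual piece is written in SuperCollider). The parameter ranges, note numbers, and function names below are my own illustrative guesses, not values from 847_Pluck.scd:

```python
import random

def pluck_note(midi_note):
    """One 'pluck' of the imaginary string: each parameter is
    re-randomized on every pluck, mirroring Step 1. Ranges are
    illustrative guesses."""
    return {
        "note": midi_note,
        "detune": random.uniform(-0.3, 0.3),   # semitone offset
        "amp": random.uniform(0.2, 0.8),
        "pan": random.uniform(-1.0, 1.0),
        "dur": random.uniform(0.5, 2.0),
    }

def strum(measure_notes):
    """Step 2: strum every note of a measure, up or down."""
    order = list(measure_notes)
    if random.random() < 0.5:
        order.reverse()                         # down-stroke
    return [pluck_note(n) for n in order]

def pluck_movement(measures, repeats=4):
    """Steps 3-4: each measure is strummed `repeats` times,
    with a bass event on the lowest note of the measure."""
    events = []
    for measure in measures:
        bass = min(measure)
        for _ in range(repeats):
            events.append({"bass": bass, "strum": strum(measure)})
    return events

# Two hypothetical measures standing in for the BWV 847 progression.
demo = pluck_movement([[60, 63, 67, 72], [59, 62, 65, 71]])
```

Because every pluck re-rolls its detune, pan, and duration, two renditions of the same measure never sound identical, which is the point of Step 1.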

In short, the first movement of 847 Twins is a reinterpretation of BWV 847 featuring an imaginary string instrument and a synth bass. I loved how Bach created exciting music with a predictable rhythmic pattern; the key was harmony and voicing. I wanted to emphasize that aspect with an additional layer of dynamics and articulations in Pluck. The added bass line, which imitates the “left hand” of basso continuo, fills in the low-frequency spectrum of the piece. The bass part is best experienced with headphones or a subwoofer.

Mvt II. Blip

The first movement lacked elements of counterpoint, so I tried to make an electronic polyphony in the second movement. In Blip, each measure has 3-6 parts playing different phrases derived from a measure in BWV 847. The phrase shape, the number of voices, and the articulation are determined randomly at every measure and create a disjunct yet related form. Schaeffer’s Bilude explores a similar idea by combining piano performance and recorded sounds.

Below is my process of creating a random phrase generator. Please run 847_Blip.scd to hear the piece.

  1. Create a list of pitch sets by reducing repeating notes in each measure of BWV 847.
  2. Make three different synth sounds.
  3. Make a phrase generator that uses the list in Step 1 and synths from Step 2. The instrument choices, phrase length, note subdivisions, and articulations are randomized. The SuperCollider code also has the option to generate a rhythmic variation (i.e., insert rest instead of a note). 
  4. Make a polyphony generator that spawns the phrase generator described in Step 3. The number of polyphonic voices and their octave transpositions are random. 
  5. Play and record Step 4 twice. Then, import the tracks to a DAW. Insert a reverb plugin on one track. The reverb should be 100% wet. 
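Steps 1-4 can be sketched as follows (in Python rather than SuperCollider; the synth names come from the piece, but every range and probability here is an assumption on my part):

```python
import random

SYNTHS = ["PBeep", "TBeep", "SBeep"]  # named after the three SynthDefs

def phrase(pitch_set, allow_rests=True):
    """Steps 2-3: a short phrase drawn from one measure's pitch set.
    Instrument, length, and rest insertion are randomized; the exact
    ranges are illustrative guesses."""
    length = random.randint(2, 8)
    notes = []
    for _ in range(length):
        if allow_rests and random.random() < 0.2:
            notes.append(None)                 # rest instead of a note
        else:
            notes.append(random.choice(pitch_set))
    return {"synth": random.choice(SYNTHS), "notes": notes}

def section(pitch_set):
    """Step 4: spawn 3-6 phrases at random octave transpositions."""
    voices = random.randint(3, 6)
    out = []
    for _ in range(voices):
        octave = random.choice([-24, -12, 0, 12, 24])
        p = phrase(pitch_set)
        p["notes"] = [n + octave if n is not None else None
                      for n in p["notes"]]
        out.append(p)
    return out

measure = section([60, 63, 67])  # a hypothetical reduced pitch set
```

Each call to `section` re-rolls the voice count, instruments, and transpositions, which is why one measure can be a duet and the next an octet.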

The algorithm described above creates different timbres, polyphonic patterns, and numbers of voices at every measure. Furthermore, every rendition of the SuperCollider code makes a unique version of Blip. One measure can be a duet of two-note phrases, and the following measure can be an octet of eight phrases played in a four-octave range. The room sound created by the DAW reverb plugin doesn’t reflect the source, but it sounds similar enough to be heard as part of a whole.

Code

Mvt I. Pluck

The SuperCollider file for Pluck consists of seven parts. Please download and use 847_Pluck_Analysis.scd to hear and modify each part. Make sure to run the line s.options.memSize=8192*16 to allocate enough memory. 

  • SynthDefs: SynthDef(“Gtr”) uses a Karplus-Strong physical model with controllable pan, frequency, stiffness, amplitude, and duration. SynthDef(“Bass”) makes a sinusoid tone with a percussive amplitude envelope. The UGen Lag.kr smooths the sharp transient of the amplitude envelope.
  • ~onenote: this function uses two instances of SynthDef(“Gtr”) to create a detuned note. The amount of detuning is randomized along with the other parameters of the SynthDef.
  • ~stroke: this function creates instances of ~onenote with pitches specified in the ~chords array. ~chords is a collection of all the notes in the Bach Prelude, categorized and indexed by measure number. The order of the notes in a measure is random. ~stroke plays the chord in sequence or in reverse to simulate a guitar’s up and down stroke motions.
  • ~strums: this function continuously triggers ~stroke. The global variable ~pulse determines the tempo. ~strumsend function is used once for the ending. 
  • ~clock: this function changes the chord progression at time intervals set by the global variable ~mdur. It also changes the parameters of ~strums by altering the values of the global variables ~mm, ~accent, ~volume, ~notedur, and ~stiff. Note that both the ~strums and ~clock functions must run simultaneously for a correct chord progression.
  • ~bassline: this function plays SynthDef(“Bass”) a few seconds after the start of the piece. It uses the if condition to change the rhythmic pattern. The line pitch=~chords.at(count).sort.at(0) picks the lowest note of each measure as a bass note.
  • SystemClock: this scheduler syncs ~strums, ~clock, and ~bassline to play a version of Pluck. Every rendition of SystemClock will make a new variation of the track.    
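A minimal Python model of how ~clock and ~bassline share state, as the bullets describe. The measure data here is a stand-in; the real notes live in the ~chords array of 847_Pluck_Analysis.scd:

```python
# Stand-in for the ~chords array: one list of MIDI notes per measure.
chords = [[60, 63, 67, 72], [59, 62, 65, 71], [60, 64, 67, 72]]

def clock_step(count):
    """~clock: advance the measure index at each ~mdur interval,
    wrapping around the progression."""
    return (count + 1) % len(chords)

def bass_pitch(count):
    """~bassline: the SuperCollider line
    pitch=~chords.at(count).sort.at(0) picks the lowest note of the
    current measure. Sorting and taking the first element does the
    same thing here."""
    return sorted(chords[count])[0]
```

This is why ~strums and ~clock must run together: ~strums reads whatever measure index ~clock last wrote, so without ~clock the harmony never advances.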

Mvt II. Blip

The SuperCollider file for Blip consists of four interconnected parts. Please download and run 847_Blip_Analysis.scd to hear each part.

  • SynthDefs: The three SynthDefs, PBeep, TBeep, and SBeep, are all slightly detuned percussive instruments featuring a classic oscillator waveform such as sine, triangle, or pulse.
  • ~phrase: this function creates a short melodic pattern based on pitch sets received from the global variable ~arp. It controls which SynthDef to use, the amplitude, phrase length, note duration, and transposition. The last two arguments activate or deactivate the random rhythm generation and arpeggio pattern variation.
  • ~section: this function duplicates ~phrase. The number of ~phrase and octave transpositions are randomized. The function also makes further variations on amplitude, note duration, and panning.
  • The Routine in the last section uses the ~piece array as a cue list with details on when and how to trigger the ~section. The array ~chords is a list of all the notes in corresponding measures of the Bach Prelude. The Routine also sends a changing pitch set from ~chords to ~phrase via the global variable ~arp.

Anecdote

847 Twins does not use the Adagio section of the Prelude and Fugue. When composing the first movement, I could not transition from a constant 16th-note drive to a free and improvisational ending. I tried to address this incompleteness by writing a complementary movement, Blip, but it did not work out. I found a satisfying solution six months after completing 847 Twins by incorporating an instrument I could improvise on aptly and freely. Nim6tet, the sixth track in Fan Art, has six layers of no-input mixer improvisation guided by the chord progressions of the Adagio section. It shamelessly shows off no-input mixer sounds I cannot create with other instruments.

It took many attempts over 1.5 years to finish three tracks about the first half of BWV 847. The electronic interpretation of the Fugue is a puzzle yet to be solved.

More Analysis and Tutorials

Updated on 4/13/2023

Personal Statement – Explanation and Tips

Statement letter for tenure portfolio – music technology

My promotion and tenure portfolio included a personal statement. According to my department’s guidelines, a 3-5 page personal statement “addresses how their research, scholarly, or creative work has developed over time and what activities are likely to be undertaken in the future.” The document is sent to the external reviewers at the beginning of the evaluation process and is read by the department, college, and university committee. Like statement letters of any job application, the document is essential for both applicants and evaluators. 

Tenure-track faculty in music get varying degrees of help and guidance for the personal statement, but examples are rare in the public domain. Information on how to write or evaluate the statement of a tenure-track faculty member specializing in music technology is rarer still. Therefore, I share mine as an example from a tenure-track music technology faculty member. I submitted my tenure packet in Summer 2021 and was promoted to Associate Professor of Music in Spring 2022 at Wayne State University, an R1 public institution. The complete guidelines for WSU’s promotion and tenure packet are here.

Tips

If you are writing a personal statement for a university tenure evaluation in the field of music technology, here are a few tips.

  • Seek an advisor in your institution. Learn the priorities and process specific to your job. There is a chance that you are the only, and the first, tenure-track music technology faculty member. If so, find out what counts as acceptable activities and achievements for the review. (Does my solo performance of a no-input mixer and drum machine count as creative research?)
  • Know that the statement may be read and evaluated by both experts (external reviewers) and non-experts of your field (university-level committee). You need to convince people you do not know that you are good at what you are doing. (How do I convince my colleagues in Math that my no-input mixer and drum machine piece is research?)
  • Provide accessible and measurable evidence. Examples include a list of invited performances and documentation of refereed competitions, journals, conferences, and tracks released by record labels. Other people or institutions accepting your work proves that your creative outputs are valued. (Multiple conference acceptances and guest artist invitations from other universities prove that my no-input mixer and drum machine are making an impact.)
  • Do not be humble. Your department committee will need to convince the college-level committee how great you are. Your college-level committee needs to convince the university-level committee how awesome you are. Give them measurable achievements so that they can advocate for you. (Hey, Joo Won’s no-input mixer piece may not be pleasing, but look at the number of conference acceptances he received to play the piece.)

The tenure process was not easy, but it gave me an opportunity to evaluate my career thoroughly. I look forward to writing a new version of the personal statement for the next promotion (which will be many years from now).

Personal Statement – For Promotion and Tenure Evaluation

Joo Won Park 

For Promotion and Tenure Application (2021)

A music technologist is a composer, performer, and instrument maker whose primary tool is electronic devices. I am a music technologist specializing in electroacoustic composition, solo performance, and electronic ensemble.  As a teaching musician, I share ideas of uniquely electronic sounds and performance practices with my students. Over one hundred presentations of my work in the past five years prove my contribution and significance in the field.   

I strive to be prolific, consistent, and strategic in my creative process to be a better scholar with a distinct sound. Highlights of my research output since my hire at Wayne State University in 2016 include:

  • 2 full-length solo albums 
  • 3 peer-reviewed albums 
  • 3 collaborative albums   
  • 18 performed and recorded electroacoustic compositions in solo or ensemble format 
  • 1 peer-reviewed article on music technology at an international-level journal  
  • 2 articles for a local music agency and the College Music Society  
  • 5 electronic music apps 

A summary of publications, presentations, grants, and awards since 2016 proves that my research is a significant contribution to the field of electroacoustic music:  

  • Presented electroacoustic compositions at 34 peer-reviewed conferences and festivals  
  • Received 38 invitations to present electroacoustic compositions at national and regional events  
  • Presented 19 shows as a featured solo artist or ensemble director  
  • 21 different electronic ensembles performed my pieces nationwide  
  • 26 paper presentations and guest artist talks  
  • Produced 12 campus electronic music concerts   
  • Received 4 grants or awards: High Wire Lab Award ($1000, external), Arts and Research Humanity Research Support Program ($5500, internal), New Music USA Grant ($2970, external), and Knight Arts Challenge Grant ($5000, external) 
  • Received 2020 Kresge Artist Fellowship ($25,000) 

A list of professional services shows the electroacoustic community’s trust in my experience and expertise. 

  • Society for Electroacoustic Music in the United States: board member since 2016  
  • Korean Electroacoustic Music Society’s Conference: editorial member since 2013 
  • Associate Director of Third Practice Electroacoustic Music Festival since 2009  
  • Juror in 19 national and international peer-reviewed conferences and journals 

As electroacoustic music may be an unfamiliar topic, I pay extra attention to guiding the audience through my creative and aesthetic choices. In addition, reviews of my work in the media highlight my creative process. Below is an excerpt from my interview with Cleveland Classical in 2019.

“It’s one thing to push yourself out of your comfort zone. It’s quite another to deliberately put yourself in risky situations over and over again — part of the artistic strategy of electroacoustic composer and improviser Joo Won Park. “I like to solve a puzzle in front of the audience,” he said during a telephone conversation from Detroit.” [H-12]

The puzzle I solve involves technology. I create and apply technology to extend artists’ ability to produce sound beyond human capacity. All my works feature what machines can do uniquely or better than humans. Touch [M-1] is a work I perform most often in tours and solo concerts. It is a culminating work that stemmed from 100 Strange Sounds (www.100strangesounds.com), a YouTube project of one hundred solo improvisations featuring everyday objects and electronics. In my interview with Paolo Yumol for Killscreen.com, he articulated the goal of the project, writing: 

“In many ways, 100 Strange Sounds captures the spirit of Park’s work as a whole; it demonstrates the lengths to which Park will go to find the musicality in his everyday surroundings, to find the beauty in mundanity. Park sources ideas from his immediate surroundings and day-to-day experiences, whether it be spending time playing with his kids or walking around Detroit.” [H-15]

Touch and 100 Strange Sounds have positive elements I continue to cultivate in other works. They also taught me a limit I had to overcome. My pieces before 2016 used specific, expensive, and difficult-to-operate software and hardware. These instruments ensure high audio quality and performance capability, but few performers can replicate or present the piece without my assistance. Addressing technical affordability and replicability while maintaining satisfying artistic quality is my ongoing mission, and I approach it by starting a composition with instrument design. 

I code free and original software synthesizers and performance systems that run on multi-platform computers. Doing so lowers the technological and financial barrier for performers who wish to present my pieces. I also choose cheap and readily available hardware and build simple interfaces to minimize a technical expert’s involvement. As one can hear in Hungry [M-1], taking out everything except the essence is more than a presentation method; it is an aesthetic goal I follow. In his drawing tutorials for children, Mo Willems says, “simple and easy are opposites,” and I wholeheartedly agree.

Like Willems’ picture books, I want the audience and the performers to be delighted when they experience my music. If listeners and performers feel unexpected joy by discovering a musical relationship between humans and machines, they understand my intention. In PS Quartet No.1, [M-1], ensemble members use their decade(s) of video game muscle memory to make music with game controllers. At the end of Beat Matching [M-1], I ask performers to shape their mouths as if they were making funny sounds while brushing their teeth. When these human actions interact with a custom application I created, the ensemble engages in a uniquely electronic sonic experience. These sounds are also guaranteed to be different at each performance by design. 

I aim to craft these unrecordable moments in electroacoustic music. As a director of the Electronic Music Ensemble of Wayne State (EMEWS), I share this goal with a talented group of students. EMEWS is an all-undergraduate electronic music ensemble consisting of current Wayne State Warriors. EMEWS won the 2019 New Music USA Grant to do a week-long East Coast tour [II-C4]. I am confident that the group, which was as large as 22 students in Winter 2019, was one of the most performed and traveled undergraduate electronic music ensembles in the nation before the pandemic.

In 2018, I received the Arts and Humanities Research Support grant to further research electronic music ensemble performance. The grant allowed me to compose and present EMEWS pieces that are uniquely electronic and transferrable. The twenty other electronic ensembles that performed my pieces show that I achieved this goal. Among those ensembles, eighteen rehearsed my pieces without my involvement. To close the Arts and Humanities project, I published an article about ensemble instrument design in a peer-reviewed journal [III-D2].

When institutions invite me for guest lectures and presentations, I highlight the importance of improving electronic music craftsmanship. One method I recommend is choosing one or two electronic instruments to practice consistently and explore in all their possibilities. For example, Cobalt Vase and Func Step Mode [M-1] are pieces for a drum machine that I have practiced for a few years. The instrument needed for these pieces is readily available, but the performance techniques I explore are not. To teach and document the methods, I developed a graphical notation for the drum machine. Seven Bird Watchers [M-1] uses such notation, and the ensemble members can now create, in one rehearsal, sounds that took me a year to develop.

The COVID-19 pandemic followed shortly after the recording of Seven Bird Watchers. I feared that my research focus on the here-and-nowness of electroacoustic music might need to pause. However, I learned that I could share my research without compromise by rethinking the presentation method. Computer Music Practice Examples (CMPE) [M-1] is a series consisting of apps, streaming videos, code, and tutorials. Participants download a free application to make music and learn about its production process by watching original videos and slides. By bypassing streaming audio and excluding expert performers, CMPE lets users experience what I hear in my studio with a few mouse clicks. Additionally, they have permission to use or modify the apps for their artistic practice.

Computer Music Practice Examples is a continuation of my goal to create and present a uniquely electronic, unrecordable, and delightful experience. The project may not have a chance to be evaluated by peers before the tenure review, but the Facebook data below demonstrates its potential and influence.

  • ~16500 total views (5/16/20-7/25/21) 
  • 945 followers (as of 7/25/21) 

I am confident that I became a better researcher, musician, teacher, and community leader over the past years working at Wayne State University. The extra challenge in 2020-2021 made me even more ready to tackle projects requiring long-term commitment and institutional support. When tenured, I plan to mature the relationship I built with the Detroit music community through continuing involvement. I also have a vision to create more significant interdisciplinary projects. A committed partnership with the city and the state, combined with the Music Technology program’s steadily growing alumni, will attract new undergraduate applicants. They will become a regional and national musical force when they graduate. I will refine and enhance this virtuous cycle by continuing to be a creative role model.  

Another post-tenure goal is to share my expertise with the broader community by increasing the number of presentations and publications at international-level conferences. As for teaching, I want to position Wayne State’s Music Technology program as the leader in Michigan and beyond. I want to devise a plan to attract more out-of-state and international students, working professionals, and established artists. This progress will be parallel to the constant update and improvement of the current curriculum.  

The included tenure and promotion packet provides details on my steady growth as a music technologist. The document also proves my long-term and continuing commitment to creative research, teaching, and service. For the most recent updates, please visit my website www.joowonpark.net

Distant Yet Audible

Distant Yet Audible, Commissioned and Premiered by Sarah Plum, VTNMT Festival, December 7, 2022

Link to Score and Performance Audio Files

Hardware & Software Needed

  • A robot vacuum with remote control: the composer used Eufy Robovac 11S
  • Bluetooth speaker: a light but loud model that can go on top of the robot vacuum
  • Audio player app on a phone or a computer: must have shuffle and Bluetooth function
  • A stage with minimum obstacles for the robot vacuum

Performers needed

  • Violinist: on stage along with the robot vacuum
  • Tech operator: controls the robot vacuum and audio player off stage

Hardware Preparation

  1. Pair the Bluetooth speaker and the audio player
  2. Attach the Bluetooth speaker on top of the robot vacuum. Secure it with tape
  3. Download all audio files (mp3) from https://joowonpark.net/distantyetaudible 
  4. Put the audio files into a playlist in the music player app
  5. Turn on the Shuffle mode to randomize the playback order of the files
  6. Place the robot vacuum and its charger on the stage

Score example and explanation 

Cue #

Low Energy / Slow / Quiet | High Energy / Fast / Loud
  • In each cue, the violinist chooses a phrase from 4-6 options. If the sound emitted from the robot vacuum seems to be in the low energy/slow/quiet category, choose a phrase from the left column. Choose a phrase from the right column if the vacuum sound fits the high energy/fast/loud category.
  • The following determines the cue duration.
    • Duration of the audio file. Go to the next cue if a new audio file plays. 
    • Time for the vacuum to take a turn. If the vacuum changes its direction, go to the next cue.
    • The violinist can signal the tech operator to go to the next cue.
  • The regular bar line in the notation indicates the duration of the cue. If a cue lasts 30 seconds, play the phrase between the regular bar lines for approximately 30 seconds.
  • The repeat bar line in the notation indicates that the phrase should be repeated until the next cue.
  • Improvise the pitch and rhythm according to the note shapes and positions
    • Notes within the bar line: play mid-range notes
    • Notes above the bar line: play high-range notes
    • Notes below the bar line: play low-range notes
    • Note durations: interpret according to the current cue
      • Whole notes and half notes: very slow and slow 
      • Quarter notes: not fast nor slow 
      • Eighth notes: fast 
    • Glissando: notated as a line connecting two notes
    • Rectangular note heads: play an airy, noise-like yet pitched tone
    • X note heads: mute strings

Performance Instruction for the Tech Operator

  • The Tech operator’s main job is to cue the robot vacuum and the audio player.
  • If the audio player plays the next track, go to the next cue by changing the vacuum’s direction.
  • If the vacuum turns to a different direction before the end of the audio file, go to the next cue by skipping to the next audio file on the audio player.
  • At the performer’s signal to go to the next cue, change the direction of the vacuum and skip to the next audio file.
  • Note that the beginning of the piece (cue #0) is a vacuum solo. The audio file starts to play at cue #1
  • Note the specific sequence of turning off the vacuum and the audio player at cues #28-30 
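The operator rules above amount to a small state machine: any of three events advances the cue, and each event calls for a different response. Below is a Python sketch of how I read them; the event names and action labels are my own, not part of the score:

```python
def next_cue(cue, event):
    """Cue-advance logic for the tech operator, as described in the
    performance instructions. Returns (new_cue, action). The special
    cases (cue #0 vacuum solo, the shutdown sequence at cues #28-30)
    are not modeled here."""
    if event == "track_ended":        # audio player moved on by itself
        return cue + 1, "turn_vacuum"
    if event == "vacuum_turned":      # vacuum changed direction early
        return cue + 1, "skip_track"
    if event == "violinist_signal":   # performer cues the advance
        return cue + 1, "turn_vacuum_and_skip_track"
    return cue, "wait"                # nothing happened; hold the cue
```

The symmetry is the interesting part of the design: whichever element moves first, the operator answers by moving the other, so vacuum and audio stay loosely synchronized.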

Contact Joo Won Park for questions or inquiries (https://www.facebook.com/joowonmusic/ or https://joowonpark.net/)

End Credits – Brief Analysis

End Credits, a fixed media composition included in the Fan Art album, is documented in five sections. The first section, Program, is the program note to be included in a concert booklet or album promotion. I share information and thoughts that may help listeners appreciate the music better. The second section, Form, is for creators who want to learn how I used electronic instruments to create a complete piece. The third section, Code, is for technologists who wish to understand how I coded the piece. Musicians familiar with code-based apps like SuperCollider and Max will benefit from analyzing the code. In the fourth section, Inspirations, I share why I chose to write the piece. The content is too personal to be in the program. The last part, Uniquely Electronic, is a bonus section featuring sounds and ideas I could not express in an album format. Fan Art is available as streaming stereo tracks, but the songs were originally designed for multi-channel sound installation. The last section provides resources to realize the songs in Fan Art at full capacity.

A PDF Version of this article is also available.

Program 

End Credits is an algorithmic composition based on the harmonic progression of Debussy’s Clair de Lune. The SuperCollider code written for the piece generates notes with unique overtones, and the overall sound reminds me of organ music at viewings. My friend and I joke about writing each other’s farewell music, and now I have one for him. If he doesn’t like it, I will use it as my exit theme.

Listen on other platforms: https://noremixes.com/nore048/

Form 

End Credits uses the harmony of Debussy’s Clair de Lune. The downloadable SuperCollider code, EndCredits.scd, makes sound according to the following instructions.

  1. Choose a list containing all notes present in measure x.
  2. Scramble the order of the notes.
  3. Play notes at random timing. There is a 50% chance of two notes being played simultaneously.
  4. When all notes in the list are used, move to measure x+1.
  5. Repeat Steps 1-4 at a slow tempo (quarter note = 3.6 seconds). End Credits uses the harmonic progressions of mm. 1-27 of Clair de Lune.
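The measure-level logic can be sketched in a few lines of Python (the piece itself is SuperCollider; the note numbers below are placeholder measures, not the actual Clair de Lune data):

```python
import random

def play_measure(notes, harmony_prob=0.5):
    """Steps 1-3: scramble a measure's notes and emit them one at a
    time; with probability `harmony_prob`, a note sounds together
    with the previous one. Timing is left out of this sketch."""
    order = random.sample(notes, len(notes))   # Step 2: scramble
    events = []
    for i, n in enumerate(order):
        simultaneous = i > 0 and random.random() < harmony_prob
        events.append({"note": n, "with_previous": simultaneous})
    return events

def end_credits(measures):
    """Steps 4-5: when a measure's notes are exhausted, advance to
    the next measure in the progression."""
    return [play_measure(m) for m in measures]

# Two placeholder measures standing in for the Clair de Lune harmony.
piece = end_credits([[48, 60, 63, 67], [46, 58, 62, 65]])
```

Every note of each measure is used exactly once, only the order and coincidences change, so the harmony stays Debussy's while the surface stays unpredictable.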

The EndCredits.scd code also generates each note according to the following instructions.

  1. Make a sine tone with randomized slow vibrato and tremolo using two LFOs. Randomize amplitude envelope, LFO rate, LFO amount, and pan positions. 
  2. Make a single note by combining 5 sounds made in Step 1. Then, randomize each component’s frequency and pan position to make a slightly detuned note with a wide stereo image.
  3. Make a single note with a random number of overtones using the note generated in Step 2. The note’s duration is also random but is almost always longer than a quarter note (3.6 seconds).
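The three layers of note generation nest as follows; this Python sketch models only the structure, and all the ranges are my own guesses rather than the values in EndCredits.scd:

```python
import random

def cell(freq):
    """Step 1: one sine component with randomized LFO settings
    (vibrato and tremolo) and pan position."""
    return {
        "freq": freq * random.uniform(0.995, 1.005),  # slight detune
        "vibrato_rate": random.uniform(0.1, 1.0),
        "tremolo_rate": random.uniform(0.1, 1.0),
        "pan": random.uniform(-1.0, 1.0),
    }

def detuned_note(freq, cells=5):
    """Step 2: five detuned cells combine into one note with a
    wide stereo image."""
    return [cell(freq) for _ in range(cells)]

def organ_note(freq, max_overtones=10):
    """Step 3: a random number of overtones, each itself a detuned
    note at an integer multiple of the fundamental."""
    n = random.randint(1, max_overtones)
    return [detuned_note(freq * k) for k in range(1, n + 1)]

note = organ_note(110.0)  # one note of the imaginary organ
```

Because the overtone count is re-rolled per note, each note effectively pulls a different organ stop, which is the "changing the stops at every note" effect described below.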

The resulting sound is an imaginary organ capable of changing the stops at every note. The instrument also seems to have multiple sustain pedals.

The compositional objective of making End Credits is akin to minimalism. I wanted to create a simple process that yields unexpectedly delightful sounds. So I simply made an ambient piece using additive synthesis and traditional harmony with computer-aided instructions. There’s no new technology or concept, but we create new sounds by combining old ideas. 

Code

EndCredits.scd, the SuperCollider file I made to generate the album version of End Credits, has the following sections.

  • SynthDef (“Cell”): makes a sine tone with controlled random values
  • ~note: make a note event using SynthDef(“Cell”)
  • ~chords: an arrayed collection of note numbers representing harmonic contents.
  • ~event: play one measure using ~note with pitches  in ~chords
  • SystemClock: play the music 

To make sense of this section, open EndCredits_Analysis.scd in SuperCollider and refer to the code while reading the next sections. The analysis .scd file contains simplified, working code.

SynthDef(“Cell”) 

End Credits uses one SynthDef featuring two LFOs, one ASR envelope, and one stereo sine tone generator. By playing two instances of SynthDef(“Cell”) with slight pitch differences, we can make a simple detuned sound. The code in section //1. “Cell” without randomness shows the simplest form of the instrument. The actual SynthDef used in the piece is in //2. “Cell” with randomness. It applies ranged random values to vary the amplitude envelope’s attack and release times, the LFO’s frequency, phase, and amplitude, and the pan position.

~note

~note is a function with the following parameters.

~note.(pitch in MIDI number, duration (sec), volume (0-1), number of overtones);

By providing a number for each parameter, ~note creates a tone with a varying number of detuned overtones. As we can observe in //3. ~notes, loops (.do) and ranged random number generators (rrand) are used extensively. Run the following lines in SuperCollider to hear the difference. Notice that the sounds are not identical when the code is re-evaluated.

~note.(50,10,0.3,1); //no partials

~note.(50,10,0.3,5); //some partials

~note.(50,10,0.3,10); //many partials

~chords and ~event

~chords is an array of interval values representing the notes in a measure of Clair de Lune. As evident in //4. ~chords, the method .scramble appears at the end of every measure to randomize the sequence order. For analysis purposes, only three lists are inside ~chords.

~event plays ~note according to the pitch choices in ~chords. One measure in End Credits is thus generated with the following parameters:

~event.(measure index number, note duration factor, amplitude, overtones, harmony probability (0-1.0))

As we can observe in //5. ~event, the note duration factor is a multiplier for each note’s duration. The larger the number, the longer the note duration, resulting in a sustain pedal-like effect. The overtone amount also gets a slight randomization for variety. The last parameter, harmony probability, can control the chance of the following note being played simultaneously. 

SystemClock

The code in //6. SystemClock is responsible for putting everything together to produce audible sounds. The section uses Routine to make ~event go through all the measures provided in ~chords. SystemClock provides a 2-second silence, a little pause before everything begins.

Inspirations

I composed End Credits in February 2021, at the end of the strict COVID-19 isolation days. I must have been listening to Debussy recordings often to keep myself together. Clair de Lune’s lowest note, the Eb in mm. 15, felt like the most beautiful piano note then. The timing was perfect, and it resonated with the piano’s body and the listener’s mind. I wanted to recreate that in an electronic music context. So, I created a sound that imitates the slight detune of the low range of the piano. Then, perhaps due to COVID blues, I instructed the computer to play those sounds in blurry slow motion.

Uniquely Electronic

End Credits has two playback modes: ~onetime and ~infinite. The one-time version with a fixed duration (8:26) is available on major distribution platforms, and any media player can play it. The installation version goes on indefinitely with varied timbre, timing, and duration; the listener will need to run EndCredits_infinite.scd in SuperCollider. Let it run for hours on solemn and not-so-happy occasions!
