In Four Hit Combo, each laptop ensemble member uses four audio files to create twenty-six flavors. Musical patterns arise from repetitions (loops), and different combinations mark the form of the music. The ensemble members prepare their own samples before the performance, and they control loop start points and durations according to the score and the conductor’s cues. Because no specific audio files are attached to the piece, each performance can offer a unique sonic experience.
Laptop: each performer needs a computer with SuperCollider installed.
Amp: connect the laptop to a sound reinforcement system. If the performance space is small, the laptop’s built-in speaker may suffice.
Pre-Performance Preparation
Determine a conductor and at least three performers. If there are more than three performers, parts can be doubled.
Each performer prepares three audio files (wav, aif, or mp3). The first file should contain a voice, the second a pitched instrument sound, and the third a percussion sound. No file should be too short (less than a second) or too long (more than a minute). The [voice], [instrument], and [percussion] files should be different for every performer.
While the voice, instrument, and percussion files are different for all performers, they should share one common sound file. This file will be used in the [finale].
The conductor prepares one audio file about 10-30 seconds long. It can be any sound with noticeable changes: a musical passage would work well, while unchanging white noise would not.
Download FourHitCombo_Score.pdf, FourHitCombo_Performer.scd, and FourHitCombo_Conductor.scd from www.joowonpark.net/fourhitcombo
Open the .scd files in SuperCollider. Follow the instructions in the .scd files to load the GUI screen.
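Before rehearsal, it can be handy to sanity-check that every prepared sample satisfies the duration constraint above. The helper below is a hypothetical convenience script, not part of the downloadable materials for the piece; it assumes the samples are wav files and uses Python’s standard wave module (aif and mp3 files would need a third-party library such as soundfile or mutagen):

```python
import wave

def duration_seconds(path):
    # Compute a wav file's length from its frame count and sample rate.
    with wave.open(path, "rb") as w:
        return w.getnframes() / w.getframerate()

def sample_ok(path, lo=1.0, hi=60.0):
    # Per the preparation notes: not shorter than a second,
    # not longer than a minute.
    return lo <= duration_seconds(path) <= hi
```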
Score Interpretation
Proceed to the next measure only at the conductor’s cue. The conductor should give a cue to move on to the next measure every 10-20 seconds.
In the [voice], [instrument], [percussion], and [finale] rectangles, performers drag and drop the corresponding audio file.
In the [random] square, performers press the random button in the GUI.
In the square with a dot, quickly move the cursor in the 2D slider to the notated location.
In the square with a dot and arrow, slowly move the cursor from the beginning point to the end point of the arrow. It is OK to finish moving the cursor before the conductor’s cue.
In a measure with no symbol, leave the sound as is. Do not silence the sound.
In measure 27, all performers freely improvise. Use any sounds except the commonly shared sound reserved for [finale].
I am at the age where I can make a “How It Started vs. How It’s Going” analysis of my music. Comparing how my performance practice has changed over a decade or so is valuable for my growth, especially when the piece involves improvisation and has no score. I can see where I came from, where I am now, and where I should go next. Large Intestine for no-input mixer and computer premiered in 2013, and I still present it in concerts. Watching the August 2013 version and the recent June 2024 version in sequence gives me a chance to contemplate my electronic performance practice. Did the technology and style change over 11 years?
Technology
I took the “if it ain’t broke, don’t fix it” approach for Large Intestine in terms of the hardware and the software. The SuperCollider patch I coded 11 years ago is almost identical to the 2024 version. There were maintenance updates, such as replacing a deprecated UGen with a current one, but the signal-processing algorithm is untouched. The hardware signal flow is also unchanged, although I upgraded the mixer for greater possibilities and flexibility.
I perform the piece by changing the mixer settings and SuperCollider patch. The SuperCollider patch consists of eight effect processors, and I turn on and off those effects in different combinations. It is much like playing a guitar with a pedal board. Over the past 11 years, I have bought a “new guitar” but am using the “same pedal.” The sonic possibilities remain the same, but how I play the instrument, the style in other words, has changed.
Download and run the SuperCollider patch for Large Intestine as a reference. You can test it using a mic or any other instrument.
I observed the following differences in the 2013 and 2024 versions of Large Intestine.
2013 | 2024
Mostly slow and gradual parameter changes | Mostly fast and abrupt parameter changes
I discover things on stage | I present previously experienced sounds
The mixer supports computer sounds | The computer supports mixer sounds
Long duration (10+ minutes) | Short duration (less than 8 minutes)
When I was a no-input beginner, I could not make quick transitions and variations. In the 2013 version, I treated the mixer as one of many sound sources that could pass through signal processors. Like its umbrella project, 100 Strange Sounds, Large Intestine showcased my SuperCollider capabilities. The 2024 version shows that I have reduced my dependency on the computer part. I also learned to say more within less time.
Gaining confidence also changed my performance goal. I make a one-sentence goal when I am improvising solo. My goal for Large Intestine used to be “Let me figure out what a no-input mixer can do on stage,” as it delighted me to discover the mixer’s unique sound and its augmentation by the computer. 11 years later, there are far fewer delightful discoveries in the piece, but I can now anticipate the sounds I create. The current motto is, “Let me show you my favorite no-input mixer moments I learned previously on stage.”
Evaluation
I am my work’s biggest supporter and critic, but that alone does not help my career development. The audience ultimately decides the longevity of the work. Large Intestine was fortunate to be liked by audiences on many occasions. It received some honors, such as inclusion in the SEAMUS CD series (2015), a peer-reviewed annual album released by the Society for Electroacoustic Music in the United States. There were multiple invitations to perform at different venues, and it became an integral part of my solo performance practice. The positive feedback from presentations motivated me to delve further into the no-input mixer world. I composed the following pieces based on what I learned from Large Intestine.
There are also clear limits to Large Intestine and my solo electroacoustic improvisation. I don’t expect other performers to play Large Intestine, as it lacks a score or instructions. The experience and joy I had with the piece are not transferable to other performers. This bothered me. I tackled the issue by teaching others how to play no-input mixers. I currently enjoy organizing no-input mixer workshops and no-input ensemble sessions. The mixer is a great introductory instrument for electronic music performance.
“SuperCollider for the Creative Musician teaches how to compose, perform, and think music in numbers and codes. With interactive examples, time-saving debugging tips, and line-by-line analysis in every chapter, Fieldsteel shows efficient and diverse ways of using SuperCollider as an expressive instrument. Be sure to explore the Companion Code, as its contents demonstrate practical and musically intriguing applications of the topics discussed in the chapters.”
The endorsement had a word count limit. This book deserves a more detailed review. I agree with Fieldsteel’s statement in the Introduction that the book is a “tutorial and reference guide for anyone wanting to create electronic music or experimental sound art with SuperCollider.” Musicians, media artists, and programmers will learn the fundamentals and practical applications of SuperCollider by reading the book from cover to cover. I especially recommend this book to musicians seeking the connection between creative coding and their artistic practice. Electronic musicians learn to express musical ideas in numbers and symbols when they code music. As a result, coding trains users to think about music differently, and the author does an excellent job of teaching how to do so.
Fieldsteel’s expertise in composing, performing, and teaching SuperCollider for over a decade is evident in every chapter. The author correctly anticipates common beginner challenges and provides the most efficient solutions. I love the Tip.rand sections dedicated to troubleshooting and debugging. They are essential for increasing productivity and decreasing the frustration of learning a new environment. The book’s biggest strength, as demonstrated in Tip.rand, is its accessibility. The language, style, and examples do not assume that readers have previous programming, music synthesis, or audio engineering experience. The included figures, tables, and code examples are also effective and pedagogical. I was happy to see that the printed code’s font is identical to the default font of the SuperCollider IDE. It reconfirms the author’s effort to create inviting chapters for learning a language with a considerable learning curve.
I spend the first month of my SuperCollider class helping students overcome the initial steep learning curve. The book will dramatically reduce the time and frustration of getting over that hump. I don’t think any other existing SuperCollider resource will help as much as Fieldsteel’s book for that purpose.
How much time and energy does a musician spend on an interdisciplinary project? A three-month-long production period does not equal 90+ days of labor. How many days are spent on music, and how many of them are spent on meetings with the collaborators? How much of the music created for the show ends up in the show? I ask myself these questions to better understand the practical role of a music creator in a project involving performing artists of other fields. Answers to these questions require measurable data, such as total working days and total minutes of music composed. These numbers offer insight into the productivity of music creators.
In 2023, Artlab J, Detroit Puppet Company, and I created a one-hour show titled Objects at Play (video link). It was a non-verbal dance and puppet show aimed at young audiences. The first meeting was on February 16, 2023, and the show premiered on May 27, 2023, at the Detroit Film Theatre. I recorded my production process from the start of the project to study my collaboration productivity. As I composed, recorded, and mixed music, I gathered and organized the data according to the number of days I worked and the minutes of music I produced. The analysis and statistics revealed that only a fraction of the total collaborative period is spent on person-to-person interaction, and that less than a third of the total music communicated with the collaborators ended up in the show.
There are three limitations to this article.
1. I am sharing my work process as a solo electronic musician who could compose and share music without other musicians. The workflow described in the following sections may not apply to performers or composers of non-electronic genres.
2. No similar data were collected from the collaborators of Objects at Play. Comparing productivity across disciplines was outside the plan.
3. The analysis focuses on the practical aspects of collaboration. There will be no aesthetic discussion of Objects at Play.
Data Gathering
I used a production diary consisting of a web folder with a session log, screenshots, and photos to record the project’s progress. In the SessionLog text file, I briefly described the work I had done on each workday. Each entry has links to photos of hand-written notes or screenshots of the hard drive folder containing music files used for the project. The screenshots function as a reminder of content changes in the music tracks. The Old Versions folder in the screenshots contains obsolete or rejected session files. I kept these files to calculate the amount of music that was not used at the premiere.
Work Routine
The creative team of me, choreographer Joori Jung of ArtLab J, and theater director Carrie Morris of Detroit Puppet Company shared a Google Drive folder for remote communication and file transfer. The team members worked on multiple projects, so daily or weekly meetings were not an option. The list below shows how I, as the musician, worked on the project in this context.
The first in-person meeting with Joori and Carrie was on 2/16. The three discussed the overall vision of the piece.
After meeting #1, I worked on short and independent tracks that could match the to-be-developed scenes.
I shared nine music tracks with the collaborators via Google Drive before the second meeting.
At the second meeting on 3/9, Joori and Carrie shared their work-in-progress scenes. The directors also shared the current tracks-to-scene placement.
After meeting #2, I made five additional tracks. I also revised and expanded the tracks used in the scenes.
I shared the updated tracks with the collaborators before the third meeting.
At the third meeting on 4/25, the directors shared new tracks-to-scene placement. The deadline for the final version of the music was set.
After meeting #3, I made three additional tracks. I continued revising and mixing the tracks to a presentable form.
I delivered the final versions of the tracks. The directors and performers continued working on the project until the premiere on 5/27, but I did not create more music for the show.
Separate from the theater premiere, I worked on a 14-track album with edits suited for audio-only release. It was published on Bandcamp a day after the premiere.
Note that I made the aesthetic decisions in creating the music, but the directors in charge of movement and stage decided the music’s length, order, and selection. Unlike in solo projects, the decisions that drove the project forward were not mine by design.
Data Organization
I organized the information in the production diary into two categories. The first category traces how the allocation of music tracks to the seven scenes changed after each collaborator meeting. The second category is statistics on days worked and the amount of music produced.
Tracks-to-Scene Organization
Figure 1 shows how each track I made and shared with the collaborators changed its use throughout the project. The blocks with letters A to Q represent 17 tracks with independent musical themes. I composed the first nine drafts after the first meeting, five after the second meeting, and three more after the third meeting. These tracks were available as separate mp3 files on Google Drive for the choreographer and the theater director.
<Figure 1>
The middle column represents the tracks-to-scene assignment after the second collaborators’ meeting. Four scenes needed new music. Two scenes required a combination of tracks. All tracks needed expansion and revision in terms of the music’s length and formal development. Note that four of the nine tracks shared before the second meeting were rejected.
The right column represents a revised tracks-to-scene assignment after the third meeting. It became the final version. Some tracks included in the previous version, such as tracks A and L, ended up being excluded from the show. All but one rejected track after the second meeting came back as a part of Scenes 5 and 6. Track M changed its function from the theme of Scene 6 to the finale of Scene 5.
Productivity Analysis
I measured the amount of work by the days I spent on the project and the length of music created and shared with the directors. There were 102 days from the initial meeting on 2/16 to the album release on 5/28. According to the production diary,
I worked 37 days on this project (36.3% of total project days).
I met with collaborators in-person for 3 days (8.1% of the working days, 3% of total project days).
I did not record the hours I worked each day, so I cannot calculate total working hours.
In terms of the total amount of music, I gathered the following from screenshots and project files.
14 out of 17 tracks made it to the show (82.4%).
The total amount of music communicated with the collaborators was 11,474 seconds across 34 drafts (Figure 2).
The premiere used 3,319 seconds of music (Figure 3). That is 28.9% of the music communicated with the collaborators.
The project occupied a total of 14.1 GB of hard drive space. The files were Logic Pro sessions, SuperCollider files, and audio recordings of me playing a melodica.
<Figure 2>
<Figure 3>
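The percentages above can be reproduced from the raw counts in the production diary. The short script below uses only figures stated in this article:

```python
from datetime import date

# Raw figures from the production diary.
total_days   = (date(2023, 5, 28) - date(2023, 2, 16)).days + 1  # inclusive span
work_days    = 37
meeting_days = 3
tracks_made, tracks_used = 17, 14
music_shared_sec, music_used_sec = 11474, 3319

print(f"{total_days} total project days")                                        # 102
print(f"worked {100 * work_days / total_days:.1f}% of project days")             # 36.3
print(f"met in person {100 * meeting_days / work_days:.1f}% of working days")    # 8.1
print(f"tracks used in the show: {100 * tracks_used / tracks_made:.1f}%")        # 82.4
print(f"music used in the show: {100 * music_used_sec / music_shared_sec:.1f}%") # 28.9
```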
Interpretation of the Data
The collaborative process is about quickly adopting and adapting to changes. My role as a music composer was to react to the developing dance and puppetry. It meant constant addition, elimination, and revision. 14 out of 17 tracks making it into the final version looks like a satisfactory rate, but by duration, less than a third of the music shared with the collaborators was used. At the same time, once-rejected music can become useful if the circumstances change. Keeping the Old Versions folder intact was a strategically sound decision.
I worked on the project for about a third of the total project period and waited for the collaborators to develop their parts asynchronously to my music production schedule. Waiting is part of the process for musicians in interdisciplinary projects. It is possible to have time to work on a separate project while engaged in a long-term collaboration.
Notice that I did not discuss budget and fees in this article. The amount of time and energy spent on a project does not account for the creator’s previous experience and skill. The 37 days I spent on Objects at Play could have been 90 days of work for some or 10 days for others. My productivity analysis is not a suggestion for budgeting or calculating artist fees. Its objective is to serve as a reference for better collaborative practice.
I performed Elegy No.2, written in 2018 for violin and computer, on melodica at the SPLICE Institute 2023. It is not a happy song, but it lets me share what I can express only with music. Sarah Plum recorded the original version beautifully, but I have been playing the song at my solo shows since COVID.
If you own a melodica and want to play this, the score and SuperCollider file are available HERE. You don’t need to know how to use SuperCollider; the instructions to run the code are here. Please use the score as a guideline, and feel free to improvise.