Hi all! My name’s Mark, and I’m very excited to get to know you and your music over the course of this semester.
I’m a second-year PhD student in musicology with an interest in experimental and improvised music. I have a bit of experience with pen-and-paper composition, and dealing with analog sound (I ran tech for shows at a small non-profit in Chicago prior to returning to school), but have virtually no hands-on experience with digital music-making practices.
I’m also very excited just to have a reason to make some music again – as someone who usually creates music in improvised contexts with others (mostly on brass instruments and sometimes keyboards), the pandemic was very isolating!
My primary goals for the course are to develop some basic facility working with DAWs and other tools, and to experiment with what’s possible. Elastic Arts, my old workplace, currently has a 16-speaker system for electroacoustic music installed, so if I’m feeling ambitious I may try to create something which I could play over that system. I’m very open to collaboration and can’t wait to hear what emerges from this class.
Thank you all for telling this story with us! It was a lot of fun! Here’s an image of the story flowchart (small green numbers are the poll numbers). The darker a box is, the more often that path was chosen. If you want a higher-quality version, you can access it at the link here:
Our performance will be tomorrow evening, December 19th, at 5pm. Each of the three “sets” below should take around an hour (roughly 5pm, 6pm, and 7pm start times).
Please be prepared well in advance of your performance … I recommend arriving with everything set to go on your end, ready to stream to our Zoom meeting. I will ask each of you to verbally introduce your piece briefly.
A Zoom link will appear in the Canvas calendar along with this same concert order info.
Should you have any concerns during the concert or in advance of your performance, you can contact us privately in the Chat window. The TAs may message you in the Chat to alert you when your performance is upcoming/next, but feel free to follow along in the order below.
If you did not yet respond to the Google form sent by Prof. Ernste or do not see your name listed below, please do complete the form and/or contact Prof. Ernste and your TA immediately to be slotted in.
CONCERT ORDER
Aman Gupta
Isaac Murphy
Janie Walter
Kaushik Ravikumar
Zachary Vero
Brian Shi
Melissa Gao
Brett O’Connor
Thomas Bastis
Lazarus Ziozis
Carter Gran & Jack Samett
INTERMISSION 1
Michael Xing
Chris O’Brian
Michael Zhang
Jacob Pelster
Eshaan Jain
Will Smith
Grace Wu
Luc Wetherbee
Irwin Chantre
Kyle Betts
Lucas Petrello
INTERMISSION 2
Jack Weber
Jack Pilon
Nathan Huang
Jocelyn Gilbert
Ben Goldberg
Sai Mallipedhi
Euna Park & Joshua Kaplan
Arsen Omurzakov
Isaac Singer
Brandon Feng
Jayansh Bhartiya
In advance of our coming final performances, I will be uploading a series of tutorial videos for your review, illustrating several potential methods for streaming your end-of-semester performances, from the simplest (sharing the Desktop or audio only in Zoom) to more complex arrangements using tools like OBS (https://obsproject.com/), mentioned previously.
For tomorrow’s lab on OBS, it might be useful to review this first tutorial below, illustrating recording of video, Desktop, and audio sources in OBS. This tutorial will be included in an upcoming FAQ page, including the other live streaming tutorials I mentioned.
Is anyone interested in collaborating on the final project? I’m interested in making something that takes advantage of the online / streaming nature of the performance, perhaps letting everyone contribute live or trigger clips or something. (Maybe something inspired by the type of performance of In C?)
Purr-data/PD-L2Ork, a Pd distribution for Virginia Tech’s Linux Laptop Orchestra (L2Ork)
There are dozens of tutorials and help systems available. For starters, I suggest this video series by Dr. Rafael Hernandez, as well as Pd’s own built-in help system. A useful forum on Pd, including examples provided by users, can be found here.
A deeper history of Pd from Miller Puckette himself, including its origins in the earliest computer music languages can be read here.
Max/MSP is commercial software similar to Pd; the two share the same original source code. Max is available as a 30-day trial download on the Cycling74 website. Cycling74 was purchased by Ableton in 2017.
As with Pd, there are a multitude of tutorials available, including a help system built into the software itself. Some example projects using Max can be perused here.
More information and (non-crashing!) demonstrations to follow. I recommend perusing the recent blossoming of live streaming techniques on YouTube, Vimeo, and Twitch (most directly used for gaming, but also for music).
Corey Keating is a DMA candidate at Cornell University. He holds degrees in music from San Jose State University, Bowling Green State University in Ohio, and Cornell University. Mr. Keating has taught courses in music theory, music technology, composition, and aural skills. His composition instructors include Chris Dietz, Kevin Ernste, Pablo Furman, Mikel Kuehn, Roberto Sierra, Marilyn Shrude, and Steve Stucky.
Program:
Re-Fixed Media
A collection of compositions and generative musical works from my time here in Ithaca, remixed and woven together with alliterative aspects, musical memories, and impromptu interludes.
Following up on our conversations about samplers, beats, and beat slicing, we’ll be exploring loops more fully this week. Here are some loops to play with in class.
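For anyone curious about what beat slicing actually does under the hood: at its simplest, it means cutting a loop into equal beat-length chunks and playing them back in a new order. Here is a minimal sketch in Python — not tied to any particular DAW or sampler, just an illustration of the grid-slicing idea, with made-up tempo and sample-rate values:

```python
import numpy as np

def slice_beats(loop, beats):
    """Cut a loop into `beats` equal-length segments (simple grid slicing)."""
    seg = len(loop) // beats
    return [loop[i * seg:(i + 1) * seg] for i in range(beats)]

def resequence(slices, order):
    """Rebuild a loop by playing the slices back in a new order."""
    return np.concatenate([slices[i] for i in order])

sr = 44100                          # sample rate (Hz)
bpm = 120                           # one 4/4 bar at 120 BPM lasts 2 seconds
loop = np.arange(sr * 2, dtype=float)  # stand-in for a bar of audio samples

slices = slice_beats(loop, 4)       # four beat-length chunks
shuffled = resequence(slices, [0, 2, 1, 3])  # swap beats 2 and 3
print(len(shuffled) == len(loop))   # True: same length, new groove
```

Real slicers (ReCycle, Ableton’s Simpler, etc.) detect transients rather than cutting on a fixed grid, but the reordering step is the same idea.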
Here is some further listening from today’s class.
My band from college was called “Milk of Amnesia”. We performed primarily in the Midwest (Chicago, Minneapolis, Madison, Milwaukee); all three band members went to college together at UW-Madison. The song excerpt I shared in class today from “Kamikaze Airplane” can be heard again here: Kamikaze Airplane.
My piece for solo guitar and electronics, Roses Don’t Need Perfume, uses sounds of the guitar as an electronic backdrop for a live solo guitar part. All electronic sounds are “acoustic”, i.e. they are derived from guitar. You can hear the recording from the commercial CD, Draw the Strings Tight–which I engineered myself here in Barnes Hall–on my website. A score is there also, or linked here.
The piece is long (15 minutes), so I encourage you to listen to the first 3 minutes (Movement #1, first page of the score) only.
Here, too, since I mentioned this method in class previously, is an image from that recording showing the microphone placement: two near mics (12th fret, behind the sound hole) and a third large-diaphragm condenser mic further away.
With that speech, Kennedy calmed his audience on the verge of rioting, channeling his own experience of losing his brother, JFK, who had been assassinated five years earlier in 1963. Partway through, he quotes Aeschylus … lines that would later appear on his gravestone after his own assassination just months later.
“Even in our sleep, pain which cannot forget falls drop by drop upon the heart, until, in our own despair, against our will, comes wisdom through the awful grace of God.” – Aeschylus
My central idea with this piece was to channel that mutual empathy, which seemed to me important to our current moment.
My piece is for viola, percussion, and unmanned piano. The piano is used as a resonator (speakers placed inside and underneath) and is also played *inside* by the percussionist (fingers, sticks, mallets, a ringed finger, eBows).
In the excerpt you will hear, the first part is made up of these “inside the piano” sounds. Then you’ll hear Robert Kennedy’s voice, from his April 4th, 1968 recitation of the Aeschylus passage, resonated into the piano. The voice is fed back into the piano repeatedly (feedback) to enhance the frequencies of the voice (those “partials” we’ve been talking about) as they make the piano strings ring sympathetically. To hear this effect yourself, find a piano, put down the pedal, and shout into it! Finally, in the last section, the percussionist uses the eBows to play the strings directly, creating a singing melody. The drone sound in the background is derived from MLK’s voice, from his last speech (“I’ve Been to the Mountaintop”), given the day before his death. It’s specifically derived from the word “see” in the line:
“…only when it is dark enough, can you see the stars”.
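For those curious about the signal-processing side of the feedback effect described above: repeatedly delaying a sound and feeding it back is, in DSP terms, a feedback comb filter, which reinforces the frequency corresponding to the delay time and its harmonics — a rough digital analogue of the voice ringing the piano strings. A minimal sketch (this is an illustration of the general principle, not the actual setup used in the piece; the sample rate, delay, and gain values are made up):

```python
import numpy as np

def comb_feedback(x, delay, gain):
    """Feedback comb filter: y[n] = x[n] + gain * y[n - delay].
    Feeding the output back after a fixed delay reinforces the
    frequency sr/delay and its harmonics (the 'partials')."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (gain * y[n - delay] if n >= delay else 0.0)
    return y

sr = 8000                         # sample rate (Hz), kept small for speed
delay = 80                        # reinforces 8000/80 = 100 Hz and multiples
rng = np.random.default_rng(0)
noise = rng.standard_normal(sr)   # one second of broadband "voice-like" input

out = comb_feedback(noise, delay, gain=0.9)
spectrum = np.abs(np.fft.rfft(out))
freqs = np.fft.rfftfreq(len(out), 1 / sr)
peak = freqs[1:][np.argmax(spectrum[1:])]  # strongest non-DC frequency
print(peak)  # lands on (or right next to) a multiple of 100 Hz
```

The sympathetic piano strings do something similar acoustically: each string resonates only at frequencies near its own partials, so the voice’s energy at those frequencies is selectively sustained.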