PureData first sound patch, Music 2421


As a first introduction to PD and as a head-start to Tuesday’s Music 2421 meeting, I would like you to create a simple PD “patch”: two oscillators offset by 3 hertz, creating an interference/beating effect.

Steps for the impatient (TL;DR):

1. Create a new patch (File–>New)
2. Create two oscillator objects: [osc~ 440] and [osc~ 443] (Put–>Object)
3. Create a [dac~] audio output object (Put–>Object)
4. Connect the oscillators to the outputs, one to each of [dac~]’s two inlets
5. Turn on PD’s “DSP” in the PD main window, enabling sound processing

You should now hear two oscillators beating 3 hertz apart.
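Why does a 3 hertz offset produce beats? By the sum-to-product identity, the combined signal is a single 441.5 Hz tone multiplied by a slow 1.5 Hz cosine envelope, and since the envelope's amplitude peaks twice per cycle, you hear 3 beats per second. Here is a quick Python sketch (purely illustrative, nothing to do with Pd itself) that checks the identity:

```python
import math

def two_osc(t, f1=440.0, f2=443.0):
    """Sum of two unit-amplitude sine oscillators, like our two [osc~] objects."""
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

def product_form(t, f1=440.0, f2=443.0):
    """Same signal via sin(a) + sin(b) = 2*cos((a-b)/2)*sin((a+b)/2).
    The cos term is the 1.5 Hz envelope, heard as 3 beats per second."""
    return 2 * math.cos(math.pi * (f2 - f1) * t) * math.sin(math.pi * (f1 + f2) * t)
```

Evaluating both functions at any time t gives the same value, confirming that two close sine tones really are one tone with a slowly pulsing amplitude.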

Detailed instructions:

Open PD and create a new patch (File–>New). You will get a completely blank slate, PD’s default state… ready for us to make anything we can imagine.


Onto this blank canvas, we will place “objects” (things that perform actions, receive data or audio, make calculations, etc), numbers, messages, comments, and graphical objects. By combining the functionality of lower-level objects (such as those that add numbers, generate a signal, or take in audio), we will construct our own musical instruments/tools.

IMPORTANT: When working in PD there are two modes: “Edit Mode” (when you are editing/building the patch) and “Performance Mode” (when you are operating the patch). While working, you will frequently toggle back and forth between these two modes. It is therefore worth memorizing the keystroke: Command-E (or Ctrl-E on Windows).

In Edit Mode, go to the “Put” menu and choose an “object” (note the keystroke as well, Command-1). In the dotted box that appears, type “osc~ 440” (that’s “osc”, tilde, SPACE, “440”) and click anywhere on the patch to “instantiate” the object.


Note the “~” (tilde), which, looking like a sine wave, designates this as an audio object. In a minute we’ll see objects without the tilde: those that send/relay/create messages.

Objects like [osc~] have inlets (to receive messages or audio) and outlets (to output their data). Values can also be supplied as “arguments”: here, with [osc~ 440], the argument “440” tells the oscillator its frequency.

Even though we don’t hear anything yet, let’s create a second oscillator with a frequency 3 hertz higher than our first, so [osc~ 443].

To hear these sound-generating objects, we need an audio output object called [dac~]. By default, [dac~]’s two inlets are speaker outputs 1 and 2, i.e. the LEFT and RIGHT channels.

Your patch should now look like this:


Before the patch can make sound, we have to connect the oscillators to the “dac”. Mouse-over the outlet (the black dash at the bottom-left of the object) and connect [osc~ 440] to [dac~]’s left-most inlet (channel 1, LEFT). Connect [osc~ 443] to [dac~]’s right-most inlet (channel 2, RIGHT).

Finally (after turning DOWN the volume on your speakers!!), go to PD’s main window and turn on the “DSP”, PD’s way of enabling sound processing.


You should now hear two oscillators beating together at a separation of 3 hertz.
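For the curious: a Pd patch is just a text file. Saved as a .pd file, the patch above looks roughly like this (the canvas dimensions and object coordinates are arbitrary placement values):

```
#N canvas 0 0 450 300 12;
#X obj 50 50 osc~ 440;
#X obj 200 50 osc~ 443;
#X obj 50 150 dac~;
#X connect 0 0 2 0;
#X connect 1 0 2 1;
```

Each “#X connect” line reads: source object index, outlet number, target object index, inlet number. So object 0 ([osc~ 440]) feeds [dac~]’s left inlet, and object 1 ([osc~ 443]) feeds its right inlet.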

For more fun: here’s a more advanced version (you may need to right-click and choose “Save As…”) that uses the text keyboard to play notes/frequencies. Hit the number keys 0 – 5 to change the interval between the two oscillators and hit any letter key to play a “solo” over this “drone”. I recommend, for once, turning on your CAPS LOCK as the notes will be lower that way. See if you can figure out why!

No-input listening

Students of Music 6421: This week, as you prepare your pieces for next Tuesday’s in-class performance, I would like you just to listen to some music from a space outside of our normal purview, to seek out musical sub-genres of electroacoustic improvisation.

For starters, some music for you from Tokyo and “Onkyo”, first Toshimaru Nakamura:
“The first thing for me is not emotion or concept, just sound”. – Nakamura

Sachiko M:

And a brief excerpt from a “documentary” on both of them:

Paul Lansky Discussion

Dear Colleagues,

Let me first ask your ideas about the music of Lansky independent of his thoughts on the paper.

First, I have to admit I found the aesthetic of his music quite interesting in general. On the other hand, the three Idle Chatter pieces specifically did not totally convince me. At the micro level, the speech particles were very interesting. However, the form that emerges from them in a tonal sense, and its cycles, make it less interesting for me. Also, the timbre and its textural categorization were overly stable. Of course it does not have to change or transform, but since those parameters, the feeling of beats, and the tonal expectations all sound together, those works become less interesting for me than the Six Fantasies or some of his other works.

Maybe the understanding of time is the main cause of this. For instance, in Steve Reich’s music, the phase-shifting idea makes the works more interesting. Even in Pendulum Music, the teleological order of the moments makes the constant timbre of the microphones interesting. However, in the Chatter pieces, the treatment of time seems ordinary to me.

What are your opinions?




Paul Lansky and Layering

I was initially introduced to Paul Lansky’s music a few years ago, beginning with Idle Chatter. Immediately, I was both attracted to it and blissfully unaware of the monumental effort required to create a piece like this 30 years ago. But now, after browsing around a bit and reading about the process of working with a room-sized computer and punch cards, I feel a newfound appreciation for the depth he was able to achieve. Certainly, an even slightly less complex piece would have saved hours and hours of work. I found some of this information about Lansky’s process in the article posted below, which offers an analysis of the three Idle Chatter pieces and a few interesting anecdotes about Lansky’s music.

Here is the link to that article:

Those pieces aside, I wanted to post another piece of his titled Table’s Clear. To my ears, this piece sounds a good deal like Idle Chatter, except the acoustic content comes from recordings of children playing percussion instruments instead of his wife’s voice. Both pieces share a surprising depth of layers. The seemingly complex “macro groove” is served by a number of smaller “micro grooves,” if you can even consider them grooves at the micro level. Additionally, the timbral material remains quite static throughout the piece, again matching Idle Chatter. According to the above article, Lansky intentionally sought out a complex layering of material, thinking that it would better maintain the interest of audiences over multiple listens. The more I think about this tendency, the more I seem to notice it in his music.

Here is a link to Table’s Clear:


thoughts on Lansky

Something that I have always found interesting about Paul Lansky is his consistent use of tonality, and in particular, diatonic harmonies. This feature is present from his earliest works, such as the pervasive dominant 7th sonority in mild und leise (1973), and still continues today. Given his studies at the “Babbitt-dom” of Princeton in the mid-1960s, as well as his affiliation with George Perle, it is rather surprising that Lansky does not have even a slightly larger body of early work demonstrating the influence of the 12-tone system.

In “The Inner Voices of Simple Things” (1995), Lansky explains to Jeffery Perry that a simple pitch palette allows for nuance in other parameters:

I didn’t decide that I was going to write using tonal syntax. I still don’t think of it that way as much as letting the pitch contours and context occupy a certain relatively uncomplicated niche. It often seems to me as if telling complicated pitch stories is something that performers do so well, while machines have other capabilities, to create worlds and landscapes which have very different agendas (52).

This concept is particularly clear in his works that use technology to manipulate the human voice. The Chatter pieces explode bits of speech into an incomprehensible mass that is (re)constructed into discernible harmonies. Six Fantasies makes use of various filters to “harmonize” the reader’s voice, but also draws attention to subtle shifts in timbre. In these works, Lansky exposes the listener to a new perspective on the voice. Like many other composers, he aimed to “use the computer as an aural camera on the sounds of the world” (52), but rather than magnifying noise elements or the spectra of everyday sounds, Lansky did so uniquely: in a predominantly pitch-centered sound world. Since his transition to primarily acoustic composition, this sound world seems to have remained largely intact.

In the same interview, Lansky insists that “If a piece elicits more curiosity about its production methods than about its content, it is essentially a failure” (45). I largely agree with him, but my question is whether this (consistently diatonic) content is always successful on its own. What happens when we apply this statement to Lansky’s recent works for acoustic instruments? Do they make us wonder about their “production methods” or “content”?

I became acquainted with Threads (2005) through researching repertoire for my own percussion quartet and also heard it live in concert (although the acoustics of Seiji Ozawa Hall reduced it to a warbling wash). Lansky calls it a “cantata” for percussion, and the piece is structured as such, containing movements deemed “Arias” and others called “Preludes”. Even though it is a successful piece, I don’t perceive the same freshness as in his earlier works with the computer. Just to be a little polemical, perhaps I might turn Lansky’s own statement around and say that his early computer works are ingenious in how they used sounds of the world as a camera on the expressive potential of the computer. If Lansky proved the computer to be an effective medium for elegant sounds, might his aims be similar with regard to the percussion quartet?

Class Performance #1

Here is the order for our Tuesday / Thursday (2/24 and 2/26) performances of your first pieces. Please come on time (or early, where possible) prepared to perform either on my provided laptop (along with a launchpad, microphones, and MIDI keyboard–likely the red one from Studio C) or with your own computer, using any of the USB studio devices.

I will have connectivity for two computers (stereo audio connection) with the hope of moving through your pieces as smoothly as possible. If you have your own audio interface, I can easily connect 1/4″ or XLR cables as well.


Ian Hoffman
Cameron Niazi
Erna Woyee
Laura Furman
Vaibhav Aggarwal
Cassidy Molina
Charles Peng
Hanbyul Seo
Mihir Chauhan
Kristin Murray
Riley Owens


Chun-Han Chuang
Jasmine Edison
Aarohee Fulay
Shane Moore
Julia Klein
Mimi Lee
Matthew Mardesich
Mengya You
Yuan Zhou Bo
Matthew Williams
Kevin Garcia
Brendan Sanok

Score11 and Csound examples and…templates!

This post is for students of Music 6421:

In addition to the examples found via “lsex sc” in the terminal, there is also a collection of blank templates (marimba, tsamp, gran, etc) that can be used to create your own scores. These are identical to the examples (same p-fields and comments on ranges, scaling, etc) but without any values in place.

To access the list of templates for Score11 (this is analogous to accessing examples):

lstp sc


Some tips for using templates

– Semi-colons are required at the end of each p-field statement
– Anything after a “<” is a comment. I encourage you to make comments in your personal files
– For adding sound files to the tsamp and gran instruments, use mksffuncs (help file here)
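To illustrate the punctuation rules above, here are two hypothetical p-field statements (the “rh” and “nu” codes and the values are only placeholders; consult the template comments for the real p-fields and ranges of each instrument):

```
p3 rh 4;       < each p-field statement ends with a semicolon
p4 nu 440;     < anything after "<" on a line is a comment
```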

No class, Tuesday Feb 10th

This is a reminder to all students that I am away from Ithaca on Feb 10th so there will be no class meetings. Classes will meet as scheduled later in the week (Thursday for those in Music 2421; by scheduled appointment for those in 6421).

— Professor Ernste

Ableton Drum Rack basics

Following on from our work in class, here are some tips for using the Drum Rack; its behavior is very similar to some other Ableton instruments that store and use audio samples (see Simpler, Instrument Racks, and others).

Creating Drum Racks in Ableton Live 9


Paul Lansky, keynote address

Here, for students of 6421, is the article I mentioned in class. This speech was given to a room full of people who know Lansky’s work well, so it makes some assumptions (you may need to listen to his music first: the pieces on our list as well as mild und leise, which he mentions several times).

Lansky keynote, ICMC 2009

More than being an autobiographical sketch or a reminiscence, the speech seeks to illuminate his changing relationship with computer music, but also his observations about the changing landscape of technology and music-making generally.

(In the introduction, Lansky refers to an important interview where he formally bid farewell to computer music. This was very public and, at the time, controversial. I will leave it to you to unearth it…)
