Author Archives: Kevin Ernste

Pitch ratios

Ascending Intervals                   Descending Intervals
Interval        Frequency Ratio       Interval          Frequency Ratio
unison          1 : 1                 unison            1 : 1
m2              1 : 1.0595            minor 2nd         1 : 0.9439
M2              1 : 1.1225            Major 2nd         1 : 0.8909
m3              1 : 1.1892            minor 3rd         1 : 0.8409
M3              1 : 1.2599            Major 3rd         1 : 0.7937
P4              1 : 1.3348            Perfect 4th       1 : 0.7492
aug4/dim5       1 : 1.4142            aug 4th/dim 5th   1 : 0.7071
P5              1 : 1.4983            Perfect 5th       1 : 0.6674
m6              1 : 1.5874            minor 6th         1 : 0.6300
M6              1 : 1.6818            Major 6th         1 : 0.5946
m7              1 : 1.7818            minor 7th         1 : 0.5612
M7              1 : 1.8877            Major 7th         1 : 0.5297
Octave          1 : 2                 Octave            1 : 0.5


For ascending intervals greater than an octave, multiply the entire
frequency ratio by 2 for each additional octave (x2 for one octave higher,
x4 for two octaves, x8 for three, etc.)


– a minor tenth up = 2.3784 ( 2 x 1.1892 )
– 2 octaves + a tritone up = 5.6569 ( 4 x 1.4142 )

For descending intervals greater than an octave, divide the frequency ratio by
2 (if between 1 and 2 octaves), by 4 (if between 2 and 3 octaves), and so on.


– an octave plus a perfect 4th down = 0.3746 ( 0.7492 / 2 )
– 2 octaves plus a minor 3rd down = 0.2102 ( 0.8409 / 4 )
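In equal temperament every ratio in the table is a power of the semitone, 2^(1/12), so the whole table and the octave rules above can be checked in a few lines. A minimal sketch in Python (the function name is illustrative):

```python
# Equal temperament: each semitone multiplies frequency by 2**(1/12).
# Negative semitone counts give descending intervals.

def interval_ratio(semitones):
    """Frequency ratio for an interval of the given number of semitones."""
    return 2 ** (semitones / 12)

# A minor tenth up = an octave (12) + a minor 3rd (3) = 15 semitones
print(round(interval_ratio(15), 4))   # 2.3784  ( 2 x 1.1892 )

# Two octaves + a tritone up = 24 + 6 = 30 semitones
print(round(interval_ratio(30), 4))   # 5.6569  ( 4 x 1.4142 )

# An octave plus a perfect 4th down = -(12 + 5) = -17 semitones
print(round(interval_ratio(-17), 4))  # 0.3746  ( 0.7492 / 2 )
```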

PD extended, installation

To install PD(-extended) on your Mac, PC, or Linux system please visit:

Download the installer appropriate to your operating system and architecture. (I don’t recommend the “Alpha” release for prime-time use, but if you are curious about recent developments in PD’s interface (significant) and functionality, you are welcome to download that as well.)

Some Mac users may need to install “X11”, the venerable Unix graphics system on which some of PD’s functionality rests. It is available, now as “XQuartz”, from here.

Here, too, is mPD, a mobile version for Android, for those interested:

In addition to PD’s built-in Help system, please see the following sites for more help and shared patches.

– PD Forum and Patch Repo – Repository for patches, tutorials, and discussion related to PD.

– PD FLOSS Manuals – including concepts, working patches, and installation/setup help

– Programming Electronic Music in PD (“loadbang”) – Johannes Kreidler’s book

No-input listening

Students of Music 6421: This week, as you prepare your pieces for next Tuesday’s in-class performance, I would like you simply to listen to some music from a space outside our normal purview, and to seek out musical sub-genres of electroacoustic improvisation.

For starters, some music for you from Tokyo and “Onkyo”, first Toshimaru Nakamura:
“The first thing for me is not emotion or concept, just sound.” – Nakamura

Sachiko M:

And a brief excerpt from a “documentary” on both of them:

Class Performance #1

Here is the order for our Tuesday / Thursday (2/24 and 2/26) performances of your first pieces. Please arrive on time (or early, where possible), prepared to perform either on my provided laptop (along with a Launchpad, microphones, and a MIDI keyboard, likely the red one from Studio C) or with your own computer, using any of the USB studio devices.

I will have connectivity for two computers (stereo audio connection) with the hope of moving through your pieces as smoothly as possible. If you have your own audio interface, I can easily connect 1/4″ or XLR cables as well.


Ian Hoffman
Cameron Niazi
Erna Woyee
Laura Furman
Vaibhav Aggarwal
Cassidy Molina
Charles Peng
Hanbyul Seo
Mihir Chauhan
Kristin Murray
Riley Owens


Chun-Han Chuang
Jasmine Edison
Aarohee Fulay
Shane Moore
Julia Klein
Mimi Lee
Matthew Mardesich
Mengya You
Yuan Zhou Bo
Matthew Williams
Kevin Garcia
Brendan Sanok

Score11 and Csound examples and…templates!

This post is for students of Music 6421:

In addition to the examples found via “lsex sc” in the terminal, there is also a collection of blank templates (marimba, tsamp, gran, etc.) that can be used to create your own scores. These are identical to the examples (same p-fields and comments on ranges, scaling, etc.) but without any values in place.

To access the list of templates for Score11 (this is analogous to accessing examples):

lstp sc


Some tips for using templates

– Semi-colons are required at the end of each p-field statement
– Anything after a “<” is a comment. I encourage you to make comments in your personal files
– For adding sound files to the tsamp and gran instruments, use mksffuncs (help file here)

No class, Tuesday Feb 10th

This is a reminder to all students that I am away from Ithaca on Feb 10th so there will be no class meetings. Classes will meet as scheduled later in the week (Thursday for those in Music 2421; by scheduled appointment for those in 6421).

— Professor Ernste

Ableton Drum Rack basics

Following on from our work in class, here are some tips for using the Drum Rack, whose behavior is very similar to that of other Ableton instruments that store and use audio samples (see Simpler, Instrument Racks, and others).

Creating Drum Racks in Ableton Live 9


Paul Lansky, keynote address

Here, for students of 6421, is the article I mentioned in class. This speech was given to a room full of people who know Lansky’s work well, so it makes some assumptions (you may want to listen to his music first: the pieces on our list as well as Mild und Leise, which he mentions several times).

Lansky keynote, ICMC 2009

More than an autobiographical sketch or a reminiscence, the speech seeks to illuminate not only his changing relationship with computer music but also his observations about the changing landscape of technology and music making generally.

(In the introduction, Lansky refers to an important interview in which he formally bid farewell to computer music. This was very public and, at the time, controversial. I will leave it to you to unearth it…)

Spear spectral editor

Sinusoidal Partial Editing Analysis and Resynthesis
for Mac OS X, Mac OS 9, and Windows

Downloads (free) here.

Be sure to read the “help” page for SPEAR containing keystrokes, hints, and solutions to common problems in SPEAR and analysis/resynthesis generally.

SPEAR is an application for audio analysis, editing and synthesis. The analysis procedure (which is based on the traditional McAulay-Quatieri technique) attempts to represent a sound with many individual sinusoidal tracks (partials), each corresponding to a single sinusoidal wave with time varying frequency and amplitude.

Something which closely resembles the original input sound (a resynthesis) can be generated by computing and adding all of the individual time varying sinusoidal waves together. In almost all cases the resynthesis will not be exactly identical to the original sound (although it is possible to get very close).
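The resynthesis idea described above can be sketched numerically. This is not SPEAR’s own code, just an illustration of summing partials whose frequency and amplitude vary over time; the breakpoint values are made up:

```python
# Additive resynthesis sketch: each partial is a sine wave whose frequency and
# amplitude are interpolated from (time, value) breakpoints, and the output is
# the sum of all partials. Breakpoints here are invented for illustration.
import numpy as np

SR = 44100  # sample rate in Hz

def synth_partial(times, freqs, amps, dur):
    """Render one partial from breakpoint lists via linear interpolation."""
    t = np.arange(int(dur * SR)) / SR
    freq = np.interp(t, times, freqs)          # time-varying frequency (Hz)
    amp = np.interp(t, times, amps)            # time-varying amplitude
    phase = 2 * np.pi * np.cumsum(freq) / SR   # integrate frequency -> phase
    return amp * np.sin(phase)

# Two hypothetical partials: a fundamental gliding 440 -> 450 Hz while fading
# out, and a second partial doing the same an octave higher.
p1 = synth_partial([0, 1], [440, 450], [0.5, 0.0], dur=1.0)
p2 = synth_partial([0, 1], [880, 900], [0.25, 0.0], dur=1.0)

# Adding all of the time-varying sinusoids together gives the resynthesis.
resynth = p1 + p2
```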

Aside from offering a very detailed analysis of the time varying frequency content of a sound, a sinusoidal model offers a great deal of flexibility for editing and manipulation. SPEAR supports flexible selection and immediate manipulation of analysis data, cut and paste, and unlimited undo/redo. Hundreds of simultaneous partials can be synthesized in real-time and documents may contain thousands of individual partials dispersed in time. SPEAR also supports a variety of standard file formats for the import and export of analysis data.

Read more in the ICMC paper “Software for Spectral Analysis, Editing, and Synthesis” (pdf) or in the dissertation Spectral Analysis, Editing, and Resynthesis: Methods and Applications.

— Michael Klingbeil, author of SPEAR

Recording Techniques

Learning to create high-quality recordings is a central skill in computer/electroacoustic music.  The core skills learned here translate directly into recording studio environments, field recording, and almost any other situation that requires audio input.

The core principle is to maximize signal quality on the way in.  As a rule of thumb, this means recording at the maximum amplitude without going over the limit, while deferring effects like reverb, equalization (EQ), and other environmental treatments until later.  The last part is important to note: if one were to record sounds with reverb or EQ that, in the end, sounded unflattering, there would be no way to get back the original signal. Think of it like taking a picture: if a shot you took turned out blurry (or the lens cap was on!), all the sharpening or lighting effects in the world will never bring back the original shot.  One could, however, easily manipulate a clear, high-quality shot, blurring or applying other effects, to achieve almost any look.

In seeking the highest quality signal there are a number of important concepts to understand.  Each of these will be discussed in detail in the following sections.

Signal to Noise Ratio

One of the first and most important concepts to understand for recording is signal-to-noise ratio.  In analog systems, this refers to the available bandwidth for the signal relative to the noise inherent in its physical system (circuits, recording medium).

In the digital realm there is no inherent physical noise (apart from where the system interfaces with the analog world); what noise remains is introduced during the process of quantization.  Our goal in the digital world is to maximize the signal relative to this quantization noise, a floor determined by the number of bits the recording system is using.  We want to excite as many of these available bits as possible, filling them with signal to overcome the quantization noise.  Doing so is a matter of signal resolution, not of volume.
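As a rough guide, the standard approximation for an ideal converter’s peak signal-to-noise ratio is about 6.02 dB per bit (plus 1.76 dB, assuming a full-scale sine wave). A quick Python check shows why exciting the available bits, not “volume”, is what matters:

```python
# Quantization noise floor vs. bit depth: each bit of resolution buys roughly
# 6.02 dB of signal-to-noise ratio. The 1.76 dB term is the standard
# full-scale-sine assumption for an ideal quantizer.

def max_snr_db(bits):
    """Theoretical peak SNR (dB) of an ideal quantizer, full-scale sine input."""
    return 6.02 * bits + 1.76

print(round(max_snr_db(16), 2))  # 98.08 dB for CD-quality audio
print(round(max_snr_db(24), 2))  # 146.24 dB for 24-bit recording

# A signal recorded 6 dB too low never reaches the top bit: in effect it only
# excites 15 of a 16-bit system's bits, giving up about 6 dB of resolution.
print(round(max_snr_db(15), 2))  # 92.06 dB
```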

Gain Staging

A “gain stage” is any point in the signal path where a gain boost or attenuation is available.  In other words, it is any place in the chain where you have amplitude control: the output knob on an electric guitar, the input level on an amplifier, and so on.

Let’s use the hypothetical guitar/amplifier signal-path scenario to illustrate the importance of gain staging.  Say our guitar player turns the volume/output knob on his instrument almost all the way down, while turning the amplifier input all the way up.  In this case, the amplifier is being taxed to compensate for the guitar’s weak signal.  The result will be a very thin sound for the guitar as well as an unpleasant amplification of circuit noise inherent in the amplifier.  We are not maximizing our signal at all…and may in fact be harming our amplifier.

Now let’s reverse the scenario: the guitar level is “cranked” while the amplifier input is turned down almost to nothing.  Here again the problems are both timbral/aural and physical.  The amplifier is not taking advantage of the incoming signal, acting as a barrier to the guitar rather than a support.  Worse, the overfed guitar signal might overload the amplifier input, causing physical damage.

The above is a simple case.  Yours will be more complex, involving 3 or 4 gain stages.  You will need to understand your signal path fully in order to get the best results.

To repeat a rule of thumb: the shorter the path, the fewer signal traps along the way.  You will have gain stages where the signal must be raised or lowered, but the ideal for most gain stages is to be transparent: to simply pass the signal along without boosting or attenuating it.
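The guitar/amplifier scenario can be sketched with made-up numbers. Gains in dB simply add along the chain, but noise injected at a stage is only amplified by the stages after it, which is why a weak instrument signal into a hot amplifier input raises the noise floor (all levels below are hypothetical, for illustration only):

```python
# Two-stage chain: guitar volume, then amplifier input. Total gain in dB is the
# same either way, but noise injected at the amp only "sees" the amp's gain.

AMP_NOISE_DB = -90  # assumed noise level injected at the amplifier input

def output_levels(signal_db, guitar_gain_db, amp_gain_db):
    """Return (signal, noise) levels at the output, in dB."""
    signal_out = signal_db + guitar_gain_db + amp_gain_db
    noise_out = AMP_NOISE_DB + amp_gain_db  # boosted only by downstream gain
    return signal_out, noise_out

# Same -10 dB signal at the output, two different stagings:
weak_guitar = output_levels(-20, guitar_gain_db=-30, amp_gain_db=40)
hot_guitar = output_levels(-20, guitar_gain_db=0, amp_gain_db=10)
print(weak_guitar)  # (-10, -50): only 40 dB of signal above the noise floor
print(hot_guitar)   # (-10, -80): 70 dB of signal above the noise floor
```

The numbers make the lesson concrete: boost the signal as early as possible, and keep later stages as transparent as you can.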
