
Digital Audio: bit depth

For every digital sample, our analog-to-digital converter asks “what is the amplitude?”. The question that remains is: how is this amplitude represented? The answer is “bit depth,” which determines both how many different amplitude levels/steps are possible and what the overall capacity of the system is…how loud a signal it can tolerate.

For CD-quality sound, 16 bits are used.  This means we have 2^16 (“two to the 16th power”) different amplitude values available to us, or 65,536 steps.  Since the number of steps is divided between positive and negative values (our crests and troughs from before), this means it is split into 32,767 positive values (plus zero) and 32,768 negative values. For each sample taken, the actual amplitude must be “rounded” to the nearest available level…producing another “error” relative to the original audio signal.  The signal is “quantized”. This “quantization error” produces a small amount of “quantization noise”, noise inherent to digital recording.  A digital system is totally noiseless on its own, but as soon as it records a signal, it makes these rounding errors and ends up with this small amount of noise.
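To make the rounding concrete, here is a minimal sketch (my own, not from the post) of quantizing a single amplitude to 16-bit steps and measuring the resulting quantization error:

```python
def quantize_16bit(amplitude):
    """Round an amplitude in [-1.0, 1.0] to the nearest 16-bit step."""
    # 16 bits -> 2**16 = 65,536 steps: -32,768 .. 32,767
    step = round(amplitude * 32767)
    return max(-32768, min(32767, step))   # clamp to the available range

incoming = 0.300004                  # the "real" analog amplitude at this sample
stored = quantize_16bit(incoming)    # what the system actually records
reconstructed = stored / 32767
error = incoming - reconstructed     # the quantization error for this sample

print(stored)   # 9830
print(error)    # a tiny residue -- the source of quantization noise
```

Each sample carries one such rounding error; over a whole recording, these errors accumulate into the quantization noise described above.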


The amount of inherent noise versus the system’s capacity for the desired signal is called the signal-to-noise ratio, a concept I will illustrate in the next installment.  The signal-to-noise ratio determines both how loud and how soft a signal can be cleanly recorded; it determines the recording’s “dynamic range”.

The overall amplitude capacity of a digital system can be theoretically approximated as 6 decibels per bit.  For our 16-bit CD-quality signal, this means our system can tolerate 96 dB.
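As a quick sanity check (my own sketch, not the author’s), here is the 6-dB-per-bit rule of thumb alongside the slightly more precise figure of 20·log10(2) ≈ 6.02 dB per bit:

```python
import math

# Rule of thumb: ~6 dB of dynamic range per bit.
def dynamic_range_db(bits):
    return bits * 6

# Slightly more precise: each added bit doubles the number of amplitude
# steps, and a doubling of amplitude corresponds to 20 * log10(2) dB.
def dynamic_range_db_precise(bits):
    return 20 * math.log10(2 ** bits)

print(dynamic_range_db(16))                    # 96  (the CD-quality figure)
print(round(dynamic_range_db_precise(16), 1))  # 96.3
```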

So, is 16 bits enough?  The threshold of pain varies among individuals, but is often cited as 120 or 130 dB.  So it may be that, unlike the CD-quality sampling rate and its accommodation of the range of human hearing, our 16-bit system is not enough. If one is not careful when recording, a signal can easily exceed the maximum amplitude, producing “clipping”.  In clipping, the waveform hits its amplitude ceiling, resulting in a cropped waveform.


The changing peaks above the maximum amplitude end up flattened. The naturally fluctuating amplitude levels are simply chopped off, and the resulting sound is jarring and distorted.  Increasing the bit depth provides more amplitude “headroom” for these louder signals. 24 bits, for example, provide 2^24 (over 16 million!) amplitude steps and 144 dB of theoretical overall capacity (24 x 6 dB).  So a higher bit depth has a higher tolerance for amplitude, up to and beyond our “threshold of pain”.
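A small sketch (mine, not from the post) of what clipping does to the samples of a too-loud signal:

```python
import math

def clip(sample, ceiling=1.0):
    """Limit a sample to the system's amplitude ceiling."""
    return max(-ceiling, min(ceiling, sample))

# A sine wave recorded "too hot" -- its peaks reach 1.5x the maximum amplitude.
hot_signal = [1.5 * math.sin(2 * math.pi * i / 16) for i in range(16)]
clipped = [clip(s) for s in hot_signal]

print(max(clipped))   # 1.0 -- the crests are flattened at the ceiling
print(min(clipped))   # -1.0 -- the troughs likewise
```

Everything above the ceiling collapses to the same flat value, which is exactly the cropped waveform described above.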

As an added benefit, this higher bit depth also results in less inherent noise (!!).  Our signal-to-noise ratio therefore gets a two-fold benefit: more capacity for our signal and less inherent noise.

The long and short of this is: if you have a higher bit-depth system available to you, absolutely use it.  Of the two possible changes one can make to a digital conversion system, sampling rate and bit depth, the increase in bit depth has the more profound impact on audio quality.  You will also increase the signal-to-noise ratio and help avoid clipping, as the system can tolerate higher amplitudes before going over.


Digital Audio: sampling rate basics

The conversion of an analog audio signal or voltage into a digital representation involves sampling it in time and quantizing its amplitude.  The continuous, real-world audio signal, representable as a smooth waveform with positive and negative pressure levels, is recorded as a series of periodic snapshots known as “samples”. The rate at which these amplitude snapshots occur is called the “sampling rate”.  Each sample is, like a frame of video, a picture of the signal at that moment.  Specifically, it is a picture of its amplitude. That, in the end, is all the recording system cares about: “what is the amplitude?”. The succession of these amplitude measurements (“samples”, shown below as dotted lines) results in a digital approximation of the original audio signal.

The frequencies and notes we hear in a recorded piece of music are merely the result of these changing amplitudes over time.
The difference between the actual incoming audio signal (grey line) and the quantized digital signal (red line) is called the “quantization error”.  The difference looks terrible at the moment, but we’ll get the original smooth signal back a little later on.
For CD-quality sound the rate is 44,100 samples per second, sometimes written as 44.1 kHz (kilohertz).  This sampling rate is one of many but, as part of the original CD-quality standard, it is certainly the most commonly used, even today when CDs are all but obsolete.  The reason for this number, and not something higher or lower, is a compromise between two things: 1) the desire for enough resolution to record all of the sounds humans care about, and 2) the need to keep file sizes small enough to fit on a standard CD.  Raw audio at 44.1 kHz (16-bit) uses around 10 MB per minute, and since a CD can hold around 750 MB, this leaves room for 75 minutes of music, enough to store a full album.
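The arithmetic behind that roughly-10-MB-per-minute figure can be checked in a few lines (a sketch of my own, assuming 16-bit stereo):

```python
# Raw CD-quality audio: 44,100 samples/sec, 2 bytes per sample (16 bits),
# 2 channels (stereo).
samples_per_sec = 44_100
bytes_per_sample = 2
channels = 2

bytes_per_minute = samples_per_sec * bytes_per_sample * channels * 60
print(bytes_per_minute)               # 10584000
print(bytes_per_minute / 1_000_000)   # ~10.6 -- roughly the 10 MB/min cited above
```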
But is 44.1 kHz enough? As we discussed in class, this question demands we know a little about human hearing. The range of our hearing includes frequencies up to around 20,000 Hertz, some of us less, depending on age and/or the number of really loud, hearing-destroying concerts we’ve attended.  Whatever sampling rate we choose, the system must take samples fast enough to represent the signals we humans care about.  Since every cycle of a waveform has both a positive and a negative pressure, a crest and a trough, a top and a bottom, we must dedicate a minimum of two samples to each cycle of a wave.  Therefore, the highest frequency a digital system can represent is half of the sampling rate.  This is the so-called “Nyquist frequency”, the highest frequency that a digital conversion can represent at a given sampling rate.  In the case of 44.1 kHz, the highest frequency we can accurately represent is 22,050 Hertz.  According to our initial understanding of human hearing, this seems to be enough; we can capture frequencies up to 20 kHz and even a little beyond. But this is just the beginning of the story, as we’ll see in the next section on bit depth.
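The two-samples-per-cycle reasoning above can be expressed directly (a minimal sketch, not from the post):

```python
# The Nyquist frequency: the highest representable frequency is half the
# sampling rate, because each cycle needs at least two samples
# (one for the crest, one for the trough).
def nyquist(sampling_rate):
    return sampling_rate / 2

def samples_per_cycle(frequency, sampling_rate):
    return sampling_rate / frequency

print(nyquist(44_100))                    # 22050.0 -- just above the ~20 kHz hearing limit
print(samples_per_cycle(20_000, 44_100))  # ~2.2 samples per cycle for a 20 kHz tone
```

A 20 kHz tone at 44.1 kHz gets just over two samples per cycle, which is why the standard sits only slightly above the minimum needed for the range of human hearing.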

Concert Order, Sunday December 7th 2014

Concert order for tomorrow’s 3pm performance in Lincoln Hall B20 is below, with available rehearsal times in parentheses.

Computer C (Left)         | Own Laptop (Center)       | Computer D (Right)
Shelby Hankee (10am)      | Laura Furman (10:10am)    | Henry Chuang (10:20am)
James Winebrake (2:10pm)  |                           | Yundi Gao (10:40am)
Michelle Gostic (10:50am) | Aarohee Fulay (11am)      | Skyler Gray (11:10am)
Adam Beckwith (11:30am)   |                           | Benjamin Hwang (11:40am)
Chad Lazar (11:50am)      |                           | Kwang Lee (12:00pm)
Jennifer Lim (12:10pm)    | Riley Owens (12:20pm)     | Nicholas Livezey (12:30pm)
Mary Millard (12:40pm)    |                           | Cassidy Molina (12:50pm)
Cameron Niazi (1pm)       |                           |
Brendan Sanok (1:10pm)    |                           | Hanbyul Seo (1:20pm)
William Seward (1:30pm)   | Marcus Wetlaufer (1:40pm) | Suk Sung (1:50pm)
Matthew Williams (2pm)    |                           | Jasmine Edison (10:30am)
Christopher Yu (2:20pm)   | Luka Maisuradze (2:30pm)  | Lisa Zhu (2:40pm)

Studio and lab issues resolved

Earlier today, a student reported issues in the library lab and studios, including 1) Live 9 licensing problems, 2) Reason network licensing (library lab), and 3) Rewire sharing between Live 9 and Reason.

All issues have now been resolved. All studios are repaired, the library lab license server is back online, and Rewire has been confirmed to work, once again, in all three studios.

– Please be sure to use Live 9 for the remainder of the semester, particularly if you are using Rewire.

– If and when issues of this kind (issues affecting the usability of the labs for any user) come up, I certainly appreciate hearing about them so they can be resolved immediately. The more detailed your input, the more quickly we can resolve the concern.

— Professor Ernste

P.S. While solving an issue in the lab, someone asked about the Network Drive being unavailable. Please see the FAQ on that issue if that happens again.

Tyler Ehrlich’s ScoreViewer for Google Glass

Tyler Ehrlich’s ScoreViewer for Google Glass, a project initially conceived for use by Professor Cynthia Johnston Turner and the Cornell Wind Ensemble, provided a framework for the performance of Professor Kevin Ernste‘s AdWords™/Edward, the first commissioned piece of its kind for the Google Glass Explorers program.

Score “cards” can be uploaded directly from the web and called up verbally for performance (“OK Glass, perform Kevin’s piece”). Once loaded, performers “wink” through parts of the score (pages, cards, etc.).

In AdWords™/Edward, winking advances through a series of short, repeated/looped phrases displayed in their glasses…a stylistic homage to Terry Riley’s In C, celebrating its 50th anniversary year in 2014.

Image: John Roark

Listening from today

Erik Satie: Vexations, score and music (excerpt).

Terry Riley: In C (1964)

Original recording (instrumental ensemble)

Another version (chamber ensemble)

Version for orchestra

Musical score here.

Steve Reich: Come Out

Brian Eno: Music For Airports (Ambient 1)

Alvin Lucier: I am sitting in a room

(Optional) In Bb (YouTube crowdsourced video/music project)

Recording audio in Ableton Live

Recording audio directly into Ableton Live is simple, requiring only an audio track and the specification of the input channel. This recording method has advantages over an editor such as Audacity, in that it allows the selection of arbitrary input channels and easily facilitates the layered synchronization of new material onto old.

1) Create an audio track in Live


2) In the new track’s input/output section, select the audio input channel you wish to record from.


3) Arm the record for this new track (WARNING: if the input is a microphone, make sure the speakers are turned down, monitoring only through headphones).


4) Arm the master record (circle, in the top-level “transport” controls) and hit “play” (triangle).


Connecting your laptop to studio computers and speakers

1) Using the cable supplied in each studio (1/8″ to split 1/4″, red and white), connect your laptop output to the front “Hi-Z 1 and 2” inputs on the Apogee Ensemble (top-most silver device under the computer).


2) In either Audacity or Live, set monitoring to “On” or arm record for channels 1 & 2. In Audacity, 1 & 2 are the default; in Live you must specify the track input, as shown here:

See the “Recording audio in Ableton Live” tutorial above as needed.

3) (If needed) Open the “Apogee Maestro” software (in /Applications if not shown as a purple “A” icon in the Macintosh dock). In the software’s “Input” tab, under channels 1 & 2 (the left-most channels), set the input from “Mic” to either “Inst” (instrument) or “+4”, depending on what is listed.

  • If you do this, please set it back to “Mic” when you are finished, as leaving it changed might confuse others using the studio after you.


Cornell Cinema presents:

A Sneak Preview of the New Documentary

Elektro Moskva

Introduced by Trevor Pinch (Science & Technology Studies, Cornell)

Wednesday, September 24 at 7:15pm

Willard Straight Theatre

The film features rare archival footage, including the last interview, from 1993, with famed inventor Leon Theremin.


Directed by Dominik Spritzendorfer & Elena Tikhonova

Welcome to the weird and definitely wired world of avant garde rock musicians, DIY circuit benders, vodka-swilling dealers and urban archaeologists/collectors, all fascinated with obsolete Soviet-era electronic synthesizers that were the by-product of the KGB and Soviet military, created in the off-hours by scientist/inventors cobbling together spare transistors and wires. In Russian and English with English subtitles. Cosponsored with Science & Technology Studies and The History Center of Tompkins County.

1 hr 29 min


Assignment 3: Due Thursday October 2nd

This assignment is, as announced in lecture, in two parts.

1. Choose a song or piece of music you know (or think you know!) well. Analyze the song in terms of its form and progression in time, listening carefully to how its inner details might aid in this progression. What do you think makes the music tick? What makes it move forward? What are the instruments and/or sounds, and how do they develop? Are there small details, momentary or otherwise unnoticed, that you think are important?

The result should be a diagram, in letters or symbols, of the form of the music plus a brief verbal description. This need not be any more than a few paragraphs to a full page, describing what you perceive to be the driving factors in the music.

2. A short re-mix using the materials provided below. This should not be a time-consuming exercise, as many of the raw materials will work nicely with one another without effort, but consider the relationships not only of simultaneity but also in time. Think about form as a compositional strategy: how could/should the music unfold?

Turn in the resulting WAV or AIFF audio file along with a brief description of your re-mix. Did you follow or attempt to follow a particular form? Or was the result serendipitous? If so, can you make some brief observations about the result?

Here are the links to download content:

Instrumental tracks

Vocal tracks

The original artist page on CCMixter is here.

These materials are available under a Creative Commons “Attribution / Non-commercial” license, meaning:

You are free to “share” (copy and redistribute the material in any medium or format) and to “adapt” (remix, transform, and build upon the material). You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. You may not use the material for commercial purposes.
