
Recording Techniques

Learning to create high-quality recordings is a central skill in computer/electroacoustic music.  The core skills learned here will translate directly to recording studio environments, field recording, and almost any other situation that requires audio input.

The core principle is to maximize signal quality on the way in.  As a rule of thumb, this means recording at the maximum amplitude without going over the limit while deferring effects like reverb, equalization (EQ), and other environmental effects until later.  The last part is important to note: if one were to record sounds with reverb or EQ that, in the end, sounded unflattering, there would be no way to get back the original signal. Think of it like taking a picture: if a shot you took turned out blurry (or the camera’s lens cap was on!), all the sharpening or lighting effects in the world will never get back the original shot.  One could, however, easily manipulate a clear, high-quality shot, blurring or applying other effects, to achieve almost any look.

In seeking the highest quality signal there are a number of important concepts to understand.  Each of these will be discussed in detail in the following sections.

Signal to Noise Ratio

One of the first and most important concepts to understand for recording is the signal-to-noise ratio.  In analog systems, this refers to the available bandwidth for the signal relative to the noise inherent in its physical system (circuits, recording medium).

In the digital realm, there is no inherent physical noise (except where the system interfaces with the analog realm); the only noise inherent to the process itself arises during quantization.  Our goal in the digital world is to maximize the signal relative to this quantization noise, a value determined by the number of bits the recording system is using.  We want to excite as many of these available bits as possible, filling them with signal to overcome the quantization noise.  Doing so is a matter of signal resolution, not of volume.
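To make this concrete, here is a minimal sketch in Python (using NumPy; the 440 Hz test tone and the two levels are arbitrary choices for illustration) that quantizes a sine wave at 16 bits and measures the resulting signal-to-quantization-noise ratio, once at full scale and once 40 dB down:

```python
import numpy as np

def quantized_snr_db(level, bits=16):
    t = np.linspace(0, 1, 44100, endpoint=False)
    signal = level * np.sin(2 * np.pi * 440 * t)   # a 440 Hz test tone
    steps = 2 ** (bits - 1)                        # 32,768 steps per polarity
    quantized = np.round(signal * steps) / steps   # round to the nearest step
    noise = quantized - signal                     # the quantization error
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

print(quantized_snr_db(1.0))    # full-scale signal: about 98 dB
print(quantized_snr_db(0.01))   # signal 40 dB down: about 58 dB
```

The full-scale tone lands near the textbook approximation of roughly 6 dB per bit (about 98 dB for a sine, by the common 6.02 × bits + 1.76 dB estimate), while the quiet tone, exciting far fewer bits, gives away about 40 dB of that ratio.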

Gain Staging

A “gain stage” is any point in the signal path where a gain boost or attenuation is available.  In other words, it’s any place in the chain where you have amplitude control, such as the output knob on an electric guitar or the input level on an amplifier.

Let’s use a hypothetical guitar/amplifier signal path to illustrate the importance of gain staging.  Say our guitar player turns the volume/output knob on his instrument almost all the way down while turning the amplifier input all the way up.  In this case, the amplifier is being taxed to compensate for the guitar’s weak signal.  The result will be a very thin sound for the guitar as well as an unpleasant amplification of circuit noise inherent in the amplifier.  We are not maximizing our signal at all…and may in fact be harming our amplifier.

Now let’s reverse the scenario: the guitar level is “cranked” while the amplifier input is turned down almost to nothing.  Here again the problems are both timbral/aural and physical.  The amplifier is not taking advantage of the incoming signal, acting as a barrier to the guitar rather than a support.  Worse, the overfed guitar signal might overload the amplifier input, causing physical damage.

The above is a simple case.  Yours will be more complex, often involving three or four gain stages.  You will need to understand your signal path fully in order to get the best results.

Again, to repeat a rule of thumb: the shorter the path, the fewer signal traps along the way.  You will have gain stages where the signal must be raised or lowered, but the ideal for most gain stages is to be transparent: to simply pass along the signal without impacting it, without boosting or attenuating.
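As a rough sketch of the guitar/amp scenario in decibel arithmetic (the noise floor and levels below are assumed, illustrative numbers, not measurements): the amplifier’s own circuit noise enters at its input, so every dB of gain used to rescue a weak source boosts that noise right along with the signal.

```python
def amp_output_snr_db(source_level_db, target_level_db=0.0,
                      amp_input_noise_db=-80.0):
    gain_db = target_level_db - source_level_db   # gain needed to reach the target
    noise_out_db = amp_input_noise_db + gain_db   # the amp's noise is boosted too
    return target_level_db - noise_out_db         # signal-to-noise at the output

print(amp_output_snr_db(-40.0))  # guitar knob nearly off: 40 dB SNR
print(amp_output_snr_db(-10.0))  # healthy guitar level:   70 dB SNR
```

Feeding the amplifier a healthy signal means less rescue gain and therefore less boosted noise: exactly the “maximize signal at each stage” principle above.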

Digital Audio: bit depth

For every digital sample, our analog-to-digital converter asks “what is the amplitude?”.  The question that remains is: how is this amplitude represented? The answer is “bit depth”, which determines both how many different amplitude levels/steps are possible and what the overall capacity of the system is…how loud a signal it can tolerate.

For CD-quality sound, 16 bits are used.  This means we will have 2^16 (“two to the 16th power”) different amplitude values available to us, or 65,536 steps.  Since the number of steps is divided between positive and negative values (our crests and troughs from before), this means it is divided into 32,767 positive values (plus zero) and 32,768 negative values. For each sample taken, the actual amplitude must be “rounded” to the nearest available level…producing another “error” relative to the original audio signal.  The signal is “quantized”. This “quantization error” produces a small amount of “quantization noise”, noise inherent to digital recording.  A digital system is totally noiseless on its own, but as soon as it is recording a signal, it makes these errors and ends up with this small amount of noise.
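Here is the rounding step itself as a small sketch (plain Python; the incoming amplitude is a hypothetical sample value):

```python
bits = 16
steps = 2 ** (bits - 1)             # 32,768 steps per polarity

incoming = 0.300001                 # the "real" amplitude, on a -1.0 to 1.0 scale
stored = round(incoming * steps)    # nearest available level: 9830
recovered = stored / steps          # 0.29998779296875
print(incoming - recovered)         # the quantization error: about 0.000013
```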


The amount of inherent noise versus the system’s capacity for the desired signal is called the signal-to-noise ratio, a concept I will illustrate in the next installment.  The signal-to-noise ratio determines both how loud and how soft a signal can be cleanly recorded. It determines the recording’s “dynamic range”.

The overall amplitude capacity of a digital system can be theoretically approximated as 6 decibels per bit.  For our 16-bit CD-quality signal, this means our system can tolerate about 96 dB (16 x 6 dB).

So, are 16 bits enough?  The threshold of pain varies among individuals, but is often cited as 120 or 130 dB.  So it may be that our 16-bit system, unlike the CD-quality sampling rate and its accommodation of the range of human hearing, is not enough. If one is not careful when recording, a signal can easily exceed the maximum amplitude, producing “clipping”.  In clipping, the waveform hits its amplitude ceiling, resulting in a cropped waveform.


The changing peaks above the maximum amplitude end up flattened. The naturally fluctuating amplitude levels are simply chopped off, and the resulting sound is jarring and distorted.  Increasing the bit depth will provide more amplitude “headroom” for these louder signals. 24 bits, for example, provide 2^24 (over 16 million!) amplitude steps and 144 dB of theoretical overall capacity (24 x 6 dB).  So a higher bit depth has a higher tolerance for amplitude, up to and beyond our “threshold of pain”.
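As a quick numerical sketch of clipping (Python/NumPy; the 440 Hz tone and the 6 dB of excess level are arbitrary choices), a signal too hot for the converter simply has every sample beyond full scale flattened at the ceiling:

```python
import numpy as np

t = np.linspace(0, 1, 44100, endpoint=False)
too_hot = 2.0 * np.sin(2 * np.pi * 440 * t)    # peaks at 2.0; the ceiling is 1.0
clipped = np.clip(too_hot, -1.0, 1.0)          # everything beyond is chopped flat

print(np.mean(np.abs(too_hot) > 1.0))          # about 2/3 of samples are flattened
```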

As an added benefit, this higher bit depth also results in less inherent noise (!!).  Our signal-to-noise ratio therefore gets a two-fold benefit: more capacity for our signal and less inherent noise.

The long and short of this is: if you have a higher bit-depth system available to you, absolutely use it.  Of the two possible changes one can make to a digital conversion system, sampling rate and bit depth, the increase in bit depth will have the more profound impact on audio quality.  You will also increase the signal-to-noise ratio and help avoid clipping, as the system can tolerate higher amplitudes before going over.


Digital Audio: sampling rate basics

The conversion of an analog audio signal or voltage into a digital representation is known as quantization.  The continuous, real-world audio signal, representable as a smooth waveform with positive and negative pressure levels, is recorded in a series of periodic snapshots known as “samples”. The rate at which these amplitude snapshots occur is called the “sampling rate”.  Each sample is, like a frame of video, a picture of the signal at that moment.  Specifically, it is a picture of its amplitude. That, in the end, is all the recording system cares about: “what is the amplitude?”. The succession of these amplitude measurements (“samples”, shown below as dotted lines) results in a digital approximation of the original audio signal.

[Image: the incoming waveform (grey) and its quantized samples (red, dotted lines)]

The frequencies and notes we hear in a recorded piece of music are merely the result of these changing amplitudes over time.
The difference between the actual incoming audio signal (grey line) and the quantized digital signal (red line) is called the “quantization error”.  The difference looks terrible at the moment, but we’ll get back the original smooth signal a little later on.
For CD-quality sound the rate is 44,100 samples per second, sometimes written as 44.1k (kilohertz).  This sampling rate is one of many but, as part of the original CD-quality standard, it is certainly the most commonly used, even today when CDs are all but obsolete.  The reason for this number, and not something higher or lower, is a compromise between two things: 1) the desire to have enough resolution to record all of the sounds humans care about and 2) the need to keep file sizes small enough to fit on a standard CD.  Raw audio at 44.1k (16 bits, stereo) uses around 10 MB per minute, and since a CD can hold around 750 MB, this leaves room for about 75 minutes of music, enough to store a standard double-sided album.
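The file-size arithmetic behind that compromise is quick to verify; a sketch in plain Python, assuming 16-bit stereo audio:

```python
bytes_per_second = 44100 * 2 * 2                 # samples/sec x 2 bytes x 2 channels
mib_per_minute = bytes_per_second * 60 / 2**20   # about 10.1 MB per minute
print(mib_per_minute * 74)                       # about 747 MB for 74 minutes
```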
But is 44.1k enough? As we discussed in class, this question demands we know a little about human hearing. The range of our hearing includes frequencies up to around 20,000 Hertz; some of us less, depending on age and/or the number of really loud, hearing-destroying concerts we’ve attended.  Whatever sampling rate we choose, the system must take samples fast enough to represent the signals we humans care about.  Since every cycle of a waveform has both a positive and negative pressure, a crest and a trough, a top and a bottom, we must dedicate a minimum of two samples to each cycle of a wave.  Therefore, the highest frequency a digital system can represent is half of the sampling rate.  This is the so-called “Nyquist frequency”, the highest frequency that a digital conversion can represent at a given sampling rate.  In the case of 44.1k, the highest frequency we can accurately represent is 22,050 Hertz.  According to our initial understanding of human hearing, this seems to be enough: we can capture frequencies up to 20k and even a little beyond. This is just the beginning of the story, as we’ll review in the next section on bit depth. A quick demonstration of the Nyquist limit follows below.
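What happens to a frequency above the Nyquist limit? It does not simply vanish; it “folds back” into the audible band as an alias. A minimal sketch (Python/NumPy; the 25 kHz test tone is an arbitrary choice) shows that a tone above Nyquist produces the very same samples as a lower tone:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr                          # one second of sample times
above = np.sin(2 * np.pi * 25000 * t)           # 25 kHz: above Nyquist (22,050 Hz)
alias = np.sin(2 * np.pi * (sr - 25000) * t)    # 19,100 Hz: back in the audible band

print(np.allclose(above, -alias))               # True: identical samples (up to sign)
```

This is why converters filter out content above the Nyquist frequency before sampling; once two different frequencies produce identical samples, no later processing can tell them apart.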

Concert Order, Sunday December 7th 2014

Concert order for tomorrow’s 3pm performance in Lincoln Hall B20 is below, with available rehearsal times in parentheses.

Computer C (Left) | Own Laptop (Center) | Computer D (Right)
Shelby Hankee (10am) | Laura Furman (10:10am) | Henry Chuang (10:20am)
James Winebrake (2:10pm) |  | Yundi Gao (10:40am)
Michelle Gostic (10:50am) | Aarohee Fulay (11am) | Skyler Gray (11:10am)

INTERMISSION ONE

Adam Beckwith (11:30am) |  | Benjamin Hwang (11:40am)
Chad Lazar (11:50am) |  | Kwang Lee (12:00pm)
Jennifer Lim (12:10pm) | Riley Owens (12:20pm) | Nicholas Livezey (12:30pm)
Mary Millard (12:40pm) |  | Cassidy Molina (12:50pm)
Cameron Niazi (1pm) |  |

INTERMISSION TWO

Brendan Sanok (1:10pm) |  | Hanbyul Seo (1:20pm)
William Seward (1:30pm) | Marcus Wetlaufer (1:40pm) | Suk Sung (1:50pm)
Matthew Williams (2pm) |  | Jasmine Edison (10:30am)
Christopher Yu (2:20pm) | Luka Maisuradze (2:30pm) | Lisa Zhu (2:40pm)

Studio and lab issues resolved

Earlier today, a student reported issues in the library lab and studios, including 1) Live 9 licensing problems, 2) Reason network licensing (library lab), and 3) Rewire sharing between Live 9 and Reason.

All issues have now been resolved. All studios are repaired, the library lab license server is back online, and Rewire has been confirmed to work, once again, in all three studios.

– Please be sure to use Live 9 for the remainder of the semester, particularly if you are using Rewire.

– If and when issues of this kind (issues affecting the usability of the labs for any user) come up, I certainly appreciate hearing about them so they can be resolved immediately.  The more detailed your input, the more quickly we can resolve the concern.

— Professor Ernste

P.S. While solving an issue in the lab, someone asked about the Network Drive being unavailable. Please see the FAQ on that issue if that happens again.

Tyler Ehrlich’s ScoreViewer for Google Glass

Tyler Ehrlich’s ScoreViewer for Google Glass, a project initially conceived for use by Professor Cynthia Johnston Turner and the Cornell Wind Ensemble, provided a framework for the performance of Professor Kevin Ernste‘s AdWords™/Edward, the first commissioned piece of its kind for the Google Glass Explorers program.

Score “cards” can be uploaded directly from the web and called up verbally for performance (“OK Glass, perform Kevin’s piece”). Once loaded, performers “wink” through parts of the score (pages, cards, etc.).

In AdWords™/Edward, winking advances through a series of short, repeated/looped phrases (see examples below, click to open) displayed in the performers’ glasses…a stylistic homage to Terry Riley’s In C, which celebrated its 50th anniversary in 2014.

[Image: John Roark (www.johnroarkmedia.com)]

Listening from today

Erik Satie: Vexations, score and music (excerpt).

Terry Riley: In C (1964)

Original recording (instrumental ensemble)

Another version (chamber ensemble)

Version for orchestra

Musical score here.

Steve Reich: Come Out

Brian Eno: Music For Airports (Ambient 1)

Alvin Lucier: I am sitting in a room

(Optional) In Bb (YouTube crowdsourced video/music project)

Recording audio in Ableton Live

Recording audio directly into Ableton Live is simple, requiring only an audio track and the specification of the input channel. This recording method has advantages over an editor such as Audacity in that it allows the selection of arbitrary input channels and easily facilitates layering and synchronizing new material onto old.

1) Create an audio track in Live

[Screenshot]

2) In the new track’s input/output section, select the audio input channel you wish to record from.

[Screenshot]

3) Arm recording for this new track (WARNING: if the input is a microphone, make sure the speakers are turned down, monitoring only through headphones).

[Screenshot]

4) Arm the master record (circle in the top-level “transport” controls) and hit “play” (triangle).

[Screenshot]

Connecting your laptop to studio computers and speakers

1) Using the cable supplied in each studio (1/8″ to split 1/4″, red and white), connect your laptop output to the front “Hi-Z 1 and 2” inputs on the Apogee Ensemble (top-most silver device under the computer).

[Screenshot]

2) In either Audacity or Live, set monitoring to “On” or arm record for channels 1 & 2. In Audacity, channels 1 & 2 are the default; in Live you must specify the track input, as shown in the tutorial below:

Tutorial on “Recording Audio” (as needed): https://www.ableton.com/en/articles/recording-audio/

3) (If needed) Open the “Apogee Maestro” software (in /Applications, if not shown as a purple “A” icon in the Macintosh dock). In the software’s “Input” tab, under channels 1 & 2 (left-most channels), set the input from “Mic” to either “Inst” (instrument) or “+4”, depending on what is listed.

  • If you do this, please set it back to “Mic” when you are finished, as a changed setting might confuse others using the studio after you.

[Screenshot: Apogee Maestro “Input” tab with channels set to “Inst”]

Cornell Cinema presents:

A Sneak Preview of the New Documentary

Elektro Moskva

Introduced by Trevor Pinch (Science & Technology Studies, Cornell)

Wednesday, September 24 at 7:15pm

Willard Straight Theatre

The film features rare archival footage, including the last interview, from 1993, with famed inventor Leon Theremin.

Watch a trailer at: elektromoskva.com

Directed by Dominik Spritzendorfer & Elena Tikhonova

Welcome to the weird and definitely wired world of avant garde rock musicians, DIY circuit benders, vodka-swilling dealers and urban archaeologists/collectors, all fascinated with obsolete Soviet-era electronic synthesizers that were the by-product of the KGB and Soviet military, created in the off-hours by scientist/inventors cobbling together spare transistors and wires. In Russian and English with English subtitles. Cosponsored with Science & Technology Studies and The History Center of Tompkins County.

1 hr 29 min

More at cinema.cornell.edu
