Documentation

Notes, Wavelength, and Frequency

Note Octave Frequency (Hz) Wavelength (m)* Comment
C 0 16.351 20.812m
C# / Db 0 17.324 19.643m
D 0 18.354 18.540m
D# / Eb 0 19.445 17.500m
E 0 20.601 16.518m
F 0 21.827 15.590m
F# / Gb 0 23.124 14.716m
G 0 24.499 13.890m
G# / Ab 0 25.956 13.110m
A 0 27.5 12.374m Lowest Note of Piano
A# / Bb 0 29.135 11.680m
B 0 30.868 11.024m
C 1 32.703 10.405m
C# / Db 1 34.648 9.821m
D 1 36.708 9.270m
D# / Eb 1 38.891 8.750m
E 1 41.203 8.259m Lowest Note of Bass
F 1 43.654 7.795m
F# / Gb 1 46.249 7.358m
G 1 48.999 6.945m
G# / Ab 1 51.913 6.555m
A 1 55 6.187m
A# / Bb 1 58.27 5.840m
B 1 61.735 5.512m
C 2 65.406 5.203m
C# / Db 2 69.296 4.911m
D 2 73.416 4.635m
D# / Eb 2 77.782 4.375m
E 2 82.407 4.129m Lowest Note of Guitar
F 2 87.307 3.898m
F# / Gb 2 92.499 3.679m
G 2 97.999 3.472m
G# / Ab 2 103.826 3.278m
A 2 110 3.094m
A# / Bb 2 116.541 2.920m
B 2 123.471 2.756m
C 3 130.813 2.601m
C# / Db 3 138.591 2.455m
D 3 146.832 2.318m
D# / Eb 3 155.563 2.187m
E 3 164.814 2.065m
F 3 174.614 1.949m
F# / Gb 3 184.997 1.839m
G 3 195.998 1.736m Lowest Note of Violin
G# / Ab 3 207.652 1.639m
A 3 220 1.547m
A# / Bb 3 233.082 1.460m
B 3 246.942 1.378m
C 4 261.626 1.301m Middle C
C# / Db 4 277.183 1.228m
D 4 293.665 1.159m
D# / Eb 4 311.127 1.094m
E 4 329.628 1.032m
F 4 349.228 0.974m
F# / Gb 4 369.994 0.920m
G 4 391.995 0.868m
G# / Ab 4 415.305 0.819m
A 4 440 0.773m Tuning reference note
A# / Bb 4 466.164 0.730m
B 4 493.883 0.689m
C 5 523.251 0.650m
C# / Db 5 554.365 0.614m
D 5 587.33 0.579m
D# / Eb 5 622.254 0.547m
E 5 659.255 0.516m
F 5 698.456 0.487m
F# / Gb 5 739.989 0.460m
G 5 783.991 0.434m
G# / Ab 5 830.609 0.410m
A 5 880 0.387m
A# / Bb 5 932.328 0.365m
B 5 987.767 0.345m
C 6 1046.502 0.325m
C# / Db 6 1108.731 0.307m
D 6 1174.659 0.290m
D# / Eb 6 1244.508 0.273m
E 6 1318.51 0.258m
F 6 1396.913 0.244m
F# / Gb 6 1479.978 0.230m
G 6 1567.982 0.217m
G# / Ab 6 1661.219 0.205m
A 6 1760 0.193m
A# / Bb 6 1864.655 0.182m
B 6 1975.533 0.172m
C 7 2093.005 0.163m
C# / Db 7 2217.461 0.153m
D 7 2349.318 0.145m
D# / Eb 7 2489.016 0.137m
E 7 2637.021 0.129m
F 7 2793.826 0.122m
F# / Gb 7 2959.955 0.115m
G 7 3135.964 0.109m
G# / Ab 7 3322.438 0.102m
A 7 3520 0.097m
A# / Bb 7 3729.31 0.091m
B 7 3951.066 0.086m
C 8 4186.009 0.081m
C# / Db 8 4434.922 0.077m
D 8 4698.636 0.072m
D# / Eb 8 4978.032 0.068m
E 8 5274.042 0.065m
F 8 5587.652 0.061m
F# / Gb 8 5919.91 0.057m
G 8 6271.928 0.054m
G# / Ab 8 6644.876 0.051m
A 8 7040 0.048m
A# / Bb 8 7458.62 0.046m
B 8 7902.132 0.043m
C 9 8372.018 0.041m
C# / Db 9 8869.844 0.038m
D 9 9397.272 0.036m
D# / Eb 9 9956.064 0.034m
E 9 10548.084 0.032m
F 9 11175.304 0.030m
F# / Gb 9 11839.82 0.029m
G 9 12543.856 0.027m
G# / Ab 9 13289.752 0.026m
A 9 14080 0.024m
A# / Bb 9 14917.24 0.023m
B 9 15804.264 0.022m
* Wavelengths assume a speed of sound of approximately 340 m/s.
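The values above follow from twelve-tone equal temperament with A4 = 440 Hz as the reference. A minimal sketch (the ~340 m/s speed of sound is an assumption chosen to match the table):

```python
# Sketch: equal-tempered note frequencies (A4 = 440 Hz) and wavelengths.
# The speed of sound here (~340 m/s) is an assumption matching the table above.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
SPEED_OF_SOUND = 340.29  # metres per second (assumed)

def note_frequency(note, octave, a4=440.0):
    """Frequency in Hz of a note in scientific pitch notation (e.g. "A", 4)."""
    # Semitone distance from A4, the tuning reference note.
    semitones = NOTES.index(note) + 12 * (octave - 4) - NOTES.index("A")
    return a4 * 2 ** (semitones / 12)

def wavelength(freq_hz):
    """Wavelength in metres of a tone at the given frequency."""
    return SPEED_OF_SOUND / freq_hz
```

For example, note_frequency("E", 2) returns roughly 82.407 Hz, the lowest note of the guitar.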

Pitch ratios

Ascending Intervals Descending Intervals
Interval Frequency Ratio Interval Frequency Ratio
unison 1 : 1 unison 1 : 1
m2 1 : 1.0595 minor 2nd 1 : 0.9439
M2 1 : 1.1225 Major 2nd 1 : 0.8909
m3 1 : 1.1892 minor 3rd 1 : 0.8409
M3 1 : 1.2599 Major 3rd 1 : 0.7937
P4 1 : 1.3348 Perfect 4th 1 : 0.7492
aug4/dim5 1 : 1.4142 aug 4th/dim 5th 1 : 0.7071
P5 1 : 1.4983 Perfect 5th 1 : 0.6674
m6 1 : 1.5874 minor 6th 1 : 0.6300
M6 1 : 1.6818 Major 6th 1 : 0.5946
m7 1 : 1.7818 minor 7th 1 : 0.5612
M7 1 : 1.8877 Major 7th 1 : 0.5297
Octave 1 : 2 Octave 1 : 0.5


For ascending intervals greater than an octave, multiply the frequency ratio
by 2 for each additional octave (by 2 for one octave beyond, by 4 for two, etc.)

Examples:

– a minor tenth up = 2.378 ( 1.189 × 2 )
– 2 octaves + a tritone up = 5.6568 ( 1.4142 × 4 )

For descending intervals greater than an octave, divide the Freq. ratio by
2 (if between 1 and 2 octaves), by 4 (if between 2 & 3 octaves), and so on.

Examples:

– an octave plus a perfect 4th down = 0.3745 ( 0.749/2 )
– 2 octaves plus a minor 3rd down = 0.21 ( 0.84/4 )
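All of the ratios above are powers of the semitone ratio 2^(1/12), so both tables and the compound-interval rules collapse into one expression. A minimal sketch:

```python
# Sketch: equal-tempered interval ratios. Positive semitone counts ascend,
# negative ones descend, and compound intervals need no special casing.
def interval_ratio(semitones):
    """Frequency ratio for an interval of the given number of semitones."""
    return 2 ** (semitones / 12)
```

interval_ratio(7) gives the perfect 5th (about 1.498), while interval_ratio(-17), an octave plus a perfect 4th down, gives about 0.3745.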

Recording Techniques

Learning to create high-quality recordings is a central skill in computer/electroacoustic music.  The core skills learned here will translate directly into recording studio environments, field recording, and almost any other situation that requires audio input.

The core principle is to maximize signal quality on the way in.  As a rule of thumb, this means recording at the maximum amplitude without exceeding the limit, while deferring effects like reverb, equalization (EQ), and other environmental processing until later.  The last part is important to note: if one were to record sounds with reverb or EQ that, in the end, sounded unflattering, there is no way to get back the original signal. Think of it like taking a picture: if a shot you took turned out blurry (or the camera’s cap was on!), all the sharpening or lighting effects in the world will never get back the original shot.  One could, however, easily manipulate a clear, high-quality shot, blurring or applying other effects, to achieve almost any look.

In seeking the highest quality signal there are a number of important concepts to understand.  Each of these will be discussed in detail in the following sections.

Signal to Noise Ratio

One of the first and most important concepts to understand for recording is the signal-to-noise ratio.  In analog systems, this refers to the available bandwidth for the signal relative to the noise inherent in the physical system (circuits, recording medium).

In the digital realm there is no inherent physical noise, apart from where the system interfaces with the analog world, except for the noise introduced during quantization.  Our goal in the digital world is to maximize the signal relative to this quantization noise, a value determined by the number of bits the recording system is using.  We want to excite as many of these available bits as possible, filling them with signal to overcome the quantization noise.  Doing so is a matter of signal resolution, not of volume.

Gain Staging

A “gain stage” is any point in the signal path where a gain boost or attenuation is available.  In other words, it’s any place in the chain where you have amplitude control, such as the output knob on an electric guitar or the input level on an amplifier.

Let’s use the hypothetical guitar/amplifier signal-path scenario to illustrate the importance of gain staging.  Say our guitar player turns the volume/output knob on his instrument almost all the way down, while turning the amplifier input all the way up.  In this case, the amplifier is being taxed to compensate for the guitar’s weak signal.  The result will be a very thin sound for the guitar as well as an unpleasant amplification of circuit noise inherent in the amplifier.  We are not maximizing our signal at all…and may in fact be harming our amplifier.

Now let’s reverse the scenario: the guitar level is “cranked” while the amplifier input is turned almost to nothing.  Here again the problems are both timbral/aural and physical.  The amplifier is not taking advantage of the incoming signal, acting as a barrier to the guitar rather than a support.  Worse, the overfed guitar signal might be overloading the amplifier input, causing physical damage.

The above is a simple case.  Yours will be more complex, involving three or four gain stages.  You will need to understand your signal path fully in order to get the best results.

Again, to repeat a rule of thumb: the shorter the path, the fewer signal traps along the way.  You will have gain stages where the signal must be raised or lowered, but the ideal for many gain stages is to be transparent: to simply pass along the signal without impacting it, without boosting or attenuating.
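Gains along a path combine by simple addition in decibels, which is why one extreme stage cannot be cleanly “undone” by another: the noise picked up in between is boosted too. A hypothetical sketch (the function names are illustrative, not from any audio library):

```python
# Sketch: gains in a signal chain add in decibels; a transparent stage is 0 dB.
def chain_gain_db(stage_gains_db):
    """Net gain of a signal path, given each stage's gain in dB."""
    return sum(stage_gains_db)

def db_to_amplitude_ratio(db):
    """Convert a dB gain to a linear amplitude ratio."""
    return 10 ** (db / 20)
```

A weak guitar output at -30 dB “compensated” by +30 dB at the amplifier nets 0 dB overall, but the amplifier also boosts its own circuit noise by the full +30 dB (an amplitude ratio of roughly 31.6) along the way.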

Digital Audio: bit depth

For every digital sample, our analog-to-digital converter asks “what is the amplitude?”.  The question that remains is: how is this amplitude represented? The answer is “bit depth”, which determines both how many different amplitude levels/steps are possible and what the overall capacity of the system is…how loud a signal it can tolerate.

For CD-quality sound, 16 bits are used.  This means we will have 2^16 (“two to the 16th power”) different amplitude values available to us, or 65,536 steps.  Since the number of steps is divided between positive and negative values (our crests and troughs from before), this means 32,767 positive values (plus zero) and 32,768 negative values. For each sample taken, the actual amplitude must be “rounded” to the nearest available level…producing another “error” relative to the original audio signal.  The signal is “quantized”. This “quantization error” produces a small amount of “quantization noise”, noise inherent to digital recording.  A digital system is totally noiseless on its own, but as soon as it is recording a signal, it makes these rounding errors and ends up with this small amount of noise.
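The rounding described above can be sketched in a few lines (a hypothetical illustration, not production audio code):

```python
# Sketch: round a sample in [-1.0, 1.0] to the nearest step of a signed
# 16-bit grid (32,768 negative steps, 32,767 positive steps, plus zero).
def quantize(sample, bits=16):
    """Return the nearest representable amplitude; the difference from
    the input is the quantization error."""
    steps = 2 ** (bits - 1)  # 32768 for 16 bits
    level = max(-steps, min(steps - 1, round(sample * steps)))
    return level / steps
```

quantize(1/3) does not return exactly 1/3; the tiny difference is one sample’s worth of quantization error, and a higher bit depth shrinks it.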


The amount of inherent noise versus the system’s capacity for the desired signal is called the signal-to-noise ratio.  The signal-to-noise ratio determines both how loud and how soft a signal can be cleanly recorded; it determines the recording’s “dynamic range”.

The overall amplitude capacity of a digital system can be theoretically approximated as 6 decibels per bit.  For our 16-bit CD-quality signal, this means our system can tolerate 96 dB.
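That rule of thumb is trivial to compute (a sketch; the more precise engineering figure is 6.02 dB per bit plus 1.76 dB, but the approximation used in the text is kept here):

```python
# Sketch: the text's rule of thumb, roughly 6 dB of dynamic range per bit.
def dynamic_range_db(bits):
    """Approximate amplitude capacity of a digital system in dB."""
    return 6 * bits
```

dynamic_range_db(16) gives the 96 dB cited above, and dynamic_range_db(24) the 144 dB discussed below.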

So, is 16 bits enough?  The threshold of pain varies among individuals, but is often cited as 120 or 130 dB.  So it may be that–unlike the CD-quality sampling rate and its accommodation of the range of human hearing–our 16-bit system is not enough. If one is not careful when recording, a signal can easily exceed the maximum amplitude, producing “clipping”.  In clipping, the waveform hits its amplitude ceiling, resulting in a cropped waveform.


The changing peaks above the maximum amplitude end up flattened. The naturally fluctuating amplitude levels are simply chopped off, and the resulting sound is jarring and distorted.  Increasing the bit depth will provide more amplitude “headroom” for these louder signals. 24 bits, for example, provide 2^24 (over 16 million!) amplitude steps and 144 dB of theoretical overall capacity (24 x 6 dB).  So, a higher bit depth has a higher tolerance for amplitude, up to and beyond our “threshold of pain”.
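Hard clipping is exactly this flattening; a minimal sketch:

```python
# Sketch: hard clipping flattens any sample beyond the system's ceiling.
def clip(samples, ceiling=1.0):
    """Return samples with everything above/below the ceiling chopped off."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]
```

clip([0.5, 1.4, -2.0]) returns [0.5, 1.0, -1.0]: the in-range sample passes untouched while the overs are cropped to the ceiling.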

As an added benefit, this higher bit depth also results in less inherent noise (!!).  Our signal-to-noise ratio, therefore, gets a two-fold benefit: more capacity for our signal and less inherent noise.

The long and short of this is: if you have a higher-bit-depth system available to you, absolutely use it.  Of the two possible changes one can make to a digital conversion system, sampling rate and bit depth, the increase in bit depth will have the more profound impact on audio quality.  You will also increase the signal-to-noise ratio and help avoid clipping, as the system can tolerate higher amplitudes before going over.


Digital Audio: sampling rate basics

The conversion of an analog audio signal, or voltage, into a digital representation involves sampling and quantization.  The continuous, real-world audio signal, representable as a smooth waveform with positive and negative pressure levels, is recorded in a series of periodic snapshots known as “samples”. The rate at which these amplitude snapshots occur is called the “sampling rate”.  Each sample is, like a frame of video, a picture of the signal at that moment.  Specifically, it is a picture of its amplitude. That, in the end, is all the recording system cares about: “what is the amplitude?”. The succession of these amplitude measurements (“samples”, shown below as dotted lines) results in a digital approximation of the original audio signal.

The frequencies and notes we hear in a recorded piece of music are merely the result of these changing amplitudes over time.

The difference between the actual incoming audio signal (grey line) and the quantized digital signal (red line) is called the “quantization error”.  The difference looks terrible at the moment, but we’ll get back the original smooth signal a little later on.

For CD-quality sound the rate is 44,100 samples per second, sometimes written as 44.1k (kilohertz).  This sampling rate is one of many but, as part of the original CD-quality standard, it is certainly the most commonly used, even today when CDs are all but obsolete.  The reason for this number, and not something higher or lower, is a compromise between two things: 1) the desire to have enough resolution to record all of the sounds humans care about, and 2) the need to keep file sizes small enough to fit on a standard CD.  Raw audio at 44.1k (16 bits) uses around 10 MB per minute, and since a CD can hold around 750 MB, this leaves room for 75 minutes of music, enough to store a standard double-sided album.

But is 44.1k enough? As we discussed in class, this question demands we know a little about human hearing. The range of our hearing includes frequencies up to around 20,000 Hertz; some of us less, depending on age and/or the number of really loud, hearing-destroying concerts we’ve attended.  Whatever sampling rate we choose, the system must take samples fast enough to represent the signals we humans care about.  Since every cycle of a waveform has both a positive and a negative pressure, a crest and a trough, a top and a bottom, we must dedicate a minimum of two samples to each cycle of a wave.  Therefore, the highest frequency a digital system can represent is half of the sampling rate.  This is the so-called “Nyquist frequency”, the highest frequency that a digital conversion can represent given the sampling rate.  In the case of 44.1k, the highest frequency we can accurately represent is 22,050 Hertz.  According to our initial understanding of human hearing, this frequency seems to be enough: we can capture frequencies up to 20k and even a little beyond. This is just the beginning of the story, as reviewed in the section on bit depth above.
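The arithmetic in this section is short enough to verify directly (a sketch; the stereo 16-bit assumptions come from the CD-quality standard described above):

```python
# Sketch: the Nyquist frequency and the CD storage arithmetic from the text.
def nyquist(sample_rate_hz):
    """Highest frequency representable: two samples per cycle minimum."""
    return sample_rate_hz / 2

def raw_audio_mb_per_min(sample_rate_hz=44100, bits=16, channels=2):
    """Uncompressed data rate in MB per minute (stereo CD audio assumed)."""
    return sample_rate_hz * (bits / 8) * channels * 60 / 1_000_000
```

nyquist(44100) is 22,050 Hz, and raw_audio_mb_per_min() is about 10.6 MB, matching the “around 10 MB per minute” figure above.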