1421 week 0 summary

[Note: with this file, if you prefer different formatting, a different font, or plain text, the “reader view” in a browser like Firefox is especially useful; or you can download the page as an html or txt file.]

Below is some of what Prof. Ernste mentioned in his introduction to the course yesterday. Here I merge what was discussed in the two section meetings (there were only minor variations between them).

If you feel you grasped everything pretty well, you can skim past the underlined phrases, which expand a little on the main text, explaining or defining potentially unfamiliar terms.

In bold-italics, some terms are defined.

In bold, some extra-important ideas to remember.

Intro

We started things with a sound check – specifically, to make sure your Zoom settings allow you to receive stereo signals. John Eagle explained how to check your Zoom settings to ensure stereo is enabled.

We also introduced the teaching assistants – who are also your teachers – Josh Biggs and John Eagle. They are practicing artists in Cornell’s highly selective Doctor of Musical Arts program in composition (typically only one or two students are admitted each year), and they are excellent and knowledgeable practitioners of the technology covered in this course.

Stereo, as you may already know, sends two different but connected signals to a device with two speakers. Virtually all laptops have stereo speakers, though it’s easier to hear the nuance on headphones or on quality speakers or monitors spaced apart. Stereo is not the only way to output sound – mono, for a single speaker, has made a comeback due to mobile phones and some streaming formats. Movie theaters use many speakers – a kind of “surround sound.” A lot of academic studios use a format of four speakers surrounding the listener, each with its own unique signal. Stereo, though, has been around a long time because it’s economical and effective at mimicking real sounds, since we generally hear through two ears.
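
If you’re curious what a stereo signal looks like to the computer, here’s a tiny Python sketch (my own illustration, not course material): it writes a two-second WAV file in which a 440 Hz tone sits mostly in the left channel and a 550 Hz tone mostly in the right. On headphones the separation is obvious.

    # Requires numpy; the file name "stereo_demo.wav" is just a placeholder.
    import numpy as np
    import wave

    rate = 44100                    # samples per second (CD-quality)
    t = np.arange(2 * rate) / rate  # two seconds of time points

    left  = 0.8 * np.sin(2 * np.pi * 440 * t) + 0.2 * np.sin(2 * np.pi * 550 * t)
    right = 0.2 * np.sin(2 * np.pi * 440 * t) + 0.8 * np.sin(2 * np.pi * 550 * t)

    # Interleave left/right samples and convert to 16-bit integers for WAV.
    stereo = (np.column_stack([left, right]) * 0.5 * 32767).astype(np.int16)

    with wave.open("stereo_demo.wav", "wb") as f:
        f.setnchannels(2)    # stereo = two channels
        f.setsampwidth(2)    # 16 bits = 2 bytes per sample
        f.setframerate(rate)
        f.writeframes(stereo.tobytes())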

Did it occur to you that Prof. Ernste was not just using a microphone connected to his computer (which was capturing his voice)? Whenever he played musical sound, he was sending another audio signal directly from his computer, from another application feeding audio to Zoom – in other words, the sound files he played from his hard drive were routed within his system to Zoom, and then eventually to all of our speakers. This principle – getting applications to share inputs and outputs – will be an important part of our music-making toward the end of Part Two and throughout Part Three.

Incidentally, about the music we heard: it was Prof. Ernste’s own recent composition. It used recorded sounds from Six Mile Creek, among others. It was recorded and processed using a specialized technique related to stereo – binaural sound – a very intense and hyper-realistic approach to stereo using two microphones placed on opposite sides of a styrofoam head (and a lot of studio technique).

 

This is a course putting creativity first

This course is a “maker” course for music using computer technology. One of the primary goals of the course is therefore to emphasize tools and skills for creative work, which manifest in three composition projects. We spend some lab sessions listening to these resulting works, and the final project is itself a live-streamed performance. In service of this creative work, we’ll look at history, theory, technical aspects of hardware and software, and work with smaller skill-based assignments.

Note that the assignment for Project One is posted online now, so you can start thinking about it. (Smaller weekly assignments will be due before Project One.)

Creative space of the course = subjectivity and subversion welcome

Objective space of the course = deadlines: firm. The assignments are designed to build cumulatively on each other, so it’s important to finish each one on time so as to be able to move on to the next.

Some details re. hardware and software

In this iteration of the course, given that the final weeks will be remote, a small MIDI controller is required for Part 3 (as mentioned in the syllabus), and possibly sooner if necessary; we will give as much advance notice as possible. This device has the advantage of being usable for your future work.

The items suggested in the syllabus are roughly the cost of a textbook; if you anticipate issues with affordability, please do reach out to us and we may be able to find an alternate solution. The devices mentioned in the syllabus are also bundled with Ableton Live Lite software, which you’ll install on your Windows or Mac OS computer for the fully remote final portion of the course. (There’s also a trial version of Ableton Live Suite – the software found in the studios – downloadable from the company’s website, currently available as a 90-day free trial, which may or may not be useful to you when you’re remote.)

In addition, there may be other sources of financial support for tech at the university level, including the Access Fund, though we haven’t been able to confirm whether this type of expense would qualify. (The Access Fund application for the fall semester will open on September 8th at 8:00 EST and is scheduled to close on November 23rd at 5:00pm EST.)

https://scl.cornell.edu/identity-resources/first-generation-low-income-support/access-fund

If you don’t know what a MIDI controller is, then you’ve enrolled in the right course. In Part Two, we will address this in detail. For now: it’s a hardware device which communicates with digital music software using a protocol that has been common in music over the last thirty or so years, called Musical Instrument Digital Interface (though everyone uses the term MIDI). “Controller” can actually mean many things – basically anything which can send basic on/off signals (or sometimes scaled ones, e.g. 0-127).
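
To make those “on/off signals” concrete: a MIDI message is just a few bytes. Here’s a minimal Python sketch of the note-on/note-off wire format (my own illustration of the protocol, not anything you’ll need to write for this course):

    def note_on(note, velocity, channel=0):
        # Status byte 0x90 = "note on"; low nibble = channel (0-15).
        # Note number and velocity are each 7 bits, i.e. 0-127.
        return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

    def note_off(note, channel=0):
        # Status byte 0x80 = "note off".
        return bytes([0x80 | channel, note & 0x7F, 0])

    print(note_on(60, 100).hex())   # "903c64" -- middle C, fairly loud
    print(note_off(60).hex())       # "803c00"

That 0-127 range mentioned above comes straight from those 7-bit data bytes.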

Here, the important thing about the three commercial options we suggest is that they all have an 8×8 grid of buttons, which is a common way to lay out prepared sounds in your software and then trigger them with the hardware controller in live performance. (These devices also have the benefit of being easily recognized by the Ableton software.)
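
As a hypothetical sketch of how such a grid works under the hood: many grid controllers report each pad press as a MIDI note, so a (row, column) position maps to a note number. The row-major layout below is an assumption for illustration only – real devices differ, and Ableton handles this mapping for you.

    def pad_to_note(row, col, base=36, width=8):
        # Assumed layout: pads numbered left-to-right, bottom-to-top,
        # starting from an (assumed) base note of 36.
        return base + row * width + col

    print(pad_to_note(0, 0))   # 36: bottom-left pad in this layout
    print(pad_to_note(7, 7))   # 99: top-right pad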

Ableton Live – sometimes we’ll just call it “Live” – is one of many commercial manifestations of something called a DAW, a digital audio workstation. We’ll talk about what this is – for now, let’s think of it as a Swiss Army knife of musical software functions. It allows you to record, manipulate, synthesize, and listen to sounds using a visual interface. Typically you’ll record and synthesize sound on multiple tracks which can be combined (mixed) together; you can also edit and process sound with specific mini-features, output files, use your sounds in live performance, link other software to the DAW, etc. (Ableton, by the way, has certain features which are hard to find in most other DAWs. You’ll learn what they are.)

We will introduce other software types – to start off Project One, this will include a sound editor, used primarily for recording and chopping up sound, via the nimble and libre/free program Audacity. (Most features of sound editors are also found in DAWs.)
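
A glimpse of what “chopping up sound” amounts to inside the computer: a recording is an array of samples, and a cut is just a slice of that array. A minimal Python sketch (my own illustration; it assumes a mono, 16-bit WAV, and “input.wav” is a placeholder name):

    import wave

    with wave.open("input.wav", "rb") as f:
        rate = f.getframerate()
        audio = f.readframes(f.getnframes())   # raw bytes, 2 bytes/sample

    # Keep only the stretch from 0.5s to 2.0s by converting times to indices.
    start, end = int(0.5 * rate), int(2.0 * rate)
    chunk = audio[start * 2 : end * 2]

    with wave.open("chunk.wav", "wb") as f:
        f.setnchannels(1)     # mono, matching our assumption above
        f.setsampwidth(2)     # 16-bit
        f.setframerate(rate)
        f.writeframes(chunk)

Audacity gives you this same operation visually, with a waveform and a selection tool.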

As Prof. Ernste said, learning specific software is not our goal. If anything, we learn what he calls “metaphors,” or types of software, well enough that you can quickly acquaint yourself with any current or future software of the same type.

Website and the Studios

Prof. Ernste introduced the Box folders in which to submit your work. Sign-ups for the studios will begin Thurs Sep 10, with limits of 2 hours/day and 6 hours/week; due to current Cornell health policy, the studios must be accessed solo, by one individual at a time. We will share more details on the health safety protocol related to these small spaces. The studios are best accessed from the side entrance to Lincoln Hall with ID-swipe (on the north side, facing the parking lot).

The studio spaces are special because they are “critical listening” spaces incorporating industry-standard software and hardware, including professional-grade microphones. The most important aspect of the studios made available to you is the speakers – typically referred to in the field as studio monitors. They are used for careful listening, typically by an individual sitting equidistant from them (in this case, there are two of them, and your chair is the third point of an equilateral triangle). These monitors have a flat response, which means they produce audio faithfully representing the signal sent to them, without smoothing, adding to, mitigating, or otherwise messing with it.

This hardware is what’s typically used for mixing, a technique we’ll introduce in a couple of weeks. For now, about mixing, we can say that recorded (or custom-made, i.e. synthesized) sounds can be edited, combined, and otherwise altered to create a new recording (or file, in digital terms). Mixing involves the playback and subtle, or sometimes not so subtle, balancing of various parameters of these multiple sounds to make a convincing unified product on the two (or more) monitors – sometimes called a final mix.
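
At its simplest, that “balancing” is arithmetic: each track gets a gain (level), the scaled tracks are summed, and you make sure the sum doesn’t exceed full scale (which would distort). A minimal sketch with numpy and made-up material – not how a DAW is implemented, just the core idea:

    import numpy as np

    rate = 44100
    t = np.arange(3 * rate) / rate
    track1 = np.sin(2 * np.pi * 220 * t)   # stand-in "bass" part
    track2 = np.sin(2 * np.pi * 880 * t)   # stand-in "melody" part

    # Mixing: choose a level for each track, then sum them.
    mix = 0.7 * track1 + 0.4 * track2

    peak = np.max(np.abs(mix))
    if peak > 1.0:          # beyond full scale -> clipping
        mix = mix / peak    # crude safety normalization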

The overview of the course

Much of the previous paragraph describes Part One of this course – working with recorded sounds from the real world (editing, mixing, and reworking them); understanding how recording and playback rely on electrical (analog) signals; and understanding how sound is stored on and generated from the (digital) computer – all in service of making music.

Much of Part Two involves the “other side” – constructing sounds from scratch, and controlling them using the computer and devices attached to it. Once you have a solid notion of how to organize and control such sounds, we also bring back recorded sounds – here called samples – as just another kind of material which can be subjected to the same controls. We also introduce synthesis and control methods beyond the DAW.

(Just an aside: when short recorded sounds are called “samples,” this is not the same meaning as the “sample rate” of digital audio, even though the same word is used. In Week One, we will learn more about digital audio.)
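
To put numbers on the digital-audio sense of the word: at the common rate of 44,100 samples per second, even a short 1.5-second “sample” (clip) contains 44,100 × 1.5 = 66,150 individual samples (measurements of the signal).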

Part Three involves extending and sharing this knowledge through live performance and streaming, and individualizing the course to your interests.

Some examples played in class

Prof. Ernste may share the works by former students as links or as files.

Alumnx included Amy Lin, Ross Anderson, Nathan Ward, and Suneth Attygale.

-EM
