                   The midiins algorithms

The _C_s_o_u_n_d instrument algorithms within  the  _m_i_d_i_i_n_s   sub-
directory  of  the  _E_a_s_t_m_a_n  _C_s_o_u_n_d _L_i_b_r_a_r_y on _a_r_c_a_n_a accept
MIDI input, either from real-time MIDI controllers or from a
previously created MIDI file.  To see a list of the current-
ly available instrument algorithms in the _m_i_d_i_i_n_s directory,
type
                         _l_s_m_i_d_i_i_n_s

                ---------------------------

_1.  _M_I_D_I _C_O_N_T_R_O_L_L_E_R _I_N_P_U_T_S

In the current MIDI setup within the studio, there  are  two
sources of real-time MIDI performance input data:
     (1) the _C_l_a_v_i_n_o_l_a keyboard controller, and
     (2) the Yamaha _M_C_S_2 controller and merger box

Signals from the MIDI OUT port on the _C_l_a_v_i_n_o_l_a  are  routed
to  the MIDI IN port on the _M_C_S_2, where they are merged with
MIDI signals created directly on the _M_C_S_2. All MIDI  signals
then  are passed along MIDI channel 1 from the MIDI OUT port
on the _M_C_S_2 to a MIDI IN port on the _S_t_u_d_i_o  _3  MIDI  inter-
face, and then from a _S_t_u_d_i_o _3 output to a serial port input
on the SGI.

Three types of MIDI signals are available from the _C_l_a_v_i_n_o_l_a
and _M_C_S_2:
     (1) Basic _n_o_t_e _i_n_i_t_i_a_l_i_z_a_t_i_o_n  (note-on/note-off)  sig-
     nals,  of  the  kind produced by most keyboard, guitar,
     percussion and wind  controllers.  These  signals  come
     from the _C_l_a_v_i_n_o_l_a.
     (2) Foot switch/controller signals from the three  foot
     pedals on the _C_l_a_v_i_n_o_l_a.
     (3) _C_o_n_t_i_n_u_o_u_s _c_o_n_t_r_o_l_l_e_r signals from the _M_C_S_2.

Currently, _A_L_L of the _m_i_d_i_i_n_s algorithms are  programmed  to
respond  to  the  first  two  groups  of MIDI signals listed
above. However, only those _m_i_d_i_i_n_s  algorithms  whose  names
begin  with  the character string _c_c (such as _c_c_s_a_m_p_t_r_e_m and
_c_c_s_a_m_p_d_e_l) are programmed to respond to the continuous  con-
troller data from the _M_C_S_2. Currently available MIDI signals
within each of the three groups above are discussed below.


(1) Basic note initialization signals
The basic note initialization  signals  from  the  _C_l_a_v_i_n_o_l_a
include:
o+ _n_o_t_e _o_n : A note is initiated when you depress  a  key  on
the  _C_l_a_v_i_n_o_l_a,  or  whenever  a  note  event is encountered
within a MIDI file. These _n_o_t_e-_o_n messages  are  accompanied
by _n_o_t_e _n_u_m_b_e_r and _v_e_l_o_c_i_t_y messages.
o+ _n_o_t_e _o_f_f : A note currently sounding  is  terminated  when
you  release  the  key  on the _C_l_a_v_i_n_o_l_a that initiated this
note, or, in a MIDI  file,  when  a  "note  off"  signal  is
encountered for a currently "active" note.
o+ _n_o_t_e _n_u_m_b_e_r : Each key on a MIDI keyboard (and  additional
possible  "keys" below and above the range of an 88 key con-
troller) is assigned a number between 1 and 127. The "middle
C"  key  is  number 60, which most often (but not always) is
mapped to the pitch _c_4 ("middle C," or 261.6 herz).
o+ _v_e_l_o_c_i_t_y : A the _v_e_l_o_c_i_t_y  _s_e_n_s_i_t_i_v_i_t_y  controller  within
the _C_l_a_v_i_n_o_l_a or some other MIDI performance device measures
the _q_u_i_c_k_n_e_s_s (NOT the force or weight) with which  a  "key"
is depressed.
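The note-number-to-pitch mapping described above can be sketched as
follows (a minimal Python illustration, assuming the common
equal-tempered tuning that anchors note 69 at 440 hertz; the mapping
used by any particular _m_i_d_i_i_n_s algorithm may differ):

```python
def note_to_hertz(note_number):
    # Equal-tempered conversion: each semitone multiplies the
    # frequency by 2^(1/12); note 69 (a4) is anchored at 440 Hz.
    return 440.0 * 2.0 ** ((note_number - 69) / 12.0)

# Note 60 ("middle C") comes out near 261.6 Hz, as described above:
print(round(note_to_hertz(60), 1))
```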

(2) Foot switch/controller note initialization signals  from
the three _C_l_a_v_i_n_o_l_a foot pedals:

=> RIGHT pedal (controller number 64) : "_s_u_s_t_a_i_n" pedal
The right pedal on the _C_l_a_v_i_n_o_l_a sends out a MIDI controller
#  64 signal. This is a _n_o_t_e _i_n_i_t_i_a_l_i_z_a_t_i_o_n controller which
sends out a single value at the  onset  of  each  note.  The
range  of values should be from 0 to 127. However, the reso-
lution of the data created by the right pedal of the  _C_l_a_v_i_-
_n_o_l_a is so poor that this pedal essentially functions like a
two-position _o_n/_o_f_f switch.

In the _m_i_d_i_i_n_s algorithms, this pedal has been mapped  as  a
_s_u_s_t_a_i_n  pedal.   If  you depress the pedal _B_E_F_O_R_E playing a
note, then play the note, and then release the key (and,  if
you wish, the pedal), the note will sustain for up to 5 or 6
additional seconds (assuming that the input soundfile  dura-
tion  is this long) with a gradual decay in amplitude. It is
not possible to sustain a note or  input  soundfile  at  its
full original amplitude with the right pedal.
Given the poor resolution of this _C_l_a_v_i_n_o_l_a pedal, it  makes
relatively  little difference whether the pedal is depressed
all or only part way down. Eventually, we  may  replace  the
Clavinola  right pedal with a more sensitive foot pedal con-
troller in which the position of the pedal _w_i_l_l  affect  the
sustain time.

_C_l_a_v_i_n_o_l_a _L_E_F_T _a_n_d _M_I_D_D_L_E _f_o_o_t _s_w_i_t_c_h _p_e_d_a_l_s
_T_h_e _l_e_f_t and _m_i_d_d_l_e foot pedals on  the  _C_l_a_v_i_n_o_l_a,  respec-
tively  MIDI  controller numbers 67 and 66, each send out an
_o_n/_o_f_f note-initialization signal at the beginning  of  each
note.  The pedal is sensed as being either "down" or "up" at
the onset of a note, and subsequent changes in  pedal  posi-
tion during the note have no effect.
With most of the _m_i_d_i_i_n_s algorithms, these two  pedals  have
been  programmed  to  affect the articulation of notes, with
the _l_e_f_t pedal producing sharper,  more  percussive  attacks
and  decays  than  normal  and  the _m_i_d_d_l_e pedal producing a
smoother, more legato-like articulation. However, in  a  few
of  the  algorithms,  such  as  _m_i_d_i_p_l_u_n_k,  these two pedals
instead, or additionally, affect other aspects of the sound,
such as timbre.
=> LEFT pedal (controller # 67) : usually serves as a "_s_t_a_c_-
_c_a_t_o" articulation pedal
The left pedal has been mapped to function somewhat  like  a
"_n_o_i_s_e  _g_a_t_e"  or  "_s_t_a_c_c_a_t_o" pedal, decreasing the rise and
(especially) the decay times for each note. This  has  rela-
tively little effect on sound sources with very fast attacks
and decays (such as high pitched pizzicato  tones)  or  very
slow  attacks  and  decays,  like  some of the environmental
sounds in the _s_f_l_i_b/_e_n_v directory.  Generally,  it  is  most
useful  with sustained arco, woodwind, brass and vocal tones
that you wish to play rapidly, in staccato-like fashion.

=> MIDDLE pedal (controller # 66) : generally functions as a
"_l_e_g_a_t_o" pedal
Depressing this pedal while playing an  instrument  such  as
_m_i_d_i_s_a_m_p  will cause the tones of input soundfiles to have a
more gradual initial amplitude rise, and a more gradual con-
cluding  amplitude  decay.  To  play  a melody with "legato"
articulation, keep this pedal depressed and slightly overlap
the  keys  as  you play. This generally works best with sus-
tained tones such as are found in _m_i_d_i_f_u_n_c_s _s_f_l_i_b files like
_v_l_n, _s_o_p_1, _s_o_p_2 and _o_b_o_e. For idiophonic sounds (e.g. string
pizzicati or martele, piano or xylophone tones), the result-
ing  "smoothing" of the initial attack usually is not desir-
able or useful, except for special effects, such as  turning
a piano timbre into more of a harmonium-like timbre.


(3) Continuous controller signals from the _M_C_S_2
Currently  available  continuous  controllers  on  the  _M_C_S_2
include the _p_i_t_c_h _b_e_n_d wheel, the _m_o_d_u_l_a_t_i_o_n wheel, _c_o_n_t_i_n_u_-
_o_u_s _s_l_i_d_e_r _1, _c_o_n_t_i_n_u_o_u_s _s_l_i_d_e_r _2  and  _f_o_o_t  _c_o_n_t_r_o_l_l_e_r  _2.
The  positions  of  these _c_o_n_t_i_n_u_o_u_s _c_o_n_t_r_o_l_l_e_r_s are sampled
continuously (many times per second) by the  MIDI  hardware.
Unlike  all of the note initialization controllers discussed
above, performance changes made with continuous  controllers
affect  not  only  notes subsequently played, but also notes
already sounding.  The use of these controllers is discussed
later.

                ---------------------------

_2.  _U_S_I_N_G _T_H_E _M_I_D_I_I_N_S _A_L_G_O_R_I_T_H_M_S

In addition to MIDI input, the _m_i_d_i_i_n_s algorithms require  a
skeletal  _C_s_o_u_n_d  _s_c_o_r_e file. This score file includes _f_u_n_c_-
_t_i_o_n definitions that may include synthetic audio waveshapes
(such  as  a  sine  tone), or keymapped input soundfiles, as
well as other types of  data  needed  by  the  algorithm  to
create  or process sound.  To see a list of currently avail-
able function definition  files  that  can  be  included  in
_C_s_o_u_n_d score files, type
                        _l_s_m_i_d_i_f_u_n_c_s

For information on these function files, type
                      _h_e_l_p  _m_i_d_i_f_u_n_c_s

The _m_i_d_i_i_n_s algorithms and  _m_i_d_i_f_u_n_c_s  function  definitions
included  within a score file can be used with the following
scripts and commands:

(1) _m_k_c_s_o_u_n_d_m_i_d_i_p_l_a_y   :  This  local  script  provides  the
easiest-to-use but the most limited method of running _C_s_o_u_n_d
with real-time MIDI input.
The user must choose  from  among  the  pre-defined  _m_i_d_i_i_n_s
instrument  algorithms and generic _m_i_d_i_f_u_n_c function defini-
tion files, specifying these  choices  within  a  string  of
arguments  on  a single command line. The script creates the
necessary _C_s_o_u_n_d orchestra  and  score  files.  After  these
files  have  been  loaded  into RAM, the user plays sounds in
real-time using MIDI input from the _C_l_a_v_i_n_o_l_a  and/or  other
MIDI  controller  sources  connected  to  the  _S_t_u_d_i_o _3 MIDI
interface in the studio.

Limitations:
     o+ The user cannot edit (alter) the pre-defined  instru-
     ment algorithms or functions.
     o+ Midi files cannot be used. MIDI input must  be  real-
     time.
     o+ The output sound samples cannot  be  written  into  a
     soundfile.

The  command  line  argument  syntax  for   _m_k_c_s_o_u_n_d_m_i_d_i_p_l_a_y
(which can be abbreviated _m_k_c_s_m_p) is:
_m_k_c_s_m_p  [_S_R]  [_C_H_A_N] [_G_A_I_N##]  [_D_U_R##]  _F_U_N_C_T_I_O_N_S  _I_N_S_T_R_U_M_E_N_T
For more details, consult the online or  hardcopy  _m_a_n  page
documentation for this script.

(2) _c_s_o_u_n_d_m_i_d_i_p_l_a_y  : If you wish to edit  (change)  one  of
the  _m_i_d_i_i_n_s  algorithms and/or one or more of the _m_i_d_i_f_u_n_c_s
files, or to create  your  own  MIDI  input  orchestra  file
and/or your own function definitions and score file, you can
use this script to automate _C_s_o_u_n_d compilation and playback.
As  with  the  _m_k_c_s_o_u_n_d_m_i_d_i_p_l_a_y script, real-time MIDI input
from the _C_l_a_v_i_n_o_l_a and _M_C_S_2 then initiates a  "performance,"
in  which the output samples created by _C_s_o_u_n_d are passed to
the DACs on the SGI system for audition.

Limitations:
     o+ Midi files cannot be used. MIDI input must  be  real-
     time.
     o+ The sound samples cannot be written into a soundfile.

The command line argument syntax for _c_s_o_u_n_d_m_i_d_i_p_l_a_y   (which
can be abbreviated _c_s_m_p) is:
                  _c_s_m_p  _O_r_c_h_F_i_l_e _S_c_o_r_e_F_i_l_e
(To use the default orchestra and score files _o_r_c  and  _s_o_u_t,
simply follow the command name with a dot, like this:)

                          _c_s_m_p  .
For more details, consult the online or  hardcopy  _m_a_n  page
documentation for this script.
     [To create an orchestra file that  includes  a  _m_i_d_i_i_n_s
     algorithm,  consult  the  _m_a_n page for the script _m_k_m_i_-
     _d_i_o_r_c_h. To create a score file that  includes  a  _m_i_d_i_-
     _f_u_n_c_s  file,  use  the script _g_e_t_m_i_d_i_s_c_o_r_e. To obtain a
     copy of a _m_i_d_i_f_u_n_c_s file, use the script  _g_e_t_m_i_d_i_f_u_n_c_s.
     Typing _g_e_t_m_i_d_i_s_c_o_r_e or _g_e_t_m_i_d_i_f_u_n_c_s, with no arguments,
     will display a usage summary for the script.]

(3)  The  normal  _c_s_o_u_n_d   command,  with  appropriate  flag
options  and  arguments,  can be used to create either a new
soundfile or real-time audio output with  _a_n_y  syntactically
correct orchestra file and score file.

(Eastman commands to run _C_s_o_u_n_d with _M_I_D_I _f_i_l_e input (_c_s_m_f_p,
_c_s_m_f_p+ and _c_s_m_f) are discussed later, in the _M_I_D_I _F_I_L_E_S sec-
tion of this document.)
                ---------------------------

_3.  _D_O_C_U_M_E_N_T_A_T_I_O_N _O_N _I_N_S_T_R_U_M_E_N_T _A_L_G_O_R_I_T_H_M_S  _i_n  _t_h_e  _M_I_D_I_I_N_S
_D_I_R_E_C_T_O_R_Y


     All of the algorithms except those that begin with  the
character  string  _S_T  generate or read in monophonic source
signals. However, stereo output, incorporating various types
of  left-right  localization, is possible. (See the _m_a_n page
for the _m_k_c_s_o_u_n_d_m_i_d_i_p_l_a_y script for  examples.)  Instruments
whose  names  begin  with  the  characters _S_T, such as _S_T_m_i_-
_d_i_s_a_m_p, are _s_t_e_r_e_o-_i_n/_s_t_e_r_e_o-_o_u_t, and require  stereo  input
soundfiles,  such  as  those  found  in  the  _m_i_d_i_f_u_n_c_s file
_S_T_w_i_n_d_s.

Important BUG report: Occasionally, if you play a rapid suc-
cession  of  notes or a dense chord, or have too many simul-
taneously sustaining tones, the I/O overload will cause  the
audio  hardware  on the Indy to become confused, and it will
repeat fragments of sounds in  short,  guttural  bursts.  If
this  happens, wait several seconds, then try playing again,
and if you are lucky  the  problem  will  be  corrected.  If
machine  gun-like bursts of sound continue, or if you get no
sound at all, kill the _C_s_o_u_n_d job by typing a _c_o_n_t_r_o_l _c.  If
you  wish  to  try  again to play a similarly rapid or thick
chordal passage, lower the sampling rate.

     The _m_i_d_i_i_n_s algorithms currently can  be  divided  into
three groups:
     (1) Those that create simple,  synthetically  generated
     audio  waveforms, and use only note initialization MIDI
     signals from the _C_l_a_v_i_n_o_l_a.
     (2) Those that process soundfiles, and employ only note
     initialization MIDI signals from the _C_l_a_v_i_n_o_l_a.
     (3) Those that are programmed to respond to  _c_o_n_t_i_n_u_o_u_s
     _c_o_n_t_r_o_l_l_e_r data from the _M_C_S_2, as well as note initial-
     ization signals from the _C_l_a_v_i_n_o_l_a.


_3._1.  _S_i_m_p_l_e _s_i_g_n_a_l _g_e_n_e_r_a_t_i_n_g _a_l_g_o_r_i_t_h_m_s

_m_i_d_i_w_a_v_e (which can be abbreviated _w_a_v_e) is a  simple  fixed
waveform signal generator
     Required functions: a synthetic fixed wave  form,  such
     as  provided  by the _s_i_n_e, _t_r_i_a_n_g_l_e, _s_q_u_a_r_e or _s_a_w_t_o_o_t_h
     wave function definitions in the _m_i_d_i_f_u_n_c_s directory.

_m_i_d_i_p_l_u_n_k : like its cousin, the _s_c_o_r_e _p-_f_i_e_l_d based Library
instrument  _p_l_u_n_k,  _m_i_d_i_p_l_u_n_k  implements  a  variant of the
Karplus-Strong plucked string algorithm  to  create  timbres
reminiscent  of  pizzicati,  harpsichords and other types of
plucked string sounds.  Real-time performance sampling rates
of  32000  or 22050 often are necessary, but with this algo-
rithm, more than most, lower sampling rates  often  reduce
audio quality.  Low pitched tones often are quite bright and
buzzy, and, with the default timbre, notes above  _c_5  or  so
become  increasingly  "hollow-sounding"  but  also sometimes
unusably noisy.
Depressing  the  _m_i_d_d_l_e  Clavinola  pedal  runs  the  signal
through a low pass filter that creates a more delicate muted
quality, and decreases note durations  somewhat.  Depressing
the  _l_e_f_t Clavinola pedal also tones down the twanging qual-
ity  of  the  timbre  somewhat,  and  creates  a  much  more
staccato-like  articulation.   Using  either of these pedals
generally improves the quality of higher pitched tones.
     Required functions: the _s_i_n_e file in the _m_i_d_i_f_u_n_c_s sub-
     directory.


_3._2.  _S_o_u_n_d_f_i_l_e _p_r_o_c_e_s_s_i_n_g _a_l_g_o_r_i_t_h_m_s

_m_i_d_i_s_a_m_p and _S_T_m_i_d_i_s_a_m_p are sampling  (actually  resampling)
instruments  that  play back input soundfiles with keymapped
transpositions and apply the MIDI _v_e_l_o_c_i_t_y  value  for  each
note  to  the  output  amplitude.  (The  quicker  a  key  is
depressed, the higher the output amplitude.) Real-time  per-
formance  with  _m_i_d_i_s_a_m_p  sometimes requires a sampling rate
lower than 44100 to avoid glitches. With  _S_T_m_i_d_i_s_a_m_p,  lower
sampling rates usually are required.
     Required functions: any of  the  _m_i_d_i_f_u_n_c_s  files  that
     include keymapped _s_f_l_i_b input soundfiles, such as _b_a_s_s_1
     or _v_l_n for _m_i_d_i_s_a_m_p, or _S_T_w_i_n_d_s for _S_T_m_i_d_i_s_a_m_p.


_m_i_d_i_s_a_m_p_b_r_i_g_h_t  is identical to _m_i_d_i_s_a_m_p  in  all  respects,
except  that the velocity value for each note is applied not
only to output amplitude, but also to a low  and  high  pass
filter network. The quicker a key is depressed, the brighter
(greater percentage of high frequency  energy)  the  timbre.
This affects some soundfile timbres more than others.
Because of the additional signal processing involved,  real-
time performance usually requires a sampling rate lower than
44100 to avoid glitches.

_3._3.  _U_S_I_N_G _t_h_e _C_O_N_T_I_N_U_O_U_S _C_O_N_T_R_O_L_L_E_R_S

     Currently available  continuous  controllers  from  the
_M_C_S_2 include:
     the _p_i_t_c_h _b_e_n_d _w_h_e_e_l
     the _m_o_d_u_l_a_t_i_o_n _w_h_e_e_l : (MIDI controller # 1)
     _c_o_n_t_i_n_u_o_u_s _s_l_i_d_e_r _1 (_c_s_1)  :  (we  have  remapped  this
     slider as MIDI controller # 8)
     _c_o_n_t_i_n_u_o_u_s _s_l_i_d_e_r _2 (_c_s_2) : (MIDI controller # 5)
     _f_o_o_t _c_o_n_t_r_o_l_l_e_r _2 : (MIDI controller # 7)

All of these controllers except the _p_i_t_c_h _w_h_e_e_l send  out  a
single byte of data, at the controller sampling rate, with a
value between
     _0 (when the controller is all the way _D_O_W_N)
     and
     _1_2_7 (when the controller is all the way _U_P)
The _p_i_t_c_h _w_h_e_e_l sends out two bytes of data.

     Within all of the _m_i_d_i_i_n_s algorithms whose names  begin
with the character string _c_c:
     o+ the continuous controllers are _g_l_o_b_a_l, affecting  all
     notes that are playing or are subsequently played
     o+ the _p_i_t_c_h _w_h_e_e_l affects pitch, with a maximum  devia-
     tion  of  one  semitone  above or below the base pitch.
     When the wheel is in its center, detent  position,  it
     has  no  effect. Thus, this wheel can be used to create
     pitch inflections and, within the narrow maximum possi-
     ble range of a major second, glissandi.
     o+  _f_o_o_t _c_o_n_t_r_o_l_l_e_r _2 (the foot pedal patched  into  the
     _F_C_2  jack  on  the  _M_C_S_2) affects _a_m_p_l_i_t_u_d_e ("volume").
     When this pedal is all the way DOWN it has  no  effect.
     Raising  the  pedal slowly will introduce progressively
     greater amplitude attenuation, and  TOTAL   attenuation
     when the pedal is all the way UP.
     o+ the functions of the other three  controllers  -  the
     _m_o_d_u_l_a_t_i_o_n  _w_h_e_e_l  and  the two _s_l_i_d_e_r_s (_c_s_1  and _c_s_2),
     vary from one algorithm to the next.  However,  _c_s_1  is
     generally  a _m_i_x_i_n_g controller that determines the out-
     put percentages of processed and unprocessed ("direct")
     versions of an audio signal.

=> => _I_M_P_O_R_T_A_N_T _N_O_T_E: Csound has no way of sensing the  ini-
tial  positions  of  any of the continuous controllers until
they are moved. Each of these controllers (including the _f_c_2
pedal,  which  affects  output  amplitude)  defaults to zero
until the controller is initialized  by  physical  movement.
Thus,  after submitting a _m_k_c_s_o_u_n_d_m_i_d_i_p_l_a_y, _c_s_o_u_n_d_m_i_d_i_p_l_a_y
or _c_s_o_u_n_d command, before playing any notes,
     YOU MUST MOVE EACH OF THE CONTINUOUS CONTROLLERS TO  BE
     USED  (except  for  the pitch wheel), and RESET EACH TO
     THE DESIRED POSITION.
     In particular, if you do not move the  _f_c_2  pedal,  you
     will get NO SOUND!!!

_A_d_d_i_t_i_o_n_a_l _n_o_t_e_s _o_n _u_s_i_n_g _t_h_e _c_o_n_t_i_n_u_o_u_s _c_o_n_t_r_o_l_l_e_r_s:
     (1) Because  the  continuous  controller  (_c_c)  _m_i_d_i_i_n_s
     algorithms  incorporate more complex, control-rate sig-
     nal processing than an algorithm such as  _m_i_d_i_s_a_m_p,  it
     sometimes is necessary to use sampling rates lower than
     44.1 k to avoid glitches. This is especially true  when
     a stereo output is employed (as with the various stereo
     _C_H_A_N options provided by the _m_k_c_s_o_u_n_d_m_i_d_i_p_l_a_y script.
     (2) Rapid changes in the _p_i_t_c_h _w_h_e_e_l and _f_c_2 (amplitude
     "volume")  controllers  generally  will not cause prob-
     lems. However, rapid changes in the  other  three  con-
     tinuous controllers, and especially in _c_s_2, may produce
     glitches. This is particularly true with the  _c_c_s_a_m_p_d_e_l
     (delay  line)  algorithm, and, to a lesser extent, with
     the _c_c_s_a_m_p_b_r_i_g_h_t and _c_c_s_a_m_p_r_e_v  algorithms.   To  avoid
     problems, it is safest to move _c_s_2, _c_s_1 and the _m_o_d_u_l_a_-
     _t_i_o_n _w_h_e_e_l slowly and, if possible, during silences.

_3._4.  _T_h_e _C_O_N_T_I_N_U_O_U_S _C_O_N_T_R_O_L_L_E_R _A_L_G_O_R_I_T_H_M_S

     The _m_i_d_i_i_n_s algorithms whose names begin with the char-
acter  string  _c_c  work  just like the _m_i_d_i_s_a_m_p and _m_i_d_i_w_a_v_e
algorithms, accepting note initialization signals  from  the
_C_l_a_v_i_n_o_l_a  keyboard  and  "staccato," "legato" and "sustain"
foot switch data from the three _C_l_a_v_i_n_o_l_a pedals.  Addition-
ally,  however,  the _c_c ("_Continuous _Controller") algorithms
are programmed to respond to continuous controller data from
the  _M_C_S_2,  allowing one to introduce pitch inflections with
the pitch wheel, amplitude  variations  with  the  _f_c_2  con-
troller,  and  time varying processing of some "effect" with
_c_s_1, _c_s_2 and  the  _m_o_d  _w_h_e_e_l.  These  algorithms  currently
include:

algorithm:      time varying signal processing operation ("effect")
_______________________________________________________________________________

_c_c_s_a_m_p_v_i_b       sub-audio frequency modulation (vibrato)
_c_c_s_a_m_p_t_r_e_m      sub-audio amplitude variations (tremolo)
_c_c_s_a_m_p_r_i_n_g_m_o_d   ring modulation (audio rate amplitude modulation)
_c_c_s_a_m_p_d_e_l       delay line with feedback, producing echos and/or comb filtering
_c_c_s_a_m_p_r_e_v       reverberation
_c_c_s_a_m_p_b_r_i_g_h_t    control of timbral brightness

     Required functions: Like the _m_i_d_i_s_a_m_p  algorithm,  each
     of  the  _c_c_s_a_m_p  algorithms  requires  keymapped  input
     soundfile function definitions - most often a _m_i_d_i_f_u_n_c_s
     file such as _v_c._p or _s_o_p_1.

     In all of the _c_c algorithms, the _p_i_t_c_h _b_e_n_d  wheel  can
vary  the  pitch  by as much as one semitone higher or lower
than the pitch of the source sound.  _f_c_2 attenuates the out-
put  amplitude.  When  this  pedal  is  all the way DOWN, no
attenuation is introduced. Gradually raising the pedal  will
introduce  progressively  greater  amplitude  attenuation (a
logarithmic mapping of pedal movement is used), and complete
attenuation  (no sound) when the pedal is all the way UP. If
you get no sound from a _c_c algorithm, check the position  of
this  pedal.   Remember  also  that  the  velocity  of a key
depression on the _C_l_a_v_i_n_o_l_a also affects output amplitude.
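The _f_c_2 behavior just described might be sketched as follows
(hypothetical Python; the exact logarithmic taper used by the
algorithms is not documented here, so the curve below only
illustrates the DOWN = full gain, UP = silence mapping):

```python
def fc2_gain(cc_value, curve=3.0):
    # cc_value 0 (pedal all the way DOWN) -> full gain (1.0);
    # cc_value 127 (pedal all the way UP) -> total attenuation (0.0).
    # The power-law taper stands in for the logarithmic mapping:
    # attenuation grows slowly at first, then steeply near the top.
    x = 1.0 - cc_value / 127.0
    return x ** curve

print(fc2_gain(0), fc2_gain(127))
```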

     The sound processing operations controlled by _c_s_1,  _c_s_2
and  the  _m_o_d _w_h_e_e_l, and the ranges of parameter values they
can produce, are summarized for each of the _c_c algorithms in
the  following  table.  A  dash under a controller indicates
that this controller has no effect  in  a  particular  algo-
rithm.

(algorithm)     _C_S_1                             _C_S_2               _m_o_d. _w_h_e_e_l
                controls "wet/dry," or          controls          controls
                processed/unprocessed mix
______________________________________________________________________________
_c_c_s_a_m_p_v_i_b       vibrato  depth                  vibrato rate      ---
                (+/- 1/4 tone)                  (0 - 10 hertz)

_c_c_s_a_m_p_t_r_e_m      tremolo depth                   tremolo rate      ---
                (0 - 100 %)                     (0 - 16 hertz)

_c_c_s_a_m_p_r_i_n_g_m_o_d   amp. mod. depth                 tremolo rate      ---
                (0 - 100 %)                     (.25 - 4. times
                                                base pitch)

_c_c_s_a_m_p_d_e_l       delayed vs. direct signal mix   delay time        feedback %
                (0 - 100 %)                     (0 - 8 seconds)   (0 - 98 %)

_c_c_s_a_m_p_r_e_v       reverb vs. dry mix              reverb time       "brightness"
                (0 - 100 %)                     (0 - 4 seconds)   (.2 - 1.)

_c_c_s_a_m_p_b_r_i_g_h_t    timbral brightness              ---               ---
                (dull to bright)


_N_o_t_e_s _o_n _t_h_e _i_n_d_i_v_i_d_u_a_l _c_c _a_l_g_o_r_i_t_h_m_s:

     The ccsampvib and ccsamptrem algorithms work in a simi-
lar  manner.   _c_c_s_a_m_p_v_i_b  performs  sub-audio rate _f_r_e_q_u_e_n_c_y
modulation on the source soundfile (introducing a  _v_i_b_r_a_t_o),
while  _c_c_s_a_m_p_t_r_e_m  performs sub-audio rate _a_m_p_l_i_t_u_d_e modula-
tion (introducing a _t_r_e_m_o_l_o). Both algorithms include an LFO
(low  frequency oscillator), and the output of this modulat-
ing oscillator is applied either to the pitch (_c_c_s_a_m_p_v_i_b) or
to the output amplitude (_c_c_s_a_m_p_t_r_e_m) of the input soundfile.
The  _r_a_t_e  (speed)  of  the  modulator,  controlled  by  the
position  of  _c_s_2, can vary between 0 and 10 hertz in _c_c_s_a_m_p_-
_v_i_b, or between 0 and 16 hertz in _c_c_s_a_m_p_t_r_e_m.  The  _d_e_p_t_h  of
the periodic frequency or amplitude deviations is controlled
by _c_s_1.
     In _c_c_s_a_m_p_v_i_b, the depth is variable between 0 (no modu-
     lation)  when  _c_s_1  is  all the way DOWN, and a vibrato
     width that varies between a quarter-tone  higher  and  a
     quarter-tone  lower than the base pitch when _c_s_1 is all
     the way UP.
     With _c_c_s_a_m_p_t_r_e_m,  the  depth  of  modulation  can  vary
     between  0 (no effect) to 100 % of the output amplitude
     (maximum effect). Note that if either _c_s_1 or _c_s_2 is all
     the way DOWN, no modulation will result.
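The depth-and-rate behavior described above can be sketched as a sine
LFO applied to the output gain (hypothetical Python; the actual LFO
waveform and scaling inside _c_c_s_a_m_p_t_r_e_m are assumptions):

```python
import math

def tremolo_gain(t, depth, rate_hz):
    # depth (0.0-1.0, from cs1) and rate_hz (0-16, from cs2) shape a
    # sub-audio sine LFO; depth 0 or rate 0 leaves the gain at 1.0,
    # i.e. no modulation, matching the note above.
    lfo = 0.5 * (1.0 + math.sin(2.0 * math.pi * rate_hz * t))
    return 1.0 - depth * lfo

print(tremolo_gain(0.0, 0.0, 5.0))   # zero depth: unmodulated gain
```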

     The ccsampringmod algorithm is similar  to  _c_c_s_a_m_p_t_r_e_m,
but  here  the  frequency of the modulating oscillator, con-
trolled again by _c_s_2, is much higher - often several hundred
cycles  per second. The frequency of the modulating oscilla-
tor initially is set to the "pitch"  of  the  _C_l_a_v_i_n_o_l_a  key
that  triggered the note. (For _c_4, or "middle C," this would
be 261.626 hertz.) This initial setting then is multiplied by
a value somewhere between 0 and 4., depending upon the posi-
tion of _c_s_2.

     Thus, this algorithm alters the timbre of source sound-
files,  introducing  new  sum  and difference tone sidebands
(frequencies) between the modulating frequency and all  fre-
quencies contained within the source soundfile.  If the fre-
quency of the modulating sine tone oscillator has  a  simple
harmonic  ratio  (such  as  .5, 1. or 3.) to the fundamental
frequency of a pitched input soundfile,  the  sideband  fre-
quencies  will  be  harmonic;  if  the  "carrier" (soundfile
pitch) to modulator frequency ratio  is  not  harmonic,  the
soundfile  will be "detuned" and its timbre will be complex,
often sounding like a woodwind "multiphonic."
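The sum-and-difference behavior can be checked with a small sketch
(illustrative Python only; a real ring modulator multiplies the two
audio signals, and these are just the resulting sideband frequencies):

```python
def ring_mod_sidebands(source_partials, mod_freq):
    # Each source frequency f is replaced by the pair f + fm and
    # |f - fm|; with a harmonic carrier-to-modulator ratio the result
    # stays harmonic, otherwise the spectrum is "detuned."
    sums = {f + mod_freq for f in source_partials}
    diffs = {abs(f - mod_freq) for f in source_partials}
    return sorted(sums | diffs)

# Modulating two c4 partials at the fundamental keeps all the
# sidebands harmonic:
print(ring_mod_sidebands([261.626, 523.252], 261.626))
```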

     ccsampdel routes the output of  the  sampler  algorithm
through a single delay line (similar  to  the  delay  line
in the score-based Library algorithm _d_e_l_a_y_s, and also to the
hardware _E_f_f_e_c_t_r_o_n delay line in the MIDI studio). The three
variable parameters in this algorithm are
     o+ the "wet/dry" mix (the % of the  delayed  signal  vs.
     the % of the direct, undelayed signal sent to the audio
     output)
     o+ delay time
     o+ feedback percentage
=> _c_s_1 controls the amount of the input signal that is  sent
through  the delay line, variable from 0 (when the slider is
all the way DOWN) to up to 100 % (when the slider is all the
way UP).

=> _c_s_2 controls the delay time, variable from 0 (no  effect,
slider all the way DOWN) up to 8 seconds (slider all the way
UP). In this algorithm, the output of _c_s_2 is  mapped  to  an
exponential curve, so changes in delay time are much greater
near the top of the slider excursion than at the bottom:

     Delay time:  0   .1   .2    .5    1.    2.    4.     8.
     (seconds)
     (Slider:)    |----------------------------------------|
In this algorithm, however, changes in the delay time do _n_o_t
introduce Doppler (pitch) shifts.
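The exponential slider-to-delay-time curve diagrammed above might
look like this (hypothetical Python; the actual curve constants are
not documented, so only the doubling-per-equal-step shape is taken
from the diagram):

```python
def cs2_delay_time(cc_value, max_delay=8.0):
    # cc_value 0 (slider DOWN) -> no delay; cc_value 127 (slider UP)
    # -> max_delay seconds. In between, each equal slider step
    # multiplies the delay by the same factor, so changes are much
    # larger near the top of the excursion, as in the diagram.
    if cc_value <= 0:
        return 0.0
    return max_delay * 2.0 ** (-(127 - cc_value) / 18.0)

print(cs2_delay_time(0), cs2_delay_time(127))
```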

=> The _m_o_d _w_h_e_e_l controls the percentage of the delayed sig-
nal that is fed back again through the delay line, producing
multiple echos or, if the delay time is less  than  50  mil-
liseconds   or   so,  comb  filtering  (which  introduces  a
resonant, pitched coloration to the sound).

     IMPORTANT NOTE: Changing  the  delay  time  (with  _c_s_2)
     while notes are sounding may produce discontinuities or
     clicks, especially if feedback is employed, unless  you
     move  the slider very slowly. To be safe, move _c_s_2 only
     during silences.  However, _c_s_1 and the _m_o_d  _w_h_e_e_l  gen-
     erally  can  be  moved, either rapidly or slowly, while
     notes are sounding without any ill effects.

     ccsamprev passes sampled audio signals through a rever-
berator (Csound unit generator _r_e_v_e_r_b_2).
     _c_s_1 controls the "wet/dry" output  mix,  variable  from
     zero  per  cent reverberation of the audio signal (when
     _c_s_1 is all the way DOWN) up to reverberation of  100  %
     of the audio signal (when _c_s_1 is all the way UP).
          _c_s_2 determines the  reverberation  time,  variable
          between 0 and 4 seconds.  Moving this slider while
          notes are  sounding  may  produce  noise  or  even
          clicks  unless  the movement is slow. Once garbage
          is introduced into the audio output  it  may  last
          for several seconds, so wait awhile before playing
          again.
          The _m_o_d _w_h_e_e_l controls  _h_i_g_h  _f_r_e_q_u_e_n_c_y  _d_i_f_f_u_s_i_o_n
          (the  _k_h_d_i_f  argument  to  _r_e_v_e_r_b_2).  When the _m_o_d
          _w_h_e_e_l is all the way DOWN, high frequencies within
          the  reverberant  signal  decay  much more quickly
          than low frequencies, and the effect is similar to
          that  of  a small, dry room. When the _m_o_d _w_h_e_e_l is
          all the way UP, these higher frequencies decay  at
          the  same  rate as low frequencies, resulting in a
          much "brighter," "wetter" quality (somewhat   like
          the  ambience  of  a "stone walled cathedral" or a
          "cave)." Generally,  _m_o_d  _w_h_e_e_l  settings  between
          about 1/4 and 1/2 way up work best. If you set the
          _m_o_d _w_h_e_e_l to a high value (near  the  top  of  its
          excursion),  you  may  need  to  lower the _c_s_1 and
          (especially)  _c_s_2  settings  to  avoid  an  overly
          "metallic"  or  "drippingly wet" reverberant qual-
          ity.
     This reverberator works  better  with  sustained  sound
     sources than with staccato or sharp impulse sounds. The
     reverberator includes 6 filters that involve a  lot  of
     computation  and  a  slight  signal delay. Lowering the
     sampling rate does not seem to help  much  if  you  get
     garbage.

     ccsampbright routes the sampler signals through low and
high  pass  filters.  _c_s_1 controls the mix of these two out-
puts. When _c_s_1 is all the way UP, the audio output  will  be
"brighter,"  containing  proportionately more high frequency
energy than  the  source  soundfile;  conversely,  gradually
sliding   _c_s_1  downward  will  cause  progressively  greater
attenuation  of  high  frequencies,  and  a  "mellower"   or
"duller"  timbre.  _c_s_2  and  the _m_o_d _w_h_e_e_l have no effect in
this algorithm.
     Because filtering operations are computationally inten-
sive (and relatively slow), it may be necessary to use  sam-
pling rates lower than 44.1 kHz with this  algorithm,  espe-
cially if the "polyphony" (number of simultaneously sounding
notes) of what you play exceeds 2 or 3 "parts,"  or  if  the
output is stereo (as when one uses one of the _C_H_A_N  options
in _m_k_c_s_o_u_n_d_m_i_d_i_p_l_a_y).

                ---------------------------

_4.  _U_S_I_N_G _M_I_D_I _F_I_L_E_S _w_i_t_h _C_S_O_U_N_D

     _M_I_D_I _f_i_l_e_s contain "recordings" of MIDI note  and  con-
troller  data  in  a generic format that can be read by many
types of MIDI application programs.  (The data within a _M_I_D_I
_f_i_l_e  is  _n_o_t an _a_u_d_i_o "recording," but rather an event list
of the MIDI controller movements and timings that eventually
is  passed  to  a  synthesizer,  which  produces  the actual
sound.) Most often, the performance data within a _M_I_D_I  _f_i_l_e
was created by means of a sequencing program, such as _L_o_g_i_c,
_C_u_b_a_s_e or _V_i_s_i_o_n, or else with an interactive  program  such
as  _M_A_X,  or  perhaps  with a music notation program such as
_F_i_n_a_l_e.  To use MIDI data originally created with  one  MIDI
application, such as _L_o_g_i_c, within another application, such
as _F_i_n_a_l_e or _C_u_b_a_s_e, we first must _e_x_p_o_r_t (copy and convert)
the  sequence  data  within  the _L_o_g_i_c file into a "standard
MIDI file." Then, within _F_i_n_a_l_e or _C_u_b_a_s_e,  we  must  _i_m_p_o_r_t
(read  in  and translate) this intermediate stage MIDI file,
converting it to the format required by the new application.

     To complicate matters,  however,  _M_I_D_I  _f_i_l_e_s   can  be
written  in  various  formats,  and even so-called "standard
MIDI files" come in two flavors: _t_y_p_e _0 and _t_y_p_e _1.
     [In _t_y_p_e  _0  files,  the  MIDI  data  from  all  source
     "tracks" is merged into a single "output" track.
     In _t_y_p_e _1 files,  each  track"  or  "part"  within  the
     sequencer  or notation program is written to a separate
     MIDI file "track."]

=> Our SGI version of _C_s_o_u_n_d, and also our  SGI  version  of
the  IRCAM  _M_A_X  program, both are capable of reading _t_y_p_e _0
"standard" MIDI files.  However, neither _C_s_o_u_n_d nor _M_A_X  can
make use of _t_y_p_e _1 MIDI files.
(Additionally, _C_s_o_u_n_d can read MIDI  files  in  _M_P_U  _4_0_1
format,  but currently we do not use this format at the Com-
puter Music Center.)
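
If you are unsure which flavor a given MIDI file is, the format
number can be read directly from the file's 14-byte _M_T_h_d header
chunk, as defined by the Standard MIDI File specification. A
generic sketch in Python (this is not an Eastman utility):

```python
import struct

def midi_file_type(path):
    """Return the format (0, 1, or 2) of a standard MIDI file.

    Reads the 14-byte MThd header chunk: a 4-byte chunk id,
    a 4-byte big-endian length (always 6), then three big-endian
    16-bit fields: format, number of tracks, and time division.
    """
    with open(path, "rb") as f:
        header = f.read(14)
    chunk_id, length, fmt, ntrks, division = struct.unpack(">4sIHHH",
                                                           header)
    if chunk_id != b"MThd" or length != 6:
        raise ValueError("not a standard MIDI file")
    return fmt
```

A file reporting format 1 would need to be re-exported (or merged)
as a type 0 file before Csound or MAX could use it.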

     For our purposes, no significant data generally is lost
in  the  conversion  of a "multitrack" sequencer or notation
file into a "mono track" _t_y_p_e _0 _M_I_D_I _f_i_l_e.  The MIDI channel
outputs (1 through 16) assigned to each of the source tracks
are preserved in the MIDI file.  _C_s_o_u_n_d converts these  MIDI
channel numbers to _i_n_s_t_r_u_m_e_n_t numbers.  Thus, notes from any
source track that  were  routed  to  MIDI  channel  2  in  a
sequencer file will be played by _i_n_s_t_r _2 in a _C_s_o_u_n_d orches-
tra file, and notes originally sent out MIDI channel 5  will
be  played  by  _i_n_s_t_r  _5.  If there is no _i_n_s_t_r _2 or _i_n_s_t_r _5
within the _C_s_o_u_n_d orchestra file, _i_n_s_t_r _1 will play  all  of
the notes for the "missing players."

     Besides enabling us to use _e_d_i_t_e_d performance data, and
performances  too  complex to have been realized in a single
real-time playing, _M_I_D_I _f_i_l_e_s offer another  advantage  over
real-time  performance  with _C_s_o_u_n_d: the output samples com-
puted by  _C_s_o_u_n_d can be passed _e_i_t_h_e_r to the _a_u_d_i_o  _h_a_r_d_w_a_r_e
on  the SGI for instantaneous audition, or else written to a
disk _s_o_u_n_d_f_i_l_e.  (Recall that with real-time MIDI input,  as
in  the  _m_k_c_s_o_u_n_d_m_i_d_i_p_l_a_y and _c_s_o_u_n_d_m_i_d_i_p_l_a_y commands, it is
_n_o_t possible to write the _C_s_o_u_n_d output samples to a  sound-
file.)

     In addition, with appropriate flag options  to  _C_s_o_u_n_d,
it  also  is  possible to play along with a real-time _C_s_o_u_n_d
compilation of a MIDI file, using  the  _C_l_a_v_i_n_o_l_a  and  _M_C_S_2
controllers  to  introduce additional notes.  However, there
are some limitations to such "music minus one" performances:
     _m_u_s_i_c-_m_i_n_u_s-_o_n_e limitations:
     o+ The output samples cannot be written to a soundfile.
     (Whenever real-time MIDI performance input is  included
     in  a _C_s_o_u_n_d compilation, the samples must be passed to
     the audio hardware of the host computer.)
     o+ The MIDI file data generally cannot  be  modified  by
     means of real-time controller movements.
     (For example, one cannot alter  the  pitch  of  a  note
     triggered  by a MIDI file by real-time movements of the
     _p_i_t_c_h _b_e_n_d _w_h_e_e_l on the _M_C_S_2 controller.)
     o+ The textures easily can become too complex -  involv-
     ing  too  many notes, and too much number crunching  or
     I/O - for _C_s_o_u_n_d to handle on the fly without glitches.

_4._1.  _C_o_m_m_a_n_d_s _t_o _r_u_n _C_s_o_u_n_d _w_i_t_h _M_I_D_I _f_i_l_e _i_n_p_u_t

     The following  table  summarizes  various  options  for
using  MIDI input with _C_s_o_u_n_d, and  local (Eastman) commands
that can be used for each of these options:

 MIDI input source      _C_s_o_u_n_d output:        Eastman command(s):
____________________________________________________________________
(1) _r_e_a_l-_t_i_m_e only    real-time playback   _m_k_c_s_o_u_n_d_m_i_d_i_p_l_a_y (_m_k_c_s_m_p)
(_C_l_a_v_i_n_o_l_a & _M_C_S_2)                         or _c_s_o_u_n_d_m_i_d_i_p_l_a_y (_c_s_m_p)
          (discussed earlier)
____________________________________________________________________
(2) _M_I_D_I _f_i_l_e only    real-time playback   _c_s_m_f_p
____________________________________________________________________
(3) _M_I_D_I _f_i_l_e PLUS
    _r_e_a_l-_t_i_m_e input   real-time playback   _c_s_m_f+ or _c_s_m_f_p  +
____________________________________________________________________
(4) _M_I_D_I _f_i_l_e only    soundfile            _c_s_m_f

Within these Eastman  command  alias  names,  the  character
string  _m_f is short for _Midi _File and the concluding charac-
ter _p (or character string _p_l_a_y) stands for "_Playback  mode"
(real-time  output).   Typing any of the command names above
with no arguments displays a usage summary.

     To create an orchestra file that includes  one  of  the
Library  _m_i_d_i_i_n_s  algorithms  for  use with any of the above
scripts (except _m_k_c_s_o_u_n_d_m_i_d_i_p_l_a_y), you can  use  the  script
_m_k_m_i_d_i_o_r_c_h.   (For usage details, see the online _m_a_nual page
for this script, or else type _m_k_m_i_d_i_o_r_c_h with no arguments.)
To create a scorefile for one of the _m_i_d_i_i_n_s algorithms that
includes one of the Library _m_i_d_i_f_u_n_c_s files, you can use the
local script _g_e_t_m_i_d_i_s_c_o_r_e.  (For a command line syntax  sum-
mary, type _g_e_t_m_i_d_i_s_c_o_r_e with no arguments.)

_c_s_m_f_p : The command line argument syntax for _c_s_m_f_p  (running
_C_s_o_u_n_d in playback mode with a _M_I_D_I _f_i_l_e input) is:
          _c_s_m_f_p  _M_i_d_i_F_i_l_e._m_f  _O_r_c_h_F_i_l_e  _S_c_o_r_e_F_i_l_e
     This is equivalent to typing: _c_s_o_u_n_d  -_o_d_e_v_a_u_d_i_o   -_d_m_6
     -_F _M_i_d_i_f_i_l_e._m_f _O_r_c_h_F_i_l_e _S_c_o_r_e_F_i_l_e
where
     _M_i_d_i_F_i_l_e._m_f is the name of the MIDI file
     (It is customary to  name  MIDI  files  with  a  a  ._m_f
     filename extension)
     and _O_r_c_h_F_i_l_e  and  _S_c_o_r_e_F_i_l_e  are  the  names  of  your
     orchestra and score files.

_c_s_m_f_p+ : The command line syntax for _c_s_m_f_p+ (running  _C_s_o_u_n_d
in  playback mode with a _M_I_D_I _f_i_l_e input PLUS real-time MIDI
controller input for additional notes) is:
          _c_s_m_f_p+  _M_i_d_i_F_i_l_e._m_f  _O_r_c_h_F_i_l_e  _S_c_o_r_e_F_i_l_e
                             or
         _c_s_m_f_p +  _M_i_d_i_F_i_l_e._m_f  _O_r_c_h_F_i_l_e  _S_c_o_r_e_F_i_l_e
     This is equivalent to typing:
     _c_s_o_u_n_d -_o_d_e_v_a_u_d_i_o  -_d_m_6 -_M  /_d_e_v/_t_t_y_d_2  -_F  _M_i_d_i_f_i_l_e._m_f
     _O_r_c_h_F_i_l_e _S_c_o_r_e_F_i_l_e
                       -------------

_c_s_m_f : _c_r_e_a_t_i_n_g _s_o_u_n_d_f_i_l_e_s _f_r_o_m _M_I_D_I _f_i_l_e _i_n_p_u_t _t_o _C_s_o_u_n_d

     The argument syntax to the _c_s_m_f script is:
     _c_s_m_f   _M_i_d_i_F_i_l_e   _O_r_c_h_F_i_l_e  _S_c_o_r_e_F_i_l_e   [_O_u_t_F_i_l_e]
_E_q_u_i_v_a_l_e_n_t _t_o _t_y_p_i_n_g: _c_s_o_u_n_d -_d_m_6 -_o  _O_u_t_F_i_l_e  -_F  $_M_i_d_i_F_i_l_e
_O_r_c_h_F_i_l_e  _S_c_o_r_e_F_i_l_e

where
     _M_i_d_i_F_i_l_e is the name of a type 0 MIDI file
     _O_r_c_h_F_i_l_e is the name of an orchestra file (there is  no
     default)
     _S_c_o_r_e_F_i_l_e is the name of a  score  file  (there  is  no
     default)
     _O_u_t_F_i_l_e is  the  name  of  the  output  soundfile.  The
     default  _C_s_o_u_n_d output soundfile name _t_e_s_t will be used
     if this argument is omitted.

Example:  _c_s_m_f  _f_u_g_u_e_2._m_f _o_r_c _s_o_u_t _f_u_g_u_e_2
     Result: Soundfile _f_u_g_u_e_2 will be compiled  from  the  3
     input files _f_u_g_u_e_2._m_f (MIDI file), _o_r_c (orchestra file)
     and _s_o_u_t (score file).

=> IMPORTANT NOTE: There is one major complication  in  run-
ning _c_s_m_f commands: determining an appropriate _d_u_r_a_t_i_o_n  for
the output soundfile, and inserting this value into  the  _f_0
("function 0") definition within the score file.

     Whenever MIDI input (either from a _M_I_D_I _f_i_l_e or from  a
real-time  performance) is used with _C_s_o_u_n_d, the argument in
the _f_0 definition within the score file determines a  _t_e_r_m_i_-
_n_a_t_i_o_n  _t_i_m_e  (and thus an _o_u_t_p_u_t _d_u_r_a_t_i_o_n) for the job. The
Eastman _m_k_c_s_o_u_n_d_m_i_d_i_p_l_a_y and _g_e_t_m_i_d_i_s_c_o_r_e scripts are optim-
ized  for  real-time performance, and by default (unless you
include a _D_U_R argument), set this output duration to a  gen-
erous 300 seconds, like this:
     _f_0  _3_0_0    (set the output duration to 300 seconds)
     _e          (end the performance)
When a soundfile is created with MIDI input, the argument to
_f_0  determines the duration of the output soundfile, even if
the _M_I_D_I _f_i_l_e data produces only a  few  seconds  of  sound.
With  the  _f_0  definition above, the soundfile would be very
large (300 seconds in length), consuming lots of disk space,
and likely would consist mostly of silence.

     When we run _C_s_o_u_n_d in real-time _p_l_a_y_b_a_c_k mode (as  with
_c_s_m_f_p  or  _c_s_m_f_p+),  we  typically  terminate the _C_s_o_u_n_d job
manually, after all of  the  notes  have  been  played  (but
before the maximum possible performance time specified in _f_0
has been reached), by typing  a  _c_o_n_t_r_o_l  _c.  However,  when
writing the samples to a soundfile with _c_s_m_f, you should _n_o_t
terminate the job manually in this fashion. With our current
SGI version of _C_s_o_u_n_d, manual termination will not implement
the necessary, correct  file  closing  procedures,  and  the
resulting soundfile will be unreadable and unplayable.

     Thus, before running _c_s_m_f, we  need  to  determine  the
actual  duration of the audio output that will be created by
the _M_I_D_I _f_i_l_e data, then edit the _f_0 argument in  our  score
file,  setting  this  argument  equal  to,  or very slightly
longer than, the duration of the audio output.

     There are several ways in which we  can  determine  the
audio duration of the data within a _M_I_D_I _f_i_l_e.


Generally the _b_e_s_t way to determine the  audio  duration
of  _M_I_D_I  _f_i_l_e data is to use the _r_e_a_d_m_i_d_i program. _r_e_a_d_m_i_d_i
reads in a _M_I_D_I _f_i_l_e and converts the data to the format  of
a _C_s_o_u_n_d score file. If we type
                    _r_e_a_d_m_i_d_i  _S_c_o_r_e_F_i_l_e
we will see something like this:

     ; The Midi file (in C Sound format) begins here.
     ; p1 = midi channel + 1
     ; p2 = start time in seconds
     ; p3 = duration in seconds
     ; p4 = amplitude (midi velocity)
     ; p5 = pitch (midi note number)
     ; Track # 0

     i1     0.000000    0.204167   70   60
     i16    0.000000   16.368750   92   24
     i1     0.191667    0.133333   64   64
     i1     0.342708    0.145833   54   67
     i1     0.511458    0.128125   73   72
     (intervening notes)
     i1    14.437500    0.735416   64   68
     i1    15.229167    0.157291   81   61
     e

If we scan down near the end of the note list, we can deter-
mine  the  end  time  (p2  +  p3) of  the last sounding note
(which is not necessarily the last note in the event list).
     A local _h_e_l_p_f_i_l_e is available on the SGI system for the
     _r_e_a_d_m_i_d_i  program.  Also , more discussion of this pro-
     gram is included below.
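
This scan can also be automated. The sketch below (illustrative
Python, not a Library script) finds the maximum p2 + p3 over all
note statements in a readmidi-style note list:

```python
def last_note_end(score_lines):
    """Find the end time (p2 + p3) of the last *sounding* note in
    a readmidi-style note list.

    Each note statement looks like:  i1  0.000000  0.204167  70  60
    The note that ends last is not necessarily the last line in
    the list, since an early long note may outlast later short
    ones (like the 16-second i16 note in the example above).
    """
    end = 0.0
    for line in score_lines:
        fields = line.split()
        if fields and fields[0].startswith("i"):
            p2, p3 = float(fields[1]), float(fields[2])
            end = max(end, p2 + p3)
    return end

notes = [
    "i1     0.000000    0.204167   70   60",
    "i16    0.000000   16.368750   92   24",
    "i1    15.229167    0.157291   81   61",
]
print(last_note_end(notes))   # the i16 note ends last, at 16.36875
```

The result (rounded up slightly) is the value to use for the _f_0
duration in the score file.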

If we are creating a new score file that includes  _m_i_d_i_f_u_n_c_s
function  definitions for use with our _M_I_D_I _f_i_l_e, we can use
the _g_e_t_m_i_d_i_s_c_o_r_e utility to create this score, and  set  the
optional  _D_U_R   (duration)  argument  of _g_e_t_m_i_d_i_s_c_o_r_e to the
value we determined above.

If we already have a suitable score file (except for the  _f_0
duration  argument), we could use _v_i or some other text edi-
tor to change the _f_0 value. A  quicker  and  easier  way  to
accomplish  this,  however, is to use the local script _c_s_m_i_-
_d_i_s_t_o_p.  The syntax of this command is:
      _c_s_m_i_d_i_s_t_o_p  _S_c_o_r_e_F_i_l_e  _D_U_R_A_T_I_O_N  [_N_e_w_S_c_o_r_e_F_i_l_e]
where
     _S_c_o_r_e_F_i_l_e is the name of the file to be edited,
     _D_U_R_A_T_I_O_N is a new _d_u_r_a_t_i_o_n and _t_e_r_m_i_n_a_t_i_o_n argument for
     _f_0, and
     the optional _N_e_w_S_c_o_r_e_F_i_l_e argument is the name of a new
     output  file  that  will  contain the edited version of
     _S_c_o_r_e_F_i_l_e. If no new file (third  argument)  is  speci-
     fied, _S_c_o_r_e_F_i_l_e will be overwritten. Thus,

                      _c_s_m_i_d_i_s_t_o_p  _s_o_u_t  _1_6
     will change the current _f_0 line in file _s_o_u_t to
          _f_0  _1_6
For additional examples, type the  command  name  _c_s_m_i_d_i_s_t_o_p
with no arguments.
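
The edit that _c_s_m_i_d_i_s_t_o_p performs amounts to substituting a new
argument on the _f_0 line of the score. A hypothetical
reimplementation of that substitution in Python (the actual
script's behavior may differ in details):

```python
import re

def set_f0_duration(score_text, duration):
    """Replace the argument of the f0 statement in a Csound score.

    Finds the first line beginning with "f0" and substitutes the
    new duration/termination value, leaving the rest of the score
    text untouched.
    """
    return re.sub(r"^(\s*f0\s+)\S+",
                  lambda m: m.group(1) + str(duration),
                  score_text, count=1, flags=re.MULTILINE)

# csmidistop sout 16  would change "f0  300" to "f0  16":
print(set_f0_duration("f0  300\ne\n", 16))
```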

                       --------------
_A_d_d_i_t_i_o_n_a_l _n_o_t_e_s _o_n _t_h_e  readmidi _p_r_o_g_r_a_m, _f_o_r _a_d_v_a_n_c_e_d _u_s_e_r_s

o+ A companion program, _w_r_i_t_e_m_i_d_i performs a reverse transla-
tion,  converting _C_s_o_u_n_d score files into type 0 _M_I_D_I _f_i_l_e_s.
If you forget how to use either program,  type  the  command
name with no arguments.
o+ _r_e_a_d_m_i_d_i and _w_r_i_t_e_m_i_d_i do _n_o_t convert _a_l_l of the MIDI data
or  score p-fields within the input file. Only the following
data is captured and converted:

MIDI file                    Csound score file
___________________________________________________________________________
MIDI channel number   <-->   instrument number
note-on time          <-->   p2
note-off time         <-->   p3 duration (note-off time minus note-on time)
velocity              <-->   p4 (usually mapped to amplitude)
note number           <-->   p5 (usually mapped to pitch)

All other MIDI controller data (from foot pedals, pitch  and
modulation wheels, and so on), and all _C_s_o_u_n_d score file  p-
fields above _p_5, are lost in the  process.  Obviously,  this
limits the utility of these programs.
o+ _C_s_o_u_n_d score files created from a MIDI file with  _r_e_a_d_m_i_d_i
can  be used by a score-based Csound instrument algorithm in
which score p-fields _1 through_5 are mapped as in  the  table
above. This would be a bare-boned instrument algorithm.
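
The table's mapping can be expressed as a one-line conversion per
note. A sketch (illustrative Python; readmidi's actual output
formatting may differ):

```python
def note_to_istatement(channel, on_time, off_time, velocity,
                       note_number):
    """Convert one MIDI note to a Csound score i-statement,
    following the readmidi mapping: p1 = MIDI channel + 1,
    p2 = note-on time, p3 = note-off time minus note-on time,
    p4 = velocity, p5 = note number.
    """
    return "i%d  %f  %f  %d  %d" % (
        channel + 1, on_time, off_time - on_time,
        velocity, note_number)

# A middle C (note 60) on channel 1 (index 0), velocity 70:
print(note_to_istatement(0, 0.0, 0.204167, 70, 60))
```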
