1. Reverb (short for reverberation) is the acoustic environment that surrounds a sound. Natural reverb exists everywhere. Whether the space being described is a bathroom or a gymnasium, the essential characteristics remain the same: reverb is composed of a series of tightly spaced echoes. The number of echoes and the way they decay play a major role in shaping the sound that you hear. Many other factors influence the sound of a reverberant space, including the dimensions of the space (length, width, and height), its construction (such as whether the walls are hard or soft and whether the floor is carpeted), and diffusion (what the sound bounces off of). (http://whatis.techtarget.com/definition/reverb-reverberation)
  2. Here is another example of what reverb is: http://audacity.sourceforge.net/manual-1.2/effects_reverb.html
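The "series of tightly spaced echoes" described above can be sketched as a single feedback comb filter, the basic building block of algorithmic reverb. This is a minimal illustration, not production code; the function name and parameters are invented for the example.

```python
import numpy as np

def comb_reverb(dry, delay_samples, decay):
    """One feedback comb filter: each trip around the loop produces
    another echo, `decay` times quieter, at a fixed spacing."""
    n = len(dry) + delay_samples * 20  # leave room for the decaying tail
    out = np.zeros(n)
    out[: len(dry)] = dry
    for i in range(delay_samples, n):
        out[i] += decay * out[i - delay_samples]
    return out
```

Feeding it a single impulse shows the echo train directly: the output is 1.0, then 0.5 five samples later, then 0.25, and so on. Real reverb units run several comb filters in parallel with different delays to mimic the irregular reflections of a room.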


  1. Delay is a simple concept: the original audio signal is followed closely by a delayed repeat, just like an echo. The delay time can be as short as a few milliseconds or as long as several seconds. A delay effect can include a single echo or multiple echoes, usually reducing quickly in relative level. Delay also forms the basis of other effects such as reverb, chorus, phasing and flanging.
  2. Here are a few examples of how to make delay effects in your own projects: http://www.soundonsound.com/sos/may12/articles/designer-delay.htm
  3. Working with the delay and echo effect in Adobe CS6: http://blog.infiniteskills.com/2012/07/adobe-audition-cs6-tutorial-delay-and-echo/
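The delay described above, with multiple echoes "reducing quickly in relative level", can be sketched in a few lines. This is a minimal feedback-delay illustration with invented names and parameters, not any particular plugin's implementation.

```python
import numpy as np

def delay_effect(signal, delay_ms, feedback, mix, sample_rate=44100):
    """Feedback delay: the input is followed by repeats that decay by
    a factor of `feedback` each time; `mix` scales the echoes."""
    delay_samples = int(sample_rate * delay_ms / 1000)
    out = signal.astype(float).copy()
    gain = feedback
    offset = delay_samples
    # Add successively quieter echoes until they are inaudibly small.
    while offset < len(signal) and gain > 1e-4:
        out[offset:] += mix * gain * signal[: len(signal) - offset]
        offset += delay_samples
        gain *= feedback
    return out
```

With feedback of 0.5, each echo arrives at half the level of the previous one, which is exactly the "reducing quickly in relative level" behaviour described above.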

Terms Associated With Broadcast Audio:

  1. B-Roll – video that is shot for a TV news story and used to visualize the script the reporter/anchor has written.
  2. EZ News – the newsroom computer software. It allows you to create news rundowns, write stories for newscasts, print scripts, and run the teleprompter, all from the same location/server.
  3. Natural Sound – aka Nat Sound, Nat S-O-T, or Ambient Sound – Background voices, music, machinery, waterfalls, and other environmental sounds that are recorded on-scene and used to create a sound bed for a recorded or live report.  Primarily used for setting a mood or providing atmosphere for a report. This technique is frequently overused, but when used properly it adds immeasurably to a story.
  4. Nielsen – service primarily used in determining television ratings.
  5. Live shot/Live Report – A TV news story during which a news anchor or reporter is live at a remote location. Within this report can be included a SOT, VO/SOT or PKG.
  6. On-Set Appearance – Reporter appears on set and is introduced by a news anchor. The reporter can then introduce his/her news package or report his/her story from there.
  7. Package (PKG) – A report from a correspondent that contains a sound bite inserted between the introduction and the epilogue (usually inserted after the reporter’s second or third sentence).  These need an in-studio lead for the anchor.
  8. Sound bite (SOT) – edited slice of a newsmaker speaking. Similar to actuality in radio except the person can be seen. Often several SOTs are spliced together, with the edits covered by video. These can be included in PKGs and VO/SOTs or can stand alone.
  9. Stand-up – part of package with reporter on screen reading/presenting information.
  10. Voiceover (VO) – A TV news story during which a news anchor or reporter reads a script live as video is played.
  11. Voiceover-to-sound (VO/SOT) – A TV news story during which a news anchor or reporter reads a script live as video is played, up to the point when a newsmaker video/audio sound bite is played. At the end of the SOT, the reporter or anchor resumes reading with or without additional video.
  12. http://www.udel.edu/nero/Radio/glossary.html

Terminology associated with signal processing:

  1. Digital Signal Processing – The process of converting analog signals into digital form and manipulating them numerically. The audio is processed in real time, and the result can be a clearer signal than the original. This method uses algorithms running in digital hardware or software to receive and process the signal, and the output can produce a wide range of sound effects. This type of processing is present in many audio devices, such as radios, MP3 players, and surround sound systems.
  2. Statistical signal processing – analyzing and extracting information from signals and noise based on their stochastic properties.
  3. Audio signal processing – for electrical signals representing sound, such as speech or music.
  4. Speech signal processing – for processing and interpreting spoken words.
  5. Image processing – in digital cameras, computers and various imaging systems.
  6. Video processing – for interpreting moving pictures.
  7. Array processing – for processing signals from arrays of sensors.
  8. Time-frequency analysis – for processing non-stationary signals.
  9. Filtering – used in many fields to process signals.
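The Filtering entry above can be illustrated with the simplest possible digital filter, a moving average: averaging adjacent samples passes slow (low-frequency) variation while attenuating rapid (high-frequency) variation. This is a deliberately crude sketch, not a filter you would use in production.

```python
import numpy as np

def moving_average_lowpass(signal, window):
    """Crude low-pass filter: convolve with a flat averaging kernel.
    Slow variation survives; sample-to-sample wiggle is smoothed away."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")
```

A steady (DC) signal passes through unchanged, while a signal alternating every sample, the fastest variation a digital signal can carry, is averaged out to nearly zero.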

Signal Flow Basic Definitions:

  1. Gain: A level adjustment designed to optimize each signal coming into the console.
  2. Pad: If you turn the gain all the way to the left and the signal is still too hot, then you should engage the pad, which will reduce the incoming signal by a preset amount (usually 20 dB or so).
  3. HPF: High-Pass Filter. A circuit which sharply decreases low frequencies, reducing mike handling noise, stage rumble, and plosives (p-pops).
  4. Polarity: A simple switch which flips the polarity of the input. (Sometimes incorrectly labeled ‘phase’). Useful for eliminating phase-cancellation when using multiple mics on the same source (both the top and bottom of a snare drum, for example).
  5. Insert Loop: A patch point for connecting outboard gear, such as a compressor or effects unit.
  6. Direct Out: An individual channel output after the gain stage, but before EQ or fader involvement. Most often used for feeding multitrack recorders.
  7. Aux mix: A separate mix of the input channels, with its own output, which can be used to feed stage monitors, a recording mix, sends to a reverb unit, or other destinations.
  8. Pre/Post: An indication of where the Aux mix splits off from the main signal. If it is labeled as a “Pre” or “PreFade” mix, then its level is completely independent of the channel’s fader. If it is labeled as a “Post” or “PostFade” mix, then the aux’s level will also be affected by the channel fader as it is adjusted.
  9. PFL: Pre Fade Listen. Works as a “solo” button for the engineer’s headphones. You can isolate an individual channel, and hear changes you make with the EQ. Because it is pre-fade, it does not matter where the fader is at the time.
  10. Group/Subgroup: A Subgroup (or just “Group” on some consoles) is a tool used to help the audio tech during a service or performance. Rather than have to independently mix 32, 40, or even up to 56 channels on a console, you can assign, for example, all the drums to one fader called a “subgroup”. The Subgroup does not affect any aux sends, it only affects the main mix. So I can raise or lower the level of all 8 drum mics on one fader – VERY USEFUL.
  11. VCAs and VCA Groups: VCA stands for Voltage Controlled Amplifier, and it is a common way to “automate” certain things on a mixing console. You can assign multiple channels to a VCA (just like a group), but the difference is – NO AUDIO IS PASSED THROUGH A VCA. Instead, the VCA acts exactly like a remote control for the channels which are assigned to it. Where it gets really interesting is that channels assigned to a VCA Group DO NOT have to share a common audio path AT ALL. (This means you can have the entire band on one VCA fader, even if they all are routed to different mixes and subgroups!) Something to keep in mind with VCAs that you don’t have to worry about with Groups: a VCA provides the exact same function as adjusting a channel’s fader (including any changes to its Post Aux mixes). This is different from a SubGroup, as a sub would only affect the house mix.
  12. Buss: a common term seen in mixing console owner’s manuals. It is an electrical term rather than an audio term. Technically, an aux mix, a subgroup, a master mix, a mono output, a matrix output, etc. are all busses. The only way this term becomes important to an audio tech is in the possibility that you get some “buss distortion” which may not show up on the meters. If, for example, I assign all 32 channels of a console to SubGroup 1, and the console I’m driving doesn’t have Group Meters, and I keep the Group 1 fader low enough that I don’t get overloads on the Master Mix Meters, then it would be possible to overload the Group 1 buss, creating distortion that would not show up on any meters. This is an extreme example to make a point – but I think you get it.
  13. Matrix Mix: A completely different kind of output available only on the larger consoles. Its sole purpose is to create an alternate mix to be used for recording, for routing a different mix to a different room, or for any other specialized purpose. You will not see a Matrix split on the following audio signal flows. Why? Because they are not made up of individual channels! A Matrix mix is created solely from mixing the Main Outputs and SubGroup Outputs. So a Matrix Out is created downstream from any individual channel functions.

Example of Signal Flow: http://www.soundonsound.com/sos/oct08/articles/qa1008_2.htm

  14. At the end of the signal flow you have speakers or monitors. Here is a video about how speakers or monitors work:
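The Gain and Pad entries above rest on decibel arithmetic: on the 20·log10 (voltage) convention, a 20 dB pad reduces the incoming amplitude by a factor of 10. A minimal sketch of the conversion (the function name is invented for this example):

```python
def db_to_amplitude_ratio(db):
    """Decibels to amplitude ratio, using the 20 * log10 voltage convention."""
    return 10 ** (db / 20)

# A typical 20 dB pad scales the incoming amplitude to one tenth.
pad = db_to_amplitude_ratio(-20)
```

As a rule of thumb that falls out of the same formula, every 6 dB of gain roughly doubles the amplitude (`db_to_amplitude_ratio(6)` is about 1.995).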

Editing and mixing techniques to produce programming:

Final Sound Editing, Design, and Mixing – Sound editing, design, and mixing comprise a series of activities that are geared toward polishing the audio of your program to enhance the final presentation. Never underestimate the power of a good mix. Audiences may forgive problems with a program’s picture, but they’ll never forgive poor audio. To clarify, audio post-production involves the following tasks:

  1. Dialogue editing: Editing dialogue involves fine-tuning lines spoken by actors and other onscreen speakers and fixing bad pronunciation, stumbled words, and other slight defects of speech.
  2. Automated dialogue replacement (ADR, looping, or dubbing): This is the process of completely rerecording lines that were originally recorded in unsalvageable situations. For example, if there was an off-camera cement mixer in a critical location that filled the audio with noise that can’t be filtered out, you can simply rerecord the dialogue later.
  3. Voiceover recording: This involves pristinely recording narration in such a way as to best capture the qualities of a speaker’s voice.
  4. Sound design: This is the process of enhancing the original audio with additional sound effects and filters, such as adding car crash or door slam sound effects to a scene to replace sound that was too difficult or unimpressive to record cleanly in the field.
  5. Foley recording and editing: This is the process of recording and editing custom sound effects that are heavily synchronized to picture, such as footsteps on different surfaces, clothes rustling, fight sounds, and the handling of various noisy objects.
  6. Music editing: Whether you’re using prerecorded tracks or custom-composed music, the audio needs to be edited into and synchronized to events in your program, which is the music editor’s job.
  7. Mixing: This is the process of finely adjusting the levels, stereo (or surround) panning, equalization, and dynamics of all the tracks in a program to keep the audience’s attention on important audio cues and dialogue and to make the other sound effects, ambience, and music tracks blend together in a seamless and harmonious whole.

File types with their relative audio software:

There are three basic types of digital audio file: uncompressed, or “common” formats, such as the WAV format; formats that use a compression technique but lose absolutely none of the data in the compression, known as lossless compression; and formats that do lose some of the original data but retain a fairly high quality, known as lossy compression.

  1. The WAV format is the most common of the common digital file types. It is an older format, made as a joint effort between IBM and Microsoft as a way to put audio files on personal computers. WAV files tend to be very large, since they are not compressed at all, so it is rare to find them where space is at a premium. They are used where space is not a big concern, or where compression is not possible for other reasons — standard compact discs, for example, use an uncompressed file using pulse-code modulation (PCM).
  2. The MP3 format is probably the most well known digital audio format, and is a good example of a lossy compression system. The MP3 format was developed in the late 1980s, and had a huge spike in popularity in the mid-1990s with the rise of the Internet as a file-sharing medium. MP3 files are ideal for sharing online or in any context where space is at a premium because they can be compressed down to much smaller sizes than WAV files. The quality is reduced — most MP3s are encoded at anywhere between 160 and 320 kb/s, as opposed to the 1411.2 kb/s of a WAV file — but for many people, the loss of sound fidelity is unnoticeable, especially with inexpensive speakers.
  3. AAC, or Advanced Audio Coding, is another audio format that has seen huge popularity in the Internet age. It is a newer compression system, and is generally agreed upon as having a higher-quality sound at the same compression levels as MP3. AAC is also able to accept digital rights management (DRM) systems, which limit how the files can be used or transported. The best example of this is Apple’s use of the AAC format, wrapping it in their DRM system, FairPlay, and putting it in its own container, with the .MP4 extension. While normal AAC files are compatible with a wide range of operating systems and devices, AAC files in an .MP4 wrapper are compatible only with Apple’s software and devices.
  4. The Vorbis format is a lesser-known but still widely used digital format, similar to MP3 or AAC. It was conceived as an alternative to MP3, when there was a threat that the file type would become a pay-for-licensing format. Vorbis files are suffixed with the .ogg extension, and in this wrapper are known as Ogg Vorbis files. The quality of Vorbis is comparable to MP3 — and some would say it performs better in some situations — but its success comes from the fact that it is not patented. This format usually sees the most popularity among proponents of the open source movement.
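The uncompressed bit rate quoted above for WAV (1411.2 kb/s) follows directly from CD audio's parameters, and the same arithmetic explains the "about 1/11" compression ratio quoted for a 128 kbit/s MP3:

```python
# CD audio parameters (given in the text above)
sample_rate_hz = 44100
bit_depth = 16
channels = 2

wav_bps = sample_rate_hz * bit_depth * channels  # 1,411,200 bits per second
wav_kbps = wav_bps / 1000                        # 1411.2 kb/s
mp3_kbps = 128
ratio = wav_kbps / mp3_kbps                      # roughly 11:1
```

So a 128 kbit/s MP3 really is about one eleventh the size of the uncompressed source, exactly as the MP3 entry below states.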

Popular Digital files:

  1. MP3 = 320 kbps: The use in MP3 of a lossy compression algorithm is designed to greatly reduce the amount of data required to represent the audio recording and still sound like a faithful reproduction of the original uncompressed audio for most listeners. An MP3 file that is created using the setting of 128 kbit/s will result in a file that is about 1/11 the size of the CD file created from the original audio source. An MP3 file can also be constructed at higher or lower bit rates, with higher or lower resulting quality.
  2. CD = 44.1 kHz: The hearing range of human ears is roughly 20 Hz to 20,000 Hz, and by the Nyquist–Shannon sampling theorem the sampling frequency must be greater than twice the maximum frequency one wishes to reproduce, so the sampling rate had to be greater than 40 kHz. In addition, signals must be low-pass filtered before sampling, otherwise aliasing occurs. While an ideal low-pass filter would perfectly pass frequencies below 20 kHz (without attenuating them) and perfectly cut off frequencies above 20 kHz, in practice a transition band is necessary, where frequencies are partly attenuated. The wider this transition band is, the easier and more economical it is to make an anti-aliasing filter. The 44.1 kHz sampling frequency allows for a 2.05 kHz transition band.
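The aliasing described above can be verified numerically: a tone above the Nyquist frequency, once sampled, produces exactly the same sample values as a lower "alias" tone folded back into the audible band, which is why the input must be low-pass filtered before sampling. A small sketch at CD rates:

```python
import numpy as np

fs = 44100        # CD sampling rate
nyquist = fs / 2  # 22050 Hz: the highest representable frequency
f_in = 25000      # a tone above Nyquist
f_alias = fs - f_in  # folds down to 19100 Hz, inside the audible band

# The sampled values of the two tones are identical (up to sign):
# once sampled, the 25 kHz tone is indistinguishable from 19.1 kHz.
n = np.arange(16)
above = np.sin(2 * np.pi * f_in * n / fs)
folded = -np.sin(2 * np.pi * f_alias * n / fs)
```

Since no comparison of the samples can tell the two tones apart after the fact, the anti-aliasing filter has to remove the out-of-band energy before the converter ever sees it.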

Sample Rate and Bit Depth:


  1. Here is a historic look at how recording was done at the turn of the 20th century: mixing before pressing record was a big part of the process, and placing people and musicians at set locations helped bring out the instruments the recording engineer wanted to emphasize. Recording like this also produced a natural balance of sound: the musicians or vocalists themselves served as the mixing board, instead of a console or DAW being used to mix the subjects being recorded.


Sound Effects:

  1. Foley effects: are sound effects added to the film during post-production (after the shooting stops). They include sounds such as footsteps, clothes rustling, crockery clinking, paper folding, doors opening and slamming, punches hitting, glass breaking, and so on. In other words, many of the sounds that the sound recordists on set did their best to avoid recording during the shoot. The boom operator’s job is to clearly record the dialogue, and only the dialogue. At first glance it may seem odd that we add back to the soundtrack the very sounds the sound recordists tried to exclude. But the key word here is control. By excluding these sounds during filming and adding them in post, we have complete control over the timing, quality, and relative volume of the sound effects. For example, an introductory shot of a biker wearing a leather jacket might be enhanced if we hear his jacket creak as he enters the shot – but do we really want to hear it every time he moves? By adding the Foley sound effect in post, we can control its intensity, and fade it down once the dialogue begins. Even something as simple as boots on gravel can interfere with our comprehension of the dialogue if it is recorded too loudly. Far better for the actor to wear sneakers or socks (assuming their feet are off screen!) and for the boot-crunching to be added during Foley. (http://www.sound-ideas.com/what-is-foley.html)
  2. Room Tone: a recording of the sound of the room you have just shot a scene in. The camera is shut down, the actors are gone or standing quiet, the lights are on, and every other background sound is as it was during the scene. Room tone is used to create a matching background for new material to be inserted into the track, or to fill holes created in the track by the removal of the director’s voice or other background noises.
  3. Ambient Sound: also known as ambiance. I use this term in preference to ‘room tone’. It is the background sound of the set, whether interior or exterior. The camera is not operating and the actors stand silent, but all else is the same as while the scene was being shot. Its purpose is the same as room tone. Ambiance used to be recorded for several minutes, on the theory that any edits would be audible. With the sampling technology of today, I usually record only 15 seconds of steady ambience, more if there are randomly occurring variations that might be recognized.
  4. Wild Sound: Any sound recorded at any time without a synch reference. For instance, recording a playground. The sound doesn’t need to be synced, but adds realism to the environment on screen.

Sound Design Video:
