Sonic Arts Breakdown

This is a great lecture breakdown/overview and history by Kevin Austin into the world of audio creation: Capturing, Crystallizing and Fragmenting Time: An Introduction to Sonic Arts.

Before the invention of sound recording and the movie camera, there was no way of capturing the images and echoes of the movement and sounds of life. We do not know how Beethoven (1770-1827) spoke, or how Mozart (1756-1791) moved. Written accounts of the day provide some sense of the shadow images of these things, but one man’s “gruff” is another woman’s “manly”.

Today, we capture time, delay it, play with it and transform it: we record, play back, create sonic art objects, and morph the voice of a woman into that of a sustained guitar note (Cher’s recent album). As far back as 1624 (27?), Francis Bacon, in his New Atlantis, proposes …

“We also have sound houses, where we practice and demonstrate all sounds, and their generation … We represent and imitate all articulate sounds and letters, and the voices and notes of beasts and birds. We have also divers strange and artificial echoes, reflecting the voice many times, and as it were tossing it: and some that give back the voice louder, shriller, deeper. We also have means to convey sounds in trunks and pipes, in strange lines and distances.”

This paper presents an overview of the field of sonic arts, focusing on developments related to sound in a non-verbal (abstract) context, parenthetically as ‘music’, but centering on ‘music and sound’ technologies, and electroacoustics / computer music. It is organized in five basic sections:

General Introduction
Let’s get Physical (and not so Physical)
You Mean ‘What does ‘mean’, mean?’ (Some antics with words)
Biased Histories
Sources / Addenda / Varia
Upcoming Concerts and Dates

pART the First

General Introduction

There are many uncanny parallels related to the capture and fragmentation of time and emotion in the first decades of this century in the visual arts (Marcel Duchamp: Nude Descending a Staircase No 2, 1912; Giacomo Balla: Girl Running Along a Balcony, 1912), text (James Joyce: Ulysses, 1914-21), image (cinema) and sound (recording). These accompanied the development of abstract artistic creations hinted at in cubism, and brought to the point of (literal) explosion by the Dadaists.

While visual artists were hanging urinals on gallery walls, Luigi Russolo’s 1913 Futurist manifesto, The Art of Noises, stated that the six families of noises of the Futurist orchestra are:

· Bustling noises
· Sounds obtained by friction
· Noises obtained by percussion on metals, wood, stone, terra cotta
· Voices of animals and men
In this list we have included the most characteristic fundamental noises; the others are but combinations of these.
The confines of the (Western European) nineteenth century view of reality were being challenged in the sciences (notably radioactivity, relativity and quantum mechanics), and in the arts and technology: things could never be the same. The radio meant that anyone could listen to music; World War I meant that anyone could die a completely senseless death.
Art has always been the beneficiary of technological development, and music possibly the most of all. Sound is untouchable, un-seeable, cannot be captured, exists independently of verbal language, has the power to move people to joy, tears and dancing. It is used to wake people, put them to sleep, celebrate birth, mark death and commemorate all forms of human activity which occur between these two (possibly except paying taxes).
Music and technology have always been linked, whether the mechanical technology: making a drum, a bone flute, an ‘ud (the Near Eastern precursor to the lute of the 8th century), the ch’in (Chinese zither dating from the 2nd century), the hydraulos (water organ, 3rd century BCE, Greece), the organ, piano, violin, synthesizer, sampler or turntable; or, the intellectual technology of the invention of notation and printing.
The invention of (a european) music notation about 1,000 years ago is significant because it allowed melodies to be carried from one place to another, and from one time to another, without the singer being present. The receiving musician simply decoded the symbols and reproduced a version of the original melody.
Technological developments encouraged musicians to find new ways of making these ungraspable sounds. Nineteenth-century engineering developments that allowed the building of larger buildings, bridges and boats were incorporated into the development of the piano, strengthening its frame. The industrial demand for high-quality steel wire for telegraph lines fundamentally changed the nature of the violin (from gut strings to steel), and the ability of the piano to sustain and project sound.
And the technologies converged and multiplied in effect: large buildings and louder pianos meant larger concert halls, larger audiences, and more demand for more (and better) instruments. The craftsman’s workshop gave way to the assembly-line type of production characteristic of the industrial revolution. Composers were not immune to the demands for more, newer pieces for this expanding market.
Magazines and periodicals appeared containing large numbers of new (and some older) pieces to be enjoyed at home – the 19th century hit parade. These were to be dutifully played on the parlour piano by the wives and daughters for the entertainment of the breadwinner, and to make the eligible female offspring more attractive. These publications largely disappeared by the 1930s-50s, replaced by electroacoustic technologies: recordings and radio.
The sciences sought ways to more fully and accurately explain, reproduce, predict and ultimately control aspects of the physical (and by extension the emotional) universe. Mathematical developments allowed for the precision of acoustical theories which meant better concert halls, better instruments, and eventually, the capture of sound onto a ‘fixed medium’ (recording).
It can be argued that sound recording and radio, and cinema and video, have at various times trodden the same developmental path. The grandparent of video is theatre, as that of ‘indie’ recording is the concert. The parallels of structure, function, presentation, creation, distribution and control are striking. [Below is an over-simplified view of some similarities and differences.]
 Theater =>  cinema =>  video =>  [www?]
 Concert =>  recording/radio =>  (home) recording =>  [www?]
WHEN and WHERE?

Theater / Concert: at one time, at one place
Film / Radio: at many times, in many places (but fixed)
Video / Recording: whenever, and anywhere (esp at home)
www: anytime, anywhere (demand delivery)
WHO is involved?

Theater / Concert: author / composer => Theater group / orchestra
Film / Radio: author / composer => actors / musicians
Video / Recording: artists, indies
www: !
WHAT kind of entertainment?

Theater / Concert: class entertainment
Film / Radio: popular entertainment
Video / Recording: everyone’s entertainment
www: personal interest
WHO shapes / edits the result?

Theater / Concert: performance / author; conductor / composer
Film / Radio: (author / composer / performer) => editor
Video / Recording: anyone
www: almost everyone?
WHO is the audience?

Theater / Concert: aristocracy / upper- middle-classes
Film / Radio: society
Video / Recording: artists / younger generation / family
www: !!
WHAT does it cost?

Theater / Concert: <yikes!!>
Film / Radio: <ouch!>
Video / Recording: almost spare change (capture/editing equipment, a computer, software, camera and microphone)
www: !!!

Sound, as art and as functional entity, has a psychological / physiological basis of its own. The multiple roles of sound in film are somewhat demonstrative of this, and while this list separates them, they are part of a multi-dimensional continuum. (See examples below.)

Sound is text: words that convey narrative or contextual information
Sound is sound effects (fx): non-verbal information in the form of ‘incidental noises’ that situate action
Sound is music: instrumental / vocal tunes and interludes: strongly conditioned by the hearer’s cultural background
Sound is sound design: that creative activity where sound fx, music and text combine to create a unified whole
Sound is space (multi-speaker): the creative use of space (proximity) and direction of sounds to place the listener ‘inside’ the (virtual) event.
Interestingly, the sequence of their introduction into film was: music, sound fx, text, sound design and space, while an anthropologist might argue that, in terms of human development, the perceptual system developed in the reverse order. Sounds, for the sake of survival, need to be placed in a location and understood in the broader context of where one is; a vocabulary is needed for precise communication; and music is the first sonic essential beyond mere physical survival.
In the 30s, 40s, 50s (before television), the radio ‘adventure’ drama was a popular form of entertainment. Cop and detective shows, science fiction fantasy, dramas, soap operas and comedies relied upon text, sound fx and music to allow the listener to establish the imaginary context for the show.

pART the Second

Let’s get Physical (and not so Physical)

But, just what is sound? What are some of its physical, psychoacoustic and psychological aspects?
Put simply, sound (in the physical sense) is created whenever the following conditions are met: a medium for the transmission of vibration (air, water, metal, wood etc); vibration with enough energy to be perceived; and vibration between about 20 and 18,000 times per second. [The air molecules compress and spread apart (rarefy), creating sound waves.]
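The third condition reduces to a one-line check. A minimal sketch (the function name and parameters are mine; the 20 to 18,000 limits are the text’s round figures, and real limits vary with age and listener):

```python
def audible(freq_hz, low_hz=20, high_hz=18_000):
    """Rough test of the third condition above: vibrations between
    about 20 and 18,000 times per second can be perceived as sound."""
    return low_hz <= freq_hz <= high_hz

audible(440)     # a concert A: within the audible range
audible(50_000)  # ultrasound: outside it
```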

The sound waves, [whether from the vibration of the loudspeaker in the stereo, the soundboard of a piano, the turbulence created when air hits the edge of a building or the edge of a pipe (e.g. the flute), the regular vibration of two objects in close proximity (vocal folds for voice, lips for a tuba player),] move away from the sound source in all directions, in a spherical mode. Sounds also bounce off objects, creating echoes and reverberations.

How “high” or “low” a note sounds (e.g. in singing or on a piano) is its frequency (or, perceptually, its pitch). As a sound becomes more distant, it becomes quieter (decreases in amplitude).
[Figure: original sound; higher frequency; lower frequency / lower amplitude]
Frequency and amplitude are considered to be two of the three principal elements of sound, the third being spectrum, or tone color. This is represented by various (regular) waveshapes, or some form of noise.


[Figure: wave shapes]



The human voice is able to change the frequency (pitch), amplitude (volume), and spectrum (tone color) of sounds independently.

Sing the word ‘back’. Sing it higher and lower. This is frequency.
Sing the word ‘back’. Sing it softer. This is a change of amplitude.
Sing the word ‘back’, and then the word ‘beak’. This is a change in spectrum.
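The three sung changes above can be mirrored in code. A hypothetical pure-Python sketch (function names are mine, not the lecture’s): changing `freq_hz` changes the frequency, scaling the samples changes the amplitude, and summing odd harmonics changes the spectrum (waveshape) without changing the pitch:

```python
import math

def sine_wave(freq_hz, amplitude, duration_s, sample_rate=8000):
    """A pure tone: freq_hz sets how 'high' it sounds, amplitude how loud."""
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n)]

def odd_harmonics(freq_hz, duration_s, harmonics=(1, 3, 5), sample_rate=8000):
    """Same fundamental frequency, different spectrum: summing odd
    harmonics bends the waveshape toward a square wave (a change of
    tone color, not of pitch)."""
    n = int(duration_s * sample_rate)
    return [sum(math.sin(2 * math.pi * freq_hz * h * t / sample_rate) / h
                for h in harmonics)
            for t in range(n)]

tone = sine_wave(440, 0.5, 0.01)     # A440 at half amplitude
brighter = odd_harmonics(440, 0.01)  # same pitch, richer tone color
```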
Speech is largely a function of changing the spectrum of the sound made by the vocal cords. There are two basic kinds of sounds in speech: vowels (which have pitch), and consonants (which are modified noise, either with or without voice [compare /p/ and /b/: pour, bore]).
Tone color, or spectrum, is central in many kinds of sonic arts, and in some styles is considerably more important than pitch or rhythm (acousmatics, cinema for the ear etc; see below). While the major sonic technological developments before 1900 focused on aspects of pitch and amplitude (making larger, louder instruments), the major 20th-century developments have been in the area of spectrum.
How we hear.
Sound waves travel through the air at about 330 meters per second. (This is why you can tell the distance of thunder: it takes about 3 seconds for the sound to travel one kilometer.) The pinna, the folded ear flap of the external ear, focuses sound waves towards the ear canal (meatus), the small hole in the skull, where they vibrate the ear drum. The size and shape of the pinna contribute significantly to front/back and up/down discrimination, which is why headphones give poor front/back and elevation perception in most cases.
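The thunder rule of thumb reduces to a single multiplication. A small sketch (names are mine; 330 m/s is the text’s round figure, and the true speed varies with temperature):

```python
SPEED_OF_SOUND_M_S = 330  # the text's round figure

def thunder_distance_m(seconds_after_flash):
    # Light arrives essentially instantly; sound lags behind at ~330 m/s,
    # so ~3 seconds of lag corresponds to roughly one kilometre.
    return SPEED_OF_SOUND_M_S * seconds_after_flash

thunder_distance_m(3)  # 990 metres, i.e. about one kilometre
```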
The sound waves vibrate the ear drum (tympanic membrane), which moves the ossicles (hammer, anvil and stirrup, the three smallest bones in the body), which in turn vibrate the fluid in the inner ear. The cochlea (the small snail-shaped structure inside the temporal bone [the hardest bone in the body]) is the organ where physical vibration becomes nerve impulses, which are transmitted to the brain through the eighth cranial nerve (the auditory nerve). The brain organizes and interprets these nerve impulses, the result being what we call sound.
(The eustachian tube balances air pressure in the middle ear, on the inside of the ear drum, and when it opens, as when flying, there is a ‘popping’ sensation.)
Sounds range in dynamic level (volume) from imperceptibly small to deafeningly loud. Exposure to very loud sounds, even for very short periods of time, can result in permanent hearing loss.
Headphones are deceptive, for it is possible to be exposed to very high sound levels for long periods of time without ‘noticing it’.
Ringing or buzzing of the ears is a serious signal of potential ear damage. There are numerous sources of information on ear damage and deafness. Inform yourself before being exposed to loud sounds, and if you are to be exposed, wear good ear protection. Who wants to be deaf at 28?
How are people affected by sound? and by music?
There is a widely held view that responses to specific types of sounds / music(s) are determined by the culture and education. Sounds of a kinetic nature, rhythmic beating of drums for example, most frequently elicit a kinetic response, some kind of free dance or organized physical activity (such as work, or a military activity). Musics with strong kinetic components are frequently part of some other form of social activity, rather than “appreciation of the art” [sic].
Sounds which are predictable and repetitive allow the hearer to “get on with other things”; sounds which change in an unpredictable way need closer attention to be understood. Examples of this range all the way from social dance activities (where the beat just goes on), to the “tuning out” of parental ‘advice’, to adaptation to noisy environments (e.g. a cafeteria or history lecture <ooops!>). Speech ranges all the way from the predictable (and ignored) “Read the question.”, to the unusual or aphoristic, where every word has to be de-coded (e.g. Finnegans Wake).
If the ear or mind tires of the process of paying careful attention to every detail, then the sound moves from being important, to disappearing from consciousness, i.e., sound becomes noise.
What’s hot? What’s not? and Why?
The same technologies that have fed the developments of ‘art’ electroacoustic / computer music, have been used for popular music. The academic research community developed models for sound manipulation, control and synthesis, built the prototype tape recorders, computers, synthesizers and samplers, and then industry (such as Yamaha) bought the licenses to develop mass-produced, and mass-consumed versions of the ideas.
The history below indicates that such composers as Hindemith and Pierre Schaeffer were using turntables to do compositions 50 to 70 years ago, and these same techniques have been adopted by dub, rub, scrub and tub artists in the 90s. Sampling was in its early experimental stages in the 50s and 60s, and the reduction in cost of equipment now even puts samplers into $10 toys for children (of all ages).
So, is there a fundamental difference between ‘art’ music and ‘popular’ culture? One proposed model (from communications / information theory) is a reduction to the concepts of ‘hot’ and ‘cold’ (or cool) media. Simply put, ‘hot’ media give the consumer everything they need to know; ‘cool’ media require input from the receiver.
Much television is considered ‘hot’, because everything needed is given: a show is ‘cool’ when the viewer cannot be a passive recipient. Much poetry is considered ‘cool’. Examples include Japanese haiku and aphoristic poems, where the reader is invited to make their own interpretation and draw their own, very personal conclusions.

My love and I ponder the same sad face of the full winter moon:
A thousand miles, ten-thousand nights and ten-million tears apart.
Many terms are associated with this concept, and many activities are ‘categorized’ as ‘cool’, connoting some kind of mindful (as distinct from mindless) involvement. Cool media are by definition harder to penetrate: harder to draw unique conclusions about. Mass produced objects are ‘hot’, while unique objects are often considered ‘cool’: disposable items are not ‘cool’, whereas ‘classic’ (and frequently nostalgic) items are ‘cool’.
Is sound music?
Pierre Boulez (may have) said: “Music is the sound.” Charles Ives (may also have) said: “Music is what remains when the sound is gone.” These two famous quotes may or may not obscure the question. There are cultures where there is no single word for the (european) concept “music”. For many people, ‘music’ has elements of pitch (melody and harmony), and rhythm (or more precisely, beat).
The 20th century has witnessed attempts to make all sounds into the materials of music, just as visual artists have incorporated all images and materials into their work. The Dadaists and futurists proclaimed ‘everything is’ (or can be) material for music. This would appear to be a generational thing, for even Aristotle complained that the younger generation just didn’t know what a good tune was.
It should be noted in passing that one of the ways that a music school differs from a visual arts school is that music schools focus on the teaching of materials and creativity that were formalized more than 200 years ago, and for many years of music education, these form the only ‘acceptable’ basis for study. But that’s another thread. Studying music means playing or singing, and learning the names of notes and intervals, scales, chords and particular harmonic and rhythmic vocabulary: that is if the basis of the music is that of DWEMs and their successors. 
Differentiation of sounds (ASA)
With a picture of a smiling woman, it is possible to point out the lips, the eyebrow, the nostril, the flowing hair, and the line joining chin to ear. Sound introduces a specific problem with this non-temporal approach to the world: sound just won’t stand still. Also, when we hear a mono recording of a drum, bass, saxophone and voice, while we ‘hear’ four different things, there is only one sound wave.
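That last point can be shown directly: however many sources go into a mono mix, the result is a single sequence of pressure values. A small illustrative sketch (pure Python; sine tones at different frequencies stand in for the drum, bass, saxophone and voice):

```python
import math

def tone(freq_hz, n_samples, sample_rate=8000):
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n_samples)]

# Four stand-in 'instruments' at different frequencies...
drum, bass, sax, voice = (tone(f, 160) for f in (80, 110, 440, 660))

# ...summed to mono: ONE sound wave, from which the listening brain
# (auditory scene analysis) must re-separate the four sources.
mix = [d + b + s + v for d, b, s, v in zip(drum, bass, sax, voice)]
```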
After the sounds that have reached the ear have been encoded into nerve impulses, the brain must set about making sense of this ‘stream’ of information, turning it into understandable patterns. While it may seem ‘obvious’ that there are different sounds present, it has been no simple task to understand how this is done. A basic test of understanding is being able to get a machine to produce the same results a human would (although not always by the same process).
Detailed study of this problem started in the 1950s as the first general purpose computers were becoming available, with much of the preliminary work being done at the Bell Labs in New Jersey. This is where the first computer synthesized speech (and song) was created. (cf Bicycle Built for Two)
Successful film sound mixers have long developed an intuitive feel for how sounds can and should be put together for maximum effect. The correct mixing of text, sound fx and music have significant impact upon the public and critical reception of a film.
The area of study called Auditory Scene Analysis (ASA) has in the past decade proposed a number of models of how sound differentiation (streaming) and blending (integration) occurs. Much of the information coming out of this research has direct applicability to 3-D and ‘virtual reality’ sound creation.

Describing Sound

There is no single, simple, widely accepted method for describing sound(s) in detail, although many people have worked on this problem, and there are numerous research projects currently underway in this area.
Traditionally, sound has been broken down into two basic categories:

Noise <--> Useful sounds (not-noise)

This is a useful psychological categorization, as it helps determine one’s relationship to the sound; however, it does little to describe the sound.

(At 3:00 am, an ambulance siren rushing past my sleeping bedroom is noise; at 3:00 in the morning, an ambulance siren stopping next to my unconscious body is ‘not-noise’.)
Sound is described by function and context.
The ‘Noise / Not-noise’ categorization relies upon certain physiological functions of the human ear and mind – along with a number of semantic ones, e.g. if a tree falls in a forest and no one is around, does it make a sound? This is about the definition of ‘sound’ as a psychological or a physical attribute. [If sound is vibration of air within certain limits, then the answer is likely yes. If sound is the perception of these vibrations, then the answer is more likely no.]
And the sound itself is problematic, being both singular and collective. A bell is a singularity (or rather, a voice is a singularity); the ocean is a collective: a single bell sound can be described by specific physical and acoustic properties, while an ocean needs to be described by the multiple processes going on at the same time (mass structure).
An individual speaking will be heard as ‘speech’. To describe the sound of a crowd (or mob) is different. There are many individual sound sources and they merge into a ‘mass structure’ (composite event); however, through the psychological attribute of ‘selective hearing’, individual ‘channels’ of sound can still be perceived. This is known both as the ‘cocktail party effect’, where one is able to follow a specific train of speech even with very high background noise levels, and as the ‘deaf teenager effect’, where the adolescent is unable to hear the parent, but is able to listen to a CD, watch tv and talk on the phone at the same time – selective psychological filtering.
Sound complexes exist on a continuum from multiple discrete sources (streaming – e.g. a string quartet), to multiple indistinguishable sources (integration – e.g. an amusement center / video-pinball arcade). With the quartet (or even an octet) it is possible for a trained listener to hear (and follow) up to (about) 8 independent parts (lines), whereas the video-pinball machines, while each may be different, meld into a mass structure very quickly.

MASS STRUCTURE <<====>> DISCRETE SOURCES

A heavy-metal band is somewhere near the middle of this continuum, being heard most of the time as a fusion of individual sounds; the european orchestra occupies a wide part of the continuum, sometimes being heard as multiple solo lines, other times as (multiple) mass structures.
Many people approach music purely as emotional/psychological stimulus. Acoustics describes certain aspects of sounds, frequently in relation to the nature of the ‘source’, the way in which the sound transforms over time, and the nature of the transmission medium. Characteristic of all scientific descriptions, this aspect is measurable, repeatable and predictive. Auditory Scene Analysis models show possible relationships of sound and psychological responses.
· psychological representations are: intuitive / learned
· acoustics is: measurable / repeatable
· auditory scene analysis is: intuitive / learned / trained
There is a quasi-descriptive system that has evolved from the acoustic model: spectro-morphology (1980s). While still under development, it does try to describe ‘sound in time’. The ‘surface features’ of a spectrum are often described in the psychological domain as: smooth, liquid, hollow, buzzy, granular, highly textured, uneven, edgy, coarse, fine, pitted, knobby, fuzzy, silken, transparent, translucent, metallic (check any good thesaurus for more terms borrowed from the visual domain). Compare some of these terms with those suggested by the Futurists (see above).
It is frequently useful to break the texture into component parts, to represent the ‘channelization’, e.g. Sadly, the children’s voices are underpinned by an oboe playing a liquid melody over the sound of a door bell, while church bells behind complement the distant roar of the ocean, like the ever/never dying sleeping breath of the once and forever dead. This requires a model such as ‘auditory scene analysis’, and would be most useful for film and video soundtrack producers.
This particular description is simultaneously (and variably):

· programmatic [referring to some aspect of narrative or story]
· emotional [appealing very directly to the listener, producing involuntary, and unmediated responses]
· associative [that reminds me of / about].
A film/video sound mixer would have to allow each of these sounds (children’s voices, oboe melody, door bell, church bell, ocean) to be heard, while projecting the psychological condition of sadness, loss and death.

pART the Third

You Mean ‘What does ‘mean’, mean?’ (Some antics with words)

But what do these sounds mean?
A fundamental study in twentieth century art is the focus on how art is or is not (a) ‘language’. The word language is not restricted to ‘words’ in this sense, but largely follows the linguistic principles of the organization of symbols for expression or communication. It is often accepted that there are three hierarchical levels in language: vocabulary, syntax and semantics.

Vocabulary is taken to be the smallest, independent, meaningful unit.
Syntax is the correct sequencing of acceptable vocabulary elements yielding a
Semantic, or meaning.
Language is highly contextual and a function of experience, culture and education. While it may seem that understanding sounds ‘comes naturally’, the listener inexperienced with the orchestral music of Iannis Xenakis may find it difficult or even quite incomprehensible. It may be as Copernicus said, “Chaos is but unperceived order.”
Traditional western (european) based music is dependent upon very strict limits to the acceptability of sounds, and their ‘correct sequencing’, so as to yield meaning. Popular music ‘works’ because it is formulaic, and therefore understandable. Unpopular music may be ‘too formulaic’ (or not formulaic enough; cf. hot/cool above): it is often a balance between the known and the innovative that many people find pleasing.
Music technologies in the 20th century have overturned the limits on (western european) music, by admitting that any sound can be music, and also extending the definition(s) of vocabulary to include smaller units than the ‘note’ and the ‘beat’.
Electroacoustics frequently explores the region “below” the level of the (traditional) vocabulary units of the beat and the note. This interest in the raw elements and materials of sound has led to the creation and evolution of a whole world of sonic arts, ranging from transformed text (text-sound composition), to found sound (musique concrète / industrial / environmental), to computer-based creation and manipulation of sounds that emulate real sounds or that have no basis in physical reality.
What are some of the basic types of manipulation of sounds / sound objects?
Transformation of materials follows the generalized pattern of: source material → process / transformation → resulting sound object.

The most basic forms of manipulation are those of editing (information reduction), copying and repetition (looping), and montage (mixing). Playing a sound backwards makes it almost completely unrecognizable. The sound can be played faster or s-l-o-w-er, and moved up and down in pitch (frequency).
The tone color can be changed, adding or removing bass (low) or treble (high) frequencies (as the eq / tone controls on a stereo system). Echo and reverberation create a sense of space and reflect the physical environment. And all of these can be combined.
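When a sound is represented as a list of samples, the most basic manipulations above reduce to a few lines each. A minimal sketch (function names are mine; real tools add windowing, interpolation and level scaling):

```python
def reverse(samples):            # play the sound backwards
    return samples[::-1]

def loop(samples, times):        # copying and repetition
    return samples * times

def mix(a, b):                   # montage: sample-by-sample sum
    return [x + y for x, y in zip(a, b)]

def echo(samples, delay_samples, gain=0.5):
    """A crude single-tap echo: delay a quieter copy and mix it back in."""
    out = samples + [0.0] * delay_samples
    for i, s in enumerate(samples):
        out[i + delay_samples] += gain * s
    return out
```

And, as the text says, all of these compose: `echo(reverse(loop(snd, 2)), 400)` is a perfectly ordinary chain.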
Up to this point the ‘identity’ of the sound has been more or less retained. Other kinds of transformation include (phase) vocoding, where the spectrum of one sound is superimposed upon another sound, often heard as a singing instrument, or a ‘robot’ effect. Granular synthesis is a form of time-stretching, similar to slow motion in film, where a sound event is slowed down, but without changing (some) of its other characteristics. (The parallel to Marcel Duchamp’s “Nude Descending a Staircase No 2” is not lost.)
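The granular time-stretch idea can be sketched naively: chop the sound into short grains and re-lay them at wider spacing, so the event lasts longer without being transposed. A toy illustration (names and parameters are mine; real implementations window and crossfade the grains to avoid clicks):

```python
def granular_stretch(samples, factor, grain=200, hop=100):
    """Chop the sound into short 'grains' and re-lay them at wider
    spacing: the event lasts `factor` times longer, but since each grain
    is replayed at its original rate, the pitch is not transposed.
    (No windowing or crossfading here, so real audio would click.)"""
    out = [0.0] * (int(len(samples) * factor) + grain)
    pos = 0
    while pos + grain <= len(samples):
        start = int(pos * factor)   # grain start times move apart by `factor`
        for i in range(grain):
            out[start + i] += samples[pos + i]
        pos += hop
    return out
```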
There are many other kinds of transformations and morphing, but these constitute an area of explanation, that of computer music research, beyond the range of this brief introduction. [It was with the development of the digital computer in the 1940s and 50s that this work began, but rapidly accelerated when much more powerful machines became available in the late 80s and 90s.]
And how has it become so complex??
Except for the ability to create sounds that have no basis in the physical world, nothing in the sound world has changed, and nothing has happened to human perception. But, the 20th century has witnessed a greater desire to understand, and therefore be able to use the constituent elements involved in sound, perception and emotional response. The uneducated ear hears, the uneducated eye sees; the educated ear listens and comprehends, the educated eye perceives and understands. And technologies have broadened and accelerated this process.

pART the Fourth

Biased Histories

All histories are biased; but critical to the development of a critical sense of history is knowledge of events, influences, contexts and resulting effects. The following sections deal with the history of music technologies, electronic and computer musics, electroacoustics and sonic arts. However, before providing the historical sequence and context, a few definitions.
Electroacoustics: A very general term meaning the use of electricity for the creation, processing, manipulation, storage, presentation, distribution, perception, analysis, understanding or cognition of sound. It is the superset of the field, including both live and ‘fixed’ (as on tape or CD) pieces. Some people consider that the term has linguistic limits and defines certain ‘styles’ of work.

Computer Music: The use of (digital) computers for the creation of music. This includes, but is not limited to the creation of sounds, sound structures, compositions, and analysis. There is also an active area where computers are involved in analyzing and composing ‘traditional’ (pitch-based) musics.

Electronic Music: One of the three founding schools of ‘electroacoustic music’. The fundamental concept was the use of electronic equipment (oscillators, filters etc) to create or synthesize sounds. [First associated with Germany in the early 1950s]
Musique concrète: Another of the founding schools, the fundamental concept was the use of a microphone to capture sounds which were then manipulated and assembled into compositions. [Invented and named by Pierre Schaeffer in France in 1948]
Acousmatics: A compositional method by which (most often) sounds from acoustic sources are manipulated and assembled into compositions. Sound projection with multiple speakers is considered an important element. Frequently thought of as an extension of musique concrète. [Commonly used since the late 1970s]
Cinéma pour l’oreille [Cinema for the ear]: A specific type of acousmatic composition wherein there are frequently strong narrative elements. Often presented in complete darkness: the term says much. [Dating from the late 1980s]
Radiophonic art: Not a concert format. Closely related to “Hörspiel”, radiophonic works need to take into account the medium of presentation. Since the listeners most likely will not be seated in a room designed only for listening to sounds, the creator needs to be aware that listeners may leave and return during the piece, may tune in late, and may be doing something else at the time. There is also the consideration that some sounds may be blocked by environmental noises, and in fact, many people may be listening on car radios.
CD (or cassette) audio art: Works created to be distributed on, and listened to directly from the CD or cassette. It is clear that the medium is an ‘on-demand’ performance type, and can be played in whole or in part, while sitting down, working, walking, talking, riding a bike or the bus.
Soundscaping: A type of ea that was developed by the Canadian composer R. Murray Schafer in the early 70s. It is the ‘art’ side of the acoustic environment, doing in sound what a photographer might do with a camera: document aspects of life.
Acoustic ecology: Grew out of soundscaping and links together ecological and social concerns about what is happening to the environment, the planet and life. Just as the visual environment can be degraded by poor visual planning, the acoustic environment can be made poorer by noise pollution etc.
Multi-media, or (just) Media Arts: The combination of several media (?). This term has much controversy surrounding it, but is often taken to include some combination of sound, light, image, live action / performance, electronic transmission / broadcasting. Frequently, the integration of these. The Canada Council recognizes ‘audio’ as part of Media Arts, while electroacoustics / computer music, is under the Music and Opera section.
The following brief histories of several of the streams that contribute to the sonic arts reflect the long history of interest and intellectual curiosity which characterizes the field. Experiments and individual efforts date back thousands of years. The opening of the western mind in the Renaissance and the flowering of scientific thought ushered in the articulation of the fundamental principles that, during the industrial revolution (18th / 19th c), laid the groundwork for the flood of technical advances of the 20th century.
Since a great many sources have been brought together for this overview, it will be noted that from time to time, dates do not match up. Machines are, after all, only human you know.
1948, three years after the end of the Second World War, is an important year in the history of ea. In France, Pierre Schaeffer formulated and codified a new field of ‘sound art’, called ‘musique concrète’, which used sounds picked up by a microphone as the basis of composition, and in Canada, Hugh Le Caine displayed the first voltage-controlled synthesizer.
Since 1948, the field of sonic arts has undergone the types of shifts and changes characteristic of many developing arts: a period of pioneers, a period of those who establish the ‘reputability’ of the art form, and then the popularization of the medium. This is as true of electroacoustics as it was of Jazz Studies, Film Studies, Women’s Studies, Multi-media or Studies in Sexuality.
The pioneer period runs from the late 1940s to the early 1960s (in most countries), with the period of ‘establishing’ the art form being the early 60s to the mid-to-late 70s, and the ‘secularization’ starting in the mid-to-late 70s. (The term ‘secularization’, while half jocular, is also somewhat ‘true’, for many of the pioneers and ‘old fuddy-duddies’ who worked to ‘establish’ ea often treat it as a religion with all of the trappings and accouterments of genuflection, bowing, scraping and ‘respect’ that one has come to expect from established religion. The icons of this religion are “studios”, “equipment”, “software” and “knowledge”.)
Brief history of sound technology until 1948. (A more complete timeline is found in the Addenda section below.)
==> 1948 Pre-echoes
As cultures stabilized, theories evolved regarding the nature and organization of the sounds that people used. Dating back 4,000 years in China, 3,500 in Babylon, 2,500 in Greece, writings and instruments reflect that throughout history and across cultures, sound and sound-making has fascinated humans.
When numbers assume form, they realize themselves in musical sound. (Shih-chi ­ China 1st c BCE)
In Europe, theoreticians invented notation, contemplated the nature of existing and possible sound worlds, contemplated universal orders, and came to understand mathematical systems. Scientific and engineering advances aided in the development of new instruments: inventors and musicians explored the application of newer discoveries, including electricity. Methods of recording, storing and distributing sonic experience engaged the scientific and musical communities by the start of the twentieth century.
New artistic visions collide with earlier truths: established orders almost topple as new abstractions force themselves. At the same time, science explodes into greater and greater precision and the digital computer is developed. Everyman and Thensum now have access to ways of capturing crystallized time. Fragmentation faithfully follows.
The highlights of the history of sound recording technologies include:
1880s Cylinder recorders
1900s Disc (78 rpm)
1920s First wire recorders
1930s First experiments with stereo recording
1930s First ‘tape’ recorders (Germany)
1948 First 33 rpm LP
First (mono) open reel tape recorders appear in USA
1950s 45 rpm 7″ record appears
Multi-channel tape recorders (up to 5 channels)
1950s First stereo LPs
1960s First eight-channel recorders appear
1964 Cassette is licensed by Philips
1960s 16 and 24 channel tape recorders appear
Open reel video recorders (b/w)
1970s First digital recorders appear
Home video formats (VHS/Beta)
1980s Multi-channel digital recording
Home digital recording (PCM)
Computer based sound (Apple)
1990s Computer-based digital recording

EA History – Musique concrète – The Beginnings
Pioneers, Inventors and the Tape Period
Musique concrète did not have the kind of slow evolution that one associates with music; it came about through an accident of sorts, and its early links were with radio.
1942 Pierre Schaeffer, working as an engineer for French radio (RTF), establishes the first ‘sound research’ facility, the Studio d’essai (while under German occupation).
1948 Schaeffer starts the first formalized, systematic studies of what was to become musique concrète. On May 3, he takes an RTF sound truck to a train station to record the railway sounds that become the Étude aux chemins de fer. (A long-standing anecdote holds that Schaeffer discovered looping by accident, from a ‘locked’ groove on a sound-effects record.)
1948 On May 15, Schaeffer names this ‘musique concrète’, to indicate that this use of ‘sound objects’ makes a break from the formalism and dependency of preconceived sound (or musical) abstractions. His principal tools for these experiments are turntables, a few microphones, a mixer and some potentiometers. His experiments demonstrate that concrete material can be manipulated at will.
He records locomotives, writes a score, and transforms and sequences the sounds. Train whistles are transposed through a change in turntable speed, thus allowing for the use of melodies. However, notation remains a poor tool when compared to the act of listening to the materials.
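The arithmetic behind turntable transposition is simple: pitch rises by one octave (twelve semitones) for each doubling of playback speed. A minimal sketch (the function names are my own, not Schaeffer's terminology):

```python
import math

def transposition_semitones(speed_ratio):
    """Pitch shift, in equal-tempered semitones, produced by playing
    a recording back at `speed_ratio` times its original speed."""
    return 12 * math.log2(speed_ratio)

def speed_for_semitones(semitones):
    """Playback-speed ratio needed to transpose by `semitones`."""
    return 2 ** (semitones / 12)
```

Playing a 78 rpm disc at 33⅓ rpm, for example, drops the pitch by roughly 14.7 semitones.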
Through the looping of recorded speech, words lose their recognizable meanings, new associations are made possible. Through numerous chance experiments, using pre-recorded materials (songs, advertisements, symphonic concerts, etc), Schaeffer combines noises with musical fragments and discovers that these meetings rarely result in musical statements. The difficulty lies in selecting materials that are not singularly anecdotal, that can be isolated and easily placed out of their familiar context to yield new meanings. Schaeffer decides to begin a morphology of sounds (study of the form and structure of sounds).
1950 First musique concrète concert, March 18, 1950, at the École Normale de Musique (Paris). Two speakers and no musicians/performers! First large musique concrète work, Symphonie pour un homme seul, by Schaeffer and Pierre Henry. Pierre Boulez, Karlheinz Stockhausen, Luciano Berio, Bruno Maderna and Edgard Varèse are among the first of a growing list of composers to visit the studios and experiment with this new art.
1951 Founding of the Groupe de Recherches de Musique Concrète at the RTF, Paris.
1951 In the USA, John Cage establishes the Project of Music for Magnetic Tape. Other composers active were Earle Brown, David Tudor, Morton Feldman, and Louis and Bebe Barron. The Barrons had in fact been working out of their own private studio since at least 1948. Cage composes Williams Mix based on chance operations derived from the I-Ching.
1952 Vocalise by Pierre Henry, first concrète work derived solely from the voice.
1952 First tape music concert in the United States, at Columbia University, with music by Otto Luening and Vladimir Ussachevsky. Their music employed almost exclusively traditional instrumental sounds and the human voice, transformed using the newly available magnetic tape recorder and techniques of speed variation, overdubbing, and electronic echo and reverberation. Works: Sonic Contours, Incantation, etc.
The 50s and early 60s saw the opening of a great number of studio facilities around the world. By the end of the 1950s, electroacoustic studios had been established in almost all European countries, including France, Germany, Austria, Italy, Sweden, Switzerland, England, the Netherlands, Belgium, Spain and Poland. These studios were centered around broadcasting (radio) facilities.
By the early 1960s, most major universities in North America had established experimental electroacoustic studios and courses, in engineering, computer science, or music departments.
1955 Dripsody by Hugh Le Caine produced at the Elmus Lab, National Research Council of Canada using his Variable Speed Recorder.
Japan: Electronic studio of the Nippon Hoso Kyokai (NHK) founded in Tokyo. (Takemitsu, Mayuzumi, Moroï, Ichiyanagi, Ishii, etc.)
1956 Combination of musique concrète and electronic music sound sources and techniques in Karlheinz Stockhausen’s Gesang der Jünglinge.
1957 Tape works produced at Bell Telephone Laboratories in New Jersey under the direction of Max Mathews.
1958 Thema (Omaggio a Joyce) by Luciano Berio produced at the Studio di Fonologia Musicale.
Poème Électronique by Edgard Varèse (produced at the Eindhoven studios) and Concret P-H by Iannis Xenakis (produced at the RTF from a single source: the sound of burning charcoal) created for the Philips Pavilion at the World Fair in Brussels, Belgium. They were played over a 425-speaker sound projection system.
Fontana Mix by John Cage produced at Studio di Fonologia Musicale. Tape collage consisting of environmental sounds, singing, speaking etc. and transformed through splicing and tape transposition and reversal. Overall structure controlled by chance operations.
Electroacoustic Studio founded at the University of Toronto, directed by Arnold Walter and Myron Schaeffer.
1960 Vocalism Ai by Toru Takemitsu using only the word Ai (love) as source material.
1961 Visage by Luciano Berio produced at RAI studio. Vocal source submitted to extensive filtering, editing in combination with electronic sounds which were subjected to amplitude, frequency, and ring modulation. An electronic ‘radio drama’.
1963 Electroacoustic Studio founded at McGill University by István Anhalt under the guidance of Hugh Le Caine.
1965 It’s Gonna Rain by Steve Reich. Phasing techniques using the de-synchronization of multiple tape loops. One of the first ‘minimalist’ compositions.
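The mechanics of phasing reduce to one number: two loops of slightly different lengths drift a little further apart on every pass, and eventually come back into unison. A sketch of that arithmetic (the loop lengths in the example are illustrative, not Reich's actual tape lengths):

```python
def realign_time(loop_a, loop_b):
    """Seconds until two continuously repeating tape loops of lengths
    `loop_a` and `loop_b` (both started together) return to unison.
    They drift by |loop_a - loop_b| seconds per pass of the longer
    loop, so realignment takes loop_a * loop_b / |loop_a - loop_b|."""
    return (loop_a * loop_b) / abs(loop_a - loop_b)
```

Two loops of 1.00 s and 1.01 s, for example, take 101 s to come back into phase; the closer the lengths, the slower the drift.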
1966 Pierre Schaeffer’s research appears in his Traité des objets musicaux.
1970 Presque Rien No. 1 by Luc Ferrari. Environmental piece that utilizes voices, children playing, birds, motors, footsteps, waves, bells, etc. Absence of any electronic or tape modifications.
Founding of Le Groupe de Musique Expérimentale de Bourges by Françoise Barrière and Christian Clozier. Many composers from all over the world have worked at the studios in Bourges, and have been performed and honored during the Festival International de Musique Électroacoustique de Bourges, over the past three decades.

EA History – Electronic Music 1948–1970
1948 Homer Dudley of Bell Telephone Laboratories introduces the vocoder to Werner Meyer-Eppler, a physicist and director of the Institute of Phonetics at Bonn University, Germany. The vocoder was an electronic device capable of both analyzing sound and simulating speech.
1950 Werner Meyer-Eppler gives a lecture entitled Developmental Possibilities of Sound at the Darmstadt summer course for new music. Robert Beyer also lectures on Elektronische Musik.
1951 Meyer-Eppler succeeds in synthesizing sounds electronically.
1952 First electronic studio established at the WDR Cologne (West German Radio) by Meyer-Eppler and Robert Beyer.
Fundamental to electronic music is the realization of the timbral significance of the overtone series, both as a means of composing and as a way of fabricating new sounds. It is discovered that, by mixing sine tones (pure frequencies) and other electronic signals together, new timbres can be generated.
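The principle, additive synthesis, can be sketched in a few lines; the harmonic weights below are illustrative, not drawn from any WDR piece:

```python
import math

def additive_tone(freq, harmonic_amps, duration=1.0, sample_rate=44100):
    """Build a timbre by summing sine waves at integer multiples of
    `freq`, each weighted by the corresponding entry of `harmonic_amps`."""
    n = int(duration * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        samples.append(sum(amp * math.sin(2 * math.pi * (k + 1) * freq * t)
                           for k, amp in enumerate(harmonic_amps)))
    return samples

# A hollow, clarinet-like spectrum: strong odd harmonics only.
tone = additive_tone(220.0, [1.0, 0.0, 0.5, 0.0, 0.3], duration=0.1)
```

Changing the weight list changes the timbre while the fundamental pitch stays the same, which is precisely what the Cologne studio exploited.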
1953 First electronic compositions by Robert Beyer and the composer Herbert Eimert. First electronic concert at the Westdeutsche Rundfunk in Cologne.
These first electronic compositions, under the influence of acoustic, phonetic and information-theory research, use vocal timbres as a model for synthetic timbral construction and manipulation.
After working at the RTF studios in Paris where he was occupied with the acoustical analysis of sounds, Karlheinz Stockhausen is invited to work in the WDR studios.
1954 Stockhausen’s Studie II employs an electronic realization of the harmonic series. Families of related timbres are created through additive synthesis.
1955 Harry Olson and Herbert Belar produce in the USA the first modular (computer) synthesizer, the RCA Mark I.
1956 Stockhausen produces Gesang der Jünglinge, first extensive work combining concrète and electronically generated sounds.
1956 Otto Luening’s Theatre Piece No. 2 for electronic sounds, soprano, narrator and instruments, one of the first pieces for tape and live performers.
1958 Edgard Varèse uses electronically generated sounds in combination with concrète and instrumental sources in his Poème électronique.
1959 Harry Olson and Herbert Belar introduce their improved RCA Mark II Synthesizer with a typewriter-like keyboard to operate it. The Columbia-Princeton Electronic Music Center (NYC) is established to house the Mark II and make it available to a wide variety of composers.
1961 Harald Bode, the German engineer who had built equipment for the Cologne studio, writes an article in which he describes a new concept in equipment design: modular systems (which become the basis of synthesizers in the 60s and 70s).
Milton Babbitt’s Composition for Synthesizer attempts to produce instrumental-like sounds existing in complex pitch and rhythmic contexts not available from conventional musical instruments.
1963 Milton Babbitt produces Ensembles for Synthesizer. The possibility of precise control of all musical parameters with electronic instruments lends itself to highly organized and structured compositions in which complex rhythmic textures are realized.
1964 Also in the USA, the engineer Robert Moog builds a voltage-controlled oscillator (VCO) and a voltage-controlled amplifier (VCA), followed the next year (1965) by a voltage-controlled filter (VCF). It is a number of years before composers appreciate and take advantage of these new modular sound resources.
Donald Buchla works in a similar direction to Moog, eventually creating the Buchla Electronic Music System, which was employed in Morton Subotnick’s early works.
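Voltage control simply maps a control voltage onto a sound parameter; for pitch, Moog's modules settled on an exponential one-volt-per-octave convention. A sketch of that mapping (the base frequency, middle C, is an arbitrary choice of mine):

```python
def vco_frequency(cv_volts, base_freq=261.63):
    """Frequency of an idealized VCO following the one-volt-per-octave
    convention: each additional volt doubles the output frequency."""
    return base_freq * 2 ** cv_volts
```

So vco_frequency(1.0) is an octave above the base, vco_frequency(-1.0) an octave below, and a twelfth of a volt moves the pitch one semitone.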
1966 Stockhausen’s Telemusik created at the NHK radio studio in Tokyo. Intermodulation of a wide variety of folk musics and electronically generated materials and modifications.
1967 Stockhausen’s Hymnen, an epic work realized at the WDR studio using a wealth of national anthems and many electronic sources as material.
1968 Wendy (Walter) Carlos’ Switched-On Bach.
By the end of the 1960s, most institutional studios possessed one of the growing number of voltage-controlled modular synthesizers. As well, the low cost of these instruments led to their increasing use by composers and performers in pop music and film.
1990s Analog returns with a vengeance!

EA History – Computer Music
I. Background
1624 First calculating machine developed by Wilhelm Schickard.
1642 Early gear train calculator developed by French mathematician Blaise Pascal.
1666 Gottfried Wilhelm Leibniz (1646-1716) builds a mechanical calculator showing that, given symbolic representation of thoughts and strict reasoning rules, it should be possible to build mechanical reasoning machines.
1830s Charles Babbage conceives of the Analytical Engine and spends many years trying to build it. It is the precursor of the modern computer.
1854 The mathematician George Boole proposes a binary system of logical operations. The precursor of the modern digital computer. (cf 1937)
1923 The first use of the term robot in the Czech Karel Capek’s play Rossum’s Universal Robots.
1926 A female robot appears in Fritz Lang’s Metropolis.
1936 Konrad Zuse applies for a patent on an electromechanical automatic calculator which included a memory, a central arithmetic processing unit, and the ability to read a sequence of operations from paper tape.
1937 Claude Shannon demonstrates that Boolean logic can be represented by electrical switches.
1944 The first large-scale automatic calculating machine, IBM’s (electromechanical) Mark I, could multiply two 23-digit numbers in approximately four and a half seconds.
1946 The ENIAC was built, occupying 3,000 cubic feet (about 85 cubic meters) of space. (Today’s pocket calculators dwarf the capabilities of the original ENIAC.)
1948 The transistor is invented at Bell Labs
1948 Claude Shannon publishes a book explaining Information Theory
1950 Univac delivers the first commercial digital computer. The mathematician Alan Turing creates a theoretical foundation for the feasibility of designing a truly intelligent machine.
1955 Lejaren Hiller and Leonard Isaacson begin experiments in composition with the ILLIAC high-speed digital computer at the University of Illinois.
1956 Dartmouth (NH) Summer Research Project on Artificial Intelligence with John McCarthy, Marvin Minsky, Herbert Simon and Allen Newell (“the science of making machines do things that would require intelligence if done by men”).
1956 Hiller and Isaacson use the Illiac computer to create the first work employing the computer to control compositional choices: the Illiac Suite for String Quartet (1957). The work was played by live performers.
1957 First computer-generated sounds produced at the Bell Telephone Laboratories in Murray Hill, New Jersey under the direction of Max Mathews of the Behavioral Research Laboratory.
1959 Work at the Bell Laboratories by Max Mathews and James Tenney begins and leads to the first “MUSIC” series of computer music programs.
1960 John Kelly and Carol Lochbaum create Bicycle Built for Two at Bell Labs. (The version in 2001: A Space Odyssey is sung by a person.)
1968 A light pen is developed at Bell Labs. Elements such as pitch and amplitude can be drawn on a screen.
1969 Max Mathews and Frederick Moore create their “GROOVE” program which uses the computer to control analog synthesizers.
Charles Dodge produces his Changes in which the computer simulates acoustic musical instruments.
1973 John Chowning of Stanford University publishes an article entitled The Synthesis of Complex Audio Spectra by Means of Frequency Modulation, which becomes the basis for the Yamaha DX series of synthesizers.
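Chowning's technique modulates the phase of a carrier sine wave with a second ("modulator") sine; the modulation index controls how many sidebands, and hence how rich a spectrum, results. A minimal sketch of simple FM (the parameter values in the example are illustrative):

```python
import math

def fm_tone(carrier_hz, modulator_hz, index, duration=0.5, sample_rate=44100):
    """Simple FM: y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)).
    `index` (I) sets the modulation depth and so the spectral richness."""
    n = int(duration * sample_rate)
    return [math.sin(2 * math.pi * carrier_hz * (i / sample_rate)
                     + index * math.sin(2 * math.pi * modulator_hz * (i / sample_rate)))
            for i in range(n)]

# A 1:1 carrier-to-modulator ratio gives a harmonic spectrum.
tone = fm_tone(440.0, 440.0, index=2.0, duration=0.1)
```

The appeal to synthesizer designers was economy: two oscillators and one index parameter yield timbres that additive synthesis would need dozens of sine generators to match.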
1974 The first International Computer Music Conference is held.
1977 IRCAM (Institut de Recherche et Coordination Acoustique/Musique) is established under the directorship of Pierre Boulez at the Centre Georges-Pompidou in Paris.
1979 First powerful computer music workstation, the Fairlight CMI, is marketed.
1982 MIDI Specification 1.0 accepted by major music/sound manufacturers.
1983 Yamaha markets the DX7, a polyphonic synthesizer with programmable FM timbres: a dedicated music computer.
1984 First low-cost commercial sampler, the Ensoniq Mirage, is marketed.
1990s www
The 80s witnessed the final convergence of many streams of thought and activity, with digital computer technologies being used in every aspect of creativity, research and production. Software that was at one time found only in experimental music studios is used by “Golden Oldies” radio stations to assemble programs. The carefully crafted sound-transformation algorithms of the east-coast academic are plug-ins for the computers of rave and dance DJs. The web has erased the walls.
And the future seems to be based around the personal computer (or some variation). Individuals are gaining access to the same basic software as high end users. The homemade CD is a reality; distribution via the web is an inevitability.
There will continue to be dedicated computers serving as synthesizers, signal-processing modules, sequencers, etc, and there will be more general access to equipment (and software) for the inclusion of sound in all forms of art and communication. Dedicated DAWs (digital audio workstations) will give way to general interactive systems, frequently with the software being available in the latest version(s) directly from the www. Surround-sound, virtual audio worlds, multi-media, art on demand, education on demand: these are all here at the moment, but in undeveloped forms.
On the technical and political (policy) fronts, the issues are ones of control and access. Currently (summer 1999), long-distance and local telephone services are a monopoly, as is cable distribution of video signals. With any luck, the day is not far off when it will be possible to plug the phone into the side of the computer, which itself is on the Videotron network, and make long-distance calls as cheaply as local calls using Bell.
All of this raises other issues of a significantly more complex nature, among them being copyright, ownership, access to information, censorship etc. While they are important matters to be dealt with, they will be sorted out (or not) at the international level.
A glimpse at what the future might be?
That’s a question for futurologists to ponder. A review of 1930s futurologists’ views of the year 2000 will provide some insight. Predictions had been for control of the weather and the environment, life made easier by robots that would clean and cook, food taken in the form of pills, everyone zooming about in their personal plane / helicopter, and the eradication of disease, hunger and poverty. Well, we’re not quite there yet!
But, as I see it, central to the future will be the process of life-long education. The web has given indications of what might be possible, and the rapid changes indicate that the important aspects of software are not this or that particular plug-in, but an understanding of how things function.
The promise of freeing sound from its constraints has largely begun to be realized: it is now clear that the issue is freeing the creative artist and listener from the constraints of their own perceptual and cultural limits.

pART the Fifth

Sources / Addenda / Varia

Brief history of sound technology until 1948
==> 1948
Pre-history While the voice is the original instrument, tools/instruments were developed to make sounds (for ritual, spiritual, ceremonial, entertainment purposes?). There is little indication that sound was used without a functional / movement / ceremonial / theatrical / voice component.
Ancient times Mechanical sound-making instruments are created where the energy source is no longer directly that of the lungs or hands: bagpipes, where the energy is stored in a bag; water organs where flowing water pumps the air (cf hydraulos, 3rd c BCE).
Babylonian music theory (16th c BCE) indicates how to create scales and intervals.
6th century BCE Greece Pythagoras is credited with being the first to examine the nature of ‘consonance’ (meaning the union of sounds). An extension of this is found in the concepts of stream segregation and capturing in Auditory Scene Analysis [ASA] in the 1990s.
5th c BCE Greece The monochord (kanón) is developed: a one-string instrument used to explore the relationships of intervals.
4th c BCE China Music theory writings emphasize philosophical, cosmological and educational values of music.
3rd c BCE Greece Ctesibius of Alexandria, an engineer, invents the hydraulis (water organ), one of the first applications of a regulated system of mechanical energy for the production of sound.
When numbers assume form, they realize themselves in musical sound. (Shih-chi; China 1st c BCE)
Greek and Roman architects explored acoustical properties for theaters (e.g. amphitheatres), but answers are all ‘qualitative’, not ‘quantitative’.
2nd c Greece Claudius Ptolemy writes Harmonika, a treatise on harmonics, acoustics, interval theory, tetrachords, modes, the monochord, and the relationships between notes, parts of the body, and heavenly bodies.
c 1000 Theoretician and teacher Guido of Arezzo provides a system for naming notes. (Do – ré – mi. Julie Andrews remains grateful to this day.)
Medieval / Renaissance Instruments like the organ, virginal, spinet, harpsichord and hurdy-gurdy use levers to play or activate sounds at a distance.
1619 Johannes Kepler’s Harmony of the World. [Chaos is but unperceived order.]
1627 Francis Bacon’s New Atlantis
1630s Marin Mersenne, a French mathematician, philosopher, music theorist and priest, laid the foundation for the modern mathematical understanding of vibrating bodies, acoustics, and numerous aspects of music theory. He brought with him a new awareness of the psychological factors related to musical comprehension.
1701 Joseph Sauveur formulates a theory of the overtone series. He also defines the limits of human aural pitch perception.
1750 The French Encyclopédistes produce a 28-volume encyclopedia on the sciences, the liberal arts, and the mechanical arts.
1759 Jean-Baptiste de la Borde develops a clavecin électrique in which bells are struck by clappers holding a static electrical charge. A curiosity that made sparks fly.
1760s A mechanical curiosity, a ‘talking machine’ was invented in France. It demonstrated a knowledge of the role of voiced / unvoiced sounds, and vowel formants.
In October, 1846, Chopin wrote to his parents that he had seen “a very ingenious automaton which sings a [song] by Haydn, and God Save the Queen. If opera directors could have many such androids, they could do without chorus singers who cost a lot and give a lot of trouble.”
1800 Volta invents the wet-cell battery, thus providing a more stable way of storing electrical energy.
1837 Galvanic music by Dr. C G Page (Massachusetts) during experiments with battery, coil and magnets. (electro-magnetic induction)
1863 Hermann Helmholtz publishes On the Sensations of Tone, a pioneering work in the field of acoustics. It contains the first systematic explanations of timbre.
1874 Elisha Gray’s Singing Telegraph
1876 Alexander Bell succeeds in transmitting the voice by means of electricity.
1877 Emile Berliner independently perfects a telephone transmitter; a decade later he patents the disc record (gramophone).
1878 Thomas Edison invents the phonograph. Sound is stored as an ‘analog’ to the soundwave: the movements of the stylus are a miniature version of the vibrations in air.
1885 Ernst Lorenz invents the Elektrisches Musikinstrument which used electrical vibrations to drive an electromagnet that was connected to resonating boards, thus translating electrical oscillations into sounds.
1897 Thaddeus Cahill constructs his Sounding Staves which could regulate the number of upper partials/harmonic content in a timbre. Sounds therefore did not necessarily resemble those of familiar instruments.
1898 Danish scientist Valdemar Poulsen invents his Telegraphone, the first magnetic recording machine, sometimes referred to as the wire recorder. Sound could now be stored in a medium that does not hold a mechanical, analogous version of the sound wave.
1900-15 Wallace Sabine (Harvard University) becomes the father of modern architectural acoustics when he is able to quantify (and therefore reproduce and predict) the behavior of sound, notably regarding reverberation.
1901 First trans-Atlantic radio transmission. (Marconi)
1906 Thaddeus Cahill creates his Dynamophone or Telharmonium (weighing 200,000 kg), capable of generating sounds by means of a series of electrical generators. Music is distributed through telephone lines around New York City. Concerts in the home! A form of “narrowcasting” to subscribers.
1907 Lee de Forest invents the triode vacuum tube (Audion), which provides an electronic way to amplify a signal.
1909 The Italian Futurist movement presents its Foundation and Manifesto of Futurism. Written by Filippo Tommaso Marinetti, it glorifies machines, speed, strength, etc.
1911 The Technical Manifesto of Futurist Music, by Francesco Balilla Pratella, advocates microtones and experimentation with “found objects” and “everyday” sounds.
Sketch of a New Esthetic of Music by Ferruccio Busoni. A call for new experiments in music. Greatly influences Edgard Varèse who envisions music by machines that frees composers from the limitations of traditional instruments.
1913 The Art of Noises (March 11), a manifesto by Luigi Russolo addressed to Francesco Balilla Pratella, advocates using the more interesting and unlimited resources of “noise”. Russolo invents a family of Intonarumori, mechanical instruments that produce hisses, grunts, pops, etc.
The Futurist movement foreshadowed many experimental approaches to sound and music such as musique concrète, the amplification of inaudible sounds, amplification of “vibrations from living beings”, use of noise and environmental sounds in theater, operatic works. Also many experimental approaches to textual delivery, sound poetry in performance and in recordings would originate with this movement.
1914 In Milan, on April 21, the first concert of the Intonarumori was presented under the title of Art of Noises. A riot ensued.
Darius Milhaud, Paul Hindemith, Ernst Toch begin to use variable speed phonographs to alter the characteristics of pre-existing sounds.
1915 Lee De Forest invents the oscillator, a device that produces electronically generated tones, and contemplates the invention of electronic instruments.
1916 Dada movement born at the Cabaret Voltaire in Zürich. Tristan Tzara, Hans Arp, etc. Movement would include Kandinsky, Hugo Ball, Paul Klee, Kurt Schwitters in Europe, and Marcel Duchamp, Man Ray, Max Ernst in USA. Far-reaching influence on poetry, sound-text composition, music of John Cage.
1918 Aerial Theatre by Fedele Azari. Opera using the sonorous possibilities of airplane engines.
In France, Coupleux and Givelet create the Radio-Organ, a 61-note, 10-timbre polyphonic keyboard instrument using over 700 vacuum tubes.
1919 Bauhaus founded by Walter Gropius. Work in sound and textual transformations for the theater.
Leon Theremin (Moscow) invents the Theremin, an extension of the oscillator, which is played by varying the distance of the performer’s hand from an antenna on the instrument.
1926 Russolo invents his Psofarmoni, keyboard instruments that imitate animal and nature sounds.
1927 Oskar Schlemmer uses phonograph recordings in theater works emerging at the Bauhaus.
1928 First work in sound for film in Germany by Walter Ruttmann. This work carried on by members of the Bauhaus (Arma, Oskar Fischinger, Moholy-Nagy, Trautwein).
Maurice Martenot introduces various methods for controlling timbre (by additive synthesis) in France with his Ondes Martenot.
Friedrich Trautwein establishes a studio for musical experiments in Berlin with Paul Hindemith. Hindemith experimented with varying turntable speeds.
1929 Laurens Hammond introduces the Hammond Organ.
Givelet and Coupleux devise a machine in France that consists of four oscillators controlled by punched paper rolls, thereby incorporating De Forest’s oscillator with the principles of the player piano. (The degree of automation foreshadows later computer-controlled aspects of sound production and composition.)
1935 Magnetic tape recorder (based on the principles of the earlier wire recorder) is perfected in Germany.
Yevgeny Sholpo, at the Leningrad Conservatory and the Moscow experimental studio, builds his Variophones, instruments using preprinted optical tracks to make sound.
1939 Imaginary Landscape No. 1 by John Cage. Radio piece whose sound sources are two RCA Victor test records played on variable-speed phonographs, along with a cymbal and the interior of a piano.
The Canadian Norman McLaren works with “drawn sound” in experimental film.
1940 James and John Whitney develop optical soundtrack for film.
1942-3 Imaginary Landscape Nos. 2 & 3 by John Cage. A coil of amplified wire is used with various noise-makers and variable-speed phonographs. (Harkens back to the Futurists.)
First digital computers.
1945 Grainger and Cross build an 8-oscillator synthesizer with synchronization capabilities.
The Allies get tape recorders from the defeated German military machine.
1945-8 Hugh Le Caine builds the Electronic Sackbut, the first voltage-controlled synthesizer.
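Voltage control means that a continuously varying control voltage, rather than a mechanical linkage, sets an oscillator’s pitch or amplitude, so one circuit can modulate another. The exponential volt-per-octave mapping sketched below became a convention on later synthesizers (e.g. Moog); it is used here only to illustrate the idea, not as the Sackbut’s actual response curve:

```python
def cv_to_freq(volts, base_freq=261.63):
    """Exponential 1 V/octave control-voltage-to-pitch mapping:
    each added volt doubles the oscillator frequency.
    base_freq defaults to middle C (illustrative choice)."""
    return base_freq * (2.0 ** volts)

# 0 V -> middle C; +1 V -> an octave up; +2 V -> two octaves up.
for v in (0.0, 1.0, 2.0):
    print(f"{v:.1f} V -> {cv_to_freq(v):.2f} Hz")
```

Because the mapping is exponential, equal voltage steps yield equal musical intervals, which is what makes a keyboard, an envelope, or another oscillator interchangeable as a pitch source.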

Sources include:

The New Grove Dictionary of Music
Joel Chadabe: Electric Sound
Michel Chion, Guy Reibel: Les musiques électroacoustiques
David H. Cope: New Directions in Music.
David Ernst: The Evolution of Electronic Music.
Jean Laurendeau: Maurice Martenot, luthier de l’électronique
Curtis Roads: The Computer Music Tutorial
Barry Schrader: Introduction to Electro-Acoustic Music
K Marie Stolba: The Development of Western Music
BMI Electronic Music Special Issue. 1970.

Liner notes from recordings etc
The three / four periods: pioneers / academic development / acceptability
Canadian ea/cm

Kwik-take definitions:

ASA: Aspirin (also Auditory Scene Analysis)
DWEM: Dead White European Male
Tree: that which, if it falls in a forest, may or may not make a sound
Important Canadians

Hugh Le Caine
Otto Joachim
Istvan Anhalt
Francis Dhomont
Jean-François Denis
David Keane
Barry Truax
Important names in the history of ea/cm: (and what makes someone important)

Pierre Schaeffer
Karlheinz Stockhausen
Canadian associations / organizations / schools

Simon Fraser University, Banff, Université de Montréal, McGill University, Concordia University
List sites:

Text References:
  • Joel CHADABE: Electric Sound, The Past and Promise of Electronic Music (Prentice Hall, 1997). 370 pages. A very thorough, up-to-date historical overview of the field of (art-academic) electroacoustic / computer music. Slightly ‘USA-centric’.
  • The New Grove Dictionary of Music (Washington, DC: Grove, 1980)
  • Michel Chion, Guy Reibel: Les musiques électroacoustiques (Paris: INA-GRM, 1976)
  • David H. Cope: New Directions in Music (Dubuque, Iowa: WCBrown, 1971)
  • David Ernst: The Evolution of Electronic Music (New York: Schirmer, 1977)
  • Jean Laurendeau: Maurice Martenot, luthier de l’électronique (Montreal: L. Courteau, 1990)
  • Curtis Roads: The Computer Music Tutorial (MIT Press, 1996)
  • Barry Schrader: Introduction to Electro-Acoustic Music (Prentice Hall, 1982).
  • K Marie Stolba: The Development of Western Music (McGraw Hill, 1997)
suggestions for further exploration:
books / articles
  • Appleton, Jon, ed. The Development and Practice of Electronic Music (Prentice Hall, 1975)
  • Bayle, François. Musique Acousmatique. Propositions…Positions. (Paris: Buchet, 1993)
  • Bregman, Albert. Auditory Scene Analysis. (MIT Press, 1990).
  • Chion, Michel. Guide des objets sonores (Paris: Buchet, 1983)
  • Cogan, Robert & Pozzi Escot. Sonic Design. (Prentice Hall, 1976)
  • Emmerson, Simon. The Language of Electroacoustic Music. (New York: Harwood, 1986)
  • Mountain, Rosemary. “Possible Pathways”, MikroPolyphonie. 2 (1997)
  • Schaeffer, Pierre. Traité des objets musicaux. (Paris: Seuil, 1966)
  • Schafer, R. Murray. The Tuning of the World. (Knopf, 1977)
  • Tenney, James. META Meta + Hodos. (Frog Peak)
  • Truax, Barry. Acoustic Communication. (Ablex, 1984)
  • Wishart, Trevor. On Sonic Art (Amsterdam: Harwood, 1996)
  • Young, Gayle. The Sackbut Blues (Ottawa: Nat Museum of Science & Technology, 1989)
a few journals…
  • Computer Music Journal
  • Journal of New Music Research
  • Musicworks
  • Organised Sound
  • DisContact I / II
  • Presence
  • DiM
A number of sections of this reading have been adapted from articles written between 1984 and the present.
