First of all, it is worth remembering that ‘technology’ has always played an important part in music. In many ways, apart from the voice, music has always depended on technology: a drum, a flute, a trumpet, a violin are all products of past technological development. At the same time, as is human nature, there has always been, in certain circles, a profound mistrust of these developments. This is perhaps best illustrated by Brahms, who for the majority of his life, and in the majority (although interestingly not all) of his pieces, wrote for the natural trumpet, even though the valved instrument had been available for some time. Other composers, by contrast, have quickly and imaginatively made use of whatever technological developments were available at the time - for instance, again with the brass, Haydn’s Trumpet Concerto and Wagner’s use of Adolphe Sax’s instruments, and, beyond the brass, Mozart’s use of the clarinet. All these preferences describe the musician’s attitude towards technology, fashion, tradition and new developments, rather than anything we would now think of as an ‘objective’ attitude. Nowadays few people would think of any acoustic instrument, let alone the valved trumpet, as ‘technological’ at all.
A strange and complex mixture of ‘science’ and ‘art’ and ‘mystique’
Which is dominant usually depends on the fashion of the time
This is probably due to its fundamentally abstract nature, which means that people can ascribe many different attributes to music, again according to the fashion of the time. For example:
Ancient Greece - ‘popular/folk’ musics are probably much as they are today -
‘classical’ musics are possibly more to do with Pythagorean theories and ideas such as ‘the music of the spheres’ - a typically abstract view which attributes certain ideas to music, such as an equivalence between mathematics and music. See also the common description of Bach as ‘architectural’ - again, taken literally, problematic. In the same way, completely different ideas were prevalent during the Renaissance and Baroque eras (religious/intellectual), the Classical era (intellectual/entertainment) and the Romantic era (entertainment/humanist).
In the twentieth century this interest in technological and social change continued, with an explosion of 19th-century ideas resulting in complex forms of tonality and a reversion to an interest in abstraction in many forms (the complex tonality of 20th-century ‘art’ music being, perhaps, somewhat similar to the reversion to pre-perspective ideas in the graphic arts).
With the emergence of audio technology, and bearing in mind what has been said above, there was, as always, a divergence of opinion. Until the Second World War, with all its political, social and technological developments, those who dealt in ‘music technology’ were, at best, mavericks - Edgard Varèse and, a little later, Pierre Schaeffer - a long way from both the ‘traditional’ and the ‘avant-garde’ worlds of music.
To a lesser extent, this continued after the war, although, possibly because of the evident impact of technology during the conflict, there was perhaps more respect for the results. In addition, more musicians of greater standing were investigating the possibilities of electronics; composers such as Stockhausen, Berio and Xenakis are amongst the most prominent. These composers did not simply consider the acoustic possibilities of the new technologies - at that time principally magnetic tape, radio, and analogue recording and sound synthesis - they also imagined ideas that were often impractical, unproven or irrelevant to most people in performance:
Stockhausen’s Kontakte, based on a single set of pulses, was made using equipment found in radio studios rather than anything specifically musical.
Xenakis’s La Légende d’Eer was conceived with the idea of broadcast around Paris.
At Bell Laboratories, MIT and other institutions across America, the digital representation of sound began, and it developed earlier than most other digital media.
Again, in another parallel with science, many of these ideas have since found commercial applications: magnetic tape, the synthesiser, quadraphonic, ambisonic and surround sound. A whole series of Yamaha synthesisers, from the DX7 to the SY99, were based on FM synthesis, a digital technique developed at Stanford.
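The FM technique in question can be sketched very simply: a carrier sine wave whose phase is modulated by a second sine wave, with the modulation index controlling the richness of the spectrum. The function below is a minimal two-operator illustration, not a model of any actual Yamaha instrument; the frequencies and index are arbitrary example values.

```python
import math

def fm_sample(t, fc=440.0, fm=220.0, index=2.0):
    """One sample of a simple two-operator FM voice: a carrier at
    fc whose phase is modulated by a sine at fm. A larger modulation
    index produces a richer spectrum of sidebands."""
    return math.sin(2 * math.pi * fc * t + index * math.sin(2 * math.pi * fm * t))

# Render a short burst of samples at CD sample rate.
sample_rate = 44100
burst = [fm_sample(n / sample_rate) for n in range(sample_rate // 100)]
```

With only two sine oscillators this already produces a spectrum far richer than either oscillator alone - which is why the technique mapped so naturally onto cheap digital hardware.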
The net result, in digital terms, of all this activity is the division of music technology into two areas: audio and MIDI. This reflects a certain dualism in the nature of music - on the one hand, one perceives single, discrete events, which we might call ‘notes’; on the other, we know that these single events are themselves complex formulations of many processes. So MIDI treats music as points: an instrument makes a certain sound, and this sound can be reproduced at any pitch. Each point is the same kind of object, and each can be edited in a similar way to a line in a vector graphic.
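The ‘points’ view can be sketched in a few lines of code. This is an illustration of the idea rather than the MIDI protocol itself: the `Note` class and `transpose` function are invented for the example, and the field names are not part of any specification.

```python
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int       # MIDI-style note number, 60 = middle C
    velocity: int    # loudness, 0-127
    start: float     # position in beats
    duration: float  # length in beats

def transpose(notes, semitones):
    """Moving every 'point' is a single arithmetic edit -
    like dragging objects in a vector graphic."""
    return [Note(n.pitch + semitones, n.velocity, n.start, n.duration)
            for n in notes]

phrase = [Note(60, 100, 0.0, 1.0),   # C
          Note(64, 100, 1.0, 1.0),   # E
          Note(67, 100, 2.0, 2.0)]   # G
up_a_tone = transpose(phrase, 2)     # the whole phrase, two semitones higher
```

Because each event is just a handful of numbers, global edits like this are trivial - which is precisely what the next paragraph shows audio cannot offer.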
Audio editing is more like working with a photograph: one can edit audio files in very complex ways, but changing the pitch of one note in the centre of a complex polyphonic texture is practically impossible. Indeed, Richard Dawkins, in his book Unweaving the Rainbow, comments wonderingly on our ability to decode these complexities when he describes going to a classical concert and being surrounded by noisy neighbours:
The entire set of vibrations sums up into a single wiggly line on the graph of air pressure against time, as recorded by your eardrum. Mirabile dictu, the brain manages to sort out the rustling from the whispering, the coughing from the door banging, the instruments of the orchestra from each other. Such a feat of unweaving and reweaving, or analysis and synthesis, is almost beyond belief, but we do it effortlessly and without thinking.
Of course, the same can be said for our visual perception, and, as in other areas involving the physiology of perception, we are discovering - often through technology - how little we really understand these very basic things. Certainly, the sort of analysis and synthesis described by Dawkins is well beyond the reach of technology today.
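Dawkins’s ‘single wiggly line’ is easy to demonstrate: air-pressure contributions simply add, so two simultaneous tones arrive as one stream of samples. The sketch below (pure Python, with illustrative frequencies) builds such a mixture; recovering the two components from it again is exactly the ‘unweaving’ - Fourier analysis, in the simplest case - that our ears perform effortlessly.

```python
import math

sample_rate = 8000  # samples per second (illustrative, not CD quality)

def sine(freq, n):
    """Value of a unit sine wave at sample index n."""
    return math.sin(2 * math.pi * freq * n / sample_rate)

# Two 'instruments' sounding together...
high_tone = [0.5 * sine(440.0, n) for n in range(sample_rate)]
low_tone = [0.5 * sine(110.0, n) for n in range(sample_rate)]

# ...reach the eardrum (or the microphone) as one summed signal:
mixture = [h + l for h, l in zip(high_tone, low_tone)]
```

Once the two waves are summed there is no ‘440 Hz track’ left in the data - just one sequence of pressure values, which is why editing a single note inside the mixture is so hard.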
In general, the music technology available today is ‘owned’ by the commercial part of the industry: the majority of hardware (synthesisers, samplers, effects units) and software (synthesisers, sequencers, sound-processing tools) is designed for, and by, commercial musicians and programmers.
So, if the synthesiser has a ‘default’ method of being played, it is through a keyboard.
Greatest good for the greatest number
The keyboard reflects the MIDI idea of notes as discrete pitch events, even if the sound itself bears no relation to this.
Similarly, a large part of the marketing of these instruments is based on their preset sounds, which are often rather strange creatures that appear, grow and mutate in the commercial arena for a year or so before disappearing once the fashion has passed.
Software - 4/4, C major: it’s easy to copy and paste, but difficult to do anything complex in sonic terms - difficult to deal with ‘sound’ as just ‘sound’ in the MIDI environment.
Indeed, the way the entire industry is structured makes composing, or experimenting with new ways of making and performing with electronics, very difficult.
It is worth remembering how recent much of this development has been -
Durham, 12 years ago: suspicion of synths, MIDI and sequencers.
Today, in education, we are just about at the point where a student might feel technically competent when they can use a workstation involving a synth, a computer and maybe a small mixer. Some students are too uninterested, or too afraid, to aspire even to this lofty level.
Eventually we must be looking to a time when an understanding of music technologies - and indeed other technologies - means more than understanding someone else’s interpretation of a graphical interface. Ironically, the more software manufacturers have done to make the interface ‘easier’, the more they have directed users in certain directions; it’s similar to the use of a keyboard to operate a synthesiser. Sit anyone at a music keyboard and they will immediately and automatically think of piano music: left hand accompaniment, right hand melody.
A student, having listened to a piece of electroacoustic music in simple ‘classical’ ABA form, was asked about its structure: ‘I couldn’t hear a chorus or a verse or a middle eight.’
Just as we wouldn’t call someone who can drive a mechanic, so, if we want properly to understand and control electronics in composition and performance, we have to understand the mechanics of the hardware and the software - just as a professional driver must understand the mechanics of the car, and the violinist the mechanics of the violin.
The future of all technology, at a sufficiently high level, lies in the development of tools that enable users to exploit for themselves the vast range of methods available today. These already exist to some extent, but are impossibly difficult for more than a handful of obsessives to understand. In creative terms, these methods must have different and designable modes of performance. Most of the evidence suggests that, just as we prefer to read text from a physical book, we perform best on instruments whose limited parameters we can control with infinite precision. Whether any one instrument or method will dominate in future is a matter for speculation, but the evidence again suggests that a limited number of formats will survive, as is the case with acoustic instruments. The intriguing question is whether it would be possible to construct software instruments that differ according to the performer.
In a similar way, in composition, there is increasing interest in developing algorithmic techniques, where the ‘composition’ lies in the compilation and control of particular formulae.
Fractals are one influence.
This music can then be performed conventionally, on tape, or live through the software. This has potentially profound implications for the nature of performance itself. Until now electronic music has been ‘static’, principally due to the complexity of its construction or of the distributing medium itself. This need not be the case in future.
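As a toy illustration of the algorithmic idea - not any particular composer’s method - the sketch below builds a self-similar melody by letting a three-note motif elaborate each of its own steps, a simple fractal-like process. The motif, starting pitch and depth are arbitrary example values; the ‘composition’ here really is just the formula and its parameters.

```python
def expand(motif, seed, depth):
    """Self-similar melody: at each level, every interval of the
    motif is itself elaborated by the whole motif, so each small
    group of pitches echoes the shape of the entire line."""
    if depth == 0:
        return [seed]
    line = []
    for interval in motif:
        line.extend(expand(motif, seed + interval, depth - 1))
    return line

motif = [0, 4, 7]              # intervals in semitones (a major triad shape)
melody = expand(motif, 60, 2)  # nine pitches from middle C
```

Changing one number - the motif, the seed, the depth - regenerates the whole piece, which is exactly the sense in which composing becomes the control of a formula.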
To sum up, we are at the beginning of this process. As humans, many of us have immediately and prematurely leapt on certain ideas and structures as the new ‘avant garde’; many others retreat and shy away from these new things. Just as we no longer think of a trumpet or an oboe or a biro or a TV or a telephone or a car as ‘technology’, so, eventually, we will come to see current technology. But in the creative arts we must be aware that an essential part of the nature of the art lies in the physical nature of the ‘instrument’ - that is, our interface with the technology. So it must be in our interests to gain an understanding of, and ultimately control over, the hardware and software at our disposal. In software terms, this must mean that eventually at least a part of composition will be the development of our own programmes.
A conundrum and two quotes -
Conversation with a student -
I try to teach students this control, and to be suspicious of commercially produced equipment, for the reasons given above. If that makes them commercially inept, is that a good or a bad thing?
My own concern, a little while ago, over a PhD composition student submitting a portfolio with no conventionally notated music at all. Is this a problem?
Teaching students about Stockhausen who then go on to work on a checkout. Is it worth it? At least they’re a checkout person who knows something about Stockhausen - which, for some reason, I now find peculiarly satisfying.