Music and the New Technocracy

In comparison with other art forms, music has been particularly accessible to developments in technology. The machinery of musical instruments has become increasingly sophisticated, so it is unremarkable that electronic and then digital devices should have been so quickly exploited by musicians [1]. The same cannot be said of language-based arts, although a considerable amount of language research has been undertaken in linguistics, cognitive psychology and artificial intelligence, many of whose practitioners have stressed language's importance to the mind [2][3][4]. Examples of technologically created text exist, but these are generally produced for novelty, humour and experimentation rather than aesthetics. As the complexity of computer hardware and software increases, so will the effect of technology in this area, but only as obdurate difficulties in implementation are overcome. While surreal and 'nonsense' poetry exists, to be successful it must give serious consideration to the potential meanings of the words as well as to their sonic qualities: "'twas brillig and the slithy toves" [5]; "riverrun, past Eve and Adam's, from swerve of shore to bend of bay, brings us by a commodius vicus of recirculation back to Howth Castle and Environs." [6] Language can be surreal or ambiguous up to a point, but it is restricted in how purely abstract it can become: most text-based forms are highly 'meaningful'. The visual arts have made significantly greater use of abstraction. Pure abstraction, though, still represents a comparatively minor proportion of the whole and is, arguably, used as a way of manipulating 'real' objects or forcing new relationships; 'pure' abstraction can often be related to very real natural phenomena such as texture, colour and shape.
In music, however, the lack of a specific and necessary relationship between the artistic object and 'real' objects means that there is significant scope for abstraction, particularly through the use of patterning. Meaning in music is distinctly ambiguous: opinions about it differ extravagantly. Steven Pinker referred to music as 'auditory cheesecake' [7] and, like Hofstadter [8], is tempted by Deryck Cooke's argument [9] that certain melodic sequences directly represent certain emotions. Ian Cross has suggested that 'it may well be that music is the most important thing we humans ever did', fusing Steven Mithen's evidence from prehistory [10] with ideas from cognitive psychology to suggest that music blends a number of primary mental functions [11]. In his book 'The Selfish Gene' Richard Dawkins introduces the meme as a cultural rendering of the gene [12]; in 'Unweaving the Rainbow' he hints at other biological explanations [13]. Whatever the details, it is quite clear that music does not have to represent anything directly, and that even if in prehistory music was a sort of non-verbal communication (messages by drums, screams of pain or whoops of joy, the imitation of the environment) "it can now serve as a vehicle for a whole range of aesthetic and transcendent experiences" [14]. Because music has no precise or necessary relationship with 'things' it is an unparalleled target for abstract experimentation, and technology, especially electronic technology, provides extensive methods for this. It is this characteristic that has led to significant exploitation, particularly by commercial musicians and others making prominent use of patterning. Complex, repetitive patterning is particularly easy to program algorithmically. Differences between 'computer' and 'acoustic' instruments lie in the balance between expression and experimentation (or practice). Advances in instrumental technology comprise a fusion of imagination, investigation, and aesthetic and technical practice.
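As an illustration of how easily such patterning can be automated, here is a minimal sketch in Python. The melodic cell, the transpositions and the use of MIDI note numbers are all hypothetical choices made for the example, not drawn from any particular piece or product:

```python
# A minimal sketch of algorithmic patterning: repeat a short melodic
# cell, transposing each repetition - the kind of process that is
# trivial to automate but tedious to notate by hand.
# Pitches are MIDI note numbers (60 = middle C).

def pattern(cell, transpositions):
    """Repeat a melodic cell once per transposition, shifted each time."""
    return [note + shift for shift in transpositions for note in cell]

cell = [60, 64, 67, 64]   # a C major arpeggio figure (hypothetical)
shifts = [0, 2, 4, 5]     # step the cell upwards through a scale

notes = pattern(cell, shifts)
print(notes)
# [60, 64, 67, 64, 62, 66, 69, 66, 64, 68, 71, 68, 65, 69, 72, 69]
```

A few lines like these generate, deterministically and at any length, material that would take far longer to write out note by note; this is precisely what makes repetitive patterning so attractive to program.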
Whether and to what degree such advances are taken up is then a matter for the individual musician. The diversity of experimentation possible on a digital instrument is very significantly greater than that possible on an acoustic one - there is a limit to how much you can 'do' to a flute before it stops being a flute, whereas in theory a computer is unlimited in the sounds it can produce. If, as has been suggested [15], practice conditions minds and bodies to assume previously complex tasks 'unconsciously', then the degree of practice possible on any particular instrument is very important. If an instrument is too simple it will not allow sufficiently complex forms of expression; if an instrument is too complex then too much effort will be spent in 'practice' (or experimentation) and 'unconscious' expression becomes difficult. It is probable that any instrument based on electronic technology will be too complex to allow 'free' unconscious expression - if, that is, the performer wishes to exploit that instrument fully. While the creation and manipulation of sound and music with computer software and hardware may be experimental, it is also used to emulate real performers playing 'real' instruments; for many this is the chief function of music technology. Is a piece of music composed, 'performed' and recorded solely technologically a 'real' performance, or a kind of provisional rendering which may or may not subsequently be performed in reality? Plentiful commercial material is produced in this way, and it is becoming increasingly rare to hear through the commercial media 'real' scores played on 'real' instruments by 'real' performers. The process creates anomalies: if you want a flute to play a low A (outside the range of a real flute), would you avoid the note because a real flute couldn't play it, or would you embrace the anomaly, accepting that if the piece were ever performed 'properly' you would edit the note? Why shouldn't I, just for a moment, use two flutes, or five, or ten?
If I have the capacity to adjust the timbre of my synthesiser's flute sound and I want a particular effect, why shouldn't I use it? After all, 'pure' electroacoustic composition often uses only such processes, so why shouldn't I? And if it's never going to be performed in 'reality', what's the problem anyway? What about instrumental ability? Professional performers are able to play problematic passages, and even when they can't, they use their judgement to create something that satisfactorily simulates the score's intentions. However, there are some things that are particularly difficult for humans - certain rhythmic configurations, for instance, or complex non-repeating patterns of notes. Should an 'emulated' piece respect such difficulties, or exploit the advantages of the technology and ignore them? What about the positive things that humans add to musical performances? While for certain 'types' of music (usually more traditional) emulators can produce adequately effective results, more exposed and expressive ideas are difficult, if not impossible, to mimic convincingly. Physical manipulations of an instrument, such as the application of special tunings, tone colours or the sounds of the instrument's body, are technologically infeasible (without the creation of new 'voices'). More importantly, expression itself - the subtle differences of attack, vibrato and tuning deviation regularly used by any (good) musician - is implemented in simplistic, algorithmic ways, if at all. With the increasing prevalence of such methods, how significant are these aberrations? Are we losing too much, or are they details that can be lost with the assurance that somewhere else we're gaining more? Much marketing material used to sell music technology emphasises the empowerment of individuals who do not have 'standard' skills to take part in musical activities.
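To make concrete how simplistic such algorithmic 'expression' typically is, the sketch below shows the kind of 'humanise' function a sequencer might apply: random jitter of timing and velocity, with no musical model of phrasing behind it. It is a hypothetical illustration of the general approach, not any particular product's algorithm; all names and parameter values are invented for the example.

```python
import random

# A hypothetical 'humanise' routine: perturb note onsets and MIDI
# velocities at random. This is expression reduced to noise - no
# notion of phrase shape, breath or emphasis, just jitter.

def humanize(events, time_jitter=0.02, vel_jitter=8, seed=None):
    """events: list of (onset_seconds, midi_velocity) pairs."""
    rng = random.Random(seed)
    return [
        (onset + rng.uniform(-time_jitter, time_jitter),
         max(1, min(127, velocity + rng.randint(-vel_jitter, vel_jitter))))
        for onset, velocity in events
    ]

mechanical = [(i * 0.5, 96) for i in range(4)]  # four evenly spaced notes
print(humanize(mechanical, seed=1))
```

The result sounds marginally less machine-like than the input, but it bears no relation to how a performer actually shapes attack, vibrato or tuning - which is exactly the simplification the text describes.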
So, you do not need to notate music if you can employ software (a more generic skill) to do it for you. Given musical expression's abstract nature, it is not even necessary to comprehend musical theory in order to compose a piece of music (although if you want to you can use the compositional algorithm provided!). Similarly, through the use of emulators one doesn't need 'real' instrumentalists to play it. All this is currently possible in one form or another, and the technology is certain to become very significantly more powerful and sophisticated than it is now. How good is the software? It can be excellent: if you use a good product and you know what you're doing, you can produce easily editable, professional-quality scores. Unfortunately, the reverse is also true: if you use (any) software without knowing what you're doing, the results will be grim, unless you are very, very lucky. (Incidentally, what is 'knowing what you're doing'? Knowing what the instruments are, what they sound like, how they are played and who is likely to play them.) To make matters worse, the configuration of QWERTY keyboard and mouse (even with a synthesiser keyboard) is hardly the ideal tool for music notation (or performance). In spite of these potential problems, there are ways in which technology can genuinely enable people musically. Many do not have the time or the inclination to learn theory. Technology can notate, harmonise, improvise, compose, imitate and so on. Once novices have come to terms with simpler tools, they may find a route to more complex and powerful ones. The pitfalls are not for novices, but for those who might wish to develop further. All this has happened since the introduction of affordable desktop systems powerful enough to make MIDI work feasible, and which have been taken up with enthusiasm.
In consequence, it is far from uncommon for commercial music to be completed entirely without professional studio facilities, and the Internet allows inexpensive and effortless distribution. Embryonic musicians no longer require abundant personal resources; nor do they need to rely on their appeal to record companies. Another positive application of music technology involves helping people with physical and mental difficulties to express themselves, although the still rather awkward and basic implementations of most hardware, together with the expense and care needed to ensure the welfare of such clients, have hindered both the development and the adoption of such systems. In any case, many providers feel that a more human and real environment is preferable. An understanding of real instruments, real performers and music theory does not only nurture the technical ability to compose; it also encourages an appreciation of the potential diversity of musical ideas. A computer, by contrast, is a multi-purpose device, and with sufficient knowledge anyone can in theory 'do' anything that is computable (although the creation of 'real' performances is not currently among those things). For the majority of composers this is too much - every time they want to express their emotions, they don't want to have to reinvent the wheel or the flute, let alone the flautist! There is a variety of software that would enable them to do this (programming languages), but these are, understandably, rejected (or, less understandably, not considered at all) in favour of predesigned software and hardware. Any software (even a programming language) filters the number of alternatives available to the user - more popular functions are easily available, more complex ones might be there but less apparent; many things won't be available at all.
Commercially this is important; first impressions are vital, and if a program doesn't do what we want it to do quickly and easily we'll stop using it and move on. Programs compete for different market sectors where power, speed and usability are crucial factors. As with simple and complex instruments, a balance must be drawn between simplicity, ease of use and expressive potential. Where should this balance lie, and if we're not happy, what, if anything, can be done to alter it? As technology becomes more widespread and powerful, as those producing and using it themselves become more powerful and influential, and as those responsible for music education either reject it completely or accept (with the odd complaint) the commercial status quo, a vacuum results. Unless technocrats are given reasons to consider the importance of live performance, or of abstract exploration and experimentation, and while commercial hardware and software producers are naturally and exclusively concerned with their balance sheets, who is going to care if that software simply directs users towards the achievement of certain specified goals with a minimum of effort? Technocrats are frequently young people. They often have a worryingly limited knowledge of themselves as well as of the capabilities of their equipment, and, perhaps understandably, rather than exploring the abstractions of sound and expression they prefer the undelayed gratification of popularity (and, to be fair, of earning a living). (After hearing Jonty Harrison's electroacoustic piece 'Klang' [16], a student complained that he didn't understand it because he couldn't hear the verses, choruses and middle eight!) Musical technocrats often see the limits of the software as a definition of their own limits - software which, for commercial reasons, strips out choices that might seem too complex, difficult or esoteric.
Moreover, it is too frequently the case that those charged with the education of such technocrats are unwilling or unable to come to terms with such problems. The technocrats will find their own answers even if they are devoid of the experience of live, complex and subtle music making. They may well reject more elaborate strategies, but they should at least be made aware of the choices. It rarely occurs to them that if they want to do something, they should feel able to learn to do it themselves. If a piece of software written by a commercial house composes a piece of music at the click of a button, who has composed it? If it's that easy, why bother? What is expression if someone else significantly determines it? How do I know what I want to do if I don't know what I can do? If I play a CD and mime to it, I shouldn't be surprised if no one applauds! I am acutely aware of the danger of rejecting the new because of investment in the old; there have been 'paradigm shifts' throughout musical history involving notation, the church, and so on. It is possible to argue, though, that because of the nature of the technology involved, this shift is different. The computer is not a musical instrument, but it can be made to look and behave a bit like one. The problem is not restricted to music - all of our ways of doing things, our ways of thinking, will change as computer technology becomes more widespread, efficient and invisible: if we don't question the status quo we won't be able to see how we are being directed, or who is doing the directing. In terms of musical training, at least for the moment and while it comes to terms with reality, technology should not be allowed to become too inconspicuous. Sloboda has warned that contemporary musicians have "already come to realize that the unfettered development of electronic music leads to sterility and lifelessness. Electronic instruments must always be constrained by the parameters of 'human' music making" [14].
Contemporary popular music suggests that this is not necessarily the case - many individuals appear to achieve (complacent) gratification from electronic, if imitative, sounds. My concern is with what is potentially lost - depth, dexterity, the transient delight that we experience when an accomplished performer or imaginative composer glimpses something extraordinary. One possible outcome will be the marginalisation of live instrumental music, at least in the classical sense. We shouldn't be too surprised at this, because western art music has always been marginal. Anyone witnessing the contraction of something in which they have considerably invested will view with regret anything that encourages that decline. There is evidence that classical musics throughout the world are suffering from the advance of commercial western music, leaving in this country a rump of Sunday-afternoon music (with cannon - rather than canonic - effects). But I don't want to reject the new - on the contrary, I want to celebrate it, to dissolve my own boundaries encircling its capabilities so as to better pursue its potential: to understand what makes a human playing a violin interesting. Aesthetically, computers are strange machines, but nowhere near as strange as we are. We are in a transitional period: computers are powerful enough to persuade us that they are capable of so much, but only if we don't fracture the illusion of virtual reality. It is exciting and inspiring, but it is an illusion that we as individuals have the resources to fashion in our own image. If people are to take advantage rather than be taken advantage of, they must be given, or must take, these resources. If they do not take on these issues, someone else will, and with their own agenda. And this needs to be done while considering commercial pressures, attitudes and products, which are not in themselves wrong or unwelcome, but which tend to encourage a homogeneity of thought and intention.
As a musician my main concern is with the prospects for music. We are educating a significant number of technocrats without the ameliorative influence of 'human music making', where too frequently the accent is not on the imaginative, open nature of technology, but merely on fashionable conventions determined by commercial producers. It is on this group that a significant component of our futures, musical or otherwise, depends: they will hold the code to technology's mirage. It is in everyone's interests to ensure that as many people as possible, including technocrats, are as informed, imaginative and competent as possible. Equally, there will have to be an acceptance of the significance of computers (including a thorough understanding of their positive and negative aspects), the development of well-implemented music and technology courses inspired by those fluent in both disciplines, the general integration of technological issues throughout aesthetically based courses and, possibly most important of all, the education of all educators in technology. The development of technology as an open tool would necessitate the acquisition of technical awareness by aesthetes and technocrats alike, to a degree that they might not immediately appreciate - but then what else are musical instruments but a 'hardware interface' where human meets machine? We don't expect performers to play a piano immediately and with fluency. Why, then, should we expect any less, or any more, from technological 'performance'?

Richard Hoadley
Anglia Polytechnic University, Cambridge
March 2001

References

[1] Manning, P. 1985. Electronic and Computer Music. Clarendon Press, London.
[2] Chomsky, N. 1957. Syntactic Structures. Mouton, The Hague.
[3] Chomsky, N. 1965. Aspects of the Theory of Syntax. MIT Press, Cambridge, MA.
[4] Chomsky, N. 1968. Language and Mind. Harcourt Brace Jovanovich, New York.
[5] Carroll, L. 1872. Through the Looking-Glass, and What Alice Found There.
[6] Joyce, J. 1939. Finnegans Wake. Faber and Faber, London.
[7] Pinker, S. 1998. How the Mind Works. Allen Lane, London.
[8] Hofstadter, D. 1979. Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books, New York.
[9] Cooke, D. 1959. The Language of Music. Oxford University Press, London.
[10] Mithen, S. 1996. The Prehistory of the Mind. Thames and Hudson, London.
[11] Cross, I. 1999. 'Is music the most important thing we ever did? Music, development and evolution'. In Suk Won Yi (Ed.), Music, Mind and Science. Seoul National University Press, Seoul, pp. 10-39.
[12] Dawkins, R. 1976. The Selfish Gene.
[13] Dawkins, R. 1998. Unweaving the Rainbow.
[14] Sloboda, J. 1985. The Musical Mind. Oxford University Press, London.
[15] Sloboda, J. (Ed.). 1988. Generative Processes in Music. Oxford University Press, London.
[16] Harrison, J. 1982. Klang. LP: UEA 84099 (released 1984), UEA Records, University of East Anglia.