What happened at 8:00pm on Sunday 20th June 1999 in the Cambridge University Faculty of Music Concert Hall, situated in West Road, Cambridge, marked the culmination of about two years' work.
At 8:00pm, more or less precisely, four people each activated the programme called pSY on one of four standard IBM-compatible PCs. Each PC was controlling either a Yamaha SY77 or SY99 synthesiser. Activating the play button told pSY to begin playing a score called The Copenhagen Interpretation. The 'performance' would last precisely fifteen minutes (give or take the minor vagaries of the various system clocks), plus the time the programmes took to turn themselves off - but these were the only precise things about the performance. Each of the SYs' outputs was routed to two of the four speakers set around the hall (see Figure 1). I mixed the four from the centre of the auditorium. This mixing was, apart from the initial 'click', the only part of the performance directly under human control. The 'piece' the machines would perform was not in any completely determined format. Indeed, apart from five or six moments in the piece, I could not be entirely sure what each programme's output would be.
The Copenhagen Interpretation is named in honour of the work undertaken on quantum mechanics by Niels Bohr and others during the first part of the twentieth century. Weirdly, and after the name had been chosen, I discovered that physicists refer to the quantum state of a system by the Greek letter psi (ψ).
I had been working on a project to create a concert piece for live computers and synthesisers, where the computers were, at least to some extent, autonomous during the performance. Of equal importance, I wanted a piece which did not display typically algorithmic tendencies (loops, ambience, minimalism), but had a sense of teleology. In a sense, it was an experimental musical Turing Test with original, non-stylistic music. (As far as this idea is concerned, the performance is still too controlled, but then I was preparing for a live public performance.)
Why would anyone want to do this? Looking back, I had a number of reasons. First of all, as a composer of acoustic music, when it came to experimenting with the electro-acoustic environment, I became worried about the idea of something being finalised in such a static way. Once it was done, that was that - apart from differences in audio mixing and in environmental acoustics there could be no further development without re-writing or re-constructing the entire piece. This is generally either extremely inconvenient or impossible. The idea was to build diversity into the very construction of the piece, so that the result would be different according to the settings of the programme each time it was played. More than this, the piece was not to have a formalised, final version. By its very construction, the programme/piece has no precise version 'written' anywhere, in the code or the settings. The concept of 'rewriting' a piece each time it is performed in order to achieve this is neither appealing nor intellectually or artistically economical - a live performer does not need to relearn a work or, indeed, an instrument whenever the performer re-interprets a piece. It was also a different prospect from revising a 'written' score (although I personally find this, too, a difficult and unpleasant activity). Through this process it became clear that when writing acoustic music I relied heavily on the performers themselves to 'breathe life' into a score. It seems unthinkable to me that a composer would expect or desire a performer to play a piece identically each time. Performance is an interactive process which, even in cases where one hardly knows or only briefly communicates with the conductor or performers, adds immeasurable complexity and subtlety to the score. Similarly, it became clear that when any listener hears a live performance, their reactions depend quite strongly on the performance itself as well as on how well they know the piece.
There are, of course, many aspects of a live first performance that will be unexpected to the listener, even if that listener is the composer. One of the intriguing aspects of developing pSY was trying to incorporate the feeling or the atmosphere of this interpretation while maintaining processes that would be fast enough to allow effective real-time performance.
A second reason for undertaking this project was a dramatic and aesthetic one, and one that may alienate some listeners. The sounds created by the pSY/SY partnership, especially when allowed to meander in their own way, often seem rather strange: as if they are communicating, but in some alien, inhuman tongue. Some have suggested that they are what you might imagine hearing if you set an antenna to receive signals from some distant point in space and intercepted an alien communication. This impression is only heightened during a performance, where the static and immobile pairs simply sit and 'sing'. Their 'expressions' are the same whether they are singing gently or bellowing madly. (This in itself presents certain difficulties, similar to those of more standard 'press play' pieces, concerning what you look at while listening and, at the conclusion of the piece, whom you are applauding and for what reason. These questions and others need answering elsewhere, although during the recent performance the addition of a large display of a dynamic sonic analysis by the (Macintosh) 'Sonogram' programme did a lot to keep listeners entertained. On the other hand, there are reasons for feeling that such visual stimuli, when not a direct result of the musical activity - such as the sight of a live musician performing - are rather more of a distraction from the music than a help to it.)
The third reason was simply the several and diverse practical and technical challenges posed by the project. Over the years standard acoustic instruments have developed beyond any single person's or group's ownership. Individually or in often pre-defined groups, they carry with them a cultural heritage which can belong to anyone, should they choose to learn about and understand them. I shall argue that these are basic 'tools' of music and that, beyond their standard limitations, composers can make them more or less their own, as countless others have done before. This may be the case with some electronic instruments (the Moog, the EMS), although arguably, for a number of reasons, none have even approached the status and tradition of even comparatively recently developed acoustic instruments (for instance the saxophone). It is most definitely not the case with computer software - surely another 'tool', although maybe not precisely equivalent in nature. One principal feature of the latter during the last ten to fifteen years has been the extraordinary pace of development surrounding computers; the technology and the software must struggle hard to survive the forces of competition, obsolescence, commercialism and popular fashion. In addition, synthesisers and software use methods of sound production which are different, separate and often experimental, and these, too, suffer from the same effects of commercialism, competition and so on. How, in this environment, can any piece of software really be felt to 'belong' to anyone, apart, that is, from the authors themselves? I would argue that it cannot, and that this is a factor in deterring many people from the medium itself. The only possible exceptions to this are those who have their own personal set-ups, often involving idiosyncratically ancient versions of software and equally peculiar selections of equipment. But even in this case, it is the set-up as a whole over which there is a sense of ownership, not the software.
Having experienced quite a lot of this development, I feel this sequence of competition, obsolescence and fashion quite acutely, and I feel an unease when claiming responsibility for a piece when I myself do not necessarily understand the processes that lie in the background. For some reason I do not feel the same when writing music for a flute, a violin or another 'pre-defined' acoustic instrument.
For many composers, writing one's own software is rather too extreme a solution at the moment - the creation and manipulation of sounds is creatively quite satisfying enough, and they are apparently not concerned by any feeling of interference from those who created the software functions which they use to make their music, nor do they seem to feel (or do not feel strongly enough) any loss of ownership because they are using functions designed by someone else. Nor do they feel, understandably, any inclination to undertake the work involved in programming. However, as will become clear from what follows, it is my opinion that a significant part of the future of music technology will lie in the development of a variety of 'intermediate' programming tools that will, effectively, perform this task. Where the different levels of programming lie, and where in this hierarchy a sense of the aesthetic appears, is quite a complex area, and I will discuss it further below. Ultimately, this is an area requiring discussion of the possible links between programming and composition.