This seems to me to be a central issue. I may object to an individual's use of a third party programmer's algorithm or function - I may suggest that a large part of the result is due to that programmer rather than to the composer. However, there are problems here, too.
First, how does the use of a programmer's algorithm or function differ from the acoustic composer's use of the skills of the live performer to give life to his or her notes? I have discussed above that this is an extremely important, and possibly crucial, element in my way of thinking. There is, however, a direct link here to the 'ownership' of a piece of music, related to my previous discussion of who applauds what at a live performance. Presumably, the majority of appreciation at a live performance goes to the performer, not the composer (who, after all, is often not there, and more often than I would like, is not even alive!). Is this the part of the equation that balances the acoustic composer/performer equation? As far as performerless music is concerned, this balance is usually absent - in spite of the fact that the composer has usually utilised many other people in his or her activity - not just programmers and hardware designers and manufacturers, but those who make, service, set up and check the PA system on which the composition relies. Could this lack of balance be a reason for the common sense of strangeness experienced when applauding a 'press play' piece?
A second problem involves the 'levels' of programming and interpretation mentioned above.**** Some consider that the use of a sequencer is 'programming' (credits are sometimes given for 'MIDI programming', usually meaning this), and indeed, as I have mentioned, some of the larger software packages have modules which do involve something similar to 'programming'. In fact, programming in any higher-level language is itself dependent on the programme which interprets or compiles it, and ultimately even lower-level languages are dependent on the hardware/firmware/software of the machine itself and the operating system it uses. Bearing this in mind, where can attribution really be placed? Am I indebted to IBM for the basic design of the computer, or Microsoft for Visual Basic, or the BBC for the original Basic, or Yamaha for the design of the SY… This argument looks suspiciously like reductio ad absurdum and therefore suggests that such attribution is faulty at some stage. And yet the fact remains that, commercial or non-commercial, software to be used for musical composition will, by its very construction, not and almost certainly cannot fully reflect the needs and aspirations of every composer. It is true that being able to group more basic functions together into scripts may well give more control to the composer, but ultimately, will it ever be possible to come up with a satisfactory 'universal' sound editor? As things stand, the common reaction of users of any equipment of this sort is initially 'wow, it can do this, and that, and the other', but after a little work, usually spent finding where the software's limits are, disappointment and realism set in. Some people will have the same reaction to an acoustic instrument, although again, there does not seem to be the same attitude towards, for example, a trumpet! We do not eagerly await the next new version of a trumpet. Nor do we complain because it hasn't got strings or a reed!
However one eventually decides to deal with this question, there can be little doubt that the interface has some influence on the way in which it is used, and indeed on the ways in which it can be used. In almost every case the technological interface will be considerably more complex in detail than that of an acoustic instrument, and I have argued above that this may be because of a fundamental difference between electronic and acoustic instruments, or because electronic instruments have not had the time to consolidate into a single, more defined idea that can gradually begin to develop a repertoire and a set of practitioners. Which of these turns out to be true is of vital importance to what happens to electroacoustic music in the future. Whatever the results of this, it must be clear that we have much to learn from the way that acoustic instruments are controlled. Most specifically, the difference in nature between the two interfaces as described on pp21-22 above, where the violinist has only (literally) a handful of controllable parameters, but has an almost infinite variety of control within those limits, must tell us something about what we might want to control and how we might want to control it. Purely speculatively, I would suggest that one of the principal features of the acoustic interface is the way in which one can switch seamlessly between precise pitches and amplitudes and more flexible versions of them, in a way that neither MIDI nor audio manipulations really allow.
pSY certainly makes no real attempt to deal with this and is, indeed, in this respect too complex. (A further sub-programme, pSpace, attempts to investigate this matter using the mouse, although this seems too simple an interface to be aesthetically satisfying.) At least a part of the reason for this is the desire to experiment in as restriction-free an environment as possible. As I have mentioned above, most of the software's functions were decided upon on a 'what if' basis. It would certainly be possible to reduce the complexity of the interface a little, although it could not be reduced much without some loss of functionality.
Here is perhaps the moment to discuss briefly the effect of specifically visual programming on the development of the interface. While a lot has been said concerning the faults of various visual systems, there can be little doubt that the rise of the graphical interface has catalysed an enormous increase in the use of computers in general - a rise that I doubt would have occurred without the development of these minor versions of 'virtual reality'. Again, matters of interpretation do need to be taken into account, and users really ought to be aware that what they 'see' as real on the screen is no more a description of how the computer actually works than a violin is a description, in a technical sense, of how it makes its sound. Again, the interface of an acoustic instrument is an interesting analogy here, as it is more than possible to perform extremely well on such an instrument while having only the most basic knowledge of how its sound is made, or indeed of how one as a performer interacts physically with that sound. I would have thought, however, that for most really good performers such knowledge is in place - at least in an intuitive if not a technical form - and that such knowledge is important in guiding performance practice. Similarly, I would speculate that the majority of (acoustic) composers have a similar type of knowledge, only possibly in a more theoretical sense, as it is not generally considered essential that a composer should be a skilled performer on an instrument in order to be able to write effectively for it.
There is a more basic part of the visual/verbal argument to be made here. One of the main reasons that I became interested in programming was my experience of understanding music in a non-verbal and generally quite visual way. At least, I feel that music is abstract in the non-verbal sense and that as Roger Penrose describes his view of mathematics:
Almost all my mathematical thinking is done visually and in terms of non-verbal concepts, although the thoughts are quite often accompanied by inane and almost useless verbal commentary...No doubt different people think in very different ways...the main polarity in mathematical thinking seems to be analytical/geometrical. (p549)
I became interested in the idea that computers could possibly allow a more general visualisation of the abstractions of sound just as the early versions of the GUI were becoming popular (Atari, Mac…), and spent some time trying to fathom the difficulties of the then available programming tools that enabled one to use such interfaces (without, it must be said, much success). Having, in the last few years, finally been presented with products that enable one to use GUIs without too much programming skill, the divide between what I perceived to be intuitively clear links between images and sound and what was actually possible suddenly became very wide indeed, and indicated that if I had such an intuitive belief it was firmly situated in my own way of thinking and has no straightforward correlation with the way computers deal with such matters. (The question of mapping between one structure and another is another issue considered in Hofstadter's book.)
This is well illustrated by the fact that the idea that a pSY 'morph' between one set of voice parameters and another will proceed in a straight and predictable line is misguided, due to the unbalanced importance of individual parameters (see p11**** above). Similarly, the mapping between visual and audio stimuli can be equally misleading, and this leads to some complex attempts at implementing visual representations of sound (or, for that matter, simple performance controllers for sound). This has at its heart the way we commonly view a 'note' as a single event which, in standard notation, can be notated as a single typographical form. In terms of the 'actual' sound created by a realisation of this form, however, this is not the case at all - the sound is constantly varying in many respects, and the impression of pitch is, again, a rather subjective matter. In itself, this has an analogy in modern GUIs, where the fundamental interface is a virtual visual representation of complex binary data; what is more, the visual representation is hardly 'there' at all, but an illusion created by the ceaseless refreshing of the video monitor.
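The straight-line assumption can be made concrete with a minimal sketch. The parameter names, values and scaling below are my assumptions for illustration only, not pSY's actual voice data:

```python
# A minimal sketch of an equal-step linear 'morph' between two sets of
# voice parameters. Parameter names and values are hypothetical.

def morph(params_a, params_b, steps):
    """Interpolate every parameter in a straight line from A to B."""
    for i in range(steps + 1):
        t = i / steps
        yield {name: (1 - t) * params_a[name] + t * params_b[name]
               for name in params_a}

# Two hypothetical FM voice settings.
a = {"mod_index": 1.0, "ratio": 1.0, "level": 90}
b = {"mod_index": 7.0, "ratio": 3.5, "level": 40}

for frame in morph(a, b, 4):
    print(frame)
```

Every parameter here moves in equal numeric increments, yet because individual parameters carry very unequal perceptual weight (a small change in modulation index can transform a timbre far more than a large change in level), the audible result of such a morph is anything but a straight and predictable line.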
Throughout the above discussions, the same topics keep arising, just as they kept occurring to me as I was developing pSY and composing The Copenhagen Interpretation. In summary they are the following:
Initially, there seems to be little that is problematic about the distinction between computer software and composition. Even with the increasing interest over the last few years in stylistic emulation by software composers (jazz bloke, Cope/Mozart, etc.), and the use of algorithms such as fractals to create sonic events, there is little in many of these that goes, in terms of original/creative music, much beyond the curious, although some may be more interesting in computing terms. Much of this can be explained by the traditional divide between 'musicians' and 'technologists'. Most particularly, the principal idea behind them tends to come from the artificial intelligence perspective rather than from any real sense of creative originality. While I am interested in artificial intelligence in general terms, I do not find its application to the stylistic emulation of music so fascinating.
Having spent considerably longer constructing the various versions of pSY than composing any acoustic piece of mine, I find it hard not to consider that the construction itself, at least as far as my attitude was concerned, was as much a compositional activity as any dot-based one. On reflection, I feel that, should anyone else use the software to compose a piece, a huge influence would inevitably come, by definition, from myself. On the other hand, the design of the software is so idiosyncratic, and so far from any idea of designing a 'tool' for others to use, that this is hardly surprising.
There is also the aesthetic argument involved in programming (see pp8-9**** above), which concerns the relationship between composition and programming. It is presumably irrelevant in terms of a 'pure' composition whether the methods used to generate the material are 'correct' or not - whatever that means. Presumably this is because a composition is 'finished', at least in terms of the form of its score (or its final tape form). Indeed, a great deal of the classical tradition involves formulating 'correct' versions of historical pieces - unearthing original editions, autograph scores and sketches, and speculating on their validity. Programming, or at least creating live by algorithm, is by its nature incomplete in this sense, and, I have argued, also in the sense that an acoustic composition is incomplete without the interpretation of the live performer. If I subsequently develop pSY, will subsequent performances of Copenhagen be inauthentic? How much change needs to occur in order to make it 'another piece'? Does this equate at all with, for instance, the debate concerning authentic instruments? Nowadays many would consider that playing a piano as part of the continuo in a baroque piece would be unthinkable, even though to some extent the piano is the logical successor to the harpsichord. Indeed, it used to be the case that this was not considered such a terrible thing to do. Is the debate about authenticity, then, also one concerning fashion?
The question remains, however - if the purpose of the programme is aesthetic, does it matter if it is based on a bugged system? Could it not be said that some of the greatest art has been created by dysfunctional individuals? However, if the purpose of a programme is to behave reliably and to produce 'predictable' results, can the result ultimately ever be really creative? See also error messages and error handling.
The same principle may apply to pSY's development of FM sounds. It can do so at random, and it can produce sounds and ideas that a human programmer would be highly unlikely to come up with, if only because they can be so complex and yet illogical. How does one programme for what is and is not aesthetically pleasing? Even using complex forms of programming, how does one even know what will or will not be pleasing? One can suggest starting points or methods, but one has to understand that at any given point in this system, a highly displeasing sound can become a pleasing sound, to some, by changing one parameter slightly. Even having said this, we are touching upon the area of aesthetics itself, and to many the whole area of electroacoustics, or even 'classical' music, is by definition aesthetically displeasing. How does one then code for individual taste? ****
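The kind of random generation described above can be sketched as follows. This is a hypothetical generator: the parameter names and ranges are my assumptions for illustration, not the SY's actual voice format.

```python
import random

# Hypothetical random FM voice generator; parameter names and ranges
# are assumptions for illustration, not the SY's actual voice format.
def random_fm_voice(rng):
    return {
        "carrier_ratio": rng.choice([0.5, 1, 2, 3, 5, 7]),
        "mod_ratio": rng.choice([0.5, 1, 2, 3, 5, 7]),
        "mod_index": rng.uniform(0.0, 10.0),
        "feedback": rng.randint(0, 7),
    }

rng = random.Random()
voice = random_fm_voice(rng)

# The 'slight change' in question: nudge a single parameter.
nudged = dict(voice, mod_index=voice["mod_index"] + 0.5)
```

Nothing in this sketch can say whether `voice` or `nudged` is the more pleasing - which is precisely the difficulty raised above: the generator can enumerate configurations endlessly, but the aesthetic judgement stays outside the code.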
The relationship between composer, score, performer, listener. The relationship between computer output and provability. The reliance on our understanding of predictability in each. The nature of aesthetic quality versus programming skill. The relationship between the programme and the composition.
As in acoustic composition, sounds/ideas are introduced and then developed - as in Copenhagen, where stable sounds become more unstable by degrees (which can be seen as a sort of development).
Ultimately the distinct difference between aesthetic and commercial purpose as exemplified by this software.
The interface is and always will be a balance between power, flexibility and ease of use - and this balance of 'levels' - see William Calvin - is not itself a concept that is stable in the individual.
Reducing things to basics - the physicists' rallying cry - is an excellent scientific strategy, as long as the basics are at an appropriate level of organization. In their reductionist enthusiasm, the consciousness physicists act as if they haven't heard of one of the broad characteristics of science: levels of explanation (frequently related to levels of mechanism).
Part of the depth generated by truly great music is balancing between textures where the general is more important than the individual and vice versa. This sort of flexibility is relatively easily and intuitively achieved by the human mind but it would appear to be one of the difficulties in taking a logical approach.
The probable impossibility of a universal software interface, and the necessity to encourage not simply the view that one should consider many approaches to the interface, but that ultimately you must design your own. I would illustrate this by my own use of the Wondrous Function as described above. What possible reason could a software producer have for including such a 'feature' - especially when, considering the complexity of the field of number theory, there are so many other functions that users might feel equally interested in experimenting with? Clearly, a more useful facility would be the inclusion of a 'formulator' for such functions, one that could cope with many different types of mathematical process. This is all very well, but how far does this formulator have to go in terms of its own flexibility? How many terms should it allow, and to what degree of complexity? Presumably, for the 'ultimate' formulator, it would go considerably further than the most formulaic of composers would wish! And yet, to accommodate such composers, what degree of user-friendliness would have to be sacrificed? In providing such a facility you might satisfy them, but what are the chances of a user new to music coming to terms with it? They would want software that can help them understand and manipulate more 'traditional' musical elements. Is it then possible for any programme to satisfy all users? Clearly, without some form of artificial intelligence the answer is no, and even if it were possible using artificial intelligence (I am imagining a programme which could learn and grow in complexity as the user's abilities grew - this is already an intended feature of some programmes), is it desirable? Under what circumstances would we wish a single developer to govern our own growth? Would it be possible to generate artificial intelligence which did not, similarly, reflect the creator's viewpoint?
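For readers unfamiliar with it: 'wondrous' is Hofstadter's name, in Gödel, Escher, Bach, for the Collatz 3n+1 process. Assuming that is the function meant above, the process itself is tiny to state (how pSY maps its output onto musical values is not shown here):

```python
# The 'Wondrous Function' (Hofstadter's term in Gödel, Escher, Bach
# for the Collatz 3n+1 process): halve even numbers, triple-and-add-one
# odd numbers, until the sequence reaches 1.
def wondrous(n):
    while n != 1:
        yield n
        n = 3 * n + 1 if n % 2 else n // 2
    yield 1

print(list(wondrous(7)))
# -> [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
```

The wildly varying lengths and contours of these sequences from one starting number to the next suggest why a particular composer might want exactly this function, and why no general-purpose 'formulator' could be expected to anticipate it.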
Ultimately, if artificial intelligence were to be implemented, is this not the same argument? If a machine were to display some form of intelligence/consciousness, would commercial applications for that machine not become immoral? Alternatively, would it be possible to create such a creature without it displaying facets of its creator's personality? If it were true artificial intelligence, should we attempt to control its artistic output, even if we considered it unpleasant or even offensive? Can we have intelligence without a moral/aesthetic set of judgements? If a music software programme were developed for a machine that was self-supporting and self-learning (surely a major part of artificial intelligence), and which created its own music at will, would this machine be the composer, or would it be the developer of the software?
On pages 7 and 13**** above I discussed in a little detail the number of possibilities open to the user, both in terms of an SY's two-FM-element voice and in terms of the general MIDI possibilities of however many notes utilising however many intervals. It was clear from those statistics that they reflect the enormous complexity that can lie behind even a comparatively simple pSY sequence. In addition, I later argued that the number and levels of complexity involved in the live performance of an acoustic instrument have a very specific nature that is fundamentally different from live performance on an electronic instrument, and that this difference lies in the availability of different numbers of editable parameters to different forms of controllers. For the sake of simplicity, we may accept that in the case of a live performer playing an acoustic instrument, the performer is using one algorithm (body, mind, fingers, etc.) in order to control another, ostensibly unrelated algorithm (the physical nature of the sound-producing instrument and its possibilities). (In passing, I would say that I do not believe that the performer is following something as clear and straightforward as an algorithm, but something much more complex, which would only open the system up to even greater numbers of possibilities - but that is another argument!) In addition, these two algorithms (or sets of algorithms) are, by the nature of musical training, culture, heritage, etc., working in a unique harmony (at least in the case of a skilled performer), and this harmony is dependent to some extent on all of these 'parameters' (that is, training, repertoire, brain state…). At least on a fairly basic level, this is what pSY does, only through a number of programmes (algorithms) controlling elements of another (the SY).
Is the (a) programme ever finished?
The above investigations into the implementation of choices with respect to note-ons, etc., are virtually infinite, as are implementations of functions, etc. Is this not a feature of the algorithmic process? Without implementing a full neural network capable of investigating, learning and therefore implementing (potentially) all these possibilities, can it ever be said that a computer programme is 'finished'? It is clear, however, that a composition can be finished - whereas a composer, except through death, never is.
This is where pSY differs from some other programmes. Quite obviously, it does not attempt to introduce intelligence into the idea, not least because the task of introducing elements of aesthetic judgement concerning certain formulations of SY configuration as well as aesthetics concerning the use of these configurations in terms of pitch, velocity, etc., is well beyond my scope at the moment. This viewpoint also makes pSY different, in that it quite clearly does not attempt stylistic emulation. While this is an intriguing area, it does not strike me personally as being a particularly creative response to the medium, although certainly it may shed some light on the area of artificial intelligence, as Hofstadter's responses to David Cope's EMI programme indicate. However, I have to admit that the way in which pSY seems to be able to communicate with only a little setting up and no interference suggests to me that the abstract nature of music means that any Turing Test is quite straightforward to achieve and therefore probably not in the same category as verbal Turing Tests.
I feel as if I have completed this thesis with many more questions unanswered than I began with, and with many of the questions that I have attempted to answer answered unsatisfactorily. I have hinted throughout that I believe, ultimately, that we cannot really know what the future results of our continuing development of music technology will be, just as speculation concerning any aspect of technology is similarly limited. I hope that I have, however, managed to describe my own concerns with current 'classical' electroacoustic music and outlined a basic approach that may prove significant in its future development. I am certain about one thing - that as time passes there will be more and more interaction between acoustic musicians and music technologists, and that this will bring increasingly interesting results in this area. I feel that the limited number of true 'cross-overs' - that is, performing musicians who are prepared to take on the challenge of programming - is bound to increase as interest in and knowledge of computing expands and as it becomes easier to write for computers. If I have a concern, it is that major software developers may take over the area to such an extent that, in the manner described above, they get into a position where they can dictate the music and sounds with which we should express ourselves. Under these circumstances I feel it is in the interests of all of those involved in this fascinating and relatively untravelled area to maintain our independence and critical awareness of the tools offered to us.
It will probably be clear to many that in taking the argument concerning creative software to the extremes implied by developments in artificial intelligence, we are facing some very difficult questions. While some may feel that this is going too far too fast, and that such questions may only be appropriate in the distant future, I would suggest that to some extent the advent of the generally available PC marks a truly major leap into this area, and one which it is not possible to ignore. The questions do not solely involve music but, I believe, as must be clear from the above, the whole issue of Artificial Intelligence and moral/aesthetic responsibility, and even the nature of knowledge and intelligence as we have come to understand them. Some will find such issues both difficult and disturbing.
Douglas R. Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid, Harvester, 1979; Vintage Books, 1980
Roger Penrose, The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics, Oxford University Press, 1989
Michael Hall, Harrison Birtwistle, Robson, 1988 (p45)
Interview with Harrison Birtwistle, BBC Radio 3 Composer of the Week, 1989?
Robert Sherlaw Johnson, Messiaen???