Arpeggiator and pSY: Two Preludes in Predictability

1: Development

Arpeggiator grew from an idea developed for an undergraduate project in simple MIDI implementation and programming. The idea was to imitate the arpeggiators commonly associated with older, analogue synthesisers, which produce a series of pitches related to an 'original'. It was, and is, an extremely simple idea. You take a pitch as your starting point. Originally, this was played in, as would have been the case with an analogue arpeggiator. The programme takes the input and, according to certain conditions, creates another pitch. It continues to create more until the specified number of notes for the first arpeggio is reached.

To be more specific, you may decide that you want to create four arpeggios, each containing three notes. You decide that, as is common with arpeggios, you want all the notes to rise in pitch. You decide that (and this is where Arpeggiator begins to differ from analogue arpeggiators) you want the arpeggio to contain a combination of minor and major thirds, which will create a quasi-tonal, fairly 'soft' chord. Note that, in the current version, you can choose either to have the programme decide which of these intervals is used, or to give a certain weighting to one. There are four possible combinations of three-note arpeggio, which we can call types (a) to (d):

(a) c-eb-gb; (b) c-eb-g; (c) c-e-g; (d) c-e-g#

which you could generalise into the intervals themselves (here the number is the interval and the sign its qualifier, so '3+' is a major third, '4' would be a perfect fourth, '5+' an augmented fifth, etc.):

(a) 3-,3-; (b) 3-,3+; (c) 3+,3-; (d) 3+,3+

With the 'ordinary' distribution and no weighting provided, you will get a variety of these in your output of four arpeggios. You cannot be certain which you will get, in which order you will get them, whether you will get all of them, or whether you will simply get the same one repeated four times.
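The generation process just described can be sketched as follows. This is a minimal illustration rather than Arpeggiator's actual code: the function names and the single `major_weight` parameter (standing in for the programme's interval weighting) are assumptions.

```python
import random

# Intervals in semitones: 3 = minor third ('3-'), 4 = major third ('3+').
# `major_weight` is an assumed stand-in for Arpeggiator's weighting control:
# 0.5 gives the unweighted 'ordinary' distribution.
def make_arpeggio(start_note, num_notes, major_weight=0.5, rng=random):
    """Build one rising arpeggio as a list of MIDI note numbers."""
    notes = [start_note]
    for _ in range(num_notes - 1):
        interval = 4 if rng.random() < major_weight else 3
        notes.append(notes[-1] + interval)
    return notes

def make_arpeggios(count, num_notes, start_note=60, major_weight=0.5):
    """Generate `count` arpeggios, all (as in the first version) from middle c (60)."""
    return [make_arpeggio(start_note, num_notes, major_weight)
            for _ in range(count)]
```

With the weighting forced to one extreme, `make_arpeggio(60, 3, major_weight=1.0)` always produces type (d), c-e-g#; at the default 0.5 setting, repeated runs give an unpredictable mixture of types (a) to (d), exactly as described above.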
Obviously, however, the result depends on certain probabilities. Under the current settings, all arpeggios will be based on middle c. You may then decide that you want the opening note to change as well; again, you can choose a variety of movement for this. Ultimately, you can control the following parameters:

* The range of numbers of notes in each arpeggio
* The range of intervals within each arpeggio
* Up to two specific intervals within each arpeggio, with weightings
* The start point of each arpeggio relative to the previous one
* Whether the arpeggio moves directly up from the initial note, down from it, or in either direction according to a weighting
* The (relative) temporal spacings within each arpeggio, including, optionally, up to two specific ranges of temporal spacing which can be turned on or off within the programme
* The range of MIDI velocity for the arpeggio (or randomise these)
* The likelihood of a second note accompanying any generated note, according to certain other parameters
* The MIDI output channel of each note
* The number of arpeggios
* The (relative) duration between each arpeggio

Clearly, what initially begins as something quite simple can, in relatively few stages, become really rather complex. What begins as a simple one-in-four chance of any given combination of intervals soon becomes one possibility among an astronomical number. To illustrate: as we have seen, with a three-note arpeggio, when all movement is upwards in pitch, each arpeggio has the same starting note, and the duration and velocity of each note are ignored, there are four possibilities. If the direction is made variable, there are 16 possible variants of each three-note arpeggio. If the number of notes in each arpeggio is increased to four, and/or if the variety of intervals available to each arpeggio is increased, then the number of possibilities soon becomes extremely great.
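The counting argument above can be checked by enumeration. A minimal sketch, with illustrative encodings (3 and 4 semitones for minor and major thirds, a sign for direction):

```python
from itertools import product

# A three-note arpeggio is determined by its two successive intervals.
# Direction fixed (upwards only): each interval is a minor or major third.
fixed_direction = list(product([3, 4], repeat=2))
assert len(fixed_direction) == 4   # types (a)-(d) in the text

# Direction variable: each interval can also go up or down independently,
# so each step has 2 sizes x 2 directions = 4 choices.
variable_direction = list(product([+3, -3, +4, -4], repeat=2))
assert len(variable_direction) == 16
```

Extending either list (four-note arpeggios, more interval types) multiplies the counts again, which is the combinatorial explosion the text describes.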
The interesting thing here, however, is not the level of variety available, but the way in which, within this variety, structure can still be quite clearly heard, as it were embedded into the sequences of notes. It becomes clear relatively soon that what is of principal significance here are issues of general control: the amount of movement, the general pitch distribution of notes, the spacing of arpeggios relative to one another, the type(s) of sound used to respond to the MIDI code generated.

Another thing became apparent. In general, the parameters listed above were identified and implemented according to my own feeling of aesthetic requirements. It became dull to hear all the arpeggios moving in one direction, so the ability to generate a tone above or below the previous one was implemented. This still meant that each arpeggio began on the same tone, which, although not of itself aesthetically displeasing, created a limitation that I felt would impede the possible aesthetic value of the outcome. In a sense, then, it could be said that each of these changes in implementation was to do with widening the variety of possible outcomes. In fact it is precisely the opposite. The initial parameters were very extreme: take the total number of possibilities available before the programme begins, and from these select an extremely limited range according to certain (inevitably extreme) rules. This creates output which is highly predictable; as has been seen above, with the original settings there were only four possible outcomes, although even here these four could appear in any variety and in any order. Subsequently, from this extreme start, we begin to increase the possibilities again, all the time shaping them according to our own aesthetic values. This is equivalent to controlling, or learning to control, the unpredictability inherent in the system.
It is in the nature of computing that, having (hopefully) implemented a certain routine, it is tested - in this case by pressing the 'play' button. There are a number of possible outcomes. By far the most probable, especially with a routine of any complexity, and most unfortunately, is the generation of an error message, indicating that some syntactical problem has occurred that the programme is unable to deal with. Probably the next most probable outcome is that the programme will run, but in an unexpected way. For instance, instead of choosing a new start note according to the settings, the programme chooses one from way outside the given range, or it slips irrepressibly up or down until it disappears from the audible range and then, although the programme is still running, nothing. This would indicate some sort of logical error, either on my part (not an unusual occurrence) or on the part of my programming. In other words, I would have mistakenly predicted that a certain process would create a certain output: I had implemented it correctly but mistaken the result. Far more usually, one mistakes the programming - forgetting to add to a counter in a loop so that the programme heads for infinity, or some such. One of the least likely outcomes is that at the first running the programme does what I expect without any problems. Of course, in this case, what I expect is itself a variety of possibilities. By definition, I would probably not know precisely what was going to happen, merely a generality - although a generality precise enough for me to know fairly quickly whether it was 'wrong' or 'right'. This, I think, is quite an important point, and goes some way to explaining why the output has, to me as well as to a number of others who have heard it, some sort of perceivable structure, even though there is no direct 'human' control beyond the initial press of the 'play' button.
More important, maybe, is the fact that, in spite of this apparent complexity, all the processes involved are very simple - and that we are not dealing with the sounds being triggered by the process. This was not the case with pSY.

pSY

It seemed a fairly logical progression to move from generating general MIDI events in Arpeggiator to the system exclusive events of pSY. The disadvantage, of course, is that the programme is specific to the Yamaha SY77 and SY99 synthesisers, but the advantages are the SY's particularly powerful implementation of Frequency Modulation (FM) and the scale of its system exclusive implementation. The SY series of synthesisers was discontinued (in 1996?) and is now seen by many, particularly younger people, as rather archaic and out of date when compared with more recent, usually sample-based synthesisers. Frequency Modulation is itself often seen as rather archaic, being one of the first commercially available methods of digital synthesis, through the Yamaha DX7 in 1983. It is certainly true that FM has a particular flavour, but much of this can be seen as a direct result of its power and flexibility. Whatever the SY's strengths or weaknesses, there is little doubt that although the general method of pSY could be used with other synthesisers, the results would not be as remarkable.

Although well acquainted with both FM and the SY synthesiser, I was certainly not clear about what would happen when beginning the development of pSY. Arpeggiator was interesting, again, perhaps more for the fact that the results of the process tended to have the feeling of being structured, at least if you were used to the electro-acoustic medium. A number of students and colleagues experienced the same feeling, always bearing in mind that the programme was being auditioned with the understanding that it was a computer programme and that, at least in part, it worked according to probabilities.
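For readers unfamiliar with the technique, the textbook two-operator form of FM (after Chowning) can be sketched as below. This is a deliberate simplification: the SY77's six-operator implementation is far more elaborate, and none of the names here are Yamaha's.

```python
import math

def fm_sample(t, carrier_hz=220.0, mod_hz=110.0, index=2.0):
    """One sample of simple two-operator FM: a modulating sine wave
    varies the phase of the carrier. `index` (the modulation index)
    controls how rich the sideband spectrum becomes."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * mod_hz * t))

# Render one second of the signal at 44.1 kHz (as a plain list of floats).
sr = 44100
signal = [fm_sample(n / sr) for n in range(sr)]
```

The characteristic FM 'flavour' the text mentions comes largely from how the sidebands generated by this phase modulation move as the index and frequency ratio change.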
Thus it must have seemed possible, or even likely, that applying the same principles to the synthesiser's voices themselves would create something interesting at the very least. It was in this spirit - as well as one of intrigue as to how well I personally would be able to undertake such a programming task, and of learning more about the structure and implementation of System Exclusive messages - that I began the work.

SY Voice Structure

The SY's voices are constructed from up to four 'elements', two of which can utilise FM structures and two 'waveforms' or samples. Naturally, the FM elements are considerably more editable than the samples. Indeed, it may well be substantially because of this flexibility that pSY works as it does. A single FM element contains six 'operators', which are essentially Yamaha's digital equivalent of oscillators, and a variety of parameters that operate on all six operators, such as pitch EG, Element Detune, Filters, LFOs, etc. Each operator contains, as one would expect, a series of envelope generators, wave generators, loop points, etc. Currently, pSY for Macintosh (still in development) can fully utilise about 22 of 45 individual parameters.

Finding out which parameters were the most appropriate to use for this process was clearly an important part of development, but it is worth pointing out that development was (and to some extent still is) very much a creative rather than a purely technical process. In a very real sense, as with Arpeggiator, at each stage I had to decide whether the circumstances meant that development was worth continuing. This is, in a sense, why development of Arpeggiator has ceased, at least for the present: it appeared that, because it used only standard MIDI messages, it would remain limited to the generation of particular types of texture. So, with pSY, it was a matter of finding out, initially, whether the principle would work, first technically and then creatively.
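The voice structure just described might be modelled as follows. All the field names here are illustrative assumptions, not Yamaha's actual parameter names, and the parameter set is heavily truncated (an operator carries far more than is shown).

```python
from dataclasses import dataclass, field

@dataclass
class Operator:
    # Per-operator parameters: envelope generator rates/levels, wave, etc.
    # Names and ranges are illustrative only.
    eg_rates: list = field(default_factory=lambda: [0] * 4)
    eg_levels: list = field(default_factory=lambda: [0] * 4)
    frequency_ratio: float = 1.0
    output_level: int = 99

@dataclass
class FMElement:
    # One FM element holds six operators plus element-wide parameters
    # (pitch EG, detune, filters, LFOs, ...).
    operators: list = field(default_factory=lambda: [Operator() for _ in range(6)])
    detune: int = 0
    lfo_speed: int = 0

@dataclass
class Voice:
    # An SY voice combines up to four elements (FM and/or sample-based);
    # pSY, as described below, works on a single FM element.
    elements: list = field(default_factory=lambda: [FMElement()])
```

The point of the model is simply to show the scale of the editable space: six operators per element, each with its own parameter block, before the element-wide parameters are counted.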
Similarly, once it was clear that editing certain parameters achieved something worthwhile, it became worthwhile choosing other parameters to exploit in this way.

pSY's Operation

Currently, pSY for Macintosh utilises only one FM element. As a matter of historical accident, this FM element must be element 1 of a two-FM-element-only SY voice. There is no reason why both elements cannot be used (nor, for that matter, why the waveforms cannot be used, other than time), and indeed it is hoped that the Windows version will use both FM elements. However, as far as creating and learning how pSY works is concerned, one FM element is quite complex enough.

Effectively, pSY works in two basic ways. The first is in 'editing' the element itself, and, using a method similar to Arpeggiator's, this is simplicity itself. Smoothing over a few rough edges which will be corrected in the forthcoming implementation: each of the twenty-two utilised parameters in each operator may be selected or deselected individually. A selected parameter is open to editing. The programme keeps a record of selected parameters and, on each pass of the loop, chooses one at random. The value of this parameter is then altered according to a variable value, "% change" (located in the lower left corner of the screen). It is important to note that this is a maximum: the programme will choose a value at random from the range between 0% and whatever the setting is. The programme updates the screen and then outputs the system exclusive message that edits the synthesiser itself. Having done this, we clearly need to hear the sound, and so the second part of the operation begins. Effectively this is, again very simply, an algorithm dealing with standard MIDI messages - noteOn, pitch and velocity, as well as effects, etc. In reality this is almost the opposite of Arpeggiator: a rather basic sequencer. Any design or development as such was once again done on an ad hoc basis.
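The editing pass described above - choose one selected parameter at random, nudge it by a random amount up to the "% change" maximum, and send the result to the synthesiser - can be sketched like this. The function and parameter names are hypothetical, and the actual SY77 system exclusive message format is deliberately not reproduced: the `send` callback stands in for it.

```python
import random

def edit_step(params, selected, max_pct, send):
    """One pass of the edit loop: `params` maps a parameter name to a
    (value, lo, hi) triple, `selected` lists the names open to editing,
    and `max_pct` is the "% change" setting. The actual change is drawn
    at random from 0% up to that maximum of the parameter's range."""
    name = random.choice(list(selected))
    value, lo, hi = params[name]
    delta = (hi - lo) * random.uniform(0, max_pct / 100.0)
    if random.random() < 0.5:     # assumption: nudge up or down equally often
        delta = -delta
    new_value = min(hi, max(lo, value + delta))
    params[name] = (new_value, lo, hi)
    send(name, new_value)         # stand-in for the real sysex output stage
    return name, new_value
```

Repeatedly calling `edit_step` while a separate note-generating loop sounds the voice reproduces, in outline, pSY's two-part operation.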
Getting pSY under control

A typical example of this is the group of options: wait, random of [wait], AllOff%, audition, noteOff, silence. This group of six related functions was created variously, and of 'aesthetic necessity'. One of the principal difficulties with pSY originally (and still now!) is the tendency of the programme to produce long, irrepressible sounds which continue unreasonably and create annoyance. In order to deal with this, over time, the variety of methods listed above was implemented. So, a note or group of notes will be output (equivalent to a rather uncontrolled Arpeggiator), and between this note or group and the next the programme will pause either for the number of ticks (1/60th of a second) given in the wait box or, if the random of? checkbox is checked, for a randomly generated number of ticks in the range from 0 to the value given in the box to the right of the random of? label. Although this reduces the number of events, or at least adds some articulation which is not overly regular, it does not necessarily alter the number of sounds left hanging - nor would there be any way of dealing with notes that are played with no 'note offs'. Consequently, the noteOff checkbox was added, as well as the ability to turn all notes off at certain intervals. This is the AllOff% function. Here, each time the algorithm repeats, a random number is generated, and if it falls within the region from 0 to the value given in the box to the right of AllOff%, a MIDI message turning all the notes off is sent to the synthesiser. This certainly does the job, but it does it rather too well for most purposes: in order to avoid too many notes, you choose to severely disrupt the flow of the generated sound by turning all the notes off. To mollify this, the silence option was created.
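The timing and gating behaviour of this group of options can be sketched as a few simple predicates. The function names are hypothetical; the tick length and the 0-100 random rolls follow the description in the text.

```python
import random

TICK_SECONDS = 1 / 60.0   # the text's tick: 1/60th of a second

def pause_ticks(wait, randomise):
    """Ticks to pause between note groups: exactly `wait`, or, if the
    'random of?' box is checked, a random count from 0 to `wait`."""
    return random.randint(0, wait) if randomise else wait

def all_off_due(all_off_pct):
    """AllOff% gate: on each pass of the algorithm, send 'all notes off'
    if a random 0-100 roll falls at or below the threshold."""
    return random.randint(0, 100) <= all_off_pct

def audition_allows(audition_pct):
    """Audition gate: actually play the generated note or group only if
    a random 0-100 roll is at or below the audition setting."""
    return random.randint(0, 100) <= audition_pct
```

Setting `all_off_pct` high tames hanging sounds at the cost of constantly interrupting them, which is exactly the trade-off that motivated the silence option described next.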
If silence is checked, then there are three options used by the AllOff% function: turn half the notes off, turn one-third of the notes off, or turn them all off (effectively creating silence). Again, these three options are chosen with the help of a random number. Finally, there is the audition option. With this unchecked, all generated MIDI notes are output; with it checked, the note or group of notes is played only if a random number (0-100) is lower than or equal to the value given in the box to the right of audition. It will be clear from this that, with judicious use of each of these functions, quite a reasonable degree of control is possible over which events are played and which are not, and over how long sounds are maintained. It is certainly an option, in any re-design, to make these functions slightly more logical and certainly clearer to operate, but it is equally clear that the final, apparently interactive, result has been arrived at more through experience and aesthetic judgement than through concern over flexibility. There are many more options left unavailable than available, and those chosen for 'curtailment' have been chosen because I have decided that it is in certain directions that I want the music to go. The pitches, velocities and numbers of the notes output are chosen in a similar way.

2: Aesthetics

2.1 The Aesthetics of the Interface

Probably because I am not a trained programmer, having had to learn almost all of what I do know (which is still, to me, frighteningly limited), the problem of the interface came upon me rather by surprise, and in a far from pleasant way. Having come across it, however, it does seem rather extraordinary to me that relatively little time is spent discussing and taking account of this particular problem. Although not of prime concern here, I feel it necessary to outline the problem as I see it, if only to put what follows into context.
Some years ago, and almost by accident, I undertook a course called Computer Systems Architecture. At this point, having just completed my PhD in 'acoustic' composition (I just called it 'composition' then), I felt it appropriate to spend some time becoming more intimately acquainted with technology. I had already been teaching music technology for some years, but in a fairly standard way: here's the software, this is how it works, this is what you need to do with it, now get on with it. I thought that Computer Systems Architecture sounded like a fairly pleasant, technical introduction to how, in precise terms, computers did things - something I could subsequently follow up with more difficult and more technical areas. This isn't precisely what happened. What I had presumed would be a description of which component goes where and how this part communicates with that part was in fact a precise description of how assembly language works in the 8086 series of processors.

I had come across assembly language before, on a rather intellectual and non-practical level. I had worked on a PDP-11 computer where the only inputs and outputs were in the form of text and the command line. I had read Douglas Hofstadter's Gödel, Escher, Bach and so had an intellectual understanding of how a typical computer 'processes' its instructions at various levels before performing them, and I was aware that, at least according to some commentators, this provided an interesting insight into how machines and (possibly) some natural systems worked. However, I had never done any programming at this level - nor, indeed, any programming at any significant level. I had worked briefly with Fortran and slightly more extensively with the Macintosh's HyperCard semi-language, so I at least had an idea of structure, variables, etc. I had used the Music 11 and Csound programmes, although they played no part in the teaching I was doing at that time.
It was fairly ominous, then, when I was told that the principal assessment for this course (a ten-week course, with two hours of taught classes and two hours of supervised 'laboratory' work per week) was to be the development of a programme which, upon loading:

* prompted the user for the input of a number via the screen,
* checked the validity of the input, informed the user if there was a problem and ideally gave a view of what the problem was,
* prompted the user for the input of a second number via the screen,
* checked the validity of the input, informed the user if there was a problem and ideally gave a view of what the problem was,
* added the two numbers together,
* output the result to the screen,
* unloaded the programme correctly on completion.
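For comparison, the specification above - originally to be met in assembly language - amounts to only a few lines in a high-level language. A sketch, with hypothetical function names, where `ask` and `tell` stand in for screen input and output:

```python
def read_number(prompt, ask=input, tell=print):
    """Prompt until a valid whole number is entered, reporting what
    went wrong on bad input (the validation steps in the assignment)."""
    while True:
        text = ask(prompt)
        try:
            return int(text)
        except ValueError:
            tell(f"'{text}' is not a whole number; please try again.")

def add_two_numbers(ask=input, tell=print):
    """Read two validated numbers, add them, and output the result."""
    a = read_number("First number: ", ask, tell)
    b = read_number("Second number: ", ask, tell)
    tell(f"Sum: {a + b}")
    return a + b
```

The gap between these few lines and the corresponding assembly routines - character-by-character input, manual validation, binary-to-decimal conversion for output - is, of course, precisely what made the assignment instructive.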