
Part One: Arpeggiator

[Screenshot of Arpeggiator]

pSY is a development of another programme called Arpeggiator - the latter is currently a semi-independent component of the former. Arpeggiator grew from an idea developed for an undergraduate project in simple MIDI implementation and programming. The idea was to imitate the arpeggiators commonly associated with older, analogue synthesisers, which produce a series of pitches related to an 'original' note in pitch and time.

In terms of its implementation, Arpeggiator is extremely simple. You specify a pitch as your start note. The programme takes the input and, according to certain rules, creates an output. It repeats this process until the specified number of notes for the first arpeggio is reached. As an example, you may decide that you want to create four arpeggios, each containing three notes. You decide that, as is common with arpeggios, you want all the notes to rise in pitch. You decide that, and this is where Arpeggiator begins to differ from analogue arpeggiators, you want the arpeggio to contain a combination of minor and major thirds, which will create a quasi-tonal, fairly 'soft' chord. In the current version, you can choose either to have the programme decide which of these intervals is used, or to give certain probability weightings. In this case there are four possible three-note arpeggios:

(a) c-eb-gb; (b) c-eb-g; (c) c-e-g; (d) c-e-g#.

According to an 'ordinary' distribution with no weightings, your output will consist of a number of these four arpeggios. You are not certain which you will get, in which order you will get them, whether you will get all of them, or whether you will simply get the same one repeated four times. The real result will depend on chance and probability.
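The process just described can be sketched briefly in code. This is a minimal illustration in Python, not the original implementation (which the text does not show); the function name and structure are hypothetical.

```python
import random

# Semitone sizes of the two available intervals:
MINOR_THIRD, MAJOR_THIRD = 3, 4

def make_arpeggio(start_note, num_notes, weights=None):
    """Build one rising arpeggio as a list of MIDI note numbers.

    Each new note adds a minor or major third, chosen at random,
    optionally with probability weightings."""
    notes = [start_note]
    for _ in range(num_notes - 1):
        step = random.choices([MINOR_THIRD, MAJOR_THIRD], weights=weights)[0]
        notes.append(notes[-1] + step)
    return notes

# Four three-note arpeggios starting on middle C (MIDI note 60):
arpeggios = [make_arpeggio(60, 3) for _ in range(4)]
```

With no weightings, every call produces one of the four arpeggios (a)-(d) listed above; which of the four appear in a run, and in what order, is down to chance, exactly as in the example.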

Amongst others, the following parameters may be defined:

I am currently in the process of creating a system of 'patches' - using these, you can force outputs to follow certain routes through these possibilities, so that, if a low velocity is chosen, the duration between arpeggio notes will take a particular value or range, and the number of notes, the MIDI channel, the interval type, etc., may be specified likewise.
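The patch idea might be sketched as follows - a purely hypothetical illustration (the dictionary layout and all names are mine, not the programme's actual patch format): one chosen value, here velocity, is routed to constraints on other parameters.

```python
# Hypothetical patch table: each route maps a velocity band to the
# values or ranges it forces on other parameters.
PATCH = {
    "low velocity":  {"velocities": (1, 40),   "duration_ms": (400, 800), "midi_channel": 1},
    "high velocity": {"velocities": (81, 127), "duration_ms": (50, 150),  "midi_channel": 2},
}

def route(velocity):
    """Return the settings a patch forces for a given velocity, if any."""
    for settings in PATCH.values():
        lo, hi = settings["velocities"]
        if lo <= velocity <= hi:
            return settings
    return None  # velocity falls outside every patched route
```

A low velocity thus forces a long inter-note duration and a particular MIDI channel, as described above.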

Clearly, what begins as something quite simple can, in relatively few stages, become fairly complex. As an illustration of this, as we have seen, with a three-note arpeggio (utilising two intervals), when all movement is upwards in pitch, where each arpeggio has the same starting note, and ignoring the duration and velocity of each note, there are four possibilities. If the direction is variable (up, down, variable, weighted variable), there are 16 possible variants of each three-note arpeggio. If the number of notes in each arpeggio is increased to four, and/or if the variety of intervals available to each arpeggio is increased, then clearly the number of possibilities soon becomes extremely great:

[Table of Possibilities: number of notes (1, 2, 3, 4, 6, 8, 12) against number of intervals]

As an illustration, then, with settings using six notes per arpeggio, a range of six possible intervals and variable directions (in other words the next note can be either above or below the previous one), a single resulting arpeggio will be one of 248,832 possibilities, (remembering that these figures do not include duration and velocity). Compare the above figures to the 479,001,600 possible variants of the twelve-tone row. One of the interesting things here, however, is not the level of variety available, but the way in which within this variety, structure can still be quite clearly heard, as it were embedded into the sequences of notes.
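The figures quoted can be verified directly: six notes per arpeggio means five successive intervals, and six possible intervals in two possible directions gives twelve choices at each step.

```python
import math

# Six notes per arpeggio means five successive intervals; six possible
# intervals times two directions gives twelve choices at each step.
choices_per_step = 6 * 2
steps = 6 - 1
arpeggio_variants = choices_per_step ** steps  # 12**5

# For comparison: the number of orderings of the twelve-tone row.
twelve_tone_rows = math.factorial(12)

print(arpeggio_variants, twelve_tone_rows)  # 248832 479001600
```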

Audio Examples

Of significance here are issues of general control - the amount of movement, the general pitch distribution of notes, the spacing of arpeggios relative to each other, and the type(s) of sound used to respond to the MIDI code generated.

In general, the above listed parameters were identified and implemented according to my own aesthetic requirements. As it became dull to hear all the 'arpeggios' moving in one direction, the ability to direct movement upwards, downwards, variably or in a weighted direction was implemented. This was an improvement, but still left each arpeggio beginning on the same note. Although not of itself aesthetically displeasing, this was clearly a limitation. A relatively simple algorithm (specifying the possible range and direction of movement of the start note) solved this problem to my satisfaction, but even then it was clear that this process of modification to improve detail could in theory carry on ad infinitum: each time the results would become more specific and predictable.

While it may initially appear, then, that each of these changes in implementation concerned widening the variety of possible outcomes, the process is in fact the opposite. Before programming begins one is confronted with an infinite number of possibilities. Any decision at this stage enormously reduces this number, creating output which is substantially more predictable. As has been seen above, with the original settings there were only four possible outcomes, although even here these four possibilities could occur in any combination and in any order. Subsequently, having reduced the possibilities available, we begin to increase the possibilities again according to our own aesthetic values, based on the variety of output achieved with the programme. This is equivalent to controlling, or learning to control, the unpredictability inherent in the system, (it is also, I will argue, a reasonable analogy to the act of acoustic composition). As I shall consider below, it can be argued that it is our idea of the nature of unpredictability that is at fault. Usually, our idea of something that is 'unpredictable' assumes a 'predictable' environment in which this event should occur. If the whole environment is unpredictable, logically, the only 'unpredictable' event would then be one that is predictable! Similarly, when we say we want, for instance, a texture that is unpredictable, we generally mean that we want unpredictability within certain, usually quite precisely defined, boundaries. In other words, what we usually mean is that we want the subjective idea of unpredictability rather than its objective reality.

Bearing all this in mind, how can one know precisely whether an output is correct or not, especially if the principal concern is aesthetic? If the output is satisfying in aesthetic terms, does it matter whether or not the programming is doing exactly what we think? In terms of programming itself this may be a lamentable situation, but in aesthetic terms is this not what has happened countless times to the many artists who have come across new and/or different ways of doing things without fully understanding what they were doing? There were a number of times during this process when I had created an output that I felt was interesting and imaginative, and worried, when I later discovered a bug in the system, that removing the bug would alter the programme's ability to create the previous output. Again, this is a point at which it becomes difficult to judge the aesthetic value against the programming one. If the programme achieves an aesthetically pleasing result, is this not enough? Or does the fact that this result is based on a 'deception', or at least on an ignorance in terms of the programming, damn the result? Presumably the best course is to redefine the previously incorrect function as a custom one, and to offer the correct one as an alternative. I will return to this topic in my conclusions.

Having implemented a certain routine, what should happen, and what does happen when it is tested?

By far the most probable result, especially with a routine of any complexity (and, it feels, especially when I have been responsible for it!), is the generation of an error, indicating that some syntactical problem has occurred. These outputs are usually quite straightforward to deal with, as they usually require no more than the elimination of a typographical error, the proper definition of a variable or the solution of some other fairly basic problem.

Another possible outcome is that the programme will run, but in an unexpected way. For instance, instead of choosing a new start note according to the settings, the programme chooses one from way outside the given range, or the programme slips irrepressibly up or down until it disappears from the audible range and then, although the programme is still running, nothing. This would indicate some sort of logical error, either on my part, (not an unusual occurrence), or on the part of my programming. In other words, I would have mistakenly predicted that a certain process would create a certain output - I had implemented it correctly but mistaken the result. Far more usually, one mistakes the programming - forgetting to add to a counter in a loop, so that the programme heads for infinity, or some such.
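The counter slip mentioned above is the classic case, and can be sketched in a few lines (hypothetical Python names; the guard is simply one way of catching such a runaway loop during testing).

```python
def play_notes(num_notes, max_iterations=1000):
    """Collect note indices 0..num_notes-1 (standing in for playing them)."""
    played = []
    i = 0
    iterations = 0
    while i < num_notes:
        iterations += 1
        if iterations > max_iterations:
            # the loop has 'headed for infinity'
            raise RuntimeError("counter never advanced?")
        played.append(i)
        i += 1   # omit this line and the guard above fires
    return played
```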

One of the least likely outcomes is that at the first running the programme does what I expect without any problems. In this case, what I expect is itself (hopefully) predictably uncertain. By definition, I would probably not know precisely what was going to happen, merely a generality, although a generality precise enough for me to know fairly quickly (although usually only intuitively) whether it was 'wrong' or 'right'. Of course, over time (with 'practice'?) any problems would be solved and from then on the programme should behave with predictable unpredictability.

Although to some this all may appear quite obvious, I feel it is an important point, and goes some way to explaining why the output has, to me at least, as well as to a number of others who have heard it, some sort of perceivable structure, even though there is no direct human control beyond the initial press of the 'play' button. This point concerning the apparent perception of structure will be a recurrent theme in this thesis. In addition, there would seem to be a way in which we can quickly judge whether the material with which we are being presented is concerned with texture or detail or a combination of both. Since Arpeggiator's output is primarily textural, we are not so concerned with detail, as we might be if we were expecting, for instance, something resembling a Beethoven piano sonata. I shall be returning to this rather difficult point later and investigating the idea that these ideas are at least at some level comparable to live acoustic performance.

In spite of this apparent complexity, (although it can be seen that in fact all the processes involved are very simple), Arpeggiator does not deal with the sounds that its output triggers. This is not the case with pSY.



[Screenshot of pSY]

While the results from Arpeggiator were interesting and produced material that was clearly structured without being overly predictable, ultimately, except as a background process, the material was not sufficiently interesting to be used creatively. If one used the material with previously prepared SY multis, one certainly got a hint of something interesting, if still unsatisfying. It was to some extent clear that the basic units on which the programme was operating were too 'large' to be dealt with satisfactorily by this level of unpredictability, and the results tended to become simply textural and dull in detail. Presumably, applying the same principle to the parameters of the FM elements of the SY's voice might produce something with an extra dimension of sonic manipulation missing from Arpeggiator, as from much music constructed using MIDI.

Anyone familiar with frequency modulation, or the SY's implementation of it, will know that the mathematical structures underlying the sounds created are complex and, after a little manipulation, can become confusing to those not expert in the field. From my own experience of teaching I know that a student can sometimes be editing a sound furiously for some time before anything makes any difference to the sound. Then, abruptly, one particular parameter is changed and the whole sound alters drastically. Of course, all this depends upon where one is in the structure of the voice and, because of the complexity that the SY allows, it is not possible to say at any given point which edit will make which difference. If, for instance, you are editing both an SY element's algorithm number and certain operators' (oscillators') envelopes, then without knowing which algorithm is being used the effects of the envelope edit will not be known in detail. Of course, this is not the way that the SY was built to be used! However, in essence, pSY applies Arpeggiator's principles to the FM aspects of any given SY voice.

The immediate problems at this stage were both technical and aesthetic. Although it was relatively straightforward to create structures to edit certain SY parameters 'on the fly', it soon became clear that the results were sometimes grotesquely unpredictable and often equally unpleasant. Sounds would disappear for several minutes only to return suddenly, otherwise entirely unchanged. Sounds could become 'stuck' around some extremely irritating high-pitched squeal and would appear to resist all attempts at rehabilitation. I once left the programme running all night, to see how stable it was (and indeed, to find out what it would come up with), and left my room with the SY producing some not overly pleasant, but at least quite gentle sounds. The next day I was informed that by around two that morning some hideous, ear-splitting noise was emanating from the building - it could be heard from the street! Opposed to this bad behaviour were the occasional instances of wonderful behaviour - the emergence of a patterned sequence as if from nowhere and, occasionally, a lifelike movement from one idea to another. On these occasions I found it extremely difficult to turn the machine(s) off, as I was constantly tempted to wait and see what would come up next.

While I felt that there were clear similarities here between learning how to communicate with and control pSY and some sort of alien life-form, it was equally clear that something more would be needed if I were to be able to take my new-found companion into human company which might not be as tolerant as me!

Applying Arpeggiator to pSY

An SY voice can include a maximum of two FM 'elements', as Yamaha calls them. (A voice may also include two additional 'waveform' (or sampled) elements, but pSY does not implement them). In addition, each voice includes a number of voice-specific settings such as effects, microtune, detune and several others; pSY deals with most of these parameters. Each FM element comprises six 'operators' (the FM equivalent of analogue oscillators), and each of these operators comprises forty-five parameters. The values of these parameters range from simple on/off settings to 0~127. So, in terms of the FM elements alone, there are 540 editable parameters. It is the number of these parameters, together with the complex way in which they are connected (or not), that allows the situation described above, in which many parameters may be radically altered with no perceptible change in the sound of the voice. SY voices are saved as *.syx (system exclusive) files.
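The 540 figure is simple arithmetic:

```python
# Totting up the editable FM parameters described above.
elements_per_voice = 2       # FM elements in an SY voice
operators_per_element = 6    # operators per FM element
params_per_operator = 45     # parameters per operator

fm_params = elements_per_voice * operators_per_element * params_per_operator
print(fm_params)  # 540
```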

Clearly this is a difference from Arpeggiator where, with relatively few parameters, the play loop would routinely take a value from each parameter as a part of the algorithm. This would be neither desirable nor practical bearing in mind the quantity and variety of parameters available to pSY. Instead, a selection takes place. Any number of the available parameters, from one to all (about 700 overall, including voice parameters), can be selected, and each time the play loop is executed, one of the selected parameters is chosen (at random) and edited according to a percentage. The loop then continues and, in a manner similar to, although not as sophisticated as, Arpeggiator's, note pitches, velocities, etc. are chosen and output.
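One pass of such a loop might be sketched like this - a hypothetical illustration only (the function, the toy 'voice', and the parameter names and ranges are all mine, not pSY's): from the selected subset, one parameter is chosen at random and nudged by a percentage of its value range.

```python
import random

def play_loop_step(voice, selected, percent=5, rng=random):
    """Pick one of the selected parameters at random and edit it in place,
    nudging it by up to +/- percent of its full value range."""
    name = rng.choice(selected)
    lo, hi = voice[name]["range"]
    delta = (hi - lo) * percent / 100
    new_value = voice[name]["value"] + rng.uniform(-delta, delta)
    voice[name]["value"] = int(min(hi, max(lo, round(new_value))))
    return name

# A toy 'voice' with made-up parameter names and value ranges:
voice = {
    "algorithm": {"value": 1, "range": (0, 44)},
    "waveform":  {"value": 0, "range": (0, 15)},
    "coarse":    {"value": 8, "range": (0, 31)},
}
selected = ["algorithm", "waveform"]   # the chosen subset
edited = play_loop_step(voice, selected)
```

Each execution of the loop would then go on to choose and output note pitches and velocities, as described above.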

A typical example would be to take an existing FM voice and select a parameter that has a 'colouristic' effect on a sound - for instance, the algorithm number, which controls the way in which the operators are connected to each other; the waveform of each operator; or the coarse or fine frequency. None of these affects the envelope of the sound, only its timbre. In this case, the effect, even with only a single pitch, is considerably more interesting than anything Arpeggiator has to offer, if only because the idea of having a sequence of notes differing only in their timbre (which, of course, can affect our perception of 'pitch') is less usual, at least in terms of MIDI. An additional advantage is that one retains the ability to use the (in electroacoustic terms rather basic) control of pitch and velocity that is the feature of MIDI.

This sort of construction is clearly more sophisticated than Arpeggiator's without altering its fundamentally similar approach. Equally clearly, which parameters are effective depends a great deal upon the nature and construction of the voice itself. With control over all parameters, the number of possibilities open to the voice becomes literally astronomical once all pitch and velocity possibilities are included too. Theoretically, with 300 editable parameters, and if each parameter were simply either on or off, after only four processes the voice can be any one of 8,100,000,000 possibilities. After twelve processes, the total number of possible alternatives rises to 531,441,000,000,000,000,000,000,000,000! More practically, after a similar number of processes utilising only 26 editable parameters, if these parameters were simple on/off devices, the voice can be any one of 95,428,956,661,682,176 possibilities! Of course, the vast majority of parameters can be set to considerably more values than just 0 or 1. All the envelope parameters can be set to a value between 0 and 63, the loop points between 0 and 3, the waveforms between 0 and 15, etc. I will not even attempt to calculate the number of possibilities open to a single voice, but bear in mind that, in addition to these, the number of possibilities noted above as relevant to Arpeggiator also applies to each of them! Clearly, what we are seeing here is not a system for exploiting these possibilities, but a palette of possible 'colours'. This vast palette is a reflection firstly of the sophistication of the SY's FM implementation, and ultimately of the complexity of sound phenomena.
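These figures can be checked directly (Python's integers are arbitrary-precision, so the large values are exact); the model is the text's own, one choice among the editable parameters per process.

```python
# Checking the figures quoted above: n parameters choosable per process,
# over 4 or 12 processes, gives n**4 or n**12 possible histories.
after_four = 300 ** 4
after_twelve = 300 ** 12
with_26_params = 26 ** 12

print(after_four)      # 8100000000
print(after_twelve)    # 531441000000000000000000000000
print(with_26_params)  # 95428956661682176
```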

How should one control this number, and how, under any circumstances, would it be possible to work out any aesthetic theory based on such prodigious possibilities?