• Unthinking Things

    'Unthinking Things' is a cross-domain, algorithmic, generative and dynamic composition/performance based around concepts investigated by Bishop George Berkeley in 'A Treatise Concerning the Principles of Human Knowledge' (1710). The treatise investigates the nature of things, dividing them into spirits, which have agency in the world, and everything else, the latter being 'unthinking things' which can therefore only ever be the object of our perception. Of particular interest is the way in which such counter-intuitive views have not only survived but flourished during the past century, especially in their implications for the idea of the 'meta-author': a creative work having the same relation to its author as the world, its spirits and ideas, have to Berkeley's conception of God. This seems to me to reflect metaphorically the processes involved in algorithmic composition and some forms of performance using technology. 'Unthinking Things' uses algorithmic, automatic processes which structure and articulate aspects of the whole, both en masse and in detail. I also use live and recorded performance data to generate cross-domain expression, for instance producing sonic textures formed by the sounds of basic materials such as stone, wood or metal ('unthinking things') from readings and recordings of Berkeley's texts. In this way and others, the piece interrogates the nexus between the written and the spoken, the notated and the improvised. The version performed in March 2017 is exploratory, intended to help answer some technical questions which might allow for an expanded and more adventurous event in the future. 'Unthinking Things' is composed using SuperCollider and INScore software.
  • Edge Violation

    clarinet(s), computers, and projections, 2016. Music and programming: Richard Hoadley; clarinets: Ian Mitchell; text: Phil Terry.
    • Performance/workshop: The Boiler House, London Metropolitan, Thursday October 15th 2015 NB This performance/workshop was cancelled.
    • First performance at Hoadley, Hall and Brown, Saturday April 23rd 2016, Anglia Ruskin University.
    • Performance at the TENOR conference, Cambridge, May 28th 2016 - Ian Mitchell, clarinet(s)
    • Performance at Electronic Visualisation and the Arts, Covent Garden, London, 12th July 2016
  • Choreograms

    Choreograms (2015-6, in progress) is a music-text-dance piece: automatic music for dancers, musicians, computer and live score projection. Music and programming: Richard Hoadley (amongst others); text: Phil Terry; choreography: Jane Turner. http://rhoadley.net/comp/choreograms/
    • Performance at Colchester Arts Centre, Wednesday March 8th 2017
    • Performance at Conway Hall, Friday March 3rd 2017
    • Performance at Chelmsford Festival of Ideas, Saturday November 12th 2016
    • Performance at The Globe, Albany Road, Cardiff, Thursday October 6th 2016
    • Performance (Semaphore/Choreograms chimera) at AHRC Commons, York University, 21st June 2016
    • Performance at New Cut, Halesworth, Suffolk, Saturday 7th May 2016
    • Performance at Hoadley, Hall and Brown, Saturday April 23rd 2016, Anglia Ruskin University
    • Performance at the Early Dance Circle Biennial Conference, Friday 8th April 2016, High Wycombe
  • Semaphore

    Listen to Semaphore
    Watch Semaphore

    Semaphore (2014-15) works between dance, music and text. It is a collaboration between the choreographer Jane Turner, the poet and writer Philip Terry and the musician, composer and technologist Richard Hoadley. The primary focus is on live processes: data from the dancers' movements are used to trigger and modulate text, audio and music notation. This is in turn performed and in some cases fed back to the dancers, whose movements are then influenced by the music and text, and so on. Ideally, there is a balance between a gesture (whether of movement, music or text) and its resulting translation: one that is not too trivial, but also not so remote that the origin and its result do not feel connected at all.
    Loie Fuller apparition. Photo: Chris Frazer Smith
    Steve Mithen's Prehistory of the Mind suggests that it is natural for the human imagination to think creatively across domains: people can choose to imagine music that accompanies actions, although how this happens is not understood technically. There is evidence that cross-domain thinking is at the heart of creative activity; this practice-based research investigates this hypothesis. There has been significant research into efficacious methods of mapping one circumstance onto another, and more recently, the idea of mapping and its aesthetic value has itself become the focus of investigation. Increasingly, as a result, researchers and performers have been investigating the counter-intuitive idea of less predictable forms of mapping (a reaction to the phenomenon of 'mickey mousing'), where neither performer nor coder/composer is aware of the full repercussions of their behaviours. The effect of certain actions on the predictability and purpose of systems and structures of performance, including the performers' intentionality and virtuosity, is fundamental to this research. The complexity of these investigations can make it seem as though research into any further attempted integration of expressive domains should pause until we are clearer about the current situation. This assumes, though, that the idea of static mapping structures for each new interface, perhaps imitating acoustic musical instruments, is a feasible, practical and aesthetically desirable goal. Composers, performers and researchers are now investigating 'composable environments', where the aesthetic goal is the production of rewarding, stimulating and challenging compositions and environments rather than tool-like new interfaces. This project also presents practice-led research implementing and investigating these ideas and issues. It charts the development of live work in dance/movement, music/audio and music notation and, in addition, considers the challenges arising from the semantic structures inherent in text.
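The kind of mapping discussed above can be sketched in miniature. The Python fragment below is illustrative only: the smoothing constant, the tanh warp and the pitch mode are my assumptions, not the piece's actual (SuperCollider) code. It shows one way a mapping can be made neither trivial nor opaque: a continuous movement feature (speed) is warped and smoothed into amplitude, while another (height) is quantised to a scale rather than tracked continuously.

```python
import math

def smooth(prev, new, alpha=0.2):
    """Exponential smoothing: keeps the mapping responsive but not jittery."""
    return prev + alpha * (new - prev)

def map_movement(speed, height, state):
    """Map two movement features to two sound parameters.

    A direct (linear) mapping risks 'mickey mousing'; here speed is
    warped through a curve and smoothed, so the gesture and its sonic
    result stay connected without being one-to-one.
    """
    state["amp"] = smooth(state.get("amp", 0.0), math.tanh(2.0 * speed))
    # Height chooses a pitch from a fixed mode rather than a continuous
    # glissando -- another way of loosening the mapping.
    mode = [0, 2, 3, 5, 7, 8, 10]  # illustrative scale, not the piece's
    degree = int(height * len(mode)) % len(mode)
    state["midinote"] = 48 + mode[degree]
    return state

state = {}
for speed, height in [(0.1, 0.2), (0.8, 0.9), (0.4, 0.5)]:
    state = map_movement(speed, height, state)
```

In practice such a mapping would run once per sensor frame and feed a synthesis or notation engine rather than a dictionary.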

    Semaphore, October 2014, performance 1 from Richard Hoadley on Vimeo.

  • Semaphore (Cardiff, 9th July 2015)

    Images of our Semaphore performance at Cardiff M.A.D.E. Gallery, courtesy of Sarah Vaughan-Jones
  • How To Play the Piano (from Piano Glyphs)

    Listen to How To Play the Piano
    Watch How To Play the Piano (on another site).

    #howtoplay
    http://rhoadley.net/how

    How to Play the Piano (2015) is a version of a passage originally from the dance-music-text piece Semaphore by Richard Hoadley (music and programming), Jane Turner (choreography) and Philip Terry (poetry) (http://rhoadley.net/semaphore). It uses a live audio analysis of the reading of an original piece of poetry to generate audio and, in 88 Notes, live notation to be played simultaneously by a pianist. In the first part of the piece, the poem is algorithmically remodelled textually, graphically and aurally and orally.
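A minimal sketch of the speech-to-notation step described above, assuming per-frame pitch and amplitude estimates are already available from an analysis stage. The amplitude gate and the note-merging rule are illustrative assumptions, not the piece's actual method.

```python
import math

def frames_to_notes(frames, amp_gate=0.1):
    """Turn (pitch_hz, amplitude) analysis frames into notated events.

    Frames below the amplitude gate are treated as silence; audible
    frames are quantised to the nearest equal-tempered MIDI note, and
    consecutive identical notes are merged into one longer event.
    """
    notes = []
    for hz, amp in frames:
        if amp < amp_gate or hz <= 0:
            continue
        midi = round(69 + 12 * math.log2(hz / 440.0))
        if notes and notes[-1][0] == midi:
            notes[-1][1] += 1          # extend the previous note
        else:
            notes.append([midi, 1])    # [midi pitch, duration in frames]
    return notes

# Spoken-word analysis is noisy; near-identical frames merge into held notes.
events = frames_to_notes([(220.0, 0.5), (222.0, 0.6), (0.0, 0.0), (440.0, 0.4)])
```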

    More information



  • December Variations

    Listen to December Variations
    Watch December Variations.

    December Variations (2013-14) are automatically generated and notated variations for piano on the score December 1952 by Earle Brown. In a paper published in 2008, 'On December 1952', Brown says: "In my notebooks at this time I have a sketch for a physical object, a three-dimensional box in which there would be motorized elements - horizontal and vertical, as the elements in December are on the paper. But the original conception was that it would be a box which would sit on top of the piano and these things would be motorized, in different gearings and different speeds, and so forth, so that the vertical and horizontal elements would actually physically be moving in front of the pianist. The pianist was to look wherever he chose and to see these elements as they approached each other, crossed in front of and behind each other, and obscured each other. I had a real idea that there would be a possibility of the performer playing very spontaneously, but still very closely connected to the physical movement of these objects in this three-dimensional motorized box. This again was somewhat an influence from Calder: some of Calder's earliest mobiles were motorized and I was quite influenced by that and hoped that I could construct a motorized box of elements that also would continually change their relationships for the sake of the performer and his various readings of this mechanical mobile. I never did realize this idea, not being able to get motors and not really being all that interested in constructing it." This project is an investigation into these ideas, differentiated by the idea of automatically generated notation. Some of the issues arising from the technique include the role of interpretation as opposed to sight-reading, legibility, possible interactions and the role of the graphic score itself, in both practical and theoretical terms, in this new environment. This project is related to the automatic, algorithmic and live notation compositions Quantum Canticorum, Three Streams, The Fluxus Tree, Fluxus and Calder's Violin.
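Brown's imagined motorised box suggests a simple generative model: elements drifting past a fixed reading point, each at its own gearing. The sketch below is one hypothetical way such material could be generated; the constants, ranges and the 'playhead' reading rule are invented for illustration and are not the algorithm actually used in December Variations.

```python
import random

def make_elements(n, seed=1952):
    """Each element: a position, its own speed ('gearing'), and a thickness."""
    rng = random.Random(seed)
    return [{"pos": rng.random(), "speed": rng.uniform(-0.05, 0.05),
             "thickness": rng.uniform(0.01, 0.1)} for _ in range(n)]

def step(elements):
    """Advance every element at its own speed, wrapping at the edges --
    the motorised mobile in continual motion."""
    for e in elements:
        e["pos"] = (e["pos"] + e["speed"]) % 1.0

def read(elements, playhead=0.5, low=36, high=84):
    """One 'reading' of the mobile: pitches for the elements whose span
    currently crosses the playhead, mapped across the piano range."""
    notes = []
    for e in elements:
        if abs(e["pos"] - playhead) < e["thickness"]:
            notes.append(int(low + e["pos"] * (high - low)))
    return notes

elems = make_elements(12)
for _ in range(10):
    step(elems)
reading = read(elems)
```

Each call to `read` yields a different constellation, so the performer (here, the notation engine) confronts changed relationships at every glance, as in Brown's description.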
  • December Mobile

    http://rhoadley.net/comp/december_mobile/
    https://www.youtube.com/watch?v=z0PRihC4Eps
    December Mobile shares its premise with December Variations (above): automatically generated and notated variations for piano on the score December 1952 by Earle Brown, exploring Brown's unrealised conception of a motorised, three-dimensional 'mobile' score. This project is related to the automatic, algorithmic and live notation compositions Quantum Canticorum, Three Streams, The Fluxus Tree, Fluxus and Calder's Violin. The piece is primarily composed in the music programming environment SuperCollider. It also makes significant use of the software INScore and Guido, both currently being developed by the Grame Computer Music Research Lab. - More information on INScore. - More information on Guido.
  • Quantum Canticorum

    Listen to Quantum Canticorum
    Watch Quantum Canticorum (on YouTube).

    Quantum Canticorum is my contribution to the music and dance piece Quantum². It is an interdisciplinary performance in which dance and music interact, using body-tracking technologies and bespoke sensing environments to expand our understanding of the interrelationships between the body, its environment and expression; between science and art, culture and nature. Dance is converted into data which are then used to trigger and modulate expressive algorithms, generating in real time both audio and music notation, the latter also performed live. Although the piece is generated live each time it is performed, its duration remains approximately 10 minutes. The event is led by composer Richard Hoadley and Turning Worlds Dance Company, choreographer Jane Turner. It is part of the Quantum² project, which is supported by Arts Council England. Music: Richard Hoadley; dance: Jane Turner.
  • The Fluxus Tree

    The Fluxus Tree is an automatic composition centred around interactions with a collection of experimental interactive sculptures. Dancers (or anyone else) interact with the sculptures, creating sensor data which generates electronic sounds and live music notation. This live notation is then played by the composer and occasional (and excellent) 'cellist Cheryl Frances-Hoad.
  • Calder’s Violin

    'Calder's Violin' is a composition for violin and automatic piano. The music is algorithmically generated, including the violin part which is notated live as the piece progresses. The general textures and references of the music are intended to be predictable, but detail is new each time: an attempt to emulate in the medium of electronically generated music the 'mystifyingly exquisite variation' of performance on traditional, acoustic instruments. Some of the material for Calder's Violin has been previously developed for the on-going dance and music project 'Triggered'.
  • Triggered

    Somewhere between improvisation and composition, art and science, lies Triggered - a dance-music-digital performance that builds on the Cage-Cunningham legacy of interaction between music, dance and technology. Dancers initiate music by interacting with free-standing and suspended sculptures. Sound and movement evolve in response to feedback, producing a sophisticated, highly charged performance. Performing, choreographing, composing and building the production are composers Cheryl Frances-Hoad, Tom Hall and Richard Hoadley, and choreographer Jane Turner, with dancers David Ogle and Ann Pidcock. Special guest composers and performers are Sam Hayden and Jonathan Impett. http://rhoadley.net/triggered
  • One Hundred and Twenty-Eight Haikus

    http://rhoadley.org/sounds/128/128HaikuLive.m4a

    One Hundred and Twenty-Eight Haiku is based on two developments from 2009: the generative composition/performance One Hundred and Twenty-Seven Haiku and the hardware and software performance tool Gaggle. These are joined by two newly developed experimental devices, Touchtree and Gagglina, and amalgamated into a performance which is in turn improvised, composed and automatically generated. As a composer, Richard Hoadley has in recent years focused on investigating the use of technology in the compositional process: the nature of indeterminacy in music and its aesthetic and philosophical ramifications, and the effect of the interface, in its different forms, on the creative process. rhoadley.net - 127 Haiku Audio (128KB, 11MB) - 128 Haiku Audio (192KB, 16MB)
  • One Hundred and Twenty Seven Haiku

    One Hundred and Twenty-Seven Haiku (2009) is constructed using SuperCollider. It is a development of pSY and its products The Copenhagen Interpretation (1998-1999) and Ambience (2002), and of the more recent Many Worlds (2008). The former used custom software written in Visual Basic to control Yamaha SY synthesisers; the latter used SuperCollider to control similar TG77 synths. One Hundred and Twenty-Seven Haiku uses SuperCollider to do everything: design and generate the sounds as well as decide when to play them and what they should play.
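The description above, in which one program designs the sounds, decides when to play them and decides what they should play, amounts to a generative event scheduler. The toy version below shows that separation of concerns in Python rather than SuperCollider; the pitch pool, weights and inter-onset times are invented for illustration.

```python
import random

def haiku_events(n_events, seed=127):
    """Generate a timed event list: WHEN (inter-onset time), WHAT
    (pitch from a weighted pool) and HOW (amplitude).

    In the pieces described above this role is played by SuperCollider;
    plain Python stands in here for the scheduling logic only.
    """
    rng = random.Random(seed)
    pool = [60, 62, 65, 67, 70]   # illustrative pitch pool
    weights = [4, 2, 3, 2, 1]     # rarer notes near the top
    t = 0.0
    events = []
    for _ in range(n_events):
        t += rng.choice([0.25, 0.5, 0.5, 1.0])   # when: inter-onset time
        pitch = rng.choices(pool, weights)[0]     # what: weighted choice
        events.append((round(t, 2), pitch, rng.uniform(0.1, 0.9)))
    return events

events = haiku_events(8)
```

Because the choices are drawn afresh on each run (here made repeatable only by the fixed seed), every performance of such a score differs in detail while keeping its overall character.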
  • The Copenhagen Interpretation

    The Copenhagen Interpretation: automatic music for four computers and four synthesisers, 1999. What happened at 8:00pm, Sunday 20th June 1999, in the Cambridge University Faculty of Music Concert Hall situated in West Road, Cambridge, marked the culmination of about two years' work. At 8:00pm, more or less precisely, four people each activated the programme called pSY on one of four standard IBM-compatible PCs. Each PC was controlling a Yamaha SY77/99 synthesiser. Activating the play button told pSY to begin playing a score called The Copenhagen Interpretation. Each of the SYs' outputs was routed to two of the four speakers set around the hall (see Figure 1). I mixed the four from the centre of the auditorium. This mixing was, apart from the initial 'click', the only part of the performance directly under human control. The 'piece' they would perform was not in any completely determined format. Indeed, apart from at five or six moments in the piece, I could not be entirely sure what the programme's output would be. The Copenhagen Interpretation is named in honour of the work undertaken on quantum mechanics by Niels Bohr et al. during the first part of the twentieth century. Weirdly, and after the name had been chosen, I discovered that physicists refer to the quantum state of a system by the Greek letter psi. I had been working on a project to create a concert piece for live computers and synthesisers, where the computers were, at least to some extent, autonomous during the performance. Of equal importance, I wanted a piece which did not display typically algorithmic tendencies (loops, ambience, minimalism), but had a sense of teleology. In a sense, it was an experimental musical Turing Test with original, non-stylistic music. (As far as this idea is concerned, the performance is still too controlled, but then I was preparing for a live public performance.)
As a composer of acoustic music, when it came to experimenting with the electro-acoustic environment, I was worried about the idea of something being finalised in a static way, such as a single, definitive version petrified on a compact disc or digital tape. Once it was done, that was that - apart from differences in audio mixing and in environmental acoustics, there could be no further development without re-writing or re-constructing the entire piece. This is generally either extremely inconvenient or impossible. The idea was to build diversity into the very construction of the piece, so that the result would be different according to the settings of the programme each time it was played. More than this, the piece was not to have a formalised, final version. By its very construction, the programme/piece has no precise version 'written' anywhere, in the code or the settings. The concept of 'rewriting' a piece each time it is performed in order to achieve this is neither appealing nor intellectually or artistically economical - a live performer does not need to relearn a work or, indeed, an instrument whenever the performer re-interprets a piece. It was also a different prospect from revising a 'written' score (although I personally find this, too, a difficult and unpleasant activity). Through this process it became clear that when writing acoustic music I relied heavily on the performers themselves to 'breathe life' into a score. It seems unthinkable to me that a composer would expect or desire a performer to play a piece identically each time. Performance is an interactive process which, even in cases where one hardly knows or only briefly communicates with the conductor or performers, adds immeasurable complexity and subtlety to the score. Similarly, it became clear that in reality when any listener listens to a live performance, their reactions depend quite strongly on the performance as well as on how well they know the piece.
There are, of course, many aspects of a live first performance that will be unexpected to the listener, even if that listener is the composer. One of the intriguing aspects of developing pSY was trying to incorporate the feeling or the atmosphere of this interpretation while maintaining processes that would be fast enough to allow effective real-time performance. The sounds created by the pSY/SY partnership, especially when allowed to meander in their own way, often seem rather strange: as if they are communicating, but in some alien, inhuman tongue. Some have suggested that they are what you might imagine hearing if you set an antenna to receive signals from some distant point in space and intercepted an alien communication. This impression is only heightened during a performance, where the static and immobile pairs simply sit and 'sing'. Their 'expressions' are the same whether they are singing gently or madly bellowing. (This in itself has certain difficulties, similar to those of more standard 'press play' pieces, concerning what you look at when listening and, at the conclusion of the piece, whom you are applauding and for what reason. These questions and others need answering elsewhere, although during the recent performance the addition of a large display of a dynamic sonic analysis by the (Macintosh) 'Sonogram' programme did a lot to keep listeners entertained. On the other hand, there are reasons for feeling that these particular visual stimuli, when not a direct result of the musical activity, such as the sight of a live musician performing, are rather more of a distraction from the music than a help to it.) There were also several and diverse practical and technical challenges posed by the project. Over the years standard acoustic instruments have developed beyond any single person's or group's ownership.
Individually or in often pre-defined groups, they carry with them a cultural heritage which can belong to anyone should they choose to learn about and understand them - these are basic 'tools' of music and, beyond their standard limitations, composers can make them more or less their own as countless others have done before. This may be the case with some electronic instruments (the Moog, the EMS), although arguably, for a number of reasons, none have even approached the status and tradition of even comparatively recently developed acoustic instruments (for instance the saxophone). It is most definitely not the case with computer software - surely another 'tool', although maybe not precisely equivalent in nature. One principal feature of the latter during the last ten to fifteen years has been the extraordinary pace of development surrounding computers; the technology and the software must struggle hard to survive the forces of competition, obsolescence, commercialism and popular fashion. In addition, synthesisers and software use methods of sound production which are different, separate and often experimental, and these, too, suffer from the same effects of commercialism, competition, etc. How, in this environment, can any piece of software really be felt to 'belong' to anyone, apart, that is, from the authors themselves? I would argue that it cannot, and that this is a factor in deterring many people from the medium itself. The only possible exceptions to this are those who have their own personal set-ups, often involving idiosyncratically ancient versions of software and equally peculiar selections of equipment. But even in this case, it is the set-up as a whole to which a sense of ownership attaches, not the software.
Having experienced quite a lot of this development, I feel this sequence of competition, obsolescence and fashion quite acutely, and feel an unease when claiming responsibility for a piece when I myself do not necessarily understand the processes that lie in the background. For some reason I do not feel the same when writing music for a flute, a violin or another 'pre-defined' acoustic instrument. It is my opinion that a significant part of the future of music technology will lie in the development of a variety of 'intermediate' programming tools that will effectively perform this task. Where the different levels of programming lie, and where in this hierarchy a sense of the aesthetic appears, is quite a complex area, and I will be discussing it further below. Ultimately, this is an area requiring discussion concerning possible links between programming and composition.
  • A Continual Snowfall of Petrochemicals

    automatic music for computers and Yamaha SY synthesisers, 1998/9

    'And for all its breathtaking size and novelty, the biosphere of Jupiter was a fragile world, a place of mists and foam, of delicate silken threads and paper-thin tissues spun from the continual snowfall of petrochemicals formed by lightning in the upper atmosphere. Few of its constructs were more substantial than soap bubbles; its most terrifying predators could be torn to shreds by even the feeblest of terrestrial carnivores...' - Arthur C. Clarke, 2010 (Grafton, 1988)

    http://rhoadley.net/comp/petrochemicals.php