My collaborative process has developed over a number of years through working with choreographers in a range of contexts, both professional and educational. I have found that the main factor separating collaboration with choreographers from that with musicians is the necessity to adapt and translate musical vernacular into more general terms that can facilitate effective communication. This is most crucial in the early stages of a collaboration, when most discussions are entirely abstract, and aimed towards establishing a rough outline of structure and content. Once some material is in place, the choreographer can indicate specific aspects or elements of it and refer to them when discussing the composition.
My collaborative process can be roughly divided into four stages: an initial discussion stage with the choreographer, discussing the plan and concept of the work, both choreographic and musical; an initial creative stage, in which musical ideas are formulated and tested with the dance; the main stage of musical composition, when the bulk of the material is written, often concurrently with the choreography; and the final stage, when both score and choreography are refined to ensure cohesive and effective interaction. As part of my process, I try to attend as many of the dance rehearsals as possible while the choreography is being created, so that I am not relying on verbal descriptions of the choreography to create the score, but can instead observe directly as the movement is created.
One of the methods I use to facilitate communication in the early stages of a collaboration is the use of reference tracks. In essence, rather than the choreographer trying to describe abstract sonic features, they can choose music samples that match what they want, and use these to demonstrate aspects of music that they want to include in their work. The focus on aspects of music is crucial, as a choreographer may show a track that they like the rhythm of, and the composer needs to know that this is the factor in focus. As an example, in my first dance collaboration, the choreographer showed me a track that featured violin with an electronic, dubstep-style backing. I then began writing demos in a similar genre, not realising that she had shown me the track to demonstrate the violin and melody, rather than the genre. Once the misunderstanding had been identified, she showed me another track to indicate the overall genre she wanted for the work.
In the initial ideas phase, I find that it is important not to work too specifically to the requested specifications. For a start, it is unlikely that the choreographer has communicated their exact intentions, or that those intentions have been interpreted completely correctly. As such, I find a quick demo approximating the brief works best, as it allows the choreographer to identify the aspects of the music which they like, and those which need adjusting. In addition, as the work develops and evolves, the choreographer may find themselves drawn to music different from that which they originally intended.
Once initial material for the work has been decided upon, it must be expanded and developed. It is at this stage that I find it most useful to observe rehearsals, and to immerse myself in the creative process of the choreography. This allows a composer to be aware of changes to the structure as they occur, and to gain an in-depth understanding of the purposes behind the movement, whether narrative, metaphorical, or abstract. It can also be helpful to adopt techniques being used to generate choreographic material, and apply them to musical material. Such parallels in methodology can contribute to cohesion in the work overall.
The final stage involves tweaking elements in the material that may have developed independently of the choreography, in a way that does not suit it. In my experience, this most often involves adjusting the length of sections in the music to align them with corresponding sections in the choreography. It can also involve adding cues to the music to signal to the dancers when a particular movement or change is to take place, in passages where this may otherwise be difficult to distinguish.
This is a summary of my own practice, but it is by no means the only advisable method. While advantageous for many reasons, this approach is made less common in the professional realm by constraints of time and budget: choreographers may wish for music to be finalised before choreography to streamline their process, and composers may prefer to compose to a brief rather than attending rehearsals and developing a piece over weeks or months. However, if time and money are not limiting considerations, and creative partners wish to work closely in the development of a work, there is much to be gained from close, ongoing cooperation throughout the collaborative process.
This piece stemmed from a concept I first explored in first year, using harmonic partials and sympathetic resonance in pianos. For that piece, I struck notes to activate harmonic resonance, and then removed the attack from them in the track.
For this work, I have adapted the process for use in concert. I spent time experimenting both with single and multiple piano setups, finding ways to create and select resonance. I also experimented with various configurations for the setup, to ensure pianos would be able to trigger resonance in each other. Once I had a range of materials, I created the score by inputting the musical materials into Sibelius, then arranging them on a page in Adobe Illustrator.
The four pianos are arranged so that their lids reflect sound into each other. They are also amplified, so that the sympathetic resonance is not overpowered by the struck notes that trigger it. The materials for the piece are given on the score on the following page, and a conductor selects which players perform different parts of the score. The structure of the piece is also indeterminate, but for this performance I have chosen to use a ternary form. The beginning section will predominantly explore the natural resonance of the piano when it is left to resonate on its own. The second section will explore harmonic patterns, allowing specific frequencies to resonate. The final section will incorporate this with more chromatic resonance and noise-based triggers. The microphones on the pianos are heavily limited, so that the louder sounds are no louder than the quieter resonance.
This piece was composed for the ‘In The Field’ concert at WAAPA in October. This concert was based around field recording and its use in composition. While in Kalbarri and Shark Bay over the mid-year break, I spent time exploring different sonic environments and taking recordings to use as material for the piece. The most compelling subject matter I found was Shell Beach at Shark Bay. The beach surrounds a large, shallow bay, which has an extremely high salt content due to greater evaporation rates. This high salinity renders the bay unliveable for most species; however, coquina shellfish thrive in these conditions, and billions of them fill the bay. The beach is covered with tiny shells instead of sand, which creates a unique soundscape, quite different to anything I had encountered before, whether walking on the shore, listening to the waves lapping the shoreline, or moving the shells to create sounds. I recorded everything with my Zoom H6, and began writing the piece once I returned to WAAPA in second semester.
To begin the composition process, I sorted through the recordings and began exploring ways to structure the piece. Two of the central sounds are stretched using the paulstretch algorithm, which gives them an ambient, ethereal quality. Both of these sounds are manipulated over time with a delay effect, which is automated to adjust the delay time as the piece progresses. The delay echoes the sound back at shorter and shorter intervals, until the echoes become so regular that they are heard as a single frequency. They then begin to move farther apart again, creating subharmonics which become apparent towards the end of the piece.
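The relationship between delay time and perceived pitch can be sketched numerically. The following Python/NumPy model is illustrative only, not part of the original production process: a simple feedback delay line is fed an impulse, and its regular echoes read spectrally as a tone at one over the delay time. The 2 ms delay and 0.9 feedback values are hypothetical.

```python
import numpy as np

def feedback_delay(x, delay_samples, feedback):
    """Simple feedback delay line (a comb filter): y[n] = x[n] + g * y[n - D]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (feedback * y[n - delay_samples] if n >= delay_samples else 0.0)
    return y

fs = 48000
delay_samples = 96                  # 2 ms at 48 kHz (hypothetical values)
x = np.zeros(fs)
x[0] = 1.0                          # an impulse as the input signal
y = feedback_delay(x, delay_samples, feedback=0.9)

# Regular echoes are heard as a pitch: the comb's first resonance sits at fs / D.
spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
mask = (freqs > 100) & (freqs < 750)            # search below the second harmonic
peak_hz = float(freqs[mask][np.argmax(spectrum[mask])])
print(peak_hz)                      # 500.0, i.e. 1 / (2 ms)
```

Shortening the delay time therefore raises the fused pitch, and lengthening it again lowers it, which is the effect described above.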
The other sounds are also built out of recordings. The most significant of these are bursts of low frequency noise that punctuate and divide the piece into sections. These sounds are recordings of me walking on the beach, which are then compressed and distorted to bring out the extra low end and power in the sound.
Other sounds include recordings made by burying the Zoom in shells, and by dropping shells onto the poles and wires of the fence between the conservation area and the quarry.
The sounds were mixed and edited in Pro Tools, and iZotope RX was used to remove background noise from the recordings. I spatialised the sounds based on their role in the piece and their envelope. Rather than simply assigning tracks to speakers, the sounds move around the space, and shape the work. The piece is realised in 7.1 surround sound.
The major aspect of my practice is collaborative, composing for contemporary dance. As such, my goal is to use music to aid in the communication of the ideas and messages in the choreography. I find the most efficient way to do this is to use pre-existing musical tropes that an untrained audience will already be familiar with. I do, however, tend to avoid relying primarily on harmony and melody, preferring textural control to provide cues to the audience. When composing electronic music, there is often no collaboration with musicians, and no need for the music to be communicated in any way except directly to the dancers. Rather than composing a piece and then workshopping it with musicians, a piece is often composed section by section as the choreography is developed.
The concept for the dance usually comes from the choreographer, who often also gives some indication as to the sound or instrumentation. In my own work, the role of the music is more often to motivate an idea, by representing a sonic landscape that assists the dancers in inhabiting an idea or atmosphere, and can also reduce ambiguity when communicating it to the audience. The music also motivates movement in the mechanical sense, but usually as a secondary aim, where rhythms are either simple and repetitive, or complex but not synced with specific steps and movements. Rather than relying on harmony and melody to generate emotional responses, I prefer to experiment with texture, usually through synthesis and digital manipulation of field recordings. Composers in the age of digital synthesis have unlimited options for sound creation, and that has added a new dimension to the role of a composer. Now, every sound can be tweaked with an unprecedented amount of accuracy. Every instrument can be built from scratch, and every sound can be tweaked and adjusted to serve a specific role. When composing, I spend much more time building instruments and sounds than placing the notes that trigger them.
One of the aspects of my collaborative practice that removes it from some of the issues discussed is the lack of notation and performance by musicians. Ferneyhough posits that music is inseparable from its notation, as the way that it communicates to musicians forms an integral part of its identity and function as a work of art. However, this facet does not exist when music is created, produced and realised by the composer using software. The only existing visual representations of the music are symbolic. On one hand are the audio clips, containing visual representations of MIDI notes or audio waveforms. On the other are software interfaces for audio plugins, both for sound creation and sound modification. These tend to be quite basic, consisting of virtual dials and faders that correspond to specific effects. However, there are indexical elements to these systems, as every modification made to the interface produces an immediate and predictable change in the sound. In my process, plugins have assumed the role of musicians, producing sounds through the interpretation of MIDI messages. This has in some ways made the role of music to the participants very different. No longer are musicians part of the communication process. The composer has sole responsibility and control of the final sound.
In his work, Adorno doesn’t seem to address music as it functions when incorporated with other art forms. While one could surmise his opinions of film music based on writings about Hollywood, less commercially dictated forms such as dance didn’t fall under his sweeping indictment of pop culture. Parallels could be drawn between his thoughts on the development and role of music and the development of dance in the same period. While ballet was becoming more and more formal and traditional, contemporary dance broke away and began pushing the boundaries of what dance had been and could be. Dance has an interesting relationship with the culture industry. On the one hand, particularly in Australia, contemporary dance does not have the broad appeal of pop music, which means it does not have the same level of commercial pressure to conform to the expectations of production companies. However, it also struggles for funding, and is often forced to seek corporate sponsorships and compete with other dance companies for limited government funding. More experimental and challenging forms of dance flourish in Fringe festivals around the country, but with the limited audience that plagues many forms of contemporary music. One of the consequences of this is that often choreographers are forced to find preexisting music, and they don’t have the opportunity to work with composers as often. This can occasionally result in reduced opportunities for collaborative creativity, and therefore a reduction in the value that each work can contribute to culture overall.
This poster was designed to summarise my Honours research for the ECU Research Week performing arts poster presentation session. The poster was selected for the gold medal for best Honours research poster at WAAPA by a panel of research staff and professors.
The poster introduces the topic, outlines musical traits of postminimal music, and different ways that music can interact with choreography. It then briefly describes my research project. The photo is of one of the dance pieces I composed for as part of the practice-based component of the research project.
One of the most common techniques I use in my work with field recordings is the use of EQ to create harmonic resonance, either emphasising a pre-existing resonance or creating a new one. By applying a large boost over a very narrow frequency band, a resonance will become audible as long as some energy exists at that frequency in the sound source, affecting both the texture and the envelope of the sound.
These experiments began three years ago, with a recording of a street sign struck with the side of my fist. The resulting sound had various resonances, both low frequencies from the vibration of the sign, and high frequencies from the impact and attack. To accentuate these frequencies, I loaded an EQ plugin onto the track, and then created an 18dB boost, with a Q of 10. I swept the boost through the frequency range to find the point where the resonance was loudest, and then added extra boosts at harmonics above this. I repeated this process for the higher frequency range, using the partials from the impact.
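The design behind such a boost can be sketched in code. The example below is not the plugin I used, but a minimal Python/SciPy implementation of a standard peaking EQ biquad (the widely used RBJ "audio EQ cookbook" design), with the 18 dB gain and Q of 10 described above; the 440 Hz centre frequency is an arbitrary stand-in for the swept resonance.

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(fs, f0, q, gain_db):
    """RBJ-cookbook peaking EQ biquad: returns (b, a) filter coefficients."""
    A = 10 ** (gain_db / 40.0)            # amplitude factor
    w0 = 2 * np.pi * f0 / fs              # centre frequency in radians/sample
    alpha = np.sin(w0) / (2 * q)          # bandwidth term from Q
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
# an 18 dB boost with a Q of 10, as described in the text
b, a = peaking_eq(fs, f0=440.0, q=10.0, gain_db=18.0)

# evaluate the filter's gain at the centre frequency
w, h = freqz(b, a, worN=[440.0], fs=fs)
print(round(float(20 * np.log10(abs(h[0]))), 1))  # 18.0 (dB at 440 Hz)
```

Sweeping the boost, as described above, amounts to re-deriving the coefficients for each new `f0`; the extra boosts at harmonics are simply further biquads of the same design cascaded on the output.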
Another technique I use involves recordings with a much higher noise content, such as waves on a beach or wind through branches. In these instances, boosting a range of high frequencies can add a sense of space, and allow an otherwise neutral soundscape to function harmonically. Options include: emphasising a single, specific frequency; bolstering a frequency with partials following the harmonic series; creating tonal chords or intervals with resonances; and finally, boosting a group of unrelated frequencies, to produce an atonal cluster of tones. Depending on the strength and clarity of the frequencies boosted, they can be used to relate to other musical elements within the piece. By 'tuning' the recording to a particular scale degree, it can either be used to strengthen or weaken the tonic, or as a leading tone to propel harmonic movement.
A more complex combination of these techniques is to create chord progressions by automating the frequency of boosts. This allows the resonances to be adjusted in pitch as a piece progresses, and this can be used to create harmonic change as with a standard, polyphonic instrument. I have not yet used this technique in a piece, as it is very labour intensive without specific EQ plugins designed to facilitate such a technique.
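Since I have not yet realised this technique in a piece, the following is purely a hypothetical sketch of how such automation might be blocked out: the progression, one-second block length, and noise stand-in for a recording are all invented for illustration, and the filter is the generic RBJ peaking design rather than any particular EQ plugin.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, q, gain_db):
    """RBJ-cookbook peaking EQ biquad: returns (b, a) filter coefficients."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
rng = np.random.default_rng(0)
recording = rng.standard_normal(fs * 3)   # 3 s of noise as a stand-in recording

# hypothetical progression: one boosted root per one-second block
progression_hz = [196.0, 220.0, 261.63]   # G3, A3, C4

blocks = []
for i, f0 in enumerate(progression_hz):
    b, a = peaking_eq(fs, f0, q=10.0, gain_db=18.0)
    blocks.append(lfilter(b, a, recording[i * fs:(i + 1) * fs]))
out = np.concatenate(blocks)
print(out.shape)                          # (144000,)
```

A real implementation would need to carry the filter state across block boundaries (or crossfade between parallel filters) to avoid clicks at each chord change, which is part of why this is labour-intensive without a plugin designed for it.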
The last method I have experimented with involves loading field recordings into synthesis engines, predominantly Absynth 5 by Native Instruments. For this method, the field recording, or a segment of it, becomes the initial sound source, which can then have various processes applied to it. In Absynth, these include wave-shaping, amplitude, frequency, and ring modulation, various forms of filtering, envelope shaping, and effects. Experimenting with combinations of these techniques can result in a rich sound palette that combines the harmonic control and complexity of digital synthesis with the randomised, ever-shifting content of field recordings. This can be used to overcome the sterile, static nature of purely digital sounds.
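Of the processes listed, ring modulation is the simplest to illustrate: multiplying a source by a sinusoidal carrier replaces its frequencies with sum and difference tones. The sketch below is a bare NumPy illustration of that principle, not Absynth's far more elaborate engine; a pure 440 Hz sine stands in for a field-recording segment, and the 100 Hz carrier is a hypothetical choice.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                    # one second of samples
source = np.sin(2 * np.pi * 440 * t)      # stand-in for a field-recording segment
carrier = np.sin(2 * np.pi * 100 * t)     # hypothetical modulator frequency

ring = source * carrier                   # ring modulation: sum and difference tones

# the 440 Hz component is replaced by sidebands at 440 - 100 and 440 + 100 Hz
spectrum = np.abs(np.fft.rfft(ring))
freqs = np.fft.rfftfreq(len(ring), 1 / fs)
peaks = sorted(freqs[np.argsort(spectrum)[-2:]].tolist())
print(peaks)                              # [340.0, 540.0]
```

With a real field recording the source contains many shifting frequencies at once, so each is shadowed by its own pair of sidebands, which is part of what makes the resulting palette so rich.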
Overall, my goal in adding harmonic resonance to field recordings is to increase their effect in a piece. This can include harmonic functionality, the addition of textural complexity or clarity, or the accentuation of features of the field recording to delineate its role in the overall sonic construct.