A large part of my music relies upon algorithmic, generative, and gestural techniques. To this end, I have developed an extensive suite of MFX plug-ins (under the label of TenCrazy.com) representing numerous semi-atomic functions on MIDI data streams that are intended to be chained together in a modular fashion. These chains are generative, often creating the entirety of the piece by following the rules provided in parametric form, without human hands ever touching a keyboard instrument. Moreover, gestural control can be exerted over the system by drawing MIDI envelopes in the sequencer to describe the “contour” of the performance without specifying the notes themselves.
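The idea of chaining small, parametric MIDI transforms into a generative "player" can be sketched as follows. This is a minimal illustration under my own assumptions, not the actual TenCrazy.com plug-in API: the function names, the (pitch, velocity, duration) event tuples, and the envelope-as-function gesture are all hypothetical stand-ins for the real plug-ins and sequencer envelopes.

```python
import random

# Hypothetical sketch: a "player" is a chain of small functions, each
# transforming a stream of (pitch, velocity, duration) note events.

def generate(seed, length, scale=(0, 2, 4, 7, 9), root=60):
    """Generative source: pick pitches from a scale by rule,
    without a keyboard ever being touched."""
    rng = random.Random(seed)
    return [(root + rng.choice(scale), rng.randint(60, 100), 0.25)
            for _ in range(length)]

def transpose(stream, semitones):
    """A semi-atomic transform: shift every pitch."""
    return [(p + semitones, v, d) for p, v, d in stream]

def contour(stream, envelope):
    """Gestural control: scale velocities by a drawn envelope,
    modelled here as a function of normalized time (0.0 to 1.0)."""
    n = max(1, len(stream) - 1)
    return [(p, max(1, min(127, int(v * envelope(i / n)))), d)
            for i, (p, v, d) in enumerate(stream)]

def chain(stream, *stages):
    """Wire the transforms together in modular fashion."""
    for stage in stages:
        stream = stage(stream)
    return stream

notes = chain(
    generate(seed=42, length=8),
    lambda s: transpose(s, 12),
    lambda s: contour(s, lambda t: 0.5 + 0.5 * t),  # a crescendo gesture
)
```

The point of the sketch is the architecture: each stage knows nothing about the others, so stages can be reordered or swapped freely, and the drawn envelope shapes the dynamics of the performance without specifying any of the notes.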
I often refer to these chains of plug-ins as “players” or “improvisers”, suggesting an almost intelligent autonomy to their role. In fact, I often play live instruments along with their generative output, creating a sort of hybrid computer-human band. However, while I respect them as “fellow artists”, I also reserve the right to edit their performance as the director of our musical collaboration. As such, I’m able to capture an improvised performance and edit it in hindsight if I particularly like a momentary inspiration or otherwise need to rein in some of the randomness, arranging by cut and paste after the fact.
Finally, I take care to design my sample-sets to bring out a realistic “human” performance in my automated improvisers. As a multi-instrumentalist, I’m experienced enough to collect and model the sounds of a sampler to approximate the idioms of the instrument in question, even to the extent of playing the instrument myself and “slicing” the audio data into individual notes for algorithmic resequencing. This goes far in helping the generative system realise a convincing performance.