I personally believe that "Musician Modelling" is an important next step in recreating ensembles. Why is it that computer composers can have highly detailed samples, yet still create a "flat" performance? Much of it comes down to the fact that the composer doesn't know much (if anything) about how to play the instrument in question. Without that expertise, he or she is truly just "playing the sample", bound to whatever expression is imprinted on the data.
Even if the composer knows how to play an instrument, and knows how to get the best from the samples, the process of playing each part over and over (and over!) for each member of a large ensemble (e.g. orchestra) is extremely time consuming! Moreover, current control schemes are woefully inefficient for capturing a realistic performance. With upwards of 10 controls shaping a single instrument performance in JABB, and only two hands (and maybe a foot or two) to record with at one time, each part takes several passes!
Without specialized virtual controllers like EWIs to "broadband" this performance data across, you have no choice but to make this a step-by-step process, recording slider, knob, or mouse motions one at a time. While I'd love to see more specialized controllers emerge, I think there's a hardware hurdle due to lack of demand for them. And even if these controllers were available, a composer would then have to master them all before he or she is in a position to "save time" when making a mock-up (to say nothing of the cost).
So my take on the solution is to enhance "human playback" software systems to include the quirks of an individual. Some HP systems right now capture the randomness of performance, and that's great, but I'd like to extend it to "personalities".
For instance, we have a trombone player named "Tony". Tony likes to really lean on low notes, so his mf on the bottom end probably creeps up towards forte. He's also a little slow to start a cue when he's been resting for a while (lips get a bit "cold"!). For fast passages, though, he gets excitable: he rushes them slightly and gets lazy on the articulation, perhaps slurring a bit too much. His slide positioning tends to run flat, too, but he adjusts fairly quickly when his ear picks up on it (on longer notes).
Other players in the section would each have their own quirks. "Shane" can play louder. "Jim" has a cold, so he runs out of breath quicker, long held notes petering out gradually. "Bill" is quick on the ramp to a crescendo. Together, these performers create a more varied, richer timbral and textural response.
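To make this concrete, here's a minimal sketch (in Python) of how one player's quirks might be captured as a parameter record that a human-playback engine could apply note by note. All of the field names, values, and the threshold for "low notes" are my own hypothetical inventions, not anything from an existing product:

```python
from dataclasses import dataclass

@dataclass
class MusicianProfile:
    """Hypothetical per-player quirks for a human-playback engine."""
    low_note_dyn_bias: float   # extra dynamic push on low notes (0..1)
    cold_start_lag_ms: float   # attack delay after a long rest ("cold lips")
    fast_passage_rush: float   # fraction of a beat to rush fast runs
    slur_tendency: float       # chance of under-articulating fast passages
    pitch_bias_cents: float    # habitual intonation offset (negative = flat)
    pitch_correct_rate: float  # how quickly long notes get pulled in tune

# "Tony" from the text: leans on low notes, cold starts, rushes, runs flat.
TONY = MusicianProfile(
    low_note_dyn_bias=0.15,
    cold_start_lag_ms=40.0,
    fast_passage_rush=0.03,
    slur_tendency=0.3,
    pitch_bias_cents=-8.0,
    pitch_correct_rate=0.6,
)

def shape_velocity(profile: MusicianProfile, pitch: int, velocity: int) -> int:
    """Push dynamics up on the bottom end, per the player's bias."""
    if pitch < 48:  # below C3 (MIDI note 48): "leaning on low notes"
        velocity = min(127, round(velocity * (1.0 + profile.low_note_dyn_bias)))
    return velocity
```

So a mf low note (velocity 80, MIDI pitch 40) comes out of Tony at 92, creeping toward forte, while the same note in the middle register is untouched. A "Shane" or a "Jim" would just be a different set of numbers in the same record.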
You could use these personalities as presets, tweak them to taste, or randomize from scratch. Additionally, you could introduce the concept of a "section leader". Say that "Irving" is first chair. He has a moderate (50%) amount of sway over the other members of his section. As such, everyone else's performance patterns will be "morphed" to approximate Irving's playing style. In this manner, you can give a bias to the performance, while still allowing some weighted deltas on the part of other players.
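The "sway" idea could be as simple as a weighted interpolation of each player's parameters toward the leader's. A sketch, treating a personality as a plain dict of numbers (the parameter names and values here are hypothetical illustrations):

```python
def morph_toward_leader(player: dict, leader: dict, sway: float) -> dict:
    """Pull each of a player's personality parameters toward the section
    leader's by `sway` (0.0 = no influence, 1.0 = an outright clone)."""
    return {k: player[k] + sway * (leader[k] - player[k]) for k in player}

# "Irving" (first chair) has 50% sway over "Tony":
irving = {"pitch_bias_cents": 0.0, "fast_passage_rush": 0.01}
tony   = {"pitch_bias_cents": -8.0, "fast_passage_rush": 0.03}

tony_in_section = morph_toward_leader(tony, irving, sway=0.5)
# Tony's flatness is halved (-8 cents -> -4 cents): he still sounds like
# Tony, but he's been pulled partway toward Irving's style.
```

The nice property of a plain linear morph is that sway=0 leaves the player untouched and sway=1 makes the whole section play exactly like the first chair, with everything in between giving you that "bias with weighted deltas".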
Then comes the beautiful part: you become the conductor, with a gestural controller or MIDI data track, and each of the performers reacts to your direction on an individual basis (based on their parameters, influenced by section leaders), so the ensemble lives and breathes not as a machine, but as the real thing.
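That per-performer reaction could be sketched as each player rendering the *same* conductor gesture stream through their own lag and enthusiasm. A toy example, assuming a normalized 0–1 dynamic curve sampled per tick (the function and its parameters are my own invention, purely illustrative):

```python
def player_dynamics(conductor_curve: list, responsiveness: float,
                    lag: int) -> list:
    """Render one conductor gesture stream (normalized 0..1 dynamics per
    tick) through a single player's personality: slow reactors trail the
    beat by `lag` ticks, eager players overshoot via `responsiveness`."""
    out = []
    for i in range(len(conductor_curve)):
        src = conductor_curve[max(0, i - lag)]     # lagging players trail
        out.append(min(1.0, src * responsiveness)) # eager players push harder
    return out

swell = [0.2, 0.4, 0.6, 0.8, 1.0]                  # one conducted crescendo
bill = player_dynamics(swell, responsiveness=1.2, lag=0)  # quick on the ramp
tony = player_dynamics(swell, responsiveness=1.0, lag=1)  # slow to start
```

One gesture in, five (or fifty) individual performances out: Bill leaps ahead of the written dynamic, Tony arrives a tick late, and the section stops moving in lockstep.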
I think it would be very nice to be able to play back the same line of notation for each of these personalities, through the exact same sample set, and be able to determine who is playing according to what you hear in the performance and how it responds to your direction. After all, if you gave the same trumpet to Lee Morgan and Freddie Hubbard and told them to play the same line without embellishment, they'd still be discernible via their tonguing curves, vibrato shape and speed, etc. (well, *and* a brass player does exhibit a certain sound based on embouchure, but let's not get too picky! )
Of course, having access to different instrument sample sets will only enhance this. Let "Lee" play an Olds Ambassador. Let "Freddie" play a Calicchio 3/9L. Whether these sounds are made by samples, additive synthesis, or physical modelling, it will only serve to further distinguish their "personalities".
And really, much of this Musician Modelling can be done today. We don't have to sit on our hands and wait for some technology breakthrough. It is low-hanging fruit, just waiting to be picked, but it *does* require a bit of cooperation between sample library and host/notation program developers. Will we see it happen?
The goal would be to let composers be composers, not experts in tweaking MIDI CC messages or virtuosos on a handful of virtual instruments. Convincing mockups should be as easy as dropping notes on a staff with efficient, minimal gestural notation. Building your ensemble should be as easy as selecting "musicians" and pairing them with instruments according to the preference of your ear. And as part of that, using a new sample library shouldn't force you to learn a whole new control scheme, wasting your time becoming an expert on the ephemera of yet another piece of software. I'd love to be able to buy a single instrument "off the shelf" just for the love of the sound ("one Buescher tenor sax, please!") instead of having to discount it because of the learning curve.
I'd definitely like to be a part of making this happen.