In the future, sampling must become smart at two levels: the instrument level and the interpretation level. In the case of an orchestra, this will be most visible.

Instrument level
These days, when you buy a good orchestral library, you get many, many patches of the same instrument, which is normal because there are many, many ways of playing the same note. It's up to the user to choose between them manually. But a lot of that "choosing" can be done automatically. You will not use staccato samples for long notes, or vice versa. To make this more transparent, the user should specify musical properties (dynamics, vibrato, legato, ..., as in sheet music) instead of MIDI information (program change, control change, velocity, ...), even though MIDI information is still what carries those "musical parameters" (e.g. mod wheel -> vibrato; aftertouch -> dynamics, ...).
This is important for emulating a true instrument. People are beginning to do small things to accomplish this, like the maestro-tools in Garritan's strings, but we need more...
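To make the idea concrete, here is a minimal sketch of that instrument-level mapping layer: raw MIDI data comes in, musical parameters come out, and the articulation (patch) is picked automatically from the note length instead of by hand. Every name, threshold, and function here is a hypothetical illustration, not any existing library's API.

```python
# Hypothetical sketch: translate raw MIDI controller data into musical
# parameters and choose an articulation automatically from note length.
from dataclasses import dataclass


@dataclass
class MusicalEvent:
    pitch: int          # MIDI note number
    duration: float     # seconds, known once note-off arrives
    vibrato: float      # 0.0-1.0, derived from mod wheel (CC1)
    dynamics: float     # 0.0-1.0, derived from channel aftertouch
    articulation: str   # chosen patch: "staccato" or "sustain"


def choose_articulation(duration: float) -> str:
    """Pick a patch from note length: short notes get staccato samples,
    long notes get sustained samples (the 0.25 s threshold is an
    arbitrary assumption for the example)."""
    return "staccato" if duration < 0.25 else "sustain"


def translate(pitch: int, duration: float, cc1: int, aftertouch: int) -> MusicalEvent:
    """Map MIDI information to musical parameters."""
    return MusicalEvent(
        pitch=pitch,
        duration=duration,
        vibrato=cc1 / 127.0,          # mod wheel -> vibrato depth
        dynamics=aftertouch / 127.0,  # aftertouch -> dynamics
        articulation=choose_articulation(duration),
    )


# Example: a short, fairly quiet note with a little vibrato
print(translate(pitch=60, duration=0.2, cc1=32, aftertouch=40))
```

The point is that the user thinks in terms of vibrato and dynamics; the MIDI controllers are just the wire that carries them, and the patch switching disappears into the mapping.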

Interpretation level
Once the first level is complete and we have the real instrument, sampling will work on emulating a true musician - the person who plays the instrument. This will be a greater challenge. Instead of specifying musical parameters all the time, you would specify only partial musical parameters (sforzando, ppp, ...) and the engine would fill in the rest.
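As a rough illustration of that interpretation level, the sketch below expands the sparse marks you would find in a score (ppp here, ff four bars later) into the moment-to-moment dynamics a real player would supply. The dynamic values and the simple linear interpolation are assumptions made up for the example, not how any actual engine works.

```python
# Hypothetical sketch: turn sparse score markings into a continuous
# dynamics curve, the way a player fills in what the score leaves out.
DYNAMIC_LEVELS = {"ppp": 0.05, "pp": 0.15, "p": 0.3, "mf": 0.5,
                  "f": 0.7, "ff": 0.85, "fff": 1.0}


def interpret_dynamics(markings, total_beats, steps_per_beat=4):
    """Expand sparse markings [(beat, mark), ...] into a dynamics curve
    by linear interpolation between the marked points."""
    points = [(beat, DYNAMIC_LEVELS[mark]) for beat, mark in markings]
    curve = []
    for i in range(total_beats * steps_per_beat + 1):
        t = i / steps_per_beat
        prev = max((p for p in points if p[0] <= t), key=lambda p: p[0], default=points[0])
        nxt = min((p for p in points if p[0] >= t), key=lambda p: p[0], default=points[-1])
        if nxt[0] == prev[0]:
            curve.append(prev[1])
        else:
            frac = (t - prev[0]) / (nxt[0] - prev[0])
            curve.append(prev[1] + frac * (nxt[1] - prev[1]))
    return curve


# Example: a four-bar crescendo, ppp at beat 0 rising to ff at beat 16
print(interpret_dynamics([(0, "ppp"), (16, "ff")], total_beats=16))
```

A real "virtual musician" would of course have to shape phrasing, timing and timbre too, not just dynamics; this only shows the general shape of the problem.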

All this would simplify our MIDI mock-ups, making it possible to have a virtual "London Philharmonic Orchestra" at home.

Of course, some mock-ups will still be better than others, but because of your orchestration and your "conducting" of the orchestra...