I will never tire of emphasizing how important I find it to usher in the age of musical sampling and intelligent sound modeling.
GigaStudio has solved the memory problem and now allows us to benefit from huge libraries. This was a major step forward.
But we still suffer from the heritage of a stupid, low-level (though flexible) interface called MIDI. MIDI does not know what a swell or a phrase is, or what legato or accelerando means. It speaks a strange language that is not musical at all. But thanks to MIDI, we can all make music with our computers.
Back to samplers: the technical problems are solved. Now it is really time to improve musical expression. It is a burden to fumble around with complex controller lists that no notation or sequencer program can handle easily.
Again, the samplers could make the big push forward.
I would like you to join a movement (thanks, Hans Adamason, for your idea!) to ask Nemesys to implement some musical intelligence into GigaStudio. Its dimension concept is ideal for such things, and the features below should not be too complicated to build.
As an example, let me bring in my own ideas. All features could be switched on/off using a single controller message.
Idea #1 - LEGATO PHRASING: Detect gaps between notes to switch dimensions. The note after a gap would get a smooth attack to start the phrase; the succeeding notes would get a quick attack to sound legato.
Idea #2 - RANDOM DIMENSIONS: Alternate between different dimensions randomly to get a more lively performance.
Idea #3 - ALTERNATING DIMENSIONS: Switch dimensions after every note played, e.g. to perform up/down bowing with strings. (I hate having to send a controller or program change after every note!)
Idea #4 - SWELL: Perform a swell (volume change) when a controller is received. The value of the controller determines the speed of the swell.
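To make Idea #1 concrete, here is a minimal sketch of the gap-detection logic a sampler (or a MIDI preprocessor) could run. The event format, the gap threshold, and the articulation names are all assumptions for illustration, not anything GigaStudio actually implements:

```python
# Sketch of Idea #1 (legato phrasing): tag each note as "phrase_start"
# (smooth attack) or "legato" (quick attack) based on the gap since the
# previous note ended. GAP_MS and the tuple format are assumed.

GAP_MS = 80  # gaps longer than this start a new phrase (assumed threshold)

def tag_articulations(notes):
    """notes: list of (start_ms, duration_ms, pitch), sorted by start time.
    Returns a list of (pitch, articulation) pairs."""
    tagged = []
    prev_end = None
    for start, duration, pitch in notes:
        if prev_end is None or start - prev_end > GAP_MS:
            tagged.append((pitch, 'phrase_start'))  # gap -> new phrase
        else:
            tagged.append((pitch, 'legato'))        # overlapping or tight
        prev_end = start + duration
    return tagged

notes = [(0, 480, 60), (500, 480, 62), (990, 480, 64), (2000, 480, 65)]
print(tag_articulations(notes))
# -> first and last notes start phrases, the middle two play legato
```

The same one-pass structure would work for Ideas #2 and #3: instead of looking at the gap, the pass would pick a random or alternating dimension per note.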
I totally agree with your first three ideas, but I think that for swells you need to control too many parameters to embed them in a single controller number - e.g. how many beats the swell should last, how soft its initial level should be, how loud it should end up, and whether it should be a linear or logarithmic swell or some shape in between.
These values are probably more intuitively controlled the current 'old-fashioned' way, i.e. with an expression pedal.
Keep up the good ideas, the forum (and Nemesys) needs input like this.
Thanks for your comments on swells. The idea, of course, is not to have complex parameters. If you need a complex volume change, you can still use controllers. But if we had a feature that would cover 80 or 90% of the cases with a fingertip - wouldn't that be nice?
Take the Sibelius notation program and perform a swell. It will send gradually increasing volume controllers over a limited duration using a macro. That could easily and most effectively be replaced by a single controller value.
Of course this is not perfect, but it would indeed be an enormous improvement - wouldn't it? Especially since MIDI has no satisfying solution for swells anyway, because they are not needed for keyboards.
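What Sibelius sends as a macro can be sketched in a few lines: a single received swell value expands into the ramp of CC7 (channel volume) messages the sampler would otherwise have to be fed one by one. The value-to-duration mapping, start/end volumes, and step count here are all illustrative assumptions:

```python
# Sketch of Idea #4: expand one "swell speed" controller value into a
# linear ramp of CC7 (volume) events. The mapping from controller value
# to swell duration is an assumption for illustration.

def expand_swell(speed, start_vol=40, end_vol=110, steps=8):
    """speed: received controller value (1-127); higher = faster swell.
    Returns a list of (delay_ms, cc7_value) events forming a linear ramp."""
    total_ms = int(4000 * 32 / speed)   # assumed: value 32 -> 4-second swell
    step_ms = total_ms // steps
    events = []
    for i in range(steps + 1):
        vol = start_vol + (end_vol - start_vol) * i // steps
        events.append((i * step_ms, vol))
    return events

for delay, vol in expand_swell(speed=64):
    print(delay, vol)   # nine CC7 events ramping from volume 40 up to 110
```

A logarithmic or S-shaped ramp would just swap the interpolation line; the point is that one controller value stands in for the whole list.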
I saw and heard something amazing that you guys and Nemesys just HAVE to take account of: SuperConductor. This thing actually has built-in interpretation and phrasing technology. The tech sounds a bit offbeat to me, but the streams I heard were incredible. Apparently it takes notes as a group and uses a predictive approach, where the way each note plays is determined by what is to come.
If GS could incorporate (or licence) something like this, then we really would be cooking. On the other hand, this would need built-in sequencing facilities of some sort, as it needs to store the MIDI or note information to do its thing.
Maybe what we are talking about here is really two products - one to process an ordinary MIDI passage into an intermediary format with special embedded dynamics control codes, and then some extra features in GS to read and understand the control codes (say bow up, bow down, etc.).
If we are simply passing notes on the fly to GS from another sequencer, I think we will always have a problem, because a real player reacts to a phrase of notes - not to individual notes.
I agree with you about the need for automating MIDI expression. Less time could be spent on programming and more on the music.
Would it be possible to have a MIDI plug-in that worked in conjunction with a MIDI sequencer such as Logic Audio and with GigaSampler?
1. A player reacts to a phrase rather than to individual notes.
That's true. This is why it is so damned difficult to simulate a natural-sounding phrase using only note, controller and sample fragments.
2. There could be an intermediate piece of software that adds interpretation to the note streams of a sequencer before sending them to the sampler.
I like this idea. It would indeed mean that we would have a layer model for playing music, as we have in other areas (e.g. the ISO/OSI communication model).
Layer 1 is the score with its notes, tempo and expression marks. It is basically interpreted by the sequencer. The sequencer also reads the expression marks and passes them on using very simple MIDI controllers.
Layer 2 is the expression layer. Besides the notes, it reads the simple expression controllers and interprets them using complex, user-editable parameters. It adds a lot of MIDI controllers that make the music sound more natural. It also selects the programs (dimensions) that are most suitable for playing a certain expression and passes all this information on to the sampler.
Layer 3 is the sampler, which provides the sounds and their varieties. It is driven by the programs and controllers of Layer 2.
All this could be realized using only this old interface called MIDI.
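As a tiny illustration of what a Layer 2 pass could do, here is a sketch of Idea #3 (alternating dimensions) implemented as a note-stream transformation: each note gets preceded by a keyswitch that flips between up-bow and down-bow samples, so the sequencer (Layer 1) never sends the switch itself. The keyswitch pitches and the event format are assumptions for illustration:

```python
# Sketch of a Layer 2 pass: insert an alternating bowing keyswitch before
# every note, so up/down bowing happens automatically. The keyswitch notes
# (C0 = 24, C#0 = 25) are assumed values, not real GigaStudio mappings.

UP_BOW, DOWN_BOW = 24, 25  # assumed keyswitch notes

def alternate_bowing(pitches):
    """pitches: list of MIDI note numbers from the sequencer.
    Returns a flat event list where each note is preceded by the
    keyswitch selecting the next bowing dimension."""
    events = []
    for i, pitch in enumerate(pitches):
        events.append(('keyswitch', UP_BOW if i % 2 == 0 else DOWN_BOW))
        events.append(('note', pitch))
    return events

print(alternate_bowing([60, 62, 64, 65]))
# -> keyswitches alternate 24, 25, 24, 25 in front of the four notes
```

Everything in and out of this pass is plain notes and controllers, which is the point: Layer 2 needs no new wire format, only intelligence.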
Layer 1 has been quite well implemented for years - except for the interpretation of expression marks, which none of all those fat software products with millions of technically oriented features handles satisfactorily.
The quality of Layer 2 is extremely important for musical expression and natural sound, but it is not satisfactorily implemented in any sequencer or notation program that I've tried. Maybe SuperConductor is trying to fill this gap?
With GigaSampler, Layer 3 took a big step forward.
I hope that someone will take up the challenge to do something in Layer 2. In the past, all the music software producers basically perfected their own layer, but nobody really took up the challenge to look at the whole process of playing music with the computer.
This is why the whole world still fumbles around with long controller lists to put some life into their performances, and loses a lot of time parametrizing rather than composing and playing.
THIS IS NOT THE WAY TO CONTINUE.
I hope that somebody takes up the challenge. Nemesys could step into this!