You have to pan the samples to achieve a realistic-sounding string section, just as you would with mono samples; otherwise all the instruments will appear to be in the middle of the stereo field. I could be wrong, but I think that although the samples are in stereo, they aren't recorded with, for example, the violins on the left, like the MV string ensembles.
If memory serves me right, Gary recommends panning the strings in the GOS manual, although he warns against panning the Basses.
The values I gave above are approx. where the strings would sit were you to hear them in a concert hall.
[This message has been edited by Tokyo Joe (edited 11-08-2001).]
Also, sometimes people pan or seat an orchestra with the 1st and 2nd violins on opposite sides for a wider sound. Most of the time on stage, though, the settings given above are more appropriate. Most orchestration books have a nice diagram of where everyone is usually seated, including the winds and percussion. Also, be sure to listen to a CD of the type of music you wish to emulate and match it as closely as possible. Paul Gilreath's book on MIDI orchestration also has some good tips on mixing. He gets into adding a bit of delay to instruments like the horns, choosing the appropriate reverbs, and all that. Also, if you do pan the instruments, the GigaSampler will basically bias the stereo mix to one side or the other. Once you go all the way to one side, though, you basically have a mono instrument.
[This message has been edited by David Govett (edited 11-08-2001).]
Gary asked me to respond to this post. There are a couple of things that I think about when trying to achieve realistic MIDI orchestrations. First, it is important to understand the difference between the concert sound and the recording sound. By this I mean that the sound perceived by a listener in the audience of an orchestra in a hall is vastly different from that of a good recording of the same performance. 99.999% of the time, we want to reproduce the recording sound, not the performance sound. It's not necessarily better, but it is what we are all accustomed to hearing when we put on a CD.
Second, there are two different dimensions that you want to try to get right: PANNING from left to right and, just as important, DEPTH from front to back. Instrument placement in the left-to-right dimension is fairly easy, especially if you are using mono samples. You simply adjust the panning to achieve the correct placement. As Dave G. suggests, there are many books that show concert seating for an orchestra. Also, Gary's manual has detailed info about this. One approach is to simply visualize the orchestra in a concert seating view, as if you are standing on the stage ten feet in front of them at a height of about four feet. Pan the sections where you see them visually. If a section takes up 1/4 of the stage, then pan them across this area. Consequently, if I'm panning strings in a concert seating format, I would divide the aural stage into four quadrants. I would put the 1st violins spread across the entire left side (quads 1 and 2), the 2nd violins in the same area but with the majority of the signal toward the left/middle quad, the violas in the right-middle quad, the cellos across the entire right half, and the basses panned in the middle with slight emphasis toward the right (remember that low frequencies are non-directional, like your sub... it doesn't really matter where they begin, they end up everywhere).
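A minimal sketch of that quadrant scheme as MIDI pan settings (CC 10, where 0 is hard left, 64 is center, 127 is hard right). The specific numbers here are my own illustrative approximations of the placements described above, not values taken from the post or from GOS:

```python
# Illustrative pan positions for the quadrant scheme described above.
# MIDI CC 10: 0 = hard left, 64 = center, 127 = hard right.
SECTION_PAN = {
    "violins_1": 28,   # spread across the left half (quads 1 and 2)
    "violins_2": 48,   # same area, weighted toward the left/middle quad
    "violas":    76,   # right-middle quadrant
    "cellos":    96,   # spread across the right half
    "basses":    70,   # near center, slight emphasis toward the right
}

def pan_to_cc(section):
    """Return a (controller, value) pair for a CC 10 pan message."""
    return (10, SECTION_PAN[section])

for name in SECTION_PAN:
    cc, val = pan_to_cc(name)
    print(f"{name:10s} -> CC{cc} = {val}")
```

Once set, these values should stay fixed for the whole piece, per the consistency point below.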
Continue in this manner for the entire orchestra. However, one of the most important things in this regard is consistency. If you put your oboe at 85, then there she must stay. I hate to hear MIDI orchestrations that put the instruments in different locations throughout the piece. Even slight variations are somewhat annoying. I find it advantageous to set up time-consuming and very detailed templates in DP3 that allow me not to have to think too much about this. That way it is always the same from piece to piece, or between different parts of the same piece.
In regards to depth, this is a little ethereal, but it works for me. Again, in a concert seating situation, there is a range of approximately 40-60 feet from the front to the rear of the orchestra. In either a performance sound or a recorded sound, there are going to be slight delay and reverb-reflection differences between instruments from front to back. Consequently, mimicking this via reverb (especially using plug-ins) works very well. Use predelay to change the perceived arrival time of the sound. For instance, I always achieve much greater realism in horns by using a lot of wet signal and little dry (think about the travel path of the sound... backwards to the rear of the stage, then forward). Listen to film scores and you will hear this; even if there is some direct mic placement, the sound stage will generate some reverb delay that is audible.
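A rough sketch of how these depth cues might be parameterized per section. The distances, wet/dry levels, and the linear predelay scaling are all hypothetical starting points, not settings from the post; the idea is just that sections farther back get a wetter balance and less reverb predelay (a common mixing heuristic, since close sources have a larger gap between direct sound and first reflections):

```python
# Hypothetical depth settings per section:
# (distance from front of stage in feet, dry level, wet level).
# Numbers are illustrative, not rules.
DEPTH = {
    "violins_1": (10, 0.85, 0.25),
    "violas":    (20, 0.75, 0.35),
    "horns":     (35, 0.40, 0.80),  # mostly wet, as suggested above
}

def predelay_ms(distance_ft, stage_depth_ft=40, max_predelay_ms=30):
    """Scale reverb predelay down as the instrument sits farther back."""
    return max_predelay_ms * (1 - distance_ft / stage_depth_ft)

for section, (dist, dry, wet) in DEPTH.items():
    print(f"{section:10s} predelay={predelay_ms(dist):5.1f} ms "
          f"dry={dry:.2f} wet={wet:.2f}")
```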
Thank you for your posting! I will order your book next week :-), but I do have one, probably very primitive, question: what is DP? Sorry about my ignorance.
Thanks Paul for posting on this site. Paul Gilreath knows MIDI Orchestration better than anyone I know; in fact, he wrote the book on it: "The Guide to MIDI Orchestration". This book is a must read for anyone wanting to obtain realism in their MIDI arrangements and orchestrations. I found it invaluable in planning the GOS library. Highly recommended!
DP3 is Digital Performer version 3; it's an excellent Mac-based sequencer and DAW made by Mark of the Unicorn (MOTU).
I thought I\'d give the delay concept a try. I\'ve been arguing with myself about the physics of the concept, and have not yet come to a clear conclusion on how to implement it, precisely and convincingly.
First, there is the direct sound. It's clear that the sound from instruments further back will arrive later in the soundfield. So, if the French horns, for example, are 35 feet from the conductor, their direct sound gets delayed by the equivalent of 35 feet (about 35 msec, if my memory is correct that 1 foot = about 1 msec).
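That rule of thumb is close: sound travels at roughly 1125 feet per second at room temperature, so one foot corresponds to about 0.9 msec. A quick sketch of the conversion (the speed value is the standard approximation, not a figure from the post):

```python
def delay_ms(distance_ft, speed_ft_per_s=1125.0):
    """Convert a stage distance in feet to an acoustic delay in msec.

    Uses the approximate speed of sound in air at room temperature
    (~1125 ft/s), so 1 foot works out to roughly 0.89 msec.
    """
    return distance_ft / speed_ft_per_s * 1000.0

print(delay_ms(35))  # 35 ft of French-horn distance -> ~31 msec
```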
Now to the wet sound, and here's where my two personalities get into serious arguments. Some of the reflections are off the back surfaces, some are off the side surfaces and ceiling. The distance to the back surfaces from the French horns is less than the distance from the first violins. Therefore, the French horns' primary back-wall-reflection component arrives in the audience earlier than that of the first violins. The ceiling and side reflection components from the French horns have a greater distance to travel and arrive later than those of the first violins.
So, I am in a quandary about the proper treatment of reverb, unless I use multiple instances of reverb to try to model the different processes. As I argue with myself, I conclude that the dominant reverb should parallel the timing of the direct sound, i.e., the French horns enter the reverb processor with the same delay as the direct sound. The secondary reverb might take the sounds without any added delay at all.
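A minimal offline sketch of that two-reverb routing, assuming NumPy for the signal buffers. The sample rate, send levels, and section distances are placeholders; the point is only the topology the paragraph describes: the direct signal and the primary (back-wall-dominated) reverb send share the same distance delay, while the secondary (side/ceiling) reverb send is taken undelayed.

```python
import numpy as np

SR = 48000  # sample rate in Hz; assumed for this sketch

def delay_samples(distance_ft, speed_ft_per_s=1125.0):
    """Convert a stage distance to a delay in whole samples."""
    return int(round(distance_ft / speed_ft_per_s * SR))

def place_section(dry, distance_ft, primary_send=0.3, secondary_send=0.1):
    """Return (delayed_direct, primary_rev_input, secondary_rev_input).

    The primary reverb send is delayed exactly like the direct sound;
    the secondary reverb send is fed undelayed (zero-padded at the end
    so all three buffers have equal length). Send levels are arbitrary.
    """
    d = delay_samples(distance_ft)
    delayed = np.concatenate([np.zeros(d), dry])
    undelayed = np.concatenate([dry, np.zeros(d)])
    return delayed, primary_send * delayed, secondary_send * undelayed

# Example: French horns 35 ft back; each send would then feed its own
# reverb instance before being summed into the mix.
horns = np.ones(1000)
direct, primary_in, secondary_in = place_section(horns, 35)
```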
I'd sure like to hear someone else's take on this.