In a recent discussion on what makes a piano sample great, we briefly focused on convolution as a way to add realism to one-dimensional samples. Here are some experiments we did with the Iowa Electronic Music Studios Steinway. I think this is a very exciting technology, and we will see great things in the near future.
This first convolution clip is a piece that we have also used for other piano libraries (Music Copyright 2002 by Bruce Mitchell, SOCAN). The MIDI file is performed on a one-layer piano sample (88 samples, 400 MB). The dynamics have been emulated using convolution. convolution on 1 layer piano
This clip uses the same set-up and demonstrates the dynamic range accomplished by convolution. Convoluted dynamics
In the next clip the sustain pedal has been emulated by applying an IR of the soundboard resonance in the convolution engine. Music Copyright 2002 by Bruce Mitchell, SOCAN. convoluted pedal
And the last one uses additional convolution for reverb (based on a cathedral IR). Music Copyright 2002 by Bruce Mitchell, SOCAN. convoluted-pedal+reverb
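For anyone who wants to experiment along, the basic operation behind all of these clips is the same: the dry sample is convolved with an impulse response. A minimal Python/NumPy sketch of FFT-based convolution (the arrays here are toy values, not actual IRs):

```python
import numpy as np

def convolve_ir(dry, ir):
    """Convolve a dry mono signal with an impulse response via the FFT."""
    n = len(dry) + len(ir) - 1          # full linear-convolution length
    size = 1 << (n - 1).bit_length()    # next power of two, so circular = linear
    spec = np.fft.rfft(dry, size) * np.fft.rfft(ir, size)
    return np.fft.irfft(spec, size)[:n]

# toy example: a two-tap "room" smears a click into a direct sound
# plus one reflection at half level
dry = np.array([1.0, 0.0, 0.0, 0.0])
ir = np.array([1.0, 0.0, 0.5])
wet = convolve_ir(dry, ir)
```

In a real engine the IR would be a recorded soundboard or hall response tens of thousands of samples long, and the convolution would be done in partitioned blocks for low latency, but the math is exactly this.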
Interesting, but that piano sample is so rough that it is very difficult to hear past the defects and isolate the effects of the convolution.
It pans so wildly that it almost sounds like two pianos.
Which, to me, leads to another conversation altogether.
I'm not sure how player-perspective pianos really fit into the world of convolution. There's no way to convolve that hyper-panned kind of imaging into, say, a concert hall setting.
I also wonder about this with the new EastWest piano. The demo I heard on the internet is very clearly panned in a player perspective. How, then, does that fit into 5.1 surround imaging? It doesn't seem like one could build a plausible soundstage given the elements represented. It seems to me that if you hold up your "movie director" fingers between your speakers, the entirety of the piano image must exist in about a one-foot area somewhere on the soundstage, and be propagated into the rear spread in a very diffuse manner, to get anything like reality.
I realize that perhaps these are early proof-of-concept attempts, at least on the part of this convolution experiment. But the EastWest product is clearly aimed at the market. I'm just not sure I understand how the knowledge we already have of imaging and mixing is being applied to piano sampling. Everything I'm hearing is still spread out across the speakers, as if the piano were nailed to the wall, lid out.
I understand it in a pop context: nothing is real there. I'm not sure I understand it in contexts that purport to present realistic imagery in the sense of a concert/soundstage setting.
The examples are early experiments (!) and YES, that piano is a very straight example of a player-perspective piano. The panning is very wild indeed; part of it is caused by the IRs we used. They can easily be made less "ultra-stereophonic".
I did comparable tests with the PMI Bosendorfer 290, which has perfect soundstage imaging. The IR's influence on the stereo image can be controlled very precisely. Actually, the IR itself is a very valuable sample; it can have a drastic impact on the final quality of the convolution process. I have only started to scratch the surface of fine-tuning IRs. The tools for modifying IRs are very scientific, and while I'm a fast learner (at least I hope I am), the learning curve is rather steep. Time will tell what we can actually do with convolution and what is valuable for the musician.
I have spoken to the AudioEase IR developers (the guys behind AltiVerb are very clever people, and they helped me understand the topic), and they have done a couple of neat tricks with their engine, although their main focus is re-creating actual spaces, not the characteristics of musical instruments. Those are two separate ideas.
My main focus is IRs of musical-instrument properties: placing the Steinway IR on a KAWAI piano sample, for example, or subtracting the body resonance from a grand piano sample, leaving a brandless piano string tone that can be further processed to become any brand you would like...
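That body-resonance subtraction amounts to deconvolution. A rough Python/NumPy sketch, assuming the body resonance is already available as an IR; the function name, the eps value, and the toy signals are mine for illustration, not from any actual tool:

```python
import numpy as np

def remove_body_ir(wet, body_ir, eps=1e-3):
    """Approximately undo a convolution with a known body IR by
    regularized (Wiener-style) spectral division; eps keeps
    near-zero frequency bins from blowing up."""
    size = 1 << (len(wet) - 1).bit_length()   # FFT size: next power of two
    W = np.fft.rfft(wet, size)
    H = np.fft.rfft(body_ir, size)
    dry = np.fft.irfft(W * np.conj(H) / (np.abs(H) ** 2 + eps), size)
    return dry[:len(wet)]

# toy round trip: convolve a short "string tone" with a fake body IR,
# then remove the body again
string = np.array([1.0, 0.0, -0.5, 0.25])
body = np.array([1.0, 0.5])
recovered = remove_body_ir(np.convolve(string, body), body)
```

On real recordings this is far harder, because the true body IR is never known exactly and noise gets amplified wherever the body response has spectral nulls, which is why the regularization term is there at all.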
There are convolution reverbs that permit arbitrary placement of left and right sound sources and arbitrary placement of multiple microphones. The one I use is Pureverb from www.catt.se. With arbitrary positioning of the R and L sources I can place the R "in front" of the L (or, for piano, the treble-string side of the instrument facing the audience, as is customary) and then place one to five microphones anywhere in the (arbitrary) hall for whatever perspective and miking standard is desired (crossed cardioids, spaced omnis, up through a 5.1 matrix). What few folks seem to appreciate is that stereo convolution of a stereo source requires four discrete convolutions, and thus four unique impulse responses: left source/left mic, left source/right mic, right source/left mic, and right source/right mic. Convolvers that use a single impulse response do not come near the spacious depth of field that a full four-way (or, with a five-mic setup, ten-way) convolution provides.
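That four-way scheme is easy to write down explicitly. A sketch in Python/NumPy (the function name and toy IRs are mine; any FFT-based convolution would do in place of np.convolve):

```python
import numpy as np

def true_stereo_convolve(src_l, src_r, ir_ll, ir_lr, ir_rl, ir_rr):
    """Four-way ('true stereo') convolution: each source channel reaches
    each microphone through its own impulse response, so a stereo source
    into a stereo mic pair needs four IRs, not one."""
    out_l = np.convolve(src_l, ir_ll) + np.convolve(src_r, ir_rl)  # left mic hears both sources
    out_r = np.convolve(src_l, ir_lr) + np.convolve(src_r, ir_rr)  # right mic hears both sources
    return out_l, out_r

# toy check: a click on the left source still bleeds into the right mic
left_click = np.array([1.0, 0.0])
silence = np.array([0.0, 0.0])
out_l, out_r = true_stereo_convolve(left_click, silence,
                                    ir_ll=np.array([1.0]),
                                    ir_lr=np.array([0.3]),   # cross-feed path
                                    ir_rl=np.array([0.3]),
                                    ir_rr=np.array([1.0]))
```

The cross-feed paths (ir_lr and ir_rl) are exactly what a single-IR convolver throws away, and they carry the inter-channel differences that create the depth of field described above.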
As a side note, once you get out near the "reverb radius" (the RR is defined as the distance at which the direct sound and the reverberant field, the sum of all reflections, have the same level, using classical acoustics formulae), you can't really tell how the piano is oriented with any certainty. That tends to be a bit on the reverb-heavy side for most folks. However, you don't have to get very far inside the reverb radius before even a hard-panned piano sounds very "normal" through speakers while still sounding somewhat panned in headphones. I happen to be fond of this configuration and do most of my work in this fashion, based on a modified version of Worcester's Mechanic's Hall. If anyone would like my impulse set (four 600 KB impulses as described above), let me know.
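For reference, the classical formula behind that reverb radius is the critical-distance estimate for an omnidirectional source, d_c ≈ 0.057·√(V/RT60). A quick calculator; the hall volume and RT60 below are made-up example numbers, not measurements of any real hall:

```python
import math

def reverb_radius(volume_m3, rt60_s):
    """Critical distance where direct and reverberant levels are equal:
    d_c ~= 0.057 * sqrt(V / RT60), with V in cubic metres and RT60 in
    seconds (omnidirectional source, diffuse-field assumptions)."""
    return 0.057 * math.sqrt(volume_m3 / rt60_s)

# e.g. a 12000 m^3 hall with a 2 s RT60 gives a reverb radius around 4.4 m
radius = reverb_radius(12000, 2.0)
```

A few metres in a large hall, which is why you do not have to move the virtual mics in very far before the direct sound, and with it the panning, starts to dominate again.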