First and foremost, GigaPulse is not a reverb. It is a development environment. Meaning, you can develop a reverb in GigaPulse, but you can also develop resonance models of instruments. You can create any impulse-based tool you like, provided you can design it within the available slots and their toolsets. There is a lot you can do in the first-generation model, and much room for this thing to develop more capability in future generations.
You can put any kind of front end on various combinations of impulses, with varying amounts of realtime editing control over impulses depending upon where you "park" them. You then create a UI, and the different impulse sets are routed internally via a button interface.
Imagine that an instance of IR-1 or Altiverb represents a pair of mics. The reverbs that have been developed initially for GigaPulse (as room simulators) use seven mics, generally grouped as conductor, wide, and surround capture points, with an additional impulsed center channel. So each "room" is the equivalent of four IR-1 or Altiverb instances. You can run fewer...down to a single pair of impulses.
Each of these "mic position impulses" has captured 18 different source positions. The ones that represent rooms and halls are laid out in arcs, representing symphonic positions. So, in these included rooms, you're hearing 18 sources x 7 microphone positions.
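To make that geometry concrete, here is a toy sketch in Python (my own invented numbers and noise-burst impulses, nothing from the actual library): each of the 18 stage positions gets its own impulse response into each of the 7 mic positions, and a source placed at a given position is simply convolved with the corresponding impulses.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_mics = 18, 7       # stage positions x mic positions, as above
ir_len, sig_len = 64, 256

# Invented IR bank: one impulse response per (source, mic, stereo side),
# here just decaying noise bursts rather than sampled room data.
irs = rng.standard_normal((n_sources, n_mics, 2, ir_len)) * \
      np.exp(-np.arange(ir_len) / 16.0)

def render_mic(source_signals, mic):
    """Mix every active stage source into one stereo mic position."""
    out = np.zeros((2, sig_len + ir_len - 1))
    for src, sig in source_signals.items():
        for side in (0, 1):
            out[side] += np.convolve(sig, irs[src, mic, side])
    return out

# A single instrument at stage position 9, heard by mic 0 ("conductor").
violin = rng.standard_normal(sig_len)
stereo = render_mic({9: violin}, mic=0)
```

Since 18 sources x 7 mic positions gives 126 source-to-mic convolution paths per room, you can see where the sheer scale of the captured material comes from.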
The microphone positions can be delayed and "impulse shaped" in pairs, much like the IR-1, except that you manipulate the impulse shape with a single slider instead of the three "levels" in IR-1. Same difference--it just means that you run 100% wet and affect the envelope of your impulse to create the illusion of more or less distance. This is what IR-1 does as well, and it avoids the phase anomalies you get when you mix wet and dry signal. However, you can also mix wet/dry if you like. This is handier if you're building, say, an echo or artificial reverb patch which would need to work with dry signal, or some other kind of impulse modeling altogether.
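The "run 100% wet and shape the impulse envelope" idea can be sketched like this (a guess at the concept only; the actual perspective slider's curve is Tascam's, not this linear ramp):

```python
import numpy as np

def shape_ir(ir, perspective):
    """Single-slider 'perspective' control (a guess at the concept only):
    0.0 keeps the direct/early energy (close); 1.0 fades it out in favor
    of the tail (distant). Run the result 100% wet."""
    t = np.arange(len(ir)) / len(ir)
    env = (1.0 - perspective) + perspective * t   # linear ramp envelope
    return ir * env

rng = np.random.default_rng(1)
# Invented room impulse: decaying noise, not a sampled hall.
ir = rng.standard_normal(2048) * np.exp(-np.arange(2048) / 400.0)

close = shape_ir(ir, 0.0)   # untouched impulse: full early energy
far = shape_ir(ir, 1.0)     # early energy suppressed: sounds farther away
```

Because the distance illusion lives entirely in the impulse's envelope, there is no dry signal to phase-cancel against the wet one.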
On the back end, you can route any or all of the mic positions to whatever mixer position you wish, by pairs. This is accomplished with a button-matrix on the included stuff.
There are several rooms, somewhere between five and ten, I don't remember exactly. This is less material than the number of rooms included with IR-1 or Altiverb, but that is deceptive. Actually, it is a crazy amount more material, only more microscopically captured in each instance. We're talking literally 126 channels of convolution representing each room (18 source positions x 7 mic positions).
Plus, two stereo convolution channels on each mic position, which can be used as amp models, deconvolver/convolver sets for mics, additional effects, exciters...basically whatever you desire to impulse, program and combine.
Now, additionally, GigaPulse has some tie-in capability to instrument programming as well. For instance, you can program it to kick in and out with the sustain pedal to create piano-harp resonances for pedal down. I have not dug super deeply into how that's done, and I don't know if it's through MIDI rules or some other combination of tools.
That should give you some of the scope of it. It will become a library channel for Giga content, that's for sure. For building reverbs, it's not easy to cover rooms to that degree, but it results in a huge potential palette of options for any single given room. It can go stereo or surround with any mic set. By varying the perspectives, the delays, and the balances of the different mic sets, any one room can become an infinite number of rooms. Just modeling different mics (or using spatial processors in the mic convolver slots) can change the way a room sounds. You could get nutty, and route GigaPulse to GigaPulsed mixer channels, and link them out to even other systems. You can create a GigaPulse (or any mixer) setup which routes out to external hardware and back into the system.
GigaPulse is sonically comparable to IR-1. It lacks EQ on the front and back side, but you can patch one in if you miss it. It far surpasses IR-1 on the sheer brute force of its capability. It's a lot of convolution with almost unlimited ways of using it--and that's just from the end-user perspective. If you want to roll your own devices, the sky really is the limit...with reverbs being just one possible use.
The GigaPiano II is the proof-of-concept for a resonance modeling application of GigaPulse. It contains a pedal up resonance on one GigaPulse instance, and a pedal down resonance on the other. The pedal down instance's mic levels are triggered by the pedal controller.
I think that should give you a good idea of where GigaPulse is, and where it's going. Even if you have the best reverb plugins available, GigaPulse does some different things than those, and it sounds as good as anything out there. You sure couldn't design a surround room in IR-1 or Altiverb with the same flexibility as GigaPulse. Or program a completely different application with it.
WOW! This sounds amazing. How, then, do you do a simple orchestral setup with VSL?
For strings, do you simply select the right mics (leftward, viola middle, cello right, etc.)? How does it handle the stereo image? If I take a GigaPiano, which is spread out far left to right, can I select my piano and "place" it anywhere on my virtual stage by selecting the right mics? Or will you still rely a lot on collapsing the image with S1?
OK, I didn't even get into that, because it REALLY gets into the need for hands on.
In short, you can multiselect or even cascade different impulses. You can send different source positions to different mics. The latter is what you'd do in the case of a full string section with stereo spread imagery. You'd simply choose the impulses you wanted associated with each mic position, and your stereo signal will be routed to those impulses for processing. You can even specify DIFFERENT locations for the wide or surround mics to be processing, so you can define your section even more dimensionally in the room.
This is what I meant by saying that the idea of GigaPulse being equivalent to IR-1 or Altiverb is true in a narrow sense--it is as good sounding a convolution engine, with good preset material. But that is really just the beginning of the exploration. Your own experimentation with stage position impulses, mic positions, mic types, routing (stereo or surround), etc., is the thing which will produce the true WOW factor. Many folks will be very happy with just going through the presets, but getting PAST the presets is what will open up horizons.
For pianos, the best bet is to route the signal through a single positional source if you want to put it in a pinpoint location. If you want to spread it, pick a location per mic that corresponds to the width you'd like it to have. You could patch it through contiguous positions in the "arc" and still maintain some separation without making the piano appear forty feet wide. On the other hand, putting its signal through mics on the opposite extremes of the stage will give you that totally "over wide" feeling you get when you put a player-perspective mic'ed piano through a typical convolver.
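As a toy illustration of that routing choice (hypothetical pan/delay numbers, not real impulse data): send the piano's left and right channels to two different stage-position impulses, and the positions you choose set the apparent width.

```python
import numpy as np

rng = np.random.default_rng(2)
ir_len, sig_len = 64, 256

def position_ir(pos):
    """Hypothetical stage-position impulse (invented pan/delay numbers),
    standing in for one of the 18 'arc' positions described above."""
    ir = np.zeros((2, ir_len))
    pan = pos / 17.0                           # 0 = far left, 1 = far right
    ir[0, int(4 * pan)] = 1.0 - 0.5 * pan      # left mic: louder/earlier left
    ir[1, int(4 * (1 - pan))] = 0.5 + 0.5 * pan
    return ir

def place_stereo(left, right, pos_l, pos_r):
    """Send the piano's L and R channels to two stage-position impulses."""
    out = np.zeros((2, sig_len + ir_len - 1))
    for sig, pos in ((left, pos_l), (right, pos_r)):
        ir = position_ir(pos)
        out[0] += np.convolve(sig, ir[0])
        out[1] += np.convolve(sig, ir[1])
    return out

piano_l = rng.standard_normal(sig_len)
piano_r = rng.standard_normal(sig_len)
natural = place_stereo(piano_l, piano_r, 8, 9)    # contiguous arc positions
overwide = place_stereo(piano_l, piano_r, 0, 17)  # opposite stage extremes
```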
So, yes, these issues have been thought through very well. I have been looking forward to the time when they'd let me talk a little more about this (I do have permission), and to try and illustrate why I have been so doggedly supportive of this release. It is really as big a deal as anything that has hit the market to date.
All I can say is that people who have brand new machines with speedy processors are going to have a frikkin' BALL. And no one is going to complain about GigaStudio letting good processors go to waste!! That's not to say those of us (me included) whose machine upgrade path is not hitting "replace" at this particular moment won't have fun. But a person will never get enough GigaPulse, once he's figured out how to use it. It is as cool a thing as exists on the planet. And it can make libraries you thought were dead meat come to life and compete with the best you've got. Man, if that's not worth the price of the upgrade, I don't know what is. I am still discovering things every day.
Consider this: I'm discussing just one way the GigaPulse interface has been designed to function musically. That is, as an acoustic space simulator. First of all, the way Tascam built the bundled content is only one way to go about it with the toolset. You could ditch the idea of surround mics, and just literally hang different mic sets in the space and use the switching matrix to switch between discrete microphone pairs. You could use the same mics and different preamps. You could combine different halls into one preset. There is no limitation on the impulse material you can shoehorn into the basic scheme, or what you can design it to do.
I personally underestimated GigaPulse in the big scheme of things. Truth be told (Joe Bibbo and Pete Snell could totally back me up on this), I was obsessed with getting Waves working flawlessly in Giga 3.0 (it does), because I was perfectly satisfied with IR-1 and just wanted it to work without a hitch in Giga. And I can very truthfully say that since I "got" GigaPulse, I have not played a lick through IR-1 in Giga. I just wasn't understanding the degree of shaping available with the perspective slider and multi-mic sets.
One thing I'll say is that the sliders are a little "fast" even in high-res mode, so you need a steady hand at the mouse, and might even want to slow down your mouse speed on your giga machine. Not a huge nit to pick, but I don't want anyone thinking I'm a toady, and have nothing but sunshine to blow up people's butts. And I think it begs for a V2 update, but that's just because what's there is so killer I can't wait to have even more ways to freak out on it.
I could never figure out why Jim (Van Buskirk) was so obsessed with GigaPulse. I commented on it many times--I just didn't see how this one aspect of the program was so big, and my assumption was that it was just a fancy surround convolver. It was the development aspect I didn't understand--that GigaPulse was an authoring environment, like GigaStudio itself, as much as being a production environment. When the ties to instrument design became more obvious to me, it started to make perfect sense that the whole system had to become attached at the hip to GigaPulse.
I think the first thing a person should do is load the full-monty GigaPiano II, open up both instances of GigaPulse, and start messing around. Watch what each one does, and then start messing with all the levels, deconvolving the mics and substituting others, etc. Hint: Player Position is the SoundField, and M/S is M-149s, so be sure to deconvolve those (deconvolution impulses are labeled with an "inv") before adding a different mic in the second slot. Note that the inverted "deconvolvers" have rolloff and pattern settings just like the rest, so set those appropriately (or inappropriately, if you want to experiment).
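The deconvolve-then-substitute trick rests on standard inverse filtering, which can be sketched with regularized spectral division (an assumption about the general idea; Tascam's "inv" impulses are surely built with more care):

```python
import numpy as np

def deconvolve(wet, ir, eps=1e-3):
    """Regularized spectral division: undo a convolution with `ir`.
    (An assumption about the general idea, not Tascam's actual method.)"""
    n = len(wet) + len(ir)                      # FFT size covers linear conv
    W, H = np.fft.rfft(wet, n), np.fft.rfft(ir, n)
    return np.fft.irfft(W * np.conj(H) / (np.abs(H) ** 2 + eps), n)

rng = np.random.default_rng(3)
# Stand-ins for mic impulses: decaying noise bursts (invented, not sampled).
mic_a = rng.standard_normal(128) * np.exp(-np.arange(128) / 32.0)
mic_b = rng.standard_normal(128) * np.exp(-np.arange(128) / 32.0)

signal = rng.standard_normal(512)
colored = np.convolve(signal, mic_a)            # what the library "captured"

# Slot 1: the "inv" impulse removes mic A; slot 2 convolves in mic B.
flat = deconvolve(colored, mic_a)[: len(signal)]
swapped = np.convolve(flat, mic_b)
```

The regularization term `eps` keeps spectral nulls from blowing up, which is why a real deconvolver still has sensible rolloff settings to mind.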
Anyway, just checking out how you can radically change the tone of the piano (mute both GigaPulse instances to hear the raw samples) gives you an idea of how GigaPulse ties into instrument design.
If you are like me, you will spend about ten minutes doing that, and you'll get one of those light-bulb bubbles over your head and start loading your other pianos like crazy.
Here's the secret hint that lets even GigaPulse SP users apply those impulses to other pianos:
1) Load GigaPiano II--one of the full modeled versions.
2) Mute the GigaPiano
3) Set the channel to "stack" mode (there's a button over the QuickSound view)
4) Now drag another piano onto that same port/channel
Now, you'll be playing the "new" piano (ok, actually the "old" piano, but you get it, right?) through the instance of GigaPulse loaded into that Port/Channel by the now muted GigaPiano.
GigaPulse SP instances don't appear or "stick" in the channel inserts the way GigaPulse Pro instances do. This is how lower-tier versions of Giga still allow GigaPulse to be designed into instruments, letting any user appreciate those designs. Imagine that they're "hidden" in the first two slots of a given DSP-Station channel.
Is that limiting? You'd think, but not really. It's actually a great back-door to let you layer things through GigaPulse, and to creatively use those two "tracked" slots as strategies when you build your personal work templates. You can actually attach GigaPulse instances to "faked" GIG files, which contain nothing except a null sample mapped to some outward region. In that way, you can set up some elaborate clones and hybrids of your existing libraries, using GigaPulse, and actually design it such that they automatically come into Giga that way, and GigaPulse SP automatically tracks them if you move them around the channel/port map.
See how this stuff all ties together?
It's brilliant, and extremely user-oriented, all the while being aimed at stimulating modeled instrument design via impulses. I realize that's a realm which is a bit heady right now, but imagine how you might impulse even the bell resonance of a trumpet, or a tuba. GigaPiano takes impulses from the pedal up and down resonance of the piano soundboard. You could do this with guitars (check out Mr. Corrigan's proof of concept on guitar harmonics, using impulses of stopped strings).
What the entire system has become is a first-generation, wholly sample-based modeling environment. Instead of using math to generate the models, it's impulse/convolution based. This is a system which may seem new to us, but in industry, mission-critical systems like airplane wings, concrete bridges, etc., are routinely impulse modeled (and subsequently impulse tested) to determine hidden flaws.
GigaPulse co-opts that technology, and the designer can use it to deconstruct an acoustic instrument's tone production system into smaller components. Consider this one: What does a Fender Strat sound like played through a Martin dreadnought body resonance? What does a Monette trumpet tone (highly stiff bell) sound like played into a Lightweight Bach bell resonance (a highly resonant bell)?
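That kind of hybrid is easy to sketch: synthesize a plucked string (Karplus-Strong, standing in here for a DI'd Strat) and convolve it with a body impulse response (an invented decaying noise burst here; a real one would be sampled from the instrument):

```python
import numpy as np

rng = np.random.default_rng(4)

def pluck(freq, sr=44100, dur=0.5):
    """Karplus-Strong string pluck: a stand-in for a DI'd electric guitar."""
    n, period = int(sr * dur), int(sr / freq)
    buf = rng.standard_normal(period)          # initial noise burst
    out = np.empty(n)
    for i in range(n):
        out[i] = buf[i % period]
        # average adjacent delay-line taps: the string's lowpass decay
        buf[i % period] = 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out

# Hypothetical acoustic-body impulse (a real one would be sampled from the
# instrument); here just an invented decaying noise burst.
body_ir = rng.standard_normal(2048) * np.exp(-np.arange(2048) / 500.0)

string = pluck(196.0)                          # roughly a G3 string
hybrid = np.convolve(string, body_ir)          # "Strat through a dreadnought"
```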
Are you guys connecting the dots?
What I also believe will happen as a trend is that rhythm-section oriented sampler buffs will get instruments designed as "inwardly" focused as the large orchestral libraries represent the extreme "outer" focus. In a way, it's like urban infill in a city...once a city sprawls to a certain degree, there is little practicality in focusing effort further outward. At that point, you start micro-developing towards the center. In the case of sampling technology, that's breaking the "fourth wall" of instrument design, and actually going inside the instrument to model the phenomena which produces its output.
I have been DYING to talk about this stuff, as you can tell.
I have always looked forward to the day that sampling technology focused itself in this direction...switching from the desire to model the highest possible number of instrument sounds at once to building the structures necessary to spend every ounce of CPU on modeling just ONE instrument exquisitely. Or, obviously, opening up the potential so that people can experiment with where to draw the line between exquisite modeling and polyphony/production.
I truly hope the idea takes hold. Sampling technology largely hit a plateau...the focus has been on parity and actual boundary-breaking has been slow at best. People like Eric Persing, Nick Phoenix, and many others have explored the custom engine as a way to put new design-ideas forward. But the real renaissance and creative flurry comes when the development tools to do these tricks make it into the hands of everyday musicians. This is, in my estimation, the single feature of GigaStudio 3.0 that I hope finds a warm spot in all hearts. They're putting a huge new design palette out there. In a time where they felt extreme pressure to change the way they approached the production end, they chose to stick to vision--but managed to facilitate many more flexible ways to approach the product as well. I can't imagine any way they could have been more faithful to the two opposite camps of their user base, and I think they spent three years of time extremely well in finding just about the perfect compromise...all the while, putting such a massive and unexplored set of possibilities into play. It is really impossible to speculate upon all the ways people will find to use this stuff.
Everyone - forget Preparation H - break out the Coppertone SPF 45...
If you can de-convolve a microphone model before re-convolving with a new mic... is it possible to do the same with ambience generally?
Let's say you wanted VSL and EWQL to sound as though they were in the same space. Instead of trying to match the VSL to the EWQL ambience, could you remove the ambience of the EWQL... and then play them both through the same impulse?
Okay, you would need something close to the original impulse of the EWQL hall... and it does beg the question of why do it... but for argument's sake, is this feasible?