OK, on the face of it this is going to be another one of those "shall I upgrade to giga 3 or not?" threads.
Or even another of those "Is Giga better than Kontakt?" threads.
Or EVEN another of those "Is Giga dead in the water?" threads!
But my focus is different. What concerns me about Giga now is how it fits into the general workflow of a standard DAW.
For most people, the description of a DAW (whether it spans one or several machines) would go something like this: There is a central program, usually Logic, Cubase or Sonar, that handles the original sequencing of MIDI parts, the routing of those parts to audio sources, the recording of live audio parts, and the mixing of all the audio into a final stereo mix. Within that process, there are plugins that handle the way MIDI triggers audio - soft synths and samplers, that work WITHIN the main program. At the end of the process, there may be a dedicated program such as Wavelab or Soundforge, to handle treatment of the stereo mix and preparation for final presentation.
Now Giga has never sat within this standard workflow as well as the competition. You load up your sequencer file, and your Halion or Kontakt settings load up automatically, ready to go. But with Giga you've had to load a separate performance, deal with all the necessary links if you run it on a separate machine, etc. Don't get me wrong - I love Giga and have kept using it, currently on version 2.5, because of all the things it has going for it. But I can't help feeling that the way it works, basically taking control of a whole machine, comes from the time when that was the ONLY way soft sampling could work, and now that machines are powerful enough to run soft samplers as plugins, it's less and less relevant.
I'm actually happy to try out the Giga-VST adapter, the Giga Teleport and the ReWire support, and cobble together the best way I can of co-ordinating Giga with my sequencer (Sonar). But what's really got me thinking about this is GigaPulse, which is of course one of the BIG selling points of Giga 3.0. I do complex orchestral music and of course I'm eager to try it, like anybody.
But a question keeps coming into my mind: WHY would I want to run my convolution reverb WITHIN MY SAMPLING PROGRAM? The centrepiece of my setup is Sonar, not Giga. That's where ALL my VST instrument settings are stored, that's where ALL my effect settings are stored. That's where my project begins - when I play a few lines into a basic piano sound - and that's where it ends, when I mix down my final stereo file. (OK, for me it actually ends in Wavelab or T-Racks, but anyway...)
It just seems like the obvious thing, when wanting to run a convolution reverb, is to do it IN SONAR ITSELF. Then if I want to apply it to some tracks that are coming out of Giga, AND to my B4 and my Pro-53 tracks, and a few live audio tracks, I can. Sure, Giga 3 can take audio inputs and theoretically do all that (subject of course to having GSIF-2 drivers, which my soundcard, the Soundscape Mixtreme, doesn't yet have, despite having once been one of the best cards out there for working with Giga). But again, WHY would I want to have to buss audio signals out of my sequencer, where they belong, into my soft sampling program in order to apply effects to them?
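To make the question concrete: wherever it runs, a convolution reverb computes the same thing - the dry signal convolved with a room's impulse response, usually with a wet/dry mix on top. Here's a minimal illustrative sketch in Python/NumPy; it shows the idea only, not how GigaPulse or any Sonar plugin actually implements it (real plugins use partitioned FFT convolution for speed):

```python
import numpy as np

def convolve_reverb(dry, impulse_response, wet=0.3):
    """Toy convolution reverb: convolve the dry signal with a room's
    impulse response, then blend wet and dry. Illustration only."""
    wet_signal = np.convolve(dry, impulse_response)
    out = wet * wet_signal
    out[:len(dry)] += (1.0 - wet) * dry  # dry path, zero-padded to tail length
    return out

# A single click through a crude two-tap "room":
dry = np.array([1.0, 0.0, 0.0, 0.0])
ir = np.array([1.0, 0.0, 0.5])  # direct sound plus a half-strength echo
result = convolve_reverb(dry, ir, wet=1.0)
# result: the click, plus its echo two samples later, plus the reverb tail
```

Since the maths is identical on either side of the fence, the question really is about routing and CPU budget, not sound.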
OK, so I understand that GigaPulse can also be bought as a standalone plugin and used in my sequencer (I think?). Surely, if it IS as amazing as people say, that would be the logical thing to do, and then just stick with Giga 2.5?
I don't get it. I can see that Giga 3 has some amazing technology up its sleeve. But I just don't see where it fits into standard DAW workflow. I find myself scratching my head for hours, making convoluted (sic) signal-flow diagrams trying to justify to myself how I would use all of its amazing features, when with Kontakt I would just open my Sonar file and load some samples - end of story. Is Giga 3 groundbreaking? Certainly. Is it worth the upgrade price? Probably. But here's my question: Is the amount of time and mental gymnastics needed to amalgamate Giga 3 into my studio as a whole justified, when I could just be writing music, and using standard, easy, well-trodden architecture to realize that music in my DAW?
I'd really like to hear from people how they actually USE Giga 3.0, in relation to all their other software. Whether they find it easy or challenging, and whether it justifies the effort. Bruce, I'd be interested in your experiences as I know you use Sonar too.
I use Giga on a dedicated machine. The big advantage for me is that by using a CPU-intensive plug-in like GigaPulse, I am spreading my CPU resources over two machines, not to mention issues like voices and so on. For me the biggest selling point of GS3 was GigaPulse; it's awesome.
> "Now Giga has never sat within this standard workflow as well as the competition. You load up your sequencer file, and your Halion or Kontakt settings load up automatically, ready to go. But with Giga you've had to load a separate performance, deal with all the necessary links if you run it on a separate machine, etc."
I've actually liked this "feature" of GigaStudio. I generally work from one very large template, so most of my pieces access the same samples. Although it may take a bit longer to get started, I think being able to switch back and forth between different pieces without having to unload and reload the same samples into RAM easily outweighs any workflow advantage an internalized sampler would offer. After upgrading to NI's latest DFD release, Kompakt takes around three minutes to load up all the samples used in its share of the template.
Ouch, you make a good point. GigaPulse VST fits your workflow model much better than GigaPulse Pro.
When working in a 100% Giga-Sample environment, GigaPulse Pro works great. I can route things all over the place - EQ, compress, whatever - and then route into some small number of GigaPulse instances on the DSP Groups page. To do the same in the DAW would require routing every MIDI channel's output back to the DAW independently; then I could use audio inputs and aux busses to do the same thing. But it would come at the expense of efficiency. The fewer streams I have to route to my outputs or the DAW, the less the CPU wastes its time playing postman.
Having both GS3 Orchestra and GigaPulse VST covers everything. Then it's easy to apply GigaPulse to external audio tracks. Keep in mind that the CPU can only do so much GigaPulse, so if you're seating lots of "players" independently, you'll end up with a multi-pass workflow anyway - unless you have lots of machines.
There are two workarounds. The best is if you have a multi-I/O soundcard that supports GSIF-2. Now you can route your sequencer audio into Giga and back and run GigaPulse Pro there. This is killer if you want to run the audio through an impulse that you are already running on the Giga side. You don't need any additional instances this way.
The second workaround gets the job done, but has some real drawbacks... You can create a gig that contains audio tracks, and just trigger the audio tracks from a single, held note. It's not hard to do and is fairly quick - you don't even need to open the editor if you use Quicksound - it's just an extra wasted step. I've done this with good results though...
But really, the overall workflow for me is to use some small number of impulses when creating the performance. Things that don't go through Giga can get any old reverb that sounds good. You don't worry about fine tuning and seating the instruments separately until the performing stops and the mixing begins. Probably the best approach is to render all of the tracks (using ReWire or capture), and then use GigaPulse VST or the GSIF-2 audio approach. That lets you perform in Giga and mix in the DAW.
No matter how you use it, the impulses that ship with GS3 sound great. I hope that more GSIF-2 drivers are released soon and that GS3 owners get a good discount on GigaPulse VST.
I'm sitting here looking at two more hardware sound modules that are going on eBay because I'm using more soft synths in Sonar. What does that have to do with running GS on a separate machine? Soft synths eat up a lot of juice on the Sonar machine, so why cram a sampler with limited powers in there too, along with processors, when I can have a purpose-built machine running GS with lots of headroom?
Loading up projects is easy. Both machines are on a KVM so I launch GS & load, switch to Sonar - launch & load, go to work. Having them separate keeps things straight in my head. At the end of the session, I floppy the .gsp file over to the Sonar rig and include it in the project backup. Done.
GigaPulse sounds great. It doesn't really matter that it's in the sampler (GS), since I can put it on an aux send there, or a group. It's all going down to stereo anyway, and the sounds I want to use GigaPulse on are in the sampler.
Well, I am not a GS user, but I keep asking myself the same questions you did. I read GS forums, reviews, and advertising, but I must say it has been very hard to find good reasons to switch to GS; all the "advantages" on offer seem more like complications than real improvements to workflow. Why run a convolution inside GS? I don't know, because it looks much more convenient to run convolution as a plugin anywhere I want to, like all the others. Why should I have to work outside my sequencer to manage GS patches? I don't know; there is no reason for doing that at all... Why does GS demand so much from hardware (e.g. the need for GSIF)? I don't know, because every other piece of audio software I use on my machine simply works with anything. Why doesn't GS work with hyperthreading? I don't know, because every other piece of software I use does! Why doesn't GS... BZZZZZZZZZZZZ
Time is up, too many questions, I quit.
Last edited by Guga Bernardo; 10-10-2004 at 01:13 AM.
The problem here is a disconnect of understanding...understanding exactly what GigaPulse is.
GigaPulse is being used as a realtime reverb by some programmers, but by others, it's being used as an integral part of instrument design. Body resonances for pianos, guitars, and other instruments; drum shell and overhead microphone resonances for drum kits; string mutes and harmonics...
These processes are perfectly logical in being handled at the sampler level.
You also have to look at the larger picture of GigaStudio uses. There are many folks that use banks of GigaStudio computers to keep a virtual orchestra online at all times. You have a system here that is highly scalable, so what makes sense for a person who is a "one machine" shop might very well not make sense for someone with a five-machine rack of GigaStudio machines, all tied together with lightpipes into a digital board.
Personally, I just print the GigaPulse-processed sound to tracks. No big deal. If I want to change it later, I reprint it. If not, I don't.
Sometimes, I think it's easy for us to over-think, or over-plan our methodologies. I have tried to adopt an attitude of sitting down, hitting record, and just being mindless about experimenting. If nothing great comes out, I can ditch the track. If there's a little nugget of creativity there, I can save it and build on it.
The important thing to remember is this: GigaPulse CAN be a reverb. But that is only a single use for it. It has many, many other uses in instrument design. So it doesn't make sense to apply the traditional rules to it...they do not apply at all.
If it sounds good, it is good. Happy, happy, happy.
> "...Why run a convolution inside GS? I don't know, because it looks much more convenient to run convolution as a plugin anywhere I want to, like all the others."
Get GigaPulse VST then... and GS3. Then you can truly run GigaPulse everywhere - in the sequencer, in the sampler, and integrated with instruments. Rather than either/or, make it all of the above.
> "Why should I have to work outside my sequencer to manage GS patches? I don't know, there is no reason for doing that at all..."
Frankly, this doesn't matter. For a VSTi you don't manage patches within the Sequencer anyway. You bring up the sample player and manage them there.
The difference? You have to click something in the sequencer to bring up the sampler UI. For Giga, press Alt-Tab and the Giga window shows up instantly with a great, full-screen UI. Frankly, it's faster in Giga.
Sure, the Giga Performance isn't stored in the sequencer. What's the price? Like a couple of seconds to start the loading of one of the recent performances. A few more seconds if you have to find the path for an older performance. This is a startup deal, not a workflow deal - you only do it once per piece, and the cost is near zero.
> "Why does GS demand so much from hardware (e.g. the need for GSIF)? I don't know, because every other audio software that I use in my machine simply works with anything."
Performance, performance, performance. Nothing matches the low-latency and power of keeping everything in the kernel.
> "Why doesn't GS work with hyperthreading? I don't know, because every other software I use does!"
As an AMD guy, it's not an issue for me. I would guess that the reason was schedule. They focused on single-processor, non-HT, XP-only to get the job done as quickly as possible with fewer maintenance variables. Frankly, they've got some bugs remaining that they need to focus on first. After everything is square, then they should fix HT and multi-processing. W2k is aging code. No need to support that.
To be honest, the only thing that I want to help my workflow is GigaPulse VST. I'd like to apply the same impulse to non-Giga sounds occasionally, and GP-VST would be the most convenient solution.
As a daily GS3 user, none of the other questions you asked have any bearing on my workflow or results whatsoever.
About the kernel-level control in GS3: what is the big deal on performance? I have some friends using GS with 7-9 ms of latency, which does not look much different from other soft synths and samplers available on the market. Are you folks getting better results with GS3?
BTW, how do you measure latency in your systems? Is there a "scientific" method of measuring this?
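One reasonably "scientific" method is a loopback test: send a click out of the soundcard, cable the output back to an input, record it, and measure the offset between what you sent and what came back. The measurement step can be done by cross-correlation. A rough sketch in Python/NumPy, where a synthetic delayed click stands in for the real loopback recording:

```python
import numpy as np

def estimate_latency(sent, recorded, sample_rate):
    """Estimate round-trip latency by cross-correlating the recording
    against the test signal and finding the lag of best match."""
    corr = np.correlate(recorded, sent, mode="full")
    lag = int(np.argmax(corr)) - (len(sent) - 1)  # lag in samples
    return lag, lag / sample_rate * 1000.0        # samples, milliseconds

# Synthetic stand-in: a click that comes back 128 samples later at 44.1 kHz
rate = 44100
sent = np.zeros(1024)
sent[0] = 1.0
recorded = np.zeros(1024)
recorded[128] = 1.0
lag, ms = estimate_latency(sent, recorded, rate)
print(lag, round(ms, 2))  # -> 128 2.9
```

With a real capture you'd subtract any known buffer offsets reported by the driver; the residue is the round-trip latency of the whole chain.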
Last edited by Guga Bernardo; 10-10-2004 at 08:08 PM.