I have a Dell 650 Workstation featuring dual 3.06 GHz Hyperthreaded Xeon processors and, finally, with the GS3.04 release, the program is functioning with the dual processors and Hyperthreading enabled. To be truthful, I have not stress-tested it in this configuration - but the program would not work at all with previous releases.
Be forewarned that, at least for now, dual processors and Hyperthreading will offer no advantage to GS. Release 3.04 only allows the program to tolerate a dual or Hyperthreaded environment. Interestingly, even a program like Cubase SX3, which is specifically coded for multi-threaded dual processing, will frequently choke to death when using dual processors that are *also* Hyperthreading. I found this out only recently, when I was finally able to reinstate dual processors and Hyperthreading in my BIOS thanks to the GS3.04 release - and my SX3.01 suddenly made no audible sounds and displayed no modulations on my VU meters. After hours of troubleshooting and tech support, we had to disable dual processing in the SX application for it to work.
I'd go with two machines. You can sequence on one and Giga on the other. Or run VSTis on one and Giga on the other. Or run two Giga licenses. In each case you will get more performance and/or flexibility. There are so many cool synths and Kontakt-Rompler products out now that having a separate machine for them is nice. And having one or two Giga machines is great too.
In a year or two dual-core processors will become common. The software companies will have to support them to compete. The hardware prices will fall. We'll be getting dual processors essentially for free. Maybe even on our current motherboards (not sure about that though).
I was able to turn on HT with GS 3.03, which allowed Sonar 4 (which I use concurrently) to process more audio tracks. GS3 performance remained about the same, handling 4 to 5 concurrent GigaPulses cleanly. With GS 3.04 I seem to be able to get 5 to 6 GigaPulses going cleanly. And I'm experiencing a little more freedom changing display windows between GS3 and Sonar without any glitching. I suspect HT is letting the OS do its task switching a little more efficiently.
I'm hopeful that true dual-processor support is on the way for GS3 and that it will have even greater impact on GigaPulse performance. My mixes tend to use about a dozen stereo 24/88.2 audio tracks and it would be nice to be able to generate a source-based reverb for each without a hardware farm, multiple passes, or sub-mixes.
As you know, dual CPU is NOT the same as Hyperthreading.
I use dual Xeons with video compositing apps that are optimised for it.
Given the price of another Xeon and a mobo, you would really get no worthwhile benefit at this point. Save your money for now. The audio world in this respect is still way behind the compositing and 3D apps.
Given that convolution plugins and virtual instruments are getting CPU intensive, maybe this will change in a year or two.
I think there is economy in a dual processor rig. You have a single - power supply, case, keyboard, graphics card, monitor and soundcard, let alone shared drives and memory and associated controllers. The IT world discovered this years ago - that's why you have multi-processor systems for large applications such as DB servers. This industry would be well served to adopt some of these technologies and take advantage of this economy of scale.
Except that a Xeon CPU costs more than a P4 of the same speed, and dual processor motherboards tend to be expensive. Then, you have to consider that a dual system never makes full use of both CPUs - most estimates seem to be that it will gain about 40-50% over an equivalent single processor system at most. So if you build, say, a dual 2 GHz rig, you would value the second processor as effectively giving you 1 GHz of extra power. If you look at the extra you're paying - for the CPU itself, for the fact of BOTH CPUs having to be Xeons, for the mobo - you might well come to the conclusion it's not worth it. You could achieve the same thing by using a single 3 GHz CPU (OK, I know it's not exactly the same) and it would probably work out cheaper.
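The cost argument above can be sketched as back-of-envelope arithmetic. The 45% scaling figure sits in the middle of the 40-50% estimate quoted; it and the function name are illustrative assumptions, not measurements:

```python
# Rough comparison of a dual-Xeon rig vs. a faster single CPU.
# DUAL_SCALING is an assumption drawn from the 40-50% estimate above.

DUAL_SCALING = 0.45  # second CPU adds ~40-50% effective throughput

def effective_ghz(base_ghz, cpus):
    """Effective throughput in 'single-CPU GHz' for 1 or 2 processors."""
    return base_ghz if cpus == 1 else base_ghz * (1 + DUAL_SCALING)

dual_2ghz = effective_ghz(2.0, 2)    # a dual 2 GHz rig
single_3ghz = effective_ghz(3.0, 1)  # a single 3 GHz rig

print(f"dual 2 GHz Xeons: ~{dual_2ghz:.1f} effective GHz")
print(f"single 3 GHz P4 : ~{single_3ghz:.1f} effective GHz")
```

On these assumptions the dual rig lands around 2.9 effective GHz - less than the cheaper single 3 GHz machine, which is the poster's point.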
There's another thing too, particularly with dedicated sampling machines. One of the biggest restrictions on what we do is how much we can get loaded into RAM at once. Windows XP will see up to 4GB, and most standard mobos will take that much. Estimates of how much Giga can actually USE fall somewhere between 1 and 2 GB, and if you're running another RAM-hungry program on the same PC that runs in application rather than kernel mode (e.g. GPO or EWQLSO), you might squeeze another 2GB into that. So your absolute top whack, running two samplers at once and being lucky with how they behave, might be say 3.5GB.
Now, you don't get double that by adding a second CPU! Windows' 4GB limit still applies, Giga's c.1.5GB limit still applies. These limits are per machine, not per CPU. So if one person builds a dual 2.4 GHz rig, and another person builds two separate 2.4 GHz rigs, the second guy will be able to load twice as many samples at once as the first guy. If he's an orchestral composer, for example, that means a lot.
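The RAM budget in the two paragraphs above works out like this. The GB figures are the rough estimates quoted in the posts, not hard specs:

```python
# Sketch of the per-machine RAM budget argument. All figures are the
# rough estimates from the posts above, not vendor specifications.

WINDOWS_XP_LIMIT_GB = 4.0   # 32-bit Windows XP address-space ceiling, per machine
GIGA_USABLE_GB = 1.5        # rough estimate of what Giga itself can use
OTHER_SAMPLER_GB = 2.0      # e.g. GPO or EWQLSO running in application mode

# One machine: capped by the OS limit OR the sum of what the apps can use.
one_machine = min(WINDOWS_XP_LIMIT_GB, GIGA_USABLE_GB + OTHER_SAMPLER_GB)

# Adding a second CPU changes nothing - the limits are per machine.
dual_cpu_one_machine = one_machine

# Two separate machines double the loadable sample pool.
two_separate_machines = 2 * one_machine

print(one_machine, dual_cpu_one_machine, two_separate_machines)
```

Which is the "3.5GB top whack" on one box versus 7GB across two boxes, regardless of how many CPUs the single box has.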
> "I think there is economy in a dual processor rig..."
That may be true when the processor is the bottleneck. For instance, when running lots of GigaPulse instances, multi-processing would be great - if GS3 were written to take advantage of it.
But there are other limitations with Giga, namely the amount of RAM that you can use, and the amount of data that you can stream from the hard drives. More processing doesn't solve these problems.
For IT the bottleneck is the database. To maximize it you hook a big honkin' RAID to the fastest multi-processor machine that you can afford. The fact is that they can't use truly separate machines, as the database can never be allowed to get out of sync - there is only one master copy - and transactions must be locked.
For instance, if I had a million dollars in a bank account (I wish), the bank wouldn't want me to be able to make two simultaneous withdrawals of a million dollars from two different computers.
The IT industry doesn't keep the master database on a single machine because it wants to. They do it because they have to. And it's not too difficult to give all of the query logic to one processor and all of the networking logic to the other. Anything to speed up access to the database chokepoint.
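The bank example above is the classic case for a single locked master. A toy sketch (hypothetical, obviously not how any real bank works) of why concurrent writes must be serialized:

```python
# Toy illustration of the locking argument: two concurrent withdrawals
# against one balance must be serialized, or both could succeed.
# Hypothetical example only - not a real banking implementation.
import threading

balance = 1_000_000
lock = threading.Lock()

def withdraw(amount):
    global balance
    with lock:              # transaction lock on the single master copy
        if balance >= amount:
            balance -= amount
            return True
        return False        # the second withdrawal finds no funds

results = []
threads = [
    threading.Thread(target=lambda: results.append(withdraw(1_000_000)))
    for _ in range(2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results.count(True))  # exactly one withdrawal succeeds
```

Reads need no such lock, which is why the next post's mirrored-server comparison holds for a sampler that only reads its data.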
With Giga we're only doing "reads", not "writes". That allows us to use as many computers in parallel as we wish. This is more akin to download services having mirrored sites. They use multiple machines to provide redundant bandwidth, processing power and, well, everything. Tucows.com learned this long ago.
So, given that Giga isn't written to take advantage of a dual processor system, and the fact that there's no requirement for Giga to be limited to a single master system, running multiple computers is clearly the best path for GS3.