In the magazine “Maximum PC” (February 2004 issue), there was an article titled “In The Lab: Native Command Queuing Debuts”.
To summarize the article: drives with Command Queuing logic handled roughly 150 more random read requests per second (in one example, a drive went from 221 IOPS to 334 IOPS). Command Queuing, in a nutshell, reorders a series of data requests so that the overall collection of the data is optimized. For instance, the CQ logic can take advantage of the current location of the read/write head to fetch a piece of data for Request #4 before completing Request #1, effectively leaving more time to satisfy new requests.
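To make the reordering idea concrete, here is a minimal sketch of how queuing logic might pick the next request by proximity to the head. The track numbers, request IDs, and the greedy nearest-track rule are all hypothetical simplifications; real NCQ firmware also accounts for rotational position.

```python
# Hypothetical sketch of command-queuing reordering.
# All track positions and request IDs are made up for illustration.

def reorder_requests(head_pos, requests):
    """Greedy nearest-track-first ordering: repeatedly serve whichever
    pending request is closest to the current head position."""
    pending = list(requests)
    ordered = []
    pos = head_pos
    while pending:
        nxt = min(pending, key=lambda r: abs(r["track"] - pos))
        pending.remove(nxt)
        ordered.append(nxt)
        pos = nxt["track"]
    return ordered

queue = [
    {"id": 1, "track": 900},
    {"id": 2, "track": 120},
    {"id": 3, "track": 880},
    {"id": 4, "track": 100},
]
# With the head near track 110, requests #2 and #4 jump ahead of #1
# even though #1 was issued first.
print([r["id"] for r in reorder_requests(110, queue)])  # [2, 4, 3, 1]
```

Note that request #1, issued first, is served last here, which is exactly the behavior the rest of this post worries about.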
The benchmark used in the article was one by Intel that generates as many random read requests as the drive can handle. In my opinion it doesn't seem to be an entirely good benchmark unless you're buying a drive for a heavily taxed network server where your primary goal is to satisfy as many total requests as possible in a given timeframe. In that case, if one person has to wait an extra second so that four others can be served more efficiently, no one complains. This CQ logic certainly doesn't add much value on a typical single-user machine, where requests are mostly serial in nature.
At first, it sounds like this would be a godsend for Gigastudio, because 160 streaming waveforms place a lot of demand on the drive, and constantly pulling wave data from 160 different locations has to approximate a pattern of activity that begins to look a lot like the benchmark algorithm, which was random.
I'm sure many of you see where I'm headed with this, but if not, read on.
This out-of-order fetch algorithm employed by Command Queuing can play havoc with Gigastudio, because its requests for data are ordered by need: the buffers closest to empty should be served first. In other words, if GS requested waveform #1 before waveform #2, it did so because the buffer for waveform #1 is about to run empty. With CQ, we now have a controller that may satisfy up to 15 (for the Silicon Image drive) or 31 (for the Seagate drive) requests before satisfying the original request, simply because it was more efficient to gather the data off the platters “out of order”. Sure, overall throughput is higher, but if each request takes 10 ms and, say, a measly 5 requests get served before ours, we're nearly 50 ms behind the ball on the original request. The buffer runs empty, and a dropout in the sound is heard. Or perhaps a new note is delayed in its initial attack. At that point, who cares if the drive is ahead on I/Os per second fulfilled? Sample data is pointless unless it is delivered in a timely fashion, and CQ doesn't care.
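The arithmetic behind that dropout worry can be checked back-of-envelope. The figures below (10 ms per request, 5 requests jumping the queue, 16-bit stereo at 44.1 kHz) are assumptions taken from or consistent with the numbers in this post, not measurements:

```python
# Back-of-envelope estimate of the added latency from reordered requests,
# and the buffer headroom needed to ride it out. All figures are assumed.

avg_service_ms = 10      # assumed per-request service time
queued_ahead = 5         # requests reordered in front of ours
extra_delay_ms = avg_service_ms * queued_ahead
print(extra_delay_ms)    # 50 ms of added latency before our buffer refills

# A streaming buffer must cover that delay. At 44.1 kHz, 16-bit stereo:
sample_rate = 44100
bytes_per_frame = 4      # 2 bytes/sample * 2 channels
buffer_bytes_needed = int(sample_rate * (extra_delay_ms / 1000) * bytes_per_frame)
print(buffer_bytes_needed)  # 8820 bytes of extra headroom per voice
```

Roughly 9 KB of extra headroom per voice doesn't sound like much, but multiplied across 160 streaming voices it adds up.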
Do I have this all wrong? Will a simple adjustment of the GS buffer size render this “out of order execution” issue moot? Or will it somehow allow people to push their systems even further now that GS 3.0 removes polyphony restrictions? My gut feeling is that the breaking point would come much earlier with CQ, as soon as some unlucky combination of read requests is generated. According to the article, CQ is on its way to the masses, although I certainly wouldn't build a GS system using CQ logic until this issue is put to rest.
My read on this is that we might get more poly out of our Giga machines at the cost of larger buffers. As long as the algorithm is bounded, there can be predictability. If it's unbounded, that first note may have an arbitrarily long gap before it gets filled in from the hard drive.
It would be cool if there were a knob for this and for the sample head length. Turn up the Command Queuing limit and the sample head length and get more poly at the expense of instruments loaded. Piano, anyone? Turn CQ down or to zero, minimize the sample head length, and load gobs of instruments and articulations.
The problem will come if we can't turn it off or bound the algorithm. Let's hope that doesn't happen.
Mark, there are more points to take into consideration, e.g. the physical and logical structure of a disk and the structure of the file system on it. Command queuing happens in the domain of ATA (the raw data, so to speak) and not at the level of file access/requests. You could easily spend an hour or two reading, e.g., this older but still valid document.
Streaming is very different from *normal* file access; that's also the reason why some RAID controllers don't work at all for audio apps.
My observation so far is that Serial ATA drives (which provide native CQ) are performing significantly better.
According to the article, Command Queuing is coming to the masses whether we like it or not. I surmise that it will be touted as the newest “gotta have” feature to aid in the sale of the latest generation of hard drives. It would be a shame, though, if this new technique hampered the number of simultaneous voices in GigaStudio.
We won't know for sure until someone runs some real benchmarks on the same drive with Command Queuing both on and off (assuming that is even possible; it may come as an “always on” feature in the firmware). I may get the chance to actually test this on a built-from-scratch Giga system I'll be building early this spring. I'd like to build a 2 GB system using the new Prescott CPU (GS currently resides with SONAR 3.1b, and although it's been a happy marriage so far...). I think that given certain knowns, like average seek time, average number of requests in the queue, Gigastudio buffer size, sample rate, etc., one could readily assemble a plausible scenario where a new note was delayed in sounding or ended abruptly because other requests were serviced first.
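That “plausible scenario” arithmetic can be sketched directly. The helper below and all of its inputs (queue depth 31 as cited for the Seagate drive, 10 ms per request, the buffer sizes) are illustrative assumptions, not GigaStudio internals:

```python
# Rough worst-case check: does a voice's buffer hold enough audio to
# survive while every other queued request is served first?
# All parameter values are hypothetical.

def buffer_survives(buffer_frames, sample_rate, queue_depth, service_ms):
    playback_ms = buffer_frames / sample_rate * 1000   # audio left in the buffer
    worst_wait_ms = queue_depth * service_ms           # requests served before ours
    return playback_ms > worst_wait_ms

# 16384 frames at 44.1 kHz is ~371 ms of audio vs. a 310 ms worst-case wait:
print(buffer_survives(16384, 44100, 31, 10))  # True  -> no dropout
# 2048 frames is only ~46 ms of audio -> the buffer starves:
print(buffer_survives(2048, 44100, 31, 10))   # False -> dropout
```

Under these assumptions, a 31-deep queue at 10 ms per request demands well over 300 ms of per-voice buffering in the worst case, which is exactly the "more poly at the cost of larger buffers" trade-off raised earlier in the thread.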
I'm wondering: are there any “benchmark MIDIs” commonly used for testing the breaking point of Gigastudio? I'd be interested in measuring the effects of CQ on heavily stressed Giga systems. Also, besides hearing the dropouts or pops or clicks, is there any “scientific” way of determining the point of failure? Is there anything in the GUI that indicates when polyphony stealing has begun? Or when a note was delayed, and by how much?