Monday, April 30, 2007

Disk Disk Disk

I think one of the upcoming challenges for the 4- and 8-core workstation market is going to be resolving a major problem for multi-core systems: I/O contention.

In the 4x4 we have two processors with a total of four cores, soon to be eight. So much CPU - and yet I find that I still wait. So I must of course ask the question "What am I waiting for?". Invariably the answer is the hard drive. As I continued to cogitate, I realized that the machine can run four concurrent processes - system or otherwise - which means there can be four active threads (minimum) that are able to, and need to, access the hard disk. Each thread on its own is quite capable of saturating the hard disk with requests.

With 8 threads running concurrently we're going to have even more trouble feeding them the data that they need. It's an unfortunate scenario because there does not seem to be a decent answer.
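To make the contention concrete, here is a minimal Python sketch (names like `make_scratch_file` and `concurrent_read_mb_per_s` are my own, and the scratch file merely stands in for real data on one spindle). It launches N threads that each stream reads from a different region of the same file; on a single mechanical disk the head must seek back and forth between the streams, so aggregate throughput typically falls as N grows. Note that on a small scratch file the OS page cache will hide the effect - for honest numbers you need a file larger than RAM, or a cold cache.

```python
import os
import tempfile
import threading
import time

CHUNK = 1024 * 1024  # 1 MiB per read
ZEROS = b"\0" * CHUNK

def make_scratch_file(size_mb: int) -> str:
    """Write a scratch file that stands in for data on a single disk."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        for _ in range(size_mb):
            f.write(ZEROS)
    return path

def _reader(path: str, offsets, results, idx) -> None:
    """Each thread streams its own slice of the file; on one spindle
    these competing streams force the head to seek between them."""
    total = 0
    with open(path, "rb", buffering=0) as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(CHUNK))
    results[idx] = total

def concurrent_read_mb_per_s(path: str, n_threads: int) -> float:
    """Aggregate read throughput (MB/s) with n_threads concurrent readers."""
    size = os.path.getsize(path)
    per_thread = size // n_threads
    results = [0] * n_threads
    threads = []
    start = time.perf_counter()
    for i in range(n_threads):
        base = i * per_thread
        offsets = range(base, base + per_thread, CHUNK)
        t = threading.Thread(target=_reader, args=(path, offsets, results, i))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    return sum(results) / elapsed / (1024 * 1024)

if __name__ == "__main__":
    path = make_scratch_file(32)
    try:
        for n in (1, 4, 8):
            print(f"{n} readers: {concurrent_read_mb_per_s(path, n):.0f} MB/s")
    finally:
        os.unlink(path)
```

Python's GIL is not a problem here because blocking reads release it, so the threads really do contend at the disk rather than at the interpreter.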

For now I see the most viable option is to add between 4 and 8 disks to your system. I will elaborate.

To truly take advantage of the 8 cores I think that we'll need the following (each "1/2 (R0)" set being one or two drives striped in RAID 0):

Drives      Purpose
1/2 (R0)    System disk
1/2 (R0)    Program files
1/2 (R0)    User data
1/2 (R0)    User application data (My Documents, Music, etc.)

These are all areas that are accessed very frequently and demand a high level of I/O. The NVIDIA chipset has a large number of SATA ports, and it's possible that the designers were thinking it would be wise to use them all. I can now see why that would be the case.

This is likely overkill - you could probably get away with using 2 of the 4 sets. But if you need and use virtual machines, a 3rd or 4th set is not out of the question.

I suspect there would be a performance boost, but I'm not entirely sure how we'd go about measuring it.
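One simple way to start measuring would be to time the same copy twice - once with source and destination on the same disk, once with the destination on a second disk - and compare the throughput. A small sketch (the helper name and the example paths are mine, purely illustrative):

```python
import os
import shutil
import time

def timed_copy_mb_per_s(src: str, dst: str) -> float:
    """Copy src to dst and report throughput in MB/s."""
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return os.path.getsize(src) / elapsed / (1024 * 1024)

# Hypothetical paths - substitute your own drive letters:
# same_disk  = timed_copy_mb_per_s(r"C:\big.iso", r"C:\copy.iso")
# cross_disk = timed_copy_mb_per_s(r"C:\big.iso", r"D:\copy.iso")
```

A same-disk copy forces the heads to alternate between reading and writing, so if the split layout helps, the cross-disk number should be noticeably higher. Use a file big enough to swamp the OS cache, or the measurement mostly reflects RAM.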

Just food for thought.

L1N64-SLI WS Ethernet DOA Part 2

Well, it seems that the Ethernet DOA - was not.

Apparently the firmware for my motherboard was not properly initialized, leaving the MAC address for the Ethernet controller in its default (and invalid) state.

I found, through the helpful advice of another reader, that setting the MAC address manually in Vista allowed me to use the on-board Ethernet. Whoo!

I was quite happy about this and immediately did some benchmarking with a server on my network that has a RAID 0 disk and a GbE controller. The result under Vista was an astonishing 92 MB/sec - very close to the benchmarked maximum read speed of my disks.
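If you want a repeatable throughput number without a file copy in the mix, a small socket test works too. The sketch below (function names are mine) pushes a fixed number of megabytes through a TCP connection and reports MB/s; over loopback it mostly measures the stack itself, but pointing the client at a real server on the LAN gives a figure directly comparable to the 92 MB/s above.

```python
import socket
import threading
import time

CHUNK = 64 * 1024

def _sink(srv: socket.socket) -> None:
    """Accept one connection and drain everything sent to it."""
    conn, _ = srv.accept()
    while conn.recv(CHUNK):
        pass
    conn.close()

def loopback_mb_per_s(total_mb: int = 64) -> float:
    """Time a total_mb transfer through a local TCP connection."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    t = threading.Thread(target=_sink, args=(srv,))
    t.start()
    payload = b"\0" * CHUNK
    cli = socket.create_connection(srv.getsockname())
    start = time.perf_counter()
    for _ in range(total_mb * (1024 * 1024 // CHUNK)):
        cli.sendall(payload)
    cli.close()
    t.join()  # wait until the receiver has drained everything
    elapsed = time.perf_counter() - start
    srv.close()
    return total_mb / elapsed
```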

Apparently there is another ASUS BIOS update out there or possibly still in beta that will allow you to write the MAC address value directly to the firmware. However because setting the address manually worked I have not invested much time in exploring BIOS updates.

Of course there is a caveat - I found that running the same transfer 12 hours later yielded performance in the 18 - 24 MB/s range. I guess the dynamic TCP windowing capability of Vista is not that smart :(

My guess is that if 99% of your traffic is Internet-based, the windowing function will tend towards a much smaller window, and the advantages of a large window will be lost over the local LAN. This is of course speculation - but it is quite likely that the dynamic windowing function is unaware of the context of the transmission (local LAN vs. WAN), given where it would sit on the network stack to do its job.
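If the autotuned window really is the culprit, one per-application experiment (my own suggestion, not a documented Vista fix) is to pin the socket buffers to an explicit size - on most stacks, setting SO_RCVBUF/SO_SNDBUF by hand disables autotuning for that socket, trading adaptability for predictability. A minimal sketch:

```python
import socket

def make_pinned_socket(buf_bytes: int = 1 << 20) -> socket.socket:
    """Create a TCP socket with explicitly sized buffers.

    Setting the buffer sizes by hand generally opts the socket out of
    the stack's dynamic window autotuning, so LAN transfers should
    stay fast instead of drifting to a small window over time.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, buf_bytes)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, buf_bytes)
    return s
```

This only helps for software you write yourself, of course - it won't fix Explorer file copies.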

Bottom line: reboot for multi GB transfers over GbE.

An interesting note, however: there is a very noticeable difference between my onboard controller and my Netgear card. The onboard completely outpaced the Netgear card, even after rebooting and retrying the transfers.

Still here

To answer a reader's question: we're still up and running. I've been running around quite a bit lately, and it hasn't involved spending a lot of QT with my 4x4.

As we get closer to the AMD launch I'll be providing some details on AMD's support for the upcoming Barcelona platform as well as the new R600.

When the time comes we'll be documenting the upgrade from the FX-70 to the new Barcelona chips.

Believe it or not, I'll be needing the CPU!