It seems that everyone getting an Intel 1366 platform (i7 9xx CPUs) has to, and absolutely must, get triple channel DDR3 RAM. People suggesting builds always get it, and people who decide to only go with dual channel seem to get bonked.
Admittedly, RAM isn’t too expensive (despite prices going up recently), especially compared with the cost of the base X58 platform, and, as the platform does support triple channel, one might say “why not”. This may, however, be a point against choosing the typical i7 930/X58 platform over the generally cheaper i7 860/P55 platform; the main advantages of the X58 are triple channel RAM (and perhaps greater PCIe throughput, if you plan to go dual GPU, but I don’t see that many people doing that).
So why is triple channel RAM supposed to be so much better than dual channel? From what I see, people have some idea that triple channel is “3x speed” whereas dual channel is “2x speed”, so triple channel makes your RAM 50% faster. This is true in a sense; however, it only affects the bandwidth figure.
When transferring data, there are two primary speed metrics: bandwidth and latency. It’s the latter that is often ignored, yet more often than not, it’s the latter that really matters. Bandwidth can easily be increased by running multiple channels in parallel (which is exactly what triple channel memory does), but latency is more or less fixed, and generally cannot be improved so easily.
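To put rough numbers on the bandwidth side, here’s a back-of-envelope sketch in Python. The DDR3-1333 transfer rate and CAS 9 timing are illustrative assumptions on my part, not measurements:

```python
# Back-of-envelope: theoretical peak bandwidth scales with channel count,
# but the latency of a single access does not change at all.

def peak_bandwidth_gb_s(channels, transfers_per_sec=1333e6, bus_bytes=8):
    """Peak bandwidth = channels x transfer rate x bus width (8 bytes = 64 bits)."""
    return channels * transfers_per_sec * bus_bytes / 1e9

dual = peak_bandwidth_gb_s(2)
triple = peak_bandwidth_gb_s(3)
print(f"dual:   {dual:.1f} GB/s")    # ~21.3 GB/s
print(f"triple: {triple:.1f} GB/s")  # ~32.0 GB/s
print(f"gain:   {triple / dual - 1:.0%}")  # 50% more *peak* bandwidth

# Latency, by contrast, is unchanged by adding channels: e.g. CAS 9 at a
# 666.5 MHz I/O clock is ~13.5 ns for just the CAS portion of an access,
# and a full cache-line miss is far worse once the whole round trip
# (row activation, transfer, etc) is counted.
cas_latency_ns = 9 / 666.5e6 * 1e9
print(f"CAS latency: {cas_latency_ns:.1f} ns, regardless of channel count")
```

The 50% figure is real, but it’s 50% more peak bandwidth only; the time to fetch one uncached value stays the same.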
High RAM latency has been a primary performance killer for a while (it takes on the order of 600 clock cycles for a CPU to retrieve a value from those DDR2/DDR3 sticks) – and that’s ignoring all the insane CPU optimisations (L1/L2/L3 caches, branch prediction etc) that have gone in to try to work around RAM being incredibly slow. Bandwidth, on the other hand, has never been much of an issue for most applications – I’d say that latency is of primary importance for performance; various high performance applications, such as x264, explicitly optimise for this by trying to reduce the number of cache misses.
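The latency-bound vs bandwidth-bound distinction can be sketched with a toy microbenchmark: chase pointers through a shuffled permutation (each access depends on the previous one, so the hardware prefetcher can’t hide the latency), then walk the same array sequentially. Note this is just an illustration of the two access patterns – pure-Python interpreter overhead dwarfs real DRAM timings, so don’t read the absolute numbers as a hardware benchmark:

```python
import random
import time

N = 1_000_000

# A shuffled permutation: following perm[i] repeatedly produces a
# dependent chain of essentially random accesses.
perm = list(range(N))
random.shuffle(perm)

def pointer_chase(perm, steps):
    """Latency-bound pattern: each load depends on the previous result."""
    i = 0
    for _ in range(steps):
        i = perm[i]
    return i

def sequential_sum(data):
    """Bandwidth-friendly pattern: predictable, prefetch-friendly access."""
    total = 0
    for x in data:
        total += x
    return total

t0 = time.perf_counter()
pointer_chase(perm, N)
t1 = time.perf_counter()
sequential_sum(perm)
t2 = time.perf_counter()

print(f"dependent (latency-bound) pass:  {t1 - t0:.3f} s")
print(f"sequential (bandwidth-friendly): {t2 - t1:.3f} s")
```

In a compiled language the gap between the two patterns is dramatic, and it’s entirely down to latency per miss – extra channels don’t shorten the dependent chain at all.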
My 2x2GB DDR2-800 sticks in dual channel can read at about 4.5GB/s, if memory serves me correctly. Very few applications need anywhere near that speed – or in other words, very few constantly churn through a large portion of memory (most of the time, applications use a small portion of memory frequently at any particular point in time). And even at that read speed, it’s unlikely that your CPU could process so much information, let alone your HDD keep up writing it.
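As a quick sanity check on that 4.5GB/s figure (the working-set sizes below are just illustrative):

```python
# Rough sanity check: how long does one full pass over a working set take
# at a given read bandwidth? 4.5 GB/s is the dual channel DDR2-800 read
# speed mentioned above.

def scan_time_ms(working_set_mb, bandwidth_gb_s=4.5):
    """Time in milliseconds to stream working_set_mb once at the given rate."""
    return working_set_mb / 1024 / bandwidth_gb_s * 1000

for mb in (1, 100, 4096):
    print(f"{mb:>5} MB working set: ~{scan_time_ms(mb):.1f} ms per full pass")
```

Even streaming through a full 4GB of RAM takes under a second at that rate, which is why raw bandwidth is so rarely the bottleneck for everyday applications – it’s the scattered, dependent accesses that hurt.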
So, back to the issue at hand: is triple channel worth it? It has two primary benefits over dual channel: increased bandwidth (discussed above) and greater SMP efficiency. On the latter front, more channels mean the memory can serve multiple data requests simultaneously, rather than queueing them up. This may be useful if you’re running a multi-threaded application, or several applications, which request memory that hasn’t been cached. However, as alluded to above, most applications use a small portion of memory at any particular point in time, which means the working set is most likely already in cache when it’s used, so the benefit of more channels is limited, helping only certain applications. As the number of CPU cores increases, the advantages of more memory channels will become more apparent, but for the typical quad core i7 930, I don’t think there’s much of a benefit.
Benchmarks seem to agree with my assessment here as well.
(Note: synthetic benchmarks may show a difference, as they will often deliberately stress the RAM; of course, very few practical applications do this.)
Conclusion: triple channel really does not provide as much benefit as many people seem to think.
Disclaimer: I don’t claim to be an expert in this field; rather, I’m basing my reasoning on knowledge I’ve picked up.