New-generation GPU memory bandwidth increases: more for compute than for graphics?
New GPUs will have even wider and faster memory buses than their predecessors: AMD is moving from 256 bits to 384 bits, and Nvidia is hopping even further, to 512 bits, in its high-end SKUs. But will the extra bandwidth benefit the usual graphics tasks more, or the vector floating-point workloads of future GPGPU operations?
You must have noticed by now how the next generation of high-end GPUs, both the AMD 'Tahiti' 7900 series and the Nvidia 'Kepler' GK100 processors, have increased not just their memory bandwidth but their bus width as well. AMD is one step up from the 6900 series: its 256-bit GDDR5 memory bus widens to 384 bits in the 7900 series coming early next month. The Nvidia GK100, due a quarter or two later, will have a 512-bit memory bus, a hop from the 384 bits seen now in the GTX 580 and its Quadro and Tesla brethren. Now, this doesn't seem as complicated a job as widening the CTE over and over again for our LTA here in Singapore, but it is still quite a bit of work: wider buses require more pins and more drive power, and they present board-layout challenges in achieving both the speed and the width.
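To see why the bus width matters so much, the peak-bandwidth arithmetic can be sketched out directly: theoretical bandwidth is simply the bus width in bytes multiplied by the per-pin data rate. The data rates below are illustrative GDDR5-class figures of this era, not confirmed specifications for the unreleased parts discussed above.

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak memory bandwidth in GB/s.

    bus_width_bits: width of the memory bus in bits (e.g. 256, 384, 512)
    data_rate_gbps: effective per-pin data rate in Gb/s (illustrative value)
    """
    return bus_width_bits / 8 * data_rate_gbps

# A 256-bit bus at an assumed 4.0 Gb/s per pin (6900-series class):
print(peak_bandwidth_gbs(256, 4.0))   # 128.0 GB/s
# Widening to 384 bits at the same per-pin rate:
print(peak_bandwidth_gbs(384, 4.0))   # 192.0 GB/s
# A 512-bit bus, again at 4.0 Gb/s:
print(peak_bandwidth_gbs(512, 4.0))   # 256.0 GB/s
```

The point the numbers make: going from 256 to 384 bits buys 50% more bandwidth, and 512 bits doubles it, all without pushing the memory clock any higher, which is exactly why the extra pins and board-layout pain can be worth it.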