Asus ROG Maximus V Extreme (Z77) Subzero Review
We have already published a quick preview of the Maximus V Extreme over here. We will still go over many of the board's main hardware features, and we will also discuss some of its more unique features in greater depth.
Let's start by taking off the heatsinks and talking about some of the important features. First of all, take a look at all those extra power connectors! You have an extra 4-pin for the CPU VCore, which provides an extra 150W; a 6-pin for the GPUs; and finally a 4-pin floppy-drive power connector that supplies extra power to the I/O devices.
The main CPU voltage regulator is made up of a total of 8 phases. The Maximus V Extreme touts the exact same VRM as the Maximus V Gene, which keeps everything consistent in terms of production. The design is very sound and the components used are on the good side of the VRM parts spectrum, so we have no complaints; however, this VRM isn't of the same quality as the Maximus IV Extreme's (DirectFETs), but then again Ivy Bridge uses less power than previous enthusiast CPUs. We have a digital PWM from IR, most likely the IR3567, as the VRM is made up of 8+4 phases which all run off one PWM. That means the PWM needs 2 outputs for the iGPU, which are doubled to produce a total of 4 phases, while 4 other outputs for the CPU VCore are doubled using the IR3598, which also integrates dual drivers. The IR3598 is commonly used on ASUS boards as it helps save space; while it doubles the phases, it also cuts the component count and board real estate by 50% in terms of the drivers needed for each phase. So there are some benefits to these doublers. Below is the rebranded IR3567.
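The phase accounting above can be written out numerically. A minimal sketch, assuming the 4+2 split of PWM outputs described (4 for VCore, 2 for the iGPU, each output fed through a doubler):

```python
# Phase-doubling arithmetic for the IR3567 + IR3598 arrangement described above.
# Assumed split: 4 PWM outputs for the CPU VCore, 2 for the iGPU.
PWM_OUTPUTS = {"vcore": 4, "igpu": 2}
DOUBLING_FACTOR = 2  # each IR3598 turns one PWM signal into two interleaved phases

phases = {rail: n * DOUBLING_FACTOR for rail, n in PWM_OUTPUTS.items()}
print(phases)  # {'vcore': 8, 'igpu': 4} -> the 8+4 phase design on the PCB
```

This is just the bookkeeping behind the "8+4 phases off one PWM" claim, not anything ASUS publishes.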
Then we have the VCore VR's MOSFETs: an NXP 7030AL is used as the high-side MOSFET and two NXP 5030AL are used as low-side MOSFETs, which helps even out the load. Each of these power stages also uses a 35A metal-alloy choke made by Trio, and these chokes set the final output limit of each phase.
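A quick back-of-envelope ceiling for the VCore VRM, assuming the 35A choke rating is the effective per-phase limit and a nominal 1.2V VCore (the voltage is my illustrative assumption, not a spec):

```python
# Rough VCore VRM ceiling implied by the choke ratings (not an ASUS spec).
PHASES = 8
CHOKE_RATING_A = 35   # per-phase current limit set by the Trio metal-alloy chokes
VCORE_V = 1.2         # assumed nominal voltage, purely for illustration

max_current = PHASES * CHOKE_RATING_A  # theoretical 280 A across all phases
max_power = max_current * VCORE_V      # ~336 W at the assumed 1.2 V
print(max_current, max_power)
```

Plenty of headroom for Ivy Bridge even under LN2, which is consistent with ASUS not upgrading the MOSFETs.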
These MOSFETs are found on most high-end ASUS motherboards; because of how little power Ivy Bridge eats up (even under LN2 and over 6GHz), ASUS saw no need to upgrade them. One good thing ASUS did was provide a two-piece heatsink assembly, so extreme overclockers can remove the VRM heatsink while under LN2, because the PCB's frozen copper will cool the MOSFETs.
The memory VRM is run by its own digital PWM; this one might be an IR3570 or some 2-phase CHiL PWM. Here we have the same MOSFETs as used for the CPU VCore, and we will see the same MOSFETs used on the iGPU, VCCIO, and VCCSA VRs.
The VCCIO is run off a uP16060, which is a 2-phase analog PWM, paired with the same nice MOSFETs.
The VCCSA is run off a single-phase VRM driven by an unknown Richtek PWM; CZ-DL is the marking code for some single-phase PWM, but there is no real need to look it up.
A dedicated VR produces its own 3.3V output. This can be used as extra AUX power for the PCI-E slots for multi-GPU benchmarking, and it can also help power Thunderbolt devices, as they and their controller draw a lot of power. This is a pretty cool touch: most boards with Thunderbolt have a dedicated VR for the chip and the ports, but ASUS has also allowed this VR to power the PCI-E slots. There is also a single phase above that powers the PLX8747. All these voltages are adjustable in the UEFI.
ASUS always goes above and beyond when it comes to PCI-E slot layout; this board can do native x16 as well as native x8/x8. However, ASUS also allows this board to support 4-way SLI/CF. This is done through a series of bypasses around the PLX8747 using multiple PCI-E switches.
As you can see, the quick switches are used heavily, not to switch lanes between slots, but rather to route large amounts of bandwidth to their end devices. This is one of the only ways to get 4-way multi-GPU at 16x/8x/8x/8x, or in this case 8x/16x/8x/8x. The total lane count is 8 lanes higher than with other PLX implementations, because the PLX is only fed x8 instead of x16, and just like the NF200, the PLX8747 outputs 32 lanes even when fed fewer than 16. Giving the PLX8747 x8 of upstream bandwidth instead of x16 might hurt its performance, but the first card having a native x8 link is a bonus. In the end the performance should be very similar to other PLX implementations; the single-lane bypass, meanwhile, is ultimately a great help for BCLK overclocking and for some PCI-E 3.0 issues, as the PLX8747 isn't a perfect chip. My only gripe is that I thought the black slot was the native x16 slot; instead, it is the first slot that runs at native x16 with a single GPU.
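The lane budget above can be sanity-checked with simple arithmetic. A sketch assuming Ivy Bridge's 16 CPU lanes are split 8 native to the first slot and 8 upstream into the PLX8747, which always fans out 32 downstream lanes:

```python
# PCIe lane budget for the bypass arrangement described above (assumed split).
CPU_LANES = 16
NATIVE_TO_SLOT1 = 8                            # first slot keeps a native x8 link
UPLINK_TO_PLX = CPU_LANES - NATIVE_TO_SLOT1    # x8 feeds the PLX8747 upstream port
PLX_DOWNSTREAM = 32                            # PLX8747 exposes 32 downstream lanes

slots = [NATIVE_TO_SLOT1, 16, 8, 8]            # the 8x/16x/8x/8x configuration
total = NATIVE_TO_SLOT1 + PLX_DOWNSTREAM       # 8 native + 32 switched = 40 lanes
print(slots, total)                            # [8, 16, 8, 8] 40
```

The 40-lane total is exactly 8 more than a conventional layout that feeds the whole x16 into the PLX (0 native + 32 switched = 32 lanes), which is the point the paragraph above makes.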
How about some T-topology that you can actually see?
Circled in red are the points at which the "T" occurs. T-topology is a fancy name for a layout that gives both DIMMs on each channel their own connection to the channel's main trace. The alternative is a daisy chain, where the channel is connected to the first DIMM and the second DIMM is connected to the first. Each method has its benefits, and the hard part of T-topology is making sure the trace routing is done exactly right. Thus we have the Maximus V Extreme: users report that it has better memory OC than the Maximus V Gene and the Maximus V Formula, and that is because its memory trace routing has been further optimized, as it was most likely developed later. For your reference, you will most likely never see a perfect "T"; you are more likely to see a screwed-up "Y" or three fourths of a swastika. If you want to see whether your current board employs this technique, turn it over and you might see some of the traces, though most of them run through the inner layers of the PCB or along the top, in which case you cannot see them. Done correctly, as on the Maximus V Extreme, it can provide the same level of two-DIMM OC as previous boards while also providing a level of 4-DIMM OC that daisy-chain boards just cannot produce.
This PEX8608 provides many extra PCI-E 2.0 lanes for devices such as the USB 3.0 and SATA6G controllers.
Here is a list of all extra devices and how many lanes they require:
| IC, function, and count | PCI-E lanes |
|---|---|
| 2 x ASM1061, providing 4 SATA6G ports | 2x |
| 2 x ASM1042, providing 4 USB 3.0 ports | 2x |
| DSL3310, providing Thunderbolt | 2x |
| PCI-E 2.0 x4 slot | 4x |
| mPCIe header, feeding the PCI-E add-in dongle with WiFi/BT | 1x |
| Total PCI-E lane count | 12x |
The native PCH is x4 lanes short, so ASUS uses the PEX8608, an 8-lane, 8-port switch. That means there are 8 total PCI-E connections, including those used to connect the PCH to the PEX8608. By simple math and deduction, we can conclude that the PCH provides 6 native lanes to various devices and 2 lanes to the PEX8608, which then outputs 6 lanes. That gives us 6+6, and thus we have 12 lanes. You can run all the PCI-E linked devices without issue. A betting man would probably wager that the Thunderbolt controller is connected directly to the PCH, as its bandwidth might be saturated at some point and clog the PEX chip.
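The deduction above can be written out explicitly. A sketch assuming the split described, with the PEX8608 spending 2 of its 8 lanes on the uplink:

```python
# PCH lane accounting behind the PEX8608 deduction above (assumed 2-lane uplink).
PCH_LANES = 8   # native PCI-E 2.0 lanes on the Z77 PCH
PEX_TOTAL = 8   # PEX8608 is an 8-lane switch
UPLINK = 2      # lanes the PCH spends connecting to the PEX8608

pch_direct = PCH_LANES - UPLINK       # 6 lanes straight to devices
pex_downstream = PEX_TOTAL - UPLINK   # 6 lanes out of the switch
total = pch_direct + pex_downstream   # 6 + 6 = 12 lanes for the table above
print(pch_direct, pex_downstream, total)  # 6 6 12
```

Note that the downstream devices share the 2-lane uplink's bandwidth, which is exactly why hanging the Thunderbolt controller directly off the PCH would be the sensible choice.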