Overclocking guru Der8auer calls out Nvidia over the “non-existent safety factor” of its 16-pin power design
In a new video, Roman “der8auer” Hartung, the world-renowned overclocker and PC cooling specialist, has called out Nvidia over the “non-existent safety factor” of its 16-pin GPU power cables. This week has already seen reports of several melted cables (1, 2) on RTX 50 series GPUs. Furthermore, we uncovered problems with our own 16-pin GPU power cables when testing ASUS’ ROG Astral RTX 5090.
We recommend watching der8auer’s video in full. The long and short of it is that Nvidia’s 16-pin 12VHPWR/12V-2×6 power standard has a tiny safety margin, and Nvidia has compounded the problem by designing products with inadequate safety features.
16-pin 12VHPWR and 12V-2×6 power cables have a safety factor of 1.1. These cables are rated for 600 watts of power delivery and are designed to handle up to 660W. That’s a low safety margin, especially compared with older 8-pin PCIe power connectors, which have a safety factor of 1.92. In other words, those older cables could handle almost double their rated loads without any issues.
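As a rough sanity check, the arithmetic behind those safety factors can be sketched as below. The 8-pin capacity figure (three 12V pins at roughly 8A each) is the commonly cited estimate, not an official specification:

```python
# Rough sanity check of the safety factors discussed above.
# Figures are commonly cited estimates, not official spec-sheet values.

def safety_factor(max_capacity_w: float, rated_w: float) -> float:
    """Ratio of what a connector can physically carry to its rated load."""
    return max_capacity_w / rated_w

# 16-pin 12VHPWR / 12V-2x6: rated for 600 W, designed to handle up to 660 W
sf_16pin = safety_factor(660, 600)

# 8-pin PCIe: rated for 150 W; three 12 V pins at ~8 A each -> ~288 W capacity
sf_8pin = safety_factor(3 * 8 * 12, 150)

print(f"16-pin safety factor: {sf_16pin:.2f}")  # 1.10
print(f"8-pin safety factor:  {sf_8pin:.2f}")   # 1.92
```

This is where the "almost double" comparison comes from: 288W of capacity against a 150W rating, versus only 660W against 600W.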
A poor safety margin with no effort to ensure power balancing
According to several analysts, including der8auer, Nvidia has made matters worse by making no effort to ensure that power is balanced between the six 12V pins of its 16-pin power connectors. This means that most of a cable’s 600W load could flow through a minority of its voltage wires. This is incredibly dangerous, especially when combined with the cable’s low safety margin.
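A quick worked example shows why imbalance matters. The two-pin worst case below is illustrative, not a measured scenario:

```python
# Illustration of why load imbalance is dangerous: 600 W at 12 V is 50 A.
# Spread evenly over six 12 V pins, each pin carries a modest current, but
# if most of the load shifts onto two pins, per-pin current triples or worse.

TOTAL_WATTS = 600
VOLTAGE = 12.0
total_amps = TOTAL_WATTS / VOLTAGE  # 50 A in total

balanced_per_pin = total_amps / 6    # ~8.33 A per pin, evenly shared
worst_case_per_pin = total_amps / 2  # 25 A per pin if only two pins carry it

print(f"Balanced:   {balanced_per_pin:.2f} A per pin")   # 8.33
print(f"Imbalanced: {worst_case_per_pin:.2f} A per pin")  # 25.00
```

25A on a single pin is far beyond what these small connector terminals are designed to carry continuously, which is how wires and connectors end up melting.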
der8auer noted that Nvidia’s first 16-pin-powered GPU, the RTX 3090 Ti, didn’t suffer from the same issues as its newer RTX 4090 and RTX 5090 GPUs. That GPU treated its 16-pin power connector as three traditional 8-pin power connectors, so the power load was balanced between all wires on 12VHPWR/12V-2×6 cables. This prevented power distribution from becoming an issue on these GPUs.
… with the 4090 we saw burning connectors, we had all the drama, and then instead of learning from the entire situation [Nvidia] just made things worse. Instead of fixing anything, instead of at least splitting up the monitoring by three, no, we increased the power draw by 28% compared to the 4090, and that is something I just can’t understand.
If you’re an engineer and you look at this topic as a whole, you can’t tell me that you think this is a good idea. You just can’t tell me that this is fine to do, especially with the non-existent safety factor.
My case from last week was to clarify that these technical issues exist. It was to highlight Ivan’s case and to show that what he was experiencing might be more widespread, that it is technically possible, and that it should not be technically possible.
It would be easy for Nvidia to fix. If they went back to the RTX 3090 Ti kind of power delivery design, we would not be here.
… or should the design just be that you plug it in, it works, and there are safety features that prevent this from happening in the first place? You know that the safety factor of 1.1 is already a joke, and having a safety factor of 1.1 without any kind of real measuring is an even bigger joke. You know, Nvidia, if you were to measure every single current pin on the cable, if you would measure all six 12V pins and check that everything is working fine, checking it by software, whatever kind of safety measures you would apply with load balancing, then you could maybe argue that the safety factor of 1.1 is enough, because you’re double-checking everything. But not checking everything and having a non-existent safety margin of 1.1 is just not good, it’s not sufficient, and every engineer will tell you that this is not a good plan. – der8auer
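The per-group monitoring der8auer alludes to (splitting the six 12V pins into three monitored pairs, as the RTX 3090 Ti effectively did) could be sketched as follows. This is a hypothetical illustration of the concept, not Nvidia's actual firmware or circuitry, and the limit values are assumptions:

```python
# Hypothetical sketch of per-group current monitoring: treat the six 12 V
# pins as three pairs, sum each pair's measured current, and flag any
# group that exceeds its fair share of the total (plus a small tolerance).

TOTAL_AMPS = 600 / 12                      # 50 A at full 600 W load
PER_GROUP_LIMIT_A = TOTAL_AMPS / 3 * 1.1   # each pair's share, +10% slack

def check_balance(pin_currents_a: list) -> list:
    """pin_currents_a: measured current (A) on each of the six 12 V pins.
    Returns one OK/not-OK flag per pair of pins."""
    groups = [pin_currents_a[i] + pin_currents_a[i + 1] for i in range(0, 6, 2)]
    return [group <= PER_GROUP_LIMIT_A for group in groups]

# Balanced load: every pin near 8.3 A, so all three groups pass
print(check_balance([8.3] * 6))                      # [True, True, True]

# Two pins carrying almost everything: the overloaded group is flagged
print(check_balance([24, 24, 0.5, 0.5, 0.5, 0.5]))   # [False, True, True]
```

A GPU with this kind of telemetry could throttle or shut down before a badly seated or imbalanced cable overheats, which is the safety net der8auer argues a 1.1 safety factor requires.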
There’s a growing consensus that 16-pin 12VHPWR/12V-2×6 power cables are simply not good enough for consumer use
der8auer isn’t the only one calling out issues with the 12VHPWR/12V-2×6 standard. A recent video from the overclocker “buildzoid” at “Actually Hardcore Overclocking” discusses “How Nvidia made the 12VHPWR connector even worse.” Additionally, a reported Intel engineer has taken to Reddit to deliver “an electrical engineer’s take on 12VHPWR and Nvidia’s FE board design.”
Simply put, Nvidia has a few options. They can ensure that high-power GPUs have power-balancing features so that PSU cables aren’t overloaded. Alternatively, they can go back to the drawing board and design a power cable with a large enough safety factor to handle high levels of load imbalance. A final option is to lower the 16-pin power cable’s maximum load rating to give it a higher safety factor, though this would mean that future 600W GPUs would need to use two power connectors.
The 16-pin GPU power standard has a bad reputation, and rightly so. While the move from 12VHPWR to 12V-2×6 has removed “user error” from the list of things that can cause GPU cables to melt, it has confirmed that the 16-pin GPU power standard’s problems are deeper than that. Power distribution across these cables can be a problem. Add in a small safety factor, and you have a recipe for disaster.
You can join the discussion on the growing complaints against the 16-pin GPU power standard on the OC3D Forums.