Alright, below is based loosely on the format that michael999 had started in his thread, but with lots more data and info.
*** Please, if you find any info related to this thread or more accurate wattage readings for cards please post the link in here or PM me so that I can update the list. ***
Right, on with the show!
The Truth About Graphics Power Requirements V2
Ok. There is much confusion as to how much power you actually need to run the latest uber card, and the answer is not as simple as first thought. It turns out older cards can use more power than newer ones. Anyone broadly stating that Nvidia or ATI cards suck more juice doesn't know what they're talking about. It depends on quite a few factors.
These factors include the following:
(In general order of which affects peak power the most, down to the least)
- Die size/chip architecture/fabrication process (110nm/90nm/65nm/55nm etc) - A smaller process equals a smaller die, thus less power needed. For example look at the 7900 GTX: it's a 196 mm² die while the X1900XTX is 352 mm², both are 90nm and clocked almost exactly the same, but the 7900 GTX draws 36W less. Another example: the GTX 280 & 285 have the same transistor count and similar clocks, but the GTX 285 draws some 28W less. HERE is a graph of how fabrication process sizes can affect power consumption.
- Software - Different applications or games can influence how much power a card draws. Different visual effects, shader levels, scene complexity and rendering engines all have an effect on power. A good example is FurMark vs 3DMark, where it is clearly shown how FurMark draws substantially more power than a game (3DMark) typically might. Also, if a game is easy on the CPU then the card can run at its full potential (both performance and power draw). See this article (translated to Google English) which shows the difference between cards in various games and benchmarks.
- Board layout/design - If a 3rd party manufacturer designs their own PCB layout for the video card they can often improve efficiency, through better quality parts or a better design. For example, a HD4850 1GB card with a 3rd party designed board consumes 7.6W & 19.2W less than the reference 512MB design at idle & in 3D modes respectively, even though its core is clocked slightly faster and it has twice the frame buffer size.
- Core/Mem clock speeds - Higher clocks equal higher power consumption. (eg look at the difference between the 7900 GTX & GT: both have the same GPU and memory, but the core and memory are clocked differently, producing a 36W difference. Similar story with the X1900XT/XTX.)
- Amount and type of onboard RAM - Greater onboard RAM capacity means greater power consumption. Also, newer GDDR typically draws less power: GDDR > GDDR2 > GDDR3 > GDDR4 > GDDR5. (eg look at the difference between the X1900XT & X1900XT 256MB model: same in every way except capacity, and GDDR3 vs GDDR4 respectively, which results in a 10W drop)
- Number of monitors attached to the card - This only affects idle power consumption and appears to be a function of what minimum clock speeds the drivers can keep the card at while driving the 2+ monitors. It can more than double idle consumption. Example HERE.
GFX Card Comparison Table
These graphs are an attempt to show GFX card power consumption, not what the recommended PSU requirements are, but rather the actual draw of the card. The graphs show peak 3D (unlikely under normal use), typical 3D, 2D and idle power draw where possible.
All readings have been found on the web; you can find where I sourced the data for a card by looking through the posts I make in this thread when I post updates. If you want URLs for a specific card, drop me a PM or post in this thread and I'll gladly send you the URLs I sourced its data from.
Idle consumption - is typically taken in windows at the desktop with nothing happening on screen.
2D consumption - is any load put on a card that's not 3D or GPGPU based tasks. This may include video playback or simply moving windows around on the screen really fast.
Typical 3D consumption - is the typical max 3D draw of your card whilst running a regular game, 3D app or possibly even GPGPU task.
Peak 3D consumption - should be considered an unlikely and rare case for a GPU. Whilst the card can draw this amount of power if needed, it's only under extreme circumstances (like FurMark benches).
Each card could have between 1 and 8 data sources, each with varying amounts of data and perceived accuracy. As such I've devised an arbitrary "data quality" rating which is shown in brackets next to a card's name in the graph pictures. This quality rating helps show how much data I have for a card and its perceived quality. So the more data there is, the higher the quality rating. Extra points are given if the data shows readings close together (thus seeming precise) and points are taken away if they are spread out (less precise). Readings from sites like Xbitlabs that take very accurate measurements get more bonus points due to their known accuracy.
Readings for a card that has a lower 'data quality' rating might (though it's unlikely) still be accurate, just perhaps not as precise. Conversely, cards with high 'data quality' ratings will be seemingly precise but might still not be accurate (though they more likely are). More on that here. This rating just helps show how accurate and/or precise the data I have is. Anything around 10 or higher is starting to be good.
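To give a feel for how such a rating could work, here's a rough Python sketch. The weights (2 points per source, bonuses for trusted sites, a penalty for scattered readings) are purely illustrative assumptions, not the actual numbers behind my graphs:

```python
def data_quality(readings_w, trusted_sources=0):
    """Toy 'data quality' score: more sources and a tighter spread score
    higher, trusted at-the-card measurements earn a bonus.

    readings_w      -- list of peak-3D wattage readings from different sites
    trusted_sources -- how many readings came from high-accuracy sites
                       (Xbitlabs-style at-the-card measurements)
    All weights below are invented for illustration.
    """
    n = len(readings_w)
    if n == 0:
        return 0.0
    score = 2.0 * n                     # base: 2 points per data source
    score += 3.0 * trusted_sources      # bonus for known-accurate sites
    if n > 1:
        mean = sum(readings_w) / n
        spread = (max(readings_w) - min(readings_w)) / mean
        score -= 10.0 * spread          # penalise scattered (imprecise) data
    return round(score, 1)

# Three tightly-grouped readings, two from accurate sites:
print(data_quality([150, 152, 149], trusted_sources=2))  # 11.8
```

With three agreeing sources the score lands around the "starting to be good" mark of 10, which matches the intent described above.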
Last thing of note. I've tried to normalise all the readings found at the various websites against cards that have been properly tested for power draw (ie measured at the card itself: PEG slot, PEG power connectors etc - what Xbitlabs mostly does), rather than against a generic total system score. This should hopefully provide more accurate results for cards whose readings are otherwise of not much use. I look mostly at the differences between known correct readings and the target setup: if I find a few sites that agree a setup uses a certain amount more/less power than a currently known setup, then I plug it in. It's not totally accurate, but it's the best I can do with the info available; there's only around a ±15% maximum difference compared to the correct readings, usually <±10%.
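The normalisation idea above boils down to simple delta arithmetic. A sketch of it, assuming we discount the wall-socket delta by PSU efficiency (the 0.85 default is my assumption, not a figure from any review):

```python
def estimate_card_draw(known_card_w, system_known_w, system_target_w,
                       psu_efficiency=0.85):
    """Estimate a target card's at-the-card draw from total-system readings.

    known_card_w    -- properly measured draw of a reference card (W)
    system_known_w  -- the same site's total system draw with that card (W)
    system_target_w -- that site's total system draw with the target card (W)

    The wall-socket difference between the two setups overstates the
    at-the-card difference by the PSU's losses, so we scale it down by an
    assumed efficiency before adding it to the known card's draw.
    """
    delta_at_wall = system_target_w - system_known_w
    return known_card_w + delta_at_wall * psu_efficiency

# Reference card measured at 110 W; site reports 280 W vs 330 W total system:
print(estimate_card_draw(110, 280, 330))  # ~152.5 W estimated for the target
```

It's crude, but it's the same "difference against a known setup" reasoning described above, just written down.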
Where I can't find proper readings from a website I've utilised http://www.extreme.o...ucalculator.jsp where possible; its figures are likely to be above true actual wattages but are still quite accurate compared to TDP ratings from vendors.
Without further ado, the graphs...
Charts sorted by brand/generation
Charts sorted by typical 3D
Charts sorted by idle
Charts sorted by card generation
Or simply scroll down to the second post to see them; the URLs are also listed there.
You may also download this CSV file, for easy searching etc.
Just looking at single card configs shows there's been a gradual increase in power draw over the generations. It appears to be mainly due to more single card dual GPU options coming out, though single card single GPU models seem to be gradually drawing more with each new generation too. However, if you look at the mid-low end cards, they're staying quite easily under 50W across most card generations. This is mainly due to smaller process technologies and more efficient parts on the PCB with each new generation, so for a similar power envelope the manufacturers can produce faster cards while charging the same. This is probably done mostly to cater for OEM requirements.
Multi card configs however show quite a marked increase in consumption. One has to wonder when they'll try to cap power consumption for these top end configs. There's only so much you can draw from your power socket!
Getting power to them cards!
For PCI, the PCI Local Bus spec rev 2.3 (section 4.4.1) "requires that an add-in card must limit its total power consumption to 25 watts (from all power rails)."
"Power Supply Rail Tolerances:
Power Rail Add-in Cards (Short and Long)
3.3 V ±0.3 V 7.6 A max. (system dependent)
5 V ± 5 % 5 A max. (system dependent)
12 V ±5% 500 mA max.
-12 V ±10% 100 mA max."
Info found here http://www.opencores.../2006/02/000435
The AGP 3.0 standard (AGP 8x) can only deliver a maximum of 41.8W from the slot (6A from 3.3V + 2A from 5V + 1A from 12V = 41.8W, and an additional 1.24W can come from the 3.3V auxiliary at 0.375A). AGP cards that need extra power have a molex power connector (9800 Pro for example). By adding the four-pin Molex connection, manufacturers extended the life of AGP cards, as it supplies up to 6.5A on each rail, or 110.5W (12V + 5V = 17V; 17V x 6.5A = 110.5W). This makes a total of 152.3W available to AGP cards with a single molex connector.
Info found here http://www.tomshardw...ners/page5.html
The PCI-Express 1.0 standard allows up to 75W to be delivered to a card through the PCI-E slot itself, without the need for an additional power connector. But virtually all mid to high end graphics cards these days require more than this, so they take additional power from Molex, 6-pin or 8-pin PCI-E power connectors - a dead giveaway that a PCI-E card needs more than 75W to operate stably (7800GT for example). Each 6-pin PCI-E connector (two to three +12V wires and three grounds) can provide another 75W. So a PCI-E 1.0 GFX card with a 6-pin power connector attached can draw up to 150W max, with a dual card setup able to draw up to 300W max between them.
PCI-Express 2.0 offers a doubling of power to discrete cards, up to 150W through the motherboard itself, though few if any motherboard manufacturers have implemented this, and not many graphics card makers rely on it, so for backwards compatibility they assume a PCI-E 1.0 spec slot.
The 2.0 spec also introduced the 8-pin PCI-E power connector, which has three +12V wires and five ground wires and is rated to deliver 150W to the card. Along with a PCI-E 1.0 slot providing an extra 75W, such a card could draw a total of 225W. This new connector is not to be confused with the EPS 8-pin motherboard power connector already shipping with most power supplies: it's keyed differently and the polarity is reversed, so you really don't want to try and force the wrong connector in.
Current high end cards will often have both a 6-pin and an 8-pin power input so the card can use the maximum 300W (75W from mobo + 75W from 6-pin + 150W from 8-pin) that the PCI-E spec allows an expansion card to utilise.
Some rare enthusiast limited edition cards go 'out of spec' by using more than 300W to enable the card's full rendering potential. Eg the Asus Mars has two 8-pin power sockets for a 375W (75W + 150W + 150W) total, while the Asus Ares has two 8-pin and one 6-pin connectors for a total of 450W (75W + 150W + 150W + 75W) possible!
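All those connector budgets are just sums of fixed per-input limits, so you can tally any card's maximum in-spec draw with a few lines of Python (figures taken from the sections above; assumes a PCI-E 1.0 slot):

```python
# Per-input power budgets (W) from the PCI-E sections above.
CONNECTOR_W = {"slot": 75, "6pin": 75, "8pin": 150}

def max_board_power(*connectors):
    """Total the spec power budget for a card's power inputs."""
    return sum(CONNECTOR_W[c] for c in connectors)

# A typical high end card with one 6-pin and one 8-pin:
print(max_board_power("slot", "6pin", "8pin"))          # 300
# The Asus Mars (two 8-pins):
print(max_board_power("slot", "8pin", "8pin"))          # 375
# The Asus Ares (two 8-pins plus a 6-pin):
print(max_board_power("slot", "6pin", "8pin", "8pin"))  # 450
```

Handy as a sanity check when you're eyeing a card's photos and counting its power sockets.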
You can find out more on PC internal power connectors HERE
Many of you must be wondering how much power a card draws once overclocked. Power consumption does not change a great deal when overclocking (just look at the many OC results in the graphs); a rough guide is to say power consumption increases at a slightly lesser rate than the overclock. So a ~6% overclock will cause a ~4% increase in power, and so on. But use this only as a rough guide, and do not count on it if you are in a tight power situation such as a small form factor PC or a very small PSU. This may also vary with newer architectures, but should remain fairly consistent.
Eg on the HD4870X2, increasing the core clock by 50MHz causes an extra 11W at peak power draw: a 6.6% clock increase for a 4.16% increase in power draw.
On the GTX 285, a 54MHz clock increase caused an 11W increase in power draw: 8.3% increased clocks for a 7.3% power increase.
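The rule of thumb above can be written down as a quick estimator. The `scale` factor is my assumption for "power rises at a slightly lesser rate than clocks" - the two examples above land somewhere around 0.6-0.9, so treat any single value as a ballpark only:

```python
def oc_power_estimate(base_power_w, base_clock_mhz, oc_clock_mhz, scale=0.8):
    """Rough overclocked power estimate (stock voltage only!).

    Rule of thumb: power rises at a slightly lesser rate than the clock.
    'scale' is the fraction of the relative clock increase that shows up
    as a power increase -- an assumed ~0.8 here, not a measured constant.
    """
    clock_ratio = oc_clock_mhz / base_clock_mhz - 1.0
    return base_power_w * (1.0 + scale * clock_ratio)

# A hypothetical 200 W card pushed from 750 to 800 MHz (a 6.7% overclock):
print(oc_power_estimate(200, 750, 800, scale=0.75))  # ~210 W
```

Again, only a rough guide for stock-voltage overclocks - volt modding (below) is a different beast entirely.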
With volt modding, power draw gets insane quite quickly. See the examples below for what I mean; needless to say, if you're doing any form of volt modding make sure you have a beefy PSU that's up to the task.
Eg on the 9800GTX, increasing the core clock by 200MHz and the voltage by 0.308V caused peak power draw to increase by 75W: a 68.9% increase in peak power draw for 29.6% higher clocks.
On the HD3870, increasing the core clock by 224MHz and the voltage by 0.38V causes an extra 100W of power draw: a 123.5% increase in power draw for 28.9% higher clocks.
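Why does voltage hurt so much more than clocks? The classic CMOS relation says dynamic power scales with frequency times voltage squared (P ∝ f·V²), so a voltage bump gets counted twice. A sketch of that check against the HD3870 example - note the 1.30V stock voltage is my assumption, since the post only gives the +0.38V delta, and only the dynamic portion of card power truly follows this relation:

```python
def voltmod_power_ratio(f0_mhz, f1_mhz, v0, v1):
    """Classic CMOS dynamic-power scaling: P ~ f * V^2.

    Returns the predicted power multiplier. Only the dynamic part of a
    card's draw follows this, so treat it as a rough guide rather than
    an exact prediction.
    """
    return (f1_mhz / f0_mhz) * (v1 / v0) ** 2

# HD3870 example above: 775 -> 999 MHz core; assuming ~1.30 V stock,
# the +0.38 V mod gives ~1.68 V.
print(voltmod_power_ratio(775, 999, 1.30, 1.68))  # ~2.15x
```

That predicts roughly a +115% power increase, which lands in the same ballpark as the measured +123.5% above - close enough to show why volt mods blow the power budget so fast.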
Active Power Management
The latest generation of cards (the GTX 500 series and HD 6900 series) are the first from Nvidia and AMD respectively that actively limit total card power usage to a predefined level.
Nvidia, for example, limits the GTX 580 through hardware by measuring the current drawn into the card (though I believe this is activated by the drivers in particular applications like FurMark) to cap the card's draw at 300W. Without this feature the card will easily draw in excess of 300W.
AMD's route is less accurate but isn't application specific, and is thus a bit more elegant a solution. They have tested their GPUs to find how much power each part of the chip uses, and from this they've created an algorithm that estimates power draw based on the usage of each part of the GPU. Once the limit is reached (eg 250W for the HD6970), the drivers downclock the core to reduce power draw. They do however provide a slider in the drivers to allow +/-20% on the base value; for the 250W limit of the HD6970 this means you can alter the power limit anywhere from 200W up to 300W. There's also talk that 3rd party card makers can alter this 20% figure to anything they like. Time will tell.
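To make the AMD approach concrete, here's a toy sketch of estimate-then-downclock logic. The per-unit wattages, activity model and 5% clock steps are all invented for illustration - AMD's real algorithm and coefficients aren't public:

```python
# Invented per-unit power figures (W) at 100% clock and 100% activity.
UNIT_W_AT_FULL_CLOCK = {"shaders": 150.0, "memory": 60.0, "rops": 40.0}

def throttle_clock(activity, limit_w, clock_pct=100, step=5):
    """Step the core clock down until estimated power fits under the cap.

    activity -- dict of unit name -> utilisation (0.0 to 1.0), the kind of
                per-block usage AMD's algorithm is described as tracking.
    limit_w  -- the power cap (eg 250 for a stock HD6970).
    Returns the resulting clock as a percentage of stock.
    """
    def estimate(pct):
        # Crude model: power scales with clock and per-unit activity.
        return (pct / 100.0) * sum(UNIT_W_AT_FULL_CLOCK[u] * a
                                   for u, a in activity.items())

    while estimate(clock_pct) > limit_w and clock_pct > step:
        clock_pct -= step
    return clock_pct

# A FurMark-like load hammering every unit, under a 225 W cap:
full_load = {"shaders": 1.0, "memory": 1.0, "rops": 1.0}
print(throttle_clock(full_load, 225))  # 90 (clock cut to 90% of stock)
```

The ±20% slider described above just moves `limit_w` (200-300W for the HD6970's 250W base) - the throttling logic itself stays the same.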
By SLI/CF's very nature you need to cater for x number of cards, so you can quite easily multiply the required wattage by the number of cards you'll have in your setup. But as you can see in the graphs, this is somewhat dependent on one factor: scaling. At the moment multi card configs depend heavily on driver optimisations to utilise all the resources at their disposal. This is very difficult to do, so we don't see linear improvements in performance and power draw as we add cards. Eg GTX285, GTX285 SLI and GTX285 3x SLI.
Things like Lucid's Hydra technology could help bring linear performance (and thus power draw), but if or until that tech takes off, scaling will remain less than linear for the foreseeable future.
So when looking for a power supply for these configurations, make sure it can cater for a bit more (50W at least, depending on card choice) than x times a single card's requirements (plus the other PC components). This will ensure stability and allow for some overclocking headroom (which I'm sure you'll be using with these setups ;D)
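That sizing advice is easy to turn into a back-of-envelope calculator. The 200W "rest of system" default and 1.2x headroom factor are my assumptions for illustration; the 50W margin comes from the advice above:

```python
def recommended_psu_w(card_typical_w, n_cards, rest_of_system_w=200,
                      margin_w=50, headroom=1.2):
    """Back-of-envelope PSU sizing for an SLI/CF build.

    Per the advice above: x times the single card's typical draw, plus the
    rest of the system, plus at least ~50 W margin, then some headroom for
    overclocking. rest_of_system_w and the 1.2x headroom are assumptions.
    """
    load_w = n_cards * card_typical_w + rest_of_system_w + margin_w
    return int(round(load_w * headroom))

# Two hypothetical 160 W cards in SLI/CF:
print(recommended_psu_w(160, 2))  # 684 -> shop for a quality ~700 W unit
```

Remember this assumes the PSU can actually deliver its rating on the 12V rails - see the quality caveats further down.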
This article is a little old but still relevant today: what happens if your power supply isn't powerful enough for your setup? http://www.extremete...,1932958,00.asp A must read if you're going to build a new, top end SLI/CF PC. So make sure your PSU is designed correctly and can supply the correct amount of amps on the required rails. See the links at the end of this post for certified PSUs.
If you'd like to find out how much power (in watts) your current (pun!) PC uses at full load, the eXtreme PSU Calculator found here http://www.extreme.o...ucalculator.jsp is a good tool to help you find that out.
So now for the recommended PSU specs. For starters, it's recommended that you get a quality power supply; yum-cha brands often don't perform to their rated wattage, and their 500W PSUs may only be able to reliably output 400-450W. Antec, Silverstone, Enermax and OCZ are generally quite reliable PSUs, but there are many other brands that will do the job. There are instances of people running a 6800 Ultra off a 250W PSU, but they knew what they were doing and took special care in their choice of hardware and monitoring, so it's not recommended. For cards drawing under 200W I suggest a 600W PSU, and for cards using over 200W invest in a 750W+ PSU (while also using CPUs/configs from around the same generation). For SLI/CF with any card from the GeForce 200 or Radeon 4000 series, it's recommended you power it with no less than a good quality 600W PSU. Of course all this depends on your CPU and other components, which have to share the available power - that's another section on its own!
These recommendations take into account the usual configuration a modern computer uses, such as multiple hard drives, DVD burners, fans, and many expansion cards and other add-ons. So with these recommendations, do not be afraid to run multiple devices.
See here http://www.slizone.c..._build_psu.html and here http://ati.amd.com/t...ldyourown2.html for what PSUs nVidia and ATI certify for use with their multicard configurations. Generally something with 34 amps on the 12 volt rails. And here http://ati.amd.com/p...ersupplies.html for ATI single card PSU recommendations.
You can also check out Atomic's Most Recommended PSUs thread for an up to date list of recommended PSUs.
Edited by mark84, 13 May 2012 - 12:22 AM.