One last thing about the RTX 3090 power management.

  33,494 views

Actually Hardcore Overclocking


1 day ago

Short version of my thoughts on the RTX 3090 vs New World situation: buildzoid.blog...
My Patreon: / buildzoid
Teespring: teespring.com/...
The Twitch: / buildzoid
The Facebook: / actuallyhardcoreovercl...
#RTX3090 #NewWorld

Comments: 370
@VargVikernes1488 3 years ago
This WILL NOT be the last thing about burnt 3090s.
@guycxz 3 years ago
What do you mean? And why are you playing New World in a church?
@NyangisKhan 3 years ago
The 3090 is cursed af. Glad I didn't snatch it when I had the chance and instead went for a 3080.
@VargVikernes1488 3 years ago
@@guycxz LMAO
@agenericaccount3935 3 years ago
⛪️ 🔥
@nexum9977 3 years ago
burnt 3090 or church by 3090? xD
@xvpower 3 years ago
I did find it funny that people were calling New World "unoptimized". To me it seemed like the exact opposite: it doesn't leave many transistors unused.
@cacheman 3 years ago
Using all the transistors just means a lot of resources are used, not that they're put to good use. Something being "optimized" carries some implication of resources being used well, especially in a GPU, where a LOT of parallel work may simply be thrown away if a shader is poorly written.
@defeqel6537 3 years ago
@@cacheman I've noted the same with a lot of Battlefield games maxing out the CPU utilization. That doesn't necessarily mean it is well optimized for multi-core, just that it uses a lot of multi-core resources.
@FrantisekPicifuk 3 years ago
Not to mention that there is only so much you can do when it comes to optimization for Nvidia cards. Developers have noted that Nvidia's drivers and their integration are like a black box, so any optimization is much harder than integrating drivers for Radeon cards, which tend to be more open. So this might actually be on Nvidia and some internal fuckery done at a deep level inside their drivers.
@Dhaydon75 3 years ago
The game kinda feels unoptimized, but not in how it uses the hardware; it just scales really badly with many player models/cities and some other effects.
@jabroni6199 3 years ago
I'm gonna tell my power company they shouldn't charge me so much once I start using every available amp coming into my home, because my power usage is "optimized".
@10ghznetburst 3 years ago
IIRC AMD was working on a per-WGP clock management system, exactly to deal with stuff like this. The reason you get big fluctuations in power is that you get different utilisation factors on the SIMD units. E.g. we have a GPU that issues SIMD32 instructions: one instruction operates on up to 32 points of data, but it's unlikely that in an actual scene you will always have 32 points of data to execute one instruction on. You may, for the sake of example, only use around 20 of the 32 execution units on average, but then if you have small portions of your workload where most of the GPU suddenly has close to 32/32 utilisation of the execution units, you end up drawing a lot more power than before. Per-WGP or per-SM clock management would require you to detect at a pretty low level what the utilisation factor is, and then preemptively choose not to issue instructions every clock cycle when you have many high-utilisation instructions back to back, or alternatively change the clock speed before the transient gets out of hand.
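A toy model of the utilisation argument in this comment (all numbers are made up for illustration, not measurements): treat board power as a baseline plus a cost per active SIMD lane, and compare a typical ~20/32-lane scene against a short burst of fully packed 32/32 waves.

```python
# Toy power model (hypothetical numbers): power scales with the number of
# active SIMD lanes per wave. A shader averaging ~20/32 lanes sets the
# steady-state draw; a burst of full 32/32 waves spikes well above it.

LANES = 32
IDLE_W = 150.0       # assumed baseline board power, watts
PER_LANE_W = 5.0     # assumed marginal watts per active lane

def board_power(active_lanes):
    """Crude model: baseline power plus a per-active-lane cost."""
    return IDLE_W + PER_LANE_W * active_lanes

steady = board_power(20)     # typical scene: ~20 of 32 lanes busy
burst = board_power(LANES)   # short stretch of fully packed waves
print(steady, burst)         # 250.0 310.0
```

Even with these invented numbers, the point carries: the spike comes from utilisation jumping to the architectural maximum, not from the clocks changing.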
@JethroRose 3 years ago
It's probably in both AMD's and Nvidia's interest to pursue this, as being able to selectively turn off parts of the card (like Ryzen does inside the CPU) means it can run cooler, draw less power, and thus opportunistically boost higher when required.
@lrmcatspaw1 3 years ago
VRMs: why are we here just to suffer? GPU CORE: Unlimited POWER!!!!! Also, Deal with it.
@Thirdeyestrappd 3 years ago
I heard it's been scientifically proven that if you zoom in on the human iris enough, you get the FurMark donut.
@SgtRock4445 3 years ago
Same with Uranus
@Thirdeyestrappd 3 years ago
@@SgtRock4445 I’ll have to buy a telescope this weekend 😂
@ActuallyHardcoreOverclocking 3 years ago
second
@aaronjessome1032 3 years ago
Dammit!
@Soviet_Elmo 3 years ago
Out of curiosity: Would you think comparing AMDs boost behaviour to Nvidias made for interesting content?
@tanishqbhaiji103 3 years ago
Yes but it wouldn’t be very easy to make that video.
@NyangisKhan 3 years ago
Buildzoid hates AMD atm because of them locking the maximum clock speeds. And his 6900 XT blew up, so I don't think he even *has* a card to test that with even if he wants to.
@raze4789 3 years ago
Too true. I got a 6600 XT and after a couple of weeks of normal use went to OC it. No dice. It won't move past 2600 MHz no matter what. So I just undervolted it and let it do its thing. Still not a bad card.
@moritzaufenanger2537 3 years ago
@@raze4789 2600?
@raze4789 3 years ago
@@moritzaufenanger2537 Yeah. It doesn't care what I set the core clock to. It'll just boost itself into the 2600s and that's it.
@VargVikernes1488 3 years ago
Chad spiciest donut vs Virgin pretty 3D visuals
@user-ro1cc8tz6d 3 years ago
21:44 Don't worry about that. In the future, games are going to be rendered with pure Electron, Rust, JavaScript and soy.
@andrewmcewan9145 3 years ago
My friend's 1070 would sometimes trip OCP on the power supply whenever you were loading something. Getting a bigger power supply solved the issue, but as you said, it proves that Nvidia ignores short bursts. Turing was the first Nvidia GPU to process a 32-bit integer operation simultaneously with a 32-bit floating-point operation. Ampere increases the superscalar width to 2x FP32 operations per clock, or 1x FP32 + 1x INT32 operation per clock. These additional FP32 operations per clock may lead to more gross violations of the power limit compared to previous generations, especially as FP32 probably uses more transistors than INT. A Pascal vs Turing vs Ampere comparison of power-spec violations in New World would be interesting to see, but not necessarily feasible. Although, as you said, this does make me concerned about using big Ampere long-term.
@Azerkeux 3 years ago
I wonder why the 16-series cards were so locked down. I have both an EVGA 1660 and a 1060; the 1660 doesn't let you increase the power limit at all in OC software and I've never seen it go over 125 W, whereas the 1060 lets you.
@commentaccount7880 3 years ago
"They especially don't last forever when you run them close to their limits" ... then I just slide over and turn down my power limit in Afterburner instantly lol
@toxy3580 3 years ago
My 980ti has been at max limits for 6 years now
@cdurkinz 3 years ago
Just a heads up, updates no longer take hours to do. That’s not a thing anymore.
@benjaminchung991 3 years ago
For consumers in the future - and ideally to try and drive manufacturer behavior - how hard would it be to include a look at the protections implemented in the VRM when you do board analyses on GPUs? What additional information would you need to conduct the analysis, beyond what's available from the PCB shots?
@10ghznetburst 3 years ago
You'd need to actually test the cards to see where they hit the limits or have board schematics to fully understand how they are configured. And anyway, I expect GPUs will start to implement more granular power management systems at a low level, to not issue instructions in a way that generates such transients, so it's likely not something that you'd be able to analyse at a board level.
@linnaea_lavia 3 years ago
Publicly available datasheet for the components, which is not a guarantee
@10ghznetburst 3 years ago
@@linnaea_lavia Aside from leaked Gigabyte ones good luck getting board schematics for the GPUs.
@linnaea_lavia 3 years ago
@@10ghznetburst If there's a full datasheet available for the power controller, one can guess where the monitoring components should be located, and the datasheet would specify how those components should be chosen. Problem is, nowadays even that's not a guarantee. Some manufacturers lock up their documentation and only publish a 2-page flyer (they insist on calling it a "brief datasheet", with which I very much disagree).
@squirrel6687 2 years ago
Fan cooled? Maybe the fans on full tilt are eating that last few percent of overhead.
@testynetesty 3 years ago
Wait, did Gigabyte really start using 60 A DrMOSes? Every 3080/Ti/90 that I've seen used either AL00 (AOZ5332, 50 A) or BLN0 (AOZ5311, which was 50 A and then updated to 55 A). If so, I wonder whether it's some sort of preventive fix.
@jazz9fr 3 years ago
Would this be as much of an issue on something like a 3090 FE which uses 70A smart powerstages and better controllers?
@M1nat0 2 years ago
Yes, because IIRC FE cards still use 50 A power stages; the 3090 Tis are the ones that use 70 A power stages.
@andytroo 3 years ago
There's probably an optimal spike frequency, where the weighting of the old spike has fallen out of the averaging time window and you can go for another millisecond at >100% power.
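A sketch of the rolling-window behaviour this comment describes (all numbers hypothetical, and the `allowed` helper is invented for illustration): a spike is permitted as long as the average draw over the last few samples stays under the advertised limit, so right after a quiet period a well-over-100% spike still "fits" inside the window.

```python
from collections import deque

# Rolling-window power limiter sketch (hypothetical numbers): the limiter
# only checks that the average over the last WINDOW samples stays under
# LIMIT_W, so a brief spike after a quiet period passes unhindered.

WINDOW = 10          # samples in the averaging window
LIMIT_W = 350.0      # assumed advertised power limit, watts

def allowed(history, next_draw):
    """True if adding next_draw keeps the windowed average under LIMIT_W."""
    samples = list(history)[-(WINDOW - 1):] + [next_draw]
    return sum(samples) / len(samples) <= LIMIT_W

quiet = deque([200.0] * 9, maxlen=WINDOW)       # nine quiet 200 W samples
spike_ok = allowed(quiet, 500.0)                # 500 W spike -> 230 W average
sustained = deque([500.0] * 9, maxlen=WINDOW)   # nine 500 W samples already
sustained_ok = allowed(sustained, 500.0)        # 500 W average -> over budget
print(spike_ok, sustained_ok)                   # True False
```

The same 500 W request is allowed after a quiet stretch and rejected when sustained, which is exactly why spike *timing* matters.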
@tarfeef_4268 3 years ago
posting here since alder lake is the new hype: can we get some rambling about the power delivery for DDR5 being on-stick now? maybe talk about the impact on PSUs and Motherboards now that from what I've heard, memory will run off of 5V, not 12V? I am not sure how much average PSUs are specced to handle of 5V, but I know in some cases that's not a huge number, and it could go up notably if DDR5 power consumption is higher, more so if boards/CPUs allow for higher density on consumer platforms (servers that support LRDIMMS, etc are already going to be specced for insane memory power draw)
@Mickulty 3 years ago
What I'm hearing is that RTX 3090s are future classics destined to be very rare. Truly the Lamborghini of GPUs.
@N0N0111 3 years ago
A lot of water-cooled models will survive the 3-year mark.
@andersjjensen 3 years ago
@@N0N0111 Water cooling doesn't protect your VRM from blowing up.
@ActuallyHardcoreOverclocking 3 years ago
@@andersjjensen It does. If you keep a VRM that's on the edge of its capabilities at 50C instead of 90C, it helps a lot.
@ThisIsAGoodUserNameToo 3 years ago
I was really hoping you'd show us the power usage under New World.
@Todd_Manus 2 years ago
You keep mentioning that no one will use all the "transistors" on an RTX 3090, and you keep going to games, but actually 3D rendering hammers a GPU just as much as FurMark. Take this for what it is worth, but I imagine most people who buy the RTX 3090 are not gaming; the return on investment over the RTX 3080 is minimal.
@michaelpascual2731 3 years ago
What about using higher-quality components in the power delivery system so it can handle the spikes? Is this even possible? Do better-quality components even exist?
@D3humaniz3d 3 years ago
If you have more VRM phases that can share and distribute the load, they are going to last longer, since the load will be spread out. If you have higher-quality components rated for higher voltages / power draw, of course they are going to last longer, by definition. That's why, if you intend to use a GPU for longer, you should basically pick whichever card has better power delivery components / a better design. A good example of this is the POSCAP/MLCC fiasco at the launch of these [3090, 3080] cards. Cards that only had MLCCs (like the Strix and TUF) did not have any stability issues whatsoever. Meanwhile, everyone else who went with 5 or more POSCAPs had stability issues, at least from what I remember. Why did they use POSCAPs? Cause it's cheaper, nerd.
@mmbr20 3 years ago
Appreciate your analysis, thank you. I actually run mine at 0.850 V / 1800 MHz. Would you say the longevity of the card will be increased?
@Cinnabuns2009 3 years ago
see my comment to the poster above yours
@riklaunim 3 years ago
Xonotic (a free, simple shooter), at least in Linux benchmarks, could cause quite high power draw, so it might also be usable for showcasing things.
3 years ago
Excellent video as usual!
@wten 3 years ago
A modification to furmark to intentionally create a bursty load would be helpful.
@billgaudette5524 3 years ago
If I run my EVGA 3080 FTW3 Ultra on the second vbios (105% power limit), and then set voltage and power to 100 and 105 each, the card will report over 400 watts drawn regularly in New World. I set it to 90/90 when I run the game just in case.
@Cinnabuns2009 3 years ago
I was messing around with my EVGA 3080 Ti FTW3 with power-limit and voltage limiting, and it will boost to 1950 MHz @ 0.9 V pretty much all the time and run happily there seemingly indefinitely at 67C. Whereas if I run the card at STOCK, it boosts to over 2 GHz very, very briefly, then gets to just over 80C and downclocks itself to 1800-1850 MHz, and then runs there pretty much full time at higher power usage and temperature. In other words, my benchmarks are higher by quite a bit if I set the power limit at 85%, or even with the power limit at 100% and the voltage curve capped at 0.9 V. Boost was taking the card up to over 1.1 V, and that is on the default BIOS. So if you're NOT undervolting, you're leaving performance on the table while also using excess power and then having to run your fans faster to cool it all down. Seems like Nvidia really was just pushing everything to the max this gen to lose less ground to AMD, and there will be (I'd bet, as Buildzoid also states) a spate of dead cards in a year or two, which Nvidia happily loves... oh, you need a new GPU? We have just the thing! Failure mode built in.
@KaNoMikoProductions 3 years ago
Buildzoid, is this as much a problem for the 3090 TUF? Since the entire shtick is that it's meant to be durable, does it have better VRM protection or any such?
@vyor8837 3 years ago
The Tuf parts have been garbage for an age
@KaNoMikoProductions 3 years ago
@@pcoverthink He did a breakdown of the 3080 TUF, which has the same PCB as the 3090, and he said something along the lines of it being the best 3080.
@Haos666 3 years ago
@Actually Hardcore Overclocking ... so is this why extremely high FPS causes coil whine on some PCBs? Hundreds or over a thousand ultra-short power spikes per second?
@peteraasa5267 3 years ago
320 W is what a 3080 should draw. I just got my EVGA FTW3 Ultra and it draws way over that if you don't tune it down, and the performance difference is like nothing, so I set it to 85% power, and now it draws 320 to 330 watts. I don't know why they set it so aggressively; I don't want to be in a sauna. Thanks for your input.
@lllllllllllillllllll 3 years ago
So running the VRM at high load, close to its limits, is reducing its lifespan or killing it? How does EVGA shipping out new cards resolve this, then (apart from them blaming the soldering or whatever it was)? Seems untenable to just keep replacing them because high loads keep killing the VRM. Is it maybe also a combination with the high temps the VRM and memory modules run at? If that's the case, then running a single- or even double-sided GPU waterblock would probably help. Or do you think we'll see aftermarket cards running beefier VRMs in future designs to avoid issues like these?
@guycxz 3 years ago
There probably aren't enough cards dying yet to make a more permanent solution economically viable. Still, New World is unlikely to be the only game that can push the cards this hard, and the VRMs will probably be short-lived regardless; hopefully enough cards die within warranty to make it economically viable to build a proper card, rather than hoping it lasts just until the warranty ends.
@UruguayOC 3 years ago
Read what i posted some minutes ago bro. All the Best, Sergio!
@muaries12 3 years ago
Between JayzTwoCents' experiments in software and BZ's experiments in hardware, I've learnt a lot about GPU power management.
@87Moonglow 3 years ago
Hmm, BZ actually knows what he is doing. Jay2Cents is just fun to watch, but I don't watch him for the technical know-how.
@tobydion3009 3 years ago
@@87Moonglow Exactly.
@ij6708 3 years ago
Jay is just for entertainment. Just recently he was using the Heaven DX11 benchmark to look for improvements from a memory OC.
@dreamcat4 3 years ago
So, getting to the point here: what you are basically saying is that maybe Nvidia should introduce a more general type of auto-downclocking feature in their future GPUs. You cited Intel's AVX downclocking as an example. However, maybe that is not so easy for Nvidia, because they don't have something simple like AVX instructions to look ahead for. But I get your meaning: don't go into such a restricted mode only when detecting the FurMark program, but instead do the downclocking based on the general load being requested. And don't totally gimp it, just reduce the clocks a bit further than usual to ensure a wide margin of safety, so that it stays well away from any chance of blowing up catastrophically... But after repeating that, it strikes me: shouldn't that already be incorporated in the existing firmware/software as one of the inputs to Nvidia's GPU Boost algorithm, which is what dynamically controls the clocks? This then goes back to the rumour that either a) Nvidia was to blame for their driver not setting GPU Boost correctly, or b) Nvidia thought their original/pre-existing GPU Boost settings were already safe, but then these new 3090s came along and maybe the actual hardware design and power delivery turned out different than expected. Or alternatively, just like you mention, maybe there are some bad, questionable components in the supply chain that nobody was aware of. The thing is, in retrospect it seems the companies involved did internally investigate (using New World as a test benchmark) and eventually found the culprit, but this information was never shared with the public. Perhaps, not wanting to be forced into some very expensive product recalls, it was cheaper to fix it in software, push out a new driver update, and get the developer to patch their game.
This would explain pretty adequately why we never really got the truth about it, because who wants to recall all those 3090s under current market conditions? It would be a pretty awful situation. Still, you have a great point about them dying eventually further down the line. None of these companies would give one cent to protect the product adequately into old age; they would rather they all blew up after 3-5 years, so that it's clearly out of warranty and everybody has to buy new cards all over again. What annoys me is that people want to do that anyhow, regardless of whether the old cards still work, because future cards will be so much faster; there is always high enough demand for the latest product. So it seems kinda mean in that respect: anybody who can realistically afford a brand-new card next time will certainly buy one, and those who cannot simply can't afford to keep buying cards all the time. So it's a policy that penalizes only the poor, not the wealthy. That is really where I take issue. It is also pretty bad for the environment for so many expensive products to end up as e-waste. sadface 😿
@petrofsko 3 years ago
Hi there. I live in Glasgow, Scotland, UK. I preordered a 3XS system from Scan UK on 18/09/21; they start build stage 3 on 8/11/21, and I have up until the day before build stage 3 to cancel. I need a new PC with a top GPU. Initially I was looking at a Scan custom-built PC with an EVGA 3080; adding up the price with a Ryzen 9 5900X and whatever else I needed, it came to £2850. Then I saw the same kind of setup with an EVGA 3090 FTW3 ULTRA GAMING for £3099.98. However, I'm reading and watching videos saying the OCP on the EVGA 3090 and Gigabyte 3090 is not set right, hence their deaths? It's a right pain, as I'm still using an i7 930 @ 4 GHz with 12 GB RAM, an MSI RX 480 8GB and a Gigabyte X58A-UD3R mobo I got from Overclockers UK in June 2010 for £1500. It still works, but I get BSODs with big Ubisoft open worlds. Although the RX 480 8GB only gives me 30 fps at max settings at 2K, it's lasted 5-6 years, replacing my R9 390 8GB. They are better cards, to be honest, which, looking at the situation, is absolutely disgusting and sad, since the EVGA 3090 costs £1500. So I'm seriously considering cancelling in the next few days and seeing how the 4000 series plays out, or looking at a Radeon 6900 XT or whatever.
@transparentblue 3 years ago
5:44 Two different definitions: the level of optimization could be defined as "how much of the silicon is the program actually using" or as "how much render time does the program need per rendered frame". In New World's case, it's optimized if you go by the first definition, whereas I've seen a few claims by devs that it does needlessly complicated things on the rendering side, meaning it would be unoptimized by the second definition. tl;dr: utilization vs efficiency.
@arjenmiedema8860 3 years ago
I can confirm that both my 980 Tis have died, whilst my gf's 970 is humming along just fine to this day. It is one of the lesser-considered aspects of GPU shopping that higher power usage will, most of the time, inevitably result in the high-end options dying sooner.
@tessierrr 3 years ago
970 master race 🤣
@andersjjensen 3 years ago
Only an Nvidia problem. Their approach to power management really spells "we hate users who don't upgrade every generation anyway..."
@TrueThanny 3 years ago
04:48 No it doesn't. It has 5248 CUDA cores. Prior to Turing, a CUDA core was a single ALU capable of executing either one FP32 instruction or one INT32 instruction. With Turing, that changed to a pair of ALUs, where one could execute FP32 and the other INT32 at the same time. The INT32 fact is important, because CUDA is a software library that includes integer math. If an execution unit cannot do integer math, it is not a CUDA core, by logical necessity. So when Ampere came out, and nVidia changed the "CUDA core" count at the very last minute to double the actual figure, it was a marketing lie. With Ampere, a CUDA core is a pair of ALUs, where one can do FP32, and the other can do either FP32 or INT32 - the same as the sole ALU on pre-Turing cards. In practice, anywhere from 20-40% of that extra FP32 capacity ends up being used in games, with the average being about 25% at 4K. That makes the _effective_ CUDA core count, for games at 4K, ~6500. That's a far cry from the marketed figure of over 10K.
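Checking the arithmetic in this comment (the 25% utilisation figure is the commenter's estimate, not a measurement): 5248 ALU pairs, with roughly a quarter of the second FP32 datapath usable in games at 4K.

```python
# Worked version of the comment's "effective CUDA core" estimate.
# The utilisation figure below is the commenter's own claim, not data.

PAIRS = 5248             # ALU pairs per the comment
EXTRA_FP32_UTIL = 0.25   # commenter's average estimate for games at 4K

effective = PAIRS * (1 + EXTRA_FP32_UTIL)
print(int(effective))    # 6560, in line with the "~6500" quoted above
```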
@TrueThanny 2 years ago
@Claus Bohm I'm talking about games. In a single-function app like Blender, the extra FP32 capacity is much easier to take fuller advantage of. Ampere is much better for compute than for gaming.
@kotekzot 3 years ago
You'd think NVIDIA would want to make their flagship as reliable as possible, but I guess not. Maybe they've realized the sort of people who buy 3090s for gaming are going to keep buying them regardless, and making the cards fail early just means more sales.
@MarshallSambell 3 years ago
The flagship cards have always had the highest failure rates, for as long as Nvidia has been making flagships. It's simply because they push the architecture to its limit, with more points of failure.
@kotekzot 3 years ago
@@MarshallSambell is it the architecture or the underspecced power delivery that's causing the failures?
@futureb1ues 3 years ago
Does this apply to the 3090FE design as well or just AIB/reference designs?
@Squall4Rinoa 3 years ago
Just AIBs.
@Squall4Rinoa 3 years ago
@@pcoverthink Please don't reply if you have no qualifications to your name. The boost to 110% is not a violation; it's part of the boost design and has been the entire time.
@Squall4Rinoa 3 years ago
@@pcoverthink lmao, I have thrice as much experience and qualifications as you, kid. Bugger off.
@ANiMOSiTYZA 3 years ago
The level of depth in your videos is astonishing! Thank you! I have a question, maybe. I have a Zotac Trinity 3090, which was the only card I could get last year, and I even got it close to MSRP. I flashed it with a VBIOS that has the power target set to 370 W (it's a 350 W card, as you know), and I have the card limit set to 105% TDP, so it's close to the max allowed limit of 390 W. It runs at, or near, 105% TDP in many workloads and has been like this for most of the time I've had it. I have a custom water loop covering all the power stages and VRMs, plus a passive backplate. Is the cooling what's allowing the card to survive?
@alexmills1329 3 years ago
Yes. Heat only accelerates the degradation of these components, but if they are pushed out of spec they can and will still fail eventually, and early.
@anarcat6653 3 years ago
Buildzoid already answered this question, or a similar one: "It does. If you keep a VRM that's on the edge of its capabilities at 50C instead of 90C, it helps a lot."
@augustusbeard4528 3 years ago
Could it be that the 8K results are just easier to monitor because the frames are longer, so the transient response might also be longer? I.e. the software update is fast enough to see the peaks, compared to a faster framerate / shorter frame time? Also, do you have any idea what information GPU-Z uses to monitor power consumption? Because if it's Nvidia's own shunt-resistor power monitoring circuitry, wouldn't that imply that the actual peak power is even higher than reported in software? Lastly, if this is true: I understand that most electrical components are made to withstand pulses on a somewhat regular basis, but wouldn't the long-term effect of these extreme pulses degrade the VRM components themselves? Or aren't pulses that destructive for these components, because of the relatively small sustained temperature increase and the short nature of the pulses?
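A quick illustration of why slow software polling understates such peaks (numbers are hypothetical, not GPU-Z's actual sampling scheme): if the reported value is a time-weighted average over a 100 ms polling interval, a single 1 ms, 600 W transient riding on a 300 W baseline barely moves the readout.

```python
# Why software readouts can hide sub-millisecond spikes (illustrative
# numbers): average a 1 ms, 600 W transient into a 100 ms polling window.

POLL_MS = 100     # assumed software polling interval, ms
BASE_W = 300.0    # steady draw during the interval
SPIKE_W = 600.0   # peak the VRM actually has to source
SPIKE_MS = 1      # transient duration, ms

# Time-weighted average over one polling interval containing one spike.
reported = (BASE_W * (POLL_MS - SPIKE_MS) + SPIKE_W * SPIKE_MS) / POLL_MS
print(reported)   # 303.0 -- the 600 W peak never appears in the readout
```

So a software tool showing ~303 W and a VRM briefly sourcing 600 W are entirely consistent, which is the gap between telemetry and what the power stages actually experience.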
@arthurberggren5618 3 years ago
Hey Buildzoid, completely off topic for this video but I was wondering if you happen to know which BIOS is best on a Gigabyte Z390 Aorus Pro WiFi, F12k or F11? I actually didn't even want the Aorus Pro. I wanted to buy the ultra or the master but they did not have any in stock anywhere at the time and if they did it was completely outrageous price. This brings me to a suggestion I have for a video for you. You could potentially make one for the differences in BIOS revisions on whatever motherboard seems appropriate. I know you have an oscilloscope so maybe measuring the differences in transient response, etc. I would also like to thank you for putting out such in-depth videos. I really wish more people would get a little more advanced when it comes to well everything really. I really appreciate your work and the time you put in. Thank you. Side note. I know a lot of people have trouble overclocking RAM on the Z390 Aorus Pro. I got mine to actually post at 4266 and boot into Windows at 4133 of course it was not stable. I was able to get 3800 15-15-15-30. 3866 however was not. This was all on F12K. I switched to f11 and 3900 and 3866 are stable at 15-15-15-30 all with really tight secondaries and tertiaries. Ram is G-Skill non-RGB 3200 14-14-14-34. Anything above 3900 I just cannot get stable. If you have any suggestions on how to get 4000 plus stable please pass on the information and thank you.
@dei_stroyer 3 years ago
I like how the memory temps are 205 degrees, let me just cook a fuckin pizza on that.
@chapstickbomber 3 years ago
Makes Vega64 peaking look tame. I run my Strix3090 at 480W for triple 4k, so I suspect my 1ms peaks are like 700W. Jesus.
@Netsuko 3 years ago
At least you're somewhat lucky that the Strix cards seem to be some of the best and most sturdy ones of the bunch. So there's that.
@Bllfrnd 3 years ago
For my Gigabyte 3090 Gaming OC, both the first and the second death occurred during GTA Online. The first was in March; they repaired it, and now it died again today.
@jannegrey 3 years ago
I was always scared to run FurMark. I only did it if someone requested it, plus once for 6 hours on my HD 4870 1GB. And New World is allegedly even worse (you said spikes, which are worse than continuous current). I'm not even trying to buy this game in the midst of the GPU shortage.
@VargVikernes1488 3 years ago
So does that mean the fact that New World is badly optimized on AMD GPUs actually saves them from going up in flames? Because I am pretty sure RDNA2 has the same transient power spikes, especially on the always power-hungry 6900 XT. Or does AMD manage power delivery more conservatively? Also, do you think it's safe to unlock the power limit through MorePowerTool to something like 360-370 W on an adequately cooled 6900 XT?
@SolarianStrike 3 years ago
The AMD reference 6900 XT actually uses a 13-phase 70 A vcore with TDA21472 power stages. The VRM is almost powerful enough to power a 3080 Ti / 3090. Cards like the Nitro+ are just reference-spec PCBs with extra fuses and RGB added. As long as you can keep the VRM cool, you should be fine. Also, the thing about Navi 21 is that it's just a much leaner GPU compared to the 3090.
@r3drumg33k3 3 years ago
IDK about safe... lol. But I have drawn over 575 watts on air with my 6900 XT OCF.
@SolarianStrike 3 years ago
@@r3drumg33k3 575W on the core alone?
@ActuallyHardcoreOverclocking 3 years ago
AMD cards don't pull as much power and use somewhat better power delivery components.
@andersjjensen 3 years ago
@@ActuallyHardcoreOverclocking It would be nice if you could walk us through (from connector to VCORE output) how AMD does things. I know your 6900 XT cracked its die, but if you still have it, you can still measure the configuration resistors and the like.
@thorstenschroder7929 3 years ago
I wonder if twice the capacitance on the GPU side of the VRM could take away a little bit of the peak load on the VRMs. Or are the peaks so long that you'd need crazy amounts of capacitance to handle these situations? Also: are there power stages with higher ratings, or are we getting back into discrete-MOSFET territory at over 60 A continuous? So far my SMPS designs have peaked at 15 A, with MOSFETs whose continuous current ratings are about the calculated peak values. Another thing that just popped into my head: how close are they running the inductors to their saturation limits? As soon as an inductor saturates, its resistive component (less than 1 mOhm) becomes dominant, which effectively looks like shorting the GPU power rail to the 12 V input rail for a few microseconds, before the controller can turn off the power stage due to overvoltage (overcurrent should have tripped here, but doesn't, due to the mentioned design flaws) or the maximum on-time being reached.
@camelCased
@camelCased 3 жыл бұрын
So, when / if I get my 3060, should I underclock it just to be sure? I'm gonna use it for experiments with neural networks and Unreal Engine. So, I'm pretty confident I will accidentally write some clumsy code that uses 100% GPU. And also Blender rendering.
@jtnachos16
@jtnachos16 3 жыл бұрын
Realistically, the 3060 shouldn't be able to slam the limits that hard, as its overall capabilities are much lower. The issue here is that the VRMs are getting slammed with rapidly cycling transients outside their ratings, based on the available evidence. The 3060 shouldn't be running so hard up against its own hardware limits as the 3090 does with regard to voltage and power. Someone can correct me if I'm wrong on that, but I doubt the 3060 is likely to see such issues from violating power limits, simply because it has a more conservative power limit and more headroom to begin with. If you are truly concerned, I'd start with underVOLTING, not underclocking. Undervolting lowers power consumption, assuming the card doesn't ignore the limits set as part of the undervolt, which heads off the issue of violating power limits. Undervolting also doesn't inherently hurt performance as much as underclocking does.
@threepe0
@threepe0 3 жыл бұрын
@@jtnachos16 because the capabilities are lower, it shouldn’t be able to try for capabilities that are higher than it’s lower capabilities berf lorgic nurrrrr buuuhhhhhh
@jtnachos16
@jtnachos16 3 жыл бұрын
@@threepe0 Not sure what you are trying to do here, short of coming across as a dumbass. For the most part, because lower end parts can't clock as high on frequencies and have lower transistor counts, they ride the line less on relative overhead in power staging. It's why there's been a relatively consistent thing with the higher end GPUs (such as top end and/or late revision Ti models) being more prone to power staging issues.
@camelCased
@camelCased 3 жыл бұрын
@@jtnachos16 Thanks, sounds reasonable. There's just one problem - waiting until I can get a 3060 12GB for a normal price :D Those 12GB are really attractive for neural-network experiments, and I'm not a heavy gamer; *60-series GPUs have always been enough for me.
@jtnachos16
@jtnachos16 3 жыл бұрын
@@camelCased I'm on a 2060S at the moment. It still handles everything I've thrown at it @1080p without much issue, gaming wise. Only occasionally need to turn down a setting to maintain 60fps. Or at least, it does now that it is in a decent case. Go figure that the moment I have the money to actually get a new card to go with my new build, is the moment the prices skyrocket.
@dinxsy8069
@dinxsy8069 3 жыл бұрын
Have Nvidia or the board partners addressed this issue? A top-tier card crapping out in this day and age is ridiculous.
@vyor8837
@vyor8837 3 жыл бұрын
They have not.
@dinxsy8069
@dinxsy8069 3 жыл бұрын
@@vyor8837 Typical behaviour - pass the issue on to the end user with no acceptance of responsibility. I'm glad I wasn't in any position to buy a 3090.
@vyor8837
@vyor8837 3 жыл бұрын
@@dinxsy8069 meanwhile, Nvidia shills are blaming New World and not Nvidia. Because of course they are.
@dinxsy8069
@dinxsy8069 3 жыл бұрын
@@vyor8837 When I heard them blaming New World I did have a 'huh' moment. People who believe that are delusional 🤣 a game that "hacks" Nvidia software/components.
@dukeofdream
@dukeofdream 3 жыл бұрын
When you have a 3080ti FTW3 and a sandwich waterblock coming in and you were expecting to be overclocking the hell out of it.... And now you're like... hm... I should put my power limit to 80% just to be safe 😤
@dreamhackian4864
@dreamhackian4864 3 жыл бұрын
The 3080 Ti FTW3 isn't at risk - it has a revised board.
@dukeofdream
@dukeofdream 3 жыл бұрын
@@dreamhackian4864 You sure? Cause the board seems pretty much the same to me... If it is indeed revised, then it means the block I ordered won’t fit 🤣🤣 I also asked EVGA if it’s ok to go for a full WaterBlock in terms of warranty. I know it’s already said many times but if something happens now I will have the support ticket answer as proof that I got confirmation before proceeding 😅
@N8Miniatures
@N8Miniatures 3 жыл бұрын
You should ask the owner if you can run New World on it, we all really wanna see it :D
@centurion1443
@centurion1443 3 жыл бұрын
many thanks for this series of videos! any suggestions for protecting the gpu? e.g. max FPS, undervolting?
@Zfast4y0u
@Zfast4y0u 3 жыл бұрын
Furmark is detected by the Nvidia driver and cards do throttle on it, because Nvidia doesn't want them to blow up. That was added before the 3000 series rolled out; can't remember which driver version exactly.
@apreviousseagle836
@apreviousseagle836 3 жыл бұрын
I have an AORUS water-cooled 3090. I also stuck a fan on top of the backplate to further enhance cooling. My readings when running FurMark at 4K and 8x MSAA:
GPU temp: 55C
Memory junction temp: 72C
Hot spot: 72C
The only game I own that punches the card as hard as FurMark is MS Flight Sim 2020. At 4K and maxed-out graphics it is able to push the card to 99%, and I still only get 54 fps.
@fleurdewin7958
@fleurdewin7958 3 жыл бұрын
Hi Buildzoid, I have 2 questions: 1. From the GPU-Z monitoring I can see that the 8-pin #1 voltage sometimes goes as low as 11.3V. From what I know, the ATX spec calls for ±5% tolerance on the 12V rail, which is 11.4V~12.6V. So is your power supply becoming faulty, or is the GPU-Z reading actually wrong? 2. The memory temps are hovering at 94 Celsius and look like they might still increase. Is it dangerous to run at these temps if I'm expecting the GPU to last at least 5 years?
@ActuallyHardcoreOverclocking
@ActuallyHardcoreOverclocking 3 жыл бұрын
The shunt resistors and input filtering make the voltage drop under high loads.
@wewewe2712
@wewewe2712 Жыл бұрын
In my computer the hot spot reaches 100C - what is the problem?
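Buildzoid's answer about the shunt and input filtering explains why the 8-pin voltage reading above can dip below the ATX tolerance without the PSU being at fault: the sense point sits behind some series resistance. A minimal sketch with assumed, purely illustrative numbers (~50 mΩ of shunt + filter + cable resistance, ~13 A per connector - neither figure is from the video):

```python
# Sketch: the monitoring point sees PSU voltage minus an I*R drop across
# the shunt resistor, input filtering and cabling (values here are assumed).

def sensed_voltage(psu_v, current_a, path_mohm):
    """Voltage at the sense point after a series path resistance in milliohms."""
    return psu_v - current_a * path_mohm / 1000.0

# ~13 A per 8-pin through an assumed ~50 mOhm path:
print(sensed_voltage(12.0, 13.0, 50.0))  # ~11.35 V, even with a healthy PSU
```

So a reading of 11.3V under load doesn't necessarily mean the 12V rail itself has sagged out of spec.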
@Seriessify
@Seriessify 3 жыл бұрын
My understanding might be limited, but if furmark hits power limit at ~1200mhz/718mV and you for example doubled the power limit via shunt modding, how much would furmark pull in watts?
@VargVikernes1488
@VargVikernes1488 3 жыл бұрын
ALL OF THEM
@unlimiteddy5546
@unlimiteddy5546 3 жыл бұрын
A shunt mod only affects power delivery from the connector. The amount of current drawn is determined by how much the GPU needs to switch all its transistors. With a shunt mod you don't force double the power into the core; you basically tell the core there's more power available if it needs it. The amount of power the core draws is based on voltage and frequency, which a shunt mod doesn't influence.
@muhschaf
@muhschaf 3 жыл бұрын
many, like a fuckton...
@volodumurkalunyak4651
@volodumurkalunyak4651 3 жыл бұрын
@@unlimiteddy5546 Actually, a shunt mod will influence frequency, since Nvidia's boost system uses the power limit to derive the operating frequency and voltage. Less power reported -> higher frequency/voltage, until the card thinks it has used up all the available power headroom.
@Seriessify
@Seriessify 3 жыл бұрын
@@unlimiteddy5546 That much I do understand. My point was wondering what frequency/voltage the card would run at, and how big the power draw would be, if it weren't limited by the power limit it is currently beating against.
@MatthewKiehl
@MatthewKiehl 3 жыл бұрын
This furmark donut is used in MSI Kombustor - anyone know how similar it is? I discovered that Path of Exile (using the Vulkan renderer) was giving me higher temps than MSI Kombustor. I thought this might be a result of full system utilization beyond just the GPU (more heat in the case and on the board in general). I had to use frame caps in that game to keep temperatures in line.
@Nelthalin
@Nelthalin 3 жыл бұрын
Thanks for the info. I was already wondering why New World would wreck cards, but it's clear Nvidia's power management is at fault here. The card should protect itself, and it doesn't do that well enough.
@BillyC500
@BillyC500 3 жыл бұрын
I can see this being a move made knowing the downsides. Is this typical of GPU power management, or was it introduced with the 3090?
@jdoggsgarage4494
@jdoggsgarage4494 3 жыл бұрын
So with all of this being said, why don't these cards blow up left and right with the 500W and 1000W BIOSes? Using the 1000W BIOS my FTW3 would pull down 600+ W in Port Royal on water cooling.
@guycxz
@guycxz 3 жыл бұрын
The card's power monitoring measures an average power draw over a period of time. If a spike is shorter than that period, the power monitoring only catches it after the fact, averaged with the draw over the rest of the measurement window. So when the GPU gets hit with a workload that uses all of it, it will try to draw as much power as it needs, limited by the card's power limit. If the load is short enough, the card's power monitoring will not catch it until after the fact and will not limit it. Additionally, the power spikes in this video may actually be higher than presented, and New World should theoretically induce ones that are higher still. Edit: A higher-clocked GPU may spike even higher, though since the power-modded cards probably have better cooling, the VRM may still suffer just as much from it, perhaps less.
@jdoggsgarage4494
@jdoggsgarage4494 3 жыл бұрын
@@guycxz I fully understand that, but with a reported power draw of 300w do you really think there are transients when playing a game that are higher than what we see when benchmarking with a 1000w bios and seeing reported power draw in the 600w range?
@guycxz
@guycxz 3 жыл бұрын
@@jdoggsgarage4494 There may be. If the card doesn't downclock and gets nearly fully utilized, the power draw could potentially be huge - so much so that some cards actually tripped OCP before dying. If those cards had OCP values similar to those outlined in the previous video, you could potentially see a current spike of 1000A. Depending on the voltage across the core, we could theoretically see a 1000W power surge that the power monitoring would report as much lower. If we measure 200 times per second, i.e. every 5 ms, and draw 1000W for 1 ms then 350W for 4 ms, the average will be 480W, and that is what gets reported. This all also depends on whether there is OCL and how it's set up, and on how the power stages are monitored and balanced.
@ActuallyHardcoreOverclocking
@ActuallyHardcoreOverclocking 3 жыл бұрын
@@jdoggsgarage4494 there's also manufacturing variance at play.
@N0N0111
@N0N0111 3 жыл бұрын
22:00 The best scenario for a gamer with a fan-cooled RTX 3090 would be as follows:
- Underclock the GPU core to about 1800MHz.
- Undervolt to 0.800-0.900 volts.
- Buy proper high-quality thermal pads for the VRAM.
- Lock your FPS to the monitor's max refresh rate in the Nvidia control panel/game.
Having this card in your system for more than 1 year with none of the above is asking for a dead card.
@axelwolf2115
@axelwolf2115 3 жыл бұрын
Mine is OC with the stock cooler and stock thermal pads/paste and it's been alive for a little more than a year...
@PlayJasch
@PlayJasch 3 жыл бұрын
@@axelwolf2115 Yeah, same. I think the flaw discussed here would already have bricked the card, and it survived 210h of New World. I'm good. Highly dependent on your card's design.
@tonyb1223
@tonyb1223 3 жыл бұрын
Mines over a year old, runs fine thanks 😁 still has just under 3 years warranty left as well 😉
@N0N0111
@N0N0111 3 жыл бұрын
Okay, that is very good. Now I know there are cards that do a lot better. Can you guys drop the GPU models for us to learn more? But remember what Buildzoid said: these high power peaks degrade the VRM power stages slowly and steadily. We're going into the winter part of the year now; my guess is that next summer will be a risky gaming period for these hot 3090s.
@axelwolf2115
@axelwolf2115 3 жыл бұрын
@@N0N0111 My model "EVGA GeForce RTX 3090 24G-P5-3987-KR" and I won't discuss longevity as that would be stupid considering they just launched last year but after seeing the flaws explained by Builzoid I will keep track of any change in behavior, although I know for a fact that the affected cards are very few, is not as widespread as it looks, but who knows, right now I'm wondering if my 3090 will live longer than my Asus 1080 hahaha
@tessierrr
@tessierrr 3 жыл бұрын
Weren't 3090s blowing up since launch? New World just opened people's eyes to how shitty the power delivery is 🤣
@noko59
@noko59 3 жыл бұрын
1080p and 4K have the same geometry/triangles. What's different is that when you shade those triangles at higher resolutions (4K/8K) you have more pixels to shade to color each triangle. Each pixel is, for the most part, one invocation of a pixel shader or compute shader and so on - more pixels means more processing and more load (keeping more shaders busy with work). Well, that's my understanding.
@martinnyolt173
@martinnyolt173 3 жыл бұрын
From an engineering POV, I can understand why Nvidia takes advantage of power stages with a rated "burst current". It allows more boosting for short periods of time without a 24+ power stage VRM (which would cost a fortune and much more PCB space). It just requires Nvidia/the board partner to accurately monitor the current through all the stages and ensure the "power stage boosting" stays within spec (e.g. a burst of 80A for 10µs, then stay below 50A for >1ms, or something like that). Of course, this also depends on the spec of the power stage on the PCB, so the GPU boosting must be adapted to the power stages. I think that would be the right thing to do.
@guycxz
@guycxz 3 жыл бұрын
Honestly, considering the price of GPUs in general and 3090s in particular a 24 power stage VRM wouldn't be too unreasonable. I reckon they configured the power delivery as they did in an attempt the get more performance for the same price, but they flew too close to the sun and now some of their cards are on fire.
@martinnyolt173
@martinnyolt173 3 жыл бұрын
@@guycxz As I said, I somehow can understand that approach. Generally, you don’t want to be too wasteful. Too many power stages can crowd the PCB, making it more difficult to route other signals, and eventually you would have to sacrifice the placement of other components, e.g. move the VDD/cache power stages further from the core (which worsens their performance), etc. At some point, a graphics card only has so much space.
@N0N0111
@N0N0111 3 жыл бұрын
@@guycxz That is so true. They're making GPUs that are 3X the price but use about the same power stages as the mid-tiers??? The engineers aren't doing their jobs anymore; it's all about maxing profits in the midst of a crypto boom. Jensen sees only $$$, and now the electrons are biting his butt cheeks.
@guycxz
@guycxz 3 жыл бұрын
@@martinnyolt173 Of course, however looking at how things are configured on the card, it appears the power spikes are left as they are by intention. I don't think the card would have the same performance if limited precisely to the VRM's spec. The best solution I could see is using higher rated power stages, which was probably considered not worth the cost.
@martinnyolt173
@martinnyolt173 3 жыл бұрын
@@guycxz Of course the power spikes are by intention. Let's say your GPU/cooler can handle a heat load of 300W. If 98% of the time, the GPU draws 280W for the load, then there is enough thermal headroom for 400W 2% of the time. This can be used to alleviate bottlenecks and increase FPS eventually. That's the whole point of boosting. However, it is not ok if Nvidia does not accurately monitor the current draw during boosting, and hence drives the power stages outside their spec. As far as I'm concerned, they could still use the same power stages, they just need to adapt their boosting to not overload them. Or, if they rely on their boosting to draw more current for their claimed performance, they need to select power stages which can handle the current spikes. But again, they need to be aware of the current draw at any time to know the required power stage spec.
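The "boost within the burst spec" idea proposed in this thread could be sketched as a simple trace checker. The 80 A / 10 µs / 50 A / 1 ms numbers are the commenter's hypothetical example, and the cool-down bookkeeping is one guessed interpretation of such a spec, not how any real VRM controller actually works:

```python
# Sketch of a burst-budget check: current may exceed the continuous rating
# only for short bursts, with a cool-down before the next burst is allowed.
# All limits here are the example numbers from the comment above (assumed).

def burst_ok(trace_us, burst_a=80, cont_a=50, burst_us=10, cooldown_us=1000):
    """trace_us: list of (current_A, duration_us). True if the trace fits the spec."""
    since_burst = cooldown_us  # start with the cool-down already satisfied
    for amps, dur in trace_us:
        if amps > burst_a:
            return False                       # hard burst limit exceeded
        if amps > cont_a:
            if dur > burst_us or since_burst < cooldown_us:
                return False                   # burst too long, or too soon
            since_burst = 0
        else:
            since_burst += dur
    return True

print(burst_ok([(45, 500), (75, 10), (40, 2000), (78, 8)]))  # True
print(burst_ok([(75, 10), (75, 10)]))  # False: second burst has no cool-down
```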
@Kerrathul
@Kerrathul 3 жыл бұрын
Do you think nVidia did this in response to pressure from AMD Radeon 5700XT, knowing that the 6xxx series might take the crown on fastest GPU?
@andersjjensen
@andersjjensen 3 жыл бұрын
This is their usual MO with power delivery, and it has been for a long time. They're just running Ampere much closer to red line (remember the day one VBIOS update that limited boost clocks?) precisely because they realized too late in the game that going with Samsung 8nm instead of TSMC 7nm gave AMD too much of an in.
@astarothmarduk3720
@astarothmarduk3720 3 жыл бұрын
They use GDDR6X memory which is power hungry, and maintain powerful RT and Tensor cores. Good intentions, but they crossed a red line. I would rather buy an RX 6900XT which shows what can be done with 300W TDP, or an RX 6800 for best energy efficiency and performance/price ratio.
@CaptainShiny5000
@CaptainShiny5000 3 жыл бұрын
I think for most people "unoptimized" means: runs bad somehow. The term gets misused a lot that way. If New World is using everything or at least most of what the GPU has to offer it's actually extremely well optimized.
@insu_na
@insu_na 3 жыл бұрын
well... if 2 ways of rendering can achieve the same, or a humanly indistinguishable result in the same amount of time, which one is more optimized: the one which uses every single transistor or the one that does it at 10% utilization?
@SternLX
@SternLX 3 жыл бұрын
I roll my eyes every time I see it in chat or in a VOIP channel. I've stopped asking "How do you know it's unoptimized? Are you a programmer with access to the code?"
@Nghtmare30589
@Nghtmare30589 3 жыл бұрын
Not really. Optimized means using the available resources EFFICIENTLY. Unoptimized is like when a car is tuned to 1000HP but can't put it to the ground effectively, so it's barely faster than a car with less power.
@CaptainShiny5000
@CaptainShiny5000 3 жыл бұрын
@@Nghtmare30589 Well said - I agree to that. I had it in mind but didn't express myself properly.
@anthonyc417
@anthonyc417 2 жыл бұрын
My GB 3080 Ti Gaming OC maxes out at 362w in SP 8K. So whatever was going on is looking more and more like drivers to me personally. Unless the 3090 with the exact same PCB layout minus one power stage on the 3080 Ti's behalf is that different but they are the same TDP so IDK.
@tekjunkie28
@tekjunkie28 3 жыл бұрын
So how accurate is that 8-pin voltage reading, Buildzoid? 11.4V sounds pretty low. I know that may or may not be related to the dying, but isn't that out of spec?
@sagerdood
@sagerdood 3 жыл бұрын
Gonna need you to review the new Master Z690 ASAP. I pre-ordered it and it looks reeeealy nice.
@MrPerpixel
@MrPerpixel 3 жыл бұрын
My research on this with New World points to many failures happening when changing quality settings. It does spike when doing so.
@JethroRose
@JethroRose 3 жыл бұрын
As I've been saying from the start, this is a design issue, not a New World issue. Software should not be able to break hardware outside of a bad flash; if it can, the hardware is at fault (and yes, there is a lot of IMHO broken hardware out there on the market - that doesn't make it right, or acceptable). Otherwise we're in for a world of hurt when the next hardware-breaking malware comes along.
@BetteBalterZen
@BetteBalterZen 3 жыл бұрын
Hi AHC, I own a ROG STRIX 3080 Ti OC LC model. Would you fear playing New World with this card? I play New World with an undervolt profile - 1800MHz @ 850mV. Thanks
@renchesandsords
@renchesandsords 3 жыл бұрын
Can this sort of issue be mitigated by higher-capacitance output capacitors, to help stabilize the output voltage and reduce some of the feedback to the VRM?
@nicholasvinen
@nicholasvinen 3 жыл бұрын
The capacitors would need to be huge (probably in the Farads) to deliver 1000A for milliseconds without sagging more than tens of millivolts. You can calculate the required capacitance but I'm too lazy to do it.
@renchesandsords
@renchesandsords 3 жыл бұрын
@@nicholasvinen Good point - running the numbers gives a value on the order of farads to dozens of farads per capacitor, assuming 700W of peak consumption and somewhere around 10-20 mV of droop across 16 capacitors. That does seem a bit high.
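The back-of-the-envelope from this thread is just C = I·t/ΔV. A quick check with the thread's figures (700 W peak, ~0.9 V core, ~15 mV allowed droop) plus an assumed 1 ms transient duration, which is not stated anywhere in the video:

```python
# C = I * t / dV: capacitance needed to hold up the rail during a transient.
# 700 W / 0.9 V and 15 mV droop come from the comments; 1 ms is an assumption.

def required_capacitance(power_w, vcore_v, hold_s, droop_v):
    current_a = power_w / vcore_v          # ~778 A at 700 W and 0.9 V
    return current_a * hold_s / droop_v    # farads

c_total = required_capacitance(700, 0.9, 1e-3, 0.015)
print(round(c_total, 1))  # ~51.9 F total, i.e. ~3.2 F per cap across 16
```

Dozens of farads either way, which is why capacitors alone can't absorb these excursions and the VRM has to ride them out.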
@Faroghar
@Faroghar 3 жыл бұрын
What is the true safe temperature limit for the components (not just lasting through the warranty)? 90°C? (I don't want to spend 1k+ on water cooling for nothing.)
@Mako-sz4qr
@Mako-sz4qr 3 жыл бұрын
Do you recommend undervolting the 3090?
@No-One.321
@No-One.321 3 жыл бұрын
Wait, so the 3090 only runs at 1200MHz while using 350W in Furmark? Is this normal behavior? Have we seen this in the last, let's say, 5 years in anything else, AMD or Nvidia?
@benjaminoechsli1941
@benjaminoechsli1941 3 жыл бұрын
Pretty sure that was an intentional kneecapping put into the Nvidia drivers a couple years back so the cards can't melt themselves when using Furmark, specifically. It's like Nvidia realizes that their cards can't be left to their own devices...
@theprofessor131
@theprofessor131 3 жыл бұрын
Haven't tried it myself but does renaming or deleting GPUMonitor_x64.dll from inside "C:\...\Superposition Benchmark\bin" resolve GPU-Z's reset issue? Figure it might be caused by some sort of fisticuffs between both programs polling the GPU for statistics
@PwadigytheOddity
@PwadigytheOddity 3 жыл бұрын
Zenyatta’s ultimate is furmark, prove me wrong.
@carbonsx3
@carbonsx3 3 жыл бұрын
Load up God of War or Horizon: Zero Dawn on PS4 and listen to the fans roar when you switch to the world map... change back to the main game render view and the fans drop. On PC, load up Warframe and enter the Mods Console window in the Orbiter and listen to your GPU's fans roar to life. Exit the console and the fans drop out again... (Vega64 Red Devil) This issue of run away render frame rates and unexpected power excursions are everywhere in games.
@Carfreak226
@Carfreak226 3 жыл бұрын
If your card has dual bios and you’re running on the lower TDP bios, and have a frame rate limiter in place, wouldn’t that mitigate these issues? So even if the card were to spike, it would still be under (hopefully) the higher watt bios? EVGA 3090 FTW3 for reference.
@arthurmoore9488
@arthurmoore9488 3 жыл бұрын
Unfortunately, the answer is probably no. Now, the reporting tools won't show you the transients, and they may be happening at a lower frame rate, but they are still happening. Remember, the power monitoring circuitry is before the power smoothing circuitry, which is before the VRM. So, a sharp transient will be handled by the smoothing, but the VRM will still see it. Now, the good news. Those power stages are rated for high transient load, so a lower frame rate means they might have time to recover. Also, since lower voltage equals lower power, undervolting the card can also help.
@Carfreak226
@Carfreak226 3 жыл бұрын
@@arthurmoore9488 Appreciate the informative and detailed response sir.
@evenbetterthantherealthing92
@evenbetterthantherealthing92 3 жыл бұрын
I'm hoping that running at 1440p puts a little less stress on my 3090 (Strix) - I opted for high refresh over 4K gaming. From HWiNFO and GPU-Z I've not seen spikes that exceed the power limits yet, knock on wood.
@utubby3730
@utubby3730 3 жыл бұрын
Well considering the Strix has a 480W bios out of the box, I would hope its designed to handle the more typically seen power draws that gamers see with it. I have a modest UV and left the PL alone and it routinely draws 400Ws (gaming at 4-5K)
@Baoran
@Baoran 3 жыл бұрын
I was testing my Asus RTX 3090 in New World for a bit using GPU-Z, like in this video. It seems the card has a higher power limit: it only hit 100% TDP somewhere around 400W. I have New World limited to 60fps. When limited to 60fps at 2560x1440, GPU load is around 40% and power usage around 270W. At 5120x1440 with the 60fps limit, load is 70%, power is between 380W and 390W, and TDP is at 95%. When I first ran New World without the 60fps limit it was doing over 90 fps at 5120x1440, so after seeing those wattage numbers I don't want to find out what the wattage would be at full load.
@grempal
@grempal 3 жыл бұрын
I always thought that furmark looked more like a furry eyeball than a furry donut. Enjoy that nightmare fuel.
@MardukTheSunGodInsideMe
@MardukTheSunGodInsideMe 3 жыл бұрын
Running an overclocked 2070 Super at 4K in New World (30-40fps). 150 hours in - thoughts and prayers.
@ianmoone8244
@ianmoone8244 3 жыл бұрын
Which PSU did you use for that test? I saw 11.2V on 8-pin #1! O.o
@mortenee88
@mortenee88 3 жыл бұрын
I have seen New World mostly crashing in the map screen when I don't cap the FPS. I play on a lot of different hardware, and the latest one I tested that crashed a lot was a Vega 64, where I couldn't get this game stable unless I basically undervolted it or capped the FPS. It's got an Alphacool block on it so it runs really cool and all, but it crashes no matter what with even a small OC. The card does 1700MHz in Fire Strike quite consistently, but it wouldn't hold a 1600/1620 OC in New World - had to step it down a lot.
@Thundercrash.
@Thundercrash. 3 жыл бұрын
My first 3090 died playing Valheim, so... I got my RMA 2-3 weeks later. Two weeks ago the new card started revving its fans up to 100% for a second and back down again (without showing high temps or high RPM in any software), on the newest GPU BIOS. I hate it so much.
@luider8795
@luider8795 3 жыл бұрын
What card is it, so I don't buy it? I'm looking for a new GPU above 1.5k.
@Thundercrash.
@Thundercrash. 3 жыл бұрын
@@luider8795 gigabyte gaming oc
@Safetytrousers
@Safetytrousers 3 жыл бұрын
When I used to have my 2080 Ti FE overclocked to mine and game with, I upped the power limit to max and the TDP reading was often at 123%. I regarded that as the raised limit working. I now run that GPU at 90% power (and play New World with it, no crashes) and it runs as fine as ever.
@Safetytrousers
@Safetytrousers 3 жыл бұрын
@Smokeyninja I pay a fixed amount for my electricity every month, so how much exactly I'm using makes no difference to my mining profits. I try to use as less electricity as possible so I have lowered the power limit on all my GPUs to the least it can be without reducing mining performance.
@astarothmarduk3720
@astarothmarduk3720 3 жыл бұрын
@@Safetytrousers You should think about the environment we all need for survival. We still don't have 100%+ renewable energy, nor enough chips to support gambling with cryptocurrency at a global scale. The principle "the person works for money, not the machine" should hold. I know it is less convenient to work for money in person, but we all have to reduce resource and energy consumption, and simply not mining is the easiest thing to do.
@Safetytrousers
@Safetytrousers 3 жыл бұрын
@@astarothmarduk3720 I haven't travelled by plane since 1989, I don't own a car, I don't eat meat. I recycle everything I can. Giving up making a living for doing no work is not easy at all.
@blossomforth2331
@blossomforth2331 3 жыл бұрын
Thought we were playing with a GPU, but it was playing us all along.
@turtleiss7683
@turtleiss7683 3 жыл бұрын
Sums it up quite well.
@lkuzmanov
@lkuzmanov 3 жыл бұрын
Hi BZ, I recently went custom water, so I've been staring at GPU power and temperatures a lot while overclocking my 3080 Gaming Z Trio 10G during Superposition stress tests at 1440p. One odd thing I'm noticing is that the driver downclocks the GPU even when the GPU is at less than 100% of its power limit. E.g. I'm running a 100 MHz OC on the core, and in games the GPU will usually hover around or beyond 2000 MHz, but during the 1440p stress test I'll often see clocks below 1950 MHz even at 95% GPU power. As you can imagine, under the custom block temps barely move. Thoughts? P.S. The card usually runs at around 365W when maxed, and I'm seeing the above at GPU power readings around 340W, so closer to 90% than 95%, which confuses me additionally...
@yourhandlehere1
@yourhandlehere1 3 жыл бұрын
Lyuben Kuzmanov... 1440p doesn't "stress" a 3080 at all. My PNY only starts freezing if I go +300 core. I never mess with voltage. 1995-2050MHz out of the box... cruises at 2200MHz, mid-60s on air. I use an RM1000x to cover any spikes. It does like to pass its 320W limit.
@lkuzmanov
@lkuzmanov 3 жыл бұрын
@@yourhandlehere1 it's what the test is called, Stress. At 1440p it stays around or often goes beyond 100%, which is enough for my purposes - to load the card. I'm not worried about max clocks or stability that much, I've found those points. I'm playing with RPM curves to find the sweet spot in terms of noise and temps. My question was about the odd behavior of Superposition in, for example, scene 8/17. For a while both the power and GPU clock drop at the same time and I can't make sense of it.
@Methos_101
@Methos_101 3 жыл бұрын
Does undervolting, and flattening the curve on MSI Afterburner help with this behaviour?
@happydawg2663
@happydawg2663 3 жыл бұрын
Yes, it should solve the problem. You get a little less FPS, but you don't end up with a burnt GPU. As BZ said, it mostly happens when all the CUDA cores are under load, such as at higher resolutions.
@nicholasvinen
@nicholasvinen 3 жыл бұрын
That's how I stopped my 3090 rebooting my system by tripping OCP on the 650W power supply (meaning it was probably peaking close to 1000W). It had virtually no effect on performance but dropped average power from over 400W to about 350W.
@kilroy987
@kilroy987 3 жыл бұрын
If people have rendering bandwidth to spare, they'll use the game settings to keep upping the resolution, detail and framerate to get the best experience. People will naturally try to drive their GPU toward 100% usage.
@ShaneCutting
@ShaneCutting 3 жыл бұрын
Do you have any recommended solutions for this issue that can be done on the user end? I have a Gigabyte 3090 Gaming OC and I would like to not blow it up.
@JDuhoh
@JDuhoh 3 жыл бұрын
I wonder if we can get some nvidia released software that blows through the power limits. Perhaps Minecraft RTX could get there? (Heaven from Unigine is listed as part of the nvidia tech demos)
@Logan_67
@Logan_67 3 жыл бұрын
Does everything you've gone through with this Vision card also apply to the Aorus 3090 Xtreme?
@ole7736
@ole7736 3 жыл бұрын
Great analysis!
@LegendaryGauntlet
@LegendaryGauntlet 3 жыл бұрын
Would watercooled cards (with watercooled VRMs, obviously) last a little bit longer ? How's the longevity of a VRM at peak load but cooler temps vs the same VRM but at high temps ?
@benjaminoechsli1941
@benjaminoechsli1941 3 жыл бұрын
To pull a reply BZ made to another, similar question, "A VRM under full load kept at 50C will last much longer than one that is running at 90C."
@guycxz
@guycxz 3 жыл бұрын
I just had a thought. When running Furmark the card draws about 350/0.7 = 500 amps. When running New World under stock conditions, you can expect the card to run at its typical 1750-1800MHz boost frequency. So, assuming current draw increases linearly with frequency, if the GPU attempts to run all cores at 1800MHz we would get a current of about (1800/1200)*500 = 750A(!!!!!!!!) No wonder OCP is set to 800-1200A. EDIT: Assuming the current increases linearly with both frequency and voltage, we are looking at a potential 750*(9/7) = 964.29A. Fuck. Hopefully CCL at least tames that a bit.
@TrueThanny
@TrueThanny 3 жыл бұрын
The voltage will go up with higher frequencies, and therefore current will go down, not up, at the same power draw.
@guycxz
@guycxz 3 жыл бұрын
​@@TrueThanny The power draw doesn't remain the same at different frequencies. If it did, the 3090 here wouldn't be downclocking itself so hard when running Furmark. Generally, current scales roughly linearly with switching frequency over the expected operating range, as transistors are toggled more times in each time frame. This is a very rough approximation, since other parameters, such as the amount of time for which current is required, may change the average charge-per-activation ratio. As the frequency goes up, more voltage is required to maintain stability, which means more current flows per activation; this also increases roughly linearly, but it's a bit more complicated than that, and I don't know whether there is even a general equation you could use to approximate the behavior of a particular piece of silicon.
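The estimate from this thread, written out as a sketch. The linear-in-frequency-and-voltage scaling is the commenter's own crude first-order assumption, not a real silicon model:

```python
# Crude first-order model from the thread above: core current assumed to
# scale linearly with both clock and voltage (an assumption, not a fact).

def scaled_current(base_a, base_mhz, base_v, mhz, v):
    return base_a * (mhz / base_mhz) * (v / base_v)

base_a = 350 / 0.7  # ~500 A: Furmark at ~350 W and ~0.7 V vcore
print(round(scaled_current(base_a, 1200, 0.7, 1800, 0.7)))  # 750 A at 1800 MHz
print(round(scaled_current(base_a, 1200, 0.7, 1800, 0.9)))  # 964 A at 0.9 V
```

Either figure lands uncomfortably close to the 800-1200A OCP range discussed in the thread.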
@konga382
@konga382 3 жыл бұрын
How applicable is this to the 3080 Ti? Since it's mostly the same as the 3090 but with half as much VRAM, does all of this still apply? It's crazy to me how most third-party 3080 Tis have a 400W board limit by default, when you're concerned about a 3090 with double the GDDR6X drawing over 350W. Makes me scared to see what's going to happen to my 3080 Ti if I happen to encounter these conditions.
@andersjjensen
@andersjjensen 3 жыл бұрын
Which model is it?
@konga382
@konga382 3 жыл бұрын
@@andersjjensen FTW3 Ultra