I think the B580 being the first card with a sub-$300 MSRP to feature more than 8GB of VRAM is a notable feature by itself. What it needs to offer beyond that is reliability, a usable upscaler at 1080p, enough RT to handle basic UE5 features, and performance a tier above the RTX 3060.
@samgragas8467 23 hours ago
Intel's upscaler is better than FSR and close to DLSS, and its ray tracing is good.
@daedalus6433 22 hours ago
@@samgragas8467 I wish more games would use it, and I hope FSR 4.0 is a good improvement.
@GewelReal 22 hours ago
You could get a 6700/6700 XT, RTX 3060 12GB or Arc A770 16GB for under $300 many times in the past.
@samgragas8467 22 hours ago
@@daedalus6433 FSR 4.0 should make XeSS useless for AMD GPUs. XeSS performs worse without an Intel GPU.
@lharsay 22 hours ago
@@GewelReal They all had higher launch MSRP, the 6700XT was at $480 for example.
@KG-bv4gy 22 hours ago
"The product is actually bad" *Cut to b-roll of 265K* Subtle!
@ironsteal 20 hours ago
*roll credits*
@Dani-kq6qq a day ago
Where is Spanish Unboxed?
@adamtajhassam9188 23 hours ago
Then we need a Japanese Unboxed lol
@johnlabu1154 23 hours ago
Where is Japan Unboxed?
@Tom_Quixote 23 hours ago
It became Spanish Reboxed.
@QOX8008-8 22 hours ago
@@johnlabu1154 * JAPAN NONBLURRED !
@mrshinobigaming8447 22 hours ago
@@johnlabu1154 Censored, probably
@TheTaurus104 a day ago
The fact that AMD is concentrating on midrange for market share does not exclude enthusiast products. In the much-quoted interview with Jack Huynh, he also said: "Don't worry. We will have a great strategy for the enthusiasts on the PC side, but we just haven't disclosed it. We'll be using chiplets, which doesn't impact what I want to do on scale, but it still takes care of enthusiasts. [...] Don't worry, we won't forget..." Unfortunately, this part is not mentioned or referred to anywhere in the media.
@fabrb26 a day ago
Because only Mrns buy high end AMD ?!
@DannyWilliamH a day ago
It's smart if they did avoid the enthusiast market. Let Nvidia crush the $1500+ GPU market and offer great price:performance for GAMERS. If they can cut into the market of people that are purely gaming on their GPU, they can make a lot of money. There is zero reason we have to keep paying $750 for a 12GB mid-range GPU just because it's Nvidia. If AMD (or Intel with better drivers) can just get mid-range gamers to believe their cards are for them...there is a lot of cash to be made. Just be the 1440p/60 kings for price:performance. Ignore trying to match the flagship Nvidia offerings. In fact, stop trying to match the current Nvidia gen/series. Ignore the 50-series, be a gen behind.
@little_fluffy_clouds a day ago
Nobody can successfully compete at the highest end against NVIDIA until & unless they have a counter to CUDA and OptiX.
@mytech6779 a day ago
Doom and gloom sells papers.
@fredsorre6605 a day ago
The reason no one is using that comment is because everyone and their grandma knows what he is referring to, but it's not coming out next year, or maybe even the year after. He is referring to RDNA5, which they recently rebranded to UDNA, which was always in line to be made and is expected to be the first true multi-chiplet GPU design. They were supposed to get the design working with RDNA4, but they could not finalize the design for the IO die (or at least its GPU equivalent) in the time they needed to get it to market. That's also why RDNA4 has no big die: the plan was to combine two mid-size dies and join them together. The comments are being ignored because everybody knows what is coming from AMD in the future.
@HEAD123456 22 hours ago
If AMD wants market share they need to pull off something like the HD 3870/4870/4890 era: 80% of the performance of the top Nvidia card for 50% of the price. I still remember how the GTX 280 launched for $650, and a month later, when the HD 4870 launched, Nvidia reduced the price to $500 and then $400 (because the 4870 cost $300 and delivered 80% of the GTX 280's performance), and when the GTX 285 launched it was only $400. That is competition. What we have here today is a cartel/duopoly.
@williampinnock2256 21 hours ago
What changed? Why were the two companies competing then but a cartel now? Are you seriously suggesting price fixing? There are laws you could exploit if you thought that was the case...
@olymind1 21 hours ago
I also liked their video cards in the $200-250 price range; the ones I had: Radeon 7500, 9500, HD 3850, HD 6850, HD 7850, RX 480. Then they took their greed to the next level...
@chy.0190 20 hours ago
They will never do that. How many times do you need to see the same thing happen in order to learn? AMD will just undercut them by £40-50 or so, a strategy that has been a failure so far.
@contenez7097 19 hours ago
Nvidia is up 190% in the last year with no gamer GPU release; I swear it has nothing to do with AI and working people.
@contenez7097 19 hours ago
How come they don't have any AMD laptops with 8000-series whatevers and 4070s? Why don't they even bother to put AMD GPUs in them? Trying to make us pay twice as much for twice the performance of 4 years ago, dawg.
@Ori_Ovadia 22 hours ago
Why aren't you putting 'Nov Q&A part x' in the headline like you used to? It's harder to tell that this current vid covers more than what the headline suggests...
@Shantara11 22 hours ago
11:58 To expand on this point, the consoles are a fixed hardware platform, so the shaders can be precompiled and distributed with the game. Valve does something similar on the Steam Deck, with their system that shares the shader cache between players in the background. Meanwhile on PC, shaders differ between GPUs, OSes and driver versions, hence the need for a shader precompilation step.
@filcon12 22 hours ago
Even on PC you can have precompiled shaders. There are minimal changes between GPUs if the game is correctly implemented. Differences in shader compilation between OS or driver versions are minimal; pretty much only when some issue was fixed, which is very rare.
@davidhines7592 20 hours ago
@@filcon12 It only takes one bit being wrong on a system, a bit that's fine on another PC with almost the same components, to cause a CTD, a system lock, or even a bluescreen. Are you certain that ignoring shader differences between systems wouldn't do that, or cause artifacting?
@contenez7097 19 hours ago
Yay for the PS5 Pro flop
@filcon12 17 hours ago
@@davidhines7592 Of course there can be bugs in the compiler/GPU/drivers. Just remember that the shader compiler is part of DirectX and has been standard for many years, so bugs are not common enough to justify a game recompiling shaders on every start. If a game does, it's simply a bad engine. Most games/engines for PC/Xbox/PS precompile shaders at build time and pack them with the game, and in most cases that's enough and it works.
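To make the thread above concrete, here is a minimal Python sketch of the idea both commenters are describing: a shader cache is keyed by everything that can change the compiled binary, which is why a console (fixed GPU, driver and OS) can ship one set of binaries while each PC configuration needs its own compile. The function and field names are hypothetical, not any real engine's or driver's API.

```python
import hashlib

def shader_cache_key(source: str, gpu_model: str, driver_version: str, os_name: str) -> str:
    """Derive a cache key from everything that can change the compiled binary.

    On a console, gpu_model/driver_version/os_name are identical on every
    unit, so precompiled binaries can ship with the game. On PC, any of
    these differing invalidates the cache and forces a local compile.
    """
    h = hashlib.sha256()
    for part in (source, gpu_model, driver_version, os_name):
        h.update(part.encode())
        h.update(b"\x00")  # separator so field boundaries stay unambiguous
    return h.hexdigest()

# The same shader source maps to different cache entries on different setups:
src = "float4 main() : SV_Target { return float4(1, 0, 0, 1); }"
key_a = shader_cache_key(src, "RTX 4070", "551.23", "Windows 11")
key_b = shader_cache_key(src, "RX 7800 XT", "24.1.1", "Windows 11")
print(key_a != key_b)  # True: each configuration needs its own compile
```

Real drivers track this with things like a pipeline cache UUID rather than raw strings, but the invalidation logic is the same in spirit.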
@chhandobhihbhushan2742 a day ago
AMD should simply undercut Nvidia by a full tier's price difference; that's literally what they did with early Ryzen.
@beachlife2968 a day ago
The thing is, Nvidia could adjust prices down and AMD would lose margins for nothing. AMD will know how much lower than Nvidia they can go without Nvidia reacting. AMD just needs to improve their whole package, not just prices, to grab customers.
@leviathan5207 23 hours ago
Not entirely true. In gaming, a 7600K SHAT all over a Ryzen 1800X 95% of the time, let alone the 8600K. And that's just the i5s... The first really good Ryzen chips that competed in gaming were the 3000 parts, and only with 5000 did they actually pull ahead and charge a premium MSRP for doing so.
@lharsay 20 hours ago
AMD did that heavily around 2010; for a brief period they could even overtake Nvidia in market share, but in the end Nvidia, with their superior margins, could invest more in R&D, which made AMD lose in the long run.
@EbonySaints 20 hours ago
@@leviathan5207 True, but for production workloads, those early Zen 1 and Zen+ chips were good enough and cheap enough for people to overlook the gaming downside, which at the end of the day was tolerable given the advantages, like Phenom vs Core Duo/Quad, instead of bad all around like Bulldozer vs Sandy Bridge. That and the promised platform support (which was kinda derped up with Zen 3 for a moment) still gave people a reason to consider Zen over Skylake at the time. People buy components based on things besides gaming. Shocking, I know.
@Giliver 20 hours ago
With the price of components now would that even be possible? Idk what it costs to make a GPU but I can't imagine it's anywhere near as cheap as it used to be
@AxWarhawk a day ago
With how unoptimized many titles get released nowadays, nobody can deliver hardware that covers what gamers actually need.
@leviathan5207 23 hours ago
Gamers actually need better games, which Nvidia, AMD and Intel cannot do shit about :(
@haukikannel 23 hours ago
But that is going to remain true in the future, so you have to factor it in when you get a GPU…
@wisnudJat 23 hours ago
And when the most optimized AAA game this year released (Veilguard), it was plagued by that thing. Also, it's actually sad that this year's GOTY is unplayable without frame gen technology.
@QOX8008-8 22 hours ago
@@leviathan5207 THEY CAN F**IN DO IT! Nvidia is a company that sponsored many titles like CP 2077, ALAN WAKE 2 AND BLACK MYTH WUKONG etc. They can put pressure on developers to stop being lazy and RELIANT ON Sh!tty UE5!
@GewelReal 22 hours ago
Gamers need to get out and touch grass
@Cragnathar a day ago
Hopefully, Intel can continue to deliver decent value for money on VRAM
@brugj03 14 hours ago
Who cares about that, it's the experience that counts. And Intel sucks at that.
@bradneff3053 a day ago
Actually excited to see what's up with Intel this time around, given the crazy GPU prices. Actually considering getting one. Don't disappoint, Intel.
@sdfopsdmsdofjmp7863 15 hours ago
I'm in the same boat. I really want Intel to come out swinging with a really well priced product that offers a reasonable amount of VRAM, not just because I want Intel to sell well, but because it would force Nvidia and AMD to actually offer reasonably priced mid-range GPUs to be competitive. A $400 4060Ti that is more like what a traditional xx50 Ti would be is absolutely disgusting and someone needs to force Nvidia to step up again.
@anzov1n 19 hours ago
This would absolutely trigger some people, but I wouldn't mind seeing some ultra-low-res (480p) tests to isolate CPU performance for ray tracing.
@band0lero 19 hours ago
Even for non-ray traced titles honestly. I want CPU testing to be guaranteed CPU limited. We are seeing 4090s reaching GPU limit at 1080p max settings on some games already.
@michahojwa8132 16 hours ago
Tried wukong on 605p native. For me it's unplayable.
@anzov1n 15 hours ago
@@michahojwa8132 I'd imagine that some games are designed for 1080p+ from the ground up, so the interface is simply too clunky and unusable at low res. My suggestion was of course for games that are at least borderline playable, enough to run a repeatable gameplay loop. It's not intended as a test of how someone would actually play the game, but of CPU performance and bottleneck thresholds.
@deepbludreams a day ago
Nice, not in french
@namedoasis1134 a day ago
As a French speaker I'm sad
@LaserVelociraptor a day ago
@@namedoasis1134 I would be sad too if I was French
@shaheer876 a day ago
😂😂😂@@LaserVelociraptor
@mikajacobsen860 a day ago
Nice is in France, though
@brandslav5989 a day ago
@@LaserVelociraptor Should've just been born somewhere else
@brucethen 22 hours ago
Part of the original GPU rumour for 8000 series was that AMD were going to have an MCM design with the GPU cores spread over multiple modules for the high end. If this rumour was true, then my theory is that they are having problems with latency for these designs and thus have postponed the high end GPUs until the next generation, thus the strategy has partly been forced on them.
@AndyViant 23 hours ago
If the B580 is around the 4060 Ti level, with 12GB of VRAM, for around US$249, that's not too bad at all. It would be popular. If there's just a single G31 product as a B750 with around 4070 Super level performance for US$399, that will be popular too, but I really wanna see a US$449 B770 G31 16GB variant at 4070 Ti Super to 4080 level.
@michahojwa8132 16 hours ago
It's rumored to be more like a 4060+, and there will be games that are going to be unplayable. Intel should go iGPU-only with a 32 CU Celestial. If they start now, they are one year ahead of Nvidia.
@AndyViant 8 hours ago
@@michahojwa8132 The difference in performance between an 8GB 4060 Ti and a 16GB 4060 Ti is fairly large once you overflow that VRAM buffer. It's really the difference between playable and unplayable. There appear to be B570 and B580 products, and rumours of $200 and $250 price points. It could be 20 and 24 CU, or 18 and 20 CU. Until we see official reviews, we won't know if the 20 CU part is the 580 or the 570. But even so, with 12GB of RAM and $249 for better-than-4060 performance, that's about a 15% discount. Not enough, mind you, given the feature set of DLSS 3. But maybe, with the extra 4GB of RAM, it is enough, given how 8GB is really at the wall now for serious gaming, especially with the RAM use of upscalers. XeSS is also better than FSR by a long way already, and this new generation should massively improve on that. If it's the B570 for $200 as a 4060+ part and the B580 as a 4060 Ti+ for $250, then Intel is in a good place, especially if it's 20 CU and 24 CU, not 18 and 20. At this point it's speculation. Bring on 3rd/4th December.
@DouglasWatt 19 hours ago
So, I think it's time for that 30 minute exploration of testing 20 games' shader compilation times across a variety of CPUs to get the data!
@titaniummechanism3214 19 hours ago
My colleague bought an A750 founders edition and likes it well enough to want to upgrade to Battlemage. I'm a little excited about those cards as well. If I wasn't so stingy, I'd build a secondary PC with an Arc card just to support the underdog.
@vulcan4d 8 hours ago
If Intel brings 16GB entry, 24GB mid and 32GB high then they already won the next GPU contest even without benchmarks. So easy to win the GPU war, if one wants to even play it.
@nuddin99 6 hours ago
Excess VRAM is unused VRAM; it does nothing but add cost at that point. Hardly any games use over 16GB, even in 4K with ray tracing. Intel GPUs can't even hit performance numbers near the 4080 to warrant that much VRAM.
@pjavilla 30 minutes ago
As the other reply points out, that's a bit much, not to mention that going for the high end of the market is folly considering consumer confidence in Intel's offerings isn't quite there. They should shoot for the budget end of the market and pull the rug out from under Nvidia and even AMD. FFS, do people not look at, e.g., the Steam surveys? Most computers aren't running high-end GPUs. If Intel can steal away 1080p and 1440p, that should be more than good enough. Dominating the top end is all well and good for Nvidia, but they can't ignore the budget end.
@drinkwoter 19 hours ago
Unless your game looks better than RDR2, there's absolutely no reason for devs nowadays to push games that require more hardware than RDR2 did.
@bubsy3861 15 hours ago
@@drinkwoter RDR2 is about art direction and material work. It's not a technically advanced game. But it's one of the games that could benefit hugely from RT, in my opinion.
@TheNerd 3 hours ago
09:00 Shaders are small programs (written source code) that influence how a pixel is displayed. Say you have a texture of red bricks; a shader could (for example) calculate how it's displayed based on a roughness value you put in, to make it look rough instead of wet/glossy, or, when it's raining, "set this value to a wet setting". Since shaders are source code like any software, they need to be compiled (made understandable for a computer), and in the case of shaders, for your machine setup specifically. Usually after compilation these compiled shaders are put into a shader cache so they can simply be loaded from your SSD. Sadly, most implementations of "real-time shader compilation" cause massive frame time spikes (for obvious reasons: you need to compile a shader before anything using it can be displayed), which causes stutter during compilation (it's basically a "waiting time"). The alternative is "compile all shaders up front", but depending on how many permutations there are, this can take a very long time (from a couple of seconds to 20-30 minutes). In theory this should solve the problem, but for a couple of reasons the implementation in most engines is also lacking: "the collector" in these engines does not find all permutations, so at the start of the game not all shaders get compiled. Shader compilation is one of those topics that (for some reason) did not get enough attention for a long time and is therefore a pretty big problem right now. That said, "the average gamer" does not really notice "shader hiccups" of 20, 30, 40 ms (we are talking about 1, 2, 3 frames out of 30/60 in a second), so during development other topics have higher priority, like "make the game good in the first place".
Big offenders in this space right now are almost all Unreal Engine PC games, all From Software PC games (they are especially known for lagging on the technology side for a long time), and games like Jedi Survivor (which is a complete, utter mess, and I don't understand how that game got nominated for anything and still got recommended by critics despite the mess it STILL is). On console this problem (almost) does not exist, since you do not have a lot of hardware combinations; in other words, the devs can compile everything needed up front and deliver the compiled shaders with the game. Even if they need to compile during gameplay, usually the console has a little headroom that can be used for tasks like this.
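The "couple of seconds to 20-30 minutes" range above comes from permutation growth, which is easy to sketch. The counts and per-variant times below are made-up illustration numbers, not measurements: every independent on/off feature doubles the number of variants, so up-front compile time explodes.

```python
# Each optional feature (fog on/off, skinning on/off, ...) typically gets its
# own compiled shader variant, so variants grow as 2**features per base shader.
def variant_count(num_boolean_features: int, base_shaders: int) -> int:
    return base_shaders * (2 ** num_boolean_features)

def upfront_compile_seconds(variants: int, ms_per_variant: float) -> float:
    return variants * ms_per_variant / 1000.0

# Hypothetical numbers: 50 base shaders, 10 toggleable features, 20 ms each.
v = variant_count(num_boolean_features=10, base_shaders=50)
print(v)                                   # 51200 variants
print(upfront_compile_seconds(v, 20))      # 1024.0 seconds, roughly 17 minutes
```

Cut the feature count to 3 and the same math gives 400 variants, i.e. a few seconds, which is why some games precompile almost instantly and others make you wait.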
@davidmccarthy6390 a day ago
The market they need to target is the entry-level sub-US$250 market that both AMD and Nvidia have ignored for the last generation, with neither releasing products in that price range. In fact, the last "effort" was the dire RX 6500 XT from AMD, a gimped mobile GPU that failed to beat its predecessor the RX 580 in many tasks, and it had half the VRAM of the RX 580, so actually a step backwards. If Intel can corner that market with a good-value offering that has a minimum of 12GB of VRAM for US$250 or less and can compete with AMD's and Nvidia's "entry level", they could have a massive winner, and force the other two to adjust their pricing downward. If however it costs around the same and only gives you the same or less, it'll most likely be dead on arrival, as will any actual competition in the GPU space in the future. Intel isn't there yet with their drivers, and until they are, their GPU products require a discount.
@michahojwa8132 15 hours ago
Full-power iGPUs, IMO, before Nvidia comes and eats that market. It's the next AI-like boom, and also an AI boom: a strong iGPU + 256GB of RAM > a 5090 with 32GB. That's the potential to be better than Nvidia.
@sdfopsdmsdofjmp7863 15 hours ago
I don't think the sub-$250 market that Nvidia and AMD were targeting with the 1060 6GB and RX 580 really exists anymore. Inflation and expensive process nodes have turned the $250 market into a $300-320 market, which is where I think the sweet spot should be at this point. Given that the 4060 is by far the best-selling GPU despite its underwhelming performance, the market seems to agree that $300-320 is what the average consumer is willing to spend. If Intel can manage to ship a $320 B750 with 16GB of VRAM and 4070 performance, and a ~$400 B770 with 4070 Super/Ti performance, they will end up in a really, really good spot in the market. I mean, I think the B580 at $250 with 12GB of VRAM and performance between a 4060 and 4060 Ti would sell really well, but I do think that most people are willing to spend over $300 for a card if the value for money is good.
@senpos 22 hours ago
The B650M-HDV/M2 is a beast for the money! Thanks for the motherboard reviews; they are really helpful, as there are too many variants with similar-ish names but different components. I know nothing about VRMs, so your tests are gold.
@Dayanto 22 hours ago
23:28 I don't think it's safe to assume that v-cache won't benefit productivity now that the clock penalty is gone. Considering that the 9800x3D wins* on many tasks where the 7800x3D loses against their respective non-v-cache parts, I think we should expect the 9950x3D to beat the 9950x on tasks where the 7950x3D didn't. And if it's actually better, the cost difference probably doesn't matter at the high end. Especially for professional use, where performance has a monetary value. * To be fair, the 9800x3D has a higher base clock than the 9700x, so we don't really have a proper apples-to-apples comparison this generation. The jury is still out whether it will actually be better, but it's promising so far.
@levygaming3133 17 hours ago
Especially since 3D V-Cache was something they did for their server parts, not something originally meant for gaming.
@Dayanto 16 hours ago
@levygaming3133 I mean, we already knew that it helped with _some_ productivity workloads in Zen 4, but those people already buy x3D. What I'm talking about is everything else that either saw no benefit or a regression from v-cache. I think it's reasonably likely that a decent number of those actually did benefit slightly from v-cache, but that this was masked by the drop in clocks.
@kacperjestfajny 22 hours ago
We need a Polish version of Hardware Unboxed. But they would need to drink vodka and eat kielbasa while talking.
@rangersmith4652 20 hours ago
Not a bad plan.
@ericthedesigner 16 hours ago
Troubleshooting a problem is the exact definition of the scientific process: recording data and making calculations based on the environment. So many people get this wrong. You guys rock!
@jessefisher1809 23 hours ago
Believe it or not, E-cores actually help a lot with shader compilation. I know they don't help much for actual gameplay, and in some cases can be detrimental. But for compiling those shaders, they do actually help.
@joby602 21 hours ago
But how do you know? Help me believe.
@jessefisher1809 19 hours ago
@@joby602 You just run a game that compiles shaders with them enabled vs disabled and time how long it takes.
@joby602 19 hours ago
@@jessefisher1809 Thanks, will try this.
@glenndoiron9317 17 hours ago
@@joby602 I notice a *huge* difference in compilation time on an i3-13100K vs an i9-12900K (both have samsung 990 pros and 32G+ RAM). They are definitely soaking every available thread/core!
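The timing difference reported in this thread follows from shader permutations being independent jobs that spread across all available threads. A crude model makes the point; the shader counts and per-shader times below are hypothetical, and the ideal linear scaling assumed here is better than real hardware achieves, but the direction matches what the commenters observe.

```python
import math

def parallel_compile_ms(num_shaders: int, ms_each: int, threads: int) -> int:
    # Idealized model: independent compile jobs are spread evenly across
    # threads, so wall time scales with the number of batches per thread.
    return math.ceil(num_shaders / threads) * ms_each

# Hypothetical workload: 2,800 shader permutations at ~50 ms each.
print(parallel_compile_ms(2800, 50, threads=12))  # 11700 ms on 12 threads
print(parallel_compile_ms(2800, 50, threads=20))  # 7000 ms with 8 extra E-core threads
```

Going from 12 to 20 threads cuts the modeled wall time by roughly 40%, which is the kind of gap you can measure with the enable/disable test suggested above.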
@Xero_Wolf a day ago
For Battlemage to even matter to me, it needs to beat the 4060/Ti outright and be on par with a 4070 on the high end. As of now we don't know how many tiers are coming, but there needs to be a marked performance uplift between the B580 and B750.
@Integroabysal 23 hours ago
It won't. Intel will not just give you free performance, stop coping, brothers :D They will just pick a fight with the 7600 XT and price accordingly, so the fanboys can pick Intel instead of AMD or Nvidia.
@PixelatedWolf2077 23 hours ago
@@Integroabysal Why do you say that? The B580 already outdoes AMD because of its dedicated RT and AI cores for upscaling and ray tracing. It wasn't practical, but I was able to do RT in FH5 with Xe integrated graphics because of Intel's RT cores.
@Integroabysal 23 hours ago
@@PixelatedWolf2077 Sure, the RTX 4060 is also beating the RX 7600 XT in RT and DLSS, but how many people are actually playing games on a 4060 at ultra with RT and stuff?
@Xero_Wolf 23 hours ago
@@Integroabysal If they do that, then forget it. I don't need another almost-there GPU. Anyway, I'm not looking for free performance; I fully expect to pay a good price for 4070 performance, and whatever 4070s are selling for right now is far from a good price. For most of the work I do, Intel GPUs beat both AMD and Nvidia on price to performance. The problem is it's only at the low end. Sure, a $250 GPU is going to sell if the performance is there, but many of us need more than that too.
@legendknight2037 23 hours ago
The main thing Intel needs to take care of is its drivers. Sure, they've come a really long way, but they're still having problems in a lot of new games on day 1, when you can play them on AMD and Nvidia even without a driver upgrade. A new line of products will not magically bring better drivers.
@patrikmiskovic791 a day ago
We need a new RX 8800 and new Intel GPUs
@1Grainer1 a day ago
RDNA 4 could have had a high end, but it would be too expensive, so why bother? If someone wants high-end they will buy Nvidia without looking at the price, and just looking at the Steam survey, the 4090 has a higher presence (0.91%) than the highest-placed Radeon, the 6600 (0.76%).
@Lady_Zenith 23 hours ago
Because it is overpriced. Everything is priced like the 4090; that's one of the reasons no one bought the 4080. It was $1500 for the 4090 and $1200 for the 4080: only a fool would pick the 4080. Same with the 4070 Ti and the whole lineup, and AMD, wanting to grab some cash, just copied the whole thing minus 10%. The Radeon 7600 XT has an MSRP of $329, while the 6600 XT used to be $300; that is ridiculous for a low-end, tiny 128-bit GPU with 8GB of memory in 2024. It's equal to what the GTX 650 was in 2016, i.e. total low-end. The 1060 6GB (192-bit) used to be $250. Hell, the famous Polaris RX 570 was $168! And that was a mainstream 256-bit GPU; that's why it was such a good deal. In other words, GPUs now cost 2-3x what they should, 3x at the low end and mainstream, while at the high end it's "just" 2x; that's why nothing is attractive. If AMD wants their mindshare and market share back, they will have to pull another Polaris, and I bet they will not. Too much greed for that.
@kevinerbs2778 15 hours ago
@@Lady_Zenith Polaris had the ability to run up to 4 cards together while gaming, so their prices fit that spec. These new cards at 2-3x higher prices can't even do that, so where is the value in buying a single card? People don't know this, but the whole RDNA line from the RX 5000 to the RX 7000 supports mGPU for dual-card use. Too bad no developers bother to use it: apart from 10 games, and I think three with RT, no one is bothering to add such a simple thing.
@lizardx6504 11 hours ago
Because 90% of the market is the fanboy sect. The 3050 has a market share many times higher than the 6600. People pay for RTX letters. They don't even understand that they need to turn on DLSS to achieve the same performance that the 6600 has in native. In some price ranges Nvidia has no equal, in some AMD.
@renereiche 17 hours ago
On a PlayStation there IS no shader compilation, because the shaders come with the game (a copy of the shader cache), since every PS4 (or PS5 or PS5 Pro) is the same, while every PC is different and there would be too many combinations to deliver all pre-cached versions. On a Steam Deck, btw, Valve computes the shaders once and delivers them automatically as extra downloads, which is why Steam Decks usually have no shader compilation stutter either. On PC, even game-start-up shader compilation rarely manages to cache all the shaders needed for perfectly smooth gameplay, and you can't compute your way around it (especially not during gameplay), not even in 10 years with 3x faster CPUs with tons of cache. (And sorry to be one of those commentators.)
@johnnyringo35 7 hours ago
My Amintelvidia ARM based Quantum CPU and QUANTUM GPU have no issues.....here in the year 2055. Upgrade your hardware and time travel you scrubs....
@TheNerd 2 hours ago
30:11 The main thing to look at on modern motherboards is the power delivery. Sometimes boards claim they can handle (for example) the 9950X in the spec sheet, but in reality you lose some performance at high load because that specific motherboard cannot deliver enough power. In short: if you buy a CPU with a pretty high TDP, you should research the power delivery of the specific board you are interested in. Historically speaking, if you buy a high-end CPU, it's a good idea not to pair it with an entry-level mainboard. Usually entry-level boards can technically run that CPU (they detect it, they boot, etc.), but they are developed for mainstream parts, which leads to performance issues in some situations like "rendering in Blender" or "encoding video with the CPU".
@marktackman2886 20 hours ago
I have two 5700 XTs and I can confirm one is a blower style and one is a dual-fan style. Both continue to have driver issues, 5 years on now.
@myroslav6873 16 hours ago
What issues? I have 5700XT and couldn't be happier with it. Best fps/$ right now at the lower mid end and has been for at least 2 years.
@marktackman2886 15 hours ago
The drivers consistently bug out and need to be reinstalled, on both systems, especially after multiple sleep sessions. The 6600M I got off AliExpress has been WAY more stable. I love the card except for the annoying random driver issues, especially with things like Fortnite DX12; it's frustrating.
@wargamingrefugee9065 9 hours ago
Intel Arc opened the door to affordable 16GB graphics cards for me. US$350 for the FE A770 let me play with Stable Diffusion in a way not possible with my 8GB RTX 2060 Super. Back when the card launched, Topaz Labs' Gigapixel was a free download with the purchase of the A770. Converting a video to a series of still images and then upscaling wasn't the most direct approach, but it did (and still does) work. On a whim, I recently downloaded Topaz Labs' Video AI 5 test version (same as the for-sale version but with a huge watermark) and upscaled a 1080p video to 1440p using the A750 8GB card in my gaming PC (it was US$200 for the FE card and I couldn't resist buying one). The software worked flawlessly on the A750, taking about 2 hours to upscale a 10-minute, 1GB video. Would an Nvidia or AMD card be faster? I don't know, and maybe that's not the point, at least not for me. It's what the cards can do -- playing World of Tanks and War Thunder, training an embedding in Stable Diffusion, using upscaling software, transcoding video to AV1, just working and being stable -- that has earned them a continued place in my computers. So, yeah, I'm looking forward to Battlemage. I'd like a more powerful yet affordable GPU in my gaming rig. As always, I'll wait for your and others' reviews, look at my needs and see how things go, but an affordable 16GB Battlemage would be tempting. Wonder if they'll offer something with more VRAM?
@BastyTHz a day ago
When will this x8 PCIe end? Modern games need more data transfer rate, but the card only has half of it.
@ThePhoenixTail 19 hours ago
Loving the Oled t-shirt!
@adamtajhassam9188 23 hours ago
The 3 biggest differences: 1) temperature/voltage management on an 870 board; 2) better stabilization with certain workloads, emulation or office; 3) as you said, more USB, and WiFi 7 ready for dedicated WiFi users. I will only suggest the 650 as a secondary now.
@DELTA9XTC 22 hours ago
I think my upgrade from an i7 3770 (a quad-core from like 2012 lmao) to a 9800X3D was warranted, and my upcoming upgrade from my GTX 1070 to an RTX 5090 is also warranted :D And my upgrade from a snoozefest 27" 1080p 60Hz IPS monitor to a 32" 4K 240Hz QD-OLED monitor was definitely warranted. Can't tell you how excited I am for the 5090, to finally have THE top-end PC. Well, for 2 years it will be that, then it could be time to upgrade again to a 6090 or so lol
@glenndoiron9317 17 hours ago
If you are using a monitor that's just hooked up to your computer, and you primarily game, you were probably better off with a high end 1440p monitor. Not even the 4090 can max out a high end 4k display on modern games.
@DELTA9XTC 17 hours ago
@@glenndoiron9317 What? No, didn't you read? I will buy a 5090, and I will upgrade to a 6090 and 7090 after that, until I can comfortably play at high refresh rates in 4K. I love DLSS upscaling. At 4K output res I will always activate at least DLSS Quality. That means the GPU renders the game at around 1440p and fills in the missing pixels up to a 4K image per its machine-learning-based image reconstruction. Because it gets rendered at 1440p before the upscaling process, you have virtually no artifacts, and 99/100 times you can't see ANY difference from native 4K. In fact, it often looks better than native 4K with, for example, TAA as anti-aliasing. So no, a 1440p monitor would be the completely wrong thing with a 5090 and an upscaler as good as DLSS. And DLSS keeps improving as well. At 4K output res you can even go to DLSS Performance! That's like 1080p internal rendering and then upscaling to the full 8.3 million pixels of my 4K monitor. Even with such aggressive upscaling, it can look amazing. But usually DLSS Quality is all that is needed for a high-refresh-rate experience, and there is still DLSS Balanced as an in-between level. AND there is DLSS Frame Gen, which works amazingly in the right games. E.g., Black Myth Wukong works great with DLSS Upscaling AND Frame Gen; you can play on the best-looking settings that way.
@DELTA9XTC17 сағат бұрын
@@glenndoiron9317 I currently play through GeForce Now at 4K120 because that's the max bitrate and fps you can get from them. The GPU is 4080-level performance (it's half of an Nvidia L40S server GPU). The 5090 will be a lot faster than a current 4080, and even with the 4080-like GPU I almost always get 4K120 maxed out. I don't need to fully drive 240 fps at 4K; I won't reach anything close to that in really intensive games at really intensive settings. That's why I will upgrade to a better CPU than the 9800X3D and a better GPU than the 5090 as soon as it's worth it. A 4K image just looks a lot better than 1440p at 32 inches. DLSS Quality at 4K always looks better than native 1440p because it is native 1440p scaled up to 4K. It's amazing. 1440p has 3.7 million pixels and 4K has a whopping 8.3 million; you can ABSOLUTELY notice that in clarity, sharpness, detail, etc., and even 4K60 would be an incredible experience in any single-player game. But I can promise you that I will always have more than 60 fps with my 9800X3D + 5090 PC because of the amazing DLSS features ;-) And there are thousands of pre-UE5 games that will max out my 4K 240Hz monitor easily.
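[Editor's note] The internal-resolution figures in this thread are easy to sanity-check. A minimal sketch, assuming the commonly cited per-axis DLSS scale factors (Quality ≈ 2/3, Balanced ≈ 0.58, Performance = 0.5; exact values can vary by game and DLSS version):

```python
# Commonly cited DLSS per-axis render-scale factors (illustrative).
SCALE = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

def internal_res(out_w: int, out_h: int, mode: str):
    """Return the internal render resolution for a given output res and mode."""
    s = SCALE[mode]
    return round(out_w * s), round(out_h * s)

for mode in SCALE:
    w, h = internal_res(3840, 2160, mode)
    # 8.3 MP is the 4K output; the internal render is what the GPU pays for.
    print(f"4K {mode}: renders at {w}x{h} ({w * h / 1e6:.1f} MP internally)")
```

This matches the thread: 4K Quality renders at 2560x1440 and 4K Performance at 1920x1080, which is why the commenter calls a 1440p panel redundant next to a 4K panel with DLSS Quality.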
@brucethen22 сағат бұрын
"What annoys you most", Please wait compling shaders Despite ticking "Keep me logged in" Everytime EA boots, i am logged out.
@glenndoiron931718 сағат бұрын
Shader compilation is required unless you're running the exact same configuration as everyone else. Could they hide it better? Maybe.
@brucethen18 сағат бұрын
@glenndoiron9317 I was more referring to the fact that some games do it every run, while others do it once unless you update the game or graphics driver. Once is OK; every run, not so much.
@Bluelagoonstudios21 сағат бұрын
9:00 Use 2 x 4090 GPUs; simmers are crazy about this setup. And they're on the phone daily for a Ryzen 7 9800X3D, which is constantly sold out, at least here; we already have 6 backorders. These people are really serious about sims. Some of these clients are our best, and their "old" systems get refurbished and put on display at a lower price, and they aren't on display long in our shop.
@backlogbuddies23 сағат бұрын
Intel enters the GPU market at the lower end and then fails to provide proper DX9 support and forces ReBAR, things people in the lower-end market want and might not have.
@Nanerbeet20 сағат бұрын
You should benchmark the shader compilation process on multiple CPUs; there is a big difference. The 8-core parts are substantially slower than the high-end stuff.
@Kapono515020 сағат бұрын
Always remember, your monitor should be more capable than what your GPU can handle.
@Janfey17 сағат бұрын
why?
@Deliveredmean4217 сағат бұрын
What if you intentionally don't?
@Unknown-wd7ub16 сағат бұрын
@@Janfey There are cases where even an iGPU reaches 4K 240Hz. Also, for gaming specifically, it's slightly better to have 237-239 fps on a 240Hz display compared to 300, but with adaptive sync (FreeSync/G-Sync) it really doesn't matter, because they fix tearing in almost 100% of scenarios.
@Kapono515015 сағат бұрын
@ Because if your current GPU can already max out your monitor, then there's no upgrade path for you unless you upgrade the monitor along with a new GPU.
@Unknown-wd7ub15 сағат бұрын
@@Kapono5150 with games becoming more and more demanding and people wanting to use raytracing in some cases I doubt that can ever be a concern
@Silvar55x21 сағат бұрын
12:03 I don't think consoles do shader compilation on the fly / on demand at all (like on PC). They're fixed platforms so compiled shaders can be provided by the developer as part of the game install.
@xlr555usa23 сағат бұрын
B580 and B570 should be announced on Dec 3rd 2024
@smurfjegeren9739Күн бұрын
For me to consider the B580, it would have to be on par with the used market at an equal price. Right now I'm finding the RX 6700 used for $250, so if that's the price the B580 launches at, it would need to beat that, I suppose.
@KoItai1Күн бұрын
There's also a B570 that has 10% fewer cores than the B580. Don't know about the VRAM or other specs, but it has been spotted too, and it would be the really budget-friendly Battlemage GPU. It'll probably be an 8GB VRAM GPU below $200; at least that's what I'm expecting.
@fleurdewin7958Күн бұрын
Be careful, the RX 6700 exists with 10GB of VRAM. You want the RX 6700 XT with 12GB of VRAM.
@haukikannel23 сағат бұрын
Intel can't do that if they want their GPU business to stay afloat! They can't make a profit at those prices!
@ironsteal20 сағат бұрын
30:15 great analysis of the hardware config
@MmntechCa20 сағат бұрын
I've been using the A770 for a couple of years now and it's improved a lot. It helps that I don't play new AAA releases anymore (since most have been garbage), but compared to the early days it's come a long way. Runs most games at 1440p/60 like a champ. XeSS is a great tool, and their RT performance is decent too. But yes, pricing is crucial; I wouldn't have bought it had Memory Express not been unloading the LEs cheap. One rumour I heard, though, is that Intel is positioning Battlemage for compute/AI and it won't have much uplift over Alchemist in gaming. If that's true it will just kill it in the gaming space, and in that case I'm 100% going back to Radeon once my current GPU hits EOL.
@ericthedesigner16 сағат бұрын
The quote goes like this "the best of the best of the best, sir!" MIB interview scene
@blursedvark4973Күн бұрын
I'm earlier to this video than I'll probably be to the launch of the B580
20 сағат бұрын
unrelated: saw a 2 week old video with Baylun (hope I spelled that correctly), the video editor guy doing a 10 min video on CPU reviews. Does the man still have the job, and will we see him in some future videos as a presenter? I think he was great :D
@BerserkingKantusКүн бұрын
I commented last time about Warhammer: Vermintide and CPU performance degradation between the new Windows updates. I found out my RAM was clocked at 4800; when I clocked it up to 5600 my performance went back to the 270-300 range after dipping down to 130-200 at 4800. I'm wondering how that game will like DDR5-8000 if it's that picky about memory clocks.
@Reknilador23 сағат бұрын
Not just shader compilation: why do games run at full CPU and GPU usage in the MAIN MENU? What are they doing? This has been happening for years, and no, there's nothing to animate in a static main menu.
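[Editor's note] The usual culprit is an uncapped render loop that keeps redrawing the static menu as fast as the hardware allows. A hypothetical sketch of the fix (not how any particular engine does it) is simply sleeping to a target frame time while in the menu:

```python
# Minimal sketch: cap a menu's frame rate so the loop stops spinning
# flat-out. 30 fps is assumed here as a plausible cap for a static menu.
import time

TARGET_DT = 1 / 30  # target frame time in seconds

def menu_frame():
    pass  # draw the (static) menu here

def run_menu_capped(frames: int):
    for _ in range(frames):
        start = time.perf_counter()
        menu_frame()
        elapsed = time.perf_counter() - start
        if elapsed < TARGET_DT:
            # Sleep off the remainder of the frame budget instead of
            # immediately rendering the same pixels again.
            time.sleep(TARGET_DT - elapsed)
```

Without the `sleep`, the loop runs thousands of iterations per second and both CPU and GPU sit at full load drawing an unchanging image.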
@laszlozsurka899119 сағат бұрын
The B580 needs to be sub-$200 for the performance it's going to give. In OpenCL the B580 is about 24% slower than a 4060 (the B580 scored 77,000 versus the 101,000 scored by the 4060, i.e. the 4060 is ~31% faster), which would put the B580 around RX 6600 or RTX 3060 level. A 3060 level of performance in 2025 is not good enough unless it's really cheap.
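[Editor's note] The two percentages describe the same gap from opposite directions, which is an easy thing to mix up. A quick check using the scores quoted in the comment:

```python
# OpenCL scores quoted in the comment above.
b580, rtx4060 = 77_000, 101_000

slower = (1 - b580 / rtx4060) * 100  # how much slower the B580 is
faster = (rtx4060 / b580 - 1) * 100  # how much faster the 4060 is

print(f"B580 is {slower:.0f}% slower; 4060 is {faster:.0f}% faster")
# → B580 is 24% slower; 4060 is 31% faster
```

Saying "31% slower" takes the 4060's relative advantage and applies it in the wrong direction; the B580's deficit is ~24%.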
@toutagamon21 сағат бұрын
Well, there's an idea. Benchmark shader compilation times, on various cpus.
@nevo640020 сағат бұрын
Let's hope BATTLE MINGE brings something to the table.
@Krebzonide18 сағат бұрын
I’m really hoping battlemage has high vram for AI performance.
@l8knight84519 сағат бұрын
I went to X870 for the PCIE Gen5. Little bit of future proofing.
@lucazani273023 сағат бұрын
How big will the gap be between a GDDR7 (Blackwell) and a GDDR6 (RDNA 4) graphics card? I know it's a tough, apples-to-oranges comparison, but I'm quite worried that one of Radeon's key selling points (a better VRAM configuration, and I'm not talking only about size but also bus speed and bandwidth) won't be there next year.
@SaiakuNaSenshu21 сағат бұрын
does battlemage have electrolytes?
@nellyx805119 сағат бұрын
B580: it's what games crave.
@robinbenzinger564610 сағат бұрын
The reason shader compilation stutter is an issue on PC and not on console doesn't have anything to do with capped frame rates. It's because compiled shaders are only included in the console version of a game. This is also the case for the Steam Deck running SteamOS, and it's the reason Elden Ring on Steam Deck doesn't have the same shader compilation stutters that a Windows PC has.
@iabtiКүн бұрын
more of those different cpu gpu combo tests pls
@최치헌-x6x21 сағат бұрын
Alchemist was a contender against RTX 30 and RDNA 2, so Battlemage should be a competitor to RTX 40 and RDNA 3, not the RTX 50 and RDNA 4 that will be released in January.
@PookaBot16 сағат бұрын
A "for science" CPU benchmark of shader compilation time might actually be interesting information. Especially for a game like Stalker 2.
@suchirghuwalewala23 сағат бұрын
seems like duo decided we had our spanish lessons so he allowed us to listen to the english soundtrack
@sethperry661617 сағат бұрын
I'm excited at the idea of a *fingers crossed* 8GB Arc B310. I have an i5 8600 in a dedicated encoding computer, but I'm getting to the point where I need something a little better.
@psxemulator13 сағат бұрын
The reason that console games don't have shader compilation delays is that all shaders are compiled directly for the hardware offline. This is not possible (or at least, impractical) on PC because there are so many different GPUs (including future GPUs that you have no knowledge of)
@dividead10019 сағат бұрын
I don't expect nVidia to destroy the value of their current gen offerings but Intel might do it which would be nice.
@0x8badbeef21 сағат бұрын
Compilation shouldn't occur unless there is a change; otherwise what has already been compiled can be reloaded and executed. They must have a dumb 'makefile'. However, if you disable Shader Cache Size in your Nvidia Control Panel, it will compile your shaders every time.
@levygaming313317 сағат бұрын
Even if you don’t disable it, it has some fixed size, and it’s not hard to imagine that it could be running out of room to cache.
@0x8badbeef17 сағат бұрын
@@levygaming3133 The default setting is what your driver recommends, but you can set it to Unlimited. I don't know if you're familiar with 'makefiles': they only recompile what changed. It wouldn't surprise me if some games have no way of determining what changed, so to play it safe they compile everything.
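[Editor's note] The 'makefile' analogy in this thread can be sketched as a content-addressed cache. This is a toy model, not a real driver; the assumption (which matches observed driver behaviour) is that the cache key covers GPU and driver identity as well as the shader source, which is why a driver or hardware change invalidates everything at once:

```python
# Toy sketch of driver-style shader caching.
import hashlib

cache: dict[str, bytes] = {}  # cache key -> "compiled" blob

def compile_shader(source: str) -> bytes:
    # Stand-in for a real (slow) compilation step.
    return b"BIN:" + source.encode()

def get_shader(source: str, gpu: str, driver: str) -> bytes:
    # Key on GPU + driver + source: change any of them and you miss.
    key = hashlib.sha256(f"{gpu}|{driver}|{source}".encode()).hexdigest()
    if key not in cache:  # compile only on a cache miss
        cache[key] = compile_shader(source)
    return cache[key]
```

Running the same shader twice on the same GPU/driver hits the cache; bumping the driver string forces a recompile, the analogue of a driver update wiping the shader cache.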
@spoots123423 сағат бұрын
It'll be $300 and probably 10% faster than a 4060. Just based on Arc vs LL
@EbonySaints20 сағат бұрын
The Videocardz leak had it at $259 with a launch on December 12th. Intel would have to be insane to jack up the price last minute.
@spoots123417 сағат бұрын
@EbonySaints That was probably a placeholder listing. 259 sounds like their wholesale price to places like Amazon
@vergeofapathy11 сағат бұрын
Personally, I'd absolutely jump on the opportunity of waiting for one minute on every game start if that results in no shader-compilation stutters during gameplay. Even though that makes little sense to me to be a necessity unless devs go out of their way to invalidate the shader-cache from prior sessions.
@gasracing500019 сағат бұрын
30:30 Did I hear a Scooby-Dooby-Doo impersonation in there?
@S1lentSt0rm12 сағат бұрын
What is this outside thing again? Can you please provide an in-depth review and test of this outside? Please go into great detail as to how you get there, what the benefits and downsides are and what you can do there. What kind of hardware is required? Is it worth it? How much does it cost?
@BlueChrome12 сағат бұрын
Another motherboard-related question that puzzles me: if I plug a PCIe Gen 5 graphics card into a bare-bones budget PCIe Gen 4 AM5 motherboard, what, if anything, actually prevents the CPU and graphics card from communicating at PCIe Gen 5 speeds? As far as I can tell the answer is: nothing in hardware on the actual motherboard. Am I wrong?
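[Editor's note] There is something on the board: the slot's traces and any redrivers must be validated for Gen 5 signalling, and budget boards typically cap the slot's advertised generation in firmware. The link then trains to the lowest generation common to CPU, slot, and card. A toy model (bandwidth figures are nominal x16 numbers after encoding overhead):

```python
# Nominal PCIe x16 bandwidth per generation, GB/s (after encoding overhead).
PCIE_X16_GBPS = {3: 15.75, 4: 31.5, 5: 63.0}

def train_link(cpu_gen: int, slot_gen: int, gpu_gen: int):
    """Link training: the lowest common generation wins."""
    gen = min(cpu_gen, slot_gen, gpu_gen)
    return gen, PCIE_X16_GBPS[gen]

# Gen 5 CPU + Gen 5 GPU in a Gen 4-capped slot still trains at Gen 4:
print(train_link(5, 4, 5))  # (4, 31.5)
```

So a Gen 5 card in a Gen 4 board runs at Gen 4 speeds; in practice that costs little for current GPUs, since they rarely saturate a Gen 4 x16 link.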
@fredsorre660523 сағат бұрын
I honestly think that AMD's next goal once they are able to finally nail that Multi GPU Chiplet design is to also add in a 3D V-cache not sure how quickly they will be able to implement it but that makes their design a lot more exciting.
@EdDale4413519 сағат бұрын
Would additional cache help GPU performance? Or are you suggesting they should have some additional feature that improves performance in gaming that does not help AI, so they have two product lines?
@levygaming313317 сағат бұрын
@@EdDale44135 Additional cache helps GPU performance because it acts as a kind of "force multiplier" for VRAM bandwidth: a bigger cache increases _effective_ bandwidth, assuming you can cache the data for your workloads. It basically works the same way as in CPUs. This is why RDNA 3 and RTX 4000 both increased cache sizes. The only thing I'd caution is that you can't directly compare the effective bandwidth of a GPU with cache to one without it, like Nvidia did with the 4060. It's also why Intel server CPUs have had more cache per core than their client CPUs forever.
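[Editor's note] A rough way to picture the "force multiplier" claim. The numbers below are purely illustrative, not specs of any real card; the model just blends cache and VRAM bandwidth by hit rate:

```python
# Illustrative model: requests that hit the on-die cache are served at
# cache bandwidth and never touch VRAM, so effective bandwidth scales
# with the hit rate.
def effective_bandwidth(vram_gbps: float, cache_gbps: float, hit_rate: float) -> float:
    return hit_rate * cache_gbps + (1 - hit_rate) * vram_gbps

# e.g. 288 GB/s of VRAM plus a cache serving hits at 1000 GB/s:
print(effective_bandwidth(288, 1000, 0.5))  # 644.0
```

This is also why the caution about marketing applies: a quoted "effective" figure depends entirely on the assumed hit rate, which varies by workload and resolution (hit rates fall at 4K, where working sets outgrow the cache).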
@letitiabeausoleil4025Күн бұрын
Hola! I'll grab a new Battlemage for sure... I have a small PC with a Pentium G7400 and an Arc A380 which I bring out just for ProctorU exams. It turns out that the 2 cores are only just cutting it running Win11 and the monstrous Guardian Browser. The A380 does okay, but the fans on the Sparkle board are loud af.
@EbonySaints20 сағат бұрын
The A380 is fine for basic stuff, but I am so sorry that you are trying to run Windows 11 on two cores. I had to do that at work for a few months and I couldn't even begin to tell you how much I hated that.
@letitiabeausoleil402517 сағат бұрын
@ yes. It takes 2-3 minutes to settle the OS each reboot. My MB is a B760 so once I decide which Battlemage I want I will look for a better CPU to pair with it.
@JohnThunderКүн бұрын
Battlemage needs to have at least 16GB of VRAM; that's a good start.
@iansrven3023Күн бұрын
Depends on performance; no point if it's not powerful enough to render modern games at 1440p or 4K, which is what requires that much VRAM.
@Xero_WolfКүн бұрын
For anything above the B580 that should be a given considering the A770 had 16 GB SKU two years ago.
@bmqww223Күн бұрын
It would cost $200-250 and would surely kick Navi 44 in the teeth. AMD is still in its blissful, carefree phase where they think Intel hasn't ironed out its drivers and architecture yet, and they'll ship 8GB of VRAM, which will be DOA; the B580/B570 will eat it up in the lower segment. For midrange I'm not sure, i.e. Navi 48: I heard it's at 4070 Ti Super level whereas G21 is at 4070 Ti level, so the money they'll ask for won't come easy and it will have to compete with a B750 or B770 at that price. And remember, Alchemist already had a good RT base, and this more refined architecture is supposedly almost 40% faster, whereas RDNA 4 is trying out its new RT approach for the first time; I hear it's good, but based on PS5 Pro performance it's not looking healthy. Battlemage may have a fighting chance here... and per the previous discussion on this podcast, I'm sure the B580 might just bring RT to the mainstream.
@beachlife2968Күн бұрын
12gb is enough for that price and performance range
@jdmhbeatsКүн бұрын
@@bmqww223so much conjecture 😂😂
@Deeptesh97Күн бұрын
Intel vs AMD GPUs:
Intel better for: Video Editing, Ray Tracing, Upscaling, AI
AMD better in: Drivers
@blitzwing123 сағат бұрын
Problems there:
1. Only relevant when not talking about gaming, and in this audience gaming is most likely a 90+% use case.
2. True, but of what use? In this segment (the 4060 or A770 class) it's mostly advisable to avoid RT for performance, and the A770 normally can only go up against a 6700 XT in RT, which smokes it in raster.
3. Fair point, XeSS is mostly superior to FSR 2, and although the lower-quality DP4a path can be used on anything, do most games support XeSS on Arc properly?
4. See point 1.
Let's see what Battlemage brings, but right now Alchemist needs to be heavily discounted to be viable, as even RDNA 2 destroys it in raster, which is still the majority of gaming; plus the lower compatibility sucks.
@grospoulpe95121 сағат бұрын
So, on the discussion about RT maxed out: with a 9800X3D at 1080p (or at 1440p/2160p with upscaling, with or without FG), it's going to be GPU-limited. That would mean with the upcoming RTX 5090 we should be near CPU-limited with RT maxed out, this time? That would be a first. But when I see the improvements in UE5 (UE 5.5, I think), it seems the limitation with RT is on the CPU side. Anyway, I hope when you test the upcoming RTX 5090 you'll focus more on RT performance, using the latest games and technology. But I suppose UE 5.5 games won't be available for at least a couple of years (e.g. The Witcher 4), and by then the RTX 6090 will be out... which makes me wonder what the purpose of the RTX 5090 is if current games are CPU-limited...
@samgragas846720 сағат бұрын
The CPU load is the same using any low level RT (Lumen excluded) or path tracing but the 5090 will have a rough time. You should be GPU-limited. Devs could add sliders to control CPU usage with RT but they usually won't.
@EastyyBlogspotКүн бұрын
Thanks for answering my question. My CPU is an 8086K (basically an 8700K) and some shader compilations take ages. And yeah, I thought shader compilation only happened once and was then redone when the game or drivers get updated... but it seems more games do it on every boot of the game.
@xavalintmКүн бұрын
That's a nice CPU tho. Ig its gaming performance is still pretty good.
@SimonVaIe23 сағат бұрын
I'd guess it's a bug/oversight in those games, and there really aren't many games where that happens. Because my understanding is the same as yours, shaders need to be compiled once and are then available for future use. Unless you have a GPU/driver change, then rebuilding the shaders is necessary.
@EastyyBlogspot23 сағат бұрын
@xavalintm I have noticed tons of frame drops and stutters in many modern games, though it could be the games themselves, as they do ease off after a while. Still, I do need an upgrade.
@gamingfromjohnwayne12 сағат бұрын
So, like SSDs: if you don't have an updated motherboard you can't use the newest speed. So with a Gen 4 motherboard, will you not get 100% performance from Gen 5 GPUs?
@eliotcole15 сағат бұрын
Did you know that their 'pro' cards had Dolby Vision licensing? So, theoretically, they were the only officially consumer-available equipment to have 'proper' Dolby Vision (the Windows app does NOT count), to my mind.
@ironsteal20 сағат бұрын
26:31 exactly what is going on @developers
@xyr3s23 сағат бұрын
I've been an Intel CPU user with AMD and Nvidia GPUs. Now I'm on an AMD CPU; will I be an Intel GPU user next?
@michaels991720 сағат бұрын
At the absolute minimum, intel needs to offer rtx 4070 super performance at a $350 price point. If they can't deliver that, they are doomed in the gpu space.
@m8x4259 сағат бұрын
Hey, the number 580 has always been a good numbering for Graphics cards. The GTX 580 was a decent redemption of the GTX 480, and the RX 580 was a staple of the Graphics card world for a long time.
@Zastrutzki3 сағат бұрын
I'm pretty sure the games that do shader comp at every startup (think Dead Island 2, Lords of the Fallen) actually don't do anything on the 2nd (and each subsequent) pass until you switch out your drivers. The devs were just too lazy to not show the check; it's not compiling anything.
@TheEgzi22 сағат бұрын
Got a i5 10600k Should I get a 9800x3d for 4k gaming?
@alarik9511 сағат бұрын
But will intel release consistent drivers, or will they leave battlemage owners in the dust?
@jonathanhayhurst392815 сағат бұрын
I would want the B580 to offer 4060 performance, 12gb of VRAM, and a $250 price.
@samgragas846715 сағат бұрын
That would literally be a downgrade. Such a card would need to sell for $200.
@Cwayne198912 сағат бұрын
The 5700XT is still an amazing card to this day. I love mine.
@tadejdanev50303 сағат бұрын
Will they bring something at the 4070/7800 XT level with low power usage?
@nimbulan20208 сағат бұрын
The RDNA4 situation, according to rumors (and these predate the official statement by many months) is that AMD was planning on introducing a complex new multi-chip design for high end RDNA4. They suffered delays through the design/production process that put it so far behind schedule it would only be ready maybe 6 months before the planned RDNA5 launch, so AMD just decided to scrap it. RDNA5 (now UDNA) will of course be using the new design and patents for it have been found since the rumors made their initial rounds. In any case I do find these rumors highly plausible due to the patent and since RDNA1 suffered similar setbacks - in that case due to poor architecture scaling. RDNA1 was originally supposed to be as fast as RDNA2 ended up being but they had to cancel the larger die(s) and heavily overclocked what they had to produce the 5700 XT.
@ericthedesigner16 сағат бұрын
Spider-Man was patched by the devs for high-core-count CPUs, mostly 16-core CPUs.
@bulutcagdas107123 сағат бұрын
If they had priced the new Intel CPUs 50 to 70 dollars less, they would probably have been better received.
@kerotomas116 сағат бұрын
By the time the Raptor Lake fiasco happened Arrow Lake was already in mass production. So no, crappy performance is not because of shifting focus to fix Raptor Lake.
@Kiyuja12 сағат бұрын
[Regarding Shaders] Consoles don't have shader compilation because they use fixed hardware: you can bundle the proper shaders with your game and the chip just executes them. PC is flexible and thus needs to generate hardware-specific code, and the shader cache needs to be regenerated after driver, hardware, Windows, or game updates. (This also means a better CPU doesn't really fix the issue; you'd need at least 5x the CPU to have both the compilation AND the game logic done before the next frame, and this obviously gets worse at higher FPS.)
Stutters: many recent games stutter for one of two reasons, either poor asset streaming or what is often referred to as "shader compilation stutter". That name isn't entirely accurate, as it's not only the shaders but an entire pipeline state that needs to be generated. This is fairly costly and can easily add up to the length of an entire frame or more, as it's a multi-stage pipeline in Vulkan or DX12.
Why only modern games?: modern games usually use either DirectX 12 or Vulkan as the graphics API. They have individual strengths (Vulkan supports platforms other than Windows, and DX12 has better high-end feature implementations: HDR, RT, etc.), but both are low-level APIs, which means they work close to the bare metal and are therefore more performant. This comes at the "cost" of developers now needing to take care of things themselves, such as shader pipelines or setting "barriers" when using multiple CPU cores to feed the API's command buffer. The truth is many developers either aren't good at their job or have insanely tight deadlines and can't allocate enough time to properly implement PC-only code in their ports. Since consumers seem to buy the games anyway, I don't see this changing anytime soon. In conclusion, it's a mix of skill issues and a market that doesn't complain enough.
@NickChapmanThe12 сағат бұрын
I like Tim's shirt. Is there a story? Official merch.?
@javiTests23 сағат бұрын
It is just weird having Tim on the right side 😂
@Tom_Quixote22 сағат бұрын
Tim is AI generated anyways.
@gnikem21 сағат бұрын
"Issue with the product is that the product is actually bad" - proceeds to show an Arrow Lake CPU, lmao, peak.