Why AMD's First Hybrid CPU is Different

157,017 views

High Yield

1 day ago

AMD's first-ever hybrid CPU is here: Phoenix 2 combines Zen 4 & Zen 4c cores into a single chip. But this asymmetrical design does everything differently from the competition (chip deep-dive).
Become a supporter on Patreon: www.patreon.com/user?u=46978634
Follow me on Twitter / X: / highyieldyt
SemiAnalysis Zen 4c article: www.semianalysis.com/p/zen-4c...
0:00 Intro
1:03 The Silicon Layer / die-shot analysis
1:52 RDNA3 iGPU
2:25 The Hybrid CPU / Zen 4 vs Zen 4c
4:35 Explaining the size difference / Cell density
5:25 Design Process
7:23 AMD vs Intel vs ARM

Comments: 538
@MikeGaruccio 8 months ago
The idea of more cores at lower clock speed is extremely compelling as a cloud provider. It's not uncommon for us to have >50% CPU utilization at a host level, but scheduling overhead and other issues make higher oversubscription impractical, so more cores would allow us to get even denser and offer a lower-cost option for the vast majority of workloads that don't really need high clock speeds anyway.
@shepardpolska 8 months ago
Yeah the main reason for AMD developing Zen4c is to sell them in EPYC chips.
@MikeGaruccio 8 months ago
@@shepardpolska Yeah, it's a great fit for those use cases. E-cores are great for laptops, but the lack of SIMD and other features makes them a non-starter for general-purpose virtualization, whereas cores that work just like the high-performance ones, only a little slower, are easy to integrate into a tiered offering.
@HamguyBacon 8 months ago
In 2016 some researchers built a 1,000-core CPU that ran at 1.78 GHz and could be powered by an AA battery, with each core able to shut down individually when not in use. If they did this in 2016, imagine what level of technology we have now that they are hiding from us.
@ChrisStoneinator 8 months ago
@@HamguyBacon "hiding from us" lmao. Also, source?
@Therealpro2 8 months ago
@@ChrisStoneinator It's called "KiloCore". A dead project by now, but it was hyped up a few years ago. Used the PowerPC ISA. Couldn't find any actual performance numbers. Overhyped, with almost no practical use in general computing because of how the chip is built.
@notthedroidsyourelookingfo4026 7 months ago
To me, this shows how much room for change there still is in CPU design. We've been stuck with monolithic quad cores for so long, and suddenly, we got big.LITTLE, chiplets, 3D-stacked cache, and 128-core CPUs. I'm loving it :)
@impuls60 3 months ago
For gaming this sucks. I want 16 monolithic cores. Intel's hybrid CPUs have given me nothing but problems, and Microsoft's Thread Director handling is still terrible in 2024.
@justacollegestudent5147 3 months ago
@@impuls60 Well, the fastest gaming CPU is, and will be for a while, an 8-core AMD with 3D V-Cache. If you want 16 cores, get the 7950X3D or similar. This design is best for laptops and cloud computing, especially enterprise. More cores in less space is great for enterprise.
@curvingfyre6810 8 months ago
Oh, I see, that's honestly pretty clever. I imagine using the same base design makes programming much easier than having two entirely separate designs for high-performance and high-efficiency cores. Really nails home how CPUs are integrated CIRCUITS, and not magic crystals. The same circuit can be laid out multiple ways for different purposes.
@RobBCactive 8 months ago
It's much easier for the OS: instead of needing a special CPU driver, with applications not knowing what features the chips support (as threads can be moved between P and E cores), it can simply favour the fastest cores, then the slower ones, and finally SMT threads when utilisation is high.
@spankeyfish 8 months ago
@@RobBCactive Another likely advantage is in development cost for AMD. I presume that they do all the architectural development in the P core design (since it's easier as it's broken down into more separate blocks) then, when the architecture is mature, redesign the layout to compact it into a smaller area.
@RobBCactive 8 months ago
@@spankeyfish Good point! Once you have developed the RTL, altering the process rules for lower frequency, using compact cells and loosening the placement requirements is mainly a matter of running software tools and design rule checkers to validate a different implementation. It could even involve many of the same engineers who created the faster, modular initial design that passed validation and who understand the design in depth. So that has to be simpler and cheaper than two architectures.
@choilive 8 months ago
modern cpus are pretty close to magic crystals 😅
@zelkuta 8 months ago
No, you're wrong, it's definitely magic crystals. With a lightning and logic enchant.
@RobBCactive 8 months ago
It's amusing to think back to Dr Cutress's 2020 interview with Michael Clark (Zen lead architect), who grinned a lot when talking about exciting challenges while being asked questions designed to get him to give stuff away.
@ItsAkile 8 months ago
right?
@HighYield 8 months ago
Good catch. I wonder how long AMD worked on Zen 4c and when the decision was made to go that route...
@RobBCactive 8 months ago
@@HighYield Well, the interview made clear that they start design at least 5 years ahead; it's why they were stuck with Bulldozer for so long. But Zen 4c, as a tweak for area, might have been in response to cloud demands and taken less time. At the time of the interview MC said Zen cores were smaller and scaled better, so they believed there was no need for a different core. I think he would have liked to share the idea but was obviously bound by confidentiality.
@kkpdk 8 months ago
AMD is playing on 'easy mode' in regards to scheduling: with cores where the only difference is max clock speed, all you have to do is prioritize the high-clocking cores, and you're done. If the c cores are more efficient, you can prioritize the c cores when all cores are clocked slower than c_max to save power. Intel has it much harder - they have to have instrumentation in the core to sense the instruction mix, in order for the scheduler to know how threads would have performed if run on another core. Past tense, because they now have to hope that the mix stays the same for long enough.
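A minimal sketch of that "easy mode" policy, with hypothetical core counts and clock speeds (this is not AMD's or Windows' actual scheduler code): rank idle cores by maximum clock and hand new threads to the fastest one first.

```python
from dataclasses import dataclass

@dataclass
class Core:
    cid: int
    max_clock_ghz: float  # e.g. ~5 GHz for a full Zen 4 core, ~3.5 GHz for a Zen 4c core
    busy: bool = False

def pick_core(cores):
    """Prefer the fastest idle core; with identical ISA and IPC across core types,
    max clock is the only ranking the scheduler really needs."""
    idle = [c for c in cores if not c.busy]
    if not idle:
        return None  # everything busy: a real OS would queue or fall back to SMT siblings
    return max(idle, key=lambda c: c.max_clock_ghz)

# hypothetical Phoenix 2-like layout: 2 big cores + 4 dense cores
cores = [Core(0, 5.0), Core(1, 5.0)] + [Core(i, 3.5) for i in range(2, 6)]
print(pick_core(cores).cid)  # -> one of the 5.0 GHz cores
```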
@RobBCactive 8 months ago
But the OS doesn't know in advance what the performance demand of a thread will be either, so it picks the core with the most capacity. I could imagine a power-saving mode where only Zen 4c cores are enabled to save battery, so no boosting is allowed.
@maynardburger 8 months ago
Why would it be any different? If it's as easy as identifying which core can clock more, then Intel's situation is the exact same.
@RobBCactive 8 months ago
@@maynardburger Incorrect: the E-cores can perform badly given certain instructions. Find the Chips and Cheese investigation into E-core inefficiencies; sometimes P-cores only are faster than P+E in multi-threading.
@viljosavolainen2286 8 months ago
@@RobBCactive I don't think they have a big advantage when the CPU is mostly idling. They just can add CPU multi-thread performance very energy- and cost-efficiently, so the main threads can perform better.
@entity-bl5og 8 months ago
Cache is also smaller in Zen(number)c cores, which might have a slight impact on IPC, since the core has less data immediately accessible at any given point, so it has to reach out to L3 and RAM more often.
@nedjimb0 8 months ago
I wasn't very interested in Phoenix 2 before watching this video, but your detailed breakdown really makes it fascinating.
@Speak_Out_and_Remove_All_Doubt 8 months ago
Current rumours are that Zen 6c will go up to 32 cores on a single chiplet (CCD); that would be some serious multi-threading performance in such a small area.
@hinter9907 8 months ago
Likely that's all their C cores, but given it's going to be on either 3 or 2 nm, that's likely going to be the only core type by then, since there isn't much point shrinking cache below 5/4 nm as it doesn't gain any performance from it.
@SaturnusDK 8 months ago
@@hinter9907 But AMD already has the solution for that with stacked V-Cache. In future core designs they could completely offload the L3 cache to a separate die (or dies), because, as they have already proven with V-Cache, you can shrink cache considerably if that is the only structure on the chiplet.
@Speak_Out_and_Remove_All_Doubt 8 months ago
@@hinter9907 Yeah, the 32-core version of Zen 6c will be the Bergamo successor. If they stick to 12 CCDs then that's 384 cores on a single CPU! It's rumoured to be 2 nm, but I personally don't think TSMC will have this node ready for AMD early enough, or Zen 6c will be late, not sure which. Yeah, I would leave cache on 6nm pretty much indefinitely now; it's a much cheaper node plus has loads of capacity.
@tuckerhiggins4336 8 months ago
I doubt it would have enough bandwidth
@Speak_Out_and_Remove_All_Doubt 8 months ago
@@tuckerhiggins4336 It would be at least 12 channels of DDR7, could go up to 16 channels or they might use HBM4.
@AaronEscobedo 7 months ago
That really feels like a business-savvy move - engineer the core once, and get as much use outta it as you can
@powerpower-rg7bk 8 months ago
AMD's approach also supports another dimension of differentiation by using chiplets: the manufacturing nodes and design rules can be radically different depending upon the goals. That is where rumors of Zen 5 and Zen 5c are going, in that they are using two different manufacturing nodes for further optimizations in clock speed, area and power consumption. Phoenix 2 is unique in that both Zen 4 and 4c are together on the same monolithic die (which makes sense given its target mobile market). Desktop hybrid chips would be far easier for AMD to implement if they felt the need to bring them to market: just one Zen 5 die for high clocks/V-Cache support and a Zen 5c die for an increase in core count. AMD has some tremendous design flexibility here for end products.

The software side cannot be stressed enough: the changes between Zen 4/Zen 4c and Zen 5/Zen 5c are nonexistent in user code space. (The supervisor instructions are mostly the same, but Zen 4c has a few changes to account for the split CCX on a CCD and higher core counts in general. Clock speed/turbo tables are of course different to report to the OS. All the stuff you'd expect.) Intel's split architecture requires some OS scheduler changes to function properly. Lots of odd bugs have appeared due to the nature of heterogeneous compute: legacy code always expected the same type of processor cores when it first adopted a multithreaded paradigm. These are the difficult type of bugs to replicate, as they involve code running in particular ways across the heterogeneous architecture that require a specific setup to encounter. For example, splitting up work units evenly on the software side, expecting them all to complete at roughly the same time, presuming all the cores were the same. Or a workload calibrates its capabilities based on first running on a P core, then gets context-switched onto a much slower E core that can't keep up with the initial expectations. Intel is large enough that they got the necessary scheduler changes implemented into OS kernels to get things mostly working, but there are plenty of edge cases.
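A minimal sketch of the first failure mode described above, under purely hypothetical assumptions: work is split into equal chunks because every core is presumed identical, so on a heterogeneous CPU the chunks that land on slower cores become stragglers and the final join waits on them.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # stand-in for real work; if the OS lands this thread on a slower core,
    # the whole chunk takes proportionally longer
    return sum(x * x for x in chunk)

def naive_parallel_sum(data, workers=8):
    # equal-sized chunks: the hidden assumption that all cores are identical
    size = len(data) // workers
    chunks = [data[i * size:(i + 1) * size] for i in range(workers)]
    chunks[-1].extend(data[workers * size:])  # remainder
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # total runtime is set by the slowest chunk (the straggler)
        return sum(pool.map(process_chunk, chunks))

print(naive_parallel_sum(list(range(100_000))))
```

The usual fix is dynamic load balancing: split into many small chunks and let whichever worker finishes first pull the next one, so faster cores naturally take on more of the work.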
@stephanweinberger 7 months ago
Using the same architecture also makes software (particularly the scheduler) much simpler, which in turn also gives a performance benefit. On an E-core/P-core design, as soon as a process uses e.g. AVX instructions it has to be moved to a P-core (which takes time, messes up the cache, etc.), and the scheduler then has to either pin it there (which might not be efficient) or keep track of how often this happens (in order to decide if moving back to an E-core would be beneficial, which causes additional complexity and uses extra CPU cycles). Most of that simply disappears if all cores have the same capabilities (and the only difference is how high they can boost their clock).
@MenkoDany 7 months ago
That's a much smarter trade-off than I originally thought! Very well done, AMD eng team
@MenkoDany 7 months ago
For the record, from my initial skimming of the news, I had assumed more or less that Zen 5c cores are literally just Zen 5 cores with part of the cache cut off with a handsaw. I knew that the benefit came from lower power consumption, but I had no idea there were actually smart engineering reasons for doing it this way
@flink1231 8 months ago
Now that you mention it, I guess Zen c cores make sense for next-gen consoles. Clocks are always much lower anyway, and the area efficiency would be helpful for cost. Actually, it's possible the console Zen 2 cores are already optimized for area. Never thought about it. Cool!
@SquintyGears 8 months ago
Well, it could go in a console, but they're not designed like this at all. Usually the slim model gets just a die shrink, or they integrate multiple chips into one. But in 2023 they're actually saving more money by staying on the same process node instead of trying to buy some of the 3nm wafers that Apple wants to put in the new iPhone. And saving 35% on just the cores, with no other saving anywhere else in the chip, isn't going to make the console cheaper. It would just be a 10% smaller chip in the best case (they're selling at a loss, so maybe they are willing to do a silent version update to lose $10 less). Though, to your last point, the Zen 2 they're using isn't anything like a Zen 2c. It would have been so valuable for servers that you can guarantee AMD would have made an EPYC CPU with it 3 years ago already...
@rudrasingh6354 7 months ago
Nah, next-gen consoles are much more likely to use a single-CCD X3D/V-Cache CPU. You get slight power efficiency and better gaming performance. This would also mean games will start to focus on the cache a lot more and trickle down to desktop as well, and that could be the next paradigm.
@flink1231 7 months ago
@@rudrasingh6354 I sure hope so, it would be incredible. However, 3D cache is super expensive and consoles have tried to stay monolithic thus far (maybe next gen this will need to change? I sure hope so, so they can include more silicon without charging $700 for a console. Also, they could afford to maybe make part of it on a newer node. Let's hope!!). I'm also not sure it would be an issue to use Zen c cores with 3D cache if they find a cost-conscious way to do it.
@pweddy1 8 months ago
Actually, the RTL is part of the design tools' intermediate format. It's a step in the synthesis process; the place-and-route step will apply the "design rules" for the fab process.
@HighYield 8 months ago
So the RTL is adapted to the node-specific design rules first?
@vladislavknyazkov1962 8 months ago
@@HighYield RTL can indeed sometimes be customized for a specific node. There are many iterations of synthesis, placement and RTL updates to achieve the target power/area/speed requirements.
@benjib2691 8 months ago
@@HighYield Junior ASIC design engineer here. The RTL design is basically the functional description of the chip written in an HDL (Verilog, VHDL or SystemVerilog). The synthesis tool takes as input the RTL design and the node/process-specific library, i.e. how exactly a logical function in the RTL is translated to a set of logic gates, and the physical characteristics of this set of gates (length/width, electrical resistance, etc...) in the chosen fab process. The output produced is something called a netlist, which is basically the list of all the logic gates and how they are connected to one another in order to have the same logical behavior as the RTL. Then the netlist is placed and routed in the physical space to create the final die layout and the shadow masks used by the fab.
@Darkknight512 8 months ago
@@HighYield Hey, FPGA developer here. RTL is the first step of design entry after the ISA is defined and high-level block designs and requirements are agreed upon. RTL is the design written in a Hardware Description Language, HDL (VHDL, Verilog or SystemVerilog, or a combination of all three). The HDL is usually behavioral, written in a way that looks like software but with some differences (mostly that the described behavior is highly parallel). After the RTL is written in an HDL, it can be simulated at the behavioral level for a large portion of the verification steps.

In an ASIC design flow, RTL goes into synthesis to produce a synthesized netlist. Synopsys Design Compiler is one of the tools available for this. The synthesized design can optionally be simulated at this time. Once you have a synthesized netlist of "generic" logic gates and flip-flops, then comes technology mapping; this is where the PDK starts being used. The "generic" logic that has been synthesized is mapped to pre-designed physical lego blocks that can go onto the ASIC. Once technology mapping is complete, along with constraints (IO, clock, timing, etc.), the chip can be floor-planned before the tools are given the opportunity to do place and route within the floor-planning rules. You can think of it as if a designer wrote down where the walls in a house will go, but the automated tools will place all your furniture according to rules like "I want my TV within 2 meters of my couch, and it must be facing it".

You touch on this accurately in the video: a highly floor-planned chip can sometimes lead to a less optimal design because you gave the automated tools less freedom, however it can sometimes give a more optimal design because you restrict the scope of what the automated algorithms are trying to do. Note that a segmented and floor-planned design not only creates placement and routing constraints, it usually means the synthesis tool, knowing that later stages have more constraints, might also be forced to perform fewer optimizations, making for a larger synthesized netlist.

After placement and routing, there will often be some timing violations; some of these can be fixed manually, or the design can be iterated on and the tools given another try on a design with some changes. The placed and routed design can also optionally be simulated, but it will be much more time consuming when compared to an RTL behavioral simulation. In all the steps, emulation can be substituted for simulation; the cost of the emulation hardware is higher and the setup for deploying a design onto emulation hardware is higher, but the execution speed of the emulated design is higher than simulation.
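To make the area-vs-clock trade-off concrete, here is a toy model (not a real EDA flow, and every number is invented): the same fixed "netlist" is costed against two hypothetical standard-cell libraries, one tuned for speed and one for density, which is essentially the Zen 4 vs Zen 4c situation the video describes.

```python
# cell name -> (area in um^2, delay in ps) for two hypothetical libraries
HIGH_PERF_LIB = {"NAND2": (0.20, 12), "DFF": (0.90, 35)}
HIGH_DENSITY_LIB = {"NAND2": (0.12, 18), "DFF": (0.55, 50)}

# a made-up netlist: instance counts plus the logic depth of the critical path
NETLIST = {"NAND2": 1_000_000, "DFF": 120_000}
CRITICAL_PATH_DEPTH = 18  # NAND2 stages between flip-flops

def implement(netlist, lib, depth):
    """Crude area/timing estimate: sum cell areas; clock period = register delay
    plus the critical path's accumulated gate delay."""
    area_um2 = sum(count * lib[cell][0] for cell, count in netlist.items())
    period_ps = lib["DFF"][1] + depth * lib["NAND2"][1]
    return area_um2, 1000.0 / period_ps  # (area, fmax in GHz)

for name, lib in [("high-performance", HIGH_PERF_LIB), ("high-density", HIGH_DENSITY_LIB)]:
    area, fmax = implement(NETLIST, lib, CRITICAL_PATH_DEPTH)
    print(f"{name:16s}  area = {area / 1e6:.2f} mm^2  fmax = {fmax:.1f} GHz")
```

With these invented numbers the dense mapping comes out roughly 40% smaller and roughly a third lower in fmax, which mirrors the kind of trade AMD is making, even though the real flow is vastly more involved.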
@WetPig 8 months ago
Fuck, you made me remember my semiconductor design courses... goddamn Cadence tools... absolutely sucked... made me lose my interest in the subject area, knowing that I would have to work with them.
@defeqel6537 8 months ago
While the C-cores make sense for mobile, I think the gaming APUs are more limited by the area and power draw of the GPU (and possibly memory bandwidth). AMD needs to find similar optimizations for the GPU. Edit: as a separate note, you could mention in the video how modern operating system schedulers already focus on the best-rated cores for foreground applications, and indeed multicore workloads push the clocks of all cores down, especially on mobile.
@yusuffirdaus3900 8 months ago
Yes, but area efficiency is the concern: Zen 4 and Zen 4c are both very efficient at low power, but classic Zen 4 takes up so much space.
@xeridea 7 months ago
@@yusuffirdaus3900 You should look at Intel cores; they are like twice the size of Zen cores, at roughly the same, or marginally higher, IPC. Regular Zen cores are already comparatively efficient for the space. So Zen 4c cores aren't that much bigger than an E-core, but obviously much faster.
@shuginubi 8 months ago
Something with 6 Zen 5c cores and 12 RDNA 3 CUs would be legendary for handhelds
@worthlessguy7477 8 months ago
Qualcomm is catching up. Just like how Linux gaming is getting better after the Steam Deck. I hope they bring out a smaller handheld with Qualcomm or MediaTek.
@taylon5200 8 months ago
@worthlessguy7477 Problem is, ARM doesn't support DX12. So all you're gonna get is phone games on that handheld, even if it existed. We want a handheld that plays real games, not Android apps.
@YounesLayachi 8 months ago
If Valve ever decides to make a new handheld PC it's 100% going to happen, but with RDNA 5 instead
@smith7602 7 months ago
I've been thinking about Intel's big.LITTLE implementation a lot lately and have finally just come around to it. I guess wider programs won't necessarily need more fast cores if they can already get a lot of smaller ones. Then AMD goes and Feng Shui's their regular cores to get the same effect while dodging scheduler worries, and I'm blown away all over again.
@loucipher7782 2 months ago
lol all AMD did was cripple a few of their existing cores, while Intel actually gives you extra E-cores on top of the existing P-cores
@smith7602 2 months ago
@@loucipher7782 lol at upboating yourself ebin redditor *tips hat*
@clutteredchicagogarage2720 7 months ago
Great video. This strategy seems to make sense for servers. For my laptop, I would personally love to have a hybrid processor design with lower power cores for simple workloads, good support in an intelligent cpu scheduler, and a couple fast cores for occasional cpu-intensive workloads. I haven’t followed cpu design much in the past few years, and this is the first time I am learning about hybrid and asymmetric core designs. Interesting stuff. It’s cool to see how cpu designers are innovating as we reach the limits of increasing clock speeds and shrinking transistors.
@pandemik0 8 months ago
AMD is on to something here. It will be very interesting to see how the performance turns out. Area-optimised cores would be potent in handhelds, tablets and laptops - that's a desktop-class core but with better power efficiency at the same IPC, not just lower clocks & voltages.
@lunatik9696 8 months ago
Nice video. Excellent presentation. I recently obtained a MinisForum Phoenix APU UM790 Pro 7940HS mini PC and am impressed with the small size, great capability and low power envelope. Phoenix 2 will have lots of applications, especially for mobile and notebooks. An all-4c chip could be very useful in the data center for CPU performance per BTU. Many mobile chips are the desktop variant running at much lower clocks to extend battery life and/or reduce heat.
@kartikkailas9123 7 months ago
Dense cores are such a brilliant idea. Classic cores are also run at 2.5-3 GHz in servers.
@UnseenMenace 8 months ago
This seems like a much better compromise to me! Will be interesting to see how it performs :)
@crazyelf1 8 months ago
One big bottleneck for the smaller c cores would be the power density. The heat per mm^2 is a huge bottleneck, so clock speeds are also reduced in that regard.
@RobBCactive 1 month ago
But they don't boost as high, so they require a lower voltage, which enters the power equation squared while frequency is only linear; boosting deliberately pushes the core into an inefficient part of the performance/power curve. At a lower TDP per core the c cores become more efficient, while not hitting the higher frequency that standard Zen hits at higher TDP. In a likely 15 W U-series mobile laptop the power density is way lower than desktop, where a single core can easily exceed 15 W. Thermals constrain boosting, as 10 °C costs about 100 MHz of max frequency, so on modern chips it's not really a huge bottleneck in general; the Bergamo servers with a large number of CCDs have a large socket. It's more the overall TDP that makes power-efficient frequency a priority, as high utilisation is the normal case.
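The "voltage squares, frequency is linear" point is the standard first-order dynamic power relation; a rough sketch of it, with alpha the activity factor and C_eff the switched capacitance:

```latex
P_{\text{dyn}} \approx \alpha \, C_{\text{eff}} \, V^{2} f
% Near the top of the V/f curve a higher f also demands a higher V, so power in the
% boost range grows much faster than linearly in f (roughly proportional to f^3),
% which is why capping clocks at the efficient point saves so much per core.
```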
@crazyelf1 1 month ago
@@RobBCactive There's an optimal point of performance per watt for any chip design. Heat per mm² is still going to be a limit even when running chips at that optimal point.
@RobBCactive 1 month ago
@@crazyelf1 You clearly don't understand this as well as you think. A large socket spreads the TDP over a large area, so the W/mm² can be easy to cool.
@roqeyt3566 8 months ago
My immediate first thought was of the Steam Deck 2 as well. A single or maybe two big cores for those primarily single-threaded games, with 4 smaller cores for heavily threaded games. I've found that a similar layout works well for desktop use on Intel's designs, with 4-6 cores that you can schedule for gaming being optimal for an enjoyable experience. I do believe that the 2 GPU workgroups in Phoenix 2 are anemic for the Deck, as graphics performance would require an upgrade, but that's something they could adjust in a custom design for Valve.

Back to the CPU architecture though. I absolutely love having more cores than I would think I need. My main gripe with Intel's design is the limited use cases I'd have due to how they're scheduled. A "space optimized" core would suit my wishes perfectly, and the full-fat cores being there for when you need them is the icing on the cake.
@HighYield 8 months ago
The Deck 2 def needs more WGPs, fully agree.
@saricubra2867 7 months ago
The problem is the core configuration. My i7-12700K is a mistake by Intel, so they won't make a similar chip for a very long time. Good for me, because the performance per watt is hilarious - it's way too high. Watching 4K YouTube videos while the CPU stays at 8 watts and 28 degrees in summer with air cooling is really something else...
@MonochromeWench 7 months ago
Zen APUs for high-performance handhelds seem to be a market AMD has all to themselves, and leveraging their existing work seems like what they'd do. 5c probably won't be a big change from 4c beyond what they can do to improve EPYC and Threadripper. Lower power and heat is always good though. I would expect lots of c cores to be used in next-gen console APUs, and by that time we might be up to 6c.
@annebokma4637 8 months ago
Hoping to see an 8x Zen 5 X3D + 16x Zen 5c CPU in the first half of next year. Gonna be building with that chip if it's released.
@andersjjensen 8 months ago
Yeah, that's a mad concept for programmers and content creators who also wanna game.
@SquintyGears 8 months ago
They won't do that... It sounds amazing on paper, but how many people would actually buy that? Also, it would perform poorly. There's a memory bandwidth requirement to feed the cores; it would probably require the Threadripper socket. There's a lot of discussion of whether it's coming to the chiplet desktop CPUs at all in a hybrid package with a regular chiplet and a c chiplet... And it seems like it won't at all... That's why we've only seen them in EPYC, because they make so much sense for servers. And in laptops, where size and power efficiency are king.
@andersjjensen 8 months ago
@@SquintyGears Intel is pushing e-core counts higher already with 14th gen and they're already at 8+16 now, so AMD is going to have to do something if they want to stay in the upper end of multi threaded performance on desktop. I have a 5950X myself (software developer) and in my workloads it makes 1-2% difference whether I run the memory at 3200 or 3600. For really "crunch heavy" stuff like all AVX workloads it makes even less. It's practically only branch heavy code, and code that does very little work to very large amounts of data (like database lookups) where memory bandwidth is a problem. Games, however, tend to fall in this category. Threadripper (non-Pro) went up to 64 cores on a quad channel memory setup on DDR4 3200. I can't see any realistic scenario where 8 full and 16 compact cores can't perfectly well run on dual channel DDR5 6000.
@SquintyGears 8 months ago
@@andersjjensen We haven't seen the numbers for those Intel chips yet... Yes, obviously quad channel has been proven in the past. The main difference you're seeing from 3200 to 3600 is the Infinity Fabric clock difference, because latency to memory doesn't scale along with the raw bandwidth difference of that clock bump, unless you took the absurd amount of time it takes to tune the timings for it. I don't know anybody who has; I've tried and given up... Also, importantly, Intel's 8+16 wouldn't take as much bandwidth to feed as a Zen + Zen c setup of the same count, because the slow Intel cores are much slower than their counterparts. I'm not saying I know exactly where the breaking point is where the performance gains fall off a cliff. But I'm pretty sure we're not that far from it. With some quick napkin math - latency that hasn't changed, bus width that hasn't changed, but core counts that have exploded - you can understand why some people wanting to see 48 threads on AM5 seems a little bit far fetched... And that's if they would even make such a configuration. I'll remind you that we've only seen leaks for EPYC chiplet parts and laptop fully integrated parts. Both are integrations where it brings a very obvious advantage. What's the point for desktop? (assuming they do finally release an update for Threadripper)
@annebokma4637 8 months ago
@@SquintyGears You do realize that the memory bandwidth has increased since they put 16 cores on AM4, right? 24 cores need only a third more bandwidth, if you start from the presumption that the first 16-core chips were bandwidth bottlenecked. On server they do 8 cores per memory channel for Zen 4 and 16 for Zen 4c. A 24-core needs 2 channels following server guidelines. AM5 has 2 channels...
@singular9 8 months ago
AMD has the capability with Zen to scale in any direction it wants. That was the whole goal of Zen from the beginning. It needed to be an architecture that, in the long run, can do everything and anything, and use innovative packaging techniques. Zen 4c is great proof of what can be done, and in the future there could easily be 3 or 4 different versions of Zen cores, tuned to what is needed. More cores at lower clocks never made sense, at least to me, because as an average consumer, quick bursts of high single-core usage are what I would require.
@yaoquansong 8 months ago
Great analysis. Definitely made me more interested in Phoenix 2 and the upcoming Zen 5c cores. I do agree that perhaps the "Phoenix Little" nomenclature would have been more accurate rather than Phoenix 2; it's still a pretty fascinating chip.
@dazealex 8 months ago
You really need tight coupling for thread scheduling. On the M1/M2 platform, it works extremely well. The GUI is smooth and only pegs the E cores in regular usage.
@GustavoNoronha 8 months ago
Great stuff! I remember seeing some annotations of Arm designs that implied they also use this method of making the chip less blocky, with several parts of the chip sharing chunks of the chip and making it impossible to clearly indicate where specific things are. Is that the case? Is that mostly a result of using high density cells, or do they also use some kind of automation for laying out the floor plan?
@kazedcat 8 months ago
They use automated layout on both cores. It is just that the big core is optimized for clock speed, so the critical wires must be very short, while the "C" core is optimized for area, so the wires are allowed to be longer, and this frees up the transistor layout to fill the gaps between functional blocks, bringing the logic blocks closer to each other.
@andersjjensen 8 months ago
For an ultra-portable laptop I think Little Phoenix makes perfect sense, provided that the compact cores clock reasonably well (say 60-70% of the full-fat cores) when plugged in. And since the process scheduler in both Linux and Windows is biased towards the highest-clocking cores, single-threaded and lightly threaded apps will always "run right" without the OS and apps needing modification. For a standard desktop, compact cores make little sense, but for "the layman's workstation" I think a 9950X3D with 8 full-fat + V-Cache and 16 compact cores would be very desirable to many. Especially now that AMD has discontinued Threadripper non-Pro.
@sebastiandalton3981 7 months ago
A "Van Gogh 2" with Zen 5c would be kinda amazing.
@Jwalk9000 7 months ago
I really like that they've started building them like this. I have a sneaking suspicion that IPC will increase due to lower latency between the different parts of the core.
@ItsAkile 8 months ago
Wonderful breakdown and discussion. I think about this every time I see Intel's lineup and the whole instruction-set compatibility rework that's underway with AVX10.x. I prefer AMD's design and approach for that reason alone. I respect big.LITTLE when they go hard for it, like Apple's designs, and just breathe great products that perform and last. Zen 4c sounds exciting for all products.
@25myma 8 months ago
Zen 4c actually makes a lot of sense in the mobile and server segments; smaller, more efficient cores at lower clocks are all you need there, so...
@NaumRusomarov 7 months ago
Very neat. I can't wait to see how this works in practice. This is much simpler than trying to implement big.LITTLE for x86.
@PEANUTGALLERY81 7 months ago
Very enlightening video indeed. I'd say what AMD is doing is a halfway-house solution towards a 'little' core, but one that's perfect for the sort of workload that it's targeting. Easy workloads that are highly parallel would work better on those little cores, as they can easily scale by core count. But if the workload has less inherent parallelism, then the Zen 4c would be perfect for it. Not to mention the ease of workload scheduling here: Intel's hybrid design requires dedicated hardware in the Thread Director, while this solution from AMD doesn't.
@mndlessdrwer 8 months ago
Zen 4c looks like the prelude to AMD producing an entirely new range of massively threaded enterprise CPUs. They saw the ARM-based servers and went "We can do that, but with x86 instruction sets."
@C0smicThund3r 8 months ago
I imagine the Zen 4c cores can compute the same instruction sets as Zen 4, unlike Intel E-cores, which are missing some instruction sets the P-cores can execute. AMD, with their smaller silicon footprint along with chiplets, can likely build CPUs relatively cost-effectively now, even with TSMC charging more to produce them. On the flip side, Intel has the advantage of creating their own chips in their own fabs.
@MikeGaruccio 8 months ago
Yea, the identical instruction sets are an interesting choice. It's a huge win for the server market and opens up possibilities for all-efficient-core designs, but may make less sense for some consumer applications (a lot of background processes may not actually need stuff like AVX-512, so that may leave big chunks of the core sitting idle). I'm not sure how much Intel running their own fabs is truly an advantage right now given how much they've struggled to get EUV online and move to their next node. Relying on TSMC may mean AMD is sharing some of their profits, but it also means they're always able to access the latest, fastest process node. Shedding that risk removes a lot of uncertainty.
@MrAssembler 7 months ago
@@MikeGaruccio If you are using a recent C standard library, you may end up executing AVX-??? instructions simply by calling the strcmp function.
@awdrifter3394 7 months ago
Gaming needs high-performance cores though. We can see that the Z1 Extreme can clock up to 5.1 GHz. I think AMD will need at least 2 Zen 5 cores, then maybe 6 Zen 5c cores, for the next Steam Deck APU. Or if they want to keep it at lower clocks with all Zen 5c cores, they'll probably have to add X3D cache to it to boost the gaming performance back up to an acceptable level. Since X3D cache will inhibit cooling, that might actually work well to complement the slower-clocking cores.
@Speak_Out_and_Remove_All_Doubt 8 months ago
I very much see the 'dense' versions of Zen being used in games consoles, handhelds, thin & light laptops, NUC-like devices, etc. Basically anything where 'power to performance' is king rather than just outright performance. Personally I don't want hybrid chips coming to desktop though, or if they do I want to have all options, i.e. only full large Zen chips, or Zen + Zen c chips, or only Zen c chips. But I've had and am still having a lot of issues with Intel's big/little designs, and so my next system is AMD because I don't want big/little cores in my desktop.
@shepardpolska 8 months ago
With Zen dense being the same IPC, just lower clocks, it doesn't mess with stuff in the way that Intel's E-cores do. Apps and Windows will also know much better which core is faster, since the Windows scheduler looks at clocks only to determine the fastest core.
@Speak_Out_and_Remove_All_Doubt 8 months ago
@@shepardpolska I just don't know why Windows/Intel is handling things so badly though. Like, if I don't have the app 'on top' (i.e. if I minimise it or have something else open over the top of it), then most of the time things get moved to the E-cores and leave the P-cores empty - but these are full-on, performance-demanding applications that were maxing out the P-cores and then just almost stop if I don't watch them work!!
@YounesLayachi 8 months ago
By definition Windows handles things badly, that's its entire purpose xD But yeah hopefully we'll see some variation assuming they continue on the same path, already today we are seeing more options with each generation
@johntupper1369 8 months ago
So curious as to the efficacy and efficiency of high-density next-gen implementations, such as Zen 5. Hoping that they make high-fidelity, high-resolution capabilities much more accessible globally.
@HighYield 8 months ago
Zen 5c might be even more impressive if they learned something from Zen 4c.
@bgtubber 8 months ago
A Ryzen 8950XE with 8 Zen 5 cores and 16 Zen 5c cores would be rad. Let's say Zen 5 is a 17% IPC increase and a 3% clock speed increase over Zen 4, and a Zen 5c core is only 15% slower than a Zen 5 core. That would make a Ryzen 8950XE ~60% faster in multi-core workloads than the 7950X. Or, measured in Cinebench R23 score, that would be ~61,300 points vs ~38,500 points for the 7950X.
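For reference, here is the arithmetic behind that estimate, using only the comment's own assumed numbers (the 8950XE itself and all of the percentages are hypothetical):

```python
# Napkin math from the comment above; every input is an assumption, not a measurement.
zen4 = 1.00                      # per-core throughput of a Zen 4 core (baseline)
zen5 = zen4 * 1.17 * 1.03        # +17% IPC, +3% clocks  -> ~1.21
zen5c = zen5 * 0.85              # assumed 15% slower than a full Zen 5 core -> ~1.02

ryzen_7950x = 16 * zen4                        # 16 Zen 4 cores
hypothetical_8950xe = 8 * zen5 + 16 * zen5c    # 8 Zen 5 + 16 Zen 5c cores

speedup = hypothetical_8950xe / ryzen_7950x
print(f"~{(speedup - 1) * 100:.0f}% faster")            # ~63%, close to the ~60% estimate
print(f"Cinebench R23 guess: ~{38500 * speedup:,.0f}")  # ~62,600 vs the comment's ~61,300
```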
@RobBCactive 8 months ago
The idea is that the MT cores aren't designed for the boost frequency but can operate at the power-efficient frequency limit. So under heavy load those Zen 4c cores might be no slower, as the system operates at a base frequency. The Zen 4 cores have to dial frequency back to share the power/thermal budget.
@maynardburger 8 months ago
But why is this so important? What is the 'consumer platform' type of user doing where simply having 16 full-speed big cores wouldn't still be more than enough for multithread? I had the same issue with Raptor Lake. Sure, it technically added a fair chunk of multithread performance, but few people buying those are actually taking meaningful advantage of that versus just having 8P + 8E.
@RobBCactive 8 months ago
@@maynardburger Some productivity users use desktop rigs and like Raptor Lake plus the hardware Quick Sync acceleration. Many users think moooaaaaahhhrrrrrrrrrr cores are better, so they pick Intel despite the problems of cooling a lot of watts for little performance gain.
@joemarais7683 8 months ago
Or, iuno, they can just use 16 Zen 5 cores so people aren't getting gimped cores on half of their CPU. The big.LITTLE thing only works if it leads to *more* threads; the same amount with less power on half the cores would be devastating.
@bgtubber 8 months ago
​@@joemarais7683 The Zen 5c cores aren't really gimped. They have multi-threading and support all of the instructions that the full Zen 5 cores support. They are just clocked a bit lower. Besides, you can fit double the amount (16 cores) on one chiplet instead of 8. So I don't see how it's going to be devastating. If you are talking about games, are there any that really need more than 8 very fast cores? Either way, with Ryzen gaming is generally done on one chiplet to avoid the cross chiplet communication latency. So it doesn't really matter if the 2nd chiplet has Zen 5 or Zen 5c cores since you'll always want to keep the game running on the 1st chiplet.
@georgyzhukov4269 8 months ago
Thank you for the great explanation of the truly revolutionary 4c cores!
@star5328 8 months ago
This is definitely the superior way to do a hybrid arch vs the way Intel is doing it. I've shied away from Intel 12th and 13th gen CPUs with big and little cores because I simply don't trust the scheduler, no matter what they say about "Thread Director". As far as gaming goes, the scheduler should have no trouble with AMD's hybrid arch, because the "world thread", so to speak, would always be scheduled on one of your higher-clocked cores, but the lower-clocked cores, due to them having the same IPC, would still be useful for gaming. I think I'll hold on to my 5900X until AMD starts dropping these hybrids on consumer desktop. Also, by the time AMD brings this to consumer desktop, hopefully they've further optimized their Infinity Fabric to work with high-end DDR5 in 1:1 mode to reach monolithic latency levels as Intel does.
@faceplants2 8 months ago
Awesome video man. Cheers, recommended by Moore's Law is Dead
@HighYield 8 months ago
Thank you, I can also recommend Tom!
@kirby0louise 8 months ago
AMD has confirmed that Zen 5 will have "Integrated AI and Machine Learning optimizations", which to me says an expansion of AVX-512 support. But since they are going hybrid, I wonder if the new AVX10 implementation Intel has come up with might be at play. Since AVX10 allows for different register widths instead of forcing 512-bit, it becomes a good fit for little cores without hampering big ones.
@SquintyGears 8 months ago
Actually, AVX10 is presented as a common standard, not limited to Intel. But Intel is rolling it out inconsistently. I don't think it's what they were talking about though. I think they meant an SoC-style approach like Apple does with their Neural Engine: an accelerator block that would be part of the IO die, like the GPU section. AVX-512/AVX10 is meant to accelerate model training and scientific research tasks. It vectorizes 512 bits of data through a single instruction instead of the 64 bits it can do regularly. This speeds up processing a whole matrix when doing the same operation on every cell. But for inference and using prebuilt models day to day, it's not what's most effective.
@RobBCactive 1 month ago
@HighYield A point I have not seen mentioned is using advanced process nodes. A halved L3$ with 3 cores in the area of 2 allows a doubling of cores in almost the same area. For AMD that might make 3nm more economic, gaining efficiency in an even smaller area without the clock speed challenges, while power density is not problematic even all-core. OTOH the higher-performance, less dense 4nm might offer higher max clocks and boost potential on desktop. For cost per core in high-utilisation cloud servers, the effective cost/performance of Zen 5c might be compelling.
@agungdarmayuda5754 8 months ago
A really great explanation of this new AMD approach to CPU design. Thank you!
@shanent5793 8 months ago
The previous Zen designs clock the L3 cache at the highest core clock frequency in the CCX. Has this changed? Are they using Z4 or Z4c cache slices?
@HighYield 8 months ago
Good question, I can't tell you. I think it's the same L3 cache they used in the big Phoenix, but I don't know about the clk speeds.
@Testbug000 8 months ago
A funny note: Apple did the same sort of thing with the RTL for their A10. They diverged into completely different designs afterwards, but it is amusing.
@bulletpunch9317 8 months ago
Android devices' cores, too, went from bigger prime cores that were the same arch as the medium cores to the different X series.
@aviatedviewssound4798 8 months ago
@bulletpunch9317 Those separate ARM cores have fewer instructions than the big core, which means lower performance. But PCs are different because PCs need more performance, so using the same architecture could increase power, but lowering die size to cram more cores into the same package would give us some extra performance at the same power usage as the non-hybrid CPU.
@bulletpunch9317 8 months ago
@@aviatedviewssound4798 I'm talking about the prime and medium cores, not the big and little cores. The prime core on Android flagships is like the standard PC 'big' core. The medium cores are like the 'small' cores. The little cores on Android are too weak for a PC; they're like the old Atoms. Qualcomm's latest laptop SoC only has the prime and medium cores, no little cores. As for Apple, their little cores perform similarly to Android's medium core.
@HighYield 8 months ago
Oh, I didn't know that. Thanks for the info!
@Pillokun 8 months ago
The more intricate the design is when it comes to a big.LITTLE (heterogeneous) design, the slower it will be. A homogeneous design will always be faster because there is no latency/waiting time wasted for the system to decide where to put the workload, and if the workload gets assigned to the lower-performing cores you lose performance.
@kingkrrrraaaaaaaaaaaaaaaaa4527 8 months ago
Wendell from Level1Techs had an interesting idea of having an 8-core design combining both Zen 4c and Zen 4 cores. Because you save some power consumption on the Zen 4c cores, you can budget the rest of the power towards the Zen 4 cores for much higher clock speeds.
@kingkrrrraaaaaaaaaaaaaaaaa4527 8 months ago
Another advantage is that, since the IP is the same between the core types, the only meaningful difference is the clock speeds. So thread scheduling is not going to be too much of a problem, since Windows already assigns most applications via clock speeds anyway.
@besher532 8 months ago
@@kingkrrrraaaaaaaaaaaaaaaaa4527 Hopefully it will work as easily as that, unlike Intel's, which still doesn't work and just assigns random cores when idling. I just checked, and the only cores that are not in use on my laptop are the E-cores; you would think Windows would run on the E-cores by default by now.
@PainterVierax 8 months ago
@@besher532 I heavily suspect this is a pure governor setting issue (performance vs efficiency mode). ARM solved the scheduler part a long time ago on Linux, so Intel+Windows probably solved that as well.
@maxfmfdm 8 months ago
I still love the simplicity of the Intel 9700K: 8 fast cores with no hyperthreading. A gaming value masterpiece we may never see again. With hyperthreading and different-speed cores you can run into situations where the high-priority processing takes a back seat on a slower core or competes with another thread for the same core's resources. It doesn't always happen, but with the 9700K you take no chances and get a discount for doing so.
@gplustree 8 months ago
I love random comments like this from one computing perspective or another. I don't game, but I do a lot of virtualization, and one ancient arch that held its own for many years was Haswell, 10 years old now... just pulled my last one out of active service this week.
@SquintyGears 8 months ago
Unfortunately, everything you described happens on any multicore CPU. You can still have the OS assign threads suboptimally and multiple processes compete for the same resources. Hyper-threading doesn't really make a difference for that, and the new slow/fast cores just add a bit more complexity to the priority of how process thread transfers get handled. The best way to dodge these issues is to manually assign a higher priority to the program you really care about in Task Manager, because then the Windows scheduler will never kick it behind something else.
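For anyone who prefers doing that tweak from code rather than Task Manager, here is a minimal sketch using the third-party psutil package; the executable name and core indices are hypothetical, and raising priority on Windows typically needs admin rights.

```python
import psutil

# Programmatic equivalent of the Task Manager tweak described above.
for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "mygame.exe":        # hypothetical executable name
        proc.nice(psutil.HIGH_PRIORITY_CLASS)    # Windows-specific priority constant
        # Optionally pin it to particular cores (e.g. the highest-clocking ones);
        # which indices those are is machine-specific.
        proc.cpu_affinity([0, 1, 2, 3])
```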
@maxfmfdm 8 months ago
@@SquintyGears Well, what about a game's own threads competing with each other, thus reducing the performance of the thread that creates the main bottleneck? The rendering thread or main thread, for example. This was the kind of case I had in mind when I wrote my comment, and it is evidenced by many hyperthreading on-vs-off benchmarks. In these cases the application priority was identical, and hyperthreading tested on and off showed a difference. I understand in an ideal world the game engine itself would handle this properly with the scheduler, but real-world data shows that it does not always. Setting the application priority addresses only part of the problem.
@SquintyGears 8 months ago
@@maxfmfdm That's just not really how that works... Within a single program like a game, there are tons of conditions for different parts of the logic to operate in parallel or in sequence. But since it all came from the same program, you can generally say they play nice with each other. The problems come when other processes that have to run too get run in between game tasks and end up making the game wait a bit longer than it expected to. BUT this only shows up as a problem when you're running out of computational headroom. If you're not limited, doing anything fancy by changing hidden settings will net you 1% or 2% improvements, and that's often within run-to-run variance of just background tasks randomly starting or not (because Windows does a lot of things)...

It is true that the original Hyper-Threading implementation in the first and second gen Core i series chips showed significant differences in a non-negligible proportion of software. It was both a programmer and an Intel problem. And games were one of the most affected types of software. But that's not true anymore... Programmers don't use the same threading libraries, and the scheduler logic for what gets priority and where it gets assigned has gotten more clever. Same thing with the way HT runs the 2 threads on the same core; the architectures now are much better at occupying the empty cycles.

Bottom line... The 9700K is a beast because it's the first regular desktop 8-core Intel made, so it was head and shoulders above its predecessor. The reason it doesn't have HT was only market segmentation, because they introduced the i9 (which is a marketing construct they did to increase the potential selling price: the i7 is still $300, but you could now start spending $500), because HT generally only brings you 40% extra performance in ideal conditions, but it hasn't been a source of loss in a very long time. People test it again on every new platform.
@maxfmfdm 8 months ago
@@SquintyGears Um, nope... just because they are all part of the same program does NOT mean they play nice together in regard to these issues. I'm sorry, but your first comment was a bit off and this claim is WAY off. I worked as a software engineer for a game developer, and your claims are just outrageous at this point.
@ravenschmidt5273 8 months ago
Should it not be possible to start optimizing the sub-parts of the architecture for vanilla and dense separately in the future? For example, giving the vanilla core more elaborate branch prediction or a bigger uop cache. So it would still be the same architecture but with a tweak or two besides clock speed, cache and density. Might be a cost-efficient way to optimize even better in two directions.
@BenjaminCronce 8 months ago
As soon as the architecture is changed, you're working with different cores. Making a design more compact is vastly different than making wholly different functionality.
@bulletpunch9317 8 months ago
The same cores are still being used as big/medium cores in multiple ARM designs from Qualcomm or MediaTek. Like the SD855/865, which had a high-clocked A76/A77 core and 3 lower-clocked A76/A77 cores. The higher-clocked core is physically larger too.
@rightwingsafetysquad9872 8 months ago
Is a Cortex-X1 the same as a Cortex-A77 with higher clocks?
@bulletpunch9317 8 months ago
@@rightwingsafetysquad9872 No, you can read AnandTech for the details.
@NaumRusomarov 8 months ago
Interesting approach.
@EntekCoffee 8 months ago
The interesting question is why AMD includes regular Zen 4 cores on P2 at all. With server CPUs it is clear that they do not need the regular cores in the first place. I get that it's probably for ST/MT workload optimization, but shouldn't AMD prioritize a smaller area? This also challenges the narrative that what AMD does is "big.LITTLE". They don't have to go together.
@abheetl9743 8 months ago
Regular Zen 4 cores for single-threaded workloads, as they clock higher, probably.
@b127_1 8 months ago
Zen 4c cannot go much beyond 3 GHz. This makes them great as efficient cores, but peak single-threaded speed would be a lot worse without the "big" cores.
@EntekCoffee 8 months ago
@@b127_1 I've heard it can go past 3.2 GHz just fine, though with a higher voltage than Zen 4. I get that you can be up to 25% slower on Zen 4c with standard configs, though I reckon with proper optimizations the gap can be narrowed down to 15%. For mid-range laptops AMD can still get away with not using Zen 4 (or using just one core). After all, you are dealing with a 15 W or lower TDP. It's more flexible than other solutions at the cost of less area savings. So I still don't think that calling this approach "big.LITTLE" is the right one.
@cat81702 8 months ago
@@b127_1 Zen 4 cores are already super efficient. I turned off boosting, limited the max to 2.7 GHz, and my 6850U is absolutely nailing every task I throw at it, with about 7-8 h of normal usage (over 12 h in "just reading and writing text" mode), all at 40 °C with no fan, unless I'm playing a video or using an external monitor. So I personally wouldn't mind all cores being 4c.
@WhenDoesTheVideoActuallyStart 8 months ago
@@EntekCoffee They should include at least one P core in anything but the cheapest mobile designs, because of Amdahl's law.
@SupraSav 8 months ago
We're still here waiting for software/schedulers to utilize our current cores. I am not sure how more cores would bring any performance gains, although the efficiency is there. Honestly surprised this channel doesn't have 20x the subs; some great content here.
@bgtubber 8 months ago
What problems are you experiencing with thread scheduling and what CPU are you using?
@blkspade23 8 months ago
If it's AMD you're referring to, then that's really only a thing for games and dual-CCD CPUs. The bulk of how it works now would work even better with C cores. Single and lightly threaded tasks just go to the highest-clocking cores. Games just happen to be a special use case for X3D, where you typically want to reverse that when you have an X3D chiplet. You'll hardly ever run into any other consumer workload where you'd actually want/need to pick the cache cores. Most of what else could potentially benefit from the extra cache would just as likely be capable of using all threads on the CPU. That's where more threads provide more performance: either highly threaded tasks that can max the CPU, or multi-tasking with a collection of light to mildly intensive tasks. Wanting to specifically default to the C cores would be mostly a concern for mobile, which is really a matter of the OS having to switch behaviors when on battery. There are game engines capable of using 12-16 threads, but they are currently scaling back to not have the game peg the CPU at 100%. The 2.0 CP2077 update, for example, will use 50% of a 16c/32t CPU, but 80% of an 8c/16t one. I've seen Star Citizen using as many as 19 threads though. If it starts becoming more common for games to push past 16 threads, Intel's E-cores in current CPUs will be worse off.
@Crusherix 7 months ago
Area-efficient cores will be revolutionary, especially for handhelds like the Steam Deck.
@submijiru 7 months ago
Great for mobile, but for desktop compute Zen 4c doesn't have the same appeal as more big Zen cores. My suspicion is that at 100% load for longer periods, an undervolted big Zen core reaches a greater (or at least wider) range of efficiency targets. AMD must have realized this same particular flexibility pretty early on, because they literally compressed Zen here until it reached the same thermal limit at a much lower, and presumably the most efficient, point on the power curve. This is great for devices that run a submaximal load, like most office or mobile devices, or GPU-limited devices. Though I'm not sure if Zen c won't adversely impact "GPU busy" on even the most constrained devices, like handheld gaming, because today the emphasis is more on high refresh rates than fidelity. Time will tell. Frankly, the most amazing thing about this is how consistently disruptive AMD's design choices for Zen in the server or supercompute space are for the consumer market. They keep hitting multiple targets with the same stone.
@bastiangugu4083
@bastiangugu4083 8 ай бұрын
A very interesting concept. I think it's a better approach than P/E cores, especially on Windows; the scheduler has enough problems assigning tasks to the right core. What I would like to see is a desktop CPU with 8 Zen 5 cores (maybe X3D) and 16 Zen 5c cores. The difference in clock speeds would guarantee that high-performance tasks stay on the Zen 5 cores. That the ISA is the same on all cores is great, as heavy multithreaded workloads can profit from many cores regardless of type and clock speed.
@RealSilvernight
@RealSilvernight 8 ай бұрын
I still prefer the simplicity of more cores working in parallel with high clock speeds and multithreading over hybrid designs, since I don't have to worry about not having enough cores, or about DRM crashing my game because tasks get moved between cores.
@VideogamesAsArt
@VideogamesAsArt 6 ай бұрын
I don't know how you manage to make a video about a topic I already know a lot about and still make it incredibly interesting. I could binge-watch hours of it.
@seanhunt7750
@seanhunt7750 8 ай бұрын
Hi, I would really be interested in your opinion on Apple's new A17 Pro microprocessor. Perhaps you could make a video about it?
@bulletpunch9317
@bulletpunch9317 8 ай бұрын
Geekerwan has a nice one
@seanhunt7750
@seanhunt7750 8 ай бұрын
@@bulletpunch9317 Thanks, I will check it out.
@HighYield
@HighYield 8 ай бұрын
I'm super interested in the A17 Pro, since it's the first N3B chip so far. But I need something tangible to make a video, like a good die shot...
@heinzbongwasser2715
@heinzbongwasser2715 4 ай бұрын
good content as always
@davidhalliday7776
@davidhalliday7776 8 ай бұрын
High core counts are also constrained by memory bandwidth. However, this can be partly mitigated with a larger cache. I would think a 32-core AM5 desktop could be made practical if: a) it also included 3D V-Cache and b) it used fast DDR5. More RAM channels would be ideal, but even without them, I think 32 cores could be fed this way. Perhaps an all-Zen 4c or Zen 5c desktop part with 32 cores would be a multithread killer. It would still lag in games and the like due to lower clocks, but I would love one on my desk.
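To put rough numbers on the bandwidth concern, a quick back-of-the-envelope sketch in Python; DDR5-6000 on a 128-bit AM5 bus and the core counts are illustrative assumptions, not a specific product:

```python
# Rough bandwidth-per-core estimate for a hypothetical 32-core AM5 part.
transfers_per_sec = 6000e6        # DDR5-6000 -> 6000 MT/s
bus_width_bytes   = 128 / 8       # dual-channel DDR5 = 128-bit combined bus
bandwidth_gbs     = transfers_per_sec * bus_width_bytes / 1e9

print(f"total: {bandwidth_gbs:.0f} GB/s")            # ~96 GB/s
for cores in (8, 16, 32):
    print(f"{cores} cores -> {bandwidth_gbs / cores:.1f} GB/s per core")
# At 32 cores that is roughly 3 GB/s per core, which is why a big L3 / 3D V-Cache
# (or more memory channels) matters for keeping that many cores fed.
```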
@hammerheadcorvette4
@hammerheadcorvette4 8 ай бұрын
We're moving to CXL and HBM. I think DDR is seeing its last days.
@Clawthorne
@Clawthorne 7 ай бұрын
I'm curious how this Zen 4c design compares to, for example, Intel's E-cores in terms of performance per watt. The whole point of big.LITTLE was that minor and background tasks could be executed on the smaller and less complex core instead of spinning up a whole "proper" core, thus saving power. But if AMD's c-cores are just "a normal core but more compact", would that even end up saving any power besides the difference due to lower clock speeds? So far it just looks great for the datacenter world, where you can pack a bunch more c-cores onto a die than normal ones. But what about laptops and other varying-power devices that strongly benefit from proper big.LITTLE?
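One way to see why a down-clocked but otherwise identical core can still win on perf/watt is a toy dynamic-power model; the frequencies and voltages below are illustrative guesses, not measured Zen 4 / Zen 4c operating points:

```python
# Toy model: dynamic power ~ f * V^2 (capacitance folded into the constant),
# performance ~ IPC * f, and both core types share the same IPC.
def relative_power(freq_ghz: float, volts: float) -> float:
    return freq_ghz * volts ** 2

operating_points = {
    "Zen 4  (assumed 5.0 GHz @ 1.20 V)": (5.0, 1.20),
    "Zen 4c (assumed 3.5 GHz @ 0.90 V)": (3.5, 0.90),
}

for name, (f, v) in operating_points.items():
    perf = f                                  # same IPC -> perf tracks frequency
    watt = relative_power(f, v)
    print(f"{name}: perf {perf:.1f}, power {watt:.2f}, perf/W {perf / watt:.2f}")
# The compact core comes out roughly 1.8x better in perf/W in this toy model,
# without needing a different microarchitecture the way Intel's E-cores do.
```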
@vongrims952
@vongrims952 8 ай бұрын
If both Zen 4 and Zen 4c cores use the same RTL, instruction set and IPC, does that mean scheduling in the OS could be easier? I remember hearing that Intel's big.LITTLE architecture had some inefficiencies because Windows wasn't able to schedule tasks like games efficiently.
@andersjjensen
@andersjjensen 8 ай бұрын
Exactly. The OS biases workloads towards the highest clocking cores, and fills up from there.
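A minimal, Linux-only sketch of what that bias looks like from user space: rank the CPUs by their advertised maximum frequency via the standard cpufreq sysfs files (assumes those files exist; on a Zen 4 + Zen 4c part the full cores should sort to the top):

```python
# List logical CPUs sorted by their maximum frequency (Linux cpufreq sysfs).
import glob
import re

cpus = []
for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/cpuinfo_max_freq"):
    cpu_id = int(re.search(r"cpu(\d+)", path).group(1))
    with open(path) as f:
        cpus.append((int(f.read()), cpu_id))  # (max freq in kHz, cpu index)

for khz, cpu in sorted(cpus, reverse=True):
    print(f"cpu{cpu}: {khz / 1e6:.2f} GHz max")
# A scheduler that simply prefers the top of this list needs no special
# "hybrid" driver, because every core supports the same instructions.
```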
@robertpearson8546
@robertpearson8546 7 ай бұрын
As a designer for the SBP9900 at TI, I understand the difference between the P and E cores. I also understand the Ridiculous Instruction Set Computer (RISC) concept of ARM. But who said the CPU should be "symmetrical"? I worked on a stack-based direct execution computer. I examined the WD Pascal Engine and the Novix Forth hardware engine. Semantic design means that there is no "symmetry". If they want high performance, they should abandon inherently defective CMOS and switch to resonant tunnel diode threshold logic. "Engineers" at TI had a hard enough time trying to use I²L circuits. I had to create an I²L library of equivalent circuits for the standard TTL ICs. q-nary logic, self-timed circuits, universal logic modules, and semantic design were totally beyond them.
@isomane2408
@isomane2408 7 ай бұрын
In one of your videos you showed that AD102 is 5 nm, while in another it was 4 nm. I know these are just marketing terms unrelated to actual physical size, but which of these is closer to being true? I've read discussions on whether TSMC's 4N process is considered 5 or 4 nm (also discussing whether N4 is 4 or 5 nm), but it seems controversial and people are trying to argue both ways. What's your take on this topic?
@HighYield
@HighYield 7 ай бұрын
It's TSMC's N5P, but Nvidia calls it "4N". It's not based on TSMC N4 though (which is an optimized N5 node).
@isomane2408
@isomane2408 7 ай бұрын
@@HighYield Thanks for the quick response. Yup, that I'm aware of, but would you consider 4N to be better described as a 4 nm or a 5 nm process? Some sources consider it to be 4 nm, while others consider it to be 5 nm. For example, in your "How long can Nvidia stay monolithic" video at 4:20 it says that AD102 is TSMC 5 nm, while in your "RTX 4090 Chip deep-dive", also at 4:20, it says that AD102 is TSMC 4 nm.
@blipman17
@blipman17 8 ай бұрын
Having the C and non-C cores share the exact same instruction set but with different frequencies/thermal budgets is brilliant! That means no AVX-512 bullshit like Intel has, or problems with certain instructions being significantly slower on the C cores than on the non-C cores. This allows compilers, operating systems and applications to optimize for AMD much more thoroughly.
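The practical upshot is that software can probe the ISA once and trust every core. A small Linux-only sketch (reads the standard /proc/cpuinfo flags; nothing AMD-specific is assumed):

```python
# Verify that every logical CPU reports the same ISA feature flags.
flag_sets = {}
with open("/proc/cpuinfo") as f:
    cpu = None
    for line in f:
        if line.startswith("processor"):
            cpu = int(line.split(":")[1])
        elif line.startswith("flags") and cpu is not None:
            flag_sets[cpu] = frozenset(line.split(":", 1)[1].split())

print("identical ISA on all cores:", len(set(flag_sets.values())) == 1)
print("avx512f on every core:", all("avx512f" in s for s in flag_sets.values()))
# On a Zen 4 / Zen 4c hybrid both checks should come back consistent,
# whereas a P/E-core design may differ on e.g. AVX-512 support.
```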
@pentachronic
@pentachronic 6 ай бұрын
More cores at higher clocks have a benefit for mathematical compute, but for desktop applications a faster single-threaded CPU is better. What AMD have done is provide a really good compromise. As more applications become multi-threaded, their work can be offloaded onto more cores, giving a performance boost. However, this means that applications will need to be compiled correctly for this architecture.
@El.Duder-ino
@El.Duder-ino 8 ай бұрын
Very interesting move by AMD, very cool indeed 👍 Thx for the report!
@TechMeldOfficial
@TechMeldOfficial 8 ай бұрын
Good analysis, but I just want to know: are you a chip designer or engineer?
@HighYield
@HighYield 8 ай бұрын
Neither, I've just been a hardcore chip nerd since the early 2000s.
@TechMeldOfficial
@TechMeldOfficial 8 ай бұрын
@@HighYield I am an enthusiast too
@aaronlangel1408
@aaronlangel1408 8 ай бұрын
Next to Jim from AdoredTV... nice video
@HighYield
@HighYield 8 ай бұрын
I’ve been watching Jim for years
@rosetzu_nagasawa
@rosetzu_nagasawa 7 ай бұрын
Why isn't x86 doing the M3 trick of building RAM on board? Looking at the bandwidth of LPDDR4, I wonder HOW that is faster...
@Teste-gp7bm
@Teste-gp7bm 8 ай бұрын
AMD also named the small Zen APU Raven 2. Later it got a name I can't recall, but driver code still refers to it that way. This is actually a great way to go, especially on laptops. 2 big + 8 little would probably be thermally limited anyway, but less so than an all-big-core equivalent. For desktop, 4 big + whatever you can fit should also be enough for high-performance designs. On servers, Genoa is already a huge win for higher perf through lower power per computation. The hard part will be fixing the Windows scheduler; on Linux, ARM got it fixed long ago. This is also a different design from Intel, as here the small cores have the same connection to the memory controller and L3 cache. On Intel there is some latency penalty from them being clustered in groups of four. I think people will be very surprised by how well this works and will not think of small cores as a bad thing.
@captaindunsell8568
@captaindunsell8568 7 ай бұрын
For my application, an asymmetrical design works provided I have at least two threads running at 4 GHz … the other threads can be much slower, at 2 GHz.
@sicKlown86
@sicKlown86 8 ай бұрын
A Zen 5 desktop CPU with one CCD of 8 normal Zen 5 cores with stacked L3$ and a second CCD with 16 Zen 5c cores would be an extremely interesting product. As long as they can handle the scheduling aspects to optimize for various workloads, it would be a killer.
@viljosavolainen2286
@viljosavolainen2286 8 ай бұрын
I'd say 6+6 would be an interesting, cost-effective but powerful system for gaming. Most games don't need that many power cores, but might still be able to utilize many cores to some extent. 4+12 as a black-sheep option 😅
@gabrielangel1996
@gabrielangel1996 8 ай бұрын
I really hope they do it, otherwise I'll go with Intel. I'm not buying that hyper-expensive Threadripper Pro anytime soon.
@blkspade23
@blkspade23 8 ай бұрын
There really isn't as much concern for scheduling in that scenario. Games are the sole outlier in how the OS wants to handle multi-CCD CPUs. By default it wants to put tasks on the highest-clocking cores first, but while the stacked cores are clocked lower, they are technically superior for most games. So the workaround is a whitelist method. Some future core-dense CCD will always be the slower CCD. Pretty much every other workload benefits from higher clocks first, and then if it's heavily threaded it'll just include the other CCD. There won't really be enough other common workloads that would benefit from the cache and not use the whole CPU, nor would the dense CCD be faster in isolation.
@TorazChryx
@TorazChryx 8 ай бұрын
If AMD had the will to do it, they could probably launch that right now with an 8-core Zen 4 + 3D V-Cache chiplet & a 16-core Zen 4c chiplet. There's no obvious technical limitation making it impossible to my knowledge, and I wouldn't be surprised if they had something akin to that running in the labs right now (in the same way that 5950X3Ds exist internally at AMD).
@cyberpunk59
@cyberpunk59 8 ай бұрын
I've seen some tools being developed for the issues that having 2 different chiplets can cause. If you happen to have a 7900X3D or 7950X3D, I would highly suggest that you check out a program called Process Lasso. It's something that I believe AMD should have made themselves and released with those CPUs, so you don't end up with the issues some had, for example some games running worse than on a 7700X (e.g. CS:GO) or a 7800X3D, because they didn't run on the appropriate chiplet for max performance.
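For anyone curious what a tool like Process Lasso does under the hood, the core of it is just setting CPU affinity. A hedged sketch with the psutil library; the process name and the assumption that CPUs 0-15 belong to the cache CCD are examples only and vary by system:

```python
# Pin a running game to the first CCD's threads (illustrative only).
# On a 7950X3D the V-Cache CCD is often CPUs 0-15, but verify on your own system.
import psutil

def pin_to_cache_ccd(process_name: str, cpus=range(0, 16)) -> None:
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            proc.cpu_affinity(list(cpus))  # restrict scheduling to these CPUs
            print(f"pinned {process_name} (pid {proc.pid}) to CPUs {list(cpus)}")

pin_to_cache_ccd("cs2.exe")  # hypothetical game executable name
```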
@jouldalk
@jouldalk 8 ай бұрын
When are zen4 DESKTOP APUs coming out?
@Azureskies01
@Azureskies01 8 ай бұрын
The biggest advantage of the C cores is that while they lack L3, you can just pop V-Cache on them and make up the difference and then some. You wouldn't find that in the highest-performance chips, but I could see it for all APUs going forward, considering die space is so important. For high-end desktop (gaming) I could see Ryzen 8000 or maybe 9000 having a block of full-sized cores with V-Cache and then an even lower-clocked block of C cores for all background tasks, or anything that the OS thinks wouldn't be affected by the chiplet-to-chiplet communication penalty. Really, AMD should just come out with APUs for the low end, mid-range and high-end gaming chips with V-Cache, and then have their highest-core-count CPUs just use C cores, as productivity (generally) cares more about cores than it does clock speed. Not sure how much that would eat into Threadripper sales, but considering they haven't come out with those for a few generations, I don't think they would care too much.
@raute2687
@raute2687 8 ай бұрын
A more mixed design would be more flexible though. I do both gaming and productivity, sometimes with integrated graphics, sometimes with a dedicated GPU connected to my laptop. 4 Zen 5 cores with V-Cache and 6 Zen 5c cores would be bloody fantastic for a laptop.
@Azureskies01
@Azureskies01 8 ай бұрын
@@raute2687 You can say you do productivity work on a laptop, but let's be real, anyone who does it for work will have a desktop. Which is why they should segment the lineup into entry-level APUs, mid- and high-end gaming, and then top-of-the-line productivity where more cores will actually help performance.
@DigitalJedi
@DigitalJedi 8 ай бұрын
This design is interesting for sure, and it brings up some interesting ideas for future AMD chips, including how Little Phoenix may be cut down. A quad-core Ryzen 3 "7340" variant could be 2+2, while a lower-end "7240" might be 1+4, and then some Athlon "7140" is 0+4.
@SquintyGears
@SquintyGears 8 ай бұрын
The Zen 4c cores aren't so much slower than the regular ones that they would make an Athlon. Think of them as if they were maxed at 3.8 GHz instead of 5.3 GHz. So they'll actually perform closer to a Zen 3 at its full clocks, at equal core counts. The Athlons are much more cut down.
@DigitalJedi
@DigitalJedi 8 ай бұрын
@@SquintyGears I'm well aware of how neutered the Athlons are. I only bring that name into it as AMD doesn't have anything else below the Ryzen 3 name in the x1xx range.
@RaspyYeti
@RaspyYeti 8 ай бұрын
I would love to see test results for a 12-core Zen 4c with unified 36 MB memory.
@Alovon
@Alovon 8 ай бұрын
Honestly this could revolutionize the flexibility of the chiplet design that AMD does on desktop without needing to incur as many CCD latency issues as on the X900+ SKUs currently. Instead of needing 2 CCDs to break past 8 cores, you could have a CCD by default come configured with 8 full and 8 compact cores with 3D V-Cache. So something like:
8950X = 32 cores (16 full, 16 compact, 2 CCDs)
8900X = 24 cores (12 full, 12 compact, 2 CCDs)
8700X = 16 cores (8 full, 8 compact, 1 CCD)
8600X = 12 cores (6 full, 6 compact, 1 CCD)
8400X = 8 cores (4 full, 4 compact, 1 CCD)
Then you could have something like an 8500 with some of the C or full cores disabled to get 10 cores or something. The scalability becomes far higher with that design. Or heck, why not a pure C-core CCD with even more 3D V-Cache than the standard CCD?
@bgtubber
@bgtubber 8 ай бұрын
Correct me if I'm wrong, but AMD won't be putting 16 full Zen 5 cores in a single CCD with Ryzen 8000. I think this will be coming with Zen 6/Ryzen 9000. 16 compact cores yes, this will be possible from what I've heard.
@YounesLayachi
@YounesLayachi 8 ай бұрын
CCD size/density would need to double in a single generation to have 16 Zen cores in one CCD; the rumours are talking about 4+8c instead of 8+8c for laptops.
@Alovon
@Alovon 8 ай бұрын
@@YounesLayachi This is 16 Zen "Compact" cores as an option for high-end CCDs. Not full-sized cores.
@YounesLayachi
@YounesLayachi 8 ай бұрын
@@Alovon Ah, when you wrote 8950X = 32 cores (16 full, 16 compact, 2 CCDs) I understood a 16-Zen CCD + 16-Zen c CCD. I see now that you meant 8+8c + 8+8c. Silly me, heterogeneous CCDs are a dumb idea xD. The 2 CCDs will be identical, aside from any stacked cache on top... unless...
@Alovon
@Alovon 8 ай бұрын
@@YounesLayachi No, I meant 2 Zen c CCDs with extra 3D V-Cache on top.
@hypothalamusjellolap8177
@hypothalamusjellolap8177 7 ай бұрын
I think they are starting to separate the qualitative compute concerns to clamp down on efficiency and leverage electrical properties that are becoming better understood at smaller scales. I find myself shopping for efficiency rather than ultimate performance; this way, solar and renewables are even more attractive.
@pvalpha
@pvalpha 8 ай бұрын
I think the biggest architectural difference we'd see in a single AMD APU chip is using Zen 5 with a Zen 4c+ (rescaled to a smaller node) -- basically having a slight size/area advantage while reusing an existing design. So basically Zen N + Zen N-1c+ to be able to launch more quickly, unless there's a major architecture difference between generations. I could see this happening if there were problems bringing up Zen 5c in a reasonable time, or if they wanted to lead out of the gate with an APU on the new process while regular Zen 5 was ready, and wanted to make a powerful lower-cost chip off the bat to secure market share. It would still work better out of the box than Intel's P & E cores, because that approach requires a lot of magic on the programming and scheduling side.
@RobBCactive
@RobBCactive 8 ай бұрын
Porting Zen Nc to the significantly different process used by Zen N++ would be much the same work as validating Zen N++. APUs & servers follow the initial desktop launch, which is used to introduce new designs. Delaying V-Cache & compact Zen chiplets is more about managing releases and demand, launching the simpler product first.
@crazybeatrice4555
@crazybeatrice4555 8 ай бұрын
Eh, Zen 5 and Zen 5c should be out around the same time, so I don't think that will be an issue.
@julioprado7676
@julioprado7676 8 ай бұрын
I've seen reports that AMD Z1 (non-extreme) is Phoenix 2. Do you know anything about that?
@HighYield
@HighYield 8 ай бұрын
I think there’s a very high chance that’s true and the lower spec ROG Ally could be the first product to use Phoenix 2. Would be really interesting.
@darthrainbows
@darthrainbows 7 ай бұрын
This worked out for AMD the last time they tried it; albeit the goal then was to drive power consumption down on their Excavator cores (my memory says this was for the Carrizo lineup maybe?), rather than to create anything like a hybrid CPU cluster.
@brylozketrzyn
@brylozketrzyn 7 ай бұрын
That is an interesting approach. We are deploying build servers at scale, so efficiency is kinda important for us. With the i9-13900K, having E-cores was essential, reducing build times by 40% compared to P-cores only at the same power consumption. For AMD we are using multiple peripherals to do the job and we need the CPU to be as homogeneous as possible. Also, Hyper-V in WS2022 was not happy with heterogeneous cores, but it could be satisfied by AMD. Time will tell.
@MasticinaAkicta
@MasticinaAkicta 8 ай бұрын
We will have to see in benchmarks, across multiple uses, how it goes. I know that the Intel P+E design sounded naff to many, but given the right situation, a.k.a. a laptop or small device, they are kinda nice: lower heat, longer battery life. And yes, I know Windows 10 needed some patching to get it right. I know.
@schizoidman9459
@schizoidman9459 7 ай бұрын
I agree with @MikeGaruccio: more, smaller cores at lower clocks is the main idea behind RISC architectures, except that here the cores are actually CISC cores. More cores are the way to go for sure, especially when the cores are uncompromised except for clock speed. AMD is correct in their approach.
@MetroidChild
@MetroidChild 7 ай бұрын
I do actually think that Intel's move back towards single-threaded cores makes a lot of sense regardless of architecture: the better your scheduler and prefetch units are, the less extra juice you can squeeze out of the ALUs and FPUs. Add to that the fact that cross-thread vulnerabilities are almost impossible on single-thread designs, as well as the higher densities, and those E-cores are looking mighty fine. AMD's focus on instruction compatibility also makes a lot of sense, given how they haven't been affected by vulnerabilities and node restrictions as much. It wouldn't surprise me if the future lands somewhere in between: single-threaded cores using the same ISA but min-maxxed for speed/energy consumption.
@mikeh6286
@mikeh6286 8 ай бұрын
When optimizing for size and power efficiency, isn't there more to do than just moving the core elements around? I'm worried that the lower-clocked Zen 4c cores can't compete with E-cores in power efficiency (and thus multithreaded performance within a fixed TDP envelope).
@mawkzin
@mawkzin 8 ай бұрын
Don't worry, standard Zen 4 is already more efficient than Intel's P-cores at lower voltage, and the 4c just extends this lead.
@mikeh6286
@mikeh6286 8 ай бұрын
@@mawkzin Yeah, sure, but the 4c cores need to be more power-efficient than the 14th-gen E-cores in instructions/watt. Obviously they'll be faster core-to-core, but efficiency is important for multithreaded loads and handheld use.
@larrydillon4841
@larrydillon4841 7 ай бұрын
I still have CPU-bound programs that are limited by single-core performance, so getting the most clock speed out of a single core improves my workloads, while more cores (over 2 or 4) make little difference. As long as the OS scheduler can pin those apps to the fast cores, all should be good. So far, this looks like a good idea from AMD.
@0xEmmy
@0xEmmy 7 ай бұрын
I wonder if AMD's use of the same RTL for both cores stems from the instruction set. x86 is extremely complex. The decoder, accordingly, needs extra complexity, much of which is independent of the throughput needed, and much of which is always on regardless of what the core is doing. So it might lose efficiency if underutilized, and simplifying the decoder to just provide a lower throughput might not give all of that efficiency back. And if the decoder throughput isn't changing, the execution throughput can't be allowed to change either, which eliminates the main reason to change the RTL. Miniaturizing the same logic also has a lot of advantages. If all the wires are shorter, that means less electrical resistance and less leakage, which means that the same logic can get higher power efficiency (and thus less heat). This partially compensates for the lost cooling from a smaller physical surface area. It also might allow for thinner wires to carry the same signals, enabling further area savings. Shorter wires also have less chance to affect each other, again partially compensating for the fact that there are more of them. ARM does not have this problem, since it's much easier to decode. As such, it's a lot easier to just cut off half the decoder and get half the throughput (though it's not necessarily trivial). There is also plenty of room for the same exact layout optimization, but it's not necessarily as obvious given that the best point of comparison has different logic. And it sounds to me like Intel's solution to this problem is just to remove some instructions, which will drastically simplify the decoder and perhaps make it easier to actually reap the benefits.
@johnberame5322
@johnberame5322 8 ай бұрын
thumbs up to High Yield
@HighYield
@HighYield 8 ай бұрын
👍 to you too!
@FatheredPuma81
@FatheredPuma81 8 ай бұрын
I tried out more cores with higher clocks. It worked well in games that utilized them, but the majority of games were IMO unplayable.
@shanent5793
@shanent5793 8 ай бұрын
You show four DDR5 interfaces, are those 32 or 64 bits each?
@volodumurkalunyak4651
@volodumurkalunyak4651 8 ай бұрын
Probably 32-bit each (128-bit total). AMD uses a 128-bit DDR5 memory controller on the Z1 Extreme chip (8× Zen 4, 12 CU iGPU), so a 256-bit memory controller here (2× Zen 4, 4× Zen 4c, 4 CU iGPU) would be unnecessary.
@HighYield
@HighYield 8 ай бұрын
Should be 4x 32-bit for 128-bit total, as volodum said.
@petarstoyanov9024
@petarstoyanov9024 5 ай бұрын
More cores at lower clock speeds is definitely the wiser path. We are still very unoptimized at the SW/kernel level when it comes to true multithreading; still waiting on the real paradigm shift.
@phirenz
@phirenz 7 ай бұрын
Some ARM SoCs have been doing the first half of this for over a decade. The Tegra 3 is the first one I remember, with four cores on a high-performance cell library and one core on a low-power cell library. It beat ARM's big.LITTLE to market by a year or two. And it's somewhat common on low-end SoCs to have a fake big.LITTLE configuration that's actually just two clusters of little cores optimised for different clock speeds; I assume those SoCs used different cell libraries. AMD's main innovation here is taking the extra step to really optimise for area with a more careful layout.