I also recommend this written article on the topic: "Arm vs x86: Instruction sets, architecture, and all key differences explained" - www.androidauthority.com/arm-vs-x86-key-differences-explained-568718/
@dalezapple2493 · 4 years ago
Learned a lot, Gary, thanks 😊...
@Yusufyusuf-lh3dw · 4 years ago
@Z3U5 Why would that be a big boon to the industry, considering that Apple wants to create a large proprietary ecosystem that tries to hide everything from everyone, including the CPU specs, capabilities, and associated technology? Apple is like the boogeyman trying to steal the future of computing. But I'm sure they will reach where they were in the 90s.
@Yusufyusuf-lh3dw · 4 years ago
@Z3U5 Yes. There are multiple different use cases in the server market and ARM fits into a few of those. If you look at historic data on server market share, it has never been 100% x86 or POWER CPUs; there has always been a small percentage of other architectures. There are even Intel Atom based servers. But the argument that ARM is going to capture a huge share of the server market is totally unrealistic.
@MozartificeR · 4 years ago
Is it difficult for a company to recompile an app for RISC?
@HungryGuyStories · 4 years ago
Thank you for the informative video! But why, WHY, *_WHY_* does almost every YouTuber hate their viewers' ears?!?!?!
@xcoder1122 · 3 years ago
The biggest problem x86 has is its strong guarantees. Every CPU architecture gives programmers implicit guarantees. If they require guarantees beyond those, they have to explicitly request them or work around the fact that they are not available. Having lots of guarantees makes writing software much easier, but at the same time it causes a problem for multicore CPUs, because all guarantees must be upheld across all cores, no matter how many there are. This puts an increasing burden on the bus that interconnects the cores, as well as on any shared caches between them.

The guarantees that ARM gives are much weaker (just as those of PPC and RISC-V are much weaker), which makes it much easier to scale those CPUs to a larger number of cores without requiring a bus system that becomes drastically faster or drastically more complex, since such a bus system draws more and more power and at some point becomes the bottleneck of the entire CPU. And this is something Intel and AMD cannot change for x86, not without breaking all backward compatibility with existing code.

Now, as for what "guarantees" actually means, let me give you some examples: If one core writes data first to address A and then to address B, and another core monitors address B and sees it has changed, will it also see that A has changed? Is it guaranteed that other cores see write actions in the same order they were performed by some core? Do cores even perform write actions in any specific order to begin with? How do write actions and read actions order relative to each other across cores? Do they order at all? Are some, all, or none of the write actions atomic? Over 99% of the time none of these questions matter to the code you write, yet even though less than 1% of all code needs to know the answers, the CPU has to stick to whatever it promises 100% of the time. So the less it promises, the easier it is for the CPU to keep its promises and the less work must be invested in keeping them.

When Microsoft switched from x86 in the first Xbox to PPC in the Xbox 360, a lot of code broke because that code was never legitimate in the first place. The only reason it had worked correctly so far was the strong guarantees of the x86 architecture. With PPC's weaker guarantees, that code would only work sometimes, causing very subtle, hard-to-trace bugs. Only when programmers correctly applied atomic operations and memory barriers, and used guarded data access (e.g. mutexes or semaphores), did the code work correctly again. When you use these, the compiler understands exactly which guarantees your code needs to be correct, and knowing which CPU you are compiling for, it also knows whether it has to do something extra to make your code work correctly or whether it will work correctly "on its own" because the CPU already gives enough guarantees, in which case the compiler does nothing. I see code every other day that only works coincidentally because it runs on a CPU with certain guarantees; that very same code would fail for sure on other CPUs, at least if they have multiple cores, if there are multiple CPUs in the system, or if the CPU executes code in parallel as if there were multiple cores (think of hyper-threading).

The guarantees of x86 were no issue when CPUs had a single core and systems had a single CPU, but they increasingly become an issue as the number of cores rises. Of course, there are server x86 CPUs with a huge number of cores, but take a look at their bus diagrams. Notice how much more complex their bus systems are compared to consumer x86 CPUs? And this is a problem if you want fast CPUs with many cores running at low power, providing long battery life and not requiring a complex cooling system (or otherwise running at very high temperatures). Adding cores is currently the easiest way to scale CPUs: making cores run at higher speed is much harder (clock frequencies are usually below what they were before CPUs went multicore), and it's even harder to make a CPU faster without raising the clock frequency or adding cores, as most possible optimizations have already been done.

That's why vendors resort to all kinds of trickery to get more speed out of virtually nothing, and these tricks cause other issues, like security issues. Think of Meltdown, Spectre, and Spectre-NG. These attacks exploited flaws in the way those optimizations were implemented, e.g. speculative execution: the CPU is not actually faster, but it can perform work when it otherwise couldn't, and if that work turns out to be correct, it appears faster. If what it has just done turns out to be wrong, though, it has to roll back what it did, and this rollback was imperfect: the cache was not cleaned up, on the assumption that nobody can look directly into it anyway, so why would it matter? That assumption was wrong. By abusing another of these speed tricks, branch prediction, it is possible to reveal what is currently cached and what isn't, and that way software can gain access to memory it should never have access to.

So I think the days of x86 are numbered. It will not die tomorrow, but it will eventually die, as in many aspects x86 is inferior to ARM and RISC-V, which both give you more computational power for less money and less electrical power. And this is not a temporary thing: no matter what trickery and shrinking are applied to make x86 faster, the same trickery and shrinking can be applied to ARM and RISC-V to make them faster as well. On top of that, it will always be easier to give them more cores, and that's why they keep winning in the long term.
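To make xcoder1122's point concrete, here is a minimal C++ sketch of the producer/consumer pattern described above (the scenario and values are illustrative). With plain variables this is a data race: on x86's strong memory model it happens to work almost every time, while on ARM or PPC the consumer can see `ready` without seeing `data`. Release/acquire atomics make the needed guarantee explicit and portable:

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

int data = 0;
std::atomic<bool> ready{false};

void producer() {
    data = 42;                                     // write A
    ready.store(true, std::memory_order_release);  // write B publishes A
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {}  // saw B, so A is visible
    printf("%d\n", data);  // guaranteed to print 42 on every architecture
}

// With a plain `bool ready` instead of the atomic this is a data race:
// x86's strong ordering means it would almost always still print 42,
// but on ARM or PPC the store to `data` may become visible AFTER the
// store to `ready`, so this could print 0 -- the Xbox 360 class of bug.
int main() {
    std::thread c(consumer), p(producer);
    p.join();
    c.join();
}
```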
@tvthecat · 2 years ago
Did you just type a fucking novel right here?
@dariosucevac7623 · 2 years ago
Why does this only have 26 likes? This is amazing.
@lachmed · 2 years ago
This was such a great read. Thanks for taking time to write this!
@jamesyu5212 · 2 years ago
There are so many things wrong in this comment that I'm glad it hasn't gotten a lot of likes. Thank God people are learning to do actual research rather than agreeing with unverified claims in the comments section.
@xcoder1122 · 2 years ago
@@jamesyu5212 People especially need to stop listening to comments that simply make blanket claims that something is wrong without naming a single specific point that is supposed to be wrong, or refuting even a single point. Whoever sweepingly says "this is wrong" has ultimately said nothing, and could post the same know-it-all comment anywhere; apparently he knows nothing concrete, otherwise he would offer something concrete.
@ProcashFloyd · 4 years ago
Nice to hear from someone who actually knows what they are talking about, not just for the sake of making YouTube content.
@squirlmy · 4 years ago
I came specifically for Apple's Mac move to ARM. I've heard theories that are basically "conspiracy" theories! lol
@mrrolandlawrence · 4 years ago
I bet Gary still runs OpenVMS at home ;)
@mayankbhaisora2699 · 4 years ago
@@mrrolandlawrence I don't know what the hell you are talking about but I get what you are saying :)
@ragnarlothbrook8117 · 4 years ago
Exactly! I'm always impressed by Gary's knowledge.
@RaymondHng · 4 years ago
@@mayankbhaisora2699 VMS was the operating system that ran on DEC minicomputers (midrange systems). OpenVMS is the descendant of VMS, developed and supported by VMS Software Inc. en.wikipedia.org/wiki/OpenVMS
@octacore9976 · 4 years ago
There are thousands of useless videos saying "Apple moving to its own silicon!!!"... but this is the only video that is actually informative.
@pranay7264 · 4 years ago
sike
@zer0legend109 · 4 years ago
I stopped midway, asking myself why I felt like I'd been tricked into a college lecture without noticing until halfway through, and yet I still liked it.
@SH1xmmY · 3 years ago
You are not alone
@rhemtro · 3 years ago
It doesn't even feel like a college lecture
@batmanonabike3119 · 2 years ago
Wait… clandestinely educating us? Cheeky
@lukescurrenthobby4179 · 7 months ago
That’s me rn
@macaroni.ravioli · 2 months ago
@@lukescurrenthobby4179 Me too hahahaha, I'm cracking up at the original commenter above 🤣🤣🤣
@Charlie-jf1wb · 4 years ago
I struggled to find someone who could actually explain something as complex as ARM vs x86 in such straightforward terms. Thank you.
@goombah_ninja · 4 years ago
Nobody can explain it better than you. Thanks for packing it all into under 21 minutes. I know it's tough.
@dalobucaram7078 · 4 years ago
Gary is arguably the best person to explain these kinds of topics in layman's terms. Intel will finally get a humility lesson.
@TurboGoth · 3 years ago
In your slide listing the major differences between RISC and CISC, one thing that struck ME as relevant (because of my interest in compilers, and which admittedly may not be worth mentioning depending on your audience) is that RISC instructions also tend to be more uniform, in the sense that if you want to do an add instruction, for example, you don't really have to think about which instructions are allowed with which registers.

Also, on 32-bit ARM the instruction pointer is just a regular general-purpose register, while that register is special-purpose in other architectures, where it is awkward and roundabout to even access. Often you need odd tricks, like using Intel's call instruction as if you were calling a function, for its side effect of pushing the instruction pointer onto the stack; from there you can pop the value off. But on ARM you just have it in one of your normal registers. Obviously, you must beware that if you dump random garbage into it you'll start running code from a different place in memory, but that should be obvious. Yet such uniformity can eat space, and so the Thumb instructions don't have it.
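A rough sketch of that instruction-pointer point, assuming GCC/Clang inline-asm syntax (with the caveat that 64-bit AArch64 removed the PC from the general-purpose register file again):

```cpp
#include <cstdio>

// How awkward it is to read the instruction pointer on 32-bit x86
// versus how trivial it is on 32-bit ARM.
static void* current_pc() {
#if defined(__i386__)
    void* pc;
    // x86 has no "mov from EIP": use CALL's side effect of pushing the
    // return address onto the stack, then pop it back off.
    asm volatile("call 1f\n1: pop %0" : "=r"(pc));
    return pc;
#elif defined(__arm__)
    void* pc;
    // On 32-bit ARM the program counter is just register r15 ("pc"),
    // readable like any other general-purpose register.
    asm volatile("mov %0, pc" : "=r"(pc));
    return pc;
#else
    return nullptr;  // AArch64 removed the PC from the GPR file again
#endif
}

int main() {
    printf("executing near %p\n", current_pc());
}
```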
@yogevbocher3603 · 4 years ago
Just to give some feedback: the pronunciation of the name Dobberpuhl was perfect.
@seanc.5310 · 4 years ago
How else would you pronounce it?
@zoehelkin · 4 years ago
@@seanc.5310 like South Gloucestershire
@ivanguerra1260 · 4 years ago
I understood "rubber poop"!!
@squirlmy · 4 years ago
Of course, he's British, not an American!
@thecalvinprice · 4 years ago
My father was working for DEC at that time, and he told me a lot about the Alpha chip and their foray into 64-bit before AMD and Intel brought their consumer 64-bit chips to market. He also told me about the purchase by Compaq (and subsequently HP) and that a lot of people weren't keen on it, as there was concern all their efforts would disappear. These days DEC is barely a whisper when people talk about tech history, but I still remember sitting at an ivory CRT with the red 'DIGITAL' branding on it. Edit: Thanks Gary for the little nostalgia trip.
@scality4309 · 4 years ago
VT-100?
@tasosalexiadis7748 · 2 years ago
The lead designer of the DEC Alpha founded a CPU company in 2003 (PA Semi), which Apple bought in 2008. This is where they got the talent to engineer their own CPUs for the iPhone (starting from the iPhone 5) and now the new ARM-based Macs.
@lilmsgs · 4 years ago
I'm an ex-DECie tech guy too. Glad to see someone helping to explain their place in history. All these years later, I'm still heartbroken.
@JanuszKrysztofiak · 3 years ago
I've got the impression memory is (relatively) slower and more expensive TODAY than back then. Then, in the early 1980s, the performance of RAM and CPUs was not far apart, so CPUs were not that handicapped by RAM: they could fetch instructions directly and remain fully utilized. Today it is different: RAM is much, much slower than the CPU, so CPUs spend a good part of their transistor budgets on internal multilevel caches. Whereas a modern system can have 128 GB of RAM or more, the memory that actually matches your CPU's speed is the L1 cache, which is tiny: a Ryzen 5950X has only 64 kilobytes of L1 cache per core (32 kB for code and 32 kB for data). I would say instruction-set "density" is even more important now.
@mayatrash · 2 years ago
Why do you think that is? Is it because the R&D going into memory is worse? Genuine question.
@okaro6595 · 1 year ago
Very few modern systems have 128 GB of RAM. Typical is about 16 GB, which costs some $50. In around 1983, 64 KB was typical, at about $137, which inflation-adjusted would be about $410. So no, RAM is not more expensive now.
@GreenDayGA · 1 year ago
@@okaro6595 L1 cache is expensive
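To put numbers on the L1-versus-RAM gap discussed in this thread, here is a hypothetical pointer-chasing micro-benchmark; the working-set sizes are assumptions based on the 32 kB L1 figure above, and exact timings will vary by machine:

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

// Pointer-chasing: each load depends on the previous one, so the CPU
// cannot hide memory latency behind other work.
static double ns_per_load(size_t n) {
    std::vector<size_t> next(n);
    std::iota(next.begin(), next.end(), size_t{0});
    std::shuffle(next.begin(), next.end(), std::mt19937_64{42});
    constexpr size_t hops = 10'000'000;
    size_t i = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (size_t h = 0; h < hops; ++h) i = next[i];
    auto t1 = std::chrono::steady_clock::now();
    if (i == SIZE_MAX) std::puts("");  // keep the loop from being optimized away
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / hops;
}

int main() {
    // 16 KiB fits comfortably in a 32 KiB L1; 256 MiB fits in no cache.
    printf("L1-sized working set:  %.2f ns/load\n",
           ns_per_load(16 * 1024 / sizeof(size_t)));
    printf("RAM-sized working set: %.2f ns/load\n",
           ns_per_load(256ull * 1024 * 1024 / sizeof(size_t)));
}
```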
@reptilez13 · 4 years ago
The possibilities for how this will go, and how it will affect the industry and the other players in said industry, are endless. Very interesting next few years for numerous reasons, this being a big one. Great vid too, btw.
@mohammedmohammed519 · 4 years ago
Gary, you’ve been there and done that ⭐️
@johnmyviews3761 · 4 years ago
I understood Intel was initially a memory maker; however, a Japanese company commissioned Intel to develop its calculator chips, which ultimately developed into microprocessors.
@jazmihamizan4987 · 4 years ago
And the space race too
@ricgal50 · 8 months ago
Really great to hear someone who knows about Acorn. In 1986 I lived in northwestern Ontario (Canada). Our library put in a desk with 6 computers on it. They were the North American version of the BBC Micro, made by Acorn. It was the first computer I worked on. I didn't find out about the ARM chip and the Archimedes until the mid-nineties, and it was shortly after that that ARM was spun off. BTW, Acorn was owned by Olivetti at the time. When I found out about the Raspberry Pi, I got on board. I had a Model A, and then I bought a Pi 4 just in time for Covid, so I could use my camera module as a second camera for Zoom.
@madmotorcyclist · 3 years ago
It is interesting how the CISC vs. RISC battle has evolved, with Intel's CISC holding the lead for many years. Unfortunately, Intel rested on its laurels, keeping the x86 architecture and boosting its operating frequencies while shrinking the production process down to single-digit-nm levels. The thermal limits of this architecture have literally been reached, giving Apple/ARM the open door with their approach, which after many years has now surpassed x86. It will be interesting to see how Intel reacts, given its hegemonic control of the market.
@1MarkKeller · 4 years ago
*GARY!!!* *Good Morning Professor!* *Good Morning Fellow Classmates!*
@GaryExplains · 4 years ago
MARK!!!
@projectz9776 · 4 years ago
Mark!!!
@jordanwarne911 · 4 years ago
Mark!!!
@AndrewMellor-darkphoton · 4 years ago
This feels like a meme. MARK!!!
@-Blue-_ · 4 years ago
MARK!!!
@handris99 · 4 years ago
I was afraid at the beginning that all I'd hear was history, but I'm glad I watched till the end; my questions were answered.
@jeroenstrompf5064 · 4 years ago
Thank you! You get extra points for mentioning Elite. I played that so much on a Commodore 64 when I was around 16.
@GaryExplains · 4 years ago
Glad you enjoyed it!
@ZhangMaza · 4 years ago
Wow, I learned a lot just from watching your video, thanks Gary :)
@GaryExplains · 4 years ago
Glad to help
@aaaaea9268 · 4 years ago
This video is so underrated, lol. You explained something I was never able to understand in just 20 minutes.
@mokkorista · 4 years ago
My cat: trying to catch the red dot.
@stargazerweerakkody8254 · 4 years ago
Did he?
@deus_ex_machina_ · 4 years ago
@@stargazerweerakkody8254 No. Source: I was the cat.
@alexs_thoughts · 4 years ago
So what would be the arguments in favor of Intel (or x86 in general) in this situation?
@tophan5146 · 4 years ago
Unexpected complications with 10nm.
@RickeyBowers · 4 years ago
IPC and higher single-core speed.
@Robwantsacurry · 4 years ago
The same reasons that CISC was developed in the first place: in the 1950s memory was rare and expensive, so complex instruction sets were developed to reduce the amount of memory required. Today memory is cheap, but its speed hasn't improved as fast as CPUs'. A RISC CPU requires more instructions, hence more memory accesses, so memory bandwidth is more of a bottleneck. That is why a CISC CPU can have a major advantage in high-performance computing.
@snetmotnosrorb3946 · 4 years ago
Compatibility. That's it, really. x86 has come to the end of the road. The thing is bloated by all the different design paths taken and extensions implemented over the decades, and is crippled by legacy design philosophies from the 70s. Compatibility is really the only reason it has come this far; that is what attracted investment.

PowerPC is a way better design even though it's not much newer, but it came at a time when Wintel rolled over all competition in the 90s, combined with insane semiconductor manufacturing advances that only the biggest dogs could afford, and thus PPC didn't gain the traction needed to survive in the then-crucial desktop and later laptop markets. Now the crucial markets are servers and mobile devices, where ARM has a growing foothold and total dominance respectively. Now ARM is getting the funding needed to surpass the burdened x86 in all metrics. The deciding factor is that x86 has hit a wall, while ARM still has quite a lot of potential, and that is what will ultimately force the shift despite broken compatibility. Some will still favour legacy systems, so x86 will be around for a long time, but its heyday is definitely over.

@@RickeyBowers ARM has way higher IPC than x86. No one has ever really tried to make a wide desktop design based on ARM until now, so total single-core speed is only a valid argument for a few more years.
@AdamSmith-gs2dv · 4 years ago
@@Robwantsacurry Single-core performance is also better with x86 CPUs; for ARM CPUs to accomplish anything they NEED multithreaded programs, otherwise they are extremely slow.
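Robwantsacurry's code-density point can be sketched with one C++ statement; the assembly in the comments is a plausible illustration of typical compiler output for each ISA, not the verbatim output of any particular compiler:

```cpp
// One read-modify-write of a global counter.
long counter;
void bump(long n) { counter += n; }

// Plausible x86-64 output: ONE variable-length instruction (about 7
// bytes) that operates directly on memory -- the CISC style:
//     add qword ptr [rip + counter], rdi
//
// Plausible AArch64 (RISC, load/store) output: FOUR fixed 4-byte
// instructions, 16 bytes of instruction fetch for the same work:
//     adrp x8, counter              ; form the address of counter
//     ldr  x9, [x8, :lo12:counter]  ; load
//     add  x9, x9, x0               ; modify
//     str  x9, [x8, :lo12:counter]  ; store
```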
@TCOphox · 4 years ago
Thanks for converting those complex documents into understandable English for plebeians like me! Interesting things I've learnt so far:
* AMD made direct Intel clones in the beginning.
* Intel was forced to use AMD's 64-bit implementation because it couldn't develop a successful one of its own.
* Intel has made ARM chips and has a license for ARMv6, but sold its ARM division off.
* Apple has a much longer history with ARM than I expected.
* Imagination went bankrupt, so Apple bought lots of their IP and developed its own GPUs from there.
* Apple was the first to incorporate 64-bit into smartphones.
@Caesim9 · 4 years ago
The big problem for Intel is that they invested heavily in branch prediction. And it was a wise move: their single-core performance was really great. But then Spectre and Meltdown happened and they had to wind back their biggest improvements. Their rivals AMD and ARM invested more in multicore processors and weren't hit as hard by these vulnerabilities, so Intel has a lot of catching up to do.
@autohmae · 4 years ago
Actually, lots of architectures were affected by Spectre and Meltdown, but Intel the most. They affected at least Intel, AMD, ARM, MIPS, and PowerPC, but not RISC-V and not Intel Itanium. RISC-V does things in a similar way, but not all of the things needed to run into the problem. In theory, new chip designs could have been affected too; obviously designers know about these issues now, so it probably won't happen. Different processor series/types/models of each architecture were affected in different ways.
@Yusufyusuf-lh3dw · 4 years ago
Intel did not discard any of their prediction units or improvements because of the Spectre/Meltdown problem.
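The classic way to see how much branch prediction is worth, relevant to Caesim9's point above, is timing the same loop over random versus sorted data; a small hedged C++ demo (exact speedups vary by CPU):

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

// Sum only the "big" elements; the if() is the branch under test.
static long long sum_big(const std::vector<int>& v) {
    long long s = 0;
    for (int x : v)
        if (x >= 128) s += x;
    return s;
}

int main() {
    std::vector<int> v(1 << 24);
    std::mt19937 rng(1);
    for (int& x : v) x = rng() % 256;

    auto time_it = [&](const char* label) {
        auto t0 = std::chrono::steady_clock::now();
        long long s = sum_big(v);
        auto t1 = std::chrono::steady_clock::now();
        printf("%s sum=%lld  %.1f ms\n", label, s,
               std::chrono::duration<double, std::milli>(t1 - t0).count());
    };
    time_it("random:");  // ~50% of the if() branches mispredict
    std::sort(v.begin(), v.end());
    time_it("sorted:");  // the branch predictor is nearly perfect now
}
```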
@TechboyUK · 1 year ago
Excellent overview! With the CPU race on at the moment, it would be great to have an updated version of this video in around 6 months from now 😊
@GaryExplains · 1 year ago
I agree. 😀
@cypherllc7297 · 4 years ago
I am a simple man. I see Gary, I like the video.
@taher9358 · 4 years ago
Very original
@1MarkKeller · 4 years ago
Even before watching ... click
@ΝικοςΡαμαντανης-χ6ο · 4 years ago
@@1MarkKeller MARK!!!
@falcon81701 · 4 years ago
Best video on this topic by far
@numtostr · 4 years ago
Hey Gary, your videos are awesome. I am waiting for some more Rust videos; your last Rust video helped me get started with the language.
@adj2 · 2 years ago
Absolutely beneficial; I learned a lot. I still have a lot more learning to do. Definitely subscribed, and I will be pulling up other videos you've done for additional information.
@jinchoung · 4 years ago
Surprised you didn't mention the SoftBank acquisition. All my ARM stock got paid out when that happened, and it kinda surprised me; I totally thought I'd just get rolled over into SoftBank stock.
@patdbean · 4 years ago
Yes, £17 a share was a good deal at the time. I wonder what they would be valued at today.
@autohmae · 4 years ago
@@patdbean They benefited from the drop in the pound because of Brexit.
@patdbean · 4 years ago
@@autohmae SoftBank did, yes, but ARM themselves (I think) have always charged royalties in USD.
@vernearase3044 · 4 years ago
Well, it looks like SoftBank is shopping ARM around ... hope it doesn't get picked up by the mainland Chinese. As for stocks, I noticed I was spending waaayyy too much on Apple gear, so I bought some Apple right after the 7-for-1 split in 2014 to help defray the cost, at IIRC about $96/share.
@Cheese-n-Cake16 · 4 years ago
@@vernearase3044 The British regulators will not allow the Chinese to buy ARM, because if they did, that would be the death of ARM in the Western world.
@BeansEnjoyer911 · 4 years ago
Subbed. Just straight-up information, easily explained. Love it.
@TheSahil-mv3ix · 4 years ago
Sir! When will ARMv9 arrive? Are there any dates or estimates? What will it be like?
@MiguelAngel-rw7kn · 4 years ago
Rumor says the A14 will be the first one to implement it, so ARM should announce it before Apple. Maybe next month?
@cliffordmendes4504 · 3 years ago
Gary, your explanation is smooth like butter!
@danielcartis9011 · 3 years ago
This was absolutely fascinating. Thank you!
@reikken69 · 4 years ago
What about RISC-V? I heard it is quite different from the ARM architecture...
@taidee · 4 years ago
This video is very rich in details, as usual. Thank you 🙏🏾 Gary.
@GaryExplains · 4 years ago
My pleasure!
@lolomo5787 · 3 years ago
It's cool that Gary takes the time to read and reply to sensible comments here, even on videos uploaded months or even years ago. Other channels don't do that.
@GaryExplains · 3 years ago
I try my best.
@Delis007 · 4 years ago
I was waiting for this video. Thanks
@realme-em3xy · 4 years ago
Me 2
@GaneshMKarhale · 4 years ago
What is the difference between mobile ARM processors and ARM processors for desktops?
@thebrightstar3634 · 4 years ago
Really, really good question. Next time Gary should do a video on this question you asked. Anyway, really good questions 👍👍👍
@sathishsubramaniam4646 · 4 years ago
My brain has been itching for a few years now whenever I hear ARM vs Intel, but I was too lazy. Finally everything is flushed out and cleared up.
@diegoalejandrosarmientomun303 · 4 years ago
ARM or AMD? AMD is the direct competitor of Intel regarding x86 chips and other stuff. ARM is another architecture, which is the one covered in this video.
@bruceallen6492 · 3 years ago
The 80286 had segmentation, but it did not have paging; I think OS/2 would smoke a cigarette waiting for segment swaps. The 80386 had both segmentation and paging, and paging was what the engineers wanted. Beefier motherboard chips were needed to support DMA while the CPU continued to execute code; 8086 machines stalled during DMA under DOS.
@correcteur_orthographique · 4 years ago
Silly question: why haven't we used ARM processors in PCs before?
@Rehunauris · 4 years ago
There have been ARM-powered computers (the Raspberry Pi is the best example) but it's been a hobbyist/niche market.
@Ashish414600 · 4 years ago
Because people don't want to rewrite everything! The ARM architecture is dominant in the embedded-systems market, in SoCs; you won't see the x86 architecture popular there. For PCs, it's the opposite: Intel already dominated the PC market, so it's hard for any company to develop a system for an entirely different architecture. The software developers (especially the cross-compiler designers) will surely have headaches, but if Apple is successful, it will indeed bring a revolution and challenge Intel's monopoly in PCs!
@correcteur_orthographique · 4 years ago
@@Ashish414600 OK, thanks for your answer.
@augustogalindo8687 · 3 years ago
ARM is actually a RISC architecture, and there is information about this in Grove's book (the ex-Intel CEO). He says Intel actually considered using RISC instead of CISC (the architecture of current Intel chips) but decided against it because there was not much of a difference. Consider that we are talking about a time when most computers were stationary and didn't need to be energy efficient, so it made sense back then to stick with CISC. However, nowadays energy efficiency has become very important, and that's where ARM is taking an important role.
@binoymathew246 · 4 years ago
@Gary Explains Very educational. Thank you for this.
@roybixby6135 · 4 years ago
And everyone thought Acorn was finished when its RISC-based Archimedes computer flopped...
@ddd12343 · 4 years ago
Talking about porting apps from a programmer's perspective: can the x86-to-ARM shift be done automatically? I've read many comments pointing out that Apple, in moving to ARM, might have problems actually convincing app developers to move too. Apple is also offering special ARM-based prototype computers to programmers just for testing, so they can be ready for the release of ARM Macs. Why is that? Isn't it a matter of simple app recompilation? Can't Apple do that automatically for apps in its store?
@RoyNeeraye · 4 years ago
10:52 *Apple
@GaryExplains · 4 years ago
Ooopss! 😱
@hotamohit · 4 years ago
I was just reading it at that part; amazing that you noticed it.
@STohme · 4 years ago
Very interesting presentation. Many thanks Gary.
@GaryExplains · 4 years ago
Glad you enjoyed it
@denvera1g1 · 4 years ago
Everyone is comparing ARM and Intel; no one is talking about ARM and AMD, like the Ampere Altra (80 cores) vs the EPYC 7742 (64 cores). Ampere claims it is 4% more powerful while using 10% less energy, making it about 14% more efficient than AMD. Some people might point out that for AMD to compete with an 80-core chip, they have to pump more power into their 64-core chip, which uses disproportionately more energy than the performance it adds. I'll be REALLY interested to see a fully 7nm AMD EPYC, where the space saved by a 7nm IO die instead of a 12nm one makes room for two more 8-core CCDs, for a total of 80 cores. Some might argue that if AMD had a fully 7nm processor with 80 cores, it would be not only more powerful but also more efficient (less energy for that performance).
@scaryonline · 4 years ago
So what about Threadripper Pro? It's 20 percent more powerful than Intel's 58-core Platinum.
@denvera1g1 · 4 years ago
@@scaryonline Doesn't it use more power than EPYC because of its higher frequency?
@gatocochino5594 · 4 years ago
I found no independent benchmarks for the Altra CPU. Ampere claims (keyword here) their CPU is 4% more powerful IN INTEGER WORKLOADS than the EPYC 7742. Saying the Ampere Altra is "4% more powerful (...) than AMD" is a bit misleading here.
@rene-jeanmercier6517 · 4 years ago
This is an EXCELLENT review for someone who programmed the Intel 4004 way back when. Thank you so much. Regards, RJM
@GaryExplains · 4 years ago
Glad you enjoyed it!
@gr8bkset-524 · 4 years ago
I grew up using the 8088 in engineering school and upgraded my PCs along the way. I worked for Intel for 20 years after they acquired the multiprocessor company I worked for. I got rid of most of their stock after I stopped working for them, as I saw the future in mobile. These days, my Windows 10 laptop sits barely used, while I'm perfectly happy with a $55 Raspberry Pi hooked up to my 55" TV. Every time I use the laptop it seems to be running some scan or background task that takes up most of the CPU cycles. Old dinosaurs fade away.
@AkashYadavOriginal · 4 years ago
Gary, I've read on forums that ARM is not currently a true RISC processor. Some of the recent additions, like the NEON instructions, are not compatible with the RISC philosophy and are more like CISC instructions; can you please explain? Also, Intel's and AMD's recent additions like SSE actually follow the RISC philosophy.
@GaryExplains · 4 years ago
NEON and SSE are both forms of SIMD, so there is some confusion in your statement.
@AkashYadavOriginal · 4 years ago
@@GaryExplains I've read forums like this one that say many of the ARM instructions are actually CISC-like and many x86 instructions follow RISC, so in practice the difference between RISC and CISC processors is meaningless. news.ycombinator.com/item?id=19327276
@GaryExplains · 4 years ago
CISC and RISC are very high-level labels and can be twisted to fit several different meanings. Bottom line: x86 is CISC without a doubt; it has variable instruction lengths and can perform operations on memory. As I said in the video, since the Pentium Pro all x86 instructions are reduced to micro-ops, and the micro-ops are similar to RISC, but that decoding is an extra step which impacts performance and power. Arm, on the other hand, is a load/store architecture and has fixed instruction lengths. Those are fundamental differences.
@AkashYadavOriginal · 4 years ago
@@GaryExplains Thanks Professor, appreciated.
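A toy C++ sketch of Gary's fixed-versus-variable instruction length point; the encodings here are entirely made up, but they show why a variable-length ISA makes finding instruction boundaries inherently serial, while fixed-width instructions can be carved off in parallel:

```cpp
#include <cstdint>
#include <vector>

// Variable-length (x86-like, hypothetical encoding): the first byte of
// each instruction tells you how long it is, so you cannot even locate
// instruction N+1 until instruction N has been (partially) decoded.
size_t count_variable(const std::vector<uint8_t>& code) {
    size_t n = 0;
    for (size_t pc = 0; pc < code.size();) {
        size_t len = 1 + (code[pc] & 0x0F);  // pretend: low nibble = extra bytes
        pc += len;                           // serial dependency on the decode
        ++n;
    }
    return n;
}

// Fixed-width (Arm-like): every instruction is 4 bytes, so a wide
// decoder can slice off 4, 8, or more instructions per cycle with no
// dependency between the slots.
size_t count_fixed(const std::vector<uint8_t>& code) {
    return code.size() / 4;
}
```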
@yash_kambli · 4 years ago
Meanwhile, when can we expect to see RISC-V ISA-based smartphones and PCs? If someone were to bring out a high-performance, out-of-order, deep-pipeline, multithreaded RISC-V processor, it might give ARM tough competition.
@GaryExplains · 4 years ago
The sticking point is the whole "if someone were to bring out a high-performance, out-of-order, deep-pipeline, multithreaded RISC-V processor" part. The problem isn't the ISA but actually building a good chip with a good microarchitecture.
@shaun2938 · 4 years ago
Microsoft and Apple wouldn't be spending billions on changing to ARM if they didn't feel that ARM could keep up with x86 development while still offering significant power savings, whereas RISC-V is still an unknown. That said, by supporting ARM they would most likely make supporting RISC-V in the future much easier.
@AnuragSinha7 · 4 years ago
@@shaun2938 Yeah, I think compatibility won't be a big issue because they can all design and make it backward compatible.
@autohmae · 4 years ago
RISC-V will take a decade or more. I think it will mostly be embedded devices and microprocessors for now, and it could take a huge chunk of that market. And some are predicting RISC-V will be used in the hardware-security-module market.
@carloslemos3678 · 4 years ago
@@GaryExplains Do you think Apple could roll their own instruction set in the future?
@cuddlybug2026 · 3 years ago
Thanks for the video. If I use Windows on ARM on a MacBook (via Parallels), can I download Windows apps from any source? Or does it have to be from the Microsoft Store only?
@friendstype25 · 4 years ago
The last school project I did was on the development of Elite. Such a cool game! Also, planet Arse.
@dutchdykefinger · 4 years ago
never heard of that planet, is it close to uranus?
@technoman8219 · 4 years ago
@@dutchdykefinger hah
@gaborenyedi637 · 4 years ago
The instruction-to-µop translation does not really count. It inevitably takes some transistors, but currently there are so many transistors that it hardly matters (say 1%). Compared to the total power consumption of a real computer (not a phone, but a notebook or desktop) it is next to nothing. The much higher power consumption comes from many other things, like the huge caches, AVX (yes, you use it a lot; check what a compiler produces!), branch prediction, out-of-order execution, hyper-threading and so on. Some of these can be found in some ARM chips, but mostly they don't have them.
@GaryExplains · 4 years ago
The difference is that those 1% of transistors are used ALL of the time. It doesn't matter how much space they take relatively, but how often they are used. Like putting a small wedge under a door and saying "this door is hard to open, odd since the wedge is only 1% of the whole door structure".
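On the aside above that compilers emit AVX more than you might think: a loop as plain as the following is typically auto-vectorized at -O3 (for instance `g++ -O3 -march=haswell -S saxpy.cpp`, then look for vmulps/vfmadd instructions in the output; the flags and filename are just an illustrative invocation):

```cpp
// saxpy.cpp -- nothing SIMD-looking in the source, yet optimizing
// compilers will usually turn this loop into AVX (or NEON) vector code.
void saxpy(float a, const float* x, float* y, int n) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```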
@skarfie123 · 4 years ago
I can already imagine the videos in a few years: "The Fall of Intel"... Sad...
@WolfiiDog13 · 4 years ago
I don't think it will fall, but it will take a huge hit, especially if the PC market also moves to better architectures.
@stefanweilhartner4415 · 4 years ago
They need to do some RISC-V stuff, because the x86 world will die; that is for sure, it's just a matter of when. They tried to push their outdated architecture into many areas where they completely sucked. In the embedded world and the mobile world nobody gives a fuck about Intel x86; it is just not suited to those markets, and those markets are already very competitive. Intel just wasted tons of money there. At some point they will run out of money too, and then it will be too late.
@WolfiiDog13 · 4 years ago
@Rex Yes, legacy support is the reason I don't think they will fail and will just take a huge financial hit instead; nobody is going to immediately change their long-running critical systems just because the new architecture performs better. When stability is key, you can't switch to something new like that and expect everything to be fine. But you are wrong to think ARM is just "for people": we have had big servers running on ARM for years now, and they work perfectly fine (performance per watt is way better on those systems). Also, quantum computers will not be a substitute for traditional computing; they are more of an addition with very specific applications, and not everyone will take advantage of them. I think we will never see a 100% quantum system working; it will always be a hybrid (I could be wrong, but I also can't imagine how you would make a stable system out of such a statistics-based computing device; you always need a traditional computer to control it).
@autohmae · 4 years ago
@the1919 Coreteks probably already has one ready :-)
@autohmae · 4 years ago
@Rex You did see that ARM runs the #1 Top500 supercomputer, right? And Amazon offers it for cloud servers, and more and more ARM servers are being sold. Not to mention AMD could take a huge chunk of the market from Intel; more and more AMD servers are being sold now. All of this reduces Intel's budget.
@ercipi · 3 years ago
It's like going to a seminar, but for free! Thanks a bunch.
@Flankymanga · 4 years ago
Now we all know where Gary's knowledge of ICs comes from... :)
@Nayr7928 · 4 years ago
Hey Gary, I'd like to know more about how Metal, Vulkan and OpenGL work differently. Can you make a vid about it? Your vids on how things work are very informative.
@TurboGoth · 3 years ago
Ha! Wow. The finer points of how an API is designed, and a comparison of APIs that achieve similar goals but were designed by different groups, is a tough one. But I'd like to add a perspective on this question, since I've toyed with Vulkan and OpenGL.

Really, Vulkan is OpenGL 5: Khronos, the steward of OpenGL, wanted a reset of the API, and moving between OpenGL versions gracefully was getting too awkward, so they reset the API and went with a new name. To bring Metal into the conversation, Vulkan is effectively an open Metal. Metal is Apple's, but Vulkan took the same approach in that both are very rigid about how the software establishes the display configuration up front, so that the runtime conditionals inside the API for coping with changing conditions are minimized. This allows very efficient communication between the application software and the hardware.

Also, Vulkan (and perhaps Metal too; I don't know, I've never actually programmed in it, I'm no Apple fanboy) consolidates the compute API with the graphics API, so you can run number-crunching workloads on the GPU with the same API you use to draw images to the screen. This lets you see the GPU as a more general data-crunching device that merely happens to sit in the same place where the data is ultimately displayed. OpenCL is another API that gives you this capability (to crunch data), but it is a narrower view of GPU capabilities in that you don't do graphics through it.

But Vulkan can be quite complicated because of all the burdensome setup that has to be established to simplify the runtime assumptions, and this can be a huge nuisance for a learner. Using OpenGL as of version 3.3 or so will make your journey easier, and OpenGL ES 2.0 or 3.0 easier still, letting you skip the major OpenGL API drift of the programmable-shaders era, which completely changed the game. Before that, there was something referred to as the "fixed-function pipeline", and that's ancient history.
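As a taste of the "burdensome setup" mentioned above: even the very first step of a Vulkan program, creating an instance, is done by filling out structures. A minimal sketch (real programs add layers, extensions, and proper error handling):

```cpp
#include <vulkan/vulkan.h>

int main() {
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.pApplicationName = "hello-vulkan";
    app.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    info.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS)
        return 1;
    // ...then pick a physical device, create a logical device, queues,
    // a swapchain, render passes, pipelines... all before one triangle.
    vkDestroyInstance(instance, nullptr);
}
```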
@bigpod · 4 years ago
Nanometer lithography numbers don't mean anything when comparing two CPUs from different manufacturers, because they define what they measure as lithography differently; Intel CPUs will have a higher density of components than another manufacturer like TSMC at the same quoted lithography size, and even at one size smaller.
@ILoveTinfoilHats · 4 years ago
Intel's talks with TSMC show that Intel's 10nm is actually similar to TSMC's 6nm (not 7nm as speculated) and Intel's 7nm is similar to TSMC's 5nm.
@l.lawliet164 · 2 years ago
Not true; the power consumption is different. Intel can have the same density but still use more power, because they have bigger transistors.
@bigpod · 2 years ago
@@l.lawliet164 How does that work? If you go by transistor count (that's the density), Intel can put the same number of transistors in the same space at their 10nm lithography as TSMC at 6nm; they just count different components (a component consisting of multiple transistors).
@l.lawliet164 · 2 years ago
@@bigpod That's true, but their transistors are still bigger; that's why you see better power consumption from TSMC even when both have the same density. It means Intel's packing process is better, but their transistors are worse. Performance can be equal, but consumption can't.
@l.lawliet164 · 2 years ago
@@bigpod This actually gives Intel an advantage, because they can get the same performance with a worse component and also more cheaply.
@jF-sp8lo · 3 years ago
Just a quick note on the x86 history: the 8086 was 16-bit and expensive, so in 1979 Intel released a more affordable chip, the 8088, that only had an 8-bit data bus (much cheaper than the 8086) and so was more popular. Also, AMD made 4.77 MHz 8088 clone chips with a 10 MHz turbo (the first PC I ever built), and clone 8086s; clones started way before the 386 clones you listed. I like that you mentioned Cyrix, even though they were so unstable I only built one. AMD64 has to do with patents, not just that AMD was there first.
@meowcula · 3 years ago
Well explained and detailed. I get tired of x86 fanboys saying RISC and ARM are crap, based on either outdated talking points or just plain ignorance. I personally use AMD x86-64 (so, yeah, x86), but naturally that doesn't mean ARM can't be just as good. I'm curious to see what happens when Apple spins it up for the desktop space (not bloody laptops and iPads), as that would be the real proof of the pudding. I'm also impressed with the speed at which people are porting software to ARM; I expected a lot more in the way of barriers to that.
@RonLaws · 4 years ago
It may have been worth mentioning the NEON extension for ARM in more detail, as it is in essence what MMX was for the Pentium: SIMD acceleration for things like decoding h.264 data streams, among other uses (relatively speaking).
@az09letters92 · 4 years ago
NEON is more like Intel's SSE2. MMX was pretty limited; you couldn't mix in floating-point math without extreme performance penalties.
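The NEON/SSE correspondence is easy to see at the intrinsics level; both intrinsics below are real, while the wrapper function is just an illustration:

```cpp
// Add four floats at once on either ISA.
#if defined(__ARM_NEON)
  #include <arm_neon.h>
  // One NEON instruction adds four packed floats.
  float32x4_t add4(float32x4_t a, float32x4_t b) { return vaddq_f32(a, b); }
#elif defined(__SSE__)
  #include <xmmintrin.h>
  // The SSE equivalent on x86.
  __m128 add4(__m128 a, __m128 b) { return _mm_add_ps(a, b); }
#endif
```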
@CaptainCaveman1170 · 4 years ago
Hi, great video. I'm wondering, if you ever run low on content, could you cover the very interesting history of the Datapoint 2200? I know pretty much everybody would disagree with this assertion, but in my mind it is the first fully self-contained "personal computer".
Do you think Apple's new Mac ARM SoCs are based on ARM's Neoverse N1, or better yet the E1 architecture? Because all the ARM chips before were based on cluster sizes up to a maximum of 4 cores. The E1 can have a cluster of up to 8 cores; the N1 can have clusters from 8 cores up to a maximum of 16 cores.
@GaryExplains · 4 years ago
Apple's silicon isn't based on any designs from Arm; it is Apple's own design. Also, since DynamIQ, all Cortex CPUs can have 8 cores per cluster.
@Agreedtodisagree · 4 years ago
Great job Gary.
@afriyievictor · 4 years ago
Thanks Gary, you are my favorite IT teacher.
@DRFRACARO44 · 4 years ago
Which is better to learn about if you're trying to become a malware analyst?
@deanhankio6304 · 4 years ago
What is the program used for the laser sight?
@TurboGoth · 3 years ago
RE: "I used to work for DEC". You're legit old-school! And here I figured all this background stemmed from lots of trudging through old readings from the past! Wow. Well, I really do appreciate your easy discussion of all these topics that span chip architecture. And I would ask for some discussion of the Alpha (starting with the 21064), to press you on your unique background. That architecture certainly must have made Intel sweat given the incredible speeds. LITERALLY incredible, as in, I literally did not believe the numbers I was seeing. I think they were doing 333 MHz while Intel was at 50 or so; I remember dismissing it as impossible. And by the time I might have been able to gain enough info about it, I had already dismissed it as irrelevant when I learned that its different ISA made it an apples-and-oranges comparison with x86. I only gained the perspective to appreciate the Alpha ISA long after Alpha had become a historical footnote. I should mention the latest literature in which I've crossed mention of the Alpha: the RISC-V architecture book, which mentions the goal of dodging the undesirable fate of an abandoned architecture and the nuisance/expense for everyone invested in the platform.
@TurboGoth · 3 years ago
I'm sorry, I got too excited and lost the thread of a sentence. I *would* ask... EXCEPT I know it would be an irrelevant bore to most of your audience, since it *IS* just a historical footnote.
@danielho5635 · 4 years ago
3:03 Small correction -- the original Itanium actually shipped with x86 backward compatibility in hardware, but it was too slow to be useful; later a software emulation layer (IA-32 EL) replaced it, and it was still too slow.
@SeetheWithin · 4 years ago
Very nice video, full of details! Keep them coming!
@-zero- · 4 years ago
Interesting video. BTW, can you make a video series on the workings of a CPU? Maybe you made a video about binary, but you never went into full depth on how the adders work to make a "CPU".
@GaryExplains · 4 years ago
Did you watch the videos in my "How Does a CPU Work" series??? kzbin.info/aero/PLxLxbi4e2mYGvzNw2RzIsM_rxnNC8m2Kz
@-zero- · 4 years ago
@@GaryExplains Yes, I have watched them. I would like to learn in more depth what goes on inside the CPU, like how transistors and adders process the binary and work their way up to assembly language.
@GaryExplains · 4 years ago
That is hardware circuit design and not something I particularly enjoy; I am more of a software person. You would need to find a hardware and/or logic channel for that kind of thing.
@GaryExplains · 4 years ago
Yes, Ben Eater's channel is a good place to go for that stuff.
@ashfaqrahman2795 · 4 years ago
You can take a course on Coursera called "NAND to Tetris". Basically, you build a 16-bit CPU using NAND gates (Part 1) and then write minimal software to bring the hardware to life using a custom-made programming language (Part 2).
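In the spirit of NAND to Tetris, here is a small sketch of the adders question from this thread: a full adder is just XOR/AND/OR on single bits, and chaining the carry through 32 of them gives a (slow but correct) ripple-carry adder:

```cpp
#include <cstdint>
#include <cstdio>

struct Bits { uint32_t sum, carry; };

// One-bit full adder built only from logic "gates".
Bits full_add(uint32_t a, uint32_t b, uint32_t cin) {
    uint32_t s = (a ^ b) ^ cin;              // sum bit
    uint32_t c = (a & b) | (cin & (a ^ b));  // carry-out bit
    return {s, c};
}

// 32-bit ripple-carry adder: 32 chained full adders, carry rippling along.
uint32_t add32(uint32_t a, uint32_t b) {
    uint32_t result = 0, carry = 0;
    for (int i = 0; i < 32; ++i) {
        Bits s = full_add((a >> i) & 1, (b >> i) & 1, carry);
        result |= s.sum << i;
        carry = s.carry;
    }
    return result;
}

int main() { printf("%u\n", add32(20, 22)); }  // prints 42
```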
@official_ashhh · 1 year ago
Brilliant explanation of x86 vs RISC architectures.
@apivovarov2 · 4 years ago
What shares should we buy?
@pilabs3206 · 4 years ago
Thanks Gary.
@elvinziyali5184 · 4 years ago
Excuse me, I didn't get the point at the end. Is ARM going to design for PCs, or is Apple going to scale it themselves? Thanks for the great video!
@Haldered · 4 years ago
Apple has been designing its own ARM-based chips for iOS for a while now, and will transition macOS and Mac products to run on its own Apple-designed ARM chips. These chips won't be available for non-Apple products, obviously. Meanwhile, AMD is outperforming Intel in the rest of the consumer market, and gamers especially are abandoning Intel. Intel is a big company though, so who knows what the future holds.
@professoraarondsouza5255 · 2 years ago
GARY EXPLAINS QUITE WELL!
@JuanGarcia-lh1gv · 4 years ago
Great video! How powerful do you think Apple's GPUs are going to be? Do you think they're going to compete with a 2080 Ti?
@seeker9145 · 4 years ago
Nice explanation. However, I would like to know why Apple had to base its processor architecture on ARM if it was designing its own custom architecture. Is it because they needed the ARM instruction set? If yes, couldn't they make their own instruction set? Or is that a very long process?
@igorthelight · 4 years ago
It's much easier to use an already existing architecture than to develop your own.
@thogameskanaal · 3 years ago
Also, Nintendo has had a long history with ARM, starting in 2001 with the GBA rocking an ARM7TDMI CPU (an ARMv4T design, despite the name) all the way up to the Switch, which runs on Cortex cores. I think this is one of the reasons they were kings when it comes to backwards compatibility (though the Switch kind of halted that, and they would never allow games from two generations prior to run natively, even though the 3DS is perfectly capable of running GBA games.)
@MarthaFockerMF · 3 years ago
Thanks for the info, dude; it's comprehensive, but a little heavy on the history part. Anyway, it's a great vid! Keep it up!
@achillesmichael5705 · 5 months ago
Finally someone who knows his stuff
@bigpod · 4 years ago
Well, if a RISC CPU doesn't have the instructions necessary to do the job, the same job can take 3x or more time to complete. On CISC, more instructions are built in to begin with, and they can be more easily emulated, and microcode can allow more of them to be loaded.
@jonnypena7651 · 4 years ago
ARM is already preparing SVE, a new set of high-performance instructions; also, Apple's workaround is to use dedicated accelerators.
@bibekghatak5860 · 3 years ago
Nice video, and thanks to Mr Gary for the enlightenment.
@GaryExplains · 3 years ago
Glad you enjoyed it
@Steven178p · 3 years ago
I just realized I wasn't subscribed. I'm sorry Gary, I've been watching for years.
@TheZenytram · 3 years ago
Why can't a CPU add bits in memory? Is there a physics barrier to that, or is it just that we didn't do it back when it wasn't possible, so we don't do it now because we've already been doing it the other way for years?
@trendyloca2330 · 4 years ago
Thanks for the whole story, that was awesome.
@GaryExplains · 4 years ago
Glad you enjoyed it!
@yepo · 4 years ago
4:06 I just love how the UK's BBC was involved with computer literacy. Imagine if a major TV network in the USA got involved and helped kids to code.
@chomskyhitchens · 4 years ago
Yeah, that was back in the day when the BBC was a brilliant service to the country. Now it is a pathetic shadow of its former self, pandering to the lowest common denominator and nonsensical identity politics.
@xrafter · 4 years ago
@@chomskyhitchens FACTS
@scality4309 · 4 years ago
@@chomskyhitchens In broadcast land they still set the standards, I think.
@steelmm09 · 4 years ago
Excellent, Gary, thanks for the great videos.
@GaryExplains · 4 years ago
Glad you enjoyed it
@hypoluxa · 4 years ago
Great overview and explanation of the current situation. Very helpful.
@danimoosakhan · 4 years ago
Can Intel switch to ARM as well? It seems like everyone is moving to ARM nowadays.
@junaidsiddiquemusic · 3 years ago
Finally, someone knowledgeable ❤️ Thanks for sharing this information with us.
@karancastelino5714 · 4 years ago
Great video Gary
@GaryExplains · 4 years ago
Thanks 👍
@Ace-Brigade · 17 days ago
So if I follow this logic correctly, you could take x86 machine code and translate it down (or decompile it, as it were) to a RISC instruction set before it runs? Meaning, couldn't I just compile my x86 software to a RISC instruction set and have complete backwards compatibility with the software? Sure, it would take extra compilation up front while the application is being built, but with how fast processors are today I can't imagine that would take more than a few seconds for a large application.
@GaryExplains · 16 days ago
Yes, that is very roughly how the x86 emulators work on Apple Silicon Macs and for Windows on Arm laptops (i.e. Copilot+ laptops).
@Ace-Brigade · 16 days ago
@GaryExplains Do they do that at compile time or at runtime? Emulators typically do it at runtime, right? I would imagine the overhead would be pretty serious.
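On the compile-time versus runtime question: real emulators such as Apple's Rosetta 2 do both, translating ahead of time where they can and just in time where they must (e.g. for JIT-generated guest code). The core runtime idea is a translation cache, sketched here as a self-contained toy in which the guest program and all addresses are invented:

```cpp
#include <cstdint>
#include <cstdio>
#include <functional>
#include <unordered_map>

// Hypothetical toy: a translated block is host code that, when run,
// returns the next guest address to execute (or 0 to halt).
using GuestAddr = uint64_t;
using HostBlock = std::function<GuestAddr()>;

// Stand-in for the expensive part: decoding guest (x86) instructions
// and emitting host (Arm) code. Here it just fabricates a block.
HostBlock translate_block(GuestAddr pc) {
    printf("translating block at %#llx (slow, done once)\n",
           (unsigned long long)pc);
    return [pc]() -> GuestAddr { return pc < 0x3000 ? pc + 0x1000 : 0; };
}

// The dispatch loop every dynamic binary translator has in some form:
// translate each block on first sight, cache it, and reuse it forever
// after, so hot code pays the translation cost only once.
void run(GuestAddr pc) {
    std::unordered_map<GuestAddr, HostBlock> cache;  // translation cache
    while (pc != 0) {
        auto it = cache.find(pc);
        if (it == cache.end())
            it = cache.emplace(pc, translate_block(pc)).first;
        pc = it->second();  // execute translated block, get next guest pc
    }
}

int main() { run(0x1000); }
```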
@Edward135i · 1 month ago
3:58 I always find it interesting that in the 1980s the UK had its own computer industry, built on its own home-grown architectures, completely separate from the American giants of the time like Intel and IBM.