CPU? GPU? This new ARM chip is BOTH

  253,760 views

Coreteks

A day ago

Comments: 749
@MarcoGPUtuber 4 years ago
A64FX.....Why have I heard that name before? Oh yeah! Athlon 64 FX!
@arusenpai5957 4 years ago
Yeah, the name reminds me of that too XDD
@pflernak 4 years ago
So that's where the déjà vu feeling came from
@zM-mc2tf 4 years ago
What goes round...
@jean-pierreraduocallaghan8422 4 years ago
I knew I'd seen that somewhere before but I couldn't put my finger on it! Thanks for the reminder! :)
@xxdizannyxx 4 years ago
FX you...
@billykotsos4642 4 years ago
RIP SATORU IWATA. A BRILLIANT AND UNIQUE MIND. His father never wanted him to pursue a games career.
@mix3k818 4 years ago
There's only one video that comes to my mind at this point. kzbin.info/www/bejne/oGPHqYtrea54g7M R.I.P. to both.
@masternobody1896 4 years ago
Intel is better
@perhapsyes2493 4 years ago
And I'm glad he didn't listen.
@MissMan666 4 years ago
@@masternobody1896 Intel is nr. 2.
@nelsoncabrera6464 4 years ago
Nice Haiku
@MrBearyMcBearface 4 years ago
This video sounds more like a nonfiction crime TV show than something about processors.
@rcrotorfreak 3 years ago
can u share ur pic with us?
@kcvriess A year ago
You make me laugh but at the same time I'm annoyed. This dude has a wealth of knowledge and insights, but he's HORRIBLE to listen to.
@suibora 4 years ago
17:28 Sure, streaming today's data would be instant with tomorrow's technology, but what about tomorrow's data? The extinction of load times is far away. More powerful computers? That will just be an excuse to use more detailed textures :'D
@Mil-Keeway 4 years ago
loading nowadays is no longer limited by file size, it is limited by bad code. NVMe SSDs do many GiB/s, no game asset needs more than the blink of an eye to load. Sadly, developers have some of the fastest possible hardware available (especially in big-budget games and programs), so they have no need to optimize. Running the same code on an average PC then makes it unusable.
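As rough arithmetic in support of this point: at NVMe-class bandwidth, raw asset size stops being the load-time bottleneck. A tiny sketch with illustrative numbers (the 5 GB/s figure is a typical PCIe 4.0 NVMe class speed, not a measurement of any specific drive):
```python
# Illustrative load-time arithmetic: at NVMe-class bandwidth, raw asset size
# stops being the bottleneck long before "bad code" does. Numbers are examples.
nvme_bandwidth = 5.0e9  # bytes/s, a typical PCIe 4.0 NVMe SSD class figure
for asset_gb in (0.5, 2, 8):
    t = asset_gb * 1e9 / nvme_bandwidth
    print(f"{asset_gb:4.1f} GB asset -> {t:5.2f} s at {nvme_bandwidth/1e9:.0f} GB/s")
```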
@redrock425 4 years ago
The biggest issue is poor telecoms infrastructure. Even in the UK it varies massively in speed, they're already trying to save cost and not put in full fibre.
@pflernak 4 years ago
@jayViant Talking of holograms: kzbin.info/www/bejne/jGi5YWiKaambqLc
@635574 4 years ago
@@Mil-Keeway Compression and bad structuring of data makes for terrible load times even on high-end NVMes. Games before the next gen weren't optimized for this, maybe except Star Citizen and Arkham Knight.
@paramelofficial9100 4 years ago
Just compare it to just 10 years ago when some websites would take years to load half the time, or 10 years before that when printing a jpeg was faster than viewing it on a webpage. We're really stretching conventional processor capabilities thin, but there will definitely be some fundamental shift in the industry that keeps the performance train chugging along. Could be a beefed-up ARM chip, desktop chips made from different materials (silicon ain't the most performant, it's the most flexible) or something completely different if synthetic neurons or quantum computers have an early breakthrough. Internet bandwidth is also constantly improving. Honestly the only thing slowing us down is companies milking their current technologies like crazy. Let's all thank AMD's Threadripper for shoving 32 inefficient cores into prosumer PCs and speeding up global warming lol. And let's not forget Intel's tiny generational improvements. There are certain solutions which could be implemented pretty soon, but who has time to research other options when they have to pump out 3 useful and 7 useless chips a year? TL;DR: Tech seems to be improving faster than consumer needs because it never improves fast enough for professional needs, driving researchers to find new and better solutions. But capitalism's a bit of a bitch sometimes and is getting in the way.
@lawrencedoliveiro9104 4 years ago
7:55 Not quite. Supercomputing applications actually have limits to their parallelism. There is also a need for heavy communication traffic between cores. Hence the fast interconnect, which is a major component of the build cost of a super. For an example of a massively parallel application which doesn’t need such heavy interprocessor communication, consider rendering a 3D animation. The renderfarms that are deployed for such an application are somewhat cheaper than supercomputers.
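To make the parallelism limit concrete, here is a minimal Python sketch of Amdahl's law; the serial fractions and core counts are illustrative (48 matches one A64FX node's compute cores), not measurements of any real workload:
```python
# Minimal sketch of Amdahl's law: the speedup N cores can give when a
# fraction s of the work is inherently serial. Numbers are illustrative.
def amdahl_speedup(n_cores: int, serial_fraction: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# s ~ 0: embarrassingly parallel (a renderfarm); s = 0.1: tightly coupled code.
for s in (0.0, 0.01, 0.1):
    for n in (48, 1_000, 100_000):  # 48 = one A64FX node's compute cores
        print(f"serial={s:6.2%}  cores={n:7d}  speedup={amdahl_speedup(n, s):9.1f}")
```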
@lawrencedoliveiro9104 4 years ago
8:53 That doesn’t make sense. “Teraflops” is a unit of computation (“flops” = “floating-point operations per second”), not of data transfer. Data transfer rates would be measured in units of bits or bytes per second.
@blackdoveyt 4 years ago
Yeah, A64FX has 1TB/s theoretical bandwidth and 840GB/s of actual bandwidth.
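Those two numbers allow a quick roofline-style sanity check: dividing the ~3 TFLOPS peak mentioned at 8:55 by ~1 TB/s of bandwidth gives the chip's "machine balance" in FLOPs per byte. A minimal sketch using only the figures cited in this thread:
```python
# Back-of-envelope machine balance from the numbers in this thread:
# ~3 TFLOPS peak (video, 8:55) and ~1 TB/s theoretical HBM2 bandwidth.
peak_flops = 3.0e12   # floating-point operations per second
peak_bytes = 1.0e12   # bytes per second (840 GB/s measured, per the reply above)

balance = peak_flops / peak_bytes  # FLOPs the chip can execute per byte moved
print(f"machine balance: {balance:.1f} FLOPs per byte")

# Kernels below this arithmetic intensity are bandwidth-bound (roofline model).
# STREAM triad a[i] = b[i] + k*c[i]: 2 FLOPs per 24 bytes of FP64 traffic.
triad = 2 / 24
print(f"STREAM triad: {triad:.3f} FLOPs/byte -> bandwidth-bound, as HPC codes often are")
```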
@MrTrilbe 4 years ago
So, ARM, AMD and Fujitsu teamed up for a super APU that's in some ways more epic than EPYC... I will call this collab FARMeD!
@absoluterainbow 3 years ago
Proof?
@MrTrilbe 3 years ago
@@absoluterainbow It was a tongue-in-cheek summary of this video, with a pun at the end
@wajihbleik436 4 years ago
Thank you for doing what you're doing. I learn a lot from your videos.
@micronyaol 4 years ago
Can't imagine a Japanese chip without a TOFU interface
@glasser2819 4 years ago
Internet Explorer has the CHAKRA engine (as shown in TaskMgr), and if it was coded in Germany it would have a SAUSAGE cache pipe... LOL
@IngwiePhoenix 4 years ago
@@glasser2819 No, it would have a Bierfass (beer keg) pipeline ;) I am German, I should know. ^.^
@minitntman1236 4 years ago
The driver of the AE86 was a tofu delivery man
@prashanthb6521 4 years ago
SUSHI coming up next.
@matthewcalifana488 4 years ago
They make the best capacitors.
@TechKerala 4 years ago
Dedicated 20 minutes of my life... Worth it as always.
@adnan4688 4 years ago
Absolutely!
@Soul-Burn 4 years ago
Only dedicated 10 minutes. x2 speed is great.
@johnnyxp64 4 years ago
20:36 Actually for me... cause I wanted to see my name in the credits... 🤣😝
@AwesomeBlackDude 4 years ago
Always guarantee when you watch a (Jim) #AdoredTV video ❎
@miguelpereira9859 4 years ago
@@johnnyxp64 Being a Coreteks patron means having big pp
@DarthAwar A year ago
If they utilise the newest HBM version instead of traditional DRAM for cache, it would vastly increase processing speed and reliability, but also dramatically increase production costs
@Battlebaconxxl 4 years ago
What you describe sounds like a modern version of the PS3's Cell chip.
@FrankHarwald 4 years ago
Kind of, yes! The PS3 used several DSP-like processors connected onto a ring bus - except that rings, as well as other pure bus-like topologies, while being the simplest way to interconnect multiple regions on a chip, have several inherent limits which restrain this kind of topology to a limited amount of locally adjacent cells, which is why the kind of processor presented here not only has one ring, but a hierarchy-of-rings topology. See this paper as an example of examining & describing different hierarchical ring topology variants as on-chip interconnection networks, also called NoC = "network on chip", "Design and Evaluation of Hierarchical Rings with Deflection Routing": pages.cs.wisc.edu/~yxy/pubs/hring.pdf This has been a hot research topic in high-performance & scientific computer engineering for several years now. Another really old, formerly rejected but increasingly interesting & related research topic is "computing-in-memory", also "processing-in-memory" or "near-memory processing", because the cost of transferring data between processing units & memory is, as mentioned in this video, increasingly becoming a limiting factor; see "Computing In-Memory, Revisited": ieeexplore.ieee.org/document/8416393 but also semiengineering.com/in-memory-vs-near-memory-computing/ & while the recent emergence of array processors like Google's tensor cores & other forms of neuromorphic processing units is clearly at least partly due to that, this problem isn't limited to applications using AI but applies to a much broader category of problems - the "bandwidth wall" is a thing.
@SerBallister 4 years ago
@@FrankHarwald One of the biggest headaches of working with the Cell BE was the relatively tiny amount of accessible memory each SPU had (256 KB IIRC). This meant you couldn't use a lot of general-purpose algorithms and instead had to modify them to be streamable with high locality of reference - for some algorithms it just isn't possible to optimise in such a way.
@FrankHarwald 4 years ago
@@SerBallister Indeed, but modifying algorithms so that they run with a high amount of locality is something that you'll have to do for all data-intensive algorithms anyway - no matter how much of it is done automatically, profiler-assisted or by hand - regardless of what the underlying architecture is, because while all shared-memory architectures will start hitting the bandwidth wall at some point, distributed-memory architectures will be the only way to circumvent these limitations. & yes, this also means that algorithms which access a lot of memory from the same chunk in a purely serial way will either have to be modified to access data in parallel from multiple chunks (if possible) or remain bandwidth-limited (if this is acceptable or if the algorithm is inherently serial).
@SerBallister 4 years ago
@@FrankHarwald You should aim for that, yeah. The SPU local memory presented an addressing barrier instead of a cache miss like on a multicore; all data has to be present in that block. Take a PS3 game for example. For some systems, like physics and pathfinding, it can be hard to compress your game world into 256 KB, so the PPU had to work on that stuff, and you then had the headache of pipelining the output of that into the SPU (e.g. animation) if you want to avoid stalls. Interesting chip, but it can be hard work; task scheduling and synchronisation are also not straightforward. I would prefer working with modern desktop multicores with shared memory.
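The "streamable with high locality" rewrite described above can be sketched in a few lines. This is a toy Python illustration of tiling a computation so it fits a 256 KB local store; the three-buffer accounting is a simplification for the sketch, not how SPU DMA actually worked:
```python
import numpy as np

# Toy version of the "streaming with high locality" rewrite discussed above:
# process a large array in tiles small enough to stage in a 256 KB local store
# (budgeted here as three FP64 buffers: in, out, scratch).
LOCAL_STORE = 256 * 1024
TILE = LOCAL_STORE // (3 * 8)  # elements per tile

def scale_streamed(src: np.ndarray, k: float) -> np.ndarray:
    out = np.empty_like(src)
    for start in range(0, src.size, TILE):
        tile = src[start:start + TILE]      # "DMA in": stage a tile locally
        out[start:start + TILE] = tile * k  # compute touches only the tile
    return out

data = np.arange(1_000_000, dtype=np.float64)
assert np.allclose(scale_streamed(data, 2.0), data * 2.0)
```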
@thurfiann 4 years ago
of course it is
@m_schauk A year ago
Damn, this video has aged well... so good. Wish more videos like this were made and popular on KZbin.
@chafacorpTV 4 years ago
I once heard that HAL got its name by taking IBM's and shifting the characters, because they saw themselves as "one step ahead of IBM". Seeing this, I truly believe it.
@miketaratuta 4 years ago
Shifting them back, not forwards
@kipronosoi 4 years ago
Woot!! Coreteks is back, feels like it's been forever...
@StopMediaFakery 4 years ago
Don't you just love their Masonic logo? The honeycomb hexagon also known as the Cube, a reference to Saturn and the system we live in. Just so happens to also be in the beehive colours.
@chuuni6924 4 years ago
If you haven't already, you may want to look into RISC-V's upcoming Vector extension. It does all that SVE does, but better.
@Toothily 4 years ago
Better how?
@chuuni6924 4 years ago
@@Toothily There are a couple of independent things. For one thing, there's no architectural upper limit to the number of vector lanes. Another thing is that the dynamic configuration of the vector registers allows better utilization of the register file (for example, if only a couple of vector registers are used, they can subsume the register storage of the other registers to get much, much wider vectors). Also, while that part of the specification is still a bit up in the air, there is an aim to provide for polymorphic instructions based on said dynamic configurations, which means that it's far easier for it to adopt new data types with very small architectural changes. They also aim to provide not only 1D vector operations, but even 2D or 3D matrix operations, which could provide functionality similar to eg. nVidia's tensor cores, except in a more modular fashion. There are more examples too, but I think this post is running long enough as it is. I recommend reading the specification.
@Toothily 4 years ago
@@chuuni6924 That sounds really cool spec-wise, but do they have working silicon yet?
@chuuni6924 4 years ago
@@Toothily The spec isn't even finalized yet, so no, there's definitely no silicon yet. However, the Hwacha research project is being carried out in parallel and I know there's a very strong connection between it and RV-V, and I believe they have working silicon in some sense of the word. It's a research project rather than a product, however, so not in the ordinary sense of the word.
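For readers wondering what "no architectural upper limit to the number of vector lanes" buys you: the RVV spec's strip-mining pattern lets one binary run on any vector width. Below is a minimal Python emulation of that pattern; HW_LANES and the vsetvl function are stand-ins for illustration, not the real instruction or any real hardware value:
```python
# Emulation of the vector-length-agnostic "strip-mining" loop from the RVV
# spec: code asks how many elements the hardware grants per iteration instead
# of hard-coding a SIMD width. HW_LANES and vsetvl are stand-ins, not real APIs.
HW_LANES = 8  # a wider implementation would simply report a larger number

def vsetvl(remaining: int) -> int:
    """Vector length the 'hardware' grants for this iteration."""
    return min(remaining, HW_LANES)

def vec_add(a, b):
    out, i = [], 0
    while i < len(a):
        vl = vsetvl(len(a) - i)  # the same binary adapts to any vector width
        out.extend(x + y for x, y in zip(a[i:i + vl], b[i:i + vl]))
        i += vl
    return out

print(vec_add(list(range(20)), list(range(20))))
```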
@mrjean9376 4 years ago
Really want to know what you guys think about this computer compared to the Nvidia DGX A100. Does it have equal performance or something? I'm really excited to know. Thx :)
@seylaw 4 years ago
And ARM has already announced the SVE2 extension, which is a replacement for their NEON instruction set (for home/multimedia usage, instead of SVE1 which is tuned for HPC workloads). Interesting times are ahead, and I can't wait for ARM to storm the PC desktop...
@raymondobouvie 4 years ago
I am no engineer in any shape, but with Coreteks videos I am getting such a digestible form of explanation that teaches me, even though I am 37 years old :) Thank you so much!
@mrlithium69 4 years ago
37 is not too late. God willing, you will be learning well past 37 and even at 73.
@Seskoi 4 years ago
I'm 101 years old and still learning!
@IARRCSim 4 years ago
@@Seskoi in base ten?
@raymondobouvie 4 years ago
@@IARRCSim they opened schools on Mars - finally)
@The_Man_In_Red 4 years ago
@@Seskoi I'm 1,009,843,000 seconds old and I push myself every nanosecond to learn more and more
@rickbhattacharya2334 4 years ago
Man, your videos always inspire me to read more computer architecture. I have computer architecture as a subject in my bachelor's and I don't like it, but your videos always inspire me to read more.
@datsquazz 4 years ago
Those chips are cool and all, but did you see THIS? 18:04 That truck has FOUR WHEEL STEERING, now THAT is innovation
@onebreh 4 years ago
They have been on the roads for years...
@carholic-sz3qv 4 years ago
There is also steering on the wheels at the back too; look at this Tatra video kzbin.info/www/bejne/i17Ym6OshMqsms0
@Mil-Keeway 4 years ago
lots of 3-axle garbage trucks in europe have frontmost and rearmost steering, pivoting around the middle axle basically.
@keubis2132 4 years ago
@@onebreh Pog, really?
@koolyman 4 years ago
You call that innovation? Get back to me after you google 'Spork'
@DanafoxyVixen 4 years ago
The comparison with the dual Intel Xeons is a little silly now that they have already been blown out of the water by EPYC... still an interesting CPU tho...
@stefangeorgeclaudiu 4 years ago
I think people are going to get surprised when AMD announces Milan this year. Also, the Frontier 1.5 exaFLOPS supercomputer will use a CPU chiplet + 4 GPU chiplets + memory in the same AMD chip.
@thomasjensen1590 4 years ago
The question is, what is more EPYC?
@BrianCroweAcolyte 4 years ago
I agree. With how many problems Intel has been having for the last 4-5 years, stagnating on 14nm, comparing anything besides other x86 CPUs to Intel feels disingenuous. If they compared this ARM chip to the actual current x86 performance leader (a 2U Epyc Rome server with 128 cores) it would be beaten by at least 2-3X. Maybe performance per watt would be better on the ARM chip, but the performance density would almost certainly be unbeaten.
@aminorityofone 4 years ago
@@BrianCroweAcolyte This isn't the first time ARM was expected to be dominant. It happened in the 90s as well. In fact, Microsoft made Windows NT compatible with ARM back then. There was big promise that RISC CPUs would take over the world. Well, that didn't happen, and I still don't think it will happen today or in the future.
@defeqel6537 4 years ago
@@aminorityofone ARM will probably continue to dominate the market where chips are designed for purpose (unless RISC-V takes that market), mostly because x86 isn't licensed to anyone new.
@lazadafanboyz7970 3 years ago
We need to rethink how we use computers. ARM is the future; x86 processors are flawed, running at 125 watts while ARM only uses 5 watts with equal computing power. The only problem is we need to reprogram everything
@TechdubberStudios 4 years ago
Loved this video so much, watched it twice in a row.
@lawrencedoliveiro9104 4 years ago
5:36 Actually, the plural of “die” is “dice”. Yes, those dice. As in the phrase “the die is cast”, which means instead of throwing several dice, you have thrown just one, and must stand by whatever it shows.
@ehp3189 4 years ago
"The die is cast" comes from Middle High German/English Gutenberg-era printing. The printed page came from a single die cast, which is why it was slow and expensive (though cheaper than the monks drawing each page by hand). This allowed Bibles to be printed, helped people learn how to read, and brought education to the people.
@lawrencedoliveiro9104 4 years ago
@@ehp3189 That can't be right. Gutenberg's innovation was the invention of movable-type printing, as in having separate pieces for each letter that were assembled to make up a page. Printing an entire page from a single block was a technique that had been invented by the Chinese centuries earlier.
@ehp3189 4 years ago
@@lawrencedoliveiro9104 Granted, but the expression goes more towards the assembled typeset being cast together in a block, and any changes to that during a printing run were not to be allowed. It was difficult enough that breaking apart the group and then reassembling it for one letter change was more expensive than it was worth. At least that is my understanding. I liked philology in college, but they only offered one class...
@arthurcuesta6041 4 years ago
You're finally back. Thanks again for the amazing work.
@斯溫克 4 years ago
Congratulations on 100,000 subscribers!! I love your videos and I came a long way in computer knowledge because of you. I hope you have a great year! Love you from EU Si ♥️😊
@valentinoesposito3614 3 years ago
The Japanese make the most exotic CPUs
@fanitriastowo 4 years ago
I like that progress bar for the ads
@N0N0111 4 years ago
7:15 Finally the memory bottleneck is being somewhat addressed.
@josephfrye7342 2 years ago
This is another example that you don't need Nvidia or a separate graphics card.
@Hazemann 4 years ago
This advanced Fujitsu A64FX chip variation will be in a Nintendo Switch revision in the future, as a complement to the ARM CPU and Nvidia GPU
@deoxal7947 4 years ago
Where can I read about that?
@lawrencedoliveiro9104 4 years ago
11:39 No, not all of the other memory types commoditized eventually. I can think of two things that Intel bet on that flopped: bubble memory and RAMBUS.
@peacenaga7725 4 years ago
I stumbled upon your channel when viewing your interview of Jon Masters and have binge-watched 3 episodes, losing sleep. Kudos. I am learning a lot! Thank you. I haven't binge-watched in a long time.
@tipoomaster 4 years ago
"The future is Fusion", the slogan was just 12 years ahead of the technology
@kentaaoki8064 4 years ago
17:01 Reseat that RAM!
@lahmyaj 4 years ago
Kenta Aoki lol I noticed that too
@miguelpereira9859 4 years ago
Oof
@nowonmetube 4 years ago
I just hope it's not the Cell processor all over again. It seems that Toshiba always tried to invent a new architecture (going as far as inventing new technology that's faster than silicon-based chips), but it never takes off.
@karapuzo1 4 years ago
Hear, hear. Remember Cell and Itanium: great on paper, not so much in real life. Always keep in mind the baggage and momentum of established software and industry.
@desjardinspeter1982 4 years ago
Your video presentations are so well done. I always look forward to watching them! Such an interesting product. Thank you for covering this!
@SelecaoOfMidas 4 years ago
The future with ARM processors looks great with this one. Interesting that Nintendo has an indirect connection to this too. One could imagine their NSO servers running on top of an A64FX processor, maybe a future console? 🤔
@CookyMonzta 4 years ago
Apple is planning to employ ARM chips in their machines. When last I heard, they were going to start with their MacBooks, presumably with their custom A14 and maybe the A15 in the future. One must expect that, by the time they move their custom ARM architecture to their desktops and iMacs, they'll have the A16 ready. Would it be too far-fetched to assume that we'll see this A64FX (or its successor) in a Mac Pro? And what of RISC-V? Will they get into the desktop/laptop game? Or is there something already out there that I haven't seen yet?
@Speak_Out_and_Remove_All_Doubt 4 years ago
When are we going to see desktop CPUs with 3D-stacked memory, and realistically how much memory are they going to have? I can't see 32GB of system memory fitting on a normal desktop die, but maybe I'm wrong. Or will it just be a small amount of on-die memory used like an extra cache layer, while you still have your normal DDR system memory? Also, I can't get my head around heat dissipation in this new 3D-stacking future. Given that heat is the biggest issue in 2D chips, and atm we can cool directly on the only layer there is, 3D stacking just doesn't sound like it's going to work on anything other than ultra-low-powered chips to me. I guess you could try to have some kind of micro coolant channels flowing between layers, or maybe thin sheets of graphene, but this will be expensive and complex to integrate. Plus, if you have to go from CPUs running at 5GHz to them running at 1.8GHz, I can't see the benefits of closer system memory being enough to overcome this inherent drawback.
@mrlithium69 4 years ago
True story. Yes, we will see 3D-stacked CPUs with some kind of RAM on them (my guess is 2023). We've seen GPUs already that have large HBM stacks "on-chip", and we will see them integrated even more "on-die", since Intel has invented its "Foveros" tech to avoid the interposer layer and TSVs. It does seem like the RAM packages would be almost as large in die size as the CPU (judging by the HBM GPUs). Also, remember the 128MB eDRAM L4 cache on the integrated GPU of Broadwell 5775C (2015)? Granted, it was for the iGPU, but that was an early proof of concept of CPU+RAM, and it took up a very large real estate of that chip. It wasn't market-friendly, the proportions and cost were all wrong for mass-marketability, but it was interesting to say the least. Right now it's probably easy enough to add a stack of 1 or 2 modern 16-gigabit DRAM dies (= 2 or 4 gigabytes); that's just my speculation. They've also been researching that micro-coolant-channel science as a real possibility lately. And you're right, heat in the 3D stacks is the main issue. Everyone's trying to stack the best way possible, it's just a question of how. Then again, besides the real science of it, there are commercial viability concerns. The long-standing separation between the CPU and RAM markets means we get good prices on both. RAM as a commodity as we know it would be at risk if the 2 main CPU makers were integrating it onto a CPU. The move away from the "memory modules" we all know and love would be too hard to do. Plus, think Apple laptops and planned obsolescence; they already solder the RAM on the motherboard, I'm sure they'd love it soldered right on the CPU. So we can buy $1000 CPU+RAM combo chips and throw them away in 3 years when Chrome starts using 128GB of RAM. I'd be more convinced if it was a small L4 cache idea, leaving system RAM alone.
@johannajohnson310 4 years ago
He does NOT want to mention the downfalls. We can barely keep heat off our current CPUs and GPUs, so 3D-stacked? Nah. The amount of power needed, and the cost to switch billions of servers... not happening
@IARRCSim 4 years ago
Having things as simple as the video suggests would be a dream. Unfortunately, we'll keep having multiple levels of memory like we have for over 50 years, with different speeds, capacities, and prices per capacity for each. That means any software developers eager for better performance will need to continue optimizing for that complicated layout and be mindful of how it all works. Drastically more L3 cache would be great, but there will always be demand for different capacities that strike a different balance between speed, price per unit of capacity, and capacity. RAM drives, introduced at kzbin.info/www/bejne/bKHTkJ6oeM2qlaM, do a great job of explaining some of the tradeoffs in speed, price, and more between SSD-like memory and RAM.
@alexsiniov 4 years ago
When it hits consumers it will probably have HBM3 memory, with up to 64GB on die. Since you won't need RAM and a GPU, PCs will become micro form factor. Imagine how many sockets for these monsters you could fit on a standard ATX motherboard without a chipset, PCIe slots, etc. :)) And you just need to watercool that bitch with an AIO or custom loop. You could add, for example, a 12-TFLOPS chip when you run out of resources, or add a smaller-performance one, and they would stack together :) and then add the most powerful one when you get money :) and continue your life with 3 CPUs and add one more in 5-6 years :)
@The_Man_In_Red 4 years ago
@@johannajohnson310 I'm inclined to agree, there's only so much efficiency you can squeeze out of silicon no matter how it's designed.
@denvera1g1 4 years ago
Consumer processors will probably use HBM as sort of an L4 cache, or as a base memory with a tiering system, and then still have traditional memory channels, though maybe fewer of them
@aziziphone9350 4 years ago
Finally! My favourite KZbinr Coreteks uploaded a video. Love your content man, been watching you since the age of 15 in 2019 till now
@aziziphone9350 4 years ago
JustVictor 17 hehhe nice man
@aziziphone9350 4 years ago
JustVictor 17 This technology and semiconductor field is the only place where we can all be together without any of the bullshit politics and drama of this cruel world.
@virtualinfinity6280 4 years ago
Just a minor correction: the ring bus was used by Intel for Haswell and Broadwell. From Skylake onward, they are using a mesh interconnect.
@davidgunther8428 4 years ago
Desktop Skylake (and family) uses a ring bus too. The high core count chips use the mesh interconnect.
@charbax 4 years ago
Thanks for using clips from my A64fx videos, just please link to my videos where you list your sources, thank you. Have a nice day.
@charbax 4 years ago
You still haven't credited my work.
@InfinitePCGaming 4 years ago
Was worried you'd disappeared. Glad we got a new video.
@oraz. 4 years ago
So many computing questions about whether to use the GPU get answered with "memory transfer is the bottleneck". Things like FFTs are better on the CPU only because of the extra delay in transferring between system memory and the GPU. A dual-purpose system sounds so much more elegant and futuristic. I hope things go in that direction.
@aikanikuluksi4766 4 years ago
So that is where Arnold Schwarzenegger's Terminator got (or will be getting) its CPU from.
@DukenukemX 4 years ago
This sounds great for computers that do multi-threaded work, but most desktop computers focus on serial work, and that's a different situation. A GPU's workload doesn't care what order the code is run in, so it scales very well with multiple threads. CPUs focus on latency, and the lower the latency the faster the execution, hence why there's so much cache in modern CPUs. The A64FX sounds like a multi-core, multi-threaded CPU with little regard for serial workloads. If you stripped Intel and AMD CPUs of their cache and branch prediction, you'd basically have a GPU, which is what the A64FX is sounding like. AMD and Intel CPUs are inherently inefficient because of how much they do to increase IPC. The higher the clock speed, the more power is needed. Branch prediction tries to execute branchy code ahead of time to increase IPC, and this means you waste execution pipelines. Cache is meant to help in branch prediction, as well as reduce the latency of RAM, and AMD/Intel CPUs are overflowing with the stuff. All this wastes energy and silicon trying to get faster IPC for serial work. Unless you compile the Linux kernel often, the A64FX is not for consumers.
@studiosnch 3 years ago
And a few months later we saw many of the design choices here (especially the on-chip memory) in the Apple Silicon M line.
@jerrywatson1958 4 years ago
Hey, glad you're back. But I think you are beating a dead horse with ARM. I am from back in the day when ARM was DEC and the Alpha chip. The RISC vs. CISC war was won and lost back in the 90s. Hell, DEC was so far ahead in the early 90s that I bought a $4K Alpha workstation with Windows NT (UNIX licensing was too expensive at that point) for my own use. It even gamed! But soon thereafter DEC, an "Intel"-type behemoth of industry back then, went under, along with all the other RISC chip manufacturers; HP, IBM, they all left the general computing market. The world consolidated on CISC with x86 instructions, later updated with AMD 64-bit instructions (another loss for Intel). While it is true a specialized chip will be faster at a single task, that is not the world we live in. Multiple different tasks running concurrently do not lend themselves to single-use chips. Just like ASICs for Bitcoin: when the algorithm changes they become less useful silicon, or junk. Real-time OSes work well with RISC architectures, like they use on the Mars rovers. But more satellites use radiation-hardened 386 CPUs to this day. So wish as you may, the world has standardized on x86, be it the programming tools, or just the fact that it is the "common" language of general-purpose computing. The fact that we use ARM processor tech in our phones is only because of the low power requirements of the device. While in Russia or China they may make their own ARM CPU to keep track of their people and not rely on Western technology (US), they will fail because of low adoption rates and poor performance when compared to US technology. Hence why they steal US IP and can't be trusted. No one can trust anyone else in the world today. Beyond me how we will have peace on earth and goodwill amongst humankind. But Fujitsu will learn the lessons of others; even Cray computing got sold to HP. I don't see the A64FX going much past the Japanese market. Just look how hard it is for enterprise providers to switch to AMD Epyc from Intel, and they are both x86! Just a note: while that liquid cooling system looks cool and quiet, it is hell of expensive (see all that copper), and only recently have NOCs been adding water chillers vs. standard air conditioning. Even more costly to install. 90% of the computing market had written off AMD before Zen. People were doubtful of the rumors; even upon release, there were still doubts. But now, 3 years later, AMD is the newest player in town. Then there is the issue of x86 getting better at low power: take AMD's 15W 8C CPU, or Intel's. Now we all know that 2-3W is what phones and tablets use, but how long before x86 matches it? I don't see ARM chips en masse going high power. In laptops they are a joke compared to x86.
@KokkiePiet 4 years ago
Jerry Watson Bottom line, it's not about how fast one CPU is, it's about cost and efficiency. In a supercomputer or datacenter, total cost of ownership is in large part electricity, and cooling is about half of power use. So, if the same computing power can be gained with more CPUs and systems that in total use less power, the choice is a no-brainer. The same is the case for mobile devices: using a minimal amount of power but with a lot of calculating force
@jerrywatson1958 4 years ago
@@KokkiePiet You are so wrong. 🤣🤣 The faster exascale computer is the most efficient. El Capitan will do more work than the two petaflop supercomputers it is replacing! ARM can't touch this! LOL. 😁 I can't imagine how many acres of ARM CPUs it would take to even come close to a petaflop, let alone exascale.
@KokkiePiet 4 years ago
Jerry Watson So where am I wrong? All I am saying is that it's a choice of calculating power per watt
@jerrywatson1958 4 years ago
@@KokkiePiet I would suggest you watch a few more videos before commenting. That is very low on the list of requirements. When the new machine can do 6 billion calculations per second, power per watt goes out the window. Not even in the same league.
@jerrywatson1958 4 years ago
@@KokkiePiet Just to add, El Capitan is a 40 MW system. Got a small nuclear power plant to run it and its siblings? ARM is not exascale. How many cabinets would it take for 2 TF? 5,000, if they could even connect them; ARM doesn't have Infinity Fabric. Save it for phones. It's cheap enough.
@tbernesto0912 4 years ago
Well, this is Absolutely Amazing.. Thank You Very Much for Sharing.. Greetings from México... !!!
@danuuu101 4 years ago
Your channel is a gold mine for computer engineers. I really like your analysis; you get into the details more than other channels do. On another note, I really want to see a video about RISC-V and its future in personal computing and IoT. I'm currently learning RISC-V assembly and planning on building a small RISC-V CPU on an FPGA, but I'm very curious about its future and whether it's worth the effort.
@Audiman0aha 4 years ago
So what you're saying is the Cell architecture was way ahead of its time. Small SPEs working on small tasks to parallelize a workload's bandwidth.
@erics3596 4 years ago
RIP Sun SPARC - you kicked ass...but FJ moving away from that arch is the final nail in the coffin
@TECHN01200 4 years ago
When I heard "CPU and GPU" I was thinking AVX turned up to 11
@MrSchweppes 4 years ago
Great Video! Thanks!
@JamesLee-mp2qz 4 years ago
I didn't think it was humanly possible for your voice to get any lower... You proved me wrong :)
@ZaklanoCheljade 4 years ago
I hope this platform comes down to us mere mortals soon enough, so Intel, AMD and Nvidia are forced to cut their prices. I get it, they need to make money, but charging $1200 for a GPU that becomes obsolete in 2-3 years is just ridiculous.
@UHDking 4 years ago
I am a fan. Good content, man. Thanks for your research and for sharing the knowledge.
@notjulesatall 4 years ago
Incredible specs. All the computing power of a GPU, with SIMD intrinsics and all the software support they already have available. I really look forward to programming on these chips.
@TheJabberWockyy 4 years ago
I wonder why everyone isn't talking about this. This is fascinating and exciting.
@kh_trendy 4 years ago
We've already seen performance gains from general purpose CPUs introducing larger and larger L1-L3 cache sizes. The closer the memory is to the CPU, the faster the CPU. We'll probably never get rid of add-in RAM, but I would fully expect to see SKUs from AMD/Intel being separated by cache sizes in the near future.
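The "closer memory is faster" point is easy to demonstrate in software: touching one cache line per element moves far more data through the memory hierarchy than contiguous access does. A small illustrative sketch (exact ratios vary by machine):
```python
import time
import numpy as np

# Copy the same number of FP64 elements two ways: contiguously (8 elements per
# 64-byte cache line) and with a stride of 8 (one new cache line per element,
# so roughly 8x the memory traffic). Exact ratios vary by machine.
buf = np.random.rand(32 * 1024 * 1024)  # 256 MB of doubles
n = buf.size // 8

t0 = time.perf_counter(); contiguous = buf[:n].copy(); t1 = time.perf_counter()
t2 = time.perf_counter(); strided = buf[::8].copy();   t3 = time.perf_counter()

print(f"contiguous copy of {n} doubles: {t1 - t0:.3f} s")
print(f"stride-8 copy of {n} doubles:   {t3 - t2:.3f} s")
```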
@lazadafanboyz7970 3 years ago
Fujitsu, sell this to gamers please.
@Speak_Out_and_Remove_All_Doubt 4 years ago
Great video Celso, sounds like the A64FX chip could make for a great games console processor too! When do you think we will see on-chip memory on desktop CPUs? Zen 4? Would it be more like another cache level, with regular DDR system memory kept, since I just can't see 32GB fitting on a CPU die?
@redrumtm3435 4 years ago
By embedding the HBM on the SoC, the processor has a direct link to it and can therefore shift data at much faster rates. It eliminates a huge bottleneck by shortening the bridge between the two. With all of these extra cores, specific tasks can be assigned and changed more dynamically than ever. Think of it as a logic function: the various cores can change form based on whatever task they are doing, without it bogging down the system like emulation does, for example. All computing is algorithmic; this is just a more advanced form of it.
@Speak_Out_and_Remove_All_Doubt 4 years ago
@@redrumtm3435 Sure, I get that, and that is a great feature/asset to have, which is one of the reasons I've been super excited about this chip and following it since mid-2018. But what I am saying is that 32GB of HBM per die seems kind of small for datacentre workloads, so when you have a dataset or even a single file that is more than 32GB, you will have to split it and send it to different dies, which will surely be much slower than having all of the data sitting in normal system memory where any free CPU can work on it?
@redrumtm3435 4 years ago
@@Speak_Out_and_Remove_All_Doubt The entire system works so efficiently that data can be transferred much faster. It can achieve much more, but with a lower buffer. The cores are designed to work as one, and vary when and where necessary to suit a particular task. Basically, the entire thing is the beginning of ARM's transition into a hybridised cloud system that works across all devices. This is how we will cheat Moore's law, and continue to push the envelope so to speak.
@Speak_Out_and_Remove_All_Doubt 4 years ago
@@redrumtm3435 I know Coreteks is a firm believer that ARM is coming for x86, and I agree it will take over certain areas, but I think the PC space is so heavily entrenched in x86 that I really can't see it happening. It's great that it is happening on the server side, though, if it's faster and uses less energy. Coreteks also loves RISC-V; I'm not sure which has the better chance of challenging x86's dominance.
@amirabudubai2279 4 years ago
Few things I have to disagree with. 1) HBM isn't outright better than DDR. Its main limiting factor (excluding cost) is its relatively low capacity. DRAM can currently reach 4TB on a single socket, while the limit of HBM2 is only 24GB/stack (the most I have ever heard of is 4 stacks on a device). 2) It doesn't work the same way as a GPU. It just doesn't, on a fundamental level. Modern GPUs have clusters of *vector processors.* Vector processors apply the same instructions to multiple pieces of data at the same time (also called SIMD, for single instruction, multiple data). So a Vega 56 can be thought of as a 56-core vector processor. This thing is just taking normal cores and cramming a lot of them together with high bandwidth, and as a result it is a lot less efficient with its bandwidth than a GPU doing a task that GPUs are good at. The upside is that this design is more versatile.
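To illustrate the SIMD distinction drawn above: one instruction stream applied to many elements at once, versus one element per instruction. NumPy's array operations serve here as a software stand-in for hardware vector units (such as a GPU's, or the A64FX's 512-bit SVE units); this is a conceptual sketch, not how either chip executes:
```python
import numpy as np

# Scalar model: one multiply per loop iteration. Vector (SIMD) model: the same
# multiply expressed once over whole arrays, as vector hardware executes it.
a = np.random.rand(100_000).astype(np.float32)
b = np.random.rand(100_000).astype(np.float32)

out_scalar = np.empty_like(a)
for i in range(a.size):      # scalar: each element is its own "instruction"
    out_scalar[i] = a[i] * b[i]

out_vector = a * b           # vector: same instruction over many elements
assert np.allclose(out_scalar, out_vector)
```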
@xXxserenityxXx 4 years ago
Hats off to the programmers of the text-to-speech reader.
@GarretSlarrity 4 years ago
Very excited for the video on neuroscience and computing!
@Maisonier 4 years ago
I'd love that ARM CPU in my smartphone...
@deoxal7947 4 years ago
Not all Arm processors are designed to go in phones. This one would draw way too much power to be usable. For example the i.MX chip in the Librem 5 and the Allwinner chip in the Pinephone. Both of them were not designed for phones but are being used anyway for their open nature.
@Maisonier 4 years ago
@@deoxal7947 I know that dude ... but a man can dream ... The power of the sun in the palm of my hand.
@prashanthb6521 4 years ago
@@deoxal7947 Dude, I want that chip in my wrist band.
@saricubra2867 4 years ago
The PS3's CELL processor: "A rival!!!"
@TheUchihasparky 3 years ago
The PS3 was RISC, but that doesn't mean it was like an ARM CPU; for example, the PS3 was very power-hungry. More like a PowerPC chip
@saricubra2867 3 years ago
@@TheUchihasparky "powerPC" It is a PowerPC chip.
@nagyandras8857 4 years ago
Most probably on a desktop CPU one will find a "BIG" core for the OS, sometimes GPU cores (though that may not always be the case), and I think later on FPGAs. Then the most-used function of a given piece of code can set the FPGA to do just that. That would be hardware and software tailored for speed. I believe L1, L2 and L3 cache will be removed as we know them today, and a single cache memory will be used, accessible directly by all cores and the FPGA, with unified memory addressing. Sorta like a big shared L3 cache, except fast as current L1 cache.
@sparketech 4 years ago
Regarding the cost-saving at 11:56: HBM is expensive due to the interposer, so they are probably using HBM2E if trying to save on costs. I could so see the future of CPUs one day all having HBM. This would allow much smaller computer systems with fewer traces, and eventually I could see almost everything being built into a CPU, where the motherboard is nothing more than connectivity for internal components, plus an I/O board.
@winstonsmith430 4 years ago
I've been waiting to see HBM used on a processor! Awesome job, it was exactly what I was predicting. As always, great video Coreteks.
@greenempower1053 4 years ago
I've been seeing this coming for years now.
@AlexSeesing 4 years ago
The end sounds like Cygnus X - Positron, but a bit different. Does that have anything to do with the presumed change in computing you laid out in this video? If so, that is a masterful match!
@CitizenTechTalk 4 years ago
Simply mind blown! Wow! Thank you, amazingly educational video!!!
@TheJabberWockyy 4 years ago
Awesome video man! Ty for the great content
@kennyj4366 4 years ago
How exciting, talk about disruptive technology. All it would take is the right application, like gaming consoles or very powerful SFF PCs. Thank you so much for sharing this information and knowledge. Great video. 👍🙂👍
@MrTrilbe 4 years ago
I wouldn't say it's that disruptive now, anyway; it would have been 4 years ago when HBM2 was first in production. This is just a more mature use of that technology vs sticking it on GPUs
@MaxLenormand 4 years ago
Really cool video! I'd simply like to add a little something, mostly on that last point you mentioned: today, most data scientists in computer vision, myself included, would love to buy a cheaper, higher-vRAM AMD GPU to do our work. So why don't we do it, if performance is comparable, and we get higher vRAM and cheaper chips? Because of software. CUDA is just so much better for people to work on, and Nvidia is years ahead of AMD in this sector. While I agree that this might challenge the big players, I think the integration of software shouldn't be overlooked, because that's the reality of today! Don't get me wrong, I'd love to see more competition and not have only one option for computer vision tasks. Other than that, really nice video to keep informed on what's going on!
@alberto148 4 years ago
@jayViant OpenCL is such a kludge to work with compared to CUDA though
@D0x1511af 4 years ago
@jayViant OpenCL still lacks major features compared to CUDA, which is why Nvidia is still dominant in the GPU market. Even Redshift 3D (Maxon), PyTorch, or any other AI and ML library still prefers CUDA over OpenCL, due to OpenCL's nature of being hard to compile for.
@Zero11s 4 years ago
it's not cool to show a globe as the world, it's a lie
@ghoulbuster1 4 years ago
@@Zero11s This comment section is reserved to intelligent individuals, you do not fit that description.
@m_sedziwoj 4 years ago
And what about ROCm?
@mapesdhs597 4 years ago
25+ years ago SGI was at the forefront of pushing memory and I/O bandwidth as the critical foundation for effective scalable HPC, which made sense given their typical client base, i.e. those dealing with very large datasets (GIS, medical, auto, aerospace, etc.). Their NUMAlink interconnect I think topped out at 64GB/sec by the time the company was finally gone. They also did a lot of work reducing latency for large shared-memory systems, and it was their OS scalability ported to Linux which allowed Linux to really take off in the HPC market, along of course with XFS and OpenGL more generally. That drive for bandwidth kinda faded after the company hit the rocks in the early 2000s; once their workstation line stopped and the tech world became x86, nobody really cared about bandwidth for the longest time. It was all about the separate components in a system instead (CPU/gfx/RAM/storage), a decade+ of a legacy chipset arch to link things together (partly offset by having gfx and some storage connected directly to the CPU), hardly any progress in how they communicate, and nothing at all in COTS terms about scalability (at one point SGI had ideas about a workstation that could be connected together to form much larger shared-memory systems, a multicore MIPS-based system using PCI Express and NVIDIA gfx, but that never came about). Now suddenly, with a sharp increase in data demands, including from consumers, bandwidth is back in fashion. Is there any dense COTS system today that can do 1TB/sec I/O? SGI managed this almost 20 years ago with the Origin 3900 series, though that required rather a lot of racks. :D I've still yet to hear of any single-motherboard workstation that can match even an older Onyx2 Group Station for Defense Imaging with just 64 CPUs (load and display a 67GB 2D image in less than 2 seconds), though surely Rome could do this (an Onyx 3000 system could doubtless do this with far fewer CPUs, but SGI stopped publishing such technical details after 9/11). I sometimes wonder what gfx is being used by the DoD for such tasks, probably a custom/scalable Quadro with a buttload of RAM. Thanks for the excellent video! It sounds like at last genuinely new directions in computing are on the horizon. Group Station ref: www.sgidepot.co.uk/onyx2/groupstation.pdf
@spawnlink 4 years ago
Why would using the same ISA in both servers and edge devices enable greater performance? No one is sending raw code between systems. They all use some sort of high-level protocol and need to marshal and demarshal data both ways. Even if they decided to send code, having the same ISA isn't enough. There are thousands of ARM chips that are compatible at their core, but anything of interest needs to target the specific chip and its features. They are using ARM because it's not x86, it is flexible so they can license what they want/need, they can modify it as needed, they can get decent performance (unlike RISC-V), and they can use off-the-shelf toolchains and operating systems without major changes or maintenance.
@spawnlink 4 years ago
And as for the possibility of heterogeneous chips: it's no different from the past 50 years. Heterogeneous systems and chips have existed for many years. The fact that the chips might exist on the same die more often than in the past isn't surprising. They exist today. Lots of CPUs have subprocessors for security, for example. You've got many systems with SIMD subsystems or whole separate dies.
@WarriorsPhoto 4 years ago
Hey Celso, I am not going to lie and say I understood all that you said in this video, but 80% made sense to me. It's an interesting time to be alive for sure. Thank you for sharing this information about modern chip designs, and I hope Intel and AMD can catch up soon.
@VicenteSchmitt 4 years ago
Glad to see your videos again!
@johnj8639 4 years ago
When you talk about texture streaming, that's not really correct. It would just help the performance of higher-resolution textures; it wouldn't reduce load times or make textures pop in more smoothly. That would still be limited by your hard drive. Also, this is basically a CPU with GPU functionality; it won't perform better than a dedicated GPU.
@Kalisparo 4 years ago
Both AMD and Intel CPUs run something similar to ARM internally and use a CISC-to-RISC translator.
@bananya6020 4 years ago
I just hope that if specialized chips become a thing, there won't be restrictions on OS (there *shouldn't* be). I really like the idea of having all memory chips etc. on one chip. As it is right now, there is a lot of wasted space spent on circuitry connecting components like RAM and a GPU, or perhaps in the future storage, that can definitely be eliminated. In the past we had cache dies separate from the CPU; now it's integrated, making processing faster. The same can be done with other chips. Imagine all we could do if we only had a single processing chip doing all the work; we could have much smaller devices with much more power. It seems like a good step towards a future without such bulky machines. Imagine an ultrabook that can do your desktop's work better with much less space. Now imagine the higher power efficiency due to less spread, as well as other improvements (such as using ARM): you could have a small ultrabook the weight of paper, with the processing power of a desktop and a battery life that lasts months (my laptop already lasts about two weeks without a charge). Though I will say, the only actual figures really shown in this graph are of tasks highly dependent on memory, as well as being very parallelizable. Perhaps with scheduling and less parallel tasks, this chip would perform worse. One of my concerns about this would be that we totally lose upgradeability: either deal with outdated tech 2-3 years after release or buy a totally new machine. This may then give rise to subscription hardware or cloud computing, renting server space from a host and simply using a Raspberry Pi-like tablet or laptop to connect and do all your work. Tons of nasty stuff could arise though, such as spying, data breaches, and plain corporate or lawyer scumminess.
@vatanrangani8033 4 years ago
Every Coreteks video tells me of a future where we will have one chip with everything. Further into the future: the cloud. Even further: in your brain.
@metatronblack 4 years ago
More into the Future - we are the computer😳😳😳
@vatanrangani8033 4 years ago
@@metatronblack we are already rn ... Probably
@jorgegomes83 4 years ago
Impressive as always, sir. Thank you.
@NikolaosSkordilis 4 years ago
While the A64FX is indeed quite impressive, perhaps saying it's almost like a GPU is an exaggeration. Apart from having HBM2 memory (and thus high memory bandwidth) and that rather tight 6D Tofu interconnect, what part of it is like a GPU? What about its cores? How do they function, and how do they compare to GPU shader cores? Can they all work together in a SIMD or SIMT manner? How about their integer and floating-point performance? Is the peak of 3 TFLOPS mentioned at 8:55 just FP32 floating-point performance from the 512-bit SIMD units, or balanced integer & floating-point performance? And how long can this peak performance be sustained?
@metallurgico 4 years ago
Finally the first video since I subscribed! I watched all your previous videos lol
@JohnPaulAlcala 4 years ago
Architecture looks very similar to the IBM Cell
@alecday3775 4 years ago
Hello there Coreteks: I could really see a chip like this being used in laptops because of how powerful it is and how little space it needs, since there is no need for system RAM or a dedicated graphics chip, meaning it would make high-end gaming-grade laptops much more affordable with much longer battery life. I mean, with the much-shrunken motherboard (really you only need an arm for all the I/O like your USB ports, video outputs and charging port, plus the power button and a secondary board for the trackpad), you could fill the remainder of the space with a massive battery, which means a much longer battery life, MUCH LONGER than the laptop shown off at CES this year that lasts 16.5 hours (which is pretty insane in itself). We could see batteries lasting possibly 24 hours or more on a single charge doing mundane tasks. Just think of that for a second. A laptop that is capable of running all of the latest AAA games with the graphical settings cranked up, WHILE STILL allowing for more than 18 hours of gaming or running GPU-intensive workloads on battery. THAT'S INSANE. If you are going out to, say, a LAN (they do still exist) or to a friend's house and want to bring your laptop to play some games for a few hours, you don't need to worry about the charger AT ALL. That is why I see a chip VERY SIMILAR to this one being used in laptops: you could get a very powerful but very power-efficient laptop for quite possibly LESS THAN $1 grand US, one that is capable of DOING EVERYTHING that would now set someone back more than $3 grand. Really insane to think about. Dunno what you think, but I think laptops are a really good use of a chip like this.
@Spartacus547 4 years ago
For the PC gaming enthusiast, what exactly would a system look like with just one single chip? A motherboard, most likely a water cooler, and that's it, or a very, very thin heatsink so you can go flat as hell. Our desktops will just look like laptops. This sounds better for the PC minimalist, but if this form factor is adopted, the era of the gaming PC is dead.
@jerrywatson1958 4 years ago
They would look like what the next-gen consoles look like. LOL. Windows in the cloud, Office 365, Xbox Game Pass, get the idea? The only screen you'll carry is on your smartphone or smartwatch.
@derhundchen 4 years ago
I can't help but think that Fujitsu managed to do with the A64FX what Intel was trying to achieve with Project Larrabee...
@andrew1977au 4 years ago
Awesome video bud, some very interesting info there. Thank you
@iscariotproject 4 years ago
Interesting future ahead. Weird it's been so silent; I just stumbled on this video, then saw another video about 80-core ARM CPUs. The power saving alone is big.
@KobeKeats 2 years ago
I believe that's called an APU
@KobeKeats 2 years ago
I’m just joking by the way. Don’t go all “wElL aCtUaLlY” on me.
@ahmtankrl8526 4 years ago
Putting RAM so close to the SoC could also help power efficiency, as data travels a shorter distance
@MrMortadelas 4 years ago
Correct me if I am wrong, because I haven't done low-level programming since university. Doesn't ARM have a totally different instruction set than the x86 platforms (actual binary code with no equivalent commands, not just in the assembler)? "It runs Windows" might be misleading: it runs Windows and maybe some apps reassembled for ARM, but I'd think that a standard x86 program would run terribly. So the fact that X runs faster on an ARM chip rather than a Xeon may be just down to the program. Have I missed some OS tech that allows ARM to accept x86 programs?
@doperider85 4 years ago
Coreteks has the smoothest, smokiest delivery in the tech world. You have it dialed in, man.
@perhapsyes2493 4 years ago
I do have something to add: you say this is a product aimed entirely at HPC markets. I don't fully agree. It is a product that will prove a new type of chip to consumer markets. You're right that this kind of chip isn't necessary yet for the consumer, but it will be. These kinds of giant bandwidth requirements will become more and more common as technology advances. Things like holographic/volumetric VR will take ridiculous amounts of data, as they add a third dimension to image data; exponentials and so on. I really do hope it's the end of the x86 era and a move to a new instruction set with its eyes towards the future.