Bigger Chips = Better AI? Nvidia's Blackwell vs. Cerebras Wafer Scale Engine

20,625 views

Chip Stock Investor

1 day ago

Watch our video: "Investing in Semiconductor Manufacturing Equipment Stocks: The Ultimate 2024 Guide" 👉 • AI Supercycle: Your Ul...
Nvidia's new Blackwell GPU is HUGE, literally! If you’re looking to be an Nvidia AI chip competitor, why not just make physically bigger chips? In this video, we explore the physics and economics behind AI chip design. We'll cover Nvidia's Blackwell packaging secrets, rival Cerebras Systems' wafer-scale chips, and the critical role of fab equipment makers in the race for AI system dominance.
👉👉Want more Chip Stock Investor? Our membership delivers exclusive perks! Join our Discord community, get downloadable show notes, custom emojis, and more. Become a true insider - upgrade your experience today!
Join at KZbin: / @chipstockinvestor
Join on Ko-Fi: ko-fi.com/chipstockinvestor 🔥🔥🔥
If monthly membership isn't your thing, don't worry, you can purchase our show notes in our Ko-Fi shop. ko-fi.com/chipstockinvestor/s...
If you missed our Semiconductor Industry Flow 2024 and chip industry manual, you can purchase it here:👉 ko-fi.com/s/90c74a988a. Get your investment journey off on the right foot to kick off the new year.
If you missed our Cybersecurity Industry manual for 2024, you can purchase it here: ko-fi.com/s/0a436c7c8
Other vids to check out:
EDA “The Secret AI War,” • The Secret AI War, and...
Wafer fab equipment: • How to Profit from the... forms.gle/u1HLCtcwAXitt45y9
Check us out on our website: chipstockinvestor.com
We use data, charts, and KPIs from our friends at Main Street Data. If you would like to check it out and subscribe to a premium membership, here is a link that will get you 10% off👇
mainstreetdata.com/subscripti...
Find this episode and more on Spotify Podcasts: open.spotify.com/show/4QSHBYl...
You can also find us on Apple and Google Podcasts!
Chapters:
00:00 Introduction to Megachips: Why It's Not Simple
00:29 Exploring NVIDIA's Blackwell GPU and Cerebras' Monster Chip
01:22 Diving Deep into Chip Manufacturing Challenges
03:53 Advanced Packaging Techniques: Chiplets and Heterogeneous Integration
10:25 Cerebras' Wafer Scale Engine: A Game Changer?
12:13 The Five Major Challenges of Megachip Manufacturing
16:41 Economic Constraints and the Future of Chip Manufacturing
18:44 Investment Opportunities in the Semiconductor Industry
Content in this video is for general information or entertainment only and is not specific or individual investment advice. Forecasts and information presented may not develop as predicted and there is no guarantee any strategies presented will be successful. All investing involves risk, and you could lose some or all of your principal.
#semiconductors #chips #investing #stocks #finance #financeeducation #silicon #artificialintelligence #ai #chipstocks #investor #stockmarket #chipstockinvestor #fablesschipdesign #chipmanufacturing #semiconductormanufacturing #semiconductorstocks
Nick and Kasey own shares of Nvidia, Cadence Design Systems, AMD, and Synopsys

Comments: 93
@RedondoBeach2 3 months ago
Good information but painful to listen to the two of you talk. Do yourself a favor and listen to your own speech patterns. You both have a terrible habit of talking in... a... very... choppy... pattern. This is aside from the obvious editing done to this video.
@alexsassanimd 3 months ago
REQUEST: an episode detailing Nvidia's most important "partnerships" and what they mean for the company's future. You guys are awesome.
@pierrever 3 months ago
Oracle in quantum computers
@sugavanamprabhakaran2028 3 months ago
Excellent! As always you both are great teachers in this field! Keep up your amazing hard work! ❤
@mdo5121 3 months ago
Another plethora of important info....thanks as always
@zebbie09 3 months ago
Excellent presentation. Thanks for sharing….
@oker59 3 months ago
Cerebras's pace of making smaller and smaller transistors on large-scale wafers suggests they have some systematic understanding of how to deal with thermal/quantum jitter. ASML now has 1 nm feature-size capability (ASML's technology is also a technological miracle, and they can see how to go beyond their current miracle). So I expect Cerebras, for one, to beat their CS-3.
@shannonoliver7992 3 months ago
GREAT video! I can't believe you continue to produce such great content. Job well done, and a BIG thanks!!
@andreinedelcu5330 3 months ago
Great videos and content, as always!
@darrell857 3 months ago
Nvidia makes its chips at or near the reticle limit, as does the WSE. Both designs overprovision functional units that can be fused off or routed around and still meet the specification (some estimate about 10-20% of the H100 chip is disabled silicon). Nvidia can bin bad chips into lower Blackwell products to offset costs; the WSE doesn't have this option. The WSE requires a complex cooling system but a lot less networking. Blackwell requires an additional NVLink chip per 8 GPUs or so, advanced packaging for the GPU dies/HBM, and advanced Mellanox networking to get a lot of GPUs to communicate. So it isn't so clear who wins on a cost basis. Cerebras seems to have solved the cooling/mechanical problems, so in theory they can outperform Blackwell on certain models that fit within the chip's memory. However, that is substantially less memory than Blackwell.
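The binning economics in the comment above can be sketched numerically. This is a toy model with made-up numbers (the wafer cost, die count, and yield fractions are all illustrative assumptions, not real Nvidia or Cerebras figures); it only shows why salvaging partially defective dies as a lower-tier SKU lowers the effective cost per sellable die.

```python
# Rough sketch of the binning argument: if partially defective dies can be
# sold as a cut-down SKU, the effective cost per sellable die drops.
# All numbers below are assumed placeholders, not real industry figures.

WAFER_COST = 20_000.0       # assumed cost of one processed wafer ($)
DIES_PER_WAFER = 60         # assumed candidate dies per wafer
Y_PERFECT = 0.40            # assumed fraction that are fully functional
Y_BINNABLE = 0.35           # assumed fraction salvageable as a lower-tier SKU

good = DIES_PER_WAFER * Y_PERFECT
salvaged = DIES_PER_WAFER * Y_BINNABLE

cost_no_binning = WAFER_COST / good               # only perfect dies sellable
cost_with_binning = WAFER_COST / (good + salvaged)  # salvaged dies also sellable

print(f"cost per sellable die, no binning:   ${cost_no_binning:,.0f}")
print(f"cost per sellable die, with binning: ${cost_with_binning:,.0f}")
```

A wafer-scale part has no equivalent lower bin to fall back on, which is one reason the cost comparison is not straightforward.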
@chipstockinvestor 3 months ago
Exactly. Thank you for the extra detail on the comp. Fun to watch the battle. All good stuff for customers.
3 months ago
Cerebras also has its own networking technology, and it's fast. Something to consider when evaluating its smaller memory per chip. I'm curious about the difference in operating cost between Blackwell and WSE-3. Their business models are also different: Cerebras doesn't sell its chips but acts as a cloud provider and supercomputer builder/administrator in different modalities.
@Dmillz192 3 months ago
Huang said last week that Nvidia doesn't make or sell chips, though, lmao, then doubled down and said they're a software company.
@v1kt0u5 26 days ago
@Dmillz192 It's true, they're designers; TSMC makes the chips. And in fact the insider knowledge/software part of Nvidia is by far the most valuable!
@Dmillz192 26 days ago
@v1kt0u5 TSMC makes both GPUs and CPUs. Again, Nvidia is known for their GPUs, which will take at least 6 years to catch up and perform at the level of wafer-scale chips, which TSMC is actually manufacturing for Cerebras. Cerebras was the only TSMC client doing wafer-scale chips until recently, with Tesla's wafer-sized 'Dojo' announced a few weeks ago.
@IATotal 3 months ago
Thanks a lot for the video!
@valueinvestor8555 3 months ago
Very interesting video, especially the five reasons for size limitations at the end.
#1 was new and interesting to me, but it makes sense. This is something most non-experts would probably not find out by themselves easily.
#2 was relatively obvious. Cerebras has at least somewhat of a solution for this, as you mentioned; they are somehow routing around damaged transistors (not sure how effective their solution is).
#3 also makes sense, but as with #1, most people wouldn't know by how much exactly this would limit chip size.
#4 also makes sense. Maybe materials science could help here, or maybe the optimal available materials are already in use. It would seem that Nvidia wouldn't make compromises here given the product price.
#5 I guess the previous points all play into this TCO calculation, and it is probably cheaper to cool separate smaller chips.
It would be interesting to know if the Nvidia CEO thinks the size of the Blackwell chips is already optimal, or if it could make sense to grow chip size further, at least for very large customers who need the most computing power. I asked Gemini why 300 mm is the current standard for wafers. One interesting aspect is that precisely handling 450 mm diameter wafers, for example, would be an immense technological challenge, because the wafers are so fragile.
@basamnath2883 2 months ago
Great video
@rastarebel4503 3 months ago
HIGH QUALITY CONTENT!!! the 5 reasons on chips size limits was excellent... love it!
@aaronb8698 2 months ago
GREAT PRESENTATION!
@majidsiddiqui2906 3 months ago
Great video. Good basic explanation regarding the 5 main reasons chips cannot easily be made bigger.👍
@eversunnyguy 3 months ago
Your channel came to my attention at the right time, but I wish I had known about it before the AI frenzy two months ago.
@geordiehawkins7372 3 months ago
Great insight for this non-techie. Still able to get good info that will help with due diligence before investing. Thanks!
@user-sm9ms6yw9p 3 months ago
Thank you!🌹🌹🌹
@styx1272 3 months ago
Thanks, crew. I wonder if you might do a video on BrainChip Corp's neuromorphic Akida chip? I'm very curious to understand how the Akida 2000 works, because it has memory embedded in the chip in four memory configurations per 'node' (or axon), producing a super-low-power chip. I'm wondering why other companies aren't following this design, and does it have the potential to be scaled into training?
@kualakevin 3 months ago
Good video, but I hope a future video can elaborate on how Cerebras has solved problems (3) and (4) in their product. And for problem (5), power consumption: although a larger chip consumes more power per chip, it consumes less power for the equivalent compute (versus smaller chips stitched together with interconnects or other methods).
@missunique65 3 months ago
Could you cover the build-out of the newer, bigger data centers? I heard Andreessen talk about them.
@mtoporovsky 3 months ago
Do you have any info about firms that develop solutions combining semiconductors and light (photonics)?
@1964juls 3 months ago
Great information, love your reviews! Can you review ALAB(Astera Labs Inc)?
@rahulchahal3824 3 months ago
Just SUPER
@Ronnieleec 3 months ago
What about patent limits? Are semiconductor companies and EDA companies patenting variations, like pharma companies and others do?
@AdvantestInc 3 months ago
How do you see the role of advanced packaging techniques evolving in response to these scaling challenges?
@chipstockinvestor 3 months ago
We think advanced packaging companies have a lot to gain to make it all happen.
@lightichigo 3 months ago
Can you guys do a video about Groq and how it will impact Nvidia's monopoly?
@alan-od8hk 3 months ago
A little disappointed that you didn't really cover the Cerebras CS-3 chip and compare it to Nvidia's Grace Hopper.
@RoTelnCheese 3 months ago
Great work guys. What do you think of Tesla and their developments in robotics and AI? Their stock value is compelling right now
@ronmatthews2164 3 months ago
Under $ 140 in a year.
@limbeh3301 2 months ago
Tesla met with Huang to beg for more GPUs. That tells you how well his Dojo supercomputer is doing.
@GustavoNoronha 1 month ago
Nvidia isn't ahead of the pack in terms of packaging; the new Blackwell double chip is exactly the same thing as the Apple M1 Ultra: two really big chips connected together using TSMC CoWoS. What makes Nvidia the leader of the pack is their design, and in some cases the software support. For AI that is not a big deal; CUDA is not as relevant, since people aren't writing to those APIs. They are using things like PyTorch, higher-level frameworks that support all of the major vendor APIs these days, so that is not a big competitive advantage.
It would be good to do a deep dive into all the technologies used in the MI300; AMD has been on the vanguard when it comes to packaging. That doesn't mean it gets the win, but it should be a good case study for how all of these advanced packaging technologies work, how they can be used to increase cost effectiveness by reducing the size of the dies that are fabricated (yield), and how they provide a lot of flexibility in product-level differentiation. MI300A is a good indication of what the future holds.
@chipstockinvestor 1 month ago
Did you see our fab equipment video? We are planning some more detail on what CoWoS entails, as these are the processes all these chips and systems utilize.
@UltimateEnd0 28 days ago
MI300A = home supercomputer; Cerebras = commercial supercomputer. They aren't even in the same league.
@GustavoNoronha 28 days ago
@UltimateEnd0 MI300A is definitely not for home computers. The El Capitan supercomputer being installed right now should take the number 1 spot on the TOP500 supercomputer list when it's fully installed, and it's powered by MI300A.
@chrisgarner5765 3 months ago
Nvidia has the fastest interconnect of all the competitors. Nvidia is also the company that started all of this, really, with deep learning! Plus, Nvidia is more than capable of making a wafer-scale chip if they believed it was a better way! Nvidia also has the best software stack and tools for the job!
@limbeh3301 2 months ago
No, Cerebras has the fastest interconnects between dies. It's basically like communication between two Blackwell dies, but instead of 2 you get 80-90 dies. Also, Cerebras inter-die communication is faster than Blackwell's, since they're not using 2.5D; they're just using a metal layer, it looks like.
@heelspurs 3 months ago
The entire wafer is etched by reticles before it's cut into chips, so I don't see how problem #1, 'the reticle,' is a problem for using the entire wafer for one chip. As for defects, the architecture enables bypassing sections that have a defect; Groq does this. It's not simply 'infrastructure' that limits wafers to 12 inches, but the inability to make the flow of gases and heat across the entire wafer perfectly even. You could slow each step down to help gases and heat spread more evenly, but that reduces production rate. The only very fundamental physics problem is that you want as much of the chip as possible synchronized with the clock steps, because parallel computing for generalized algorithms can greatly waste computation. You can't have the entire wafer sync at high clock speeds because, for example, at 1 GHz light can travel only 300 mm, and it's not a straight path across the chip, and capacitances greatly reduce that max speed; at 1 GHz you really need everything synced within 1/4 of the clock cycle (75 mm max distance). Fortunately, video and matrix multiplication are algorithms that can efficiently do parallel ('non-synced') computation. Training can't do parallel efficiently, but inference can, although NVDA's GPU architecture can't do it nearly as efficiently as theoretically possible. Groq capitalizes on this, not needing any switches (Jensen was proud of NVDA's new switches being more efficient) or any L2 cache (which at least doubles the energy per compute required), which is why Groq gets 10x more tokens per unit of energy than the H100.
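The speed-of-light arithmetic in this comment can be checked in a few lines. The signal-speed fraction below is an assumed, optimistic placeholder (real on-chip wires are RC-limited and slower still); the point is only the order of magnitude of how large a synchronous clock domain can physically be.

```python
# Back-of-the-envelope check of the clock-domain argument above: how far a
# signal can travel in a fraction of one clock period. Illustrative only.

C_VACUUM_MM_PER_S = 3.0e11   # speed of light: ~3e8 m/s = 3e11 mm/s
SIGNAL_FRACTION = 0.5        # assumed fraction of c for on-chip signals
                             # (optimistic; RC delay makes real wires slower)

def max_sync_distance_mm(clock_hz: float, cycle_fraction: float = 0.25) -> float:
    """Distance a signal covers in `cycle_fraction` of one clock period."""
    period_s = 1.0 / clock_hz
    return C_VACUUM_MM_PER_S * SIGNAL_FRACTION * period_s * cycle_fraction

for ghz in (1, 2, 4):
    d = max_sync_distance_mm(ghz * 1e9)
    print(f"{ghz} GHz: ~{d:.1f} mm reachable in a quarter cycle")
```

Even with generous assumptions, a few tens of millimeters per quarter-cycle at GHz clocks is far smaller than a 300 mm wafer, which is the commenter's point about synchronization.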
@SavageBits 3 months ago
Reticle size constrains the size of the largest unique design that can be patterned on the wafer. Those identical designs are then repeated across the entire wafer. Blackwell takes 2 maximum-reticle-size dies and connects them together in the same package. My prediction is that Blackwell's successor will connect 4 maximum-reticle-size dies in the same package. Nvidia's approach is more flexible than the WSE-3, which has massive cooling, power distribution, and defect management challenges.
@limbeh3301 2 months ago
The advantage of staying on the wafer is that you have extremely low latency and extremely high bandwidth between the reticles. First, Blackwell only allows 2 reticles to talk to each other using the 2.5D interconnect (which likely has a larger pitch than what Cerebras is doing). Second, the moment the data has to leave the Blackwell package, you'll need to use NVLink, and eventually InfiniBand. This is why you see everyone trying to make larger and larger chips: to optimize the communication between compute elements.
@eversunnyguy 3 months ago
Would like to hear your view on PLTR (Palantir), or is this channel only for chips?
@mach1553 3 months ago
This is GPU bridging by 2 die stitching & gaining an extremely huge boost in performance!
@pieterboots8566 2 months ago
One more disadvantage: path length, or wire length. Everybody knows these are all steps towards the optimal full-3D chip, not just interconnects. That will have the highest transistor count and the shortest path length.
@limbeh3301 2 months ago
The problem with stacking vertically is power delivery and cooling. For compute you can't really stack much, because the heat density will be too high to cool. This is why you only see memory being stacked on top of compute.
@pieterboots8566 2 months ago
@limbeh3301 Chiplets with interconnects also have this problem.
@DigitalDesignET 3 months ago
@9:15 - 4N for Blackwell is actually 5nm-class technology; it's not 4nm. That's why people need to understand that these node names no longer tell us anything about transistor density. If I misunderstood, someone correct me.
@chipstockinvestor 3 months ago
Sorry, but we don't make up the names for these manufacturing processes. It is indeed called 4N, regardless of what the transistor sizes actually are; that's the name of it.
@DigitalDesignET 3 months ago
@chipstockinvestor Thanks for replying. It sure is interesting to understand more about this manufacturing process, as the naming can be misleading about which tech is superior.
@limbeh3301 2 months ago
How is N5 one and a half generations behind N4?? That's like half a generation behind...
@chipstockinvestor 2 months ago
The N4 node being utilized isn't the standard one, but a newer "enhanced" N4
@suyashmisra7406 3 months ago
You were doing okay until you said "these are not perfect conductors, they are semiconductors." Good video otherwise, considering the channel is dedicated more towards people who are interested in stocks rather than the tech itself.
@MsDuketown 3 months ago
Monolithic boundaries.. But smaller calculation units are better. ARM already proved that, and now the explosion of diversification will do the rest..
@user-io4sr7vg1v 3 months ago
Is there a native compiler for numpy to cerebras? If they are doing the latter, Nvidia is just fine.
@chipstockinvestor 3 months ago
www.cerebras.net/blog/whats-new-in-r0.6-of-the-cerebras-sdk
@johndoh5182 3 months ago
Bigger chip = higher defect rate. If the chip is designed to deal with failed parts of the die so it can still get to market (pathways through the chip can be disabled, and the chip specs allow for a certain percentage of the chip to fail in production), then it's not terrible. But a wafer-size chip is a nightmare. Pretty much any wafer that comes off a line has defects; it's only a matter of percentage. The prevailing knowledge is that the smaller you can make a die (chip), the smaller the percentage of failed chips off that one wafer. For instance, if a single wafer is used to make ONE chip AND there is no allowance for failed parts of that chip, then the failure rate is pretty much always going to be 100%, and of course that's not feasible.
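The die-size-versus-yield intuition above is commonly captured with the classic Poisson die-yield model, yield = exp(-D·A). The defect density below is an assumed illustrative value (real fab numbers aren't public), but it shows why a full-wafer die is hopeless unless the design can route around defects.

```python
# A minimal sketch of why bigger dies yield worse, using the Poisson
# die-yield model: yield = exp(-D * A), where D is defect density
# (defects/cm^2) and A is die area (cm^2). D is an assumed value here.
import math

DEFECT_DENSITY = 0.1   # defects per cm^2 (assumed; real fab figures are secret)

def poisson_yield(die_area_cm2: float, d0: float = DEFECT_DENSITY) -> float:
    """Fraction of dies expected to be entirely defect-free."""
    return math.exp(-d0 * die_area_cm2)

# Compare a roughly reticle-limit GPU die (~8 cm^2) to a die covering most
# of a 300 mm wafer (~700 cm^2); both areas are round illustrative numbers.
for name, area in [("reticle-limit die (~8 cm^2)", 8.0),
                   ("full-wafer die (~700 cm^2)", 700.0)]:
    print(f"{name}: {poisson_yield(area):.2%} defect-free")
```

The full-wafer case comes out effectively zero, which is why wafer-scale designs must tolerate defects on-die rather than hope for defect-free wafers.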
@t33mtech59 3 months ago
Why do the hosts seem AI generated lol. Or just oddly calm and consistent in cadence
@seabassmoor 3 months ago
I think the video is chopped up
@ARIK.R 3 months ago
And also CAMT
@bounceday 2 months ago
Bigger chips are hotter chips. Why is this not a concern? Is it lower-energy architecture and smaller die allowance?
@chipstockinvestor 2 months ago
New techniques are being used to try and keep those monster chips cool, all in the name of tearing through more data. We have some research in queue on Vertiv (VRT).
@johnsands6652 1 month ago
When will cerebras go public?
@GuyLakeman 2 months ago
WELL, THEY FRY EGGS TOO !!!!
@tamasberki7758 3 months ago
So you guys are telling me those pills I bought on a shady webshop won't make my chip bigger? 😉😃
@tarikviaer-mcclymont5762 3 months ago
May result in chip shrinkage
@chipstockinvestor 3 months ago
😂
@alexsassanimd 3 months ago
how can one invest in Cerebras? They seem to be a private company
@chipstockinvestor 3 months ago
You are correct, Cerebras is private.
@limbeh3301 2 months ago
There are some websites that allow transactions in secondary markets. You might get lucky and score some shares.
@camronrubin8599 3 months ago
Nvidia going to stitch waferscales together 😆
@elroy1836 3 months ago
To paraphrase another reactor to a different review of NVDA's Blackwell, I hope at some point there is some discussion of AVGO's (Broadcom) newly produced ASIC chip with 12 HBM stacks versus the 8 on Nvidia’s Blackwell. While the focus seems constantly directed at the innovation of NVDA, the AVGO solution reportedly provides 50% more performance in an accelerator at the same or lower price than NVIDIA's solution.
@kleanthisgroutides7100 2 months ago
My issue with Cerebras is them being adamant that they get 100% yield, which is of course BS; they will not disclose how much of the wafer is actually bad/defective. As for power, they are not lower power when running at full tilt on a normalized process. Yes, there is an architecture advantage for lower power, but in the grand scheme of things it's not significant. Transistors are transistors; they need to switch, hence consume power.
@UltimateEnd0 28 days ago
Except that the Cerebras CS-3 uses 200x less energy than the fastest supercomputers currently operating in the world.
@kleanthisgroutides7100 28 days ago
@UltimateEnd0 15-25 kW is not low power, and there's no comparison to a supercomputer since it's not apples to apples.
@almostdead9567 3 months ago
Why isn't liquid nitrogen used to cool these chips? I mean, quantum chips use liquid nitrogen, so why not these big ones?
@chipstockinvestor 3 months ago
Power consumption. It takes more energy to cool the chips, in addition to the energy to operate them. A poorly designed cooling system can add a huge expense to a data center's operations.
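A toy calculation along the lines of this reply: cooling systems spend input energy to remove each watt of chip heat, and cryogenic cooling spends far more per watt than conventional air or liquid cooling. Both overhead coefficients below are assumed order-of-magnitude placeholders, not measured data-center figures.

```python
# Toy total-power sketch of the cooling trade-off described above.
# The IT load and cooling-overhead coefficients are assumed,
# order-of-magnitude values for illustration only.

CHIP_POWER_KW = 20.0   # assumed IT load of one big accelerator system (kW)

# Watts of cooling-system input power per watt of heat removed (assumed):
OVERHEAD = {
    "air/liquid cooling": 0.3,   # typical data-center-class overhead
    "cryogenic cooling": 10.0,   # cryocoolers are very inefficient per watt
}

for method, w_per_w in OVERHEAD.items():
    total_kw = CHIP_POWER_KW * (1 + w_per_w)
    print(f"{method}: {total_kw:.0f} kW total for {CHIP_POWER_KW:.0f} kW of compute")
```

Under these assumptions the cryogenic case multiplies total facility power roughly tenfold, which is the economic argument against exotic cooling in the reply above.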
@christian15213 3 months ago
Doesn't this all lead to the push for quantum?
@danielstevanoski 3 months ago
Co-fee?
@jacqdanieles 3 months ago
Ko-fi
@TheBestNameEverMade 3 months ago
That last point is not correct. Per unit of compute, the Cerebras system uses less power because you need less extra equipment to do the same thing. Did you not research the numbers?
@chipstockinvestor 3 months ago
Uh, we don't recall attacking Cerebras and we certainly didn't say that it used more power. What we did say, was that it is possible that a bigger chip may have an increased total cost of ownership. We gave 5 reasons why it is a challenge to make bigger chips work. Did you not watch the whole video? Context is important.
@TheBestNameEverMade 3 months ago
Thanks for responding. I did. Go to the section where you talk about TCO, at 16:05. I know you said "might," but it doesn't, because when you have 60x as much compute and a huge amount of memory on chip, there is less power in total even if there is more power per chip for cooling, etc. Also, Nvidia needs dozens of chips for communication to do the same as one chip; communication is much cheaper in power if it's just baked into the chip.
@RedondoBeach2 3 months ago
Why.... do.... you..... talk.... like.... robots?
@anahitaaalami9064 3 months ago
Intel
@MichaelMantion 3 months ago
Just skip to 12:44; this video was such a waste of time, I think I will unsub.
@ARIK.R 3 months ago
Camtek (NASDAQ:CAMT) said Monday it has received a new order for about $25 million from a tier-1 HBM manufacturer, for the inspection and metrology of High Bandwidth Memory.