Unboxing the Tenstorrent Grayskull AI Accelerator!

45,374 views

TechTechPotato
3 months ago

With all these AI hardware startups, people ask me when they can actually go buy them. Turns out, now you can! Here's some hands-on with the Tenstorrent Grayskull, and a chat with Jasmina from Tenstorrent about how to get up and running!
tenstorrent.com/cards/
-----------------------
Need POTATO merch? There's a chip for that!
merch.techtechpotato.com
more-moore.com : Sign up to the More Than Moore Newsletter
/ techtechpotato : Patreon gets you access to the TTP Discord server!
Follow Ian on Twitter at / iancutress
Follow TechTechPotato on Twitter at / techtechpotato
If you're in the market for something from Amazon, please use the following links. TTP may receive a commission if you purchase anything through these links.
Amazon USA : geni.us/AmazonUS-TTP
Amazon UK : geni.us/AmazonUK-TTP
Amazon CAN : geni.us/AmazonCAN-TTP
Amazon GER : geni.us/AmazonDE-TTP
Amazon Other : geni.us/TTPAmazonOther
Ending music: An Jone - Night Run Away
-----------------------
Welcome to the TechTechPotato (c) Dr. Ian Cutress
Ramblings about things related to Technology from an analyst for More Than Moore
#techtechpotato #ai #tenstorrent
------------
More Than Moore, as with other research and analyst firms, provides or has provided paid research, analysis, advising, or consulting to many high-tech companies in the industry, which may include advertising on TTP. The companies that fall under this banner include AMD, Applied Materials, Armari, Baidu, Facebook, IBM, Infineon, Intel, Lattice Semi, Linode, MediaTek, NordPass, ProteanTecs, Qualcomm, SiFive, Supermicro, Tenstorrent, TSMC.

Comments: 137
@danielreed5199 3 months ago
Can't wait to get one of these, hold it above my head and shout "I HAVE THE POWER!!!!!"
@henriksundt7148 3 months ago
I can literally hear the guitar riffs!
@kayakMike1000 3 months ago
As Cringer transforms into Battle Cat!
@magfal 3 months ago
I hope these specialized chips completely take over the inference market and that future chips take over training at scale too. I would like to see sane prices for GPUs again.
@InnuendoXP 18 days ago
Yeah, though hopefully we'll see fab capacity scale to account for both. A lot of the price of a GPU is determined by what a chip of that size on that process node can sell for. It's one reason AMD doesn't bring Radeon prices down as low as they could: it would make Radeon less profitable than Zen, eat into their margins, and the two product lines share the capacity AMD can get from TSMC. Having more market share as a publicly traded company isn't valuable if it doesn't also mean higher net profit to reinvest into R&D for future performance/feature gains, and AMD already got burned that way with Vega.
@esra_erimez 3 months ago
I think I'm going to gift myself a Grayskull AI Accelerator for my birthday
@arthurswanson3285 2 months ago
Facts
@tipoomaster 2 months ago
8:49 lol at Ian playing it off like he was going for a smell and not a bite when she thought that
@woolfel 3 months ago
I like the conversation about how you keep backward and forward compatibility. As a software engineer in the consulting space, I can say compatibility is the blessing and curse of maintaining code.
@aapje 3 months ago
Yeah, it seems a bit optimistic, especially when the API is very similar to the hardware.
@philippetalarico1059 3 months ago
The "It does fit in your desktop" is such an underrated burn on NVIDIA/AMD haha 🔥
@KiraSlith 3 months ago
6:20 Delta makes a 60x20 blower that'd fit the form factor far better. Slim the unit down to properly occupy a low-profile slot in a compact machine like HP's e1000 micro servers, and I'd recommend adding a cheap low-power microcontroller to monitor and manage the fan speed as well, to reduce overall system noise and allow an optimized fan curve.
@triularity 3 months ago
AI-Man: "By the power of Grayskull.... I have the power!" 🙃
@dinoscheidt 3 months ago
Even for Jim Keller it will be a hard task to catch up on 10 years of CUDA and the whole software stack that rests on top of it. I really hope they succeed. Software-hardware co-design is really the crucial aspect here.
@arthurswanson3285 2 months ago
They have to hit the hobbyist entry point to make a mark.
@solidreactor 3 months ago
This is amazing news! Looking forward to ordering one
@Veptis 3 months ago
So Grayskull is useful for 2016 workloads? Wormhole seems barely useful for today's tasks either, maybe image decoders, with that memory limitation. Does the compilation run completely on the card? There is a lot of compute on board, so could I run it as a language-model server and then use my system for something else? Or am I supposed to buy 8 of these and put them into a server board with a server CPU? A single Groq card is $20k and has no memory. Perhaps it's a developer kit, but it doesn't seem researcher friendly. I want an inference card to run 70B models in my workstation, preferably directly via accelerate so I can write device-agnostic code: any model from HF at any precision, from fp32 to bf16 to fp16 to quantized models. So your roadmap is to be upstreamed to PyTorch 2.0 natively? That is like half a year late, and today we had the release of PyTorch 2.2. Intel is aiming to get their GPU upstreamed by PyTorch 2.5 in October, which will also be a backend switch to Triton. Perhaps I should sign up on the website and share my requirements.
@danielme17 3 months ago
They should forget that 8GB of LPDDR and just give us fast access to one or two NVMes, done. I would never complain about memory again.
@kazioo2 2 months ago
@danielme17 For what? You wouldn't be able to feed that compute with the puny bandwidth of an NVMe.
@hedleyfurio 2 months ago
Every success with these dev kits that let developers get their heads around the hardware and software stacks. The level of transparency and authenticity displayed in all Tenstorrent interviews is very encouraging versus watching a slick marketing pitch to hype up the crowd. Many comments are about the LPDDR size, and perhaps those are from people wanting to plug in a card and run an LLM; the amazing tech in the chip and software stacks, with accessibility, is where the value is, as it is not difficult to add more LPDDR chips. Our application is a multimodal authentication inference engine at the edge, where speed, low power, and accuracy are the key figures of merit, so we are looking forward to getting our hands on the dev kit.
@EricLikness 3 months ago
Sorry to use this reference, but as SJ used to say, "Great Products Ship". You cannot try things out unless they're manufactured and in your hands. 'Announcements' don't run LLMs. 😸
@thegoldbug 2 months ago
I love the C64 t-shirt!
@danielmeegan6259 2 months ago
Thank you 👍
@OpenAITutor 26 days ago
I'd like to see how much it can accelerate inference. Some performance numbers would be great.
@colinmaharaj 3 months ago
Doctorate in FPGA, impressive
@emeraldbonsai 3 months ago
Maybe they explain this in the video, but it says "TT-Buda: Run any model right away". Since the Grayskull card is only 8GB, won't you be limited to models under 8GB, or can it leverage your CPU's RAM?
@kil98q 3 months ago
Yeah, same question. I'm down to paying that price, but if it's barely an advantage over a similarly priced GPU, then I might as well buy a more flexible GPU...
@wool1701 3 months ago
The latency is too high on the PCIe bus to use CPU RAM for large models with good performance. The only tensor accelerator I have seen that can effectively run large models fast in shared memory is the Apple M GPU. Apple M can do this because it has a very good unified memory model and a high-bandwidth internal bus. (I have tried doing this on Ryzen with unified memory, but the iGPU is not significantly faster than the CPU for LLM inference. I tested PyTorch 2.1 / ROCm 5.7.1 on RDNA2 with Llama2:13b; AMD does not officially support ROCm on this GPU.)
@dinoscheidt 3 months ago
It's simply not an LLM inference machine. Transformers, though highly hyped right now, are a small subset of machine learning. Also, when you nail the architecture, it might be easier to extend into larger memory (bandwidth is the problem, not the size). Asking for more memory is like asking for more megapixels on a camera, completely forgetting that you need to be able to fill and page that large bucket.
@DigitalJedi 1 month ago
@dinoscheidt Agreed, and given they are positioned as a dev kit of sorts, this is more than enough memory for someone to get small test builds up and running that will scale to larger pools on future hardware.
@Mr.Kim.T 3 months ago
This reminds me of the PhysX add-in cards some 15 years ago. Unfortunately for them, single graphics cards very quickly became fast enough to do in-game physics themselves without requiring a separate card for the purpose. NVIDIA just swallowed PhysX whole… as it had done with 3dfx before it. Since then, NVIDIA's dominance has become all-encompassing. I've known NVIDIA almost since its inception…. it's a hard-nosed company that takes no prisoners. My advice for other A.I. companies is to keep out of NVIDIA's crosshairs.
@arthurswanson3285 2 months ago
I remember those! Always wondered why they disappeared.
@Zero-oq1jk 1 month ago
Is there any chance we see RISC-V laptops and PCs? Like Ascalon or anything else? Or will ARM be the only option there?
@movax20h 3 months ago
Not a bad start. While it might not quite outperform something like a 7900XT, the pricing is decent, it's smaller, slightly more efficient, and software support already looks pretty good. But I think 8GB is going to be a bit limiting. Maybe with two cards installed it could be worth it for bigger models. Looking at the website, documentation, and repos, it is all rather straightforward to use; the instructions and the structure of the pieces are easy to understand. So already ahead of AMD, for example. I really hope the tt-kmd driver gets mainlined into the upstream kernel first.
@s1rb1untly 3 months ago
Hopefully these guys knock NVIDIA down a peg in the future. Competition is good.
@Slav4o911 3 months ago
Not with 8GB LPDDR... they need VRAM (a lot of VRAM) and as high a bandwidth as possible.
@Delease 3 months ago
I'm very interested to know what Tenstorrent's plans are, if any, for getting their Linux drivers upstreamed into the mainline kernel. Having upstreamed drivers would really go a long way in giving me confidence that these cards are going to have long-term software support, independent of the fortunes of the company that created them.
@cem_kaya 3 months ago
In the development of Windows support, do you consider WSL?
@packapunchburger 3 months ago
I doubt it personally
@nathanfife2890 3 months ago
I'm interested. What kind of performance difference do you get between Nvidia graphics cards and these accelerators? I'm assuming it's not as good as a 4090 or something, but it's still probably significantly better than just running on my 16-core CPU. So where in that range does this thing sit? Or is it more about the interesting framework that enables more creative development?
@Slav4o911 3 months ago
Just looking at the bandwidth, it would be about 3x slower than an RTX 3060... for 2x more money... so not good... and I don't believe they have faster tensor cores, but even if they did, the limiting factor is the bandwidth, not the tensors.
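The back-of-envelope behind bandwidth claims like the one above: at batch size 1, every weight must be streamed from memory once per generated token, so tokens/sec is bounded by bandwidth divided by model size. A quick sketch; the 118 GB/s figure is the Grayskull number quoted elsewhere in these comments, 360 GB/s is the published RTX 3060 spec, and the 7B int8 model is an assumed workload:

```python
def max_tokens_per_sec(bandwidth_gb_s, params_b, bytes_per_param):
    """Bandwidth ceiling on single-stream decode: every parameter is
    read from memory once per generated token."""
    model_gb = params_b * bytes_per_param
    return bandwidth_gb_s / model_gb

grayskull = max_tokens_per_sec(118, 7, 1)  # 118 GB/s LPDDR4, 7B int8 model
rtx3060 = max_tokens_per_sec(360, 7, 1)    # 360 GB/s GDDR6, same model
print(f"Grayskull ceiling: ~{grayskull:.0f} tok/s")
print(f"RTX 3060 ceiling:  ~{rtx3060:.0f} tok/s")
print(f"ratio: ~{rtx3060 / grayskull:.1f}x")  # ~3x, matching the comment
```

Compute throughput drops out of the estimate entirely, which is the commenter's point: at these sizes the memory system, not the tensor cores, sets the speed limit.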
@esra_erimez 3 months ago
I'm very envious of people who can program FPGAs. I have a master's in Comp Sci, and no matter how much I try, I can't get my head wrapped around FPGAs and emacs.
@first-thoughtgiver-of-will2456 3 months ago
Just start with VHDL on an FPGA with a good GUI studio. If that's still too difficult and you have money for a license, I'd recommend LabVIEW. It can target FPGAs as well as CPUs and is a graphical programming environment (no-code solution) that is extremely approachable.
@danielreed5199 3 months ago
I find that they are easier to program if you take them to their natural habitat.... the countryside. I think they are mainly used by cattle farmers to systematically control a set of access points to the pastures. Basically... if you are out standing in your field (Computer Science) you will be able to figure it out. On a serious note though.... don't give up trying... every attempt, although it may not seem like it, you are getting better at it; some things just have crazy steep learning curves. I am pretty sure that a lot of concepts you learned in CS took a while to sink in, but they did :) I hope you are able to envy your future self :)
@s7473 3 months ago
When I studied digital electronics around 2001, we started with basic logic gates and built a traffic light system. I can't remember the name of the software we used, but it was a Xilinx FPGA we worked with, and it was mostly drag-and-drop placement to build up a digital circuit diagram that could be exported to the chip. It was much easier than programming an 8086 in assembly language. :)
@sailorbob74133 3 months ago
What? FPGAs can't be programmed in VIM?
@esra_erimez 3 months ago
@sailorbob74133 🤣
@zeljkanenad 3 months ago
So many start-ups today are built with one and only one goal: to demonstrate something narrow and not sustainable on its own, and finally (the goal) be sold to big tech. Unfortunately, in the process they must sell their beta or 'dev-kit' product to customers, basically using them as a free workforce. Competing with Nvidia? Oh, please. This is presented as a dev kit, but for what purpose will someone invest their energy, hoping the whole proprietary stack will not die and that it will be able to scale in the future? Basically, I'd want an example of a real-life use case for this dev kit today in its current form. Regardless of the above, it was a pleasure listening to Jasmina and Ian discussing the topic. Good job, Ian. And all the best, Jasmina. Hope Nvidia buys you for billions :)
@theworddoner 3 months ago
I wish them all the best and success. The dev kit memory seems a bit tiny, doesn't it? It's 8GB with a bandwidth of 118GB/s. What can you do with that?
@lbgstzockt8493 3 months ago
Maybe it streams from your system RAM and just caches?
@LunarLaker 3 months ago
Start developing on it :) It's like how ARM workstations absolutely sucked until stuff like Altra showed up; there just had to be something available to see what works.
@LtdJorge 3 months ago
@lbgstzockt8493 RAM has abysmally low bandwidth.
@Slav4o911 3 months ago
@lbgstzockt8493 That would be slow... I can tell you from practice, once your model spills outside of VRAM it gets very slow. A small spillover sometimes is not very detrimental, but it slashes your speed 2x or 3x... of course it's still better than 20x slower. Nvidia GPUs are literal AI monsters.
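The "small spillover slashes your speed 2x or 3x" effect above falls out of a toy model: each token still has to stream every weight once, but the spilled fraction comes over a much slower path. A sketch under assumed round numbers (360 GB/s for GPU VRAM, ~50 GB/s for dual-channel system RAM, a 7 GB model):

```python
def decode_time_per_token(model_gb, vram_frac, vram_bw_gb_s, ram_bw_gb_s):
    # Time to stream all weights once per token: the resident fraction
    # comes from VRAM, the spilled remainder from system RAM.
    return model_gb * (vram_frac / vram_bw_gb_s
                       + (1 - vram_frac) / ram_bw_gb_s)

model_gb = 7.0  # assumed 7B int8 model
full = decode_time_per_token(model_gb, 1.0, 360, 50)   # fits entirely in VRAM
spill = decode_time_per_token(model_gb, 0.8, 360, 50)  # 20% spilled to RAM
print(f"all in VRAM: {1 / full:.0f} tok/s")
print(f"20% spill:   {1 / spill:.0f} tok/s ({spill / full:.1f}x slower)")
```

Because the two paths add like resistors in series, even a 20% spill more than doubles the per-token time; the slow path dominates almost immediately.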
@WyattFredaCowie 3 months ago
Yeah, that's a $200 6600XT. Not quite sure what their idea is here, especially when GPUs are already extremely efficient for machine learning.
@ErikS- 3 months ago
The logo will be something that catches the attention of AMD's legal department... If I were judge or jury in a trial on the IP, I would most certainly see a conflict with AMD's logo.
@tinto278 3 months ago
Is this the SDI/3dfx 3D-accelerator moment for AI accelerators?
@Slav4o911 3 months ago
Nah.... not even close.
@tinto278 3 months ago
@Slav4o911 Reagan's Star Wars and Jurassic Park are coming.
@NNokia-jz6jb 3 months ago
Good Commodore t-shirt. ❤
@NNokia-jz6jb 3 months ago
What can I do with this card?
@fteoOpty64 3 months ago
So Ian, you finally met your match! A solid PhD in FPGA, really good at the stuff, with elegance and beauty to match. What can I say? A unicorn is so, so rare... yet we are looking at one!
@fteoOpty64 3 months ago
Get a stack of these and revive your Pi calculations to 100 trillion, please!
@danielgomez2503 3 months ago
Does anyone have a discount code to share?
@MrAtomUniverse 9 days ago
Groq is way better, right?
@Arcticwhir 3 months ago
What's the performance like?
@Slav4o911 3 months ago
Just looking at the bandwidth number, it would be slow even if their tensor cores are fast. You need fast RAM and a lot of bandwidth to feed the tensors, otherwise it's slow. Just look at how much bandwidth Nvidia GPUs have. Even if their tensors are faster than Nvidia's (which seems impossible to believe), they would need to feed them. Also, why didn't they put in more RAM, at least 32GB? 8GB is very small; you can buy a 16GB RTX for about that price, which can start working immediately without any hassle.
@joealtona2532 2 months ago
Grayskull is tagged 2021 on their own roadmap. Isn't it too little, too late?
@MultiMojo 3 months ago
8 GB of memory in 2024 just plain sucks. LLMs are all the rage right now, and the smallest one with 7B parameters needs at least 16 GB VRAM (DDR6 not DDR4). I don't see how anyone would be interested in these over the H100, which everyone drools over. At least increase the memory to 128 GB+ to drive some interest.
@TechTechPotato 3 months ago
They're dev kits :) It's in the name.
@nadiaplaysgames2550 3 months ago
@TechTechPotato Would they be looking into using standard DDR RAM as a slower cache, or even an SSD as a direct connection? Have an SSD do a bulk transfer of contiguous memory, load a chunk of the model into RAM FILO-style, and have it running in a loop.
@jjhw2941 3 months ago
@TechTechPotato My NVidia AGX Orin with 64GB of RAM is also a dev kit :)
@Slav4o911 3 months ago
@nadiaplaysgames2550 Offloading to SSD would be very slow, even on the fastest SSD. Even spillover to RAM makes models very slow... I don't even try to offload to the SSD (and possibly destroy it, because it would be very heavily used). If a model fits fully inside VRAM, with the "streaming option" it will start to answer in around 5 seconds; with a small RAM spillover the answer time progressively slows to 20-30 seconds; if the model runs fully in RAM you'll wait 200-300 seconds (depending on context length), which does not feel like chat but like sending an e-mail and waiting for an answer... it's possible but not fun at all. If it spills over to the SSD, the answer will probably come after an hour... if the SSD doesn't explode before that.
@GeekProdigyGuy 2 months ago
Hilarious how much ignorance one comment demonstrates. How are you comparing an $800 dev kit card to an H100, which is north of $40K? A literal 50x price difference. Not to mention calling VRAM "DDR6, not DDR4" when GDDR6 is generationally aligned with DDR4, just specialized for GPUs.
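For scale, the capacity arguments running through this thread come down to parameter count times bytes per parameter; that's a lower bound, since the KV cache and activations come on top. A minimal sketch against the card's 8 GB (the 7B model size is the one named above; the precisions are standard choices):

```python
def weights_gb(params_billion, bytes_per_param):
    # Raw weight storage only; KV cache and activations add more.
    return params_billion * bytes_per_param

for dtype, nbytes in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    size = weights_gb(7, nbytes)
    verdict = "fits" if size <= 8 else "does not fit"
    print(f"7B {dtype}: {size:4.1f} GB -> {verdict} in 8 GB")
```

So a 7B model only squeezes into 8 GB at int8 or below, which is why the replies split between "it's a dev kit" and "it's too small for LLMs": both are consistent with the arithmetic.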
@Phil-D83 3 months ago
By the power of...AI!
@pebre79 3 months ago
Benchmarks?
@kitastro 3 months ago
Take my money
@UnicornLaunching 2 months ago
Now that they have enterprise partnerships, they're getting into the rhythm of shipping. Devs will test, extrapolate use cases, and report feedback. Same as it ever was. Not enough memory? Same for every piece of hardware that gained traction.
@OOMed 3 months ago
That's right. That's right. That's right.
@theHardwareBench 3 months ago
Haven't heard Jim Keller mentioned since he bailed out of the Ryzen project in 2015. Considering how bad those early CPUs were, I'm guessing AMD didn't listen to his advice. Pretty sure he wouldn't think having the cache speed locked to RAM speed was a good idea.
@TechTechPotato 3 months ago
I've interviewed him multiple times! Covered his time at Intel, when he left, and his new ventures!
@theHardwareBench 3 months ago
Cool, I'll look through your old videos. I discovered you through an article you wrote interviewing the head of Intel's OC lab in 2020. I'm getting back into overclocking and looking for an edge. He said per-core overclocking was the way forward, but I can't see how that's going to improve any of my CPU or 3DMark scores lol.
@woobilicious. 8 days ago
What the hell are you talking about? Zen 1 saved AMD; it was widely considered the saving grace for AMD, and even though it wasn't completely killing Intel's Core, it was a viable alternative that everyone celebrated. I also doubt you have any clue how to build a CPU to be commenting on AMD tying RAM and L3 cache clocks together; cache coherency is literally the hardest problem in computer engineering, and I would gamble that Intel's chiplet designs will do something similar.
@zebobm 3 months ago
Doctorate corn: 2 PhDs for the price of 1. But seriously, where will these chips be used from a consumer standpoint?
@NotAnonymousNo80014 3 months ago
In pushing Nvidia out of AI so they can return to making graphics cards. :D
@LunarLaker 3 months ago
For training models used by consumers, probably smaller/more niche ones, given big software players have their own chips or the capital for NV. As a consumer you're probably never going to buy your own inference card, but much further down the line you could see Tenstorrent IP in your CPU.
@pengcheng8299 3 months ago
Why was it branded "Taiwan" if the contract went to a Samsung fab?
@TechTechPotato 3 months ago
This chip was technically GF, I think. Packaging likely done in TW.
@predabot__6778 3 months ago
@TechTechPotato Wait, GlobalFoundries...? But GF doesn't even have a 7/10nm fab -- how would these cards be able to even match a 4060 Ti with a process as old as 14/12nm?
@tindy128011 2 months ago
Can I use it to run a Minecraft server?
@qeter129 3 months ago
Get a consumer card with 48GB or more memory out there for less than $1,500 and you'll make hundreds of billions on edge AI computing. Please free us from the green giant and his little red minion.
@nadiaplaysgames2550 3 months ago
What card do I need for a local LLM?
@nadiaplaysgames2550 3 months ago
@user-ef2rv9el9x Yup, I fixed that today, I got a new card today.
@jjhw2941 3 months ago
@user-ef2rv9el9x People are using Macs and MacBooks because of the unified high-speed memory as well.
@Slav4o911 3 months ago
Some Nvidia RTX with at least 16GB or more VRAM... so definitely not 8GB... I have an RTX 3060 12GB GPU and it's not enough for the bigger models, and once your model spills into regular RAM it becomes slow, so more VRAM is better. Also keep in mind AMD and Intel will not help you; you'll have a hard time running LLM models on them (if you have a problem, almost nobody will help you, because everybody uses Nvidia), and the models are optimized only for Nvidia GPUs.
@nadiaplaysgames2550 3 months ago
@Slav4o911 Doing some research, the 4060 Ti is the highest you can get without selling an organ. I just hope the split memory bus and 8x lanes won't mess me up.
@nadiaplaysgames2550 3 months ago
@Slav4o911 Anything bigger than 16GB is 4090 territory.
@metallurgico 3 months ago
That fan placement looks so janky lol
@ChrisJackson-js8rd 3 months ago
Are they hiring?
@MrChurch69 2 months ago
Can you play games on that thing?
@rudypieplenbosch6752 3 months ago
Would be nice if you could use it in combination with Matlab. Interesting product. Interesting woman, very eloquent.
@sheevys 3 months ago
That's right
@cannesahs 3 months ago
For once, real engineering talk instead of pure marketing s*it 👍
@michela1537 3 months ago
Thank you for sharing ;-) We need more women in AI... urgent to balance the outcome of humanity and AI!!
@DanielJoyce 3 months ago
So this dev board can only run tiny models from a few years ago? Disappointing. Even their bigger boards only have like 12GB.
@igor25able 3 months ago
No need to support Windows; CUDA doesn't anymore, so the community has shifted to Linux completely. Support for WSL is quite enough.
@maruma2013 2 months ago
Will they sell AI chips for consumers? We need a person who saves us from Jensen Huang.
@TechTechPotato 2 months ago
You can buy them now
@maruma2013 2 months ago
I see. Anyway, I want to see a demo running on Grayskull.
@mirekez 3 months ago
So many RISC-V cores to process ML? I don't believe it's worth it.
@bartios 3 months ago
Don't know if you're aware, but those cores implement a ton of custom instructions optimized for AI. That, and all the networking etc., is where they get their TOPS/FLOPS.
@mirekez 3 months ago
@bartios Keep custom, remove cores )))
@TechTechPotato 3 months ago
No, this isn't RISC-V cores. It's Tensix cores.
@oj0024 3 months ago
The Tensix cores are supposed to have five control RISC-V cores and a large compute engine. I'm not sure what the RISC-V cores in Grayskull actually are, though (extension-wise).
@ErikS- 3 months ago
I was originally very enthusiastic about RISC-V. But from what I hear and see, it is just not performant and crashes continuously. I am hopeful for the future, but until it is picked up by a credible company like Qualcomm / Intel / AMD / Nvidia / ARM / Samsung / ..., I doubt it will get to a mature point.
@Johnmoe_ 3 months ago
8GB of LPDDR4... for $599... bruh 💀. It's an interesting project, don't get me wrong, but I could do better with an off-the-shelf Nvidia GPU.
@TechTechPotato 3 months ago
It's a developer kit.
@waldmensch2010 3 months ago
Nice hardwarep0rn 🙂
@bassamatic 3 months ago
AI, like a program that searches the web and pretends to understand it.
@tringuyen7519 3 months ago
Depends on what you define as understanding. AI is doing a mathematical regression of you based upon your questions & responses. Are you predictable?
@emeraldbonsai 3 months ago
@tringuyen7519 If you actually had access to all the inputs of a human, then yes, they would be predictable.
@TDCIYB77 3 months ago
Is Ian flirting? 😂
@julianneEVdmca 3 months ago
OKAY ! WHAT IS AI Accelerator again!??!! CUZ YOU ALL SHOWING HARDWARE BUT IT JUST SOFTWARE!! why keep showing me pci-card when you can literally use usb.2!! is it funny to sell FREE-chatgpt as a new monster graphic chip!!?? i not gamer to fool me by DLSS & RTX !! YOU TALKING TO I.T VIEWER NOT SOME HOME GAMING USER! SO WHO YOU WANT TO FOOL WITH THIS??! WHO!!
@iseverynametakenwtf1 3 months ago
I feel uncomfortable watching this. Such an awkward thing
@pynchia4119 1 month ago
Pity there's too much Botox. She cannot even move her mouth anymore, let alone smile fully. OMGoodness
@666Maeglin 3 months ago
So pretty and smart..
@tuqe 3 months ago
Saying things like that makes someone feel uncomfortable and is weird
@solarin739 3 months ago
I do love Ian's Commodore shirt, yes
@packapunchburger 3 months ago
I do love some well-designed and well-placed pogo pins myself
@AK-vx4dy 3 months ago
@tuqe But it is true, though I would say smart first
@tuqe 3 months ago
@AK-vx4dy Nah, still comes across as someone who has not spent enough time around women to realize that they are humans