A Deep Dive into IBM's New Machine Learning Chip

17,859 views

TechTechPotato

1 day ago

Comments: 103
@AnastasiInTech 2 years ago
Amazing video
@RATTL3R186 2 years ago
Don't know why I never found your channel till now. YT sucks and apparently needs these very cards LOL. Thanks, you cover a LOT of tech no one else will attempt to explain, much appreciated. Is it me or was that a substantial piece of copper on the card?
@TechTechPotato 2 years ago
It's passively cooled in a server, so 75W worth of cooling is needed
@Tential1 2 years ago
It's not your fault, this wasn't the best channel name to transition to lol.
@shinokami007 2 years ago
audio is very weird from 1:30 to the end... only the first section is nicely mixed. Anyway, thanks and keep it up Ian ;)
@PainterVierax 2 years ago
Yep, I think the stereo mic would have been better treated as a mono input rather than keeping a raw unbalanced volume.
@nunyobiznez875 2 years ago
It's always fascinating to see what IBM has been working on, because it's usually something on the leading edge, and it's always something interesting, at the very least.
@kayakMike1000 2 years ago
Wow, you're way out of date man... IBM is stupid late on AI. Nvidia is light-years ahead, and AMD is probably not too far behind. Google has tensor cores that you could rent like five or six YEARS ago. IBM barely had a cloud back then...
@nunyobiznez875 2 years ago
@@kayakMike1000 Out of date or out of context, one of the two, anyway. IBM didn't just now join the party on AI, and I'm not even speaking of a specific product. AI has been around since the '60s, decades before Google or Nvidia were even a glint in their creator's eye. Regardless, that's where a lot of leading research is still being done today. IBM is also at the forefront of quantum computing, which is very much still in its infancy and is bleeding-edge research, despite also being around, at least in theory, for many decades. I don't really think of the cloud as cutting edge, but yeah, I guess they have that too. They're a somewhat quiet company that just doesn't get much attention at the consumer level nowadays. But either way, maybe it's you who's out of date with what they've been doing.
@nunyobiznez875 2 years ago
@Chuck Norris What are you even talking about? Mentioning that AI existed before Nvidia is not "making a fight" or putting any company vs. company. It's not doing anything except stating a plain fact. I'm not going to bother rewatching this video to check, but I seem to recall from memory that the card featured in this video was only a 75-watt card, which, for the record, is not going to even be in the same class or ballpark as a 300-450 watt GPU. Though this card does have some unique features as well. However, it would still be ridiculous to even try to make some kind of direct comparison.
@cem_kaya 2 months ago
The audio is weird at ~5:15. I don't know how to explain it, but it feels like there are two mics: one records your voice and the other records the misc noise coming from you, like breathing, cloth moving, and air moving through your airways. It feels like the second one amplifies the misc noise.
@iyke8913 2 years ago
Camera game has really improved 👌
@fpgamachine 2 years ago
The board looks good but I am extremely skeptical of what a quantized neural network with INT2 can deliver, especially after using FP16 quantized networks on NVIDIA cards and noticing greater than expected degradation during inference.
@bryce.ferenczi 2 years ago
Really? FP16/BF16 is usually free performance at inference time; often it's only at INT8 that you start to see performance degradation. I'm curious what kind of models you have issues with. From personal experience, it's only been Mask2Former *training* in FP16 that I've had problems with in the past.
@esra_erimez 2 years ago
12:15 when you mention analog, do you mean as opposed to digital? That would be very interesting to have neural networks with something like op-amps.
@TechTechPotato 2 years ago
Yup!
@alpaykasal2902 2 years ago
The lines would blur with IBM's quantum team, their entire job seems to be signal to noise control.... and control certainly isn't the correct word :)
@i300bps 2 years ago
Please fix the audio! There's some noise in the left channel. Thanks in advance.
@billykotsos4642 2 years ago
AI hardware wars are only continuing to heat up! Lots to look forward to in the future!
@D.u.d.e.r 2 years ago
Thank u for the report Ian!👍
@benjaminlynch9958 2 years ago
Awesome. Does IBM have a separate product that they (or their customers) use to train these models for Int2?
@monstercameron 2 years ago
yes, I've been poking you on twitter to talk more about these ai accelerators!
@FrankHarwald 2 years ago
10:28 "any computer in the world has a PCIe attachment" nope. A lot of embedded, mobile & SoC computers don't. PCIe is mostly used in PCs & servers.
@PainterVierax 2 years ago
sadly true. I believe there might be some market for a more humble USB module but manufacturers are certainly more financially inclined to put those units in a shiny new SoC product line rather than offering an extension to existing solutions.
@alpaykasal2902 2 years ago
Clearly them using their sales language.
@monstercameron 2 years ago
By chance did you ask IBM about in memory computing?
@velo1337 2 years ago
intel will have you covered
@ProjectPhysX 2 years ago
0:40 looking forward to when FP2 floating-point comes out (-NaN, -0, +0, +NaN)
@erkinalp 2 years ago
😅
@PainterVierax 2 years ago
You're joking, but FP2 is kind of a common thing when the values of an INT2 mean -1, 0, +1 (and sometimes the application might need NaN as an extra value for error/failure purposes), like in a 1-axis control or sensor feedback application. This is very similar to the classical 1-bit comparator logic table.
@FlaxTheSeedOne 2 years ago
It would be nice if you could have shown an all-round view, in a calmer way, instead of flopping the cards around. The view and how it's cooled is interesting too.
@ConsistentlyAwkward 7 months ago
IBM is so interesting. How does this chip relate to the NorthPole chip they announced last year? I know that chip is an inference-only ASIC; is that research related to this chip, or are they completely separate? Will they have a training-only chip next 😅
@whyjay9959 2 years ago
Can I install this to make bots in Unreal Tournament smarter?
@spuchoa 2 years ago
Great video!
@wololo10 2 years ago
This is how Skynet begins
@_PatrickO 2 years ago
Starlink would have been called Skynet if Hollywood did not suck so much. Tesla makes AI and robots, so it isn't hard to see where this is going.
@alexmills1329 2 years ago
Every day it gets a little closer…
@LettersAndNumbers300 2 years ago
Not really
@igoromelchenko3482 2 years ago
Absolutely not... Fellow human... Bip-blop...
@kayakMike1000 2 years ago
Oh come on... Google has AI already, but its really only interested in cat videos, conspiracy theories, and white supremacy.
@walter1824 2 years ago
What if there was a realistic physics chip, like those billion-particle simulations?
@Joker-no1fz 2 years ago
But can it play Crysis?
@chaoukimachreki6422 2 years ago
Nice watch!
@Ironclad17 2 years ago
What is the main reason for a low-power PCIe card? If it's going into servers, they can certainly go for higher density. Are they trying to avoid competing with CDNA and Hopper?
@TechTechPotato 2 years ago
Search the Web for cards like the T4 or A10, which are a similar form factor
@billykotsos4642 2 years ago
Another chip in the game ! LETS GOOOOO
@vensroofcat6415 2 years ago
Unrelated, but you triggered a wish to rewatch Ex Machina. The better looking and smarter AI :) This card looks good too, but still light years behind (which is distance just in case).
@sniffulsquack5608 2 years ago
"So, I just happen to be here at IBM"
@Veptis 1 year ago
I am currently looking at building a new workstation PC at home, and for it I am entertaining the possibility of putting in a dedicated "AI" accelerator. I want something to run models that are like 6B or maybe even 16B parameters to use for my own research and a lot of fun. And it should be an mATX build where there is a dedicated GPU for gaming/video editing. But it's difficult to find actual products that get sold to consumers.
@nellyx8051 2 years ago
Int 2? Wow you can't reduce precision much more than that. 🤣
@alexmills1329 2 years ago
That's perfect for deep learning, is my understanding; they require a binary yes/no answer, and the grey area averages out either way over large enough matrices.
@MrHaggyy 2 years ago
@@alexmills1329 Yes, they need a lot of Int 1, but we kind of have this with raw binary already. Int 2 is great for statistical decisions like hypothesis testing. Those give you a yes or no as an answer, as well as whether it's significant or not. Or, in more AI wording: is a feature present in a picture, and is it relevant for classification or not. Also a neat datatype for quantization where you have two values and want to know if your value is bigger or smaller than either one.
@nellyx8051 2 years ago
@Mahesh Oh I see. I thought it was a different way saying 2 bit calculations.
@MrHaggyy 2 years ago
@Mahesh Oh, you are right, they count the bytes and reserve one bit for the sign. Thanks for pointing it out. From the video, Ian's question, and "uint_16, int_32 etc." I thought they meant bits. It's also inconsistent with their naming for floating point. HFP8 is a hybrid floating-point 8-bit.
@shaxosYT 2 years ago
@Mahesh Yeah, if you google "IBM INT2" you get some database command that is unrelated to the topic of this video. This card really supports operations down to 2 bits (but I think higher-bit representations are still more practical at this time).
@billykotsos4642 2 years ago
dope vid !
@anarekist 2 years ago
lol, thank you for the video. whats up with the coloring tho?
@alpaykasal2902 2 years ago
Best tshirt ever. CommodoreForever
@Ritefita 2 years ago
where's memristors?
@jannegrey 2 years ago
Beer to Beer Consumers?
@TechTechPotato 2 years ago
🤣
@TheEskimo24592 2 years ago
Anyone else find the audio being balanced so far to the left side, intolerable?
@eberger02 2 years ago
"Inference" of what? If I inferred something at work, it'd mean I'd looked at the past data and inferred a current state. With only 2 bits, so 4 possible states, of data that is impossible. I couldn't even code something simple, like a serum potassium concentration, in two bits. You could take the day before's data and create a line, but again you'd need more than two bits to express the gradient.
@TechTechPotato 2 years ago
2-bit relates to the intermediate layers of the machine learning algorithm, not the output. E.g. an 8-bit multiply is usually paired with a 32-bit accumulate.
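A minimal sketch of that widen-then-accumulate pattern (illustrative only, in plain Python; not IBM's actual datapath, and the int8/int32 ranges are just simulated with assertions):

```python
INT8_MIN, INT8_MAX = -128, 127

def int8_dot_int32_accumulate(a, w):
    """Dot product of two int8 vectors, accumulated in a 32-bit register."""
    for v in list(a) + list(w):
        assert INT8_MIN <= v <= INT8_MAX, "inputs must fit in int8"
    acc = 0  # stands in for a 32-bit hardware accumulator
    for x, y in zip(a, w):
        # each product can reach 127*127 = 16129, far outside int8 range,
        # which is why the accumulator must be wider than the operands
        acc += x * y
    assert -2**31 <= acc < 2**31, "accumulator overflow"
    return acc

print(int8_dot_int32_accumulate([100, -50, 127], [100, 100, 100]))  # 17700
```

The result here (17700) already overflows int8 after one product, which is the point of the wider accumulator.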
@alpaykasal2902 2 years ago
Inference against a pretrained model.
@eberger02 2 years ago
@@alpaykasal2902 you didn’t answer the question. I didn’t ask against what.
@shaxosYT 2 years ago
@@eberger02 In machine learning lingo, "inference" means to provide an input (for example an image) to a neural network and read the output (for example, the class the image belongs to). This is in contrast to "training", where the parameters of the neural network itself are updated to match a known output. This chip can perform internal operations multiplying together many 2-bit values (or larger; 2 bits is the lower limit) to improve speed and lower energy consumption during inference. However, the network output will not be expressed with just 2 bits but many more.
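A toy sketch of the 2-bit idea described above (assuming symmetric uniform quantization with a per-tensor float scale; IBM's actual scheme, per their paper, is more sophisticated):

```python
def quantize_2bit(weights):
    """Map float weights to 2-bit codes in {-2, -1, 0, 1} plus a float scale."""
    m = max(abs(w) for w in weights)
    scale = m / 2 if m > 0 else 1.0
    # round to the nearest level, then clamp into the 4 representable codes
    return [max(-2, min(1, round(w / scale))) for w in weights], scale

def dequantize(codes, scale):
    """Recover approximate float weights from codes and scale."""
    return [c * scale for c in codes]

w = [0.9, -0.4, 0.05, -1.0]
codes, scale = quantize_2bit(w)
print(codes)                      # [1, -1, 0, -2]
print(dequantize(codes, scale))   # [0.5, -0.5, 0.0, -1.0]
```

Each weight now occupies 2 bits instead of 32, at the cost of the approximation error visible in the dequantized values; the layer outputs are still accumulated at much higher precision, as noted in the reply above.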
@alpaykasal2902 2 years ago
@@eberger02 Sorry if I misunderstood what you were asking for... you said "it'd mean I'd looked at the past data and inferred a current state", which is correct. A model is pre-trained from past data. Then, 'something' can be inferred about the current state, measured against that pre-trained model. In the edge/IoT model, those inferences are then sent back to the core (datacenter) as cumulative learnings for a newly trained model to be delivered. In cumulative steps, inferences become averaged over time and quantizing of large datasets can get more accurate because the models are cumulatively better. Whether a GAN, reinforcement learning, diffusion, etc., more iterations make for better trained models, and better inference. Or are you asking what the actual compressed dataset looks like? I'd guess we're talking about matrices which are essentially blobby greyscale... coming from a computer vision guy, of course that's what I want a matrix of numbers to look like :)
@50shadesofbeige88 2 years ago
Where is Miles Dyson!
@ultraveridical 2 years ago
For some reason you are inhaling in my left ear.
@shinokami007 2 years ago
omg yea and it's killing me :/
@tommihommi1 2 years ago
nice shirt
@kylexrex 2 years ago
My left ear liked this
@shinokami007 2 years ago
my right one is kinda jealous tho :D
@shinokami007 2 years ago
@@sirmongoose shuush you non-headset user 😛 love you anyway ahha xoxo
@dupajasio4801 2 years ago
Ian, I have lots of respect for you. My comment is very general and not a reflection on you. When or where can we see the truth about IBM shyt being crap? Their servers suck, their software sucks. I am yet to see a negative review of any solution by anybody. Am I missing something? A tech told me once it takes 4 hours to power up Watson and after that one can start the OS. Seriously? I'm sick and tired of these perhaps sponsored reviews that are all positive. Once again, I would not say that if I didn't have tremendous respect for you, Sir. From my experience with the IBM AS400, or IBM i, or whatever they decided to name it now, IBM offerings are shyt. And only getting more and more behind. Greetings
@owlmostdead9492 2 years ago
Asking the real question here: when is IBM going to make laptops again?
@Tential1 2 years ago
Who would have thought, after going to 64-bit processing, we would need 2-bit processing.
@Xune2000 2 years ago
Does this mean we can get Rocket League bots that aren't brain-dead wall-humpers? I'd love to see AI that can approximate human intelligence in games. Right now the options are stupid/easily manipulated/cheating AI or toxic try-hards/sore losers & winners. AI that can approximate human intelligence and maybe even learn new tactics/skills would be fantastic to play against!
@10100rsn 2 years ago
INT2 is defined by IBM as being 15-bit precision. So, it is basically a signed 16-bit integer. Sign bit plus 15 value bits. So this is a 16-bit only AIU? Only processing 16-bit floating point (FP16) or 16-bit integers (INT2/short)? And it is at ~75Watts ??? I like it... a lot... but I would love FP32 only versions for audio and DSP... 16-bit is great for processing images with more than enough precision for AI and video doesn't even need that many bits, but all professional audio is done as FP32 these days. Some applications offer FP64 processing/mixing but that is unnecessary... I could see an all FP64 version being the end goal for all scientific and DSP data eventually and I could see every business/tech user with at least one FP32 or FP64 card one day. An FP32 card would cover most use cases, but FP64 would cover all of them. ;)
@MrDs7777 2 years ago
You clearly have no idea what this chip is. It’s not a general purpose CPU
@10100rsn 2 years ago
@@MrDs7777 of course it isn't a CPU. It handles massively parallel computations. That is exactly what I need but since it doesn't have the necessary precision it would be pointless. It is awesome and I'm sure it is good enough for their target audience, just not for me.
@MrHaggyy 2 years ago
Mhm, 16-bit does reduce the number of transistors significantly. Out of curiosity, do you need FP32 through the whole network, or only in the input and output stages? I only used audio signals for control and we always transformed them into an array of 8 or 16-bit values, and cut out all signals our control couldn't handle anyway.
@10100rsn 2 years ago
@@MrHaggyy it would need to be FP32 from input to output. My dream machine would have hardware optimized multiply accumulate and optimization for convolution engines and enough RAM, maybe only 8GB, to hold all the data in and out. Hardware optimized transformation functions for multiplexing and demultiplexing would be great as well but that might make the hardware more complex than it needs to be. idk if it would even be worth it, there would need to be a way to handle custom algorithms...
@shaxosYT 2 years ago
@10100rsn this is incorrect. INT2 here truly refers to using 2 bits (4 states) to represent neural network weights and activations. You can take a look at the 2-bit paper mentioned below the chart at 4:00 (the full title is "Accurate and Efficient 2-bit Quantized Neural Networks")
@theokramer_za 2 years ago
You sure IBM doesn't mean 2 bytes for INT2…? That is what it meant in my 1980s C programming years
@willberry6434 2 years ago
Who are they using at 5nm? They aren't fabbing them themselves, are they?
@sean_vikoren 2 years ago
Class warfare.
@interests3279 2 years ago
IBM stands for Impractical Boomer Machines
@raven4k998 3 months ago
Yeah, but who cares unless they are providing a useful AI for people to use
@AlexHamre 2 years ago
first