My new favourite channel. Looking forward to catching up on what you've already released and your future videos :).
@pirojfmifhghek5662 жыл бұрын
This is a great video. I'd been predicting this for a while, simply because of all the gains that I'd heard about in analog chips from Mythic AI. Glad to see that more companies are getting in on this. It's also the perfect time for computing to start implementing new components for neural networks. The available bandwidth for motherboards has gotten ridiculously large lately, so there's a lot of headroom. I think it makes a ton of sense to start using dedicated AI chips for a whole host of common tasks and applications. The efficiency and speed gains would be enormous. This change in computing is gonna happen eventually. We're all gonna be socketing a plethora of purpose-built AI chips into our computers soon. There are just so many "fill in the blank" potential uses for AI. Anyone playing around with AI art generators can see that the results are surprisingly sophisticated and sometimes spooky. But damn does it take a lot of horsepower to do that stuff with a GPU. It takes a 3090 running at full power for several minutes just to produce results. It's horribly inefficient and slow. But it does remind me of the delight people experienced with the early internet. The internet used to be meagre and slow and truly amateurish, but everyone still shared that undeniable enthusiasm for being the first pioneers in a new world. And that's where we're at with AI.
@AjinkyaMahajan2 жыл бұрын
8:15 MAC diagram, you won my heart. This channel motivates me to keep learning and researching and never give up regardless of how many failures. You are a true person who understands affection with technology. Cheers ✨✨
@rich_in_paradise2 жыл бұрын
You didn't mention one of the key differences between Google's TPU (and other specialist AI processors) and a GPU: the number representation. GPUs can process 32- and 16-bit IEEE floating point numbers. But for AI work Google found that the fractional part of the number (commonly known as the mantissa) is less important than the magnitude (the exponent), and so they changed the number of bits allocated to each in their own BFLOAT16 format. That makes their processors better for AI, but relatively useless for other kinds of numerical computation.
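A sketch of the BFLOAT16 idea described above: bfloat16 is simply the top 16 bits of an IEEE-754 float32, keeping all 8 exponent bits but only 7 mantissa bits. (Truncation is used here for simplicity; real hardware typically rounds to nearest even.)

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    # Take the top 16 bits of the float32 encoding:
    # 1 sign bit, 8 exponent bits (unchanged), 7 mantissa bits.
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bf16_bits_to_f32(b: int) -> float:
    # Pad the low 16 bits with zeros and reinterpret as float32.
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x

print(bf16_bits_to_f32(f32_to_bf16_bits(3.14159265)))  # 3.140625: coarse precision
print(bf16_bits_to_f32(f32_to_bf16_bits(1e38)))        # ~9.97e37: float16 would overflow
```

The point of the format is visible in the two prints: precision drops to about 3 decimal digits, but the huge dynamic range of float32 survives.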
@daddy31182 жыл бұрын
Graphcore has Float8 being considered by the IEEE.
@TheReferrer722 жыл бұрын
Same as Tesla's supercomputer, which has a custom number format.
@cyrileo Жыл бұрын
I know 😃. A lot of optimizations have been done since then to squeeze out even more performance. (A.I)
@gotfan77432 жыл бұрын
You missed two important AI chip companies: UK-based Graphcore, and US-based Cerebras, which has designed a wafer-scale AI chip.
@incription Жыл бұрын
Not useful unless they manufacture in mass quantities. It's probably incredibly slow to make wafer-scale chips.
@ishan67712 жыл бұрын
As an ML researcher I find this interesting to watch. Unless you run at extreme scales, regular chips are just enough, especially for inference.
@harrytsang15012 жыл бұрын
Yes. Essentially, inference of smaller models can be done locally, in the browser (WebGL or WebAssembly). At larger scale, it's always a limitation of memory bandwidth, because no hardware can keep billions of parameters in cache. The caching locality of GPUs breaks down pretty quickly, and the 24 GB of VRAM in a top-tier consumer-grade GPU is still far from enough. In the end you give up and rent Google Colab to run your large models.
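The arithmetic behind that VRAM point, assuming weights stored in 16-bit floats and using illustrative 7B and 70B parameter counts (activations, optimizer state and caches all come on top):

```python
def weight_gib(n_params: float, bytes_per_param: int = 2) -> float:
    # Memory just to hold the model weights, in GiB.
    return n_params * bytes_per_param / 2**30

print(round(weight_gib(7e9), 1))    # 13.0: a 7B model barely fits in 24 GB
print(round(weight_gib(70e9), 1))   # 130.4: far beyond any consumer GPU
```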
@ishan67712 жыл бұрын
@@harrytsang1501 True. I don't think any university will invest in specialized hardware; most cloud providers simply give credits to use their cloud services. But for the cloud provider itself, I think such chips can provide significant power savings and be worth it in the long run.
@transcrobesproject36252 жыл бұрын
What do you mean by "regular"? Regular GPUs? For certain things like NLP (stanza, Marian, etc) CPUs can be orders of magnitude slower than GPUs, making them totally unrealistic for running inference, so regular GPUs sure, but not CPUs!
@shoaibkhwaja41562 жыл бұрын
"64 KB of memory ought to be enough for everyone" 😏
@Bvic32 жыл бұрын
Real-time 30 FPS image processing of an HD camera input is very demanding.
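A rough back-of-envelope for how demanding that is. The layer shape below (3x3 kernel, 3 input channels, 64 output channels) is an assumed, illustrative example, not a figure from the video:

```python
pixels_per_s = 1920 * 1080 * 30      # 1080p at 30 FPS: ~62.2 M pixels/s
macs_per_pixel = 3 * 3 * 3 * 64      # one 3x3 conv layer, 3 in / 64 out channels
gmacs = pixels_per_s * macs_per_pixel / 1e9
print(round(gmacs, 1))               # ~107.5 GMAC/s for just this one layer
```

And a real network stacks dozens of such layers.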
@Palmit_2 жыл бұрын
Thanks Jon :-) How you get your head around, and then write and deliver stuff of this complexity is mind-boggling! Do you even sleep? Do you work? Are you an automaton?? You're incredibly efficient and skilled in any case. You should def do a youtube live Q&A. I'm sure thousands of your viewers have lots of questions each. Thank you again.
@RoderickJMacdonald2 жыл бұрын
I suspect part of his secret is that he simply loves to learn.
@maxluthor68002 жыл бұрын
@@RoderickJMacdonald it's not that hard if it's your passion
@TradieTrev2 жыл бұрын
He's a true academic, there's no doubt about it!
@victorfeng42842 жыл бұрын
Thanks!
@joshhyyym2 жыл бұрын
7:11 The box labelled "system processing" is actually just a bench-top power supply. It is supplying 1.00 V at 0.000 A; it is not doing any processing. Great video btw, big fan of your channel.
@lumanaty2 жыл бұрын
Great Video. Extremely excited about this industry and really hope to get involved. Met up with some Cornell scientists and discussed lower-power electronic AI accelerators. This space is ripe for innovation that will lead to 10x improvements in power and inference speed. Amazing stuff out there.
@halos41792 жыл бұрын
Curious, what makes you believe this claim?
@out_on_bail Жыл бұрын
Wish I bought NVIDIA stock when I watched this
@vijvalnarayana51278 ай бұрын
Wish I bought NVDA when you posted this comment
@theant42684 ай бұрын
@vijvalnarayana5127 I wish I shorted Nvidia at peak
@fadecutmike4 ай бұрын
@@theant4268 Wish I shorted NVDA when you posted this
@Davethreshold2 жыл бұрын
Oh drat! Another YT channel that I'll become addicted to! Seriously, I am FASCINATED with technology, mainly computer tech. You cover many aspects of things that I have not quite seen before. Good work! 🧡
@Luxcium Жыл бұрын
The way you talk about your topic with passion, confidence and humility, but also the rhythm and cadence of your voice, makes those videos not only interesting but relaxing and calming. You are such an amazing person, yet because of how humble you are it makes me feel so strange to give you compliments. But I guess that somewhere inside of you, you know that you are doing something right and something good 😅 So I must share this with you, because you deserve many compliments 😊🎉❤
@tykjpelk2 жыл бұрын
I'm very excited about the silicon photonics approach. Photonic chips don't need to perform multiplications one at a time; they do the whole matrix multiplication in parallel, which makes it an O(1), or constant-time, operation. The chip needs to be configured to multiply by a certain matrix, which takes milliseconds, and can then perform matrix multiplications as fast as you can give it inputs. With 50 GHz modulators and photodetectors readily available, I'm excited to see what companies like QuiX, iPronics and Xanadu will achieve.
@kalukutta Жыл бұрын
Are 50 GHz modulators available on-chip?
@tykjpelk Жыл бұрын
@@kalukutta Yes, 50 GHz has been available for several years both in electro-absorption and carrier depletion modulators in SiP, so both amplitude and phase modulation. They're even mature enough to be available in MPW PDKs, so you can just include them in your layout without designing anything from scratch. However they are too large for dense integration of meshes like the ones here.
@crackwitz Жыл бұрын
ASICs, FPGAs, GPUs do parallel calculation too. That is no special attribute of photonics.
@tykjpelk Жыл бұрын
@@crackwitz True, but in a fundamentally different way. Those devices need to perform a long series of logic operations to get a result. A photonic interferometer mesh is configured to be the multiplication itself, and the result comes out as soon as the input light has passed through it, about a tenth of a nanosecond. It's limited by scaling the chip and how fast you can switch the input/read the output. A reasonable parallel would be that to calculate shadows with ray tracing you need to compute a ton of stuff on a GPU, but the photonic approach is to shine light at the object and look at the shadow.
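The configure-once, stream-many pattern this thread describes can be caricatured in ordinary code, with NumPy standing in for the optics (the timescales in the comments come from the thread above):

```python
import numpy as np

rng = np.random.default_rng(42)
M = rng.standard_normal((4, 4))    # matrix "programmed" into the mesh once (~ms)

results = []
for _ in range(1000):
    x = rng.standard_normal(4)     # new modulator settings (~GHz rate)
    results.append(M @ x)          # one pass of light through the mesh
# Reconfiguring M is the slow step; applying it is as fast as inputs arrive.
```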
@avanisoni55492 жыл бұрын
Great explainer!!! I would highly suggest attaching your research source material in description.
@reh-linchen46982 жыл бұрын
Love your AI example of taking eggs for ping-pong balls with 100% confidence. It is hilarious!
@DanOneOne2 жыл бұрын
Honestly, the whole idea that in order for AI to work, thousands of humans have to manually classify each picture is just so debilitatingly stupid... It's like having a cheat sheet with all the answers for all tests and, instead of understanding the question and thinking, just guessing the closest answer without any understanding...
@nahometesfay11122 жыл бұрын
@@DanOneOne It's less of a cheat sheet more like doing practice problems then checking your work against the teacher's answers
@phinguyenvan7082 жыл бұрын
I think the problem is not that people cannot design an AI chip that runs faster than Nvidia GPUs; the problem is the huge software stack behind Nvidia GPUs. I have tried both the IPU and the TPU and, believe me, the software is painful as hell.
@BattousaiHBr2 жыл бұрын
Yeah, same reason AMD came out on top of Intel but can't do the same with Nvidia no matter how competitive the hardware is. Nvidia is just light-years ahead of everyone in the software stack.
@aarch642 жыл бұрын
Just a quick thing, I’m like 95% sure Xilinx is pronounced Zy-links. I grew up about a mile from the HQ, and frequently had employees from there read books to me in elementary school.
@deang56222 жыл бұрын
I used Xilinx chips years ago. You are correct.
@codycast2 жыл бұрын
You’re telling this to the guy who pronounces “Dee RAM” as “der-am”
@aiGeis2 жыл бұрын
The most egregious mispronunciation in this video was of the great John Von Neumann's surname.
@MikeTrieu2 жыл бұрын
It's almost like this channel takes some sick joy in trolling tech enthusiasts with improper pronunciations of industry jargon. He's never corrected any of his flubs.
@mrhassell5 ай бұрын
The global AI accelerator chip market is currently valued at approximately $332.14 billion, roughly 10x what it was when the video was made.
@PlanetFrosty2 жыл бұрын
Dimensity is doing a good job. I've worked on silicon photonics for 25 years, and we have a new design: we're now working on a solid-state SoC which includes a unique photosensitive "protein" molecule, but more on that another time... These are now the "wet works", as we try to evolve to a new methodology in visual and human-language understanding.
@Ivan-pr7ku2 жыл бұрын
The path for future scaling of ML hardware is switching to analog circuit computation. Conventional binary load/store logic is already bumping into the perf/watt wall.
@RoyvanLierop2 жыл бұрын
I would have expected at least a brief mention of Analog computing, using resistors as weights and adding currents together.
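A toy model of that idea (the values below are made up for illustration): Ohm's law provides the multiply and Kirchhoff's current law the accumulate, so a resistor crossbar computes a matrix-vector product directly in the analog domain.

```python
import numpy as np

G = np.array([[0.5, 1.0],
              [2.0, 0.1],
              [0.3, 0.7]])        # conductances (siemens): the stored weights
V = np.array([1.0, 0.5, 2.0])     # voltages applied to the row wires

# Each column wire sums I_k = sum_j V_j * G_jk: a dot product "for free".
I = V @ G
print(I)  # [2.1  2.45]
```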
@EvanBoldt2 жыл бұрын
Something like memristors seems like the real future of neural-network hardware, as network complexity outpaces how many compute units can be put into a package. Programmable resistors could apply a neural network instantly by sitting between the CMOS sensor and a typical ISP.
@m_sedziwoj2 жыл бұрын
The problem with analog computing is the lack of design tools and knowledge; it's hard to debug, and much more. Photonics is interesting, but so is neuromorphic chip design. Because with NNs you know where each piece of memory needs to be, you don't need to use RAM; you can put memory next to compute and load it with a pre-programmed sequence, etc.
@Ivan-pr7ku2 жыл бұрын
@@m_sedziwoj We have already put the computation beside the memory: the GPUs, with their megabytes of registers and caches right next to the ALUs. But this is still not nearly enough and doesn't overcome the huge overhead of classic discrete binary computing. Computation and memory must be fused into a single functional structure, similar to how organic neurons work, to get out of the power-overhead trap. Probabilistic computing could also be a significant contribution to ML, since most training doesn't need precise results or strict data formats.
@BattousaiHBr2 жыл бұрын
@@RoyvanLierop forget electrons, imagine doing computations with photons on the fly.
@dougsimmonds5462 Жыл бұрын
Can't figure out where to sign up for your newsletter
@Name-ot3xw2 жыл бұрын
So back in the 00's the concept of a computing 'black box' was gaining some steam. The idea that we just push buttons and our PC spits out data that we consumers have only vague ideas of how the data came to be. I feel like the coming AI boom is going to take the black box idea to the next level. No one will have a solid idea of the how.
@artemglukhov152 жыл бұрын
Great video that presents a nice overview of the current technological scenario. Could you please add the DOI for the papers you are quoting? Just for an easier search.
@mariusj85422 жыл бұрын
What's interesting is that even though AI aggregates nodes, they're using pretty standard regression models in the node itself, meaning the classification in the calculated weights is based on very old mathematics.
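To make that point concrete: a single node with a sigmoid activation is exactly logistic regression, mathematics that long predates deep learning (the weights below are arbitrary):

```python
import math

def node(x, w, b):
    # Weighted sum plus bias, squashed through a sigmoid:
    # this is logistic regression, decades-old statistics.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

print(node([1.0, 2.0], [0.3, -0.1], 0.05))  # ~0.537, a probability in (0, 1)
```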
@chopper3lw2 жыл бұрын
GREAT OVERVIEW!!!! Thanks
@xntumrfo9ivrnwf2 жыл бұрын
Have you looked at analog computing/chips for machine learning? I remember reading that they can be advantageous for certain tasks in the training workstream.
@mapp0v02 жыл бұрын
Have you heard of BrainChip Inc.? BrainChip has a first-to-market neuromorphic processor IP, Akida. Brainchip's Akida is a neuromorphic system on a chip designed for a wide range of markets from edge inference and training with a sub-1W power to high-performance data center applications. The architecture consists of three major parts: sensor interfaces, the conversion complex, and the neuron fabric. Depending on the application (e.g., edge vs data center) data may either be collected at the device (e.g. lidar, visual and audio) or brought via one of the standard data interfaces (e.g., PCIe). Any data sent to the Akida SoC requires being converted into spikes to be useful. Akida incorporates a conversion complex with a set of specialized conversion units for handling digital, analog, vision, sound and other data types to spikes.
@JorgetePanete2 жыл бұрын
I hope photonic computation becomes mainstream soon and CPU+GPU stops consuming over 200W
@pirojfmifhghek5662 жыл бұрын
That would be nice, but I'd also just like to see more dedicated components that supplement the CPU and GPU. There are a lot of things that a dedicated AI chip or two could do that would reduce the need for such extreme horsepower. Any efficiency gains we can make are going to be important, and I think we're simply at that phase where we should be creating a new pillar of components to do that. Photonic computing will be great, but even a breakthrough in that space won't make it to the end user for another ten years. But AI chips are almost reaching a point where they can be introduced as a standalone part. Even in something as pedestrian as the gaming space, I could see AI chip applications all over the place. It could produce a lot of streamlining in the design and development phase. It could create better variability in the game, which is honestly just a perk. Most importantly, it could be used to create a healthy number of assets in-game, which could reduce the overall _file size._ And I can't stress enough how important file size is becoming. Many video games take up an enormous amount of space and it's about to skyrocket soon. Just look at Unreal Engine 5. It has great potential to reduce GPU usage, due to its ability to render _a near infinite number of polygons_ without breaking a sweat. But all that polygon data still has to be stored somewhere... assuming it has to be stored at all. Now if a dedicated AI chip could be utilized to create the majority of that content in the end-user's computer, while they're playing the game, that would allow for game designers to deliver lush realism without crushing our drive space with >1TB downloads. Level design, texture design, NPC randomization, NPC dialogue creation, truly sophisticated enemy AI, there's a lot of stuff this could be used for. It's an utter waste of electricity to always depend on the GPU to do these tasks. And then for production workloads... 
man, there are just so many applications for machine learning here. It's only limited by one's imagination. Applications where the end result doesn't need to be _exact,_ but it just needs something convincing to fill in the blank and round out the rough edges. Adobe image processing, video color correction, pattern recognition, animation, 3d modeling, predictive 3d modeling, etc. Just tons of stuff that we're kinda already dipping our toes into, but the current GPUs are just too slow to reliably carry the load without bursting into flames. And of course anyone with Excel wizardry could probably think of an infinite number of potential applications there too.
@JorgetePanete2 жыл бұрын
@@pirojfmifhghek566 UE5 allows the use of Nanite, which is getting more features in the experimental 5.1, and the assets are compressed; in Lumen in the Land of Nanite most of the space is taken by high-res textures.
@pirojfmifhghek5662 жыл бұрын
@@JorgetePanete It's highly compressed, but it's not nothing. There's still a natural tendency for game designers to push file sizes to their limits. The difference between a AAA title with static assets and a procedurally generated title can be enormous. Even if nanite could shrink the static assets down by 70%, it doesn't hold a candle to the potential of procedurally generated design. I see it as a type of low-hanging fruit. Texture creation based off of smaller seed files would also be a helpful use of AI. You are right that they take up a crapton of space. Sometimes the bulk decompression of texture files alone is enough to make CPUs and SSDs weep. That's a bottleneck we could do without. "There's got to be a better way!" I shout, with my fists raised to the skies.
@JorgetePanete2 жыл бұрын
@@pirojfmifhghek566 Seeing how massive each cod warzone update is, when there are people with dial-up internet makes me sad, I hope the community stops just saying "meh" to all the bad things big companies do
@sodasoup83702 жыл бұрын
The weird thing is that evolution on the software side, like increased sparsity, was kind of completely useless for convolutional TPUs. That's why Eyeriss went the multicore route, I guess. I kind of expected it to take longer until we reached that point...
@evennot2 жыл бұрын
I did a diploma on this topic in 2005, prototyping on a Xilinx Virtex too, but for spiking NNs, not regular ones. Spiking NNs take advantage of race conditions between simultaneous concurrent impulses, more akin to real NNs. They don't have a system-wide clock signal, thus removing the disadvantage of the hard discretization of modern electronics.
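For readers unfamiliar with spiking networks, here is a minimal clocked caricature of a leaky integrate-and-fire neuron (real SNN hardware, as the comment notes, is event-driven rather than stepped like this):

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: the membrane potential decays each step,
    integrates the input, and fires a spike (then resets) at threshold."""
    v, spikes = 0.0, []
    for i in inputs:
        v = v * leak + i
        if v >= threshold:
            spikes.append(1)
            v = 0.0           # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.5, 0.5, 0.5, 0.0, 0.9]))  # [0, 0, 1, 0, 0]
```

Information is carried in the timing of the spikes rather than in multi-bit numbers.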
@mightynathaniel5355 Жыл бұрын
excellent video presentation, well done 👍 subscribed now after stumbling on this.
@ippydipp2 жыл бұрын
Brilliant video mate
@ez19132 жыл бұрын
Thankfully it still looks vulnerable to voltage spikes and EMP attacks.
@YouveBeenCabadged2 жыл бұрын
As well as hydraulic presses and molten metal
@tyrantfox78012 жыл бұрын
Photon based computers are on the way
@mbarras_ing2 жыл бұрын
Alif and Syntiant are two companies I've spoken to recently doing 'AI Accelerators' for embedded devices. Gonna be an interesting few years!
@Alorand2 жыл бұрын
My favorite company to come out of the AI boom is Cerebras with their wafer scale engine.
@bendito9992 жыл бұрын
Yes that thing is the coolest
@blacklotus4322 жыл бұрын
dude your content is A+++
@lerntuspel62562 жыл бұрын
Jesus Christ, my biggest project so far was a "simple" 8-bit microprocessor, and that was annoying as hell to lay out in Virtuoso. I audibly gasped when I saw the layout at 4:46.
@rayoflight622 жыл бұрын
The problem with ARM CPUs that include accelerators is that they are proprietary. People writing an OS, say Linux, require the help of the manufacturers to write drivers, software updates, etc. This is not true for x86 CPUs, which have a known structure and don't require Intel's help for writing low-level software. Our only hope is for Intel to invent a 5-watt multicore; RISC or CISC at this point doesn't matter much. If the trend continues with these proprietary SoCs, we will end up with "hardware as a service", a thing that I dislike a lot. It has already happened with software; do you own a video editing package, or a CAD program, anymore? Sometimes I hope ARM and Intel get together and design the "Freedom Chip". Otherwise the best processor will only live 2 or 3 years, like our phones do now. Thank you for all your hard work...
@leyasep59192 жыл бұрын
heard about RISC-V ? Well, ok, look up "F-CPU", started in 1998.... and the "Libre-SOC" started in 2008 🙂
@popemuhammed57492 жыл бұрын
Good video. However, as someone who studied and works in the deep learning and microelectronics field: you failed to talk about the accuracy deterioration between float64 and other Neural Network Accelerators (NNAs). I expect a follow-up video. I agree NNAs are faster than regular GPU operations; however, that speed comes at an accuracy-degradation cost. An NNA is fitting an effectively continuous range of numbers (float64) into 256 levels (uint8) or 65,536 (int16). Regardless of how well a quantization does this, there's always error incurred. Building on the same point: repeat multiply-and-add operations 100 million times in uint8, and the resulting values will be significantly different from the original float64 values. Sometimes the accuracy degradation can be more than 30%!
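A quick sketch of the error the comment describes, using a naive affine uint8 quantizer (illustrative only, not any particular NNA's scheme): each individual value's error is bounded by half a quantization step, but errors accumulate over long multiply-add chains.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)            # float64 "weights"

scale = (x.max() - x.min()) / 255.0         # map the value range onto 256 levels
zero = x.min()
q = np.round((x - zero) / scale).astype(np.uint8)
x_hat = q * scale + zero                    # dequantized approximation

per_value = np.abs(x - x_hat).max()
w = rng.standard_normal(100_000)
accumulated = abs(x @ w - x_hat @ w)        # error after 100k multiply-adds

print(per_value)      # a few hundredths at most: bounded by scale/2
print(accumulated)    # typically far larger: per-value errors compound
```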
@lukakapanadze61792 жыл бұрын
What are you opinions on Tenstorrent?
@daddy31182 жыл бұрын
Luckily accuracy is measured as the accuracy of the inference as a whole rather than numerical accuracy of individual calculations.
@helmutzollner54962 жыл бұрын
Very interesting! Excellent overview on the subject. Thank you
@coraltown12 жыл бұрын
As a retired CPU engineer I find this fascinating to watch/learn, except that the more advances we make .. the more society seems to go to hell.
@cubancigarman26872 жыл бұрын
I was really against the proliferation of AI. The visions of the future brought to us by movies could possibly come true. But as I see divisions within our country, and the greed of our politicians doing what's best for themselves with little consideration for the masses, I have come around to letting the technology push through. There will be a point when we let the AI write its own program to make itself more efficient, draw less wattage, and so on and so forth. Maybe AI will be compassionate toward our childish needs and help raise us into the future. Maybe AI will come to the conclusion that we are just parasites on the limited energy resources of the planet that AI and humans inhabit and will deplete. I am very sure that safety measures will be in place so AI will not go rogue. But what if even the safety measures are countered by AI? I guess only time will tell. Would we have androids whose sole function was to take care of humans, e.g. the episode of Logan's Run (circa 1970s), or the terminators and hunter-killers from the Terminator film series? Perhaps I'm completely wrong and it's definitely the Hunger Games scenario, where the elites have complete control over the remaining resources of the planet and govern human existence. I will need more whisky and cigars to think more thoroughly about this subject matter! Good day and be safe!
@allezvenga76172 жыл бұрын
Thanks for your sharing
@adityapr.93802 жыл бұрын
3:36 That's an image of the city of Indore (M.P.); the traffic guy is Ranjeet the dancer.
@bernardfinucane20612 жыл бұрын
Moving the memory to the calculation would be like creating a hardware neuron.
@AlexK-jp9nc2 жыл бұрын
I believe that's what they're gunning for. I saw another startup where they were hacking with transistors to change them from simple 0/1 to something like a sliding scale, and then doing math with those values. I think that's extremely similar to how an organic brain works
@leyasep59192 жыл бұрын
@@AlexK-jp9nc Wait... transistors are analog parts, you know. It's how you use them that makes them digital or analog, whether you saturate them or not. Analog computing with discrete transistors is an old art.
@adissentingopinion8482 жыл бұрын
As a brand-new FPGA designer being introduced to computational designs, I'm pumped to see integrated AI cores I can drop into my designs to add a little AI processing without losing general computing resources. MMUs can make your routing congestion very sad as it is :( . But knowing FPGA design will let me shift over to ASICs if that's what's in demand.
@deang56222 жыл бұрын
Only if you implement your design in VHDL which can be synthesized. If you're coding up specific logic functions which exist in the FPGA vendor supplied libraries then you're going to have a problem. And it's not a case of whether ASICs are in demand, it's simply a case of performance and cost and the volume of sales.
@skierpage2 жыл бұрын
13:00 please scale screenshots to fill your video frame. I don't need black borders, I need text I can read on my phone!
@gildardorivasvalles63682 жыл бұрын
Sorry to correct you, but Neumann is not pronounced "Newman", it's pronounced "Noy-mann". It's the name of a mathematician, who among other things contributed greatly to the development of computation: en.wikipedia.org/wiki/John_von_Neumann Other than that, great video as always. Thank you.
@Asianometry2 жыл бұрын
I just pronounced his name this way in another video. You’re gonna love it
@gildardorivasvalles63682 жыл бұрын
@@Asianometry , hahaha, nice! 😄 Thanks for the reply, and I will very likely watch that other video some time soon. Keep up the good work!
@MaxPower-112 жыл бұрын
Thank you for the informative video. BTW, it’s pronounced ‘fon Noyman’ or ‘von Noyman’ Architecture (named after the eminent mathematician and polymath John von Neumann).
@LokiBeckonswow2 жыл бұрын
epic epic epic video, thank you for explaining such complicated tech and concepts so well, thank you
@jysm33022 жыл бұрын
Somebody needs to give you an educator award for this. Miles and miles ahead of any I've seen yet.
@RixtronixLAB Жыл бұрын
Nice info, thanks for sharing it:)
@JohnVance Жыл бұрын
So this aged extraordinarily well!
@miklov2 жыл бұрын
Fascinating and well presented. Thank you!
@AlexanderSylchuk2 жыл бұрын
What about all those people from developing countries who work for social media companies to filter inappropriate content? Is there a practical limit on the complexity of the neural network needed to get rid of that job?
@herpaderppa32972 жыл бұрын
What do you mean? These neural networks will just pump those numbers up. These technologies can produce content and pump it out at millions of articles a day.
@AlexanderSylchuk2 жыл бұрын
@@herpaderppa3297 There's this problem with images and video on social media where the algorithm simply cannot recognize something prohibited, and the only way it is handled right now is by human labour. This type of labour is quite traumatic psychologically, since people have to look constantly at things you wouldn't want to watch.
@dcocz39082 жыл бұрын
Hey, I've had trouble buying simple Cortex-M4s, so until they sort that out I don't see these new tech devices being any more available
@matthewexline6589 Жыл бұрын
So with companies making TPUs for many of the tasks that used to be performed by GPUs, this will mean that there will be fewer GPUs being made, and fewer facilities around designed to make them... does this mean that people should expect to see the prices of GPUs just continue to go up?
@johnl.77542 жыл бұрын
What wowed me the most lately is the AI that can draw pictures from simple descriptions you give it. It is better than most work done by human graphic designers. It should be mostly an AI software advancement rather than hardware, but I'm not certain.
@johnl.77542 жыл бұрын
kzbin.info/www/bejne/i2LGd2yHeNpkqLM I saw it in this video
@vanillavonchivalry66572 жыл бұрын
John, you're a little mistaken. DALL-E Mini doesn't draw or paint or sketch anything. It compiles images as a result of instructions. So it's not drawing, for instance, Johnny Depp eating a carrot; it's compiling images of drawings of Johnny Depp from internet search engines like Google, Bing, etc. It isn't painting Trump eating Nancy Pelosi; it's finding images of "paintings of Trump" and compiling them into multiple images based on instructions. Nonetheless it is cool. But at the end of the day what you're seeing are human-created images compiled into some dream-like result.
@mattmmilli82872 жыл бұрын
@@vanillavonchivalry6657 That's not true... I mean, it is somewhat. But you can say "Johnny Depp as an angel eating a carrot in heaven, drawn in the style of The Simpsons." It has some reference for all those things but has to get creative to make something new.
@jpatt0n2 жыл бұрын
@Cancer McAids Look up Dall-E 2.
@blinded65022 жыл бұрын
@Cancer McAids You haven't visited internet in a while, I see.
@PeterRichardsandYoureNot Жыл бұрын
So, you chose those high-stack coolers for the thumbnail because they look like the CPU cores from the Johnny Depp movie Transcendence?
@leoott4362 жыл бұрын
Hey Jon, I think a great follow-up to this video would be one on Tesla's dedicated self-driving hardware chips in their cars and their Dojo training hardware.
@miketjdickey2954 Жыл бұрын
Great blog thank you
@eljuligallego2 жыл бұрын
You say Google didn't make the TPU available for sale. What about the Coral project?
@In20xx Жыл бұрын
Exciting stuff, makes me wonder what will be developed in the near future!
@kayakMike10002 жыл бұрын
Convolutions are huge; edge detection, for example, looks at the pixels around a specific pixel...
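That neighbourhood idea in miniature: a 3x3 Sobel-style kernel swept over an image, where each output pixel is a multiply-accumulate over the 9 surrounding input pixels (the toy image below is a hard vertical edge):

```python
import numpy as np

def conv2d(img, k):
    """'Valid' 2-D convolution (strictly, cross-correlation): each output
    pixel is a MAC over the kernel-sized neighbourhood of the input."""
    H, W = img.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

img = np.array([[0, 0, 1, 1]] * 4, dtype=float)   # vertical edge at column 2
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)      # responds to that edge
print(conv2d(img, sobel_x))
```

Every output value here is strongly positive because every 3x3 window straddles the edge; on a flat region the response would be zero.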
@wysteria79172 жыл бұрын
Are there no developments for AI training chips intended for edge environments? Or near-edge environments?
@AbuSous2000PR Жыл бұрын
very informative; many thx
@y.shaked51522 жыл бұрын
8:09 - "The multiply-accumulator circuit is designed to do just one thing. It multiplies two numbers and then adds it to an accumulation sum." I mean... that's *two* things, my man. :)
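Fair point; in code, the repeated "one thing" a MAC unit does looks like this (a dot product built from single multiply-accumulate steps):

```python
def dot_with_macs(activations, weights):
    acc = 0.0
    for a, w in zip(activations, weights):
        acc += a * w    # one multiply-accumulate (MAC) per cycle
    return acc

print(dot_with_macs([1, 2, 3], [4, 5, 6]))  # 32.0
```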
@vslaykovsky2 жыл бұрын
When was the script of this video written? The V100 is two generations old today.
@queasyRider32 жыл бұрын
Have you seen the other video, where they show the ability to use analog circuits for really fast and energy-efficient computations? They do mention the error margin, which means analog would be better used in certain cases. Really interesting, though. Also, I like the deer.
@kathrynradonich39822 жыл бұрын
I can’t be the only one who saw the video thumbnail and thought “wow the PPC G5 is making a comeback” when seeing those heat sinks 😂
@Khal_Rheg02 жыл бұрын
Have my youtube algo contribution! Great video, very interesting!
@robertbohnaker98982 жыл бұрын
When will this trickle down to camera makers like Sony, Canon and Nikon? As a wildlife photographer, this AI accelerator technology would open the door to mind-boggling processing power, with enormous potential to improve camera performance capabilities. Or will this tech first be applied to photo post-processing computer applications? Thanks 😊
@leyasep59192 жыл бұрын
Yes, that would be for post-processing. The pro camera is there only to capture the most accurate data as fast as possible... Unlike a smartphone user, the photographer wishes to retain artistic and technical grip on the result, and can spend more time post-processing on their computer than shooting.
@DanOneOne2 жыл бұрын
Honestly, the whole idea that in order for AI to work, thousands of humans have to manually classify each picture is just so debilitatingly stupid... It's like having a cheat sheet with all the answers for all tests and, instead of understanding the question and thinking, just guessing the closest answer without any understanding... It will work for many cases. It's better than nothing, but really it's a dead end...
@augustday94832 жыл бұрын
Think of it like this: humans spend years being taught by other humans, training their brains until they're smart enough to start achieving novel solutions on their own. Right now, we have to put in a lot of manual effort to teach the AI. Eventually, the early AI will be able to teach their next generation, and then the next generation, and so on...
@KillFrenzy96 Жыл бұрын
I think you may be mistaken. This work is an investment to do less work later on. Many AIs are actually more accurate than humans. You are underestimating AI if you think they cannot understand.
@mikl23452 жыл бұрын
I thought you were going to explain the actual difference between a GPU and an NPU in terms of architecture. I.e. from a programmers perspective.
@w0ttheh3ll2 жыл бұрын
Why is a power supply unit labeled as "system processing" at 5:17?
@leyasep59192 жыл бұрын
if your system is electrons, that could make sense 😀
@georgabenthung32822 жыл бұрын
Great video, as always, thanks. You mention silicon photonics and argue that chips produced with this technique can solve the problem that storage and processing are not happening in the same place. I don't see where silicon photonics helps solve this specific problem. Isn't the difference in these chips that the data travels as photons on the connecting bus? You might take a look at the analog computing chips that are planned. They might be a real game changer when it comes to simulating the brain's neurons.
@nygariottley24510 ай бұрын
Were you talking about LPUs (Groq) using light, i.e. lasers?
@htomerif2 жыл бұрын
7:40 lol! Look at those old brown ceramic through-hole capacitors. They even have the little crimps on the leads so they don't get over-inserted. I'm concerned that that's something I'm misidentifying.
@megalonoobiacinc48632 жыл бұрын
could also be giant thermistors, for short-circuit protection or something
@leyasep59192 жыл бұрын
Looks like a "polyswitch" to me; they exist in round and square shapes, for overcurrent limiting. Ceramic caps haven't looked like that (or been that large) for more than 60 years. They would use SMD "bricks", as can be seen in other pictures.
@htomerif2 жыл бұрын
@@leyasep5919 Yeah, that's them. I know I've seen them somewhere before, probably in data center computers. I spent a long time looking at images of the TPU v2 and v3 boards before I commented, and it's definitely not labeled "C" anything like the electrolytic capacitors, but I couldn't find any images where I could make out the labels. It's weird to see what look like pretty low-quality components on something that expensive. Not necessarily the fuses, but usually in expensive gear they spring for the more expensive long-life, low-ESR aluminum caps.
@leyasep59192 жыл бұрын
@@htomerif Maybe it's a prototype.
@htomerif2 жыл бұрын
@@leyasep5919 Well dang. I had a whole smart comment written up, but I searched again for "TPU v2 board" and, I guess I missed it before, but there's a 2000x1300 image where you can clearly see those things labeled "F10" and "F7", so yeah, they're PTC thermal fuses. As for whether it's a prototype or not, it looks like there are zero unpopulated pads and literally hundreds of test points, so that's pretty consistent with a prototype. It also has an SMT micro RF connector on it. I wonder what that was for.
@retromograph38932 жыл бұрын
Great vid! ….. please do a vid on Optalysys !
@asnaeb22 жыл бұрын
AI accelerators other than GPUs never work unless your model is like 6 years old and uses no new functions. They are very inflexible.
@cinemaipswich4636 Жыл бұрын
These chips only work in a "serial" fashion, unlike 64-core, 128-thread CPUs. They need one processor after another. If a "network" of processors is needed, then that would require a synchronized processor the size of a football pitch. Latency kills big chips.
@helloxyz Жыл бұрын
Data travels down electric wires and chip paths just as fast as photons down a fibre optic cable or Photonic path. It is the components at either end that are the problem
@marbasfpv46398 ай бұрын
Help me understand: why would light (photonics) be a better alternative to electrons? Don't both particles travel at the same speed, the speed of light? And why doesn't light generate heat issues? It does in sufficient quantities.
@64-bit637 ай бұрын
light doesn't create heat losses the way electricity does; less heat density = more transistor density = more compute per square meter
@anodyneliniment23265 ай бұрын
Following @64-bit43's reply: electrons do NOT travel at the speed of light in our chips. Anything with mass is literally incapable of moving at light speed. Electrons also have heat-generation issues because a medium for transmission has to be driven (i.e. a potential difference), whereas light is an electromagnetic wave that only has to permeate through whichever medium it is in.
@brandonv87212 жыл бұрын
Having linear algebra flashbacks. Xilinx: is your pronunciation correct? I ask as my contact there (I'm in electronics) pronounces it differently.
@andersjjensen2 жыл бұрын
"sailinks" is the usual pronunciation. At least that's how Dr. Lisa Su and the now-former CEO of Xilinx pronounced it in the interview about the AMD acquisition of them.
@brandonv87212 жыл бұрын
@@andersjjensen thanks!
@autohmae2 жыл бұрын
14:13 Well, you've answered your own question; pretty certain a number of people are looking into how to solve that one. It at least has the most promise, if it can be solved.
@Star_cab Жыл бұрын
"A learning neural network" - I recall this being referenced in a movie.
@kaizen520712 жыл бұрын
Maybe John should make a course on semiconductors - how they work and how they evolved - on a platform like Brilliant. It would be a killer, at least for the needs out there.
@FerranCasarramona2 жыл бұрын
Check out the low-power neuromorphic hardware from the Australian company Brainchip.
@yanlucasdf2 жыл бұрын
The first time I heard about AI-specific chips was in a Veritasium video about a startup making one of those with an SD card and analog computing, calculating matrices with voltages. That's the simple explanation; the more detailed one is that each binary cell of an SD card works like a capacitor that can be filled with zero or one unit of energy - that's how the card usually holds binary data. What this startup does is, instead of having each cell be either full or empty, hold a fraction of charge depending on what the AI weights need to be set to, then send a current into the input and measure the output to get the desired result.
@jayantpancholi95062 жыл бұрын
Can you help me with the SoC/chip name or the startup that is building that? TIA
@yanlucasdf2 жыл бұрын
@@jayantpancholi9506 It's a Texan company called Mythic AI; they show up in part 2 of Veritasium's video on analog computers.
@anonymoose9801 Жыл бұрын
They are no longer in business and ran out of money.
@yanlucasdf Жыл бұрын
@@anonymoose9801 That's sad to hear, their tech was so promising; repurposing SD cards into tensor cores looked so cool.
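(A toy simulation of the analog trick described in this thread - assuming the commonly described scheme where weights are stored as cell conductances, inputs are applied as voltages, and each output wire's current is the sum of its cells' currents. The numbers and noise level here are made up for illustration.)

```python
import random

random.seed(0)

# Conductances = stored weights; voltages = inputs. Each output wire
# sums G[i][j] * V[j] across its cells (Kirchhoff's current law does
# the addition "for free" in the analog domain).
G = [[random.random() for _ in range(4)] for _ in range(3)]
V = [random.random() for _ in range(4)]

def analog_matvec(G, V, noise=0.0):
    """Current on each output wire; `noise` models imperfect analog cells."""
    return [sum((g + random.gauss(0, noise)) * v for g, v in zip(row, V))
            for row in G]

I_ideal = analog_matvec(G, V)             # perfect cells
I_real = analog_matvec(G, V, noise=0.01)  # real cells drift a little

# Analog is fast and efficient, but never exact - hence the error margin.
print(max(abs(a - b) for a, b in zip(I_ideal, I_real)))
```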
@Boersenwunder- Жыл бұрын
Which stocks are benefiting? (except Nvidia)
@pandoorapirat8644 Жыл бұрын
TSMC
@FoxtrotYouniform2 жыл бұрын
Only 8 bucks for a latte in brooklyn?!? Sign me up, goddamn, can't get a burger and fries for less than 35 bucks, an 8 dollar coffee somehow feels like a steal.
@AparnaModou Жыл бұрын
Tesla already has a supercomputer called Dojo. It just shows that hardware technology is also keeping pace with and supporting AI progress. Does anyone here know what kind of hardware generative AIs like Bluewillow use?
@TexasBoyDrew9 ай бұрын
This guy loaded up on NVIDIA and even tried to help us... from a year ago
@sanitygone-l9y2 жыл бұрын
Hi Asianometry, I think a good video would be looking at the possible applications of Gallium Nitride (GaN) in the chip making industry and whether it could unseat silicon. They promise to be higher performance, lower power and cheaper.
@bioxbiox Жыл бұрын
The video is a gem. This could be a successful Master's thesis.
@EmaManfred Жыл бұрын
NVIDIA has already increased its market value with the recent public releases of different AI platforms, mostly for OpenAI. Any type of generative AI or image generator like Bluewillow would benefit from this.
@davewang2022 жыл бұрын
Xilinx is typically pronounced as Zye-Links rather than Zee-Links.
@leoalex20012 жыл бұрын
Very interesting and nicely explained video. Just one question, what exactly is a weight?
@arirahikkala2 жыл бұрын
Just a parameter whose value is used to multiply an input. They're called weights because they describe how heavily you weight a given input: for instance, a cat-face detector might have high positive weights for cat-eye and cat-ear inputs, and perhaps a negative weight for a dog-snout input.
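(The cat-face example above, as a weighted sum - the feature names and weight values are hypothetical, just to show what "weighing the inputs" means.)

```python
# Hypothetical detector inputs (1.0 = feature present) and learned weights:
# positive weights favor "cat", the negative weight penalizes dog features.
features = {"cat_eye": 1.0, "cat_ear": 1.0, "dog_snout": 0.0}
weights  = {"cat_eye": 2.0, "cat_ear": 1.5, "dog_snout": -3.0}

# Each input is multiplied by its weight, then everything is summed.
score = sum(weights[k] * features[k] for k in features)
print(score)  # 3.5 -> a high score suggests "cat face"
```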
@topspykimi2 жыл бұрын
In the foreseeable future, custom-made AI chips will not be mainstream; NVIDIA cards will still dominate the market, as they can adapt to newly introduced algorithms. Scientists at NVIDIA actually work with academia to improve efficiency. For most projects, I don't see a big advantage in spending money to design your own AI chip.
@ConsistentlyAwkward7 ай бұрын
Groq is already using photonics to speed up chip to chip communication