The first 100 people to go to www.blinkist.com/theteslaspace are going to get unlimited access for 1 week to try it out. You'll also get 25% off if you want the full membership!
@ryvyr3 жыл бұрын
I removed the comment from the main body since it did not seem relevant to the subject material, and am relegating it here. Why do you employ the seamless mid-video sponsorship method rather than announce it at the beginning, at the very least, if you insist on a mid-video reel? It really kills the rest of the video and at times I just click off at that point. Is there an ethical/moral consideration, or no? Per your recent video with the CT photoshopped to be black, along with the title, and noting to be "self aware" per clickbait - was that a sort of hand-wave relying on enough of us to not care? I do enjoy your content, though am disheartened when people seem misleading or irreverent with mid-video seamless sponsorship reels, which feel like a betrayal of trust.
@texasblaze10163 жыл бұрын
Where is the DOJO super computer being built?
@nathanthomas81843 жыл бұрын
Is it plugged into the Black ooze ?
@glidercoach3 жыл бұрын
Not sure if using climate change models as an example was a good idea, seeing as all models have failed miserably. As they say, _"Garbage in, garbage out."_
@martinheath59472 жыл бұрын
While these computers and AI breakthroughs may in themselves be pure, scientifically and mathematically speaking, the potential for malevolent usage is enormous eg 24/7 real time, comprehensive, monitoring and tracking surveillance of entire populations for an all pervasive and totalitarian social credit system. Recent events around the world relating to pandemic "control measures" suggest our leaders do not have our best interests at the forefront of their concerns. Control is the goal and I foresee a very dangerous coalescence of supranational elite power once this technology is pressed into service for *their* benefit.
@nujuat3 жыл бұрын
I'm an experimental physics PhD student and I've written a quantum mechanics simulator that runs on graphics cards. When I was writing that, the top priority was to retain the highest accuracy possible with 64-bit floating point numbers (since we want to know exactly what's going to happen when we test the experiment out in the lab). I think most supercomputers are built to do things like that. However, having that accuracy is unnecessary for things like graphics and machine learning. So it makes perfect sense that Tesla would cut down on that when they're designing a supercomputer only for machine learning purposes. I don't think you got anything wrong.
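To make that precision trade-off concrete, here is a minimal NumPy sketch (an editorial illustration using standard IEEE formats, not Tesla's code - Dojo's BF16/CFP8 formats differ in detail): narrower floats carry fewer decimal digits but cost proportionally less memory, which is why ML training can afford them while a physics simulator often cannot.

```python
import numpy as np

# Precision and memory cost of standard IEEE float formats.
for dtype in (np.float64, np.float32, np.float16):
    info = np.finfo(dtype)
    print(f"{np.dtype(dtype).name}: {np.dtype(dtype).itemsize} bytes, "
          f"~{info.precision} decimal digits, machine eps = {info.eps:.3g}")

# Storage for a billion-parameter model in each format.
params = 1_000_000_000
for dtype in (np.float64, np.float32, np.float16):
    gb = params * np.dtype(dtype).itemsize / 1e9
    print(f"{np.dtype(dtype).name}: {gb:.0f} GB of weights")
```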
@superchickensoup2 жыл бұрын
I once used a search bar on a computer
@karlos69182 жыл бұрын
The Chern Simons number has a modulo 64 factorization heavenly equation representation which can map onto a binary cellular automaton with states.
@muhorozibb27772 жыл бұрын
@@karlos6918 In human words that means😳😳😳?
@BezzantSam2 жыл бұрын
@@superchickensoup I remember my first beer
@BezzantSam2 жыл бұрын
Do you mine ethereum on the side?
@denismilic18783 жыл бұрын
Very smart approach. Less precise data and more neural networks. Simply said, it's not important whether a pedestrian is 15.1 m or 15.1256335980... m away; what's important is whether he is going to step onto the road or not. For decision making, precise data is not necessary; interpreting and understanding the data is crucial. The second reason low precision is acceptable is that all predictions are made for a short time span, and the calculations are repeated constantly. The third reason is that sensor inputs are also relatively low quality, but come in huge amounts. edit: very good and understandable video.
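A toy sketch of that point (made-up numbers and a deliberately crude model, not Tesla's planner): the yes/no decision comes out the same whether the distance is carried at full or reduced precision.

```python
import numpy as np

def will_step_into_road(distance_m, speed_m_s, horizon_s=1.5, lane_edge_m=0.5):
    # Crude illustrative check: does the pedestrian cross the lane edge
    # within the planning horizon?
    return (distance_m - speed_m_s * horizon_s) < lane_edge_m

# Same decision whether the distance is stored as float64 or float16.
for d in (np.float64(15.1256335980), np.float16(15.1256335980)):
    print(type(d).__name__, bool(will_step_into_road(d, speed_m_s=1.4)))
```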
@raymondtonkin67552 жыл бұрын
Like the individual memory for each core
@StephenRayner3 жыл бұрын
Software engineer here with 15 years experience. You did a good job
@thomasruwart17223 жыл бұрын
Great video! I spent my entire 45-year career in High Performance Computing specializing in the performance of data storage systems at the various DoE and DoD labs. I am very impressed with Dojo - its design and implementation, not to mention its purpose. Truly amazing and fascinating! Tuesday Humor: Frontera: The only computer that can give you one hexabazillion wrong answers per second! 😈
@efrainrosso65572 жыл бұрын
So Frontera is the Joe Brandon Biden of computers. Always wrong with authority and confidence. Not one right decision in 50 years.
@prashanthb65212 жыл бұрын
Awesome career you had, sir. I am right now struggling to string together a few computers in my basement to make money from the stock market :)
@thomasruwart17222 жыл бұрын
@@prashanthb6521 - that sounds like fun! There are lots of inexpensive single-board computers that you can build clusters with. Some have AI coprocessors as well to run TensorFlow or whatever suits your needs. I wish you all the best luck with your projects!
@anthonykeller51203 жыл бұрын
40+ years of software engineering starting with machine interfaces. Very good presentation. If I was at the start of my career this is where I would want to spend my waking hours.
@TheMrCougarful3 жыл бұрын
This is probably another example of a philosophy most often seen working at SpaceX: The best part is no part. I would probably call the dojo a super-abacus. But for their purpose, an abacus was perfect, so they built the correct machine.
@MichaelAlvanos3 жыл бұрын
Great presentation! It filled in the gaps & I learnt some things I wasn't even aware of. Even your comment section is filled with great info!!
@stevedowler23663 жыл бұрын
Thanks for a very clear explanation of task-specific computing machine design. I've read ... well, skimmed ... er, sampled that DOJO white paper to the point where I glommed onto the idea that lower but sufficient precision yields higher throughput, and thus more compute power, for a specific task. Your pi example was the best! Keep these videos coming, cheers.
@jaybyrdcybertruck10823 жыл бұрын
It's worth mentioning that Tesla is already planning the next upgraded version of DOJO, which will be 10x the performance of the one they are building today. Dojo will be up and running sometime in the second half of 2022; after that I give it 1 year to turn Full Self-Driving into something the world has never seen. It will take all 8 cameras' video and simultaneously label everything they see in real time, through time. Today it's labeling small clips from individual cameras. This will be a HUGE step change in training once it's running. It's going to save millions of lives.
@gianni.santi.3 жыл бұрын
"after that I give it 1 year to turn Full Self driving into something the world has never seen." What we're seeing right now is also never seen before.
@TusharRathi-zj1wu8 ай бұрын
Not yet
@jaybyrdcybertruck10823 жыл бұрын
Fun fact: the computers Tesla has been using to train FSD software today amount to the 5th largest supercomputer in the world. It isn't good enough at that level, so they are leapfrogging everything.
@ClockworksOfGL3 жыл бұрын
I have no idea if that’s true, but it sounds like something Tesla would do. They’re not trying to break records, they’re trying to solve problems.
@jaybyrdcybertruck10823 жыл бұрын
@@ClockworksOfGL here is the actual presentation by Tesla which explains everything, it's a bit long but holy cow it's awesome. kzbin.info/www/bejne/oGHdZXmtmqisaq8
@scottn7cy3 жыл бұрын
@@ClockworksOfGL They're trying for world domination. Elon Musk is merely a robotic shell. Inside you will find Brain from Pinky and the Brain.
@jaybyrdcybertruck10823 жыл бұрын
@@stefanms8803 small potatoes for a car company then I guess, remind me what GM Ford and VW have?
@abrakadavra31933 жыл бұрын
@@ClockworksOfGL It's not true.
@neuralearth2 жыл бұрын
The amount of love I felt for this community when you compared it to Goku and Frieza made me feel like there might be somewhere on this planet where I might fit in and that I am not as alone as I feel. Thank you TESLA and ELON and NARRATOR GUY.
@incognitotorpedo423 жыл бұрын
When you start the video with a long (sometimes angry/defensive) tirade about you not knowing anything about supercomputers, it makes me wonder if any of it is going to be worth listening to. You actually did a pretty good job, once you got to it.
@KineticEV3 жыл бұрын
I was thinking the same thing. Especially at the beginning with the supercomputer vs. the human brain. I think that was the only thing I disagreed with, since we know the whole point of what some companies are trying to do is solve the AI problem, but they always come up short.
@kiaroscyuro3 жыл бұрын
I listened to it anyway and he got quite a bit wrong
@ravinereedy2043 жыл бұрын
Not everyone has a degree in CS... I do, and he explained a lot of things pretty well. The thing is, he knows the limits of his knowledge and he does his best to explain anyway. How you gonna bash the guy for that? lol I suppose I understand what you mean though. At least he is upfront about it and doesn't lie to the viewers to fill the gaps?
@vsiegel3 жыл бұрын
@@ravinereedy204 I think he did not bash the author, he pointed out that there is a risk of losing viewers early because they misunderstand what he says.
@ravinereedy2043 жыл бұрын
@@vsiegel Sure, maybe that's what he was implying, but that's not what he said though lol
@j.manuelrios59013 жыл бұрын
Great video! It was never about the EV’s for me, but instead more about the Ai and energy storage. TSLA
@raymondtonkin67553 жыл бұрын
It's not just flops, it's the adaptive algorithms too! The structure of dimensions in a neural network... pattern recognition, nondeterministic weighted resolution 🤔 and memory
@lonniebearden99233 жыл бұрын
You did a great job of presenting this information . Thank you.
@oneproductivemusk1pm5653 жыл бұрын
Like I told you before, I love your commentary - very natural and conversational! Keep it up my man!
@scotttaylor33343 жыл бұрын
I, for one, welcome our computer overlords... Three comments about the video: Fantastic video! Tons of data and lots of background. Love it. You made an analogy with Canada getting rid of the $1 bill, and I think you indicated that it reduced the number of coins we carry around, but my experience is exactly the opposite: I find that I come home with a pocket full of change every time I go out and use cash... Second thing: Nvidia is pronounced "invidia/envidia". I used to play on the hockey team down in San Jose, California. Again thanks for the great video and great presentation.
@d.c.monday41533 жыл бұрын
Well, I am not a computer nerd! But the parts you explained that I knew were right, and the parts you explained that I didn't know sounded right! So I am happy with that. Well done.
@robert.27303 жыл бұрын
GO TESLA GO 🚀🚀🚀👍🏻😀
@shawnshurtz91473 жыл бұрын
No
@ChaJ673 жыл бұрын
To my understanding at least, with current technology it is impossible to make a chip over a certain size and get perfection. This is what limits GPU sizes, which are way smaller than a wafer. The only way to go wafer scale is to design it to work around any and all defects. So they may actually use nearly 100% of the wafers, just with a number of sub-components disabled because of defects. The reason wafer scale is so important is the heat dissipation of interconnects. The reason we have gone so long without GPU chiplets is that with all of the interconnects, you can't just distribute the GPU across multiple dies and get better performance. Instead you have a multi-pronged interconnect nightmare, and one of those problems is that the sheer heat generated in the die-to-die interconnects outweighs any benefit from spreading across more dies. While there is talk of MCM GPUs from AMD, and AMD already has MCM CPUs, the CPUs are designed with particular limitations to allow chiplets to work, and the issues with making an MCM GPU viable have been studied for years; it looks like they may have come up with an acceptable solution where spreading across multiple dies is actually a benefit. Wafer scale takes a different approach in that everything is on the same wafer, so the interconnect issue is eliminated - at the cost that you have to deal with the defects of neighboring silicon on the wafer instead of chopping everything up and throwing out all of the defective pieces, at least the ones defective to the point where more common designs cannot work. The only way to dissipate the heat from so much silicon in one spot is through liquid cooling. So there is actually another layer on top, which is the water block, if I understand correctly. Another great thing about liquid cooling is you can just bring the heat to outdoor radiators and dissipate it. Something I would be interested in: it seems Tesla has high temperatures figured out, allowing them to boost the performance of the power electronics in the Tesla car, so it would be interesting to know what is going on with Dojo - whether they can use a simple high-heat-load outdoor radiator to cool the supercomputer and thus save a bunch on cooling. Cooling can be quite an expensive process, especially if traditional forced-air CRACs are used, so a simple liquid loop that, from a power perspective, mainly just needs pumps to move the liquid and fans over the radiators would be a huge power savings. Chilling air to 65 F (about 18 C) and then blowing it over high-performance computer parts with crazy high-powered fans burns a ton of power, especially if it is 115 F (over 45 C) outside.
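A rough yield sketch of that fault-tolerance argument (made-up defect probability, not TSMC's or Tesla's data; 354 is the core count usually quoted for the D1 chip): requiring every core to be perfect makes a big die nearly unmanufacturable, while tolerating a few fused-off cores keeps yield near 100% with the same defects.

```python
from math import comb

cores = 354          # compute cores on a D1-class chip (commonly quoted figure)
p_core_bad = 0.02    # assumed probability that any one core contains a defect

# A rigid design that needs every core working:
yield_all_good = (1 - p_core_bad) ** cores          # well under 0.1%

# A fault-tolerant design that ships as long as at most 10% of cores
# have to be fused off:
max_bad = cores // 10
yield_tolerant = sum(comb(cores, k) * p_core_bad**k * (1 - p_core_bad)**(cores - k)
                     for k in range(max_bad + 1))   # essentially 100%

print(f"need all {cores} cores good: {yield_all_good:.4%}")
print(f"tolerate up to {max_bad} bad cores: {yield_tolerant:.2%}")
```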
@goldnutter4123 жыл бұрын
If the car is moving, you can get near-free airflow - ram it in with the right fluting or whatnot. Clock speed and routing on the chip know what is coming well before a human's cognition would kick in: "I stopped, it must be underclock time... it happened well over a second ago." Slowing down toward an expected stop is a highly predictable case, so most of the time it won't be getting fooled. Even if it does, clocking back up from 5% to 100% is so fast it is "instant" to our perception. So zero issues should be expected, and the wafers that go in should really last, by the sounds of it. Nice essay, cheers. I do enjoy it when someone doesn't ignore centigrade - History Channel, shame on you, and the ALONE show: always Fahrenheit, never a conversion. I was always telling someone, just saying, but that temperature in F is below 0 in C as well, which from a basic layman's explanation doesn't seem right, because you first subtract 32 and then almost halve it! Lucky the temps don't swing to -40 aka -40, lol, the coolest temp to almost die in, but haven't seen it yet.
@denismilic18783 жыл бұрын
Of course, all these wafers have redundancy built in them, but this is not a new idea. kzbin.info/www/bejne/gpqtknucocqggbc
@Dave5843-d9m3 жыл бұрын
The simple way to cool computer processors is to chill the server room. It’s not efficient but does the job. Direct cooling the wafer with “water” cooled heat sinks is far more efficient but the plumbing soon gets seriously complicated.
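A back-of-envelope comparison of the two approaches (standard textbook constants; the ~15 kW per training tile is the figure quoted elsewhere in this thread, and the 10 K coolant temperature rise is an assumption): moving the same heat with air takes a huge volume flow, which is why direct liquid cooling wins.

```python
P = 15_000        # W, roughly one Dojo training tile
dT = 10           # K, allowed coolant temperature rise (assumed)
c_water = 4186    # J/(kg*K), specific heat of water
c_air = 1005      # J/(kg*K), specific heat of air
rho_air = 1.2     # kg/m^3, density of air

m_water = P / (c_water * dT)      # ~0.36 kg/s of water
m_air = P / (c_air * dT)          # ~1.5 kg/s of air
v_air = m_air / rho_air           # ~1.25 m^3/s of air

# Water density ~1 kg/L; 1 m^3/s is about 2119 CFM.
print(f"water: {m_water * 60:.0f} L/min   air: {v_air * 2118.88:.0f} CFM")
```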
@vsiegel3 жыл бұрын
@@goldnutter412 Thank you for fighting for correct or even sensible use of temperature units. (Maybe it is good that no aliens visit this planet. Not using common units would be really embarrassing.)
@traniel1234567893 жыл бұрын
@@Dave5843-d9m Plumbing is complicated when you need 3rd-party manufacturers to install their equipment. It is the preferred way of doing things in a homogeneous datacenter. Fans consume a *lot* of power, and you can't make them go faster. There are even immersion cooling systems in some new datacenters to improve energy efficiency.
@owenbradshaw93023 жыл бұрын
Great video. I will say, Dojo has the advantage of incredibly low latency, so the entire supercomputer can process data efficiently, regardless of floating point format. Lots of FLOPS are useless if you can't transfer the data between nodes very fast. It's like trying to push a fire hydrant's worth of water through a garden hose. This is one of the big factors in how good Dojo is.
@vsiegel3 жыл бұрын
It still is floating point numbers, just less precise - with lower resolution, basically. You do not need the precision, and if a number uses less precision, it uses less memory. You cannot transfer the data faster, but you can transfer more in the same time. The latency does not change; the throughput doubles if you use half the precision.
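The same point as simple arithmetic (the 100 GB/s link figure is arbitrary, purely for illustration): at a fixed link bandwidth, halving the bytes per value doubles the values moved per second.

```python
link_bw = 100e9  # bytes per second over one link (assumed, illustrative)
for name, nbytes in [("FP32", 4), ("FP16/BF16", 2), ("FP8", 1)]:
    print(f"{name}: {link_bw / nbytes / 1e9:.0f} billion values per second")
```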
@thefoss7212 жыл бұрын
Dude your videos are super solid! I'm super impressed with the info and knowledge and the slight bit of humor to keep things moving swiftly. Can't wait to hear some more info!
@pwells103 жыл бұрын
I subscribed based off the thumbnail. I liked and commented because of the quality of content.
@vsiegel3 жыл бұрын
Practically speaking: AI training normally runs on Nvidia graphics cards, which double as AI training accelerators. Dojo is just a faster AI training accelerator. Ideally you can simply choose to use Dojo instead of Nvidia, and your program does the same as before, but much faster. Alternatively, you can make your AI larger - similar to a higher resolution on a screen - so much so that it runs at the same speed as before, but the AI is better at what it does. How it is done and how much faster it is are mind-blowing.
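A generic PyTorch-style sketch of why that swap is possible (editorial illustration; "cuda" is Nvidia's real backend, while a Dojo backend is hypothetical here since Tesla's software stack isn't shown): the training loop is written once, and the model and batches are simply moved to whatever device the installed backend exposes.

```python
import torch
import torch.nn as nn

# Training code written against a generic "device"; a faster accelerator
# slots in by exposing its own backend without changing this loop.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 64, device=device)           # dummy batch
y = torch.randint(0, 10, (32,), device=device)   # dummy labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())
```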
@oneproductivemusk1pm5653 жыл бұрын
I agree that image is too graphic but it's perfect for the occasion! Lol😂😂😂
@craigruchman70073 жыл бұрын
Best explanation of Dojo I’ve heard.
@markrowland13662 жыл бұрын
While the video mentions Dojo needing twelve units to do what it does - which is impressive - the architecture is infinitely expandable. A standalone single unit might fit in a bedside cabinet; maybe twelve might take up one wall of a bedroom.
@norwegianblue20173 жыл бұрын
Anyone else remember when there was talk about hitting the ceiling on computing power with the 486 processor? This was back in the early 1990s.
@goldnutter4123 жыл бұрын
MS-DOS 3.3.. hmm okay easy enough.. might be a coder next decade.. not a chance in hell no thankyou and goodbye.
@nickarnoldi3 жыл бұрын
Tesla will most likely keep all ExaPODs in-house and offer a subscription to tile time. The Tesla Bot platform will use a Dojo subscription service for training. A VR headset with tactile gloves would allow a user to perform their very complex task, and the client can send builds up to the cloud. Tesla built Dojo with scalability at its core. Dojo is the gateway to AGI.
@emilsantiz38163 жыл бұрын
Excellent Video!!! A very concise explanation of what Dojo is and is not, and its capabilities and limitations!!!!!!
@NarekAvetisyan3 жыл бұрын
The PS5 is 10.2 TFLOPS of FP32 btw so one of these Tesla tiles is only 2 times faster not 35.
@JayTemaatFinance3 жыл бұрын
Great content. Funny analogies. Commenting for the algorithm. 👍🏼
@sowjourner2 жыл бұрын
Amazing... exactly on my level of comprehension without googling in conjunction with listening. Impressive. I immediately subscribed... I never subscribe to any channel. My expectation is hearing more at this perfect and engaging level. A BIG thanks!!
@costiqueR3 жыл бұрын
I really enjoy it, a comprehensive and clear presentation. Thanks!
@konradd85453 жыл бұрын
ASI is beyond our reach for at least 100 years or until we have AGI (Artificial General Intelligence). AGI in itself is infinitely much more complex than a very small task of learning how to drive. Obviously, I'm not saying that self-driving cars is an easy task in terms of computing, but our brain does it infinitely better, faster and on 20W of energy only. I love how lay people overestimate the power of HPC or Machine Learning and underestimate the power of our brains. It's like comparing a single light bulb to a massive star 😂
@vivekpraseed9183 жыл бұрын
Exactly...not all supercomputers put together can rival the ingenuity of a single rat's or bird's brain (or maybe even bacterial colonies with zero neurons). Apes are nearly AGI
@memocappa54953 жыл бұрын
Advancements here are exponential, doubling every 9 months, and that rate itself is improving. It'll happen in the next 5-10 years.
@flareonspotify3 жыл бұрын
100 years? More like 6 months ago 5/3/21
@konradd85453 жыл бұрын
@@memocappa5495 yeah, sure. The same exact predictions were made around 50-60 years ago. And do we have AGI (let alone ASI)? Not even remotely close. It's not about computing and crunching trillions of FLOPS, it's about being able to learn and adapt to any situation based on experiences, and about a million other things. There are two main problems with developing AGI. First, human intelligence is not yet well understood. Even the definitions differ from scientist to scientist. So how on earth are we naive enough to think that we can develop something similar if we don't understand our own natural intelligence? The second main problem is that we are trying to develop AGI on the Von Neumann architecture, which is a futile attempt in itself, unless we want to spend the energy of the entire universe on a 1s simulation of the human brain 😂 I can only see neuromorphic computing as a possible candidate, but those systems are in their infancy. So, despite what media and lay sources say, we are nowhere near AGI. Sorry (not sorry) to burst the bubble.
@konradd85453 жыл бұрын
@@flareonspotify what are you talking about?
@erickdanielsson67103 жыл бұрын
Kool beans. I worked on FPS (Floating Point Systems) array processors in the late '70s - 12 MFLOP, 64-bit systems, hot stuff then. It would take months to solve a problem. Progressed through the years, ending my industry work with SGI/Cray, and spent the last 15 years with DoD and high-speed machines. But this is a step above. Thanks for sharing.
@donwanthemagicma3 жыл бұрын
A lot of companies don't want to take on the risk of making a system like what Tesla is doing and have it not be adopted, because it also brings down the amount of computing that something would need in order to get the calculations right. And that's only if everyone adopts it.
@menghawtok78373 жыл бұрын
If Tesla cracks the autonomous driving puzzle then the financial return would be many times the investment put in. Perhaps most companies don’t have a single use case that can potentially reap such a high return, or management that’s willing to put in the investment to do it.
@donwanthemagicma3 жыл бұрын
@@menghawtok7837 most other companies do not have the people that could even begin to design a system like that in the first place
@BreauxSegreto3 жыл бұрын
Well done 👍 ⚡️
@MrGeorgesm2 жыл бұрын
Bravo! It does help understand the evolution of Tesla’s competitive advantage in FSD and related. Thank you!
@citylockapolytechnikeyllcc79363 жыл бұрын
Dumb this down one more level, and it will be comprehensible to those of us outside the labcoat set. Very interesting presentation
@jamesluccin45533 жыл бұрын
🤣🤣🤣 bruh I understood like 70% of it
@dan926773 жыл бұрын
Both interesting and informative!! Thank you...
@Fitoro672 жыл бұрын
Excellent presentation! TESLA's approach in its DOJO project speaks to the point that the most complex things are made up of simple parts. That kind of thinking, contrary to the idea of absolute perfection, leads us to incredible potential. 😀
@Mikkel1112 жыл бұрын
Nvidia, not Nividia.
@markbullock37412 жыл бұрын
Thank you for the upload.
@Nobody-Nowhere3 жыл бұрын
Cerebras is doing wafer scale AI chips. This year they released the 2nd gen chip. They announced the first version already in 2019. So Tesla is not the only one or first doing this.
@godslayer14153 жыл бұрын
You are fucking clueless.
@godslayer14153 жыл бұрын
@@IOFLOOD With TSMC's atrocious defect levels - prob half that "wafer" is dead.
@gabrielramuglia20553 жыл бұрын
@@godslayer1415 In a traditional "monolithic die" design, one bad transistor could potentially require you to disable an entire CPU core, memory channel, or other critical large structure. If you design with a larger number of smaller structures that are intended to work together and route around any dead spots, your effective "working"/"active" silicon rate can be dramatically higher even with the same number of actual defects. For example, one could presume as few as a dozen defects might make a 1-billion-transistor CPU completely unusable. If you end up with 1/100,000,000 defects on average, that means most of your CPUs will be unusable - and it seems silly to criticize the fab and say the defect rate is very high (1 in 100 million is pretty insanely good); it's just that the tolerances required are insane - maybe with that design of CPU die you need a 1-in-500-million defect rate. Whereas a design that is fault tolerant may lose only 1% of computing capacity for the exact same defects.
@zoltanberkes85593 жыл бұрын
Tesla DOJO is not a wafer-scale chip. They use normal chip technology and put the chips on a wafer-sized interconnect.
@amosbatto30513 жыл бұрын
Very poor info on the D1 at 7:55. The wafer of 25 D1 chips is probably designed to be able to work around bad chips, so they don't have to throw away the entire wafer. Also, Tesla is not the first to make whole wafer chips with many processors. Both UCLA and Cerebras have been doing this since 2019, and there was a company back in the 1980s doing the same.
@TheRealTomahawk3 жыл бұрын
Hey, did Alan Turing use a supercomputer to crack the Enigma code? That's what this reminded me of...
@jabulaniharvey3 жыл бұрын
found this...A young man named Alan Turing designed a machine called a Bombe, judged by many to be the foundation of modern computing. What might take a mathematician years to complete by hand, took the Bombe just 15 hours. (Modern computers would be able to crack the code in several minutes...thirteen to be precise)
@PeterDoingStuff3 жыл бұрын
Thanks for making this video, very informative about HPC
@Philibuster923 жыл бұрын
This was communicated so well and so clearly. Thank you.
@LAKXx3 жыл бұрын
Elon: ''Been telling people we need to slow down AI.'' Meanwhile, builds the fastest machine-learning computer known to mankind
@broughttoideas3 жыл бұрын
Not even close, that would be a quantum computer
@nolansmith79233 жыл бұрын
Can’t beat quantum, but quantum couldn’t be used for this purpose, so technically both of y’all are right.
@wesleyashley993 жыл бұрын
Nowhere to go but forward. Scary as it may be, slowing down will only allow others to pass.
@RixtronixLAB3 жыл бұрын
Nice video clip, keep it up, thank you :)
@arthurwagar62242 жыл бұрын
Thanks. Interesting but beyond my understanding.
@alexforget2 жыл бұрын
Another thing that strikes me about Dojo is the bandwidth. Most computers can only achieve a small fraction of their advertised power because of bandwidth limitations. Dojo's interconnects between chips and wafers mean no slowdown waiting on data access. There is probably a 10X factor in speed right there that is easily overlooked.
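The standard way to formalize "a small fraction of advertised power" is the roofline model, sketched below with made-up peak and bandwidth numbers (not Dojo's or any real machine's specs): attainable throughput is capped by the smaller of peak compute and memory bandwidth times arithmetic intensity.

```python
peak_flops = 300e12   # advertised peak, FLOP/s (assumed)
mem_bw = 1.0e12       # off-chip bandwidth, bytes/s (assumed)

for name, intensity in [("memory-bound kernel", 4), ("compute-bound kernel", 1000)]:
    # intensity = FLOPs performed per byte moved
    attainable = min(peak_flops, mem_bw * intensity)
    print(f"{name}: {attainable / 1e12:.0f} TFLOP/s "
          f"({attainable / peak_flops:.0%} of advertised peak)")
```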
@gregkail43483 жыл бұрын
Good presentation !!!
@sundownaruddock3 жыл бұрын
Thank you for your awesome work
@ottebya3 жыл бұрын
BEST summary of that white paper I have heard, really impressive since every other video that tries to explain it is a mess, this is such complex stuff jeez
@miketharp49143 жыл бұрын
Great report.
@ModernDayGeeks2 жыл бұрын
Awesome video explaining Tesla's Supercomputer. Knowing the possibility of Tesla integrating this to their AI work like Tesla Bot means they can further improve how we understand AIs today!
@henrycarlson75143 жыл бұрын
Interesting , Thank You
@howardjohnson21383 жыл бұрын
Thank you
@annielankford9843 жыл бұрын
Tesla’s genuine company!!👍👍👍👍
@kstaxman22 жыл бұрын
Tesla is always ahead on science and technology.
@automateTec3 жыл бұрын
No matter how large the computer, GIGO (garbage in garbage out) still applies
@GlennJTison3 жыл бұрын
Dojo can be configured for larger floating point formats.
@Leopold51003 жыл бұрын
excellent
@Jesse_Golden3 жыл бұрын
Good content 👍
@ambercook67753 жыл бұрын
It all sounded logical to me ! Lol. I love your channel.
@helder4u3 жыл бұрын
refreshing, thanx.
@YaroslavVoytovych2 жыл бұрын
The big flaw of your video: you try to introduce an AI supercomputer to a general audience by focusing on the computing only, while avoiding even a brief introduction to neural networks - what they do, how they work, why they are used, what training is, why to use them at all, why not to just program things, what they are good for and what they are not, etc.
@bob_frazier3 жыл бұрын
I'd rather Elon Musk have this technology than any other person, company, or even government. We are on the brink of the singularity. Self driving cars brings us here, but nothing else will matter with that one last step.
@meshuggeneh148503 жыл бұрын
Well done
@larryroben16832 жыл бұрын
GOD *** THE AUTHORITY & CREATOR ****
@EdwardTilley2 жыл бұрын
Smart video!
@mmenjic3 жыл бұрын
15:48 If that were the case, then every first big thing in history would have resulted in major development in the field, but often that is not so; usually the first just proves the concept, and then the second, third, and others improve, really innovate, and change things significantly.
@rkaid73 жыл бұрын
Enjoyed the pants flop and odd swear word. Great video.
@robertmont1002 жыл бұрын
Adding double precision is a 15% area hit for the total chip
@sandiegoray012 жыл бұрын
Thank You. I'm only concerned about FSD at this point. As far as I can see - and that's not a super far distance - all other computing needs are gradually being fulfilled. And my association with computers in business has been terminated, as I'm retired. Now my only real connection with computers is trying to find one that will actually be delivered to me, and after that, one which doesn't die on me after 3 months like my last computer purchase - combining that need with a high-end personal computer that will satisfy my rather complex personal computing needs in one package.
@thegreatdeconstruction3 жыл бұрын
IBM made a tile based CPU for supercomputers as well. In the 90s
@davivify2 жыл бұрын
I feel confident that if I had gone into writing Broadway musicals, I'd have also been able to achieve that high number of flops.
@tireman913 жыл бұрын
Beautiful! Just want to remind everyone... DOJO 4 DOGE!
@Human-uv3qx3 жыл бұрын
Support ♥️
@johntempest2672 жыл бұрын
Good job.
@francisgricejr3 жыл бұрын
Wow that's one hella fast Super Computer!
@gti1893 жыл бұрын
I’m an idiot and I understood this easily. Great video thank you.
@randolphtorres41723 жыл бұрын
THANKSGIVING
@matthewtaylor90663 жыл бұрын
Thanks, that's cool. Fantastic work on the story. Could you do more on Dojo?
@kimwilliams7223 жыл бұрын
I also appreciate it when people keep their graphic language to themselves
@Spartan111177773 жыл бұрын
“Elon Musk just has to find the best Engineers in the YouTube Comments.” - Huy Pham, Random YouTuber 😂
@teddygreene20003 жыл бұрын
Very interesting
@thomasruwart17223 жыл бұрын
As a retired supercomputing weenie: the benchmarks used to determine the speed of a supercomputer use all the floating point and integer sizes. So your statement about 64-bit floating point - if I may paraphrase, that it is the most important - is not entirely correct. Yes, it is important. But one researcher I have known and worked with for over 35 years wrote and regularly runs his Computational Fluid Dynamics (CFD) code on every new DoE and DoD supercomputer. His CFD code performs best with 32-bit floating point and is being developed to utilize the 16-bit floating point capabilities of newer Xeon processors.
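A small NumPy illustration of why some scientific codes still want wider floats than ML training does (a generic example, not anyone's production CFD code): naively accumulating many small increments in float16 stalls long before the true answer, while float64 stays accurate.

```python
import numpy as np

n = 100_000
increments = np.full(n, 1e-4)

exact = n * 1e-4                              # 10.0
sum64 = float(np.sum(increments, dtype=np.float64))

sum16 = np.float16(0.0)
for v in increments.astype(np.float16):       # naive running sum in FP16
    sum16 += v                                # stalls once the sum's ULP exceeds 2e-4

print(f"exact: {exact}  float64: {sum64:.4f}  float16: {float(sum16):.4f}")
```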
@knightwolf35113 жыл бұрын
Looking at the comment section, the video got a few things wrong...
@thomasruwart17223 жыл бұрын
@@knightwolf3511 - yup - but overall Dojo is pretty interesting and amazing. Out of curiosity, are you an old retired computer guy like me?
@raphaelgarcia36363 жыл бұрын
Well explained... I understood it & I'm no computer expert by any means... lol... & entertaining. TY :)
@zachariahstovall17443 жыл бұрын
Smooooth Segway
@makeworldbette3 жыл бұрын
No, D1 is made with TSMC, not Samsung
@nolansmith79233 жыл бұрын
He said I assume Samsung, he wasn’t sure.
@jjgerald78773 жыл бұрын
Computers. Supercomputers. Quantum Computers. Hypercomputers. Oracles and O-machines.
@leachjulie49753 жыл бұрын
Successful people don't become victorious overnight. What most people see as a glance wealth, a great career purpose is the results of hard work and hustle over time. I pray that anyone reading this will be successful in life.
@carloscruz73173 жыл бұрын
the days are numbered.
@mxr5723 жыл бұрын
Musk is one of the few CEOs of top tech companies who knows what is going on in his company - not a business graduate, but engineering savvy. Like Ford before he went bonkers.
@benmlee3 жыл бұрын
In today's hyper-competitive tech world, you have to be very sharp at technology to stay afloat. In the past, when the car engine had not changed for a hundred years, the priority was to maintain the operation of the plant. Now, in a few years, everything including gas stations will change. You have to have a deep understanding of technology to lead.
@somaday2595 Жыл бұрын
@ 9:20 -- 1 tile, 18,000 A & 15 kW heat load? Is something like liquid nitrogen removing the heat? Also, is that 18 kA the max A, and the avg is more like 5 kA?
@balaji-kartha3 жыл бұрын
Every action of Tesla is making the future of personal automobiles obsolete !!