Why CUDA "Cores" Aren't Actually Cores, ft. David Kanter

  109,622 views

Gamers Nexus

6 years ago

We talk about NVIDIA CUDA Cores vs. AMD Stream Processors and why neither is actually a "core," featuring David Kanter of Real World Tech.
Ad: Buy the Corsair Dark Core SE on Amazon (goo.gl/3psBno)
Check out Real World Tech here: www.realworldtech.com/
Read David's article on TBR: www.realworldtech.com/tile-ba...
David Kanter of Real World Tech (formerly Microprocessor Report) joins us to discuss why CUDA "Cores" aren't actually cores, later explaining the differences between CPU & GPU cores, stream processors vs. CUDA cores, and more. The discussion follows GPU architecture and explains the building blocks of a GPU, with details on streaming multiprocessors (SMs), multiply-add operations, texture map units (TMUs), and so forth.
This is a great opportunity to learn from an expert about GPU architecture basics, and to help demystify some of the marketing language used in the industry.
We have a new GN store: store.gamersnexus.net/
Like our content? Please consider becoming our Patron to support us: / gamersnexus
** Please like, comment, and subscribe for more! **
Follow us in these locations for more gaming and hardware updates:
t: / gamersnexus
f: / gamersnexus
w: www.gamersnexus.net/
Host: Steve Burke
Expert: David Kanter
Video: Andrew Coleman
Links to Amazon and Newegg are typically monetized on our channel (affiliate links) and may return a commission of sales to us from the retailer. This is unrelated to the product manufacturer. Any advertisements or sponsorships are disclosed within the video ("this video is brought to you by") and above the fold in the description. We do not ever produce paid content or "sponsored content" (meaning that the content is our idea and is not funded externally aside from whatever ad placement is in the beginning) and we do not ever charge manufacturers for coverage.

Comments: 464
@GamersNexus
@GamersNexus 6 жыл бұрын
Our thanks to David Kanter for joining us. It's old now, but we think you might really like his article explaining tile-based rasterization on NVIDIA GPUs: www.realworldtech.com/tile-based-rasterization-nvidia-gpus/ Check out our DDR5 memory news: kzbin.info/www/bejne/m6elfHycd8p6gM0 Back-order a GN Modmat here for the next round! We sold out already. Again. store.gamersnexus.net/
@d3c0deFPV
@d3c0deFPV 6 жыл бұрын
Thank you, David Kanter! I'm very interested in machine learning and have been messing with Tensorflow recently. It's amazing to see stuff like random forests, which were invented way ahead of their time, finally being used for things like neural networks. Good stuff, would love to see more videos like this.
@TiagoMorbusSa
@TiagoMorbusSa 6 жыл бұрын
And the GPU is throttling for sure, because the draw distance is dynamically reducing. The FPS are rock solid though, good job.
@error262
@error262 6 жыл бұрын
Draw distance is a bit low, did you forget to set it to high in settings? Tho hair physics are spot on, are you using TressFX?
@ObsoleteVodka
@ObsoleteVodka 6 жыл бұрын
Error989 It's the exaggerated level of depth of field.
@michelvanbriemen3459
@michelvanbriemen3459 6 жыл бұрын
Rendered using N64
@Markel_A
@Markel_A 6 жыл бұрын
They had to turn the view distance down to reach 60FPS at 4K.
@JosephCaramico
@JosephCaramico 6 жыл бұрын
They had to reduce draw distance to get better out of the box thermals.
@mhoang_tran
@mhoang_tran 6 жыл бұрын
Steve HairWorks
@ggchbgigghb7699
@ggchbgigghb7699 6 жыл бұрын
David sounds like the kind of guy who could take a few empty soda cans, some plastic spoons + duct tape and produce a 1080 Ti in the garage.
@abdulsadiq8873
@abdulsadiq8873 6 жыл бұрын
someone get that man a cape!
@AwesomeBlackDude
@AwesomeBlackDude 6 жыл бұрын
OR" rebuilt Radeon AMD Vega to... R.A.V.2.
@TanTan-ni4mg
@TanTan-ni4mg 6 жыл бұрын
Ggchb Gig ghb David needs help wiping his own ass. Show me something he has done in the last year? 2 years? 5 years? Ok, lets make it easy, EVER?
@josh223
@josh223 6 жыл бұрын
Tan Tan hes done more than you
@TanTan-ni4mg
@TanTan-ni4mg 6 жыл бұрын
joshcogaming proof?
@Jelle987
@Jelle987 6 жыл бұрын
So that could imply that the Corsair Dark Core SE does not contain an actual core either? /mindblown
@GamersNexus
@GamersNexus 6 жыл бұрын
Brought to you by the Corsair Dark Lane on a Vector Unit SE! Doesn't quite have the same ring to it.
@DoctorWho14615
@DoctorWho14615 6 жыл бұрын
It really has stream processors.
@charlesballiet7074
@charlesballiet7074 6 жыл бұрын
So if Bob owns an orchard with apple trees and it's over 400 apples per tree..... (does math) omg Bob has over a million cores on his farm, hope it's a crypto farm
@samuelj.rivard
@samuelj.rivard 6 жыл бұрын
no its a apples farm... xD
@thomasanderson8330
@thomasanderson8330 5 жыл бұрын
Some computer mice actually have an embedded microcontroller. These run firmware, which is a tiny bit of software that probably does some things to smooth your mouse movements and to detect when you lift the mouse. So this mouse could actually have a working processor in it. Nothing powerful, but it executes commands and processes sensor data.
@sliceofmymind
@sliceofmymind 6 жыл бұрын
THIS! Steve, this is the type of content I love. Pretty please, keep this kind of content coming!
@markp8295
@markp8295 6 жыл бұрын
Frank R If you're smart enough to keep up with this, you're smart enough to know why this cannot be the regular content. Sorry to say but it's views that pay the bills and thus making it accessible to the majority is financially the best option. Leaning either way too often kills revenue.
@sliceofmymind
@sliceofmymind 6 жыл бұрын
I discovered David Kanter through Arstechnica (one of my favorite tech/science websites) and learned a lot about chip engineering. While that website tends to cater to a rather technical demographic, Gamers Nexus has come close to matching their attention to detail. There is a lot of science behind Steve's content and his audience (based off his current subscription total) is far from the majority that visits, say LTT.
@Atilolzz
@Atilolzz 6 жыл бұрын
The inner machinations of my mind are an enigma! -Patrick Star
@joeykeilholz925
@joeykeilholz925 6 жыл бұрын
Atilolzz poetic
@DarkLinkAD
@DarkLinkAD 6 жыл бұрын
**Bill Fagerbakke
@yledoos
@yledoos 6 жыл бұрын
Really loving the technical topics GN has been presenting lately
@carniestyle6454
@carniestyle6454 6 жыл бұрын
Steve's hair flowing majestically in the wind.
@nO_d3N1AL
@nO_d3N1AL 6 жыл бұрын
As a Computer Science PhD student I find this channel to be an amazing source of technical information that, at least for bleeding edge hardware, can't be found anywhere else. Keep up the great work!
@MultiMrAsd
@MultiMrAsd 6 жыл бұрын
The most interesting thing about CUDA cores and SMs (at least in my opinion) is that one SM can render multiple pixels at once, by having its different CUDA cores calculate with different values. But since there is only one instruction fetch and decode unit, all CUDA cores must run in lockstep (meaning they calculate the same instructions). This may result in weird behavior when running shaders with many branches, since all CUDA cores may run all branches and discard the results they don't need. This is why it can be faster to calculate everything the shader could need and then multiply it by one or zero instead of using an if, since both sides of the if statement may get executed when different pixels need to take different branches. This is something every graphics programmer should know, but many don't. Also, that's why it's actually not that bad to market with the number of CUDA cores, since that directly correlates with the number of pixels that can be drawn simultaneously. (Please note that this is simplified; you don't need to tell me that some details are not perfectly correct :p )
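The multiply-by-one-or-zero trick described above can be sketched in plain Python. This is purely illustrative: `shade_branchy` and `shade_branchless` are made-up toy shaders, and a real GPU shader would compute the mask with something like `step()` rather than a Python `if`.

```python
# Hypothetical sketch of the branchless trick: instead of branching per
# pixel, compute both candidate results and select with a 0/1 mask, so
# every lane in the warp executes the same instruction stream.

def shade_branchy(x: float) -> float:
    # Divergent version: lanes taking different sides of the `if`
    # force the SM to run both paths with lanes masked off.
    if x > 0.5:
        return x * 2.0
    else:
        return x + 1.0

def shade_branchless(x: float) -> float:
    hot = x * 2.0    # compute both candidates unconditionally...
    cold = x + 1.0
    mask = 1.0 if x > 0.5 else 0.0  # a real shader would use step(0.5, x)
    return hot * mask + cold * (1.0 - mask)  # ...then blend, no branch

# Both versions agree for every input:
assert all(shade_branchy(v) == shade_branchless(v)
           for v in (0.0, 0.25, 0.5, 0.75, 1.0))
```

As the comment notes, this only pays off when the per-branch work is cheap relative to the cost of divergence.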
@yumri4
@yumri4 6 жыл бұрын
I think you left out too much detail, but yeah
@SerBallister
@SerBallister 6 жыл бұрын
A lot of graphics programmers are familiar with branchless programming. It's somewhat relevant to CPU optimisation as well, where mispredicted branches cost so much performance.
@sjh7132
@sjh7132 6 жыл бұрын
How is always calculating what each branch needs first more efficient than letting one branch calculate (while the others are on hold), and then the others doing their thing? It seems equal to me, except that in the case where an entire warp takes a single branch, using the second method, you save the time of doing the calculations for the other branch. Given that a warp is only 32 "threads", the odds of everyone taking the same branch sometimes aren't that bad.
@metatechnocrat
@metatechnocrat 3 жыл бұрын
listen man, you left out some details. I want to speak to your manager.
@npip99
@npip99 6 ай бұрын
​​​@@sjh7132 Because splitting the work across cores isn't free. When the GPU is executing code inside of an if statement, it needs to enable and disable its execution flag for the cores that aren't going to execute this cycle. Enabling/disabling the execution flag takes time. Additionally, doing so breaks the pipelining process, giving you less than 1 instruction per cycle before and after the execution flag is changed. (Instructions usually take several cycles to execute, the reason why cores achieve 1 instruction per cycle is because they're pipelined) If you have short if statements, the enable/disable issues are a significant fraction of the time spent executing the whole if statement. If the if and the else statement both have very long bodies but are doing something similar, branchless programming can often mean combining them into one body. For a CPU it doesn't matter, for a GPU it's twice as fast in warps that execute both sides of the if. Of course, if you're just wrapping a texture access with an if-statement, it's better to keep it an if than access the texture every single time (Unless you expect the if to happen a very high percentage of the time, but that's up to you)
@trazac
@trazac 6 жыл бұрын
Everyone likes to forget about the GeForce 8 series, where 'CUDA cores' had yet to receive their name and were just called streaming processors. In the same way, Hyperthreading is often called an Intel thing and SMT an AMD thing, yet they are one and the same. Hyperthreading is just marketing.
@SteelSkin667
@SteelSkin667 6 жыл бұрын
This was immensely interesting. I love the series of interviews you have been doing lately, it's great.
@EmpadaoPerfecto
@EmpadaoPerfecto 6 жыл бұрын
Need to watch it again, I'm damn lost
@S3rial85
@S3rial85 6 жыл бұрын
i think i am to stupid for this type of content......will watch anyway :)
@AwesomeBlackDude
@AwesomeBlackDude 6 жыл бұрын
Sebastian wait until the discussion about CGI real time DXR gaming technology.
@oak8728
@oak8728 6 жыл бұрын
it's two
@Dracossaint
@Dracossaint 6 жыл бұрын
Same, but it excites me to have a lane to further my understanding of computers and a direction to look for it also.
@Advection357
@Advection357 6 жыл бұрын
Basically a core is a self-sufficient and independent structure that can access memory and other things on its own. What the guy is saying is that he doesn't consider CUDA cores true cores because they are not independent from the rest of the structure. A vector execution unit (which is what a CUDA "core" really is) is the part of the chip that crunches 3D vectors, angles, magnitudes etc... it does the math to move and transform 3D objects. 3D graphics is mostly vector math. A vector is essentially a position in 3D space with a direction and a magnitude. Think of it as an arrow pointing in some direction. It's the fundamental structure of all 3D graphics.
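The vector math described above mostly boils down to matrix-vector multiplies, which is exactly the arithmetic GPU vector units do in bulk. A tiny illustrative sketch (the `mat_vec` helper and the scaling matrix are invented for this example):

```python
# Transforming a 3D point is a 3x3 matrix times a 3-vector:
# three dot products, i.e. a pile of multiply-adds.

def mat_vec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

# A uniform scale-by-2 transform:
scale_2x = [[2, 0, 0],
            [0, 2, 0],
            [0, 0, 2]]

assert mat_vec(scale_2x, (1, 2, 3)) == (2, 4, 6)
```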
@AlienFreak69
@AlienFreak69 6 жыл бұрын
In shader programming, in order to create the illusion of lighting (lit/unlit objects), you have to multiply, or in simpler terms combine, two or more colors together. These colors are stored as floating-point (decimal) numbers. Once all the math is done and the final color is determined, the pixel is drawn on the screen. Before it actually becomes a pixel, during the math process it's a vector stored inside a matrix, not a pixel. That's about as simply as I can explain his multiplication process.
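The per-channel color multiply described above can be sketched like this (a toy illustration; the `modulate` helper and the color values are invented for the example):

```python
# Each channel is a float in [0, 1]; modulating a surface color by a
# light color is a per-channel multiply, the basic "lit color" step.

def modulate(surface, light):
    """Per-channel multiply of two RGB colors stored as floats."""
    return tuple(s * l for s, l in zip(surface, light))

red_surface = (1.0, 0.2, 0.2)
dim_white_light = (0.5, 0.5, 0.5)

lit = modulate(red_surface, dim_white_light)
assert lit == (0.5, 0.1, 0.1)  # the red surface, at half brightness
```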
@Multimeter1
@Multimeter1 6 жыл бұрын
Steve can you do technical analysis just like his some time in the future? It’s always great to get sources from multiple people. As shown at the beginning. I love in depth info of how computers work transistor level.
@Cafe_TTV
@Cafe_TTV 6 жыл бұрын
This guy knows what the hell he's talking about!
@SCrowley91
@SCrowley91 6 жыл бұрын
Videos like this are why you guys are my favorite tech channel! Keep them up! Educating the community is so valuable, I'm glad you guys take the time to get into the real details!
@Gahet
@Gahet 6 жыл бұрын
Awesome video, I love this discussion format. This is a really interesting topic, explained simply enough to make sense to me, but just far enough over my head to get me to do some additional reading. I love it! I would really enjoy more videos like this. o/
@danshannon1001
@danshannon1001 6 жыл бұрын
You really Hammered that answer out !!
@danshannon1001
@danshannon1001 6 жыл бұрын
Sorry but had to it . !! lol !!
@MentalCrusader
@MentalCrusader 6 жыл бұрын
I really like all those interviews you are putting out lately
@BillyMizrahi
@BillyMizrahi 6 жыл бұрын
This is good content. These technical videos are a great change of pace compared to the usual content GN produces. Love it.
@PJosh23
@PJosh23 6 жыл бұрын
Thank you! So many questions answered! And then some! I appreciate videos like this Steve, keep up the great work, love your channel!
@OGBhyve
@OGBhyve 6 жыл бұрын
Love these highly detailed videos!
@RamB0T
@RamB0T 6 жыл бұрын
I loved this talk, so much great information packed into 17min!
@chaoticblankness
@chaoticblankness 6 жыл бұрын
More David plz, always a treat. I really miss the old TechReport podcast days.
@LandoCalrissiano
@LandoCalrissiano 5 жыл бұрын
Love this this kind of technical discussion. Keep them coming.
@orcite
@orcite 6 жыл бұрын
Thanks for this one! Learned a lot
@jacoavo7875
@jacoavo7875 6 жыл бұрын
I like the use of "Knockin' on Heaven's Door" as your backing track throughout the video.
@gravitationalpull1941
@gravitationalpull1941 6 жыл бұрын
Great and informative as usual. Thanks for the knowledge!
@VithorLeal
@VithorLeal 6 жыл бұрын
Great video! Loved the depth of it! Want more!
@osgrov
@osgrov 6 жыл бұрын
Interesting chat. I love these little talks you have with people now and then, please keep that up! No matter if it's random cool people or industry insiders, it's always interesting to hear people who know what they're talking about, well, talk! About stuff they know! :)
@landwolf00
@landwolf00 6 жыл бұрын
I really like this type of content; in depth technical descriptions of how GPUs are excellent at linear algebra!
@doppelkloppe
@doppelkloppe Жыл бұрын
This video aged really well. Thanks for making these Videos, Steve.
@issaciams
@issaciams 5 жыл бұрын
I love these intelligent discussions/interviews/explanations. Please continue doing these, GN. 😁
@MarkJay
@MarkJay 6 жыл бұрын
great info in this video. Thanks!
@rowanjugernauth5519
@rowanjugernauth5519 6 жыл бұрын
Very informative. Thanks!!
@Jdmorris143
@Jdmorris143 6 жыл бұрын
Videos like these are the reason I come to you. Also because you are honest about your reviews. All hail tech Jesus.
@janhrobar782
@janhrobar782 5 жыл бұрын
Very nice, I really enjoyed it. I'd love more talks with this guy.
@spinshot6454
@spinshot6454 6 жыл бұрын
Thank you for a great piece of content. This is exactly why I subscribed in the first place, keep it up!
@jadedhealer7367
@jadedhealer7367 6 жыл бұрын
love these kinds of videos
@cainzjussYT
@cainzjussYT 6 жыл бұрын
Thanks! This is exactly the video I needed. This CUDA core thing was driving me nuts.
@JordanService
@JordanService 4 жыл бұрын
Wow this video is so excellent, David is amazing!
@TJCCBR47
@TJCCBR47 6 жыл бұрын
Easy to follow, great interview!
@Najstefaniji
@Najstefaniji 3 жыл бұрын
hahahahaahah
@ghosty1233
@ghosty1233 6 жыл бұрын
Super interesting conversation!
@BUDA20
@BUDA20 6 жыл бұрын
HAMMER TIME !, awesome video btw, keep it coming!.
@theDanMicWebshow
@theDanMicWebshow 6 жыл бұрын
Loved the video, very informative
@Xenonuke
@Xenonuke 6 жыл бұрын
Someone's trying real hard to tell us a *knock knock* joke
@lukefilewalker9454
@lukefilewalker9454 6 жыл бұрын
Very cool, knowledge is power. This video opens up a whole different perspective on how GPU's and CPU's differ but yet the same. Very interesting. Thumbs up!
@mrnix1001
@mrnix1001 6 жыл бұрын
I like this (minus the hammering in the background). Let's have more expert-interviews where they explain something in detail!
@evansmith9586
@evansmith9586 6 жыл бұрын
Best content GN has ever made! Thank you so much! Need more of this.
@kaihekoareichwein9392
@kaihekoareichwein9392 6 жыл бұрын
Cool stuff, thanks for the content!
@l33tbastard
@l33tbastard 6 жыл бұрын
This is some good background noise. Thank you for that! :)
@llanzas95
@llanzas95 6 жыл бұрын
Ivo Silva ambient noise lvl set to 100%
@justinlynn
@justinlynn 6 жыл бұрын
Huge thanks for publishing technical material like this! It's really nice to get behind all the marketing nonsense to get a better understanding of the underlying technology.
@Slavolko
@Slavolko 6 жыл бұрын
Please do more technical content like this.
@Disobeyedtoast
@Disobeyedtoast 6 жыл бұрын
I want more in depth content like this.
@ebwrightjr
@ebwrightjr 6 жыл бұрын
Great video!
@TehPlanetz
@TehPlanetz 6 жыл бұрын
Bold content. Much appreciated. 10/10.
@Advection357
@Advection357 6 жыл бұрын
Great topic!
@roax206
@roax206 6 жыл бұрын
Just found this video, and for someone who has a bachelor's degree in computer science this makes a hell of a lot more sense than the jargon that is usually thrown around. In short, a "core" is usually made up of:
1. A set of circuits that handles integer instructions (whole-number arithmetic/logic): the "Arithmetic Logic Unit" (ALU).
2. A set of circuits that handles floating-point instructions (decimal arithmetic): the "Floating Point Unit" (FPU).
3. An "Instruction Decoder", which directs each instruction to the right set of circuits (ALU or FPU) and converts it into the electrical signals that drive that unit.
4. Lastly, the "clock", which switches on and off at a set speed to separate one instruction from the next.
In this video he says GPUs use a setup where there is still only one instruction decoder and clock, but a large number of FPUs designed for specific instructions. Thus what NVIDIA calls a shader core is not the whole package but just the FPU that actually computes the decoded instructions. By limiting the core to a small set of instructions on floating-point numbers only, they can make these FPUs much smaller and cram many more of them into the same area.
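The breakdown above can be sketched as a toy dispatcher. This is purely illustrative: the `alu`/`fpu`/`decode_and_execute` names and the three-field instruction format are invented here, not any real ISA.

```python
# One decoder routes each instruction to the integer (ALU) or
# floating-point (FPU) circuits, matching the comment's breakdown.

def alu(op, a, b):
    """Integer arithmetic/logic circuits."""
    return {"add": a + b, "sub": a - b}[op]

def fpu(op, a, b):
    """Floating-point circuits."""
    return {"fadd": a + b, "fmul": a * b}[op]

def decode_and_execute(instr):
    op, a, b = instr
    # The decoder inspects the opcode and picks the execution unit.
    unit = fpu if op.startswith("f") else alu
    return unit(op, a, b)

program = [("add", 2, 3), ("fmul", 1.5, 4.0), ("sub", 10, 4)]
results = [decode_and_execute(i) for i in program]
assert results == [5, 6.0, 6]
```

In the GPU arrangement the comment describes, one decoder would feed many FPU-like lanes at once instead of a single execution unit.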
@haozheng278
@haozheng278 5 жыл бұрын
roax206 you forgot the fetch unit and the store unit, also modern cores have other units like branch prediction
@AchwaqKhalid
@AchwaqKhalid 6 жыл бұрын
This is the exact piece of information that i was looking for
@sobhan704
@sobhan704 3 жыл бұрын
Love such content!
@BlitzvogelMobius
@BlitzvogelMobius 6 жыл бұрын
I'd like Steve and GN to do a multi-episode detailed analysis of graphics architectures since the beginning of PCs. I've been PC gaming since the final days of separate vertex and pixel shaders (2005 & 2006), and the switch to unified shaders rightfully took the PC gaming world by storm. Interestingly enough, the first unified shader GPU - Xenos in the 360 - was more or less directly related to the pixel shaders found in the Radeon X1000 series.
@meladath5181
@meladath5181 6 жыл бұрын
As a C++ Programmer (dabbled in GPGPU) I found this video very interesting!
@MazeFrame
@MazeFrame 6 жыл бұрын
Finally a video that goes into detail. Would have loved some more insight into Bulldozer.
@Peter_Cordes
@Peter_Cordes 4 жыл бұрын
Kanter's deep-dive article on Bulldozer is on his site: www.realworldtech.com/bulldozer/. Lots of block diagrams and detailed analysis comparing BD to previous AMD and Intel microarchitectures. Bulldozer is an interesting design, but several of its features were mistakes in hindsight that they "fixed" for Zen. e.g. Zen has normal write-back L1d cache vs. Bulldozer's weird 16k write-through L1d with a 4k write-combining buffer. Having 2 weak integer cores means that single-threaded integer performance is not great on Bulldozer-family. Zen uses SMT (the generic name for what Hyperthreading is) to let two threads share one wide core, or let one thread have all of it if there isn't another thread. Zen also fixed the higher latency of instructions like `movd xmm, eax` and the other direction, which is like 10 cycles on Bulldozer (agner.org/optimize/ ) vs. about 3 on Intel and Zen. Steamroller improved this some but it's still not great. Kinda makes sense that 2 cores sharing an FP/SIMD unit would have higher latency to talk to it, and it's not something that normal code has to do very often. Although general-purpose integer XMM does happen for integer FP conversions if the integer part doesn't vectorize.
@NeimEchsemm
@NeimEchsemm 6 жыл бұрын
Niceeeee :D Also some points that were not mentioned: programming a GPU is much harder than multithreading (using every core of a CPU) due to their physical properties. And even for multi-core CPUs, parallelising a workload is most of the time not so easy.
@gravelman5789
@gravelman5789 4 жыл бұрын
Good interview technique!!! i wish everyone listened to their guests as good!!! 👏👏😁😁🖒🇺🇸☯♾
@RadoVod
@RadoVod 6 жыл бұрын
I like this type of content. Same as the one with primitive shaders. Keep it up!
@dimitris.p.kepenos
@dimitris.p.kepenos 6 жыл бұрын
Best. Channel! This contend is amazing!
@FrumpyPumpkin
@FrumpyPumpkin 6 жыл бұрын
The legendary Kanter! Whooooooo!
@ADR69
@ADR69 6 жыл бұрын
This dude's awesome. Thank you
@JasperSkallow
@JasperSkallow 6 жыл бұрын
Thanks David
@Tubaditor
@Tubaditor 6 жыл бұрын
Please make these interviews much longer. When you are already interviewing someone who knows this low-level stuff, you should try to get even more information out of them. Good interview. Keep going!
@Valfaun
@Valfaun 6 жыл бұрын
I always thought it was strange that a GPU could have thousands of cores while CPUs are lucky to reach double digits. Also, knock knock
@theDanMicWebshow
@theDanMicWebshow 6 жыл бұрын
Valfaun who's there?
@Sliced_cheese98
@Sliced_cheese98 6 жыл бұрын
Daniel Bryant Me a nugger
@Valfaun
@Valfaun 6 жыл бұрын
Daniel Bryant someone who wants them nerds off their roof, perhaps
@grandsome1
@grandsome1 6 жыл бұрын
Best analogy I heard is that a CPU is a room with a few geniuses that solve complex problems while GPUs are like gymnasium filled with teens trying to solve for x.
@joesterling4299
@joesterling4299 6 жыл бұрын
More like a gym-ful of teens told to go paint one picket each on a fence.
@blazbohinc5735
@blazbohinc5735 6 жыл бұрын
A very interesting piece of content. Goof stuff, Steve. Keep em coming :)
@rockboy36
@rockboy36 6 жыл бұрын
sick no-scope @ 8:04
@il2xbox
@il2xbox 6 жыл бұрын
Recently I was thinking I wanted to know about real time raytracing and you made a video on it. I also wanted to know more about GPU architecture since I've studied CPU architecture but never GPUs. And here you are again making a video on it. Are you reading my mind, Steve? XD
@nutterztube
@nutterztube 6 жыл бұрын
Someone was hammering for sure.
@Mac_Daffy
@Mac_Daffy 6 жыл бұрын
Nice! I suggest the next video should be about FPGA‘s and why they perform better for ray tracing.
@silkmonkey
@silkmonkey 6 жыл бұрын
Neat. I still don't get it, but I'm not a microprocessor engineer so what are you gonna do.
@yumri4
@yumri4 6 жыл бұрын
In short, what GPUs mostly execute is a fused multiply-add (FMA) instruction: take two values, multiply them together, then add the result to the value already sitting in the accumulator, over and over until the FMA stream is done. They do it very fast because they need nothing but the FMA instruction for most of the work. The other instructions are handled by fewer (or more) units; NVIDIA and AMD don't tell the public how many are in that block of their GPUs' block diagrams.
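The FMA pattern described above, written out as a plain accumulation loop (a sketch for intuition; real hardware fuses the multiply and add into a single instruction with one rounding step):

```python
# The core of most GPU math is "acc = acc + a * b", repeated at
# enormous scale. Here it is as a plain dot product.

def dot(xs, ys):
    acc = 0.0
    for a, b in zip(xs, ys):
        acc = acc + a * b  # one FMA per element on real hardware
    return acc

assert dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]) == 32.0
```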
@jordanrodrigues1279
@jordanrodrigues1279 5 жыл бұрын
@yumri4 AMD is *very* forthcoming with architectural details. There's even an assembly language optimization manual for Vega that's about as detailed as the similar document for Zen/Zen+ CPUs. The part that's hard for anyone to wrap their head around is that a group of threads is forced to follow instructions together. It's like a three-legged race, except it's sixty-four people wide instead of just two per team, and it's not just a race: each one is actually playing hopscotch, and the hopscotch diagram is your shader program. AMD actually explains in moderately useful detail the rules of this game, which is miles ahead of nVidia. (nVidia is better at providing high-level APIs, at least if you don't mind pledging allegiance to nVidia hardware and their closed-source software; why, they'd *never* exploit a strong market position.) ((Full disclosure: I'm an AMD fangirl. But I'm really hoping that Intel enters the market in a meaningful way: they're pretty good friends of open source, almost as friendly as AMD is, and they can afford to donate a *lot* of time and information. Better open source middleware will force nVidia to compete on raw hardware merits and/or price instead of "we bought out the new and upcoming GPU ideas, all your gaming is belong to us." That said, nVidia hasn't been slacking the past several generations; it's just that they've built a little too good of a money-printing machine and it's making them *lazy*.))
@yumri4
@yumri4 5 жыл бұрын
@jordanrodrigues1279 How does that relate to the fused multiply-add instruction in the GPU's instruction set? I agree they are more open about what they are doing, but you must have misread or not understood what I meant. NVIDIA, AMD, ARM and Intel all use some version of this instruction. The programmer rarely gets to see the binary, as programming through APIs is a lot quicker and far more compatible across hardware than writing machine code. How it happens differs between an API call on a GPU and one on a CPU, because of what is being "called", but it is the same operation being done. It is all math, so you really only have add, subtract, and store; that is all any microprocessor can do once the instructions and data are on the chip.
@dwindeyer
@dwindeyer 5 жыл бұрын
Are the FPU lanes in each SM individually addressable or asynchronous? Or does each SM/CU have to run all the lanes in a single pass?
@dteitleb
@dteitleb 6 жыл бұрын
I think a little clearer explanation is to point out that all the threads in a warp / workgroup share a single program counter. If two sides of a branch are taken by different threads then you wind up with branch divergence and the threads have to serially execute the two branches individually. Following that serial execution the thread scheduler can then reconverge the threads so they all run in parallel again. But basically if you're not branching then the performance of all the cuda "cores" behaves the same as if they were all full-fledged "cores"
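The shared-program-counter behavior described above can be simulated in a few lines. This is a made-up toy model, not real warp scheduling: `run_warp` and its masking scheme are invented purely for illustration of serial execution of a divergent branch followed by reconvergence.

```python
# All "threads" in a warp step through the same instructions; an
# execution mask disables the lanes whose branch isn't currently
# being executed.

def run_warp(xs):
    mask_then = [x > 0 for x in xs]  # lanes taking the `if` side
    out = [0] * len(xs)
    # Serially execute the `then` side, non-taking lanes masked off...
    for i, active in enumerate(mask_then):
        if active:
            out[i] = xs[i] * 2
    # ...then the `else` side for the remaining lanes, then reconverge.
    for i, active in enumerate(mask_then):
        if not active:
            out[i] = xs[i] - 1
    return out

assert run_warp([3, -1, 0, 5]) == [6, -2, -1, 10]
```

When every lane takes the same side (no divergence), one of the two passes does no work, which matches the "behaves the same as full-fledged cores" case in the comment.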
@Imhotep397
@Imhotep397 2 жыл бұрын
So, is there a memory controller that’s doing the fetching and delivering for all of the clusters, some of the clusters or is that part of the CPUs job? (Not likely but I had to ask)
@Michaelwentsomewhere
@Michaelwentsomewhere 5 жыл бұрын
Anyone know why the Corsair mouse linked in the description is sold out everywhere? This video isn't that old
@aplacefaraway
@aplacefaraway 6 жыл бұрын
During filming a construction worker is performing many hammer to nail operations
@kevin-jm3qb
@kevin-jm3qb 5 жыл бұрын
Steve, u gotta make a video on the breakdown of a CPU architecture
@thevortexATM
@thevortexATM 6 жыл бұрын
More vids like this!
@MatrixJockey
@MatrixJockey 6 жыл бұрын
Dope video
@disinlungkamei2869
@disinlungkamei2869 2 ай бұрын
Nice talk
@TheWheelerj5000
@TheWheelerj5000 6 жыл бұрын
I love the guy hammering in the back ground
@mr.needmoremhz4148
@mr.needmoremhz4148 6 жыл бұрын
Good content, love the more in-depth stuff. This really separates your channel from the other tech channels!! Everyone talking about the same stuff in short videos is kinda boring. While i don't understand everything this triggers me to do some learning and read more.
@callumturner9101
@callumturner9101 6 жыл бұрын
Lovely weather.
@unvergebeneid
@unvergebeneid 6 жыл бұрын
Is that talk he gave at UC Davis online somewhere?
@raredreamfootage
@raredreamfootage 6 жыл бұрын
Can you talk about how the fundamental "core" design changed when going from Fermi to Maxwell? I believe this was one of the paradigm shifts for CUDA.
@ventisca89
@ventisca89 6 жыл бұрын
1. So that's THE David Kanter, amazing 2. As far as I knew (before watching this video), the CUDA "cores" are more like ALUs than complete cores. Nice deeper explanation here.
@hrvojeherceg2636
@hrvojeherceg2636 6 жыл бұрын
That guy with hammer is really annoying 😠
@AlienFreak69
@AlienFreak69 6 жыл бұрын
Steve looking over the edge to check if the guy is within spitting range before realizing he's on camera and proceeding to nod in agreement.
@joer8854
@joer8854 6 жыл бұрын
Omg. I thought maybe there was something wrong with my receiver. I know it's not their fault but holy crap is that distracting.
@joer8854
@joer8854 6 жыл бұрын
At least he had parents who gave a damn unlike you who acts like an asshole for no reason.
@PSNGormond
@PSNGormond 6 жыл бұрын
That's not a hammer, it's the audience banging their heads against the wall trying to follow the explanation!
@nswitch1083
@nswitch1083 6 жыл бұрын
We are at the brink of advancement in computing technology. The only problem is making the computing technology smaller without affecting temps and transfer. If we ever start using gallium nitride as a core component and replacement of silicon then we might be looking at the next jump in hardware evolution. So exciting.
@samdovakin2977
@samdovakin2977 6 жыл бұрын
Can you use HBM as cache in a CPU?
@kn00tcn
@kn00tcn 6 жыл бұрын
2:47 steve mesmerized by his dream coming true
@reinerheiner1148
@reinerheiner1148 4 жыл бұрын
Was that smog in the background or just fog?
@Kokinkun
@Kokinkun 6 жыл бұрын
Great informative video! It's easy to tell who has lived in California when they give statements but make it sound like a question. Is this prominent in other regions?