CPU vs GPU | GPU Programming | Episode 1

10,685 views

Simon Oz

1 day ago

Comments: 22
@silience4095 · 1 month ago
Great video! Excellent introduction, keep it up. I just want to add that one of the main differences is that CPUs are absolutely incredible at branching code, while GPUs are terrible at it. CPUs have decades' worth of optimizations specifically to handle branching really well and keep the pipeline running as uninterrupted as possible.
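
To make the branching point concrete, here is a minimal CUDA sketch (my illustration, not from the video): when threads of the same warp disagree on an if condition, the warp executes both paths one after the other with the non-participating lanes masked off, which is the divergence penalty the comment describes.

// Minimal sketch: even and odd lanes of the same warp take different
// branches, so the warp runs path A and then path B serially.
__global__ void divergent(float *out, const float *in, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        if (i % 2 == 0)
            out[i] = in[i] * 2.0f;   // path A: odd lanes sit idle
        else
            out[i] = in[i] + 1.0f;   // path B: even lanes sit idle
    }
}

A CPU running the equivalent scalar loop would lean on its branch predictor instead, which is exactly the decades of optimization mentioned above.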
@lqx7 · 3 months ago
This is such quality content. Surely it’s going to blow up.
@dhananjayansudhakar3811 · 2 months ago
Thanks for explaining it so well, yet in simple terms.
@Stefan-td1pw · 3 months ago
Can't wait for this to pick up the traction it deserves, investing at 100 views
@xeqqail3546 · 3 months ago
real XD
@vastabyss6496 · 2 months ago
Commenting for the algorithm!
@rickpala_ · 2 months ago
Bro the visual explanations are amazing! Wish the audio was a little better!
@Humble_Electronic_Musician · 2 months ago
Excellent!
@Coder.tahsin · 2 months ago
Nice video, I am learning OpenCL.
@Anjalisharma-dk6tk · 2 months ago
I feel like this channel, and especially this series, needs a rework wherein Simon makes sure we know what the prerequisites are. I just found this channel while looking for a good computer architecture animation channel (don't mind me there), and the content went full speed without any seat belt. On a serious note, what are the prerequisites for this series?
@Anjalisharma-dk6tk · 2 months ago
Just in case you are like me: there is an episode zero that covers all of this. I will study everything needed before starting this series. Can anyone guide me to more sources to go through before starting this beautiful series?
@houski4242564 · 1 month ago
@@Anjalisharma-dk6tk I am trying to understand computer architecture too. I looked into it and started learning from the Coursera course "Build a Modern Computer from First Principles: From Nand to Tetris"; it provides everything needed to learn computer architecture as an absolute beginner.
@Anjalisharma-dk6tk · 1 month ago
@@houski4242564 Thanks for the guidance, ski, I might look into that series as well!
@TheCollectiveHexagon · 1 month ago
@@houski4242564 Would you say that's good for a beginner, or is some background needed?
@SAhellenLily · 1 month ago
🤔🤷
@louisjx8009 · 2 months ago
Dude, wtf. You can't compare CPU cores and GPU cores, they are not the same thing. GPU core counts take the SIMD lane width of the data into account, and CPU core counts just don't. One CPU core can process 8 to 64 additions in one cycle, but you don't multiply the core count by 8. Yet that's exactly what GPU manufacturers do...
@szymonozog7862 · 2 months ago
You are right, I just wanted to show a rough outline of the scale, but the difference should have been made clearer in the video. Thank you for pointing that out.
@vitalyl1327 · 2 months ago
Wrong. A GPU core is not just a wide SIMD unit; there's a lot more to it. A CPU equivalent would be each SIMD instruction sandwiched between scatter-gather instructions.
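
As an illustration of the scatter-gather point (my sketch, not from the thread): on a GPU every thread can load from its own arbitrary index as an ordinary memory access, whereas a wide CPU SIMD loop would need an explicit gather instruction to read eight or sixteen scattered addresses in one go.

// Sketch: per-thread indexed load ("gather"). Each lane reads from a
// different, data-dependent address without any special instruction.
__global__ void gatherKernel(float *out, const float *in, const int *idx, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[idx[i]];
}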
@StarsManny · 1 month ago
You can compare any 2 things that you like. After the comparison, you will find that the 2 things are similar, or fairly similar, or different, or very different etc. The fact that "They are not the same" is actually a very good reason to compare them.
@frenchmarty7446 · 1 month ago
"One CPU core can process 8 to 64 additions in one cycle." Yes, but that is still a single instruction on a single contiguous array. We think of a CPU as a set of hardware threads executing independent sequential streams of instructions. This isn't actually what's happening but that's the abstraction we use. This way of thinking just doesn't translate to GPUs. GPU cores *can* execute separate instructions (i.e. the GPU core is the smallest unit of abstraction in terms of instructions), they are not just sub-units of big SIMD instructions. But they are not as flexible as separate CPU threads; for example, GPU threads in the same group suffer a serious performance penalty if they hit a conditional statement and branch off in separate directions (though they *can* do it). It's not fair to say that GPU core counts are arbitrarily inflated. It makes sense within the GPU paradigm.
@SkegAudio · 21 days ago
Your sensitive reaction is not warranted, my guy.
@dovos8572 · 2 months ago
What language are you using in the examples? Oh, it's CUDA. Welp, more to learn than I thought.