Can you make a similar video for GPU vs Integrated GPU? Is there any difference in their architectures?
@RationalistRebel2 ай бұрын
The main differences are the number of cores, processor speed, available power, and memory. Integrated GPUs are part of another system component, usually the CPU nowadays. That limits the number of cores the GPU part can have, as well as their speed and power. They also have to use part of the system memory for their own tasks. Nonetheless, they're more energy efficient and more than powerful enough for most common tasks. Discrete GPUs have their own dedicated processor and high-speed memory. Higher-end GPUs typically require more power, sometimes even more than the rest of the system.
@mohd5rose2 ай бұрын
Don't forget the compute units.
@HorizonOfHope2 ай бұрын
@@RationalistRebel This is well explained. Also, as you scale up any processor, there are diminishing efficiency gains. GPUs are often scaled up so much that they produce huge amounts of heat. Often you need more cooling capacity for the GPU than for the rest of the system put together.
@RationalistRebel2 ай бұрын
@@HorizonOfHope Yep, power consumption == heat production.
@mauriciofreitas33842 ай бұрын
An important thing to pay attention to when upgrading to a dedicated GPU is your power supply. Dedicated GPUs consume more power, and that power has to come from somewhere. Never neglect your power supply when upgrading.
@AungBaw2 ай бұрын
Simple, short, and to the point. Instant sub, thanks mate.
@TechPrepYT2 ай бұрын
That was the goal, glad it was helpful!
@chikenadobo14 күн бұрын
Dude read off of Wikipedia
@kimdunphy20092 ай бұрын
Finally, an explanation that I completely understand and that isn't trying to sell me anything!
@anotherfpsplayer2 ай бұрын
Easiest explanation I've ever heard... there can't be a simpler explanation than this. Brilliant stuff.
@TechPrepYTАй бұрын
Thank you! That's the goal!
@samoerai68072 ай бұрын
Brilliant video! I started my IT Forensics studies last week and will share this with the other students in my class!
@TechPrepYT2 ай бұрын
Glad it was helpful!
@Zeqerak2 ай бұрын
Beautifully done. The best explanation I've come across. I understood the core concepts you explained. Again, beautifully executed.
@TechPrepYT2 ай бұрын
Thanks for the kind words!
@kartikpodugu2 ай бұрын
With the dawn of AI PCs, which always have a CPU, GPU, and NPU, can you make a similar video on the differences between GPUs and NPUs?
@TechPrepYT2 ай бұрын
It's on the list!
@AyoHues2 ай бұрын
A good follow-up video would be a similar short summary explainer on SoCs. And maybe one on the differences between other key components like media engines, NPUs, onboard vs separate RAM, etc. Thanks. 🙏🏽
@TechPrepYT2 ай бұрын
Great idea, I'll put it on the list!
@technicallyme2 ай бұрын
GPUs also have a higher tolerance for memory latency than CPUs. Modern CPUs have some parallelism built in: features such as branch prediction and out-of-order execution are common in almost all CPUs.
@gibiks70363 ай бұрын
Thank you... Simple and short....
@TechPrepYT2 ай бұрын
Thank you!
@atleast2minutes9162 ай бұрын
Thank you so much! Simple, brief, and easy to understand!! Awesome
@TechPrepYTАй бұрын
Glad you enjoyed!
@simonpires61842 ай бұрын
Straight to the point and explained perfectly 👍🏽
@TechPrepYTАй бұрын
That was the goal, thank you!
@Soupie622 ай бұрын
As an example... consider Pac-Man. A grid of pixels is a sprite. Pac-Man has two basic sprite images: mouth open and mouth closed. You need four copies of mouth open, one each for up/down/left/right, so five sprites in total. Movement is created by deleting the old sprite, then copying ONE of these from memory to some screen location. A source, a target, and a simple data copy give you animation. That's what the GPU does.
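In Python-flavoured code, that erase-and-copy step is roughly this (a toy sketch with made-up sprite data; real hardware does the copy itself):

```python
# Toy software sprite "blitting": framebuffer and sprite are plain
# 2D arrays of palette indices. All values here are made up.
WIDTH, HEIGHT = 224, 288                 # Pac-Man's arcade resolution, for flavor
framebuffer = [[0] * WIDTH for _ in range(HEIGHT)]

# A tiny 3x3 "mouth open, facing right" sprite; 0 = transparent.
SPRITE_OPEN_RIGHT = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 0],
]

def blit(sprite, x, y):
    """Copy one sprite from memory to a screen location."""
    for row, line in enumerate(sprite):
        for col, pixel in enumerate(line):
            if pixel:                    # skip transparent pixels
                framebuffer[y + row][x + col] = pixel

def erase(sprite, x, y):
    """Delete the old sprite by overwriting its area with background."""
    for row, line in enumerate(sprite):
        for col in range(len(line)):
            framebuffer[y + row][x + col] = 0

# One animation step: erase at the old position, draw at the new one.
erase(SPRITE_OPEN_RIGHT, x=10, y=20)
blit(SPRITE_OPEN_RIGHT, x=11, y=20)
```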
@davidwuhrer6704Ай бұрын
Not entirely correct. Some architectures can handle sprites, some don't. Some handle matrix multiplication and/or rotozooming, some don't.

What you described is what an IBM PC with MS-DOS does: no sprite handling, no rotozoom, no framebuffer. So you need four copies of the open-mouth sprite, but you can generate them at run time. And you need to delete and redraw the sprite with every frame. Other systems, and I mean practically every other system, make that easier. You only need one open-mouth sprite, you can rotate it as needed, and you don't need to delete it if the sprite is handled in hardware; you just change its position. This is simple if the image is redrawn with each vertical scan interrupt. But that was before GPUs.

Modern GPUs use frame buffers. You have the choice of blanking the buffer to a chosen colour and then redrawing the scene with each iteration, or just drawing over it. The latter may be more efficient: 3D animation often draws environments, and 2D often uses sprite-based graphics, two cases where everything gets painted over anyway. And yes, for sprites that means copying the sprite pattern to the buffer, in a way that makes rotozooming trivial, so you don't need copies for different sizes and orientations either. The mouse pointer is usually handled as a hardware sprite, which means it gets drawn each frame and is not part of the frame buffer.

What you also neglected to mention are colour palettes. There are, in essence, four ways of handling colour: true or indexed, with or without alpha blending. True colour just means using the entire colour space, typically RGB, which is usually 24 bits, that is, 8 bits for each colour channel, or 32 bits if you use alpha blending too. This has implications for how a sprite is stored. If you use a colour palette, you can either use a different sprite for each colour, or store the colour information with the sprite. Usually you would use the former, because it makes combining sprites and reusing them with different colours easier. With the latter, you can get animation effects simply by rotating the colour palette. If you use true colour, you can use a different sprite for each colour channel, but you typically wouldn't, especially if you use alpha blending.
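The palette-rotation effect, sketched in Python (a toy example with made-up values; real hardware does this lookup per pixel during scan-out): because indexed pixels store palette slots rather than colours, rotating the palette recolours the whole screen without rewriting the framebuffer.

```python
# Indexed colour: each pixel stores a palette slot, not an RGB value.
palette = [
    (0, 0, 0),        # slot 0: black (background)
    (255, 255, 0),    # slot 1: yellow
    (255, 0, 0),      # slot 2: red
    (0, 0, 255),      # slot 3: blue
]

# A one-row "framebuffer" of palette indices.
pixels = [1, 2, 3, 1, 2, 3]

def resolve(pixels, palette):
    """What actually appears on screen: each index looked up in the palette."""
    return [palette[i] for i in pixels]

print(resolve(pixels, palette))   # yellow, red, blue, ...

# Rotate the non-background slots by one: every pixel changes colour
# "for free", without touching a single byte of the framebuffer.
palette[1:] = palette[2:] + palette[1:2]
print(resolve(pixels, palette))   # red, blue, yellow, ...
```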
@markonar140Ай бұрын
Thanks for this Great Explanation!!! 👍😁
@TechPrepYTАй бұрын
Glad you found it helpful!
@JimStanfield-zo2pz4 ай бұрын
Very powerful and concise explanation. Keep up the good work.
@TechPrepYT2 ай бұрын
Thank you!!
@olliehopnoodle4628Ай бұрын
Excellent and well put together. Thank you.
@TechPrepYTАй бұрын
Glad you liked it!
@GigaMarou18 күн бұрын
Hey, nice video! Is it just that GPUs have more ALUs for each cache and CU? Or are the GPU's ALUs different in structure? And similarly for CUs and caches?
@lodgechant2 ай бұрын
Very clear and helpful - thanks!
@TechPrepYTАй бұрын
Thank you!!
@natcuber2 ай бұрын
What is the latency difference between a CPU and a GPU, since it's stated that CPUs focus on latency rather than throughput?
@maxmuster70032 ай бұрын
The Intel Core 2 architecture can execute up to 4 integer instructions in parallel on each single core.
@dinhomhm6 ай бұрын
Very clear, thank you. I subscribed to your channel to see more videos like this.
@TechPrepYT5 ай бұрын
Thank you!!
@VashdyTV6 ай бұрын
Beautifully explained. Thank you
@samychihi63172 ай бұрын
Which means Intel's fall is not that bad: if the GPU can't fully replace the CPU, then Intel will remain in the computing and personal PC market. Thanks for the explanation.
@juanclopgar9725 күн бұрын
Do you have a video talking about cores? In this example you show a core with 4 ALUs, and I don't quite understand how a single control unit can handle that.
@waynestewart19192 ай бұрын
Very good. I am subscribing. Thank you.
@TechPrepYTАй бұрын
Thanks for the sub!
@waynestewart1919Ай бұрын
You are very welcome. You more than earned it. That may be the best explanation of CPUs and GPUs on YouTube. Please keep it up.
@gfmarshall2 ай бұрын
Thank you so much 🤯❤️
@TechPrepYTАй бұрын
You're welcome!
@StopWhining4912 ай бұрын
Excellent explanation. Thanks!
@TechPrepYT2 ай бұрын
Thank you!
@maquestiauartandmoreАй бұрын
Great! Thank you, but it would help if you could slow down a little in your explanation. 😅
@TechPrepYTАй бұрын
Yep will try!!
@mohamadalkavmi49322 ай бұрын
very simple and nice
@TechPrepYTАй бұрын
Thank you 😊
@siriusbizniss2 ай бұрын
Holy Cow I’m ready to be a computer engineer. 👍🏾👌🏾🤓
@chandru_Ай бұрын
nice explanation
@TechPrepYTАй бұрын
Thanks!!
@ruan13o2 ай бұрын
From my experience, unless I'm running a game, my GPU is typically barely utilised while my CPU is often highly utilised. So when we are not running games (or similarly intensive graphics applications), why don't computers send some of the (non-graphics) processing to the GPU to help out the CPU? Or do they already do this and I just don't realise?
@christophandreАй бұрын
That's already the case in some programs. The main reason you don't see it that often is that a program must be designed to run (sub)tasks on the GPU. This can make a program a lot more complex really fast, since most programmers don't decide on their own which part of the program is calculated on which part of the computer; that's done by underlying frameworks (in most cases for good reasons).
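For the curious, this is roughly what explicitly offloading a subtask to the GPU looks like from Python using the CuPy library (assuming an NVIDIA GPU with CuPy installed; its array API deliberately mirrors NumPy, but the work runs on the device):

```python
import numpy as np
import cupy as cp  # assumes an NVIDIA GPU with the CuPy library installed

a = np.random.rand(1_000_000).astype(np.float32)

# CPU path: NumPy executes this on the host.
cpu_result = np.sqrt(a) * 2.0

# GPU path: same expression, but the data is copied to device memory
# first, and each operation launches a kernel on the GPU.
a_gpu = cp.asarray(a)                            # host -> device copy
gpu_result = cp.asnumpy(cp.sqrt(a_gpu) * 2.0)    # compute, then copy back

assert np.allclose(cpu_result, gpu_result)
```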
@Trickey2413Ай бұрын
As the video made it a point to highlight, they excel at different things. Forcing a GPU to do a task it is not optimised to do would be less efficient and vice versa.
@abhay6264 ай бұрын
helpful. thank you!
@TechPrepYT3 ай бұрын
Thanks!
@buckyzona6 ай бұрын
Great!
@PhillyHank2 ай бұрын
Excellent!
@TechPrepYT2 ай бұрын
Thank you!
@jozsiolah14352 ай бұрын
When you stress the system by forcing a DOS game to play the intro at very high CPU cycles, you also force the system to turn on secret accelerating features that remain on. One is the sound acceleration of the sound device; it offloads the CPU from decompressing audio. The other is the floating point unit, which is off by default; when it is on, some games become harder. Intel has an autopilot for car games, which is also off by default. With the GPU, the secret seems to be the diamond videos I am experimenting with; many games show diamonds as a reward, and they're hard to get. A diamond stresses the VGA card; it's so complex for the video chip to draw. Also, the tuning will consume the battery faster.
@mostlydaniel2 ай бұрын
2:34 lol, *core* differences
@mrpappa41052 ай бұрын
If possible, explain why a GPU cannot replace a CPU. Great vid, but my old vic64 brain (yeah, I'm old) doesn't get this. Anyway, cheers from a new subscriber.
@illicitryan2 ай бұрын
So let me ask a stupid question: why can't they combine the two and get the best of both worlds? Would doing so negate some functions, rendering it useless? Or... lol, just curious 🤔 😊
@riteshdobhal63812 ай бұрын
A CPU has few cores, which is why parallel processing is hard on it, but each individual core is extremely powerful. A GPU has thousands of cores, making it good at parallel processing, but each individual core is comparatively not very powerful. You could make a processor with as many cores as a GPU and each core as powerful as a CPU's, but that would cost a huge amount of money.
@trevoro.97312 ай бұрын
The question is stupid. You can either get high performance in non-parallel tasks with low latency, or high performance in parallel tasks with high latency. If you extend a normal core with GPU features, it will instantly become large, power hungry, and slow for normal tasks; that is why modern desktop CPUs have built-in separate GPU cores. Also, the problem is that you would need a lot of memory channels to make use of that: memory for GPUs is very slow, but has a lot of internal channels.
@zoeynewark97742 ай бұрын
Can you combine a school bus with a Formula One car? There, you have your answer.
@boltez65072 ай бұрын
APUs do that.
@keithjustinevirgenes73872 ай бұрын
What do you think would happen if the heat of the CPU and a separate GPU were combined under just one cooling fan and heatsink? Plus greater voltage needs, resulting in more heat?
@graszАй бұрын
CISC vs RISC plz
@TechPrepYTАй бұрын
It's on the list!
@graszАй бұрын
@@TechPrepYT yay~!!!
@lanceorventech61292 ай бұрын
What about threads?
@mattslams-windows79182 ай бұрын
A thread is simply a logical sequence of instructions that gets mapped to a core by a scheduler (modern systems usually use both hardware and software scheduling these days), whether that's on a GPU or a CPU, for execution. The primary difference between GPU and CPU threads is that GPUs usually execute the same "copy" of each instruction across all threads being executed on the device (single instruction, multiple data, aka SIMD), whereas on CPUs the different threads can very easily execute all kinds of different instructions simultaneously (multiple instruction, multiple data, aka MIMD). In addition, more than one thread can be mapped to each core, whether it be a CPU or a GPU, and in that case simultaneous multithreading (SMT) hardware is used to execute all the threads at once.
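A loose NumPy analogy for the SIMD/MIMD distinction (illustrative only; real SIMD happens in hardware lanes, not in an interpreter):

```python
import numpy as np

data = np.arange(8, dtype=np.float32)

# MIMD-flavoured: a scalar loop; in principle every iteration could
# branch off into completely different instructions.
out_loop = np.empty_like(data)
for i in range(len(data)):
    out_loop[i] = data[i] * 2.0 + 1.0

# SIMD-flavoured: one multiply and one add, each applied across all
# eight elements in lockstep.
out_vec = data * 2.0 + 1.0

assert np.allclose(out_loop, out_vec)
```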
@Norman-z3s2 ай бұрын
What is it about AI that requires intense parallel computation?
@undercover48742 ай бұрын
In neural networks it's all about matrix multiplications, and we also want to pass multiple inputs through the network (each pass performing a number of matrix multiplications). With a GPU we can run the passes for the different inputs in parallel, instead of doing one input at a time, which can speed up the computations.
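A tiny NumPy sketch of that idea (with made-up layer sizes): one matrix multiplication pushes a whole batch of inputs through a layer at once, which is exactly the shape of work a GPU parallelizes well.

```python
import numpy as np

rng = np.random.default_rng(0)

batch, n_in, n_out = 64, 784, 128         # made-up layer sizes
X = rng.standard_normal((batch, n_in))    # 64 inputs at once
W = rng.standard_normal((n_in, n_out))    # layer weights
b = rng.standard_normal(n_out)            # layer bias

# One matmul transforms every row of X with the same weights,
# independently of the others: a batch processed "in parallel".
hidden = np.maximum(X @ W + b, 0.0)       # linear layer + ReLU
print(hidden.shape)                       # (64, 128)
```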
@meroslave2 ай бұрын
A CPU can never be fully replaced by a GPU, so what is happening now between Intel and NVIDIA!?
@matteoposi95832 ай бұрын
Am I the only one who sees dots in the GPU drawing?
@tysonblake515Ай бұрын
No, you're not! It's an optical illusion.
@avalagum79572 ай бұрын
Still not clear to me: what component is the GPU missing, such that it cannot replace a CPU? Ah, just checked with Perplexity AI: the instruction set that a GPU accepts is too limited to make a GPU a replacement for a CPU.
@nakkabadz64432 ай бұрын
The GPU is like a PhD holder, while the CPU is a jack of all trades. Look at the names: GPU is graphics processing unit, CPU is central processing unit. A GPU can outperform a CPU in a singular task like graphics computation or some other given parallel task, while the CPU can't compute as fast as the GPU but can handle many different tasks simultaneously.
@trevoro.97312 ай бұрын
Most of the things in this video are BS. A GPU can replace a CPU, but it will work many times slower for most tasks while drawing more power. Only on highly parallel tasks is it efficient and fast. Also, it is missing a lot of the features needed to control hardware, like proper interrupt management, etc.
@avalagum79572 ай бұрын
@@trevoro.9731 Why is a GPU slower than a CPU for most tasks?
@trevoro.97312 ай бұрын
@@avalagum7957 It is optimized to consume the minimum amount of energy and perform multiple calculations per cycle, but each calculation takes much longer to finish, up to 100 times slower. All those parallel operations go to waste if you don't need to perform the exact same operation on many entries. Also, its memory is way slower than the CPU's (albeit CPU memory is also not very fast; it merely got ~30% faster over the last 20 years), but it contains a lot of internal channels, so it is efficient at processing large amounts of data that don't need high performance per dataset.
@Theawesomeking44442 ай бұрын
No, you didn't explain anything, nor do you understand it; all you did was read a Wikipedia page. GPUs don't have "more cores", that's a marketing lie; what they have are very wide SIMD lanes. CPUs have them too, but theirs are smaller, in exchange for bigger caches, higher frequency, and less heat.
@mattslams-windows79182 ай бұрын
Honestly, this video, despite oversimplifying a bit (likely due to time constraints), isn't entirely incorrect. There are in fact more cores on average in a GPU; it's just that the architecture of each GPU core is completely different from a CPU core. Companies like Nvidia and AMD aren't lying when they talk about their CUDA core/stream processor counts; it's just that each core in a GPU serves a somewhat different purpose than each core in a CPU. Also, what you're saying about cache isn't really correct: AMD RDNA 2+ cards have a pretty big pool of last-level Infinity Cache that contributes a significant amount to the overall GPU performance.
@Theawesomeking44442 ай бұрын
@@mattslams-windows7918 The problem is, as a graphics programmer, if I were to learn this again, this video tells me nothing; it's literally a presentation a high school student would do for homework: "CPUs do serial, GPUs do parallel". The funny thing is that the main differences are literally shown in the first image he showed, where it shows the number of ALUs (the SIMD lanes; they also have FPUs) of each, yet he didn't explain that because he has no clue what those images mean.

Now, GPUs do have slightly more cores than their CPU counterparts, but it's usually 1.5x-2x higher, not thousands of cores. If you want the correct terminology: a CPU core is equivalent to a streaming multiprocessor in Nvidia, a compute unit in AMD (funnily enough, AMD also refers to them as cores in their integrated graphics specifications), and a core in Apple. A CPU thread is a warp in Nvidia and a wavefront in AMD. A CPU SIMD lane is a CUDA core in Nvidia and a stream processor in AMD.

Now for the cache thing you mentioned: you are probably using a gaming PC or console for the comparison; those will usually have 4-8 core CPUs with 16-32 core GPUs, and in games single-core performance matters more (usually because most game devs don't know how to multithread, haha). For a fairer comparison, take the Ryzen 9 5900X with 12 cores and the RX 6500 with 16 cores, at roughly similar power consumption: L3 cache is 64MB on the CPU and 16MB on the GPU, L2 cache is 6MB on the CPU and 1MB on the GPU, and L1 cache is 768KB on the CPU and 128KB on the GPU. If you get a GPU with higher core counts, you will notice that L3 cache increases a lot but L1 cache stays the same; that's because L3 cache is a shared memory pool for all of the cores within the GPU or CPU, while the L2 and L1 caches are local to each core.

Anyway, that was a long reply; hopefully that answered your questions xD.
@Theawesomeking44442 ай бұрын
@@mattslams-windows7918 lol my reply was removed
@JosGeerinkАй бұрын
@@Theawesomeking4444 It wasn't?
@Theawesomeking4444Ай бұрын
@@JosGeerink Nah, I had another reply in which I explained the technical details, but you can't state facts with proof here, unfortunately.
@jlelelr2 ай бұрын
Can a CPU have something like CUDA?
@mattslams-windows79182 ай бұрын
Depends on your definition of "have": if Nvidia makes a GPU driver that supports the CPU in question, then technically one can combine an Nvidia GPU and that CPU in the same computer to run CUDA stuff. But executing GPU CUDA code on the CPU itself is something Nvidia probably doesn't want people to do, since Nvidia likely wants to keep making money on GPU sales, so executing CUDA code on a CPU will likely not be a thing anytime soon.
@marcopo06Ай бұрын
👍
@zoemayneАй бұрын
I'm just worried they set him up for failure. Those investors gutted the company and sold the most valuable asset: the land they owned. He has my support; I'll make sure to either stop by there with a group of friends or start getting some healthy takeout from them. Those bankruptcy investors should be stopped. Just look how they plundered Toys R Us.
@sauravgupta52892 ай бұрын
Since each GPU core is similar to a CPU core, can we say that it has multiple CPU units?
@aorusgaming59132 ай бұрын
Does this mean that the GPU is just a better version of the CPU, or a faster version of the CPU that can do many calculations at a time? Then why don't we use two GPUs instead of a CPU and a GPU?
@undercover48742 ай бұрын
A GPU only performs better if the task being performed can be parallelized, but the majority of tasks can't be, or don't need, parallel computation, so they would be slower on a GPU. The main power a GPU gives us is parallelization; if a task can't exploit it, the overhead will make the GPU even slower than the CPU.
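A small sketch of that distinction in Python (hypothetical example): the first, loop-free operation parallelizes trivially; the recurrence below it cannot, at least in this straightforward form, because each step needs the previous result.

```python
import numpy as np

x = np.arange(100_000, dtype=np.float64)

# Embarrassingly parallel: every element is independent, so this kind
# of work could be split across thousands of GPU cores.
y = x * 3.0 + 1.0

# Inherently sequential (in this straightforward form): step i needs
# the result of step i-1, so extra cores don't help.
acc = 0.0
out = np.empty_like(x)
for i in range(len(x)):
    acc = acc * 0.5 + x[i]
    out[i] = acc
```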
@pear-zq1uj2 ай бұрын
No, GPU is like a factory with 100 workers. CPU is like a medical practice with 4 doctors. Neither can do each other's job.
@trevoro.97312 ай бұрын
You are wrong about many things. Modern GPUs aren't actually good at performing operations on each single pixel; they are far behind the CPU on that, but they can work with large groups of pixels more efficiently. Modern high-end GPUs have 32-64 cores (top ones like the 4090 have 128); the marketing core counts are a lie. The thread counts are a lie too; the actual number is hundreds of times lower. Those fake "threads" are parallel execution units; they are not threads, they are the same code running over a very large array. Each single core can run 1 or 2 actual threads, so the number of threads for high-end GPUs is usually limited to around 128. Only because of repetitive operations are GPUs faster at some tasks; in general they are much slower than even a crappy processor.
2 ай бұрын
I think pretty much everyone knows the difference between a GPU and a CPU; the most useful information here would be the "why": why the GPU cannot be used as a CPU.
@JuneJuliaАй бұрын
Still can't understand why the GPU can do what it does. Bad video.
@Trickey2413Ай бұрын
You have a low IQ, so you struggle with basic information; it's not your fault. Listen again at 0.5x speed and try to comprehend the essence of what he is saying. Take notes as he lists what each of them does, and then highlight the differences between the two. Make sure you make an effort to understand the words he is using; ask yourself "what does he mean when he says this" and try to formulate it in your own words.
@goodlifesaviorАй бұрын
Thanks for the foolization; we in Russia don't have enough of our own Russian foolization, so we need to be foolizised by American trainers.
@johnvcougar2 ай бұрын
RAM actually stands for “Read And write Memory” … 😉
@pgowans2 ай бұрын
It doesn’t - it’s random access memory
@sauceman29242 ай бұрын
stupid 😂
@Trickey2413Ай бұрын
Imagine trying to correct someone whilst having the IQ of a carrot.
@Mrmask685 ай бұрын
nice ⛑⛑ helpful
@TechPrepYT4 ай бұрын
Thanks!
@mrgran7992 ай бұрын
In the future maybe we will have only one thing... a CGPU.