CPU vs GPU | Simply Explained

153,757 views

TechPrep

1 day ago

Comments: 128
@rajiiv00 2 months ago
Can you make a similar video for GPU vs Integrated GPU? Is there any difference in their architectures?
@RationalistRebel 2 months ago
The main differences are the number of cores, processor speed, available power, and memory. Integrated GPUs are part of another system component, usually the CPU nowadays. That limits the number of cores the GPU part can have, as well as their speed and power. They also have to use part of the system memory for their own tasks. Nonetheless, they're more energy efficient and more than powerful enough for most common tasks. Discrete GPUs have their own dedicated processor and high-speed memory. Higher-end GPUs typically require more power, sometimes even more than the rest of the system.
@mohd5rose 2 months ago
Don't forget the compute units.
@HorizonOfHope 2 months ago
@@RationalistRebel This is well explained. Also, as you scale up any processor, there are diminishing efficiency gains. GPUs are often scaled up so much that they produce huge amounts of heat. Often you need more cooling capacity for the GPU than for the rest of the system put together.
@RationalistRebel 2 months ago
@@HorizonOfHope Yep, power consumption == heat production.
@mauriciofreitas3384 2 months ago
An important thing to pay attention to when scaling up to a dedicated GPU is your power supply. GPUs consume more power, and that power has to come from somewhere. Never neglect your power supply when upgrading.
@AungBaw 2 months ago
Simple, short, and to the point. Instant sub, thanks mate.
@TechPrepYT 2 months ago
That was the goal, glad it was helpful!
@chikenadobo 14 days ago
Dude read off of Wikipedia
@kimdunphy2009 2 months ago
Finally an explanation that I completely understand and that isn't trying to sell me anything!
@anotherfpsplayer 2 months ago
Easiest explanation I've ever heard... there can't be a simpler explanation than this. Brilliant stuff.
@TechPrepYT 1 month ago
Thank you! That's the goal!
@samoerai6807 2 months ago
Brilliant video! I started my IT Forensics studies last week and will share this with the other students in my class!
@TechPrepYT 2 months ago
Glad it was helpful!
@Zeqerak 2 months ago
Beautifully done. The best explanation I came across. Understood the core concepts you explained. Again, beautifully executed.
@TechPrepYT 2 months ago
Thanks for the kind words!
@kartikpodugu 2 months ago
With the dawn of AI PCs, which always have a CPU, GPU, and NPU, can you make a similar video on the differences between a GPU and an NPU?
@TechPrepYT 2 months ago
It's on the list!
@AyoHues 2 months ago
A good follow-up would be a similar short summary explainer on SoCs. And maybe one on the differences between the other key components like media engines, NPUs, onboard vs separate RAM, etc. Thanks. 🙏🏽
@TechPrepYT 2 months ago
Great idea, I'll put it on the list!
@technicallyme 2 months ago
GPUs also have a higher tolerance for memory latency than CPUs. Modern CPUs have some parallelism built in: features such as branch prediction and out-of-order execution are common in almost all CPUs.
@gibiks7036 3 months ago
Thank you... Simple and short....
@TechPrepYT 2 months ago
Thank you!
@atleast2minutes916 2 months ago
Thank you so much! Simple, brief, and easy to understand!! Awesome.
@TechPrepYT 1 month ago
Glad you enjoyed!
@simonpires6184 2 months ago
Straight to the point and explained perfectly 👍🏽
@TechPrepYT 1 month ago
That was the goal, thank you!
@Soupie62 2 months ago
As an example... consider Pac-Man. A grid of pixels is a sprite. Pac-Man has 2 basic sprite images: mouth open and mouth closed. You need 4 copies of mouth open, for up/down/left/right: 5 sprites total. Movement is created by deleting the old sprite, then copying ONE of these from memory to some screen location. A source, a target, and a simple data copy gives you animation. That's what the GPU does.
@davidwuhrer6704 1 month ago
Not entirely correct. Some architectures can handle sprites, some can't. Some handle matrix multiplication and/or rotozooming, some don't. What you described is what an IBM PC with MS-DOS does: no sprite handling, no rotozoom, no framebuffer. So you need four copies of the open-mouth sprite, but you can generate them at run time. And you need to delete and redraw the sprite with every frame.
Other systems, and I mean practically every other system, make that easier. You only need one open-mouth sprite, and you can rotate it as needed, and you don't need to delete it if the sprite is hardware-handled; you just change its position. This is simple if the image is redrawn with each vertical scan interrupt. But that was before GPUs.
Modern GPUs use frame buffers. You have the choice of blanking the buffer to a chosen colour and then redrawing the scene with each iteration, or just drawing over it. The latter may be more efficient: 3D animation often draws environments, and 2D often uses sprite-based graphics, two cases where everything gets painted over anyway. And yes, for sprites that means copying the sprite pattern to the buffer, in a way that makes rotozooming trivial, so you don't need copies for different sizes and orientations either. The mouse pointer is usually handled as a hardware sprite, which means it gets drawn each frame and is not part of the frame buffer.
What you also neglected to mention are colour palettes. There are, in essence, four ways of handling colour: true or indexed, with or without alpha blending. True colour just means using the entire colour space, typically RGB, which is usually 24 bits, that is, 8 bits per colour channel, or 32 bits if you use alpha blending too. This has implications for how a sprite is stored. If you use a colour palette, you can either use a different sprite for each colour, or store the colour information with the sprite. Usually you would use the former, because it makes combining sprites and reusing them with different colours easier. With the latter, you can get animation effects simply by rotating the colour palette. If you use true colour, you can use a different sprite for each colour channel, but you typically wouldn't, especially if you use alpha blending.
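The delete-and-redraw blitting described in these two comments can be sketched with NumPy arrays standing in for video memory. This is only an illustrative toy: the screen size, sprite pattern, and palette indices are all invented, and a real GPU does this copy in hardware rather than in Python.

```python
import numpy as np

# Framebuffer: a 16x16 "screen" of palette indices, 0 = background colour.
screen = np.zeros((16, 16), dtype=np.uint8)

# One 4x4 open-mouth sprite (facing right). Rotations give the other three
# directions, so only a single copy needs to be stored.
sprite_right = np.array([[0, 1, 1, 1],
                         [1, 1, 1, 0],
                         [1, 1, 0, 0],
                         [0, 1, 1, 1]], dtype=np.uint8)
sprite_up = np.rot90(sprite_right)  # generated at run time, as noted above

def draw(fb, sprite, y, x):
    # "A source, a target, and a simple data copy."
    fb[y:y + sprite.shape[0], x:x + sprite.shape[1]] = sprite

def erase(fb, sprite, y, x):
    # Without hardware sprites, moving means delete-then-redraw.
    fb[y:y + sprite.shape[0], x:x + sprite.shape[1]] = 0

draw(screen, sprite_right, 5, 2)   # frame 1
erase(screen, sprite_right, 5, 2)  # delete the old sprite...
draw(screen, sprite_right, 5, 3)   # ...and redraw it one pixel to the right
```

With hardware-handled sprites, as the reply notes, only the position register would change and the framebuffer copy would be unnecessary.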
@markonar140 1 month ago
Thanks for this great explanation!!! 👍😁
@TechPrepYT 1 month ago
Glad you found it helpful!
@JimStanfield-zo2pz 4 months ago
Very powerful and concise explanation. Keep up the good work.
@TechPrepYT 2 months ago
Thank you!!
@olliehopnoodle4628 1 month ago
Excellent and well put together. Thank you.
@TechPrepYT 1 month ago
Glad you liked it!
@GigaMarou 18 days ago
Hey, nice video! Is it just that GPUs have more ALUs for each cache and CU? Or are GPU ALUs different in structure? Similarly for CUs and caches?
@lodgechant 2 months ago
Very clear and helpful - thanks!
@TechPrepYT 1 month ago
Thank you!!
@natcuber 2 months ago
How big is the latency difference between a CPU and a GPU, since it's stated that CPUs focus on latency over throughput?
@maxmuster7003 2 months ago
The Intel Core 2 architecture can execute up to 4 integer instructions in parallel on each single core.
@dinhomhm 6 months ago
Very clear, thank you. I subscribed to your channel to see more videos like this.
@TechPrepYT 5 months ago
Thank you!!
@VashdyTV 6 months ago
Beautifully explained. Thank you
@samychihi6317 2 months ago
Which means Intel's fall is not that bad: if the GPU can't fully replace the CPU, then Intel will remain in the computing and personal PC market. Thanks for the explanation
@juanclopgar97 25 days ago
Do you have a video talking about cores? In this example you show a core with 4 ALUs, and I don't quite understand how a single CONTROL UNIT can handle that
@waynestewart1919 2 months ago
Very good. I am subscribing. Thank you.
@TechPrepYT 1 month ago
Thanks for the sub!
@waynestewart1919 1 month ago
You are very welcome. You more than earned it. That may be the best explanation of CPUs and GPUs on YouTube. Please keep it up.
@gfmarshall 2 months ago
Thank you so much 🤯❤️
@TechPrepYT 1 month ago
You're welcome!
@StopWhining491 2 months ago
Excellent explanation. Thanks!
@TechPrepYT 2 months ago
Thank you!
@maquestiauartandmore 1 month ago
Great! Thank you, but if you could slow down a little in your explanation..😅
@TechPrepYT 1 month ago
Yep will try!!
@mohamadalkavmi4932 2 months ago
Very simple and nice
@TechPrepYT 1 month ago
Thank you 😊
@siriusbizniss 2 months ago
Holy cow, I'm ready to be a computer engineer. 👍🏾👌🏾🤓
@chandru_ 1 month ago
Nice explanation
@TechPrepYT 1 month ago
Thanks!!
@ruan13o 2 months ago
From my experience, unless I am running a game, my GPU is typically barely utilised while my CPU is often highly utilised. So when we are not running games (or similarly intensive graphics applications), why don't computers send some of the (non-graphics) processing to the GPU to help out the CPU? Or does it already do this and I just don't realise?
@christophandre 1 month ago
That's already the case in some programs. The main reason you don't see it more often is that a program must be designed to run (sub)tasks on the GPU. This can make a program a lot more complex very quickly, since most programmers don't decide on their own which part of the program is calculated on which part of the computer; that's done by underlying frameworks (in most cases for good reasons).
@Trickey2413 1 month ago
As the video made a point of highlighting, they excel at different things. Forcing a GPU to do a task it is not optimised for would be less efficient, and vice versa.
@abhay626 4 months ago
Helpful, thank you!
@TechPrepYT 3 months ago
Thanks!
@buckyzona 6 months ago
Great!
@PhillyHank 2 months ago
Excellent!
@TechPrepYT 2 months ago
Thank you!
@jozsiolah1435 2 months ago
When you stress the system by forcing a DOS game to play the intro at very high CPU cycles, you also force the system to turn on secret accelerating features that remain on. One is the sound acceleration of the sound device; it offloads the CPU from decompressing audio. The other is the floating point unit, which is off by default; when it is on, some games become harder. Intel has an autopilot for car games, which is also off by default. With the GPU, the secret seems to be the diamond videos I am experimenting with; many games show diamonds as a reward, and they're hard to get. Diamonds stress the VGA card; they're so complex for the video chip to draw. Also, the tuning will drain the battery faster.
@mostlydaniel 2 months ago
2:34 lol, *core* differences
@mrpappa4105 2 months ago
If possible, explain why a GPU cannot replace a CPU. Great vid, but my old vic64 brain (yeah, I'm old) doesn't get this. Anyway, cheers from a new subscriber
@illicitryan 2 months ago
So let me ask a stupid question: why can't they combine the two and get the best of both worlds? Would doing so negate some functions, rendering it useless? Or... lol, just curious 🤔 😊
@riteshdobhal6381 2 months ago
A CPU has few cores, which is why parallel processing is hard on it, but each individual core is extremely powerful. A GPU has thousands of cores, making it good at parallel processing, but each individual core is comparatively not very powerful. You could make a processor which has many cores like a GPU with each core as powerful as a CPU's, but that would cost a huge amount of money.
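The few-strong-cores vs many-simple-lanes tradeoff described above can be roughly felt from Python. This is only an analogy: NumPy's vectorised path runs on the CPU (optimised C loops and SIMD), not a real GPU, but it shows how applying one simple operation across a whole array at once beats stepping through elements one at a time.

```python
import time
import numpy as np

n = 1_000_000
data = list(range(n))
arr = np.arange(n, dtype=np.int64)

# "CPU-style": one capable worker stepping through elements in order.
t0 = time.perf_counter()
serial = [x * 2 + 1 for x in data]
t_serial = time.perf_counter() - t0

# "GPU-style": the same simple operation applied to every element at once.
t0 = time.perf_counter()
vectorised = arr * 2 + 1
t_vector = time.perf_counter() - t0

print(f"element-wise loop: {t_serial:.4f}s, vectorised: {t_vector:.4f}s")
```

On typical hardware the vectorised version is one to two orders of magnitude faster, which is the throughput argument for wide, simple execution units.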
@trevoro.9731 2 months ago
The question is stupid. You can either get high performance on non-parallel tasks with low latency, or high performance on parallel tasks with high latency. If you extend a normal core with GPU features, it will instantly become large, power hungry, and slow for normal tasks, which is why modern desktop CPUs have built-in separate GPU cores. Also, the problem is that you would need a lot of memory channels to make use of that: memory for GPUs is very slow, but has a lot of internal channels.
@zoeynewark9774 2 months ago
Can you combine a school bus with a Formula One car? There, you have your answer.
@boltez6507 2 months ago
APUs do that.
@keithjustinevirgenes7387 2 months ago
What do you think will happen if the heat of the CPU and a separate GPU were combined under just one cooling fan and heatsink? Plus greater voltage needs, resulting in more heat?
@grasz 1 month ago
CISC vs RISC plz
@TechPrepYT 1 month ago
It's on the list!
@grasz 1 month ago
@@TechPrepYT yay~!!!
@lanceorventech6129 2 months ago
What about threads?
@mattslams-windows7918 2 months ago
A thread is simply a logical sequence of instructions that gets mapped to a core by a scheduler (modern systems usually use both hardware and software scheduling these days), whether that's on a GPU or CPU, for execution. The primary difference between GPU and CPU threads is that GPUs usually execute the same "copy" of each instruction across all threads being executed on the device (single instruction, multiple data, aka SIMD), whereas on CPUs the different threads can very easily execute all kinds of different instructions simultaneously (multiple instruction, multiple data, aka MIMD). In addition, more than one thread can be mapped to each core, whether it be a CPU or GPU, and in that case simultaneous multithreading (SMT) hardware is used to execute all the threads at once.
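The SIMD-vs-MIMD distinction above can be sketched in a few lines. The masking trick below is roughly how GPU lanes handle a divergent branch: every lane executes both sides of the branch in lockstep, then a mask selects which result each lane keeps. The arrays and the branch itself are invented for illustration.

```python
import numpy as np

x = np.arange(8)

# MIMD style: each "thread" can take its own branch independently.
mimd = [xi * 2 if xi % 2 == 0 else xi + 100 for xi in x]

# SIMD style: every lane runs BOTH sides of the branch in lockstep,
# then a mask picks the result each lane keeps.
both_a = x * 2           # all lanes run the "even" path
both_b = x + 100         # all lanes run the "odd" path
simd = np.where(x % 2 == 0, both_a, both_b)
```

Both produce the same answer; the SIMD version just pays for the work of both branches, which is why divergent branches are expensive on GPUs.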
@Norman-z3s 2 months ago
What is it about AI that requires intense parallel computation?
@undercover4874 2 months ago
In neural networks it's all about matrix multiplications, and we also want to pass multiple inputs through the network (each of which performs a number of matrix multiplications). With a GPU we can perform the passes for different inputs in parallel instead of doing one input at a time, which speeds up the computation.
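The batching idea above can be shown concretely. The layer sizes and batch size here are made up; the point is that one batched matrix multiplication replaces 64 separate per-input passes, and on a GPU each output element can be computed by a separate lane.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny dense layer: 4 inputs -> 3 outputs.
W = rng.standard_normal((4, 3))
b = rng.standard_normal(3)

def forward_one(x):
    # One input at a time: a (4,) vector times a (4, 3) matrix.
    return x @ W + b

batch = rng.standard_normal((64, 4))   # 64 independent inputs

# Batched: ONE matrix multiplication handles all 64 inputs at once.
out_batched = batch @ W + b

# Same result as looping over the inputs one by one.
out_loop = np.stack([forward_one(x) for x in batch])
```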
@meroslave 2 months ago
A CPU can never be fully replaced by a GPU, so what happened now between Intel and NVIDIA!?
@matteoposi9583 2 months ago
Am I the only one who sees dots in the GPU drawing?
@tysonblake515 1 month ago
No you're not! It's an optical illusion
@avalagum7957 2 months ago
Still not clear to me: what component is the GPU missing so that it cannot replace a CPU? Ah, just checked with Perplexity AI: the instruction set that a GPU accepts is too limited to make a GPU a replacement for a CPU.
@nakkabadz6443 2 months ago
The GPU is like a PhD holder while the CPU is a jack of all trades. Look at the names: GPU is the graphics processing unit, CPU is the central processing unit. The GPU can outperform the CPU's computing power in a singular task like graphics computing or any other given task, while the CPU, though it can't compute as fast as the GPU, can compute different tasks simultaneously.
@trevoro.9731 2 months ago
Most of the things in this video are BS. A GPU can replace a CPU, but it will work many times slower for most tasks while drawing more power. Only on highly parallel tasks is it efficient and fast. Also, it is missing a lot of features for controlling hardware, like proper interrupt management etc.
@avalagum7957 2 months ago
@@trevoro.9731 Why is a GPU slower than a CPU for most tasks?
@trevoro.9731 2 months ago
@@avalagum7957 It is optimized to consume a minimum amount of energy and perform multiple calculations per cycle, but each calculation takes much longer to finish, up to 100 times slower. All that parallelism goes to waste if you don't need to perform the exact same operation on multiple entries. Also, its memory is way slower than that of a CPU (albeit CPU memory is also not very fast; it merely got ~30% faster over the last 20 years) but contains a lot of internal channels, so it is efficient at processing large amounts of data which do not need high performance for each dataset.
@Theawesomeking4444 2 months ago
No, you didn't explain, nor do you understand, anything; all you did was read a Wikipedia page. GPUs don't have "more cores", that's a marketing lie. What they have are very wide SIMD lanes. CPUs have them too, but they are smaller in exchange for bigger caches, higher frequency, and less heat.
@mattslams-windows7918 2 months ago
Honestly this video, despite oversimplifying a bit (likely due to time constraints), isn't entirely incorrect. There are in fact more cores on average in a GPU; it's just that the architecture of each GPU core is completely different from a CPU core. Companies like Nvidia and AMD aren't lying when they talk about their CUDA core / stream processor counts; it's just that each core in a GPU serves somewhat different purposes than each core in a CPU. Also, what you're saying about cache isn't really correct: AMD RDNA 2+ cards have a pretty big pool of last-level Infinity Cache that contributes a significant amount to overall GPU performance.
@Theawesomeking4444 2 months ago
@@mattslams-windows7918 The problem is, as a graphics programmer, if I were learning this again, this video would tell me nothing; it's literally the presentation a high-school student would do for homework: "CPUs do serial, GPUs do parallel". The funny thing is that the main differences are literally shown in the first image he showed, with the number of ALUs (the SIMD lanes; they also have FPUs) of each, yet he didn't explain that because he has no clue what those images mean.
Now, GPUs do have slightly more cores than their CPU counterparts, but it's usually 1.5x-2x higher, not thousands of cores. If you want the correct terminology: a CPU core is equivalent to a streaming multiprocessor in Nvidia, a compute unit in AMD (funny enough, AMD also refers to them as cores in their integrated-graphics specifications), and a core in Apple; a CPU thread is a warp in Nvidia and a wavefront in AMD; a CPU SIMD lane is a CUDA core in Nvidia and a stream processor in AMD.
Now, for the cache thing you mentioned: you are probably using a gaming PC or console for the comparison. Those will usually have 4-8 core CPUs with 16-32 core GPUs, and in games single-core performance matters more (usually because most gamedevs don't know how to multithread, haha). For a fairer comparison, take the Ryzen 9 5900X (12 cores) and the RX 6500 (16 cores), with roughly similar power consumption: L3 cache is 64 MB on the CPU and 16 MB on the GPU, L2 cache is 6 MB on the CPU and 1 MB on the GPU, and L1 cache is 768 KB on the CPU and 128 KB on the GPU. If you get a GPU with a higher core count, you will notice that L3 cache increases a lot but L1 cache stays the same; that's because L3 cache is a shared memory pool for all of the cores within the GPU or CPU, while L2 and L1 caches are local to the core.
Anyway, that was a long reply; hopefully that answered your questions xD.
@Theawesomeking4444 2 months ago
@@mattslams-windows7918 lol, my reply was removed
@JosGeerink 1 month ago
@@Theawesomeking4444 It wasn't?
@Theawesomeking4444 1 month ago
@@JosGeerink Nah, I had another reply where I explained the technical details, but you can't state facts with proof here, unfortunately.
@jlelelr 2 months ago
Can a CPU have something like CUDA?
@mattslams-windows7918 2 months ago
Depends on your definition of "have": if Nvidia makes a GPU driver that supports the CPU in question, then technically one can combine an Nvidia GPU and that CPU in the same computer to run CUDA stuff. But executing GPU CUDA code on the CPU itself is something Nvidia probably doesn't want people to do, since Nvidia likely wants to keep making money on GPU sales, so that will probably not be a thing anytime soon.
@marcopo06 1 month ago
👍
@sauravgupta5289 2 months ago
Since each core is similar to a CPU, can we say that it has multiple CPU units?
@aorusgaming5913 2 months ago
Does this mean that the GPU is just a better version of the CPU, or say a faster version of the CPU which can do many calculations at a time? Then why don't we use two GPUs instead of a CPU and a GPU?
@undercover4874 2 months ago
A GPU only performs better if the task being performed can be parallelized, but the majority of tasks can't be or don't need parallel computation, so they would be slower on a GPU. The main power a GPU gives us is parallelization; if a task can't exploit it, the overhead will make it even slower than on a CPU.
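What "can't be parallelized" means can be shown with a loop-carried dependence. The logistic-map recurrence below is a made-up example: each iteration needs the previous result, so the steps must run one after another no matter how many lanes are available, while applying the same one-step update to many independent starting points is trivially parallel.

```python
# A loop-carried dependence: step i needs the output of step i-1,
# so the iterations cannot run in parallel.
def logistic_map(x, r=3.9, steps=10):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# By contrast, applying the SAME one-step update to many independent
# starting points is embarrassingly parallel (one lane per start).
starts = [0.1, 0.2, 0.3, 0.4]
one_step_each = [3.9 * s * (1 - s) for s in starts]
```

A GPU helps with the second pattern, not the first; a long chain like the first is exactly where a fast single CPU core wins.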
@pear-zq1uj 2 months ago
No, a GPU is like a factory with 100 workers. A CPU is like a medical practice with 4 doctors. Neither can do the other's job.
@trevoro.9731 2 months ago
You are wrong about many things. Modern GPUs aren't actually good at performing operations on each single pixel; they are far behind the CPU on that, but can more efficiently work with large groups of pixels. The modern high-end GPUs have 32-64 cores (top ones like the 4090 have 128 cores); the marketing core counts are a lie. The thread counts are a lie too; the actual number is hundreds of times lower. Those fake "threads" are parallel execution units, not threads; they run the same code over a very large array. Each single core can run 1 or 2 actual threads, so the number of threads for high-end GPUs is usually limited to 128 or so. Only because of repetitive operations are GPUs faster at some tasks; generally they are much slower than even a cheap processor.
2 months ago
I think pretty much everyone knows the difference between a GPU and a CPU; the most useful information here would be *why* the GPU cannot be used as a CPU.
@JuneJulia 1 month ago
Still can't understand why the GPU can do what it does. Bad video.
@Trickey2413 1 month ago
You have a low IQ so you struggle with basic information; it's not your fault. Listen again at 0.5x speed and try to comprehend the essence of what he is saying. Take notes as he lists what each of them does, then highlight the differences between the two. Make sure you make an effort to understand the words he is using; ask yourself "what does he mean when he says this" and try to formulate it in your own words.
@goodlifesavior 1 month ago
Thanks for the foolization; we in Russia don't have enough of our own Russian foolization, so we need to be foolized by American trainers
@THeXDesK 2 months ago
.•*
@johnvcougar 2 months ago
RAM actually stands for "Read And write Memory" … 😉
@pgowans 2 months ago
It doesn't; it's random access memory
@sauceman2924 2 months ago
stupid 😂
@Trickey2413 1 month ago
Imagine trying to correct someone whilst having the IQ of a carrot.
@Mrmask68 5 months ago
nice ⛑⛑ helpful
@TechPrepYT 4 months ago
Thanks!
@mrgran799 2 months ago
In the future maybe we will have only one thing... a CGPU
@nel_tu_ 2 months ago
Central graphics processing unit?
@thebtm 2 months ago
CPU/GPU combo units exist with ARM CPUs.
@a-youtube-user 2 months ago
@@thebtm Also with Intel & AMD's APUs