@stevenmiller5452 · 2 days ago
Nicely presented, and it matches exactly my 40 years of experience in the computer industry, where one of my tasks for a major computer company was to track Moore's law and understand the implications of changes in the rate of progress. It was clear in the early 2000s that Moore's Law was going to start to really slow down, and it sure has. Much of the recent progress in AI computation speed has come from reducing the precision of the math and optimizing the chips for that type of computation. Plus, it's often not stated, but the cost of those systems has also increased dramatically, so if you look at performance-per-dollar growth, it's significant, but not quite as impressive as some have presented it to be. Also, a lot of those optimizations were one-time optimizations, such as going from 32 bits down to four or eight bits per operation. So I think now, looking forward, the rate of progress of computation per dollar and per watt for AI will be much, much slower. As evidence for that, I had also tracked the performance-per-dollar and performance-per-watt advancement rate of GPUs for 32-bit floating-point arithmetic, because we used GPUs as math accelerators. It turns out GPUs were also slowing down in their rate of progress when measured with a consistent method, but their slowdown started later than the CPUs', so the impact of the slowdown of Moore's Law was somewhat delayed for them, for various reasons. So AI also needs a software innovation to make it much more computationally efficient. Whether we get that or not is anyone's guess, but it has certainly made watching the advancement in computing more interesting again.
@cyberpunkalphamale · 1 day ago
thanks
@964tractorboy · 4 days ago
I'm glad I stuck around. This lady knows her onions. The numbers are metaphorically out of this world. I wonder how long Intel can maintain their current position in the top three fabs. Great lecture, Thanks!
@SavageBits · 19 hours ago
Surprised there was not a single slide discussing memory latency and how we have been relying on the same single bit capacitor memory cell architecture since the 8080.
@ChrisJackson-js8rd · 3 days ago
"Amdahl's Law is an actual law, with an equation." Well said.
@robfielding8566 · 1 day ago
But there is Gustafson's law, which is the same equation rearranged: you need to increase the program size as you scale up.
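Neither equation is written out in the thread; a minimal Python sketch of both laws, where the parallel fraction p = 0.95 and core count n = 1024 are illustrative values rather than figures from the talk:

```python
def amdahl_speedup(p, n):
    # Amdahl: fixed problem size; the serial fraction (1 - p)
    # caps the speedup at 1 / (1 - p) no matter how large n gets.
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    # Gustafson: the problem grows with the machine; the scaled
    # speedup is linear in n, so more cores keep paying off.
    return (1.0 - p) + p * n

p, n = 0.95, 1024
print(round(amdahl_speedup(p, n), 1))     # 19.6, near the 1/(1-0.95) = 20 ceiling
print(round(gustafson_speedup(p, n), 1))  # ~972.9
```

The two curves use the same terms; the difference is only whether the workload is held fixed or scaled with the machine.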
@alexandrtortik · 12 hours ago
@@robfielding8566 Don't confuse fluffy with warm... they are of course related to each other, but not directly... The ability to write for a distributed architecture is a very important aspect...
@rodolfonetto118 · 1 day ago
She doesn't play around! Nice. Thanks for the lecture!!
@AdrianLopez-sc6zw · 1 day ago
@@rodolfonetto118 Well, that's the standard when you're one of the creators of the ARM processor...
@odopaisen2998 · 2 days ago
A wonderful presentation: interesting, and it made me giggle a lot.
@GaborGyebnar · 3 days ago
Wait, you need a MEGAWATT laser, turned into a rainbow, just to get 20 watts of 13.5 nm EUV light? Okay, I probably don't really need that new phone.
@JasminUwU · 2 days ago
@@GaborGyebnar And EUV gets absorbed by basically everything, so you need very special mirrors (Bragg reflectors) to project it onto the wafer. And even then, they are only about 60% reflective, which means that less than 1% of the EUV light reaches the wafer after the ~10 reflections in the projection system.
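The percentage is easy to check; a one-liner, taking the ~60% per-mirror reflectivity and ten reflections from the comment above as given:

```python
reflectivity = 0.60   # per multilayer (Bragg) mirror, figure from the comment
mirrors = 10          # reflections between source and wafer, also from the comment

throughput = reflectivity ** mirrors
print(f"{throughput:.2%}")  # 0.60% of the EUV light survives all ten bounces
```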
@cuda_weekly · 1 day ago
You actually get >100 W out, but yeah, really low efficiency.
@TheSyntheticSnake · 20 hours ago
@@cuda_weekly There are some new papers on a potential design that reduces the number of mirrors by a decent bit, IIRC.
@qijia4769 · 20 hours ago
If you use chips made the old way, you will use much more power over the lifetime of your system.
@treyquattro · 1 day ago
Brilliant talk. I learned quite a bit! (No pun intended.)
@scottstensland · 1 day ago
Molecular self-assembly, guided by genetic and epigenetic instructions, orchestrates the formation of precise nanostructures essential for life at the sub-micron scale... well smaller than EUV lithography... let's do it.
@AbAb-th5qe · 1 day ago
I believe Intel has already done quite a lot of research into nanotechnology. But I agree, that's what the protein-manufacturing infrastructure in biological cells already does.
@vitalyl1327 · 23 hours ago
Getting close to it: amino acids are on the 1 nm scale.
@ianmarlow6350 · 1 day ago
Excellent!
@AbeDillon · 2 minutes ago
19:38 I think Sophie Wilson is a treasure and an inspiration; however, I think the "revolution in software" has been, and continues to be, deep learning. It just doesn't look the way anyone expected it to. There are problems that are tractable by traditional hand programming, like searching and sorting data sets or traversing graphs, but those problems have largely been solved for some time. Other problems, like protein folding or annotating photographs, aren't tractable by traditional methods; they're more tractable by stochastic/machine-learning-based methods. If you asked a team of programmers to hand-code software that takes a text prompt and generates a corresponding image, I doubt they'd know where to even start. I don't think that's a matter of inadequate software tools. Deep learning largely circumvents Amdahl's law. It's not 95% parallel, it's essentially arbitrarily parallel: 99.99...%. The main problem now is that deep learning doesn't map very efficiently to a von Neumann architecture. In that regard, the ball is back in hardware's court. By my estimation, the folks at Cerebras are making the most progress on that problem.
@SteveLFBO · 2 hours ago
I really really wanted to do digital electronics when I was 17 (in 1976) but I got caught up in programming and was irrevocably seduced. Seeing this talk, I am at last happy about my career choice ;P
@AbAb-th5qe · 1 day ago
What problems is all this leading-edge computing capability intended to solve, anyhow? Weather simulation? Chatbots? Maybe something like PyTorch can be used to develop software for such systems, but in using it I can't help but feel we still need new paradigms to make better use of what's available, even in the present.
@Rockyzach88 · 1 day ago
I've always been interested in the idea of different types of computational machines using natural processes we find in nature that are essentially optimized by evolution. I suppose it would be adjacent or similar to an analog computer. The name right now is "natural computing"; however, natural computing can also describe the mimicry of natural processes in computers, and also using computers to synthesize natural processes (the reverse).
@amigalemming · 22 hours ago
We need stagnation of CPU development for one or two decades in order to make it worthwhile for programmers to write more efficient code.
@frankfahrenheit9537 · 2 days ago
How do you get the power into that wafer-scale thingy? 23 kW means 23,000 amps at a 1 V core supply. Having the size of a heating plate but consuming 10x more power than my water boiler, it could get me 4 cups of tea within 15 seconds. And how do you get the data in and out?
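The tea arithmetic above roughly checks out; a back-of-envelope sketch in Python, where the 250 ml cup size and 20 °C starting temperature are my assumptions, not the commenter's:

```python
power_w = 23_000      # wafer-scale dissipation from the comment: 23 kW at ~1 V
cups = 4
mass_kg = cups * 0.25  # assume 250 ml (0.25 kg) of water per cup
c_water = 4186         # specific heat of water, J/(kg*K)
delta_t = 100 - 20     # assume heating from 20 C to boiling

energy_j = mass_kg * c_water * delta_t  # ~335 kJ to boil the kettle
seconds = energy_j / power_w
print(round(seconds, 1))                # ~14.6 s, close to the claimed 15 seconds
```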
@petergerdes1094 · 21 hours ago
"Ever more exotic ... ways of making a transistor take up less space." Seems reasonable to describe that as "getting smaller" (in area, at least, even if not in volume).
@securityinteractive · 2 days ago
Why is the parallel software not being created? Is this a library/pipeline issue within C++ or lower-level languages?
@TheEVEInspiration · 2 days ago
It is being created all the time, but there is only so much to extract for each type of software. Anything that requires random branching will run into a wall. And even if, at a larger scale, things can be broken up so multiple cores can be used in parallel, splitting up the work and then recombining the results requires large electrical distances to be traveled, and the tasks require precise coordination. This is not free, which limits the number of situations where it works well. And then there is cost: writing more complex software to make use of multiple cores or other forms of parallelism is not just more costly, it is also much more likely to contain errors and, to make matters worse, far harder to test for those errors. The work has to be done by much more capable people because of all this, which limits availability. So much of the time it is just not worth the effort. But where the economic/financial stakes are high enough, the effort certainly is made!
@photovincent · 2 days ago
@@securityinteractive In part, but mostly because only some problems are inherently parallel, and most aren't. Graphics processing is (give each core part of the image to work on), and physics simulations are, which is why the world's fastest supercomputers (massively parallel, but also special architectures) are working on simulating nuclear explosions. Everything else: very hard to get above that 95% parallelization on the Amdahl graph shown...
@bedngrs · 2 days ago
Mostly a trickle-down effect from hardware manufacturers not opening up their ISAs, leading to software abstractions to fix complex incompatibilities across devices, leading to blocking, single-thing-at-a-time processes. If we could have an open GPU instruction set, we could quickly rewrite things for more efficiency and parallelism.
@EvincarOfAutumn · 5 hours ago
A majority of programmers are used to languages that are sequential by nature, including C++. Libraries can make it easier for programmers to exploit whatever parallelism is available in their code, but what we really need is more structured parallelism and concurrency in the code in the first place. Practically, that means making it easier to use different language paradigms, such as array programming, functional programming, and logic programming, to get programmers to change their thinking and solve problems in ways that fundamentally make better use of the hardware.
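A toy illustration of the paradigm point above. The per-pixel `brighten` function is hypothetical; the idea is that expressing work as a map over independent elements, rather than a stateful loop, leaves the runtime free to parallelize:

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(pixel):
    # A pure, independent per-element operation: trivially parallel.
    return min(pixel + 40, 255)

pixels = list(range(0, 200, 10))

# Sequential, loop-oriented thinking:
out_seq = [brighten(p) for p in pixels]

# The same computation as a data-parallel map; the independent
# calls can be spread across workers without changing the result.
with ThreadPoolExecutor(max_workers=4) as pool:
    out_par = list(pool.map(brighten, pixels))

print(out_seq == out_par)  # True
```

The structural point is that the map form carries no loop-carried state, so nothing in it forces sequential execution.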
@Pablo-Herrero · 2 days ago
I'm kind of confused about how Amdahl's law applies to GPUs, since acceleration using them seems to be well above 20x and still scaling...
@Dystopikachu · 1 day ago
GPUs have traditionally relied on many small operations run in parallel rather than longer sequential programs, and you can do more than 95% of the operations in a graphics pipeline in parallel. I think it's a bit more complicated than that in recent times, but that's the gist of it.
@procedupixel213 · 1 day ago
We kept increasing the number of pixels per second: higher resolutions, then higher frame rates. And currently we keep increasing the computational effort per pixel: more geometric detail, larger game worlds, more lighting effects, more accurate lighting, and finally physically accurate global illumination. But it still doesn't beat what my imagination produced from the block graphics of the 8-Bit age. 🙂
@blurandomnumber · 1 day ago
GPUs exploit another real law, the Gustafson-Barsis law, which basically states that for a task with sufficiently embarrassing levels of parallelism, you can bump up the parallel part of the workload to get even more work done in the same amount of time.
@amigalemming · 22 hours ago
These high acceleration factors often stem from comparing code optimized for months for a specific GPU with a lazy implementation on the CPU. Those CPU implementations often lack cache and vector optimizations.
@petergerdes1094 · 21 hours ago
She did explicitly call out ray tracing as one of the tasks it doesn't apply to, because it's essentially 100% parallel. That's not all of graphics, obviously, but many of the algorithms aren't too far off in per-pixel and per-frame scaling.
@cs233 · 1 day ago
You said that you couldn't put that wafer-scale processor in a smartphone. Of course you can! You just need a rather large smartphone (like 400 mm on a side), a really good battery that can discharge all its power in a one-minute battery life, and some oven mitts to hold the phone (which might make typing a bit difficult, so the large size will be a plus), and be sure not to set it down on anything flammable till it cools off, or you might burn the place down! 😁😁 And you thought it wouldn't work! Just need more out-of-the-box thinking!
@warpspeedscp · 1 day ago
@@cs233 don't forget the heat it produces is liable to ensure it never runs again!
@afterthesmash · 6 hours ago
If it's bigger than a Motorola DynaTAC, it's a field station, not a phone.
@duncan94019 · 21 hours ago
I used the 6502 and wrote assembly code for it. As best I recall, the 6502 had pipelining, one of the very nice features that meant it was faster than the Motorola 6800 and much, much faster than the Intel 8080.
@sarunas8002 · 1 day ago
95% of the time I'm using 5% of my 16-thread Ryzen processor. The processor is 3 years old. But really, why do I need a 5 GHz multi-core to do spreadsheets?
@hamesparde9888 · 9 hours ago
Single-threaded, non-SIMD performance has definitely increased since 2006! Yes, the rate of increase is a lot lower, but it's still increasing. IPC has been increasing, and even clock rates have. In 2006 the fastest Intel CPU (in terms of clock rate) ran at 3.8 GHz (I believe). Now some new Intel chips can go as high as about 6 GHz.
@hamesparde9888 · 9 hours ago
I'm accounting for using multiple threads here.
@stevenmiller5452 · 6 hours ago
Yes, there still is advancement, but it's just much slower, and that rate continues to decrease. 3.8 GHz to 6 GHz in 18 years is less than 2x... compared to 1981 (5 MHz) to 18 years later in 1999 (800 MHz), which is 160x. And in that time it went from a 16-bit processor to a 32-bit processor, from several clock cycles per instruction to more than one instruction per clock cycle, and from no floating-point hardware to abundant floating-point hardware, so just huge advancements across all aspects. When you think about it, it is really dramatic how much Moore's law has slowed down for CPUs.
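The two growth ratios quoted above can be verified in a couple of lines; the dates and clock rates are taken from this comment:

```python
early = 800e6 / 5e6   # 1981 (5 MHz) -> 1999 (800 MHz), 18 years
late = 6e9 / 3.8e9    # 2006 (3.8 GHz) -> ~2024 (~6 GHz), 18 years

print(early)                      # 160.0x, as stated
print(round(late, 2))             # 1.58x, i.e. "less than 2x"

# The same windows expressed as compound annual growth rates:
print(round(early ** (1/18), 3))  # ~1.326: clocks once grew ~33% per year
print(round(late ** (1/18), 3))   # ~1.026: now ~2.6% per year
```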
@hamesparde9888 · 6 hours ago
@stevenmiller5452 Yeah, I know. I was just pointing out that CPUs are still getting faster (even if at a much slower rate), because the speaker was acting as if there had been basically no improvement since '06.
@hamesparde9888 · 5 hours ago
@stevenmiller5452 I don't think the stuff you mentioned about 16 to 32 bits or FPUs is really relevant. In terms of FPUs, you could say that we have massive GPUs, and as for 16 to 32 bits, we haven't gone past 64 because there isn't much need to.
@0netom · 3 hours ago
@@hamesparde9888 But that rate of improvement is slow enough not to be worth upgrading for, so from a practical/economic perspective, people are stuck on machines with CPUs that are several years old. E.g., I just spent 130 USD on replacing the keyboard in a 2016 Intel MacBook Pro, because I use it rarely enough not to buy a new one, because I spend most of my time on an M1 or M2 Pro Mac mini, which are also several years old...
@hamesparde9888 · 9 hours ago
Where did the 10,000x number come from? Just clock speed? My AMD 5900X is probably about 1000x faster than an old 600 MHz Celeron I have (going by how fast it compiles the same program), and that's probably only a bit over 20 years' difference (although the Celeron probably wasn't that fast when it came out). I think the number is probably closer to 100,000, at a minimum.
@jonlaban427 · 23 hours ago
Not carbon emissions neutral 🤔 Check out the Scope 3 (embodied) GHG emissions for Meta @99%
@luminousfractal420 · 1 day ago
needs more fractal
@movax20h · 1 hour ago
Sadly there is nothing about the actual future or novel designs like VLIW, backside power delivery, or interconnect. Just a bunch of history and short-term roadmaps. And, as usual, Amdahl's law is not fully understood (it is often misused, even by professionals).
@Rockyzach88 · 1 day ago
The mushine.
@gustinian · 1 day ago
So it'll be back to analog computing for AI then. Or maybe brains-in-jars style biological computers...
@gsestream · 1 day ago
No, the future is a constant, fully pipelined data-flow process on micro-FPGA gate-logic circuits (a fully pipelined micro-FPGA network), compiling OpenCL C99 to gate code. All gates (micro-FPGA math operations) run all the time at full capacity.
@badscrew4023 · 22 hours ago
I should probably go and invest in a goat farm. This tech thing doesn't look like it has any future.
@rolz71 · 1 day ago
The subject is the past and current state of microprocessors, not so much the future. It offers no guidance as to how that will affect the next generations, beyond an implied "don't trust predictions" vibe. Not exactly an insightful peek into the future. I did enjoy the "kicking Pat" joke. They were definitely taking the piss out of Intel, regardless of the disclaimer.
@Rockyzach88 · 1 day ago
It's a heuristic that ain't working out so well anymore.
@815TypeSirius · 2 days ago
We solved Amdahl's law with recursive functions. FYI.
@crabapple1974 · 2 days ago
@@815TypeSirius How do you mean? From what I remember from my CS studies 25 years ago, recursive functions can always be rewritten as equivalent non-recursive, iterative functions.
@TheBoing2001 · 2 days ago
That's incorrect. A law is a statement of fact, not something "solvable".
@costadev8970 · 1 day ago
Your comment makes no sense.
@badscrew4023 · 21 hours ago
Where's your research paper?
@mk71b · 2 days ago
Very interesting, informative talk. Only those last 30 seconds stained it with tiresome climate-scare propaganda that is already starting to get old.
@frankfahrenheit9537 · 2 days ago
LA fires are a hoax ?
@jeffwads · 1 day ago
Yes, watching this from the Deep South, where we just had the coldest day in recorded history at 4 degrees F. Also loved how she stuck the "pandemic" into that first slide. Reminds me of the ancient acid-rain stuff.
@JonathanGerard-pe8yj · 1 day ago
Er, about the climate part, unless you've been following along: 1) Deadly floods and landslides have forced about 12 million people from their homes in India, Nepal and Bangladesh, since the region's monsoon rains are being intensified by rising sea-surface temperatures in South Asia. 2) The 2020 Australia fires burned through more than 10 million hectares, killed at least 28 people, razed entire communities to the ground, took the homes of thousands of families, and left millions of people affected by a hazardous smoke haze. 3) In March 2019, Cyclone Kenneth swept through northern Mozambique, hitting areas where no tropical cyclone had been observed since the satellite era began. 4) Between 2006 and 2016, the rate of global sea-level rise was 2.5 times faster than it was for almost all of the 20th century. And the list goes on. This is an interesting quote I found recently: "Sometimes people don't want to hear the truth, because they don't want their illusions destroyed" - Friedrich Nietzsche
@guyfromtheinternet1432 · 1 day ago
@@jeffwads Let me have a guess, you're homeschooled and voted for Trump?
@Maclabhruinn · 1 day ago
Sophie Wilson, a graduate of Cambridge University, designed (among other things) the ARM CPU, the most widely used processor in the world today. She is a Fellow of the Royal Society, Fellow of the Royal Academy of Engineering and a Distinguished Fellow of the British Computer Society. She is a smart lady. If she thinks there's something to this Climate Change stuff, I'd be inclined to take notice of her opinion.