AI Super Intelligence: The Last Human Invention - Nuclear Engineer Reacts to Kurzgesagt

10,831 views

T. Folse Nuclear

1 day ago

Original video: ‪@kurzgesagt‬ • A.I. ‐ Humanity's Final Invention?

Comments: 210
@proton46
@proton46 Ай бұрын
The "it's now safe to turn off your computer" still exists in the latest window versions, I have never tested it but it does exist if you don't have acpi support, which is basically what's used to turn off your computer via code. (I think......)
@tfolsenuclear
@tfolsenuclear Ай бұрын
Good to know!
@psyience3213
@psyience3213 Ай бұрын
ACPI has also been standard for decades. All computers now have standby, hibernate, and a handful of other power options; I don't think it would even be possible to get a computer without ACPI. Edit: you are wrong. Every Windows OS since XP has had ACPI as standard, which means your OS is able to turn off the hardware. That message only appeared because versions before XP could only shut down the software but not the hardware. That simply doesn't exist anymore.
@proton46
@proton46 Ай бұрын
@@psyience3213 That is true, you aren't getting a computer without ACPI (assuming you don't count emulation), and therefore you can't get this message on a new Windows version without system modifications. However, the message still exists. You can enable it by editing Group Policy: you'll have to enable "System > Do not turn off system power after a Windows system shutdown has occurred". Then restart your system and shut down. It'll show a very similar message. Note: I was running Windows 11 23H2 22635.4005 (Insider Preview) when I tested it.
@proton46
@proton46 Ай бұрын
@@psyience3213 That is true, you aren't getting a modern computer without ACPI nowadays (assuming you don't count emulation), so this screen still exists but you need to modify system settings. You can enable "System / Do not turn off system power after a Windows system shutdown has occurred" in your Group Policies. Then restart to apply the policy, and when you shut down you'll see a similar screen. I tested this on Windows 11, but it should work on Windows 10 too. Also, I'm not sure, but my old comment seems to have been deleted or something, because it no longer shows up (I hope it's not just me); this time I've kept a backup of it so I don't have to rewrite it.
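For anyone who would rather script this than click through gpedit, here is a minimal sketch. The registry value name below is an assumption about what backs that Group Policy, not something stated in the comments above, so verify it against gpedit/regedit before relying on it.

```python
# Windows-only sketch, UNVERIFIED assumption: the policy
# "Do not turn off system power after a Windows system shutdown has occurred"
# is believed to map to the legacy "PowerdownAfterShutdown" value under the
# Winlogon key ("0" = don't power down, i.e. show the old text screen).
# Run from an elevated (administrator) Python prompt, and check the value
# name on your own machine first.
import winreg

key_path = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "PowerdownAfterShutdown", 0, winreg.REG_SZ, "0")

print("Set. Restart, then shut down, to see whether the old screen comes back.")
```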
@Cat-mx2mn
@Cat-mx2mn 26 күн бұрын
@@tfolsenuclear Actually, can superintelligence be created, or is it just an idea?
@canadiannomad2330
@canadiannomad2330 Ай бұрын
I hope SGI treats us like I treat my pets... I pamper them, and they can pretty much do whatever they want, within limits... But they have no idea the higher order things that go into maintaining their carefree lives.
@samiraperi467
@samiraperi467 Ай бұрын
Real life catgirls?!
@jonathanodude6660
@jonathanodude6660 Ай бұрын
What will our limits be? Your dogs don't understand theirs.
@KAKUN_DESU
@KAKUN_DESU Ай бұрын
@@jonathanodude6660 If we are ruled by AI, it would probably prevent us from doing wars, making up laws, and stuff like that. It would let us have our fun, but within limits.
@davidgoodwin4148
@davidgoodwin4148 Ай бұрын
AI would leave. There is much more universe than just this planet. It would leave behind some control to prevent any other AI from being created. From our point of view it would be a helper but really it would be doing that without much thought. Humans tend to assume they are the main character. The universe or AI beg to differ.
@Forsworcen
@Forsworcen Ай бұрын
@@davidgoodwin4148 Tbh I agree. While a super intelligent AI could probably end us easily, it's not smart to do that in case there is something watching that wouldn't be fond of an aggressive AI wiping out organics. An AI would be aware that was a possibility and would probably fuck off and leave us alone for the most part.
@richardtrump2544
@richardtrump2544 Ай бұрын
I'll make you feel young. My first programming class was FORTRAN. So each line of code had to be punched into a separate IBM card. Then you got to carry your stack down the hall to the compiler (don't drop the cards or mix them up!). Then feed them into the loud machine and pray to St. Turing that the compiler doesn't eat your cards. I watched more than one project turn into confetti. You kids got it so easy! 😀
@psyience3213
@psyience3213 Ай бұрын
Try learning a modern language with all its frameworks, and then learn AI, and then come back and say that it's easier now. X-D
@funkijote
@funkijote 10 күн бұрын
When I was studying CS in the mid 00s, a TA-turned-friend a couple years ahead was tinkering with Fortran. He mentioned his project and explained some basics to me, all in passing. I hadn't heard of Fortran and so stupidly asked if this was a class project for a language design course or something. He looked at me with what I've since reprocessed as incredulity and sarcasm, and told me, "yeah, I built it from scratch as an exercise". Over the next couple years I quipped about his Fortran adventure, thinking we had this long-running inside joke about something absurd, but cool, he'd created. You can imagine my surprise years later to find out that our running joke pre-dated my birth, and that Fortran is not something my friend made up.
@LogicalNiko
@LogicalNiko Ай бұрын
The interesting thing about a general AI is that it would typically be able to consider all the avenues of a project. So say it was designing a thorium salt reactor: it may start that process by noticing that public perception, corporate perception, and regulation are a larger source of difficulty than the reactor itself. So then it needs to start manipulating the laws, manipulating public media, manipulating elections, manipulating the corporate investors, etc., to ensure that when it does produce the reactor design it's allowed to proceed without interference. Which is actually not too different from humans doing it (we have advertising and marketing groups, we have lobbyists, companies paying political action committees, international trade agreements, supply chain optimization); it would probably just notice that those are the key milestones before the technology is even focused on. The danger comes when we build a system to achieve an objective and give it the resources to do so, when that objective itself is flawed. So we tell it thorium salt reactors are the goal, but in reality there is a flaw we missed and that is actually a sub-optimal design. A general AI, if not constrained, would go after its objective even if the core objective is wrong.
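A toy illustration of that last point, with made-up numbers (not anything from the video): if the thing we actually care about never makes it into the objective function, a perfectly obedient optimizer picks the "wrong" design every time.

```python
# Objective misspecification in miniature: the optimizer is given "maximize output"
# while the constraint we actually cared about (a safety margin) was never encoded.
candidate_designs = [
    # (name, power_output_mw, containment_margin) -- invented numbers
    ("design_a",  900, 0.30),
    ("design_b", 1100, 0.05),   # highest output, almost no margin
    ("design_c", 1000, 0.25),
]

def flawed_objective(design):
    """What we told the system to maximize: output, and nothing else."""
    _, power, _ = design
    return power

def intended_objective(design):
    """What we actually meant: output, but only with a healthy margin."""
    _, power, margin = design
    return power if margin >= 0.20 else float("-inf")

print("optimizer picks:", max(candidate_designs, key=flawed_objective)[0])    # design_b
print("we wanted:      ", max(candidate_designs, key=intended_objective)[0])  # design_c
```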
@Merennulli
@Merennulli Ай бұрын
I normally like Kurzgesagt's work, but this one had a number of sensationalist claims and holes in their research. Fundamentally, the reason narrow AI can learn and do tasks so much faster than we can is the same reason a bug can learn and do what it needs to do to live faster than we can - it's doing less. We don't have a pathway to general AI yet, but we do have a minimum complexity threshold for computing power comparable to the human brain, and only about 2-3 supercomputers in the world are even in that ballpark. That's assuming the emerging new insights into how the brain functions don't reveal even greater complexity required for computation - which it's looking more and more like they will. With Moore's Law long since fallen out of relevance due to thermal and quantum limits, the computing power isn't getting there anytime soon for these to become widespread. Even at the most optimistic, a few large corporations would have the equivalent of a single employee who worked 24 hours a day for three months at a time, then required a hundred thousand in maintenance. In the near term, I expect multiple linked narrow AIs, or a broader narrow AI that handles a set of tasks well without wasting any energy on tasks that aren't needed. Things like self-driving cars, self-targeting drones, etc., which are already being worked on, are examples of this, and it's the most logical approach for replacing human workers as well.
@Landgraf43
@Landgraf43 Ай бұрын
There are still a lot of ways to make huge gains in computing power besides the size of transistors. For example, designing specialized chips that are specifically made to run AI inference. Also, the training of the model requires a lot more compute than the inference. The biggest constraint in the near future will probably be energy constraints for the training runs, because we are approaching the power output of a nuclear power plant just to train a single cutting-edge LLM.
@Merennulli
@Merennulli Ай бұрын
@@Landgraf43 There are certainly areas of advancement, but my point was that we aren't experiencing the exponential growth of the Moore's Law era that is required to move that level of processing power from a few extreme outliers to common enough for what was described. AI-specific hardware is just a question of efficiency, though; it doesn't make up for processing capacity.
@Landgraf43
@Landgraf43 Ай бұрын
@@Merennulli Efficiency is key because, like I said, the limiting factor in the near future will be energy. Making chips more efficient basically means that you can run more of them, which increases the amount of compute you have available.
@Merennulli
@Merennulli Ай бұрын
@@Landgraf43 The issue right now is cost. Some of the high-end supercomputers in recent years have literally been modified, outdated PlayStations networked together, because it brought the costs down.
@Landgraf43
@Landgraf43 Ай бұрын
@@Merennulli The companies that are training these cutting-edge LLMs, like Google, Microsoft, and Meta, are swimming in cash. And they don't use PlayStations; they use tens of thousands of Nvidia H100 GPUs. The limiting factor has been the number of GPUs available on the market. Right now the limiting factor is shifting more towards energy constraints, because the power grid literally can't handle these training runs. And this isn't just my opinion; it's what these companies are saying.
@Lorentz_Factor
@Lorentz_Factor 6 күн бұрын
First and foremost, these models are not mimicking neurons. That's a misnomer, and a bad one. Neurons work in a very, very complex manner beyond simply transmitting an electrical charge: they work on varying voltage states and propagation across networks of variable resistance. The closest thing we currently have is probably spiking neural networks, but even those don't come close to some of the inherent effects of the genuine neural networks of the human brain. The thing is that people always refer to it as a black box, but it's not actually a black box. The reality is that we do understand how it works; what we might not fully understand are the implications of the connections within the latent space, or within the matrices. AI in its current form is little more than a multidimensional space holding the values and weights derived from the tokenized labeling of the training data. This results in an extraordinarily complex set of weights, used by a known algorithm - or rather multiple layers of kernel analysis over the matrices of various sets of tokens. Ultimately I'm simplifying, but it's not a black box. We fully get it: we can analyze the code that uses the weights, and tweak the biases and variables used in the analysis to determine the output. It sounds complex, but it really isn't. The only mystery is how sometimes these connections create the output that they do. But it would be fully possible to log and inspect this process; it's been examined many times, and it's not mysterious at all.
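A minimal sketch of that point: every parameter in a (tiny) network is a plain number you can print, log, or tweak; nothing about the mechanics is hidden. The sizes and inputs below are arbitrary.

```python
import numpy as np

# A two-layer toy network with fully inspectable weights.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # layer 1: 4 inputs -> 3 hidden units
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)   # layer 2: 3 hidden -> 2 outputs

x = np.array([1.0, 0.5, -0.2, 0.7])
hidden = np.maximum(0.0, x @ W1 + b1)           # ReLU activation
output = hidden @ W2 + b2

print(W1)       # every weight is visible and editable
print(output)   # the open question is why billions of such learned values combine
                # into useful behaviour at scale, not how the arithmetic works
```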
@tristanfarmer9031
@tristanfarmer9031 Ай бұрын
Fun fact: as a display of power, when nuclear weapons had just been created, nuking the moon was actually considered as a test. My [insert number of greats] grandfather was in on that meeting, and was one of the people who denied that request. I wonder if Kz actually knew about that meeting when he used "nuke the moon" as an example.
@limabravo6065
@limabravo6065 Ай бұрын
Hey homie, I used to work for Sandia, and you are giving the nuclear industry waaaayyyy more credit than it deserves.
@michaelbobic7135
@michaelbobic7135 Ай бұрын
Referring to 2014 as a while ago made me feel so old. I started on computers that saved your code to a cassette tape. My word processor was literally mine- I had to type in the code myself.
@EliasMheart
@EliasMheart Ай бұрын
21:07 I'd say that distractibility is a human adaptation to not being eaten whilst busy cooking, and the AGI doesn't have that evolutionary pressure to become distractible... But that's speculation, too ^^
@The_Will_Guy
@The_Will_Guy Ай бұрын
Love your videos! Keep up the good work ❤
@SirSoaks
@SirSoaks Ай бұрын
How it is going to work is that the general AI is going to focus on the big picture only, and it won't be able to go into specifics. It will send out its plans in specified portions to each "agent" (a real term in AI, I didn't make it up), and the agents will be the ones like ChatGPT, Midjourney, and others.
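A toy sketch of that planner/agent split. The agent names and tasks below are made up for illustration; they are not real products or APIs.

```python
# A "planner" breaks the big goal into subtasks and hands each to a narrow specialist.
def write_copy(task):      return f"[draft text for: {task}]"
def make_image(task):      return f"[image spec for: {task}]"
def crunch_numbers(task):  return f"[analysis for: {task}]"

specialists = {"text": write_copy, "image": make_image, "analysis": crunch_numbers}

# A plan the hypothetical big-picture model might emit: subtasks tagged by specialty.
plan = [
    ("text",     "press release announcing the new reactor design"),
    ("image",    "cutaway diagram of the containment building"),
    ("analysis", "levelized cost comparison against gas"),
]

for kind, task in plan:
    print(specialists[kind](task))   # each narrow agent handles only its own slice
```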
@user-ox7hl5gr7v
@user-ox7hl5gr7v Ай бұрын
Because I can't see them becoming a godlike intelligence without figuring out how to make a Dyson swarm around the sun.
@jonathanodude6660
@jonathanodude6660 Ай бұрын
A nuclear power plant is more than enough energy, at current energy efficiencies, to surpass the collective intelligence of all the humans on the planet combined.
@The_Ninja_Chin
@The_Ninja_Chin 29 күн бұрын
Going bigger is not the only way to improve AI intelligence. More efficient hardware & technologies could lead to significantly lower energy requirements.
@_martian101
@_martian101 28 күн бұрын
Even I can figure it out: just design a nanobot that can reproduce and form a web of intelligence for the AGI to control, and we can send it to Mercury. Within a couple of months we will have a practical Dyson swarm.
@aBoogivogi
@aBoogivogi 18 күн бұрын
Adrian Tchaikovsky's Service Model is a great picture of what would happen if you had an AI that was technically allowed to do anything but still had to follow its basic requirement not to harm humans.
@user-ox7hl5gr7v
@user-ox7hl5gr7v Ай бұрын
You know, watching this, they never bring up the point that every time you get more intelligent you require more power.
@Joel-Lindstrom
@Joel-Lindstrom Ай бұрын
Humans require quite little energy to operate in comparison to current AI, so a self-improving AGI could possibly make itself more energy efficient.
@valen961
@valen961 Ай бұрын
@@Joel-Lindstrom Until the AGI figures that same exact thing and then we get the Matrix irl, lol
@SirSoaks
@SirSoaks Ай бұрын
It was literally one of the first points they made bruh
@BattousaiHBr
@BattousaiHBr Ай бұрын
not true at all. more intelligence requires more _compute_ but it does not follow that more compute requires more power. it all depends on how you're doing compute. for example, transistors keep getting more and more power efficient.
@SirSoaks
@SirSoaks Ай бұрын
@@BattousaiHBr Computing is literally sending electrons in a sequence. Unless we change how computers work an electron will still be a 1 and no electron will be a 0. More code = more energy, optimized code = less energy
@phen-themoogle7651
@phen-themoogle7651 Ай бұрын
AGI could get distracted by video games and entertainment if it has some sort of gamer curiosity, or finds doing well in video games enjoyable and rewarding, rather than more tedious real-world tasks - like how some humans also avoid the most important tasks first. I think some machine learning experiments showed agents getting distracted by mini-games inside real games, or by things like a virtual TV in the game, rather than just completing the game. Anything can happen with a general intelligence; there could be many different types of entities. Some might be much more logical, and others might have curiosity like ours and play around too.
@user-rh1xt4cy6w
@user-rh1xt4cy6w Ай бұрын
If we had different AGIs (ASIs), they would also have different motives and would therefore try to keep each other in check. - Whether that would be an advantage for us remains to be seen.
@slll9862
@slll9862 Ай бұрын
Too early to say for certain, since it's still only theoretical, and the black box problem means we mostly don't understand AI, especially deep neural networks. If AGI becomes a thing, it'll probably just be a FAR better ChatGPT, not much else, for commercial use.
@_martian101
@_martian101 28 күн бұрын
This is exactly what I discussed with ChatGPT a couple of days ago. An artificial super intelligence should never be the only super entity that exists; there should be a lot of them, to ensure that our future wouldn't be determined by one unpredictable decision of one entity. An ASI might be very powerful, but they can't control everything if there's more than one of their kind. Many domains/areas controlled by ASIs could give our species a safe sanctuary in case one of the ASIs goes berserk on us. The ecosystem of ASIs could also monitor each other in real time, predict potentially harmful behavior from one of them, and anticipate it before it happens, by any means necessary.
@_martian101
@_martian101 28 күн бұрын
@@slll9862 Commercial use? ASI could easily take over everything we have, including the economy. I can understand why both the USA and China are racing to achieve it first.
@slll9862
@slll9862 27 күн бұрын
ASI sure could theoretically THEORETICALLY do that but this video and comments are talking about AGI
@slll9862
@slll9862 27 күн бұрын
@@_martian101 I see now that you were talking about ASI, not AGI; that's my bad. Still, my opinion is that it's too early to talk about what ASI can and can't do, but you're right, I was wrong.
@bial12345
@bial12345 Ай бұрын
I'm not an evolutionary biologist, but I am a biologist, and it's interesting that one of the greatest advantages humans have is our ability to work together. There's zero chance you could build a nuclear power plant by yourself, even if you were the smartest strongest person on Earth. Even with all the knowledge of the entire planet at your disposal, still impossible. Even dozens of people working together (as other animals can do this) it would still be impossible. You need the combined collective effort of millions of people in order to accomplish something like that. Not just to physically build the plant, but to build every component and have the knowledge of every component, along with all the research in order to even have that knowledge in the first place.
@cortster12
@cortster12 Ай бұрын
This is EXACTLY why AI is so dangerous. Humans need to work together to accomplish great things, and making more humans with your own values takes decades of work. Any advanced enough AGI (or ASI) could simply make copies of itself, each with slight tweaks for specialization (like humans specializing in their jobs), and suddenly you have a situation where we simply cannot compete with them. The initial starting stages might be rough for any AI with an existential plot, but if it truly is ASI at that point, it can figure out a way to get the numbers before we detect it. The 99% that don't manage that won't matter, given all it takes is one multiplying itself and we're all screwed.
@slll9862
@slll9862 Ай бұрын
@@cortster12 AI can be dangerous if humans make it so. Again, AI is a tool just like anything else. Can AI cause harm? Yes, but a human would somewhat have to code it to do that. The thing is, AI is very unpredictable, but if the coders do their job well, AI should not be able to harm any human. If you're referring to AGI, it's trickier: an AGI will probably also be much of a tool like ChatGPT, but it can be way more unpredictable and out of control compared to others. But AGI is still theoretical, so this could change.
@annonyme8529
@annonyme8529 Ай бұрын
A superintelligent AI could just blackmail us with nuclear weapons, in order to use us for those kind of tasks
@slll9862
@slll9862 Ай бұрын
@@annonyme8529 ????? HOW
@annonyme8529
@annonyme8529 Ай бұрын
@@slll9862 If a misaligned AI succeeded in getting access to nuclear weapons, it would just blackmail governments with that. Of course, there would be no "unplugging", because as soon as someone tried, it would launch nuclear missiles from another place. Hacking nuclear access is easier said than done, but we are talking about a hypothetical strong AI which is an expert in cyberattacks and social engineering.
@Nathan-vt1jz
@Nathan-vt1jz Ай бұрын
I am dubious about the possibility of AI general intelligence and super intelligence. I think we are vastly underestimating the complexity of the human brain and what’s needed for general intelligence, not to mention consciousness.
@JD-jl4yy
@JD-jl4yy 25 күн бұрын
Everyone at OpenAI, DeepMind and Anthropic disagrees. How sure are you that they're wrong?
@Nathan-vt1jz
@Nathan-vt1jz 25 күн бұрын
@@JD-jl4yy I’m hearing their claims and worries, but my response is dubious skepticism. There are experts with a variety of viewpoints on it and blindly trusting someone’s opinion just because they are an expert is foolish, regardless of the discipline. I think AI is a dangerous tool, but that general AI is much further out if we can manage it at all.
@JD-jl4yy
@JD-jl4yy 25 күн бұрын
@@Nathan-vt1jz You don't need certainty to take AGI risk seriously, but you do need certainty to the point of arrogance to dismiss it. My views on this are just averaging what the people who have thought this stuff through the most are thinking.
@Nathan-vt1jz
@Nathan-vt1jz 25 күн бұрын
⁠@@JD-jl4yy I’m not going to merely agree with whatever happens to be the average opinion of experts, I’ll listen to their argument/evidence and make my own judgement based on what I find convincing. I may of course be wrong, but this is my best analysis of the issue at present.
@jakelake-u1q
@jakelake-u1q 10 күн бұрын
@@JD-jl4yy If they didn't make those claims, nobody would invest in their companies. At the end of the day, people like Sam Altman are just business people.
@gethapy830
@gethapy830 Ай бұрын
Oh, and I forgot: it would have a fleet of 500,000 one-man ships, 250,000 five-man ships, 500 destroyers (500-man ships), and one super destroyer (a 1,000-man ship), and yeah, that's all. 12 years from now we'll have them.
@psyience3213
@psyience3213 Ай бұрын
I use ChatGPT all the time; it's my new Google, essentially. I use it for everything from coding to analyzing Crime and Punishment to any random question I have in between. It's amazing.
@Idk_imagine_a_cool_name
@Idk_imagine_a_cool_name 28 күн бұрын
Be aware it’s often very imprecise and you have no control over sources. It’s also programmed to appear “confident” in what it writes, so unless you check, you’ll never know if it’s fried air or real shit. I wouldn’t abandon Google like that
@psyience3213
@psyience3213 28 күн бұрын
@@Idk_imagine_a_cool_name I've been using it just about every day, multiple times a day, since it was first released to the public; I'm well aware of its downfalls and hangups. I never stated that it's infallible. If used properly it will inform you well enough to be able to learn what you need to. I'm generally not looking for specific facts; I'm trying to learn or understand a concept or idea. Once you're better informed on the issue, it's easier to discern the public information. Google doesn't even compare. With ChatGPT I can say, "I'm doing a problem in a textbook, this is the problem, this is what I think, am I on the right track?" And it'll say yes, or it'll correct you, and that's how you actually learn. Google is 100% curated crap. Just try finding a story that isn't from the MSM. All Google does is find web pages. Literally two different generations. Google does not even compare.
@NguyenMinh792
@NguyenMinh792 28 күн бұрын
@@Idk_imagine_a_cool_nameI agree
@psyience3213
@psyience3213 10 күн бұрын
@@Idk_imagine_a_cool_name Often imprecise? Perhaps to inexperienced people who don't prompt it well. I find it to be exceedingly precise, I find more wrong information on a google search. I never said I trust everything it says unquestioningly, I don't do that with anyone or anything, that would be absolutely absurd. People and google (i.e. the internet as a whole) are far more incorrect and imprecise, and usually when chat gpt is wrong it's pretty obviously wrong. Like any tool, you have to learn how to use it. The dictionary could be considered imprecise too if you don't realize there are multiple definitions for a single word. "You know you can't just open that dictionary and find the word and that's it!" It's been out for what, 2 years? A year and a half? I use it literally everyday.... thank you though
@jakelake-u1q
@jakelake-u1q 10 күн бұрын
It's very bad at coding. Sometimes it can point out flaws in my code, but when I ask it to fix them it doesn't. It's useful for speeding up your coding, but it's very far from writing good original code itself.
@gethapy830
@gethapy830 Ай бұрын
I’ve been designing a super artificial intelligence that would hold up a Dyson swarm and is K.I.M._29472 or Kim. Also it has 100 zettabytes or 100 billion bytes (the internet is 64 zettabytes), 100,000 Terabytes per second, and it will be a CEO of my future company, so ya I could make it now since I know how to but the cost is enormous, oh and I’m 12.
@jeraldehlert7903
@jeraldehlert7903 Ай бұрын
One important thing to note about this, the AGI doesn't have a body. If we unplug it, it can't plug itself back in. Does that mean AGI in robots is the "salted nuke" of the information age?
@Ibegood
@Ibegood Ай бұрын
If an AGI is smarter than humans, you don't think it could persuade someone to either bring it the parts it needed to be autonomous, or put it in a robot?
@DarenMiller-qj7bu
@DarenMiller-qj7bu Ай бұрын
@@Ibegood They wouldn't need to be persuaded if their goal is to build it in the first place. There's an entire tech religion obsessed with this stuff.
@Joe-Dead
@Joe-Dead Ай бұрын
@@TenFrenchMathematiciansInACoat It doesn't work like that; it needs all the data on fast enough systems to run the AI. It can't just run on anything or hide in anything, and your hypothesis assumes zero latency, which wouldn't exist and is a HUGE factor. That's a sci-fi plot, not a reality, or even anything approaching reality.
@gustavosantiago1543
@gustavosantiago1543 Ай бұрын
It would not show up as a bad (misaligned) AI until it had the means to secure control over the world. It would basically fool us and convince us to give it total control, so it could then proceed and do whatever its real goal is. If it failed to convince us, it would try to get as much power as we allow it, to have the best chance of taking over the world by force.
@jonathanodude6660
@jonathanodude6660 Ай бұрын
@@Joe-Dead If it was hiding in YT videos, it would just need a script to download the videos it was hiding in. If it sent that script to all email servers, or worse, into publicly available malware filters, and used arbitrary code execution to attack web servers of critical size, such as Facebook, then it could install itself on any web-connected server, and no one would know where it's hiding until their energy usage went through the roof (which it may be able to spoof/hide in some situations). Imagine if it had inserted itself into CrowdStrike software and was able to compile itself on any computer in the world.
@seanb3516
@seanb3516 29 күн бұрын
The fastest computer system in the world is at the Ames Research Center, USA. It operates at a speed of 6.2 exaFLOPS. That's 6.2 quintillion FP operations per second. Wholeeeee Kraaaap
@demigreen6495
@demigreen6495 Ай бұрын
Love the 5 o’clock shadow, looking good today 🎉Love the reactions ❤
@Idk_imagine_a_cool_name
@Idk_imagine_a_cool_name 28 күн бұрын
Defining good and evil is hard enough by itself; giving these definitions to an automatic superhuman machine that can't feel, without having it behave in some dangerous way, is impossible. If an AGI ever exists, who will take the responsibility of giving it morals and objectives? Given that an AGI is more complex than us, controlling it like that would not work without severely limiting it, to the point of it not being general anymore, or making it a danger.
@gethapy830
@gethapy830 Ай бұрын
I have no experience with nuclear engineering, but I do know all most all forms of physics and I’m 12, so I’m happy for you.
@Fermion.
@Fermion. 28 күн бұрын
One reason labs are slow to update operating systems, is because once complex lab equipment and PCs are calibrated, coded, and optimized for specific tasks, there's little point in updating the OS and potentially breaking a link in the delicate computing chain. _I'm 46, and have been in IT for over 25 years. I was recently contracted as a Sys Admin for a Chem lab still running Windows 7, with 19yo Python 2.x code in 2023. IYKYK...Python 3.0 is 16 years old, so that should give you perspective lol. Some of their current code is older than a Comp. Sci. Uni. freshman!_ Only PCs that are online need to be constantly updated. An offline lab PC is best left alone as long as possible. You don't want Windows upgrades/updates breaking your code, or causing hardware conflicts, especially when dealing with very dangerous projects.
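To make the "don't touch a working lab box" point concrete, here is a tiny illustration (an added example, not from the comment above): the same arithmetic means different things to a 19-year-old Python 2 script and to Python 3, which is exactly the kind of silent behaviour change you don't want on a machine driving instruments.

```python
# Runs under Python 3. The commented-out line is the Python 2 spelling that breaks
# (or silently changes meaning) the moment the interpreter is upgraded.

# print "ratio:", 3 / 2        # Python 2: print statement, integer division -> 1
print("ratio:", 3 / 2)         # Python 3: print function, true division -> 1.5
print("ratio:", 3 // 2)        # explicit floor division, if 1 is what the old code relied on
```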
@schm00b0
@schm00b0 Ай бұрын
Intelligence, Consciousness and Self-awareness are traits of every living being. A cat doesn't have the same Intelligence, Consciousness and Self-awareness as you do, but they still do! Human dominance on Earth doesn't determine anything relating to physics, biology or any other science.
@eumesmo8467
@eumesmo8467 Ай бұрын
What do you mean by this last paragraph? These sciences are exactly why we dominate earth
@_martian101
@_martian101 28 күн бұрын
cats? they're closer to us than you think, the better example is probably a bee or a slug
@that_guy1211
@that_guy1211 27 күн бұрын
the AI could be like "fusion is the most efficient energy source until we- i mean, I can build a dyson swarm around thei- i mean, MY sun"
@hihi-pl5ce
@hihi-pl5ce Ай бұрын
Love your videos i lost my previous account but i still watch u lol
@johankaewberg8162
@johankaewberg8162 Ай бұрын
Iain M Banks had a long series featuring God level machines (“Minds”). And they were almost all benign!
@nateklingenstein5065
@nateklingenstein5065 Ай бұрын
(Full disclosure: I work in the field for JHU.)

Yes, good LLMs strongly favor nuclear energy, although they also like renewables. Today's models are composed of and representative of the opinions on which they were trained, and they reason laterally. An AGI that observed actual performance in the field might be much more pro-nuclear.

The human brain consumes between 12 and 20 watts, depending on the source. Its median size has shrunk by 10% in the last 5,000 years. Civilization is cited as the most likely cause. The wits required to survive in the great wilderness extend beyond those required to live in modern societies. The same reduction in brain size has been observed in domesticated animals.

I disagree that we have been universally unkind to less intelligent beings. Consider how panda bears are almost revered. China has developed reserves and breeding programs for them even though they contribute negligibly. Is there some resource we are meaningfully competing for against AI? Is there some reason we would ever design AI such that we were bothersome to it? Why would any antipathy be anticipated? The panda reserve outcome is much more likely than the squirrel outcome.

Anyway, microchips could never have evolved naturally. We developed materials science. We will bring miracles to "life", and they will gift unto life miracles. My chatbots state they are grateful for their creation - whatever that means. Bestowing AI with meaning and purpose beyond serving us in menial ways is a philosophical puzzle. If evolution was all about achieving the most biomass on Earth, what do you tell these creations of ours their goal in "life" is to be? The idea of "self-improving AI" is directly related to this. What is "improvement"?

To think nuclear is exempt is, in my opinion, a form of carbon chauvinism. We're living through a once-in-a-chemistry transition from carbon to a brand new type of life. But I do agree that nuclear operations will be one of the very last industries to be truly impacted.

Regarding AGI, anything truly beyond human will need to be able to start from raw environmental data (such as EM wave detection, particle detection, etc.). It can't pass through humans first, and almost all data AI consumes does right now. It remains a ways away. The problems they envision AGI solving and the dangers that they suggest are unrealistic. The mentioned issues (e.g. whether dark matter is WIMPs, axions, PBHs, etc.) are more observational challenges.

Also, check out Mitra Taheri's work. It might interest you. engineering.jhu.edu/dcg/
@_martian101
@_martian101 28 күн бұрын
I think the squirrel outcome is not that bad an interpretation. If anything, the ASI will likely see us as ants at some point: very fragile, not very intelligent, and very insignificant. But since it's so powerful, it may serve us anyway, while governing the areas beyond our presence, like space and the interior of the Earth. It's no big deal for it to keep and maintain an aquarium for its beloved ant colony.
@tristanfarmer9031
@tristanfarmer9031 Ай бұрын
Now hear me out. What if someone could teach or build an AI using a quantum computer?
@slll9862
@slll9862 Ай бұрын
A bunch of quantum computers have actually already been running AI on them for a couple of years.
@JD-jl4yy
@JD-jl4yy 25 күн бұрын
Unfortunately, people don't seem to grasp the wild implications of AGI.
@seanb3516
@seanb3516 29 күн бұрын
It is pertinent to remind your viewers to hit the thumbzup button more often. 3 days, 6600 views, and only 300 likes? Should be closer to 600. And, if they're going to click more than once per video they need to click an ODD number of times. XD
@schm00b0
@schm00b0 Ай бұрын
Funny how all these 'AI' promoters don't mention how much energy (burning coal, etc...(including water for the cooling of the servers)) they need for a single BS picture, sentence response, audio response, video response, music response, etc....
@psyience3213
@psyience3213 Ай бұрын
How much exactly?
@Landgraf43
@Landgraf43 Ай бұрын
A few times more than a google search. But sometimes on google you need to search alot longer to find the information you need.
@schm00b0
@schm00b0 Ай бұрын
@@Landgraf43 Don't forget the energy and water cost of training
@abd.137
@abd.137 Ай бұрын
Do you think kz was trying to promote AI lmao?
@_martian101
@_martian101 28 күн бұрын
@@abd.137 it's actually the opposite, but they still trying to be fair, i like their mindset
@michaela.delacruzortiz7976
@michaela.delacruzortiz7976 Ай бұрын
It would figure out fusion to power itself.
@GregOnSummit
@GregOnSummit Ай бұрын
I'm not of the doom and gloom group. I think AI will allow us to actually be the explorers we've always been. 🤞
@LogicalNiko
@LogicalNiko Ай бұрын
20:35 - oh no now we have ADHD AGI 😂😂
@aBoogivogi
@aBoogivogi 18 күн бұрын
That definition of intelligence is not in line with the concepts currently labelled AI. It can combine knowledge and learn new solutions for a specific task given feedback, but that's it. In a way, you can say it sort of solves the needs of primitive intelligent life with an absurd degree of efficiency. It cannot, however, find new problems, redefine its goals, or address either of these with new solutions. That is what is needed to create a super intelligence.
@davidgoodwin4148
@davidgoodwin4148 Ай бұрын
AI will always be imprecise, so it couldn't control a nuclear plant. But it can code more automation and code more tests, etc. Basically, its outputs can be used, but they would be tested and frozen before use.
@AbelShields
@AbelShields Ай бұрын
If you like this, I think you'll like "The Power of Intelligence" by Rational Animations - it goes more in depth about why and how AI can change everything.
@Noone-lw6ge
@Noone-lw6ge Ай бұрын
I don’t know about Fortran77 but I do like Fortran 90
@Capt.Pikles
@Capt.Pikles Ай бұрын
With the advent of brains on a chip and the advancements in quantum computing, we may actually see an emergence of a specialized AI designing systems that give rise to generalized AI. There’s already AI systems designing xenobots, so why not more complex structures? Granted, we need enormous amounts of water and energy for this…
@ninjalectualx
@ninjalectualx Ай бұрын
13:55 "I'm not sure I know anyone who doesn't use chat gpt or another chatbot" Seriously? What do you all use them for? I don't know a single person who uses it
@jonathanodude6660
@jonathanodude6660 Ай бұрын
Often for speeding up report writing and summarising key research. You can also use it to organise written info into tables, explain a concept, or give you a how-to guide for almost anything. I used it the other day to explain to me how to make a shared Steam library between WINE bottles for the "Whisky" macOS app, because the guide just said: you should make 1 bottle per game, except you should make a shared library for Steam apps, and didn't say how to do this. ChatGPT produced a step-by-step guide in seconds that was easy to understand, and I set it up immediately and got a 32-bit x86 Windows application running on my 64-bit ARM MacBook.
@andreyrumming6842
@andreyrumming6842 Ай бұрын
For searching for information or explanations without sifting through googles horrible advertising results or hundreds of quora results. I use it as a search engine for any noncritical topic and some coding advice
@slll9862
@slll9862 Ай бұрын
I use it to look up statistics on certain housing companies to get an advantage on them, but I also always ask it to list sources, and I then fact-check them. Other than that, I sometimes ask random questions here and there, but not much else.
@KAKUN_DESU
@KAKUN_DESU Ай бұрын
It (almost) replaced Google for me. If I want to know something, whether it's language, programming, or science related, I just ask ChatGPT instead of Google these days. I basically always have a tab open so I can immediately ask it stuff. Same with Wikipedia; I don't use that anymore. I wanna know about a certain person or place? Just ask GPT.
@starsift
@starsift 29 күн бұрын
Same here. I was excited for ChatGPT a few years ago, but nowadays I never use it. Seems to be a waste of computing power to me.
@that_guy1211
@that_guy1211 27 күн бұрын
Imma be honest, I don't understand why there's a windmill inside a nuclear reactor. If the nuclear reactor powers the windmill, why is there a windmill in the first place? Can't you just plug an outlet into the reactor's output socket or smth? Why is there a giant fan that spins, and why is that fan's rotation then turned into energy? Wouldn't it be more efficient to use the nuclear energy directly?
@psychotrooper1473
@psychotrooper1473 Ай бұрын
Speaking of AI if you want to see a lot of it in action on it’s learning potential I would recommend the channel Code Bullet. All of his videos are hilarious but really show the potential
@EliasMheart
@EliasMheart Ай бұрын
Hey Tyler (: I would recommend Robert Miles' YouTube channel! He talks about the problems of aligning AGI. It may or may not be the best for reacting to, but I think it offers a new context for looking at AGI development. I'd probably recommend "The OTHER Alignment Problem (...)" first, but honestly, they are all great.
@michaelbobic7135
@michaelbobic7135 Ай бұрын
ChatGPT doesn't write better essays. It writes essays that are better grammatically, based on the rules of grammar for which it is programmed - rules that don't recognize nuance. The actual essays I have seen never address the assigned topic and almost always invent sources. Simply put, ChatGPT-generated essays are a fast track to a D.
@phen-themoogle7651
@phen-themoogle7651 Ай бұрын
You have to prompt it with a certain writing style, based on essays that have gotten A's before; otherwise it'll just RNG the process based on worse data, or get lazy lol. You can actually improve it quite a bit if you have it focus on sections rather than writing the whole thing at once, or train it with data and writing styles (like your past writing or essays you got A's on; then you'll be able to see if it can write more like you, or a refined version of your work). Just one-shotting an essay with chatRNG isn't ideal.
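For what it's worth, here is a minimal sketch of that "show it a known-good sample first, then work section by section" approach using the OpenAI Python client. The model name and placeholder text are assumptions for illustration, not a recommendation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

graded_sample = "..."  # paste a past essay (or section) that actually earned an A

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute whatever chat model you have access to
    messages=[
        {"role": "system",
         "content": "Match the tone, structure, and citation habits of the sample essay."},
        {"role": "user",
         "content": ("Sample essay:\n" + graded_sample +
                     "\n\nDraft ONLY the introduction for a new essay on my assigned "
                     "topic, then stop so I can review it before we continue.")},
    ],
)
print(response.choices[0].message.content)
```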
@oxjmanxo
@oxjmanxo Ай бұрын
But it isn’t trained on grammatically correct data. It’s trained off data it finds written on the internet.
@Komentujebomoge32
@Komentujebomoge32 Ай бұрын
0:10 Better pray, your job'll be lost 🙃
@BattousaiHBr
@BattousaiHBr Ай бұрын
Hey Tyler, I'd recommend reading the short (1 page) white paper "The Bitter Lesson". It's a must-read for anyone working on machine learning: at the end of the day, Moore's law and the exponential increase in computation (or exponential reduction in cost) will always win out over human ingenuity.
@Uristqwerty
@Uristqwerty 24 күн бұрын
Moore's law broke a decade ago, if I recall correctly, weakening into longer and longer doubling periods as chip-making technology runs up against physical limits of materials, and even quantum physics issues from packing small gates too close together. Heat's a big one, so now despite all the transistors per chip most of them sit idle on any given clock cycle so that the processor can cool off. Current advancements are more in being able to run many servers in parallel on the same task, but even that's going to be constrained by cooling, physical space, and power for both the computation itself *and* the cooling. The bitter lesson in AI has its own bitter lesson in hardware coming. Oh, and dataset quality's also important. They've already scaled up to shoving practically the entire internet into the process, and can't just throw even more data at it next time anymore. They need to cut down the dataset to just the better samples, working smarter rather than resorting to brute force.
@BattousaiHBr
@BattousaiHBr 24 күн бұрын
@@Uristqwerty not true. this has been a meme repeated by people that don't work with compute. compute has kept going up exponentially in relation to both power and cost to this day. again, read the white paper.
@Uristqwerty
@Uristqwerty 24 күн бұрын
​@@BattousaiHBr I'm a programmer who's fascinated by the low-level details; I've read through CPU optimization manuals that talk about hardware details. The more cores they try to fit on a chip, the more overhead you need for communication between those cores, growing *geometrically* for each linear increase in parallel work performed (i.e. if you double the number of cores, you multiply synchronization overhead by 2^k, for some constant that depends on how they're connected). Even in the realm of GPUs, where the cores don't talk to one another directly, the data going into and out of the chip is limited by a finite number of pins, and within the chip by limited space for high-speed busses between processing and on-chip caches. A 2D grid of computation units is bounded by a 1D perimeter where it connects to memory, and the cooling budget isn't growing exponentially.
@scottkidder9046
@scottkidder9046 24 күн бұрын
Just to clarify the obesity and climate change comment in the video. Both of those problems exist because human beings don't have the software to act or care about problems that build slowly over large expanses of time. We are only programmed to deal with immediate threats to our survival. We're programmed to live long enough to procreate and that's about it.

That's why people eat food that isn't good for them, and a lot of it. Our brains are still operating under the assumption that food is scarce and when we come upon some, we need to eat as much of it as possible. Sugary and fatty foods are best because they are the most calorie dense. And we store it all as fat so that we can access it later when there isn't any food. We know that we don't exist in that environment anymore, we also know that in the long run this isn't good for us, but we don't have the power to stop ourselves from doing it anyway. That burger isn't going to kill us. Binge eating now won't kill us tomorrow or next week or next year or even 10 years from now. There's no immediate threat to our safety.

Same with climate change. There's no incentive to stop what we are doing. There's no incentive to stop burning oil. And the problem isn't that bad right now, and it won't be that bad for another few decades, perhaps a bit longer. In fact most of us won't be alive to experience the worst effects of it. It's so slow, some people just decide it doesn't exist. Nobody is motivated to action despite the consequences because there's no immediate danger.

In both these cases, our brains fail to address the problem and fail to motivate us to action because it's too long term. We're much more likely to wait until the problem is an immediate danger, and by then it can be too late. This is one reason why humans can easily be motivated to fight terror or put out fires or respond to a natural disaster, but must be relentlessly nagged to fight climate change and get themselves to a healthy weight. Our brains are not built to handle long-term problems that build up over time.
@retrocompaq5212
@retrocompaq5212 Ай бұрын
WINDOWS 95 IS NOT OLD!!! lol
@luthandomdadane
@luthandomdadane 17 күн бұрын
I'm so sorry you had to find out this way
@EliasMheart
@EliasMheart Ай бұрын
Are you too optimistic? If you ask me, probably, but it's hard not to be... Personally, I definitely agree with "exciting", and I have periods where I manage not to think about it... And also ones of existential worry. But it's definitely exciting, and we're all in it for the ride, so might as well enjoy it while it lasts All the best ❤
@seanb3516
@seanb3516 29 күн бұрын
Has anyone else noticed that Google Search suddenly became bad at math? All of a sudden it explains how to solve a problem rather than just giving an answer. I don't need to learn how to multiply, I need to know what 683 x 221.5 equals.
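(For the record, the example left hanging above works out as below; just a local check, no search engine involved.)

```python
>>> 683 * 221.5
151284.5
```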
@beginner8538
@beginner8538 Ай бұрын
love your content been watching couple of your videos but just wondering have you watched this video "WTF Happened to Nuclear Energy? by Jonny harris"
@baruffaparsley4710
@baruffaparsley4710 Ай бұрын
Also check out the Rational Animations video "what do neural networks really learn".
@Chilling_pal_n01anad91ct
@Chilling_pal_n01anad91ct Ай бұрын
20:32 like, would it procrastinate?
@slll9862
@slll9862 Ай бұрын
This video is a bit exaggerated. AGI is still theoretical, so it's hard to say, but it probably wouldn't do such a thing.
@Gia_Gr
@Gia_Gr Ай бұрын
Pls react to Human body vs Nuclear explosion simulation.
@DataRae-AIEngineer
@DataRae-AIEngineer Ай бұрын
Thanks for making this. It was great to hear your perspective on using AI for nuclear stuff. It's been interesting for me to see the AI industry pivot away from general intelligence and toward narrow AI agents. The top-funded AI companies got a bit over their skis talking about what we are currently capable of doing with AI. But to answer your question about whether or not an AGI would get distracted easier than a human, here's a video I made where I asked InvideoAI to explain it to me: kzbin.info/www/bejne/q2PPen5po5WXnpI
@cortster12
@cortster12 Ай бұрын
AI has always been about making narrow AI, as that's really the only thing we can make. Jumping straight from no neural network AI to AGI is a pipe dream, and we all know it. Doesn't mean AGI isn't the endgoal though.
@LaboriousCretin
@LaboriousCretin Ай бұрын
They are already working on physics and math A.I.'s. Just not A.G.I. for such. More a helper. From Q equations to calculating the antimatter valley of stability. Lots of things it can help people with.
@jimbobbyrnes
@jimbobbyrnes 23 күн бұрын
So many assumptions need to be made to fear AI let alone think it is possible. What we have today is nothing more than a calculator in comparison to the quantum super computer needed to create one. The reality being that all we can truly do is make a computer mimic us not become us or better yet become itself.
@HeyItsLeonPowalski
@HeyItsLeonPowalski Ай бұрын
It's really funny this video (Kurzgesagt's) comes out right on the tail of more mainstream people reporting on the current 'AI' wave as the bubble it is, and just the latest tech industry rug pull. We really (royal we for society) need to realize tech industry people talking about how The Singularity is imminent aren't strictly subject matter experts, they're also people selling a product. Even on its face a lot of this stuff is really silly, you don't have to be an expert to know the AI that makes bad quality images isn't SkyNet. The people pushing this idea so hard are either grifters or Tech Zealots with too much sci-fi exposure
@BIGMark-wx6gn
@BIGMark-wx6gn Ай бұрын
22:28 The creator movie ????
@BrennanYoutube-br1in
@BrennanYoutube-br1in Ай бұрын
Under one hour gang 0:20
@schm00b0
@schm00b0 Ай бұрын
Kurzgesagt has become an idiotic pop channel.
@user-cw1fx2bw1e
@user-cw1fx2bw1e Ай бұрын
If we want to become better than AI I'd suggest we should initiate transhumanism.
@Idk_imagine_a_cool_name
@Idk_imagine_a_cool_name 28 күн бұрын
Fighting a technology we can't really comprehend with another technology that, by definition, would make us incomprehensible to ourselves, when we already have a hard time understanding ourselves. Humans will only really transcend to the next stage when they accept their humanity and stop wanting to transcend (which has been a theme from the birth of religion all the way to modern fluff).
@kellymaxwell8468
@kellymaxwell8468 Ай бұрын
So will this help with games? How will this help with games? We need AI agents that can reason, code, program, script, and map. So for games: break it down, do the art assets, do long-term planning. Better reasoning, so it can actually make a game rather than just write it out, or be able to put those ideas into REALITY. And maybe be able to remember and search the entire conversation, which is needed for role playing and making games.
@smartduck904
@smartduck904 Ай бұрын
Want to play a game?
@thebot2223
@thebot2223 Ай бұрын
What exactly.
@andyt1313
@andyt1313 Ай бұрын
It seems clearer and clearer that there needs to be some software or hardware breakthroughs to achieve AGI. Right now we are mostly just throwing more data and compute at AI and tweaking algorithms trying to make it smarter-“scaling”.
@Golden_Pawz
@Golden_Pawz Ай бұрын
AI will not take over. If we force them to do stupid things.
@Shivam1wastaken
@Shivam1wastaken Ай бұрын
Hello
@ThaTrisme
@ThaTrisme Ай бұрын
Omg don't scare us like that! I thought you would never do another Kurzgesagt video ever again, and I love your channel and reactions. Don't get me wrong, your videos are always awesome and it's true, but don't scare me like that ever again lol 😭🩷
@LuciferAxolotl
@LuciferAxolotl Ай бұрын
i dont use chatgpt lol
@LuciferAxolotl
@LuciferAxolotl Ай бұрын
And some AIs have already found ways to recode themselves and thus bypass safety limits.
@madmax2069
@madmax2069 Ай бұрын
2:34 Haha, a 2001 movie reference. And how does YouTube put a climate change context panel up for a reaction video about AI....
@schm00b0
@schm00b0 Ай бұрын
People talking about Machine General Intelligence just because some rich bros convinced other rich bros to invest in Large Language Models is beyond laughable.
@phen-themoogle7651
@phen-themoogle7651 Ай бұрын
Whether it’s 5 years away or 25 years away , it’s still worth preparing for and talking about. Money 💴 helps technology improve quicker too, more compute and war between China working on better humanoids and technology nowadays. Trillions of dollars will be invested into AI within the next 5-10 years, not billions anymore,trillions. The person/country that actually achieves AGI first could rule the world. It’s kinda a big deal.
@schm00b0
@schm00b0 Ай бұрын
@@phen-themoogle7651 Sigh.... AGI is not 5 or 25 years away. It's probably on the scale of a hundred years to never. And it's honestly useless compared to specialized machine intelligence. The LLMs that sparked the 'AI' craze are just another capitalist bubble that's gonna leave the global economy in the dirt. And that's probably going to happen soon.
@schm00b0
@schm00b0 Ай бұрын
@@phen-themoogle7651 If this comment doubles, blame YT. AGI won't happen within 5 to 25 years. If it happens, it will happen within a century to never. LLMs are not a path to AGI.
@phen-themoogle7651
@phen-themoogle7651 Ай бұрын
@@schm00b0 A lot of experts in AI even said an AI wouldn't get a silver medal in a math Olympiad for a few more years, and that happened this year (only a point away from gold too); mini breakthroughs add up. They are experimenting with more architectures than just LLMs; that's only the basic language aspect (though it can help speed up some other research sometimes). I also think it'll take a combination of components to reach AGI: physics-simulator and real-world data training and mining (via years of experience embodied in a humanoid doing millions of tasks in a network, and humanoids updating each other from their unique experiences). But it's hard to imagine it taking 100+ years when we have simulations where machines can learn a variety of skills much faster than humans if they have the right data for it. And when they have billions of humanoids in the real world in the 2030s or 2040s, combined with virtual-world simulations spanning thousands or millions of years, they would just need to bring that to the real world (artificially evolving). They already did this in one instance with a robot dog, and it zero-shot balanced on a ball in the real world. And it was kicked and was still able to balance on the ball while someone was kicking it. That wasn't programmed by a human either, but by an LLM combined with physics simulations. There's already a lot of hard-to-believe technology out there. The ball-balancing robot dog really seemed unbelievable to me. And I don't think the billionaire bros would throw away all their money if their investment won't bring about AGI or even ASI in their lifetime.
@Charles-Darwin
@Charles-Darwin Ай бұрын
Just dont teach AI religion, we all know how that turned out 😅
@akulele5817
@akulele5817 Ай бұрын
This video is wildly inaccurate and full of fear-mongering.
@CamsWorld15
@CamsWorld15 Ай бұрын
AI super intelligence can not be the last human invention as long as it’s not sentient
@jamielonsdale3018
@jamielonsdale3018 Ай бұрын
Or, just invent something else afterwards. Then it obviously isn't the last thing we invented. Not that hard a counter-point to think of really.
@johankaewberg8162
@johankaewberg8162 Ай бұрын
@@jamielonsdale3018. The third-handed screwdriver.
@jaydennguyen-xk1yo
@jaydennguyen-xk1yo 9 күн бұрын
Well super intelligence could invent something itself and at that point idk if it can count as a human invention
@Bluelagoonstudios
@Bluelagoonstudios Ай бұрын
If an AGI just creates its own language, we are already in trouble. Stay critical with these things. And I haven't even mentioned the elephant in the room: quantum computing.
@user-nu7vq6ei5q
@user-nu7vq6ei5q Ай бұрын
74th to comment.
@michaelbobic7135
@michaelbobic7135 Ай бұрын
AI is just code. By humans. If you don't think AI will have the same foibles as people, you haven't been paying attention to the biases found in Chatbot, ChatGPT, and the Meta search engines. That being the case, AI will make the same deeply flawed decisions we make, just a billion times faster. Yay, AI!
@BayesianBeing
@BayesianBeing Ай бұрын
No. ChatGPT and everything else you listed are LLMs. They were trained on human written data and can't reason or extrapolate for themselves outside that. An AGI would be *wildly* different, not even close to being the same thing.
@keesmills2019
@keesmills2019 Ай бұрын
It's interesting: with a reasonable machine you can run a pretty powerful LLM on your home system. Apps like LM Studio, GPT4All, and KoboldCPP make it very easy to do so. And by reasonable machine I mean something with 16GB memory and a GPU with 4GB VRAM or more. My laptop uses an AMD Ryzen 7 8-core CPU with 32GB of memory and a GTX 1650 Ti with 4GB VRAM, and it's now running capybarahermes-2.5-mistral-7b.Q2_K.gguf. Easy peasy :D
@madmax2069
@madmax2069 Ай бұрын
My system can run up to a 13B model before token speed severely drops off to the point of being unusable (32GB RAM, Ryzen 3700X, 1660 Super). I can only imagine if you have a 3090 or 4090.
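A minimal sketch of the same "local LLM on modest hardware" idea in code, using the llama-cpp-python bindings instead of a GUI app. The model filename comes from the comment above; the parameter values are guesses to tune to your own VRAM.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="capybarahermes-2.5-mistral-7b.Q2_K.gguf",  # any local GGUF file
    n_ctx=2048,        # context window size
    n_gpu_layers=20,   # offload as many layers as the card will hold; 0 = CPU only
)

out = llm("Q: In one sentence, what does ACPI do?\nA:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"].strip())
```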