What Do Neural Networks Really Learn? Exploring the Brain of an AI Model

181,897 views

Rational Animations


A day ago

Neural networks have become increasingly impressive in recent years, but there's a big catch: we don't really know what they are doing. We give them data and ways to get feedback, and somehow, they learn all kinds of tasks. It would be really useful, especially for safety purposes, to understand what they have learned and how they work after they've been trained. The ultimate goal is not only to understand in broad strokes what they're doing but to precisely reverse engineer the algorithms encoded in their parameters. This is the ambitious goal of mechanistic interpretability. As an introduction to this field, we show how researchers have been able to partly reverse-engineer how InceptionV1, a convolutional neural network, recognizes images.
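The reverse engineering described above starts with feature visualization: ascend the gradient of a chosen neuron's activation with respect to the input image, and the image that emerges shows what the neuron detects. Here is a minimal toy sketch of the idea; the hand-set 5x5 "curve detector" filter and all numbers are illustrative assumptions, not InceptionV1's actual learned weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "stroke detector" weights: a 5x5 filter (hand-set here for
# illustration; real InceptionV1 filters are learned during training).
w = np.zeros((5, 5))
w[2, :] = 1.0  # responds to a horizontal stroke

# Start from random noise and ascend the activation gradient.
x = rng.normal(size=(5, 5))
for _ in range(200):
    grad = w                # d/dx of the activation (w * x).sum() is just w
    x += 0.1 * grad
    x /= np.linalg.norm(x)  # keep the "image" bounded

# The optimized image aligns with the feature the neuron detects.
cos = (w * x).sum() / (np.linalg.norm(w) * np.linalg.norm(x))
print(round(cos, 3))  # approaches 1.0
```

With a real network the activation is a deep nonlinear function and the gradient comes from backpropagation (plus robustness tricks like jitter and blurring), but the optimization loop has the same shape.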
▀▀▀▀▀▀▀▀▀SOURCES & READINGS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
This topic is truly a rabbit hole. If you want to learn more about this important research and even contribute to it, check out this list of sources about mechanistic interpretability and interpretability in general we've compiled for you:
On Interpreting InceptionV1:
Feature visualization: distill.pub/20...
Zoom in: An Introduction to Circuits: distill.pub/20...
The Distill journal contains several articles that try to make sense of how exactly InceptionV1 does what it does: distill.pub/20...
OpenAI's Microscope tool lets us visualize the neurons and channels of a number of vision models in great detail: microscope.ope...
Here's OpenAI's Microscope tool pointed at layer Mixed3b in InceptionV1: microscope.ope...
Activation atlases: distill.pub/20...
More recent work applying sparse autoencoders (SAEs) to InceptionV1: arxiv.org/abs/...
Transformer Circuits Thread, the spiritual successor of the circuits thread on InceptionV1. This time on transformers: transformer-ci...
In the video, we cite "Toy Models of Superposition": transformer-ci...
We also cite "Towards Monosemanticity: Decomposing Language Models With Dictionary Learning": transformer-ci...
More recent progress:
Mapping the Mind of a Large Language Model:
Press: www.anthropic....
Paper in the Transformer Circuits thread: transformer-ci...
Extracting Concepts from GPT-4:
Press: openai.com/ind...
Paper: arxiv.org/abs/...
Browse features: openaipublic.b...
Language models can explain neurons in language models (cited in the video):
Press: openai.com/ind...
Paper: openaipublic.b...
View neurons: openaipublic.b...
Neel Nanda on how to get started with Mechanistic Interpretability:
Concrete Steps to Get Started in Transformer Mechanistic Interpretability: www.neelnanda....
Mechanistic Interpretability Quickstart Guide: www.neelnanda....
200 Concrete Open Problems in Mechanistic Interpretability: www.alignmentf...
More work mentioned in the video:
Progress measures for grokking via mechanistic interpretability: arxiv.org/abs/...
Discovering Latent Knowledge in Language Models Without Supervision: arxiv.org/abs/...
Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning: www.nature.com...
▀▀▀▀▀▀▀▀▀PATREON, MEMBERSHIP, MERCH▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🟠 Patreon: / rationalanimations
🔵 Channel membership: / @rationalanimations
🟢 Merch: rational-anima...
🟤 Ko-fi, for one-time and recurring donations: ko-fi.com/rati...
▀▀▀▀▀▀▀▀▀SOCIAL & DISCORD▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Discord: / discord
Reddit: / rationalanimations
X/Twitter: / rationalanimat1
▀▀▀▀▀▀▀▀▀PATRONS & MEMBERS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
AAAA you don't fit in the description this time! But we thank you from the bottom of our hearts. All of you, in this Google Doc: docs.google.co...
▀▀▀▀▀▀▀CREDITS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
All the good doggos who worked on this video: docs.google.co...

Comments: 576
@RationalAnimations 3 months ago
This topic is truly a rabbit hole. If you want to learn more about this important research and even contribute to it, check out this list of sources about mechanistic interpretability and interpretability in general we've compiled for you:
On Interpreting InceptionV1:
Feature visualization: distill.pub/2017/feature-visualization/
Zoom in: An Introduction to Circuits: distill.pub/2020/circuits/zoom-in/
The Distill journal contains several articles that try to make sense of how exactly InceptionV1 does what it does: distill.pub/2020/circuits/
OpenAI's Microscope tool lets us visualize the neurons and channels of a number of vision models in great detail: microscope.openai.com/models
Here's OpenAI's Microscope tool pointed at layer Mixed3b in InceptionV1: microscope.openai.com/models/inceptionv1/mixed3b_0?models.op.feature_vis.type=channel&models.op.technique=feature_vis
Activation atlases: distill.pub/2019/activation-atlas/
More recent work applying sparse autoencoders (SAEs) to uncover more features in InceptionV1 and decompose polysemantic neurons: arxiv.org/abs/2406.03662v1
Transformer Circuits Thread, the spiritual successor of the circuits thread on InceptionV1. This time on transformers: transformer-circuits.pub/
In the video, we cite "Toy Models of Superposition": transformer-circuits.pub/2022/toy_model/index.html
We also cite "Towards Monosemanticity: Decomposing Language Models With Dictionary Learning": transformer-circuits.pub/2023/monosemantic-features/
More recent progress:
Mapping the Mind of a Large Language Model:
Press: www.anthropic.com/research/mapping-mind-language-model
Paper in the Transformer Circuits thread: transformer-circuits.pub/2024/scaling-monosemanticity/index.html
Extracting Concepts from GPT-4:
Press: openai.com/index/extracting-concepts-from-gpt-4/
Paper: arxiv.org/abs/2406.04093
Browse features: openaipublic.blob.core.windows.net/sparse-autoencoder/sae-viewer/index.html
Language models can explain neurons in language models (cited in the video):
Press: openai.com/index/language-models-can-explain-neurons-in-language-models/
Paper: openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html
View neurons: openaipublic.blob.core.windows.net/neuron-explainer/neuron-viewer/index.html
Neel Nanda on how to get started with Mechanistic Interpretability:
Concrete Steps to Get Started in Transformer Mechanistic Interpretability: www.neelnanda.io/mechanistic-interpretability/getting-started
Mechanistic Interpretability Quickstart Guide: www.neelnanda.io/mechanistic-interpretability/quickstart
200 Concrete Open Problems in Mechanistic Interpretability: www.alignmentforum.org/posts/LbrPTJ4fmABEdEnLf/200-concrete-open-problems-in-mechanistic-interpretability
More work mentioned in the video:
Progress measures for grokking via mechanistic interpretability: arxiv.org/abs/2301.05217
Discovering Latent Knowledge in Language Models Without Supervision: arxiv.org/abs/2212.03827
Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning: www.nature.com/articles/s41551-018-0195-0
@pyeitme508 3 months ago
WOW!
@EmmanuelMess 3 months ago
What's the source for "somehow it learnt to tell people's biological sex" (at the start)? It really sounds like bias in the data.
@RationalAnimations 3 months ago
@EmmanuelMess It's the very last link
@EmmanuelMess 3 months ago
Thanks! It seems that other papers also confirm that biological sex identification is possible from fundus images.
@ifoxtrot171gg 3 months ago
as someone who uses neurons to classify images, i too am activated by curves.
@dezaim7288 3 months ago
As a mass of neurons i can relate to being activated by curves.
@hellofellowbotsss 3 months ago
Same
@nxte8506 3 months ago
deadass
@a31-hq1jk 3 months ago
I get activated by straight thick lines
@ThatGuyThatHasSpaghetiiCode 3 months ago
Especially the female ones
@user-qw9yf6zs9t 3 months ago
anyone else surprised that there isn't an AI model that ranks people's "beauty-ness" from 1-100? honestly a great start-up idea, just use people's ranking data
@cheeseaddict 3 months ago
You guys shouldn't overwork yourselves 😭 17 minutes of high-quality animation and info. Seems like Kurzgesagt got competition 👀
@dissahc 3 months ago
kurzgesagt could only dream of possessing this much style and substance.
@Puppeteer_in_the_Void 3 months ago
I feel like kurzgesagt has slacked off on writing quality, so I'm glad it may have to fight to not be replaced
@ExylonBotOfficial 3 months ago
This is so much more informative than any of the recent kurzgesagt videos
@raph2550 3 months ago
The recent Kurzgesagt are just ads so...
@Restrocket 3 months ago
Use AI to generate video and text instead
@theweapi 3 months ago
Polysemanticity makes me think of how we can see faces in things like cars and electrical sockets. The face detection neurons are doing multiple jobs, but there is not a risk of mixing them up because of how vastly different they are. This may also explain the uncanny valley, where we have other neurons whose job it is to ensure the distinction is clear.
@christianhall3916 2 days ago
I don't think that's what's going on with face detection. It's just highly advantageous in natural selection to tell if something has a face or not, because that's a sign that it could be alive. So face detection is overeager because it's so important to know if there's even a slight chance something has a face. Two dots and a line or curve is all it takes for us to see a face in anything.
@Nikolas_Davis 3 months ago
The delicious irony, of course, is that AI started out as a research field with the purpose to understand our _own biological intelligence_ by trying to reproduce it. Actually building a practical tool was a distant second, if even considered. But hardly anyone remembers that now, when AI systems are developed for marketable purposes. So, now that AI (kinda) works, we're back to square one, trying to understand _how_ it works - which was the issue we had with our own wetware in the first place! Aaargh!! But all is not lost, because we can prod and peek inside our artificial neural networks much easier than we can inside our noggins. So, maybe there is net progress after all.
@superagucova 3 months ago
A cool fact is that interpretability efforts *have* led to some progress in neuroscience. Google has a cool paper drawing analogies between the neuroscience understanding of the human visual cortex and specific types of convolutional neural networks, and this has seeped into the neuroscience literature
@miniverse2002 3 months ago
Considering the brain is the most complicated thing known in the Universe other than the Universe itself, I would think understanding it would still be a whole other challenge compared to our "simple" reproductions, even if we can prod and peek at our own brains just as easily.
@jackys_handle 2 months ago
"Wetware" I'm gonna use that
@israrbinmi2856 2 months ago
and then they use a more advanced AI (GPT-4) to understand the lesser (GPT-2); the irony is an onion
@antonzhdanov9653 2 months ago
@israrbinmi2856 It's weird, but it makes sense, because the more advanced AI is literally tasked with vivisection of the less advanced one to discern and show what each piece of it is doing, letting skinbags interpret the results.
@CloverTheBean 3 months ago
I really appreciate your simplification without dumbing it down to just noise. I've been wondering about how neural networks operate. Not that I'm a student or trying to apply it for any reason. I just love having nuggets of knowledge to share around with friends!
@miguelmalvina5200 3 months ago
KARKAT LETS GO
@average-neco-arc-enjoyer 3 months ago
@miguelmalvina5200 they got the HS:BC Karkat
@CloverTheBean 2 months ago
@miguelmalvina5200 YES, YOU'RE THE FIRST ONE IN THE WILD WHO RECOGNIZED IT
@miguelmalvina5200 2 months ago
@CloverTheBean I wasn't expecting to find a homestuckie in a very niche AI video honestly, pretty nice
@eddyr1041 2 months ago
The wondrous philosophical thought of what a brain is...
@jddes 3 months ago
This is something I've been going on about for a long time. The real prize isn't in getting machines to do stuff for us, it will be using and adapting the shortcuts they are able to find. We just need to learn...what they've learned.
@r.connor9280 3 months ago
The world's most complex homework stealing scheme
@terdragontra8900 3 months ago
This is… not true, I think. Neural networks will learn (and already have learned) algorithms that are simply too complicated and weird for a human brain to follow.
@jakebrowning2373 3 months ago
@terdragontra8900 yes
@joaomrtins 3 months ago
Rather than learning _what_ they learned the real prize is learning _how_ they learn.
@E.Pierro.Artist 3 months ago
@joaomrtins For what? It won't do us much good
@DanksPlatter 3 months ago
these videos are perfect edutainment and it's crazy how much detail goes into even background stuff like sound effects and music
@realtwn 3 months ago
8:25 bro made an AI image generator without realizing it
@E.Pierro.Artist 3 months ago
Ur mum
@Jan12700 3 months ago
5:18 But that's exactly what leads to extreme misjudgments if the data isn't 100% balanced, and you never manage to get anything to be 100% balanced. With dogs in particular, huskies were only recognized if the background was white, because almost all training data with huskies was in the snow.
@Kevin-cf9nl 3 months ago
On the other hand, "is in the snow" is a genuinely good way to distinguish huskies from other dogs in an environment with low information. I wouldn't call that an "extreme misjudgment", but rather a good (if limited) heuristic.
@austinrimel7860 3 months ago
Good to know.
@superagucova 3 months ago
Balancing becomes less and less of a problem at scale. Neural networks don’t overfit in the same way that classical statistical models do.
@ericray7173 3 months ago
That's what overfitting is. Same thing with a certain trophy fish that AI networks learned to recognize only if it was in human hands lol.
@Stratelier 3 months ago
Didn't the mention of polysemanticity kind of vaguely touch on this? Whatever nodes are responsible for detecting the husky may also be biased toward an assumption that the husky is seen against a snowy/white backdrop, due in some part to the limits of its training data.
@gavinbowers137 3 months ago
The sound design and the animation were incredible in this one!
@guyblack9729 3 months ago
"there will also be lots of pictures of dogs" well count me the fuck in LET'S GOOOO
@loopuleasa 3 months ago
"To understand is to make a smaller system inside your head behave similarly to a larger system outside your head."
@BayesianBeing 2 months ago
Idk who said that but it's a great quote
@loopuleasa 2 months ago
@BayesianBeing I made it; I just added the quotes to mark that it is very important and ready to quote further
@LuigiSimoncini 2 months ago
Yes, a series of models, most probably multisemantic and dynamically refined ("trained") thanks to both our senses (a big limitation for LLMs, they're not embodied) and subsequent System 2 unconscious thinking (who knows, maybe even during sleep)
@luisgdelafuente A month ago
Human learning and understanding is not based on building smaller models, but on building abstractions with meaning. That's what AGI believers don't understand.
@RitmosMC 3 months ago
These videos are incredible, going into so much detail and yet managing to not lose the audience on the way- it’s amazing! The animation, the style, the music, everything! This channel is on par with Kurzgesagt and others in the educational animations genre, and the fact that it doesn’t have millions of subscribers is criminal. One of the top 10 channels on the entire platform for sure. Keep up the incredible work.
@chadowsrikatemo4494 3 months ago
Ok, that explanation of image generation (the one that made a snout inside a snout) was one of the best ones I've found yet. Good job!
@drdca8263 3 months ago
Note that this isn't quite the same method used in the tools whose purpose is generating general images, though it does have *some similarity* to some (most? all?) of those methods.
@animowany111 3 months ago
@@drdca8263 The idea of deep dream and that kind of neuron visualization is completely different from modern image-generating models using Diffusion. The things they optimize for are completely different. Diffusion tries to take a (conditional) sample from some kind of learned manifold in latent space that somehow represents the distribution of data seen in training, deep dream directly optimizes a point in a similar latent space to maximize a neuron or channel activation.
@smitchered 3 months ago
Thanks for educating the general public about AI and its dangers, Rational Animations! Your animations keep getting better and I still listen to your videos' soundtracks from time to time... thanks for all this effort you're pouring into the channel!
@michaelpapadopoulos6054 3 months ago
the maximally bad output soundtrack had some really catchy leitmotifs! Also just a banger video in general.
@VictorMartinez-zf6dt 3 months ago
My dog was in this video and I couldn’t be more happy when I saw her!
@ashleyjaytanna1953 3 months ago
My God was not in the video. Blasphemy!
@stronggaming2365 3 months ago
@@ashleyjaytanna1953 your what now?
@laibarehman8005 2 months ago
@@stronggaming2365 Dyslexia sniping is my best guess haha
@danielalorbi 3 months ago
This was the most absolutely delightful animation of a neural net I've ever seen, by far. Kudos to the visual and animation artists
@jaycee53 3 months ago
This channel is slept on 😮‍💨
@benedekfodor269 3 months ago
I suspect that will change, I'm so happy I found it.
@KrasBadan 3 months ago
16:11 It reminds me of a video by Welch Labs that I watched recently, about how Kepler discovered his laws. Basically he had a bunch of accurate data about the positions of Mars at some points in time and he wanted to figure out how it moves. He noticed that the speed at which it orbits the sun isn't uniform (it is faster in one half of the orbit and slower in the other), and he tried to take that into account. What he did was assume that the orbit is a circle, and inside that circle there is the sun, the center of the circle, and some point called the equant, all three lying on the same line. The idea of the equant is as follows: imagine a ray that rotates uniformly around the equant, and find the point at which it intersects the orbit. In that model, this point of intersection is where Mars should be at that moment. He had 4 parameters: the distance from the center of the circle to the sun, the distance to the equant, the speed of the ray, and the starting position of the ray. These 4 parameters can describe a wide range of possible motions. By lots of trial and error, Kepler fine-tuned these 4 parameters such that the maximum error was just 2 arcminutes, a hundred times more accurate than anyone else. This process of having a system that can describe almost anything and tuning it to describe what you want is similar to how neural networks recognize patterns. But after more optimization and taking more things into account, Kepler came to the conclusion that the orbit isn't a circle. He then tried tuning an egg-like shape, but it was worse than his old model, so he added more parameters. He assumed Mars orbits some orbit that itself orbits an orbit, and after noticing one thing about the angles in his data, he found perfect parameters that happen to perfectly describe an ellipse. The discovery of the elliptical orbit of Mars was the main thing that allowed Newton to develop his theory of gravity.
This is similar to how, given enough data and lots of trial and error, neural networks can generalize concepts and find underlying laws of nature.
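The Kepler analogy above can be made concrete: pick a flexible parametric model, then tune its parameters by trial and error to shrink the worst-case error against observations. A toy sketch with made-up data (a line y = 2x + 1 plus noise stands in for Kepler's orbital observations; the grid ranges are arbitrary assumptions):

```python
import numpy as np

# Hypothetical observations: y = 2x + 1 plus a little measurement noise.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.01 * rng.normal(size=x.size)

# Trial and error over the model's parameters, keeping whichever pair
# gives the smallest *maximum* error - the criterion Kepler used.
best, best_err = None, np.inf
for a in np.linspace(0.0, 4.0, 81):
    for b in np.linspace(0.0, 2.0, 41):
        err = np.abs(a * x + b - y).max()
        if err < best_err:
            best, best_err = (a, b), err

print(best)  # close to the true parameters (2.0, 1.0)
```

Gradient descent on a neural network's weights is the same game with millions of parameters and a smarter search than this brute-force grid.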
@gasun1274 3 months ago
It's called an empirical fit. Physics, engineering, and the modern world are built on that method. There is no better way.
@Billy4321able 3 months ago
It really feels like we need way more effort put into mechanistic interpretability. The mileage we're going to get from just designing new models and training on larger datasets is small. However, if we do basic research into how these models make the decisions they make, then we can steer them in the right direction much faster. I don't see us getting much further than we are today using the same brute-force methods. Machine learning techniques used today are the construction equivalent of building skyscrapers out of wood. Sure, maybe if you made them wide enough with strong enough wood it could work, but there are definitely better materials out there than wood.
@BluishGreenPro 3 months ago
These visualizations are incredibly helpful in understanding the topic; fantastic work!
@foxxiefox 3 months ago
Babe wake up rational animations posted
@bingusbongus9807 3 months ago
awake babe
@Jaggerbush 3 months ago
Corny. I hate this unoriginal lame comment. It's right up there with "first".
@SteedRuckus 3 months ago
​@@Jaggerbushbabe wake up, someone got mad about the thing
@sebas11tian 3 months ago
The thing is that if we keep quiet, we stop participating in picking which memes become widespread. I often found that the top comment used to be somewhat related to the topic instead of a blind and uninspired appreciation post for the creator.
@Niohimself 3 months ago
Every video on this channel is a banger
@duplicake4054 3 months ago
This is one of the best videos I have ever watched
@mentgold 3 months ago
the production quality of you guys is through the roof and somehow you still manage to improve with every video
@meaburror7653 3 months ago
best channel on yt right now
@onei3411 3 months ago
Does that mean we can write neural networks by hand now?
@DriPud 3 months ago
I have to admit, every single one of your videos makes me eager to learn. Thank you for such high-quality, entertaining content!
@PaulFidika 3 months ago
I love your Cartoon Network-esque art style; I don't even know what to call it
@BaxterAndLunala 3 months ago
2:21: "There will also be lots of pictures of dogs." Considering that every video I've seen from this channel has the same yellowish-orange dog in it, I'm not surprised he said we'd see pictures of dogs. Lots of pictures of dogs.
@atom1kcreeper605 3 months ago
Lol
@gabrote42 3 months ago
Another hit "This is going on my arguments and explanation playlist" moment
@GuyThePerson 3 months ago
I've already learnt more from this video than a week of school. Strange how you learn faster when stuff is actually explained well, isn't it?
@ege8240 3 months ago
Thing is, these videos are fun but not educational. You learn just the headlines of topics, not how they work.
@microwave221 3 months ago
No matter how many videos I watch on image recognition, I'm always caught off guard by how quickly it switches from seemingly abstract to almost entirely organic. Like, you see a small weighted matrix, then how applying it across an image suddenly gives you edge detection. Then you start trying to figure out what makes things look like things for you. The wildest part for me is how we all have no problem interpreting those tie-dye nightmare fractal-esque optimized images, likely because the meat neurons in our own minds use similar detection schemes. We can also see how animals evolve to undermine those systems, like how the white bellies on quadrupeds serve to offset their shadows, potentially defeating lower edge detection and making them appear flat and unassuming.
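The "small weighted matrix suddenly gives you edge detection" effect the comment above describes fits in a few lines. A minimal sketch (the Sobel-style kernel and the 5x5 toy image are illustrative choices; any similar vertical-edge kernel behaves the same way):

```python
import numpy as np

# A tiny image: dark left half, bright right half (one vertical edge).
img = np.zeros((5, 5))
img[:, 3:] = 1.0

# A small weighted matrix: a Sobel-style vertical-edge kernel.
k = np.array([[-1.0, 0.0, 1.0],
              [-2.0, 0.0, 2.0],
              [-1.0, 0.0, 1.0]])

# Slide the kernel over the image (valid cross-correlation).
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = (img[i:i + 3, j:j + 3] * k).sum()

print(out)  # zero over the flat dark region, large next to the edge
```

A convolutional layer's first stage is exactly this sliding multiply-and-sum, just with learned kernel weights instead of hand-picked ones.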
@Kayo2life 3 months ago
Those abstracted images made me start banging my head on my desk. They scare me, a lot. I think they activate my fight-or-flight response or something.
@microwave221 3 months ago
@@Kayo2life they are the distilled essence of a particular concept without the associated traits that are always found with them. I remember reading decades ago about the alert states of a system that has multiple sensors, and the dim, incomplete memory of that example seems relevant. Imagine a fire alarm system that checks for heat, looks for flame, and senses smoke. Detecting heat and flame but no smoke, or just smoke by itself are all conceivable possibilities, so the system would go into "low alert" and sound an alarm. However, if it saw smoke and fire but didn't detect any heat, it would be "high alert" because that shouldn't be possible, and it means something has gone very wrong. Basically an error code in more modern terms. I suspect those images are sending you into high alert for similar reasons.
@Kayo2life 3 months ago
@microwave221 thank you
@hamster8706 3 months ago
Seriously, this channel is so good. Why is it so underrated?
@airiquelmeleroy 3 months ago
I'm absolutely blown away by the quality of your recent videos. Love it. Keep it up!
@jasonholtkamp6483 3 months ago
I can't believe how high of quality these explanations and animations are... bravo!
@stardustandflames126 3 months ago
Another brilliant video, amazing animation and style and music and writing and info!
@4dragons632 3 months ago
Fantastic video! Seeing 17 minutes of this made my day. The explanation of how exactly people have worked out these weird neuron maximising images is fascinating to me, especially using robustness against change because without that you get a seemingly random mess of noise (although of course the noise won't be random, and if we could figure out what noise correlates to what then the work would be much further along)
@JamesSarantidis 3 months ago
You even provided sources for more research. This channel is a gem.
@4ffff2ee 2 months ago
whoever did the sound design on this video did wonderful work
@Nuthouse01 3 months ago
A truly outstanding video! I took some basic classes on machine learning in college but they mostly said "what the neural network is doing and how it works is unclear, don't worry about it, it just works". So I know about how fully connected and convolutional layers work, and I know how training and weight propagation works. But I never knew that we could delve so deeply into what each of these neurons actually means! Impressive!
@LucklessLex 3 months ago
I discovered this channel the other day and now I almost can't stop watching. The videos are just chock-full of information and detail that my brain wants to try to understand, and the visuals and animations are just too GOOD. The production, and the things you learn from this channel, make me want to join their team and use my skills for a good cause other than corporate marketing...
@howtoappearincompletely9739 3 months ago
This is really great, your most fascinating video to date. Well done, Rational Animations!
@khchosen1 3 months ago
Words can't express how much I love this content. Thank you, this is one of the best channels I've stumbled across. Simply amazing, please keep making these ❤
@dagnation9397 3 months ago
The fact that there are identifiable patterns in the neural network that can almost be "cut and pasted" makes me think that there might be more intentional building of machine learning tools in the future. It reminds me of Conway's Game of Life. If you turn on the program and just start dragging the mouse around to generate points, there will often be gliders and the little things that just blink in place. Some people were inspired by these, and discovered and developed more of the little tools. Now there are (very simple) computers written in Conway's Game of Life. In a similar way, the little neural network patterns might also be the building blocks of more robust, but also more predictable, machine learning tools in the future.
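The Game of Life analogy above is easy to check in code: a glider really is a reusable "component" you can drop into a grid, and it reproduces itself one cell diagonally every four generations. A minimal sketch (toroidal wrap-around is an implementation convenience, not part of Conway's original rules):

```python
import numpy as np

def life_step(g):
    # Count live neighbors by summing the 8 shifted copies of the grid
    # (np.roll wraps around the edges, i.e. a toroidal board).
    n = sum(np.roll(np.roll(g, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    # Conway's rules: birth on 3 neighbors, survival on 2 or 3.
    return (n == 3) | (g & (n == 2))

# Drop a glider onto an empty 8x8 grid.
g = np.zeros((8, 8), dtype=bool)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    g[r, c] = True

g4 = g
for _ in range(4):
    g4 = life_step(g4)

# After 4 generations the glider is the same shape, shifted down-right by one.
print((g4 == np.roll(np.roll(g, 1, 0), 1, 1)).all())  # True
```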
@Stroporez 3 months ago
Absolutely gorgeous video.
@_Mute_ 3 months ago
Brilliant! I understood in a general sense how neural networks work, but this solidified more of the core concepts for me!
@ConradPino 3 months ago
Fascinating and IMO, very well done. Thank you!
@theeggtimertictic1136 3 months ago
Captivating animation ... very well explained 😊
@jonbrouwer4300 3 months ago
Loved this. Understandable but in-depth explanation of the topic. And the music is sweet too!
@ankitsharma1072 3 months ago
What a pleasure! I just added that circuits paper to my reading list. Thank you ❤
@ptrckqnln 3 months ago
Exceptionally well produced, informative, and accessible to laypeople. Bravo!
@SisterSunny 3 months ago
I love how you always manage to make me feel like I sort of understand what you're talking about, while also understanding just how little I actually know about the subject
@irfanjaafar3570 2 months ago
An educational video with 10+ minutes of animation; this channel deserves 1 million subs
@skyswimmer290 2 months ago
This is so hard to wrap my head around. I'm used to regular programming, where you know what it is you're making down to the smallest of details, but neural networks... gosh. This video is highly informative and very well made
@tygrdragon 2 months ago
holy cow, the rabbit hole just keeps going! This is such an interesting topic to me. At first I just thought AI algorithms were really big evolution simulators, but this might actually prove otherwise. Being able to understand vaguely what an AI is actually thinking is amazing, especially since it doesn't even really know either. I really hope to see more videos like this in the future; this is so cool. This channel is definitely one of the best educational channels I've ever seen, even on par with Kurzgesagt! I really appreciate all the sources in the description, too! There's legitimately more to find about neural networks in this one YouTube video than in a Google search. Most other educational channels don't have direct sources, making it really difficult to find good info a lot of the time.
@JonathanPlasse 3 months ago
Awesome explanation, thank you. I finally understood how these dreamy pictures are made.
@zyansheep 3 months ago
I wonder if the brain has polysemanticity? And if it does, to what degree? I imagine it might have a little, but given that we are not fooled by images that fool convolutional networks, perhaps our brains have ways to minimize polysemanticity or limit its effects? What would happen if we tried to limit it in neural networks? Would it even be trainable like that?
@light8258 3 months ago
Our brains use sparse distributed networks. At a given moment, only 2% of our neurons are active. That way, there is way less overlap between different activation patterns. One neuron doesn't represent anything; it's always a combination of active and inactive states that creates a representation. Dense neural networks work completely differently from our brains. Of course all of our neurons activate for different patterns, but that is not relevant for what they represent, unlike in convolutional neural networks. That being said, there are sparse distributed representations (engrams) that have similar activation patterns, so polysemanticity does exist in the brain; it's just different from CNNs.
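The point about sparse codes overlapping less can be checked directly: with 1,000 units, two random dense patterns (50% active) share hundreds of units, while two random sparse patterns (2% active, the figure from the comment above) share almost none. A quick illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # number of "neurons"

def pair_overlap(density):
    # Draw two random binary activation patterns and count shared active units.
    a = rng.random(n) < density
    b = rng.random(n) < density
    return int((a & b).sum())

dense = pair_overlap(0.5)    # expected overlap: n/4, i.e. ~250 units
sparse = pair_overlap(0.02)  # expected overlap: n/2500, i.e. ~0 units
print(dense, sparse)
```

Low expected overlap is what lets a sparse code assign nearly non-interfering patterns to many distinct concepts, which is one way to reduce the polysemanticity pressure described in the video.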
@rytan4516 3 months ago
While I don't know for sure, I can guess that polysemanticity does occur in human brains because of the existence of things like synesthesia.
@elplaceholder 3 months ago
We could be fooled but we don't notice it.
@reinf4430 3 months ago
Maybe we don't have exactly the same flaws as NNs, but we do have quirks such as optical illusions; variability between people in the "mind's eye" (seeing in imagination, or nothing at all in aphantasia); pareidolia (seeing faces in random shadows); and duplicate neural centers for the "same things" which can disagree (clinical cases of people knowing someone but being unable to recognize them by their face, or the opposite: knowing the face is correct but feeling the person is not real but a robot duplicate).
@gasun1274 3 months ago
Hallucinations. Out of which came superstitions, religion, and violence.
@FutureAIDev2015 3 months ago
8:46 that looks like something straight out of the book of Ezekiel...😂 BE NOT AFRAID...
@capslfern2555 2 months ago
Biblically accurate dog
@parz1val205 2 months ago
This is amazing. I never knew before exactly how individual neurons in AIs even detect parts of images. You explained it in the first five minutes and then kept going. Please keep making such informative and AMAZINGLY animated videos!
@petersmythe6462 3 months ago
Safety/alignment/human feedback related pathways are often very specific unless the training procedure involved substantial red-teaming, while the pathways they protect are often very general. This is why models can often be motivated into doing things they "won't" or "can't" do with some small changes in wording or anti-punt messages like "don't respond in the first person."
@aakanksha_nc 3 months ago
Fantastic work! This needs to reach masses!❤
@_shadow_1 3 months ago
I have a prediction. I think the trend of increasing total parameter counts will actually stagnate or even reverse, and that extremely advanced AI models might be even smaller than the models we currently have. This would be similar to how people have, over the years, optimized Super Mario 64's code to run more efficiently on the original hardware, using techniques that may not even have existed when the original hardware or code was first created.
@mira_nekosi 2 months ago
lmao, there are regularly new models that beat models many times their size, or that are at least better at the same size, and new and different architectures are also being explored
@rbm_md 11 days ago
What a brilliant explanation! The ability to explain technical concepts in clear language is truly a commendable one! 😀
@Words-. 1 month ago
Truly one of the best informational videos I've ever watched, very fun and extremely informative
@graxxor 2 months ago
You guys are giving Kurzgesagt a real run for their money with this video. This video is incredibly informative and deserves 10x the number of views.
@FerousFolly 8 days ago
I believe by far the most important direction for AI research right now is developing a way to reliably, consistently, and repeatably decompile deep convolutional models with maximal interpretability. As much as we like to think the risks of AGIs are still in the distant future, it's impossible to say for sure. Given the feats models like GPT-4o have shown themselves capable of, I think they will be capable of learning to decompile complex AI models into maximally interpretable neuron maps. It would be an immense challenge to develop a method of training such an AI, but given the leaps and bounds we've made recently, I refuse to believe it's an unattainable goal in the very near future, and I would submit that it's the single most important goal to pursue within the field.
@yukko_parra 3 months ago
Dayum, imagine neurologists examining AIs and computer scientists examining brains. Did not expect a future where that could be possible.
@amiplin1751 3 months ago
Truly an amazing and informative video, keep up the great work!!
@YaroslaffFedin 2 months ago
I'm a software engineer who was sort of glancing over the AI stuff for the longest while, and this video helped me actually keep my focus on the topic and get a bit more understanding. Thank you!
@22Kalliopa 1 month ago
I can’t be the only one who keeps seeing the style of the monsters inc intro. Love it
@pavlodeshko 3 months ago
let's drink every time he says "we don't know"
@cornevanzyl5880 2 months ago
It's the way you articulate the content: deliberate, carefully chosen words that beautifully and succinctly describe the topic. Something only a master in the field can achieve.
@QuantumConundrum 2 months ago
Finally, something I can share with my parents and other folks. This explanation is the perfect depth and covers a lot of territory in a digestible way.
@draken5379 3 months ago
Great video. It's the perfect counter to "LLMs just predict the next word".
@LaukkuPaukku 3 months ago
Predicting the next word has a high skill ceiling, the more you understand about the world the better you are at it.
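The idea that next-word prediction has a skill floor as well as a ceiling can be made concrete with the simplest possible predictor, a bigram table (a toy sketch over an invented one-line corpus; real LLMs learn vastly richer statistics, which is exactly the commenter's point):

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the corpus."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for word, nxt in zip(words, words[1:]):
        follows[word][nxt] += 1
    return follows

def predict_next(follows, word):
    """Most frequent continuation seen in training; None if the word is unseen."""
    options = follows.get(word.lower())
    return options.most_common(1)[0][0] if options else None

corpus = "the dog chased the ball and the dog caught the ball"
model = train_bigrams(corpus)
```

Everything a bigram table cannot capture (long-range context, world knowledge, reasoning) is headroom that a better next-word predictor has to fill by understanding more.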
@abejack7764 3 months ago
10:30 I have never felt more understood.
@pideperdonus2 3 months ago
If you didn't know, the Rational Animations team adores dogs.
@ansumansamal8473 3 months ago
Love your animations 🔥👾
@sirdumpybear 3 months ago
this describes it so well, thanks!
@N8O12 3 months ago
I like how this feels like a video about the biology of some creature, with experiments and interpreting the data from those experiments and unexpected discoveries - except it's about computers.
@ArneBab 3 months ago
That’s a beautiful demystification of machine learning systems! Also: could we replace the "organic" curve detectors with a systematically *designed* layer and push the trained part away from the simpler areas?
@Schadrach42 3 months ago
My first thought to "why cars?" is that it's detecting things that resemble faces (or a subset of faces), and we intentionally design cars to have "faces" because of people's tendency to anthropomorphize things, which means cars with "faces" sell better because they can more easily be given a "personality" to potential buyers. I'd be curious if it activates more or less on images of cars that don't show the front end or don't show as much of the front end.
@PastaEngineer 11 days ago
My students are going to love this. Amazing job on the video pacing and graphics
@yuvrajsingh-gm6zk 1 month ago
If it still doesn't make sense, might I recommend 3Blue1Brown's series on this exact topic, spread across ~6 videos, which goes really into the nitty-gritty of what actually happens under the hood of a neural network when it "learns" something. (And I, for one, find it's a good idea to check out as many different perspectives and treatments as possible to really get the depth of the subject matter.)
@Daniel-li6gu 3 months ago
Rational Animations is the GOAT. It's crazy how this channel appeared out of nowhere and started uploading high-quality content from the get-go.
@azhuransmx126 2 months ago
There are 3 basic input stimuli that activate the neurons of the thalamus in animal and human brains: sources of danger, sources of food, and sources of sex. And for me there is a 4th input: when Rational Animations posts.
@Scaryder92 2 months ago
I am an AI researcher in computer vision and this animation is among the most beautiful things I've ever seen
@convincingmountain 3 months ago
I really enjoy these simplified explanations of all the complex and far-reaching corners of AI; it really removes the black-box aspect that makes AGI so terrifying to the layperson (i.e. me). I've always enjoyed all the kinds of videos you make. Thank you for all your efforts!
@vms_kt 2 months ago
The sheer hard work in this video. Hats off! Subbed.
@AB-wf8ek 3 months ago
There was a recent paper in the journal "Neuron" titled "Mixed selectivity: Cellular computations for complexity" that covers the idea that a single neuron can play a role in more than one function at a time. It seems to correlate with the concepts in this video. To me, it seems intuitive: the fact that we can make analogies, and the ability of art to retain multiple layers of meaning, must come from an innate ability for single points of information to serve multiple functions. If I had to summarize what neural networks are doing, it would be mapping the relationships between pieces of information in a given dataset along multidimensional lines.
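The analogy idea in this comment maps directly onto vector arithmetic in an embedding space. Here is a hand-made sketch (the words, the three dimensions, and every number are invented for illustration; real models learn hundreds of dimensions from data):

```python
# Toy 3D "embeddings": dimensions loosely stand for (royalty, maleness, humanness).
vectors = {
    "king":  (0.9, 0.8, 1.0),
    "queen": (0.9, -0.8, 1.0),
    "man":   (0.1, 0.8, 1.0),
    "woman": (0.1, -0.8, 1.0),
}

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def nearest(target, vocab):
    """Word whose vector is closest (squared Euclidean distance) to the target point."""
    def dist(word):
        return sum((x - y) ** 2 for x, y in zip(vocab[word], target))
    return min(vocab, key=dist)

# king - man + woman lands on queen: the "maleness" direction flips,
# while the "royalty" direction is preserved.
analogy = nearest(add(sub(vectors["king"], vectors["man"]), vectors["woman"]), vectors)
```

The single points of information here are coordinates along shared directions, which is one simple way a network can make one unit of storage serve multiple relationships at once.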
@AB-wf8ek 3 months ago
Interesting. I did a search for "mixed selectivity polysemanticity" and found a paper released in December 2023, "What Causes Polysemanticity? An Alternative Origin Story of Mixed Selectivity from Incidental Causes". Looks like there are researchers making the connection as well.
@Judbutnotspud 3 months ago
16:01 I used the ai to learn the ai - thanos
@georgesanchez8051 2 months ago
0:46 Just a correction, Meta’s largest model is their 400b parameter variant of Llama 3, and the largest that we’ve likely seen so far has been GPT-4, which is estimated to have somewhere in the neighborhood of 1.5 trillion parameters (some simplification but directionally accurate)
@gabrielwolffe 3 months ago
The polysemanticity you described, on surface inspection at least, would appear to resemble pareidolia in humans. Perhaps if we want to make AIs that think like humans, this could be considered a feature rather than a bug.
@AntoniGawlikowski 2 months ago
This video is so great! It immediately reminded me of Anil Seth's TED talk about consciousness, where he showed a similar dogginess-maximizing video.
@tacticalassaultanteater9678 3 months ago
I loved this explanation of the topic, this is my new reference video.
@Tubeytime 3 months ago
This explains NNs in a way that almost anyone can understand, as a starting point at least. Really enjoyed it!
@andriydjmil2589 3 months ago
This video is such an amazing overview of the current state of neural networks. It sums up a lot of important concepts in an extremely well-packaged format. You are amazing!
@Scrogan 3 months ago
Great video!
@SupLuiKir 3 months ago
If we can use AI as a tool to help with mechanistic interpretability, we can also use AI to help obfuscate attempts to interpret AI. There will even be a market incentive to ensure a corporation's AI cannot be interpreted, so that it remains a black box to outsiders.
@drdca8263 3 months ago
Mechanistic interpretability requires access to the weights. I don't see why a party would both want to release the weights of their model and also want to prevent mechanistic interpretability work from being done on it.
@SupLuiKir 3 months ago
@@drdca8263 There might be use cases where it isn't viable to run the model from a remote server. For example, a deep-space probe running a neural net would have to wait a relative eternity to get its answers if it had to ping back to Earth for every query. Therefore, it would have to pack the weights into an executable to be run on local hardware. The company developing such a program will likely not be the same company running the rockets, or NASA for that matter, so there will be situations where the file containing the weights, however encrypted, exists in a place outside its owner's control, which makes it vulnerable to corporate espionage. Being able to obfuscate the weights would be yet another layer of defense.
@neologicalgamer3437 2 months ago
Right, so with polysemanticity, it could function like an AND gate: if this neuron that detects [cars, cats, foxes] activates *and* this other neuron that detects large areas of reflectiveness also activates, combining the two means it's likely a car out of the three options.
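That AND-gate reading can be sketched in a few lines (hypothetical feature detectors invented for illustration, not actual InceptionV1 units):

```python
def animal_or_car_unit(features):
    """Polysemantic unit: fires for cars, cats, or foxes; ambiguous on its own."""
    return any(features.get(k, False) for k in ("car", "cat", "fox"))

def reflective_unit(features):
    """Fires on large reflective areas in the image."""
    return features.get("reflective", False)

def car_detector(features):
    """Downstream AND gate: co-activation disambiguates the polysemantic unit."""
    return animal_or_car_unit(features) and reflective_unit(features)
```

Note the failure mode the sketch also exposes: a sufficiently shiny fox would still trip the detector, which is one reason polysemanticity makes interpretation hard.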
@bosmicc 3 months ago
This video’s graphics reminded me of the Monster’s Inc intro sequence