What Do Neural Networks Really Learn? Exploring the Brain of an AI Model

208,122 views

Rational Animations

A day ago

Comments: 605
@RationalAnimations
@RationalAnimations 6 ай бұрын
This topic is truly a rabbit hole. If you want to learn more about this important research and even contribute to it, check out this list of sources about mechanistic interpretability and interpretability in general we've compiled for you:

On interpreting InceptionV1:
Feature Visualization: distill.pub/2017/feature-visualization/
Zoom In: An Introduction to Circuits: distill.pub/2020/circuits/zoom-in/
The Distill journal contains several articles that try to make sense of how exactly InceptionV1 does what it does: distill.pub/2020/circuits/
OpenAI's Microscope tool lets us visualize the neurons and channels of a number of vision models in great detail: microscope.openai.com/models
Here's OpenAI's Microscope tool pointed at layer Mixed3b in InceptionV1: microscope.openai.com/models/inceptionv1/mixed3b_0?models.op.feature_vis.type=channel&models.op.technique=feature_vis
Activation Atlases: distill.pub/2019/activation-atlas/
More recent work applying sparse autoencoders (SAEs) to uncover more features in InceptionV1 and decompose polysemantic neurons: arxiv.org/abs/2406.03662v1
The Transformer Circuits Thread, the spiritual successor of the circuits thread on InceptionV1, this time on transformers: transformer-circuits.pub/
In the video, we cite "Toy Models of Superposition": transformer-circuits.pub/2022/toy_model/index.html
We also cite "Towards Monosemanticity: Decomposing Language Models With Dictionary Learning": transformer-circuits.pub/2023/monosemantic-features/

More recent progress:
Mapping the Mind of a Large Language Model:
Press: www.anthropic.com/research/mapping-mind-language-model
Paper in the Transformer Circuits thread: transformer-circuits.pub/2024/scaling-monosemanticity/index.html
Extracting Concepts from GPT-4:
Press: openai.com/index/extracting-concepts-from-gpt-4/
Paper: arxiv.org/abs/2406.04093
Browse features: openaipublic.blob.core.windows.net/sparse-autoencoder/sae-viewer/index.html
Language models can explain neurons in language models (cited in the video):
Press: openai.com/index/language-models-can-explain-neurons-in-language-models/
Paper: openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html
View neurons: openaipublic.blob.core.windows.net/neuron-explainer/neuron-viewer/index.html

Neel Nanda on how to get started with mechanistic interpretability:
Concrete Steps to Get Started in Transformer Mechanistic Interpretability: www.neelnanda.io/mechanistic-interpretability/getting-started
Mechanistic Interpretability Quickstart Guide: www.neelnanda.io/mechanistic-interpretability/quickstart
200 Concrete Open Problems in Mechanistic Interpretability: www.alignmentforum.org/posts/LbrPTJ4fmABEdEnLf/200-concrete-open-problems-in-mechanistic-interpretability

More work mentioned in the video:
Progress measures for grokking via mechanistic interpretability: arxiv.org/abs/2301.05217
Discovering Latent Knowledge in Language Models Without Supervision: arxiv.org/abs/2212.03827
Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning: www.nature.com/articles/s41551-018-0195-0
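To make the sparse-autoencoder (SAE) idea from the reading list above a bit more concrete, here is a rough, self-contained sketch of dictionary learning on a model's activations. The layer width, dictionary size, ReLU encoder, and L1 coefficient are illustrative assumptions, not the architecture from any of the linked papers:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy SAE: maps d-dim activations to an overcomplete, mostly-zero
    feature vector and back. All sizes here are illustrative only."""
    def __init__(self, d_model=512, d_features=4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations):
        features = torch.relu(self.encoder(activations))  # sparse, non-negative codes
        reconstruction = self.decoder(features)
        return reconstruction, features

def sae_loss(reconstruction, activations, features, l1_coeff=1e-3):
    # Reconstruction error plus an L1 penalty pushing most features to zero.
    mse = ((reconstruction - activations) ** 2).mean()
    sparsity = features.abs().mean()
    return mse + l1_coeff * sparsity

# Usage sketch: 'acts' would be activations collected from some layer of a model.
acts = torch.randn(64, 512)          # stand-in batch of activations
sae = SparseAutoencoder()
recon, feats = sae(acts)
loss = sae_loss(recon, acts, feats)
loss.backward()
```

Training something like this on stored activations and then inspecting which inputs make each feature fire is the basic workflow the linked SAE papers build on.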
@pyeitme508
@pyeitme508 6 ай бұрын
WOW!
@EmmanuelMess
@EmmanuelMess 6 ай бұрын
What's the source for "somehow it learnt to tell people's biological sex" (at the start)? It really sounds like bias in the data.
@RationalAnimations
@RationalAnimations 6 ай бұрын
@@EmmanuelMess It's the very last link
@EmmanuelMess
@EmmanuelMess 6 ай бұрын
Thanks! It seems that other papers also confirm that biological sex identification is possible from fundus images.
@ifoxtrot171gg
@ifoxtrot171gg 6 ай бұрын
as someone who uses neurons to classify images, i too am activated by curves.
@dezaim7288
@dezaim7288 6 ай бұрын
As a mass of neurons i can relate to being activated by curves.
@hellofellowbotsss
@hellofellowbotsss 6 ай бұрын
Same
@nxte8506
@nxte8506 6 ай бұрын
deadass
@a31-hq1jk
@a31-hq1jk 6 ай бұрын
I get activated by straight thick lines
@ThatGuyThatHasSpaghetiiCode
@ThatGuyThatHasSpaghetiiCode 6 ай бұрын
Especially the female ones
@user-qw9yf6zs9t
@user-qw9yf6zs9t 6 ай бұрын
Anyone else surprised that there isn't an AI model that ranks people's "beauty-ness" from 1-100? Honestly a great start-up idea: just use people's ranking data.
@cheeseaddict
@cheeseaddict 6 ай бұрын
You guys shouldn't overwork yourselves 😭 17 minutes of high-quality animation and info. Seems like Kurzgesagt got competition 👀
@dissahc
@dissahc 6 ай бұрын
kurzgesagt could only dream of possessing this much style and substance.
@Puppeteer_in_the_Void
@Puppeteer_in_the_Void 6 ай бұрын
I feel like kurzgesagt has slacked off on writing quality, so I'm glad it may have to fight to not be replaced
@ExylonBotOfficial
@ExylonBotOfficial 6 ай бұрын
This is so much more informative than any of the recent kurzgesagt videos
@raph2550
@raph2550 6 ай бұрын
The recent Kurzgesagt videos are just ads, so...
@Restrocket
@Restrocket 6 ай бұрын
Use AI to generate video and text instead
@CloverTheBean
@CloverTheBean 6 ай бұрын
I really appreciate your simplification without dumbing it down to just noise. I've been wondering about how neural networks operate. Not that I'm a student or trying to apply it for any reason. I just love having nuggets of knowledge to share around with friends!
@miguelmalvina5200
@miguelmalvina5200 6 ай бұрын
KARKAT LETS GO
@average-neco-arc-enjoyer
@average-neco-arc-enjoyer 6 ай бұрын
@@miguelmalvina5200 they got the HS:BC karkat
@CloverTheBean
@CloverTheBean 5 ай бұрын
@@miguelmalvina5200 YES YOU'RE THE FIRST ONE IN THE WILD WHO RECOGNIZED IT
@miguelmalvina5200
@miguelmalvina5200 5 ай бұрын
@@CloverTheBean I wasn't expecting to find a homestuckie in a very niche AI video honestly, pretty nice
@eddyr1041
@eddyr1041 5 ай бұрын
The wondrous philosophical thought of what a brain is...
@theweapi
@theweapi 6 ай бұрын
Polysemanticity makes me think of how we can see faces in things like cars and electrical sockets. The face detection neurons are doing multiple jobs, but there is not a risk of mixing them up because of how vastly different they are. This may also explain the uncanny valley, where we have other neurons whose job it is to ensure the distinction is clear.
@christianhall3916
@christianhall3916 3 ай бұрын
I don't think that's what's going on with face detection. It's just highly advantageous in natural selection to tell if something has a face or not, because that's a sign that it could be alive. So face detection is overeager because it's so important to know if there's even a slight chance something has a face. Two dots and a line or curve is all it takes for us to see a face in anything.
@Nikolas_Davis
@Nikolas_Davis 6 ай бұрын
The delicious irony, of course, is that AI started out as a research field with the purpose to understand our _own biological intelligence_ by trying to reproduce it. Actually building a practical tool was a distant second, if even considered. But hardly anyone remembers that now, when AI systems are developed for marketable purposes. So, now that AI (kinda) works, we're back to square one, trying to understand _how_ it works - which was the issue we had with our own wetware in the first place! Aaargh!! But all is not lost, because we can prod and peek inside our artificial neural networks much easier than we can inside our noggins. So, maybe there is net progress after all.
@superagucova
@superagucova 6 ай бұрын
A cool fact is that interpretability efforts *have* led to some progress in neuroscience. Google has a cool paper drawing analogies between the neuroscience understanding of the human visual cortex and specific types of convolutional neural networks, and this has seeped into the neuroscience literature.
@miniverse2002
@miniverse2002 6 ай бұрын
Considering the brain is the most complicated thing known in the Universe other than the Universe, I would think understanding it would still be a whole other challenge compared to our "simple" reproductions, even if we could prod and peek inside our own brains as easily.
@jackys_handle
@jackys_handle 5 ай бұрын
"Wetware" I'm gonna use that
@israrbinmi2856
@israrbinmi2856 5 ай бұрын
and then they use a more advanced AI (GPT-4) to understand the lesser one (GPT-2); the irony is an onion
@antonzhdanov9653
@antonzhdanov9653 5 ай бұрын
​@@israrbinmi2856Its weird but it makes sense bcs more advanced AI is literally tasked with vivisection of less advanced to discern and show what each piece of less advanced is doing and letting skinbags interprete given results.
@DanksPlatter
@DanksPlatter 6 ай бұрын
These videos are perfect edutainment, and it's crazy how much detail goes even into background stuff like sound effects and music.
@E.Hunter.Esquire
@E.Hunter.Esquire 6 ай бұрын
Ur mum
@Jan12700
@Jan12700 6 ай бұрын
5:18 But that's exactly what leads to extreme misjudgments if the data isn't 100% balanced, and you never manage to get anything to be 100% balanced. With dogs in particular, huskies were only recognized if the background was white, because almost all the training data with huskies was in the snow.
@Kevin-cf9nl
@Kevin-cf9nl 6 ай бұрын
On the other hand, "is in the snow" is a genuinely good way to distinguish huskies from other dogs in an environment with low information. I wouldn't call that an "extreme misjudgement", but rather a good (if limited) heuristic.
@austinrimel7860
@austinrimel7860 6 ай бұрын
Good to know.
@superagucova
@superagucova 6 ай бұрын
Balancing becomes less and less of a problem at scale. Neural networks don’t overfit in the same way that classical statistical models do.
@ericray7173
@ericray7173 6 ай бұрын
That’s what overfitting is. Same thing with a certain trophy fish that A.I. networks learned to only recognize if it was in human hands lol.
@Stratelier
@Stratelier 6 ай бұрын
Didn't the mention of polysemanticity kind of vaguely touch on this? Whatever nodes are responsible for detecting the husky may also be biased toward an assumption that the husky is seen against a snowy/white backdrop, due in some part to the limits of its training data.
@jddes
@jddes 6 ай бұрын
This is something I've been going on about for a long time. The real prize isn't in getting machines to do stuff for us, it will be using and adapting the shortcuts they are able to find. We just need to learn...what they've learned.
@r.connor9280
@r.connor9280 6 ай бұрын
The world's most complex homework stealing scheme
@terdragontra8900
@terdragontra8900 6 ай бұрын
This is… not true, I think. Neural networks will (and already have) learned algorithms that are simply too complicated and weird for a human brain to follow.
@jakebrowning2373
@jakebrowning2373 6 ай бұрын
​@@terdragontra8900 yes
@joaomrtins
@joaomrtins 6 ай бұрын
Rather than learning _what_ they learned the real prize is learning _how_ they learn.
@E.Hunter.Esquire
@E.Hunter.Esquire 6 ай бұрын
​@@joaomrtins for what? Won't do us much good
@KrasBadan
@KrasBadan 6 ай бұрын
16:11 It reminds me of a video by Welch Labs that I watched recently. It was about how Kepler discovered his laws. Basically he had a bunch of accurate data about the positions of Mars at some points in time and he wanted to figure out how it moves. He noticed that the speed at which it orbits the sun isn't uniform, it is faster in one half of the orbit and slower in the other, and he tried to take that into account. What he did was assume that the orbit is a circle, and inside that circle there is the sun, the center of the circle and some point called the equant, and all these 3 points lie on the same line. The idea of the equant is as follows: imagine a ray that rotates uniformly around the equant, and find the point at which it intersects the orbit. In that model, this point of intersection is where Mars should be at that moment. He had 4 parameters: the distance from the center of the circle to the sun, the distance to the equant, the speed of the ray and the starting position of the ray. These 4 parameters can describe a wide range of possible motions. By doing lots of trial and error, Kepler fine-tuned these 4 parameters such that the maximum error was just 2 arcminutes, a hundred times more accurate than anyone else. This process of having a system that can describe almost anything and tuning it to describe what you want is similar to how neural networks recognise patterns. But after more optimisation and taking more things into account, Kepler came to the conclusion that the orbit isn't a circle. He then tried tuning a shape similar to an egg, but it was worse than his old model, so he added more parameters. He assumed Mars orbits some orbit that itself orbits an orbit, and after noticing one thing about the angles in his data, he found perfect parameters that happen to perfectly describe an ellipse. The discovery of the elliptical orbit of Mars was the main thing that allowed Newton to develop his theory of gravity. This is similar to how, given enough data, by lots of trial and error neural networks can generalize concepts and find the underlying laws of nature.
@gasun1274
@gasun1274 6 ай бұрын
It's called an empirical fit. Physics, engineering, and the modern world are built on that method. There is no better way than that.
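As a toy illustration of the "empirical fit" idea in the comment above (a flexible parametric model tuned by trial and error until its worst-case error against the observations is small), here is a minimal sketch. The sinusoidal model and fake data are made up for illustration and have nothing to do with Kepler's actual equant construction:

```python
import numpy as np

# Made-up "observations": positions sampled from a hidden sinusoidal law plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
observed = 3.0 * np.sin(1.7 * t + 0.4) + rng.normal(0, 0.05, t.size)

def model(params, t):
    amplitude, frequency, phase = params
    return amplitude * np.sin(frequency * t + phase)

def max_error(params):
    # Worst-case disagreement between the model and the data (Kepler's "2 arcminutes" role).
    return np.max(np.abs(model(params, t) - observed))

# Crude trial-and-error search: keep a random parameter tweak only if it reduces the worst error.
params = np.array([1.0, 1.0, 0.0])
for step in range(20000):
    candidate = params + rng.normal(0, 0.01, 3)
    if max_error(candidate) < max_error(params):
        params = candidate

print("fitted parameters:", params, "max error:", max_error(params))
```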
@guyblack9729
@guyblack9729 6 ай бұрын
"there will also be lots of pictures of dogs" well count me the fuck in LET'S GOOOO
@RitmosMC
@RitmosMC 6 ай бұрын
These videos are incredible, going into so much detail and yet managing to not lose the audience on the way- it’s amazing! The animation, the style, the music, everything! This channel is on par with Kurzgesagt and others in the educational animations genre, and the fact that it doesn’t have millions of subscribers is criminal. One of the top 10 channels on the entire platform for sure. Keep up the incredible work.
@gavinbowers137
@gavinbowers137 6 ай бұрын
The sound design and the animation were incredible in this one!
@chadowsrikatemo4494
@chadowsrikatemo4494 6 ай бұрын
Ok, that explanation of image generation (the one that made a snout inside a snout) was one of the best ones I've found yet. Good job!
@drdca8263
@drdca8263 6 ай бұрын
Note that this isn't quite the same method used in the tools whose purpose is generating general images, though it does have *some similarity* to some (most? all?) of those methods.
@animowany111
@animowany111 6 ай бұрын
@@drdca8263 The idea of deep dream and that kind of neuron visualization is completely different from modern image-generating models using Diffusion. The things they optimize for are completely different. Diffusion tries to take a (conditional) sample from some kind of learned manifold in latent space that somehow represents the distribution of data seen in training, deep dream directly optimizes a point in a similar latent space to maximize a neuron or channel activation.
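For readers following this sub-thread, here is a rough sketch of the activation-maximization loop being described (the "deep dream"-style visualization), assuming a torchvision GoogLeNet (InceptionV1) as a stand-in model. The layer, channel index, learning rate, and lack of preprocessing are all illustrative choices, and the exact weights API depends on the torchvision version:

```python
import torch
import torchvision

# Pretrained GoogLeNet (InceptionV1) as a stand-in for the model from the video.
model = torchvision.models.googlenet(weights="DEFAULT").eval()

activations = {}
def hook(module, inputs, output):
    activations["target"] = output
model.inception4a.register_forward_hook(hook)  # pick some mid-level layer

# Start from noise and ascend the gradient of one channel's mean activation.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)
channel = 97  # arbitrary channel index

for step in range(200):
    optimizer.zero_grad()
    model(image)
    loss = -activations["target"][0, channel].mean()  # maximize the activation
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)  # keep pixel values in a valid range

# Without extra regularization (jitter, blur, low-frequency parameterization), the
# result tends toward adversarial-looking static rather than a clean feature image.
```

Diffusion-based image generators, as the comment above notes, optimize something entirely different; this loop only climbs one unit's activation.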
@loopuleasa
@loopuleasa 6 ай бұрын
"To understand is to make a smaller system inside your head behave similarly to a larger system outside your head."
@BayesianBeing
@BayesianBeing 5 ай бұрын
Idk who said that but it's a great quote
@loopuleasa
@loopuleasa 5 ай бұрын
@@BayesianBeing I came up with it; I just added the quotation marks to signal that it's important and ready to be quoted further.
@LuigiSimoncini
@LuigiSimoncini 5 ай бұрын
Yes, a series of models, most probably multisemantic and dynamically refined ("trained") thanks to both our senses (a big limitation for LLMs, since they're not embodied) and subsequent System 2 unconscious thinking (who knows, maybe even during sleep).
@luisgdelafuente
@luisgdelafuente 4 ай бұрын
Human learning and understanding are not based on building smaller models, but on building abstractions with meaning. That's what AGI believers don't understand.
@rheamad
@rheamad 7 сағат бұрын
@@luisgdelafuente wait say more
@smitchered
@smitchered 6 ай бұрын
Thanks for educating the general public about AI and its dangers, Rational Animations! Your animations keep getting better and I still listen to your video's soundtracks from time to time... thanks for all this effort you're pouring into the channel!
@michaelpapadopoulos6054
@michaelpapadopoulos6054 6 ай бұрын
the maximally bad output soundtrack had some really catchy leitmotifs! Also just a banger video in general.
@vectorhacker-r2
@vectorhacker-r2 6 ай бұрын
My dog was in this video and I couldn’t be more happy when I saw her!
@ashleyjaytanna1953
@ashleyjaytanna1953 6 ай бұрын
My God was not in the video. Blasphemy!
@stronggaming2365
@stronggaming2365 6 ай бұрын
@@ashleyjaytanna1953 your what now?
@laibarehman8005
@laibarehman8005 6 ай бұрын
@@stronggaming2365 Dyslexia sniping is my best guess haha
@danielalorbi
@danielalorbi 6 ай бұрын
This was the most absolutely delightful animation of a neural net I've ever seen, by far. Kudos to the visual and animation artists.
@dagnation9397
@dagnation9397 6 ай бұрын
The fact that there are identifiable patterns in the neural network that can almost be "cut and pasted" makes me think that there might be more intentional building of machine learning tools in the future. It reminds me of Conway's Game of Life. If you turn on the program and just start dragging the mouse around to generate points, there will often be gliders and the little things that just blink in place. Some people were inspired by these, and discovered and developed more of the little tools. Now there are (very simple) computers built in Conway's Game of Life. In a similar way, the little neural network patterns might also be the building blocks of more robust, but also more predictable, machine learning tools in the future.
@BluishGreenPro
@BluishGreenPro 6 ай бұрын
These visualizations are incredibly helpful in understanding the topic; fantastic work!
@jaycee53
@jaycee53 6 ай бұрын
This channel is slept on 😮‍💨
@benedekfodor269
@benedekfodor269 6 ай бұрын
I suspect that will change, I'm so happy I found it.
@mentgold
@mentgold 6 ай бұрын
the production quality of you guys is through the roof and somehow you still manage to improve with every video
@meaburror7653
@meaburror7653 6 ай бұрын
best channel on yt right now
@DriPud
@DriPud 6 ай бұрын
I have to admit, every single one of your videos makes me eager to learn. Thank you for such high-quality, entertaining content!
@hamster8706
@hamster8706 6 ай бұрын
Seriously, this channel is so good, why is this so underrated.
@foxxiefox
@foxxiefox 6 ай бұрын
Babe wake up rational animations posted
@bingusbongus9807
@bingusbongus9807 6 ай бұрын
awake babe
@Jaggerbush
@Jaggerbush 6 ай бұрын
Corny. I hate this unoriginal lame comment. It's right up there with "first".
@SteedRuckus
@SteedRuckus 6 ай бұрын
​@@Jaggerbushbabe wake up, someone got mad about the thing
@sebas11tian
@sebas11tian 6 ай бұрын
The thing is that if we keep quiet, we stop participating in picking which memes become widespread. I've often found that the top comment used to be somewhat related to the topic instead of a blind and uninspired appreciation post for the creator.
@Niohimself
@Niohimself 6 ай бұрын
Every video on this channel is a banger
@Nuthouse01
@Nuthouse01 6 ай бұрын
A truly outstanding video! I took some basic classes on machine learning in college but they mostly said "what the neural network is doing and how it works is unclear, don't worry about it, it just works". So I know about how fully connected and convolutional layers work, and I know how training and weight propagation works. But I never knew that we could delve so deeply into what each of these neurons actually means! Impressive!
@Billy4321able
@Billy4321able 6 ай бұрын
It really feels like we need way more effort being put into mechanistic Interpretability. The mileage we're going to get from just designing new models and training from larger datasets is small. However, if we do basic research into finding out how these models make the decisions they make, then we can steer them in the right direction much faster. I don't see us getting much further than we are today using the same brute force methods. Machine learning techniques used today are the construction equivalent of building skyscrapers out of wood. Sure, maybe if you made them wide enough with strong enough wood it could work, but there are definitely better materials out there than wood.
@corrinlone4813
@corrinlone4813 Ай бұрын
Great video. Love your content!
@4dragons632
@4dragons632 6 ай бұрын
Fantastic video! Seeing 17 minutes of this made my day. The explanation of how exactly people have worked out these weird neuron maximising images is fascinating to me, especially using robustness against change because without that you get a seemingly random mess of noise (although of course the noise won't be random, and if we could figure out what noise correlates to what then the work would be much further along)
@petersmythe6462
@petersmythe6462 6 ай бұрын
Safety/alignment/human feedback related pathways are often very specific unless the training procedure involved substantial red-teaming, while the pathways they protect are often very general. This is why models can often be motivated into doing things they "won't" or "can't" do with some small changes in wording or anti-punt messages like "don't respond in the first person."
@khchosen1
@khchosen1 6 ай бұрын
Words can't express how much I love this content. Thank you, this is one of the best channels I've stumbled across. Simply amazing, please keep making these ❤
@airiquelmeleroy
@airiquelmeleroy 6 ай бұрын
I'm absolutely blown away by the quality of your recent videos. Love it. Keep it up!
@BaxterAndLunala
@BaxterAndLunala 6 ай бұрын
2:21: "There will also be lots of pictures of dogs." Considering the fact that every video I've seen from this guy has the same yellowish-orange dog in it, I'm not surprised that he said we'd see pictures of dogs. Lots of pictures of dogs.
@atom1kcreeper605
@atom1kcreeper605 6 ай бұрын
Lol
@AB-wf8ek
@AB-wf8ek 6 ай бұрын
There was a recent paper in the journal "Neuron" titled "Mixed selectivity: Cellular computations for complexity" that covers the idea that a single neuron can play a role in more than one function at a time. Seems to correlate with the concepts in this video. To me, it seems intuitive. The fact that we can make analogies, and the ability of art to retain multiple layers of meaning, must come from an innate ability for single points of information to serve multiple functions. If I had to summarize what neural networks are doing, it would be mapping the relationships between information in a given dataset along multidimensional lines.
@AB-wf8ek
@AB-wf8ek 6 ай бұрын
Interesting, I did a search for "mixed selectivity polysemanticity" and found a paper released Dec, 2023, "What Causes Polysemanticity? An Alternative Origin Story of Mixed Selectivity from Incidental Causes". Looks like there are researchers making the connection as well.
@jasonholtkamp6483
@jasonholtkamp6483 6 ай бұрын
I can't believe how high of quality these explanations and animations are... bravo!
@Schadrach42
@Schadrach42 6 ай бұрын
My first thought to "why cars?" is that it's detecting things that resemble faces (or a subset of faces), and we intentionally design cars to have "faces" because of people's tendency to anthropomorphize things, which means cars with "faces" sell better because they can more easily be given a "personality" to potential buyers. I'd be curious if it activates more or less on images of cars that don't show the front end or don't show as much of the front end.
@howtoappearincompletely9739
@howtoappearincompletely9739 6 ай бұрын
This is really great, your most fascinating video to date. Well done, Rational Animations!
@microwave221
@microwave221 6 ай бұрын
No matter how many videos I watch on image recognition, I'm always caught off guard by how quickly it switches from seemingly abstract to almost entirely organic. Like you see a small weighted matrix, then how applying it across an image suddenly gives you edge detection. Then you start trying to figure out what makes things look like things for you. The wildest part for me is how we all have no problem interpreting those tie-dye nightmare fractal-esque optimized images, likely because the meat neurons in our own minds use similar detection schemes. We can also see how animals evolve to undermine those systems too, like how the white bellies on quadrupeds serve to offset their shadows, potentially defeating lower edge detection and making them appear flat and unassuming.
@Kayo2life
@Kayo2life 6 ай бұрын
Those abstracted images made me start banging my head into my desk. They scare me, a lot. I think they activate my fight-or-flight response or something.
@microwave221
@microwave221 6 ай бұрын
@@Kayo2life they are the distilled essence of a particular concept without the associated traits that are always found with them. I remember reading decades ago about the alert states of a system that has multiple sensors, and the dim, incomplete memory of that example seems relevant. Imagine a fire alarm system that checks for heat, looks for flame, and senses smoke. Detecting heat and flame but no smoke, or just smoke by itself are all conceivable possibilities, so the system would go into "low alert" and sound an alarm. However, if it saw smoke and fire but didn't detect any heat, it would be "high alert" because that shouldn't be possible, and it means something has gone very wrong. Basically an error code in more modern terms. I suspect those images are sending you into high alert for similar reasons.
@Kayo2life
@Kayo2life 6 ай бұрын
@@microwave221 thank you
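To make the earlier point in this thread about "a small weighted matrix suddenly giving you edge detection" concrete, here is a minimal sketch of sliding a hand-written 3x3 kernel over a toy image. The kernel is the textbook Sobel example, not anything learned by a network, and note that convolutional layers technically compute cross-correlation like this rather than flipped convolution:

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 'valid' sliding window: weighted sum of each patch under the kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A toy image: dark on the left half, bright on the right half.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# Sobel-style kernel that responds to vertical edges (left/right brightness changes).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = convolve2d(image, sobel_x)
print(np.round(edges, 1))  # large values only along the column where dark meets bright
```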
@JamesSarantidis
@JamesSarantidis 6 ай бұрын
You even provided sources for more research. This channel is a gem.
@PaulFidika
@PaulFidika 6 ай бұрын
I love your Cartoon Network-esque art style; I don't even know what to call it.
@SisterSunny
@SisterSunny 6 ай бұрын
I love how you always manage to make me feel like I sort of understand what you're talking about, while also understanding just how little I actually know about the subject
@FerousFolly
@FerousFolly 3 ай бұрын
I believe by far the most important direction for AI research right now is developing a way to reliably, consistently, and repeatably decompile deep convolutional models with maximal interpretability. As much as we like to think that the risks of AGIs are still in the distant future, it's impossible to say for sure. Given the feats models like GPT-4o have already shown, I think they will be capable of learning to decompile complex AI models into maximally interpretable neuron maps. It would be an immense challenge to develop a method of training such an AI, but given the leaps and bounds we've made recently, I refuse to believe it's an unattainable goal in the very near future, and I would submit that it's the single most important goal to pursue within the field.
@gabrote42
@gabrote42 6 ай бұрын
Another hit "This is going on my arguments and explanation playlist" moment
@stardustandflames126
@stardustandflames126 6 ай бұрын
Another brilliant video, amazing animation and style and music and writing and info!
@GuyThePerson
@GuyThePerson 6 ай бұрын
I've already learnt more from this video than a week of school. Strange how you learn faster when stuff is actually explained well, isn't it?
@ege8240
@ege8240 6 ай бұрын
Thing is, these videos are fun but not educational. You learn just the headlines of topics, not how they work.
@Seraphiel29
@Seraphiel29 3 ай бұрын
Love your Kingdom Hearts sound effects at 2:06
@FutureAIDev2015
@FutureAIDev2015 6 ай бұрын
14:34 This is what happens when you give a computer LSD
@xentarch
@xentarch 6 ай бұрын
Maybe the cat and car thing is related to the fact that the output differs by only a single letter at the end? Since we have no idea how exactly the network infrastructure is formed, it's possible that the words themselves have something to do with the mixed neuron usage...
@xentarch
@xentarch 6 ай бұрын
Also, abstraction is a cool thing. Most of the time, it doesn't really make sense to try and "explain" what a single neuron is doing (in the context of its network, of course). Abstraction has many more degrees of freedom than the number of words we have defined.
@gpt-jcommentbot4759
@gpt-jcommentbot4759 Ай бұрын
No. The model doesn't have categories labelled cat or dog, just output neurons which can fire or not. It does not see words, so it cannot relate them textually.
@xentarch
@xentarch Ай бұрын
@@gpt-jcommentbot4759 Input words are broken down into tokens which should correspond to vectors in the latent space, no? I'm not sure how the model in question breaks words down into tokens, so the idea I proposed comes down to the **potential** for similarity in word structure (which the model simply must parse somehow if it converts input text to output images) to be related to similarity in output.
@TheRealMcNuggs
@TheRealMcNuggs 6 ай бұрын
8:20 I don't understand at all what is meant by "layering a bunch of smooth waves on top of each other". Like in what sense is the grey image on the top derived from the noisy bottom image or vice versa?
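For what it's worth, here is one way to read the "smooth waves" remark, based on the Distill feature-visualization approach the video draws on: instead of letting the optimizer set every pixel independently (which yields high-frequency static like the noisy image), the image is parameterized as a weighted sum of low-frequency waves, so anything the optimizer produces stays smooth like the grey image. A rough numpy sketch of that contrast, with arbitrary sizes and frequencies:

```python
import numpy as np

size = 64
rng = np.random.default_rng(0)

# Per-pixel parameterization: every pixel is free, so the optimum can be pure static.
pixel_image = rng.normal(size=(size, size))

# "Smooth waves" parameterization: the image is a weighted sum of a few low-frequency
# cosine patterns, so whatever weights an optimizer picks, the result stays smooth.
ys, xs = np.mgrid[0:size, 0:size] / size
wave_image = np.zeros((size, size))
for fy in range(4):          # only low spatial frequencies are allowed
    for fx in range(4):
        weight = rng.normal()
        phase = rng.uniform(0, 2 * np.pi)
        wave_image += weight * np.cos(2 * np.pi * (fx * xs + fy * ys) + phase)

# Both are "random", but wave_image has no pixel-level noise by construction.
print(pixel_image.std(), wave_image.std())
```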
@tygrdragon
@tygrdragon 6 ай бұрын
Holy cow, the rabbit hole just keeps going! This is such an interesting topic to me. At first I just thought AI algorithms were really big evolution simulators, but this might actually prove otherwise. Being able to understand vaguely what an AI is actually thinking is amazing, especially since it doesn't even really know either. I really hope to see more videos like this in the future, this is so cool. This channel is definitely one of the best educational channels I've ever seen, even on par with Kurzgesagt! I really appreciate all the sources in the description, too! There's legitimately more to find about neural networks in this one YouTube video than in a Google search. Most other educational channels don't have direct sources, making it really difficult to find good info lots of the time.
@draken5379
@draken5379 6 ай бұрын
Great video. It's the perfect counter to "LLMs just predict the next word".
@LaukkuPaukku
@LaukkuPaukku 6 ай бұрын
Predicting the next word has a high skill ceiling, the more you understand about the world the better you are at it.
@duplicake4054
@duplicake4054 6 ай бұрын
This is one of the best videos I have ever watched
@jonbrouwer4300
@jonbrouwer4300 6 ай бұрын
Loved this. Understandable but in-depth explanation of the topic. And the music is sweet too!
@N8O12
@N8O12 6 ай бұрын
I like how this feels like a video about the biology of some creature, with experiments and interpreting the data from those experiments and unexpected discoveries - except it's about computers.
@zyansheep
@zyansheep 6 ай бұрын
I wonder if the brain has polysemanticity? And if it does, to what degree? I imagine it might have a little, but given that we are not fooled by images that fool convolutional networks, perhaps our brains have ways to minimize polysemanticity or limit its effects? What would happen if we tried to limit it in neural networks? Would it even be trainable like that?
@light8258
@light8258 6 ай бұрын
Our brains use sparse distributed networks. At a given moment, only 2% of our neurons are active. That way, there is far less overlap between different activation patterns. One neuron doesn't represent anything on its own; it's always a combination of active and inactive states that creates a representation. Dense neural networks work completely differently compared to our brains. Of course all of our neurons activate for different patterns, but that is not relevant for what they represent, unlike in convolutional neural networks. That being said, there are sparse distributed representations (engrams) that have similar activation patterns, so polysemanticity does exist in the brain, it's just different from CNNs.
@rytan4516
@rytan4516 6 ай бұрын
While I don't know for sure, I can guess that polysemanticity does occur in human brains because of the existence of things like synesthesia.
@elplaceholder
@elplaceholder 6 ай бұрын
We could be fooled but we don't notice it.
@reinf4430
@reinf4430 6 ай бұрын
Maybe not exactly the same flaws as NNs, but we do have quirks such as optical illusions, variability between people in the "mind's eye" (seeing in imagination, or nothing at all for aphantasia), pareidolia (seeing faces in random shadows), and duplicate neural centers for the "same things" which can disagree (clinical cases of people knowing someone but being unable to recognise them by their face, or the opposite: knowing the face is correct but feeling like the person is not real but a robot duplicate).
@gasun1274
@gasun1274 6 ай бұрын
Hallucinations. Out of which came superstitions, religion, and violence.
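A tiny numpy sketch of the sparse-distributed-representation point a few replies above (only a small fraction of units active at once, so two patterns rarely collide); the 1000-unit size and k=20 are arbitrary illustrative numbers, not a claim about real neuroscience:

```python
import numpy as np

def k_winners_take_all(x, k):
    """Keep only the k strongest activations; zero out the rest (a crude sparsity model)."""
    out = np.zeros_like(x)
    top = np.argsort(x)[-k:]
    out[top] = x[top]
    return out

rng = np.random.default_rng(1)
dense = rng.normal(size=1000)            # a dense activation pattern
sparse = k_winners_take_all(dense, 20)   # ~2% of units stay active

# Two different inputs produce sparse patterns that barely overlap,
# which is the "less interference between representations" point above.
other = k_winners_take_all(rng.normal(size=1000), 20)
overlap = np.count_nonzero((sparse != 0) & (other != 0))
print("active units:", np.count_nonzero(sparse), "overlap with other pattern:", overlap)
```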
@_Mute_
@_Mute_ 6 ай бұрын
Brilliant! I understood in a general sense how neural networks work, but this solidified more of the core concepts for me!
@skyswimmer290
@skyswimmer290 5 ай бұрын
This is so hard to wrap my head around; I'm used to regular programming, where you know what it is you're making down to the smallest of details, but neural networks.... gosh. This video is highly informative oml and very well made
@4ffff2ee
@4ffff2ee 5 ай бұрын
Whoever did the sound design on this video did wonderful work.
@graxxor
@graxxor 5 ай бұрын
You guys are giving Kurzgesagt a real run for their money with this video. This video is incredibly informative and deserves 10x the number of views.
@ankitsharma1072
@ankitsharma1072 6 ай бұрын
What a pleasure! I just added that circuits paper in my reading list . Thank you ❤
@22Kalliopa
@22Kalliopa 4 ай бұрын
I can't be the only one who keeps seeing the style of the Monsters, Inc. intro. Love it
@sirnikkel6746
@sirnikkel6746 6 ай бұрын
8:48 "Looks kind of reasonable"? That thing looks as if it was extracted straight from Lovecraft's brain.
@aidandanielski
@aidandanielski 6 ай бұрын
Thanks!
@engi.2
@engi.2 Ай бұрын
Your videos are so good, I spent an hour looking for your channel and found a link I sent to a friend 3 months ago that led to this video
@MrBotdotnet
@MrBotdotnet 5 ай бұрын
This is amazing, I never knew before how exactly individual neurons in AIs even detected parts of images. You explained it in the first five minutes and then kept going. Please keep making such informative and AMAZINGLY animated videos!
@andriydjmil2589
@andriydjmil2589 6 ай бұрын
This video is such an amazing work on the current state of neural networks. It sums up a lot of important concepts in an extremely well packaged format. You are amazing!
@irfanjaafar3570
@irfanjaafar3570 5 ай бұрын
Educational video with 10min+ animation, this channel deserves 1mil subs
@cornevanzyl5880
@cornevanzyl5880 5 ай бұрын
It's the way you articulate the content. Deliberate, carefully chosen words that beautifully and succinctly describe the topic. Something only a master in the field can achieve.
@ptrckqnln
@ptrckqnln 6 ай бұрын
Exceptionally well produced, informative, and accessible to laypeople. Bravo!
@QuantumConundrum
@QuantumConundrum 5 ай бұрын
Finally, something I can share with my parents and other folks. This explanation is the perfect depth and covers a lot of territory in a digestible way.
@hydroxu
@hydroxu 6 ай бұрын
I really like this :3 It scratches my itch for a fun animation style explaining science topics (Without existential dread shoved down my throat every 5 seconds. You know who I'm talking about)
@DracoDragon-135
@DracoDragon-135 5 ай бұрын
Yup
@maucazalv903
@maucazalv903 6 ай бұрын
8:45 Seeing this and remembering when this was the most advanced stuff an AI could do *it's really impressive how it basically became exponential growth from that point on xd*
@yuvrajsingh-gm6zk
@yuvrajsingh-gm6zk 4 ай бұрын
If it still doesn't make sense, might I recommend 3Blue1Brown's series on this exact same topic, spread across ~6 videos, that goes really into the nitty-gritty of what actually happens under the hood of a neural network when it so-called "learns something" (and I, for one, find it's a good idea to check out as many different opinions and undertakings as possible to really get the depth of the subject matter).
@JonathanPlasse
@JonathanPlasse 6 ай бұрын
Awesome explanation, thank you. I finally understood how these dreamy pictures were made.
@convincingmountain
@convincingmountain 6 ай бұрын
i really enjoy these simplified explanations of all the complex and far-reaching corners of AI, it rly removes the black-box aspect that makes AGI so terrifying to the layperson (i.e. me). always enjoyed all the kinds of vids you make, ty for all ur efforts
@georgesanchez8051
@georgesanchez8051 5 ай бұрын
0:46 Just a correction, Meta’s largest model is their 400b parameter variant of Llama 3, and the largest that we’ve likely seen so far has been GPT-4, which is estimated to have somewhere in the neighborhood of 1.5 trillion parameters (some simplification but directionally accurate)
@yukko_parra
@yukko_parra 6 ай бұрын
Dayum, imagine neurologists examining AIs and computer scientists examining brains. Did not expect a future where that could be possible.
@rbm_md
@rbm_md 3 ай бұрын
What a brilliant explanation! The ability to explain technical concepts in clear language is truly a commendable one! 😀
@YaroslaffFedin
@YaroslaffFedin 5 ай бұрын
I'm a software engineer who was sort of glancing over the AI stuff for the longest while, and this video helped me actually keep my focus on the topic and get a bit more understanding. Thank you.
@Words-.
@Words-. 4 ай бұрын
Truly one of the best informational videos I've ever watched, very fun and extremely informative
@ConradPino
@ConradPino 6 ай бұрын
Fascinating and IMO, very well done. Thank you!
@Name_Pendingg
@Name_Pendingg 6 ай бұрын
8:47 "kind of reasonable": 'kind of' is doing a _lot_ of heavy lifting here
@Daniel-li6gu
@Daniel-li6gu 6 ай бұрын
RationalAnimations is the GOAT. It's crazy how this channel appeared out of nowhere and started uploading high-quality content from the get-go.
@Tubeytime
@Tubeytime 6 ай бұрын
This explains NN's in a way that almost anyone can understand, as a starting point at least. Really enjoyed it!
@FutureAIDev2015
@FutureAIDev2015 6 ай бұрын
8:46 that looks like something straight out of the book of Ezekiel...😂 BE NOT AFRAID...
@capslfern2555
@capslfern2555 5 ай бұрын
Biblically accurate dog
@_shadow_1
@_shadow_1 6 ай бұрын
I have a prediction. I think that the trend of increasing total parameter counts will actually stagnate or even reverse, and that extremely advanced AI models might be even smaller than the models that we currently have. This would be similar to how people have, over the years, optimized Super Mario 64's code to run more efficiently on the original hardware with techniques that may not even have existed when the original hardware or code was first created.
@mira_nekosi
@mira_nekosi 5 ай бұрын
lmao there are regularly new models that beat models many times their size, or are at least better at the same size, and new and different architectures are also being explored
@aakanksha_nc
@aakanksha_nc 6 ай бұрын
Fantastic work! This needs to reach the masses! ❤
@theeggtimertictic1136
@theeggtimertictic1136 6 ай бұрын
Captivating animation ... very well explained 😊
@azhuransmx126
@azhuransmx126 6 ай бұрын
There are 3 basic input stimuli that activate the neurons of the thalamus in animal and human brains: sources of danger, sources of food, and sources of sex. And in me there is a 4th input: when Rational Animations posts.
@LucklessLex
@LucklessLex 6 ай бұрын
I discovered this channel the other day and now I almost can't stop watching. The videos are so chock-full of information and detail that my brain just wants to try to understand it all, and the visuals and animations are just too GOOD. The production, and the things you'd learn from this channel, make me want to join their team and use my skills for a good cause other than corporate marketing...
@gabrielwolffe
@gabrielwolffe 6 ай бұрын
The polysemanticity you described, on surface inspection at least, would appear to resemble pareidolia in humans. Perhaps if we want to make AIs that think like humans, this could be considered a feature rather than a bug.
@amiplin1751
@amiplin1751 6 ай бұрын
Truly an amazing and informative video, keep up the great work!!
@null-0
@null-0 18 күн бұрын
This video must've been a hella ambitious project. The quality is super.
@Stroporez
@Stroporez 6 ай бұрын
Absolutely gorgeous video.
@OumarDicko-c5i
@OumarDicko-c5i 6 ай бұрын
Best channel of all time !
@Scaryder92
@Scaryder92 5 ай бұрын
I am an AI researcher in computer vision and this animation is among the most beautiful things I've ever seen
@anon_y_mousse
@anon_y_mousse 6 ай бұрын
The colored static is because it's a one-dimensional representation of a two-dimensional image representing our three-dimensional world. I still say that the current method of imitating neurons is going about it the wrong way, but I'm actually glad for that because it means they're not likely to create true AGI.
@c64cosmin
@c64cosmin 6 ай бұрын
Banging my head while learning, this was incredible!!! Thank you so much
@incompletemachine877
@incompletemachine877 6 ай бұрын
I'm glad this is here, as I'm incredibly interested in the processes of neural networks and studying consciousness in general.