This topic is truly a rabbit hole. If you want to learn more about this important research and even contribute to it, check out this list of sources about mechanistic interpretability and interpretability in general we've compiled for you:

On interpreting InceptionV1:
- Feature Visualization: distill.pub/2017/feature-visualization/
- Zoom In: An Introduction to Circuits: distill.pub/2020/circuits/zoom-in/
- The Distill journal contains several articles that try to make sense of how exactly InceptionV1 does what it does: distill.pub/2020/circuits/
- OpenAI's Microscope tool lets us visualize the neurons and channels of a number of vision models in great detail: microscope.openai.com/models
- Here's OpenAI's Microscope tool pointed at layer Mixed3b in InceptionV1: microscope.openai.com/models/inceptionv1/mixed3b_0?models.op.feature_vis.type=channel&models.op.technique=feature_vis
- Activation Atlases: distill.pub/2019/activation-atlas/
- More recent work applying sparse autoencoders (SAEs) to uncover more features in InceptionV1 and decompose polysemantic neurons: arxiv.org/abs/2406.03662v1

Transformer Circuits Thread, the spiritual successor of the circuits thread on InceptionV1, this time on transformers: transformer-circuits.pub/
- In the video, we cite "Toy Models of Superposition": transformer-circuits.pub/2022/toy_model/index.html
- We also cite "Towards Monosemanticity: Decomposing Language Models With Dictionary Learning": transformer-circuits.pub/2023/monosemantic-features/

More recent progress:
- Mapping the Mind of a Large Language Model:
  Press: www.anthropic.com/research/mapping-mind-language-model
  Paper in the Transformer Circuits Thread: transformer-circuits.pub/2024/scaling-monosemanticity/index.html
- Extracting Concepts from GPT-4:
  Press: openai.com/index/extracting-concepts-from-gpt-4/
  Paper: arxiv.org/abs/2406.04093
  Browse features: openaipublic.blob.core.windows.net/sparse-autoencoder/sae-viewer/index.html
- Language models can explain neurons in language models (cited in the video):
  Press: openai.com/index/language-models-can-explain-neurons-in-language-models/
  Paper: openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html
  View neurons: openaipublic.blob.core.windows.net/neuron-explainer/neuron-viewer/index.html

Neel Nanda on how to get started with mechanistic interpretability:
- Concrete Steps to Get Started in Transformer Mechanistic Interpretability: www.neelnanda.io/mechanistic-interpretability/getting-started
- Mechanistic Interpretability Quickstart Guide: www.neelnanda.io/mechanistic-interpretability/quickstart
- 200 Concrete Open Problems in Mechanistic Interpretability: www.alignmentforum.org/posts/LbrPTJ4fmABEdEnLf/200-concrete-open-problems-in-mechanistic-interpretability

More work mentioned in the video:
- Progress measures for grokking via mechanistic interpretability: arxiv.org/abs/2301.05217
- Discovering Latent Knowledge in Language Models Without Supervision: arxiv.org/abs/2212.03827
- Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning: www.nature.com/articles/s41551-018-0195-0
@pyeitme508 6 months ago
WOW!
@EmmanuelMess 6 months ago
What's the source for "somehow it learnt to tell people's biological sex" (at the start)? It really sounds like bias in the data.
@RationalAnimations 6 months ago
@@EmmanuelMess It's the very last link
@EmmanuelMess 6 months ago
Thanks! It seems that other papers also confirm that biological sex identification is possible from fundus images.
@ifoxtrot171gg 6 months ago
As someone who uses neurons to classify images, I too am activated by curves.
@dezaim7288 6 months ago
As a mass of neurons I can relate to being activated by curves.
@hellofellowbotsss 6 months ago
Same
@nxte8506 6 months ago
deadass
@a31-hq1jk 6 months ago
I get activated by straight thick lines
@ThatGuyThatHasSpaghetiiCode 6 months ago
Especially the female ones
@user-qw9yf6zs9t 6 months ago
Anyone else surprised that there isn't an AI model that ranks people's "beauty-ness" from 1-100? Honestly a great start-up idea, just use people's ranking data
@cheeseaddict 6 months ago
You guys shouldn't overwork yourselves 😭 17 minutes of high-quality animation and info. Seems like Kurzgesagt got competition 👀
@dissahc 6 months ago
Kurzgesagt could only dream of possessing this much style and substance.
@Puppeteer_in_the_Void 6 months ago
I feel like Kurzgesagt has slacked off on writing quality, so I'm glad it may have to fight to not be replaced
@ExylonBotOfficial 6 months ago
This is so much more informative than any of the recent Kurzgesagt videos
@raph2550 6 months ago
The recent Kurzgesagt videos are just ads, so...
@Restrocket 6 months ago
Use AI to generate video and text instead
@CloverTheBean 6 months ago
I really appreciate your simplification without dumbing it down to just noise. I've been wondering about how neural networks operate. Not that I'm a student or trying to apply it for any reason. I just love having nuggets of knowledge to share around with friends!
@miguelmalvina5200 6 months ago
KARKAT LETS GO
@average-neco-arc-enjoyer 6 months ago
@@miguelmalvina5200 they got the HS:BC Karkat
@CloverTheBean 5 months ago
@@miguelmalvina5200 YES YOU'RE THE FIRST ONE IN THE WILD WHO RECOGNIZED IT
@miguelmalvina5200 5 months ago
@@CloverTheBean I wasn't expecting to find a homestuckie in a very niche AI video honestly, pretty nice
@eddyr1041 5 months ago
The wondrous philosophical thought of what a brain is...
@theweapi 6 months ago
Polysemanticity makes me think of how we can see faces in things like cars and electrical sockets. The face detection neurons are doing multiple jobs, but there isn't a risk of mixing them up because of how vastly different they are. This may also explain the uncanny valley, where we have other neurons whose job it is to ensure the distinction is clear.
@christianhall3916 3 months ago
I don't think that's what's going on with face detection. It's just highly advantageous in natural selection to tell if something has a face or not, because that's a sign that it could be alive. So face detection is overeager because it's so important to know if there's even a slight chance something has a face. Two dots and a line or curve is all it takes for us to see a face in anything.
@Nikolas_Davis 6 months ago
The delicious irony, of course, is that AI started out as a research field with the purpose of understanding our _own biological intelligence_ by trying to reproduce it. Actually building a practical tool was a distant second, if even considered. But hardly anyone remembers that now, when AI systems are developed for marketable purposes. So, now that AI (kinda) works, we're back to square one, trying to understand _how_ it works - which was the issue we had with our own wetware in the first place! Aaargh!! But all is not lost, because we can prod and peek inside our artificial neural networks much more easily than we can inside our noggins. So, maybe there is net progress after all.
@superagucova 6 months ago
A cool fact is that interpretability efforts *have* led to some progress in neuroscience. Google has a cool paper drawing analogies between the neuroscience understanding of the human visual cortex and specific types of convolutional neural networks, and this has seeped into the neuroscience literature
@miniverse2002 6 months ago
Considering the brain is the most complicated thing known in the universe other than the universe itself, I would think understanding it would still be a whole other challenge compared to our "simple" reproductions, even if we can prod and peek at our own brains as easily.
@jackys_handle 5 months ago
"Wetware" I'm gonna use that
@israrbinmi2856 5 months ago
And then they use a more advanced AI (GPT-4) to understand the lesser (GPT-2); the irony is an onion
@antonzhdanov9653 5 months ago
@@israrbinmi2856 It's weird, but it makes sense: the more advanced AI is literally tasked with vivisecting the less advanced one to discern and show what each piece of it is doing, letting us skinbags interpret the results.
@DanksPlatter 6 months ago
These videos are perfect edutainment, and it's crazy how much detail goes into even background stuff like sound effects and music
@E.Hunter.Esquire 6 months ago
Ur mum
@Jan12700 6 months ago
5:18 But that's exactly what leads to extreme misjudgments if the data isn't 100% balanced, and you never manage to get anything to be 100% balanced. With dogs in particular, huskies were only recognized if the background was white, because almost all the training data with huskies was in the snow.
@Kevin-cf9nl 6 months ago
On the other hand, "is in the snow" is a genuinely good way to distinguish huskies from other dogs in an environment with low information. I wouldn't call that an "extreme misjudgement", but rather a good (if limited) heuristic.
@austinrimel7860 6 months ago
Good to know.
@superagucova 6 months ago
Balancing becomes less and less of a problem at scale. Neural networks don't overfit in the same way that classical statistical models do.
@ericray7173 6 months ago
That's what overfitting is. Same thing with a certain trophy fish that AI networks learned to only recognize if it was in human hands lol.
@Stratelier 6 months ago
Didn't the mention of polysemanticity kind of vaguely touch on this? Whatever nodes are responsible for detecting the husky may also be biased toward an assumption that the husky is seen against a snowy/white backdrop, due in some part to the limits of its training data.
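A minimal sketch of the shortcut learning this thread describes, using made-up synthetic data: the background ("snow") is a near-perfect cue while the dog itself is a noisy one, so a plain logistic regression keys on the background. All feature names and numbers here are illustrative assumptions, not from the husky study mentioned above.

```python
import numpy as np

# Labels: 1 = husky, 0 = other dog. The "snow" feature tracks the label
# almost perfectly; the "dog" feature carries the real but noisier signal.
rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n).astype(float)
snow = y + rng.normal(0.0, 0.1, n)   # spurious background cue, nearly clean
dog = y + rng.normal(0.0, 1.0, n)    # genuine cue, but noisy

X = np.column_stack([snow, dog])
w = np.zeros(2)
for _ in range(2000):                # plain logistic regression, gradient descent
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / n

print(w)  # the weight on "snow" dominates: the model keyed on the backdrop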
@jddes 6 months ago
This is something I've been going on about for a long time. The real prize isn't in getting machines to do stuff for us; it will be in using and adapting the shortcuts they are able to find. We just need to learn... what they've learned.
@r.connor9280 6 months ago
The world's most complex homework-stealing scheme
@terdragontra8900 6 months ago
This is… not true, I think. Neural networks will learn (and already have learned) algorithms that are simply too complicated and weird for a human brain to follow.
@jakebrowning2373 6 months ago
@@terdragontra8900 yes
@joaomrtins 6 months ago
Rather than learning _what_ they learned, the real prize is learning _how_ they learn.
@E.Hunter.Esquire 6 months ago
@@joaomrtins for what? Won't do us much good
@KrasBadan 6 months ago
16:11 It reminds me of a video by Welch Labs that I watched recently, about how Kepler discovered his laws. Basically, he had a bunch of accurate data about the positions of Mars at various points in time, and he wanted to figure out how it moves. He noticed that the speed at which it orbits the Sun isn't uniform: it is faster in one half of the orbit and slower in the other, and he tried to take that into account. What he did was assume that the orbit is a circle, and inside that circle there is the Sun, the center of the circle, and some point called the equant, all three lying on the same line. The idea of the equant is as follows: imagine a ray that rotates uniformly around the equant, and find the point at which it intersects the orbit. In that model, this point of intersection is where Mars should be at that moment. He had 4 parameters: the distance from the center of the circle to the Sun, the distance to the equant, the speed of the ray, and the starting position of the ray. These 4 parameters can describe a wide range of possible motions. By doing lots of trial and error, Kepler fine-tuned these 4 parameters such that the maximum error was just 2 arcminutes, a hundred times more accurate than anyone else. This process of having a system that can describe almost anything and tuning it to describe what you want is similar to how neural networks recognize patterns. But after more optimization and taking more things into account, Kepler came to the conclusion that the orbit isn't a circle. He then tried tuning a shape similar to an egg, but it was worse than his old model, so he added more parameters. He assumed Mars orbits some orbit that itself orbits an orbit, and after noticing one thing about the angles in his data, he found perfect parameters that happen to perfectly describe an ellipse. The discovery of the elliptical orbit of Mars was the main thing that allowed Newton to develop his theory of gravity. This is similar to how, given enough data, by lots of trial and error, neural networks can generalize concepts and find the underlying laws of nature.
@gasun1274 6 months ago
It's called an empirical fit. Physics, engineering, and the modern world are built on that method. There is no better way than that.
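A minimal sketch of that kind of trial-and-error fit: a 4-parameter model tuned by random search against a toy target motion. The target curve, the model form, and the parameter ranges are all invented for illustration; this captures the shape of the process, not Kepler's actual geometry.

```python
import numpy as np

# Toy version of Kepler-style fitting: tune 4 parameters by trial and
# error until a flexible model matches "observed" nonuniform motion.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 60)
observed = 2 * np.pi * t + 0.35 * np.sin(2 * np.pi * t)  # invented target

def model(params, t):
    speed, phase, wobble, skew = params
    angle = speed * t + phase                     # uniform rotation (the "ray")
    return angle + wobble * np.sin(angle + skew)  # correction for nonuniformity

best_params, best_err = None, np.inf
for _ in range(50_000):                           # brute-force random search
    candidate = rng.uniform([5.0, -0.5, 0.0, -1.0], [8.0, 0.5, 1.0, 1.0])
    err = np.max(np.abs(model(candidate, t) - observed))
    if err < best_err:
        best_params, best_err = candidate, err

print(best_params, best_err)  # the maximum error shrinks as trials accumulate
```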
@guyblack9729 6 months ago
"there will also be lots of pictures of dogs" well count me the fuck in LET'S GOOOO
@RitmosMC 6 months ago
These videos are incredible, going into so much detail and yet managing to not lose the audience on the way. It's amazing! The animation, the style, the music, everything! This channel is on par with Kurzgesagt and others in the educational animations genre, and the fact that it doesn't have millions of subscribers is criminal. One of the top 10 channels on the entire platform for sure. Keep up the incredible work.
@gavinbowers137 6 months ago
The sound design and the animation were incredible in this one!
@chadowsrikatemo4494 6 months ago
Ok, that explanation of image generation (the one that made a snout inside a snout) was one of the best ones I've found yet. Good job!
@drdca8263 6 months ago
Note that this isn't quite the same method used in the tools whose purpose is generating general images. Though it does have *some similarity* to some (most? all?) of those methods.
@animowany111 6 months ago
@@drdca8263 The idea of deep dream and that kind of neuron visualization is completely different from modern image-generating models using diffusion. The things they optimize for are completely different. Diffusion tries to take a (conditional) sample from some kind of learned manifold in latent space that somehow represents the distribution of data seen in training; deep dream directly optimizes a point in a similar latent space to maximize a neuron or channel activation.
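A minimal sketch of the deep-dream-style optimization described above, assuming torchvision's GoogLeNet (the InceptionV1 architecture); the layer (inception4a) and channel (42) are arbitrary choices. Real feature visualization adds jitter, rescaling, and frequency penalties so the result stays robust to small transformations.

```python
import torch
from torchvision import models

# Gradient ascent on the input: start from noise and nudge the pixels to
# maximize one channel's activation in a chosen layer.
model = models.googlenet(weights="DEFAULT").eval()

grabbed = {}
model.inception4a.register_forward_hook(
    lambda mod, inp, out: grabbed.update(act=out))

img = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    model(img)
    loss = -grabbed["act"][0, 42].mean()  # negative: ascend the activation
    loss.backward()
    opt.step()
# `img` now shows what channel 42 "wants to see" (absent robustness tricks).
```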
@loopuleasa 6 months ago
"To understand is to make a smaller system inside your head behave similarly to a larger system outside your head."
@BayesianBeing 5 months ago
Idk who said that but it's a great quote
@loopuleasa 5 months ago
@@BayesianBeing I made it; I just added quotes to mark that it is very important and ready to quote further
@LuigiSimoncini 5 months ago
Yes, a series of models, most probably multisemantic and dynamically refined ("trained") thanks to both our senses (a big limitation for LLMs, they're not embodied) and subsequent System 2 unconscious thinking (who knows, maybe even during sleep)
@luisgdelafuente 4 months ago
Human learning and understanding is not based on building smaller models, but on building abstractions with meaning. That's what AGI believers don't understand.
@rheamad 7 hours ago
@@luisgdelafuente wait say more
@smitchered 6 months ago
Thanks for educating the general public about AI and its dangers, Rational Animations! Your animations keep getting better and I still listen to your videos' soundtracks from time to time... thanks for all this effort you're pouring into the channel!
@michaelpapadopoulos6054 6 months ago
The maximally bad output soundtrack had some really catchy leitmotifs! Also just a banger video in general.
@vectorhacker-r2 6 months ago
My dog was in this video and I couldn't be more happy when I saw her!
@ashleyjaytanna1953 6 months ago
My God was not in the video. Blasphemy!
@stronggaming2365 6 months ago
@@ashleyjaytanna1953 your what now?
@laibarehman8005 6 months ago
@@stronggaming2365 Dyslexia sniping is my best guess haha
@danielalorbi 6 months ago
This was the most absolutely delightful animation of a neural net I've ever seen, by far. Kudos to the visual and animation artists
@dagnation9397 6 months ago
The fact that there are identifiable patterns in the neural network that can almost be "cut and pasted" makes me think that there might be more intentional building of machine learning tools in the future. It reminds me of Conway's Game of Life. If you turn on the program and just start dragging the mouse around to generate points, there will often be gliders and the little things that just blink in place. Some people were inspired by these, and discovered and developed more of the little tools. Now there are (very simple) computers built in Conway's Game of Life. In a similar way, the little neural network patterns might also be the building blocks of more robust, but also more predictable, machine learning tools in the future.
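For the curious, a minimal sketch of that Game of Life dynamic: the full update rule fits in a few lines, and the "blinker" is one of the little reusable patterns the comment mentions.

```python
import numpy as np

# One Game of Life step on a wrap-around grid: count each cell's eight
# neighbors, then apply the birth/survival rule.
def step(grid):
    neighbors = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

grid = np.zeros((8, 8), dtype=int)
grid[3, 2:5] = 1                                # the "blinker" oscillator
assert np.array_equal(step(step(grid)), grid)   # period 2: it blinks in place
```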
@BluishGreenPro 6 months ago
These visualizations are incredibly helpful in understanding the topic; fantastic work!
@jaycee53 6 months ago
This channel is slept on 😮💨
@benedekfodor269 6 months ago
I suspect that will change; I'm so happy I found it.
@mentgold 6 months ago
The production quality you guys have is through the roof, and somehow you still manage to improve with every video
@meaburror7653 6 months ago
best channel on yt right now
@DriPud 6 months ago
I have to admit, every single one of your videos makes me eager to learn. Thank you for such high-quality, entertaining content!
@hamster8706 6 months ago
Seriously, this channel is so good; why is it so underrated?
@foxxiefox 6 months ago
Babe wake up, Rational Animations posted
@bingusbongus9807 6 months ago
awake babe
@Jaggerbush 6 months ago
Corny. I hate this unoriginal lame comment. It's right up there with "first".
@SteedRuckus 6 months ago
@@Jaggerbush babe wake up, someone got mad about the thing
@sebas11tian 6 months ago
The thing is that if we keep quiet, we stop participating in picking which memes become widespread. I often found that the top comment used to be somewhat related to the topic instead of a blind and uninspired appreciation post for the creator.
@Niohimself 6 months ago
Every video on this channel is a banger
@Nuthouse01 6 months ago
A truly outstanding video! I took some basic classes on machine learning in college, but they mostly said "what the neural network is doing and how it works is unclear, don't worry about it, it just works". So I know how fully connected and convolutional layers work, and I know how training and weight propagation works. But I never knew that we could delve so deeply into what each of these neurons actually means! Impressive!
@Billy4321able 6 months ago
It really feels like we need way more effort put into mechanistic interpretability. The mileage we're going to get from just designing new models and training on larger datasets is small. However, if we do basic research into finding out how these models make the decisions they make, then we can steer them in the right direction much faster. I don't see us getting much further than we are today using the same brute force methods. Machine learning techniques used today are the construction equivalent of building skyscrapers out of wood. Sure, maybe if you made them wide enough with strong enough wood it could work, but there are definitely better materials out there than wood.
@corrinlone4813 1 month ago
Great video. Love your content!
@4dragons632 6 months ago
Fantastic video! Seeing 17 minutes of this made my day. The explanation of how exactly people have worked out these weird neuron-maximising images is fascinating to me, especially using robustness against change, because without that you get a seemingly random mess of noise (although of course the noise won't be random, and if we could figure out what noise correlates to what, the work would be much further along)
@petersmythe6462 6 months ago
Safety/alignment/human feedback related pathways are often very specific unless the training procedure involved substantial red-teaming, while the pathways they protect are often very general. This is why models can often be motivated into doing things they "won't" or "can't" do with some small changes in wording or anti-punt messages like "don't respond in the first person."
@khchosen1 6 months ago
Words can't express how much I love this content. Thank you, this is one of the best channels I've stumbled across. Simply amazing, please keep making these ❤
@airiquelmeleroy 6 months ago
I'm absolutely blown away by the quality of your recent videos. Love it. Keep it up!
@BaxterAndLunala 6 months ago
2:21: "There will also be lots of pictures of dogs." Considering the fact that every video I've seen from this channel has the same yellowish-orange dog in it, I'm not surprised he said we'd see pictures of dogs. Lots of pictures of dogs.
@atom1kcreeper605 6 months ago
Lol
@AB-wf8ek 6 months ago
There was a recent paper in the journal "Neuron" titled "Mixed selectivity: Cellular computations for complexity" that covers the idea that a single neuron can play a role in more than one function at a time. It seems to correlate with the concepts in this video. To me, it seems intuitive. The fact that we can make analogies, and the ability of art to retain multiple layers of meaning, must come from an innate ability for single points of information to serve multiple functions. If I had to summarize what neural networks are doing, it would be mapping the relationships between information in a given dataset along multidimensional lines.
@AB-wf8ek 6 months ago
Interesting, I did a search for "mixed selectivity polysemanticity" and found a paper released in December 2023, "What Causes Polysemanticity? An Alternative Origin Story of Mixed Selectivity from Incidental Causes". Looks like there are researchers making the connection as well.
@jasonholtkamp6483 6 months ago
I can't believe how high the quality of these explanations and animations is... bravo!
@Schadrach42 6 months ago
My first thought on "why cars?" is that it's detecting things that resemble faces (or a subset of faces), and we intentionally design cars to have "faces" because of people's tendency to anthropomorphize things, which means cars with "faces" sell better because they can more easily be given a "personality" to potential buyers. I'd be curious if it activates more or less on images of cars that don't show the front end, or don't show as much of it.
@howtoappearincompletely9739 6 months ago
This is really great, your most fascinating video to date. Well done, Rational Animations!
@microwave221 6 months ago
No matter how many videos I watch on image recognition, I'm always caught off guard by how quickly it switches from seemingly abstract to almost entirely organic. Like, you see a small weighted matrix, then how applying it across an image suddenly gives you edge detection. Then you start trying to figure out what makes things look like things for you. The wildest part for me is how we all have no problem interpreting those tie-dye nightmare fractal-esque optimized images, likely because the meat neurons in our own minds use similar detection schemes. We can also see how animals evolve to undermine those systems too, like how the white bellies on quadrupeds serve to offset their shadows, potentially defeating lower edge detection and making them appear flat and unassuming.
@Kayo2life 6 months ago
Those abstracted images made me start banging my head into my desk. They scare me, a lot. I think they activate my fight-or-flight response or something.
@microwave221 6 months ago
@@Kayo2life They are the distilled essence of a particular concept without the associated traits that are always found with them. I remember reading decades ago about the alert states of a system that has multiple sensors, and the dim, incomplete memory of that example seems relevant. Imagine a fire alarm system that checks for heat, looks for flame, and senses smoke. Detecting heat and flame but no smoke, or just smoke by itself, are all conceivable possibilities, so the system would go into "low alert" and sound an alarm. However, if it saw smoke and fire but didn't detect any heat, it would go to "high alert", because that shouldn't be possible, and it means something has gone very wrong. Basically an error code, in more modern terms. I suspect those images are sending you into high alert for similar reasons.
@Kayo2life 6 months ago
@@microwave221 thank you
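A minimal sketch of the edge-detection point from earlier in this thread: sliding one small weighted matrix (here a Sobel kernel) across an image is all it takes. The test image is a made-up stand-in.

```python
import numpy as np

# Slide a 3x3 weighted matrix over a grayscale image; large responses mark
# vertical edges. This is the kind of filter a CNN's first layer learns.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((32, 32)); image[:, 16:] = 1.0  # bright right half
edges = convolve2d(image, sobel_x)               # strong response at the boundary
```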
@JamesSarantidis 6 months ago
You even provided sources for more research. This channel is a gem.
@PaulFidika 6 months ago
I love your Cartoon Network-esque art style; I don't even know what to call it
@SisterSunny 6 months ago
I love how you always manage to make me feel like I sort of understand what you're talking about, while also understanding just how little I actually know about the subject
@FerousFolly 3 months ago
I believe by far the most important direction for AI research right now is developing a way to reliably, consistently, and repeatably decompile deep convolutional models with maximal interpretability. As much as we like to think that the risks of AGIs are still in the distant future, it's impossible to say for sure. Given the feats models like GPT-4o have shown, I think they will be capable of learning to decompile complex AI models into maximally interpretable neuron maps. It would be an immense challenge to develop a method of training such an AI, but given the leaps and bounds we've made recently, I refuse to believe it's an unattainable goal in the very near future, and I would submit that it's the single most important goal to pursue within the field.
@gabrote42 6 months ago
Another hit "this is going on my arguments and explanation playlist" moment
@stardustandflames126 6 months ago
Another brilliant video; amazing animation and style and music and writing and info!
@GuyThePerson 6 months ago
I've already learnt more from this video than from a week of school. Strange how you learn faster when stuff is actually explained well, isn't it?
@ege8240 6 months ago
Thing is, these videos are fun but not educational. You learn just the headlines of topics, not how they work
@Seraphiel29 3 months ago
Love your Kingdom Hearts sound effects at 2:06
@FutureAIDev2015 6 months ago
14:34 This is what happens when you give a computer LSD
@xentarch 6 months ago
Maybe the cat and car thing is related to the fact that the outputs differ by only a single letter at the end? Since we have no idea how exactly the network's structure is formed, it's possible that the words themselves have something to do with the mixed neuron usage...
@xentarch 6 months ago
Also, abstraction is a cool thing. Most of the time, it doesn't really make sense to try and "explain" what a single neuron is doing (in the context of its network, of course). Abstraction has many more degrees of freedom than the number of words we have defined.
@gpt-jcommentbot4759 1 month ago
No. The model doesn't have categories labelled cat or dog, just output neurons which can fire or not. It does not see words, so it cannot relate them textually
@xentarch 1 month ago
@@gpt-jcommentbot4759 Input words are broken down into tokens which should correspond to vectors in the latent space, no? I'm not sure how the model in question breaks words down into tokens, so the idea I proposed comes down to the **potential** for similarity in word structure (which the model simply must parse somehow if it converts input text to output images) to be related to similarity in output.
@TheRealMcNuggs 6 months ago
8:20 I don't understand at all what is meant by "layering a bunch of smooth waves on top of each other". Like, in what sense is the grey image on the top derived from the noisy bottom image, or vice versa?
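One plausible reading, offered as an assumption rather than a confirmed answer: "layering smooth waves" describes parameterizing the image in frequency space, where each coefficient is a smooth 2D wave; summing many of them with the high frequencies damped yields smooth images, and optimizing those coefficients instead of raw pixels is a standard regularizer in feature visualization. A minimal sketch, with the size and the 1/frequency scaling as illustrative choices:

```python
import numpy as np

# Parameterize a 64x64 image as a 2D Fourier spectrum with amplitudes that
# shrink at high frequencies, then inverse-transform: the result is a stack
# of smooth layered waves. Optimizing `coeffs` (rather than pixels) biases
# visualizations away from high-frequency noise.
h = w = 64
rng = np.random.default_rng(0)
coeffs = rng.normal(size=(h, w)) + 1j * rng.normal(size=(h, w))

fy = np.fft.fftfreq(h)[:, None]
fx = np.fft.fftfreq(w)[None, :]
freq = np.sqrt(fx ** 2 + fy ** 2)
scale = 1.0 / np.maximum(freq, 1.0 / max(h, w))  # damp high frequencies

img = np.fft.ifft2(coeffs * scale).real          # smooth "layered waves" image
```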
@tygrdragon 6 months ago
Holy cow, the rabbit hole just keeps going! This is such an interesting topic to me. At first, I just thought AI algorithms were really big evolution simulators, but this might actually prove otherwise. Being able to understand vaguely what an AI is actually thinking is amazing, especially since it doesn't even really know either. I really hope to see more videos like this in the future; this is so cool. This channel is definitely one of the best educational channels I've ever seen, even on par with Kurzgesagt! I really appreciate all the sources in the description, too! There's legitimately more to find about neural networks in this one YouTube video than in a Google search. Most other educational channels don't have direct sources, making it really difficult to find good info a lot of the time.
@draken5379 6 months ago
Great video. It's the perfect counter to "LLMs just predict the next word"
@LaukkuPaukku 6 months ago
Predicting the next word has a high skill ceiling; the more you understand about the world, the better you are at it.
@duplicake4054 6 months ago
This is one of the best videos I have ever watched
@jonbrouwer4300 6 months ago
Loved this. An understandable but in-depth explanation of the topic. And the music is sweet too!
@N8O12 6 months ago
I like how this feels like a video about the biology of some creature, with experiments and interpreting the data from those experiments and unexpected discoveries, except it's about computers.
@zyansheep 6 months ago
I wonder if the brain has polysemanticity? And if it does, to what degree? I imagine it might have a little, but given that we are not fooled by images that fool convolutional networks, perhaps our brains have ways to minimize polysemanticity or limit its effects? What would happen if we tried to limit it in neural networks? Would it even be trainable like that?
@light8258 6 months ago
Our brains use sparse distributed networks. At a given moment, only 2% of our neurons are active. That way, there is way less overlap between different activation patterns. One neuron doesn't represent anything; it's always a combination of active and inactive states that creates a representation. Dense neural networks work completely differently compared to our brains. Of course all of our neurons activate for different patterns, but that is not relevant for what they represent, unlike in convolutional neural networks. That being said, there are sparse distributed representations (engrams) that have similar activation patterns, so polysemanticity does exist in the brain; it's just different from CNNs.
@rytan4516 6 months ago
While I don't know for sure, I can guess that polysemanticity does occur in human brains because of the existence of things like synesthesia.
@elplaceholder 6 months ago
We could be fooled but we don't notice it..
@reinf4430 6 months ago
Maybe not exactly the same flaws as NNs, but we do have quirks such as optical illusions, variability between people in the "mind's eye" (seeing in imagination, or nothing at all with aphantasia), pareidolia (seeing faces in random shadows), and duplicate neural centers for the "same things" which can disagree (clinical cases of people knowing someone but being unable to recognize them by their face, or the opposite: knowing they have the correct face but feeling like they are not the real person but a robot duplicate...)
@gasun1274 6 months ago
Hallucinations. Out of which came superstitions, religion, and violence.
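Related to this thread, a minimal sketch of the sparse autoencoder (SAE) technique mentioned in the description, which decomposes polysemantic activations into a larger dictionary of sparsely firing features; all sizes and the penalty weight here are illustrative assumptions.

```python
import torch
from torch import nn

# Train an overcomplete dictionary (4096 features for 512-dim activations)
# with an L1 penalty so only a few features fire per input; each surviving
# feature tends to be more monosemantic than the raw neurons.
class SparseAutoencoder(nn.Module):
    def __init__(self, d_act=512, d_dict=4096):
        super().__init__()
        self.encoder = nn.Linear(d_act, d_dict)
        self.decoder = nn.Linear(d_dict, d_act)

    def forward(self, acts):
        feats = torch.relu(self.encoder(acts))  # sparse feature activations
        return self.decoder(feats), feats

sae = SparseAutoencoder()
acts = torch.randn(256, 512)                    # stand-in for recorded activations
recon, feats = sae(acts)
loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
loss.backward()                                 # one step of the usual training loop
```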
@_Mute_ 6 months ago
Brilliant! I understood in a general sense how neural networks work, but this solidified more of the core concepts for me!
@skyswimmer290 5 months ago
This is so hard to wrap my head around. I'm used to regular programming, where you know what it is you're making down to the smallest of details, but neural networks... gosh. This video is highly informative oml, and very well made
@4ffff2ee 5 months ago
Whoever did the sound design on this video did wonderful work
@graxxor 5 months ago
You guys are giving Kurzgesagt a real run for their money with this video. It is incredibly informative and deserves 10x the number of views.
@ankitsharma1072 6 months ago
What a pleasure! I just added that circuits paper to my reading list. Thank you ❤
@22Kalliopa 4 months ago
I can't be the only one who keeps seeing the style of the Monsters, Inc. intro. Love it
@sirnikkel6746 6 months ago
8:48 "Looks kind of reasonable"? That thing looks as if it was extracted straight from Lovecraft's brain.
@aidandanielski 6 months ago
Thanks!
@engi.2 1 month ago
Your videos are so good. I spent an hour looking for your channel and found a link I sent to a friend 3 months ago that led to this video
@MrBotdotnet 5 months ago
This is amazing. I never knew before how exactly individual neurons in AIs even detected parts of images; you explained it in the first five minutes and then kept going. Please keep making such informative and AMAZINGLY animated videos!
@andriydjmil2589 6 months ago
This video is such an amazing work on the current state of neural networks. It sums up a lot of important concepts in an extremely well packaged format. You are amazing!
@irfanjaafar3570 5 months ago
An educational video with 10+ minutes of animation; this channel deserves 1 million subs
@cornevanzyl5880 5 months ago
It's the way you articulate the content: deliberate, carefully chosen words that beautifully and succinctly describe the topic. Something only a master in the field can achieve
@ptrckqnln 6 months ago
Exceptionally well produced, informative, and accessible to laypeople. Bravo!
@QuantumConundrum 5 months ago
Finally, something I can share with my parents and other folks. This explanation is the perfect depth and covers a lot of territory in a digestible way.
@hydroxu 6 months ago
I really like this :3 It scratches my itch for a fun animation style explaining science topics (without existential dread shoved down my throat every 5 seconds. You know who I'm talking about)
@DracoDragon-135 5 months ago
Yup
@maucazalv903 6 months ago
8:45 Seeing this and remembering when this was the most advanced stuff an AI could do. *It's really impressive how it basically became exponential growth from that point on xd*
@yuvrajsingh-gm6zk 4 months ago
If it still doesn't make sense, might I recommend 3Blue1Brown's series on this exact same topic, spread across ~6 videos, that goes really into the nitty-gritty of what actually happens under the hood of a neural network when it so-called "learns something" (and I for one find it's a good idea to check out as many different opinions and undertakings as possible to really get the depth of the subject matter.)
@JonathanPlasse 6 months ago
Awesome explanation, thank you. I finally understood how these dreamy pictures were made.
@convincingmountain 6 months ago
i really enjoy these simplified explanations of all the complex and far-reaching corners of AI, it rly removes the black-box aspect that makes AGI so terrifying to the layperson (i.e. me). always enjoyed all the kinds of vids you make, ty for all ur efforts
@georgesanchez8051 5 months ago
0:46 Just a correction: Meta's largest model is their 400B-parameter variant of Llama 3, and the largest that we've likely seen so far has been GPT-4, which is estimated to have somewhere in the neighborhood of 1.5 trillion parameters (some simplification, but directionally accurate)
@yukko_parra 6 months ago
dayum, imagine neurologists examining AI and computer scientists examining brains. did not expect a future where that could be possible
@rbm_md 3 months ago
What a brilliant explanation! The ability to explain technical concepts in clear language is truly a commendable one! 😀
@YaroslaffFedin 5 months ago
I'm a software engineer who was sort of glancing over the AI stuff for the longest while, and this video helped me actually keep my focus on the topic and get a bit more understanding. Thank you
@Words-. 4 months ago
Truly one of the best informational videos I've ever watched; very fun and extremely informative
@ConradPino 6 months ago
Fascinating and, IMO, very well done. Thank you!
@Name_Pendingg 6 months ago
8:47 "kind of reasonable" - 'kind of' is doing a _lot_ of heavy lifting here
@Daniel-li6gu 6 months ago
RationalAnimations is the GOAT. It's crazy how this channel appeared out of nowhere and started uploading high-quality content from the get-go
@Tubeytime 6 months ago
This explains NNs in a way that almost anyone can understand, as a starting point at least. Really enjoyed it!
@FutureAIDev2015 6 months ago
8:46 That looks like something straight out of the Book of Ezekiel...😂 BE NOT AFRAID...
@capslfern2555 5 months ago
Biblically accurate dog
@_shadow_1 6 months ago
I have a prediction. I think that the trend of increasing total parameter counts will actually stagnate or even reverse, and that extremely advanced AI models might be even smaller than the models that we currently have. This would be similar to how people have, over the years, optimized Super Mario 64's code to run more efficiently on the original hardware with techniques that may not even have existed when the original hardware or code was first created.
@mira_nekosi 5 months ago
lmao, there are regularly new models that beat models many times their size, or are at least better at the same size, and new and different architectures are also being explored
@aakanksha_nc 6 months ago
Fantastic work! This needs to reach the masses!❤
@theeggtimertictic1136 6 months ago
Captivating animation ... very well explained 😊
@azhuransmx126 6 months ago
There are 3 basic input stimuli that activate the neurons of the thalamus in animal and human brains: sources of danger, sources of food, and sources of sex. And in me there is a 4th input: when Rational Animations posts.
@LucklessLex 6 months ago
I discovered this channel just the other day and now I almost can't stop watching! Their videos are just chock-full of information and detail that my brain just wants to try to understand it all, and the visuals and animations are just too GOOD. The production, and the things you'd learn from this channel, make me want to join their team; use my skills for a good cause other than corporate marketing...
@gabrielwolffe 6 months ago
The polysemanticity you described, on surface inspection at least, would appear to resemble pareidolia in humans. Perhaps if we want to make AIs that think like humans, this could be considered a feature rather than a bug.
@amiplin1751 6 months ago
Truly an amazing and informative video, keep up the great work!!
@null-0 18 days ago
This video must've been a hella ambitious project. The quality is super.
@Stroporez 6 months ago
Absolutely gorgeous video.
@OumarDicko-c5i 6 months ago
Best channel of all time!
@Scaryder92 5 months ago
I am an AI researcher in computer vision, and this animation is among the most beautiful things I've ever seen
@anon_y_mousse 6 months ago
The colored static is because it's a one-dimensional representation of a two-dimensional image representing our three-dimensional world. I still say that the current method of imitating neurons is going about it the wrong way, but I'm actually glad for that, because it means they're not likely to create true AGI.
@c64cosmin 6 months ago
Banging my head while learning, this was incredible!!! Thank you so much
@incompletemachine877 6 months ago
I'm glad this is here, as I'm incredibly interested in the processes of neural networks and studying consciousness in general