Semantic networks and spreading activation | Processing the Environment | MCAT | Khan Academy

131,963 views

khanacademymedicine


Learn about how knowledge is organized in the mind. Created by Carole Yue.
Watch the next lesson: www.khanacadem...
Missed the previous lesson? www.khanacadem...
MCAT on Khan Academy: Go ahead and practice some passage-based questions!
About Khan Academy: Khan Academy offers practice exercises, instructional videos, and a personalized learning dashboard that empower learners to study at their own pace in and outside of the classroom. We tackle math, science, computer programming, history, art history, economics, and more. Our math missions guide learners from kindergarten to calculus using state-of-the-art, adaptive technology that identifies strengths and learning gaps. We've also partnered with institutions like NASA, The Museum of Modern Art, The California Academy of Sciences, and MIT to offer specialized content.
For free. For everyone. Forever. #YouCanLearnAnything
Subscribe to Khan Academy’s MCAT channel: / @khanacademymcatprep
Subscribe to Khan Academy: www.youtube.co...

Comments: 19
@hanafihikari 5 years ago
This is what runs through my mind before I dream of something. I think of a cat, then fluffy, comfort, a luxury sofa, and the list goes on until it hooks onto an idea; then I'll dream of it.
@DerrickRoccka 4 years ago
The concept of a Class in Object-Oriented Programming always comes to my mind when I see this.
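To make that analogy concrete, here is a minimal Python sketch (the class names and properties are illustrative, not taken from the video): a subclass inherits attributes from its superclass much like a "canary" node inherits "can fly" from the "bird" node above it, unless it stores an exception locally.

```python
# Illustrative analogy: a class hierarchy behaves like a semantic hierarchy,
# where specific concepts inherit properties from more general ones.

class Animal:
    breathes = True

class Bird(Animal):
    has_wings = True
    can_fly = True

class Canary(Bird):
    is_yellow = True        # property stored at the "canary" node itself

class Ostrich(Bird):
    can_fly = False         # exception stored locally, overriding the inherited default

print(Canary.can_fly)   # True  -- inherited from Bird, like traversing "canary -> bird"
print(Ostrich.can_fly)  # False -- stored on the "ostrich" node directly
print(Canary.breathes)  # True  -- inherited from Animal, a longer traversal up the hierarchy
```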
@pilateswithpooja7224 8 years ago
I must say this is very well explained. Thank you for putting up this video; it's helping me study better 👍
@nileytrueluv1 9 years ago
Information in LTM is organized and associated with other information in LTM, in a hierarchical structure with overlapping networks of concepts connected by meaningful links. Each concept, or NODE, is linked with other nodes, so activating one node will activate another. When retrieving information from LTM, we begin by searching a region of memory and tracing it back to other, related areas. Shorter links between nodes mean the association is stronger, and stronger associations mean faster activation (retrieval) of information. The modified semantic network theory explains spreading activation: activation of one node increases the likelihood of other nodes being activated. Again, the shorter the links, the stronger the associations and the faster the activation and retrieval; information connected by longer links takes longer to retrieve.
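A toy Python sketch of that idea (the node names and link "lengths" are invented for illustration): activation starts at one node and spreads to its neighbors, losing more strength over longer links, so closely linked concepts end up more activated and are retrieved faster.

```python
# Toy spreading-activation sketch over a tiny semantic network.
# Edge weights stand in for link "length": smaller = shorter = stronger association.

network = {
    "canary":  {"bird": 1, "yellow": 1, "sings": 1},
    "bird":    {"canary": 1, "animal": 2, "wings": 1, "ostrich": 2},
    "animal":  {"bird": 2, "skin": 1},
    "ostrich": {"bird": 2},
    "yellow": {}, "sings": {}, "wings": {}, "skin": {},
}

def spread_activation(start, steps=2, decay=0.5):
    """Return activation levels after spreading from `start` for `steps` rounds."""
    activation = {node: 0.0 for node in network}
    activation[start] = 1.0
    for _ in range(steps):
        updates = {}
        for node, level in activation.items():
            for neighbor, length in network[node].items():
                # Longer links pass on less activation.
                updates[neighbor] = updates.get(neighbor, 0.0) + level * decay / length
        for node, extra in updates.items():
            activation[node] += extra
    return activation

levels = spread_activation("canary")
# Closely linked nodes ("bird", "yellow") end up more active than distant ones ("skin").
print(sorted(levels.items(), key=lambda kv: -kv[1]))
```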
@sergiosanchezpadilla6941 3 years ago
It's a nice concept and explanation, but you missed two very important points: 1) what you drew as single points is misleading; bird, fish, canary, etc. are not points but complex semantic networks themselves (conceptual prototypes with central and peripheral members, and with central and peripheral attributes (Rosch, Labov, etc.)); and 2) precisely because of this, there is a lot of overlap between different categories; reticular, connectionist models are closer to whatever is going on in our brains (truth be told, and to be fair, nobody knows exactly how this system truly works). I love your channel; I'm watching lots of videos from it and learning a lot, but you should read a few papers by Lawrence Barsalou on distributed networks of representation; you could improve these videos a lot by updating your bibliography on the subject. Thank you so much for spreading knowledge and science. You guys rule. Thumbs up!
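A rough sketch of the graded-membership point (features and weights are invented for illustration): in a prototype view, category membership is a similarity score against a feature prototype rather than a yes/no lookup of a single node, so a robin comes out as a more "central" bird than an ostrich, and categories can overlap.

```python
# Toy prototype model: membership is graded similarity to a weighted feature
# prototype, not a binary link to one "bird" node. Features/weights are invented.

BIRD_PROTOTYPE = {"flies": 1.0, "sings": 0.7, "small": 0.6, "lays_eggs": 1.0, "feathers": 1.0}

def typicality(exemplar, prototype=BIRD_PROTOTYPE):
    """Share of the weighted prototype features that the exemplar matches."""
    total = sum(prototype.values())
    matched = sum(weight for feature, weight in prototype.items() if exemplar.get(feature))
    return matched / total

robin   = {"flies": True,  "sings": True,  "small": True,  "lays_eggs": True, "feathers": True}
ostrich = {"flies": False, "sings": False, "small": False, "lays_eggs": True, "feathers": True}

print(typicality(robin))    # 1.0   -- central member of the category
print(typicality(ostrich))  # ~0.47 -- peripheral member, but still a bird
```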
@xXTheIntricateXx 10 years ago
Thanks! Very concise and informative! Subscribed :)
@ぺこペこ-m6q 4 years ago
Thank you-peko!!!
@skyacaniadev2229 7 years ago
And what's the mental process of "1+1=2"? This model is still so incomplete.
@banuchandar4860 6 years ago
I agree. Information in our brains is not this organized.
@slurp451 4 years ago
OK, let me try to explain my idea. What you describe is quite a simple mathematical operation that can be solved not by thinking but through a simple process of retrieving information from your memory, let's say like a "nursery rhyme." So let's take a more complicated one, like 34×41. In your semantic memory you have an incredible amount of images, which are the basis of concepts, ideas, and meaning. What happens is that your working memory retrieves all the knowledge related to the meaning of the task: you decode "34×41" with a particular meaning that "goes and catches," or "evokes," related knowledge in your semantic memory, such as how to solve a multiplication, what the symbol "×" means, and so on. All of this knowledge is used by your working memory to create a mental image or thought, which in this case you would call the "solution," and which is then "concretized" through spoken words and numbers.
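One way to caricature that distinction in code (purely a toy sketch, not taken from any specific paper; "x" stands in for the multiplication sign): a cue like "1+1" can be answered straight from a stored fact, whereas "34x41" retrieves a stored procedure from "semantic memory" that working memory then applies step by step.

```python
# Toy contrast between recalling a rote fact and retrieving a procedure.
# Everything here is an illustrative stand-in, not a cognitive model.

rote_facts = {"1+1": 2, "2+2": 4}   # overlearned answers, recalled directly

def long_multiplication(a, b):
    """The retrieved 'how to multiply' procedure, applied step by step."""
    result = 0
    for place, digit in enumerate(reversed(str(b))):
        result += a * int(digit) * 10 ** place   # partial products, as taught in school
    return result

def answer(cue):
    if cue in rote_facts:                 # "1+1=2" comes straight out of memory
        return rote_facts[cue]
    a, b = (int(x) for x in cue.split("x"))
    return long_multiplication(a, b)      # "34x41" needs the retrieved procedure

print(answer("1+1"))    # 2
print(answer("34x41"))  # 1394
```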
@skyacaniadev2229 1 year ago
@slurp451 Thank you. Do you have a paper associated with this theory?
@skyacaniadev2229 1 year ago
@slurp451 I can see how some people (like artists) solve math with this method.
@Brickswol 9 months ago
It just so happens that I use an ostrich as my online persona (except on YT, where I'm a Brick).
@Diego-ce4bs 1 year ago
Are the "links" neural pathways? Also, what's an "exemplar"? That's not made clear.
@skyacaniadev2229 7 years ago
What about "not", "same", "after"?
@paulasimow741 1 year ago
I'm not an expert in the slightest, but I would guess that the reason those words are not typically sorted into semantic networks is that they are not really concepts on their own. They do not have native properties. "Not", "same", "after", and a variety of other words are just linguistic mechanisms that allow us to describe how things relate to each other. To ask why the brain does not store an "after" node is like asking why no one has made a painting of "the".
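That intuition can be made concrete with a small sketch (the subject-relation-object triple format is a common knowledge-representation convention, not something from the video): words like "not" or "after" show up as labels on the links between concepts, not as nodes with properties of their own.

```python
# In a typical semantic-network encoding, concepts are nodes and relational words
# label the edges between them. The example triples are illustrative.

triples = [
    ("canary", "is-a", "bird"),        # "is-a" is a link label, not a concept node
    ("ostrich", "is-not", "flier"),    # negation lives in the relation
    ("dinner", "after", "lunch"),      # temporal order lives in the relation
]

nodes = {t[0] for t in triples} | {t[2] for t in triples}
relations = {t[1] for t in triples}

print(nodes)      # concepts you could draw as points in the video's diagram
print(relations)  # relational words: they connect nodes but are not nodes themselves
```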
@skyacaniadev2229 1 year ago
@paulasimow741 Wow, thank you for replying to a 5-year-old comment. So much has happened in AI these last years, and I pretty much have the answer to this question now; I'm working on an AGI model I designed a few years ago. I do think there are neural representations for those abstract concepts, and language neural networks (or cortical columns) associated with them. The reason no one paints a painting of "the" is related to the AI concept of an auto-encoder/decoder. I had a vague answer even back when I asked this question (I was just trying to see others' ideas); now, with the development of ChatGPT and Midjourney, I pretty much have a concrete plan for my AGI.
@chandrabhanbhai3635 5 years ago
Please make a video on this topic in Hindi, sir/madam.
@drilldrulus1235 1 year ago
This is how propaganda works today; they reverse-engineer it.