Yann LeCun: Turing Award Lecture "The Deep Learning Revolution: The Sequel"

2,546 views

Preserve Knowledge


1 day ago

Comments: 1
@fredt3217 · 3 years ago
When thinking about intelligence while only thinking of the perceptron... that is only what the eye does, not the mind. If you try to recreate things with it alone, it will not work as you want until you have both parts. Keep in mind that is only about 5% of the process; the rest happens in the mind. The world is constructed of attributes, patterns, and context/concepts. Stick with that, imho.

How we learn models is all based on how we store and process information. There are at least six parts you need to build to do anything, since the mind generally uses them all for any prediction of the state of the world. This happens through millions and billions of acceptances and denials, or what would amount to positive and negative associations in the end. Depth is just patterns: when we see a pattern emerge, we perceive that one thing is in front of another and register it in our state of the world. This works with language and not vision, because one is a basic task and the other a natural process; you can easily cut the task out and use it alone, but not the process.

For prediction, most of what is shown here is wrong unless it is used for narrow tasks. I'd throw out latent-model predictions, just like 3D-model predictions, assuming the goal is human-level intelligence. Predictions with masks are just basic patterns that we store and apply to the perceived state. We build millions, and each is different, so in the end every prediction built through them will be too. In other words, you have to build them all before they can be used. We store them all in the mind.

And for the record... deep learning and intelligence are two different things. Deep learning leads to AI systems, and AGI systems lead to general intelligence; hence the "G" in there. One will not lead to the other, so you have to study it on its own. And while we all know why you can't put wings on your back and flap your arms, in this case you are basically building something exactly like, or at least close to, the real thing. The reason is that the mind is just electrical patterns, so you can't go too far off the path even if you wanted to. It is just electricity running through a system, not millions of parts flying through the air. So you should keep it the same, or similar, imho.