Yann LeCun: "Energy-Based Self-Supervised Learning"

Views: 35,173

Institute for Pure & Applied Mathematics (IPAM)

1 day ago

Comments: 23
@CosmiaNebula 4 years ago
28:24 "probability is derived from energy" probably refers to statistical mechanics, where any energy function on the possible states of a system defines a probability distribution on these states (Boltzmann distribution).
@imrematajz1624 3 years ago
Try again at 0.75× normal speed... it makes a huge difference in comprehension! His mind is hyper fast. And I am not a robot :-)
@CristianGarcia 5 years ago
I think the Contrastive Predictive Coding paper achieves similar kinds of results for images and audio as the ones presented for text.
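For context, a rough sketch of the InfoNCE-style contrastive objective that CPC uses (the shapes and names below are illustrative assumptions, not the paper's code):

```python
# Rough sketch of an InfoNCE-style loss (hypothetical shapes, not the CPC authors' code).
import torch
import torch.nn.functional as F

def info_nce_loss(context, positives, negatives):
    # context:   (B, D) context embeddings
    # positives: (B, D) embeddings of the true future samples
    # negatives: (B, N, D) embeddings of distractor samples
    pos_logits = (context * positives).sum(dim=-1, keepdim=True)   # (B, 1)
    neg_logits = torch.einsum('bd,bnd->bn', context, negatives)    # (B, N)
    logits = torch.cat([pos_logits, neg_logits], dim=1)            # (B, 1 + N)
    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    # The positive sample sits at index 0, so this is a (1 + N)-way classification loss.
    return F.cross_entropy(logits, targets)
```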
@WeidiXie 5 years ago
And it actually also works on videos: arxiv.org/abs/1909.04656 kzbin.info/www/bejne/amSuenuLq62deJI
@snippletrap 5 years ago
The ridge at 41:20, and the ambiguity it implies, calls to mind the gestalt idea of "multistability".
@whatsinthepapers6112 5 years ago
Sounds like we all need to put more energy into Energy-based models
@christianleininger2954 2 years ago
2:44 He says humans reach that level within 15 minutes of play, and after at least 10 years of being alive learning how the world works (physics, and predicting the future in their minds).
@minhvu8909 4 years ago
The slides: helper.ipam.ucla.edu/publications/mlpws4/mlpws4_15927.pdf
@robbiero368 4 years ago
So actually it takes us months to learn anything, with millions of examples too, then. But what we learn first can be transferred to many things later.
@robbiero368 4 years ago
For images, would it not make more sense to just predict the label for the missing "thing" rather than the actual pixels? How many humans could do that, after all?
@robbiero368 4 years ago
Actually, that's not true, is it? Our visual system is constantly replacing or imagining missing data.
@snippletrap 5 years ago
The Chomskyans are right in part, for the same reason that LeCun mentions in the beginning of the lecture. What LeCun calls poor "sample efficiency" is what Chomsky calls "the poverty of the stimulus". Children require far less training data.
@visuality2541 5 years ago
this is gold
@_chip 4 years ago
Why does he call his cost function an energy function? Isn’t that just a synonym?
@christoferberruzchungata2722 4 years ago
Because his loss IS BASED on the concept of how an energy function should behave. Not all loss functions are inspired by energy functions. I believe he emphasizes the "energy-based" idea to make a strong point that he is borrowing the concept/idea from physics and natural systems.
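One way to read the distinction, as a hedged sketch rather than LeCun's actual objective: an energy function E(x, y) scores how compatible a pair is, while a loss function is what training minimizes. A contrastive hinge loss, for instance, builds a loss from an energy by pushing positive-pair energies down and negative-pair energies up to a margin.

```python
# Illustrative sketch: an energy scores compatibility, and a contrastive hinge
# loss turns it into a training objective (these names are my own, not LeCun's code).
import torch

def energy(x, y):
    # Example energy: squared distance between embeddings (lower = more compatible).
    return ((x - y) ** 2).sum(dim=-1)

def contrastive_hinge_loss(x, y_pos, y_neg, margin=1.0):
    e_pos = energy(x, y_pos)   # push down: energy of observed (compatible) pairs
    e_neg = energy(x, y_neg)   # push up: energy of contrasted pairs, until it clears the margin
    return (e_pos + torch.relu(margin - e_neg)).mean()
```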
@ephi124 4 years ago
"Babies learn by observation with little interaction", yes and that's because they inherit such capability from their parents: their neurons are already fine-tuned to have those features and the question is how do we enforce these in our ML models?
@Rishabhshukla13 4 years ago
I guess pre-training is equivalent to that. So are genetic algorithms (in a different way, though).
@ephi124 4 years ago
@@Rishabhshukla13 Which tells me our approaches to mimicking biological neurons have been a fiasco. Like he said, the way humans learn so quickly is neither supervised nor reinforced, but pre-training is. The only choice we have is understanding biological neurons (not superficially) and how evolution works, and seeing whether we have the resources to replicate them. And I'm not even sure it is necessary to mimic biology in order to build intelligent machines.
@vast634 4 years ago
@@ephi124 Neurons always work in groups in the cortical column. Artificial NNs always treat them as singular logic elements. That is way too fine-grained, and not their job in biology. The whole column is the logical element, not the single neuron.
@agiisahebbnnwithnoobjectiv228 3 years ago
The objective function of animal brains and therefore Human Level A.I. is impact maximization. You were chosen to receive this message. Help spread the word.
@agiisahebbnnwithnoobjectiv228 3 years ago
This is never gonna work
@johnjewell5008 3 years ago
I am all for asking questions, but when one of the premier AI researchers in the world is giving a talk, probably avoid asking basic details about transformers, especially when it is not the main focus of the talk. Hahaha, this made me cringe a bit.