ICLR 2020: Yann LeCun and Energy-Based Models

21,207 views

Machine Learning Street Talk

4 years ago

This week Connor Shorten, Yannic Kilcher and Tim Scarfe reacted to Yann LeCun's keynote speech at this year's ICLR conference, which has just passed. ICLR is the number two ML conference and was completely open this year, with all the sessions publicly accessible via the internet. Yann spent most of his talk on self-supervised learning, energy-based models (EBMs) and manifold learning. Don't worry if you haven't heard of EBMs before; neither had we!
Thanks for watching! Please Subscribe!
Paper Links:
ICLR 2020 Keynote Talk: iclr.cc/virtual_2020/speaker_...
A Tutorial on Energy-Based Learning: yann.lecun.com/exdb/publis/pdf...
Concept Learning with Energy-Based Models (Yannic's Explanation): • Concept Learning with ...
Concept Learning with Energy-Based Models (Paper): arxiv.org/pdf/1811.02486.pdf
Concept Learning with Energy-Based Models (OpenAI Blog Post): openai.com/blog/learning-conc...
#deeplearning #machinelearning #iclr #iclr2020 #yannlecun

Comments: 44
@MachineLearningStreetTalk 4 years ago
5:35 Energy Functions, a Hitchhiker's Guide to the Machine Learning Galaxy
11:07 Initial Reactions to LeCun's Talk
19:50 The Future is Self-Supervised, Early Concept Acquisition in Infants
24:35 The REVOLUTION WILL NOT BE SUPERVISED!
25:44 Three Challenges for Deep Learning
30:18 Self-Supervised Learning is Fill-in-the-Blanks
31:30 Inference and Multimodal Predictions
33:25 Energy-Based Models "Without Resorting to Probabilities"
37:33 Gradient-Based Inference
39:35 Unconditional Version of Energy-Based Models, How K-Means is an Energy-Based Model
41:26 Latent Variable EBM (To Be Continued)
@sphereron 4 years ago
I like the music as an intro, but I think it's okay to fade it out earlier as it distracts from the explanation
@machinelearningdojowithtim2898 4 years ago
I'm really sorry about that, on YouTube and on my phone now it sounds way louder than it did on my main machine. I'll be more careful next time and check the levels on another machine 😁😊👍
@csr7080 4 years ago
@@machinelearningdojowithtim2898 Generally toning down the intro in terms of fast cuts, cut effects and music and making it a bit more 'relaxed' might help. I don't know, I prefer a slightly quieter approach.
@machinelearningdojowithtim2898 4 years ago
@@csr7080 fair enough, let's compromise on the next one ;)
@charlesfoster6326 4 years ago
Loved this discussion! Another way to think of "the manifold" is through D-dimensional heat-maps (or scalar fields). A continuous energy-based model defines one D-dimensional heat-map, and the true data distribution defines another. Energy-based methods hope to make the "cold" (i.e. low-valued) regions in the true map "cold" in the model map. Contrastive methods transfer "heat/coldness" patterns from the true map to the model map. Regularized methods make sure that the model doesn't just make the whole map cold! :)
@YannicKilcher 4 years ago
This is a great description, thanks
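
To make the heat-map picture above concrete, here is a minimal NumPy sketch (a toy example of ours, not from the talk): the energy defines the model's "heat-map", negatives are drawn from the model itself, and the contrastive update pulls energy down on data and up on the negatives until the two cold regions line up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy energy E(x) = ||x - mu||^2 / 2, so exp(-E) is a unit Gaussian at mu.
# The "model heat-map" is this scalar field; mu is its cold spot.
mu = np.array([3.0, 3.0])          # start the cold spot in the wrong place

def dE_dmu(x, mu):
    return -(x - mu)               # gradient of E w.r.t. mu, per point

# "True" data lives near (0.5, -1.0); its heat-map is cold there.
data = rng.normal([0.5, -1.0], 0.5, size=(512, 2))

lr = 0.1
for step in range(300):
    pos = data[rng.integers(0, len(data), 64)]
    # Negatives from the model's own distribution (exact here, since
    # exp(-E) is Gaussian; in general this step needs MCMC).
    neg = rng.normal(mu, 1.0, size=(64, 2))
    # Contrastive update: energy down on data, up on negatives.
    g = dE_dmu(pos, mu).mean(axis=0) - dE_dmu(neg, mu).mean(axis=0)
    mu = mu - lr * g

print(mu)   # ~(0.5, -1.0): the model's cold region now matches the data's
```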
@XOPOIIIO 4 years ago
Music is too loud and I don't think it's suitable at all.
@nprithvi24 4 years ago
This was a wonderful discourse on EBMs. I'm glad I spent enough time to understand the concept of learning manifolds from these guys. Worth it.
@welcomeaioverlords 4 years ago
I think a key point for casting existing methods into the energy framework is that it allows you to understand that existing methods are particular points on a broader spectrum, and therefore there are gaps between existing methods that could be equally valid and more effective. It wasn't covered here, but I'd like to hear more about how the probability framework results in less smooth surfaces, which might inhibit learning compared to energy-based methods. But I think the idea of doing gradient descent as part of inference is a fantastically interesting idea. Combine that with the concept of LISTA, which uses a NN to predict the outcome of this "SGD on Z" process, and this becomes a bit like transitioning from System 2 to System 1. In other words, if you do this "SGD at inference" enough times, you can fit a predictive model to that process. Then, recovering the optimal Z is just another feed-forward inference exercise, which is more like System 1 intuition (along with all the opportunities for mistakes and biases).
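
Here is a toy sketch of that "SGD on Z" inference plus a LISTA-style amortized shortcut (our own illustration; the fixed linear "decoder" W and the quadratic energy are made up for the example, not LeCun's or LISTA's actual setup).

```python
import numpy as np

rng = np.random.default_rng(1)

# Energy over (x, z): E = ||x - W z||^2 with a fixed random "decoder" W.
# (A stand-in for a trained model; the point is the inference procedure.)
W = rng.normal(size=(8, 3))

def infer_z(x, steps=300, lr=0.02):
    """System-2-style inference: gradient descent on z at test time."""
    z = np.zeros(3)
    for _ in range(steps):
        grad = -2 * W.T @ (x - W @ z)   # dE/dz
        z -= lr * grad
    return z

# Amortization ("System 1"): fit a one-shot map A with A(x) ~ infer_z(x),
# i.e. a predictive model of the iterative inference process itself.
X = rng.normal(size=(500, 8))
Z = np.stack([infer_z(x) for x in X])       # slow, iterative targets
A, *_ = np.linalg.lstsq(X, Z, rcond=None)   # feed-forward approximation

x_test = rng.normal(size=8)
print(infer_z(x_test))   # deliberate "System 2" answer
print(x_test @ A)        # fast "System 1" intuition, nearly identical
```

In this toy case the optimal z happens to be linear in x, so a linear least-squares fit suffices; with a nonlinear energy you would fit a small network to the (x, z*) pairs instead, which is essentially LISTA's move.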
@antonschwarz6685 3 years ago
I loved the music, appropriately epic IMO..... Your content is outstanding, sincere thanks to you all.
@IRWBRW964 4 years ago
I think the music is good, but please lower the volume when someone is speaking.
@rakeshmallick9161 4 years ago
Yann LeCun has shared this video on his Facebook wall. I will spend today and tomorrow slowly going through the video in phases.
@machinelearningdojowithtim2898 4 years ago
Amazing that Yann shared on Facebook too! We are super excited about where we can take this channel
@theodorosgalanos9663 4 years ago
Great talk, thank you for your effort and patience! I'm curious about the point on using energy during inference, not learning. Could this be related to a sort of 'esoteric inference' the models might do, something akin to the Concept Learning model, where the energy function is used during what they call 'execution time' to internally infer concepts out of pairs of input data? I wonder if that makes sense: did LeCun have an idea of internalizing the process of inference in the model as a way to learn more abstract and difficult tasks/concepts, like in the same paper where SGD is used to create an output?
@YannicKilcher 4 years ago
Yes, that makes sense and you're absolutely on the right track. Of course, we can't speak for LeCun here, but as I imagine it, this (what you're saying) is one of the advantages that these EBMs have. Of course, the power of using something like SGD during inference comes with the cost that you have to train this somehow.
@mohammadxahid5984 4 years ago
1:30:40 I think language is continuous in the temporal domain, whereas an image may not be continuous in the spatial domain, which makes denoising an image with masked pixels less effective than a masked language model.
@Dougystyle11 3 years ago
Super cool talk, thanks!
@robbiero368 4 years ago
Music ends around 7:54
@zxl2537 2 years ago
What's the difference between an energy function and a general cost function?
@andres_pq 3 years ago
36:23 Why is energy used for inference, but not for learning? Also, Yannic is amazing at reducing complicated topics into understandable terms!
@YouLoveMrFriendly 4 years ago
I get the feeling self-supervised learning for images and video is going to take many decades to figure out.
@arkasaha4412 4 years ago
Very informative! I still don't quite get what a manifold is though; can you suggest some great sources? Also, is a manifold somehow related to the loss landscape? Thanks!
@machinelearningdojowithtim2898 4 years ago
Hey Arka! A manifold is just all the places where certain types of data can exist. For example, a 2D plane is a manifold: you can only place points on that plane. Imagine the 3D locations of all the cities in the world; they exist on the surface of a spherical manifold. And real-world data also sits on a manifold, albeit a very complicated one which you couldn't visualise! It's a great way to reason about the inner workings of deep learning models.
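
A quick numeric version of the cities example (a sketch of ours): every point carries three coordinates, but there are only two intrinsic degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(2)

# A 2D manifold embedded in 3D: the unit sphere. Each "city" has three
# coordinates but only two degrees of freedom (latitude, longitude).
lat = rng.uniform(-np.pi / 2, np.pi / 2, size=5)
lon = rng.uniform(-np.pi, np.pi, size=5)
cities = np.stack([np.cos(lat) * np.cos(lon),
                   np.cos(lat) * np.sin(lon),
                   np.sin(lat)], axis=1)

# Every city satisfies the manifold constraint ||x|| = 1 ...
print(np.linalg.norm(cities, axis=1))       # all ~1.0: on the manifold
# ... while a random 3D point is almost surely off it.
print(np.linalg.norm(rng.normal(size=3)))
```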
@vinca43 2 years ago
The introduction answers the long-standing question: what would happen if a bunch of computer scientists DJ'd a rave? I'm definitely on board the machine learning rave train.
@rytiskazimierasjonynas4561 4 years ago
Do you mind reuploading the video without the music? I was eager to watch this, but the music made me quit after the first minute.
@machinelearningdojowithtim2898 4 years ago
It's a two-hour show; just skip forward a few minutes.
@Lupobass1 4 years ago
Bro you are going to make me write an AI to remove background music
@snippletrap 3 years ago
I disagree with Yannic's assertion that System 1 and System 2 are arbitrary distinctions. We are talking about computing systems here, and there are different kinds of computer/data pairs. System 1 corresponds to DFAs / regular languages, while System 2 is context-free or higher. Hierarchical decomposition, as in planning, amounts to grammatical parsing, which is fundamentally distinct from regex- and CNN-style pattern matching. It is true that some tasks originally learned in System 2 can eventually be distilled and passed down to System 1, since every regular language is also context-free. But it is also true that there are some System 2 tasks that can never be passed down to System 1. For example: multiplying 8-digit numbers, composing Petrarchan sonnets, proving theorems, remembering what your wife just said, or writing in Assembly. There is no "muscle memory" for higher-level tasks like these; they require sustained conscious attention, just as every computer higher than a DFA requires memory.
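
A tiny illustration (ours, not the commenter's) of the regular vs context-free gap this argument leans on: a DFA-decidable pattern next to a^n b^n, which no finite-state machine can recognize without memory.

```python
import re

# "System 1"-style pattern matching: a+b+ is regular, so a DFA
# (equivalently, this regex) decides it with no memory beyond its state.
print(bool(re.fullmatch(r"a+b+", "aaabb")))    # True

# a^n b^n is context-free but NOT regular: any recognizer needs
# unbounded memory. Here a counter plays the role of the stack.
def a_n_b_n(s: str) -> bool:
    n = len(s) - len(s.lstrip("a"))    # count leading a's
    return n > 0 and s == "a" * n + "b" * n

print(a_n_b_n("aaabbb"))   # True
print(a_n_b_n("aaabb"))    # False
```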
@Georgesbarsukov 2 years ago
Why is there background music? :(((((((
@MachineLearningStreetTalk 2 years ago
Old video, we are skilling up on video editing all the time! Sorry!
@BiancaAguglia 4 years ago
There is a story about John von Neumann and a bit tricky, yet very simple, problem: train stations A and B are 150 miles apart. Train T1 goes from station A to station B, and train T2 goes from station B to station A. Both trains travel at 75 miles/hour and they leave at the same time. A fly, initially on train T1, flies towards T2 at 50 miles/hour. When it reaches T2 it instantly turns back and flies towards T1 (and the pattern repeats). How many miles does the fly travel before it meets its inevitable fate? 😊 The story is that von Neumann instantly answered 50 miles. When asked how he did it, he said: "I summed the infinite series, of course." 😁 This story is often told to illustrate von Neumann's genius, but I think it should also be seen as a reminder that genius can sometimes overlook the simplest and most elegant solutions. I don't understand most of the things discussed in this video (I'm not at that level yet) but, from having watched Yann LeCun's past talks, I wonder if sometimes the experts are too smart to see the "dumb" solutions 😁 Of course, I have no clue. I'll keep learning though. I really enjoyed watching your conversation and seeing you try to figure out the workings of a brilliant mind. It's very helpful.
@MachineLearningStreetTalk 4 years ago
That's a wonderful anecdote Bianca! Thanks for sharing!
@aBigBadWolf 4 years ago
50 miles makes no sense. You must have memorized the wrong numbers.
@benbridgwater6479 4 years ago
The answer is 50 miles, but the point of the story is that von Neumann, who solved it immediately, didn't use the "aha" method (trains meet in 1 hour, ergo the fly travels 50 miles since it's flying at 50 mph). von Neumann instead instantly formulated and summed the infinite series in his head.
@aBigBadWolf 4 years ago
​@@benbridgwater6479 Suddenly it stops when trains meet? What infinite series? Edit: So I looked it up. The fly is supposed to be faster than the train(!) and pinball between the trains until it gets squashed(!). With that info it makes sense. Here: www.infiltec.com/j-logic.htm
@BiancaAguglia 4 years ago
@@aBigBadWolf 😁Yes and no. I got the wrong numbers but not because I memorized them wrongly. I didn't memorize them at all. It was the logic of the story that I remembered. Unfortunately I forgot the fly had to be faster than the trains for this to make sense at all. 😁
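
For anyone who wants the arithmetic behind both routes, here is a worked version in symbols, under the standard assumption (which the numbers above violate) that the fly's speed u exceeds each train's speed v, with the trains starting D apart:

```latex
% Shortcut: the trains close at 2v, so they meet at time T = D/(2v),
% and the fly simply flies for that long:
\text{fly distance} = u\,T = \frac{uD}{2v}.

% Infinite series: on each leg, the fly and the oncoming train close at
% u + v, and the gap between the trains shrinks by a factor r per leg:
r = \frac{u - v}{u + v}, \qquad
\sum_{k=0}^{\infty} u \cdot \frac{D\,r^{k}}{u + v}
  = \frac{uD}{u + v} \cdot \frac{1}{1 - r}
  = \frac{uD}{2v}.
```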
@snippletrap 3 years ago
Agree with Yannic's point about babies. It's like asking, "How do spiders learn how to spin webs so quickly?"
@josephgardi7522 2 years ago
Multi-task learning is way more straightforward, and the tech is already working well. Unsupervised learning will never be optimal because it has no ability to discard information that is not relevant to the tasks we care about.
@Hannah-cb7wr 4 years ago
The music makes this unwatchable
@konataizumi5829 4 years ago
Yeah, like others have said, this sucks