Meta-Learning and One-Shot Learning

24,632 views

macheads101

A day ago

Comments: 60
@AlexanderBollbach 7 years ago
I really enjoy your videos. They are just at the right level of detail for someone like me who wants to know the details of these algorithms but isn't fully in the field.
@macheads101 7 years ago
Glad you like them :) I try to aim these things at people who are very interested but don't have enough background knowledge to read the literature directly. I figure, even for people who are more involved in the field, they can listen to my overview first and then actually look at the papers if they are interested.
@timshen7337 5 years ago
That's interesting even in late 2019!
@albertwang5974 6 years ago
We can use a graph database as external memory.
@steef7142 7 years ago
Very good, clear explanation of meta-learning. Keep it up!
@KatyKarineLee 7 years ago
Thanks for sharing! It's a good summary. I look forward to seeing your work!
@100timezcooler 2 years ago
I think this topic is coming up again with the resurgence of transfer learning using pretrained LLMs (e.g. BERT or GPT). Also, the memory he speaks of towards the end is probably what attention heads ended up being, which can be thought of as content-based memory retrieval mechanisms.
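(Editor's note: a minimal NumPy sketch of the content-based retrieval this comment describes, a single attention head reading from a key/value "memory" by similarity. Everything here, names and shapes included, is illustrative rather than taken from the video.)

```python
import numpy as np

def attention_read(query, keys, values):
    """Content-based memory read: score every stored key against the
    query, softmax the scores, and return the weighted sum of values."""
    scores = keys @ query / np.sqrt(query.shape[-1])  # similarity per slot
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax over memory slots
    return weights @ values                           # soft, differentiable lookup

# Toy usage: a "memory" of 5 slots with 8-dimensional keys and values.
rng = np.random.default_rng(0)
keys, values = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
readout = attention_read(rng.normal(size=8), keys, values)
```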
@Crazymuse 6 years ago
Awesome video man. I love the way you simplify the concept.
@daesoolee1083 5 years ago
I really like your idea of a "storage network"; I'm especially intrigued by its potential memory efficiency :)
@steveimm 2 years ago
Have you written up your work in a paper that we can read?
@ckwong21 7 years ago
It's helpful for me to understand some of the latest developments in AI. Great work!
@planktonfun1 7 years ago
The first paper is like combining seeing a thing and hearing it, and using both to classify; after all, humans have five senses. The eyes, by the way, produce many frames of data each second, so it's impossible for us to learn from a single frame, but this one-shot setup uses only one frame.
@akshaysonawane9453 4 years ago
Is it still a good topic to learn in 2020?
@jasdeepsingh9774 5 years ago
Nice and innovative work... keep it up!
@thiliniyatanwala2349 5 years ago
Hi, can you please give me some idea of how to apply the one-shot/few-shot learning concept to support edge computing?
@pitrolla 5 years ago
Maybe a k-nearest-neighbor algorithm with good features could perform well on one-shot learning?
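(Editor's note: that intuition is roughly what matching/metric-based approaches formalize. A hedged sketch, assuming some embedding function already exists; the identity embedding used here is just a placeholder.)

```python
import numpy as np

def one_shot_nn(support_x, support_y, query_x, embed=lambda x: x):
    """1-nearest-neighbor one-shot classifier: embed the single labeled
    example per class, embed the query, and return the closest label."""
    support = np.stack([embed(x) for x in support_x])
    dists = np.linalg.norm(support - embed(query_x), axis=1)
    return support_y[int(np.argmin(dists))]

# Toy usage with the identity embedding; a learned embedding would
# replace the default `embed` argument.
support_x = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]
support_y = ["class_a", "class_b"]
print(one_shot_nn(support_x, support_y, np.array([0.9, 0.1])))  # class_b
```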
@iHooDoo 7 years ago
Awww! I was so proud of you... but I forgot it was April Fools, haha!
@-long- 4 years ago
Could you talk a bit about the difference between Ravi et al. and Andrychowicz et al. ("Learning to learn by gradient descent by gradient descent", arxiv.org/abs/1606.04474)? As someone just getting started with meta-learning, the first paper seems to reuse a lot from the latter, so I cannot tell the difference. One more thing to point out: the sequel to Finn et al. is arxiv.org/abs/1803.02999, which had not yet been published at the time of this video.
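(Editor's note: arxiv.org/abs/1803.02999 is the Reptile paper, whose outer loop is simple enough to sketch. A hedged sketch assuming a `sample_task` generator and an `sgd_on_task` helper that runs a few inner SGD steps and returns adapted weights as a NumPy array; both helpers are hypothetical stand-ins.)

```python
import numpy as np

def reptile_step(weights, sample_task, sgd_on_task, outer_lr=0.1):
    """One Reptile meta-update: adapt to a single sampled task with plain
    SGD, then move the meta-weights a fraction of the way toward the
    adapted weights. No second-order gradients are needed, unlike MAML."""
    task = sample_task()                      # hypothetical task sampler
    adapted = sgd_on_task(weights, task)      # hypothetical inner-loop helper
    return weights + outer_lr * (adapted - weights)
```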
@rogerrabbitar4698 3 years ago
Hello, I found your content very inspiring and wish I had come across it sooner. Do you have any updates on the state of the technology you are most interested in?
@akrammohamed8374 6 years ago
Macheads, I'm currently working on proposing a machine vision system for the factory I work in, and I've encountered a problem I would like your input on. Would that be possible? Thanks
@dilbaum 7 years ago
1:58 All of those look like the Japanese kana for 'yu' (in katakana)
@macheads101 7 years ago
Interesting, I was thinking it was the Hebrew Beit. I wish I remembered which alphabet I took it from (it's from Omniglot).
@tranquil-tracks-creation 6 years ago
Thank you very much for the video. It was great. Can this be used for Natural Language Processing?
@andreasv9472 7 years ago
Hi! Have you continued to work on your model? I was thinking about how to make the memory module effective, and thought about having three networks on the memory without impacting performance in the short term, but making it more condensed in the long term through autoencoders. You don't happen to have a TensorFlow/Python version I could try it on?
@Ujwal.v 4 years ago
Steve Jobs !!
@jjashim7317 7 years ago
Hey, what about using a uRNN (unitary RNN) for one-shot learning, since you were commenting about using a large memory in your problem set? Also, don't you think a uRNN would be able to resolve the lookup/similarity problem too? Share your thoughts on it. BTW, I'm interested in one-shot learning; can you guide me to materials on it? Thanks.
@macheads101 7 years ago
Any kind of RNN is worth trying when it comes to meta-learning. However, for the particular problems I talk about in this video, training sequences are only 50 to 100 timesteps long. For such short sequences, I wouldn't expect uRNNs to outperform LSTMs in a significant way. Models like uRNN are optimized for extremely long sequences, not necessarily for fast high-bandwidth recall. The links in the description are a decent start for one-shot learning. I believe you can also use Google Scholar to see what cites those papers.
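(Editor's note: a hedged sketch of how those 50-100 timestep training sequences are typically assembled, following the shifted-label episode setup of Santoro et al. covered in the video; `examples_by_class` maps a class name to a list of feature vectors, and every name here is illustrative.)

```python
import numpy as np

def make_episode(examples_by_class, episode_len=50, seed=None):
    """One meta-learning episode in the style of Santoro et al.: at step t
    the model receives (x_t, label from step t-1), so it must bind each
    new input to its label in memory and recall that binding later."""
    rng = np.random.default_rng(seed)
    classes = np.array(sorted(examples_by_class))
    rng.shuffle(classes)                      # labels are re-assigned every episode
    xs, ys = [], []
    for _ in range(episode_len):
        label = int(rng.integers(len(classes)))
        pool = examples_by_class[classes[label]]
        xs.append(pool[int(rng.integers(len(pool)))])
        ys.append(label)
    prev_labels = [-1] + ys[:-1]              # label channel, shifted one step
    return list(zip(xs, prev_labels)), ys     # RNN inputs, prediction targets
```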
@peteoo9467 7 years ago
What's your college major? How are you liking it?
@macheads101 7 years ago
Technically speaking, I haven't declared a major yet. I was thinking about CS, but spending so much time in college "learning" something I am already good at seems like a waste. Either way, I disliked the college experience so much that it drove me to take this semester off--something most people don't do.
@peteoo9467 7 years ago
Funny, you just described my experience in a nutshell. I also took a few semesters off before recently declaring a major in CS. It's challenging but fun if you make it so.
@macheads101 7 years ago
The macheads101 twitter account is pretty much dead. I (Alex) have a personal twitter account @unixpickle, where I often tweet about ML.
@lucasalshouse7023 7 years ago
Why did you upload this on April 1st?
@mc4444 7 years ago
Could one also make a neural net that learns to create an "external" memory on its own? With these things it seems to me that the fewer human meat sticks you have involved in the process, the better. That would also blur the line between controller and memory, making them more closely integrated and more natural (maybe? I don't know how the human brain really works). You could also go in the opposite direction and add another layer between the two; let's call them the user (previously the controller), the memory controller, and the memory. This way you could abstract away direct memory access, and the user could send queries at a higher level.
@macheads101 7 years ago
Having something learn the structure of external memory would be interesting to see. I almost feel like neuroevolution would be the best approach for that. As far as that "user" idea goes, I'm not sure how that would really differ from current memory augmentation models. If the user is a neural net that acts as a middleman between the controller and the memory, why couldn't one just say that the user is part of the controller? It's not uncommon to have a few neural net layers process controller output and turn it into more direct memory queries, but that's still considered to be a part of the controller. EDIT: when reading your comment, I mixed up what you called "user" and "memory controller". Same point applies, though.
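(Editor's note: a hedged sketch of the pattern this reply describes, assuming NTM-style content addressing: a learned projection turns controller state into a memory query, and the read is a softmax over cosine similarities. `W_q`, `beta`, and the shapes are illustrative assumptions, not the model from the video.)

```python
import numpy as np

def content_address_read(hidden, memory, W_q, beta=5.0):
    """The 'few layers turning controller output into memory queries'
    pattern: project the controller's hidden state to a query, then read
    memory by a softmax over cosine similarities (NTM-style addressing)."""
    query = W_q @ hidden                      # learned projection, trained with the controller
    sims = memory @ query / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(query) + 1e-8)
    weights = np.exp(beta * sims)             # beta sharpens the address distribution
    weights /= weights.sum()
    return weights @ memory                   # soft read over memory rows
```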
@SweetHyunho 7 years ago
Is there yet any system that genuinely plans its own behavior and changes its own hypotheses, living through episodes (epochs, days) and writing diaries like "tomorrow I'll focus on finding how to raise the output value of node 13, because it seems important for the final goal value. I should also analyze its relationship with node 8. Other nodes are as yet undecipherable."? I believe we should pay much more attention to communication and language, because teaching is deeply related to meta-learning (thought). What do you think?
@macheads101 7 years ago
Long term planning/reasoning is kind of an open problem. If you want a machine to learn this behavior on its own, you need an environment that is complex enough (and long-running enough) to reward good planning. Also, our current learning algorithms aren't good at spanning long time dependencies. As far as communication is concerned, there is some work being done on multi-agent environments (e.g. some work at OpenAI on learning to communicate).
@SweetHyunho 7 years ago
Thanks, but let me ask one more. I'm not too happy about how "deep learning" is mostly about CNNs these days, because their response is like the single firing of a reflex. To me, CNNs are fixed equations with holes (parameters). Should we not have nodes that represent loop variables (changing the form of the equation)? Is there some RNN where the firing signal travels conditionally back and forth, depending on the specific input, and the response comes out at a variable time?
@SweetHyunho 7 years ago
Imagine making an AI agent that plays Solitaire, but it's allowed to see only one card at a time. It has to emit arrow keys to navigate and see the other cards. Does today's definition of NNs allow successfully playing Spider this way?
@adabrew2310 6 years ago
Very interesting!
@TurrettiniPizza 7 years ago
What are you doing these days? I mean school/work-wise?
@macheads101 7 years ago
Currently taking a semester off from college to work on ML.
@Myrslokstok 7 years ago
macheads101 Within 20 years it might be some kid whose school closed down for months, and then a student sits at home and figures out a totally new paradigm for cognition and AI. Would not surprise me.
@andrewvanpelt9829 7 years ago
Can you please do tutorials on how you make apps like JamWiFi?
@avhd187 7 years ago
Hey, do you still use Java? What's your main programming language as of now, for instance in your machine learning?
@macheads101 7 years ago
The language I use the most is Go (a language developed by Google). Most people seem to use Python for deep learning these days.
@corey333p 7 years ago
Great content. +1
@altobyy4855 7 years ago
Lmao, you had me fooled, tbh. I really thought that you had done it.
@sandzz 6 years ago
7:30 That sounds like a very shitty life
@ulissemini5492 4 years ago
COMPRESSION
@medoessa8858 3 years ago
Very interesting. Can I have your email?
@userou-ig1ze 7 years ago
The first 30s are like... nah... turning off the video... that 'April Fools' joke was just terrible, dude. Also, the pace is very slow. Thanks for the video though; the overall info is amazing, and the explanations and the logical flow of the presentation are exemplary!
@ThisAgressionwWontStandMan 5 years ago
I’ve watched you grow up from being a little kid
@AmCanTech 7 years ago
April Fools'
@lucasalshouse7023 7 years ago
Watched the whole thing. What a joke. A very convincing joke, but still a joke.
@justinburdge5642 7 years ago
Not that advanced; I've got a friend who goes on and on about this shit all the time. Get a job, Nuuuuuurrrrrdddd!
@alexnichol3138 7 years ago
I'm guessing you don't have a second friend who's obsessed with machine learning.
@justinburdge5642 7 years ago
Nah, the second friend talks too much about Smash.
@joshuafishman9002 7 years ago
Wow, Justin Burdge (if that is your real name)! You think your friend is hot shit? Well, I have a friend who is hotter shit than your friend... Nurd!
@alexnichol3138 7 years ago
I live for comment threads like this.