I really enjoy your videos. They are just at the right level of detail for someone like me who wants to know the details of these algorithms but isn't fully in the field.
@macheads101 · 7 years ago
Glad you like them :) I try to aim these things at people who are very interested but don't have enough background knowledge to read the literature directly. I figure, even for people who are more involved in the field, they can listen to my overview first and then actually look at the papers if they are interested.
@timshen7337 · 5 years ago
That's interesting even in late 2019!
@albertwang5974 · 6 years ago
We can use a graph database as external memory.
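One way to picture that is a toy sketch where the "graph database" is just an in-memory adjacency map the controller writes facts into and queries back. This is purely illustrative; a real setup might sit on Neo4j or another graph store, and `GraphMemory` and its triples are made up for the example:

```python
from collections import defaultdict

class GraphMemory:
    """Toy external memory: stores (subject, relation, object) edges,
    standing in for a real graph database."""
    def __init__(self):
        self.edges = defaultdict(list)

    def write(self, subj, rel, obj):
        self.edges[subj].append((rel, obj))

    def read(self, subj, rel=None):
        # Retrieve neighbors, optionally filtered by relation type.
        return [(r, o) for r, o in self.edges[subj] if rel is None or r == rel]

mem = GraphMemory()
mem.write("char_17", "looks_like", "katakana_yu")
mem.write("char_17", "seen_in_class", "class_3")
print(mem.read("char_17", rel="looks_like"))  # [('looks_like', 'katakana_yu')]
```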
@steef7142 · 7 years ago
Very good, clear explanation of meta-learning. Keep it up!
@rogerrabbitar4698 · 3 years ago
Hello, I found your content to be very inspiring and wish I had come across it sooner. Do you have any updates on the state of the technology you are most interested in?
@KatyKarineLee · 7 years ago
Thanks for sharing! It's a good summary. I look forward to seeing your work!
@daesoolee1083 · 5 years ago
I really like your idea of a "storage network"; I'm especially intrigued by it in terms of memory efficiency :)
@100timezcooler · 2 years ago
I think this topic is coming up again with the resurgence of transfer learning using pretrained LLMs (i.e. BERT or GPT). Also, the memory he speaks of towards the end is probably what attention heads ended up being, which can be thought of as content-based memory retrieval mechanisms.
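To make the "content-based memory retrieval" reading concrete, here is a minimal numpy sketch of a single attention head doing a soft lookup; the shapes and names are illustrative, not taken from any particular model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_read(query, keys, values):
    """Content-based retrieval: weight each stored value by how well
    its key matches the query, then return the weighted blend."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)  # similarity of the query to every key
    weights = softmax(scores)           # a soft address over memory slots
    return weights @ values             # soft read-out from "memory"

keys = np.random.randn(5, 8)                # 5 memory slots, 8-dim keys
values = np.random.randn(5, 8)
query = keys[2] + 0.1 * np.random.randn(8)  # a query resembling slot 2
read = attention_read(query, keys, values)  # read-out dominated by values[2]
```

Seen this way, an attention head really is a differentiable key-value lookup, much like the external-memory addressing discussed at the end of the video.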
@steveimm · 2 years ago
Have you written up your work in a paper that we can read?
@akshaysonawane9453 · 4 years ago
Is it still a good topic to learn in 2020?
@thiliniyatanwala2349 · 5 years ago
Hi, can you please give me some idea of how to use or apply one-shot/few-shot learning concepts to support edge computing?
@-long- · 4 years ago
Could you talk a bit about the difference between Ravi et al. and Andrychowicz et al., regarding "Learning to learn by gradient descent by gradient descent" (URL: arxiv.org/abs/1606.04474)? To me, as a person who is just getting started with meta-learning, the first paper seems to reuse a lot from the latter, so I cannot tell the difference. One more thing to point out is that the sequel to Finn et al. is arxiv.org/abs/1803.02999. At the time of this video it had not been published yet.
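For reference, the meta-update in that sequel (arxiv.org/abs/1803.02999, "Reptile") is simple enough to sketch in a few lines. This is a rough numpy illustration, not the authors' code; `sample_task` and `grad_fn` are hypothetical stand-ins:

```python
import numpy as np

def reptile_step(theta, sample_task, grad_fn,
                 inner_steps=5, inner_lr=0.02, meta_lr=0.1):
    """One Reptile meta-update: run a few SGD steps on a sampled task,
    then nudge the initialization toward the adapted weights."""
    task = sample_task()
    phi = theta.copy()
    for _ in range(inner_steps):
        phi -= inner_lr * grad_fn(phi, task)  # ordinary SGD on this task
    return theta + meta_lr * (phi - theta)    # theta += eps * (phi - theta)

# Toy usage: each "task" is fitting a different target vector t,
# with loss ||w - t||^2 and gradient 2 * (w - t).
rng = np.random.default_rng(0)
sample_task = lambda: rng.normal(size=4)
grad_fn = lambda w, t: 2.0 * (w - t)
theta = np.ones(4)
for _ in range(1000):
    theta = reptile_step(theta, sample_task, grad_fn)
# theta drifts toward the mean target: a good initialization for any task
```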
@pitrolla · 5 years ago
Maybe a k-nearest-neighbor algorithm with good features could perform well on one-shot learning?
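That is actually a strong baseline. A minimal sketch of the idea, assuming you already have some fixed `embed()` feature extractor (here a trivial stand-in; in practice it would be a pretrained network):

```python
import numpy as np

def one_shot_knn(support_feats, support_labels, query_feat):
    """1-NN over embedded features: with one example per class,
    k-NN reduces to 'pick the closest support example'."""
    dists = np.linalg.norm(support_feats - query_feat, axis=1)
    return support_labels[np.argmin(dists)]

embed = lambda img: img.mean(axis=(0, 1))  # stand-in for a real feature network
support_imgs = [np.random.rand(28, 28, 3) for _ in range(5)]  # one per class
support_feats = np.stack([embed(im) for im in support_imgs])
support_labels = np.array([0, 1, 2, 3, 4])

query_img = np.random.rand(28, 28, 3)
pred = one_shot_knn(support_feats, support_labels, embed(query_img))
```

How well this works rests entirely on the quality of the features, which is roughly what metric-learning approaches to one-shot learning try to optimize directly.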
@ckwong21 · 7 years ago
It's helpful for me to understand some of the latest developments in AI. Great work!
@iHooDoo · 7 years ago
Awww! I was so proud of you... but I forgot it was April Fools, haha!
@Crazymuse · 6 years ago
Awesome video man. I love the way you simplify the concept.
@tranquil-tracks-creation · 6 years ago
Thank you very much for the video. It was great. Can this be used for Natural Language Processing?
@dilbaum · 7 years ago
1:58 All of that looks like the Japanese kana for 'yu' (in katakana).
@macheads101 · 7 years ago
Interesting, I was thinking it was the Hebrew Beit. I wish I remembered which alphabet I took it from (it's from Omniglot).
@jasdeepsingh9774 · 5 years ago
Nice and innovative work... keep it up!
@planktonfun1 · 7 years ago
The first paper is like a combination of seeing a thing and hearing it as well, and using both to classify; after all, humans have five senses. The eyes, by the way, produce many frames of data each second; it's impossible for us to learn from a single frame, but this one-shot approach only uses one frame.
@jjashim7317 · 7 years ago
Hey, what about using a uRNN (unitary RNN) for one-shot learning, since you were commenting about using large memory in your problem set? Also, don't you think a uRNN would be able to resolve the lookup/similarity problem too? Share your thoughts on it. BTW, I am interested in one-shot learning; can you guide me to materials on it? Thanks.
@macheads101 · 7 years ago
Any kind of RNN is worth trying when it comes to meta-learning. However, for the particular problems I talk about in this video, training sequences are only 50 to 100 timesteps long. For such short sequences, I wouldn't expect uRNNs to outperform LSTMs in a significant way. Models like uRNN are optimized for extremely long sequences, not necessarily for fast high-bandwidth recall. The links in the description are a decent start for one-shot learning. I believe you can also use Google Scholar to see what cites those papers.
@akrammohamed8374 · 6 years ago
Macheads, I'm currently working on proposing a machine vision system to the factory I'm working in, and I have encountered a problem which I would like to seek your input on. Would that be possible? Thanks
@peteoo9467 · 7 years ago
What's your college major? How are you liking it?
@macheads101 · 7 years ago
Technically speaking, I haven't declared a major yet. I was thinking about CS, but spending so much time in college "learning" something I am already good at seems like a waste. Either way, I disliked the college experience so much that it drove me to take this semester off--something most people don't do.
@peteoo9467 · 7 years ago
Funny, you just described my experience in a nutshell. I also took a few semesters off before recently declaring a major in CS. It's challenging but fun if you make it so.
@macheads101 · 7 years ago
The macheads101 twitter account is pretty much dead. I (Alex) have a personal twitter account @unixpickle, where I often tweet about ML.
@andreasv9472 · 7 years ago
Hi! Have you continued to work on your model? I was thinking about how to make the memory module effective, and thought about having three networks on the memory, without impacting performance in the short term but making it more condensed in the long term through autoencoders. You don't happen to have TensorFlow/Python code I could try it on?
@TurrettiniPizza · 7 years ago
What are you doing these days? I mean school/work-wise?
@macheads101 · 7 years ago
Currently taking a semester off from college to work on ML.
@Myrslokstok · 7 years ago
macheads101 Within 20 years, it might be some kid's school that closed down for months, and then a student sits at home and figures out a totally new paradigm for cognition and AI. Would not surprise me.
@adabrew2310 · 6 years ago
Very interesting!
@lucasalshouse7023 · 7 years ago
Why did you upload this on April 1st?
@Ujwal.v · 4 years ago
Steve Jobs!!
@mc4444 · 7 years ago
Could one also make a neural net that can learn to create an "external" memory on its own? With these things, it seems to me that the fewer human meat sticks you have involved in the process, the better. That would also blur the line between controller and memory, and be more closely integrated and more natural (maybe? I don't know how the human brain really works). You could also go in the opposite direction and have another layer between the two; let's call them the user (previously the controller), the memory controller, and the memory. This way you could abstract away direct memory access, and so the user could send queries on a higher level.
@macheads101 · 7 years ago
Having something learn the structure of external memory would be interesting to see. I almost feel like neuroevolution would be the best approach for that. As far as that "user" idea goes, I'm not sure how that would really differ from current memory augmentation models. If the user is a neural net that acts as a middleman between the controller and the memory, why couldn't one just say that the user is part of the controller? It's not uncommon to have a few neural net layers process controller output and turn it into more direct memory queries, but that's still considered to be a part of the controller. EDIT: when reading your comment, I mixed up what you called "user" and "memory controller". The same point applies, though.
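To illustrate that last point, here is a rough sketch of an NTM-style content-addressed read, where a small linear map turns the controller's output into a memory query; the sizes and names are made up for the example:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cosine_sim(memory, key):
    # Similarity of the key to every memory row.
    num = memory @ key
    den = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    return num / den

def content_read(controller_out, W_key, memory, beta=5.0):
    """A linear layer maps controller output to a key; the key then
    addresses memory by similarity. Per the comment above, that layer
    is conventionally counted as part of the controller."""
    key = W_key @ controller_out                       # controller output -> query
    weights = softmax(beta * cosine_sim(memory, key))  # sharpened content weights
    return weights @ memory                            # soft read from memory

memory = np.random.randn(128, 20)     # 128 slots, 20-dim contents
controller_out = np.random.randn(64)  # hypothetical controller state
W_key = 0.1 * np.random.randn(20, 64)
read_vec = content_read(controller_out, W_key, memory)
```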
@avhd187 · 7 years ago
Hey, do you still use Java? What's your main programming language as of now? Like, for instance, in your machine learning.
@macheads101 · 7 years ago
The language I use the most is Go (a language developed by Google). Most people seem to use Python for deep learning these days.
@SweetHyunho · 7 years ago
Is there yet any system that genuinely plans its own behavior and changes its own hypotheses, living through episodes (epochs, days) and writing diaries like "tomorrow I'll focus on finding how to raise the output value of node 13, because it seems important for the final goal value. Also, I should analyze its relationship with node 8. Other nodes are yet undecipherable."? I believe we should pay much more attention to communication and language, because teaching is deeply related to meta-learning (thought). What do you think?
@macheads101 · 7 years ago
Long term planning/reasoning is kind of an open problem. If you want a machine to learn this behavior on its own, you need an environment that is complex enough (and long-running enough) to reward good planning. Also, our current learning algorithms aren't good at spanning long time dependencies. As far as communication is concerned, there is some work being done on multi-agent environments (e.g. some work at OpenAI on learning to communicate).
@SweetHyunho · 7 years ago
Thanks, but let me ask one more. I'm not too happy about how "deep learning" is mostly about CNNs these days, because their response is like a single firing of a reflex. To me, CNNs are fixed equations with holes (parameters). Should we not have nodes that represent loop variables (that change the form of the equation)? Is there some RNN where the firing signal travels conditionally back and forth, depending on the specific input, and the response comes out at a variable time?
@SweetHyunho · 7 years ago
Imagine making an AI agent that plays Solitaire, but it's allowed to see only one card at a time. It has to emit arrow keys to navigate and see the other cards. Does today's definition of NNs allow playing Spider successfully this way?
@corey333p · 7 years ago
Great content. +1
@andrewvanpelt9829 · 7 years ago
Can you please do tutorials on how you make apps like JamWiFi?
@altobyy4855 · 7 years ago
Lmao, you had me fooled, tbh. I really thought that you had done it.
@sandzz · 6 years ago
7:30 That sounds like a very shitty life.
@medoessa8858 · 3 years ago
Very interesting. Can I have your email?
@ulissemini5492 · 4 years ago
COMPRESSION
@userou-ig1ze · 7 years ago
The first 30s are like... nah... turning off the video... that 'April Fools' joke was just terrible, dude. Also, the pace is very slow. Thanks for the video though; the overall info is amazing, and the explanations and the logical flow of the presentation are exemplary!
@ThisAgressionwWontStandMan · 5 years ago
I’ve watched you grow up from being a little kid
@lucasalshouse7023 · 7 years ago
Watched the whole thing. What a joke. A very convincing joke, but still a joke.
@AmCanTech · 7 years ago
April Fools'!
@justinburdge5642 · 7 years ago
Not that advanced, I've got a friend who goes on and on about this shit all the time. Get a job, Nuuuuuurrrrrdddd!
@alexnichol3138 · 7 years ago
I'm guessing you don't have a second friend who's obsessed with machine learning.
@justinburdge5642 · 7 years ago
Nah, the second friend talks too much about Smash.
@joshuafishman9002 · 7 years ago
Wow, Justin Burdge (if that is your real name)! You think your friend is hot shit? Well, I have a friend who is hotter shit than your friend... Nurd!