Thanks for this amazing lecture on visual recognition models. Really appreciated!
@SH-iy2gw a month ago
Thanks so much for making this available!
@SalmanKhan-er2ic 5 months ago
Thanks for the nice presentation, but it could have been better. Maybe you can make a series of short videos explaining these with examples.
@savinduwedikkaraarachchi 7 months ago
Thank you so much ❤❤
@savinduwedikkaraarachchi 7 months ago
Thank you so much for providing this amazing content for free! Your efforts are truly appreciated! ❤❤
@stracci_5698 7 months ago
Your lecture is amazing, I love how you give so many practical examples!
@ayseguluygunoksuz6523 8 months ago
Many thanks for this useful talk
@victoriiromero3015 10 months ago
Hi, thanks for this very good tutorial series. May I know if there is an issue in the Colab notebook used in this episode? It reports an error on as_classification_dataset...
@gyeonghokim a year ago
Such a great course for continual learning
@ashadams8143 a year ago
Thank you so so much for sharing this information for free. 🎉
@dibyajyotiacharya8916 a year ago
Thank you so much for this❤❤
@bth2012 a year ago
17:43 24:01 53:18
@shiv093 a year ago
0:18 Continual Learning of Object Instances
1:17 Generative Feature Replay for Class-incremental Learning
2:17 A Simple Class Decision Balancing for Incremental Learning
3:10 Adaptive Group Sparse Regularization for Continual Learning
4:10 CatNet: Class Incremental 3D ConvNets for Lifelong Egocentric Gesture Recognition
5:17 Few-shot Image Recognition for UAV Sports Cinematography
6:09 Generalized Class Incremental Learning (my favorite)
7:10 Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches
8:10 Continual Reinforcement Learning in 3D Non-stationary Environments
9:10 Reducing Catastrophic Forgetting with Learning on Synthetic Data
10:13 Continual Learning for Anomaly Detection in Surveillance Videos
@youssefmaghrebi6963 a year ago
This is a really good, well-rounded course up to this episode. You saved me a lot of time and gave me the information I needed about what's going on in CL research, instead of my spending too much time discovering the field on my own. Thanks a lot.
@VincenzoLomonaco a year ago
Thanks! I'm glad you liked it! :)
@ruilin5498 a year ago
good
@aniruddhakishorkawadecs18b81 2 years ago
One strategy to rule them all 😂
@Charles-my2pb 2 years ago
Can you share these slides? Thank you so much!
@zeyuli1549 a year ago
+1
@salehyousefi1099 2 years ago
Really great work!
@roseclemons6522 2 years ago
I Love this video
@niamhbartlett2465 2 years ago
Faultlessness🎇
@SuperAstrax111 2 years ago
I like your explanations and examples, but too much time is spent on small ideas.
@VincenzoLomonaco 2 years ago
Thanks for the feedback! It is always difficult to balance speed / complexity for a large audience with different backgrounds!
@aliasgher5272 a year ago
@@VincenzoLomonaco you did a great job.
@payamfiroozfar1615 a year ago
I think that was a positive aspect of the course. I like that the professor covers the small ideas briefly, because it lets us understand everything completely.
@gusseppebravo8334 2 years ago
Finally the last lecture :D, eager to see more on this course! Thanks!
@krishrocks11 2 years ago
This is great. I really wish the documentation had more working Colab examples/tutorials so we could play around with the whole thing more.
@markhampton3614 2 years ago
Thank you very much for making this course public!
@simonsuh1733 2 years ago
dope tech!
@robinyadav6950 2 years ago
Hi, great lecture series! I was wondering if reinforcement learning can be considered a form of continual learning?
@vidushimaheshwari973 2 years ago
I think reinforcement learning could be one of the "tasks" or events of a continual learning process. Training everything on one reinforcement learning model will most likely lead to the same problem as training everything on a single neural network -- catastrophic forgetting.
@chenghanhuang9283 2 years ago
awesome work!
@JuliusUnscripted 2 years ago
What we have learned today: Daenerys Targaryen is the mother of all strategies ;) haha
@VincenzoLomonaco 2 years ago
ahah exactly! 🤣
@JuliusUnscripted 2 years ago
Great lecture! I like the compass idea too :) Especially being able to find similar papers later that possibly work on the same challenges and problem types.
@VincenzoLomonaco 2 years ago
I'm glad to hear that! 😊
@zaidzubair3829 2 years ago
Perfect explanation
@VincenzoLomonaco 2 years ago
Thanks Zaid, I'm glad you liked it!
@冯至-n4y 2 years ago
Great! It helps me learn about this direction, which is new to me and puzzled me for a while.
@VincenzoLomonaco 2 years ago
Thanks! We hope you'll like it!
@smolboii1183 2 years ago
amogus
@TheShadyStudios 2 years ago
thanks so much for uploading this course; about to go through it!
@VincenzoLomonaco 2 years ago
Our pleasure, Truman! I hope you like it!
@angelomenezes12 3 years ago
Speaker was muted from 0:50 till 4:36 =(
@ContinualAI 3 years ago
Yeah, sorry about that... at least we fixed it quickly :)
@angelomenezes12 3 years ago
@@ContinualAI Yep, the talk was great anyways!
@growwitharosh5052 3 years ago
Hi, thank you very much for this video. Can we apply incremental learning (Creme by Keras) to facial emotion recognition?
@VincenzoLomonaco 2 years ago
You can open a feature-request here: github.com/ContinualAI/avalanche/discussions/categories/feature-request
@EctoMorpheus 3 years ago
Thanks for these interesting videos! Just one piece of feedback: the intro is REALLY loud, would be awesome if you can turn it down a bit :)
@VincenzoLomonaco 3 years ago
Thanks for the feedback! We will tune it down in the next videos :)
@feynmanwang3655 3 years ago
11:32 Introduction. Vincenzo Lomonaco ([email protected]), University of Pisa.
23:23 Continual Learning: The Challenge. Razvan Pascanu ([email protected]), Research Scientist @ DeepMind.
55:18 Continual Learning for Affective Robotics. Hatice Gunes, Reader in Affective Intelligence & Robotics, University of Cambridge, Department of Computer Science & Technology.
1:29:06 Exemplar-Free Class-Incremental Learning. Joost van de Weijer, Learning and Machine Perception group, Computer Vision Center, Universitat Autonoma de Barcelona.
2:09:29 Break.
2:47:02 Continual Learning: Repetition, Reconstruction, and Forgetting. Jim Rehg.
3:22:23 Continual Learning: A Story Line and a Wider Scope. Rahaf Aljundi, Toyota Motor Europe.
3:57:06 A Tale of Two CILs: The Connection between Class Incremental Learning and Class Imbalanced Learning, and Beyond. Chen He, Key Lab of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences.
@parvatibhurani1530 3 years ago
How can I attend these meetups?
@VincenzoLomonaco 3 years ago
You can join continualai on slack: www.continualai.org/
@alfos 3 years ago
started at 13:34
@leixun 3 years ago
Thanks!
@aitarun 3 years ago
What do you call an "experience" in the library? Can you please elaborate?
@VincenzoLomonaco 3 years ago
We call "experience" a set of examples, labels and meta-data that are given to the continual learning algorithm in subsequent steps in time. These are often referred to in the CL literature as "Tasks" or "Batches". We decided to use this general term to avoid confusion.
@aitarun 3 years ago
@@VincenzoLomonaco If I set n_experiences=5 for MNIST, does that mean it will train on 2 classes at a time (incrementally) for 5 rounds (2×5=10)? Am I correct?
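For intuition, here is a minimal, library-independent sketch of the class-incremental split the question describes. The function name `split_into_experiences` and the ordered 2-class grouping are illustrative assumptions, not Avalanche's actual API; in a typical class-incremental benchmark, each experience introduces a fresh subset of classes and is seen once in sequence, rather than cycling over all classes repeatedly:

```python
def split_into_experiences(classes, n_experiences):
    """Partition an ordered list of class labels into equally sized experiences."""
    assert len(classes) % n_experiences == 0, "classes must divide evenly"
    per_exp = len(classes) // n_experiences
    # Each experience gets the next contiguous chunk of class labels.
    return [classes[i * per_exp:(i + 1) * per_exp] for i in range(n_experiences)]

mnist_classes = list(range(10))  # digits 0-9
experiences = split_into_experiences(mnist_classes, n_experiences=5)
print(experiences)  # [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
```

So with n_experiences=5 on MNIST's 10 classes, the learner would encounter 5 experiences of 2 new classes each, one after the other, covering all 10 classes exactly once.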
@sundeepjoshi9773 3 years ago
Great talk, Chris, and thanks to the ContinualAI community for open-sourcing this ❤️ Chris makes very important points. 👍
@ContinualAI 3 years ago
Thank you Sundeep for your kindness and support!
@Fatima-kj9ws 3 years ago
great, thank you :D
@VincenzoLomonaco 3 years ago
Thank you! :)
@dupatisrikar8873 3 years ago
Can I get the email address of the speaker?
@ContinualAI 3 years ago
minute 0:17 :)
@morty8139 3 years ago
Have you tried running the autoencoder on classes that haven't yet been seen by the model? If most images of "future" classes cannot be well reconstructed, it would definitely rule out the hypothesis that the classes are similarly distributed. At least locally.
@ContinualAI 3 years ago
This is a very interesting question. We'll ping the author about it and see if she can provide an answer here!
@saisubramaniamgopalakrishn1226 3 years ago
Samples belonging to the same dataset and sharing a 'similar' domain (e.g. classes within CIFAR-10) may still be approximated well by an autoencoder trained on the initial few classes of the same dataset. However, the same may not hold across different datasets (different domains).
@saisubramaniamgopalakrishn1226 3 years ago
However, the method in the discussion has a strong assumption that 'reconstructed' latent space has classification properties, which would only be applicable for simpler cases with no/limited background.
@morty8139 3 years ago
@@saisubramaniamgopalakrishn1226 Like you said, the classes at hand seem to be sharing a similar domain, so it doesn't come as a surprise that the autoencoder doesn't suffer much from catastrophic forgetting. This was also pointed out by Vincenzo Lomonaco at 31:08 and 33:44. The author hypothesizes that the autoencoder learns to map local patches (32:37). This is perhaps good enough for reconstruction, but the model may overlook some global properties required for classification. Anyway, my gut tells me that the autoencoder will do a good job of reconstructing unseen classes. But it's worth checking nonetheless.
@gregor3264 3 years ago
you misspelled his name...
@ContinualAI 3 years ago
Thanks for the heads-up, we fixed the misspelling! :-)