Comments
@bharathpasala-y6w
@bharathpasala-y6w 1 day ago
Excellent Course...Thank You very much...
@Alice-kc3jx
@Alice-kc3jx 4 days ago
Actual content starts at 57:13
@trololollolololololl
@trololollolololololl 1 month ago
very nice
@syedmuhammadraza9534
@syedmuhammadraza9534 1 month ago
Thanks for this amazing lecture on visual recognition models. Really appreciated!
@SH-iy2gw
@SH-iy2gw 1 month ago
Thanks so much for making this available!
@SalmanKhan-er2ic
@SalmanKhan-er2ic 5 months ago
Thanks for the nice presentation, but it could have been better. Maybe you could make a series of short videos explaining these topics with examples.
@savinduwedikkaraarachchi
@savinduwedikkaraarachchi 7 months ago
Thank you so much ❤❤
@savinduwedikkaraarachchi
@savinduwedikkaraarachchi 7 months ago
Thank you so much for providing this amazing content for free! Your efforts are truly appreciated! ❤❤
@stracci_5698
@stracci_5698 7 months ago
Your lecture is amazing, I love how you give so many practical examples!
@ayseguluygunoksuz6523
@ayseguluygunoksuz6523 8 months ago
Many thanks for this useful talk
@victoriiromero3015
@victoriiromero3015 10 months ago
Hi, thanks for this very good tutorial series. May I know if there is an issue in the Colab notebook used in this episode? It reports an error on as_classification_dataset...
@gyeonghokim
@gyeonghokim 1 year ago
Such a great course for continual learning
@ashadams8143
@ashadams8143 1 year ago
Thank you so so much for sharing this information for free. 🎉
@dibyajyotiacharya8916
@dibyajyotiacharya8916 1 year ago
Thank you so much for this❤❤
@bth2012
@bth2012 1 year ago
17:43 24:01 53:18
@shiv093
@shiv093 1 year ago
0:18 Continual Learning of Object Instances
1:17 Generative Feature Replay for Class-incremental Learning
2:17 A Simple Class Decision Balancing for Incremental Learning
3:10 Adaptive Group Sparse Regularization for Continual Learning
4:10 CatNet: Class Incremental 3D ConvNets for Lifelong Egocentric Gesture Recognition
5:17 Few-shot Image Recognition for UAV Sports Cinematography
6:09 Generalized Class Incremental Learning (my favorite)
7:10 Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches
8:10 Continual Reinforcement Learning in 3D Non-stationary Environments
9:10 Reducing Catastrophic Forgetting with Learning on Synthetic Data
10:13 Continual Learning for Anomaly Detection in Surveillance Videos
@youssefmaghrebi6963
@youssefmaghrebi6963 1 year ago
This has really been a very good, well-rounded course up to this episode. You saved me a lot of time and gave me the information I needed about what's going on in CL research, instead of my having to spend too much time discovering the field on my own. Thanks a lot.
@VincenzoLomonaco
@VincenzoLomonaco 1 year ago
Thanks! I'm glad you liked it! :)
@ruilin5498
@ruilin5498 1 year ago
good
@aniruddhakishorkawadecs18b81
@aniruddhakishorkawadecs18b81 2 years ago
One strategy to rule them all 😂
@Charles-my2pb
@Charles-my2pb 2 years ago
Can you share these slides? Thank you so much!
@zeyuli1549
@zeyuli1549 1 year ago
+1
@salehyousefi1099
@salehyousefi1099 2 years ago
Really great work!
@roseclemons6522
@roseclemons6522 2 years ago
I Love this video
@niamhbartlett2465
@niamhbartlett2465 2 years ago
Faultlessness🎇
@SuperAstrax111
@SuperAstrax111 2 years ago
I like your explanations and examples, but too much time is spent on small ideas.
@VincenzoLomonaco
@VincenzoLomonaco 2 years ago
Thanks for the feedback! It is always difficult to balance speed / complexity for a large audience with different backgrounds!
@aliasgher5272
@aliasgher5272 1 year ago
@@VincenzoLomonaco you did a great job.
@payamfiroozfar1615
@payamfiroozfar1615 1 year ago
I think that was actually a positive aspect of the course. I like that the professor goes over the small ideas briefly, because it lets us understand everything completely.
@gusseppebravo8334
@gusseppebravo8334 2 years ago
Finally, the last lecture :D Eager to see more of this course! Thanks!
@krishrocks11
@krishrocks11 2 years ago
This is great. I really wish the documentation had more working Colab examples/tutorials so we could play around with the whole thing more.
@markhampton3614
@markhampton3614 2 years ago
Thank you very much for making this course public!
@simonsuh1733
@simonsuh1733 2 years ago
dope tech!
@robinyadav6950
@robinyadav6950 2 years ago
Hi, great lecture series! I was wondering whether reinforcement learning can be considered a form of continual learning?
@vidushimaheshwari973
@vidushimaheshwari973 2 years ago
I think reinforcement learning could be one of the "tasks" or events of a continual learning process. Training everything on one reinforcement learning model will most likely lead to the same problem as training everything on a single neural network -- catastrophic forgetting.
@chenghanhuang9283
@chenghanhuang9283 2 years ago
awesome work!
@JuliusUnscripted
@JuliusUnscripted 2 years ago
What we have learned today: Daenerys Targaryen is the mother of all strategies ;) haha
@VincenzoLomonaco
@VincenzoLomonaco 2 years ago
ahah exactly! 🤣
@JuliusUnscripted
@JuliusUnscripted 2 years ago
Great lecture! I like the compass idea too :) Especially for being able to find similar papers later that possibly work on the same challenges and problem types.
@VincenzoLomonaco
@VincenzoLomonaco 2 years ago
I'm glad to hear that! 😊
@zaidzubair3829
@zaidzubair3829 2 years ago
Perfect explanation
@VincenzoLomonaco
@VincenzoLomonaco 2 years ago
Thanks Zaid, I'm glad you liked it!
@冯至-n4y
@冯至-n4y 2 years ago
Great! It helps me get to know this research direction, which is new to me and has puzzled me for a while.
@VincenzoLomonaco
@VincenzoLomonaco 2 years ago
Thanks! We hope you'll like it!
@smolboii1183
@smolboii1183 2 years ago
amogus
@TheShadyStudios
@TheShadyStudios 2 years ago
thanks so much for uploading this course; about to go through it!
@VincenzoLomonaco
@VincenzoLomonaco 2 years ago
Our pleasure, Truman! I hope you like it!
@angelomenezes12
@angelomenezes12 3 years ago
The speaker was muted from 0:50 until 4:36 =(
@ContinualAI
@ContinualAI 3 years ago
Yeah, sorry about that... at least we fixed it quickly :)
@angelomenezes12
@angelomenezes12 3 years ago
@@ContinualAI Yep, the talk was great anyways!
@growwitharosh5052
@growwitharosh5052 3 years ago
Hi, thank you very much for this video. Can we apply incremental learning (Creme by Keras) to facial emotion recognition?
@VincenzoLomonaco
@VincenzoLomonaco 2 years ago
You can open a feature-request here: github.com/ContinualAI/avalanche/discussions/categories/feature-request
@EctoMorpheus
@EctoMorpheus 3 years ago
Thanks for these interesting videos! Just one piece of feedback: the intro is REALLY loud, would be awesome if you can turn it down a bit :)
@VincenzoLomonaco
@VincenzoLomonaco 3 years ago
Thanks for the feedback! We will tune it down in the next videos :)
@feynmanwang3655
@feynmanwang3655 3 years ago
11:32 Introduction. Vincenzo Lomonaco ([email protected]), University of Pisa.
23:23 Continual Learning: The Challenge. Razvan Pascanu ([email protected]), Research Scientist @ DeepMind.
55:18 Continual Learning for Affective Robotics. Hatice Gunes, Reader in Affective Intelligence & Robotics, University of Cambridge, Department of Computer Science & Technology.
1:29:06 Exemplar-Free Class-Incremental Learning. Joost van de Weijer, Learning and Machine Perception group, Computer Vision Center, Universitat Autonoma de Barcelona.
2:09:29 Break.
2:47:02 Continual Learning: Repetition, Reconstruction, and Forgetting. Jim Rehg.
3:22:23 Continual Learning: A Story Line and a Wider Scope. Rahaf Aljundi, Toyota Motor Europe.
3:57:06 A Tale of Two CILs: The Connection between Class Incremental Learning and Class Imbalanced Learning, and Beyond. Chen He, Key Lab of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences.
@parvatibhurani1530
@parvatibhurani1530 3 years ago
How can I attend these meetups?
@VincenzoLomonaco
@VincenzoLomonaco 3 years ago
You can join ContinualAI on Slack: www.continualai.org/
@alfos
@alfos 3 years ago
started at 13:34
@leixun
@leixun 3 years ago
Thanks!
@aitarun
@aitarun 3 years ago
What do you call an "experience" in the library? Can you please elaborate?
@VincenzoLomonaco
@VincenzoLomonaco 3 years ago
We call "experience" a set of examples, labels and meta-data that are given to the continual learning algorithm in subsequent steps in time. These are often referred to in the CL literature as "Tasks" or "Batches". We decided to use this general term to avoid confusion.
@aitarun
@aitarun 3 years ago
@@VincenzoLomonaco If I take n_experience=5 for MNIST, does it mean it will train on 2 classes at a time (incrementally) for 5 rounds (2*5=10)? Am I correct?
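For reference, a minimal sketch of what this looks like in practice, assuming the Avalanche SplitMNIST benchmark and its stream API rather than the exact notebook from the video: with n_experiences=5, the 10 MNIST digit classes are split into 5 experiences of 2 classes each, and the stream presents each experience once, in sequence.

from avalanche.benchmarks.classic import SplitMNIST

# Split the 10 MNIST classes into 5 class-incremental experiences (2 classes each).
benchmark = SplitMNIST(n_experiences=5)

for experience in benchmark.train_stream:
    # Each experience bundles the examples, labels and metadata for one step in time.
    print(
        f"Experience {experience.current_experience}: "
        f"classes {experience.classes_in_this_experience}, "
        f"{len(experience.dataset)} training examples"
    )
    # A continual learning strategy would be trained here, one experience at a time,
    # e.g. strategy.train(experience)

So each class appears in exactly one experience; earlier classes are not revisited unless the chosen strategy replays them.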
@sundeepjoshi9773
@sundeepjoshi9773 3 years ago
Great talk, Chris, and thanks to the ContinualAI community for open-sourcing this ❤️ Chris makes very important points. 👍
@ContinualAI
@ContinualAI 3 years ago
Thank you Sundeep for your kindness and support!
@Fatima-kj9ws
@Fatima-kj9ws 3 years ago
great, thank you :D
@VincenzoLomonaco
@VincenzoLomonaco 3 years ago
Thank you! :)
@dupatisrikar8873
@dupatisrikar8873 3 years ago
Can I get the email address of the speaker?
@ContinualAI
@ContinualAI 3 years ago
minute 0:17 :)
@morty8139
@morty8139 3 years ago
Have you tried running the autoencoder on classes that haven't yet been seen by the model? If most images of "future" classes cannot be well reconstructed, it would definitely rule out the hypothesis that the classes are similarly distributed. At least locally.
@ContinualAI
@ContinualAI 3 years ago
This is a very interesting question. We'll ping the author about it and see if she can provide an answer here!
@saisubramaniamgopalakrishn1226
@saisubramaniamgopalakrishn1226 3 years ago
Samples belonging to the same dataset and sharing a 'similar' domain (e.g. classes within CIFAR-10) may still be approximated well by an autoencoder trained on the initial few classes of that dataset. However, the same may not hold across different datasets (different domains).
@saisubramaniamgopalakrishn1226
@saisubramaniamgopalakrishn1226 3 years ago
However, the method under discussion makes a strong assumption that the 'reconstructed' latent space has classification properties, which would only be applicable to simpler cases with no or limited background.
@morty8139
@morty8139 3 years ago
@@saisubramaniamgopalakrishn1226 Like you said, the classes at hand seem to be sharing a similar domain, so it doesn't come as a surprise that the autoencoder doesn't suffer much from catastrophic forgetting. This was also pointed out by Vincenzo Lomonaco at 31:08 and 33:44. The author hypothesizes that the autoencoder learns to map local patches (32:37). This is perhaps good enough for reconstruction, but the model may overlook some global properties required for classification. Anyway, my gut tells me that the autoencoder will do a good job of reconstructing unseen classes. But it's worth checking nonetheless.
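The check proposed in this thread is cheap to run. A minimal sketch, assuming a trained PyTorch autoencoder and two hypothetical data loaders (seen_loader over the classes used for training, unseen_loader over held-out "future" classes); this only illustrates the idea discussed above, not the author's actual setup.

import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_reconstruction_error(autoencoder, loader, device="cpu"):
    # Average per-pixel MSE between inputs and their reconstructions.
    autoencoder.eval()
    total, count = 0.0, 0
    for x, _ in loader:
        x = x.to(device)
        x_hat = autoencoder(x)
        total += F.mse_loss(x_hat, x, reduction="sum").item()
        count += x.numel()
    return total / count

# err_seen = mean_reconstruction_error(autoencoder, seen_loader)
# err_unseen = mean_reconstruction_error(autoencoder, unseen_loader)
# A small gap between the two errors would support the "similarly distributed
# classes" hypothesis; a large gap on unseen classes would argue against it.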
@gregor3264
@gregor3264 3 years ago
you misspelled his name...
@ContinualAI
@ContinualAI 3 years ago
Thanks for the heads-up we fixed the misspelling! :-)