AugMax Explained!

5,243 views

Connor Shorten

1 day ago

Comments: 18
@iskrabesamrtna · 3 years ago
I was wondering what kind of augmentation you would recommend for videos of a person mapped with keypoint estimation, where the person exhibits various gestures, in multiclass classification with a huge number of classes (if augmentation is even used on such a task). Any hint would help my further research. Thanks!
@connor-shorten · 3 years ago
All of the standard image-classification augmentations should work for this, since they preserve the labels of the gestures. Please see RandAugment for a full list of these augmentations; if any of them might flip the label in your application, you would want to remove it from the pool that RandAugment samples its N ops from. Then again, you may not want to do this: the added noise during training may help, and removing the augs can be a bit of a pain compared to using the imgaug implementation off the shelf. Thank you for the question! Please let me know if this was unclear or not useful; happy to help with further questions!
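For the thread above: a minimal sketch of the label-safe, RandAugment-style sampling being described, applied to toy keypoint coordinates rather than real gesture frames. All op names here are illustrative stand-ins, not the paper's ops; the point is just that any op that could flip a label (e.g. a horizontal flip for left/right gestures) is removed from the pool before sampling.

```python
import random

def make_randaugment(ops, n=2):
    """Return a policy that applies n ops sampled uniformly (without
    replacement) from the given op pool -- a RandAugment-style sampler."""
    def policy(x):
        for op in random.sample(ops, k=n):
            x = op(x)
        return x
    return policy

# Toy "augmentations" acting on a list of (x, y) keypoints (illustrative only).
jitter = lambda kps: [(x + random.uniform(-1, 1), y + random.uniform(-1, 1)) for x, y in kps]
scale  = lambda kps: [(1.1 * x, 1.1 * y) for x, y in kps]
shift  = lambda kps: [(x + 2.0, y + 2.0) for x, y in kps]
hflip  = lambda kps: [(-x, y) for x, y in kps]  # could flip left/right gesture labels

pool = [jitter, scale, shift, hflip]
pool.remove(hflip)  # drop the label-unsafe op for this task

augment = make_randaugment(pool, n=2)
keypoints = [(10.0, 20.0), (30.0, 40.0)]
print(augment(keypoints))
```

In practice the pool would hold real image or keypoint transforms (e.g. from imgaug or torchvision), but the filter-then-sample structure is the same.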
@mrdbourke · 3 years ago
Fantastic video!
@connor-shorten · 3 years ago
Thanks, I really appreciate it! Happy to help if you have any issues with Data Augmentation or Robustness Testing!
@sayakpaul3152 · 3 years ago
I think it was JS divergence for the consistency loss term. It's very similar to KL divergence, but I wanted to point out the detail.
@connor-shorten · 3 years ago
Got you -- JS adds symmetry to KL, just looked it up haha
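For reference, the symmetry mentioned here comes from how JS divergence is built out of KL: each distribution is compared against their mixture, so swapping the arguments changes nothing. A small numpy sketch (not from the paper's code):

```python
import numpy as np

def kl(p, q):
    # KL(p || q) for discrete distributions; skips entries where p = 0.
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js(p, q):
    # JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m = (p + q) / 2.
    m = (np.asarray(p, float) + np.asarray(q, float)) / 2
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.7, 0.2, 0.1]
q = [0.1, 0.2, 0.7]
print(js(p, q), js(q, p))  # identical values: JS is symmetric
```

Plain KL would give different numbers for `kl(p, q)` and `kl(q, p)`; routing both through the mixture `m` is what restores the symmetry.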
@SakvaUA · 3 years ago
So what's preventing the augmentation weights from collapsing to 0 and just giving the original image on the output?
@sayakpaul3152 · 3 years ago
That is probably encouraged by the friendly adversarial training term in the loss. I am not 100% sure, though.
@connor-shorten · 3 years ago
I have been experimenting with this -- the consistency loss is generally held up by the one-hot class label loss. Collapse happens all the time if you try to push the consistency objective deeper into the representation or increase its weight in the loss.
@connor-shorten · 3 years ago
I agree; entropy regularization also appears in MEMO: Test Time Robustness via Adaptation and Augmentation, by Marvin Zhang, Sergey Levine, and Chelsea Finn.
@dawwdd · 3 years ago
Good job as always
@connor-shorten · 3 years ago
Thank you so much, I really appreciate it!
@domenickmifsud · 3 years ago
Thanks! Great content
@connor-shorten · 3 years ago
Thanks for watching!
@rorrochanel · 3 years ago
Wow! Amazing!
@connor-shorten · 3 years ago
Very exciting stuff, congrats to the authors! Thanks for watching!
@MrMIB983 · 3 years ago
Love long videos
@connor-shorten · 3 years ago
Really glad to hear it, these are fun to make!