EfficientNet Explained!

56,604 views

Connor Shorten

1 day ago

Comments: 37
@mtmotoki2 4 years ago
I have a hard time reading papers because my English isn't very good, but you've been very helpful in explaining it in your videos. Thank you.
@daesoolee1083 4 years ago
Interesting. Building a CNN model always depended on my intuition from existing CNN models. I never questioned the significance of each scale-up method. The analysis by disentangling is very helpful to the community. Excellent.
@roymarley5178 3 years ago
I know it is quite off topic, but does anyone know of a good place to stream new movies online?
@easonlyon2217 3 years ago
Looking forward to EfficientNet-V2 paper!
@anonyme103 4 years ago
Clean, simple, and great explanation! Thanks
@faridalijani1578 3 years ago
224×224 image resolution (r=1.0) → 560×560 image resolution (r=2.5)
@MaryamSadeghi-AI 2 years ago
Great explanations thank you!
@maoztamir1980 2 years ago
great explanation! Thanks
@omkardhekane4404 3 years ago
Well explained. Thank you!
@guruprasadsomasundaram9273 4 years ago
What a lovely summary thanks!
@selfdrivingcars3605 3 years ago
Great intuitive explanation! Thank you!!
@BlakeEdwards333 5 years ago
Thanks Henry!
@connor-shorten 5 years ago
Thank you!!
@divyamgoel8038 4 years ago
Hi! I think you got the resolution scaling wrong. They don't change the input dimensions (from say 224 to 360) but rather increase the number of convolution filters in every convolution, effectively increasing the number of feature maps of the low-level representation of the input at any given point in the model.
@schneeekind 4 years ago
At 1:07, is the image right? Should b) and d) have their figures swapped? How does higher resolution result in deeper blocks?
@svm_user 4 years ago
Thanks, great explanation.
@konataizumi5829 3 years ago
I feel like the grid search to find alpha, beta and gamma was not elaborated on enough in the paper. Does anyone understand this more deeply, or how one could reproduce it?
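The paper only says that, with φ fixed to 1, a small grid search over α, β, γ was done under the constraint α · β² · γ² ≈ 2. A minimal sketch of what reproducing that could look like — the `evaluate` callback, the step size, the search range, and the tolerance are all my assumptions, not the paper's:

```python
import itertools

def grid_search(evaluate, step=0.05, tol=0.05):
    """Search alpha, beta, gamma subject to alpha * beta^2 * gamma^2 ~= 2.

    evaluate(alpha, beta, gamma) is assumed to train/evaluate a small
    baseline model and return its validation accuracy (or any score).
    """
    best, best_score = None, float("-inf")
    grid = [1.0 + step * i for i in range(21)]  # candidates in [1.0, 2.0]
    for a, b, g in itertools.product(grid, repeat=3):
        # enforce the FLOPS-doubling constraint from the paper
        if abs(a * b * b * g * g - 2.0) > tol:
            continue
        score = evaluate(a, b, g)
        if score > best_score:
            best, best_score = (a, b, g), score
    return best
```

In the paper the triple found this way is then reused for all larger models by scaling φ, rather than re-searching at every size.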
@AbdullahKhan-if8fn 5 years ago
Very well explained. Thanks!
@connor-shorten 5 years ago
Thank you!
@Ftur-57-fetr 3 years ago
Superb explanation!!!!
@IvanGoncharovAI 5 years ago
Great explanation!
@connor-shorten 5 years ago
Thank you!!
@gvlokeshkumar 4 years ago
Thank you so much!!!
@shreymishra646 1 year ago
Maybe I'm being a bit slow here, but there is a mistake at 4:43, and I checked the paper too: w should equal -0.07 instead of 0.07. If the exponent were 0.07, then for a model whose FLOPS are half the target, the objective becomes ACC * (0.5)^0.07 ≈ ACC * 0.95, which is less than ACC and therefore penalizes the model (since we need to maximize this, right?). That's wrong; the objective should favor such a model. With w = -0.07, it becomes ACC * (0.5)^-0.07 ≈ ACC * 1.05 instead. I was a bit confused at the beginning of the video since I hadn't read the paper first, but now I'm quite certain of it. I'm surprised no one else noticed!
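The sign question is easy to check numerically. A small sketch of the MnasNet-style objective that EfficientNet reuses — the accuracy 0.80 and the FLOPS values below are made-up numbers for illustration:

```python
def objective(acc, flops, target_flops, w=-0.07):
    # maximize ACC(m) * (FLOPS(m) / target)^w, with w = -0.07 as in the paper
    return acc * (flops / target_flops) ** w

# With the negative exponent, a model at half the target FLOPS is
# rewarded slightly, and one at double the target is penalized slightly.
cheap = objective(0.80, flops=0.5, target_flops=1.0)   # ~0.80 * 1.05
costly = objective(0.80, flops=2.0, target_flops=1.0)  # ~0.80 * 0.95
```

With w = +0.07 the two cases would flip, which is exactly the inconsistency the comment above points out.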
@anticlementous 4 years ago
Really great video! One thing I don't understand though, is how the scaling works exactly. Are the network dimensions scaled while training and while keeping the weights from the smaller scale or is the entire network retrained from scratch on each scaling? Also, if I do transfer learning with a model pre-trained on efficientnet I could get the benefits of reducing the network size but wouldn't have to run through the same scaling process?
@masoudparpanchi505 4 years ago
A question: by this equation you said Alpha * (Beta^2) * (Gamma^2) ≈ 2. When I increase Alpha, should I decrease the other two variables?
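Roughly yes, if the total FLOPS growth is to stay at about 2× per unit of φ: fixing alpha determines the budget left over for the other two factors. A quick numeric illustration (the specific alpha values are arbitrary):

```python
# Under the constraint alpha * beta^2 * gamma^2 ~= 2, a larger alpha
# (more depth scaling) leaves less room for width and resolution scaling.
def width_resolution_budget(alpha):
    # the value that beta^2 * gamma^2 must take for the product to equal 2
    return 2.0 / alpha

# e.g. alpha = 1.2 leaves a budget of ~1.67 for beta^2 * gamma^2,
# while alpha = 1.4 leaves only ~1.43.
```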
@anefuoche1053 2 years ago
thank you
@xxRAP13Rxx 3 years ago
At 2:37, shouldn't 2^n more computational resources imply a Beta^(2n) and a Gamma^(2n) increase, given the constraint Alpha * Beta^2 * Gamma^2 = 2?
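As I read the paper, the squares live in the FLOPS count rather than in the per-dimension scaling: depth, width, and resolution scale as alpha^φ, beta^φ, gamma^φ, and FLOPS then scale as (alpha * beta^2 * gamma^2)^φ ≈ 2^φ, because convolution FLOPS grow linearly in depth but quadratically in width and resolution. A quick check with the paper's reported coefficients:

```python
# EfficientNet's reported coefficients: alpha * beta^2 * gamma^2 ~= 1.92,
# which is approximately the target value of 2.
alpha, beta, gamma = 1.2, 1.1, 1.15

def flops_multiplier(phi):
    # dimensions scale by alpha^phi, beta^phi, gamma^phi, while FLOPS
    # scale by (alpha * beta^2 * gamma^2)^phi ~= 2^phi
    return (alpha * beta**2 * gamma**2) ** phi
```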
@xxRAP13Rxx 3 years ago
Also, I'm looking over the actual paper. The chart at 5:32 is a bit different from what I'm seeing. Everything's about the same but every BOLDED Top1 Acc. entry (recorded from their own architecture) has been boosted up a few percentage points to outshine their rival counterparts. I wonder if they updated the paper since you posted this video, or maybe they figure it best to fudge the numbers since this chart is located on the front page of the paper.
@ibropwns 5 years ago
Thanks a lot!
@connor-shorten 5 years ago
Thank you!
@masoudparpanchi505 4 years ago
good explanation
@tashin8312 4 years ago
I have a question. For my custom dataset I used EfficientNet B0–B5, and the results got worse each time I used a more complex model; B0 gave the best outcome while B5 gave the worst. Image sizes were 2000x1500. What could be the reason for that?
@tushartiwari7929 2 years ago
Did you find the reason for that?
@l.perceval9460 2 months ago
It depends on your data scale!
@rahuldeora5815 4 years ago
MobileNetV2 and EfficientNet video
@miremax0 4 years ago
Thank you very much!