2014 Fully Convolutional Network (FCN) Paper summary

  15,252 views

Hao Tsui

1 day ago

Comments: 22
@danielkusuma6473 · 3 years ago
Thanks for the video! I had difficulties while reading the paper, but you break it down really nicely!
@armagaan007 · 1 year ago
Yeah, I was really disappointed. The authors should have used more figures.
@ILovePianoForever · 1 year ago
Very informative and easier to understand than the original paper. Thank you!
@SofieSimp · 2 years ago
Please continue doing this type of video.
@yashanand6311 · 11 months ago
Thanks for this video. You explained it so well that the paper is easy to understand!!!
@phongtrangia9031 · 1 year ago
Thanks for your summary. This topic was very unfamiliar to me, but you helped me understand and visualize it.
@tahacharehjoo9473 · 1 year ago
Thank you very much! It helped me a lot to understand the paper!
@vladyslavkutsuruk6432 · 3 years ago
You are doing an amazing job with these paper explanations, thank you =) Btw, it would be great to see your summary of Mask R-CNN.
@hnull99 · 1 month ago
Very nice video and explanations, thank you
@souadyahiaberrouiguet1285 · 3 months ago
Thank you for your explanation
@Maciek17PL · 2 years ago
Your explanation of convolutionalization is totally wrong, because a convolution scales the image down by a constant factor, so when using a 1x1 convolution the final volume will also be of arbitrary size, not always 1x1xD
@usmaniyaz1059 · 3 years ago
Hi! Can you make a video on the 'Meta Pseudo Labels' paper? Your paper summary videos are awesome
@akshayv2849 · 1 year ago
Hello sir, that was an amazing explanation. I'm currently doing a mechanical engineering bachelor's degree and would love to work on autonomous vehicles, on the software side of things. Do you have any career guidance?
@sarynasser993 · 4 months ago
Thank you, great explanation
@CrypticPulsar · 7 months ago
Thank you for this video!
@tm-jw2sq · 3 years ago
So well explained, thank you!!
@samarthshah8498 · 3 years ago
Hey, great video and amazingly explained. One doubt: at 15:10, you say that we are no longer restricted to flattening, and that if we put in a higher-resolution image it will automatically fit these values at the cost of some loss of information. My question is: how do you restrict it? Pooling cannot restrict it, so what layer actually makes it stick to that dimension? Also, in some places I saw that the output is not always 1x1xclasses; I saw 7x7xclasses. What is the logic behind that? I am still new to and highly interested in this topic, so these doubts might not be good ones, but please clear them up if you can.
@gianluca3131 · 3 years ago
So, take what I'm saying with a grain of salt because I'm not a professor, but from what I understand of the paper, you don't actually get layers of the same size if you use larger images. Let's say you train the net with 64x64x3 (RGB) images. If you run larger images at test time, then where you would have gotten a 1x1x21 layer (we want to classify 21 classes in this problem), you now get an MxMx21 layer, where M is a size that depends on the combination of layers you used.

The point, as I understand it, is that if you do NOT use fully connected layers, replacing them with convolutional layers, having layers of different sizes (M in my case) is not a problem, because you just apply the filters you have across all the pixels. What must remain the same is the depth, 21 in our example, but that is determined by how many filters we use in the last layer: if we use 21 filters there, then whether the image is 64x64 or 500x500, the depth will always be 21. So if we use a fully convolutional network we don't need an operation like flattening, because we don't need to force all the values into a fixed shape (a 1D array, for example).

As for having 1x1xclasses or 7x7xclasses as output: the point, I think, is that if you have a spatial output, e.g. MxMxclasses instead of 1x1xclasses, you can generate a heatmap that tells you in which zone of the image a certain class is present. If you output a 1D array (1x1xclasses), the network can only tell you what class the image belongs to; if it's a volume (e.g. 7x7xclasses), it can tell you which zone has which class.

I hope I didn't say anything wrong; if someone more experienced would like to confirm or deny what I said, I would appreciate it.
@kdubovetskyi · 3 years ago
@@gianluca3131 During the whole video I thought the model used some global pooling to get the bottleneck, because the author said we get a 1x1xD-shaped bottleneck for *every* input size. Your interpretation sounds more reasonable; I'm glad I found your comment
@gianluca3131 · 3 years ago
@@kdubovetskyi Glad you found it useful; I hope I haven't said anything wrong 😄
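[Editor's note] The shape behaviour debated in this thread can be checked with a small NumPy sketch (an illustration, not code from the video): a 1x1 convolution is just a per-pixel linear map over channels, so the spatial size of the output follows the input, while the depth is fixed by the number of filters (21 classes here).

```python
import numpy as np

def conv1x1(feature_map, weights):
    """Apply a 1x1 convolution: an independent linear map at each pixel.
    feature_map: (H, W, C_in); weights: (C_in, C_out)."""
    # matmul broadcasts over the leading spatial dims: (H, W, C_in) @ (C_in, C_out)
    return feature_map @ weights

rng = np.random.default_rng(0)
w = rng.standard_normal((512, 21))  # 21 filters -> depth 21, regardless of input size

small = rng.standard_normal((1, 1, 512))  # feature map from a training-sized input
large = rng.standard_normal((7, 7, 512))  # feature map from a larger test input

out_small = conv1x1(small, w)
out_large = conv1x1(large, w)

print(out_small.shape)  # (1, 1, 21)
print(out_large.shape)  # (7, 7, 21) -- spatial size follows the input, depth stays 21
```

This matches gianluca3131's reading: nothing forces the output to 1x1xD; only the channel depth is pinned by the filter count, and a larger input simply yields a larger class-score map.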
@chaimaaderda4655 · 2 years ago
Amazing explanation, thank you
@heikeneubau7064 · 2 years ago
Thank you very much!!!