Comments
@jameshopkins3541 11 hours ago
Which is correct?????
@honourable8816 9 days ago
The stride value was 2 pixels.
@architech5940 9 days ago
You did not introduce convolution in any informative way, nor define any terms for your argument, and you didn't explain the purpose of 3D convolution or why 2D convolution is inaccurate in the first place. There is also no closing argument for what appears to be your proposition for the proper illustration of a CNN. This whole video is completely open-ended and thus ambiguous.
@AdmMusicc 17 days ago
Loved the animation, thank you!!
@martinhladis1941 17 days ago
Excellent!!
@kage-sl8rz 19 days ago
Cool! Even better, adding names to the objects (kernel, etc.) would be helpful to new people.
@edsparr2798 19 days ago
I adore your content, genuinely can’t wait for more videos of your visualizations. Feels like I’m building real intuition about what I’m doing watching you :)
@trololollolololololl 19 days ago
Keep it up, great videos!
@rubiczhang5593 21 days ago
That's really good work; you have saved me from struggling with AI. Thank you from China!
@zukofire6424 21 days ago
Thanks! Great explanation :)
@____-gy5mq 23 days ago
Best generalization ever, covers all the corner cases.
@bengodw 27 days ago
Hi Animated AI, thanks for your great video. I have a question: 4:45 indicates that the colors of the filters (red, yellow, green, blue) represent the "features". But a filter (e.g. the red one) is itself 3-dimensional (height, width, feature), so it also includes a "feature" axis. Thus "feature" appears twice. Could you explain why we need "feature" twice?
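A shape sketch may help untangle the two "feature" axes the question is about (a hypothetical standard conv layer; the sizes and names below are illustrative, not from the video):

```python
import numpy as np

# A conv layer's weight tensor holds BOTH feature axes:
#   - in_features:  the depth of each individual filter (matches the input)
#   - out_features: the number of filters (one output feature per filter color)
in_features, out_features = 3, 4      # e.g. RGB in, 4 colored filters out
kh, kw = 3, 3                         # spatial kernel size

# One weight tensor of shape (out_features, in_features, kh, kw)
weights = np.zeros((out_features, in_features, kh, kw))

# The "red" filter is one slice along the first axis; it still spans
# all input features, which is the second "feature" in the question.
red_filter = weights[0]
print(red_filter.shape)   # (3, 3, 3) -> (in_features, kh, kw)
```

So "feature" appears twice because each filter consumes the input's feature axis, while the stack of filters creates the output's feature axis.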
@danielprovder 28 days ago
udiprod for ai
@yousrakateb2383 1 month ago
Please continue making such amazing videos... they really helped me.
@PurnenduPrabhat 1 month ago
Good job
@happyTonakai 1 month ago
This is so great!
@mdnaseif7599 1 month ago
You are a legend, keep it up!
@leonardommarques 1 month ago
You got a new subscriber. You are the 3b1b of AI. Thanks for existing.
@user-el1hd3iz6m 1 month ago
I can tell a lot of work has been put into making the animations for this series, and I felt I should come away having learned something, but somehow I am left more confused after watching the entire series than before I started. Not sure if this is due to the need to visualize something that cannot be represented in 3D space, a knowledge gap created by assumptions made during the explanation, or me simply being too stupid.
@macewindont9922 1 month ago
sick
@Karmush21 1 month ago
Maybe someone can help me understand this. If I have just one 3D volume, would it ever make sense to do a 3D convolution in, say, PyTorch? A 2D convolution will work across all the slices, right? Say I have a volume that's 300x300x100. Should I just move the slice dimension to the channel dimension and apply a 2D convolution? What would a 3D convolution even do here?
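A rough shape sketch of the difference the question asks about (plain output-size arithmetic for a hypothetical 300x300x100 volume; no particular library assumed):

```python
def conv_out(size, k, stride=1):
    # output length of a 'valid' convolution along one axis
    return (size - k) // stride + 1

D, H, W = 100, 300, 300   # the 300x300x100 volume, depth as first axis

# Option A: treat the 100 slices as channels and use a 2D convolution.
# Each 3x3 filter then has shape (100, 3, 3): it mixes ALL slices at
# every spatial position, and nothing slides along the depth axis.
out2d = (conv_out(H, 3), conv_out(W, 3))          # (298, 298)
params_2d_filter = 100 * 3 * 3                    # 900 weights per filter

# Option B: a 3D convolution with a 3x3x3 kernel (1 input channel).
# The kernel also slides along depth, sharing weights across slices,
# so nearby slices are treated the same way wherever they appear.
out3d = (conv_out(D, 3), conv_out(H, 3), conv_out(W, 3))  # (98, 298, 298)
params_3d_filter = 3 * 3 * 3                      # 27 weights per filter

print(out2d, out3d)
```

So the slices-as-channels trick learns one global mix of all 100 slices, while a 3D convolution assumes translation invariance along depth too, with far fewer weights per filter and an output that still has a depth axis.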
@wilfredomartel7781 1 month ago
😊
@noomade 1 month ago
"fewer"
@__-de6he 1 month ago
It would be good to know the rationale behind this way of calculating, besides computational efficiency.
@coryfan5872 1 month ago
Saying that multi-head attention has fewer parameters than a token-wise linear layer is true for NLP models but not for ViT. Additionally, simply creating a mechanism that incorporates the entirety of the features does not explain away the success of attention mechanisms -- looking again at computer vision tasks, MLP-Mixer also incorporates the entirety of the features in its computations, but is still less successful than the attention-based ViTs. Part of the strength of the attention layer is its adaptability -- which you can see the value of in things like GAT. Otherwise, it could just be replaced with a generic low-rank linear layer.
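The NLP-vs-ViT part of this point can be sketched with back-of-the-envelope parameter counts (biases and MLP-Mixer's hidden dims ignored; the sizes are illustrative assumptions, not from the video):

```python
def attn_params(d):
    # Q, K, V and output projections: four d x d matrices
    return 4 * d * d

def token_linear_params(n):
    # one dense n x n mix over token positions, shared across channels
    return n * n

d = 768                                    # BERT/ViT-Base embedding size
print(attn_params(d))                      # 2359296

# Long NLP context (N = 4096 tokens): the token-wise linear is larger.
print(token_linear_params(4096))           # 16777216

# ViT-Base (N = 197 patch tokens): the token-wise linear is far smaller.
print(token_linear_params(197))            # 38809
```

Whether attention "saves parameters" thus hinges on sequence length N relative to embedding size d, which is why the claim flips between NLP models and ViT.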
@jaredtweed7826 1 month ago
I have been waiting for this video! Very much worth the wait!
@BooleanDisorder 1 month ago
Computational efficiency is also due to higher dimensionality, right? You can represent data in a much richer space compared to RNNs of similar parameter size and capture more complex features, thanks to the higher-dimensional space enabled by each attention layer. That said, I might be unfair to RNNs, since they handle long-range dependencies so badly and "physically" can't do the same things even if they wanted to.
@vastabyss6496 1 month ago
4:27 I'm definitely judging the animation of the recurrent layer...
@Azanixu 1 month ago
How the hell do you have so few views? This is one of the best and most factually correct animations out there.
@ShadeAKAhayate 25 days ago
He'll get there. Quality informational content always picks up slowly, but as long as the quality is not declining, its growth is exponential. To a limit, obviously, since this information is specialized, but that limit is high. As more teachers discover these illustrations and pass them on to their students, it will grow.
@RobertMStahl 1 month ago
FWIW, have you seen the recent business presentation given by Randell L Mills who can explain the reality of N electron, 4 having the solution to EVERYTHING?
@ucngominh3354 1 month ago
hi
@I77AGIC 1 month ago
This made it make a ton of sense. But one problem: pixel shuffle does not get rid of the artifacts; it introduces its own.
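A minimal numpy sketch of the pixel-shuffle (depth-to-space) rearrangement may make the artifact point concrete; the channel ordering below follows the common convention, and the sizes are illustrative:

```python
import numpy as np

def pixel_shuffle(x, r):
    # x: (C*r*r, H, W) -> (C, H*r, W*r)
    c, h, w = x.shape
    out_c = c // (r * r)
    x = x.reshape(out_c, r, r, h, w)          # split channels into (i, j) offsets
    x = x.transpose(0, 3, 1, 4, 2)            # (out_c, h, i, w, j)
    return x.reshape(out_c, h * r, w * r)

x = np.arange(4 * 2 * 2).reshape(4, 2, 2)     # 4 channels, 2x2 spatial
y = pixel_shuffle(x, 2)
print(y.shape)                                # (1, 4, 4)
```

Since the operation only rearranges values, any systematic difference between the four input channels becomes a repeating 2x2 spatial pattern in the output, which is exactly the kind of artifact the comment describes.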
@user-tp6fd7ky1q 1 month ago
This reminds me a bit of sub-pixel interpolation.
@LokeKS 1 month ago
There is no convolution
@commanderlake7997 1 month ago
It should be noted that this will not scale well with tensor cores and may even be slower.
@luccaa.martins9214 1 month ago
Amazing content!!
@mironpetrikpopovic1621 1 month ago
amaaaazing
@nindzadza 1 month ago
Really good material! You got another sub.
@user-bh2cn3zf3y 1 month ago
Great video... keep it going... Thanks a lot <3
@kamosevoyan4370 1 month ago
We are all waiting for the next videos. Great job, thank you!
@sabelch 2 months ago
The visualization is great, but I find the reflective surfaces visually distracting. It was easier to follow previous videos, where there were no reflections.
@andybrice2711 2 months ago
_"Let's not pretend that greyscale is a thing in 2023."_ Christopher Nolan would like a word with you.
@adrienkin 2 months ago
From your example, it would be nice to give the actual number of computations as an illustration of the roughly 9x speedup :)
@Firestorm-tq7fy 2 months ago
One of the best channels! I wish you'd cover more topics than only CNNs, but I guess you can't be a top pro in every topic. I definitely subscribed and wish you had way more videos already. But I can see that it takes a lot of time and effort, so I will wait. Thank you so much for this work ❤
@Firestorm-tq7fy 2 months ago
Thanks very much. I couldn't stand all those comments from wannabe guys under your previous video.
@Firestorm-tq7fy 2 months ago
I don't see a reason for 1x1 convolutions. All you achieve is losing information, while also creating N features, each scaled by a certain factor. This can also be achieved within a normal layer (the scaling, I mean). There is really no point, obviously outside of the depthwise-pointwise combo. Please correct me if I'm missing something.
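One counterpoint to the "each feature scaled by a factor" reading: at every pixel, a 1x1 convolution is a full linear mix of all input features, not a per-feature scale. A hedged numpy sketch (illustrative sizes, random weights standing in for learned ones):

```python
import numpy as np

c_in, c_out, h, w = 8, 4, 5, 5
x = np.random.rand(c_in, h, w)
weights = np.random.rand(c_out, c_in)   # one 1x1 kernel per output feature

# Sum over the feature axis only; no spatial mixing happens.
y = np.einsum('oc,chw->ohw', weights, x)
print(y.shape)   # (4, 5, 5)

# Each output pixel is a learned combination of ALL input features:
assert np.allclose(y[:, 0, 0], weights @ x[:, 0, 0])
```

Because every output feature can draw on every input feature, a 1x1 layer acts as a cheap, learned dimensionality reduction (as in bottleneck blocks), which is more than a scaling you could fold into a normal layer.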
@janphiliprichter96 2 months ago
Incredible video! Brilliant visualisations and perfect explanation. Keep it up
@pritomroy2465 2 months ago
In U-Net and GAN architectures, when a feature map half its original size needs to be generated, a 4x4 kernel is used.
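One way to sanity-check that 4x4 choice is with the standard output-size formulas (a sketch with illustrative sizes; the stride-2, padding-1 setting is the usual companion to the 4x4 kernel):

```python
# Strided convolution output size: out = floor((in + 2p - k) / s) + 1
def conv_out(size, k, s, p):
    return (size + 2 * p - k) // s + 1

# Transposed convolution output size: out = (in - 1)*s - 2p + k
def conv_transpose_out(size, k, s, p):
    return (size - 1) * s - 2 * p + k

# k=4, s=2, p=1 halves the feature map exactly...
print(conv_out(64, k=4, s=2, p=1))            # 32

# ...and its transposed counterpart doubles it exactly:
print(conv_transpose_out(32, k=4, s=2, p=1))  # 64

# With k=3 the transposed version falls one pixel short of doubling:
print(conv_transpose_out(32, k=3, s=2, p=1))  # 63
```

A kernel size divisible by the stride also means every output pixel is covered by the same number of kernel taps, which is often cited as a way to reduce checkerboard artifacts in the upsampling direction.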
@amankukretti8802 2 months ago
Great work! I have personally been confused about how a 2D conv is applied with respect to depth; the fact that the function is called Conv2D just added to it, though I understand the reasoning behind the naming. Seeing your animations has made my basics strong. Looking forward to future videos.
@100deep1001 2 months ago
Dude, you need to show us how you create these sick animations! Great work man. GOAT indeed!