State of AI 2022 - My Highlights (9:33)
Self-/Unsupervised GNN Training (12:09)
Causality and (Graph) Neural Networks (16:13)
Comments
@comunedipadova1790 1 day ago
Has anyone been able to make it converge? What hyperparameters did you modify?
@akurmustafa_ 4 days ago
Thanks, great explanation! I wonder how sampling with T can produce plausible images in just a single pass. I would expect the sampling code to be called recursively T times, with timestep 1 at each call and z updated according to the result of the previous call, similar to autoregressive architectures.
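For reference: the commenter's intuition matches how DDPM-style samplers are usually described — sampling starts from pure noise and applies the learned denoising step T times in a loop, not in a single pass. A minimal numpy sketch of that loop, with a toy shrink-toward-zero function standing in for the trained denoiser (all names here are illustrative, not the video's code):

```python
import numpy as np

def sample(denoise_step, shape, T):
    """DDPM-style sampling: start from Gaussian noise and apply
    the denoising step T times, each step refining the previous result."""
    x = np.random.standard_normal(shape)  # x_T ~ N(0, I)
    for t in range(T, 0, -1):
        x = denoise_step(x, t)  # produce x_{t-1} from x_t, like autoregression
    return x

# toy "denoiser" that just shrinks the sample toward zero each step
toy_step = lambda x, t: 0.9 * x
out = sample(toy_step, shape=(4, 4), T=50)
```

After 50 toy steps the initial noise is scaled by 0.9**50, so the output is close to zero — the point is only the iterative structure of the loop.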
@sangstar1 6 days ago
Artificial neural networks (ANNs) can be used to commit fraud as well.
@brenner4426 8 days ago
you are a true hero
@louisyeung9059 8 days ago
Amazing!
@beelinh7102 12 days ago
I have a question: why do you use the GCN layer only twice at 11:33?
@sukikuii 13 days ago
There is a little error at 10:35: in the bottom of the fraction it should be [Whi || Whk], not j. But super video, please continue!
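The index the commenter points at matters because the softmax denominator runs over all neighbours k of node i: alpha_ij = exp(e_ij) / sum_k exp(e_ik), with e_ij = LeakyReLU(a^T [W h_i || W h_j]). A small numpy sketch of those attention coefficients (toy dimensions and random weights, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
F_in, F_out = 4, 3
W = rng.normal(size=(F_out, F_in))   # shared linear transform
a = rng.normal(size=(2 * F_out,))    # attention vector

def leaky_relu(z, slope=0.2):
    return np.where(z > 0, z, slope * z)

def attention(h_i, neighbors):
    # e_ik = LeakyReLU(a^T [W h_i || W h_k]) for every neighbour k
    e = np.array([leaky_relu(a @ np.concatenate([W @ h_i, W @ h_k]))
                  for h_k in neighbors])
    e = np.exp(e - e.max())          # softmax over the whole neighbourhood
    return e / e.sum()

h = [rng.normal(size=F_in) for _ in range(3)]
alpha = attention(h[0], h)           # node 0 attends over itself + 2 neighbours
```

The denominator normalizes over every neighbour k, which is exactly why the subscript there must be k, not j.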
@slazy9219 13 days ago
is the resulting embedding for each time stamp or snapshot passed from layer to layer under the hood by the model?
@HishamDahane 17 days ago
You need to build a graph from first principles, not just use libraries to transform your data.
@Therapist659 22 days ago
Sir, how can we use it with a neural network?
@Davi-it3in 24 days ago
Great video!
@hieuluc8888 24 days ago
Thank you so much! I spent many days trying to understand the meaning behind GCN but couldn't get it. And amazingly, it only took me less than 3 minutes watching your example at 1:54 to grasp its meaning.
@qbaliu6462 25 days ago
Thank you for the great video!
@jonsugarjones 27 days ago
Very well prepared and carefully delivered, congratulations and many thanks for your help in understanding LIME.
@lakshman587 1 month ago
Thanks for the video!!
@maheshsonawane8737 1 month ago
This is a magnificent video. Once you understand the math and concepts behind t-SNE, you can thoroughly solidify them with this video. 🌟🌟🌟🌟
@frommarkham424 1 month ago
As a robot myself, i can confirm that an image really is worth 16x16 words
@frommarkham424 1 month ago
An image is worth 16x16 words🗣🗣🗣🗣🗣🗣🗣💯💯💯💯💯💯💯🔥🔥🔥🔥🔥🔥🔥
@waelmikaeel4244 1 month ago
Great job mate, keep it up
@dfdiasbr 1 month ago
Awesome! Thanks for this video!
@oFabianLoL 1 month ago
Could the uncertainty bands in the ensemble code be incorrect? In the plot_de function you compute: stds = torch.stack(mus).std(axis=0).detach().numpy()**(1/2). Why is there another sqrt there? Shouldn't it just be stds = torch.stack(mus).std(axis=0).detach().numpy()? The results look visually better when the sqrt is included, but it seems mathematically incorrect to me, no?
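For what it's worth, the standard deep-ensemble recipe mixes the member Gaussians: the predictive variance is the mean of the member variances plus the variance of the member means, and the plotted band is the square root of that variance — taken once. A std of means needs no extra sqrt on top. A numpy sketch with made-up member outputs (not the video's actual code):

```python
import numpy as np

# made-up predictions of 5 ensemble members at 4 input points
mus    = np.array([[1.0, 2.0, 3.0, 4.0],
                   [1.1, 2.2, 2.9, 3.8],
                   [0.9, 1.9, 3.1, 4.1],
                   [1.0, 2.1, 3.0, 4.0],
                   [1.2, 2.0, 2.8, 3.9]])
sigmas = np.full_like(mus, 0.5)      # each member's own predicted std

mean_pred = mus.mean(axis=0)
# mixture variance = E[sigma^2] + Var[mu]; take sqrt ONCE to get a std
var_pred = (sigmas**2).mean(axis=0) + mus.var(axis=0)
std_pred = np.sqrt(var_pred)
```

If only the means disagree (no per-member sigma), the band reduces to mus.std(axis=0) directly, which supports the commenter's reading.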
@juanete69 1 month ago
Why are many straight lines in your videos not straight?
@giulliabraga9709 1 month ago
I love your way of explaining each topic, thank you! A series on AWS coming maybe? 😢
@영어자막-h3p 1 month ago
It is a crime that this doesn't get more views.
@gabrielcornejo2206 1 month ago
Excellent presentation. I have a question: can DiCE be applied to a model that has 3 classes instead of 2?
@radames09 1 month ago
I loved the video, my friend. Great explanation of the topic. I'll certainly watch the other videos in the near future. Thank you!
@ovidiuluciancristina9127 1 month ago
Great video, really well explained! One thing I found suspicious: in the forward diffusion process you generate noise with "noise = torch.randn_like(x_0)". As far as I know, this samples the uniform distribution U(0,1) and not the standard Gaussian.
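A clarification that may help here: torch.randn_like (like torch.randn) samples the standard normal N(0, 1); it is torch.rand_like that samples U(0, 1), so the video's line is consistent with Gaussian forward diffusion. The same distinction exists in numpy and is easy to sanity-check:

```python
import numpy as np

rng = np.random.default_rng(42)
gaussian = rng.standard_normal(100_000)  # analogue of torch.randn_like
uniform  = rng.random(100_000)           # analogue of torch.rand_like

# Gaussian samples: mean near 0, std near 1, and negative values occur.
# Uniform samples: confined to the interval [0, 1).
print(gaussian.mean(), gaussian.std())
print(uniform.min(), uniform.max())
```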
@juanete69 1 month ago
Why do you need to use class weights if you have already oversampled the data?
@juanete69 1 month ago
Why do you say that x1+x2+x3 is a concatenation? Isn't it an element-wise addition?
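Without the exact slide it is hard to say what was meant, but the distinction itself is easy to show: concatenation stacks feature vectors and grows the dimension, while element-wise addition keeps the dimension fixed:

```python
import numpy as np

x1, x2, x3 = np.ones(4), 2 * np.ones(4), 3 * np.ones(4)

concat = np.concatenate([x1, x2, x3])  # shape (12,): dimensions stack
added  = x1 + x2 + x3                  # shape (4,): element-wise sum
```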
@woosukbyun2455 1 month ago
Does anyone else have problems accessing Colab?
@juanete69 1 month ago
Why is it "x" and not "self.x"? And why self.training and not training?
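A likely answer: x is the input passed into forward, not part of the module's state, so it is a local name; training, by contrast, is a module attribute that PyTorch flips via model.train()/model.eval(), so layers that behave differently at train time (like dropout) must read self.training. A framework-free sketch of that pattern (all names illustrative):

```python
import random

class TinyModule:
    def __init__(self):
        self.training = True           # flipped by train()/eval()

    def train(self): self.training = True
    def eval(self):  self.training = False

    def forward(self, x, p=0.5):
        # dropout only fires in training mode, which is why the flag
        # must come from `self` rather than a local variable
        if self.training:
            return [0.0 if random.random() < p else v / (1 - p) for v in x]
        return x

m = TinyModule()
m.eval()
out = m.forward([1.0, 2.0, 3.0])  # eval mode: input passes through unchanged
```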
@juanete69 1 month ago
Could you explain in more depth why we have 32 nodes and 9 features for this problem, please?
@the_random_noob9860 1 month ago
Amazing video! I have a question about the attributes of a single data point. We have both x and y, and the features themselves are the speeds, which are the ground truth, right? We want to train our model to predict the next 12 timestamps and compare them with the values in x. So what is the significance of y? Although the documentation says y is the ground truth, x and y should then have the same values, but they differ. (The only difference should be that x additionally has time of day as a second feature for each of the 12 timestamps per data point.) Could you kindly clarify this?
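I can't speak to that exact dataset's conventions, but the usual traffic-forecasting setup windows the speed series so that x holds the past 12 steps (plus extras like time of day) and y holds the next 12 steps as the prediction target, which is why their values naturally differ. An illustrative numpy windowing of a toy series:

```python
import numpy as np

speeds = np.arange(100, dtype=float)     # toy speed series
L = 12                                   # history length and forecast horizon

xs, ys = [], []
for t in range(len(speeds) - 2 * L + 1):
    xs.append(speeds[t : t + L])         # past 12 steps  -> model input x
    ys.append(speeds[t + L : t + 2 * L]) # next 12 steps  -> ground-truth y
xs, ys = np.array(xs), np.array(ys)
```

Each window's y is simply the continuation of its x, so the two tensors overlap in content but are shifted by one horizon.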
@CarolineWu0719 1 month ago
Thank you for your great explanation!
@TrusePkay 1 month ago
You did not cover LDA and ICA.
@kshitijdesai2402 2 months ago
I found it hard to follow initially, but after understanding GCNNs thoroughly, this video is a gem.
@dr.aravindacvnmamit3770 2 months ago
I enjoyed your lecture; it was very nice. How can this be applied to images like X-rays or CT scans?
@MaryamSadeghi-u6u 2 months ago
You have put a lot of time into creating these videos, and it is really valuable that after 3 years they are still very useful.
@MaryamSadeghi-u6u 2 months ago
Great video, thank you!
@luisperdigao6204 2 months ago
"... github in the link below....". Where is the link?
@hannespeter1484 2 months ago
Wow, what a great video. Thank you, it helped me a lot.
@giulliabraga9709 2 months ago
I just discovered your channel and THANK YOU!!
@王恺风 2 months ago
I really enjoy this video! It is so concise, comprehensive and beautiful! And thanks a lot for so many useful links for further learning.
@ishara779 2 months ago
So how are the edge features used in the GCN algorithm? Are they completely ignored? According to this explanation, only the node features take part in the convolution process.
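Right — in the vanilla GCN propagation rule H' = sigma(D^{-1/2} A_hat D^{-1/2} H W), edges enter only through the (binary or scalar-weighted) adjacency; there is no slot for edge feature vectors, which is why variants such as edge-conditioned convolutions exist. A numpy sketch of one vanilla layer (toy graph and weights):

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)  # adjacency: connectivity only, no edge features
H = np.eye(3)                            # toy node features
W = np.ones((3, 2))                      # toy learnable weight matrix

A_hat = A + np.eye(3)                    # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
# symmetric normalization, then linear transform and ReLU
H_next = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```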
@VoltVipin_VS 2 months ago
The best part of vision transformers is built-in support for interpretability, compared to CNNs where we had to compute saliency maps.
@metehkaya96 2 months ago
Perfect video for understanding GATs. However, I guess you forgot to add the sigmoid function when you demonstrate h1' as a sum of multiplications of hi and attention values in the last seconds of the video: 13:51.
@ashishkannad3021 2 months ago
Why are we adding the time embedding to the input features, literally adding them together? Would a simple concatenation of input features and time embedding be possible? By the way, dope video, thanks for sharing.
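On the add-vs-concat question: both appear in practice. Addition keeps the channel count fixed (transformer-style sinusoidal embeddings were designed to be added), while concatenation grows the width and the next layer must expect it. A small sketch of a sinusoidal timestep embedding used both ways (toy sizes, illustrative only):

```python
import numpy as np

def time_embedding(t, dim):
    """Transformer-style sinusoidal embedding of a scalar timestep."""
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    ang = t * freqs
    return np.concatenate([np.sin(ang), np.cos(ang)])

feat = np.random.default_rng(1).normal(size=8)  # toy input features
emb  = time_embedding(t=5, dim=8)

added    = feat + emb                  # shape stays (8,)
concated = np.concatenate([feat, emb]) # shape becomes (16,): wider next layer
```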
@urveesh09doshi62 2 months ago
I'm building a model from the sx-stackoverflow dataset from SNAP; it only has source, target and timestamp, and I have no clue how to build a dataset for TGN from that.