The ultimate intro to Graph Neural Networks. Maybe.

83,078 views

AI Coffee Break with Letitia

1 day ago

Comments: 113
@khushpatelmd · 3 years ago
This channel is so underrated. There is a lot of info if you are learning for the first time, but the best approach is to watch the video 3-4 times.
@AICoffeeBreak · 3 years ago
Thanks, I appreciate your words! If you think the channel is underrated, then do not hesitate to share the content! 😁 By doing so, you can actively help! Thank you so much!
@malekaburaddaha5910 · 3 years ago
I have been watching videos about GNNs for two days and I got the general idea, but here I completely understood everything. Thank you very much! I am glad that I found your channel, keep going.
@mirjunaid26 · 2 years ago
The way you break down and explain the mathematical formulae of GNN is amazing and beautiful. At the same time introducing and clearing the concept of permutation-invariance in such a short video is commendable. Thank you ❤️ Liking, sharing, & subscribing.
@AICoffeeBreak · 2 years ago
Wow, thank you! This comment made my day.
@vincent_hall · 3 years ago
Love it, thanks Coffee Bean. I'd worked with graph theory and worked a lot with NNs, but I didn't know what graph convolutional NNs were. Thanks for updating my skills. I came here because I kept seeing the term Graph Neural Network everywhere.
@AICoffeeBreak · 3 years ago
Ms. Coffee Bean is so glad this video was helpful for you!
@nicohambauer · 3 years ago
Nice! Keep up the good work! You are a true researcher, helping us other researchers keep track of the really relevant things in the literature!
@swarajshinde3950 · 4 years ago
Thank you for such clear and concise information!!
@DavenH · 3 years ago
Really well presented and animated. Keep it up!
@MCMelonslice · 2 years ago
I love your channel. Just found you yesterday, and as a nearly complete scrub in ML, this helps build a solid foundation!
@miladaghajohari2308 · 3 years ago
That is a nice intro. Thanks for taking the time to make it.
@AICoffeeBreak · 3 years ago
Glad you like it!
@firdawsyahya3749 · 3 years ago
I finally have the formula on lock. Thank you
@UnrecycleRubdish · 3 years ago
Very cute and entertaining idea with the coffee bean. Makes an otherwise dry subject a little bit... moist? Anyway thank you for the informative video. FYI I played it on 1.25x speed because she spoke a little bit too slow for me! Great work!
@AICoffeeBreak · 3 years ago
Thanks for watching! The speaking speed is something I am still adjusting to find my pace. Happy that you used the speedup functionality to help yourself. :)
@pureeight7003 · 3 years ago
I find this video very useful. I have subscribed!
@AICoffeeBreak · 3 years ago
Awesome! 👍
@pradyumnagupta3989 · 3 years ago
Oh my GOD, I have been trying to study GNNs for so long, and this is by far the best video I have seen on this topic. Thank you for clarifying that "convolution" is misleading in GCNs.
@AICoffeeBreak · 3 years ago
Thanks, I really tried to convey the important concepts. All the other Schnick-schnack (German for "bells and whistles") in GNNs is very good at scaring away people trying to learn about them for the first time.
@chronomo97 · 2 years ago
Great intro!
@deepakravikumar674 · 3 years ago
2:54 - I was happy. 3:00 - RIP.
@AICoffeeBreak · 3 years ago
😅
@boscojay1381 · 3 years ago
I was about to leave... then her voice at the end said, "hey, do not forget to like & subscribe...", and that's how she caught her fish!
@AICoffeeBreak · 3 years ago
Love your "fish-catching" formulation.😅 Is it right for her to assume from your profile picture that you like sailing?
@boscojay1381 · 3 years ago
@@AICoffeeBreak you’re absolutely right! lol
@AICoffeeBreak · 3 years ago
Cool! I will tell you a secret: Ms. Coffee Bean loves sailing too (but she is quite the beginner in the matter)!
@TheAIEpiphany · 3 years ago
Really useful and I like the creative animations! Keep up the great work, subscribed!
@AICoffeeBreak · 3 years ago
Thanks for the sub, really nice to have you around! I discovered your channel just yesterday and subscribed, but completely unrelated to this comment you left here! 😎
@TheAIEpiphany · 3 years ago
@@AICoffeeBreak heheh nice, glad to hear that and glad to be here!
@thomastorku9002 · 3 years ago
She is phenomenal
@AICoffeeBreak · 3 years ago
You are phenomenal!
@samarthagarwal7219 · 3 years ago
Nice explanation, thanks!
@AICoffeeBreak · 3 years ago
Glad to help!
@omarmafia234 · 3 years ago
I cannot thank you enough!! Great, great work!
@AICoffeeBreak · 3 years ago
So glad it helped! Means a lot to Ms. Coffee Bean.
@ИльясХарунов · 2 years ago
I can't quite understand how the weighted sum of neighbour vectors is permutation invariant. For example, if we swap the node 4 and node 2 vectors, then the node 4 vector will now get weight c_i2 and the node 2 vector will get c_i4. Why won't the sum change?
@AICoffeeBreak · 1 year ago
First we multiply node 4's vector representation with a matrix W, and we also multiply node 2's vector representation with the same matrix W. Each coefficient c_ij stays attached to its node j rather than to a position in the sum, so swapping the two terms does not change anything (addition is commutative).
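[Editor's note] To make the commutativity argument concrete, here is a minimal numpy sketch (an illustration, not code from the video): the normalization constants c_ij travel with the neighbor indices, so summing the transformed neighbor vectors in any order gives the same result.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
W = rng.standard_normal((d, d))       # shared weight matrix for all neighbors
h = {2: rng.standard_normal(d),       # toy representations of neighbors 2 and 4
     4: rng.standard_normal(d)}
c = {2: 2.0, 4: 3.0}                  # normalization constants c_i2, c_i4

def aggregate(order):
    # each neighbor j keeps its own c[j], so the visiting order cannot matter
    return sum((1.0 / c[j]) * (W @ h[j]) for j in order)

print(np.allclose(aggregate([2, 4]), aggregate([4, 2])))  # True
```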
@wellisonraul5825 · 10 months ago
Thank you!
@nikhilmanali920 · 1 year ago
Thank you so much.
@abhinavmishra9401 · 3 years ago
This is so beautiful. I am crying. Thanks a lot
@AICoffeeBreak · 3 years ago
Thank you so much! It means a lot to me that it meant something to you!
@robertramji761 · 3 years ago
Such a clear and accessible explanation, thank you!!!
@AICoffeeBreak · 3 years ago
Great to hear, thanks! ☺️
@DeepFindr · 3 years ago
I also made a video on GNNs but I have to admit yours is more compressed and gets faster "to the point" :) Well done!
@AICoffeeBreak · 3 years ago
Haha, "to the point" is kind of the motto of the whole channel! But everyone keen to find out everything around the topic has you! 😉
@AICoffeeBreak · 3 years ago
BTW, great channel! Your GNN videos are also accompanied by a blog post! You have something for everyone!
@DeepFindr · 3 years ago
Thanks! Sounds good, I look forward to your future videos :)
@sunaryaseo · 2 years ago
Thanks very much for the explanation. I still didn't get the notation 'W' and 'U' in the formula. Where are they exactly in the figure? Are they on the edges of the graph, so we can multiply them with 'H'? Or somewhere else (in another figure) you didn't show? If there is another NN after this graph, I am curious how you connect this graph with that NN.
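[Editor's note] For readers with the same question, a hedged sketch (the exact roles of W and U depend on the video's notation): W and U are learned weight matrices of the layer itself, shared across all nodes; they are not attached to the edges and do not appear in the graph figure. One typical update step looks like the code below, and connecting to a downstream NN is then just a matter of feeding the final node matrix (or a pooled version of it) into that network as ordinary features.

```python
import numpy as np

def message_passing_step(H, A, C, W, U):
    """H: (n, d) node representations, A: (n, n) adjacency matrix,
    C: (n, n) normalization constants c_ij, W and U: learned (d, d) matrices.
    Here W transforms a node's own vector and U the neighbors' vectors
    (the roles may be swapped in the video's notation)."""
    n = H.shape[0]
    H_next = np.empty_like(H)
    for i in range(n):
        neighbors = np.nonzero(A[i])[0]
        agg = sum((1.0 / C[i, j]) * (H[j] @ U) for j in neighbors)
        H_next[i] = np.tanh(H[i] @ W + agg)   # nonlinearity, e.g. tanh
    return H_next
```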
@prachi07kgp · 1 year ago
Very helpful video; such a complicated concept explained so beautifully and in such a simple manner.
@newbie8051 · 1 year ago
Great explanation, loved this. Thanks a ton, ma'am!
@sir_aken9706 · 1 year ago
Just discovered this channel and ngl, I think I'm in love with the coffee bean 😂 Very good and succinct video.
@rohith2454 · 2 years ago
Thanks a lot!
@736939 · 3 years ago
Are you sure that for each "h" we need to apply a weight matrix "W", and not a scalar "w"???
@AICoffeeBreak · 3 years ago
Each h is a vector, therefore W is a matrix. 😃
@736939 · 3 years ago
@@AICoffeeBreak Thank you.
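[Editor's note] A two-line illustration of the point above (with an assumed d-dimensional h, not values from the video): a scalar w could only rescale h, while a matrix W can mix its features and change their dimension.

```python
import numpy as np

h = np.ones(4)          # node representation with d = 4 features
W = np.ones((3, 4))     # learned map from d = 4 to d' = 3 dimensions
print((W @ h).shape)    # (3,) -- a scalar could never change the dimension
```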
@denshaSai · 2 years ago
So what are the ground truth and the loss function for graphs? (How do you actually learn the weights?)
@AICoffeeBreak · 2 years ago
Depends on what you are trying to do: For node classification, you have a cross-entropy loss for predicting the label of each node, for example.
@ditherycarbon8661 · 2 years ago
@@AICoffeeBreak So, we are still using backpropagation to learn the weights, right? And how do we get the final output from the individual node embeddings? (Considering that the output is global for the entire graph and not local to each node.) Thanks in advance.
@AICoffeeBreak · 2 years ago
@@ditherycarbon8661 Yes, it is still backprop and gradient descent. If we need to make a classification of the whole graph, we apply a few extra classification layers on an aggregation of the whole graph. Just an idea (GNNs are not my area of expertise): one could still do graph classification if there are so many GNN layers that the information has had time to smooth out over the entire graph. Then we could use one output node alone to say something about the whole graph.
@ditherycarbon8661 · 2 years ago
@@AICoffeeBreak Thanks, makes sense
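[Editor's note] A minimal sketch of the graph-level readout described above (illustrative names, not the video's code): pool the node embeddings into one vector with a permutation-invariant operation like the mean, then apply an ordinary classification layer whose logits feed the cross-entropy loss during backprop.

```python
import numpy as np

def graph_classification_logits(H, W_out, b_out):
    """H: (n, d) node embeddings after the GNN layers.
    Mean pooling keeps the whole pipeline permutation-invariant."""
    g = H.mean(axis=0)           # (d,) embedding for the entire graph
    return g @ W_out + b_out     # (num_classes,) logits for cross-entropy
```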
@diwakerkumar5910 · 1 year ago
Thank you
@VenkatIyer · 3 years ago
Is there an easy implementation of simple GNNs available somewhere? On GitHub I can see only sophisticated approaches for weird problems.
@AICoffeeBreak · 1 year ago
If there is one, I would also like to know. :)
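[Editor's note] In the meantime, here is a minimal self-contained sketch of a GCN layer, assuming the symmetric normalization c_ij = sqrt(deg(i) * deg(j)) popularized by Kipf & Welling's GCN; for real projects, libraries such as PyTorch Geometric or DGL provide tested implementations.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution layer, ReLU(A_norm @ H @ W).
    H: (n, d) features, A: (n, n) adjacency, W: (d, d') weights."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt  # entries: 1 / sqrt(deg_i * deg_j)
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU nonlinearity

# a 3-node path graph; stacking two layers lets each node see its 2-hop context
rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = rng.standard_normal((3, 5))
out = gcn_layer(gcn_layer(H, A, rng.standard_normal((5, 8))), A,
                rng.standard_normal((8, 2)))
print(out.shape)  # (3, 2): two scores per node, e.g. for node classification
```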
@sukanya4498 · 2 years ago
Great introduction with examples 👏🏽🙌🏽👍🏽!!
@AICoffeeBreak · 2 years ago
Glad you liked it!
@cesar73silva · 3 years ago
Yes. This is good.
@AICoffeeBreak · 3 years ago
Thanks!
@cesar73silva · 3 years ago
@@AICoffeeBreak A question: how do I get the initial representations for h_i? Or what possible ways are there?
@AICoffeeBreak · 3 years ago
@@cesar73silva Depends a lot on what you want to do. As an example: for NLP it is quite common to initialize h_0 with word embeddings. In applications where you really have nothing to start with, h_0 could even be initialized randomly.
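[Editor's note] A tiny sketch of both options; the vectors here are random stand-ins, not real pretrained embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Option 1 (e.g. NLP): nodes are words, so start from pretrained word embeddings
word_embeddings = {"coffee": rng.standard_normal(d),   # placeholder vectors
                   "bean": rng.standard_normal(d)}
h0 = word_embeddings["coffee"]

# Option 2: no natural node features available -- initialize randomly
h0_random = rng.standard_normal(d)
```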
@scottt9382 · 2 years ago
This is excellent
@keithsanders6309 · 2 years ago
This was a helpful and stellar video!!
@nicholasliu-sontag1585 · 3 years ago
Thanks for this video. I wanted to ask: does this video not also describe GNNs with RNNs, given that the nodes you describe have some short-term memory?
@st0a · 1 year ago
Really cute channel. Why you don't have more subscribers is beyond my understanding.
@prathamprasoon2535 · 2 years ago
Awesome video!
@AICoffeeBreak · 2 years ago
Thanks! You are THE Pratham Prasoon! I know you from Twitter. 😄
@safaelaat1868 · 2 years ago
Thank you very much for all your videos. We always hear that computer vision performance has exceeded human performance. But where does this information come from, and what about other domains like NLP, speech recognition, financial fraud detection, autonomous driving, and malware detection? Thanks again.
@zhangkin7896 · 2 years ago
Hi Ms. Coffee Bean, could I translate the video and publish it on another video platform (bilibili) in that language, citing the original?
@AICoffeeBreak · 2 years ago
Hey, thanks for reaching out! I would love to upload on bilibili myself, the problem is that I do not get past the account creation verification due to my poor Chinese. I would love some help with that. Would you mind writing me an email? ☺️
@goldfishjy95 · 3 years ago
Thank you so much! #prayinghandsemoji
@rembautimes8808 · 3 years ago
Does the term $\sum_{j \in \mathcal{N}(i)} \frac{1}{c_{i,j}} h_j^{(t)} U$ follow from the Kolmogorov-Arnold representation theorem for continuous multivariate functions? Excellent video, I may add.
@AICoffeeBreak · 3 years ago
There is certainly a striking similarity to it. But I am not really sure, I can only cite a paper I found about this and point you to it: "The Kolmogorov-Arnold representation decomposes a multivariate function into an interior and an outer function and therefore has indeed a similar structure as a neural network with two hidden layers. But there are distinctive differences. One of the main obstacles is that the outer function depends on the represented function and can be wildly varying even if the represented function is smooth." export.arxiv.org/pdf/2007.15884.pdf
@ehsanelahi190 · 2 years ago
Good explanation of such a dry topic.
@AICoffeeBreak · 2 years ago
🏜️ thanks!
@CallSaul489 · 2 years ago
Really nice explanation!
@l3nn13 · 4 months ago
I would love some links or names for the original papers you are referring to.
@ckng6126 · 2 years ago
Very succinct explanation
@bibs2091 · 2 years ago
I still have many questions in mind, but this is surely a very good introduction.
@heathenfire · 2 years ago
Wow, this was such a good explanation.
@sarahjamal86 · 2 years ago
Well done 🥳
@marcusbluestone2822 · 1 year ago
Well explained. Thanks!
@IndoPakComparison · 2 years ago
Very basic info, but you still used terms and notation that are hard for a new learner, haha. But it is a good one. Keep it up (加油)!
@christianmoreno7390 · 2 years ago
You have a beautiful accent, girl!
@sachin63442 · 2 years ago
Why can't you use XGBoost or decision trees for node-level classification instead of a GCN?
@baqirhusain5652 · 2 years ago
I love you, ma'am!
@odiiibo · 1 year ago
One-way communication, no discussion in the comments. What is the right word for it? A lecture. Thank you.
@AICoffeeBreak · 1 year ago
Yeah, I try my best to keep up with the many comments. I am one person and there are many questions in the world, but I do always find some time to respond to lots of them. Is there any in particular you would have liked to see answered?
@odiiibo · 1 year ago
@@AICoffeeBreak Sorry, I was wrong about the discussions here; I just grew tired of this issue in other places. I see a problem of centralization in producing these models, where prediction is actually influenced by data selection. Do you want to discuss this?
@AICoffeeBreak · 1 year ago
@@odiiibo I mean, sure. It is not only data selection that matters, but also, for example, the prompt that ChatGPT always sees but that is not shown in the conversation UI. In other words, there are a lot of ways in which the large data and model holders can influence predictions. Therefore it is great to see that open source and organizations gathered on the Internet (LAION, Eleuther, OpenAssistant) are breathing down their neck. :) Or do you think open source is not enough?
@odiiibo · 1 year ago
@@AICoffeeBreak Open source is fine but has its limits. I have a couple of concerns, and one of them is fundamental: a board of experts is not able to create the tool that suits all applications. The obstacle is the same as with the semantic web. Life supplies us with unexpected situations, and even computer-aided knowledge is limited. Material products can be produced by specialists, but knowledge and communication can't fit into a rigid frame. Word meanings are created each second by the thousands. In any software the same function name can have drastically different meanings, and prompt history is not enough to overcome this shortcoming. Isn't it so?
@AICoffeeBreak · 1 year ago
@@odiiibo This is why we need research into models that update their knowledge. Systems relying on knowledge bases would be one direction: you update the president in the knowledge graph, and the model relying on it is up to date in this way.
@tallwaters9708 · 1 year ago
GCNNs, what a horrible name!...
@AICoffeeBreak · 11 months ago
Gone are the times of AlexNet... 😅
@kvnptl4400 · 2 months ago
Honestly, I couldn't understand it on the first attempt :(
@amanoswal7391 · 3 years ago
The coffee bean acts as a distraction. Good explanation otherwise.
@AICoffeeBreak · 3 years ago
Me: 😀🎉 Ms. Coffee Bean: 😶
@elinetshaaf75 · 3 years ago
lol