Theoretical Foundations of Graph Neural Networks

92,416 views

Petar Veličković


Comments: 57
@leodu561
@leodu561 3 years ago
Petar's talks are great as always! (I remember attending his talk while at Google lol). Timestamps for those looking to rewatch specific sections :)
0:00 - Introduction by Pietro Lio
1:10 - Overview
1:56 - 1. Fantastic GNNs in the Wild
6:52 - 2. Talk Roadmap
9:00 - 3. Towards GNNs from first principles
10:34 - 4. Permutation invariance and equivariance
15:42 - 5. Learning on Graphs
20:22 - 6. Message passing on graphs
24:34 - 7. Perspectives on GNNs
25:42 - 7.1 Node Embedding Techniques
29:39 - 7.2 Natural Language Processing
31:23 - 7.3 Spectral GNNs
41:17 - 7.4 Probabilistic Graphical Models
45:09 - 7.5 Graph Isomorphism Testing
48:53 - 7.6 Geometric Deep Learning
50:23 - 7.7 Historical Concepts
51:15 - 7.8 Computational Chemistry
52:22 - Acknowledgements and Q&A
@kristofhorvath3812
@kristofhorvath3812 2 years ago
This is one of the cleanest, most sophisticated and organized scientific speeches I have ever heard...
@vladansaracpv
@vladansaracpv 2 years ago
Beautiful presentation. Dr Velickovic is one of the best lecturers I've heard in my life. Everything he says is so clear and concise. Add his charisma on top of all that and you can understand why he attracts more and more people to study GNNs. We are so proud to have him
@KyleCranmer
@KyleCranmer 3 years ago
Excellent talk Petar, so useful to have these different perspectives brought together in one consistent framing.
@pw7225
@pw7225 3 years ago
Your presentation skills only became better since Cambridge times. And they were stellar then already.
@nguyenthanhdat93
@nguyenthanhdat93 2 years ago
Great presentation. Thank you for sharing, Dr Velickovic.
@syleshgupta5957
@syleshgupta5957 1 year ago
It was very informative, and the slides are self-explanatory for anyone with a basic understanding of the math :) Thank you!
@adityamishra348
@adityamishra348 3 years ago
Great talk! The first 20 minutes are simply brilliant! The kind of first-principles explanation that I dream of when starting any new topic :)
@ceevaaaaa
@ceevaaaaa 3 years ago
Thank you very much! I just completed my undergrad, and I am in the process of discovering new ideas and topics to work on and learn more about. These kinds of videos really help me (especially as a young graduate who doesn't have much of an idea about many topics but wants to discover more).
@love12xfuture
@love12xfuture 2 years ago
Amazing talk! I like the way you connect concepts with their applied and historical context. It motivates me to try to make this talk make sense to a 7-year-old or a 107-year-old (:
@coder8i
@coder8i 3 years ago
Petar! This is solid work. Clear thinking and speaking.
@ryderbrooks1783
@ryderbrooks1783 2 years ago
This gentleman is very good at this.
@emmarocheteau5788
@emmarocheteau5788 3 years ago
Rewatching some of this talk - it is that good!
@bayrameda
@bayrameda 3 years ago
Hi Petar, this was a nice reframing of GNNs, thanks! Noting that GAT can handle non-homophilic graphs suggests an analogy to me: if propagation is error smoothing, then attention makes it edge-aware smoothing (as in image processing).
@tasveerahmad5002
@tasveerahmad5002 3 years ago
Very nice lecture. Good GNN resources, tools, and exposure.
@abyoussef19
@abyoussef19 3 years ago
We need more lectures like this! Nice Lecture!
@saeidsas2113
@saeidsas2113 1 day ago
Thank you very much for the presentation; it was very insightful. I have a minor question. As you pointed out, the situation is a bit different when it comes to the continuous node feature task. I wonder whether some care needs to be taken when fitting a graph representation into a convolutional neural network? Thank you for your consideration.
@calebparks8318
@calebparks8318 2 years ago
This was a great talk. Thank you!
@alexmorehead6723
@alexmorehead6723 3 years ago
Thank you for the great talk, Petar!
@daoudpiracha9891
@daoudpiracha9891 3 years ago
Thank you Petar for this talk!
@blackguardian89
@blackguardian89 3 years ago
Great talk! It definitely improved my understanding of GNNs. Thank you!
@kaanyolsever1149
@kaanyolsever1149 3 years ago
Great presentation, very entertaining and informative. Thanks, Petar.
@GorsanMestiri
@GorsanMestiri 3 years ago
Excellent talk, thank you so much. I'd be more than happy if you could share the best resources for diving into GNNs applied to combinatorial optimisation problems. 🙏
@petarvelickovic6033
@petarvelickovic6033 3 years ago
Thank you for the kind words! As a matter of fact, we've very recently put out a survey on GNNs for combinatorial tasks: arxiv.org/abs/2102.09544
@GorsanMestiri
@GorsanMestiri 3 years ago
@@petarvelickovic6033 This is amazing. Thanks
@amiltonwong
@amiltonwong 3 years ago
Hi, Petar, thanks a lot for the talk recording. Could you also release the slides of your talk?
@petarvelickovic6033
@petarvelickovic6033 3 years ago
They are provided in the video description now :)
@amiltonwong
@amiltonwong 3 years ago
@@petarvelickovic6033 Thanks a lot again :)
@amitsett8117
@amitsett8117 3 years ago
Found this on Reddit. Great talk.
@stefanspalevic
@stefanspalevic 3 years ago
Thank you, Petar!! :)
@GiovannaIwishyou
@GiovannaIwishyou 3 years ago
Thank you, Petar :)
@NelsonOtuma
@NelsonOtuma 2 years ago
This is an interesting area of study. It would be great if you could drop links to PDF books in the comments.
@kristofneys2349
@kristofneys2349 3 years ago
Thank you so much - very good and useful!
@JorGe-eu3wi
@JorGe-eu3wi 3 years ago
Excellent presentation! A question came up though: which flavour of GNN layer could we say GraphSAGE uses for its embedding algorithm? Could the learned weight matrices W be considered as fixed weight inputs to the convolutional GNN layer?
@petarvelickovic6033
@petarvelickovic6033 3 years ago
An excellent question -- thanks for asking! This would depend on which type of GraphSAGE we're looking at :) GraphSAGE-mean, GraphSAGE-GCN and GraphSAGE-pool are all conv-GNNs. They transform every node in isolation, then use a permutation-invariant aggregator, and do not take the receiver node into account at all. The matrix W is just part of either of the two functions (psi or phi), depending on whether it's applied to individual neighbours or to aggregated vectors. On the other hand, GraphSAGE-LSTM is not permutation equivariant, and hence does not fit any of the three flavours. It is possible to 'massage' the LSTM aggregator to make it fit, however; see Janossy Pooling (Murphy et al.). Lastly, I'd note that GraphSAGE's main contribution is its scalability to inductive learning on very large graphs (as per its title ;) ) through neighbourhood sampling. Many of the embedding algorithms it proposes are not unlike models already previously proposed in the literature (e.g. the GCN of Kipf & Welling).
@JorGe-eu3wi
@JorGe-eu3wi 3 years ago
@@petarvelickovic6033 thank you very much for the answer :D
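For readers following the exchange above, here is a minimal NumPy sketch (illustrative only, not the official GraphSAGE implementation; this variant combines self and neighbour features by summation rather than the paper's concatenation, and all names and shapes are assumptions) of how a shared weight matrix sits inside a permutation-invariant mean aggregator in a conv-GNN layer:

import numpy as np

def graphsage_mean_layer(X, A, W_self, W_neigh):
    """X: [n, d] node features; A: [n, n] binary adjacency;
    W_self, W_neigh: [d, d_out] learned weight matrices."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)   # guard against isolated nodes
    neigh_mean = (A @ X) / deg                       # permutation-invariant mean over neighbours
    H = X @ W_self + neigh_mean @ W_neigh            # W applied to self and to the aggregated vector
    return np.maximum(H, 0)                          # ReLU nonlinearity

# Toy usage on a 3-node path graph
rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = rng.normal(size=(3, 4))
H = graphsage_mean_layer(X, A, rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
print(H.shape)  # (3, 8)

Because the mean does not depend on the order of the neighbours, permuting the node ordering permutes the output rows in the same way, which is exactly the equivariance property discussed in the talk.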
@DeepFindr
@DeepFindr 3 years ago
Really great summary! :)
@kejianshi9196
@kejianshi9196 2 years ago
Pretty fast. Rewatching 3-4 times is helpful.
@sacramentofwilderness6656
@sacramentofwilderness6656 3 years ago
Thanks for a great talk! Very interesting and inspiring! I wonder, is there any research on graph networks in a limit amenable to analytical treatment, like the NTK regime? Are there any special properties of the loss landscape that are not present in more common fully-connected NNs or standard CNNs?
@petarvelickovic6033
@petarvelickovic6033 3 years ago
Thank you for the kind words! While it may not fully align with what you're after, I think you might find the recent paper from Xu et al. on (G)NN extrapolation very interesting: arxiv.org/abs/2009.11848 Herein, the authors make several (nicely visualised) geometrical arguments on the properties of GNNs in the extrapolation regime. The main tool and setting for their analysis is, indeed, the NTK regime.
@kndlt
@kndlt 2 years ago
Would it be possible for someone without much ML experience (but with a CS degree) to learn the theoretical part of GNNs inside out within a month?
@petarvelickovic6033
@petarvelickovic6033 2 years ago
It should be possible, in my opinion. Especially since GNNs are fundamentally discrete structures, they align very well with the kind of computation typically studied in a theoretical CS degree.
@tae898
@tae898 3 years ago
If you are using Transformers, you are using GNNs!
@thegreatlazydazz
@thegreatlazydazz 2 years ago
Matrices that commute are jointly diagonalizable. I understand this as: if AB = BA, then A and B have the same eigenvectors? However, this cannot be true, as I commutes with any matrix, and any vector is an eigenvector of I.
@petarvelickovic6033
@petarvelickovic6033 2 years ago
Good sighting! I didn't have the time in the talk to get into the nuances of this, but essentially, you'll have exactly the same eigenbasis if the matrices commute _and_ have no repeated eigenvalues. If you have repeated eigenvalues (as is the case for I), this theorem becomes more ambiguous to apply. en.m.wikipedia.org/wiki/Commuting_matrices has a bucket list of properties wrt commutativity and diagonalisation.
@thegreatlazydazz
@thegreatlazydazz 2 years ago
I understand the assertion with no repeated eigenvalues. I thought about it a bit more; for repeated eigenvalues, is it that there exists a change of basis that diagonalizes all of them? Any unitary matrix diagonalizes I. Thanks for the reference. The shift matrix was an excellent hint for the next property.
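A quick numerical illustration of the point in this thread (a sketch with made-up matrices, not from the talk): two symmetric matrices that commute and have distinct eigenvalues share an eigenbasis, while the identity's repeated eigenvalue is exactly what makes the "same eigenvectors" statement ambiguous.

import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))     # random orthogonal basis
A = Q @ np.diag([1.0, 2.0, 3.0, 4.0]) @ Q.T      # symmetric, distinct eigenvalues
B = Q @ np.diag([5.0, -1.0, 0.5, 2.0]) @ Q.T     # same eigenbasis, distinct eigenvalues

print(np.allclose(A @ B, B @ A))                 # True: A and B commute

_, vecs_A = np.linalg.eigh(A)                    # eigenvectors of A
D = vecs_A.T @ B @ vecs_A                        # express B in A's eigenbasis
print(np.allclose(D, np.diag(np.diag(D))))       # True: A's eigenbasis also diagonalises B

# By contrast, the identity commutes with everything, but its repeated
# eigenvalue means any orthogonal basis diagonalises it, so "the same
# eigenvectors as A" is no longer a meaningful claim.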
@reidwyde5723
@reidwyde5723 3 years ago
Thank you so much!
@sahar2003
@sahar2003 3 years ago
Good talk. Thanks.
@sachin63442
@sachin63442 3 years ago
Why can't you use XGBoost or decision trees for node-level classification instead of a GCN?
@petarvelickovic6033
@petarvelickovic6033 3 years ago
Of course you can! Sergey Ivanov et al. recently showed it's a very strong baseline: arxiv.org/abs/2101.08543
@sachin63442
@sachin63442 3 years ago
@@petarvelickovic6033 So when should one use a GCN over XGBoost or decision trees (not combined)?
@petarvelickovic6033
@petarvelickovic6033 3 years ago
As far as I know, boosting and decision trees are great for dealing with data that is assumed tabular, i.e. where you don't assume your nodes are linked together. GCNs (and/or more expressive GNNs) should be used whenever you assume that the links between your data points are actually meaningful and should be exploited.
@fairuzshadmanishishir8171
@fairuzshadmanishishir8171 3 years ago
The relation between PGMs and graph neural networks is not clear to me. Could you clarify the concepts?
@insightfool
@insightfool 3 years ago
Brain.....melted
@kprakash9665
@kprakash9665 3 years ago
The theory is not clearly explained either, and the practical side is not explained at all. Worst university.
@sb-xq1sr
@sb-xq1sr 2 years ago
37:17 Why can't an adjacency matrix be eigendecomposed? AFAIK, any real symmetric matrix is diagonalizable. en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix#Real_symmetric_matrices I believe you were trying to substitute the adjacency matrix with a positive semi-definite matrix that can also express all adjacency properties. That way, the eigendecomposed diagonal matrix Λ only has non-negative values.
@petarvelickovic6033
@petarvelickovic6033 2 years ago
Thanks for your note! You are certainly correct, and this is one aspect I hadn't qualified well enough in the talk. Yes, any undirected adjacency matrix can be eigendecomposed. However, the Laplacian's eigendecomposition has nicer properties. It guarantees all eigenvalues are nonnegative (while the multiplicity of the zero eigenvalue can be used to track connected components), and the resulting eigenvectors can be directly used to approximate many interesting problems on the graph (e.g. optimal cuts).
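To make the Laplacian property above concrete, here is a small NumPy sketch (illustrative only; the toy graph of two disconnected triangles is an assumption for the example): the eigenvalues of L = D - A are nonnegative, and the multiplicity of the zero eigenvalue equals the number of connected components.

import numpy as np

# Two disconnected triangles: expect exactly two zero eigenvalues
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A               # combinatorial Laplacian L = D - A
eigvals = np.linalg.eigvalsh(L)              # symmetric matrix -> real eigenvalues

print(np.all(eigvals > -1e-9))               # True: L is positive semi-definite
print(int(np.sum(np.abs(eigvals) < 1e-9)))   # 2: one zero eigenvalue per connected component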