Wow, I used to fear Graph Neural Networks thinking it was some sort of monster. But this presentation has changed everything for me. Excellent job Petar! Thank you, thank you so much!
@TheStargazer1221 5 months ago
Changed the literature, still incredibly humble. Great representation of a scientist.
@giovannibianco5996 5 months ago
Great video Petar; now I understand everything and I will never ever have any kind of fear towards the GAT. Now I am friends with the GAT. We hang around often and apply leaky ReLU to beers in bars. When we cross the street it always reminds me to pay attention to the other edges, and it is also very computationally efficient. Love it!
@MMUnubi 3 months ago
next level stuff right here
@jingzhitay6736 3 years ago
Thank you for this introduction! This might be the last GNN overview that I need to watch :)
@WickedEssi 2 years ago
Great explanation. Very calm and precise. Was a pleasure to listen to.
@muhammadharris4470 3 years ago
Thanks Petar. Really love this intro to GNNs; I've been hearing about them for a while. I needed to get to know the actual graph computations and matrices in the context of ML.
@ihmond 3 years ago
Thank you for your sample code! Most of the models I found are written in PyTorch, so this Keras model can be my basic reference.
@dori8118 3 years ago
Thanks for the video. I was in love with knowledge graphs; I am trying to get back to them some day.
@danielkorzekwa 1 year ago
Great talk, excellent starting point to Graph Neural Networks. Presentation first + hands on tutorial.
@nikolayfx 3 years ago
Thanks Petar for presenting GNN
@fredquesnel1855 2 years ago
Thanks for the great tutorial! Straight to the point, easy to understand, with an exercise that is easy to follow!
@AliMohammedBakhietIssa 5 months ago
Many Thanks for your efforts :)
@Sangel67rus 2 years ago
Brilliant explanations! Thank you, Petar!
@phillibob55 2 years ago
Those getting the error at load_data(), to quote @Alex Muresan's comment: So, at the time of this comment (spektral.__version__ == 1.0.8), loading the Cora dataset would be something like this:

cora_dataset = spektral.datasets.citation.Citation(name='cora')
test_mask = cora_dataset.mask_te
train_mask = cora_dataset.mask_tr
val_mask = cora_dataset.mask_va
graph = cora_dataset.graphs[0]  # zero since it's just one graph inside; there could be multiple for other datasets
features = graph.x
adj = graph.a
labels = graph.y

Hope this is helpful!
@ayanansari4463 2 years ago
It keeps returning:

/usr/local/lib/python3.7/dist-packages/scipy/sparse/_index.py:126: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
self._set_arrayXarray(i, j, x)

Not sure if this is right?
@phillibob55 2 years ago
@@ayanansari4463 it'll give this warning, but it'll still work.
@carltonchu1 3 years ago
I just saw you on our DeepMind internal talks, then YouTube recommended this video to my personal account ?
@peterkonig9537 2 years ago
Very clear presentation. It nicely combines concepts and exercises.
@nastaranmarzban1419 3 years ago
Hi, hope you're doing well. I have a problem: when I use "spektral.datasets.citation.load_data" I receive an error, "spektral.datasets.citation has no attribute 'load_data'". Would anyone help me with this problem? Thanks 🙏
@AlexMuresan 2 years ago
So, at the time of this comment (spektral.__version__ == 1.0.8), loading the Cora dataset would be something like this:

cora_dataset = spektral.datasets.citation.Citation(name='cora')
test_mask = cora_dataset.mask_te
train_mask = cora_dataset.mask_tr
val_mask = cora_dataset.mask_va
graph = cora_dataset.graphs[0]  # zero since it's just one graph inside; there could be multiple for other datasets
features = graph.x
adj = graph.a
labels = graph.y

Hope this is helpful!
@phillibob55 2 years ago
@@AlexMuresan Thank you so much man!
@squarehead6c1 7 months ago
Great tutorial!
@sleeping4cat 1 year ago
Waiting eagerly for a custom Tensorflow Library on GNN!!
@toandaominh1997 3 years ago
Thanks for the video. You bring useful knowledge.
@mytelevisionisdead 3 years ago
Clearly explained! even more impressive given the information density of the content..!
@frankl1 3 years ago
Thanks for this intro to GNN, I enjoyed it a lot
@ExperimentalAIML 1 year ago
Good explanation
@sachinvithubone4278 3 years ago
Thanks for the video. I think GNNs could be used more in health care systems.
@masudcseku 3 years ago
Thanks Petar, very comprehensive tutorial! It will be great if you can make a tutorial on GAT ;)
@margheritamaraschini3958 1 year ago
Great presentation. If it can be useful, I may have found some small typos:
- "Towards a simple update rule": Ã = Ã + I should be Ã = A + I. Also, in one of the instances W should be transposed (Wᵀ).
- "GCN": the subscript of the sum, I think, is the other way around.
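For reference, the update rule being discussed is the standard GCN propagation H' = D̃^(-1/2) Ã D̃^(-1/2) H W with Ã = A + I (Kipf & Welling's formulation). A minimal NumPy sketch, with all variable names illustrative:

```python
import numpy as np

# GCN layer: H' = D̃^{-1/2} Ã D̃^{-1/2} H W, where Ã = A + I adds self-loops.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # toy 3-node path graph
A_tilde = A + np.eye(3)                  # Ã = A + I (note: A, not Ã, on the right)
deg = A_tilde.sum(axis=1)                # degrees of Ã
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg)) # D̃^{-1/2}

rng = np.random.default_rng(0)
H = rng.random((3, 4))                   # node features
W = rng.random((4, 2))                   # learned weight matrix

H_next = D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W  # propagated features
```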
@apaarsadhwani 1 year ago
Thanks, that was useful!
@fahemhamou6170 2 years ago
My sincere greetings, thank you
@randerson1184 3 years ago
I'm going to get a TON of use out of these! Thanks!
@LouisChiaki 3 years ago
Glad that Google improved the ETA of my home city, Taichung! The traffic there was really bad and it must be really difficult for the model 😂.
@MMUnubi 3 months ago
lol
@MrWater2 3 months ago
Incredibly good!!!
@twitteranalyticsbyad3969 3 years ago
Changing Cake to Pie, nice move :D You can only understand this if you have seen Jure Leskovec's lectures.
@cetrusbr 2 years ago
Fantastic Lecture! Thanks Petar, congrats for the amazing job!
@39srini 3 years ago
Very good, useful video.
@ernestocontreras-torres9188 2 years ago
Great material!
@jtrtsay 3 years ago
Love from Taichung city, Taiwan 🇹🇼
@ScriptureFirst 3 years ago
A lovely city in an island nation 🇹🇼
@giorgigona 2 years ago
Where can I see the presentation slides?
@ThanhPham-xz2yo 2 years ago
thanks for sharing!
@pushkinarora5800 1 year ago
It's a binge watch!! Epic!!
@phaZZi6461 3 years ago
thanks a lot!
@slkslk7841 2 years ago
What are Inductive problems?
@rahulseetharaman4525 2 years ago
Sir, could you please explain the part where the mask is divided by the mean?
@vasylcf 3 years ago
Thanks!
@dennisash7221 3 years ago
I am trying to follow the example but I get the following error:

AttributeError: module 'spektral.datasets.citation' has no attribute 'load_data'

Anyone know why this is happening? I can only see load_binary in the attributes list.
@sanketjoshi8387 3 years ago
Did you fix the issue?
@dennisash7221 3 years ago
@@sanketjoshi8387 I have not found out what the issue is. It might be something to do with some upgrades to Python, NumPy or Spektral ... I am hoping someone can help.
@satyabansahoo1862 3 years ago
@@dennisash7221 check the version of Spektral; he is using 0.6.2, so try using that.
@DanielBoyles 3 years ago
# This should do it in Spektral version 1.0.6.
# I've used the same variable names, but haven't gone through the rest of the colab tutorial as yet.

from spektral.datasets.citation import Cora

dataset = Cora()
graph = dataset[0]
adj, features, labels = graph.a, graph.x, graph.y
train_mask, val_mask, test_mask = dataset.mask_tr, dataset.mask_va, dataset.mask_te
@dennisash7221 3 years ago
@@DanielBoyles awesome, it seems to work. I will try to run the rest of the notebook later, but it looks like this did the trick.
@turalsadik81 2 years ago
Where can I find notebook of the colab exercise?
@turalsadik81 1 year ago
anybody?
@timfaverjon3597 2 years ago
Hi, thank you for the video. Can I find the Colab somewhere?
@werewolf_13 3 years ago
Hey, insightful lesson! Can anyone give me an idea of how to prepare a dataset for a GNN? Especially for recommendation systems.
@_Intake__Gourab 2 years ago
Hello, I am doing image classification using a GCN, but I failed to understand how to use image data in a GCN model. I need some help!
@phillibob55 2 years ago
Is anyone else getting accuracies higher than 1? (I know something's wrong but I can't figure it out)
@sunaryaseo 2 years ago
A nice tutorial, now I am thinking about how to implement GNN for signal processing such as classification/prediction problems. How do I design the graph, nodes, and edges?
@RAZZKIRAN 3 years ago
Can we apply this to text classification problems like sentiment analysis and online hate classification?
@thefastreviewer 1 year ago
Is it possible to share the Colab file as well?
@halilibrahimakgun7569 1 year ago
Can you share the Colab notebook?
@张凌峰-c2j 2 years ago
Could I ask why the mask should be divided by the mean? Thanks.
@AvinashRanganath 6 months ago
I think it is to prevent the model from overfitting to nodes with a larger number of edges.
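A small sketch of what the division does arithmetically (an assumption about the notebook's masked-loss setup; all names here are illustrative): dividing a 0/1 mask by its mean rescales it so that averaging the masked loss over ALL nodes equals averaging over the labeled nodes only.

```python
import numpy as np

per_node_loss = np.array([2.0, 4.0, 6.0, 8.0])
mask = np.array([1.0, 0.0, 1.0, 0.0])    # only nodes 0 and 2 are labeled

scaled_mask = mask / mask.mean()         # mean(mask) = 0.5, so mask becomes [2, 0, 2, 0]
masked_mean = (per_node_loss * scaled_mask).mean()

# Direct average over the labeled nodes only, for comparison:
direct_mean = per_node_loss[mask == 1].mean()  # (2 + 6) / 2 = 4.0
```

Both quantities come out to 4.0 here, which is why the division lets a plain mean over all nodes act like a mean over just the training nodes.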
@quickpresent8987 2 years ago
Has anyone written the Colab code following this video? I just get an error at the 'matmul'.
@taruneswar9036 3 years ago
🙏🙏
@wibulord926 2 years ago
Your source code, please?
@asedaradioshowpodcast 2 years ago
27:35
@jackholloway7516 3 years ago
1st
@rogiervdw 2 years ago
Marvellous explanation, thank you. Typo at 17:47: sum over j ∈ N_i?
@Amapramaadhy 3 years ago
Really great content and presentation. The analogy between convolutional NN and GNN is one of the best I have heard. Petar should do more lectures
@philtoa334 1 year ago
Nice.
@desrucca 1 year ago
Total = 2708 nodes
Train = 140 nodes
Valid = 500 nodes
Test = 1000 nodes
Where did the remaining 1068 nodes go?
@petarvelickovic6033 1 year ago
They're still there -- their labels are just not assumed used for anything (training or eval) in this particular node split.
@cia05rf 2 years ago
Great video. It doesn't work with Spektral 1.2.0; to save downgrading, this can be used:
```
cora = spektral.datasets.citation.Cora()
train_mask = cora.mask_tr
val_mask = cora.mask_va
test_mask = cora.mask_te
graph = cora.read()[0]
adj = cora.a
features = graph.x
labels = graph.y
```
@muhannadobeidat 1 year ago
Thanks for posting this. It's a time saver!
@NoNTr1v1aL 1 year ago
Absolutely amazing video!
@stephanembatchou5300 2 years ago
Excellent content. Thank You!
@DefendIntelligence 3 years ago
Thank you it was really interesting
@SirajFlorida 2 months ago
It's taken me a while to discover your lectures, but I can't thank you enough for creating and posting them. Thank you.
@vibrationalmodes2729 2 years ago
Strong last name dude (just started video, was my first impression 😂)
@nabeelhasan6593 3 years ago
This is a very good series
@bdegraf 1 year ago
Is there a link to the Colab code? I see references to it but I'm not finding it.
@ghensao4027 2 years ago
Typo at 17:35: should iterate j over the neighborhood of node i, N_i.
@iva1389 2 years ago
inferring soft adjacency -- what does that even mean?
@miladto 2 years ago
Thank you for this great Presentation. Can you please share the Colab?
@BrendanW-c9l 1 year ago
Please correct me if I'm wrong, Petar, but in the tutorial, it looks like during training we are including the full graph (including test nodes) in the node-pooling step? This looks like information leakage--is there some reason I'm missing why it's considered allowed here?
@petarvelickovic6033 1 year ago
This is correct, and it is only allowed under the "transductive" learning regime. In this regime, you're given a static graph, and you need to 'spread labels' to all other nodes. Conversely, in 'inductive' learning you are not allowed access to test nodes at training time. Naturally, the transductive regime is much easier, as you can use a lot of methods that exploit the properties of the graph structure provided. In inductive learning, instead, your method needs to in principle be capable of generalising to arbitrary, unseen, structures at test time.
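A toy sketch of the transductive setup described above (all names and numbers hypothetical): the forward pass runs over the whole graph, test nodes included, but only the training mask ever enters the loss.

```python
import numpy as np

def log_softmax(x):
    # numerically stable log-softmax over the last axis
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

rng = np.random.default_rng(0)
N, F, C = 6, 4, 3
adj = np.eye(N) + np.diag(np.ones(N - 1), 1)  # toy adjacency with self-loops
feats = rng.random((N, F))
W = rng.random((F, C))

logits = adj @ feats @ W                       # propagation sees every node

train_mask = np.array([1, 1, 0, 0, 0, 0], dtype=bool)
labels = np.eye(C)[[0, 1, 2, 0, 1, 2]]         # one-hot labels for all nodes

# Only training nodes contribute to the loss; test nodes influence the
# output purely through the graph structure, never through their labels.
loss = -np.mean((labels[train_mask] * log_softmax(logits[train_mask])).sum(axis=-1))
```

In the inductive regime, by contrast, the rows and columns of `adj` belonging to test nodes would not be available at training time at all.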
@brunoalvisio 2 years ago
Thank you for the great intro! Quick question: in the equation for the GCN, is the bias being omitted just for clarity?
@Max-eo6vx 2 years ago
Thank you Peter. Would you share the code or notebook?
@l.g.7694 2 years ago
Really nice presentation! A question regarding the colab: Anyone else having the problem that the validation accuracy stays at around 13%?
@l.g.7694 2 years ago
This ... is unfortunate. I made a typo (mask = tf.reduce_mean(mask) instead of mask /= tf.reduce_mean(mask)) which I literally noticed after hitting send. Now it works.
@payamkhorramshahi5726 2 years ago
Very transparent tutorial ! Thank you
@jimlbeaver 3 years ago
Thanks...great stuff. I really appreciate you taking a slow and deliberate approach to this.
@mohajeramir 3 years ago
This is so awesome. Excellent presenter
@michielim 2 years ago
This was so so useful - thank you!
@mohammadforutan955 1 year ago
very useful
@cybervigilante 3 years ago
Consider graphs on our level - and even people are graphs. They exist only as nodes in a higher level network. But the edges of the higher level do not connect directly to any node in the lower level graph, otherwise you just have a lower level graph. The edges exert a Bias. Biases are common in nature - hormone biases, electrical biases, thermal biases, etc. However, there is a counter-bias feedback from the lower level graph, which can be any organism or complex structure, which can cause some higher level edges to either disconnect or connect in a benign or malign fashion, changing the bias. We provide the feedback. This explains very many things.
@innovationscode9909 3 years ago
Thanks. Great stuff. I really LOVE ML
@phillibob55 2 years ago
If anyone gets the "TypeError: sparse matrix length is ambiguous; use getnnz() or shape[0]" error at the matmul, use adj.todense() while calling the train_cora() method.
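For context, a minimal reproduction of that TypeError (an assumption about where it comes from: calling `len()` on a SciPy sparse matrix), along with the densifying workaround the comment suggests:

```python
import numpy as np
import scipy.sparse as sp

adj = sp.csr_matrix(np.eye(3))   # toy sparse adjacency matrix

# len() on a SciPy sparse matrix raises the quoted TypeError:
try:
    len(adj)
except TypeError as err:
    caught = str(err)            # "sparse matrix length is ambiguous; use getnnz() or shape[0]"

# Densifying first sidesteps it, at the cost of memory on large graphs:
dense_adj = np.asarray(adj.todense())
n_nodes = len(dense_adj)         # works on the dense array
```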
@oladipupoadekoya1559 2 years ago
Hello sir, please can I have your email? I need you to explain how to represent my optimisation problem as a GNN.