Graph Node Embedding Algorithms (Stanford - Fall 2019)

67,851 views

Machine Learning TV

A day ago

In this video, Jure Leskovec explains a group of recent node embedding algorithms, including Word2vec, DeepWalk, NBNE, random walks, and GraphSAGE. Amazing class!

Comments: 42
@sasankv9919 4 years ago
Watched it for the third time and now everything makes sense.
@i2005year 3 years ago
15:30 Basics of deep learning for graphs
51:00 Graph Convolutional Networks
1:02:07 Graph Attention Networks (GAT)
1:13:57 Practical tips and demos
@sm_xiii 4 years ago
Prof. Leskovec covered a lot of material in 1.5 hours! It was very engaging because of his energy and teaching style.
@ernesttaf 4 years ago
Great, Sir. Congratulations on your outstanding teaching abilities. It really changed my life and my view on graph networks. Thank you very much, Professor.
@jayantpriyadarshi9266 4 years ago
Thank you for this lecture. Really changed my view about GCNs
@sanjaygalami 3 years ago
What's the major point that struck you? Let others know, if it's convenient for you. Thanks
@znb5873 3 years ago
Thank you so much for making this lecture publicly available. I have a question: is it possible to apply node embedding to dynamic (temporal) graphs? Are there any specific methods/algorithms to follow? Thanks in advance for your answer.
@gautamrajit225 4 years ago
Hello. These lectures are very interesting. Would it be possible to share the GitHub repositories so that I can get a better understanding of the code involved in the implementation of these concepts?
@Olivia-wu4ve 4 years ago
Awesome! Thanks for sharing. Will the hands-on session be posted?
@TheAnna1101 4 years ago
Awesome video. Please share more on this topic!
@Commonsenseisrare A year ago
Amazing lecture on GNNs.
@MingshanJia 4 years ago
Wanna learn the whole series...
@wwemara 4 years ago
kzbin.info/aero/PL-Y8zK4dwCrQyASidb2mjj_itW2-YYx6-
@fredconcklin1094 2 years ago
Classes are so fun. The death here is different than the death in Computer Vision due to NSA death.
@MrSajjadathar 4 years ago
@Machine Learning TV Yes, and please share the link where you posted all the graph representation learning lectures. I will be thankful.
@eyupunlu2944 4 years ago
I think it is this one: kzbin.info/www/bejne/j6PLc42Lqcx6aqc
@EOh-ew2qf 2 years ago
43:40 I have a question about the slide here. How can the model generalize to a new node when it learns by aggregating neighborhoods and the new node doesn't have a neighborhood yet?
@vgreddysaragada A year ago
Great work.
@alvin5424 4 years ago
Any plans to publish lectures 17, 18 and 19?
@MachineLearningTV 4 years ago
Yep! Soon we will upload new lectures!
@eugeniomarinelli1104 3 years ago
Where do I find the slides for this lecture?
@kanishkmair2920 4 years ago
In GCN, we get a single output. In GraphSAGE, you concatenate to keep the info separate. So at each step, the output H^k will have 2 outputs, won't it? If not, how are they aggregated and still kept separate?
@paulojhonny4364 4 years ago
Kanishk Mair hi, I didn’t understand either. Did you find anything about it?
@kanishkmair2920 4 years ago
I tried to work with it in PyTorch Geometric (SAGEConv). Not sure how it works, but looking at its source code might help.
@sm_xiii 4 years ago
I think the concatenated output is the embedding of the target node. And it depends on the downstream task to further process it, by passing it through more layers, before having the final output.
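For anyone puzzling over the concatenation question in the thread above, here is a minimal numpy sketch of a GraphSAGE-style layer. The sizes, graph, and random weights are illustrative assumptions, not the lecture's code: the point is that the self embedding and the aggregated neighbor embedding are concatenated and then mixed by a single weight matrix, so each layer still emits one vector per node.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions for illustration): 4 nodes, 3-dim inputs, 5-dim outputs.
H = rng.normal(size=(4, 3))                # h_v^(k-1) for every node v
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
W = rng.normal(size=(2 * 3, 5))            # one matrix acting on [self ; aggregated]

def sage_layer(H, neighbors, W):
    """One GraphSAGE-style step: mean-aggregate neighbors, concatenate with self."""
    out = np.zeros((H.shape[0], W.shape[1]))
    for v, nbrs in neighbors.items():
        agg = H[nbrs].mean(axis=0)             # AGG over neighbor embeddings
        concat = np.concatenate([H[v], agg])   # self and neighbor info kept separate here
        out[v] = np.maximum(concat @ W, 0.0)   # ReLU; one vector per node comes out
    return out

H_next = sage_layer(H, neighbors, W)
print(H_next.shape)  # (4, 5)
```

So H^k is not "2 outputs": the concatenation exists only inside the layer, before the weight matrix mixes it into a single embedding.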
@ShobhitSharmaMTAI 3 years ago
My question at 31:00: what if the previous layer's embedding of the same node is not multiplied by Bk (the Bk·hv^(k-1) term)? What would be the impact on the embedding?
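To make the question above concrete, here is a small numpy sketch of a GCN-style update with and without the self term. The graph and weights are made-up toy values, not the lecture's material; it only illustrates which term the question is about.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 3))                 # previous-layer embeddings h_v^(k-1)
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
W = rng.normal(size=(3, 3))                 # W_k: weights for the neighbor average
B = rng.normal(size=(3, 3))                 # B_k: weights for the node's own embedding

def gcn_step(H, neighbors, W, B, use_self=True):
    out = np.zeros_like(H)
    for v, nbrs in neighbors.items():
        msg = H[nbrs].mean(axis=0) @ W                  # (1/|N(v)|) sum_u h_u W_k
        self_term = H[v] @ B if use_self else 0.0       # the B_k h_v^(k-1) term
        out[v] = np.tanh(msg + self_term)
    return out

with_self = gcn_step(H, neighbors, W, B, use_self=True)
without_self = gcn_step(H, neighbors, W, B, use_self=False)
# Without B_k h_v^(k-1), a node's own state reaches its next embedding only
# indirectly, through neighbors that include it in their averages.
```

Dropping the term does not break the update, but the node loses the direct "skip" path from its own previous embedding and keeps only the signal that round-trips through its neighbors.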
@ramin5665 2 years ago
Can you share the hands on link?
@baharehnajafi9568 4 years ago
Hi, where can I find his next lectures?
@MachineLearningTV 4 years ago
We will upload them soon
@wwemara 4 years ago
kzbin.info/aero/PL-Y8zK4dwCrQyASidb2mjj_itW2-YYx6-
@AdityaPatilR 3 years ago
Deeper networks will not always be more powerful, as you may lose vector features in translation. And due to the additional weight matrices, the neural network will be desensitized to the feature input. The number of hidden layers should not be greater than the input dimension.
@MrSajjadathar 4 years ago
Sir, can you please share Tuesday's lecture?
@MachineLearningTV 4 years ago
The past Tuesday?
@MrSajjadathar 4 years ago
@@MachineLearningTV Yes, and please share the link where you posted all the graph representation learning lectures. I will be thankful.
@MachineLearningTV 4 years ago
It is available now. Check the new video
@deweihu1003 3 years ago
On behalf of people from a remote eastern country: niubi!!!!
@phillipneal8194 4 years ago
How do you aggregate dissimilar features, for example sex, temperature, and education level for each node?
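One common answer to the question above (a standard preprocessing approach, not something prescribed in this lecture): encode each categorical attribute as a one-hot vector, standardize the numeric ones, and concatenate everything into one fixed-length feature vector per node, which the GNN can then aggregate like any other. The attribute values below are invented for illustration.

```python
import numpy as np

# Hypothetical raw attributes per node: (sex, temperature, education level).
raw = [("F", 36.6, "PhD"), ("M", 37.1, "BSc"), ("F", 36.9, "MSc")]
sex_vals = ["F", "M"]
edu_vals = ["BSc", "MSc", "PhD"]

def encode(sex, temp, edu):
    one_hot_sex = [1.0 if sex == s else 0.0 for s in sex_vals]
    one_hot_edu = [1.0 if edu == e else 0.0 for e in edu_vals]
    return np.array(one_hot_sex + [temp] + one_hot_edu)

X = np.stack([encode(*r) for r in raw])
# Standardize the numeric column (index 2) so its scale doesn't swamp the 0/1 entries.
X[:, 2] = (X[:, 2] - X[:, 2].mean()) / X[:, 2].std()
print(X.shape)  # (3, 6)
```

Once every node has the same-length numeric vector, mean or pooling aggregators operate on it directly.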
@이혜경-y8x 4 years ago
Where can I get slides?
@ducpham9991 4 years ago
You can find it here: web.stanford.edu/class/cs224w/
@kognitiva 3 years ago
kzbin.info/www/bejne/bXuofYtsec6IrrM "what we would like to do is here input the graph and over here good predictions will come" Yes, that is exactly it! xD
@jcorona4755 A year ago
They pay so that it looks like they have more followers. In fact, you pay 10 pesos per video.