Intro to Graphs and Label Propagation Algorithm in Machine Learning

41,899 views

WelcomeAIOverlords

1 day ago

Comments: 30
@DavenH 4 years ago
Oh I'm glad I found this channel. GNNs are of particular interest to me... I think there's so much potential for neural code generation.
@NoNTr1v1aL 2 years ago
Absolutely amazing playlist! Subscribed.
@chongtang6908 3 years ago
I'm so lucky I found this channel. Thank you!!!
@KhalilMuhammad 4 years ago
Amazingly clear explanation. Thanks a lot!
@amandhaliwal3499 3 years ago
So glad I found your channel
@chinmayrath8494 1 year ago
What an amazing educator. Thank you very much !!
@CerebroneAI 4 years ago
Great visualisations , thank you for the meticulous content.
@snowjordan6822 4 years ago
Keep doing videos like this.
@jtetrfs5367 3 years ago
What was the intro music, just before the main content starts at 1:30?
@welcomeaioverlords 3 years ago
Elder - Legend: kzbin.info/www/bejne/gIG1f4uknLKNpdE
@YangJackWu 4 years ago
Thank you for doing this
@akritiupreti6974 3 years ago
This makes the picture really clear. Thanks a lot! Could you also point me to any good resources on how to readily use such a technique for very large graphs, in terms of the tech stack and packages that can be used to implement it?
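For a quick start in Python, one option is scikit-learn's `sklearn.semi_supervised.LabelPropagation`, which marks unlabeled points with `-1`. (For graphs too large for dense kernels, a sparse SciPy implementation of the update rule is the usual route.) A small sketch on synthetic data; the clusters and parameters here are illustrative, not from the video:

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Two well-separated 2-D clusters, with one labeled seed point in each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)),   # cluster near (0, 0)
               rng.normal(3, 0.5, (20, 2))])  # cluster near (3, 3)
y = np.full(40, -1)                           # -1 marks unlabeled nodes
y[0], y[20] = 0, 1                            # one seed label per cluster

model = LabelPropagation(kernel="rbf", gamma=2.0)
model.fit(X, y)
print(model.transduction_)                    # inferred labels for all points
```

The RBF kernel builds the graph implicitly from pairwise distances; for an explicit graph you would plug your own (normalized) adjacency into the iterative update instead.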
@ondrejkuchta1785 4 years ago
Thanks a lot!! Very helpful.
@khim2970 1 year ago
Hello, I appreciate the effort you put into making this great series of videos. Could you clarify one thing for me: the matrix S at 6:41 is the adjacency matrix, right?
@nid8490 1 year ago
Yes it is
@devanshpurwar 1 year ago
nice explanation
@swakshardeb7908 4 years ago
If we keep Y constant throughout this iterative process and initialize the unlabeled nodes with random numbers, what sense does that make? Aren't we propagating wrong labels through the network from the unlabeled nodes? Why aren't we changing Y as the network keeps updating?
@welcomeaioverlords 4 years ago
I struggled with this idea too, Swakshar. But the alternative is to update Y instantly, which then causes your labeled nodes to have their real labels overwritten by their unlabeled neighbors. This leads to quick convergence to a trivial solution. I think of Y as continually pumping energy into the system as we wait for it to spread and reach a steady state. The hope is that by using something uninformative as the value for unlabeled nodes, over time it will be dominated by true signal. And keep in mind that you can tune the contribution of this term with alpha, so you can always evaluate these trade-offs.
@swakshardeb7908 4 years ago
@@welcomeaioverlords Thanks for the clarification. But why not change only the values of the unlabeled nodes, using the previous predictions in Y, and keep the true labels unchanged throughout the iterative process? That way we are not overwriting the ground truth and are only updating the unlabeled node information.
@秦默雷 3 years ago
I have a question about this as well. My thought is: why don't we replace Y with f(t)? Then it would become something like the learning process in a regular ML setup.
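The update discussed in this thread, f(t+1) = αSf(t) + (1−α)Y, can be sketched in a few lines of NumPy. This is an illustrative reconstruction, assuming S is the symmetrically normalized adjacency matrix and Y holds one-hot seed labels with all-zero rows for unlabeled nodes; the toy graph and parameters are made up for the example:

```python
import numpy as np

# Toy path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Symmetrically normalized adjacency S = D^{-1/2} A D^{-1/2}.
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
S = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Seed labels: node 0 is class 0, node 3 is class 1; rows 1-2 unlabeled (all zeros).
Y = np.array([[1, 0],
              [0, 0],
              [0, 0],
              [0, 1]], dtype=float)

alpha, F = 0.8, Y.copy()
for _ in range(100):                   # iterate F <- alpha*S*F + (1-alpha)*Y
    F = alpha * S @ F + (1 - alpha) * Y

labels = F.argmax(axis=1)
print(labels)                          # node 1 follows node 0, node 2 follows node 3
```

Note that Y is re-injected at every step rather than overwritten, which is exactly the point made in the reply above: the labeled nodes keep pumping their true signal into the system while the unlabeled rows stay uninformative (zero).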
@artem_isakow 1 year ago
Thanks a lot!
@haroldsu1696 4 years ago
Thank you Sir
@kimminuk6042 3 years ago
Is this what's called "Loopy Belief Propagation"?
@PnutJpg 2 years ago
Perfect
@JamesSmith-dy5vu 4 years ago
Thanks, I'm excited for the series! I've seen something similar for label propagation done with a personalized PageRank algorithm. Do you know if there are many differences between the two?
@welcomeaioverlords 4 years ago
PageRank is conceptually similar: you're taking the state of a node in a graph, and sending a signal through its connections to update the state of its neighbors. And this happens iteratively until things converge. But there are also some differences. PageRank is sort of unsupervised in that there isn't a ground truth label to send, but rather it's been modeled as "the probability a random internet surfer will arrive at the page". Implementations are almost identical if you replace LP's ground truth matrix, Y, with an uninformative constant matrix of 1/N. Again, this is because PR doesn't have ground truth labels. Then, PR's damping factor d is similar to LP's alpha parameter, in that it is the relative weight between the neighbor updates and the starting signal. I hope this helps.
@JamesSmith-dy5vu 4 years ago
@@welcomeaioverlords Yes, thank you!
@chongtang6908 3 years ago
Thanks James Smith too. Also, you are so handsome, god~
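The correspondence described in the reply above is easy to see in code: swapping the label matrix Y for a uniform 1/N teleport vector turns essentially the same iteration into PageRank, with the damping factor d playing the role of alpha. A minimal sketch (the toy graph and damping value are illustrative):

```python
import numpy as np

# Same toy path graph 0-1-2-3; column-normalize A into a transition matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
M = A / A.sum(axis=0, keepdims=True)   # column-stochastic random-walk matrix

N, d = 4, 0.85                         # d plays the role of LP's alpha
teleport = np.full(N, 1.0 / N)         # uninformative "labels": constant 1/N
r = teleport.copy()
for _ in range(100):                   # iterate r <- d*M*r + (1-d)*(1/N)
    r = d * M @ r + (1 - d) * teleport

print(r.round(3))                      # higher rank for the two inner nodes
```

Structurally this is the label-propagation loop with ground truth replaced by the uniform vector, which is exactly the point about PageRank having no ground-truth labels to send.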
@NadavBenedek 1 year ago
Good audio quality
@henrygengiti7861 6 months ago
can you please clarify sigma one more time?