This makes the picture really clear. Thanks a lot! Could you also point me to any good resources on how to apply this technique to very large graphs, in terms of the tech stack and packages that can be used to implement it?
@ondrejkuchta1785 · 4 years ago
Thanks a lot!! Very helpful.
@khim2970 · 1 year ago
Hello, I appreciate your effort in making this great series of videos. Could you clarify one thing for me: the matrix S at 6:41 is the adjacency matrix, right?
@nid8490 · 1 year ago
Yes it is
@devanshpurwar · 1 year ago
Nice explanation.
@swakshardeb7908 · 4 years ago
If we keep Y constant throughout this iterative process and initialize the unlabeled nodes with random numbers, what sense does that make? Aren't we propagating wrong labels through the network for the unlabeled nodes? Why don't we change Y as the network keeps updating?
@welcomeaioverlords · 4 years ago
I struggled with this idea too, Swakshar. But the alternative is to update Y instantly, which then causes your labeled nodes to have their real labels overwritten by their unlabeled neighbors. This leads to quick convergence to a trivial solution. I think of Y as continually pumping energy into the system as we wait for it to spread and reach a steady state. The hope is that, by using something uninformative as the value for the unlabeled nodes, over time it will be dominated by true signal. And keep in mind that you can tune the contribution of this term with alpha, so you can always evaluate these trade-offs.
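The update being described, where Y keeps getting mixed back in at weight (1 − alpha), can be sketched in a few lines of numpy. The graph, labels, and alpha value below are invented for illustration, assuming S is the symmetrically normalized adjacency matrix:

```python
import numpy as np

# Hypothetical 4-node path graph 0-1-2-3; nodes 0 and 3 are labeled
# (classes 0 and 1), nodes 1 and 2 are unlabeled.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Symmetrically normalized adjacency: S = D^{-1/2} A D^{-1/2}
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
S = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Y: one-hot rows for labeled nodes, uninformative zeros for the rest.
Y = np.array([[1, 0],
              [0, 0],
              [0, 0],
              [0, 1]], dtype=float)

alpha = 0.8
F = Y.copy()
for _ in range(100):  # iterate to (approximate) convergence
    # neighbor signal, plus Y "pumping energy" back in at weight (1 - alpha)
    F = alpha * (S @ F) + (1 - alpha) * Y

labels = F.argmax(axis=1)  # predicted class per node
```

Because Y re-enters every step, the labeled endpoints keep their classes and each unlabeled node settles toward whichever labeled node it is closer to.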
@swakshardeb7908 · 4 years ago
@@welcomeaioverlords Thanks for the clarification. But why not change only the values of the unlabeled nodes, using the previous prediction in Y, and keep the true labels unchanged throughout the iterative process? That way we are not overwriting the ground truth, only updating the unlabeled nodes' information.
@秦默雷 · 3 years ago
I have a question about this as well. My thought is: why don't we replace Y with f(t)? It would then become more like a learning process in a regular ML setup.
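For what it's worth, the variant being suggested in this thread, where only the unlabeled rows are updated and the labeled rows are clamped back to their ground truth each step, is itself a standard form of label propagation. A minimal numpy sketch, using a made-up 4-node path graph with a row-normalized transition matrix:

```python
import numpy as np

# Same hypothetical path graph 0-1-2-3; nodes 0 and 3 are labeled.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)  # row-normalized transition matrix

Y = np.array([[1, 0],
              [0, 0],
              [0, 0],
              [0, 1]], dtype=float)
labeled = np.array([True, False, False, True])

F = Y.copy()
for _ in range(100):
    F = P @ F                # propagate labels from neighbors
    F[labeled] = Y[labeled]  # clamp: restore ground-truth rows every step

labels = F.argmax(axis=1)
```

With hard clamping there is no alpha to tune: the labeled rows can never be overwritten, and the unlabeled rows converge to a weighted average of the labels reachable from them.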
@artem_isakow · 1 year ago
Thanks a lot!
@haroldsu1696 · 4 years ago
Thank you, sir.
@kimminuk6042 · 3 years ago
Is this what's called "Loopy Belief Propagation"?
@PnutJpg · 2 years ago
Perfect!
@JamesSmith-dy5vu · 4 years ago
Thanks, I'm excited for the series! I've seen something similar done for label propagation with a personalised PageRank algorithm. Do you know if there are many differences between the two?
@welcomeaioverlords · 4 years ago
PageRank is conceptually similar: you're taking the state of a node in a graph and sending a signal through its connections to update the states of its neighbors. And this happens iteratively until things converge. But there are also some differences. PageRank is sort of unsupervised in that there isn't a ground-truth label to send; rather, it's been modeled as "the probability a random internet surfer will arrive at the page". The implementations are almost identical if you replace LP's ground-truth matrix, Y, with an uninformative constant matrix of 1/N. Again, this is because PR doesn't have ground-truth labels. Then, PR's damping factor d is similar to LP's alpha parameter, in that it sets the relative weight between the neighbor updates and the starting signal. I hope this helps.
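To make the correspondence concrete, here is a sketch of PageRank's power iteration on an invented 4-page link graph. The uniform 1/N teleport term plays the role of LP's Y matrix, and the damping factor d plays the role of alpha:

```python
import numpy as np

# Hypothetical link graph; A[i, j] = 1 means page i links to page j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
N = A.shape[0]
P = A / A.sum(axis=1, keepdims=True)  # row-stochastic transition matrix

d = 0.85                   # damping factor, analogous to LP's alpha
pr = np.full(N, 1.0 / N)   # uninformative start, analogous to a constant Y
for _ in range(100):
    # neighbor update (like S @ F) mixed with the constant teleport term
    pr = d * (P.T @ pr) + (1 - d) / N
```

The scores stay a probability distribution (they sum to 1), and the page with the most incoming links accumulates the highest score.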
@JamesSmith-dy5vu · 4 years ago
@@welcomeaioverlords Yes, thank you!
@chongtang6908 · 3 years ago
Thanks to James Smith too. Also, you are so handsome, god~