NetSci 04-2 Eigenvector Centrality

15,835 views

Andrew Beveridge

Comments: 22
@nilslohrberg3877 3 years ago
Great video! This is the only source I could find that finally explained to me the connection between the interpretation of eigenvector centrality and its definition via the eigenvector equation. Thank you so much!
@andrewbeveridge7476 3 years ago
Thanks! I am glad that you found it useful.
@rbg9854 2 years ago
Thank you so much for the detailed explanation! I had been looking for this kind of video for a long time, but most videos only give a light touch on the maths side. This is a much more comprehensive one!
@debasismondal3619 3 years ago
This video is great but underrated. For anyone wanting to understand the concepts of eigenvector and Katz centrality, I have hardly come across any videos that explain them in so much detail.
@andrewbeveridge7476 3 years ago
I have a follow-up video on Katz centrality, which helps to make that connection and shows how both relate to PageRank. kzbin.info/www/bejne/b6m7ZY1riMuebck
@kyxas 3 years ago
Hi Andrew, I tried solving your example and got a different eigenvector. I guess it's the adjacency matrix A: the fifth row should be (0, 0, 0, 1, 0, 1) and not (1, 0, 0, 1, 0, 1).
@andrewbeveridge7476 3 years ago
Yes, you are correct. My matrix has a typo. Thanks for pointing that out.
@PowerYAuthority 1 year ago
Came here to say this. I was running these calculations with numpy and it wasn't matching.
@brownchocohuman6595 1 year ago
I have a question. Instead of recursively updating the values, why don't we straight up calculate the normalised eigenvector and get the centrality value of each node from there?
@andrewbeveridge7476 11 months ago
You are correct! In practice, we just find the eigenvector for the largest eigenvalue of the matrix A. This is also called the "dominant" eigenvalue of A. My goal at 4:30 was to give an example of the convergence of this recursive update rule. After that, the video uses the spectral decomposition theorem to show that we are converging to the eigenvector for the largest eigenvalue.
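For anyone reading along, here is a minimal numpy sketch of that recursive update rule (power iteration). The graph is a hypothetical 4-node path, not the matrix from the video:

```python
import numpy as np

def eigenvector_centrality(A, iters=100):
    """Power iteration: repeatedly multiply by A, rescaling each step."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x = x / x.max()  # rescale so the largest entry is 1
    return x

# Hypothetical example: a 4-node path graph 1-2-3-4
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

x = eigenvector_centrality(A)
# The result matches the dominant eigenvector of A up to scaling;
# the two interior nodes score highest: approximately [0.618, 1, 1, 0.618].
print(np.round(x, 3))
```

The same vector (up to scaling) comes out of `np.linalg.eig(A)` by taking the eigenvector for the largest eigenvalue, which is the direct calculation described above.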
@souravdey1227 3 years ago
Seriously good explanation in such a short time. The only problem is that the volume suddenly dipped so low that I could barely hear.
@andrewbeveridge7476 3 years ago
You are right: the audio does dip starting around 9:00. Sorry about that. It looks like I need to upload a new version (with a new URL) to fix it. Thanks for the feedback and I'm glad the video was helpful.
@Kane9530 10 months ago
Hi Andrew, for directed graphs, since the matrix is no longer symmetric, the eigenvectors are no longer necessarily orthonormal, and so the spectral decomposition theorem wouldn't apply. In this case, how do we prove that the eigenvector centrality calculation works?
@andrewbeveridge7476 8 months ago
Directed networks do get tricky for a few different reasons. When the directed network is strongly connected and not k-partite (so the adjacency matrix is primitive and irreducible), the eigenvector for the dominant eigenvalue will still determine the limiting values of the vector produced by repeatedly multiplying by A^T. This is the same idea that we use in the "power method" to find the dominant eigenvalue and eigenvector. I recommend reading up on that method.
@Keyakina 8 months ago
How did you normalize and get 0.32 instead of zero at 5:50?
@andrewbeveridge7476 8 months ago
I divided each entry by 52, which is the value of the largest entry. So the first entry becomes 17/52 ≈ 0.327. My goal here was to make it easier for a human to compare the entries. So the first entry is about 32.7% of the largest entry. In hindsight, I shouldn't have called this "normalizing," since the resulting vector doesn't have length 1. "Rescaling" would be a better term. Thanks for the comment.
@eliesjj2207 11 months ago
This is awesome!
@avnimishra6874 1 year ago
How did you get the normalized value? Please explain (time 5:54).
@andrewbeveridge7476 11 months ago
We divide by the largest entry so that the largest value is 1. This makes it easy for a human to compare the relative sizes. In this case, [17, 38, 37, 52, 39, 47] becomes [17/52, 38/52, 37/52, 1, 39/52, 47/52] = [0.32, 0.73, 0.71, 1, 0.75, 0.90].
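For anyone who wants to check this themselves, the rescaling is one line in numpy (the vector below is copied from the video; note that rounding 17/52 to two decimals gives 0.33 rather than the truncated 0.32):

```python
import numpy as np

x = np.array([17, 38, 37, 52, 39, 47], dtype=float)
rescaled = x / x.max()  # divide every entry by the largest entry, 52
print(np.round(rescaled, 2))
```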
@zhichen2288 2 years ago
excellent, thank you!
@hernanepereira50 3 years ago
Hi Andrew. How are you? Shouldn't a_{51} = 0?
@andrewbeveridge7476 3 years ago
Yes, you are right! Thank you for the correction to the matrix A starting at 6:50. So the fifth entry of Ax should be x_4 + x_6.
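A quick numpy check of the corrected fifth row, using the vector [17, 38, 37, 52, 39, 47] from the iteration shown in the video:

```python
import numpy as np

row5 = np.array([0, 0, 0, 1, 0, 1], dtype=float)  # corrected fifth row of A
x = np.array([17, 38, 37, 52, 39, 47], dtype=float)

# The fifth entry of Ax is the dot product of row 5 with x,
# i.e. x_4 + x_6 = 52 + 47 = 99.
print(row5 @ x)  # 99.0
```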