Principal Component Analysis (PCA)

364,026 views

Steve Brunton

4 years ago

Principal component analysis (PCA) is a workhorse algorithm in statistics, where dominant correlation patterns are extracted from high-dimensional data.
Book PDF: databookuw.com/databook.pdf
Book Website: databookuw.com
These lectures follow Chapter 1 from: "Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control" by Brunton and Kutz
Amazon: www.amazon.com/Data-Driven-Sc...
Brunton Website: eigensteve.com
This video was produced at the University of Washington

Comments: 207
@sjh7782 3 years ago
Prof. Brunton always delivers the best explanations on the subjects! His videos really help me a lot! Kudos!
@mikeCavalle 1 year ago
Indeed he does ...
@T4l0nITA 4 years ago
The best video on PCA I could find on YouTube: no messy blackboards, jokes, or oversimplification, just solid explanation. Great job.
@rezab314 3 years ago
What the hell. One can download your book for free?! You sir are a saint. I will work thru it and if I like it I will definitely purchase it!! (I'm pretty sure I will like it, because I like all your videos so far) PS: I am so proud of you guys. You are bringing humanity forward with content like this being free. I encourage everyone who can to purchase content from sources like this
@amisteiner66 1 year ago
You explain complicated math in a brilliant way. Thank you so much
@aayushpatel5777 3 years ago
These videos are the PCA for data-driven engineering!! Thank you for putting this series up publicly!!
@danielzhang3070 4 years ago
So far this is the best video of PCA explanation.
@pablo_brianese 4 years ago
Steve's explanations are excellent.
@sheethouse15 4 months ago
Finally, someone who explains statistics in a straightforward way, whilst communicating in an adult-like manner.
@vijanth 4 years ago
Steve is able to explain PCA from a classical statistical point of view. Very clear.
@GreenCreepLoL 3 years ago
I logged in just for this, which I almost never do xD I wanted to say: Thank you! Your video series is great, enjoyable, and helps you get familiar with the topic rapidly. The same applies to the book, which you link to for free. Thank you.
@resap.9128 3 years ago
I've watched a lot of PCA videos and this is really the best one. You're amazing!
@TheMangz1611 3 years ago
Yes he is, but do visit StatQuest.
@GundoganFatih 2 years ago
@@TheMangz1611 Bam. Best wishes to anyone who makes teaching intuitive.
@richardlin6993 4 years ago
Best explanation. Looking forward to video about Kernel PCA!
@lucasheterjag 2 years ago
Love this series! Just bought your book
@vijanth 3 years ago
The last part of the video, on how SVD and PCA are related, is really in a class of its own. It shows why experts should run video lectures.
@criticalcog6363 3 years ago
Wow, excellent explanation. Thank you so much.
@mickwilson99 3 days ago
This is so technically correct, and simultaneously so obtuse, that my intuition fuse has melted. Please consider redoing this as 3D pseudo visualizations of data subsets.
@user-pb4pe5mk7c 3 years ago
I am a PhD student learning inverse scattering; your lectures help me with understanding those concepts :) Greetings from Naples.
@shashidharmuniswamy2620 2 years ago
Thank you, Prof. Brunton. I have a question: supposing I have done this series of experiments with a target measure that cannot be categorized but is a continuous value, then can I use PCA?
@rodrigomaximo1034 2 years ago
Amazing video. I did the MIT lectures about Linear Algebra (that talked about SVD) and the Andrew Ng's ML course (that talked about PCA). This video was the perfect bridge to connect the two things in a coherent manner. Thank you very much, Dr. Brunton!
@tymothylim6550 3 years ago
Thank you very much for this video! Learnt quite a bit from this :)
@VinodSharma-lj6yy 3 months ago
Very good explanation for each symptom and its treatment
@mohamedemara6906 4 years ago
This is amazing!
@Sky-pg6xy 2 years ago
This channel is amazing!
@bradjones06 4 years ago
If there was a Nobel Prize in Education (which there absolutely should be), then you should absolutely win.
@jenssen97 3 years ago
superb explanation. Thank you!
@jacobanderson5693 4 years ago
Do you have a Patreon? How can I help support this content? Just these materials on Ch. 1 and 2 have been amazing. Will it extend to additional chapters?
@Eigensteve 4 years ago
I don't, but I really appreciate the kind words! This will extend to all of the chapters eventually.
@edhas7988 3 years ago
Thank you so much. You made it really easy to understand.
@Eigensteve 3 years ago
Glad to hear that!
@macmos1 4 years ago
Awesome! Thank you!
@muhammadali-jv1kr 3 years ago
Excellent, connected, simple.
@phytasea 1 year ago
Beautifully explained ~ and Thank you so much ^^
@Actanonverba01 4 years ago
Very clear, excellent
@jaanuskiipli4647 3 years ago
Correct me if I'm wrong, but B transposed multiplied by B sums up the products of the mean-centered values; to get the covariance we still need to divide by the number of rows in X, since covariance is defined as E{(X-E(X))*(Y-E(Y))}, not just the sum of (X-E(X))*(Y-E(Y)) over the measurements.
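A minimal NumPy sketch of this normalization point, on toy data with measurements in rows and features in columns (names and data are illustrative, not from the video):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # 100 measurements (rows), 3 features (columns), toy data

B = X - X.mean(axis=0)               # mean-center each feature
n = B.shape[0]

sums = B.T @ B                       # sums of products of mean-centered values
C = sums / (n - 1)                   # sample covariance: divide by (number of measurements - 1)

# agrees with NumPy's built-in estimator (columns treated as variables)
assert np.allclose(C, np.cov(X, rowvar=False))
```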
@TreldarForPresident 1 year ago
Nobody gonna say anything about how this man just wrote all of that backwards flawlessly?
@reginaldrobinson8007 3 years ago
Can you please share what software and equipment you're using for this presentation?
@matthijsg5983 8 months ago
Note @ 7:50 regarding CV = VD. The D here is a matrix where all the eigenvalues are on the diagonal.
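For readers who want the notation at 7:50 spelled out: D collects the eigenvalues of C on its diagonal, one per eigenvector (column of V),

$$ C V = V D, \qquad D = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}, \qquad C v_j = \lambda_j v_j . $$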
@Kyubbie 3 years ago
I started watching from SVD til here and it was super helpful! Thank you so so much.
@noahbarrow7979 2 years ago
Hey Dr. Brunton. Awesome video yet again! I've been snooping around Kaggle and found a dataset on body performance given a host of variables. I thought I'd try using PCA to determine the most influential characteristics within the data and began working with it in MATLAB. I was able to get tons of outputs (a thrill unto itself) and a nice little scatter plot! However, when all was said and done I had difficulty understanding which variables were most influential by looking at the scatter plot and PCA breakdown. What should I be doing/thinking to gain that intuition? Thanks!
@thomaswilke9197 3 years ago
Is this done with a glass whiteboard and the recording is mirrored?
@cyaaronk7328 3 years ago
Amazing! Thank you!
@alhelibrito4480 3 years ago
Wow, I saw n videos before this, beautiful explanation!
@vincecaulfield9368 4 years ago
Hi Steve, there may be a tiny typo on page 22 of your Data-Driven Science book. Equation (1.26) is supposed to be $B = X - \bar{X}$ to represent the demeaned data $X$, while the book shows $B = X - \bar{B}$. Please correct me if I am wrong.
@wenhuaxu8589 4 years ago
Thank you
@alexanderfeng860 3 years ago
I believe that there's a typo. The principal components are the columns of V.
@hectorponce2012 3 years ago
Does the concept of cross-loadings exist in PCA like it does in EFA? If it does exist, what are the criteria to determine so?
@SM-tb9ux 2 years ago
It can be helpful to use the names "features" (to refer to the 'n' different pixels in a photo, or the 'n' different characteristics of rats which may predict cancer) and "snapshots" (to refer to the 'm' different measurements, e.g. people's photos, or rats). Then it doesn't matter whether you have the features as columns or rows: Corr(feat) is the feature-wise correlation matrix, whose entries represent the correlation between two features and whose eigenvectors are the "eigenfeatures". If you happen to have the features as columns, then Corr(feat) = [X^T][X]; if you happen to have the features as rows, then Corr(feat) = [X][X^T].

Similarly, for the snapshots we have Corr(snap), the snapshot-wise correlation matrix, whose entries represent the correlation between two snapshots and whose eigenvectors are the "eigensnapshots". Again, depending on whether the snapshots are in the rows or columns of X, you can form Corr(snap) accordingly.

This also helps when doing PCA, since you generally wish to reduce the number of features and are therefore interested in the eigenvectors of Corr(feat). No need to sweat over how your data is organized in the matrix X, or any annoying conventions for PCA. In short, it is easier to think of "features & snapshots" than "rows & columns".
@Realiiii 4 months ago
I can't agree more. It is inconsistent between the video and the code. In the video, he emphasized that each row has to be the features collected from a single individual, so a 2×10000 matrix would mean 2 individuals and 10000 features. However, the matrix generated in the code is 2×10000 with the opposite meaning: 2 features and 10000 individuals. It took me a really long time to figure out what happened.
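A short sketch of the features-vs-snapshots bookkeeping discussed in this thread, assuming a toy matrix with snapshots in rows and features in columns (sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 4                        # m snapshots, n features (toy sizes)
X = rng.normal(size=(m, n))         # snapshots in rows, features in columns
B = X - X.mean(axis=0)              # mean-center each feature

C_feat = B.T @ B / (m - 1)          # n x n feature-wise covariance ("eigenfeatures" live here)
G_snap = B @ B.T                    # m x m snapshot-wise inner products ("eigensnapshots")

# If the data were stored transposed (features in rows), the two products would swap roles.
print(C_feat.shape, G_snap.shape)   # (4, 4) (50, 50)
```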
@huseyinsenol1769 5 months ago
Best math content is always the serious and straightforward ones.. Fuck the jokers, you are the king dude
@usmanmuhammad3439 6 months ago
Principal Component Analysis (PCA) is a technique in statistics that simplifies complex data by identifying and emphasizing the most important patterns or features. It does this by transforming the original variables into a new set of uncorrelated variables called principal components, allowing for a more efficient representation of the data.
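A minimal sketch of that recipe via the SVD — mean-center, decompose, project — on toy data; variable names are illustrative, and this is a sketch rather than the book's reference code:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))                      # 200 measurements (rows) x 5 features (columns)

B = X - X.mean(axis=0)                             # 1. subtract the mean of each feature
U, S, Vt = np.linalg.svd(B, full_matrices=False)   # 2. economy SVD of the centered data

T = B @ Vt.T                                       # 3. principal components (scores): T = B V = U Sigma
assert np.allclose(T, U * S)

explained = S**2 / np.sum(S**2)                    # fraction of variance captured by each component
print(explained)
```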
@alirezaparsay8518 1 year ago
We do the last part (T=BV) in order to calculate the inner product with the principal components.
@eduardoantonioroquediaz9504 3 years ago
If I have outliers in my dataset, can this affect the PCA? I have tried cases of this type and it usually identifies a single principal component.
@tathagatverma1806 1 year ago
are you writing in the reverse order (right to left) on the board?
@ffelixvideos 2 years ago
Hi professor, just one question. If your X matrix has samples in the rows and sample features in the columns, shouldn't the correct approach be to calculate the column means of X instead of the row means, and subtract each column value by its respective column mean? That way each column (feature) of X has mean 0.
@syoudipta 1 year ago
I think, as he explained at the beginning, this mix-up happened because of the difference between how the data are represented in the SVD literature and the PCA literature. I am rewatching this lecture after watching the next one, where the MATLAB demonstration is given. The code does exactly that: take the column mean for each person and then subtract. I came down to the comment section to check whether somebody else had this confusion too.
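In NumPy terms the question in this thread is just about which axis the mean is taken over; a tiny sketch, assuming samples in rows and features in columns:

```python
import numpy as np

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])      # 3 samples (rows), 2 features (columns)

B = X - X.mean(axis=0)           # subtract each feature's (column) mean
print(B.mean(axis=0))            # [0. 0.] -- every feature is now centered
```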
@ocarinaoftimelz 3 years ago
So as another way to look at this, are U the scores, sigma the eigenvalues, and V the loadings?
@hira9505040 3 years ago
I like your explanation. Please check equation 1.26 in your databook.
@shashankgupta3549 5 months ago
In some implementations, I find that along with mean centering, division by the standard deviation is also applied (z-scores). Does this make a difference? I believe dividing by the standard deviation is important to keep the features on the same scale (unit variance).
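A hedged sketch of the difference being asked about: dividing each centered feature by its standard deviation makes PCA act on the correlation matrix rather than the covariance matrix, so no single large-scale feature dominates (toy data, illustrative names):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3)) * np.array([1.0, 10.0, 100.0])   # features on very different scales

B = X - X.mean(axis=0)                       # covariance PCA: the large-scale feature dominates
Z = B / X.std(axis=0, ddof=1)                # correlation PCA (z-scores): unit variance per feature

S_cov = np.linalg.svd(B, compute_uv=False)
S_cor = np.linalg.svd(Z, compute_uv=False)
print(S_cov**2 / np.sum(S_cov**2))           # variance fractions, heavily skewed
print(S_cor**2 / np.sum(S_cor**2))           # much more balanced
```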
@spyhunter0066 2 years ago
Can you also show how to get the covariance matrix from a Gaussian fit to Gaussian-looking data? Any suggestions for a book that explains this kind of thing? Cheers.
@alifasayed4297 8 months ago
How do you write inverted letters so quickly? Or is it some kind of CGI?
@engr.israrkhan 4 years ago
Nice lecture
@mquant001 3 years ago
Please could you make a video about singular spectrum analysis?
@yt-1161 1 year ago
4:45 — here you're summing over the elements of each row, but in the book on page 21 it says $\bar{x}_j = \sum_i X_{ij}$, so you're building the sum of each column. Is it a typo?
@ollieelmgreen7280 3 years ago
Everyone: Great video Me: Wondering how he can write backwards
@ricosrealm 2 years ago
I came to learn about PCA, but now I’m just focusing on how he can write backwards so clearly.
@jamesbra4410 2 years ago
It's a trickle on the optocordical neural network involving image inversion
@nkotbs 3 years ago
Amazing video. To the point and efficient.
@Eigensteve 3 years ago
Glad it was helpful!
@DrAndyShick 7 months ago
This guy is super good at writing backwards
@supervince110 1 year ago
Brilliant!
@MrRynRules 3 years ago
Thank you!
@linkmaster959 3 years ago
The data matrix is a wide matrix, so if it is already zero mean, then in this case the PC XV is equal to XU (Considering U from the SVD lecture)?
@zsun0188 3 years ago
BᵀB seems to calculate the covariance matrix of the columns of B.
@Eigensteve 3 years ago
Yep, this essentially is a matrix of inner products of each column with each other.
@mangeshpathade5183 4 years ago
Why did we calculate the covariance matrix?
@leli9833 2 years ago
Hi Professor, it's really useful to watch your lecture, but in the video you point out that the T matrix contains the principal components. This is what confuses me: my understanding is that the column vectors of the loadings are the principal components, and T is just the transformed version of the data B. Please correct me if I'm wrong, thanks.
@DataTranslator 7 months ago
Should #3 be the covariance matrix of the columns rather than the rows? It seems to me that this leads to rows of V = columns of B.
@poiuwnwang7109 1 year ago
@6:08 — can anybody confirm that C = B·Bᵀ rather than C = Bᵀ·B? I ask because each row of B represents the measurements of a variable (zero mean).
@zhanfeipeng7625 4 years ago
Still confused how we get BV = UΣ 🤔🤔 since Vᵀ doesn't cancel with V, right?
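A one-line algebra check of this step, using only the fact that the columns of V are orthonormal, so $V^{T}V = I$:

$$ B = U\Sigma V^{T} \;\Longrightarrow\; BV = U\Sigma V^{T}V = U\Sigma . $$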
@braydenst.pierre9761 3 years ago
Thanks for the great explanation! In your next video, can you please explain how you are writing backward!?
@AlistairLynn 2 years ago
He writes forwards and then flips the video horizontally
@neoblackcyptron 2 years ago
Ha ha ha
@u2coldplay844 3 years ago
I am confused by the SVD of B in step 4. Don't we take the SVD or eigendecomposition of C, the covariance matrix? i.e., T = CV = UE, C = UEV'? Thank you.
@localguy123 3 years ago
Hello, can you show with examples how to do curvilinear component analysis?
@MilianoAlvez 4 years ago
Excellent teaching. I have one question though. When you wrote the covariance matrix of the rows (6:00), because each row is a measurement vector I thought it was the covariance between the measurements, but then you wrote C = (Bᵀ)(B), which is the covariance of the features. Can you explain please?
@GreenCreepLoL 3 years ago
From what I could find in the PCA literature, it depends on what you have more of (objects or variables/features). Both (Bᵀ)(B) and (B)(Bᵀ) are possible when doing PCA, and the covariance matrix you calculate depends on this (you always take the larger one).
@MilianoAlvez 3 years ago
Here I am six months later, but now I understand my problem. We have measurement vectors M, and every measurement has some features such as age, height, disease, etc. What we are interested in is understanding the distribution and the covariance of these features, to work out, for example, joint or posterior distributions. For example, a positive covariance between age and testing positive for some disease means there is a relation between the two: the greater the age, the greater the risk of the disease. So we need (Bᵀ)B, the covariance between features; then we can find the joint or posterior probability distribution.
@mkhex87 1 year ago
@@MilianoAlvez Right, which means BᵀB is the covariance of the columns. I think Brunton accidentally wrote "rows".
@minjieshen6558 4 years ago
If we do row-wise correlation with respect to B, should it be C=B * B_T instead of B_T * B?
@jianjia7261 2 years ago
I agree with you.
@mkhex87 1 year ago
Yeah, he wrote "BᵀB is the covariance of the rows of B", but I think he meant the columns (the features).
@sorvex9 2 years ago
Can someone explain to me why the covariance matrix is just B transposed times B?
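One way to see it, writing $b_{ij}$ for the mean-centered value of feature $j$ in measurement $i$ (up to the $1/(n-1)$ normalization discussed in an earlier comment):

$$ \left(B^{T}B\right)_{jk} \;=\; \sum_{i=1}^{n} b_{ij}\, b_{ik} \;\propto\; \operatorname{cov}(x_j,\, x_k). $$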
@Giantcjy 3 years ago
He just knows it all.
@Eigensteve 3 years ago
Lol, not even the first principal component! :)
@qasimahmad6714 3 years ago
Is it important to show the 95% confidence ellipse in PCA? If my data doesn't produce one, what should I do? Can I use the PCA score graph without the 95% confidence ellipse?
@saalimzafar 1 year ago
Introduction is one thing, presentation is another. One who combines both gets all the attention!!
@tazking93 4 years ago
Thank you for the lecture, its been very helpful. On an unrelated note, how do you write backwards with such ease?
@LTForcedown 4 years ago
They probably just mirror the video
@estebanlopez1701 3 years ago
I was thinking the same!
@xephyr417 3 years ago
@@LTForcedown no, he writes backwards.
@arkanasays 3 years ago
it is a mirroring technique - he cannot write backwards with such ease
@HardLessonsOfLife 2 years ago
I think he is using a special technology which shows mirror image of his board in front of him
@deathbanana888 3 years ago
took me a minute to realise you record this and then mirror the video, rather than learning to write backwards hahaha
@MaeLSTRoM1997 2 years ago
It took me a while to realize you are left handed and you just reflected the video so that what you write appears in the correct orientation for us. At first I was wondering if you managed to learn how to write backwards..
@rachh2750 3 years ago
How does he write in reverse?
@maxli3195 1 year ago
this is amazing
@julianjnavasgonzalez1563 3 years ago
I was following OK until the part that started with writing the lambdas; I got lost there. Also, how are this and the NIPALS algorithm related?
@hms3021 1 year ago
At the 13:45 mark, why is the equation CV = VD? Should it be CV = DV?
@vinson2233 2 years ago
You only said that the data should have zero mean, but what about the standard deviation? Don't we need to scale the data first by dividing each measurement by its standard deviation, to make sure the PCA doesn't just overfit to the direction with the largest magnitude?
@pandu-kt3cz 4 years ago
This is more than awesome!! I want to ask you one question, and here it is:
a1 = [1, 23, 4, 51, 62, 7, 8, 43, 1, 29]
a2 = [5, 45, 32, 51, 60, 7, 8, 35, 10, 31]
a3 = [13, 3, 64, 35, 36, 37, 48, 3, 31, 1]
a4 = [3, 3, 1, 5, 6, 3, 8, 3, 1, 3]
a5 = [0, 3, 0, 5, 0, 0, 8, 0, 0, 1]
How can I figure out the important columns (features) using eigenvalues and eigenvectors? As we can see here, the importance of a4 and a5 is negligible, but how can I find that out with this concept? I have the eigenvalues and eigenvectors but do not know how to use them in this context. After finding the eigenvalues and eigenvectors, I know how to find the PCs, because I have seen your videos. As I saw in the comment section, someone already asked this question, but I was not able to understand the answer. Kindly help me out.
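One possible (not the only) heuristic for this kind of question: stack the vectors as feature columns, run PCA, and weight each feature's loadings by the singular values; features with small weighted loadings contribute little. A hedged sketch on the numbers above:

```python
import numpy as np

a1 = [1, 23, 4, 51, 62, 7, 8, 43, 1, 29]
a2 = [5, 45, 32, 51, 60, 7, 8, 35, 10, 31]
a3 = [13, 3, 64, 35, 36, 37, 48, 3, 31, 1]
a4 = [3, 3, 1, 5, 6, 3, 8, 3, 1, 3]
a5 = [0, 3, 0, 5, 0, 0, 8, 0, 0, 1]

X = np.column_stack([a1, a2, a3, a4, a5]).astype(float)   # 10 observations x 5 features
B = X - X.mean(axis=0)                                    # mean-center each feature
U, S, Vt = np.linalg.svd(B, full_matrices=False)

importance = np.abs(Vt.T) @ S                             # |loadings| weighted by singular values
for name, score in zip(["a1", "a2", "a3", "a4", "a5"], importance):
    print(name, round(float(score), 1))                   # a4 and a5 come out much smaller
```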
@gamingandmusic9217 3 years ago
If the images in X are not all independent, then X is not a full-rank matrix. Will we then have only rank(X) eigenfaces?
@jayasimhayenumaladoddi1602 1 year ago
Can you please make a video on OLPP?
@manfredbogner9799 6 months ago
very good, thanks a lot 😅
@notmomo4u 3 years ago
You are an angel!!!!!!!!!!!!!
@nurtenbakc2562 23 days ago
Dear Steve, why are there no Turkish subtitles for this video? I couldn't understand it.
@jyotipandey1664 1 year ago
The following are measurements of the test scores (X, Y) of 6 candidates on two subject examinations: (50, 55), (62, 92), (80, 97), (65, 83), (64, 95), (73, 93). Determine the first principal component for the test scores using Hotelling's iterative procedure. Sir, how do I do this?
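Hotelling's iterative procedure is essentially power iteration on the covariance matrix; a hedged sketch on the six score pairs above (an illustration, not a worked answer from the video):

```python
import numpy as np

scores = np.array([[50, 55], [62, 92], [80, 97],
                   [65, 83], [64, 95], [73, 93]], dtype=float)
B = scores - scores.mean(axis=0)
C = B.T @ B / (len(scores) - 1)        # 2x2 sample covariance matrix

v = np.array([1.0, 1.0])               # arbitrary starting vector
for _ in range(100):                   # power iteration: repeatedly apply C and normalize
    w = C @ v
    v = w / np.linalg.norm(w)

lam = v @ C @ v                        # Rayleigh quotient -> largest eigenvalue
print("first principal direction:", v, "eigenvalue:", lam)
```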
@viniciusviena8496 3 years ago
Good explanation in general, but you really should show the matrix dimensions during the explanations.
@GamingShiiep 2 years ago
2:09 — I just don't get it. Let's say we measured 1600 samples, and each sample measurement resulted in a concentration value for each of 26 elements. How would that look in the matrix? My matrix would have 1600 rows and 26 columns, right?
@_Gianluca 3 years ago
Thanks for this video! What's the difference between taking the cumulative sum of sigma versus lambda? (11:30) In "Principal Component Analysis (PCA) 2 [Python]" you take the cumulative sum of S (the sigma elements). Thanks!
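The two cumulative sums differ because the covariance eigenvalues scale like the squared singular values, $\lambda_j \propto \sigma_j^2$, so the variance-based fraction rises faster. A small sketch of the difference (toy data, illustrative names):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.normal(size=(300, 6)) @ np.diag([5.0, 3.0, 2.0, 1.0, 0.5, 0.1])   # toy data
B = B - B.mean(axis=0)                                                    # centered
S = np.linalg.svd(B, compute_uv=False)

frac_sigma  = np.cumsum(S)    / np.sum(S)      # cumulative sum of singular values
frac_lambda = np.cumsum(S**2) / np.sum(S**2)   # cumulative variance (covariance eigenvalues)
print(frac_sigma)
print(frac_lambda)                             # reaches 1 faster: big modes weighted more
```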
@HenryYi 2 years ago
Can somebody explain to me why it's B'B and not BB', since the data were stored row-wise?
@baseladams280 3 months ago
How do you film this kind of tutorial video?