17: Principal Components Analysis - Intro to Neural Computation

39,229 views

MIT OpenCourseWare


Comments
@SyedMohommadKumailAkbar · 7 months ago
Absolutely incredible teaching. Started doing math again after a gap of 12 years, and this just clicked.
@ksjksjgg · 2 years ago
Many thanks to Prof. Fee and MIT for sharing this excellent lecture!!!
@markdebono1273 · 11 months ago
I had been watching videos on this topic for the last 2-3 months, but it was ONLY after watching this lecture that I understood EVERYTHING! Not to discredit any of the other videos I watched, but this lecture moves at a very good pace, and I did not need to pause a hundred times to give my mind time to process all the incoming information and cool down 😁 If I could give a thumbs up 10 times, I would!
@ElectroCute10 · 2 years ago
This is nice. I tried to replicate the de-noising in Python using sine and cosine "signals" with some random noise (basically sin(timepoint) + np.random()) and realised that it does not work the way it is described here, because the variance is always roughly the same in every dimension (i.e., every timepoint). To be able to isolate the underlying trend using eigenvectors, I had to skip the step Z = X - MU at 1:16:01, as that step causes the variance to be approximately the same in all dimensions. If we do not subtract the mean but define our "covariance" matrix as simply Z·Zᵀ, then our "variance" is actually higher where the underlying signal is higher and lower where it is lower. That way I could isolate the signal. Having said that, maybe I have done something completely wrong. This is MIT, after all :)
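For anyone who wants to try reproducing this, here is a minimal Python sketch of the experiment the comment describes. The trials-by-timepoints layout, the signal, and the noise level are all assumptions for illustration, not taken from the lecture:

```python
# Minimal sketch of the experiment in the comment above (assumption:
# rows of X are noisy repetitions, "trials", of one underlying sine signal).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)                    # timepoints
signal = np.sin(t)                                    # shared underlying signal
X = signal + 0.5 * rng.standard_normal((50, t.size))  # 50 noisy trials

# The step in question: subtracting the mean across trials (Z = X - MU)
# removes the shared sine, leaving roughly the same noise variance at
# every timepoint. Set SUBTRACT_MEAN = False to skip it, as suggested.
SUBTRACT_MEAN = True
Z = X - X.mean(axis=0) if SUBTRACT_MEAN else X

C = Z.T @ Z / Z.shape[0]               # timepoints x timepoints matrix
eigvals, eigvecs = np.linalg.eigh(C)   # eigh: eigenvalues in ascending order
pc1 = eigvecs[:, -1]                   # eigenvector of the largest eigenvalue

# With SUBTRACT_MEAN = False, pc1 is (up to sign) close to the normalized
# sine; with True, the shared signal is gone and pc1 is dominated by noise.
print(np.abs(pc1 @ (signal / np.linalg.norm(signal))))
```

Note that with the mean left in, Z·Zᵀ is a second-moment matrix rather than a true covariance, which is exactly why the shared signal survives as its top eigenvector.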
@prasanth_y_11 · 1 year ago
Wow, this is an amazing explanation, building the topic up from the basics. Even spectral clustering ends with a similar analysis of the eigenvalues of a covariance matrix.
@CristianTraina · 3 years ago
Is it really necessary to zoom in and out all the time? It doesn't let me focus on what I'm reading...
@SphereofTime · 1 month ago
9:21 For a diagonal matrix, why does lambda stretch the vector?
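A tiny numeric illustration of that point (my own example, not from the lecture): for a diagonal matrix, the standard basis vectors are eigenvectors, and multiplying by the matrix rescales each one by its diagonal entry, which is the eigenvalue lambda.

```python
# For a diagonal matrix A, the basis vectors e1 and e2 are eigenvectors,
# and A @ v just stretches or shrinks them by the corresponding entry.
import numpy as np

A = np.diag([3.0, 0.5])        # eigenvalues: 3 and 0.5
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

print(A @ e1)                  # [3. 0.]  -> e1 stretched by lambda = 3
print(A @ e2)                  # [0. 0.5] -> e2 shrunk by lambda = 0.5
```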
@ganeshhampapura9842 · 2 years ago
Superb clarity, professor. God bless you!
@jonilyndabalos7105 · 3 years ago
Thank you, MIT, for sharing this awesome lecture series with the public. You make learning accessible to all, especially the underprivileged. Please keep these videos coming.
@djangoworldwide7925 · 1 year ago
This cloud is basically the electron cloud, isn't it? Amazing how everything just fits that same radial Gaussian distribution.
@lijisisi · 3 years ago
Wonderful material! Thank you so much, Dr. Fee.
@jorgegonzalez-higueras3963 · 3 years ago
Thank you, Professor Fee, for a very clear explanation!
@nshilla · 3 years ago
The application part of the lecture starts at 41:00.
@et2124 · 1 year ago
Lol, Lina always asks exactly the questions I want to.
@animikhaghosh6536 · 2 years ago
3:29 wow
@lohaniamit · 2 years ago
40:24 Just tagging the main PCA part for my own reference.
@andrewfalcone2701 · 4 years ago
Can someone confirm that λ+ = a + b, λ− = a − b is a mistake? I couldn't simplify the radical, and I double-checked with MATLAB's symbolic equation solver (though it doesn't always simplify correctly) and with some numbers.
@andrewfalcone2701 · 4 years ago
The A matrix is not the general 2x2 symmetric matrix but [a, b; b, a]; for some reason, d is used on the right-hand side.
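For anyone double-checking the same step, the reply above resolves it. A short derivation: for a general symmetric 2x2 matrix,

$$
A = \begin{pmatrix} a & b \\ b & d \end{pmatrix},
\qquad
\lambda_{\pm} = \frac{a+d}{2} \pm \sqrt{\left(\frac{a-d}{2}\right)^{2} + b^{2}},
$$

and setting $d = a$ collapses the radical to $\sqrt{b^{2}} = |b|$, giving $\lambda_{\pm} = a \pm b$. So the stated eigenvalues are correct for the special matrix [a, b; b, a], just not for the general symmetric case.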
@not_amanullah · 18 days ago
thanks ♥️🤍
@benjaminbazi9355 · 4 years ago
Good lecture!
@abhay9994 · 1 year ago
Awesome!
@jimmylok · 9 months ago
All that zooming in and out makes me dizzy...
@djangoworldwide7925 · 1 year ago
I don't believe anyone actually goes through this process anymore; nowadays it's a single line of code plus a biplot.
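For reference, the "one line" version looks roughly like this (a sketch assuming scikit-learn; the toy data and names are not from the lecture):

```python
# PCA in one call with scikit-learn; the projection onto the top two
# principal components is what a biplot would then display.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.default_rng(0).standard_normal((100, 5))  # toy data matrix
scores = PCA(n_components=2).fit_transform(X)           # centers X, projects it
```

Of course, the point of the lecture is to understand what that one line is doing under the hood.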
@not_amanullah · 18 days ago
this is helpful ♥️🤍
Related videos

18: Recurrent Networks - Intro to Neural Computation
1:19:13
MIT OpenCourseWare
7K views
Principal Component Analysis (PCA)
26:34
Serrano.Academy
419K views
The Simple Math Problem That Revolutionized Physics
32:44
Veritasium
8M views
StatQuest: Principal Component Analysis (PCA), Step-by-Step
21:58
StatQuest with Josh Starmer
3M views
11: Spectral Analysis Part 1 - Intro to Neural Computation
1:17:38
MIT OpenCourseWare
17K views
Principal Component Analysis (PCA)
6:28
Visually Explained
244K views
2024's Biggest Breakthroughs in Math
15:13
Quanta Magazine
615K views
The Boundary of Computation
12:59
Mutual Information
1M views