Machine learning - linear prediction

Views: 70,755

Nando de Freitas

Comments: 32
@manzooranoori2536 2 years ago
I just love your lectures. Thank you. I'm learning a lot from you.
@forughghadamyari8281 10 months ago
Hi. Thanks for the wonderful videos. Please recommend a book to study for this course.
@kumorikuma 9 years ago
Great lectures. Thanks so much! I learn much better when I can watch a pre-recorded lecture and pause it / go back at my own pace. (Also can skip 9:30AM class heh).
@bach1792 9 years ago
Great courses, man. Thanks!
@panayiotispanayiotou1469 6 years ago
57:17 The dimensions of the X matrix are n × (d + 1), while the theta matrix is d × 2. The first x inside the X matrix should be renamed to x_12 so that the columns are kept to size d.
@dhoomketu731 4 years ago
You are right. The parameter matrix to the right of the data matrix X should have d+1 rows.
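A minimal numpy sketch of the shapes this thread is discussing (made-up sizes, not the lecture's data): once a bias column of ones is prepended, X is n × (d + 1), so the parameter vector must have d + 1 entries for Xθ to be defined.

```python
import numpy as np

n, d = 5, 3                                # 5 examples, 3 raw features
X_raw = np.random.randn(n, d)              # raw data, shape (n, d)

# Prepend a column of ones for the bias term: X becomes n x (d + 1).
X = np.hstack([np.ones((n, 1)), X_raw])
theta = np.random.randn(d + 1)             # needs d + 1 rows, as noted above

y_hat = X @ theta                          # predictions, shape (n,)
print(X.shape, theta.shape, y_hat.shape)   # (5, 4) (4,) (5,)
```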
@el-ostada5849 2 years ago
Thank you for everything you have given to us.
@joehsiao6224 4 years ago
@45:59 The right-hand side of the proof looks wrong, specifically x_i^T Θ: if x_i is 1×n and Θ is n×1, there is no need to transpose x_i. Unless x_i is n×1, but x_i is not defined before this point.
@joehsiao6224 4 years ago
I found that the term is corrected in another video: kzbin.info/www/bejne/h3iylZRvot-Sr6M&ab_channel=NandodeFreitas&t=22m46s
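For anyone puzzled by the transpose in this thread: both conventions denote the same scalar dot product. A quick check with arbitrary numbers (purely illustrative, not from the lecture):

```python
import numpy as np

theta = np.array([0.5, -1.0, 2.0])   # parameter vector, shape (d,)
x_i = np.array([1.0, 3.0, -2.0])     # one input, also shape (d,)

# Written x_i^T theta when x_i is a column vector, or x_i theta when
# x_i is the i-th row of X -- either way it is the same scalar:
print(x_i @ theta)                   # -6.5
print(np.dot(x_i, theta))            # -6.5
```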
@charlescoult 2 years ago
This was an excellent lecture. Thank you.
@ShaunakDe 11 years ago
Thanks for the informative lecture.
@parteekkansal 7 years ago
The matrix differentiation results used here (e.g., ∂(x^T A x)/∂x = 2Ax) are valid when A is symmetric.
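A quick numerical check of that caveat, using finite differences on random matrices (a sketch, not from the lecture): the gradient of f(x) = x^T A x is (A + A^T)x in general, and equals 2Ax only when A is symmetric.

```python
import numpy as np

def grad_fd(f, x, eps=1e-6):
    """Finite-difference approximation of the gradient of scalar f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))      # generic, non-symmetric A
x = rng.standard_normal(4)
f = lambda v: v @ A @ v

print(np.allclose(grad_fd(f, x), (A + A.T) @ x, atol=1e-4))   # True
print(np.allclose(grad_fd(f, x), 2 * A @ x, atol=1e-4))       # False in general

S = A + A.T                          # symmetric case: gradient is 2Sx
g = lambda v: v @ S @ v
print(np.allclose(grad_fd(g, x), 2 * S @ x, atol=1e-4))       # True
```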
@chihyanglee4473 7 years ago
Really appreciate this!!
@letranvinhtri 7 years ago
Does anyone know why, at 53:17, the derivative of θ^T X^T X θ is 2 X^T X θ? I think it should be (T+1) X^T X θ^T.
@naveedtahir2463 7 years ago
Combine the theta vectors and you'll understand
@LunnarisLP 7 years ago
Make sure you understand what θ^T X^T X θ is. It's really just that: the T means a transposed vector, not a power of T.
@LunnarisLP 7 years ago
And check out what X^T X means, and what a dot product of a vector with itself produces ;-)
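Spelling out what these replies are saying (the standard identity, with T denoting transpose rather than a power):

```latex
\theta^{\top} X^{\top} X \theta
  = (X\theta)^{\top}(X\theta)
  = \lVert X\theta \rVert^{2},
\qquad
\frac{\partial}{\partial \theta}\,\theta^{\top} M \theta
  = (M + M^{\top})\,\theta
  = 2\,X^{\top} X\,\theta
\quad\text{for } M = X^{\top} X = M^{\top}.
```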
@theneuralshift 10 years ago
Hi Nando, thanks for the lecture. How are the undergraduate and the regular machine learning lectures different?
@adityajoshi5 10 years ago
The undergrad course covers fundamental topics, from basic probability through Bayes' theorem. If you don't have a background in advanced statistics, the undergrad one will be better for you.
@theneuralshift 10 years ago
Thanks, Aditya.
@adityajoshi5 10 years ago
I think I was a little - almost three quarters of a year - late in replying ;)
@lradhakrishnarao902 8 years ago
The undergrad course includes concepts like linear algebra. You can skip those if you already know how to differentiate and how to work with matrices.
@amitav1978 10 years ago
This looks OK, but could you help with how to derive the R² (Pearson) coefficient, and explain in detail when it holds?
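For reference, the standard definition (general statistics, not specific to this lecture): R² = 1 − SS_res/SS_tot, and for simple linear regression with an intercept it equals the squared Pearson correlation. A small numpy check on made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100)
y = 2.0 * x + 1.0 + 0.5 * rng.standard_normal(100)   # noisy line

a, b = np.polyfit(x, y, 1)                   # fit y ~ a*x + b by least squares
y_hat = a * x + b

ss_res = np.sum((y - y_hat) ** 2)            # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)         # total sum of squares
r2 = 1.0 - ss_res / ss_tot

r = np.corrcoef(x, y)[0, 1]                  # Pearson correlation
print(np.isclose(r2, r ** 2))                # True for simple regression
```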
@riccardopet 8 years ago
Hello, I am following your lectures to learn a bit about ML. I was trying to differentiate the expression (LaTeX) \sum_{i=1}^n (y_i - \mathbf{x}_i^T \theta)^2 that you present on the slide "Optimization Approach". I keep getting a very similar expression in which \mathbf{x} is not transposed. Actually, that should be consistent with \theta being defined as a column vector and x_i being a row vector (as also shown on one of the last slides). Thank you very much; your lectures are very well done!
@ilijahadzic7468 8 years ago
I noticed the same. The transposition in the last expression on that slide is probably an error. x_i is a row vector of dimension d, so multiplying it by the column vector \theta yields a scalar; there is no need to transpose. The transposition in the first (leftmost) expression is correct.
@lradhakrishnarao902 8 years ago
You must be getting the expression y_i^T x_i θ. All you need to do is rewrite it as (x_i θ)^T y_i and your problem is solved. It requires a bit of rearrangement, which is why that particular exercise was given.
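For readers following this thread, the full differentiation the exercise asks for, taking x_i to be a column vector so that x_i^T is the i-th row of X:

```latex
J(\theta) = \sum_{i=1}^{n} \bigl(y_i - \mathbf{x}_i^{\top}\theta\bigr)^{2}
          = (\mathbf{y} - X\theta)^{\top}(\mathbf{y} - X\theta),
\qquad
\frac{\partial J}{\partial \theta} = -2\,X^{\top}(\mathbf{y} - X\theta)
\;\overset{!}{=}\; 0
\;\Longrightarrow\;
\hat{\theta} = (X^{\top}X)^{-1}X^{\top}\mathbf{y}.
```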
@KrishnaDN 9 years ago
Fantastic!
@nikolamarkovic9906 2 years ago
49:40, page 46
@pradeeshbm5558 7 years ago
I didn't understand how to draw the slope, or how to find theta.
@LunnarisLP 7 years ago
Pick any theta; then you get your loss function, and then you minimize the loss :)
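Concretely, "minimize the loss" has a closed form for linear regression. A minimal sketch on made-up data (names and numbers are illustrative, not from the lecture); np.linalg.lstsq does the same job more stably:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=50)
y = 3.0 * x + 4.0 + rng.standard_normal(50)   # true slope 3, intercept 4

X = np.column_stack([np.ones_like(x), x])     # design matrix with bias column

# Normal equations: theta = (X^T X)^{-1} X^T y
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)                                  # roughly [4., 3.] -> intercept, slope

# Numerically preferred equivalent:
theta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```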
@ashishjain2901 8 years ago
Anyone have ML material related to the insurance industry? Please share.
@yunfeichen9255 6 years ago
Calculus 101, anyone? Lol, a lot of calculus functions...
Machine learning - Maximum likelihood and linear regression
1:14:01
Nando de Freitas
Views: 111K
Machine learning - Bayesian learning
1:17:40
Nando de Freitas
Views: 62K
Machine learning - regularization, cross-validation and data size
58:47
Nando de Freitas
Views: 28K
Transformers (how LLMs work) explained visually | DL5
27:14
3Blue1Brown
Views: 4.3M
Machine learning - Gaussian processes
1:17:28
Nando de Freitas
Views: 93K
PAC Learning and VC Dimension
17:17
John Mount
Views: 19K
Machine learning - Regularization and regression
1:01:15
Nando de Freitas
Views: 47K
Machine learning - Random forests
1:16:55
Nando de Freitas
Views: 238K
MIT Introduction to Deep Learning | 6.S191
1:09:58
Alexander Amini
Views: 835K
Stanford CS229 I Machine Learning I Building Large Language Models (LLMs)
1:44:31