I just love your lectures. Thank you. I'm learning a lot from you.
@forughghadamyari8281 10 months ago
Hi. Thanks for the wonderful videos. Could you please recommend a book to study for this course?
@kumorikuma 9 years ago
Great lectures. Thanks so much! I learn much better when I can watch a pre-recorded lecture and pause it / go back at my own pace. (Also can skip 9:30AM class heh).
@bach1792 9 years ago
great courses man. thanks
@panayiotispanayiotou1469 6 years ago
57:17 The dimensions of the X matrix are n x (d+1), while those of the theta matrix are d x 2, so they don't conform. The first x inside the X matrix should be renamed x_12 so that the number of columns is kept at d.
@dhoomketu731 4 years ago
You are right. The parameter matrix to the right of the data matrix X should have d+1 rows.
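For anyone who wants to check the shapes, here is a minimal NumPy sketch (my own, not from the slides; the variable names are made up): X is built as n x (d+1) by prepending a column of ones, so theta needs d+1 entries to conform.

import numpy as np

n, d = 5, 3
X_raw = np.random.randn(n, d)             # n data points, d features
X = np.hstack([np.ones((n, 1)), X_raw])   # prepend the bias column -> n x (d+1)
theta = np.random.randn(d + 1)            # needs d+1 rows to match X's columns

y_hat = X @ theta                         # (n, d+1) @ (d+1,) -> (n,)
print(X.shape, theta.shape, y_hat.shape)  # (5, 4) (4,) (5,)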
@el-ostada5849 2 years ago
Thank you for everything you have given to us.
@joehsiao6224 4 years ago
@45:59 The right-hand side of the proof looks wrong, specifically x_i^T θ: if x_i is 1xn and θ is nx1, there is no need to transpose x_i. Unless x_i is nx1, but x_i is not defined anywhere before this.
@joehsiao6224 4 years ago
I found that the term is corrected in another video: kzbin.info/www/bejne/h3iylZRvot-Sr6M&ab_channel=NandodeFreitas&t=22m46s
@charlescoult 2 years ago
This was an excellent lecture. Thank you.
@ShaunakDe 11 years ago
Thanks for the informative lecture.
@parteekkansal 7 years ago
Matrix differentiation results like d(θ^T A θ)/dθ = 2Aθ are valid when A is symmetric; in general the derivative is (A + A^T)θ, which reduces to 2Aθ only when A = A^T.
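A quick numerical check of this (my own sketch, assuming NumPy; none of the variable names are from the lecture):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))    # deliberately non-symmetric
theta = rng.standard_normal(4)

f = lambda t: t @ A @ t            # f(theta) = theta^T A theta

# central finite-difference gradient of f at theta
eps = 1e-6
num_grad = np.array([(f(theta + eps * e) - f(theta - eps * e)) / (2 * eps)
                     for e in np.eye(4)])

print(np.allclose(num_grad, (A + A.T) @ theta, atol=1e-4))  # True in general
print(np.allclose(num_grad, 2 * A @ theta, atol=1e-4))      # False: A is not symmetric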
@chihyanglee4473 7 years ago
Really appreciate this!!
@letranvinhtri 7 years ago
Does anyone know why, at 53:17, d(θ^T X^T X θ)/dθ = 2 X^T X θ? I think it should be (T+1) X^T X θ^T.
@naveedtahir2463 7 years ago
Combine the theta vectors and you'll understand
@LunnarisLP 7 years ago
Make sure you understand what θ^T X^T X θ is. It's really just that: the T means a transposed vector, not a power of T.
@LunnarisLP 7 years ago
And check out what X^T X means and what a dot product of a matrix with itself creates ;-)
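Spelling the step out (my own summary of the standard identity, in LaTeX):

\frac{\partial}{\partial \theta}\, \theta^T X^T X \theta = \left(X^T X + (X^T X)^T\right)\theta = 2\, X^T X\, \theta,

because X^T X is symmetric, i.e. (X^T X)^T = X^T X. Since T is a transpose rather than an exponent, the power rule does not apply.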
@theneuralshift 10 years ago
Hi Nando, thanks for the lecture. How do the undergraduate and the regular machine learning lectures differ?
@adityajoshi5 10 years ago
The undergrad course covers fundamental topics, from basic probability to Bayes' theorem. If you don't have a background in advanced statistics etc., the undergrad one will be better for you.
@theneuralshift 10 years ago
Thanks, Aditya.
@adityajoshi5 10 years ago
I think I was a little - almost three quarters of a year - late in replying ;)
@lradhakrishnarao902 8 years ago
The undergrad course includes concepts like linear algebra. You can skip those if you already know how to differentiate and work with matrices.
@amitav1978 10 years ago
This looks OK, but it would help if you could show how to derive the R^2 (Pearson) coefficient, and detail when it holds.
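What I have in mind is something like this (my own sketch, assuming NumPy; not from the lecture): for simple least-squares regression with an intercept, R^2 equals the squared Pearson correlation between x and y, which is easy to verify numerically.

import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100)
y = 2.0 * x + 1.0 + 0.5 * rng.standard_normal(100)

# fit y = theta0 + theta1 * x by least squares
X = np.column_stack([np.ones_like(x), x])
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ theta

r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)  # R^2 = 1 - SS_res/SS_tot
r = np.corrcoef(x, y)[0, 1]                                      # Pearson correlation

print(np.isclose(r2, r ** 2))  # True for simple linear regression with intercept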
@riccardopet 8 years ago
Hello, I am following your lectures to learn a bit about ML. I was trying to differentiate the expression (LaTeX code) \sum_{i=1}^n (y_i - \vec{x}_i^T \theta)^2 that you present on the slide "Optimization Approach". I am not able to reproduce that expression; I get a very similar one where the \vec{x} is not transposed. Actually, that would agree with the fact that \theta is defined as a column vector and x_i should be a row vector (as also shown in one of the last slides). Thank you very much, your lectures are very well done!
@ilijahadzic7468 8 years ago
I noticed the same. The transposition in the last expression on that slide is probably an error: x_i is a row vector of dimension d, so multiplying it by the column vector \theta already yields a scalar; there is no need to transpose. The transposition in the first (leftmost) expression is correct.
@lradhakrishnarao902 8 years ago
You must be getting the expression y_i^T x_i θ. All you need to do is write it as (x_i θ)^T y_i, and your problem is solved. I know it requires a bit of rearrangement, and that's why that particular exercise was given.
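For completeness, here is the usual expansion (my own write-up of the standard derivation, in LaTeX), where that scalar rearrangement gets used:

\sum_{i=1}^n (y_i - \vec{x}_i^T \theta)^2 = (y - X\theta)^T (y - X\theta) = y^T y - 2\,\theta^T X^T y + \theta^T X^T X \theta,

using y^T X \theta = (X\theta)^T y = \theta^T X^T y, since a scalar equals its own transpose. Setting the gradient to zero then gives the normal equations X^T X \theta = X^T y.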
@KrishnaDN 9 years ago
Fantastic!
@nikolamarkovic9906 2 years ago
49:40, page 46
@pradeeshbm5558 7 years ago
I didn't understand how to draw the slope, i.e. how to find theta.
@LunnarisLP 7 years ago
Pick any theta to start; then you get your loss function, and then you minimize the loss :)
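To make that concrete (a minimal sketch of my own, not code from the course): theta can be found in closed form via the normal equations, or by gradient descent on the squared loss.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(50)
y = 3.0 * x + 2.0 + 0.1 * rng.standard_normal(50)  # true slope 3, intercept 2

X = np.column_stack([np.ones_like(x), x])          # n x 2 design matrix

# closed form: solve X^T X theta = X^T y
theta_closed = np.linalg.solve(X.T @ X, X.T @ y)

# gradient descent on the mean squared loss
theta = np.zeros(2)
lr = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ theta - y) / len(y)      # gradient of mean((X theta - y)^2)
    theta -= lr * grad

print(theta_closed, theta)  # both approach [2, 3] (intercept, slope)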
@ashishjain2901 8 years ago
Does anyone have ML material related to the insurance industry? Please share.
@yunfeichen9255 6 years ago
Calculus 101, anyone? lolz. A lot of calculus functions...