Ali Ghodsi, Lec [2,1]: Deep Learning, Regularization

  12,247 views

Data Science Courses

A day ago

Comments: 14
@Nakameguro97 2 years ago
Excellent exposition! This is the first place I have found on YT that explains (through a derivation) why regularization terms are ADDED to the objective function.
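For readers skimming the thread: the derivation this comment refers to is the standard MAP view of regularization. Sketched here from memory as a general illustration, not as a transcript of the lecture, a Gaussian prior on the weights turns into an additive L2 penalty:

```latex
% MAP estimate: maximize posterior = likelihood x prior
\hat{w}_{\mathrm{MAP}}
  = \arg\max_w \; p(y \mid X, w)\, p(w)
  = \arg\max_w \; \log p(y \mid X, w) + \log p(w)

% With Gaussian noise (variance \sigma^2) and a Gaussian prior
% p(w) = \mathcal{N}(0, \tau^2 I), this becomes
\hat{w}_{\mathrm{MAP}}
  = \arg\min_w \; \frac{1}{2\sigma^2} \sum_{i=1}^{m} \bigl( y_i - w^\top x_i \bigr)^2
    + \frac{1}{2\tau^2} \, \lVert w \rVert^2

% i.e. least squares plus an ADDED penalty \lambda \lVert w \rVert^2,
% with \lambda = \sigma^2 / \tau^2.
```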
@quirkyquerty 8 years ago
Starts at 9:47.
@siddharthsvnit 6 years ago
Start here: 10:05.
@sobitregmi31 4 years ago
At 34:23, why does the expectation drop while summing over the m points?
@alisadeghi1370 5 years ago
Thank you, professor.
@nazhou7073 5 years ago
Thanks very much, professor.
@sachinvernekar6711 8 years ago
At 33:26: how can the covariance be 0?
@ivishal1990 7 years ago
The covariance of independent terms is 0, because E[XY] = E[X] · E[Y] if X and Y are independent; plug that into the formula for covariance and it comes out zero. Intuitively, covariance measures how two random variables affect each other (in a broad sense), and if they are independent it becomes 0. Hope that helps.
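The independence argument in this reply is easy to check numerically. A minimal sketch (the particular distributions here are my own choice for illustration, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# X and Y are drawn independently, so Cov(X, Y) should be ~0.
x = rng.normal(loc=2.0, scale=1.0, size=n)
y = rng.exponential(scale=3.0, size=n)

# Cov(X, Y) = E[XY] - E[X] E[Y], estimated from the samples.
cov = np.mean(x * y) - np.mean(x) * np.mean(y)
print(cov)  # close to 0, up to Monte Carlo error
```

With a million samples, the estimate sits within a few thousandths of zero; the residual is pure sampling noise.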
@sachinvernekar6711 7 years ago
The covariance of independent variables, E[(X − mean(X))(Y − mean(Y))], will be zero. The point to note is that at 33:26 the equation is E[(y0 − f)(f̂ − f)]. Here f is not mean(y0), and f is not mean(f̂), hence it can't be 0.
@sachinvernekar6711 7 years ago
But the equation is not exactly a covariance. If you are convinced it is 0, could you please post the solution?
@priyamdey3298 5 years ago
@@sachinvernekar6711 Expand the product:
E[(y0 − f0)(f0_hat − f0)] = E[y0 · f0_hat] − E[y0 · f0] − E[f0 · f0_hat] + E[f0 · f0]
1st term: y0 · E[f0_hat] = y0 · f0 (because y0 is a constant, and the expected value of f0_hat should be f0)
2nd term: E[y0 · f0] = y0 · f0 (both are deterministic, not random)
3rd term: E[f0 · f0_hat] = f0 · E[f0_hat] = f0 · f0
4th term: E[f0 · f0] = f0 · f0
The 1st term cancels with the 2nd, and the 3rd cancels with the 4th, so the whole thing is 0.
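The cancellation argued in this thread can also be checked by Monte Carlo. The toy setup below (a linear true function, Gaussian noise, and a least-squares line refit on a fresh training set each trial) is my own illustration of the cross term, not the lecture's example:

```python
import numpy as np

rng = np.random.default_rng(1)

def true_f(x):
    return 2.0 * x + 1.0           # deterministic target f

x0 = 0.7                           # fixed test point
f0 = true_f(x0)
sigma = 0.5                        # noise standard deviation

trials = 20_000
cross = np.empty(trials)
for t in range(trials):
    # fresh training set each trial, so the estimator f_hat is random
    x_tr = rng.uniform(0.0, 1.0, size=20)
    y_tr = true_f(x_tr) + rng.normal(0.0, sigma, size=20)
    # least-squares line fit: this is f_hat
    slope, intercept = np.polyfit(x_tr, y_tr, 1)
    f_hat0 = slope * x0 + intercept
    # fresh noisy observation at x0, independent of the training set
    y0 = f0 + rng.normal(0.0, sigma)
    cross[t] = (y0 - f0) * (f_hat0 - f0)

print(cross.mean())  # ~0: the cross term vanishes on average
```

The key point is that y0 − f0 is independent zero-mean noise, while f0_hat − f0 depends only on the training set, so their product averages to zero.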
@ShahFahad-ez1cm 2 months ago
Watched for 41 minutes but haven't understood the motive of this lecture, except constant derivations.
@charliean9237 8 years ago
CS prof trying to do some stats... :P
Ali Ghodsi, Lec [2,2]: Deep Learning, Regularization
46:05
Data Science Courses
4.6K views
Ali Ghodsi, Lec [3,2]: Deep Learning, Word2vec
1:11:17
Data Science Courses
14K views
Ali Ghodsi, Lec [3,1]: Deep Learning, Word2vec
1:13:29
Data Science Courses
39K views
Ali Ghodsi, Lec [1,1]: Deep Learning, Introduction
56:49
Data Science Courses
39K views
Ali Ghodsi, Lec [1,2]: Deep Learning, Perceptron, Backpropagation
1:29:16
Data Science Courses
22K views
Ali Ghodsi Lec 9, Regularization, Hard Margin SVM
1:15:21
Data Science Courses
9K views
Ali Ghodsi, Lec 1: Principal Component Analysis
1:11:42
Data Science Courses
101K views