Gradient Descent - THE MATH YOU SHOULD KNOW

18,614 views

CodeEmporium

Comments: 30
@emrek1 · 1 year ago
The derivative of log10(x) is 1/(x ln 10). Since ln 10 is just a constant, it doesn't spoil gradient descent.
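For reference, the full computation behind this comment (a quick sketch, not from the video):

$$
\frac{d}{dx}\log_{10}(x)=\frac{d}{dx}\,\frac{\ln x}{\ln 10}=\frac{1}{x\ln 10}.
$$

Scaling a loss by the positive constant $1/\ln 10$ scales every gradient by the same factor, so gradient descent moves in the same direction at every step; only the effective learning rate changes.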
@patite3103 · 3 years ago
You've done amazing work! I love your videos! I wish you could do some more math videos, for example on neural network backpropagation, the perceptron convergence theorem, Gaussian mixture models... At 8:39, shouldn't we add the negative sign to the NLL function?
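For context on the sign question (a general note on the convention, since the video's slide isn't reproduced here), the negative log-likelihood is

$$
\mathrm{NLL}(\theta)=-\sum_{i=1}^{n}\log p\!\left(y_i \mid x_i;\theta\right),
$$

so maximizing the log-likelihood and minimizing the NLL are the same problem; whether the negative sign is written into the loss or into the update rule is purely bookkeeping.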
@DarshanSenTheComposer · 5 years ago
Having watched a couple of Prof. Andrew Ng's ML videos on Coursera, watching this video is pure satisfaction. I really liked the fact that you walked through the vectorized approaches as well. Isn't ML just a glorified form of Numerical Methods? Thank you. :)
@northsand · 3 years ago
This video saved my life
@kingkong792 · 1 year ago
Love your content. Thank you for taking the time to share your skills!
@danielrosas2240 · 4 years ago
I like this series of videos. Thank you so much!
@saminchowdhury7995 · 5 years ago
This is amazing, brother. You made me understand the math. You're a G.
@mmm777ization · 3 years ago
As the probability of the data increases, the loss decreases (@4:20).
@dvdsct · 3 years ago
dude, AMAZING video, thanks!
@kennethleung4487 · 3 years ago
Fantastic work! Keep it up!
@forcedlevy · 3 years ago
Thank you Tobias Funke
@forcedlevy · 3 years ago
11:46 Shouldn't the update term of gradient descent be theta = theta - alpha * gradient? You seem to use addition instead of subtraction.
@kestonsmith1354 · 3 years ago
Alpha * gradient will give you a negative number there, so that's why he puts the addition.
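To make the sign question in this exchange concrete, here is a minimal sketch (an illustration, not the video's code) of logistic regression trained both ways. The two updates produce identical trajectories because grad(NLL) = -grad(LL): whether you write ascent on the log-likelihood or descent on the negative log-likelihood is just where you keep the minus sign.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: n samples, d features, binary labels.
n, d = 200, 3
X = rng.normal(size=(n, d))
true_theta = np.array([1.5, -2.0, 0.5])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_theta))).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def grad_log_likelihood(theta):
    # Gradient of sum_i [y_i log p_i + (1 - y_i) log(1 - p_i)] w.r.t. theta.
    p = sigmoid(X @ theta)
    return X.T @ (y - p)

alpha = 0.01
theta_ascent = np.zeros(d)   # maximize LL:  theta <- theta + alpha * grad(LL)
theta_descent = np.zeros(d)  # minimize NLL: theta <- theta - alpha * grad(NLL)

for _ in range(1000):
    theta_ascent += alpha * grad_log_likelihood(theta_ascent)
    theta_descent -= alpha * (-grad_log_likelihood(theta_descent))  # grad(NLL) = -grad(LL)

print(np.allclose(theta_ascent, theta_descent))  # True: same updates, same result
```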
@NajmehMousavi · 3 years ago
amazing and clear explanation, great work :)
@imed_rahmani · 3 years ago
Thank you so much, sir. I really appreciate your videos
@amarnathjagatap2339 · 5 years ago
Sir, make more videos on machine learning.
@95Bloulou · 4 years ago
Thank you! Great video, in my opinion!
@rahuldeora5815 · 5 years ago
I have a question that has been troubling me for quite some time. When we take the derivative of the weight matrix times the output of its previous layer, we get a transposed matrix. I never understood how the transpose appears. I searched a lot but could not find a clear explanation anywhere. Could you guide me toward where I can find it explained simply?
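For what it's worth, here is the standard component-wise derivation (a general sketch, not specific to this video). For a linear layer $y = Wx$ with scalar loss $L$, we have $y_i=\sum_j W_{ij}x_j$, so the chain rule gives

$$
\frac{\partial L}{\partial x_j}=\sum_i \frac{\partial L}{\partial y_i}\,\frac{\partial y_i}{\partial x_j}=\sum_i W_{ij}\,\frac{\partial L}{\partial y_i}
\qquad\Longrightarrow\qquad
\frac{\partial L}{\partial x}=W^{\top}\,\frac{\partial L}{\partial y}.
$$

The transpose appears because the chain rule sums over the output index $i$, which is the row index of $W$; writing that sum as a matrix product forces $W$ to be flipped.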
@ripsirwin1 · 3 years ago
It kinda blows my mind that the loss function is independent of the model. Is this correct?
@danishnawaz7869 · 5 years ago
Challenging you to make me understand the math of the gradient boosting algorithm. 😝
@CodeEmporium · 5 years ago
Challenge accepted
@ephraimwork351 · 3 years ago
@CodeEmporium How did the challenge go?
@sooryaprakash6390 · 3 years ago
@ephraimwork351 kzbin.info/www/bejne/g3qznH5rj6amo9U&ab_channel=CodeEmporium
@yulinliu850 · 5 years ago
Thanks a lot!
@jose4877 · 3 years ago
You said to add a negative sign to the l (log-likelihood) function, and then proceeded to leave the expression exactly the same. I don't see where you added the negative sign.
@KoKi-nx1xo · 4 years ago
Why is theta transposed?
@animeshsharma7332 · 3 years ago
Just to make the two matrices compatible for multiplication. Theta has shape (1 x d), and X is also (1 x d). Transposing theta changes its shape to (d x 1), which makes it multipliable with X (1 x d).
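A quick numpy illustration of the shape bookkeeping (using the row-vector convention from the comment above; if the video treats theta and x as column vectors instead, the scalar product is written theta.T @ x):

```python
import numpy as np

d = 4
theta = np.ones((1, d))         # parameters as a row vector, shape (1, d)
x = np.arange(d).reshape(1, d)  # one sample as a row vector, shape (1, d)

# theta @ x fails: (1, d) x (1, d) shapes are incompatible.
# Transposing theta gives (d, 1), so the product is a (1, 1) scalar:
z = x @ theta.T
print(z.shape)  # (1, 1) -- the linear score theta^T x for this sample
```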
@moisessoto5061 · 5 years ago
Thanks a lot!
@kestonsmith1354 · 3 years ago
👌
@CodeEmporium · 3 years ago
🙂