Lecture 4: Gradient Tape
14:17
3 months ago
Lecture 3: TensorFlow Variables
9:47
Lecture 14: Support Vector Machine 2
55:19
Lecture 12: Perceptron Algorithm
1:02:10
Lecture 11: Logistic Regression 2
45:50
Lecture 10: Logistic Regression 1
50:55
Lecture 9: Proximal Gradient Descent
1:01:17
Lecture 8: LASSO Regression
1:10:49
2 years ago
Lecture 6: Kernel Regression
1:07:14
2 years ago
Lecture 24 : Transformers
45:26
2 years ago
Lecture 4: Linear Regression 2
1:05:37
2 years ago
Lecture 23
50:23
2 years ago
Lecture 1: Introduction
1:14:47
2 years ago
Lecture 2: Basics of Machine Learning
1:05:12
Lecture 3: Linear Regression
59:38
2 years ago
Lecture 22
58:19
2 years ago
Lecture 21
1:07:49
2 years ago
Lecture 20
1:07:17
2 years ago
Lecture 19
59:30
2 years ago
Lecture 18
1:09:40
2 years ago
Comments
@ankitsingh-xl7bo 7 hours ago
The JSD in a GAN only appears for the optimal discriminator and not in all cases... isn't it?
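Editor's note on the identity the comment refers to (my own summary of the standard GAN result, not a quote from the lecture): the JSD form holds only once the discriminator is optimal for the current generator,

D^{*}(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}, \qquad C(G) = \max_D V(D, G) = -\log 4 + 2\,\mathrm{JSD}\bigl(p_{\text{data}} \,\|\, p_g\bigr).

For a non-optimal D, V(D, G) is only a lower bound on this quantity, so the JSD interpretation does not hold exactly.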
@OJOMA1 2 days ago
Thanks for this video. It solved my headache.
@ttreza5922 12 days ago
Anyone from 2024 watching this?
@newtonleibniz879 25 days ago
Can the notes PDF be provided?
@newtonleibniz879 29 days ago
The derivative of log(1-D(G(z))) at D(G(z)) = 0 is -1, so it is not 0 or diminished; then why the vanishing gradient problem and the need to change the generator loss function???
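Editor's note, a short worked check of the point being asked (my own sketch, not from the lecture): the slope with respect to the output is indeed -1, but the saturation appears one step earlier, at the discriminator's sigmoid. Writing a = D(G(z)) = σ(s), where s is the pre-sigmoid logit,

\frac{d}{ds}\log\bigl(1-\sigma(s)\bigr) = -\sigma(s) \;\to\; 0 \quad\text{as } a \to 0, \qquad \frac{d}{ds}\bigl(-\log \sigma(s)\bigr) = \sigma(s) - 1 \;\to\; -1 \quad\text{as } a \to 0.

So with the original loss, the gradient flowing back through the logit (and hence into the generator) vanishes exactly when the discriminator confidently rejects fakes, while the non-saturating loss -log D(G(z)) keeps it near -1; that is the usual motivation for changing the generator loss.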
@ket38 a month ago
Thank you Ahlad for the beautiful explanation!
@edsongeorgerebello547 a month ago
This is so good!!!
@edsongeorgerebello547 a month ago
But why is the course over?
@CyberwizardProductions a month ago
Appreciate your lectures.
@muhammadmaazkhan9116 a month ago
Thank you
@ahafeel a month ago
Thank you, Dr. Kumar. Appreciate your efforts very much. A very useful lecture. Is there a link to follow along with the PDF notes?
@homakashefiamiri3749 2 months ago
It was very good. Thanks.
@ashwanibhardwaj4930 2 months ago
In the last part, the summation over p_ij is written for all i (error here); it should be for all j.
@RomanPaunov 2 months ago
Ahlad, please let me know how to do that in Linux/Ubuntu. You are using an Apple/macOS machine. Thanks in advance.
@pranavkumarjha1733 2 months ago
I’m a bit confused at timestamp 17:16. When we feed the output from the generator, do we freeze the weights of the discriminator, or do we train both networks concurrently?
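Editor's note, a minimal sketch of the usual alternating GAN update (my own code, not the lecture's implementation; the generator/discriminator models and optimizers are hypothetical placeholders). The discriminator's weights are not literally frozen while training the generator; we simply take gradients of the generator loss with respect to the generator's variables only, so D is left untouched in that step.

import tensorflow as tf

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def train_step(generator, discriminator, g_opt, d_opt, real_images, noise):
    # 1) Discriminator step: gradients w.r.t. discriminator.trainable_variables only.
    with tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        d_real = discriminator(real_images, training=True)
        d_fake = discriminator(fake_images, training=True)
        d_loss = (cross_entropy(tf.ones_like(d_real), d_real)
                  + cross_entropy(tf.zeros_like(d_fake), d_fake))
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))

    # 2) Generator step: the loss flows through D, but only G's variables are updated.
    with tf.GradientTape() as g_tape:
        fake_images = generator(noise, training=True)
        d_fake = discriminator(fake_images, training=True)
        g_loss = cross_entropy(tf.ones_like(d_fake), d_fake)  # non-saturating loss
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))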
@oceanwave4502 2 months ago
26:19 Here, I'm not sure we replace Σ(x) with exp(Σ(x)) really because it is more numerically stable; I searched Google and found nothing specific. The output of the hidden layer in the encoder is (μ, Σ). In the implementation part (the last video in this series), these are the "Mean_layer" and "Standard_deviation_layer" variables. The output Σ can be negative because it comes straight from a fully connected (Dense) layer; however, the standard deviation of a distribution can never be negative. To fix this, we simply interpret the "Standard_deviation_layer" (Σ) variable as the "log of variance". When we need the variance, we simply compute "exp(Σ)". I think this is the true motivation for replacing Σ(x) with exp(Σ(x)) at 26:19: it is not a "real" variance, but is (interpreted as) the "log of variance".
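Editor's note, a small sketch of the log-variance convention the comment describes (my own illustration, not the lecture's code; the class and layer names are hypothetical). The dense "log-variance" head can output any real number, and exponentiating it guarantees a positive variance:

import tensorflow as tf

class Encoder(tf.keras.Model):
    def __init__(self, latent_dim=2):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(64, activation="relu")
        self.mean_layer = tf.keras.layers.Dense(latent_dim)     # mu, unconstrained
        self.log_var_layer = tf.keras.layers.Dense(latent_dim)  # interpreted as log(sigma^2)

    def call(self, x):
        h = self.hidden(x)
        mu = self.mean_layer(h)
        log_var = self.log_var_layer(h)
        # Reparameterization trick: sigma = exp(0.5 * log_var) is always positive.
        eps = tf.random.normal(tf.shape(mu))
        z = mu + tf.exp(0.5 * log_var) * eps
        return z, mu, log_var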
@NancyLee-s5j 2 months ago
Emmanuelle Loop
@learnwitharefin3269 2 months ago
Thanks, sir.
@__sandeepkuyadav 2 months ago
Can you make all the reinforcement learning videos free, please?
@the-ghost-in-the-machine1108 2 months ago
This series was a nice review of CNNs for me, thanks.
@VinayPrasadTamta-s8o 2 months ago
Sir, I think with the diagram X cannot be the union of all Yi events. It should be the union of some events from Yi.
@nourammar4368 2 months ago
Valuable lectures. THANK YOU.
@marinamaher8211 2 months ago
Magnificent!
@rounak8774 3 months ago
Thank you very much. 😊
@ankitsingh-xl7bo 3 months ago
At 19:57, in case 2 you say the KL divergence (regularizer) is present, but in the second figure (for case 2) it is written 'without regularizer'??
@ankitsingh-xl7bo 3 months ago
What is the prior distribution? Is it the distribution of the input data? If it is, then how can we assume it to be Gaussian with zero mean and unit variance??
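Editor's note on where the prior sits in the standard VAE objective (my own summary of the usual setup, not a quote from the lecture): the prior is over the latent code z, not over the input data, and p(z) = N(0, I) is a modelling choice; the KL term then pulls the encoder distribution q(z|x) toward it:

\mathcal{L}(x) = \mathbb{E}_{q_\phi(z \mid x)}\bigl[\log p_\theta(x \mid z)\bigr] - \mathrm{KL}\bigl(q_\phi(z \mid x)\,\|\,p(z)\bigr), \qquad p(z) = \mathcal{N}(0, I).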
@vipulsangode8612 3 months ago
Are you missing a summation at 6:02 in the LSTM gradient equation? There are two summations in the RNN equation, but there should be at least one summation in the LSTM equation, right? Even if we are calculating gradients for one time step.
@ankitsingh-xl7bo 3 months ago
Sir, can you reply to my mail?
@hassenzaayra5419 3 months ago
Thank you so much. Can you share the code?
@ankitsingh-xl7bo 3 months ago
Thank you so much for this.
@ankitsingh-xl7bo 3 months ago
Is there a playlist for the mathematical preliminaries?
@RahulKumar-ez6vw 3 months ago
Sir, kindly finish your NLP playlist.
@SAhellenLily 3 months ago
It looks like the math equations map onto the function code in the Python language.
@SAhellenLily 3 months ago
Thank you teacher 😊
@SAhellenLily 3 months ago
At 8:35, CS circuit: Av = -gm1*ro2 / (1 + gm1*(1/gm3 // ro3)) ... Answer
At 18:12, source follower: Av = gm1*(RL // (1/gm1)) = gm1*RL / (1 + gm1*RL) ... Answer
At 20:59: Av = gm1*(1/gm1 + 1/gmb), approximately gm1*(1/gm1) = 1 ... Answer
At 25:26, CG circuit: Av = vo/vi = -gm*vgs*RD / (-vgs) = gm*RD ... Answer
At 27:05: gate and drain connected, then i = -gm1*vgs = -gm1*(0 - vin), so vin/i = Rin = 1/gm1 ... Answer
@AKASHYADAV-qf1sr 3 months ago
How exactly is it a stacked autoencoder? I could see only one AE used for a single task, which was denoising. Isn't it a denoising AE?
@arjunsaxena5207 3 months ago
Amazing lecture! Explained everything very concisely. Loved it!
@basab4797 3 months ago
Professor, it would be great if you shared the roadmap or plan for the whole playlist.
@chandrakishtawal4595 3 months ago
Very nicely explained 👍
@basab4797 4 months ago
Professor, later you could start a video series with PyTorch. A PyTorch playlist is very much needed, as fewer people know it.
@basab4797 4 months ago
Awesome lecture
@virtualrealityworld9 4 months ago
Good to see you back after a long time, sir 😊😊 We are happy about that ❤
@souliconic6486 4 months ago
Sir, please upload videos fast 🙏
@mohammadyahya78 4 months ago
Finally, Professor! Thanks for coming back. We need a series about LLMs, please!
@basab4797 4 months ago
Please share the notebook files also. Your lectures are awesome.
@AhladKumar 4 months ago
Will share soon. Working on it.
@AhladKumar 4 months ago
github.com/kumarahlad/TensorFlow_Lab
@DEVRAJ-np2og 4 months ago
Is there any other video apart from these 14 videos?
@DEVRAJ-np2og 4 months ago
Sir, is this the full course?
@malathip4043 4 months ago
What is the reason for the filter dimension being 9×9?
@sourabhverma9034 4 months ago
This is not backprop through time; this is just normal backprop. It does not work on LSTMs or even RNNs, because the derivative of the loss with respect to the input weights does not depend only on the current hidden state: each hidden state at time t depends on the state at t-1, which in turn depends on the input weights again, so the derivative itself propagates backwards through all time steps. This was the whole point of the paper "Backpropagation through time".
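Editor's note, a compact way to write the dependence the comment describes (my own sketch of the standard BPTT sum for a vanilla RNN with hidden states h_t and shared weight matrix W, not taken from the lecture):

\frac{\partial L}{\partial W} \;=\; \sum_{t=1}^{T} \sum_{k=1}^{t} \frac{\partial L_t}{\partial h_t} \left( \prod_{i=k+1}^{t} \frac{\partial h_i}{\partial h_{i-1}} \right) \frac{\partial h_k}{\partial W},

where ∂h_k/∂W is the immediate partial derivative (treating h_{k-1} as constant). The product of Jacobians through the hidden states is what carries the gradient back through every time step; dropping the inner sum over k collapses this to ordinary backprop on a single unrolled step.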
@VoltVipin_VS 4 months ago
Still the best video on VAEs after all these years. I rewatch this series whenever I need to brush up on VAEs.