Poll 4, I don't understand. Doesn't momentum lead to a larger step size?
@DeltaJes-co8yu · 14 days ago
I really like how he gives the background information on scientists such as Donald Hebb. It makes the lectures very engaging. Bhiksha is an amazing professor! This should be the first lesson anyone takes on ML or neural networks.
@sidharth929 · 29 days ago
Best lecture on CTC!
@rj-nj3uk · 2 months ago
Why does he have a spear at 44:21?
@rohangarg7739 · 2 months ago
It's really tough to grasp.
@VonNeumannFan · 3 months ago
These are the best deep learning lectures on the internet. Can you please upload the worked-out homework problem videos or something of that sort? I don't think anyone self-studying this course would otherwise know if their answers are correct. Thanks for helping people like me.
@josephmartin6219 · 3 months ago
Was really helpful, thank you!
@amitabhachakraborty497 · 3 months ago
too much noise
@aravindreddyj4624 · 3 months ago
I am following along with the schedule and want to do all the HWs and hackathons. How can I get access to those materials?
@VonNeumannFan · 3 months ago
You can find them on the course website; just search for CMU 11-785 and you can find all of them.
@NanXiao · 3 months ago
The useful content starts at 23:47.
@amitabhachakraborty497 · 3 months ago
First viewer
@jesperolsen98 · 3 months ago
The mouse cursor is buzzing around like a fly, an unnecessary distraction.
@vdudz · 4 months ago
super swaggy
@MohammedAwney84 · 4 months ago
Could you please give a link to the notebook?
@AlgoNudger · 4 months ago
Thanks.
@fahaddeshmukh3817 · 4 months ago
What's the name of this professor?
@paulrajkhowa · 2 months ago
Professor Bhiksha Raj
@gageshmadaan6819 · 4 months ago
I couldn't understand why we need to go through the stochastic gradient updates randomly and not cyclically.
@AbidineVall · 2 months ago
Otherwise you would update the function greedily with respect to a fixed subset of the dataset, which would be suboptimal, given that the model needs to "see" the whole data to some extent. Randomness allows a sweep over a large, diverse set of training samples: the bet is that random sampling will, on average, pick diverse subsets, which gives a more realistic update of the model parameters.
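The contrast described in the reply above can be sketched in a few lines of Python. This is purely illustrative (the function `sgd_epochs` and its parameters are not from the lecture): it only shows the *visit order* that a cyclic sweep versus per-epoch shuffling would produce, not an actual training loop.

```python
import random

def sgd_epochs(n_samples, n_epochs, shuffle=True, seed=0):
    """Yield the order in which samples would be visited each epoch.

    With shuffle=True (common SGD practice) the indices are re-shuffled
    every epoch; with shuffle=False the sweep is cyclic, presenting the
    samples in the same fixed order every epoch.
    """
    rng = random.Random(seed)
    order = list(range(n_samples))
    for _ in range(n_epochs):
        if shuffle:
            rng.shuffle(order)
        yield list(order)  # copy, so callers can keep each epoch's order

cyclic = list(sgd_epochs(5, 3, shuffle=False))   # same order every epoch
shuffled = list(sgd_epochs(5, 3, shuffle=True))  # a fresh permutation each epoch
```

A cyclic sweep repeats the same sequence of (possibly correlated) samples, so consecutive updates keep pushing the parameters in the same biased direction; re-shuffling each epoch breaks that correlation while still visiting every sample exactly once per epoch.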
@cholocatelabs · 4 months ago
Great lecture :)
@egeboguslu682 · 5 months ago
59:46 "Sometimes these formulae may not make sense, but then if you look at them just right, they begin telling their own story, right? Every single mathematical term in life tells you a story if you know how to read it" 🤓✨
@harshdeepsingh3872 · 5 months ago
Wow, everything falling into place!
@egeboguslu682 · 5 months ago
Wow, breathtaking quality. This series might be the most comprehensive explanation of deep neural nets available; somehow the professor is able to wear the students' hat and ask the most critical questions every time! Big thanks to everyone involved in making these available.
@HetThakkar-h8h · 5 months ago
This was absolutely brilliant. A masterclass in lecture content design. Very well pieced together -> great flow -> Wow moment towards the end -> evokes a lot of curiosity
@britaom3299 · 5 months ago
A great and informative lecture!! Very much appreciated!
@chovaus · 5 months ago
Best course on deep learning. It's now 2024 and I'm happy I found it again. Well done!
@yadavadvait · 6 months ago
nice lecture!
@ahmadmaroofkarimi9125 · 6 months ago
Lecture begins at 6:02.
@harshdeepsingh3872 · 6 months ago
Best explanation; can't thank you enough for uploading these lectures.
@emrullahcelik7704 · 6 months ago
Great lecture, thank you!
@yadavadvait · 6 months ago
I struggled with grasping how the dimensions of the filters and data change with the convolutions and pooling, and this video made it clear. Thank you!
@emrullahcelik7704 · 6 months ago
Wonderful lecture! Thank you.
@pangs11 · 6 months ago
Lecture starts @ 2:26
@pangs11 · 6 months ago
Lecture starts @ 3:38
@peichunhua7138 · 6 months ago
Start at 12:42
@yadavadvait · 6 months ago
thanks!
@peichunhua7138 · 7 months ago
Lecture starts at 1:15
@ML_n00b · 7 months ago
Great, carefully thought-out, original course. I was watching this leisurely and didn't realise an hour had gone by.
@vincentdey4313 · 7 months ago
This is a very good teacher. He knows how to explain things to students very well
@nayanvats3424 · 7 months ago
Your teaching unravels the exact concept that is missed by most tutors. Thanks for the great lecture ❤
@danhvo2702 · 8 months ago
Thank you for the great lecture! P.S. The stick you're holding is impressive.
@laalbujhakkar · 8 months ago
These lectures are some of the best on the 'net along with Andrew Ng's lectures on Deep Learning. Mad props to the instructor who takes the time to go through the concepts. I wish I had access to the quizzes and group discussions.
@laalbujhakkar · 8 months ago
What does "We have the id hiyore" mean?
@ian-haggerty · 8 months ago
Thank you again to Carnegie Mellon University & Bhiksha Raj. I find these lectures fascinating.
@ian-haggerty · 8 months ago
Couldn't help but think of the 3B1B videos on Hamming codes while watching this.
@ian-haggerty · 8 months ago
Loving this series! Such a talented lecturer.
@vctorroferz · 8 months ago
Thanks for sharing! :) How can I find the rest of the lectures of the bootcamp? Thanks again for such a nice job!
@javier2luna · 8 months ago
30:12 question: he says h1, h2 and h3 are k1, k2 and k3, but h1, h2 and h3 are hidden layers of a neural network, right?
@florianstephan5745 · 9 months ago
Amazing lecture as usual, thank you! Two cents from a German: nouns (apple, name) start with a capital letter, so you would write "Apfel" and "Name"... but I'm very happy you chose German in this example ;-)