Logistic Regression and the Perceptron Algorithm: A friendly introduction

58,316 views

Serrano.Academy

Comments: 83
@paedrufernando2351 6 years ago
One of your greatest videos sir. Hands down... Perfect timing, perfect explanation... Wow, simply amazed. I downloaded the video so that I have a copy of it, just in case YouTube is nonexistent in a zillion years.
@kartikpunjabi7373 4 years ago
The best video on Logistic Regression I have ever seen. Sir, keep sharing tutorials like this; they are very helpful for building deep understanding and implementations.
@vikaspundir6799 2 years ago
This is the best channel for machine learning, and the easiest to understand.
@blesucation4417 1 year ago
Just want to leave a comment so that more people could learn from your amazing videos! Many thanks for the wonderful and fun creation!!!
@SerranoAcademy 1 year ago
Thank you! :)
@ngalatalla4032 4 years ago
Cheers to you, man. I just opened a beer to celebrate. I finally understand what gradient descent is, how it works, and why. Thanks, Luis!
@sachinshelar8810 5 years ago
You are the best in the business of teaching. I came across your content a couple of days ago and now I am addicted.
@thinguyen8865 5 years ago
Your tutorial is very easy to grasp. Best channel on machine learning with good graphic demonstration. Thank you for your hard work
@456youuu 5 years ago
You deserve more subscribers. All my professors should be at your level. Great video, I learned so much!
@md-ed7ey 3 years ago
Just saw this. Wow. I'm looking forward to seeing ALL of your vids. Thank you for your kindness and time to share with the world.
@craighennessy3183 2 years ago
You sir are a great teacher! The way you explained that was simply amazing! That was so engaging!
@EngineeringChampion 5 years ago
Thank you very much for turning the mathematical concepts into very simple graphs! Unforgettable!
@xavierhuijts2574 1 year ago
Best tutorial on perceptron out there! Thank you
@eladkipiani9571 5 years ago
Really really great. I looked for this explanation for months
@dragolov 5 years ago
The best #MachineLearning videos are by Luis Serrano. Respect + Thank you, Maestro!
@uzairfarooq7793 3 years ago
The best video I've come across to understand perceptron algorithm concept... And believe me I've tried many! :-)
@darbhamullaeswaraphanipras6356 5 years ago
One of the best videos so far on the logistic regression..
@neelchattoraj 3 years ago
Amazing, this really was as friendly as this topic can get.
@sandipansarkar9211 3 years ago
Great video. Learnt the concept in an hour.
@rupaliborkar8647 3 years ago
Amazing video! This video explains logistic regression in layman's language, which helped me understand the algorithm in depth! Thank you so much!
@harjos78 5 years ago
Awesome tutorial. The best explanation of logistic regression and the perceptron algorithm! Take a bow, Luis!
@mreddy7356 4 months ago
Thank you, I am watching again and again. I bought your book. I am a beginner. Thank you again; you and stammer made things very easy.
@lakpanuru3400 1 year ago
Love you for making my life so much easier, and for adding fuel to my passion for learning. This video helped a ton.
@MrGbruges 3 years ago
Very good, Luis. I am new to your channel. I'm here because your Grokking book is nice.
@Mandeep_Punia 4 years ago
Rather than picking the points randomly, what if we iterate linearly over the array of points? I think by doing this we can improve our model's accuracy, because when picking random points it may happen that the random function keeps picking the same few points every time. Sir, what do you think about this? By the way, your explanation was really good.
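A minimal sketch of the idea in the comment above, assuming a standard perceptron-style update on the line ax + by + c = 0 (the function name, learning rate, and toy data are illustrative, not from the video): shuffling the points once per epoch and visiting each exactly once avoids the possibility of the random pick repeatedly landing on the same few points.

```python
import random

def train_one_epoch(points, labels, a, b, c, lr=0.01):
    """One epoch: visit every point exactly once, in a shuffled order,
    instead of drawing random points with replacement."""
    order = list(range(len(points)))
    random.shuffle(order)  # random order, but no point is skipped or repeated
    for i in order:
        x, y = points[i]
        prediction = 1 if a * x + b * y + c >= 0 else 0
        if prediction != labels[i]:  # misclassified: nudge the line
            step = lr if labels[i] == 1 else -lr
            a += step * x
            b += step * y
            c += step
    return a, b, c

# Illustrative usage with a tiny, linearly separable dataset.
points = [(1.0, 1.0), (2.0, 3.0), (-1.0, -2.0), (-2.0, -1.0)]
labels = [1, 1, 0, 0]
a, b, c = random.random(), random.random(), random.random()
for _ in range(100):
    a, b, c = train_one_epoch(points, labels, a, b, c)
print(a, b, c)
```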
@dhiahassen9414 5 years ago
"Hello grandson, I have made cookies, love grandMa" ... that's obviously spam
@luisbermudez7000 6 years ago
Halfway into the video, I thought: "This is the Khan Academy of Machine Learning!" Really great explanations. One thing you didn't drive home as much is that I could just code this up and it would work. I imagine this is still true though (for perceptron and logistic).
@SerranoAcademy 6 years ago
Thanks! Yes, I plan to start making GitHub labs with videos, just need to get around to it. :) If you have any code you'd like to share, lemme know and I'll link it!
@DinaIlman 6 years ago
I hope you could create a video on the "Maximum Entropy Markov Model", whose training is based on logistic regression but whose testing uses Viterbi; maximum entropy is logistic regression with more than one class.
@kamalnayan9157 6 years ago
Thank you very much for such a great in-depth explanation. Please continue uploading more and more videos.
@debshankarnaskar794 1 year ago
Sir, you are awesome. I have just started learning Data Science, and your explanations really clear the fog from the picture. Thank you so much for putting such wonderful content on the internet. If you have any paid course on Data Science for beginners like me, then please share the link in a reply to this comment. And once again, your explanations are just great, loved it ❤
@2107mann 4 years ago
Watching for the 3rd time, not because I couldn't understand it, but for the beauty and simplicity of the explanation.
@GiwooLee 5 years ago
Best video I've seen on Perceptron Algorithm hands down. Thank you Luis!
@javiercarrillomartinez1289 3 years ago
Wow, this is absolute gold.
@yeeunsong3423 5 years ago
Thanks for your excellent video. It was a real help!
@berknoyan7594 5 years ago
In some videos about the perceptron, lecturers say "if our total error is 0 (linearly separable), then the perceptron algorithm finds that linear classifier after some iterations." The perceptron decreases the error, but the error may not reach 0. And if the perceptron finds the minimum-error linear classifier, we can call that a linear classifier as well, not just a perfect one. Can you correct me on that, Luis? Also, thanks for your work; it's a sound introduction.
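For reference on the question above, the classical perceptron convergence theorem (Novikoff), which is standard material and not stated in the video: if the data are linearly separable with margin γ and every point lies within radius R of the origin, the algorithm stops after finitely many mistaken updates, with the bound below. If the data are not linearly separable, there is no such guarantee, and the final line is not necessarily the minimum-error classifier.

```latex
\text{number of updates} \;\le\; \left(\frac{R}{\gamma}\right)^{2},
\qquad R = \max_i \|x_i\|, \quad \gamma = \text{separation margin}.
```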
@youssefdirani 4 years ago
11:11 I thought an epoch is how many times we go through *all* the points, not how many points we pick to repeat the process of slightly moving the line. Am I wrong? Thanks for the help. BTW, why is it called the *perceptron algorithm*? It seems to have nothing to do with a perceptron.
@asarafraz 4 years ago
You are right; I guess he is referring to a batch, not an epoch.
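A small sketch of the terminology discussed in this thread (the counts and variable names are illustrative): under the usual definition, an epoch is one full pass over all the points, while the loop in the video repeats "pick one random point" a fixed number of times, which counts individual iterations rather than passes.

```python
import random

points = [(float(i), float(i % 3)) for i in range(10)]  # stand-in for 10 training points

# Usual definition: an epoch is one full pass over *all* the points.
epoch_updates = 0
for epoch in range(1000):
    for point in points:
        epoch_updates += 1           # one candidate update per point per epoch
print(epoch_updates)                 # 1000 epochs x 10 points = 10000

# Loop as described in the video: repeat 1000 times, each time picking
# one random point. That is 1000 iterations, not 1000 full passes.
iteration_updates = 0
for step in range(1000):
    point = random.choice(points)
    iteration_updates += 1
print(iteration_updates)             # 1000
```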
@karmabender 3 years ago
Please make a separate video on logistic regression and how it is different from linear regression. You mixed it with the perceptron algorithm. You explained the linear algorithm very well, but I was unable to get a clear understanding of logistic regression from this video.
@adorablecheetah2930 5 years ago
Can you please, please make a video on maximum likelihood estimation or expectation maximisation?
@malyansoferi 5 years ago
Excellent clear and informative explanation. Amazing work, thank you!
@Viralvlogvideos 4 years ago
Sir, you're awesome; the way you explain things is very simple and clear.
@SerranoAcademy 4 years ago
Thanks Indratej! There's a whole deep learning course I taught here (free)! www.udacity.com/course/deep-learning-pytorch--ud188
@Viralvlogvideos 4 years ago
@@SerranoAcademy Sure, I will check it out, sir. Thanks for sharing.
@adriengardais8950 3 years ago
Thanks for that video!
@luisbermudez7000 6 years ago
The gradient descent challenge might need some further help on "What is Gradient Descent?" I know there are lots of videos on this online, but do you have one you like or one you produced?
@SerranoAcademy 6 years ago
Yeah, I need to do some serious material on gradient descent. I have something in a video called "A friendly introduction to deep learning", if you'd like to take a look.
@gchumbes 4 years ago
Excellent video, thanks for sharing!
@sandeepgill4282 2 years ago
Lovely, thanks a lot dear.
@MohamedMahmoud-ul4ip 5 years ago
Amazing as always
@philtoa334 3 years ago
Great video thank you so much.
@ojaswighate2588 1 year ago
Thank you for sharing it
@gren287 6 years ago
Thanks for the lesson Luis :)
@antonioyt7719 6 years ago
Thanks for sharing this video!
@abdelobaid7681 4 years ago
Very clear. Thanks.
@rizwandurrani3392 5 years ago
Thank you so much. It's very easy to understand, and your presentation skills are awesome. Sir, would you please share the slides?
@calvinsbrennholzverleih3588 5 years ago
Very nice video!
@loyodea5147 5 years ago
Thank you, once again!
@paedrufernando2351 6 years ago
Nailed it... Thanks
@abinashisingh758 5 years ago
When are you planning to finish Chapter 7 on Linear Regression of your book, Grokking Machine Learning?
@GourangoModak 5 years ago
Sir, your explanation is very good. It helped me a lot. Thank you so much. Please make some other videos on ML.
@user-or7ji5hv8y 5 years ago
Great for developing intuition!
@rodionromanovich3089 6 years ago
THANK YOU SO MUCH!!!
@abdelrhmanrhyaseen6194 2 years ago
Wonderful, thanks!
@luisbermudez7000 6 years ago
I kind of wonder why it's called Logistic Regression. Because it uses a Log-Based Error? Did Linear Regression use a linear error?
@SerranoAcademy 6 years ago
Yes, great questions! I think it's because of the log error, or the logit: the logistic function is the one that sends everything to [0,1]. Although the strangest thing for me is that it's called logistic regression even though it is not regression but classification. Linear regression can use a quadratic error or an absolute value (almost linear) error, so the reason it's called linear is that the output is a line.
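For readers following this exchange, the two standard formulas in question, written out (textbook definitions, using the same line ax + by + c as the perceptron pseudocode quoted in another comment; not transcribed from the video):

```latex
\hat{y} = \sigma(ax + by + c), \qquad
\sigma(z) = \frac{1}{1 + e^{-z}} \in (0,1)
\quad \text{(the logistic, or sigmoid, function)}

E = -\sum_{i}\Big[\, y_i \ln \hat{y}_i + (1 - y_i)\,\ln(1 - \hat{y}_i) \Big]
\quad \text{(the log loss that logistic regression minimizes)}
```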
@francislow1767 5 years ago
Great content! Thanks so much :)
@sasna8800 5 years ago
Wow, I have spent months trying to understand ML. Thank you a lot!
@ali8283 5 years ago
Thank you for this video. :)
@farzadfarzadian8827 5 years ago
You are clever and clear; my 5th-grade son understands it.
@user-or7ji5hv8y 5 years ago
Can you do a video on variational Bayes and KL divergence?
@sharkk2979 5 years ago
I love you!!!!😍
@hichamallaham3787 1 year ago
I want an implementation of the following pseudocode:
Step 1: Start with a random line with equation ax + by + c = 0
Step 2: Pick a large number, 1000 (number of repetitions, or epochs)
Step 3: Pick a small number, 0.01 (learning rate)
Step 4: Repeat 1000 times:
    Pick a random point from the dataset
    If the point is correctly classified:
        Do nothing
    If the point is incorrectly classified:
        Add ±0.01 to a
        Add ±0.01 to b
        Add ±0.01 to c
Help me
@SerranoAcademy 1 year ago
Definitely! Here it is: github.com/luisguiserrano/manning/tree/master/Chapter_5_Perceptron_Algorithm In that same repo, github.com/luisguiserrano/manning/, I have many others from the videos.
@hichamallaham3787 1 year ago
🥰🥰😍😍@@SerranoAcademy
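In addition to the notebook linked in the reply above, here is a minimal, self-contained Python sketch of the pseudocode in the question (this is not the repository's code; the sign convention, the classification rule a*x + b*y + c >= 0, and the toy data are assumptions):

```python
import random

def train_perceptron(points, labels, repetitions=1000, lr=0.01):
    """Perceptron trick: start with a random line ax + by + c = 0, then
    repeatedly pick a random point and nudge the line only when that
    point is misclassified."""
    a, b, c = random.random(), random.random(), random.random()  # Step 1: random line
    for _ in range(repetitions):                                 # Steps 2 and 4: repeat 1000 times
        i = random.randrange(len(points))                        # pick a random point
        x, y = points[i]
        prediction = 1 if a * x + b * y + c >= 0 else 0
        if prediction != labels[i]:                              # incorrectly classified
            sign = 1 if labels[i] == 1 else -1                   # Step 3: move by +/- the learning rate
            a += sign * lr * x
            b += sign * lr * y
            c += sign * lr
    return a, b, c

# Tiny example: two points labeled 1 and two labeled 0.
points = [(1.0, 1.0), (2.0, 2.0), (-1.0, -1.0), (-2.0, -1.0)]
labels = [1, 1, 0, 0]
print(train_perceptron(points, labels))
```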
@Shabbir2749 5 years ago
Sir, please explain ANCOVA & GLM
@scherwinn 5 years ago
Clever, great!
@nenslen679 5 years ago
Hi Luis, great video! The way you explained these concepts was nice and easy to understand. I decided to implement the basic perceptron algorithm in Python; feel free to check it out here: github.com/nenslen/perceptron I tried to use the same terminology as you did in the video (e.g. red and blue points), so it should be easier to relate it to your explanation. If you end up taking a look, any comments or feedback are appreciated, thanks!
@Enterprise-Architect 2 years ago
Nice...
@4767039 4 years ago
The error is minimized as the derivative of the error function approaches zero.
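In symbols, that observation is the standard gradient-descent picture (general notation, not taken from the video): each step moves the weights a small amount against the gradient of the error, so the steps shrink toward zero as the derivative does.

```latex
w \;\leftarrow\; w - \eta\, \nabla E(w), \qquad \nabla E(w) \approx 0 \;\Rightarrow\; \text{the update is} \approx 0.
```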
@ابولفضلجهانی-ص2و 2 years ago
Support Vector Machines (SVMs): A friendly introduction
30:58
Serrano.Academy
91K views
Linear Regression: A friendly introduction
31:05
Serrano.Academy
40K views
Machine Learning: Testing and Error Metrics
44:43
Serrano.Academy
110K views
A Friendly Introduction to Generative Adversarial Networks (GANs)
21:01
Serrano.Academy
264K views
Naive Bayes classifier: A friendly approach
20:29
Serrano.Academy
147K views
A friendly introduction to Deep Learning and Neural Networks
33:20
Serrano.Academy
703K views
MIT: Machine Learning 6.036, Lecture 4: Logistic regression (Fall 2020)
1:21:14
Shannon Entropy and Information Gain
21:16
Serrano.Academy
208K views
Logistic regression : the basics - simply explained
20:25
TileStats
38K views
Convolutional Neural Network from Scratch | Mathematics & Python Code
33:23
The Independent Code
194K views