Respect, Sir. Thank you so much; I am more than satisfied with your lecture.
@lcsxwtian 5 years ago
Please keep doing this, Brandon. We can't thank you enough, and leaving you a nice comment like this is the least we can do :)
@oauthaccount7974 2 years ago
How vivid was the NN concept? OMG, I can't believe it. I was looking for this for a long time; finally I got it. :)
@BrandonRohrer 2 years ago
That's great to hear!
@Air_seeker 10 months ago
The depth of the explanation and visualization: there are no words to describe how much it expresses and helps to grasp the most fundamental and core concepts of Neural Networks. THANKS Brandon
@mostinho7 4 years ago
12:00 How a logistic regression curve is used as a classifier: it classifies a continuous variable into categories (one input), but can be extended to multiple inputs.
12:52 With two inputs, the curve becomes 3D. IMPORTANT: the contour lines projected onto the (x1, x2) plane show the "decision boundary". Logistic regression always has linear contour lines; that's why logistic regression is considered a linear classifier.
16:40 (Non-linear) How a curve is used as a classifier (the network has a single output, plotted as a function of the input). If there is only one input x and one output y, then the input is classified by this 2D curve: where the curve is above a certain line (say y = 0) is category A, and where the curve is below the line is category B. The non-linear classifier doesn't just split the points into two categories; the categories can also be interleaved. So we can classify an input into two categories using only one output node, by specifying a threshold line/value of the output (y = threshold) and seeing where that line intersects our curve. If our neural network has two inputs and one output, then the generated curve is 3D.
19:40 Another way to classify an input into two classes is to have two output nodes (what I'm used to). Todo: continue from 21:59.
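A minimal sketch of the "linear decision boundary" point in the note above, assuming NumPy and scikit-learn's LogisticRegression; the data and variable names are invented for illustration. With two inputs, the fitted model is sigma(w1*x1 + w2*x2 + b), and its contours project onto the (x1, x2) plane as straight lines:

```python
# Hypothetical example: the decision boundary of logistic regression is linear.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))            # two inputs, x1 and x2
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # two categories

model = LogisticRegression().fit(X, y)
w1, w2 = model.coef_[0]
b = model.intercept_[0]

# Every contour of sigma(w1*x1 + w2*x2 + b), including the 0.5 decision
# boundary, satisfies w1*x1 + w2*x2 + b = const: a straight line.
print(f"boundary: {w1:.2f}*x1 + {w2:.2f}*x2 + {b:.2f} = 0")
```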
@lakeguy65616 6 years ago
Hands down, this is the best video explaining the concepts and boundaries of Neural Networks I've ever watched. Well done!
@shedrackjassen913 1 year ago
This was very satisfying. Keep up the good work
@xarisalkiviadis2162 10 months ago
What a diamond of a channel I just found... incredible!
@chloegame5838 2 years ago
Finally, the video I was looking for! A clear and brilliant explanation of NNs that ties in decision boundaries, linear and nonlinear functions, and what the values mean throughout an NN.
@BrandonRohrer 2 years ago
I'm really happy to hear it Chloe. This video has resonated with a much smaller audience than some of my others, but it's one of my favorites and one that I'm proud of.
@АндрейПавлов-л7г 2 years ago
!
@waterflowzz 3 years ago
Wow, this is the best explanation of a neural network I've ever seen! This channel is so underrated. I hope you get way more subs/views.
@BrandonRohrer 3 years ago
Thanks! :) I'm really happy it hit the spot.
@petrmervart4994 6 years ago
Excellent explanation. I really like the examples with different numbers of outputs and hidden nodes.
@sridharjayaraman8094 5 years ago
Awesome, many, many thanks. One of the best lectures for an intuitive understanding of NNs.
@somdubey5436 4 years ago
Great work and very informative. One thing that really made me think is how anyone could even dislike this video.
@philipthatcher2068 6 years ago
Brilliant. Best explanation of this ever. Great visuals also.
@carlavirhuez4785 5 years ago
Dear Brandon, you have just saved me a lot of time and your explanation is very simple and intuitive. You have helped this humble student :´)
@larsoevlisen 6 years ago
Thank you for your videos! They have helped me a lot in digesting related information. One piece of feedback on the visual side: when working with complex structures in a visual way (like the layer diagrams in this video), adding focus to the objects you are talking about could greatly help viewers follow your narrative. I don't know how you create these graphics, but with this video as an example, you could e.g. outline and light up the nodes you speak of, and outline the whole of the layer boxes (the green boxes surrounding layers). Thank you for your contribution.
@BrandonRohrer 6 years ago
Thanks Lars! I appreciate the feedback. I really like your idea for diagram highlighting, and I'll see if I can fold it into my next video.
@stasgridnev 5 years ago
Wow, that is just great. Thanks for the awesome explanation with visualization. I made several notes based on your video. Thanks, and I wish you luck.
@johanneszwilling 6 years ago
😍 Thank You! 😊 Got myself a whiteboard now 😀 Much more fun learning this stuff
@hackercop 3 years ago
This explains activation functions very well!
@abhinav9561 3 years ago
Thanks Brandon. Very helpful and much needed. The graphs really helped with the intuition. Can I ask how you made those non-linear graphs?
@BrandonRohrer 3 years ago
I'm glad to hear it Abhinav! Here is the code I used to make the illustrations: github.com/brohrer/what_nns_learn
@abhinav9561 3 years ago
@BrandonRohrer thanks
@ayush612 6 years ago
Thanks Brandon. You are awessooomee!
@MartinLichtblau 6 years ago
Excellent explanations for so many deep insights. 👍🏼
@177heimu 5 years ago
Great explanation of the topic! Any plans to start a mini-series on mathematics covering calculus, statistics, and linear algebra? I believe many would benefit from it :)
@Silvannetwork 5 years ago
Great and informative video. You are definitely underrated.
@gregslu 6 years ago
Very well explained, thank you
@igorg4129 4 years ago
Great video, thanks. Please, someone help me understand something.
1) I do not understand where (at which stage of the network) you can observe a sigmoidal 3D surface like the one shown at 13:05. In my opinion you only get a 2D plane defined by the weights and bias: x1*w1 + x2*w2 + b = output. Into this plane you plug the x1 and x2 you have, and get a simple number (a scalar) on the output axis. This scalar will NEVER tell anyone that it came from a 2D plane when it is plugged into the sigmoid formula at the next step. Thus the sigmoid trick is a totally 2D operation: you just plug the scalar from the previous step into the x axis and get the y value as the final output of the first layer. So, in my opinion, such a 3D sigmoidal surface as shown never exists... Please tell me what I missed.
2) At 14:50 (similar to the first question): what do you mean by "when we add them together"? I mean, where do you mathematically perform this addition of one 3D curve to another? Correct me if I am wrong, but each activation function gives me in the end only one simple number (a scalar)! In the case of a sigmoid this scalar is between 0 and 1, say 0.7, but it is just a scalar and NOT a surface! Technically, when this 0.7 reaches the second layer it acts like a regular input, and NO ONE KNOWS that it was born from a sigmoid. Could you please clarify this point for me?
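A small sketch that may help with both questions, assuming NumPy; all weights and names here are illustrative. For any single input pair, sigma(w1*x1 + w2*x2 + b) is indeed just a scalar, but evaluated over the whole grid of (x1, x2) inputs it traces out the curved surface shown at 13:05, and the "addition" at 14:50 happens in the next layer, where each hidden unit's output is multiplied by a weight and summed:

```python
# Illustrative sketch: hidden-unit surfaces and how the next layer adds them.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Evaluate two hidden units over a grid of inputs. Each grid point gives a
# scalar; plotted over the whole grid, each unit's output is a 3D surface.
x1, x2 = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50))
hidden_a = sigmoid(1.5 * x1 - 1.0 * x2 + 0.2)
hidden_b = sigmoid(-0.5 * x1 + 2.0 * x2 - 1.0)

# The second layer forms a weighted sum of the hidden units' scalar outputs.
# Evaluated everywhere, the surfaces add point by point into a new surface.
output = 0.8 * hidden_a - 0.6 * hidden_b
print(output.shape)  # (50, 50): one value for every (x1, x2) pair
```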
@jenyasidyakin8061 5 years ago
Wow, that was very clear! Can you do a course on Bayesian statistics?
@BrandonRohrer 5 years ago
Thanks! Have you watched this one yet?: kzbin.info/www/bejne/a3-wqZyFfLFmb68
@AshutoshRaj 5 years ago
Awesome, man!! Can you relate the basis and the weights of a neural network?
@purnasaigudikandula3532 6 years ago
Please try to make a video explaining the math behind every machine learning algorithm. Every beginner out there can get the theory of an algorithm but can't get the math behind it.
@lifeinruroc5918 2 years ago
Any chance you could quickly explain why the resulting models are straight lines?
@sakcee 1 year ago
Excellent!
@harrypotter6505 1 year ago
I am stuck on understanding why the "i" below the summation notation is necessary. Someone please, would it make a difference not to write that "i" below?
@BrandonRohrer 1 year ago
The i underneath SUM_i a_i means to sum up the terms for all values of i, for example, add up a_0 + a_1 + a_2 + a_3 + ... If you leave the i off, it is usually understood; it means the same thing.
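A quick concrete illustration of the same idea in Python (the list here is made up):

```python
# SUM_i a_i just means: add up a_i for every value the index i takes.
a = [2.0, 4.0, 6.0, 8.0]                  # a_0, a_1, a_2, a_3
total = sum(a[i] for i in range(len(a)))  # a_0 + a_1 + a_2 + a_3
print(total)                              # 20.0
```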
@svein2330 5 years ago
Excellent.
@MohamedMahmoud-ul4ip 5 years ago
AMAZING!!!!!!!!!!!!!!!!!!!! THANK YOU VERY MUCH
@gaureesha9840 6 years ago
Can a bunch of sigmoid activations produce a non-linear classifier?
@BrandonRohrer 6 years ago
Yep, in multiple layers they can do the same thing as hyperbolic tangents, except that the functions they create fall between 0 and 1, rather than between -1 and 1.
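A quick numeric check of that equivalence, assuming NumPy: tanh is just a shifted and rescaled sigmoid, which is why stacks of either one can build the same shapes:

```python
# tanh(x) = 2 * sigmoid(2x) - 1: the same curve, rescaled from (0, 1) to (-1, 1).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-4.0, 4.0, 9)
print(np.allclose(np.tanh(x), 2.0 * sigmoid(2.0 * x) - 1.0))  # True
```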