What do neural networks learn?

29,140 views

Brandon Rohrer

5 years ago

Part of the End-to-End Machine Learning Course 193, How Neural Networks Work at e2eml.school/193
Blog post: brohrer.github.io/what_nns_le...
We open the black box of neural networks and take a closer look at what they can actually learn.
This is exploration and exposition in preparation for the next End-to-End Machine Learning course.

Comments: 50
@estifanosabebaw1468 · 2 months ago
The depth of the explanation and visualization is hard to put into words; it really helps in grasping the most fundamental, core concepts of neural networks. Thanks, Brandon!
@lcsxwtian · 5 years ago
Please keep doing this, Brandon. We can't thank you enough, and leaving you a nice comment like this is the least we can do :)
@xarisalkiviadis2162 · 2 months ago
What a diamond of a channel I just found ... incredible!
@chernettuge4629 · 3 months ago
Respect, sir. Thank you so much; I am more than satisfied with your lecture.
@lakeguy65616 · 5 years ago
Hands down, this is the best video explaining the concepts and boundaries of Neural Networks I've ever watched. Well done!
@mostinho7 · 4 years ago
12:00 How a logistic regression curve is used as a classifier. It classifies a continuous variable into categories (one input), but can be extended to multiple inputs.
12:52 With two inputs, the curve becomes 3D. Important: the contour lines projected onto the x1, x2 plane show the "decision boundary". Logistic regression always has linear contour lines, which is why it is considered a linear classifier.
16:40 (Non-linear) How a curve is used as a classifier (the network has a single output, plotted as a function of the input). With one input x and one output y, points are classified by this 2D curve: where the curve is above a threshold line (say y = 0) is category A, and where it is below is category B. The non-linear classifier doesn't just split the points into two categories; the categories can also be interleaved. So we can classify an input into two categories using only one output node, by choosing a threshold value of the output (y = threshold) and looking at where that line intersects our curve. If the network has two inputs and one output, the generated curve is 3D.
19:40 Another way to classify an input into two classes is to have two output nodes (what I'm used to). To do: continue from 21:59.
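The linear decision boundary described in these notes can be sketched in a few lines. The weights and bias below are made-up illustrations, not values from the video:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative weights and bias (hypothetical values for the sketch)
w = np.array([2.0, -1.0])
b = 0.5

def predict(x):
    """Logistic regression with two inputs: sigmoid of a weighted sum."""
    return sigmoid(np.dot(w, x) + b)

# The decision boundary is where the output crosses 0.5, i.e. where
# w1*x1 + w2*x2 + b = 0 -- a straight line in the (x1, x2) plane.
# That is why logistic regression is a linear classifier.
print(predict(np.array([1.0, 0.0])))   # w.x + b = 2.5 > 0, so output > 0.5
print(predict(np.array([-1.0, 0.0])))  # w.x + b = -1.5 < 0, so output < 0.5
```

Whatever the sigmoid does to the output values, the set of points where the output equals the 0.5 threshold is always that straight line, which matches the linear contour lines in the video.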
@shedrackjassen913 · a year ago
This was very satisfying. Keep up the good work!
@oauthaccount7974 · a year ago
What a vivid presentation of the NN concept! I can't believe it. I was looking for this for a long time, and I finally found it. :)
@BrandonRohrer · a year ago
That's great to hear!
@philipthatcher2068 · 5 years ago
Brilliant. Best explanation of this ever. Great visuals too.
@waterflowzz · 2 years ago
Wow this is the best explanation of a neural network I’ve ever seen! This channel is so underrated. I hope you get way more subs/views.
@BrandonRohrer · 2 years ago
Thanks! :) I'm really happy it hit the spot.
@petrmervart4994 · 5 years ago
Excellent explanation. I really like examples with different numbers of outputs and hidden nodes.
@sridharjayaraman8094 · 5 years ago
Awesome, many thanks. One of the best lectures for an intuitive understanding of NNs.
@johanneszwilling · 5 years ago
😍 Thank you! 😊 I got myself a whiteboard now 😀 Much more fun learning this stuff.
@stasgridnev · 5 years ago
Wow, that is just great. Thanks for the awesome explanation with visualization. I made several notes based on your video. Thanks, and I wish you luck.
@MartinLichtblau · 5 years ago
Excellent explanations for so many deep insights. 👍🏼
@chloegame5838 · a year ago
Finally, the video I was looking for! A clear and brilliant explanation of NNs that ties together decision boundaries, linear and non-linear functions, and what the values mean throughout a NN.
@BrandonRohrer · a year ago
I'm really happy to hear it Chloe. This video has resonated with a much smaller audience than some of my others, but it's one of my favorites and one that I'm proud of.
@user-kf5jx1ug2v · a year ago
!
@somdubey5436 · 4 years ago
Great work and very informative. One thing that really made me think is how anyone could even dislike this video.
@carlavirhuez4785 · 4 years ago
Dear Brandon, you have just saved me a lot of time and your explanation is very simple and intuitive. You have helped this humble student :´)
@greglee7708 · 5 years ago
Very well explained, thank you.
@ayush612 · 5 years ago
Thanks Brandon. You are awessooomee!
@hackercop · 2 years ago
This explains activation functions very well!
@Silvannetwork · 5 years ago
Great and informative video. You are definitely underrated.
@larsoevlisen · 5 years ago
Thank you for your videos! They have helped me a lot in digesting related information. One piece of feedback on the visual side: when working with complex structures in a visual way (like the layer diagrams in this video), adding focus to the objects you talk about could greatly help viewers' ability to follow along with your narrative. I don't know how you create these graphics, but with this video as an example, you could e.g. outline and light up the nodes you speak of, and outline the whole of the layer boxes (the green boxes surrounding layers). Thank you for your contribution.
@BrandonRohrer · 5 years ago
Thanks Lars! I appreciate the feedback. I really like your idea for diagram highlighting, and I'll see if I can fold it into my next video.
@177heimu · 4 years ago
Great explanation of the topic! Any plans to start a mini-series on the mathematics, covering calculus, statistics, and linear algebra? I believe many would benefit from it :)
@sakcee · 4 months ago
Excellent!
@MohamedMahmoud-ul4ip · 5 years ago
AMAZING!!! THANK YOU VERY MUCH.
@svein2330 · 4 years ago
Excellent.
@AshutoshRaj · 4 years ago
Awesome, man!! Can you relate the basis and weights of a neural network?
@purnasaigudikandula3532 · 5 years ago
Please try to make a video explaining the math behind every machine learning algorithm. Every beginner out there can get the theory of an algorithm but can't get the math behind it.
@abhinav9561 · 2 years ago
Thanks Brandon. Very helpful and much needed. The graphs really helped with the intuition. Can I ask how you made those non-linear graphs?
@BrandonRohrer · 2 years ago
I'm glad to hear it Abhinav! Here is the code I used to make the illustrations: github.com/brohrer/what_nns_learn
@abhinav9561 · 2 years ago
@BrandonRohrer Thanks!
@jenyasidyakin8061 · 5 years ago
Wow, that was very clear! Can you do a course on Bayesian statistics?
@BrandonRohrer · 5 years ago
Thanks! Have you watched this one yet?: kzbin.info/www/bejne/a3-wqZyFfLFmb68
@igorg4129 · 4 years ago
Great video, thanks. Please, someone help me understand something.

1) I do not understand where (at which stage of the network) you can observe a sigmoidal 3D surface like the one shown at 13:05. In my opinion, you only get a 2D plane defined by the weights and bias: x1*w1 + x2*w2 + b = output. You plug your x1 and x2 into this plane and get a single number (a scalar) on the output axis. This scalar will never tell anyone that it came from a 2D plane when it is plugged into the sigmoid formula at the next step. Thus the sigmoid trick is a totally 2D operation: you plug the scalar from the previous step in on the x axis and get the y value as the final output of the first layer. So such a 3D sigmoidal surface as shown never exists, in my opinion. Please tell me what I missed.

2) At 14:50 (similar to the first question): what do you mean by "when we add them together"? Where, mathematically, do you make this addition of one 3D curve to another? Correct me if I am wrong, but each activation function gives me only one number (a scalar) at the end. In the case of a sigmoid, this scalar is between 0 and 1, say 0.7, but it is just a scalar and not a surface! Technically, when this 0.7 reaches the second layer, it acts like a regular input, and no one knows it was born from a sigmoid. Could you please clarify this point for me?
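One way to probe what this comment is asking: any single input pair does produce a single scalar, but the 3D surface in the video is the graph of the whole function, traced out by evaluating that same weighted-sum-plus-sigmoid over a grid of inputs. A sketch with made-up weights (not values from the video):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up weights and bias for illustration
w1, w2, b = 1.5, -0.8, 0.2

# Any single input pair gives a single scalar output...
single = sigmoid(w1 * 1.0 + w2 * 2.0 + b)

# ...but evaluating the same function over a grid of (x1, x2) values
# traces out the 3D sigmoidal surface shown in the video. The surface
# is the function itself, not any one output value.
x1, x2 = np.meshgrid(np.linspace(-5, 5, 50), np.linspace(-5, 5, 50))
surface = sigmoid(w1 * x1 + w2 * x2 + b)

print(surface.shape)  # (50, 50): one scalar output per grid point
```

The same resolves the second question: "adding them together" means adding the functions pointwise (at every grid point), which is what the next layer's weighted sum does to its inputs, even though any one forward pass only ever handles scalars.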
@harrypotter6505 · a year ago
I am stuck on understanding why the "i" below the summation notation is necessary. Someone please help: would it make a difference not to write that "i" below?
@BrandonRohrer · a year ago
The i underneath SUM_i a_i means to sum the terms over all values of i, for example a_0 + a_1 + a_2 + a_3 + ... If you leave the i off, it is usually understood; it means the same thing.
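In code, the summation index plays the same role as a loop variable: it names which terms get added up (a minimal sketch):

```python
# sum_i a_i: the index i just names which terms get added.
a = [2.0, 3.0, 5.0]

# a_0 + a_1 + a_2
total = sum(a[i] for i in range(len(a)))
print(total)  # 10.0
```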
@lifeinruroc5918 · a year ago
Any chance you could quickly explain why the resulting models are straight lines?
@pauldacus4590 · 5 years ago
Thanks embroidered stockings Grampa Christmas!
@gaureesha9840 · 5 years ago
Can a bunch of sigmoid activations produce a non-linear classifier?
@BrandonRohrer · 5 years ago
Yep, in multiple layers they can do the same thing as hyperbolic tangents, except that the functions they create fall between 0 and 1, rather than between -1 and 1.
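The close relationship behind that answer can be checked numerically: tanh is the same curve as the sigmoid, just rescaled and shifted, via tanh(x) = 2*sigmoid(2x) - 1 (a sketch):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# tanh and sigmoid are the same curve up to scaling and shifting:
#   tanh(x) = 2 * sigmoid(2x) - 1
x = np.linspace(-4.0, 4.0, 100)
assert np.allclose(np.tanh(x), 2.0 * sigmoid(2.0 * x) - 1.0)

# So sigmoid outputs fall in (0, 1) while tanh outputs fall in (-1, 1),
# but stacked in layers they can build the same non-linear shapes.
```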
@gavin5861 · 5 years ago
🤯
@SnoopyDoofie · a year ago
8 minutes in, and still very abstract. No thanks. There are better explanations.