What do neural networks learn?

30,732 views

Brandon Rohrer

1 day ago

Comments: 49
@chernettuge4629 10 months ago
Respect Sir, Thank you so much- I am more than satisfied with your lecture.
@lcsxwtian 5 years ago
Please keep doing this Brandon. We can't thank you enough & leaving you a nice comment like this is the least we can do :)
@oauthaccount7974 2 years ago
How vivid the NN concept was! OMG, I can't believe it. I was looking for this for a long time. Finally I got it. :)
@BrandonRohrer 2 years ago
That's great to hear!
@Air_seeker 10 months ago
The depth of the explanation and visualization: there are no words to describe how much it expresses and how much it helps to grasp the most fundamental and core concepts of Neural Networks. THANKS Brandon
@mostinho7 4 years ago
12:00 How a logistic regression curve is used as a classifier. It classifies a continuous variable into categories (one input) but can be extended to multiple inputs.
12:52 With two inputs, the curve becomes 3D. IMPORTANT: the contour lines projected onto the x1, x2 plane show the "decision boundary". Logistic regression always has linear contour lines; that's why logistic regression is considered a linear classifier.
16:40 (Non-linear) How a curve is used as a classifier (the network has a single output, plotted as a function of the input). With only one input x and one output y, points are classified by this 2D curve: where the curve is above a certain line (say y = 0) is category A, and where the curve is below the line is category B. The non-linear classifier doesn't just split the points into two categories; the categories can also be interleaved. So we can classify an input into two categories using only one output node, by specifying a threshold line/value for the output (y = threshold) and finding the intersections of that line with our curve. If our neural network has two inputs and one output, then the generated curve is 3D.
19:40 Another way to classify an input into two classes is to have two output nodes (what I'm used to). Todo: continue from 21:59
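The "linear contour lines" point in these notes can be checked numerically. Below is a minimal sketch (the weights and bias are arbitrary, not taken from the video) showing that a two-input logistic unit's 0.5 contour is a straight line:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights and bias for a two-input logistic unit.
w1, w2, b = 2.0, -1.0, 0.5

# The decision boundary is where the output crosses 0.5, i.e. where
# w1*x1 + w2*x2 + b = 0. Solve for x2 along a sweep of x1 values:
x1 = np.linspace(-3.0, 3.0, 7)
x2 = -(w1 * x1 + b) / w2

# Everywhere on that line the output is exactly 0.5: a straight contour,
# which is why logistic regression is a linear classifier.
p = sigmoid(w1 * x1 + w2 * x2 + b)
print(np.allclose(p, 0.5))  # True
```

Because x2 is a linear function of x1, the set of points where the unit is exactly undecided is a straight line, no matter which weights are chosen.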
@lakeguy65616 6 years ago
Hands down, this is the best video explaining the concepts and boundaries of Neural Networks I've ever watched. Well done!
@shedrackjassen913 1 year ago
This was very satisfying. Keep on the good work
@xarisalkiviadis2162 10 months ago
What a diamond of a channel I just found... incredible!
@chloegame5838 2 years ago
Finally the video i was looking for! A clear and brilliant explanation of NNs that ties in decision boundaries, linear and nonlinear functions and what the values mean throughout a NN.
@BrandonRohrer 2 years ago
I'm really happy to hear it Chloe. This video has resonated with a much smaller audience than some of my others, but it's one of my favorites and one that I'm proud of.
@АндрейПавлов-л7г 2 years ago
!
@waterflowzz 3 years ago
Wow this is the best explanation of a neural network I’ve ever seen! This channel is so underrated. I hope you get way more subs/views.
@BrandonRohrer 3 years ago
Thanks! :) I'm really happy it hit the spot.
@petrmervart4994 6 years ago
Excellent explanation. I really like examples with different numbers of outputs and hidden nodes.
@sridharjayaraman8094 5 years ago
Awesome - many many thanks. One of the best lectures for intuitive understanding of NN.
@somdubey5436 4 years ago
Great work and very informative. One thing that really made me wonder is how anyone could dislike this video.
@philipthatcher2068 6 years ago
Brilliant. Best explanation of this ever. Great visuals also.
@carlavirhuez4785 5 years ago
Dear Brandon, you have just saved me a lot of time and your explanation is very simple and intuitive. You have helped this humble student :´)
@larsoevlisen 6 years ago
Thank you for your videos! They have helped me a lot to digest related information. One piece of feedback on the visual side: I believe that, when working with complex structures in a visual way (like the layer diagrams in this video), adding focus to the objects that you talk about could greatly help viewers' ability to follow your narrative. I don't know how you create these graphics, but with this video as an example, you could e.g. outline and light up the nodes that you speak of, and outline the whole of the layer boxes (the green boxes surrounding layers). Thank you for your contribution.
@BrandonRohrer 6 years ago
Thanks Lars! I appreciate the feedback. I really like your idea for diagram highlighting, and I'll see if I can fold it into my next video.
@stasgridnev 5 years ago
Wow, that is just great. Thanks for the awesome explanation with visualization. I made several notes based on your video. Thanks, wish you luck.
@johanneszwilling 6 years ago
😍 Thank You! 😊 Got myself a whiteboard now 😀 Much more fun learning this stuff
@hackercop 3 years ago
This explains activation functions very well!
@abhinav9561 3 years ago
Thanks Brandon. Very helpful and much needed. The graphs really helped with the intuition. Can I ask how you made those non-linear graphs?
@BrandonRohrer 3 years ago
I'm glad to hear it Abhinav! Here is the code I used to make the illustrations: github.com/brohrer/what_nns_learn
@abhinav9561 3 years ago
@@BrandonRohrer thanks
@ayush612 6 years ago
Thanks Brandon. You are awessooomee!
@MartinLichtblau 6 years ago
Excellent explanations for so many deep insights. 👍🏼
@177heimu 5 years ago
Great explanation on the topic! Any plans on starting a mini series on mathematics covering calculus, statistics, linear algebra? I believe many will benefit from it :)
@Silvannetwork 5 years ago
Great and informative video. You are definitely underrated
@gregslu 6 years ago
Very well explained, thank you
@igorg4129 4 years ago
Great video, thanks. Please could someone help me understand something.
1) I do not understand at which stage of the network you can observe a sigmoidal 3D surface like the one shown at 13:05. In my opinion you only get a 2D plane defined by the weights and bias (x1*w1 + x2*w2 + b = output). Into this plane you plug the x1 and x2 you have, and get a single number (a scalar) on the output axis. This scalar will NEVER tell anyone that it came from a 2D plane when it is plugged into the sigmoid formula at the next step. So the sigmoid trick is a totally 2D operation: you just plug the scalar from the previous step into the x axis and get the y value as the final output of the first layer. So such a 3D sigmoidal surface as shown never exists, in my opinion... Please tell me what I missed.
2) At 14:50 (similar to the first question), what do you mean by "when we add them together"? I mean, where do you mathematically make this addition of one 3D curve to another? Correct me if I am wrong, but each activation function gives me only one number (a scalar) at the end! In the case of a sigmoid this scalar is between 0 and 1. Say 0.7, but it is just a scalar and NOT a surface! Technically, when this 0.7 reaches the second layer it acts like a regular input and NO ONE KNOWS that it was born of a sigmoid. Could you please clarify this point for me?
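For what it's worth, the surface at 13:05 can be reproduced by sweeping the whole input plane rather than evaluating a single input point. A small sketch, with arbitrary made-up weights (not the video's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Sweep the whole (x1, x2) input plane instead of a single input point.
x1, x2 = np.meshgrid(np.linspace(-2, 2, 50), np.linspace(-2, 2, 50))

# Each hidden unit maps every point of the plane to one scalar; collected
# over the grid, those scalars trace out a 3D surface over the plane.
unit_a = sigmoid(1.5 * x1 - 1.0 * x2 + 0.3)
unit_b = sigmoid(-0.5 * x1 + 2.0 * x2 - 0.7)

# "Adding them together" is the next layer's weighted sum, applied
# at every input point, so the two surfaces add point by point.
combined = 0.8 * unit_a - 0.6 * unit_b

print(combined.shape)  # (50, 50): one value per grid point, i.e. a surface
```

The question is right that any single forward pass produces only one scalar per unit; the surface is what those scalars trace out over all possible (x1, x2) inputs, and the next layer's weighted sum adds those surfaces pointwise.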
@jenyasidyakin8061 5 years ago
Wow, that was very clear! Can you do a course on Bayesian statistics?
@BrandonRohrer 5 years ago
Thanks! Have you watched this one yet?: kzbin.info/www/bejne/a3-wqZyFfLFmb68
@AshutoshRaj 5 years ago
Awesome man !! Can you relate the basis and weights of a neural network?
@purnasaigudikandula3532 6 years ago
Please try to make a video explaining the math behind every machine learning algorithm. Every beginner out there can get the theory of an algorithm but can't get the math behind it.
@lifeinruroc5918 2 years ago
Any chance to quickly explain why resulting models are straight lines?
@sakcee 1 year ago
excellent!
@harrypotter6505 1 year ago
I am stuck at understanding why the "i" below the summation notation is necessary. Someone please, would it make a difference not to write that "i" below?
@BrandonRohrer 1 year ago
The i underneath SUM_i a_i means to sum the terms for all values of i, for example a_0 + a_1 + a_2 + a_3 + ... If you leave the i off, it is usually understood; it means the same thing.
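In code, the same sum can be written either way; a tiny sketch:

```python
# SUM_i a_i: the i just names the index being summed over.
a = [3, 1, 4, 1, 5]

total = 0
for i in range(len(a)):   # i runs over 0, 1, 2, 3, 4
    total += a[i]         # a_0 + a_1 + a_2 + a_3 + a_4

print(total)  # 14, the same as sum(a), where no index is written at all
```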
@svein2330 5 years ago
Excellent.
@MohamedMahmoud-ul4ip 5 years ago
AMAZING !!!!!!!!!!!!!!!!!!!! , THANK YOU VERY MUCH
@gaureesha9840 6 years ago
Can a bunch of sigmoid activations produce a non-linear classifier?
@BrandonRohrer 6 years ago
Yep, in multiple layers they can do the same thing as hyperbolic tangents, except that the functions they create fall between 0 and 1, rather than between -1 and 1.
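A quick numerical check of the identity behind this answer: tanh is just a shifted, rescaled sigmoid, tanh(x) = 2*sigmoid(2x) - 1, so sigmoid layers can express the same shapes squeezed into (0, 1) instead of (-1, 1). A standalone sketch:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Verify tanh(x) = 2*sigmoid(2x) - 1 at a handful of sample points.
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(math.tanh(x) - (2.0 * sigmoid(2.0 * x) - 1.0)) < 1e-12
print("identity holds")
```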
@pauldacus4590 6 years ago
Thanks embroidered stockings Grampa Christmas!
@gavin5861 6 years ago
🤯