Understanding AI - Lesson 2 / 15: Hidden Layers

4,481 views

Radu Mariescu-Istodor

A day ago

PLAYLIST: • Self-driving Car :: Ph...
Dive deeper into the world of Neural Networks with Lesson 2 of the "Understanding AI" course! In this session, we explore how a simple genetic algorithm helps optimize network parameters. We'll also see the power of hidden layers and their role in shaping the behavior of neural networks. Join me as we move beyond single-input neurons and venture into the realm of multi-layer perceptrons!
Discover the significance of hidden neurons and nodes, and understand why they're termed "hidden". Gain insights into the terminology, and we'll also debunk common misconceptions about activation functions.
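The genetic-algorithm idea from this lesson (keep the best network, mutate copies, repeat) can be sketched in a few lines of JavaScript. This is a minimal illustration; the names `mutate` and `evolve` and all constants are illustrative, not the playground's actual code:

```javascript
// Minimal genetic-algorithm sketch: keep the best network, mutate copies.
// All names and constants are illustrative, not the playground's actual API.
function randomNetwork(size) {
  return Array.from({ length: size }, () => Math.random() * 2 - 1);
}

function mutate(weights, amount = 0.1) {
  // Nudge each weight by a small random value scaled by `amount`.
  return weights.map(w => w + (Math.random() * 2 - 1) * amount);
}

function evolve(fitness, generations = 100, populationSize = 20) {
  let best = randomNetwork(3); // e.g. [w1, w2, bias]
  for (let g = 0; g < generations; g++) {
    const candidates = [best, ...Array.from(
      { length: populationSize - 1 }, () => mutate(best))];
    candidates.sort((a, b) => fitness(b) - fitness(a)); // descending fitness
    best = candidates[0]; // elitism: the fittest always survives unchanged
  }
  return best;
}
```

For the self-driving car, `fitness` would be something like the distance traveled before crashing; here any scoring function of the weights works.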
Join me on this learning journey! 🚀🧠
🚗THE PLAYGROUND🚗
radufromfinland.com/projects/...
💬DISCORD💬
discord.gg/gJFcF5XVn9
⭐LINKS⭐
Phase 1: • Self-driving Car :: Ph...
Phase 2: • Self-driving Car :: Ph...
Desmos 3D: www.desmos.com/3d
Another Playground: playground.tensorflow.org
#HiddenLayers #NeuralNetworks #AIPlayground #MachineLearning #Perceptron #ActivationFunctions #ScikitLearn #AIProgression
⭐TIMESTAMPS⭐
00:00 Introduction
00:45 Genetic Algorithm
07:03 What the Network Really Learns
11:40 Two Inputs
31:08 Hidden Layers
38:37 Boolean Operations
41:22 Homework 3
41:32 Misconceptions
42:32 Homework 4

Comments: 58
@Coder.tahsin 4 months ago
I can't even explain how much I love your content. I tried to understand what an ML model actually looks like after training, but I couldn't, because everyone on YouTube uses a library for this. Then I found your mini image-recognition tutorial, where you beautifully illustrated how to extract features and how a decision boundary can be set using only a couple of them. You also included your trained model in the source code, and that was the aha moment for me: I finally saw that an ML model is nothing but some coordinates. You did the same for neural networks; I hadn't understood them properly before this series. You are really making me, and others like me, believe it's possible for us to learn and understand complex topics like this. Ben Eater's 8-bit breadboard computer demystified the idea that a computer is a black box, by building a Turing-complete breadboard computer that you program by toggling DIP switches on and off, just like the early computers! You are doing the same thing for AI, optimizing neural network weights and biases just like the very first perceptron! I honestly don't get why you don't have millions of views, but I hope that changes very soon. Wishing you all the best from Bangladesh 🇧🇩
@Radu 4 months ago
Thanks for the nice comment :-) Glad to hear you're getting so much out of it!
@diegocassinera 4 months ago
So well explained, another great lesson.
@Radu 3 months ago
Thank you :-)
@tomekatomek5694 3 months ago
The universe thanks you for this course.
@Radu 3 months ago
:-) and I thank you, for watching!
@MRX-nm5dn 3 months ago
I'm starting the 2nd lesson of the course here. Thanks!
@Radu 3 months ago
Cool. Good luck! :-)
@volodyslove 2 months ago
I'm starting to understand what I'm doing when training a DL model with Keras 😂 thank you 😁
@Radu 2 months ago
Happy to help! :-)
@ekalyvio 4 months ago
Very nice tutorial! I've been working as a professional software engineer for the last 25 years, but on NNs I'm kind of a newbie. This kind of tutorial helped me understand a few concepts more easily. Quick note: I would like to see a bit more explanation of the Boolean operators. It took me a while to understand that we OR or AND the gray area, not the black one. Other than that, I can't wait for the next tutorials! :D
@Radu 4 months ago
Aha, I see... I might do a live stream later in April where I explain some of these things again. Thanks for the comment :-)
@__angle 4 months ago
Amazing! Thank you very much.
@pizdaxyu 4 months ago
AND OR with perceptrons!!!
@Radu 4 months ago
:-)
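The AND/OR idea from the Boolean Operations chapter, and the classic reason hidden layers matter (XOR), can be sketched with single perceptrons. The weight and bias values below are just one valid choice, not necessarily the video's exact numbers:

```javascript
// A single perceptron: fires (returns 1) when the weighted sum exceeds the bias.
function perceptron(inputs, weights, bias) {
  const sum = inputs.reduce((acc, x, i) => acc + x * weights[i], 0);
  return sum - bias > 0 ? 1 : 0;
}

// AND: both inputs must be on -> needs a high bias.
const AND = (a, b) => perceptron([a, b], [1, 1], 1.5);
// OR: either input is enough -> lower bias.
const OR = (a, b) => perceptron([a, b], [1, 1], 0.5);
// NAND: negated weights and bias flip the AND decision.
const NAND = (a, b) => perceptron([a, b], [-1, -1], -1.5);
// XOR is not linearly separable, so one perceptron can't do it;
// combining OR and NAND through an AND acts as a tiny hidden layer.
const XOR = (a, b) => AND(OR(a, b), NAND(a, b));
```

Trying to find weights and a bias that make a single perceptron compute XOR is a good exercise in seeing why it's impossible: no single line separates (0,1) and (1,0) from (0,0) and (1,1).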
@adilsonbuset738 4 months ago
Thank you, Master!
@Radu 4 months ago
You're welcome :-)
@fdorsman 4 months ago
Thanks again! And again I learned something new from your video!
@Radu 4 months ago
Glad to hear it! :-)
@taposhbarman7447 4 months ago
Coding with Radu, coding with Radu
@Radu 4 months ago
:-))
@amir3645 4 months ago
Thank you 🙌
@Radu 4 months ago
You're welcome :-)
@eridarael6541 4 months ago
Thanks Radu. I learned something new today. I once had to make a NN for Boolean expressions just to predict on a set of data. I couldn't imagine that a NN for Boolean expressions could be applied to this self-driving car.
@Radu 3 months ago
Happy to hear you learned something :-) Thanks for watching!
@2difficult2do 4 months ago
Thank you for the interesting, detailed explanation; I learned something. Coding this, Radu 👍
@Radu 4 months ago
Glad to hear :-)
@merarebbadro9328 4 months ago
Thank you very much!
@Radu 4 months ago
You're welcome :-)
@mdsalahuddin46464 4 months ago
Awesome explanation
@Radu 4 months ago
Glad you think so! :-)
@DanielJoseAutodesk 4 months ago
This idea of interacting with the neural network was excellent 👏👏👏👏😁. Because of some questions, I was wondering: why does the car keep moving even when the signal stops?!?! 🤔 Then I remembered Phase 1, when you showed how to add physics to the system. So, using only one neuron, I changed those physical parameters. It's very interesting how small changes influence the entire system. It's a feast for mathematicians who like to explain chaos theory. 😂😂😅😁👍
@Radu 4 months ago
Yeah, just remember that the network we're working with at the moment doesn't know what friction is, or what its parameter is set to. If we expect the friction to change based on some parameters (like the outside temperature), then the neural network should have that as an input. For now, the friction is a hard-coded value that always affects the car's behavior in the same way.
@ChandrashekarCN 4 months ago
💖💖💖💖
@Radu 4 months ago
:-)
@alwysrite 4 months ago
@25:30 you talk about just friction slowing them down? Are you allowing a constant friction that acceleration has to overcome, or is the friction you're talking about the brakes? Because if it were the brakes, why can't we prevent collisions?
@Radu 4 months ago
There is a constant friction that eventually stops the car from moving (if the car doesn't accelerate). The car can also accelerate backwards (kind of like braking, after which it starts moving in reverse), but we haven't played with that yet (next time). Even if you press the reverse button, the car doesn't instantly come to a stop; it still needs to overcome the inertia.
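The constant-friction model described here might look something like the following. This is a rough sketch in the spirit of the Phase 1 physics; `FRICTION`, `ACCELERATION`, and `MAX_SPEED` are illustrative constants, not the playground's actual values:

```javascript
// Simple car physics step: acceleration fights a constant friction.
// The constants are illustrative, not the playground's actual values.
const FRICTION = 0.05;
const ACCELERATION = 0.2;
const MAX_SPEED = 3;

function updateSpeed(speed, forward, reverse) {
  if (forward) speed += ACCELERATION;
  if (reverse) speed -= ACCELERATION; // acts as braking, then reverse
  // Friction always opposes the current direction of motion...
  if (speed > 0) speed -= FRICTION;
  if (speed < 0) speed += FRICTION;
  // ...and tiny speeds snap to zero so the car fully stops.
  if (Math.abs(speed) < FRICTION) speed = 0;
  // Reverse is capped lower than forward speed.
  return Math.max(-MAX_SPEED / 2, Math.min(MAX_SPEED, speed));
}
```

With no input, repeated calls bleed off speed until the car stops; pressing reverse while moving forward first slows the car down before it ever moves backwards, which is the inertia effect mentioned above.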
@AyorindeAdesugba 4 months ago
Bonus sneak peek 🎉
@Radu 4 months ago
:-) yes, hope you like it
@Krzako 4 months ago
@Radu You know we do 😁
@Radu 4 months ago
:-) happy to hear
@garryokeeffe591 4 months ago
I'm using Chrome on a Mac and can't get the fine control of the values to work. I've tried holding down the shift key and many others, but I can only change the values in steps of 0.1. Is there a workaround? Thanks for the videos, they are excellent.
@Radu 4 months ago
I see... I've now added fine control if you press the plus (+) and minus (-) keys on the keyboard. You may need to clear the cache for it to work.
@garryokeeffe591 4 months ago
It works! Thanks @Radu
@Radu 4 months ago
@garryokeeffe591 Cool :-)
@hamzamizo6391 3 months ago
As a beginner, where should I start to understand NNs well?
@Radu 3 months ago
Well, this is Lesson 2 of my course on NNs. Maybe start with Lesson 1? The course is quite basic and aimed at understanding how and why NNs work.
@hamzamizo6391 3 months ago
@Radu I hope you can follow the lessons with us, and thank you.
@javifontalva7752 4 months ago
🤯🤯🤯
@Radu 4 months ago
Looks like you're really excited :-)
@josh5231 4 months ago
Radu, I really like your videos, but for me, this one misses the mark. 1. The graph for your trigger points could have been explained simply as: "the x axis is the distance measured and the y axis is the speed. The line is the point where the output changes state." I still have no idea why you would go through the trouble of showing it as a 3D plane. 2. Hidden layers are demonstrated but not really explained. I would say that "each node produces an output based on a weighted combination of inputs versus a threshold (or bias, as you refer to it). This allows for the formation of logic gates, as demonstrated." Not hating, I just think we programmers tend to needlessly overcomplicate things at times, and it tends to turn off a good portion of the audience you're targeting.
@Radu 4 months ago
Thanks for the feedback. Yeah... it's always the case when teaching that some students like it and some don't, and the reasons can vary a lot. I suspect you know (some of) these things already, and I probably teach them in a different (weird) way for your taste. Let me try to reply to some of the things you mention: 1. x is not the distance measured; it's what the proximity sensor measures (a value between 0 and 1 (100%), correlated with the distance, indeed, if the distance is less than the range of the sensor, but not anymore after that). Similarly for the y... it's a normalized value for the speed, so values between -1 and 1, with 1 (100%) when the car is moving as fast as it can. I think your explanation is not bad; it's just that the big picture can often be more useful (so we don't get lost in the details). A few things you suggest are quite confusing, though :-) like: - 'the line is the point' :-) a line is not a point - 'showing it as a 3d plane': planes are flat 2D surfaces; what I show is the plane inside a 3D space, because that's what w1x+w2y-b needs... we need a third (z) axis to visualize the result for any given (x, y) pair. Adding a '> 0' to that means we can get away with using a 2D space, where we just mark down the (x, y) values for which the neuron turns on. What I really like from your wording, though, is the phrase 'changes state'. I would have used that a lot when recording these lessons if only it had come to me when writing the script :-) 2. You're right, I don't explain everything. But it's on purpose... I don't know about you, but many of the things I know really well are those I put a lot of effort into understanding or figuring out. I'm hoping people using the playground get to some 'aha moment' like that... Hearing me say something can make sense in the moment, but the knowledge may not stick. I'm experimenting with different things in this course... using different language, showing more (telling less). So, many things are probably bad...
I just had a live workshop with these things and found better ways of talking about them by listening to students and how they understand different things :-)
@josh5231 4 months ago
Thanks for the response. And yeah, "the line is the point" was a poor choice of wording. I also should have used the word normalized; this I just assumed was known. As for the plane in 3D space, what you are showing is a graph of a 2D function, where there is a simple correlation between inputs (normalized), position (relative), and result (a+b>c). This is where the line/point thing came from, as two inputs always refer to one point on the graph, and the line results from iterating over all valid inputs (points). Anyway, as I said, I do like your videos and I'm learning a lot. I also want to acknowledge the large amount of work that goes into making these videos, and to thank you for doing it. You seem to be the only one tackling this topic in a ground-up way (not resorting to premade libraries). Thanks again.
@Radu 4 months ago
As I said, I think you know these things quite well. The only reason I pointed out the mistakes or... incomplete explanations is to show how easy it is to confuse someone. Terminology is useful when things are well understood, but if beginners hear about normalization, correlation, dimensionality reduction... they might just ignore those big words and, as a result, not understand things well... I want to see what happens when I talk less and show more. Thanks for watching and noticing the effort :-) The reason for the 'no libraries' approach is so we can see what's going on under the hood. We also learn to manage larger projects and avoid spaghetti code. I explain the code the best I can, but I think the real learning happens when people start tinkering with the code.
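The 'changes state' idea discussed in this thread, with the formula w1x + w2y - b > 0, is a one-liner in code. Here x and y are assumed normalized as Radu describes (sensor reading in [0, 1], speed in [-1, 1]):

```javascript
// A two-input neuron "changes state" along the line w1*x + w2*y - b = 0:
// on one side of the line it's on (true), on the other it's off (false).
function neuronOn(x, y, w1, w2, b) {
  return w1 * x + w2 * y - b > 0;
}
```

Sweeping (x, y) over the input space and marking where `neuronOn` is true reproduces the 2D decision-boundary picture from the video without needing the 3D plane.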
@disrael2101 4 months ago
I totally agree; you explained it better, simpler, and way quicker than Radu did in his almost 2-hour attempt (those past 2 vids). Regardless, I appreciate Radu's efforts and I can't wait for more videos; I'd just prefer them to be more ELI5 and straight to the point, rather than going back and forth without really touching the actual point much.
@Radu 4 months ago
@disrael2101 I think going back and forth (exploring) is important. But I see some of you are quite advanced already, and you could benefit from a faster pace. It's hard to know the level of those watching here on YouTube, so... Only time and more feedback will tell whether taking it slow was a good idea or not.