There are NO input layer Neurons!
4:23
How does Machine Learning work?
5:25
Comments
@lhxperimental
@lhxperimental 1 day ago
The music is too dramatic for the topic 😂
@SheelByTorn
@SheelByTorn 2 days ago
Since when did we start calling "input layers" "input neurons"? I think you're the only one who thought of that.
@tonyc4978
@tonyc4978 2 days ago
I would say that we need to think of a neural network as a function. The inputs are just variables from the observation's row, and the number of these "orange dots" or inputs is just the number of features of observation X (columns are features and rows are observations). The difference between this and a linear regression function is that a neural network is a function that can twist and turn to learn any pattern in the data (a universal function approximator).
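A minimal NumPy sketch of this "network as a function" view (the shapes, weights, and function names below are made up purely for illustration, not taken from the video):

import numpy as np

def linear_regression(X, w, b):
    # a single linear map: a straight-line fit, no twisting
    return X @ w + b

def tiny_mlp(X, W1, b1, W2, b2):
    # composing linear maps with a non-linearity lets the function bend to the data
    hidden = np.tanh(X @ W1 + b1)
    return hidden @ W2 + b2

X = np.random.randn(5, 3)                      # 5 observations (rows), 3 features (columns)
w, b = np.random.randn(3, 1), 0.0
W1, b1 = np.random.randn(3, 4), np.zeros(4)    # first weighted layer
W2, b2 = np.random.randn(4, 1), np.zeros(1)    # output neuron
print(linear_regression(X, w, b).shape)        # (5, 1)
print(tiny_mlp(X, W1, b1, W2, b2).shape)       # (5, 1)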
@nickernara
@nickernara 2 days ago
Here in the final diagram, the input is changed to a rectangle to represent it as a placeholder, but the output is still shown as a green circle. How are outputs represented?
@thinking_neuron
@thinking_neuron 2 days ago
The output layer contains neurons, hence the circle representation is correct for it.
@nickernara
@nickernara 2 days ago
@@thinking_neuron gotcha. thanks. i forgot that output is a layer and not a placeholder and it contains a neuron
@tapanmahata8330
@tapanmahata8330 6 days ago
wrong definition of noise and border point.
@aparnavigneshwaran9580
@aparnavigneshwaran9580 7 days ago
Great explanation but the video blurs in between and everything becomes unreadable. Please rectify that if possible.
@matteoandriolo1144
@matteoandriolo1144 7 days ago
reported for misinformation...
@thinking_neuron
@thinking_neuron 7 days ago
:(
@kreont1
@kreont1 9 days ago
Best best biggest. I need it
@RaviMishra-b7r
@RaviMishra-b7r 10 days ago
Bro creates a problem 😊
@riverlight777
@riverlight777 13 days ago
I subscribed
@riverlight777
@riverlight777 13 days ago
Underrated channel. Superb teaching. No channel comes even close to how eloquently he is educating. Can we expect complete courses on machine learning, deep learning and new age ai trends like agi, llm, etc? Can You bring a complete course on developing end-end ai based projects? Forgive me as I asked for so many things, it's because I have never experienced an educator like You Sir.
@thinking_neuron
@thinking_neuron 11 days ago
Thank you so much for your kind words! You made my day! Sure, I am working on more videos that will help you to understand end to end implementation of AI projects in the industry. GenAI will follow shortly.
@riverlight777
@riverlight777 11 days ago
@@thinking_neuron 😀 warm welcome Sir. Ultra thanks and your continuing efforts are incredible!
@Hoolahoopla1
@Hoolahoopla1 14 days ago
Why do you think anyone thinks that the input layer represented as a circle is called a neuron? I have watched many videos and didn't find any such thing. The diagram is repeated like this to make it look appealing. Don't create unnecessary misconceptions to get views and likes!
@thinking_neuron
@thinking_neuron 11 days ago
Thank you for the feedback! The common understanding is that those input layer circles are neurons! That is exactly what I have tried to explain is not the case, based on how we code it! Honestly, my intention is just to point out a discrepancy based on real examples, not just theory. Watch the full video to understand, if you haven't already. kzbin.info/www/bejne/naXNi6qBiKaJh6csi=e2lmjptuwPf6SGJR
@elpablitorodriguezharrera
@elpablitorodriguezharrera 14 days ago
What the fuck man? Everybody knows this even my 8 years old niece
@marutikallimani7529
@marutikallimani7529 16 days ago
Hi Faruk, very good explanation. I need to connect with you about my career and roadmap; it will take 10 to 20 minutes. If this is fine, can I connect?
@TheDiverJim
@TheDiverJim 16 days ago
That’s a really good point about the activation or transform function
@lennartv.1529
@lennartv.1529 17 days ago
No shit sherlock
@futuretechmoney
@futuretechmoney 19 days ago
"They are even called input neurons" then proceeds to write "Input Neurons" himself.
@aasthadubey6277
@aasthadubey6277 22 days ago
Very well explained. Thanks for creating such videos.
@thinking_neuron
@thinking_neuron 22 days ago
Thank you for the kind words!
@almightysapling
@almightysapling 27 days ago
Meh, disagree. While it's universally the case that Hidden nodes have non-linear activation (otherwise what's the point), it is often the case that Output nodes have a completely different activation function or none at all, just like input nodes. Are you going to argue that they are not Neurons too? Sometimes?

But my preferred way to view it is not to say "there is no activation function" but to say "the activation function is id(x)=x". There you go, now it's a Neuron. Everything is a neuron. And heck, sometimes Input neurons *do* have activation functions. It's often the case that the data needs to go through some sort of normalization/serialization process before it is ready to be placed in the network. That's fundamentally activation.

As for adding rectangles to the graph: Go for it, draw it however you like to help. I thought different colors and the fact that they are at the extreme ends of the graph were enough to illustrate that they were special, but you do you.
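A tiny sketch of that "the activation is the identity" reading (the function names and the normalization step here are illustrative assumptions, not from the video):

import numpy as np

def identity(x):
    # id(x) = x: treating the input node as a neuron whose activation does nothing
    return x

def input_node(raw, mean=0.0, std=1.0):
    # optional normalization of the raw data, then the identity "activation"
    return identity((raw - mean) / std)

print(input_node(np.array([1.0, 2.0, 3.0])))   # [1. 2. 3.] when mean=0, std=1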
@kingki1953
@kingki1953 29 days ago
Who said the input layer is a neuron layer? 😅
@richsoftwareguy
@richsoftwareguy 29 days ago
Lame indian genius
@NLPprompter
@NLPprompter 1 month ago
A circle is for something that has computation inside; a circle represents a function or process. A rectangle has no computation inside; it represents data input/output and helps to define it. I already maxed out all my tokens learning with AI, and my local AI is too stupid... :(
@adityaraj-j9k4t
@adityaraj-j9k4t 1 month ago
Great explanation interview-focused. Thanks a lot!
@chiefmiester3801
@chiefmiester3801 1 month ago
ironic
@adityaraj-j9k4t
@adityaraj-j9k4t 1 month ago
fantastic explanation sir
@adityaraj-j9k4t
@adityaraj-j9k4t 1 month ago
That is a great explanation, clear and crisp definitely focused on interview
@SimonPartogi-y8i
@SimonPartogi-y8i 1 month ago
Very good clarification
@adityaraj-j9k4t
@adityaraj-j9k4t 1 month ago
great lecture to know everything about the decision Tree for answering interview qs.
@thinking_neuron
@thinking_neuron 1 month ago
Thank you Aditya!
@filoautomata
@filoautomata 1 month ago
It is indeed an input layer; it performs an identity function with weights all 1.0: y = matmul(1.0*x, np.eye(...)). You will understand why this is correct when your MLP needs to be stacked on top of a CNN layer, for example.
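A runnable version of that identity idea, with an arbitrary 3-feature shape filled in where the comment leaves it elided:

import numpy as np

x = np.array([[0.5, -1.2, 3.0]])     # one observation with three features
y = np.matmul(1.0 * x, np.eye(3))    # identity weights: the "layer" just passes x through
print(np.allclose(x, y))             # True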
@julianricom404
@julianricom404 1 month ago
Never ever EVER heard about that misconception
@edwardcullen1739
@edwardcullen1739 27 days ago
Do a better job of reading the comments then. You might learn something.
@julianricom404
@julianricom404 25 days ago
@edwardcullen1739 Thanks for the suggestion, but I prefer to spend my time reading articles or books or watching tutorials to learn new things. Perhaps you should too, maybe you'll learn something
@edwardcullen1739
@edwardcullen1739 25 days ago
@@julianricom404 "I have my preconceptions and when they are challenged, I refuse to consider that they may be incorrect." Uh-huh, got it. Do you teach? No, you don't. You likely have never taught. This video was clearly created by someone who does. You wear your lack of experience as a badge of honour, like you're smarter than everyone else. To someone like me, your ignorance and poor attitude are easy to see, even with just the 7 words you originally wrote. Your response only confirms it.
@abhroshomepias1999
@abhroshomepias1999 1 month ago
bro produced the problem and sold the solution
@thinking_neuron
@thinking_neuron 1 month ago
I seriously did not! :/
@edwardcullen1739
@edwardcullen1739 27 days ago
You have not reviewed the comments. More than one person found this useful and had been struggling with "conventional" descriptions. So, you are simply wrong. 🤷‍♂️
@salimhammadi5125
@salimhammadi5125 1 month ago
I think he created a problem and solved it
@thinking_neuron
@thinking_neuron 1 month ago
Seriously I did not :|
@Felipe-zl1rj
@Felipe-zl1rj 1 month ago
I had this problem today, I was confused about why CBOW had 2 layers but showed as 3. Chatgpt explained what you've said here. Your video had almost the perfect timing.
@mistafizz5195
@mistafizz5195 1 month ago
This is a bad video
@TahiraAnum
@TahiraAnum 1 month ago
Your way of teaching is quite good. It would be a great help if you taught us ML algorithms practically on actual data.
@thinking_neuron
@thinking_neuron 1 month ago
Hey Tahira! Thank you for the kind words 😊 Sure, I am working on more practical videos!
@scholasticperspective3917
@scholasticperspective3917 1 month ago
Very good explanation. But I would say one thing is missing here. There is a subtle difference between a data scientist and a machine learning engineer. Data scientists mostly deal with business data, while ML engineers work on building a product based on machine learning. It is true that there are a lot of similarities between the tasks of a data scientist and a machine learning engineer: a data scientist can also create products, and ML engineers can also work in the business domain. But these are just possibilities, not specifications. Beginners often feel very confused between these two things, and many of them start to think the two are the same. In the current job market, the requirements for a data scientist and an ML engineer are quite different. ML engineers need a lot of software engineering skills along with machine learning skills; ML engineers are just a special kind of software engineer.
@thinking_neuron
@thinking_neuron 1 month ago
Your observation is bang on! A data scientist holds knowledge of the domain along with ML algorithms and their applications. They need not be domain experts but should understand the basics of the industry, like CPG, Healthcare, Insurance etc., depending on whatever project they are working on. Typically, that happens once you have gained some experience working as an ML engineer under the guidance of senior Data Scientists on the project. This is actually a separate topic in itself and I have covered it in the video below! kzbin.info/www/bejne/Y2mcdKGchaiAqrMsi=KoxCS11MUIImgwIZ Thank you for the feedback! Cheers!
@Gurureddy777
@Gurureddy777 1 month ago
You deserve more views, thank you bro.
@thinking_neuron
@thinking_neuron 1 month ago
Thank you Guru! Keep sharing with friends! 😃
@aracreatives5550
@aracreatives5550 1 month ago
🫡 Hats off, Sir. You're the most valuable educator I've ever seen in my life. The explanation of concepts from scratch is goddamn good... It's precise and doesn't deviate from the content. I'm very much impressed by your shareable knowledge in Machine Learning ❤️❤️ Keep up the great work, Sir. Love from India 🫶
@thinking_neuron
@thinking_neuron 1 month ago
Wow! I am very happy and glad to see that these videos are helping you. Thank you so much for your kind words, this encourages me a lot! Cheers! 😊
@dhanushgoud6134
@dhanushgoud6134 2 months ago
Great work
@thinking_neuron
@thinking_neuron 2 months ago
Thank you Dhanush!
@yebaatkuchhazamnahihui2452
@yebaatkuchhazamnahihui2452 2 months ago
Nice work
@thinking_neuron
@thinking_neuron 2 months ago
Thank you for the appreciation!
@Technaton_English
@Technaton_English 2 months ago
I don't think it's a problem... People will be able to understand that with even the most basic knowledge of neural networks... And people, if they are like me, won't be focusing on the terms and terminology (cuz I always have a hard time with them 😢)...
@thinking_neuron
@thinking_neuron 2 months ago
Thank you for the feedback! I really feel the ANN diagrammatic representation could be better, which in turn would fast-track the understanding of how data travels through the ANN. As you know, they are infamous as black boxes, and that is because such methods of illustration make them complex to understand.
@mjawale12345
@mjawale12345 2 months ago
Can we have a series on the DS or ML roadmap? I'm confused bcz they overlap. Wait, r u @indiainpixels?
@thinking_neuron
@thinking_neuron 2 months ago
Sure, look at this video to understand the difference and overlap between ML and DS! kzbin.info/www/bejne/Y2mcdKGchaiAqrM and no I am not indiainpixels! :)
@jesusmolivaresceja
@jesusmolivaresceja 2 months ago
That is true, hopefully you reach many informed people
@ayushdwivediyt
@ayushdwivediyt 2 months ago
great video
@thinking_neuron
@thinking_neuron 2 months ago
Thank you Ayush!
@BROKENGAMER-nh6cd
@BROKENGAMER-nh6cd 2 months ago
Just clicked on your video thinking it was another clickbait video. But I have to admit you proved me wrong. Hoping for more valuable content like this from you in the future.
@thinking_neuron
@thinking_neuron 2 months ago
I am glad you liked this one! Thank you for taking time to provide feedback! Sure, I will work harder and create more useful videos. Cheers!
@Skynet5D
@Skynet5D 2 months ago
But if needed, you could generalize the transfer function to the input layer by considering it as a pass-thru transfer function. Thus the input layer could be considered as a neuron layer with a specific transfer function and you are safe from any "artificial" confusion.
@thinking_neuron
@thinking_neuron 2 months ago
Sure! With that logic, I can force transform anything into a Neuron!
@edwardcullen1739
@edwardcullen1739 27 days ago
In implementation, you often don't have any function at all - the correct mapping for a typical implementation would have lines coming directly from the inputs to the first "hidden" layer. The issue is that the conceptual representation is at odds with the implementations beginners typically use, and this creates confusion. Emphasising that the "first" layer is special should reduce this and (according to this comment section) has helped at least one person. Further, the different shape could represent whole additional systems, e.g. preprocessing, which is done "offline".
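A minimal PyTorch sketch of what that typical implementation looks like (layer sizes are arbitrary): the inputs are just a tensor, and the first object that owns weights is already the first "hidden" layer.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(3, 4),   # first weighted layer: 3 input features -> 4 hidden neurons
    nn.Tanh(),
    nn.Linear(4, 1),   # output neuron
)
x = torch.randn(5, 3)  # 5 observations, 3 features: plain data, not a "layer" object
print(model(x).shape)  # torch.Size([5, 1])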
@churaslast
@churaslast 2 months ago
funny
@ahmedown8200
@ahmedown8200 2 months ago
Thank you
@apolonn
@apolonn 2 months ago
🤓
@MrWeb-dev
@MrWeb-dev 2 months ago
This is correct, except that using "full neurons" for the input layer will still work just fine. You just fix the transfer and activation functions.