Since when do we call "input layers" "input neurons"? I think you're the only one who thought of that.
@tonyc4978 2 days ago
I would say that we need to think of a neural network as a function. The inputs are just variables from the observation row, and the number of these "orange dots" or inputs is just the number of features of X (columns are features and rows are observations). The difference between this and a linear regression function is that a neural network is a function that can twist and turn to learn any pattern in the data (a universal function approximator).
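The rows-are-observations, columns-are-features view, and the difference a nonlinearity makes, can be sketched in a few lines of NumPy (all shapes and values here are illustrative, not from the video):

```python
import numpy as np

# Hypothetical toy data: 4 observations (rows), 3 features (columns).
X = np.array([[0.1, 0.5, 0.2],
              [0.9, 0.3, 0.7],
              [0.4, 0.8, 0.1],
              [0.6, 0.2, 0.9]])

rng = np.random.default_rng(0)

# Linear regression: a single affine map, y = X w + b.
w = rng.normal(size=(3, 1))
b = np.zeros(1)
y_linear = X @ w + b              # shape (4, 1): one prediction per observation

# One-hidden-layer network: the same kind of affine maps, but with a
# nonlinearity (ReLU) in between, letting the function "twist and turn".
W1 = rng.normal(size=(3, 5))      # 3 features -> 5 hidden neurons
W2 = rng.normal(size=(5, 1))      # 5 hidden neurons -> 1 output
hidden = np.maximum(0.0, X @ W1)  # ReLU activation
y_mlp = hidden @ W2               # shape (4, 1)
```

Both models map each observation row to one output; only the nonlinearity in between gives the network its extra flexibility.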
@nickernara 2 days ago
Here in the final diagram, the input is changed to a rectangle to represent it as a placeholder, but the output is still shown as a green circle. How are outputs represented?
@thinking_neuron 2 days ago
The output layer contains neurons, hence the circle representation is correct for it.
@nickernara 2 days ago
@@thinking_neuron gotcha. thanks. i forgot that output is a layer and not a placeholder and it contains a neuron
@tapanmahata8330 6 days ago
Wrong definition of noise and border points.
@aparnavigneshwaran9580 7 days ago
Great explanation but the video blurs in between and everything becomes unreadable. Please rectify that if possible.
@matteoandriolo1144 7 days ago
reported for misinformation...
@thinking_neuron 7 days ago
:(
@kreont1 9 days ago
Best best biggest. I need it
@RaviMishra-b7r 10 days ago
Bro creates a problem 😊
@riverlight777 13 days ago
I subscribed
@riverlight777 13 days ago
Underrated channel. Superb teaching. No channel comes even close to how eloquently he is educating. Can we expect complete courses on machine learning, deep learning and new-age AI trends like AGI, LLMs, etc.? Can you bring a complete course on developing end-to-end AI-based projects? Forgive me as I asked for so many things; it's because I have never experienced an educator like you, Sir.
@thinking_neuron 11 days ago
Thank you so much for your kind words! You made my day! Sure, I am working on more videos that will help you understand end-to-end implementation of AI projects in the industry. GenAI will follow shortly.
@riverlight777 11 days ago
@@thinking_neuron 😀 warm welcome Sir. Ultra thanks and your continuing efforts are incredible!
@Hoolahoopla1 14 days ago
Why do you think anyone thinks that the input layer, represented as a circle, is called a neuron? I have watched many videos and didn't find any such thing. The diagram is drawn like this to make it look appealing. Don't create unnecessary misconceptions to get views and likes!
@thinking_neuron 11 days ago
Thank you for the feedback! The common understanding is that those input layer circles are neurons! What I have tried to explain is that this is not the case, based on how we code it! Honestly, my intention is just to point out a discrepancy based on real examples, not just theory. Watch the full video to understand, if you haven't already. kzbin.info/www/bejne/naXNi6qBiKaJh6csi=e2lmjptuwPf6SGJR
@elpablitorodriguezharrera 14 days ago
What the fuck man? Everybody knows this, even my 8-year-old niece
@marutikallimani7529 16 days ago
Hi Faruk, very good explanation. I need to connect with you about my career and roadmap; it will take 10 to 20 min. If this is fine, can I connect?
@TheDiverJim 16 days ago
That’s a really good point about the activation or transform function
@lennartv.1529 17 days ago
No shit sherlock
@futuretechmoney 19 days ago
"They are even called input neurons" then proceeds to write "Input Neurons" himself.
@aasthadubey6277 22 days ago
Very well explained. Thanks for creating such videos.
@thinking_neuron 22 days ago
Thank you for the kind words!
@almightysapling 27 days ago
Meh, disagree. While it's universally the case that hidden nodes have non-linear activation (otherwise what's the point), it is often the case that output nodes have a completely different activation function or none at all, just like input nodes. Are you going to argue that they are not neurons too? Sometimes?

But my preferred way to view it is not to say "there is no activation function" but to say "the activation function is id(x)=x". There you go, now it's a neuron. Everything is a neuron.

And heck, sometimes input neurons *do* have activation functions. It's often the case that the data needs to go through some sort of normalization/serialization process before it is ready to be placed in the network. That's fundamentally activation.

As for adding rectangles to the graph: go for it, draw it however you like to help. I thought different colors and the fact that they are at the extreme ends of the graph were enough to illustrate that they were special, but you do you.
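The "everything is a neuron once the activation is id(x)=x" framing above can be made concrete in a short sketch (the function names and toy values are illustrative, not from the video):

```python
import numpy as np

def identity(x):
    return x  # id(x) = x: the "activation" this view assigns to input nodes

def relu(x):
    return np.maximum(0.0, x)  # a typical hidden-node nonlinearity

x = np.array([-2.0, 0.5, 3.0])

# An input "neuron" with identity activation just passes values through,
# while a hidden neuron with ReLU actually transforms them.
assert np.array_equal(identity(x), x)
assert np.array_equal(relu(x), np.array([0.0, 0.5, 3.0]))
```

Whether a pass-through node "counts" as a neuron is then purely a matter of convention, which is exactly the disagreement in this thread.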
@kingki1953 29 days ago
Who calls the input layer a neuron layer? 😅
@richsoftwareguy 29 days ago
Lame indian genius
@NLPprompter a month ago
A circle is for something that has computation inside; a circle represents a function or process. A rectangle has no computation inside; it represents data input/output. Hope that helps to define it. I already maxed out all my tokens learning with AI, and my local AI is too stupid... :(
@adityaraj-j9k4t a month ago
Great explanation interview-focused. Thanks a lot!
@chiefmiester3801 a month ago
ironic
@adityaraj-j9k4t a month ago
fantastic explanation sir
@adityaraj-j9k4t a month ago
That is a great explanation, clear and crisp definitely focused on interview
@SimonPartogi-y8i a month ago
Very good clarification
@adityaraj-j9k4t a month ago
Great lecture to know everything about decision trees for answering interview questions.
@thinking_neuron a month ago
Thank you Aditya!
@filoautomata a month ago
It is indeed an input layer; it performs the identity function with all weights 1.0: y = np.matmul(x, np.eye(...)). You will understand why this is correct when your MLP needs to be stacked on top of a CNN layer, for example.
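That identity mapping can be verified directly (assuming NumPy, with a concrete feature count of 3 standing in for the elided size):

```python
import numpy as np

x = np.array([[0.2, 0.7, 0.5]])  # one observation, 3 features

# Multiplying by the identity matrix leaves the input unchanged, so an
# "input layer" written this way performs no real computation.
y = np.matmul(x, np.eye(3))

assert np.array_equal(y, x)
```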
@julianricom404 a month ago
Never ever EVER heard about that misconception
@edwardcullen1739 27 days ago
Do a better job of reading the comments then. You might learn something.
@julianricom404 25 days ago
@edwardcullen1739 Thanks for the suggestion, but I prefer to spend my time reading articles or books or watching tutorials to learn new things. Perhaps you should too, maybe you'll learn something
@edwardcullen1739 25 days ago
@@julianricom404 "I have my preconceptions and when they are challenged, I refuse to consider that they may be incorrect." Uh-huh, got it. Do you teach? No, you don't. You likely have never taught. This video was clearly created by someone who does. You wear your lack of experience as a badge of honour, like you're smarter than everyone else. To someone like me, your ignorance and poor attitude are easy to see, even with just the 7 words you originally wrote. Your response only confirms it.
@abhroshomepias1999 a month ago
bro produced the problem and sold the solution
@thinking_neuron a month ago
I seriously did not! :/
@edwardcullen1739 27 days ago
You have not reviewed the comments. More than one person found this useful and had been struggling with "conventional" descriptions. So, you are simply wrong. 🤷‍♂️
@salimhammadi5125 a month ago
I think he created a problem and solved it
@thinking_neuron a month ago
Seriously I did not :|
@Felipe-zl1rj a month ago
I had this problem today: I was confused about why CBOW has 2 layers but is drawn as 3. ChatGPT explained what you've said here. Your video had almost perfect timing.
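The CBOW point, that there are only two weight matrices (two "computing" layers) even though diagrams show three columns of circles, can be sketched like this (vocabulary size, embedding size, and context word IDs are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim = 10, 4  # hypothetical sizes

# CBOW has exactly two weight matrices, even though diagrams usually
# draw three columns of circles (input, hidden, output).
W_in = rng.normal(size=(vocab_size, embed_dim))   # input -> hidden
W_out = rng.normal(size=(embed_dim, vocab_size))  # hidden -> output

context_ids = [1, 3, 7]                 # hypothetical context words
hidden = W_in[context_ids].mean(axis=0) # average the context embeddings
scores = hidden @ W_out                 # one score per vocabulary word
```

The "input layer" circles are just one-hot lookups into `W_in`; no computation happens at the input itself.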
@mistafizz5195 a month ago
This is a bad video
@TahiraAnum a month ago
Your way of teaching is quite good. It would be a great help if you taught us ML algorithms practically, on actual data.
@thinking_neuron a month ago
Hey Tahira! Thank you for the kind words 😊 Sure, I am working on more practical videos!
@scholasticperspective3917 a month ago
Very good explanation. But I would say one thing is missing here. There is a subtle difference between a data scientist and a machine learning engineer. Data scientists mostly deal with business data, while ML engineers work on building a product based on machine learning. It is true that there are a lot of similarities between the tasks of a data scientist and a machine learning engineer: a data scientist can also create products, and ML engineers can also work in the business domain. But these are just possibilities, not specifications. Beginners often feel confused between these two roles, and many of them start to think the two are the same. In the current job market, the requirements for a data scientist and an ML engineer are quite different. ML engineers need a lot of software engineering skills along with machine learning skills; ML engineers are just a special kind of software engineer.
@thinking_neuron a month ago
Your observation is bang on! A data scientist holds knowledge regarding the domain along with ML algorithms and their applications. They need not be domain experts but should understand the basics of the industry like CPG, Healthcare, Insurance etc. depending on whatever project they are working on. Typically, that happens when you have gained some experience working as a ML engineer under the guidance of senior Data Scientists in the project. This is actually a separate topic in itself and I have covered it in the below video! kzbin.info/www/bejne/Y2mcdKGchaiAqrMsi=KoxCS11MUIImgwIZ Thank you for the feedback! Cheers!
@Gurureddy777 a month ago
You deserve more views. Thank you, bro.
@thinking_neuron a month ago
Thank you Guru! Keep sharing with friends! 😃
@aracreatives5550 a month ago
🫡 Hats off, Sir. You're the most valuable educator I've ever seen in my life. The concept explanations from scratch are god damn... precise and not deviating from the content. I'm very much impressed by your shareable knowledge in machine learning ❤️❤️ Keep up the great work, Sir. Love from India 🫶
@thinking_neuron a month ago
Wow! I am very happy and glad to see that these videos are helping you. Thank you so much for your kind words, this encourages me a lot! Cheers! 😊
@dhanushgoud6134 2 months ago
Great work
@thinking_neuron 2 months ago
Thank you Dhanush!
@yebaatkuchhazamnahihui2452 2 months ago
Nice work
@thinking_neuron 2 months ago
Thank you for the appreciation!
@Technaton_English 2 months ago
I don't think it's a problem... People will be able to understand that with even the most basic knowledge of neural networks... And people, if they are like me, won't be focusing on the terms and terminology (cuz I always have a hard time with them 😢)...
@thinking_neuron 2 months ago
Thank you for the feedback! I really feel the ANN diagrammatic representation could be better, which in turn would fast-track the understanding of how data travels through the ANN. Part of why ANNs are infamous as black boxes is that such methods of illustration make them complex to understand.
@mjawale12345 2 months ago
Can we have a series on the DS or ML roadmap? I'm confused because they overlap. Wait, r u @indiainpixels?
@thinking_neuron 2 months ago
Sure, look at this video to understand the difference and overlap between ML and DS! kzbin.info/www/bejne/Y2mcdKGchaiAqrM and no I am not indiainpixels! :)
@jesusmolivaresceja 2 months ago
That is true, hopefully you reach many informed people
@ayushdwivediyt 2 months ago
great video
@thinking_neuron 2 months ago
Thank you Ayush!
@BROKENGAMER-nh6cd 2 months ago
Just clicked on your video thinking it was another clickbait video, but I have to admit you proved me wrong. Hoping for more valuable content like this from you in the future.
@thinking_neuron 2 months ago
I am glad you liked this one! Thank you for taking time to provide feedback! Sure, I will work harder and create more useful videos. Cheers!
@Skynet5D 2 months ago
But if needed, you could generalize the transfer function to the input layer by considering it a pass-through transfer function. Thus the input layer could be considered a neuron layer with a specific transfer function, and you are safe from any "artificial" confusion.
@thinking_neuron 2 months ago
Sure! With that logic, I can force transform anything into a Neuron!
@edwardcullen1739 27 days ago
In implementation, you often don't have any function at all; the correct mapping for a typical implementation would have lines coming directly from the inputs to the first "hidden" layer. The issue is that the conceptual representation is at odds with the implementations beginners typically use, and this creates confusion. Emphasising that the "first" layer is special should reduce this and (according to this comment section) has helped at least one person. Further, the different shape could represent whole additional systems, e.g. preprocessing, which is done "offline".
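A quick way to see that the inputs feed the first weight matrix directly, contributing no parameters or functions of their own, is to count the trainable parameters in a small network (all sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden, n_outputs = 3, 5, 2  # hypothetical sizes

# In a typical implementation the inputs feed the first weight matrix
# directly; nothing is attached to the inputs themselves.
W1, b1 = rng.normal(size=(n_inputs, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_hidden, n_outputs)), np.zeros(n_outputs)

x = np.array([0.3, -1.2, 0.8])
h = np.tanh(x @ W1 + b1)  # the first computation happens here
y = h @ W2 + b2

# All trainable parameters live in the two weight layers; the "input
# layer" of the diagram contributes none.
n_params = W1.size + b1.size + W2.size + b2.size
print(n_params)  # 3*5 + 5 + 5*2 + 2 = 32
```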
@churaslast 2 months ago
funny
@ahmedown8200 2 months ago
Thank you
@apolonn 2 months ago
🤓
@MrWeb-dev 2 months ago
This is correct, except that using "full neurons" for the input layer will still work just fine. You just fix the transfer and activation functions.