Input Neuron is a Lie!

172,147 views

Thinking Neuron

1 day ago

The circles in an Artificial Neural Network (ANN) input layer are NOT neurons! This is one of the biggest misconceptions in deep learning, caused by the way the input layer is traditionally drawn with circles... which look just like the hidden layer and the output layer. Ideally, the input layer should be drawn with a different shape, because circles are reserved for neurons.
Full Video with Coding Context:
• There are NO input lay...
Further Study on this topic...
How does a neuron work in Deep Learning ANN?
• How an Artificial Neur...
How does ANN work? in-depth tutorial.
• How Artificial Neural ...
Deep Learning Tutorials
• Deep Learning
#deeplearning #viral #neuralnetworks #inputlayer

Comments: 291
@thinking_neuron 8 months ago
Thank you everyone for the input! The full video describing the context of the code is here: kzbin.info/www/bejne/naXNi6qBiKaJh6c I understand it may be obvious to many people, and I respect your views. However, it is not obvious to a lot of students I have interacted with. When you create a neural network using Keras, for example, input layer neurons are not defined: we start by declaring the first hidden layer and passing the input_dim parameter, because the input layer contains no neurons, just placeholders. The traditional ANN diagrams do not represent this explicitly; you only understand it clearly when you implement it for the first time. Cheers!
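The point in the pinned comment can be sketched without any framework. This is a minimal numpy illustration (not the video's actual code; the Keras call mentioned in the comment is referenced only in a comment here): every trainable parameter belongs to the first hidden layer, and nothing at all is declared for the inputs.

```python
import numpy as np

# Sketch of the point above: in Keras one writes e.g.
#   Dense(4, input_dim=3, activation="relu")
# as the FIRST declared layer -- nothing is ever declared for the inputs.

rng = np.random.default_rng(0)
input_dim = 3       # number of input placeholders (the circles in the diagram)
hidden_units = 4

# All trainable parameters belong to the hidden layer:
W1 = rng.normal(size=(input_dim, hidden_units))  # weights INTO the hidden layer
b1 = np.zeros(hidden_units)                      # hidden-layer biases

x = np.array([0.5, -1.0, 2.0])    # raw inputs: just numbers, no neuron math
h = np.maximum(0.0, x @ W1 + b1)  # the first computation is at the hidden layer

# The input "layer" contributed zero parameters:
n_params = W1.size + b1.size
print(n_params)  # 3*4 + 4 = 16
```

The parameter count comes entirely from `W1` and `b1`; the input circles hold values but own no weights.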
@Matlockization 6 months ago
So you're saying that a circle _is_ a neuron if it has a function inside it, right?
@mistafizz5195 4 months ago
Be original; stop acting like an expert on a topic you haven't mastered.
@aritramukhopadhyay7163 12 days ago
That's why beginners should start with PyTorch... then you'll have the basics cleared. But then again, Keras and TF do give a good high-level API, and people who don't want to understand much and just want to use it should go for those.
@iam_himanshu_chauhan 8 months ago
I think bro solved his own misconception
@mira8950 8 months ago
It's called bait
@w花b 7 months ago
Lmaooo
@supimon9146 6 months ago
That's right 🤣🤣🤣🤣🤣🤣🤣
@greyhound1982 5 months ago
No one has that misconception. I'm a yesteryear ML engineer who used neural networks for control and vision problems; we never had any such misconception.
@limxiuxian349 5 months ago
😂😂😂 exactly
@charan01ai 9 months ago
You created that misconception
@Newb1eProgrammer 8 months ago
Lol yes
@YambeeStarVideo 7 months ago
and added a loud, pretentious Beautiful Mind-like soundtrack on top... urgh
@absolute___zero 3 days ago
He never wrote his own network activation process; that's why he doesn't know.
@frog7863 7 months ago
bro thought he cooked
@anonymous9217w2 3 months ago
😂😂
@GalexEye 25 days ago
lmaoooo
@YT-yt-yt-3 9 months ago
I don’t think that misconception even exists.
@anonymuscarminis3829 7 months ago
Dude, I have an ML exam and I'm studying this as a neuron
@digital_down 5 months ago
It definitely exists
@ShubhamDasCoder 4 months ago
He solved a problem which didn't even exist
@edwardcullen1739 3 months ago
Other comments on this video prove you wrong. Conceptually, calling this a neuron is incorrect, because the correct analogue would be a "sensory nerve." Concepts != Implementation
@aritramukhopadhyay7163 8 months ago
The calculation is done by the edges joining two circles; those are the weights, not the circles.
@matejhladky4460 8 months ago
And the activation?
@zappy9880 8 months ago
What about bias and activation?
@importantforlifeyasharora9042 12 days ago
Here we go, another maharaj has arrived 😂😂😂 Understand the fundamentals instead of getting stuck on the lines and circles.
@aritramukhopadhyay7163 12 days ago
@@matejhladky4460 Activation is generally not shown, but as it works on each element independently, you can say it is done inside the hidden layer circles... anyway, if it comes down to activation, it is not even mentioned which activation, so I don't think those pictures want to go into such details...
@aritramukhopadhyay7163 12 days ago
@@zappy9880 The bias is also not shown... if you look at the Andrew Ng lectures, you will see that he treats the bias as another circle which holds a constant 1... that 1 comes in, and the weight corresponding to it is the bias.
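The trick described in the comment above (bias drawn as the weight on an extra constant-1 input) can be checked in a few lines of numpy; this is a sketch, not anyone's lecture code:

```python
import numpy as np

# Treating the bias as the weight on an extra input that is always 1
# gives exactly the same result as computing Wx + b directly.

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 2))   # weights for 3 inputs -> 2 hidden units
b = rng.normal(size=2)        # biases of the 2 hidden units
x = np.array([0.3, -0.7, 1.2])

z_with_bias = x @ W + b

# Same thing with an appended constant-1 input and the bias folded into W:
x_aug = np.append(x, 1.0)     # the extra "circle" holding a constant 1
W_aug = np.vstack([W, b])     # the bias becomes the weight row for that 1
z_folded = x_aug @ W_aug

print(np.allclose(z_with_bias, z_folded))  # True
```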
@muhammadfaisalahsan3198 9 months ago
The activation function is the linear identity function, the bias is 0, and the weights are 1 (there: your placeholder just became a neuron).
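The comment above can be made concrete: a hypothetical "input neuron" with unit weight, zero bias, and identity activation is a pure pass-through, which is exactly why frameworks skip implementing it.

```python
import numpy as np

# A degenerate "input neuron": identity activation over weight*value + bias,
# with weight 1 and bias 0. It just passes the input value through unchanged.

def degenerate_neuron(value, weight=1.0, bias=0.0, activation=lambda z: z):
    return activation(weight * value + bias)

x = np.array([0.5, -2.0, 3.14])
passed_through = np.array([degenerate_neuron(v) for v in x])
print(np.array_equal(passed_through, x))  # True: the placeholder "became" a neuron
```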
@TheJinsup 8 months ago
Perfect
@awaisahmad5908 8 months ago
Wow, clapping 🥴
@raiji74 8 months ago
What
@j.r.r.tolkien8724 8 months ago
Can someone explain this?
@raiji74 8 months ago
@@j.r.r.tolkien8724 it's a scam
@Nerdimo 7 months ago
The depictions are fine. The input layer can be thought of as neurons that each have only one connection (the input feature), a fixed weight of 1, and a bias of 0. The transfer or activation function is just the identity. While this isn't how it's done in practice, it's not wrong to draw the diagrams the way they are; they're more intuitive for someone who's just learning neural networks.
@edwardcullen1739 3 months ago
I disagree. Visually identifying the input layer as distinct makes a lot of sense, given how they are typically configured. And calling them nerves or sensors or something also makes sense, because that's what they're analogous to.
@REDPUMPERNICKEL 2 months ago
@@edwardcullen1739 In my imagination, I see a sensor controlling the value of a *single* neuron whose axon terminates in multiple synapses. In the diagrams I don't see the sensors, only the value-specified-by-sensor-neuron-circles whose radiating lines represent multiple synapses. I can also imagine the diagrams as truncated subsets of larger diagrams in which what's being called the 'first layer' is, in the bigger picture, not. Cheers, eh!
@no-lagteardown3558 8 months ago
I don't think it matters.
@elirane85 8 months ago
It really doesn't. And I'm saying that as someone who wrote an AI library from scratch (just for learning purposes, nothing fancy)
@vtrandal 6 months ago
It does not matter to you and the next guy that gets bothered when people try to be clear.
@CedrusDang 6 months ago
@@vtrandal What's unclear? As if anyone actually thinks the ball is a neuron. The name is INPUT, literally INPUT. Just open a class and look at the input function. Since when do people build the whole factory by hand?
@whobitmyneck 6 months ago
@@elirane85 For real lol. Bro is onto NOTHING. The input may or may not be classified as a neuron whose weight is exactly the input data.
@Megagulle123 3 months ago
As a scientific diagram it is very misleading, since no processing occurs there. Generally, processing occurs where there are weights. The input layer should just be a layer of nodes where an input can be applied.
@macx7760 8 months ago
If you put it like this, the "output neurons" are also not neurons, "since nothing is computed". Now, you could argue that a softmax is computed, but that's not always the case; generally, the logits are the output.
@DrewWarren-sv4sv 8 months ago
True, and to further that: even though one could argue outputs sometimes have calculations (such as softmax), inputs often do too (normalization is common when inputs have a large variance, and some could argue tokenization counts as well).
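The softmax the two comments above mention can be sketched as follows; this is a generic illustration, not tied to any particular network from the video:

```python
import numpy as np

# The raw output-layer values are logits; softmax is the extra computation
# some networks apply on top to turn them into probabilities.

def softmax(logits):
    shifted = logits - np.max(logits)  # subtract the max for numerical stability
    exps = np.exp(shifted)
    return exps / exps.sum()

logits = np.array([2.0, 1.0, 0.1])     # raw output-layer values
probs = softmax(logits)

print(probs.sum())  # sums to 1: softmax turns logits into probabilities
print(int(np.argmax(probs)) == int(np.argmax(logits)))  # True: ranking unchanged
```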
@edwardcullen1739 3 months ago
@@DrewWarren-sv4sv So using different shapes to represent that they typically do different tasks, unique to their position by virtue of being input/output, isn't that outrageous a suggestion then? Almost like the input and output layers represent sensor or activator nerves, rather than neurons in the brain? I mean, if we're analogising biological systems, doesn't it make sense to do the mapping completely?
@aritramukhopadhyay7163 12 days ago
Exactly... these kinds of diagrams generally don't even want to claim anything about which activation is used... you read further down in the paper and it is written there in detail...
@MrZelektronz 6 months ago
Wow! I wrote my bachelor thesis about feed-forward neural networks last year, and this is something that cost me some days. I implemented neural networks and backpropagation and realised: wait, there are biases missing for the input layer. But wait, biases don't make any sense here; just forward the values through without a bias. But then I don't even need neurons there at all 😂. I just randomly stumbled onto this short, what a coincidence.
@kscnanaki 9 months ago
The circles on the first layer are there to have something to connect the first-layer weights to; otherwise the plot would look weird =) It is also a useful generalization, as any subset of layers is a (sub)network in itself, so the input layer can also be understood as the preprocessed data available so far, whether it's read directly from the dataset or produced by earlier layers. Actually, if you consider the input values as the post-activation values of an earlier layer, then there is no difference between the input and any other layer. That's why they appear in the plot. A bit confusing, I agree, but typically these plots are used alongside explanations that clarify this point, so not a big deal!
@thinking_neuron 9 months ago
Hey, what a wonderful perspective! This is indeed understood by anyone with great knowledge like yours, or anyone who has coded neural networks in Keras or TensorFlow. Here I tried to help the beginners who mistake the first layer for neurons and then get confused when no corresponding code lines appear while creating it in this complete video: kzbin.info/www/bejne/naXNi6qBiKaJh6csi=G5U4rfoe7a4JkcNt Highly appreciate your inputs! Cheers!
@edwardcullen1739 3 months ago
"A bit confusing, I agree..." So an alternative perspective, a different way of presenting the concepts, may be helpful in understanding what's going on? Got it.
@mikeiavelli 9 months ago
I have never seen the input layer being called the "input neurons". And a neuron is the *combination* of the input branches (usually labeled with their weights) with the node representing the activation. That either makes the first layer of nodes NOT neurons, or, as depicted here, degenerate neurons with a single branch of unit weight and a node with the identity activation function.
@CC1.unposted 9 months ago
It's mostly called a node
@edwardcullen1739 3 months ago
So there is ambiguity in language and teaching? Therefore disambiguation would be a Good Thing? I think 98% of the people slamming this vid's alternative perspective wouldn't know what "degenerate" means in this context. Personally, I've been mapping the "input layer" to sensory nerves rather than neurons, which opens up interesting ideas and helps me connect the abstract to the real world.
@anhtudo4713 8 months ago
You're right. It's misleading because the code architecture is actually a little different; the visualization is just there to help us imagine how it looks and works. In real code, if you have seen the project micrograd, a neuron is a wrapper class that has a list of weights and a bias, and the layer class is a wrapper class that has a list of neurons. The input is simply a list of values going into the layer class and then passed down to each neuron. So the input layer does not really have neuron objects; to begin with, the input layer is not even a layer class in code. The input values are passed directly to the first hidden layer class. The output is a real layer class with its own neurons, so the output in the visualization is still correct.
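The architecture described above can be sketched in plain Python. This is modeled loosely on the structure the comment attributes to micrograd, not micrograd's actual code: a Neuron owns weights and a bias, a Layer owns Neurons, and the input is just a list of numbers with no layer object of its own.

```python
import random

class Neuron:
    def __init__(self, n_inputs):
        self.w = [random.uniform(-1, 1) for _ in range(n_inputs)]
        self.b = 0.0

    def __call__(self, xs):
        # weighted sum + bias (activation omitted for brevity)
        return sum(wi * xi for wi, xi in zip(self.w, xs)) + self.b

class Layer:
    def __init__(self, n_inputs, n_neurons):
        self.neurons = [Neuron(n_inputs) for _ in range(n_neurons)]

    def __call__(self, xs):
        return [n(xs) for n in self.neurons]

random.seed(0)
net = [Layer(3, 4), Layer(4, 2)]  # hidden layer + output layer; no input layer
xs = [0.5, -1.0, 2.0]             # the "input layer" is just this list
for layer in net:
    xs = layer(xs)
print(len(xs))  # 2 output values
```

Note that `net` contains only the hidden and output layers; the inputs never get a class of their own.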
@DrewWarren-sv4sv 8 months ago
Couldn't you then argue that each hidden layer is just directly passed to the next "after being processed?" Inputs do often have calculations/processing done, and they act similarly to the other neurons, especially output neurons. In the micrograd example, the reason this is set up this way is because the inputs do not have any weights/bias affecting them, no? And that is only due to the fact that they are the first layer. However, they still act the same way towards the first layer that the first layer acts towards the second, in that they are neurons which hold processed values. It's sort of like saying most cars have an engine, and if you take that engine out, yes, it is missing a common component of cars, but it is still a car regardless of if it has ALL of the same functions as other cars.
@anhtudo4713 8 months ago
@@DrewWarren-sv4sv I look at it from the code-declaration perspective. If you declare a very basic neural network in TensorFlow, for instance, you start declaring from the first hidden layer, whether it's dense or not, and so on. You don't start with the input layer; rather, you just declare the input shape for the first hidden layer. The input is then passed directly to the neural net and hits the first hidden layer right away. But of course, as you say, we can always have an extra layer that represents the input layer. I think TensorFlow has an Identity layer that might fit this description. And you can even view the input you pass in as an explicit layer, but in many examples I've seen they don't act as a real layer. I think the video just tries to emphasize that you usually don't explicitly declare an input layer, that's all. When I first saw the declaration of a neural network, I also wondered where the input layer was, so the video might help beginners like me. Regarding processing of inputs, I have seen many examples doing pre-processing rather than making it a layer in the neural net; after pre-processing, the input is finalized, and you don't really apply weights or biases to inputs. But there might be examples out there that do; if so, I want to learn more too.
@Arxiee 2 months ago
It's more accurate to call them input neurons rather than placeholders. Here's why:
1. Input layer: the input layer consists of neurons that receive the raw data (like pixel values in an image, or numerical data). Each neuron in this layer corresponds to one feature or dimension of the input data. For example, if you input a 28x28 grayscale image, the input layer will have 784 neurons (one for each pixel).
2. Placeholder? In some programming contexts (like TensorFlow), you might encounter the term placeholder. This refers to a variable that temporarily holds data to be fed into the model during training or inference. However, "placeholder" refers to the way the input is handled in code, not to the neurons themselves in the architecture of the network.
Conclusion: you should call them input neurons when referring to the structure of a neural network, as this is standard terminology in the field. "Placeholder" is more appropriate when discussing code implementation and data handling during model execution.
THAT'S WHAT CHATGPT HIMSELF SAID 💀
@Blackhole.Studios 2 months ago
Sometimes input neurons do have weights applied to the actual inputs; they also sometimes have biases along with a simple ReLU. That's a calculation, making them just like normal neurons.
@tonyc4978 2 months ago
I would say that we need to think of a neural network as a function. The inputs are just the variables from an observation row, and the number of these "orange dots" is just the number of features of observation X (columns are features and rows are observations). The difference between this and a linear regression function is that a neural network is a function that can twist and turn to learn any pattern in the data (a universal function approximator).
@kkjbkkkkk148 8 months ago
The input layer represents the featured data in the form the hidden layer can work with, so it's not just a placeholder.
@Satisfying_meditation 8 months ago
We've learned about ANNs; our teacher told us that the input layer acts as the synapses, then it goes to the hidden layer, which does some calculation. If the threshold is less than 50%, the hidden layer sends the info back to the synapses (that is called backpropagation), and if it is greater than 50%, the information is forwarded to the axon and the output is generated. If there are mistakes in my paragraph, please correct them.
@thinking_neuron 8 months ago
Hey there! Happy to see that you are learning well! Please see this video for in-depth details of backpropagation and forward propagation. kzbin.info/www/bejne/qKaZiZ2EaK-Uoas
@selforganisation 9 days ago
Maybe the confusion wouldn't exist if the output layer were also placeholders, although that would always just be the same set of nodes, each connected to the corresponding neuron.
@user-mj2lm5fh1j 8 months ago
Any number can be represented using a function, so it can be treated as a kind of neuron.
@Megagulle123 3 months ago
No, since no processing takes place. Hence, not neurons, just simple input nodes.
@vsudbdk5363 9 months ago
So basically the input layer carries the encoded information of the data (image, text, etc.). From there on, in the hidden layers, all the activations and multiplications occur, and so on, until some flattened distribution of numbers appears, which are the predictions.
@thinking_neuron 9 months ago
You got it!
@abdelmananabdelrahman4099 8 months ago
I don't know what you are implying
@azarel7 7 months ago
Good video... I just saw a video where the input layer was called a neuron, but your explanation, in terms of them not having transfer or activation functions, is something I've also seen before, and it makes sense as to why they would not strictly be considered neurons.
@TheDiverJim 3 months ago
That’s a really good point about the activation or transform function
@TheNewton 8 months ago
Symbolism in compsci is still very vague and easily leads to weird diagrams, or to the UX of UIs like this.
@thinkingcitizen 8 months ago
When people with a math or physics background study computer science topics, this is a huge area of frustration!
@edwardcullen1739 3 months ago
Agreed! I'm a "pure" compsci and the ambiguity of symbology drives _me_ insane! What's worse is that, in the hard reality of ANN systems, the input and output layers _are_ special and different! These are conceptual diagrams, so using different symbols makes perfect sense (colour alone is a poor substitute, for cognitive psychological reasons).
@testales 6 months ago
I was always confused about the input neurons in diagrams. Then this video cleared up the issue. Then I read the comments and now I'm confused again. :)
@thinking_neuron 5 months ago
:( sorry for that! Please see the complete video where I show how it is coded, that will make it super clear! kzbin.info/www/bejne/naXNi6qBiKaJh6c
@entropia6938 7 months ago
If you want to make them neurons, you can see them as neurons with the identity as their activation function and non-trainable weights, all equal to 0 except one that is equal to 1.
@samas69420 6 months ago
You can still consider it an activation with f(x) = x
@edwardcullen1739 3 months ago
But no one ever implements it that way, which is the point.
@azhuransmx126 7 months ago
But in neurology there exist sensory (input) neurons that carry nerve signals to the brain through the posterior column of the spinal cord, and motor (output) neurons that spread nerve signals from the brain to the body's muscles through the anterior column. So 🤷 if we are going to copy biological capabilities, we have to copy them well. 😂
@edwardcullen1739 3 months ago
Specialised neurons that are not like other neurons? So special that they have their own names? Then, when using an analogy and conceptual diagrams, using a different shape to emphasise their special nature would be... appropriate? 🤔🤔🤔🤔🤔
@DrewWarren-sv4sv 8 months ago
I think most people read this just fine. Also, sometimes there ARE calculations on the input layer. This is common with reinforcement learning algorithms, which often have a large variance in input magnitudes; the inputs will almost always be normalized first, to stop neurons with typically larger-magnitude inputs from "drowning out" neurons with typically smaller-magnitude inputs.
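The normalization described above can be sketched as a simple z-score over each feature column; this is a generic illustration with made-up data, not any specific RL pipeline:

```python
import numpy as np

# z-scoring each feature so large-magnitude inputs cannot drown out small ones.

X = np.array([[1000.0, 0.1],
              [2000.0, 0.3],
              [1500.0, 0.2]])  # feature 0 is thousands of times larger than feature 1

mean = X.mean(axis=0)
std = X.std(axis=0)
X_norm = (X - mean) / std      # done BEFORE the data enters the network

print(np.allclose(X_norm.mean(axis=0), 0.0))  # True: each feature is centered
print(np.allclose(X_norm.std(axis=0), 1.0))   # True: each feature has unit scale
```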
@Siroitin 8 months ago
True. I wonder why normalization isn't considered one of the layers. Maybe because normalization is seen as self-evident preprocessing and cleaning.
@nickernara 2 months ago
In the final diagram here, the input is changed to a rectangle to represent it as a placeholder, but the output is still shown as a green circle. How are outputs represented?
@thinking_neuron 2 months ago
The output layer contains neurons, hence the circle representation is correct for it.
@nickernara 2 months ago
@@thinking_neuron Gotcha, thanks. I forgot that the output is a layer, not a placeholder, and that it contains neurons.
@THEPAGMAN 28 days ago
It quite literally can be neurons with a weight of 1 multiplied by the input and a linear activation function (so no change from the input). Implementation-wise, there is no point doing that, and you can pass the input directly by skipping the "input neurons". Anyway, this misconception doesn't even exist, and if it did, it would still make sense 💀
@anipacify1163 8 months ago
The input layer is just used to pass the input to the hidden layer; it technically doesn't do anything. Yeah, I did get a bit confused at the start, but it's quite easy to figure out, as it's written in the descriptions of every book: the input layer is used to take the input and provide it to the other neurons.
@edwardcullen1739 3 months ago
So, emphasising this in your diagrams... Is a Good Thing? Cool, glad we agree.
@iSaac-kp5lk 7 months ago
Isn't that why they've used a different color?
@epic_miner 8 months ago
Thank you ❤
@sann6688 6 months ago
Or one can say that the input neurons, drawn with the same shape, have the identity function as their activation function. That's all; no need to further alter the diagram.
@yuvrajkukreja1248 5 months ago
Then the output layer also doesn't perform any further calculation, so why not make it a straight line too, like you did with the input layer 😅
@thinking_neuron 5 months ago
The output layer does perform calculations! You might have noticed that when you define the ANN you specify the activation function for the output layer. I recommend to check the complete video!
@yuvrajkukreja1248 5 months ago
Thanks 😊👍
@liambury529 9 months ago
You're missing the fundamentals. In an ANN you're assuming that your inputs are rate-encoded, just as the output values or neurons are. Artificial neurons are an abstraction, so you need to understand the underlying reasons why these models are the way they are.
@markusklyver6277 8 months ago
They are neurons that output whatever the input is. So, the identity function.
@ilyastoletov 6 months ago
I thought that every neuron in diagrams like the one you've shown is a vector of some kind, and that the connections between neurons are the calculations performed by the network; every "neuron" in the diagram represents a vector of some kind, either formatted input from the user or a calculated value.
@filoautomata 4 months ago
It is indeed an input layer; it performs the identity function with all weights equal to 1.0: y = matmul(1.0*x, np.eye(...)). You will understand it correctly when your MLP needs to be stacked on top of a CNN layer, for example.
@auslei 6 months ago
I thought they were called perceptrons. They are still calculated with weights and a bias and transferred to the next layer. Although they are inputs, they have also been "cleansed". But anyway, does it even require such dramatic music?
@tintrinh4729 7 months ago
Is the output a neuron or not?
@philippk5446 6 months ago
If you present it this way, you should present all neurons in the network as vectors, as they are actually also values, not neurons.
@edwardcullen1739 3 months ago
This is a conceptual diagram. Try to keep up.
@sahilkrshah6399 9 months ago
I thought you were never returning. Thank you for being back; your lectures have been very helpful.
@thinking_neuron 8 months ago
Hey Sahil! You are very kind! Yes, I am back for good! More videos coming soon!
@DrGulgulumal 6 months ago
What's with the dramatic music?
@thinking_neuron 5 months ago
I thought it would help to make it interesting :|
@DrGulgulumal 5 months ago
@@thinking_neuron :)
@bharadwajroyal9507 8 months ago
Bro, they just represent the input value holder with a circle... that's it, no misconception and no constipation. Try to understand the subject 😊
@Technaton_English 4 months ago
I don't think it's a problem... people will be able to understand it with even the most basic knowledge of neural networks. People like me won't be focusing on the terms and terminologies (cuz I always have a hard time with them 😢)...
@thinking_neuron 4 months ago
Thank you for the feedback! I really feel the diagrammatic representation of ANNs could be better, which in turn would fast-track the understanding of how data travels through an ANN. You know why they are infamous as black boxes: because this kind of illustration makes their behaviour harder to understand.
@jawaharlalnehru4224 1 month ago
Then what about the output layer?
@thinking_neuron 1 month ago
The output layer contains neurons; we specify the activation function for it. You can take a look at the full video to see the coding explanation.
@acasualviewer5861 8 months ago
There are no neurons; they are matrices. The "neuron" name is dumb, but I guess "matrix math function" doesn't sound as cool as "neural net".
@macx7760 8 months ago
That's false; the matrix you're talking about just represents the weights.
@acasualviewer5861 8 months ago
@@macx7760 A neural net is literally just matrix (or tensor) multiplication with a non-linearity. The formula is nonlin(X W) (X for inputs, W for weights; X is a vector, matrix, or tensor, and W is a matrix or tensor). For several layers it's nonlin(nonlin(nonlin(X W1) W2) W3). More sophisticated nets include more complex tensor operations, but it's all linear algebra. There are no "neurons", axons, dendrites, or anything else neural-related; just fancy linear algebra. The inputs you refer to are often represented as a vector, matrix, or tensor (especially with LLMs).
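The formula in the comment above runs as-is in numpy; the weight shapes below are chosen purely for illustration:

```python
import numpy as np

# A 3-layer net as nonlin(nonlin(nonlin(X W1) W2) W3): nothing but
# matrix multiplication and an elementwise non-linearity.

rng = np.random.default_rng(42)
nonlin = np.tanh                 # any elementwise non-linearity

X = rng.normal(size=(5, 8))      # 5 samples, 8 input features
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 16))
W3 = rng.normal(size=(16, 3))

out = nonlin(nonlin(nonlin(X @ W1) @ W2) @ W3)
print(out.shape)  # (5, 3)
```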
@greenaum 9 months ago
So doesn't that make the next layer the real input neurons, and the first layer just buffers? What is the first layer for, anyway? Just to spread the inputs out to several of the neurons in the next layer?
@thinking_neuron 9 months ago
You are correct! The first hidden layer and the output layer have the neurons. The so-called input neurons are just placeholders that spread the input values to the first hidden layer.
@Dalroc 9 months ago
Depends on the architecture really, as you could have connections that go past the first layer directly into the second layer, or even deeper.
@_xentropy 8 months ago
This is a dumb take. The circles are there to show the relationship between the inputs and the hidden layer, which is correct. No layers are connected to the input of the input layer, which implicitly shows that each input has no weights and no inputs of its own. This is just a bad take. Sorry.
@edwardcullen1739 3 months ago
"The circles are there to show the relationship between the inputs and the hidden layer..." That's precisely what he said, _which is why a different symbol is appropriate_: because _in implementation_ networks typically do no processing at all in the first layer, and this is often confusing to newcomers to the subject. Symbols have meaning. Using a different shape emphasises that there's something special going on, which would be correct both practically and conceptually. You're clearly not as smart as you think you are. Have some humility.
@_xentropy 3 months ago
@@edwardcullen1739 It's the connections between the neurons that give the diagram its meaning. The weights are associated with the edges connecting the nodes in the graph, not with the nodes themselves, dude. The standard graph is not confusing at all. Looking through the comments here, I see multiple people stating the same thing: that there is no misconception or confusion among people studying ML about what the input layer represents.
@edwardcullen1739 3 months ago
@@_xentropy Every part has meaning; if it didn't, it wouldn't be there. You didn't read what I said, or didn't understand it, or you wouldn't be trying to explain something to me (badly) that I already know: the nodes have meaning because they represent the activation function. But you go on ignoring me and demonstrating to the world just how "smart" you are 🤷‍♂️
@prakhars962 5 months ago
Just know the maths; no diagram can mislead you.
@thinking_neuron 5 months ago
Absolutely!
@BitterTruth24 9 months ago
Hi, do you have a paid data science course covering all areas?
@thinking_neuron 9 months ago
Hey, Thank you for asking. I do not have any paid courses at the moment, however, I have covered most of the topics here starting from the basics. Please have a look! www.youtube.com/@thinking_neuron/playlists
@BitterTruth24 9 months ago
@@thinking_neuron How can I reach out to you? Can I have a mentorship call with you?
@nuralam_cse_kuet 1 month ago
I was confused at first, and I asked many people.
@rahulpramanick2001
@rahulpramanick2001 8 months ago
Well, it really doesn't matter whether you consider activity in the input layer or not. It's actually convention, but you are the ruler of your own.
@itumekanik
@itumekanik 1 month ago
No calculation but only passing data seems to me like some neuronic behaviour 🙃
@jesusmolivaresceja
@jesusmolivaresceja 5 months ago
That is true, hopefully you reach many informed people
@almightysapling
@almightysapling 3 months ago
Meh, disagree. While it's universally the case that Hidden nodes have non-linear activation (otherwise what's the point), it is often the case that Output nodes have a completely different activation function or none at all, just like input nodes. Are you going to argue that they are not Neurons too? Sometimes? But my preferred way to view it is not to say "there is no activation function" but to say "the activation function is id(x)=x". There you go, now it's a Neuron. Everything is a neuron. And heck, sometimes Input neurons *do* have activation functions. It's often the case that the data needs to go through some sort of normalization/serialization process before it is ready to be placed in the network. That's fundamentally activation. As for adding rectangles to the graph: Go for it, draw it however you like to help. I thought different colors and the fact that they are at the extreme ends of the graph were enough to illustrate that they were special, but you do you.
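The identity-activation reading in the comment above can be sketched in a few lines (NumPy, function names purely illustrative): with f(x) = x, an input "neuron" passes its value through unchanged, while a hidden neuron genuinely transforms it.

```python
import numpy as np

def identity(x):
    # The "activation" of an input node under this view: f(x) = x.
    return x

def relu(z):
    # A typical hidden-layer activation, for contrast.
    return np.maximum(z, 0.0)

x = np.array([0.5, -1.0, 2.0])
assert np.array_equal(identity(x), x)  # passes: the value is unchanged
print(relu(np.array([-1.0, 2.0])))     # [0. 2.] -- genuinely transformed
```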
@OMPRAKASH-uz8jw
@OMPRAKASH-uz8jw 8 months ago
May I have supporting documents for your argument, please? Any papers you've read or anything would be fine.
@thinking_neuron
@thinking_neuron 8 months ago
Hi Omprakash! Please see the details here. When you code the ANN it becomes clear. kzbin.info/www/bejne/naXNi6qBiKaJh6c
@OMPRAKASH-uz8jw
@OMPRAKASH-uz8jw 8 months ago
@@thinking_neuron thank you
@flo7968
@flo7968 6 months ago
It's just a computation graph.
@ikartikthakur
@ikartikthakur 6 months ago
It's good that you explained it; it'll help beginners who are searching the web, because right now only enthusiasts are really into AI.
@progamer1125
@progamer1125 6 months ago
You can think of the input layer as neurons whose output is simply set by the data you have and then passed into the next layer. Also, yeah, I think no one had that misconception.
@RaviPrakash-dz9fm
@RaviPrakash-dz9fm 2 months ago
This was me realizing stuff in 2017. I didn't make a video about it though🤣
@caiocouto3450
@caiocouto3450 6 months ago
bro has gone crazy because of a minor technical definition
@edwardcullen1739
@edwardcullen1739 3 months ago
It's not a "minor technical definition". These are fundamental concepts. 🤦‍♂️
@angelochristou3695
@angelochristou3695 5 months ago
well a neuron would be between two linear lines.
@prajna_meher
@prajna_meher 2 months ago
Change the output layer also.
@saiprem5380
@saiprem5380 8 months ago
I got confused that way😅
@Laya-dw6tn
@Laya-dw6tn 2 months ago
"Where there is a calculation, there is a neuron," my professor said.
@supimon9146
@supimon9146 6 months ago
overthinking neuron
@thinking_neuron
@thinking_neuron 6 months ago
😆 this one got me!
@adheep399
@adheep399 4 days ago
Everyone here is saying it's obvious, but I actually struggled because of this. Thank you!
@thinking_neuron
@thinking_neuron 4 days ago
Cheers to that!
@mohammedasif2964
@mohammedasif2964 8 months ago
So are you saying teachers and students are so dumb that they blindly take all circles as neurons without reading the definitions written there?! Nonsense.
@luluw9699
@luluw9699 8 months ago
This is misleading. The input layer is also a neuron: it transforms the raw data into tensors. There should be a circle 🔴 in the input layer, and before that layer comes the raw data (a rectangle), signifying the tensor transformation taking place.
@lokahit6940
@lokahit6940 6 months ago
true
@trumpet_boooi
@trumpet_boooi 8 months ago
Kinda nitpicky, but OK, thanks for the info.
@beaverbuoy3011
@beaverbuoy3011 8 months ago
Interesting and very good observation!
@JenniferTopas
@JenniferTopas 9 months ago
Great video for engineers who can't read or don't understand what the word "input" on top of the orange circles implies. 😂😂😂
@developerrandom6961
@developerrandom6961 7 months ago
Thanks bro, it took me a while to understand those diagrams. But just a quick question to a chatbot resolves the misconception. Great vid tho.
@lhxperimental
@lhxperimental 2 months ago
The music is too dramatic for the topic 😂
@sdt3247
@sdt3247 6 months ago
I am sure you had this misconception. 😂
@thinking_neuron
@thinking_neuron 5 months ago
Yes, I did. When I coded an ANN for the first time, I was a bit confused not to see any lines for the input layer. I am happy that you got it easily, but you will be surprised how many students ask this question every month! Cheers!
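The "no lines for the input layer" observation can be illustrated with a minimal parameter count for a hypothetical 4-8-1 network (plain Python; the layer sizes are made up for the example): the input layer owns zero trainable parameters, which is why frameworks like Keras never ask you to define its neurons in code.

```python
# Parameter count for a hypothetical 4 -> 8 -> 1 network.
# Frameworks such as Keras report exactly this in model.summary():
# the input layer contributes zero trainable parameters;
# only the Dense layers own weights and biases.
n_in, n_hidden, n_out = 4, 8, 1

params_input  = 0                           # placeholders only, no weights
params_hidden = n_in * n_hidden + n_hidden  # edge weights + biases = 40
params_output = n_hidden * n_out + n_out    # edge weights + biases = 9

print(params_input, params_hidden, params_output)  # 0 40 9
```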
@leonxger
@leonxger 6 months ago
Thank you microsoft tech support
@savire.ergheiz
@savire.ergheiz 7 months ago
So funny 😂 Even those beginners knew what "input" means, dude.
@icced_lattae4864
@icced_lattae4864 7 months ago
me a med student 👁️👄👁️
@SheelByTorn
@SheelByTorn 2 months ago
Since when did we call "input layers" "input neurons"? I think you're the only one who thought of that.
@SimonPartogi-y8i
@SimonPartogi-y8i 3 months ago
Very good clarification
@atulkishore9159
@atulkishore9159 8 months ago
This is the foundational diagram. Why are you creating a fuss?
@kreont1
@kreont1 3 months ago
Best best biggest. I need it
@kingki1953
@kingki1953 3 months ago
Who said the input layer is a neuron layer? 😅
@Aranzahas
@Aranzahas 25 days ago
Higher level of music next time, please 🤦
@thinking_neuron
@thinking_neuron 20 days ago
Apologies 😬. Noted for the next one.
@bktemplar
@bktemplar 8 months ago
But they don't look the same, and they are not called input neurons.
@dipayanmukhopadhyay3256
@dipayanmukhopadhyay3256 8 months ago
What's new in it?! Almost every practitioner knows this.
@Hoolahoopla1
@Hoolahoopla1 3 months ago
Why do you think anyone thinks that the input layer represented as a circle is called a neuron? I have watched many videos and didn't find any such thing. The diagram is repeated like this to make it look appealing. Don't create unnecessary misconceptions to get views and likes!
@thinking_neuron
@thinking_neuron 3 months ago
Thank you for the feedback! The common understanding is that those input-layer circles are neurons; that is what I have tried to explain is not the case, based on how we code it. Honestly, my intention is just to point out a discrepancy based on real examples, not just theory. Have a look at the full video if you haven't already. kzbin.info/www/bejne/naXNi6qBiKaJh6csi=e2lmjptuwPf6SGJR
@yym436
@yym436 8 months ago
The same as the output layer.
@photorealm
@photorealm 9 months ago
Very true, but it didn't confuse me nearly as much as backpropagation. Now that was a pain to understand with zero math background.
@Newb1eProgrammer
@Newb1eProgrammer 8 months ago
For anyone who understands how AI works: no, that is not a neuron, and most people understand that.
@Slow_weeper07
@Slow_weeper07 1 month ago
Inputs are just tokens.
@NLPprompter
@NLPprompter 3 months ago
A circle is for something that has computation inside; a circle represents a function or process. A rectangle has no computation inside; it represents data input/output and helps to define it. I already maxed out all my tokens learning with AI; my local AI is too stupid... :(
@awaisahmad5908
@awaisahmad5908 8 months ago
we already knew this thing 😕
@alirezamohamadi_ml
@alirezamohamadi_ml 8 months ago
Bro solved a problem that didn't exist.
@dahiruibrahimdahiru2690
@dahiruibrahimdahiru2690 2 months ago
😂😂😂😂 Bro just really didn’t want them circles
@thinking_neuron
@thinking_neuron 2 months ago
LOL