Deep Learning (CS7015): Lec 4.1 Feedforward Neural Networks (a.k.a. multilayered network of neurons)

  84,586 views

NPTEL-NOC IITM

Comments: 22
@RahulMadhavan 4 years ago
@15:41 "with great complexity comes....great power" with great power comes great responsibility. with great responsibility comes great expectations. with great expectations comes great sacrifice. with great sacrifice comes great reward. And thus... the objective function was maximized
@nishkarshtripathi6123 4 years ago
But we have to minimize it here.
@RahulMadhavan 4 years ago
​@@nishkarshtripathi6123 Thank you for the correction! min f(x) = max -f(x) and thus the great sacrifices were not in vain :-)
@lakshman587 3 years ago
Awesome!!!
@rahulpramanick2001 1 year ago
@@RahulMadhavan this is only true if max f(x) is the global maximum.
@RahulMadhavan 1 year ago
@@rahulpramanick2001 But alas, we only seek from the function great reward, not the greatest reward. For achieving such greatness, you need a dash of convexity apart from the aforementioned complexity!
@syedhasany1809 5 years ago
Shouldn't W_L at 6:31 be 'kxn' and not the other way around?
@jagadeeshkumarm3333 6 years ago
In the a = b + W*h formula, either W should be transposed or W's size should be (no. of outputs × no. of inputs); only then does the matrix multiplication W*h work as expected.
@mratanusarkar 4 years ago
Yes, it completely depends on how you represent the x vectors. If you make x a column vector or a row vector, the matrix is written accordingly. Get the idea and you can do the math yourself; with so many courses out there, different people do it differently, but the idea remains the same. While writing the formula, write down the vector/matrix dimensions and proceed accordingly; in the end, the summation formula should hold.
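A minimal NumPy sketch of the two conventions discussed in this thread (the shapes and variable names are assumptions for illustration, not taken from the lecture slides): with W stored as (outputs × inputs) you compute b + W·h directly, and with W stored as (inputs × outputs) you need the transpose.

```python
import numpy as np

n, k = 4, 3                        # n inputs, k outputs for one layer (assumed sizes)
h = np.random.randn(n)             # previous layer's activations
b = np.random.randn(k)             # one bias per output neuron

# Convention 1: W has shape (k, n) = (no. of outputs x no. of inputs)
W_out_in = np.random.randn(k, n)
a1 = b + W_out_in @ h              # pre-activation, shape (k,)

# Convention 2: W has shape (n, k) = (no. of inputs x no. of outputs); transpose it
W_in_out = W_out_in.T
a2 = b + W_in_out.T @ h            # same pre-activation, shape (k,)

assert np.allclose(a1, a2)         # both conventions give the same result
```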
@BhuShu972 4 years ago
I think the objective loss function (yi_hat - yi)^2 is correct. It minimizes the error over all the training samples, i = 1 to N. What you did was write the error function more granularly; both are needed.
@rijuphilipjames2860 2 years ago
y_hat and y are both of dimension k; they are column vectors. I had the same doubt. Thanks 👍
@ashiqhussainkumar1391 3 years ago
There is a slight mistake in the formula a_i = b_i + W_i^T * h_(i-1). It makes sense once you check which weight w_i gets multiplied by which x_i.
@vaibhavthalanki6317 2 years ago
@7:38, is b11 = b12 = b13?
@coolarun283 2 years ago
Not necessarily.
@mlofficial9175 5 years ago
Can anyone please explain the last error function? What does the summation over i instances mean?
@TheDelcin 5 years ago
We are trying to fit the model to 'N' training examples, so we minimise the error over the training data as a collection. And since the output is a vector, he also sums the error over each element of the vector. The gradient descent algorithm works only if f(x) is a real number.
@vin_jha 4 years ago
So the actual y_i corresponding to each training example i will be a k-dimensional vector, with 1 at the coordinate for the class the example belongs to and 0 for the rest. That is, if the example lies in class 'p', then the p-th coordinate of y_i will be 1 and the other coordinates 0. Now our NN can output an arbitrary k-dimensional vector, so our loss function is the sample mean of the element-wise squared difference of the two vectors.
@anshrai6 2 months ago
It will be min((1/k)(fun)), not min((1/N)(fun)).
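A minimal sketch of the scalar loss this thread is discussing, with hypothetical array names; it averages over the N training examples and sums the squared error over the k output coordinates, producing the single real number that gradient descent needs.

```python
import numpy as np

N, k = 5, 3                                            # assumed sizes for illustration
y_true = np.eye(k)[np.random.randint(0, k, size=N)]    # one-hot targets, shape (N, k)
y_hat = np.random.rand(N, k)                           # network outputs, shape (N, k)

# L = (1/N) * sum_i sum_j (y_hat[i, j] - y[i, j])^2  -> a single scalar
loss = np.mean(np.sum((y_hat - y_true) ** 2, axis=1))
print(loss)
```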
@MuhammadWaseem-vb3qe 4 years ago
Find whether the following is a linearly separable problem or not: ((¬A OR B) AND 1) OR 0. Also create a neural network for the given equation with a suitable set of weights.
@bhoomeendra 2 years ago
If you look at it closely, it's just an OR function: the AND 1 and OR 0 change nothing, so the expression reduces to ¬A OR B.
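A minimal sketch (the weights and bias below are hand-picked assumptions, not from the course) of a single threshold neuron computing ¬A OR B, which shows the reduced expression is indeed linearly separable.

```python
def neuron(A, B, w=(-1.0, 1.0), bias=0.5):
    """Fires (returns 1) when w[0]*A + w[1]*B + bias >= 0."""
    return int(w[0] * A + w[1] * B + bias >= 0)

# Check the neuron against the truth table of (NOT A) OR B
for A in (0, 1):
    for B in (0, 1):
        expected = int((not A) or B)
        assert neuron(A, B) == expected
        print(A, B, "->", neuron(A, B))
```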
@fahadaslam820 4 years ago
(Y)
@deepak_kori 1 year ago
Sir, you look like Khan Sir.