Great video on autograd, amazing as always. Loved it, Dr. Jeff!
@NisseOhlsen 2 years ago
Small correction @1:36: you don't "take the partial derivative of each weight"; you take the partial derivative of the loss function with respect to each weight. Also @7:24, the derivative of x^2 is 2x, not x. And @7:46, that IS the definition of the ANALYTIC derivative. It is also used in the discrete case, the difference being that the jumps are finite, not infinitesimal.
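Both corrections are easy to confirm with GradientTape itself; a minimal sketch in TensorFlow 2.x, with 4.0 as an illustrative input:

    import tensorflow as tf

    x = tf.Variable(4.0)
    with tf.GradientTape() as tape:
        y = x ** 2  # f(x) = x^2

    # Autodiff differentiates the function, then evaluates:
    # d(x^2)/dx = 2x, so at x = 4 the gradient is 8.
    dy_dx = tape.gradient(y, x)
    print(dy_dx.numpy())  # 8.0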
@ChandraShekhar-rn9ty 4 years ago
Hi Jeff. Thank you so much. I spent a couple of hours figuring out how on earth Keras manages any change in a custom loss so easily. I was worried whether it even checks that the function is differentiable. With this video, things are pretty clear now.
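For reference, Keras does not verify differentiability up front; as long as the custom loss is built from TensorFlow ops, the gradient comes out of automatic differentiation. A minimal sketch (custom_mse and the one-layer model are just illustrative):

    import tensorflow as tf

    def custom_mse(y_true, y_pred):
        # Built entirely from TensorFlow ops, so autodiff can
        # compute its gradient during training.
        return tf.reduce_mean(tf.square(y_true - y_pred))

    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="sgd", loss=custom_mse)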
@tanyajain3461 4 years ago
Does GradientTape break when math operations are applied to custom indexes of input_tensor? Also when stacking tensors and then using them in our loss function? Please suggest a workaround; I've been trying to implement this, but it returns all gradients as NaN.
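Hard to diagnose without the exact code, but indexing and stacking are themselves differentiable, so one thing to check is whether the tensor ever leaves the tape (e.g. via .numpy()) or hits an op that is undefined at the evaluated point, a common source of NaNs. A sketch with made-up values showing gather/stack inside the tape:

    import tensorflow as tf

    x = tf.Variable([[1.0, 2.0], [3.0, 4.0]])
    with tf.GradientTape() as tape:
        # tf.gather keeps the indexing op on the tape; dropping
        # to numpy in between would break the gradient chain.
        col0 = tf.gather(x, 0, axis=1)
        col1 = tf.gather(x, 1, axis=1)
        loss = tf.reduce_sum(tf.stack([col0, col1], axis=1) ** 2)

    print(tape.gradient(loss, x))  # finite gradients, no NaNs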
@SrEngr 2 years ago
Which version of TensorFlow is this?
@kbd2820 3 years ago
Learning from the legend. It was an amazing experience. Thank you!
@tonsandes 2 years ago
Hi Jeff, is there a way to access the y_pred information? I want to build my loss function, but not in the conventional way, which passes y_pred and y_true to a tf or backend function. I need a step that accesses y_pred, then applies a function to estimate the std, and returns that std value as the output of my loss function. Do you know how to do this?
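If the loss really is just a statistic of the predictions, one sketch (assuming the std is taken over the batch of predictions; std_loss is a hypothetical name):

    import tensorflow as tf

    def std_loss(y_true, y_pred):
        # Keras always passes y_pred as the second argument,
        # so it is directly accessible here; y_true is ignored.
        return tf.math.reduce_std(y_pred)

Any further processing of y_pred just has to stay in TensorFlow ops so the loss remains differentiable.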
@heecheolcho3246 4 years ago
Thank you for a good lecture. I have a question: is there any difference between y = tf.divide(1.0, tf.add(1, tf.exp(tf.negative(x)))) and y = 1.0/(1 + tf.exp(-x))?
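They should behave identically: Python operators on tensors dispatch to the corresponding TensorFlow ops (/ to truediv, + to add, unary minus to negative), so both forms build the same differentiable computation. A quick check:

    import tensorflow as tf

    x = tf.constant([-1.0, 0.0, 1.0])
    a = tf.divide(1.0, tf.add(1.0, tf.exp(tf.negative(x))))
    b = 1.0 / (1.0 + tf.exp(-x))
    print(tf.reduce_all(tf.equal(a, b)).numpy())  # True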
@subhajitpaul8391 a year ago
Thank you so much for this amazing video.
@prajith3676 4 years ago
I was actually looking for this GradientTape() everywhere. Thank you, finally my doubt is cleared! :-)
@StormiestOdin2 5 years ago
Hi Jeff, thank you for all these great videos. I have a question about TensorFlow. If I create a model with no hidden layers, does this make my model not a neural network but linear discriminant analysis? Like this:

    model = keras.Sequential([
        keras.layers.Dense(12, activation="relu"),
        keras.layers.Dense(3, activation="softmax")
    ])
@HeatonResearch 5 years ago
It's both at that point.
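For reference, dropping the hidden layer leaves a single Dense softmax layer, which is a linear model (multinomial logistic regression). A minimal sketch, with an illustrative input shape:

    import tensorflow as tf
    from tensorflow import keras

    # No hidden layer: inputs map directly to softmax outputs,
    # i.e. a linear classifier (multinomial logistic regression).
    model = keras.Sequential([
        keras.layers.Dense(3, activation="softmax", input_shape=(4,))
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")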
@StormiestOdin2 5 years ago
Ahh, thank you. Really appreciate all the videos you put on YouTube; they have helped me loads with making my own neural network :)
@zonexo5364 4 years ago
Strange, why did I get "Tensor("AddN:0", shape=(), dtype=float32)" as output instead?
@zonexo5364 4 years ago
Realised that in TensorFlow I have to run a session:

    with tf.Session() as sess:
        print(dz_dx.eval())
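That symbolic Tensor("AddN:0", ...) output is TensorFlow 1.x graph mode; in TensorFlow 2.x, eager execution is on by default and the gradient prints directly with no session. A sketch of the 2.x equivalent (dz_dx named as in the comment above):

    import tensorflow as tf  # TensorFlow 2.x, eager by default

    x = tf.Variable(4.0)
    with tf.GradientTape() as tape:
        z = x ** 2  # z stands in for the video's computation
    dz_dx = tape.gradient(z, x)
    print(dz_dx.numpy())  # prints 8.0 directly, no session needed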
@tonihullzer1611 4 years ago
Awesome work, liked and subscribed, excited to see more.
@slime121212 4 years ago
Thank you for this video. This question was very important to me, and now I know how to work it out :)
@brubrudsi 5 years ago
Also, in the beginning you said the derivative of x^2 is x. It is 2x.
@HeatonResearch 5 years ago
Yes you are correct, good point. All the more reason for me to use automatic differentiation. 😞
@maulikmadhavi 4 years ago
Super explanation! Subscribed!
@shunnie8482 3 years ago
Thanks for the amazing explanation, I finally understand GradientTape (I think at least haha).
@HeatonResearch 3 years ago
Glad it helped!
@luchofrancisco 4 years ago
Thanks, nice video!
@brubrudsi 5 years ago
The derivative of 4^2 is 0, not 8.
@mockingbird3809 5 years ago
I think you should take the derivative of the function, not of the inputted number: take the derivative of the function, then plug the value (a number) into the derived function to get the numeric output. You get 0 only if you take the derivative of a constant; the derivative of f(x) = C is zero. In this case the function is f(x) = x^2, not a constant.
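Spelled out, the order matters: differentiate first, then evaluate at the point.

    f(x) = x^2,   f'(x) = 2x,   f'(4) = 2 * 4 = 8

Substituting x = 4 before differentiating turns f into the constant 16, and the derivative of a constant is 0, which is where the 0-vs-8 confusion comes from.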