This is "THE BEST" workshop on TensorFlow. Thank you so much Martin!!!!!
@KatySei 7 years ago
Thanks Martin. One of the best lectures I've ever heard.
@pymzorr 8 years ago
Many thanks. I'm following a deep learning course this semester, and this presentation is worth at least 15 hours of lectures.
@MartinGorner 8 years ago
You can now run this yourself with a self-paced code lab: codelabs.developers.google.com/codelabs/cloud-tensorflow-mnist Have fun
@dbiswas 8 years ago
You are the "BEST" !!!!!
@LuisFernandoValenzuela 7 years ago
Martin Görner thank you very much! 👏
@bejoscha 7 years ago
Awesome. Thanks so much. (Both for the video and now for the excellent code lab).
@STIVESification 6 years ago
awesome video
@jogatavid 8 years ago
That was absolutely great! Thanks Martin!
@milesdavidsmith 8 years ago
This dude is giving a great talk.
@arriva1256 8 years ago
Really pleasant explanation of the intricacies of deep learning! Thanks so much.
@vishnuviswanath25 8 years ago
awesome talk!
@maciejswiechowski6064 8 years ago
I'm glad I stayed with this video even though I do have a PhD (=> the title).
@mohamadbakhsh374 6 years ago
You are the best, thanks
@rl-rc7kb 8 years ago
From a geometrical point of view, the equation L = W.X + b can be interpreted as a rotation plus a translation. Why do we need to redefine addition?
@MartinGorner 8 years ago
Yes, it is a kind of "rotation", for one image. We want to rotate 100 images with one formula while keeping b the same for all, so we need to replicate b 100 times to make the matrix dimensions compatible. The little trick called "broadcasting add" redefines addition so that we can still write the "replicate and add" as just "+".
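Martin's point can be seen directly in NumPy, which follows the same broadcasting rules as TensorFlow. A minimal sketch with random placeholder data (the shapes match the MNIST softmax layer in the talk: 100 images of 784 pixels, 10 classes):

```python
import numpy as np

X = np.random.rand(100, 784)   # a batch of 100 flattened 28x28 images
W = np.random.rand(784, 10)    # weights mapping 784 pixels to 10 classes
b = np.random.rand(10)         # ONE bias vector, shape (10,)

# Broadcasting: b (shape (10,)) is implicitly replicated across the
# 100 rows of X @ W (shape (100, 10)) -- no explicit tiling needed.
L = X @ W + b
print(L.shape)  # (100, 10)

# The equivalent explicit version: replicate b 100 times, then add.
L_explicit = X @ W + np.tile(b, (100, 1))
print(np.allclose(L, L_explicit))  # True
```

The `+` in the one-liner is exactly the "broadcasting add" from the answer above: one formula applied to the whole batch with a single shared bias vector.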
@Brad_Jacob 8 years ago
Great video, I'm still trying to wrap my head around all of this... Could a PID loop be used in conjunction with the learning rate to reduce the number of oscillations during gradient descent?
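PID control of the learning rate is not standard practice, but the usual remedies are related in spirit: learning-rate decay (used later in the talk) damps exactly these oscillations. A minimal NumPy sketch on a 1-D quadratic, with illustrative constants chosen to make the effect visible:

```python
def descend(lr_schedule, steps=50, x0=5.0):
    """Minimise f(x) = x**2 (gradient 2*x) with a per-step learning rate."""
    x = x0
    for step in range(steps):
        x -= lr_schedule(step) * 2 * x
    return x

# A fixed, too-large rate overshoots the minimum and oscillates around it.
fixed = descend(lambda step: 0.9)

# Exponentially decaying the rate damps the oscillation over time.
decayed = descend(lambda step: 0.9 * (0.95 ** step))

# The decayed schedule lands closer to the minimum at 0.
print(abs(fixed), abs(decayed))
```

With the fixed rate, each step multiplies x by (1 - 1.8), so the iterate flips sign every step; decay shrinks that factor toward 1 so the sign-flipping stops and the iterate settles.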
@othmaneelmeslouhi7702 8 years ago
Hi, it's a very good and clear presentation. But when I tested the source code, I always got 0% accuracy. Why?
@bensibree-paul7289 8 years ago
So with enough memory could it learn to see?
@milesdavidsmith 8 years ago
Google always amazes me with their absolute excellence in software engineering.
@BeyondTheBrink 8 years ago
A bit confused about the chart at 10:52. The "computed probabilities" should sum up to 1, no? On the chart, though, they sum to more than 2. In the real world the input to the cross-entropy would have to be softmaxed to sum up to 1, right? Thx
@MartinGorner 8 years ago
Well, softmax "normalises" the vector, so the guarantee is that the "norm" of the output vector is 1. Using the L1 norm, the sum should be 1: you are right, and that is indeed the norm usually used with softmax. Using L2 is OK too, and in that case it is the sum of squares that is 1. That is what I used on that slide.
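To make the normalisation concrete, here is a minimal NumPy sketch of the usual (L1-normalised) softmax; its outputs always sum to 1, which is what lets them be read as probabilities. The logit values are illustrative:

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability; this does not change the result.
    exps = np.exp(logits - np.max(logits))
    return exps / np.sum(exps)

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)

print(probs)        # roughly [0.659, 0.242, 0.099]
print(probs.sum())  # 1.0 up to float rounding: the L1 norm of a softmax is 1
```

An L2-normalised variant would instead divide by the square root of the sum of squared exponentials, making the sum of squares equal to 1, as described on the slide.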
@BeyondTheBrink 8 years ago
Awesome, thx for clarifying
@siaboonleong 8 years ago
Where do you get the source data?
@bencdavis 8 years ago
It uses MNIST, which ships with TensorFlow. Check the intro tutorial on the TensorFlow site.
@ProjectSuperSport 8 years ago
link to source code please?
@maksymonufriienko9788 8 years ago
github.com/martin-gorner/tensorflow-mnist-tutorial. You can find other info at the end of the video.
@ProjectSuperSport 8 years ago
Thanks :)
@Blomiley 8 years ago
I never thought I would learn so much from a guy in a t-shirt and cuffed jeans. updating model...
@MartinGorner 7 years ago
The next video in the series is online: kzbin.info/www/bejne/rJKvYnxod6mSrrs "Tensorflow, deep learning and modern convolutional neural nets". We build a neural network from scratch that can spot airplanes in aerial imagery, and we also cover recent (TF 1.4) TensorFlow high-level APIs like tf.layers, tf.estimator and the Dataset API. For developers who already know some basics (relu, softmax, dropout, ...), I recommend you start there to see how a real deep model is built using the latest best practices for convnet design.
@monicaheddneck8190 8 years ago
spoiler: 99.3%! :D
@MartinGorner 8 years ago
I got to 99.5% with batch normalisation :-)
@MartinGorner 8 years ago
Here you go: github.com/martin-gorner/tensorflow-mnist-tutorial/blob/master/mnist_4.2_batchnorm_convolutional.py
@MartinGorner 8 years ago
and the video proof: kzbin.info/www/bejne/m4iQY6OViLOEsJY max accuracy 99.56%, average accuracy in the last 3000 iterations: 99.52%
@milesdavidsmith 8 years ago
I got 101% accuracy
@MartinGorner 8 years ago
Slides for this video: goo.gl/pHeXe7 and there is a second part as well now, covering recurrent networks and more: kzbin.info/www/bejne/rKKVn6GAacxphJI (from 1h15'30"). Questions welcome, here or on Twitter: @martin_gorner