Neural Network Optimization

bbdtv · 9 views · A day ago

Ivan goes into detail on how neural networks can be optimized using methods such as RMSprop and momentum, how these two ideas combine to form Adam (adaptive moment estimation), and how Adam in turn leads to the AMSgrad variant. The choice of activation function also matters for optimization: ReLU and SoftMax address the vanishing-gradient problem of the standard sigmoid and hyperbolic tangent functions, and Leaky ReLU then addresses the dying-neuron problem that ReLU itself introduces.
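Since the description walks through these update rules, here is a minimal NumPy sketch of the optimizers mentioned (momentum, RMSprop, Adam with an AMSgrad option) and the activation functions. The function names, hyperparameter defaults (lr, beta1, beta2, eps), and the toy quadratic example are illustrative assumptions of mine, not code from the video.

```python
import numpy as np

def sgd_momentum(grad_fn, w, lr=0.01, beta=0.9, steps=100):
    """Momentum: step along an exponentially decaying average of past gradients."""
    v = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        v = beta * v + (1 - beta) * g
        w = w - lr * v
    return w

def rmsprop(grad_fn, w, lr=0.01, beta=0.9, eps=1e-8, steps=100):
    """RMSprop: divide each step by a running average of squared gradients,
    so dimensions with consistently large gradients take smaller steps."""
    s = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        s = beta * s + (1 - beta) * g**2
        w = w - lr * g / (np.sqrt(s) + eps)
    return w

def adam(grad_fn, w, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8,
         steps=100, amsgrad=False):
    """Adam: combine the momentum term (first moment) with the RMSprop
    term (second moment), with bias correction for the early steps.
    With amsgrad=True the denominator uses the running maximum of the
    second moment, so the effective step size can never grow back."""
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    v_max = np.zeros_like(w)
    for t in range(1, steps + 1):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)          # bias-corrected first moment
        v_hat = v / (1 - beta2**t)          # bias-corrected second moment
        if amsgrad:
            v_max = np.maximum(v_max, v_hat)
            denom = np.sqrt(v_max) + eps
        else:
            denom = np.sqrt(v_hat) + eps
        w = w - lr * m_hat / denom
    return w

# Activation functions mentioned in the video.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))        # saturates -> vanishing gradients

def relu(x):
    return np.maximum(0.0, x)               # no saturation for x > 0, but units can "die"

def leaky_relu(x, a=0.01):
    return np.where(x > 0, x, a * x)        # small negative slope keeps gradients alive

def softmax(x):
    z = np.exp(x - np.max(x))                # subtract max for numerical stability
    return z / z.sum()

# Toy usage: minimise f(w) = (w - 3)^2, whose gradient is 2(w - 3).
grad = lambda w: 2 * (w - 3.0)
print(adam(grad, np.array([0.0]), steps=500))  # converges towards 3.0
```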
