10.2 Generative Adversarial Networks
23:22
9.3 Policy Gradients
26:55
5 years ago
4.3 Dropout
11:05
6 years ago
8.1 Convolutional Neural Networks
36:24
6.1 Restricted Boltzmann Machines
44:45
5.1 Radial Basis Function Networks
14:40
3.6 Batch Normalization
12:25
6 years ago
4.2 Penalty-based Regularization
13:41
1.4 Multilayer Neural Networks
15:02
10.3 Kohonen Self-Organizing Map
19:06
Comments
@LouisDuran
@LouisDuran 1 year ago
Thanks for this content. I just bought your 2nd edition book from Springer.
@ryanm852
@ryanm852 2 years ago
Is there a published solution to obtaining the QSP matrices from UV matrices?
@radsimu
@radsimu 2 years ago
Very interesting, but what is the activation function for classifying new inputs with this thing? It is something that rewards small distances between weights and inputs, which is very different from the classic weighted sum passed through an activation function that you normally see in neural networks. That makes me wonder why we even call these nodes/centroids "neurons".
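For concreteness, here is a minimal sketch of an RBF forward pass, assuming Gaussian hidden units and a plain linear readout on top; the 3-6-3 shapes and all values are illustrative, not taken from the lecture:

```python
import numpy as np

def rbf_forward(x, prototypes, gamma, W_out):
    """Forward pass of a toy RBF classifier.

    x:          (d,)   input vector
    prototypes: (m, d) one prototype per hidden "neuron"
    gamma:      bandwidth of the Gaussian bump
    W_out:      (m, k) ordinary linear output layer
    """
    # Hidden activation is LARGE when x is CLOSE to a prototype --
    # a similarity score, not the usual weighted sum.
    dists_sq = np.sum((prototypes - x) ** 2, axis=1)   # (m,)
    hidden = np.exp(-gamma * dists_sq)                 # Gaussian activations

    # Classification happens in an ordinary linear layer on top
    # (often trained by least squares or softmax regression).
    return hidden @ W_out                              # (k,) class scores

rng = np.random.default_rng(0)
x = rng.normal(size=3)                  # 3 inputs
prototypes = rng.normal(size=(6, 3))    # 6 hidden RBF units
W_out = rng.normal(size=(6, 3))         # 3 output classes
print(rbf_forward(x, prototypes, gamma=1.0, W_out=W_out))
```

So the hidden "neuron" still maps an input to a scalar activation; it just measures similarity to a prototype instead of projecting onto a weight vector, and the familiar weighted-sum machinery reappears only in the output layer.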
@Nobody-ws4ix
@Nobody-ws4ix 2 years ago
Can you show me an example of an RBF network with 3 input neurons, 6 hidden units, and 3 output units?
@Ash-bc8vw
@Ash-bc8vw 2 years ago
Very enlightening.
@emanali1197
@emanali1197 2 years ago
Best video that explains this model. Thank you.
@deebo7276
@deebo7276 2 years ago
Great presentation, especially for pointing out and clarifying that the initial allocation of clusters to 2D nodes is not important, and that a 2D node's proximity to neighboring nodes is strictly a result of the neighborhood learning. You say that in some cases clusters are allocated to 2D nodes (prior to learning, I assume?). Could you say anything about this? Is the point of that to speed up the learning, since the n-dimensional nodes will already have closer proximities as indicated by the 2D map and won't require as many iterations?
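As a concrete reference point, here is a compact sketch of one Kohonen SOM update, assuming a Gaussian neighborhood over a fixed 2D grid; the initialization and hyperparameters are illustrative only:

```python
import numpy as np

def som_step(x, W, grid, lr=0.5, sigma=1.0):
    """One Kohonen SOM update (hypothetical hyperparameters).

    x:    (d,)   one training point
    W:    (n, d) weight vector of each grid node
    grid: (n, 2) fixed 2D coordinates of the nodes
    """
    # Best-matching unit: the node whose WEIGHTS are closest to x.
    bmu = np.argmin(np.sum((W - x) ** 2, axis=1))

    # Neighborhood strength decays with distance on the 2D GRID, so
    # grid-adjacent nodes get pulled toward x together -- this is what
    # makes grid proximity meaningful after training.
    grid_d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
    h = np.exp(-grid_d2 / (2.0 * sigma ** 2))          # (n,)

    W += lr * h[:, None] * (x - W)
    return W

grid = np.array([[i, j] for i in range(3) for j in range(3)], dtype=float)
W = np.random.default_rng(0).normal(size=(9, 2))       # random start
for x in np.random.default_rng(1).normal(size=(200, 2)):
    W = som_step(x, W, grid)
```

Since W starts random here, any pre-allocation of clusters to grid nodes could only act as a warm start (plausibly cutting down the number of iterations); the map's topology still has to come from the neighborhood term h.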
@billykotsos4642
@billykotsos4642 3 years ago
Really looking forward to the 2nd edition! A must-have! Thanks for putting these videos online!
@yashodharaholkar
@yashodharaholkar 3 years ago
You said "weights are shared across the whole volume"? Does that mean that for a batch of input images the same weights are used (for all the images present in that batch)?
@billykotsos4642
@billykotsos4642 3 years ago
The 1st edition was amazing. The 2nd edition is a day-1 buy for me!
@ajitkumar1579
@ajitkumar1579 3 years ago
This is the best video on encoder-decoders... Thanks!
@moussatouhami7567
@moussatouhami7567 3 years ago
I'm sorry, but the notation you use for the nodes starting from 9:44 is completely vague and misleading.
@nouraaboudi
@nouraaboudi 3 years ago
Could you please explain the application of SENets to grayscale images (channels = 1)?
@Srg9000
@Srg9000 3 years ago
In a grayscale image, SE-Net might not be used in the 1st hidden layer, but after that the number of channels will depend on the number of units in the layer.
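A minimal numpy sketch of a squeeze-and-excitation block may help, assuming the standard two-layer gating with a reduction ratio; all weights are hypothetical and untrained. Once an earlier convolution has produced multiple feature maps, the single grayscale input channel no longer matters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(feat, W1, W2):
    """Squeeze-and-excitation over a (C, H, W) feature map.

    W1: (C, C // r) squeeze weights, W2: (C // r, C) excite weights
    (hypothetical, untrained, for illustration only).
    """
    s = feat.mean(axis=(1, 2))          # squeeze: global average pool -> (C,)
    z = np.maximum(s @ W1, 0.0)         # bottleneck FC + ReLU
    g = sigmoid(z @ W2)                 # per-channel gates in (0, 1)
    return feat * g[:, None, None]      # rescale each channel

rng = np.random.default_rng(0)
feat = rng.normal(size=(16, 8, 8))      # 16 channels from an earlier conv
W1 = rng.normal(size=(16, 4))           # reduction ratio r = 4
W2 = rng.normal(size=(4, 16))
print(se_block(feat, W1, W2).shape)     # (16, 8, 8)
```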
@xuxiaobin5942
@xuxiaobin5942 3 years ago
A lost gem in the sea of YouTube. I hope more people take notice of this great series!! I like your book as well. Thank you!
@prakashb1094
@prakashb1094 3 years ago
This is great and I'm happy to have seen this exactly when I was learning attention mechanisms in NLP :)
@pedrocamargos7184
@pedrocamargos7184 3 years ago
Good morning! I am a student in the master's program in electrical engineering in Brazil, studying backpropagation, a topic linked to artificial neural networks. I have a question about the notation used in a summation symbol. Can you help me? The notation (in LaTeX format) is $\sum_{h : h_r \Rightarrow h}$ with an empty superscript, and it refers to equation (3.10), page 144, Chapter 3 ("Training Deep Neural Networks") of the book Neural Networks and Deep Learning: A Textbook. What does the subscript "h : h_r \Rightarrow h" in the summation mean? Thanks for everything and for all the knowledge you share.
@thechadeuropeanfederalist893
@thechadeuropeanfederalist893 1 year ago
I asked ChatGPT: The subscript notation in the summation symbol you provided, $\sum_{h:h_r\Rightarrow h}^{ "empty " }$, is specifying a summation over all values of $h$ that are reachable from $h_r$ through one or more layers of the neural network. The notation $h_r\Rightarrow h$ indicates that $h$ is reachable from $h_r$. Specifically, it means that there is a path of connections between neurons starting from $h_r$ and ending at $h$ in the neural network. The subscript $h:h_r\Rightarrow h$ then restricts the summation to only include those values of $h$ that satisfy this condition. The symbol $\emptyset$ as the superscript denotes that the summation is over an empty set if there are no such values of $h$. In the context of the book chapter you mentioned, this notation is being used to calculate the gradients of the cost function with respect to the weights in the neural network using backpropagation. The summation over reachable values of $h$ is necessary to account for all possible paths that contribute to the gradient.
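Written out, the chain rule that this notation abbreviates would look as follows; this is a plausible rendering consistent with the discussion above, not a verbatim quote of equation (3.10):

```latex
\frac{\partial L}{\partial h_r}
  \;=\; \sum_{h \,:\, h_r \Rightarrow h}
        \frac{\partial L}{\partial h}\,
        \frac{\partial h}{\partial h_r}
```

That is, the sum ranges over every hidden unit $h$ that $h_r$ feeds into, and the superscript of the summation is simply empty.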
@theduffrichie2050
@theduffrichie2050 3 years ago
I could not find the equation. Please share a link to the edition of the book you are using; page 144 is somewhere in Chapter 4 in my copy.
@ame8973
@ame8973 3 years ago
Thanks, it answered my question. The content is good, but the quality... could be improved ;-)
@glaswasser
@glaswasser 4 years ago
How do we get an input vector containing information about the genres? Will it just contain numeric values for "Drama", "music", "arts", and "music", with a 1 or a 0 depending on whether the input belongs to that class? Or how does that work?
@charuaggarwal2740
@charuaggarwal2740 4 years ago
The weight vector W_i is like the centroid of k-means. It can be considered a numeric description of the cluster, and can also be considered a prototype. The dimensionality of this vector is the same as that of the data set.
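Putting the question and this reply together in code: a hedged sketch assuming a fixed genre vocabulary with one 0/1 slot per genre (a multi-hot encoding), and weight vectors of the same dimensionality; the vocabulary and all numbers are hypothetical:

```python
import numpy as np

GENRES = ["Drama", "Music", "Arts", "Comedy"]    # hypothetical vocabulary

def multi_hot(item_genres):
    """0/1 input vector with one slot per genre in a fixed vocabulary."""
    return np.array([1.0 if g in item_genres else 0.0 for g in GENRES])

x = multi_hot({"Drama", "Arts"})                 # -> [1. 0. 1. 0.]

# The weight vectors W[i] live in the SAME 4-dimensional space as x,
# which is why each one can be read as a centroid-like prototype.
W = np.random.default_rng(0).random((9, len(GENRES)))   # 3x3 SOM grid
print(int(np.argmin(np.sum((W - x) ** 2, axis=1))))     # closest prototype
```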
@subramaniamsrivatsa2719
@subramaniamsrivatsa2719 4 years ago
Very beautifully and succinctly explained. Thank you!
@piyushwagh7960
@piyushwagh7960 4 years ago
Can you provide a link to the Python code?
@WahranRai
@WahranRai 4 years ago
Too much blah blah blah, show it by example!
@theduffrichie2050
@theduffrichie2050 3 years ago
Exactly.
@nex7886
@nex7886 4 years ago
Thank you sir, a very helpful and good explanation.
@conceptsofprogramming7532
@conceptsofprogramming7532 4 years ago
Sir, where can we find the solutions to your book, Neural Networks and Deep Learning: A Textbook by Charu C. Aggarwal?
@AdityaSingh-kp9tj
@AdityaSingh-kp9tj 4 years ago
Thank you :)
@sshiv908
@sshiv908 4 years ago
Nice slides, but it would have been better if you had used diagrams to explain it.
@doyourealise
@doyourealise 4 years ago
Nice.
@indrakumarsahu
@indrakumarsahu 4 years ago
Great explanation
@mohamedshabara1138
@mohamedshabara1138 4 years ago
The presentation is great, but you should use a cursor while explaining, to point at the lines and terms you are talking about. I lost track many times.
@bharath-tc7fw
@bharath-tc7fw 4 years ago
Thank you, Dr. Charu Aggarwal, for making these videos freely available on YouTube. Tutorials directly from the author of a book make the subject really interesting.
@ranarana-sp7um
@ranarana-sp7um 4 years ago
Waste lectures... you speak as if you have learned it all by heart.
@LouisDuran
@LouisDuran 1 year ago
Lame reply.
@hasanmashat6306
@hasanmashat6306 5 years ago
Great explanation, Charu!
@ykx2579
@ykx2579 5 years ago
This teacher's books and courses are excellent, and they have inspired me a lot! Check www.charuaggarwal.net/ for his other resources.
@noamaon
@noamaon 5 years ago
Very informative; it really helped me understand the intuition behind these methods.
@vallabhchugh2075
@vallabhchugh2075 5 years ago
Good, but not for beginners.
@autripat
@autripat 5 years ago
Nice, thank you. In addition to responding better in noisy situations, I'll add that RBFs are less likely to be fooled by adversarial examples. Deep nets consistently misclassify adversarial examples.
@km-ss7st
@km-ss7st 5 years ago
Very helpful explanation! Thank you.
@AndyD6
@AndyD6 6 years ago
Thank you very much for this video; I am using it for my computer science degree studies. Thank you, prof!
@nex7886
@nex7886 4 years ago
I, too, am using this for my computer science degree studies. I like this teaching video very much, thank you, prof!
@AndyD6
@AndyD6 4 years ago
@nex7886 I passed!
@jackjohnsen9719
@jackjohnsen9719 6 years ago
Thanks for the video. You might want to slow down a bit - you are speaking too fast and it gets pretty hard to understand. Anyway, I still managed to learn something, so thanks :)
@AndyD6
@AndyD6 6 years ago
I have a tip for you, sir: you can set the speed to 0.75x, 0.5x, or 0.25x via the video interface. I use this plenty of times when learning. I have a very fast brain, so I set videos to 2x speed and learn twice as fast, haha.
@glaswasser
@glaswasser 4 years ago
I watched at 1.25x and it was fine for me, so I think it depends on whether the topic is completely new to you or not.
@kukurmutta
@kukurmutta 6 years ago
Great explanation as always!
@dr.rajanishkumarkaushal9012
@dr.rajanishkumarkaushal9012 6 years ago
Nothing is showing on the slides.
@AndyD6
@AndyD6 6 years ago
Are you blind, sir? I see the slides fine in the video. Very informative, good teaching.
@dr.rajanishkumarkaushal9012
@dr.rajanishkumarkaushal9012 6 years ago
Are you teaching someone, or reading to yourself?
@AndyD6
@AndyD6 6 years ago
He is teaching all of us who watch the video, like me. Good learning!
@DhananjayKumar-kn5uq
@DhananjayKumar-kn5uq 6 years ago
Hi Charu, I was reading your book and I must say it is one of the best on the subject. Thanks for writing such a book and for your video series. It would be great if you imparted your knowledge more often.
@ronmedina429
@ronmedina429 4 years ago
I found this lecture series through the book as well. I'm surprised the book is not better known!
@rahuldeora5815
@rahuldeora5815 6 years ago
Hey, good video! Not many people talk about this. Is there a TensorFlow implementation of this?
@rajeev1071
@rajeev1071 6 years ago
Very well explained. While we can write ML algorithms without thinking about these aspects, knowing these details gives a different perspective on why we are doing things a certain way.