Thanks for this content. I just bought the 2nd edition of your book from Springer.
@ryanm852 · 2 years ago
Is there a published solution to obtaining the QSP matrices from UV matrices?
@radsimu · 2 years ago
Very interesting, but what is the activation function for classifying new inputs with this thing? It is something that rewards small distances between weights and inputs, which is very different from the classic weighted sum passed through some activation function that you normally see in neural networks. Which makes me wonder why we even call these node/centroid things "neurons".
@Nobody-ws4ix · 2 years ago
Can you describe a case of an RBF network with 3 input neurons, 6 hidden units, and 3 output units?
@Ash-bc8vw · 2 years ago
Very enlightening.
@emanali1197 · 2 years ago
Best video that explains this model. Thank you!
@deebo7276 · 2 years ago
Great presentation, especially for pointing out and clarifying that the initial allocation of clusters to 2D nodes is not important, and that the 2D nodes' proximity to neighboring nodes is strictly a result of the neighborhood learning. You say that in some cases clusters are allocated to 2D nodes (prior to learning, I assume?). Could you say more about this? Is the point of that to speed up the learning, since the n-dimensional nodes will already have closer proximities as indicated by the 2D map and won't require as many iterations?
@billykotsos4642 · 3 years ago
Really looking forward to the 2nd edition! A must-have! Thanks for putting these videos online!
@yashodharaholkar · 3 years ago
You said "weights are shared across the whole volume." Does that mean that for a batch of input images the same weights are used (for all the images present in that batch)?
@billykotsos4642 · 3 years ago
The 1st edition was amazing. The 2nd edition is a day-1 buy for me!
@ajitkumar1579 · 3 years ago
This is the best video on encoder-decoder architectures... Thanks!
@moussatouhami7567 · 3 years ago
I'm sorry, but the notation you use for the nodes starting from 9:44 is completely vague and misleading.
@nouraaboudi · 3 years ago
Could you please explain how SENets apply to a grayscale image (channels = 1)?
@Srg9000 · 3 years ago
For a grayscale image, SE-Net might not be usable in the 1st hidden layer, but after that the number of channels depends on the number of filters in the layer, so SE blocks can be applied there.
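To illustrate this point, here is a toy NumPy sketch of my own (the shapes, weight names, and reduction ratio are made up, not taken from the video): an SE block gates channels on a hidden layer's feature map, where the channel count no longer depends on the single grayscale input channel.

```python
import numpy as np

def se_block(x, w1, w2):
    """Toy squeeze-and-excitation sketch on a (C, H, W) feature map.

    x:  feature map, shape (C, H, W)
    w1: squeeze FC weights, shape (C_reduced, C)
    w2: excitation FC weights, shape (C, C_reduced)
    """
    # Squeeze: global average pooling over spatial dims -> one scalar per channel.
    z = x.mean(axis=(1, 2))                                      # shape (C,)
    # Excitation: bottleneck MLP with ReLU, then a sigmoid gate per channel.
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))    # shape (C,), each in (0, 1)
    # Rescale each channel by its gate.
    return x * s[:, None, None]

# Even with a 1-channel input image, a hidden conv layer can have C > 1
# channels (here 8), so SE applies from that point on.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))       # 8 channels from a hidden layer
w1 = rng.standard_normal((2, 8)) * 0.1   # bottleneck down to 2
w2 = rng.standard_normal((8, 2)) * 0.1
y = se_block(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```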
@xuxiaobin5942 · 3 years ago
A lost gem in the sea of YouTube. I hope more people take notice of this great series!! I like your book as well. Thank you!
@prakashb1094 · 3 years ago
This is great and I'm happy to have seen this exactly when I was learning attention mechanisms in NLP :)
@pedrocamargos7184 · 3 years ago
Good morning. I'm a student in the master's program in electrical engineering in Brazil. I am studying backpropagation, a topic linked to artificial neural networks, and I have a question about the notation of a summation symbol. The notation (in LaTeX) is $\sum_{h:h_r\Rightarrow h}$ (with an empty superscript), and it refers to equation (3.10) on page 144 of Chapter 3, "Training Deep Neural Networks," in the book Neural Networks and Deep Learning: A Textbook. What does the subscript "h:h_r\Rightarrow h" mean? Thanks for everything and for all the knowledge you share.
@thechadeuropeanfederalist893 · 1 year ago
I asked ChatGPT: The subscript notation in the summation symbol you provided, $\sum_{h:h_r\Rightarrow h}^{ "empty " }$, is specifying a summation over all values of $h$ that are reachable from $h_r$ through one or more layers of the neural network. The notation $h_r\Rightarrow h$ indicates that $h$ is reachable from $h_r$. Specifically, it means that there is a path of connections between neurons starting from $h_r$ and ending at $h$ in the neural network. The subscript $h:h_r\Rightarrow h$ then restricts the summation to only include those values of $h$ that satisfy this condition. The symbol $\emptyset$ as the superscript denotes that the summation is over an empty set if there are no such values of $h$. In the context of the book chapter you mentioned, this notation is being used to calculate the gradients of the cost function with respect to the weights in the neural network using backpropagation. The summation over reachable values of $h$ is necessary to account for all possible paths that contribute to the gradient.
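To make the reachability condition concrete, here is a toy Python sketch of my own (the graph and the contribution values are made-up illustrations, not the book's network): the subscript restricts the sum to nodes $h$ with a directed path from $h_r$.

```python
# Adjacency list of a small DAG: node -> nodes it feeds into.
edges = {
    "h_r": ["h1", "h2"],
    "h1": ["h3"],
    "h2": ["h3"],
    "h3": [],
    "g": ["h3"],   # "g" feeds into "h3" but is NOT reachable from "h_r"
}
contribution = {"h1": 0.5, "h2": -0.2, "h3": 1.0, "g": 7.0}

def reachable(start):
    """All nodes h with start => h, i.e. a directed path from start to h."""
    seen, stack = set(), list(edges[start])
    while stack:
        h = stack.pop()
        if h not in seen:
            seen.add(h)
            stack.extend(edges[h])
    return seen

# sum_{h : h_r => h} contribution[h] -- the subscript excludes "g".
total = sum(contribution[h] for h in reachable("h_r"))
print(round(total, 2))  # 1.3
```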
@theduffrichie2050 · 3 years ago
I could not find the equation. Please share a link to the book you are using; page 144 is somewhere in Chapter 4 in my copy.
@ame8973 · 3 years ago
Thanks, that answered my question. The content is good, but the quality... could be improved ;-)
@glaswasser · 4 years ago
How do we get an input vector containing information about the genres? Will it just contain numeric values for "Drama", "music", "arts" and "music", with a 1 or a 0 depending on whether the input belongs to that class, or how does that work?
@charuaggarwal2740 · 4 years ago
The weight vector W_i is like the centroid of k-means. It can be considered a numeric description of the cluster, and can also be considered a prototype. The dimensionality of this vector is the same as that of the data set.
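For example (a toy sketch of my own, with made-up prototype values): treating each W_i as a prototype, a new point is assigned to the node whose weight vector is closest, exactly as with k-means centroids.

```python
import numpy as np

# Each node i holds a weight vector W_i with the same dimensionality as the
# data; it acts as the cluster prototype / centroid for that node.
W = np.array([[0.0, 0.0],
              [5.0, 5.0],
              [0.0, 5.0]])   # 3 nodes, 2-dimensional data

def winner(x):
    """Best-matching unit: the node whose prototype W_i is closest to x."""
    d = np.linalg.norm(W - x, axis=1)   # Euclidean distance to each W_i
    return int(np.argmin(d))

print(winner(np.array([0.4, 0.2])))  # 0  (closest to prototype [0, 0])
print(winner(np.array([4.0, 4.5])))  # 1  (closest to prototype [5, 5])
```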
@subramaniamsrivatsa2719 · 4 years ago
Very beautifully and succinctly explained. Thank you!
@piyushwagh7960 · 4 years ago
Can you provide a link to the Python code?
@WahranRai · 4 years ago
Too much bla bla bla; show it with an example!
@theduffrichie2050 · 3 years ago
Exactly.
@nex7886 · 4 years ago
Thank you, sir. A very helpful, good explanation.
@conceptsofprogramming7532 · 4 years ago
Sir, where can I find the solutions to your book, Neural Networks and Deep Learning: A Textbook by Charu C. Aggarwal?
@AdityaSingh-kp9tj · 4 years ago
Thank you :)
@sshiv908 · 4 years ago
Nice slides, but it would have been better if you had used diagrams to explain it.
@doyourealise · 4 years ago
Nice.
@indrakumarsahu · 4 years ago
Great explanation
@mohamedshabara1138 · 4 years ago
The presentation is great, but you should use a cursor while explaining, to point at the lines and terms you are talking about. I lost track many times.
@bharath-tc7fw · 4 years ago
Thank you, Dr. Charu Aggarwal, for making these videos freely available on YouTube. Tutorials directly from the author of a book make the subject really interesting.
@ranarana-sp7um · 4 years ago
Wasted lectures... you speak as if you have just learned it by heart.
@LouisDuran · 1 year ago
Lame reply.
@hasanmashat6306 · 5 years ago
Great explanation, Charu!
@ykx2579 · 5 years ago
This teacher's books and courses are excellent, and they inspired me a lot! Visit www.charuaggarwal.net/ to check out his other resources.
@noamaon · 5 years ago
Very informative, really helped me understand the intuition behind these methods
@vallabhchugh2075 · 5 years ago
Good, but not for beginners.
@autripat · 5 years ago
Nice, thank you. In addition to responding better in noisy situations, I'll add that RBFs are less likely to be fooled by adversarial examples. Deep nets consistently misclassify adversarial examples.
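A minimal sketch of why (my own toy example, not from the video): a Gaussian RBF unit exp(-||x - c||^2 / (2 sigma^2)) responds strongly only near its prototype c, so an input pushed far from the training data yields a near-zero activation rather than a confident response.

```python
import numpy as np

def rbf_activation(x, c, sigma=1.0):
    """Gaussian RBF unit: strong response only near its prototype c."""
    return np.exp(-np.sum((x - c) ** 2) / (2.0 * sigma ** 2))

c = np.zeros(2)                                   # prototype at the origin
near = rbf_activation(np.array([0.1, 0.1]), c)    # input close to the prototype
far = rbf_activation(np.array([3.0, 3.0]), c)     # input far from all training data
print(near > 0.9, far < 0.01)  # True True
```

By contrast, a linear unit's pre-activation keeps growing with the input norm, which is one intuition for why deep nets can be confidently wrong far from the data.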
@km-ss7st · 5 years ago
Very helpful explanation! Thank you.
@AndyD6 · 6 years ago
Thank you very much for this video. I am using it for my studies in a computer science degree. Thank you, prof!
@nex7886 · 4 years ago
I, too, am using this for my computer science degree classes. I like this teaching video very much. Thank you, prof!
@AndyD6 · 4 years ago
@nex7886 I passed!
@jackjohnsen9719 · 6 years ago
Thanks for the video. You might want to slow down a bit - you are speaking too fast and it gets pretty hard to understand. Anyway, I still managed to learn something, so thanks :)
@AndyD6 · 6 years ago
I have a tip for you, sir: you can set the speed to 0.75x, 0.5x, or 0.25x via the video interface. I use this plenty of times for learning. I have a very fast brain, so I set the video to 2x speed and learn twice as fast, haha.
@glaswasser · 4 years ago
I watched at 1.25x and it was fine for me, so I think it always depends on whether the topic is completely new to you or not.
@kukurmutta · 6 years ago
Great explanation as always!
@dr.rajanishkumarkaushal9012 · 6 years ago
Nothing is showing on the slides.
@AndyD6 · 6 years ago
Are you blind, sir? I see the slides fine in the video. Very informative, good teaching.
@dr.rajanishkumarkaushal9012 · 6 years ago
Are you teaching someone, or are you reading to yourself?
@AndyD6 · 6 years ago
He is teaching all of us who watch the video, like me. Good learning!
@DhananjayKumar-kn5uq · 6 years ago
Hi Charu, I was reading your book, and I must say it is one of the best on the subject. Thanks for writing such a book and for your video series. It would be great if you imparted your knowledge more often.
@ronmedina429 · 4 years ago
Found this lecture series as well by reading the book. I'm surprised the book is not as well known!
@rahuldeora5815 · 6 years ago
Hey, good video! Not many people talk about this. Is there a TensorFlow implementation of this?
@rajeev1071 · 6 years ago
Very well explained. While we can write ML algorithms without thinking about these aspects, knowing these details gives a different perspective on why we are doing things a certain way.