Something that would have helped me understand this is that in an RBM, the states of the neurons are calculated according to a probability function that depends on the weights of the network. So rather than each neuron having a real-valued activation based on the weights, each neuron is either on or off - 0 or 1 - in a given trial, with a probability that comes from a formula based on the weights. Without knowing this I couldn't understand how a fixed set of weights could generate data. Each trial generates a different sample. It is a good sample if it matches a pattern the RBM detected in the training data (it doesn't have to match perfectly). You can show the RBM a sample it hasn't seen, activate the hidden layer, then activate the visible layer, and it should "recall" something it has seen in the data, even if there was noise or missing data. For example, if you train it on 9000 hand-written digits, it should learn that many of these digits share common features. If you give it a number 3 that it has never seen before, it should excite the neurons in the hidden layer that recognize features of threes, and in turn those features should activate something in the visible layer that looks 3-ish.
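To make that concrete, here is a rough NumPy sketch of the sampling step as I understand it (the sizes, weights, and names are made up, just for illustration):

```python
import numpy as np

rng = np.random.default_rng()

def sample_hidden(v, W, b):
    """Sample binary hidden states from a visible vector v."""
    p = 1.0 / (1.0 + np.exp(-(v @ W + b)))        # activation probabilities
    return (rng.random(p.shape) < p).astype(int)  # each unit ends up 0 or 1

W = rng.normal(scale=0.1, size=(784, 100))  # e.g. 28x28 digit -> 100 hidden units
b = np.zeros(100)
v = rng.integers(0, 2, size=784)            # one binary "digit" image

print(sample_hidden(v, W, b))  # run it twice: same weights, different sample
```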
@vishwajeetohal9137 · 2 years ago
Best video to build an intuition for RBM that I have found so far!!!!!
@milanpandey9951 · 8 years ago
Seriously!! It really helped me with my research paper. Thank you for making such awesome videos.
@DeepLearningTV · 8 years ago
Glad you liked it!
@RajShuklamantu · 9 years ago
Happy to see such an initiative. It's really very helpful. I'd also like to see the algorithm for RBMs and the mathematics behind it, along with an implementation.
@BigDudeSuperstar · 8 years ago
Great video that does give you a good top-down understanding of RBMs and their features. Keep it up!
@DeepLearningTV · 8 years ago
Glad you like it :-)
@MaksudulAlam · 7 years ago
Very good series for introductory Deep Learning!
@gopesh97 · 5 years ago
It's so amazing. I studied this topic long ago, but only now do I have the proper intuition.
@DeepLearningTV · 5 years ago
Glad this is helpful!
@kkochubey · 9 years ago
This is only the first part of the solution (mentioned in the previous video) to the vanishing gradient problem in back propagation. The key outcome is automatic layer-by-layer (RBM) training from the first layer to the last, instead of one complete forward pass plus back propagation through all the layers. But even though I got the idea that this is possible, I still need to learn how to train an RBM.
@DeepLearningTV · 9 years ago
+Kirill Kochubey I agree - we touch on that in the next clip on Deep Belief Nets.
@ENGMESkandaSVaidya · 3 years ago
Why should the input and the reconstructed input be compared? Is that how we get to know which weights are important for pattern recognition?
@DeepLearningTV · 3 years ago
It's for training. Each time the comparison is made, the weights are tweaked with the hope that the next comparison will bring the two closer. Doing this over and over again, you train the net.
@ENGMESkandaSVaidya · 3 years ago
@@DeepLearningTV tnk u so much
@shubhamchandra9258 · 4 years ago
Amazing introduction. Instant subscription.
@chiru6753 · 4 years ago
How are RBMs used to decide on the important features here? Is it through the learned weights of the visible layer - do higher weights imply good features? Also, are the weights in an RBM directional? Thank you.
@DeepLearningTV · 4 years ago
Technically, RBMs don't decide which features are important; that is a natural result of the training process. Strictly speaking, RBMs are used to reproduce the input. For complex problems like face re-creation, it is hard to answer your question about how to detect important features: the recreation happens as a result of both the forward and backward passes, so the hidden weights also matter.
@xuzhang872 · 8 years ago
I believe the most popular solutions for the vanishing gradient are adaptive learning rates and good initialization of the weights. Let me know if I am wrong.
@DeepLearningTV · 8 years ago
Yea, those are very popular. The use of ReLU also really changed things, given that its gradient is 1 (for positive inputs).
@xuzhang872 · 8 years ago
Agree. There is a newer variant of ReLU (I forgot the name) with y = αx when x < 0.
@aaronnichols437 · 8 years ago
I could be wrong, but is it a "leaky" ReLU unit? A smaller "slope" when x < 0 than when x > 0?
@DeepLearningTV · 8 years ago
cs231n.github.io/neural-networks-1/
@xuzhang872 · 8 years ago
I think that's right.
@AnIgnoramus · 8 years ago
Amazing videos! I have one suggestion though: whenever a term is introduced but has not been explained yet, could you please explicitly mention that it will be explained later? I find myself rewinding sometimes to see if I missed some terms.
@DeepLearningTV · 8 years ago
That's good to know - if you could provide examples, that would help. But in general, that is a good suggestion.
@vishnuviswanath25 · 9 years ago
Thanks for the videos - short, simple and interesting. Looking forward to the video about Deep Belief Nets.
@-dialecticsforkids2978 · 5 years ago
Doesn't an RBM also require an output layer, just so you can rearrange the weights? Otherwise, how would you reconstruct anything from a hidden layer that just spills out gibberish to the human eye?
@InternalException · 9 years ago
Hello, thanks (and congratulations) for disseminating information about the area. We really need to popularize the hard sciences, which are sometimes seen as "magic" by people. I do believe, however, that some things in the videos are really confusing. The gradient explanation wasn't exactly simplified (nor necessary), and if I didn't already know what it's all about, I would be totally lost by the 5th video. Talking about all those different kinds of neural networks doesn't help either... I think it would be far simpler to explain the basic concepts of neural networks in general, and then start introducing specifics one by one (not all at the same time). I would also point out that there is a lot of controversy about the "pioneerism" of Hinton, LeCun and Bengio. They sure are great researchers and have made big contributions to the area, but as pointed out by other researchers (e.g. Jürgen Schmidhuber), an ethical clarification is needed. If the idea is to have something for people who want to use deep models as a black box, I would recommend skipping most of the heavy (not really, but you get the point) historical introduction and technicalities (vanishing gradients?) and focusing on the use cases and applications. Thanks and keep up the good work! Cheers.
@DeepLearningTV · 9 years ago
+InternalException Glad you like the videos :-) To your point, there are many deep nets available out of the box, and if you wish to use them as-is, there may not be a need to understand some of the inner workings - you don't need to know how a petrol engine works to drive a car. There are actually many software platforms available for deep nets - NVIDIA DIGITS, for example - that let you use pre-built models like AlexNet through an intuitive UI. For that sort of situation, all one needs to know are the basic concepts of neural nets, and they can probably get it to work, and work well. In this series, once we get past the model videos, there are several that focus on use cases, as well as platforms and libraries, taking one straight to the practical aspects. On the flip side, there are people who need to develop custom deep net applications, which means there is a need to understand what is going on under the hood. This series may not cover everything that one needs to know, but discussing the intuition behind some of the models has its place. That is not something that is easily done - these concepts are not simple to grasp, nor are they easy to explain - so the challenge is to figure out how much detail to mention and what to leave out. To me, if I had something like this to begin with when I was first learning about deep nets, my learning path would have been shorter. This series is intended as a broad overview of the area, letting people know what is out there and how to navigate it. We actually moved the "How to choose" video up front, before any of the model videos, as a way for people to pick and choose which ones they should watch. For instance, if you are not dealing with patterns that change with time, you won't need to learn recurrent nets, unless you want to. Take care!
@DHorse · 8 years ago
I'm just going through your video series in Feb 2017. Regarding preparing for studies in this area, have you considered putting together a summary guide that specifies the prerequisites and the study areas that cover the math and programming? As you mention, I know you cover current platforms to some degree. Would you consider addressing preparation and further studies?
@DeepLearningTV · 8 years ago
Potentially - the list of topics for videos is very long, and I'll add this one to it without any promises for when it will actually be done. As for study areas, check out Andrew Ng's Machine Learning MOOC on Coursera, or Michael Nielsen's online deep learning book.
@denverconger9577 · 7 years ago
DeepLearning.TV thanks! I started his course and find it great! A little hard to learn as a newbie, but Google works great.
@jameswaugh183 · 8 years ago
Hi, thanks for the great videos. One thing that wasn't clear to me in this video is why the results of the reconstruction would differ at all from the original inputs - i.e., if the activations in the hidden layer were fed directly back into the backward pass and there is no randomness in the process, why would there be any difference? Geoffrey Hinton says in his Coursera course that the activations can be interpreted as probabilities on the hidden nodes, and that on the backward pass "binary outcomes formed from the probabilities are passed back for the reconstruction pass" (not the actual activations). Is this always the case? If so, I think this could be a little clearer in your video. Thanks for the great work!
@DeepLearningTV · 8 years ago
If you fed the activations back (and used the same weights and biases as the forward pass), and you had exactly one input, you would still not be able to recreate the input exactly. The activation is a non-linearity and transforms the weighted input in, as the name suggests, a non-linear way; inverting the weights/biases on an output transformed that way gives you a value that is related to the input, but not an exact reconstruction. Moreover, with each training pass you are making the weight/bias updates across multiple inputs, each of which would have been modified to a different extent during the passes. Also, the spirit of these videos is to showcase the intuition behind each model. While I appreciate Hinton's decision to use binary outcomes, and it makes sense for the model, there is more than one way to set up an RBM. With each video, our aim is to provide as much intuition as possible without making it too detailed.
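As a rough illustration of the first point (NumPy, with made-up weights): feeding the activations back through the same weights gives something related to the input, but not an exact copy.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(6, 4))   # made-up weights, visible x hidden
v = rng.random(6)                        # one real-valued input

h = 1 / (1 + np.exp(-(v @ W)))           # forward pass: non-linear activation
v_rec = 1 / (1 + np.exp(-(h @ W.T)))     # backward pass with the same weights

print(np.round(v, 3))      # the original input
print(np.round(v_rec, 3))  # related to it, but not an exact reconstruction
```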
@jameswaugh183 · 8 years ago
No worries - I completely understand your decision to showcase the intuition behind the models which is what has made the videos so useful to me personally. I understand your decision to not dive into too much of the maths however this point left me a little confused as to how the whole thing could work. Thanks for your thorough explanation here - it makes a lot of sense.
@DeepLearningTV · 8 years ago
Glad you like the content! If you have any other questions, do let us know.
@parinyahi · 9 years ago
Really great video, finally I do understand RBM. Thanks :)
@KirillBerezin · 8 years ago
Whoa, this is blowing my mind. I didn't quite get how reconstruction works. Do we activate the hidden neurons one by one? And is the result compared to the input data each time?
@DeepLearningTV · 8 years ago
Well, whatever weights you use to generate the hidden layer activations can be used to get the input back. A very simple math analogy: in the equation y = mx + c, if you have y, m and c, you can calculate x as (y - c)/m. That's the sort of principle at work here.
@KirillBerezin · 8 years ago
DeepLearning.TV oh, I got it, thanks. You take the current result and look at what it resembles. But I don't get why it should be closest to the input image. A neural network's job is to remember the things that are characteristic of the input, which helps to pick it out. I am fairly sure that a speech recognition network can't talk, only listen - and if it can, the words would be pronounced with a lot of noise/distortion.
@DeepLearningTV · 8 years ago
Yea - the re-constructed faces are decent, but may not have the same quality as the original. There is always signal loss in the process. However, we use a metric - KL divergence - to track and minimize that loss, so depending on how you train it, your reconstruction should be decent. I believe this to be true for audio as well - there will be noise, but there should also be signal.
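If it helps, here is a tiny sketch of that comparison metric (illustrative numbers only - a real setup compares distributions over the actual inputs and reconstructions):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two discrete distributions p and q."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()    # normalize to valid distributions
    return float(np.sum(p * np.log(p / q)))

original       = [0.7, 0.2, 0.1]
reconstruction = [0.6, 0.25, 0.15]
print(kl_divergence(original, reconstruction))  # smaller = less signal loss
```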
@KirillBerezin · 8 years ago
DeepLearning.TV for me it is hard to imagine signal loss in image recognition, so let's stick to voice. On a network trained for vowel/word recognition, I could apply special tests: make distortions on some frequencies and teach the network that the original is the better source, so it would be able to detect noise that does not affect recognition. I could then apply this extra information to construct clearer sound. That's the best I came up with. It should produce clear vowels from words, with random pitch/happiness etc., but I expect it will sound like a group of people interrupting each other. The challenge I see is inventing distortions that really affect pronunciation, because simple noise will probably not make a lot of sense. Thanks for your responsiveness! You are making a big effort for humanity's (or machinity's) progress.
@DeepLearningTV · 8 years ago
For image recognition, check out reconstructions of Labeled Faces in the Wild using an RBM - I saw it in a video by Adam Gibson of Deeplearning4j. I have the same problem with speech that you have with images - I haven't worked in speech recognition, so maybe it is hard. And you're welcome!
@perfectMind91 · 8 years ago
Nice effort, I enjoy watching. But I have a small note about the videos' construction: do you really have to remind us in each video to "comment and share experience"? It kind of stops my chain of thought :P
@DeepLearningTV · 8 years ago
Noted! And we hear you loud and clear. In Ep 21 onwards, we no longer have the call to comment :-)
@DHorse · 8 years ago
Well, if you pulled out the "stop and reflect here and provide feedback" prompts... that's a minor issue I have no problem with. Another user experience (a chat area or site area) where you can prompt for feedback and have directed discussion is one alternative. The style of prompting you were using provides guidance or moderation of the discussion and gives a clear indication of when a student should stop, pause and reflect - a time-worn and useful instructional approach that gets the student to do some thinking on their own, as all novices should.
@ranc4711 · 7 years ago
That’s because the video is created and edited by an artificial intelligence. 😉
@fluffmiller1084 · 7 years ago
On RBMs, I'm missing something... I was assuming the weights on the forward pass would be the same as on the backward pass (like in a neural net). But in that case, surely the image would be perfectly reconstructable, so the divergence would be zero? Conversely, if the weights are not the same, then the way for the learning algorithm to optimize (i.e. minimize the divergence) would be to make them the same... but then you've not achieved anything.
@mehdigheisari4010 · 8 years ago
You said in episode 6 that RBMs and autoencoders are in the same category, but I looked in a well-known survey and it said they are different. Please clarify this for me.
@DeepLearningTV · 8 years ago
Well, I can't speak for the survey. RBMs are different in that autoencoders don't have the backward pass. They are the same as autoencoders in that both are about unsupervised learning, feature extraction and/or smart weight initialization.
@mehdigheisari4010 · 8 years ago
I want a categorization of deep models but cannot find one. Especially RNNs - where do they fit in the categorization?
@DeepLearningTV · 8 years ago
I don't think there is really an official categorization. RNNs loosely fall under a set of deep net models that have a working memory.
@Kunal-ix4bw · 7 years ago
scholar.google.com.au/scholar?q=Deep+Learning+in+Intrusion+Detection+System+An+Overview&hl=en&as_sdt=0&as_vis=1&oi=scholart&sa=X&ved=0ahUKEwiVsee9h-XSAhWDspQKHXueB7YQgQMIJjAA Categorization is given in this paper
@mohammadbadriahmadi9862 · 8 years ago
I have 2 questions: 1. In the "recreate input" part, we create the output of each hidden-layer node by summing the weighted inputs plus a bias and then applying that node's activation function - that is the forward pass, right? Then we take the output of each hidden node, multiply it by the same weights, and compare the result to the real inputs - that is the backward pass, right? What is the point of this part - why should the difference between these reconstructed inputs and the real inputs be as low as possible? I cannot understand the logic behind it. Moreover, why do we use the same weights again (we already multiplied by them once in the forward pass)? Are we assuming that the hidden-layer nodes should be enough for our purpose? 2. About the unsupervised part: as far as I understand, each hidden-layer node represents a label, so somehow we are defining the labels when we construct the architecture of our neural network. I mean, if we create four nodes in our hidden layer, we have already decided that our data has 4 different labels. Is that right?
@PaulJackson-hk3jy · 8 years ago
Forward and back prop are specific formulas for training deterministic neural nets. Forward prop calculates an answer based on the current weights; back prop adjusts the weights so that the answer will be closer. In an RBM, an adjustment is calculated in both passes. In the first pass, the weights are adjusted upward to help the network remember the input data. In the second pass, the network unlearns what it generates from the hidden layer. The idea is that over time, as the network gets more accurate, this learning and unlearning will cancel out; until then, anything that doesn't cancel out is an error, and unlearning it improves the model. It's better to think of those hidden nodes as features than labels. Labels are things that have meaning to users; features are patterns that are useful to the network to help it generate the data. An interesting thing is that after training an RBM, you can use it as the bottom two layers of a traditional back-prop neural network. In this setup, the features of the RBM are like inputs to the neural net. You can continue to adjust the weights found by the RBM using back prop, or keep them.
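Roughly, in code, the idea looks like this (a NumPy sketch of single-step contrastive divergence with made-up sizes; biases are left out for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, lr=0.1):
    """One CD-1 step: learn the data, unlearn the reconstruction."""
    h0 = (rng.random(W.shape[1]) < sigmoid(v0 @ W)).astype(float)  # sample hidden
    v1 = sigmoid(h0 @ W.T)              # what the net generates from the hidden layer
    h1 = sigmoid(v1 @ W)
    positive = np.outer(v0, h0)         # "remember the input data"
    negative = np.outer(v1, h1)         # "unlearn what it generates"
    return W + lr * (positive - negative)

W = rng.normal(scale=0.1, size=(6, 4))        # 6 visible units, 4 hidden units
v = rng.integers(0, 2, size=6).astype(float)
W = cd1_update(v, W)   # as the model gets accurate, the two phases cancel out
```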
@corey333p · 8 years ago
"why should the differences btw this inputs and real inputs be as low as possible?" This technique has been mathematically shown to tune the nodes to relevant features of the input data. The mathematics are complex, but I think the intuition is that if the nodes are sensitive to relevant features, their values should be able to feed backward and reconstruct the inputs. Whereas if the nodes are sensitive to irrelevant aspects of the input, they will fail to reconstruct the inputs. This is why we use the same weights forward and backward, because the weights are what we are testing and adjusting. "about the unsupervised part, as far as I understand, each hidden layer node is presenting a label, so somehow, we are defining the labels when we are constructing the architecture of our neural network. I mean if we create four nodes for our hidden layer, we already defined that our data have 4 different labels. is that right?" This is my intuition, and I'm not a computer scientist. I think you are assuming the nodes work in a linear fashion, but in reality they act as parts in a nonlinear system. What this means is that, for example, if we have a neural net trained for facial recognition, perhaps a low-level aspect of this task would be edge detection. Somewhere in the lower layers, maybe there is a characteristic for recognizing a curved edge as opposed to a straight edge. The assumption that there is one single node somewhere in the network that differentiates curved edges and straight edges is false. If the neural net has that capability, then the edge detection functionality is distributed among many or possibly all nodes in the network. The nodes work together in complex ways. The features the NN can detect, EG edge-detection, depth perception, whatever else, don't exist in separate boxes, but they interconnect, and so do the nodes in a neural net.
@mscir · 9 years ago
Looking forward to more videos, thanks.
@AlanKevinBourke · 8 years ago
Hi there. You mentioned that if you have labelled data, a deep learning net is not necessarily the way to go. We have labelled a large data-set of physical activity data from adult subjects, with synchronous data from body-worn sensors. The data was labelled at 25 samples per second with about 8 labels {standing, sitting, lying, shuffling, walking, climbing stairs, descending stairs, bending forward}. Can you suggest a neural network for this problem? Also, from my research so far, it seems that choosing the number of hidden layers and the number of nodes in each layer requires some in-depth knowledge of the field. If you could suggest values for the number of hidden layers and number of nodes, for an input of 10 features and a data-set of 4 million data points, that would be great.
@DeepLearningTV · 8 years ago
+Alan Bourke Sorry about the late reply - first, if you have labelled data, a deep net is certainly an option. It also depends on how complex the problem is. In your case, complexity seems simple to moderate so perhaps a shallow neural net will suffice. Your input layer = number of factors, and the output layer = number of classes. As for the other hyper parameters like number of hidden layers and number of nodes per layer, you may want to figure that out iteratively given it varies for every problem. Start with 4 layers and perhaps 10 to 15 nodes per layer. If you have too many layers/nodes, you will overfit the problem and too few would mean your precision/recall would suffer.
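As a rough starting sketch of such a shallow net (Keras-style and purely hypothetical - 10 input factors, a couple of small hidden layers, 8 output classes - something to iterate on, not a tuned model):

```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(15, activation="relu", input_shape=(10,)),  # 10 input features
    keras.layers.Dense(15, activation="relu"),
    keras.layers.Dense(8, activation="softmax"),  # one unit per activity class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```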
@DHorse · 8 years ago
Hmmm. Could you briefly compare that to a more complex problem using similar terms?
@mrvchaudron · 7 years ago
Nice videos. I would prefer the examples to show some actual numbers. As it is, I fail to understand what the examples are illustrating - I just see some nodes light up...
@peaceforlove · 9 years ago
Hello, I am currently working on deep learning at school. We implemented an RBM to classify the MNIST database, but we lack data on the performance we should achieve. We use 784 input nodes (28*28) and 40 hidden nodes (this is what Geoff Hinton advises in his guide to training RBMs) for each RBM, and each RBM is trained solely on one class, e.g. one digit. However, our test - checking whether an input belongs to the class the RBM was trained on (using the reconstruction error) - only gives 40% accuracy. Any advice?
@beizhou2488 · 5 years ago
Does an RBM only consist of two layers? What if I add one more layer to it?
@DeepLearningTV · 5 years ago
Well, you can add layers to an RBM or stack RBMs, but if you do, it is no longer an RBM - it becomes a DBN (next video). The RBM is a special model that only has two layers.
@-dialecticsforkids2978 · 5 years ago
@@DeepLearningTV How about an output layer? Don't you need an output to judge the result and to rearrange the weights? Isn't that missing?
@DeepLearningTV · 5 years ago
@@-dialecticsforkids2978 The RBM is a special model in that the input layer also serves as the output layer. Once you transform an input and get to the hidden layer, the question an RBM asks is whether you can reverse that transformation to recreate the input. As an example, if the question asked is "2 times what is 6?", the answer is obviously 3; the reverse would be "6 times what is 2", where the answer is 1/3. The point of training the RBM for a problem set is that it works across a million inputs. So if you feed it a million faces, can it approximately recreate all of them, without overfitting for just a subset. Hope that makes sense.
@radjaaattou9728 · 8 years ago
Excellent tutorial! I have one question: what functions are used to train the RBM (the forward and backward passes and the comparison step)? Thank you.
@DeepLearningTV · 8 years ago
There's a nice overview here: deeplearning4j.org/restrictedboltzmannmachine. The forward and backward passes are about applying an activation to the weighted input (with a bias included). Comparisons are done using KL divergence.
@premalathavenkatraman8939 · 5 years ago
@@DeepLearningTV The link seems broken. I checked the DL4J site but could not locate the RBM content. Can you send the right link, please?
@dwaneho1347 · 8 years ago
Could you please tell me how this video was made? I want to make a presentation in video form, and I am very interested in the method you used.
@DeepLearningTV · 8 years ago
I used Prezi + ScreenFlow.
@dwaneho1347 · 8 years ago
Thank you very much~~~!!
@imme2763 · 5 years ago
This is so helpful, thank you very much
@AndreyMoskvichev · 4 years ago
Amazing video! Thanks!
@remariorichards8237 · 8 years ago
What exactly differentiates Restricted Boltzmann Machines from convolutional neural networks? Is it that an RBM can be integrated within a CNN - computed beforehand and then used as a data feeder to the CNN?
@srinivasvalekar9904 · 8 years ago
Labeled data vs. unlabeled data - could you please explain more about this?
@srinivasvalekar9904 · 8 years ago
OK. I found it here - stackoverflow.com/questions/19170603/what-is-the-difference-between-labeled-and-unlabeled-data
@DeepLearningTV · 8 years ago
That article sums it up nicely. As an added comment, good quality labeled data sets are harder to come by because of the labor involved in the labeling process.
@dr.marypriyasebastian3895 · 7 years ago
Hi, I have a large collection of English text, and I need to identify different verb pattern structures in the sentences. Is this technique suitable for such a task?
@DeepLearningTV · 7 years ago
Sounds like you want to syntactically parse your text. Check nlp.stanford.edu for parsers.
@dr.marypriyasebastian3895 · 7 years ago
Thank you for the reply.
@harshgawai1078 · 6 years ago
As far as I know, no deep net connects a node to the other nodes in the same layer, so why is only this one called "Restricted"?
@DeepLearningTV · 6 years ago
A Boltzmann machine will allow intra-layer connections...
@harshgawai1078 · 6 years ago
ohk and Thanx too!! Your content is best
@DeepLearningTV · 9 years ago
Hey all - RBMs are cool - they let you determine which patterns are significant in your data. Check them out :-)
@Cameron-ue7lu · 7 years ago
Very useful videos, thank you. However, it would be nice to learn earlier how the weights and biases are arrived at, given how much they are mentioned... I'm sure you'll come to this later.
@DeepLearningTV · 7 years ago
Weights and biases are set by the training process, the most popular one being backpropagation. While we don't have a video directly about backprop, the essence is that during training, the model's output is compared against the known output for a set of training data. The weights and biases start off set randomly (there is a lot of work on smart initialization); with each comparison, they are adjusted slightly, and the process is repeated many, many times - often millions - so that eventually the predicted output is as close as possible to the actual. The weights and biases at that point represent the learning, or the intelligence, of the model. You can also watch the episode on vanishing gradients to learn about a weakness of backprop.
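As a toy illustration of that compare-and-adjust loop (a single linear neuron in NumPy, with made-up data - real nets backpropagate through many layers, but the principle is the same):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # training inputs
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w                         # known outputs for the training data

w = rng.normal(size=3)                 # weights start off random
lr = 0.1
for _ in range(1000):                  # in practice: millions of comparisons
    pred = X @ w                       # model's output
    grad = X.T @ (pred - y) / len(y)   # how far off, and in which direction
    w -= lr * grad                     # adjust slightly

print(np.round(w, 3))                  # close to [2.0, -1.0, 0.5] after training
```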
@Cameron-ue7lu · 7 years ago
DeepLearning.TV Thanks, makes sense.
@mathforai-j5y · 2 months ago
@@DeepLearningTV awesome explanation💗
@heejuneAhn · 8 years ago
What do you mean by "Visible" in RBM, and What do you mean "Label"?
@DeepLearningTV · 8 years ago
Visible is the input layer, and hidden is the other one. Check the video around 0:54. Labels are classes or tags, used for classification problems, i.e. supervised learning.
@PaulJackson-hk3jy · 8 years ago
Labels are used in supervised learning. They are the names of the classes assigned to input data. So to perform supervised learning, you must have training data with the answers (labels) that you want the network to learn. An RBM is unsupervised: you do not need to give it labels. You just give it input data - you repeatedly set the visible layer to the different values sampled from your (unlabeled) training data. It learns how to generate data like what it saw in the training data.
@remariorichards8237 · 8 years ago
Also, where do you recommend finding learning materials on deep learning (I mean clear, understandable tutorials/courses)?
@johnrickert5572 · 7 years ago
Is an RBM truly "deciding" or just applying an algorithm? The highly anthropomorphic language doesn't seem accurate to me.
@DeepLearningTV · 7 years ago
That's a deep philosophical point about the nature of intelligence, and it is subjective. Once trained, an RBM can reproduce an input, and in doing that it automatically figures out which inputs are more important.
@johnrickert5572 · 7 years ago
I must admit I do not grasp your answer in the following respect: first it is said that the definition of intelligence is subjective, then it is stated that an RBM can be "trained" and will "automatically figure out" an answer. So, is it only subjective to speak of training? Is it just running a program to reach the end result? It seems much more accurate to speak of calibrating the machine.
@DeepLearningTV · 7 years ago
The question of what intelligence is, is subjective. Your original query was - "is it truly deciding?" To "truly decide", one must arguably be "truly intelligent". Thus far we don't have a definitive working scientific model of what intelligence is. As for algorithms, they are usually a set of rules - If this condition is true, do that. With complex problems like facial recognition in a digital image, there are so many variations that it is hard to define a set of rules and truly capture the large majority of scenarios (face at an angle, zoomed in, zoomed out, male, female, race, age, location etc). So in Machine Learning, the idea is to have the model learn a set of inputs, like a library of faces, and associate that with an output, like a tag ("Yes/No", "Face", etc). If it then encounters a face picture, it will recognize it as a face picture. If you provide a large variety of faces for training, it does an incredible job picking out the underlying patterns of what makes a face, and hence recognize a large variety of new faces it did not see in training. With the RBM in particular, instead of associating with an output (otherwise known as a supervised learning problem), the idea is to associate an input to itself (otherwise known as an unsupervised learning problem). For example, an RBM can be trained to recreate faces from themselves. If you input a face, it will output a recreation of the face.
@alika964 · 8 years ago
Thanks for the great videos. I just got lost starting from the 6th video (Restricted Boltzmann Machines).
@aounallahrayane8164 · 4 years ago
thank you so much you are the best
@ambrishsoni9933 · 7 years ago
I did not get the sense of "activate". Can you please explain in detail?
@davederdudigedude · 7 years ago
Is an RBM a type of RNN?
@DeepLearningTV · 7 years ago
Nope. RBMs are an unsupervised learning technique used to better initialize DBNs. For RNNs, watch episode 9.
@lorforlinux · 6 years ago
Your voice is mesmerizing
@gitanjalinair2013 · 7 years ago
Isn't the RBM's working similar to that of an SOM?
@devonk298 · 8 years ago
Nice explanation - ty
@aj-tg · 5 years ago
Thanks!
@bzqp2 · 2 years ago
Why did you choose such fonts :(
@satishjasthi2500 · 8 years ago
Thanks a lot
@fajarulinnuha6796 · 8 years ago
Great videos, thank you. btw, why does it sound as if you are crying? stay calm lol
@rahmatalbariqi6236 · 8 years ago
GG
@zukaanddaze3874 · 8 years ago
So if I give an AI deep learning, can I give it commands?
@DeepLearningTV · 8 years ago
Mmmm - well, with deep generative models, you could give overall commands and have it automatically figure out which tasks to perform to execute them. But the level of sophistication for that kind of application is still primitive - we are still several months, if not years, away from having it do anything remotely meaningful at the level we are used to.
@sanjaykrish8719 · 7 years ago
You have a nice voice..
@everythingiswhat · 7 years ago
This is very difficult to understand without examples that build on each other...
@le_ep · 5 years ago
Why does she sound like she's about to cry?
@cocoritosss8669 · 5 years ago
lol nice youtube bug... I commented just once wtf
@punkntded · 6 years ago
Sorry - this video has useful content, but the intro gives me a mini headache. Really bad use of tones.
@brawler-school · 7 years ago
Very complicated without any practical examples.
@user_375a82 · 1 year ago
yeah, sure, easy
@cocoritosss8669 · 5 years ago
blah blah blah if so please comment and let me know your thoughts blah blah blah if so please comment and let me know your thoughts blah blah blah if so please comment and let me know your thoughts blah blah blah please comment and tell me about your experience this time she said that differently... my pattern recognition works
@RamkrishanYT · 7 years ago
My name is Jeff ( sorry)
@onlybryanliu · 8 years ago
Stop with the please comment stuff.
@billyte1265 · 8 years ago
I'm gonna say this until you catch on: STOP ASKING FOR COMMENTS IN THE MIDDLE OF YOUR VIDEO. Its so distracting and unhelpful.
@DeepLearningTV · 8 years ago
This is the second and final time we are saying this - show respect! Any more comments like these and we will have to block you from this channel.
@DHorse · 8 years ago
Great job! Thank you! Keep up the good work!
@jyothishkumar3098 · 7 years ago
I actually agree with him on this.
@computerguycj1 · 5 years ago
Which videos have you published? Let me know so I can drop an ignorant comment about what annoys me in your videos?