Create an artificial neural network with Keras

64,359 views

deeplizard


Comments: 144
@deeplizard
@deeplizard 6 жыл бұрын
Keras Machine Learning / Deep Learning Tutorial playlist: kzbin.info/aero/PLZbbT5o_s2xrwRnXk_yCPtnqqo4_u2YGL
Machine Learning / Deep Learning Tutorials for Programmers playlist: kzbin.info/aero/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU
@hcgaron
@hcgaron 6 жыл бұрын
Thank you for making this playlist. You've also been super responsive to all the comments which is WAY above and beyond. You already provided excellent content. Wanted to express that there are MANY people (myself included) who really appreciate you.
@deeplizard
@deeplizard 6 жыл бұрын
Thank you, Hans! That certainly means a lot, and I appreciate you letting me know :)
@pranavhegde6470
@pranavhegde6470 2 жыл бұрын
Crisp, clear and to the point. Thank you!
@RinaSC1
@RinaSC1 5 жыл бұрын
Miss, I sincerely thank you for this playlist. It really helped me implement a good and performance efficient model for a ConvNet that predicts sentiment on Steam Review texts that I used as my graduation thesis and will be using on my master's. Your explanation is very clear and helpful for learning Keras :)
@deeplizard
@deeplizard 5 жыл бұрын
Very glad to hear that! Thanks for letting me know :)
@aguyitzkak3871
@aguyitzkak3871 4 жыл бұрын
This is absolutely the best Keras tutorial ever on the internet. It is much more understandable for beginners than the Keras tutorials on the TensorFlow channel. Thank you very much, ma'am.
@tekiero
@tekiero 4 жыл бұрын
you kidding me?
@aguyitzkak3871
@aguyitzkak3871 4 жыл бұрын
@@tekiero Any suggestion? To me, this is the best I have seen.
@maxtom5035
@maxtom5035 5 жыл бұрын
Thank you for creating this video, saves my thesis!
@stefantincescu7169
@stefantincescu7169 6 жыл бұрын
Simply the best. Good job!
@FelipeV3444
@FelipeV3444 6 жыл бұрын
This really helps a lot, thank you for taking the time to explain this :D
@deeplizard
@deeplizard 6 жыл бұрын
I'm glad to hear that, Felipe! You're welcome!
@dr.saidboumaraf2126
@dr.saidboumaraf2126 5 жыл бұрын
Great!!! Thank you so much, teacher.
@abramswee
@abramswee 5 жыл бұрын
fantastic channel. thanks for teaching me.
@fritz-c
@fritz-c 4 жыл бұрын
I spotted a slight typo in the article for this video: "inupt" → "input". I really enjoy your courses so far, by the way. I've stopped and started a few times with studying ML in the past, but this has been a pleasure to go through.
@deeplizard
@deeplizard 4 жыл бұрын
Fixed, thanks Chris! :D
@AtifImamAatuif
@AtifImamAatuif 5 жыл бұрын
Hi, thank you so much for this wonderful tutorial. Kindly consider a few suggestions: while typing, please zoom in on the area, as it is not visible on a small screen. Also, please share the notebook so that we can follow along with the video. Thanks once again.
@deeplizard
@deeplizard 5 жыл бұрын
Thank you for the suggestions, Atif! In later videos, I zoom in on the code, so it is much more visible. In regards to the notebooks, download access to code files and notebooks is available as a perk for the deeplizard hivemind. Check out the details regarding deeplizard perks and rewards at: deeplizard.com/hivemind If you choose to join, you will gain access to download the code at the link below: www.patreon.com/posts/27743395
@amritakaul3300
@amritakaul3300 2 жыл бұрын
Kindly provide the link for the basic terminology playlist. Thank you for your teaching... it's really helpful.
@deeplizard
@deeplizard 2 жыл бұрын
Glad to hear that Amrita! The course that covers the terminology is the Deep Learning Fundamentals course. We have two editions of this course. Here is the classic edition: deeplizard.com/learn/playlist/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU Here is the premium edition: deeplizard.com/course/dlcpailzrd
@lotfullahandishmand753
@lotfullahandishmand753 3 жыл бұрын
Whenever I put any value other than 1 in the output layer, it shows the error below: Shapes (10, 1) and (10, 2) are incompatible. And if I put output = 1, it runs, but accuracy is always 0.5 and loss = 0. I tried a lot but couldn't figure it out; I would be so happy if you could tell me what the problem is.
@knoxvoxx
@knoxvoxx 6 жыл бұрын
Why have you used 2 output units in the output layer? I think we have to use only 1 output unit, as the labels are only 0 or 1. And in our model we have 2 hidden layers with 16 and 32 units. Am I correct?
@deeplizard
@deeplizard 6 жыл бұрын
Hey mahajan, Yes, actually both options will work equally well. Since this data only has two classes, then we could configure our model to only have one output unit, and with that, we would need to use binary cross entropy as our loss function. As you observed in this video, I'm using two output units along with categorical cross entropy loss. These two options essentially achieve the same result. With binary, however, the last layer would need to use sigmoid, rather than softmax, as its activation function. Also, yes this model has two hidden layers with the first layer having 16 units and the second layer having 32 units. Hope this clears everything up!
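For reference, a minimal sketch of the two equivalent setups described in this reply (my own illustration rather than code shown on screen; the layer sizes match the video, the compile settings are assumptions):

from keras.models import Sequential
from keras.layers import Dense

# Option 1: two output units, softmax activation, categorical cross entropy
# (labels would need to be one-hot encoded, e.g. [1, 0] or [0, 1])
model_two_units = Sequential([
    Dense(16, input_shape=(1,), activation='relu'),
    Dense(32, activation='relu'),
    Dense(2, activation='softmax')
])
model_two_units.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Option 2: one output unit, sigmoid activation, binary cross entropy
# (labels stay as plain 0s and 1s)
model_one_unit = Sequential([
    Dense(16, input_shape=(1,), activation='relu'),
    Dense(32, activation='relu'),
    Dense(1, activation='sigmoid')
])
model_one_unit.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

Both models describe the same classifier; only the encoding of the output differs.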
@knoxvoxx
@knoxvoxx 6 жыл бұрын
What about y? As you have used 2 output units, the shape of the labels must be of the form (n_samples, 2), but in our case the shape was (n_samples,). It still worked fine, but why? And when we use the model.predict function, what will it output, i.e., what will the shape be?
@deeplizard
@deeplizard 6 жыл бұрын
The reason why this works is because binary cross entropy with one output node is the equivalent of categorical cross entropy with two output nodes. The equation for binary cross entropy loss is the exact equation for categorical cross entropy loss with one output node. For your question regarding predictions, we do the predictions on this data in this video: kzbin.info/www/bejne/aJeQf516ituNf6c The shape of each prediction is (2,), which represents the two possible outputs. I print out the predictions in that video so that you can visually observe the shape.
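To make that equivalence concrete, here is a small numpy check (my own illustration): for one sample with true label 1, binary cross entropy on a single sigmoid output p equals categorical cross entropy on the two-node output [1 - p, p].

import numpy as np

p = 0.8  # predicted probability of class 1 from a single output node
y = 1    # true label

# binary cross entropy with one output node
bce = -(y * np.log(p) + (1 - y) * np.log(1 - p))

# categorical cross entropy with two output nodes [P(class 0), P(class 1)]
probs = np.array([1 - p, p])
one_hot = np.array([0, 1])  # one-hot encoding of label 1
cce = -np.sum(one_hot * np.log(probs))

print(bce, cce)  # both print 0.2231..., so the two losses agree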
@knoxvoxx
@knoxvoxx 6 жыл бұрын
Thank you :) :)
@kamogahsn
@kamogahsn 4 жыл бұрын
When you say the data is one-dimensional on line 2, do you mean that the input values form a one-by-one matrix? I am new to this; please clarify.
@josemojica7570
@josemojica7570 6 жыл бұрын
Very good video. I have a doubt: if the last neuron takes 2 values (1 and 0), a value that could be given by a single output, shouldn't it then be Dense(1, activation='softmax') instead of Dense(2, activation='softmax')?
@deeplizard
@deeplizard 6 жыл бұрын
Hey Joseph - Yes, actually both options will work equally well. Since this data only has two classes, then we could configure our model to only have one output unit, and with that, we would need to use binary cross entropy as our loss function. As you observed in this video, I'm using two output units along with categorical cross entropy loss. These two options essentially achieve the same result. With binary, however, the last layer would need to use sigmoid, rather than softmax, as its activation function.
@josemojica7570
@josemojica7570 6 жыл бұрын
Thank you very much for the response; you have definitely done a marvelous job. I would like to ask a question as well: for an array of 8 columns and 700 rows, is the correct way to configure the Dense input Dense(16, input_shape=(8,), activation='relu') or Dense(16, input_shape=(8,700), activation='relu')? Thank you very much in advance.
@piyushkandwal
@piyushkandwal 4 жыл бұрын
I was reading about the Sequential model but can't find how it works anywhere. How does the Sequential model work behind the scenes? Can you make a video regarding this or provide a link? Thanks.
@rajuthapa9005
@rajuthapa9005 6 жыл бұрын
wow great tutorial. very practical... nice happy happy... :D
4 жыл бұрын
Why do I get an error with the model.summary() method?
@thedrei24
@thedrei24 5 жыл бұрын
ya, dope video!! can you please enlarge your jupyter notebook on the entire screen? if I am coding in parallel on a split screen, I can no longer see your code. thanks
@deeplizard
@deeplizard 5 жыл бұрын
Thanks, Andrei! In later videos, the text is larger and I zoom in on the code :)
@thedrei24
@thedrei24 5 жыл бұрын
@@deeplizard amazing, thanks
@nmana9759
@nmana9759 4 жыл бұрын
I don't understand. Is there a reason why you put 16 as the input neurons??
@doncollins6795
@doncollins6795 4 жыл бұрын
Hey, I think you should update this video. Categorical_crossentropy is a loss function, not a metric.
@deeplizard
@deeplizard 4 жыл бұрын
Hey Don - I don't think I talk about crossentropy in this episode. I do in the next one, but I talk about it as a loss function, not as metrics. kzbin.info/www/bejne/e3nFkqxsnamNfaM
@krishnaik06
@krishnaik06 6 жыл бұрын
do you have videos using Tensorflow as backend ?
@deeplizard
@deeplizard 6 жыл бұрын
Hey Krish - No, but we may consider adding some in the future.
@lucasl1047
@lucasl1047 6 жыл бұрын
Does changing the backend change a lot of the code you used? Thanks for the awesome material.
@deeplizard
@deeplizard 6 жыл бұрын
Hey Lucas - I believe everything covered in this playlist should also be compatible the way it is with Tensorflow. The only difference that comes to mind for the purposes of this playlist is that everywhere we give image dimensions in the form of channels x width x height in the videos will instead be in the order of width x height x channels for Tensorflow. Hope this info helps!
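As a rough illustration of that ordering difference (my own example shapes, not from the video): the same 3-channel 224 x 224 image is described differently under each backend's default convention, and Keras can report which convention is active.

# shape of a single 3-channel 224 x 224 image under each ordering
theano_style_shape = (3, 224, 224)      # channels x width x height, as described above
tensorflow_style_shape = (224, 224, 3)  # width x height x channels

from keras import backend as K
print(K.image_data_format())  # prints 'channels_first' or 'channels_last'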
@lucasl1047
@lucasl1047 6 жыл бұрын
deeplizard thank you a lot!
@wonderfulvamsi
@wonderfulvamsi 5 жыл бұрын
How do we determine the number of nodes we need in a layer?
@zettn7es742
@zettn7es742 6 жыл бұрын
The video is really well made, thank you!
@wilsonsam3209
@wilsonsam3209 5 жыл бұрын
Why 16 neurons in the input layer? Shouldn't that be the number of parameters, such as age?
@MyMpc1
@MyMpc1 4 жыл бұрын
I'm really confused with determining the number of neurons in the input layer (here you specified 16). Googling around this isn't really explained, other than the closest I can get to an explanation is that the number of neurons should equal the number of features, but in this example that you provide the number of features is just 1 (which is age in the example from the previous video), so obviously here we are not equating number of neurons with number of features.
@deeplizard
@deeplizard 4 жыл бұрын
The input layer isn’t explicitly declared here, as Keras creates the input layer implicitly given the input_shape passed to the first hidden layer. Given this, the model looks like this:
input layer that accepts data of shape (1,)
hidden Dense layer1 with 16 outputs
hidden Dense layer2 with 32 outputs
output layer with 2 outputs
Does this clarify?
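Written out in code, that is the model from this video; calling model.summary() (a sketch, but standard Keras) shows the three Dense layers that sit after the implicit input layer.

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(16, input_shape=(1,), activation='relu'),  # first hidden layer; input_shape=(1,) creates the input layer implicitly
    Dense(32, activation='relu'),                    # second hidden layer
    Dense(2, activation='softmax')                   # output layer
])

model.summary()  # lists each Dense layer, its output shape, and its parameter count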
@MyMpc1
@MyMpc1 4 жыл бұрын
@@deeplizard yes, very much, and thanks for replying. Could I just follow up with one remaining question - how did you arrive at 16 for the dense layer1 outputs? Is there a way of determining what this number of outputs per dense layer should be? Also thanks for the channel and this series in particular. There aren't many resources on it generally and I'm really grateful you've taken the time to do it ;-)
@deeplizard
@deeplizard 4 жыл бұрын
You're very welcome! Happy you're finding value in the content. In this case, 16 was arbitrarily chosen, but in general, the number of outputs in your layers is something you may have to experiment with.
@MyMpc1
@MyMpc1 4 жыл бұрын
@@deeplizard Thank you very much. Have a great day :-)
@mkmanic9468
@mkmanic9468 6 жыл бұрын
Please explain how to decide the number of neurons present in each layer. That is, in your example, why does the first layer have 16 neurons and the second one 32 neurons? And the second question: I have gone through some Keras documentation where I have seen that the input_shape argument takes a number of arguments, but generally it takes two arguments, (batch_size, input_dim), so what is (1,) in the above example?
@deeplizard
@deeplizard 6 жыл бұрын
Hey Sachin, The choice of 16 and 32 neurons here was pretty arbitrary. Generally speaking, the more complex your data, the more layers and more neurons in each layer you'll likely need. Also, generally, the number of neurons increases with each layer as the layers become deeper in the network. There is not a general rule of thumb that I'm aware of that you can follow for choosing how many neurons to include in each layer. It's more a mix of trial and error along with experience from what's worked in previous models. input_shape=(1,) means that the shape of each of my input samples is (1,). This is not the batch size. In the Keras documentation, it states that the input_shape is just a tuple of integers, and that it doesn't include the batch size. keras.io/layers/core/ Hope this helps!
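A small illustration of that last point (made-up numbers, my own sketch): if there are 1000 training samples and each sample is a single value, the full array has shape (1000, 1), but only the per-sample shape (1,) goes into input_shape.

import numpy as np

train_samples = np.random.randint(13, 100, size=(1000, 1))  # 1000 samples, one feature each

print(train_samples.shape)      # (1000, 1) -> 1000 is the number of samples, not part of input_shape
print(train_samples.shape[1:])  # (1,)      -> this is what input_shape=(1,) describes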
@TusharGupta33
@TusharGupta33 6 жыл бұрын
In addition to his question: can we have the size of the input layer be less than the size of the input vector? For example, if the input vector is [4,10], can we have the first layer with fewer than 10 neurons, or should it be equal to 10, or can it be more than 10?
@deeplizard
@deeplizard 6 жыл бұрын
Hey Tushar - With the Keras Sequential model, you do not explicitly add an input layer. Instead, the first layer that you see in the video is actually the first hidden layer. Keras implicitly creates the input layer behind the scenes based on the input_shape parameter that you pass to the first hidden layer. The video below on layers from my other deep learning playlist may be helpful here. Starting at 3:38 in that video, we discuss the point I made above. kzbin.info/www/bejne/fHyaaK2QrcZ1pas So given this explanation, for the first layer you specify, you can have as little or as many neurons as you'd like, regardless of your input shape because the layer you're working with actually isn't the input layer. Keras is handling the input layer for you without you explicitly defining it.
@TusharGupta33
@TusharGupta33 6 жыл бұрын
Hey, thanks, the video reference made things clearer now. If I may: I was checking out the weights for the model, and there are 4 weight matrices generated. Shouldn't there only be 3 weight matrices?
@deeplizard
@deeplizard 6 жыл бұрын
Yes, the weights are represented as the connections between the layers. So, in the model in this video, we have connections from the implicit input layer to the first hidden layer. Then the first hidden layer to the second hidden layer. Then the second hidden layer to the output layer. Three total. If you run model.layers[0].get_weights(), you can see the weight matrix corresponding to the input layer. Then If you sub 1 for 0, you can see it for the first hidden layer, and then sub 2 for the second hidden layer. Total, when you do this, you will see three matrices.
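A short sketch of that inspection (my own code, rebuilt around the same three-Dense-layer model from the video):

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(16, input_shape=(1,), activation='relu'),
    Dense(32, activation='relu'),
    Dense(2, activation='softmax')
])

for i, layer in enumerate(model.layers):
    weights, biases = layer.get_weights()  # each Dense layer holds one weight matrix and one bias vector
    print(i, weights.shape, biases.shape)

# expected output:
# 0 (1, 16) (16,)   implicit input layer -> first hidden layer
# 1 (16, 32) (32,)  first hidden layer -> second hidden layer
# 2 (32, 2) (2,)    second hidden layer -> output layer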
@ozne_2358
@ozne_2358 4 жыл бұрын
Looks good, not sure why the backend is Theano while we discussed the installation through Tensorflow two videos ago........
@deeplizard
@deeplizard 4 жыл бұрын
The Keras course was originally created using Theano as the backend. We have recently started revamping the course for tf.keras :) Even the lessons that have not received an updated video yet can still be followed with TensorFlow backend.
@ozne_2358
@ozne_2358 4 жыл бұрын
@@deeplizard Thanks, I just realized that the last two videos are indeed an update.
@exploringMyself998
@exploringMyself998 4 жыл бұрын
@@deeplizard Hope to see more revamped videos soon, and thank you for the awesome videos.
@_jiwi2674
@_jiwi2674 5 жыл бұрын
Hi deeplizard, thanks tons for making these tutorials. I'd just like to clarify the structure of the model: is the first Dense the input layer that receives data of shape (16, 1), second Dense the hidden (sole one in this case) layer that has shape (32, 1) and third Dense the output layer that has shape (2, 1) ?
@deeplizard
@deeplizard 5 жыл бұрын
Hey Logan - The input layer isn’t explicitly declared here, as Keras creates the input layer implicitly given the input_shape passed to the first hidden layer. Given this, the model looks like this:
input layer that accepts data of shape (1,)
hidden Dense layer1 with 16 outputs
hidden Dense layer2 with 32 outputs
output layer with 2 outputs
@rocktimjyotidas4543
@rocktimjyotidas4543 4 жыл бұрын
I got an error saying no module named keras.
@leoeduardo3016
@leoeduardo3016 5 жыл бұрын
Thanks a lot for the video. I learn and apply a lot. I follow your methodology of adding all the layers in a list. I need the leaky ReLU activation option, and everything I found follows the model.add() methodology. Does anyone know how to apply leaky ReLU following this video's methodology? I really appreciate your answer. Thanks.
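One possible way to do this (a sketch built on standard Keras, not something covered in this video): LeakyReLU exists as a standalone layer, so in the list style you can omit the Dense layer's activation argument and place a LeakyReLU layer right after it.

from keras.models import Sequential
from keras.layers import Dense, LeakyReLU

model = Sequential([
    Dense(16, input_shape=(1,)),  # no activation set here
    LeakyReLU(alpha=0.1),         # leaky ReLU applied to the previous Dense layer's output
    Dense(32),
    LeakyReLU(alpha=0.1),
    Dense(2, activation='softmax')
])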
@Fact-ing
@Fact-ing 5 жыл бұрын
Very informative and excellently explained video. Can you please explain the neurons (16, 32, 2) that you put in the layers?
@gurogiri9909
@gurogiri9909 3 жыл бұрын
If the input shape is 2D, how do you specify it in the first layer?
@dylanwang6818
@dylanwang6818 4 жыл бұрын
How did you select the neuron numbers? Thanks!
@miguelpetrarca5540
@miguelpetrarca5540 5 жыл бұрын
For binary classification, couldn't we use a single output node with a sigmoid activation function?
@deeplizard
@deeplizard 5 жыл бұрын
Hey miguel - Yes, actually both options will work equally well. Since this data only has two classes, then we could configure our model to only have one output node, and with that, we would need to use binary cross entropy as our loss function. As you observed in this video, I'm using two output units along with categorical cross entropy loss. These two options essentially achieve the exact same result. With binary, however, the last layer would need to use sigmoid rather than softmax as its activation function, as you noticed.
@hamdiabed9673
@hamdiabed9673 6 жыл бұрын
Hello, I am using Python 3.6, tf 1.7.0, Keras 2.6.1, and got this message regarding the softmax activation function: TypeError: softmax() got an unexpected keyword argument 'axis'. Any recommendations, please?
@deeplizard
@deeplizard 6 жыл бұрын
Hey Hamdi - This error appears to be due to certain combinations of TensorFlow and Keras versions. More details and potential solutions are given here: github.com/keras-team/keras/issues/9621
@miguelpetrarca5540
@miguelpetrarca5540 5 жыл бұрын
I am a little bit confused as to why the first layer you defined, which I assume is the input layer, has an activation function. I thought an input layer is just all the features multiplied by a weight and passed to the first hidden layer which is where an activation function is used on the weighted sums of the input layer
@deeplizard
@deeplizard 5 жыл бұрын
The first Dense layer shown is the first hidden layer. The input shape parameter passed to this layer implicitly creates an input layer before this Dense layer. Does this help clarify?
@miguelpetrarca5540
@miguelpetrarca5540 5 жыл бұрын
@@deeplizard yes thank you!
@boxu4948
@boxu4948 5 жыл бұрын
Thank you for your video. I use TensorFlow as the backend, and in the model construction step (model = Sequential...), an error occurred: module 'tensorflow' has no attribute 'get_default_graph'. Does anyone know how to fix it? Or should we use different code for a different backend? Thank you.
@boxu4948
@boxu4948 5 жыл бұрын
I fixed it. I downgraded my TensorFlow from 2.0 to 1.8, and it works.
@BhanuTejapolukonda
@BhanuTejapolukonda 6 жыл бұрын
Is it possible to optimise hyperparameters in Keras, given predefined values for the number of layers and the neurons present in the network? If there is a video explaining this, please link it here. Thanks - from a complete beginner.
@deeplizard
@deeplizard 6 жыл бұрын
Hey Bhanu - In general, there is no "standard" or predefined configuration that can be applied to artificial neural networks. There is something called "fine-tuning" where you use a predefined model that was built for a task similar to yours and then tune it accordingly depending on your data and your training results. There are fine-tuning videos for Keras later in this playlist: kzbin.info/aero/PLZbbT5o_s2xrwRnXk_yCPtnqqo4_u2YGL Additionally, since you're just beginning, it would probably be of interest to you to check out the playlist below as well. It covers fundamental concepts and vocabulary in the field of machine learning / deep learning. kzbin.info/aero/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU Hope this helps!
@jatinderarora2261
@jatinderarora2261 6 жыл бұрын
Thanks for sharing this video. I must say: excellent. Can you please help me with the following?
1. On what basis do we decide the number of hidden layers? In your example, you have taken 1 hidden layer, to my understanding.
2. On what basis do we decide the number of neurons in the hidden layer? In your example, you have taken 32 neurons in the hidden layer. Is there any specific logic to decide this?
@deeplizard
@deeplizard 6 жыл бұрын
Hey Jatinder - Generally speaking, the more complex your data, the more layers you can expect to have in your model, and the more neurons in each layer you'll likely need. Also, you'll typically see that the number of neurons increases within each layer as the layers become deeper in the network. There is not a general rule of thumb that I'm aware of to follow for choosing how many layers or how many neurons to include in each layer. The process is more about mixing trial and error along with experience from what's worked in previous models.
@jatinderarora2261
@jatinderarora2261 6 жыл бұрын
deeplizard Thanks a lot. This is helpful. Appreciate your quick response.
@avishekhbt
@avishekhbt 6 жыл бұрын
Thank you for your wonderful videos. I am trying to create the same model on my Mac, but I get an error: "Optimization failure due to: constant_folding" and "node: Elemwise{Cast{float32}}(TensorConstant{(1, 1) of ..5257860046})". Any idea what these errors are? Google searches haven't been particularly useful.
@deeplizard
@deeplizard 6 жыл бұрын
You're welcome Avishekh! I've not experienced that error before. It appears this error is from Theano and may be thrown for a number of reasons. I saw several potential solutions available in stackoverflow and github when I searched for it. Note, this Keras series can also be followed if you use TensorFlow rather than Theano as the Keras backend. So, you could change your Keras backend back to TensorFlow if the only reason you changed it to Theano was to follow this series. If you stick with Theano and end up resolving the issue, please post your solution here. I'd be interested in hearing what the cause/solution was.
@avishekhbt
@avishekhbt 6 жыл бұрын
Thank you for responding. I managed to resolve this issue. I did go through the Stack Overflow and GitHub solutions, but those did not work for me. What worked for me was creating a new environment and reinstalling all the packages. I could complete training as well. Now, strangely, I am facing an issue, probably with matplotlib. I will leave a comment on the other video. As a side note, these tutorials are probably the best on the internet right now. I think you should start your lessons on Udemy as well. :)
@deeplizard
@deeplizard 6 жыл бұрын
Thank you! Happy to hear that you're enjoying the videos. Also, I'm glad you were able to get past the issue. Thanks for posting your solution. We have some reservations about posting to Udemy. Check out this video that explains some of the issues: kzbin.info/www/bejne/jmjNl2pmmbOli9E
@anuragshrivastava7855
@anuragshrivastava7855 3 жыл бұрын
Please provide the machine learning playlist separately.
@omarrafique7039
@omarrafique7039 6 жыл бұрын
Great tutorials
@deeplizard
@deeplizard 6 жыл бұрын
Thanks, Omar!
@hiroshiperera7107
@hiroshiperera7107 6 жыл бұрын
Great tutorial :) Thanks for sharing. Can we know how the number of parameters to be learned is calculated? In the first layer it is 2*16=32... but how are the parameters of the other layers calculated?
@deeplizard
@deeplizard 6 жыл бұрын
Hey Hiroshi - This is how the parameters are calculated in each layer.
First layer parameters = 16 weights + 16 biases = 32 parameters
Second layer parameters = 32*16 weights + 32 biases = 544 parameters
Third layer parameters = 32*2 weights + 2 biases = 66 parameters
Total parameters = first + second + third layer parameters = 32 + 544 + 66 = 642 total parameters
Let me know if this helps.
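The same arithmetic written out as a quick check (my own sketch; the layer sizes are the ones used in the video):

# Dense layer parameters = (inputs to the layer * units) weights + units biases
layer1 = 1 * 16 + 16   # input of shape (1,) feeding 16 units -> 32
layer2 = 16 * 32 + 32  # 16 units feeding 32 units            -> 544
layer3 = 32 * 2 + 2    # 32 units feeding 2 output units      -> 66

print(layer1, layer2, layer3, layer1 + layer2 + layer3)  # 32 544 66 642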
@hiroshiperera7107
@hiroshiperera7107 6 жыл бұрын
Oh understood... :) Thank you......
@deeplizard
@deeplizard 6 жыл бұрын
Hey Hiroshi - I just released a video on this topic and thought to circle back around to you to share it since it is relevant to your original question here. kzbin.info/www/bejne/ppiWmX2miNSjfrM
@abdelrahmanosama375
@abdelrahmanosama375 4 жыл бұрын
I need some help with the “cannot import name 'sequential'” issue; I can’t proceed with my work. Thank you for your efforts.
@tekiero
@tekiero 4 жыл бұрын
install keras package in Anaconda Navigator Environments
@mdyeasinarafath4450
@mdyeasinarafath4450 5 жыл бұрын
Great explanation as usual! Ma'am, would you kindly explain this piece of confusion of mine?
model = Sequential()
model.add(Dense(32, input_shape=(784,)))
Here, 32 is the number of nodes in the first layer, right? But what is the input_shape here?
@deeplizard
@deeplizard 5 жыл бұрын
Hey Md.Yasin - Yes 32 is the number of nodes in this layer. This layer is actually the first hidden layer in the model, not the input layer. The input layer is implicitly created based on the input_shape parameter. This parameter tells Keras the shape of the data being passed to the model.
@mdyeasinarafath4450
@mdyeasinarafath4450 5 жыл бұрын
@@deeplizard I got it, Ma'am. Thanks a lot! But two confusions are still haunting me real bad!
i. Can we just use a Flatten layer as an input layer, without specifying the input layer and shape? Like this code below:
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
I used this model to train on the MNIST handwritten digits dataset and also predicted with it. So, here there is no input layer? Is the Flatten layer the first hidden layer?
ii. I have also seen another approach, which is specifying an input layer first, like this one:
model = Sequential()
model.add(Conv2D(20, kernel_size=(3, 3), activation='relu', input_shape=(image_row, image_col, 1)))
model.add(Conv2D(20, kernel_size=(3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(num_of_classes, activation='softmax'))
In the second conv layer, why didn't we specify the number of nodes?
@upendra8050
@upendra8050 5 жыл бұрын
@@deeplizard Thanks for the explanation. I have been struggling to understand this for a long time. One follow up question, how do we select the number of nodes in a layer? Is it random to start with and then optimize based on the model performance?
@James-un6kx
@James-un6kx 6 жыл бұрын
hey, I just have a quick question about the number of neurons in the first layer. Why is that number 16? Shouldn't it be 1 because we have one input number that represents the age?
@deeplizard
@deeplizard 6 жыл бұрын
Hey James - The first Dense layer with 16 nodes is actually the first _hidden_ layer. It is not the input layer. The input layer is implicitly created by the input_shape parameter passed to the first hidden layer. Your intuition is correct about the shape of the input. You can see this where we specified input_shape = (1, ) in the first Dense layer. Let me know if this helps clarify!
@James-un6kx
@James-un6kx 6 жыл бұрын
deeplizard yup got it! thank you
@amuhlongwane5714
@amuhlongwane5714 5 жыл бұрын
Hi Deeplizard, Good work indeed. Keep it up. Did you manage to create a course covering general machine learning content? Thanks.
@deeplizard
@deeplizard 5 жыл бұрын
Thank you, Elvis. Yes, I created a Deep Learning Fundamentals course. The first link below is to the course on deeplizard.com. It has all the KZbin videos as well as corresponding written blogs. The second link is to the KZbin playlist alone. deeplizard.com/learn/playlist/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU kzbin.info/aero/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU
@BanuMukhtarFarms
@BanuMukhtarFarms 6 жыл бұрын
Great tutorials! Will using tensorflow as the backend affect the syntax of Keras?
@deeplizard
@deeplizard 6 жыл бұрын
Thanks, abdullah! I believe everything covered in this playlist should be compatible the way it is with Tensorflow. The only difference that comes to mind is that everywhere we give image dimensions in the form of *channels x width x height* in the videos will instead be in the order of *width x height x channels* for Tensorflow.
@BanuMukhtarFarms
@BanuMukhtarFarms 6 жыл бұрын
Thank You :)
@bardhprenkaj9335
@bardhprenkaj9335 5 жыл бұрын
Wouldn't you just use a sigmoid function for a binary classification task? I mean you have zeros or ones in your dataset. Why waste computation time for the softmax when you could use a sigmoid?
@deeplizard
@deeplizard 5 жыл бұрын
For binary classification, you could indeed use sigmoid on the last layer with binary crossentropy as opposed to softmax with categorical crossentropy.
@nazmulshuvo03
@nazmulshuvo03 6 жыл бұрын
how do you set the output nodes, like 16 and 32? thank you 😊
@deeplizard
@deeplizard 6 жыл бұрын
Hey nazmul - Are you asking about how to decide on the number of nodes in each layer? If so, note that the choice of 16 and 32 neurons here was pretty arbitrary. Generally speaking, the more complex your data, the more layers and more neurons in each layer you'll likely need. Also, generally, the number of neurons increases with each layer as the layers become deeper in the network. There is not a general rule of thumb that I'm aware of that you can follow for choosing how many neurons to include in each layer. It's more a mix of trial and error along with experience from what's worked in previous models.
@nazmulshuvo03
@nazmulshuvo03 6 жыл бұрын
yes, that was my question. thank you.
@meetayan15
@meetayan15 6 жыл бұрын
Firstly, I would like to thank you for this wonderful lecture. I have a doubt about model.summary. Could you be explicit about what it describes?
@deeplizard
@deeplizard 6 жыл бұрын
You're welcome, Ayan! model.summary() prints a summary of the model showing the model's layers, the output shape for each layer, and the total number of parameters in each layer.
@zackyvt1319
@zackyvt1319 5 жыл бұрын
@@deeplizard I don't get what the number of parameters is referring to.
@deeplizard
@deeplizard 5 жыл бұрын
​ Zacky Talib I'd first recommend checking out our two videos/blogs below that talk about what learnable parameters are and how they're calculated for both general artificial neural networks and convolutional neural networks.. deeplizard.com/learn/video/pg3hJpSopHQ deeplizard.com/learn/video/gmBfb6LNnZs Then check out the corresponding Keras videos that show how to interpret these parameters in Keras: kzbin.info/www/bejne/maOkc2mupJunb8k kzbin.info/www/bejne/bpWQaoakfNloe5I
@yamanyaseen9922
@yamanyaseen9922 5 жыл бұрын
Can a VDSR (Very Deep Super Resolution) be implemented with the help of keras? If yes then how? Please help deep lizard 🙌
@deeplizard
@deeplizard 5 жыл бұрын
Hey Yaman - I believe so, but I have not done it myself. Googling "VDSR Keras," I see several public github projects where it appears this has been done.
@yamanyaseen9922
@yamanyaseen9922 5 жыл бұрын
@@deeplizard My project is based on VDSR, and I've seen code in which the CNTK library is used, but for learning DL I watched your videos, and hence I am good with Keras. What is your opinion of CNTK? Is it like Keras, or more complicated and different?
@deeplizard
@deeplizard 5 жыл бұрын
Keras is a higher-level library than CNTK. Keras runs on top of low level libraries that handle lower level operations. In fact, CNTK is one of the three lower level libraries that Keras can use as its backend: CNTK, TensorFlow, or Theano. More on this here: keras.io/backend/.
@biyootifuldev2012
@biyootifuldev2012 4 жыл бұрын
I thought the rectified linear unit abbreviation was pronounced "ree-lu".
@Ramiphylo
@Ramiphylo 6 жыл бұрын
How can we get the code?
@deeplizard
@deeplizard 6 жыл бұрын
Hey Alaeddine - The code files are available as a perk for the deeplizard hivemind at the following link: www.patreon.com/posts/code-for-keras-1-19266488 Check out the details regarding deeplizard perks and rewards at: deeplizard.com/hivemind
@ifeanyindukwe3086
@ifeanyindukwe3086 6 жыл бұрын
This really helps, thank you for this great video :) However, I have encountered a challenge. I created a 2-by-8 matrix of training data:
train_samples_temp = [[13, 64, 33, 14, 80, 72, 34, 50], [44, 45, 44, 54, 30, 12, 90, 45]]
and train_labels_temp = [0, 1, 0, 0, 1, 1, 0, 1]
# To convert it to a format keras is expecting
train_labels = np.array(train_samples_temp)
train_samples = np.array(train_labels_temp)
# Tried this but it seems not to work correctly:
model = Sequential([
    Dense(16, input_shape=(2,8), activation='relu'),
    Dense(32, activation='relu'),
    Dense(2, activation='softmax')
])
How do I define my model, especially my input_shape?
@deeplizard
@deeplizard 6 жыл бұрын
Hey Ifeanyi - What is considered to be one individual sample within your train_samples_temp variable? It looks like you have two samples, but 8 labels. For example, it looks like one sample is [13, 64, 33, 14, 80, 72, 34, 50] and another sample is [44, 45, 44, 54, 30, 12, 90, 45], so two samples total. But you have labels for 8 samples.
@ifeanyindukwe3086
@ifeanyindukwe3086 6 жыл бұрын
deeplizard, this is what I meant (multiple features with one label):
samples    label
13  44     0
64  45     1
33  44     0
14  54     0
80  30     1
72  12     1
34  90     0
50  45     1
@deeplizard
@deeplizard 6 жыл бұрын
Ok, I see. If each sample has two features, you need to organize your samples in this manner: [[13,44], [64,45], [33,44], ....]
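Continuing that reply as a sketch (my own code; input_shape=(2,) follows from the earlier explanation that input_shape is the shape of one sample):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# each row is one two-feature sample; labels has one entry per sample
train_samples = np.array([[13, 44], [64, 45], [33, 44], [14, 54],
                          [80, 30], [72, 12], [34, 90], [50, 45]])
train_labels = np.array([0, 1, 0, 0, 1, 1, 0, 1])

model = Sequential([
    Dense(16, input_shape=(2,), activation='relu'),  # (2,) = shape of a single two-feature sample
    Dense(32, activation='relu'),
    Dense(2, activation='softmax')
])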
@ifeanyindukwe3086
@ifeanyindukwe3086 6 жыл бұрын
Okay, thanks deeplizard.
@BraddCarey
@BraddCarey 6 жыл бұрын
Get a microphone that attaches to a headset. It'll stop your voice from fading in and out throughout the video.
@deeplizard
@deeplizard 6 жыл бұрын
Hey Bradd - Thanks for the suggestion! I've updated my equipment since this video. In my later-released videos, the audio quality is improved.
@fazlfazl2346
@fazlfazl2346 6 жыл бұрын
Nice vid. Please make your code available online, maybe on Git. Thanks.
@deeplizard
@deeplizard 6 жыл бұрын
Thanks, Fazl! Download access to code files and notebooks are available as a perk for the deeplizard hivemind. Check out the details regarding deeplizard perks and rewards at: deeplizard.com/hivemind If you choose to join, you will gain access to the notebook from this video at the link below: www.patreon.com/posts/code-for-keras-1-19266488
@HemangJoshi
@HemangJoshi 6 жыл бұрын
Please increase the text size. Can't read it. Thanks.
@deeplizard
@deeplizard 6 жыл бұрын
Hey hj - Thanks for the feedback. I started zooming in on the code in later videos of the playlist. Check this one out for example, and let me know if you think it's better: kzbin.info/www/bejne/sKPEnayfZ6unaJI
@HemangJoshi
@HemangJoshi 6 жыл бұрын
I watched that video... And it was amazing... What is your name ??
@HemangJoshi
@HemangJoshi 6 жыл бұрын
Can you please share the source code or Python notebook?
@deeplizard
@deeplizard 6 жыл бұрын
Hey hj - Code files are available as a perk for the deeplizard hivemind. Check out the details regarding deeplizard perks and rewards at: deeplizard.com/hivemind if you choose to join, you will gain access to the code from this video at the link below: www.patreon.com/posts/code-for-keras-1-19266488
@samerayoub5846
@samerayoub5846 6 жыл бұрын
you should have maximized the screen :-(
@deeplizard
@deeplizard 6 жыл бұрын
Hey Samer - Thanks for the feedback. I started zooming in on the code in later videos of the playlist. Check this one out for example, and let me know if you think it's better: kzbin.info/www/bejne/sKPEnayfZ6unaJI
@HattoriHanzo62
@HattoriHanzo62 5 жыл бұрын
Please, next time you make a video, don't waste space leaving two huge empty margins to the left and right of the browser window. They give no information, and the characters are so small that it's hard to read them.