9- How to implement a (simple) neural network with TensorFlow 2

23,155 views

Valerio Velardo - The Sound of AI

1 day ago

Comments: 62
@Magistrado1914
@Magistrado1914 4 years ago
Excellent course. 14/11/2020
Install scikit-learn:
1) pip install -U scikit-learn
Install TensorFlow:
2) pip install --upgrade tensorflow
@natnaelsisay1424
@natnaelsisay1424 3 years ago
I trained my first neural network, and I understood some of the concepts that I couldn't wrap my head around before.
@airesearch0844
@airesearch0844 4 years ago
Nice transition from the custom Multi-Layer Perceptron (MLP) of your previous video to TensorFlow/Keras. The sample data (addition of two numbers) is as linear as linear can be, but the activation functions are highly non-linear. With the sigmoid activation function, we get good results near the point where the sum of the two numbers is 0.5; the farther away from that value, the harder it is to optimize the error. Moreover, this network works strictly for sums between 0 and 1, and will never converge if the sum is > 1. So this goes to prove that the choice of activation function depends on the problem domain and on what is expected in the output layer. If we switch the activation function to 'linear' instead of 'sigmoid' in both Dense layers, the results are spectacular. I am only a newbie, so please excuse me if I overstated something. Cheers.
@gvcallen
@gvcallen 1 year ago
Hi, you're totally right. Switching it to linear made it accurate to 9 decimal places or so. Obviously that was not the point of the video, but it's cool to know that the activation functions can play such a role!
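A minimal sketch of that experiment, assuming the tutorial's toy sum dataset and the same 2 -> 5 -> 1 architecture, with 'linear' swapped in for 'sigmoid':

import numpy as np
import tensorflow as tf
from random import random

# toy dataset: each input in [0, 0.5), target is the sum of the two inputs
x = np.array([[random() / 2, random() / 2] for _ in range(5000)])
y = x.sum(axis=1, keepdims=True)

# same 2 -> 5 -> 1 network, but with linear activations throughout
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_dim=2, activation="linear"),
    tf.keras.layers.Dense(1, activation="linear"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(x, y, epochs=100, verbose=0)

print(model.predict(np.array([[0.1, 0.2]])))  # should land very close to 0.3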
@mukeshverma8051
@mukeshverma8051 4 years ago
Awesome playlist, man. I'm learning a lot more here than in my lecture hall. Keep up the great work
@ValerioVelardoTheSoundofAI
@ValerioVelardoTheSoundofAI 4 years ago
Thank you!
@iliasp4275
@iliasp4275 2 years ago
I came here because I have a snoring classification problem and wanted to look for NNs that deal with audio. I am really sad that I discovered this playlist only after understanding NNs. This is so good for beginners, building things from scratch rather than installing TF in the first tutorial. Thank you for these videos
@ValerioVelardoTheSoundofAI
@ValerioVelardoTheSoundofAI 2 years ago
You're welcome!
@Maceta444
@Maceta444 2 years ago
This is an amazing series so far
@DarrellGauci
@DarrellGauci 4 years ago
Hi, first of all thank you so much for this course; it is incredibly insightful and you deserve so many more views. I'm running into an issue where the stated samples (when using a dataset of 5000 samples) show up as 110 instead of 3500, and 47 instead of 1500. This is a sample output:
Epoch 99/100
110/110 [==============================] - 0s 1ms/step - loss: 4.2511e-04
Epoch 100/100
110/110 [==============================] - 0s 1ms/step - loss: 4.1883e-04
Evaluation on the test set:
47/47 - 0s - loss: 4.3456e-04
Predictions:
0.1 + 0.2 = 0.29694369435310364
0.2 + 0.2 = 0.39520588517189026
The results are fine, but I'm worried that only part of the dataset is being used or something of the sort. I've been searching all over for this issue but I can't seem to find anyone who can relate. I am on Python 3.8.3 with TensorFlow 2.2.0.
@ValerioVelardoTheSoundofAI
@ValerioVelardoTheSoundofAI 4 years ago
Thank you Darrell! It's difficult to say what the problem is without seeing the code. It seems you're only passing 110 training samples at the moment. Can you double check the length of the array you pass for training?
@DarrellGauci
@DarrellGauci 4 years ago
@@ValerioVelardoTheSoundofAI Thank you for replying, Valerio! This is actually done through the code you shared on GitHub, so the array length is the same, which is what makes this rather strange. I ran the code on Google Colab to try to isolate the issue from my system, but even there I get the apparent 110 training samples in the output. I tried to eliminate the scikit-learn library for the data splitting, as I thought the issue was its update from last month, but to no avail. Today I also went through the genre classifier algorithm (tutorial 13) with the same exact issue, this time showing the following:
Epoch 49/50
3/3 [==============================] - 0s 11ms/step - loss: 1.3357e-04 - accuracy: 1.0000 - val_loss: 10.5951 - val_accuracy: 0.3333
Epoch 50/50
3/3 [==============================] - 0s 10ms/step - loss: 1.3056e-04 - accuracy: 1.0000 - val_loss: 10.5873 - val_accuracy: 0.3333
Would it be possible to confirm whether this is a common issue or I am missing something? I suspect this might be a TensorFlow 2.2.0 issue, but I cannot confirm it at the moment. Thanks again Valerio.
@ValerioVelardoTheSoundofAI
@ValerioVelardoTheSoundofAI 4 years ago
@@DarrellGauci this is the first time I've heard of this issue. It may be related to TF 2.2, but for the time being I'm still on TF 2.1, so I can't really say.
@akshaymenon3856
@akshaymenon3856 4 years ago
I am facing the exact same problem. 110/110 and 47/47. Wondering what the issue here is. I am using colab with tf 2.2.0 @Darrell Gauci
@BuKa100s
@BuKa100s 2 years ago
Same here..
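A probable explanation, assuming this is the TF 2.2 change to the Keras progress bar: from TF 2.2 onwards, the numbers shown are batches (steps) per epoch, not samples, so with the default batch_size of 32 the full dataset is still being used:

import math

train_samples, test_samples, batch_size = 3500, 1500, 32
print(math.ceil(train_samples / batch_size))  # 110 training steps per epoch
print(math.ceil(test_samples / batch_size))   # 47 evaluation steps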
@javadmahdavi1151
@javadmahdavi1151 3 years ago
Dude, that was an amazing course
@Moonwalkerrabhi
@Moonwalkerrabhi 3 years ago
Awesome explanation.
@danielcapel470
@danielcapel470 2 years ago
Hello Valerio, thank you for this video. Could you please tell me how to choose the number of hidden layers with respect to the application? For instance, if the number of inputs is 13 and the number of outputs is 64?
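There is no general rule: the number and width of hidden layers are hyperparameters to tune empirically. A purely hypothetical sketch of a 13-input, 64-output Keras model (the hidden width of 32 is an arbitrary starting point, not something from the video):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, input_dim=13, activation="relu"),  # hidden width: tune this
    tf.keras.layers.Dense(64, activation="softmax"),             # e.g. a 64-way classifier
])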
@surafelm.w4058
@surafelm.w4058 4 years ago
In sequential model building with TensorFlow/Keras, a setup like (4, [4], 1) uses 4 inputs (x1, x2, c1 and c2). How can the constant inputs be incorporated with the variable inputs? In this case I have x1 and x2 as input variables (precipitation and temperature time series), and c1 and c2 as input constants (constant gridded datasets). I appreciate your time and the helpful learning material you share. Kind regards
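One common approach (a sketch under assumptions, not something covered in the video) is to broadcast the constants into extra feature columns so that every sample carries them:

import numpy as np

x_var = np.random.rand(100, 2)  # hypothetical: (n_samples, 2) time-series inputs x1, x2
c1, c2 = 0.7, 1.3               # hypothetical constants

consts = np.tile([c1, c2], (len(x_var), 1))  # repeat the constants for each sample
x_full = np.hstack([x_var, consts])          # shape (n_samples, 4): x1, x2, c1, c2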
@kotnikrishnachaitanya
@kotnikrishnachaitanya 4 years ago
DLL load failed: The specified module could not be found. I am getting this error while importing tensorflow
@ValerioVelardoTheSoundofAI
@ValerioVelardoTheSoundofAI 4 years ago
Have you tried installing TF using pip?
@airesearch0844
@airesearch0844 4 years ago
If you don't have the right CUDA-capable GPU, you will get the DLL warning, but TF still works with the CPU. If you have a modern GPU, then install the correct GPU drivers.
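A quick way to check what TensorFlow actually sees (assuming TF 2.1 or later); an empty list simply means it will fall back to the CPU:

import tensorflow as tf

print(tf.__version__)
print(tf.config.list_physical_devices("GPU"))  # [] -> CPU-only, as described above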
@vignesh8616
@vignesh8616 4 years ago
Better try it in Jupyter Notebook
@caioalmeida3213
@caioalmeida3213 2 years ago
Thanks for the video!
@terry083000
@terry083000 3 years ago
Using 5000 and 0.3 only gives me a training set of size 110, not 3500, for some reason
@ujwalbasnet1
@ujwalbasnet1 3 years ago
model.fit(x_train, y_train, batch_size=1, epochs=100)
@mateuscastro4974
@mateuscastro4974 3 years ago
for me too, but it gives me 47/47 instead of 1500/1500
@mateuscastro4974
@mateuscastro4974 3 years ago
@@ujwalbasnet1 but it slowed my code down
@surafelm.w4058
@surafelm.w4058 4 years ago
Hi Valerio, I noticed that you use "np.array([[random()/2 for _ in range(inputs)] for _ in range(samples-size)])" in many tutorials. I am wondering if you could explain a little what "random()/2" means? Kindly
@vignesh8616
@vignesh8616 4 years ago
The random() function generates a random float that is always less than 1 (it lies in [0, 1)). To get a value in a specific range you would use something like uniform(5, 23), which gives a random number between 5 and 23.
So [[random()/2 for _ in range(inputs)]] can be explained like this: let inputs = 2, so the inner loop iterates twice, giving random()/2 for the 1st iteration and random()/2 for the 2nd. The outer loop repeats that once per sample, so with sample_size = 5 the output looks like [[0.12, 0.31], [0.07, 0.44], ...] repeated 5 times, each value below 0.5.
Hope this helps :)
@surafelm.w4058
@surafelm.w4058 4 years ago
@@vignesh8616 Thank you a lot for sharing the concepts with clear examples, appreciated!
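A small sketch of what that expression builds, assuming 2 inputs and 5 samples:

import numpy as np
from random import random

inputs, num_samples = 2, 5
# each feature is in [0, 0.5), so the target sum stays inside [0, 1),
# the range a sigmoid output unit can actually reach
x = np.array([[random() / 2 for _ in range(inputs)] for _ in range(num_samples)])
y = np.array([[row[0] + row[1]] for row in x])
print(x.shape, y.shape)  # (5, 2) (5, 1)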
@MrHeatification
@MrHeatification 3 years ago
Sooo good best tutorial thanks sir
@mohammadareebsiddiqui5739
@mohammadareebsiddiqui5739 4 years ago
Each time you ran the code, the loss was very different, e.g. from 9e-4 to 2e-3; that's more than twice the size. This is because of the random weight selection, but isn't that a lot of difference for something that is "random"?
@ValerioVelardoTheSoundofAI
@ValerioVelardoTheSoundofAI 4 years ago
That variation is in the realm of variance you would expect when training multiple times. You're right: that oscillation occurs because of the random initialisation of the parameters (weights + biases).
@ValerioVelardoTheSoundofAI
@ValerioVelardoTheSoundofAI 4 years ago
@Paul Dadzie indeed!
@mohammadareebsiddiqui5739
@mohammadareebsiddiqui5739 4 years ago
@Paul Dadzie but the behaviour/distribution of the dataset would be exactly the same right?
@mohammadareebsiddiqui5739
@mohammadareebsiddiqui5739 4 years ago
@@ValerioVelardoTheSoundofAI variance in training the same dataset multiple times?
@ValerioVelardoTheSoundofAI
@ValerioVelardoTheSoundofAI 4 years ago
@@mohammadareebsiddiqui5739 1) As long as there are enough samples, yes. 2) Even if you had the same dataset, you would still expect different results. That's because of the initial randomization of the net's parameters.
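To make the run-to-run variation discussed above reproducible, one can pin the random seeds (a sketch, assuming TF 2.x):

import random
import numpy as np
import tensorflow as tf

random.seed(42)         # Python's random(), used to generate the dataset
np.random.seed(42)      # NumPy operations
tf.random.set_seed(42)  # Keras weight/bias initialisation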
@marimudaInc
@marimudaInc 4 years ago
@The Sound of AI, a small point of critique. You can't use MSE as the loss function when working with values between 0 and 1, since squaring error values in that range does not have the intended effect: 2^2 = 4 -> good case; 0.5^2 = 0.25 -> bad case. Metrics like MAE are more appropriate for those kinds of ranges. Keep pushing out good content! :) I tried running your code, only changing the loss to MAE, and got the results 0.29766, 0.39570
@ValerioVelardoTheSoundofAI
@ValerioVelardoTheSoundofAI 4 years ago
Thanks for the feedback! You can actually use MSE with normalised data. The fact that data is normalised doesn't change the relation between values, which is what we're interested in here.
Take for example 3 values: y1 = 2, y2 = 10 and y3 = 4. The squared error for y1, y2 is (y1 - y2)^2 = (2 - 10)^2 = 64, and for y1, y3 it is (2 - 4)^2 = 4. After normalising the values, we obtain for y1_n, y2_n: (0.2 - 1)^2 = 0.64, and for y1_n, y3_n: (0.2 - 0.4)^2 = 0.04.
Now, the absolute difference between the SEs for y1, y2 and y1, y3 (i.e., 64 - 4 = 60) is larger than that for the respective normalised values (i.e., 0.64 - 0.04 = 0.6). However, the relation between them is the same: SE(y1, y2)/SE(y1, y3) = SE(y1_n, y2_n)/SE(y1_n, y3_n). In other words, the relative distances (on different scales) remain unchanged. This is true for both L1 and L2 metrics (i.e., MAE, MSE). The fundamental characteristic of MSE, being more sensitive to outliers than MAE, also holds for normalised data, precisely because of these unchanged relative distances.
Just to show that MSE works decently in this case, I re-ran the script with MSE and got better predictions: 0.30650, 0.4019. You should consider that there's a lot of variability in the predictions due to the random initialisation of the weights and the dataset. Then there's the issue of optimising hyperparameters like the number of epochs and the learning rate. Hope this helps!
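A quick numeric check of the ratio argument above:

se = lambda a, b: (a - b) ** 2

print(se(2, 10) / se(2, 4))        # 16.0 on the original scale
print(se(0.2, 1.0) / se(0.2, 0.4)) # ~16.0 after dividing everything by 10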
@casafurix
@casafurix 3 years ago
Apparently my test set did better (lower error) than the training set, which is the opposite of yours. Is that possible too?
@ValerioVelardoTheSoundofAI
@ValerioVelardoTheSoundofAI 3 years ago
That's quite unusual.
@TheZein2
@TheZein2 4 years ago
Hi, if I run the code you provided in the description I don't get the same results. I know the random weights, biases and dataset splits change every time, but your loss value goes really low, while mine starts from 0.0497 and only reaches 0.0375 after 100 epochs, not to mention that it always predicts something really close to 0.4849189. Everything is installed properly and the code is exactly the same as yours (copy-pasted). What could my problem be?
@ValerioVelardoTheSoundofAI
@ValerioVelardoTheSoundofAI 4 years ago
mmhh... this is weird! If you re-run the code multiple times, do you always get similar results for the error and predictions? The model is quite simple, but it should perform the arithmetic sum, at least decently, as I show in the video. Let me know.
@TheZein2
@TheZein2 4 years ago
@@ValerioVelardoTheSoundofAI Yup, always the same even if I re-run a dozen times. Could it depend on the Python version? (Mine is 3.7.6.) I also noticed that, unlike in your code, I need to set "batch_size = 3500", otherwise TensorFlow automatically sets it to something like 32
@ValerioVelardoTheSoundofAI
@ValerioVelardoTheSoundofAI 4 years ago
@@TheZein2 Interesting - I'm on Python 3.6, but a different Python version shouldn't be an issue in this instance. Since I haven't specified the batch_size in the code, I'm also defaulting to 32. Have you tried running the code with the default batch size? Also, can you try to increase the number of epochs to 200 and see if there's an improvement in the error?
@TheZein2
@TheZein2 4 years ago
@@ValerioVelardoTheSoundofAI It was the missing "verbose=2"! Without it the training was taking too much time (due to the default batch size of 32), which is why I changed to batch_size=3500, and that destroyed the accuracy of the model! Thank you for your time!
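For reference, a sketch of the two knobs involved, assuming the tutorial's model and data are in scope:

# verbose=2 prints one summary line per epoch instead of a live progress bar;
# batch_size is left at its default of 32, so training keeps many updates per epoch
model.fit(x_train, y_train, epochs=100, verbose=2)
model.evaluate(x_test, y_test, verbose=2)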
@sippee93
@sippee93 1 year ago
The error I get is: name 'Any' is not defined
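That NameError usually means a type hint is used without its import; a common fix (an assumption, since the failing line isn't shown) is:

from typing import Any  # makes the name 'Any' available for type hints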
@saremish
@saremish 1 year ago
Very helpful
@i_am-ki_m
@i_am-ki_m 2 years ago
Serious ... Tkx! KW ;)
@achmadarifmunaji3320
@achmadarifmunaji3320 4 years ago
Why is the amount of training and testing in my output not the same as yours? Mine shows 110/110 for training and 47/47 for testing, even though the code is the same. This is my code:

# implement a simple NN with tensorflow 2
import numpy as np
import tensorflow as tf
from random import random
from sklearn.model_selection import train_test_split

# array([0.1, 0.2], [0.2, 0.2])
# array([0.3, 0.4])
def generate_dataset(num_samples, test_size):
    x = np.array([[random()/2 for _ in range(2)] for _ in range(num_samples)])
    y = np.array([[i[0] + i[1]] for i in x])
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=test_size)
    return x_train, x_test, y_train, y_test

if __name__ == "__main__":
    x_train, x_test, y_train, y_test = generate_dataset(5000, 0.3)
    # print('x_test: {}'.format(x_test))

    # build model: 2 inputs -> 5 hidden -> 1 output
    # (Keras is a high-level library that sits on top of TensorFlow)
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(5, input_dim=2, activation="sigmoid"),
        tf.keras.layers.Dense(1, activation="sigmoid")
    ])

    # compile model
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
    model.compile(optimizer=optimizer, loss="MSE")

    # train model
    model.fit(x_train, y_train, epochs=100)

    # evaluate model
    print("Model evaluation")
    model.evaluate(x_test, y_test, verbose=1)
@eshanpatel8470
@eshanpatel8470 4 years ago
That's okay. This is due to model.fit() having one more parameter, "batch_size", which defaults to 32. If you want to see the same numbers as Valerio, set it to 1 and run your code. Although, I am not very sure why in his implementation the batch_size seems to default to 1 instead of 32. I would appreciate it if anyone could explain that to me as well.
@vaibhavtalwadker818
@vaibhavtalwadker818 3 years ago
@@eshanpatel8470 This drastically reduces the accuracy though
@wokeman9928
@wokeman9928 4 years ago
What do you think of the Mandela effect?
@ValerioVelardoTheSoundofAI
@ValerioVelardoTheSoundofAI 4 years ago
I've never done any research into false memories, so I don't really have an opinion on the matter. I'm sure you'll find a lot of interesting research on the topic googling around :)
@wokeman9928
@wokeman9928 4 years ago
@@ValerioVelardoTheSoundofAI ok
@rahulnagwanshi2348
@rahulnagwanshi2348 4 years ago
My results were, some predictions:
0.1 + 0.2 = 0.31199273467063904
0.2 + 0.2 = 0.40434715151786804
Much closer, thanks for the tutorial