TensorFlow Tutorial 3 - Neural Networks with Sequential and Functional API

138,536 views

Aladdin Persson

Comments: 133
@anandiborade6349 4 years ago
This is the most underrated Tensorflow tutorial series I have ever seen.
@H1n1n1 3 years ago
true
@tejasindani1760 3 years ago
Well said!
@akshanshsingh3766 3 years ago
This is the best TensorFlow tutorial series I found. It's better than all other platforms like Coursera, udemy etc. Thank you!
@thevoid5181 1 year ago
even better than tensorflow? (they have tutorials too)
@junaiddooast7435 5 months ago
i have visited all these platforms but this is still the best of all
@SizigiaTaps 1 year ago
Great Tutorial! The best part, obviously, is when you said "número tres".
@AladdinPersson 1 year ago
Agree i listen to this part on repeat
@im-Anarchy 1 year ago
@@AladdinPersson why did you say "número tres" instead of neural nets
@anujprasad001 4 years ago
one of the best set of tutorials for TensorFlow.
@puneethj9920 3 years ago
This is the best Tensorflow series I have seen on Internet. Thanks Man! Cheers
@emotionblur7214 4 months ago
I'm trying with tensorflow 2.17.0 and the line inputs = keras.Input(shape=(28*28)) produces the error "Cannot convert '784' to a shape". Solution: apparently shape has to be explicitly a tuple, so shape=(28*28,) (with a trailing comma) does it.
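A minimal sketch of that fix; the tuple form works across TF 2.x versions (the layer sizes here just mirror the tutorial's):

    from tensorflow import keras

    # shape must be a tuple: (784,) or (28 * 28,), not a bare integer
    inputs = keras.Input(shape=(28 * 28,))
    x = keras.layers.Dense(512, activation="relu")(inputs)
    outputs = keras.layers.Dense(10)(x)
    model = keras.Model(inputs=inputs, outputs=outputs)
    print(model.summary())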
@cutyoursoul4398 4 years ago
As a beginner, this series is perfect, thanks a lot
@henkjekel4081 3 years ago
Thank you for the videos man, really helpful! Some comments, correct me if I'm wrong:

1. When doing x_train.reshape you should use x_train.reshape(60000, -1). In your video you use x_train.reshape(-1, 784), stating that the -1 will keep the 60000 the same. Actually the -1 makes reshape infer the 784 automatically without you having to compute 28*28, so to take full advantage of the syntax it's easier to use x_train.reshape(60000, -1).

2. You mention that the type of the data is float64, but the type of the data is uint8. Therefore, I don't think we become computationally more efficient by changing to float32.

3. Add this to the first lines of your script if you want clear terminal output:

    import os
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
    from os import system
    system("clear")

4. Add this to your VS Code settings if you want everything to work nicely:

    {
        //to open settings in json format:
        "workbench.settings.editor": "json",
        //to open default settings when opening user settings
        "workbench.settings.openDefaultSettings": false,
        //"python.pythonPath": "C:\\Users\\31627\\pyver\\383\\Scripts\\python.exe", //this is the default python
        //"python.pythonPath": "C:\\Users\\31627\\.conda\\envs\\tf2.4\\python.exe"
        "python.pythonPath": "C:\\ProgramData\\Anaconda3\\python.exe", //this is the default python
        "python.disableInstallationCheck": true, //dont know why
        "editor.tabCompletion": "on", //to be able to tab out of ''
        "breadcrumbs.enabled": false,
        "workbench.startupEditor": "newUntitledFile",
        "workbench.editorAssociations": { "*.ipynb": "jupyter-notebook" },
        "workbench.colorTheme": "Default High Contrast",
        //to not show the file path at the top of the code file
        //we installed the extension code runner to have a clear code output in the terminal
        "editor.fontSize": 17,
        "editor.fontWeight": "500",
        "debug.console.fontSize": 17,
        "terminal.integrated.fontSize": 17,
        "terminal.integrated.fontWeight": "600",
        "kite.showWelcomeNotificationOnStartup": false,
        "python.formatting.provider": "autopep8",
        "editor.formatOnSave": false,
        "python.formatting.autopep8Args": ["--ignore", "E402"],
        "code-runner.executorMap": { "python": "$pythonPath -u $fullFileName" },
        "code-runner.clearPreviousOutput": true,
        "code-runner.showExecutionMessage": false,
        "code-runner.saveFileBeforeRun": true,
        "code-runner.runInTerminal": true,
        "python.showStartPage": false,
        "python.condaPath": "C:\\ProgramData\\Anaconda3\\_conda.exe",
        "python.defaultInterpreterPath": "C:\\ProgramData\\Anaconda3\\python.exe",
        "notebook.cellToolbarLocation": { "default": "right", "jupyter-notebook": "left" } //this is the default conda
    }
@reimartsarmiento8364 3 years ago
thanks
@hamzajaved5283 3 years ago
Thanks for this, always enjoy reading comments that offer alternative suggestions. Just my two cents:

1. Nice spot, certainly seems neater to do it this way! To be more general one could also write: x_train = x_train.reshape(x_train.shape[0], -1). Assuming the first dimension corresponds to the samples, different x_train sets will still be reshaped correctly using this, rather than having to hard-code the exact number of samples each time.

2. Actually, by rescaling the pixel values to be between 0.0 and 1.0 (accomplished by the /255.0 operation), the datatype does by default become float64. Manually setting the datatype to float32 therefore cuts memory usage in half. One can check this with x_train.dtype (which will return uint8, float64 or float32 depending on which transforms have been applied). And to get the actual size of this object in memory, you can use: import sys; sys.getsizeof(x_train)

3. & 4. Didn't look into these as I'm not too bothered by the warning messages, and not a vscode user :)
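A short sketch illustrating both points, checking the dtype and memory footprint before and after the cast (variable names follow the tutorial):

    import sys
    from tensorflow.keras.datasets import mnist

    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    print(x_train.dtype)                          # uint8 as loaded

    x_train = x_train.reshape(x_train.shape[0], -1) / 255.0
    print(x_train.dtype, sys.getsizeof(x_train))  # float64 after the division

    x_train = x_train.astype("float32")
    print(x_train.dtype, sys.getsizeof(x_train))  # float32, roughly half the bytes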
@You-7860 4 months ago
Bro, I'm not getting the things used inside those built-in functions as I'm new and haven't watched the theory part... what would be your suggestions @@hamzajaved5283
@You-7860 4 months ago
@@hamzajaved5283 bro, I'm not able to understand the function parameters somewhat... I'm new and haven't covered the theory of neural networks properly... I started with this tensorflow course... bro, what would be your suggestion?
@praneethbhat7977 4 months ago
@@You-7860 watch the videos he suggests in this video first, you will get an idea
@balakrishnakumar1588 4 years ago
Superb enjoyed the tutorial. Waiting for the playlist to grow.
@PraYogiz 4 years ago
I think this tutorial is the best; it explains and covers everything needed.
@mayankarya7045 2 years ago
Hi Aladdin, You proved it is the best one. Thanks ❤️
@joysanimationstudio2375 3 years ago
4:06 I can't compile, it gives me the error "list index out of range" from the tf.config.experimental.set_memory_growth(physical_devices[0], True) line. Please help me.
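That error usually means list_physical_devices('GPU') returned an empty list (no GPU visible to TensorFlow). A hedged guard around the tutorial's setup lines:

    import tensorflow as tf

    physical_devices = tf.config.list_physical_devices("GPU")
    if physical_devices:  # the list is empty when no GPU is visible, hence the index error
        tf.config.experimental.set_memory_growth(physical_devices[0], True)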
@kohinoortanishq3968 3 years ago
Thank you so much. This helped me a lot. Previously I was not aware of the functional API and I badly needed this for my project.
@yashvardhannegi5909 4 years ago
You sir deserve more views and likes
@iantaggart3064 11 months ago
Accuracy increased by 0.07% with the inclusion of another layer size 128, and a further 0.34% with seven epochs instead of five.
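A rough sketch of that variant, assuming the tutorial's Sequential setup (exact gains vary from run to run):

    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        keras.Input(shape=(28 * 28,)),
        layers.Dense(512, activation="relu"),
        layers.Dense(256, activation="relu"),
        layers.Dense(128, activation="relu"),  # the extra layer mentioned above
        layers.Dense(10),
    ])
    model.compile(
        loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        optimizer=keras.optimizers.Adam(learning_rate=0.001),
        metrics=["accuracy"],
    )
    # model.fit(x_train, y_train, batch_size=32, epochs=7, verbose=2)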
@jeremynx 3 years ago
I am so happy to find these videos!
@kanishkgandhi101 3 years ago
using the exact same code, I am getting x_train.shape = (60000, 28, 28) but when I run the model I am getting 1875/1875 in each epoch rather than 60000/60000... why is this happening??
@johnhawkins8914 3 years ago
1875 iterations with 32 samples (batch size) per iteration = 60000 samples
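In other words, the progress bar counts optimizer steps per epoch rather than individual samples:

    import math

    samples, batch_size = 60_000, 32
    print(math.ceil(samples / batch_size))  # 1875 optimizer steps per epoch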
@cx4917 4 years ago
Hey, when you train the model why does it show 60k observations while using a batch size?
@ahmedmohamedmohamedmohamed282 2 years ago
great explanation and great video
@UniverseOfIntentionality 6 months ago
Thank you for these tutorials - and hello from 2024!
@michelchaghoury870 2 years ago
MANNNN so usefull, please do more and more of these videos pleasee and keep going
@vidyakadam4021 4 years ago
Very nicely explained the code and its uses. Thanks a lot.
@nerymarques42 2 years ago
I could not thank you enough! superb content
@donfeto7636 2 years ago
Can we use extracting specific layer features for transfer learning?
@__-op4qm 2 years ago
You can select which parameters/tensors to get derivatives for and train (i.e. step, update) only them, while the others are kept fixed.

PS (long tangent): Generally you can use the derivative of anything w.r.t. any tf.Variable-type tensors for whatever purpose when using tf.GradientTape, for training a NN or for anything else (like fitting arbitrary parametric functions, or anything where derivatives are useful). You can save (e.g. as pickles), load, swap or manually update the trainable weights (which are all tf.Variable type), or any other Variable-type tensors, using the .assign() method, which is generally convenient. Any consecutive steps (possibly a sequence of methods called from several classes) that need to happen fast (e.g. during training) can just be wrapped in a function decorated with @tf.function; that frees the whole pipeline to be handled/debugged eagerly (i.e. numpy-like behaviour) outside of training, and since graph mode is on only inside @tf.function executions, you keep the option to train eagerly/interactively, just slower. Moreover, hands-on control of the gradients when running tf.GradientTape in eager mode lets you check for tf.nan values and skip those gradient updates, which would otherwise break the model. The Jacobians can also be manually fiddled with before passing them to the optimiser: you can clip them, normalise them, maybe even reweight them per datapoint. In summary, tf.GradientTape and @tf.function, mixed with the .Layer and .Model inheritance tools, allow the user to do almost whatever they want with this library.
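A minimal sketch of that style of custom training step (the model, optimizer, and loss_fn arguments are placeholders, not code from the video):

    import tensorflow as tf

    @tf.function  # graph-compile only the hot path; everything else can stay eager
    def train_step(model, optimizer, loss_fn, x, y):
        with tf.GradientTape() as tape:
            logits = model(x, training=True)
            loss = loss_fn(y, logits)
        grads = tape.gradient(loss, model.trainable_variables)
        # optional safeguard mentioned above: clip the gradients before applying them
        grads = [tf.clip_by_norm(g, 1.0) for g in grads]
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss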
@donfeto7636 2 years ago
@@__-op4qm thank you for taking the time to write this comment, I appreciate it. Can you recommend a book for studying tensorflow for beginners?
@__-op4qm 2 years ago
@@donfeto7636 This channel is awesome! Also, on coursera there is a useful course called 'custom-models-layers-loss-functions-with-tensorflow'. [my 1st reply got deleted, probably because I put the full link in.] The main thing I was saying is that some tf decorators have some nuances to consider in order to train faster and correctly. Googling forums etc. and trying things out until it trains equally well as the official best-practice code, step by step, was a good exercise when pushing the library to do unusual custom things. I really like the fact that tf now very nicely supports the eager mode functionality, meaning that things can be selectively run in graph mode only when and where that's needed for speed-up.
@carlosfloresspindola9448 3 months ago
Hello, first of all, thank you for the tutorial, it has been very useful, but I would like to know if you could help me. The two lines you write after importing the libraries (to avoid errors) tell me they are out of range. Could you help me understand why they say they are out of range and how to solve it? Also, when I want to read the MNIST database it tells me it cannot import anything; it takes a long time and at the end throws an error saying it could not do it. Could you help me solve it so I can continue with your course? I look forward to your comments, thank you.
@dhruvnegi422 4 years ago
hey, I used adadelta and rmsprop as my optimizers, and in both cases the training-set accuracy was above 0.99 with a loss of around 0.0062, but when evaluating, the loss is pretty high, 1.32 and 1.42 respectively, with accuracies of 0.95 for both. What might be the reason for this huge deviation, is it due to over-fitting or some other concept I am missing??
@AladdinPersson 4 years ago
Sounds like overfitting to me, try and add dropout, l2 regularization and/or data augmentation which we cover in future videos :)
@dhruvnegi422 4 years ago
@@AladdinPersson ohh cool cool, thanks mate
@lukehyde6942 9 months ago
The code provided doesn't work; if I modify it to work with the latest tensorflow, I get 0.15 accuracy. I don't have a GPU to use, it's an older laptop, does that change the answer? Currently I'm using the sequential method.
@lukehyde6942 8 months ago
I think the issue was the inputs=inputs, etc. line, that part is very critical; hopefully this is useful info for someone
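For anyone hitting the same thing, a minimal sketch of the Functional API wiring that comment refers to (layer sizes follow the tutorial):

    from tensorflow import keras
    from tensorflow.keras import layers

    inputs = keras.Input(shape=(784,))
    x = layers.Dense(512, activation="relu")(inputs)
    x = layers.Dense(256, activation="relu")(x)
    outputs = layers.Dense(10, activation="softmax")(x)

    # passing the input/output tensors explicitly is what ties the graph together
    model = keras.Model(inputs=inputs, outputs=outputs)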
@meethansaliya4885 3 years ago
when I am doing hands-on practice I get an error for the values of y_true and y_pred in the loss, can anyone help me out? thanks in advance
@samthrimavithana8243 3 years ago
Hi, when I print(x) it gives an error: TypeError: 'tensorflow.python.framework.ops.EagerTensor' object is not callable
@aaaqaaaa2720 3 years ago
Hi sir, when I execute the code I have a problem, the error "name 'Sequential' is not defined". Why? Thanks in advance
@coding10yearold 3 years ago
Shouldn't each epoch have 60,000 total_size / 32 batch_size = 1875 whatchamacallits (steps)?
@tanvik5427 3 years ago
YEP
@utpalpodder-pk6vq 3 years ago
Sir, in the last portion of this lecture you mentioned extracting features from the layers of the model. My doubt is about when we should extract those features: can we extract them both after and before training the model? While building a CNN model I found that we can extract the features of the layers both after and before training, and we can even predict inputs using those features. So it's creating some confusion about how we are able to predict the inputs using the intermediate layer features even before we train the model. Please help me resolve this confusion.
@shikhargupta7080 1 year ago
It's possible, but the output will not be meaningful in this case, since the weights are still random before training
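A hedged sketch of pulling out an intermediate layer's activations; the small model here is a stand-in rather than the tutorial's exact code, and the features only become meaningful once the model has been trained:

    from tensorflow import keras
    from tensorflow.keras import layers

    # hypothetical stand-in model; any built Keras model works the same way
    inputs = keras.Input(shape=(784,))
    x = layers.Dense(512, activation="relu", name="hidden")(inputs)
    outputs = layers.Dense(10)(x)
    model = keras.Model(inputs=inputs, outputs=outputs)

    feature_extractor = keras.Model(inputs=model.inputs,
                                    outputs=model.get_layer("hidden").output)
    # features = feature_extractor.predict(x_test)  # meaningful only after training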
@malik_fa 4 years ago
How can we load a .csv or image dataset from a local directory instead of using the built-in MNIST dataset?
@Rohankumar-dd2ss 4 years ago
For CSV, use pandas and convert it into numpy
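A small sketch of both routes; the file paths and column name are placeholders, and image_dataset_from_directory is available in recent TF versions:

    import pandas as pd
    import tensorflow as tf

    # CSV: read with pandas, then convert to numpy arrays for model.fit
    df = pd.read_csv("data/train.csv")                    # placeholder path
    y = df["label"].to_numpy()                            # placeholder column name
    x = df.drop(columns=["label"]).to_numpy().astype("float32") / 255.0

    # image folders (one subfolder per class) can be read into a tf.data.Dataset
    ds = tf.keras.utils.image_dataset_from_directory(
        "data/images/",                                   # placeholder path
        image_size=(28, 28),
        batch_size=32,
    )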
@granatapfel6661 3 years ago
Why aren't the commands like import tensorflow as tf and the rest highlighted? I think I'm using the right interpreter. Can someone help me?
@matinfazel8240 3 years ago
thanks, it was awesome
@abhishekbhosale5310 1 year ago
I've been trying to train the exact same model, but for some reason I was able to get a max accuracy of only about 92 percent. I even tried tuning the hyperparameters but the results were the same. Can you tell me what might be the probable issue?
@NayarJoolfoo 1 year ago
Such a clear explanation (y)
@akashbhoi1951 3 years ago
best tutorial ever
@dicesdw 3 years ago
If we remove the normalization from the data, will we get the same results but take more time to compute?
@tsegaamanuel5907 4 years ago
Really superb tutorial
@delyartabatabai9636 3 years ago
great explanation! Thanks!
@IseseleVictor 9 months ago
The tutorial is very helpful, though I'm getting an error when trying to print the model summary following tutorial 3: print(model.summary()) fails with raise ValueError(f"Cannot convert '{shape}' to a shape.") -> ValueError: Cannot convert '784' to a shape.
@WildSurferYT 9 months ago
I have the same problem
@Ven0mm04 11 months ago
Hey, is there any possibility that I can get 2 outputs? Like I give one picture in and the output is 2 new pictures?
@Ven0mm04 11 months ago
nvm, didn't see the complete video xD
@dijkstra4678 2 years ago
unfortunately in Tensorflow 2.8.0 the Functional API is broken. After consulting google it appears that recently the Functional API is outputting the same error message that nobody knows how to solve.
@jeremynx 3 years ago
Thank you very much!!! You are really great!
@watcharakietewongcharoenbh6963 3 years ago
I don't understand what the "one input and one output" you mention means, and why the Sequential API cannot handle the other cases?
@im-Anarchy 1 year ago
same but now did you understand?
@shudharsanmuthuraj1076 4 years ago
hello, why does the printout show 60000 samples instead of the number of batches (60000/64)?
@navalsurange3588 4 years ago
yes, the same is happening with me, did you get it resolved??
@XhunterDragon96 4 years ago
Hi, I have a question. Why is the shape of the input 28*28? I understand that the images are 28 by 28 pixels, but I thought the entries were 60k? I don't understand exactly how this part goes.
@SaiKiranAdusumilli 2 years ago
60k is the number of images... and each image consists of 28*28 pixels.. that means 784 values.... so for each image we have 784 values...
@nandhagopalcs5608 4 years ago
you are the best sir
@dhawals9176 4 years ago
Why is it showing 60000/60000, wasn't your batch size 32?
@neillunavat 4 years ago
How do I print only the accuracy while training a Keras Functional API model? Please help, if you're here. I am trying to compare 3 different output layers with different activation functions. The problem is, I only want the accuracy and not the loss while training. I have no issues with it, but the line is too LONG. I want to compare each layer's accuracy.

Epoch 1/5
1875/1875 - 4s - loss: 3.7070 - Sigmoid_loss: 1.1836 - Softmax_loss: 1.2291 - Softplus_loss: 1.2943 - Sigmoid_accuracy: 0.9021 - Softmax_accuracy: 0.9020 - Softplus_accuracy: 0.5787
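One possible approach, sketched under the assumption of a multi-output model like the one above: silence the default log with verbose=0 and print only the accuracy entries from a custom callback:

    import tensorflow as tf

    class AccuracyOnly(tf.keras.callbacks.Callback):
        """Print only the accuracy metrics at the end of each epoch."""
        def on_epoch_end(self, epoch, logs=None):
            accs = {k: v for k, v in (logs or {}).items() if "accuracy" in k}
            print(f"epoch {epoch + 1}: " +
                  ", ".join(f"{k}={v:.4f}" for k, v in accs.items()))

    # model.fit(x_train, y_train, epochs=5, verbose=0, callbacks=[AccuracyOnly()])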
@essamgouda1609 4 years ago
God Bless you Sir !
@utkar1 3 years ago
Hey thanks for this awesome lesson. A query though. The same model with functional API gives me 9.7-9.8% accuracy while the Sequential is giving me 97-98% accuracy
@nikai4249 2 years ago
try from_logits = True
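For reference, a small sketch of the two consistent ways to pair the last layer with the loss (mismatching them is a common cause of near-random accuracy):

    from tensorflow import keras

    # Option A: last layer has no softmax -> tell the loss it receives raw logits
    loss_a = keras.losses.SparseCategoricalCrossentropy(from_logits=True)

    # Option B: last layer ends in softmax -> from_logits stays False (the default)
    loss_b = keras.losses.SparseCategoricalCrossentropy(from_logits=False)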
@tahaa1994 4 years ago
You are the best
@thetensordude 4 years ago
Nice tutorial! Can you also make a video about The Tensorflow Core?
@shlepeekeg1412 2 years ago
I have started this after coding in SQL for a year
@alternativepotato 3 years ago
you should watch StatQuest when it comes to theory, seriously guys, although Aladdin's resources are good. StatQuest is by far more concise and understandable
@jeremynx 3 years ago
The best!
@tyrian007 3 years ago
to normalize you divided by 255.0, where did that number come from? Did you already know the max value?
@Rohankumar-dd2ss 3 years ago
pixel values range from 0 to 255
@abuabdullah9878 3 years ago
@@Rohankumar-dd2ss Thank you!!!
@HomonculusPort 3 years ago
why does my y_test only have 10000 samples in it compared to 60000 in y_train
@johnhawkins8914 3 years ago
for model training we need a lot of data, 60000 samples in y_train, but for evaluation a smaller dataset is enough, hence only 10000 samples in y_test
@sofyanmahmoud4776 3 years ago
You are amazing
@Amir-gi5fn 8 months ago
ValueError: Cannot convert '784' to a shape.
AttributeError: 'NoneType' object has no attribute 'items'
@Amiths18 4 months ago
Add "," after 784, i.e. shape=(784,)
@RiadAhmed-ce6qo 8 months ago
you are very good thanks
@randyrabinzengui9174 4 years ago
what is the application you are using for that code? PyCharm or another?
@AladdinPersson 4 years ago
PyCharm yes
@randyrabinzengui9174 4 years ago
@@AladdinPersson I'm trying with my PyCharm but I think it has some problems! Please, how do I get the code 🙏
@randyrabinzengui9174 4 years ago
@@AladdinPersson is it on Mac or Windows?
@cedricmanouan2333 4 years ago
Very good !
@AladdinPersson 4 years ago
I appreciate you man!
@grahamastor4194 3 years ago
Question: when building a model without a final activation layer and allowing the loss function to apply the last activation, what does the model.predict code look like? Thanks.
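model.predict itself doesn't change; it just returns raw logits, so you apply softmax (or argmax) yourself. A small sketch, assuming the tutorial's model and x_test already exist:

    import tensorflow as tf

    # `model` (ending in a plain Dense(10), no softmax) and `x_test` are assumed
    # to already exist, as in the tutorial
    logits = model.predict(x_test)             # raw, unnormalized scores
    probs = tf.nn.softmax(logits, axis=-1)     # convert logits to probabilities
    pred_classes = tf.argmax(logits, axis=-1)  # argmax is the same on logits or probs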
@apocalypt0723 4 years ago
thanks for the video.
@aljosaklajderic5580 3 months ago
It is a cool series, but could you update your code? It throws a bunch of errors. Thanks!
@akashk7390 10 months ago
can you please provide the code
@olo259 9 months ago
thanks sir for the video
@JearBear6896 3 years ago
For the people confused on tensorflow.keras.layers: the new way is .keras.datasets.
@navalsurange3588 4 years ago
dude, how is your training time so low? Mine is 69-75 sec per epoch. Also, with batch size = 32 only 60000/32 training data elements are passed, not 60000, so I changed the batch size to 1 and now it is taking 69-75 sec
@philippfrogel9355 4 years ago
I think this is because he uses a GPU, and you don't? It can be tricky to activate it sometimes, I think. Also the batch size doesn't make sense if it is 1; then you do each train step with only one image. If your batch size is e.g. 32, then normally all 60000 images are still used within one epoch. Be careful with these answers, I'm not a pro myself
@kaylaparys7146 3 years ago
hello mine also takes only about a second each and I am running a GTX 1060. Since tensorflow appears to run on the GPU I would assume the stronger the GPU the faster the computations.
@moussa5495 1 year ago
VERY NICE AND CLEAR
@philippfrogel9355 4 years ago
get dis guy more subs
@iskrabesamrtna 3 years ago
wouldn't it be more practical to use a Flatten layer instead of reshaping x_train?
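A sketch of that alternative, which keeps the reshape inside the model instead of doing it in NumPy:

    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        keras.Input(shape=(28, 28)),   # raw images, no manual reshape needed
        layers.Flatten(),              # -> (batch, 784)
        layers.Dense(512, activation="relu"),
        layers.Dense(256, activation="relu"),
        layers.Dense(10),
    ])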
@magicalpotato196 2 years ago
I understand why the last Dense layer is 10, but why is the first layer 512 and the second layer 256? I'm completely new to this, so if anyone could give an explanation for dummies I'd appreciate it :)
@SaiKiranAdusumilli 2 years ago
It's a fairly arbitrary number we can pick... but there is some logic in shrinking the dense layer sizes. For example, take a human image that needs to be predicted... the first dense layer produces results that detect fingers, hands, legs, hair, eyes etc. (identification of small parts). In the 2nd dense layer we combine those fingers to form a hand or leg image, and from the eyes and hair we can identify a head. In the final output layer we combine all these values (hands, face) and predict that it's a human image... that's how it works.
@juanluismagana9043 4 years ago
thank you so much, I really understand everything (I'm not native english speaker)
@DiaaHaresYusf 2 years ago
the division by 255 is not because of performance.. it's because of the weights assigned to the neural network: the weights are generated randomly between 0 and 1, and if you keep your X values big they will be ignored and the whole neural network may not learn.. thanks
@asherabecassis9575 3 years ago
For Apple users -> os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
@moussa5495 1 year ago
Do u have a website?
@amruthavarshini8094 8 months ago
check out campusx once
@Namenlos-r8f 5 months ago
your voice sounds so familiar
@shivamanand8998 4 years ago
Got an accuracy of 98.1 using relu for layers 1 and 2 and sigmoid for layer 3
@furkatsultonov9976 4 years ago
You can use the sigmoid function for binary classification like cat / no cat. For the MNIST problem you should use the softmax activation function for the output layer, as we have multiple classes.
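A one-line illustration of the difference (sigmoid for a single binary output, softmax normalizing over the 10 digit classes):

    from tensorflow.keras import layers

    binary_out = layers.Dense(1, activation="sigmoid")       # cat / no-cat
    multiclass_out = layers.Dense(10, activation="softmax")  # 10 MNIST digits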
@manishahajare2470 2 years ago
5:46
@rudela9900 3 years ago
Never mind, a parenthesis was in the wrong place. Thanks anyway.
@dailyupdatelesson4097 4 years ago
bro, we need more advanced lessons, these codes are already on GitHub and elsewhere
@AladdinPersson 4 years ago
I hear you, am doing more advanced too, but we have to think about the newbies too ;)
@nikhildr4441 4 years ago
yeah! what about beginners 😔
@Rindik 1 year ago
Thanks for the tutorial, but fu*king PyCharm is killing my nerves. VS Code helped me though
@thevoid5181 1 year ago
if you had problems importing keras like me (from tensorflow import ... / _tf_uses_legacy_keras_), do this instead: import keras
@thevoid5181 1 year ago
from keras import layers
from keras.datasets import mnist
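For reference, a hedged sketch of the tensorflow.keras import path, which works on recent TF 2.x installs where Keras ships inside TensorFlow:

    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras import layers
    from tensorflow.keras.datasets import mnist

    (x_train, y_train), (x_test, y_test) = mnist.load_data()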
@FillyRoid 5 months ago
@@thevoid5181 from keras.datasets import mnist doesn't work, it doesn't find datasets. You can get datasets via "from keras import datasets", but from keras.datasets it doesn't work