Hi there! If you want to stay up to date with the latest machine learning and big data analysis tutorials, please subscribe here: kzbin.info. Also, drop your ideas for future videos and let us know what topics you're interested in! 👇🏻
@maneeshgaurav1438 · 4 years ago
How can I see the predicted classification label for every sentence? Could you shed some more light on this?
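A minimal sketch of one way to do this, assuming the tutorial's fitted model and padded test data; the test_sentences name is assumed for the raw test text:
import numpy as np
# sigmoid output: probability of the positive class for each padded sentence
probs = model.predict(test_padded)
labels = (probs > 0.5).astype(int).ravel()  # threshold at 0.5 -> 0 or 1
for sentence, label in zip(test_sentences, labels):
    print(label, "-", sentence)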
@maartjedejong7440 · 3 years ago
Hey! When running model.fit I get this error: UnimplementedError: Cast string to float is not supported [[node binary_crossentropy/Cast (defined at :1) ]] [Op:__inference_train_function_2861] Function call stack: train_function. What am I doing wrong?
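That error usually means the labels handed to model.fit are still strings, while binary_crossentropy needs numeric targets. A minimal sketch of the fix, where the string label values and the *_labels_text names are hypothetical:
import numpy as np
label_map = {"negative": 0, "positive": 1}   # hypothetical string labels from the CSV
train_labels = np.array([label_map[y] for y in train_labels_text], dtype=np.float32)
test_labels = np.array([label_map[y] for y in test_labels_text], dtype=np.float32)
model.fit(train_padded, train_labels, epochs=10, validation_data=(test_padded, test_labels))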
@aryamankukal1056 · 2 years ago
How would you implement this with one-hot encoding instead of an embedding layer?
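One possibility (a sketch, not from the video): keep the same integer sequences but turn each token id into a one-hot vector inside the model instead of learning an embedding; vocab_size and max_length are assumed to come from the tutorial's tokenizer setup.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, LSTM, Dense

model = Sequential([
    # one-hot encode each token id on the fly: (batch, max_length) -> (batch, max_length, vocab_size)
    Lambda(lambda x: tf.one_hot(tf.cast(x, tf.int32), depth=vocab_size), input_shape=(max_length,)),
    LSTM(64, dropout=0.1),
    Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])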
@anasputhawala6390 · 2 years ago
Is your model not overfitting? At 10:38 you can see that your validation loss is increasing while your training loss is decreasing. Isn't that a sign of overfitting?
@breakfastwithdave8933 · 2 years ago
Exactly, I noticed the same. Strangely enough, at that point the video was cut and the final results weren't shown.
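For anyone hitting the same rising validation loss: a minimal sketch of one common mitigation, an EarlyStopping callback that restores the best weights (the train_padded / train_labels / test_* names are assumed from the tutorial):
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor="val_loss", patience=2, restore_best_weights=True)
history = model.fit(
    train_padded, train_labels,
    epochs=20,
    validation_data=(test_padded, test_labels),
    callbacks=[early_stop],  # stop once val_loss stops improving
)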
@juancruzalric6605 · 3 years ago
Don't you need to flatten the embedding layer, or does the LSTM take care of that?
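For reference, the LSTM consumes the embedding's 3-D output (batch, timesteps, features) directly and emits a 2-D tensor, so no Flatten is needed; a minimal sketch with assumed vocab_size / max_length:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

model = Sequential([
    Embedding(vocab_size, 64, input_length=max_length),  # (batch, max_length, 64)
    LSTM(64, dropout=0.1),                                # (batch, 64) - last hidden state only
    Dense(1, activation="sigmoid"),
])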
@ZD341 · 3 years ago
Your video was a lifesaver.
@vaizerdgrey · 2 years ago
Can you please explain how to handle the Tokenizer's vocabulary? For example, if we use the trained model in production we will receive completely new sentences, and if we use a new Tokenizer to convert those words to sequences we will get a completely different integer vector and the prediction might fail.
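One way to handle this (a sketch, not from the video): fit the Tokenizer once with an oov_token, persist it alongside the model, and reload the exact same tokenizer at inference time so new words map to the OOV id instead of producing a different vocabulary. The train_sentences and vocab_size names are assumed.
import pickle
from tensorflow.keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer(num_words=vocab_size, oov_token="<OOV>")
tokenizer.fit_on_texts(train_sentences)      # fit once, on training text only

with open("tokenizer.pkl", "wb") as f:       # save next to the model
    pickle.dump(tokenizer, f)

# later, in production: reload and reuse the exact same word index
with open("tokenizer.pkl", "rb") as f:
    tokenizer = pickle.load(f)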
@delllaptop5971 · 4 years ago
Hey, could someone please explain why you would fit the tokenizer on only the training data and not on the test data? Won't there be some words in the test data that are not present in the training data? How does that affect accuracy?
@Rajivrocks-Ltd. · 4 years ago
You'll get overfitting that way, I think; when you eventually deploy the model you will get words that aren't present in your vocabulary anyway. At least that's what I think. I'm a novice at NLP, so don't take my word for it.
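To avoid building the vocabulary from test data while still handling unseen words, the usual pattern (a sketch, with assumed variable names) is to fit on the training sentences only and let an oov_token absorb anything new in the test set:
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

tokenizer = Tokenizer(num_words=10000, oov_token="<OOV>")
tokenizer.fit_on_texts(train_sentences)                        # vocabulary from training data only

test_sequences = tokenizer.texts_to_sequences(test_sentences)  # unknown words become the <OOV> id
test_padded = pad_sequences(test_sequences, maxlen=max_length, padding="post", truncating="post")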
@wwg681 · 3 years ago
Does the length of the sentences affect the model in any way?
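It does, in the sense that every sequence is padded or truncated to the same maxlen before reaching the LSTM, so very long sentences lose tokens and very short ones are mostly padding. A sketch of the knobs involved (the maxlen value is illustrative, train_sequences is assumed):
from tensorflow.keras.preprocessing.sequence import pad_sequences

train_padded = pad_sequences(
    train_sequences,
    maxlen=120,          # longer sentences are cut, shorter ones padded
    padding="post",      # pad with zeros after the real tokens
    truncating="post",   # drop tokens from the end when too long
)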
@NeotenicApe · 4 years ago
Hey, love your videos! Do you think you could run an example with the SNLI dataset? I have trouble conceptualizing what the input should look like, since there are two sentences being compared, yielding one of three possible outputs.
@DecisionForest · 4 years ago
Thanks for the support, mate! I don't know the SNLI dataset; pass me a link so I can see the features, please.
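For reference, SNLI pairs a premise with a hypothesis and labels the pair as entailment, contradiction, or neutral. A rough sketch of a two-input model (not from the video; variable names assumed, reusing the tutorial's tokenizer/padding setup):
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Embedding, LSTM, Dense, concatenate

premise_in = Input(shape=(max_length,), dtype="int32")
hypothesis_in = Input(shape=(max_length,), dtype="int32")

embed = Embedding(vocab_size, 64)   # shared embedding for both sentences
encode = LSTM(64)                   # shared encoder

merged = concatenate([encode(embed(premise_in)), encode(embed(hypothesis_in))])
out = Dense(3, activation="softmax")(merged)   # entailment / contradiction / neutral

model = Model([premise_in, hypothesis_in], out)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"])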
@ariouathanane · 3 years ago
Hello, how do you do this for multi-class classification, please?
@juanbomfim22 · 4 years ago
Instead of just 0 or 1, what changes should I make in order to classify into (0, 1, 2, 3), for example?
@DecisionForest · 4 years ago
Good question. For multiclass classification the most important aspect is the choice of the output layer, our Dense layer. Here we use "sigmoid", but for multiclass we would use "softmax".
@juanbomfim22 · 4 years ago
@@DecisionForest Oh! Thanks for replying. I've just implemented my own classification example following your tutorial. So I should change sigmoid to softmax, and what else? Set the Dense layer to 4 outputs (0, 1, 2, 3)? Just that?
@DecisionForest · 4 years ago
@@juanbomfim22 Yes, set the Dense layer units to the number of classes.
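Putting the thread together as a sketch (loss choice and variable names are assumptions, not shown in the video): with integer labels 0-3 you would also switch the loss from binary_crossentropy to sparse_categorical_crossentropy:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

model = Sequential([
    Embedding(vocab_size, 64, input_length=max_length),
    LSTM(64, dropout=0.1),
    Dense(4, activation="softmax"),   # one unit per class: 0, 1, 2, 3
])
# sparse_categorical_crossentropy for integer labels; categorical_crossentropy for one-hot labels
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"])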
@juanbomfim22 · 4 years ago
@@DecisionForest Thank you so much! I'm starting out with machine learning in Python and this tutorial is amazing. It was the only one that helped me do what I wanted! I built my multiclass classification model and it's working :)
@DecisionForest · 4 years ago
Glad to hear that! It's an amazingly rewarding journey, isn't it? Keep at it and always challenge yourself.
@lamikbj · 3 years ago
Hey, thanks man. By the way, how do you predict a sentence with the trained model?
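A minimal sketch (the tokenizer, max_length, and padding settings are assumed to match the training setup; the example sentence and class names are illustrative):
from tensorflow.keras.preprocessing.sequence import pad_sequences

sentence = ["the movie was surprisingly good"]
seq = tokenizer.texts_to_sequences(sentence)   # same tokenizer used for training
padded = pad_sequences(seq, maxlen=max_length, padding="post", truncating="post")

prob = model.predict(padded)[0][0]
print("positive" if prob > 0.5 else "negative", prob)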
@Angleishu26 · 3 years ago
In # create LSTM model -> model.add(LSTM(64, dropout=0.1)) I get this error: "Cannot convert a symbolic Tensor (lstm_4/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported." How do I fix this error? Sir, please help. 😥
@sailalmishra4860 · 1 year ago
This is because your installed version of NumPy isn't compatible with the installed version of TensorFlow. Try upgrading both numpy and tensorflow with pip; that should resolve the error.
@marcuscheong7585 · 4 years ago
Hey, are you using a GPU? I'm trying to classify text using an LSTM on Kaggle and it's really slow, even when using the Kaggle-provided GPU accelerator.
@DecisionForest · 4 years ago
Nope, for these videos I run locally on my MBP (i9, 32 GB RAM). Weird, it shouldn't be slow as the dataset is not huge...
@brojbaelena1569 · 3 years ago
Kaggle is quite slow; try running the project in Google Colab (don't forget to activate the GPU).
@satadruhazra8607 · 4 years ago
I'm trying to build an intent classification model with TensorFlow. I'm facing some issues with validation accuracy and prediction accuracy, and I'd like some expert advice. Could you share your LinkedIn or any contact info to help me out, please?
@DecisionForest · 4 years ago
Can you detail the issue here? I might be able to help with some ideas.
@anastasiiag8518 · 4 years ago
Great explanation! Thank you. Subscribed.
@DecisionForest · 4 years ago
Thank you, really glad it was helpful!
@parasharupasana · 4 years ago
Thanks!
@NKRKing · 4 years ago
Hi, thank you for this video. I have a question about prediction: what should I run the prediction on? On test_padded, like this: predictions = model.predict(test_padded, steps=1, verbose=0)? Thanks.
@NKRKing · 4 years ago
Furthermore, I visualized the results with a confusion matrix and half the predictions are correct and half are wrong, even though I increased the number of epochs.
@ola0x98 · 3 years ago
@@NKRKing you have to run the tokenizer fitted on the training data over the test data (and pad it the same way) before predicting.
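For this thread: yes, predicting on test_padded is right, provided it was built with the tokenizer fitted on the training data. A sketch of getting a confusion matrix from the thresholded probabilities (variable names assumed from the tutorial):
import numpy as np
from sklearn.metrics import confusion_matrix

probs = model.predict(test_padded, verbose=0)
pred_labels = (probs > 0.5).astype(int).ravel()   # sigmoid probabilities -> 0/1

print(confusion_matrix(test_labels, pred_labels))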
@dmitrikochubei3569 · 4 years ago
Great stuff! Also made your like count 69 by adding another one.