Sarcasm is Very Easy to Detect! GloVe + LSTM

12,936 views

Normalized Nerd

A day ago

Comments: 33
@garrettosborne4364 4 years ago
Excellent. I am working on a GloVe model, and this adds some insights to that project.
@NormalizedNerd 4 years ago
That sounds great! Keep supporting :D
@datahacker1405 3 years ago
Your videos deserve more likes; you deep-dive into the concepts, which is really helpful for us.
@NormalizedNerd 3 years ago
Thanks a lot :D
@chetan923 4 years ago
Nice advertising skills, bro. I was working on a sentiment analysis project and thought TF-IDF would suffice; now you've made me watch all your videos to understand your references, and guess what, GloVe is as interesting as anything new I come across in data. I guess I'll have to work on it to understand more.
@NormalizedNerd 4 years ago
Do read more about it!
@ChinmayiHR-y3s 4 days ago
Bro, which algorithm did you use in this project?
@adamameen5 2 years ago
Great video! Can I know how you are performing the classification here, @Normalized Nerd?
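For readers asking the same thing: the video's title points to GloVe embeddings feeding an LSTM classifier. Below is a minimal, hedged sketch of such a binary classifier in Keras; the layer sizes are placeholders and the exact architecture used in the video may differ.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.initializers import Constant

# Placeholder shapes; in the real project these come from a fitted tokenizer
# and a pretrained GloVe file (hypothetical values, adjust to your data).
vocab_size, emb_dim = 10000, 100
embedding_matrix = np.zeros((vocab_size, emb_dim))  # fill rows with GloVe vectors

model = Sequential([
    Embedding(input_dim=vocab_size,
              output_dim=emb_dim,
              embeddings_initializer=Constant(embedding_matrix),
              trainable=False),           # keep the pretrained GloVe vectors fixed
    LSTM(64),                             # encode the padded word sequence
    Dense(1, activation="sigmoid"),       # sarcastic (1) vs. not sarcastic (0)
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
```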
@missmad5789 3 years ago
Hello :) Thank you so much for this video! I'm doing this same project at the moment and I have a question for you: after looking at both datasets and checking for duplicates, I realized that they seem to contain nearly the same headlines. This means that if I add both datasets into one dataframe, it will contain thousands of duplicates. Does that worsen the model's performance, or does it somehow help the model learn better? I'm new to ML :D Thank you!
@missmad5789 3 years ago
by the way, I love your energy in this video! :)
@NormalizedNerd 3 years ago
Great point! You should remove the duplicates. I forgot to mention this step in the video.
@adamameen5 2 years ago
@@NormalizedNerd Can someone tell me how to remove the duplicates? This is a great video, thank you!
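One way to do this, assuming the headlines are loaded with pandas (the file and column names below follow the public sarcasm-headlines dataset but are still assumptions), is `drop_duplicates`:

```python
import pandas as pd

# Load both headline datasets (file names are assumptions; adjust to yours)
df1 = pd.read_json("Sarcasm_Headlines_Dataset.json", lines=True)
df2 = pd.read_json("Sarcasm_Headlines_Dataset_v2.json", lines=True)

# Combine them and drop rows whose 'headline' text repeats
df = pd.concat([df1, df2], ignore_index=True)
df = df.drop_duplicates(subset="headline").reset_index(drop=True)
print(f"{len(df)} unique headlines remain")
```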
@sindhu2410 2 years ago
Is there any pretrained model or transformer model to detect sarcasm?
@paulkerrigan9857 4 years ago
Sarcasm is entirely dependent upon contextual clues, though. Two identical sentences could be genuine or sarcastic depending upon the situation. Is it really a problem that a neural network could solve by looking at strings?
@NormalizedNerd 4 years ago
If you look closely, some of the strings I used in the examples do contain the context. If the corpus contains the context, then yes, it is possible for a neural network.
@paulkerrigan9857 4 years ago
@@NormalizedNerd I see. Thanks for your response.
@-_BahauddinTaha 2 years ago
Can we do it for other languages, using the same procedure?
@atreyamajumdar9836 4 years ago
Very nice video! But if you are removing words like 'to', 'the', etc., isn't that also affecting the meaning of the sentence? And couldn't that cause some errors?
@NormalizedNerd 4 years ago
@Atreya Majumdar Good point. I actually tried including the stop words, but the validation accuracy didn't improve much. I guess the data is not so rich in that respect. However, you can be creative and try something with part-of-speech detection.
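For anyone wanting to run the same experiment, here is a minimal sketch of stop-word removal with NLTK; the video's exact preprocessing may differ, and the example headline is made up.

```python
import nltk
from nltk.corpus import stopwords

nltk.download("stopwords", quiet=True)
stop_words = set(stopwords.words("english"))

def remove_stop_words(text):
    # Keep only tokens that are not in NLTK's English stop-word list
    return " ".join(w for w in text.lower().split() if w not in stop_words)

print(remove_stop_words("scientists unveil doomsday clock of hair loss"))
# -> "scientists unveil doomsday clock hair loss"
```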
@atreyamajumdar9836 4 years ago
@@NormalizedNerd I don't have any experience with NLP. Still, is it possible that the presence of a second sentence in an example is teaching the network to classify it as sarcasm? Maybe you can check this easily by calculating the proportion of sarcastic sentences with two full stops in the training set.
@NormalizedNerd 4 years ago
@@atreyamajumdar9836 Since I am removing all punctuation (including full stops), the model doesn't know how many sentences there are. It just receives a sequence of words.
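A minimal sketch of that kind of punctuation stripping (one common way to do it; the video's exact code may differ):

```python
import string

def strip_punctuation(text):
    # Delete every punctuation character, so sentence boundaries disappear
    return text.translate(str.maketrans("", "", string.punctuation)).lower()

print(strip_punctuation("Area man 'wins' again. Really."))
# -> "area man wins again really"
```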
@abhijeetv4418 A year ago
Very nice, bro... Can you please help me find the code? Can you please share it somehow?
@midhileshmomidi2434 4 years ago
Hi sir, one doubt here: in the end, some sentences give incorrect output. So how can the model learn from these in the future to give the correct output?
@NormalizedNerd 4 years ago
For that you'll need online learning. A simple approach would be to train your model at regular intervals on the new sentences.
@abrahammathew8698 3 years ago
Good video!! What if the words in the test sentence are not present in the corpus, will it still work?
@NormalizedNerd 3 years ago
No, it will treat unknown words as zero vectors, so there will be no meaningful output.
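That behaviour falls out of how the embedding matrix is usually built: rows start as zeros and only get filled for words that have a GloVe vector, so anything unseen stays a zero vector. A hedged sketch (the toy `word_index` and `glove_vectors` below are assumptions standing in for a fitted tokenizer and a loaded GloVe file):

```python
import numpy as np

# Hypothetical inputs: `word_index` from a fitted Keras tokenizer and
# `glove_vectors` mapping word -> np.ndarray loaded from a GloVe file.
word_index = {"doomsday": 1, "clock": 2, "zxqv": 3}
glove_vectors = {"doomsday": np.ones(100), "clock": np.ones(100)}

emb_dim = 100
embedding_matrix = np.zeros((len(word_index) + 1, emb_dim))
for word, idx in word_index.items():
    vec = glove_vectors.get(word)
    if vec is not None:
        embedding_matrix[idx] = vec
    # Words without a GloVe vector (like "zxqv") keep the all-zero row,
    # so at prediction time they carry no information for the model.
```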
@greeshmaheshwari1620 4 years ago
Amazing work, bro. I want to know whether there is any method to stop making incorrect guesses again and again for the same or similar sentences. Basically, any method to append this new input and target without retraining the whole model every time?
@NormalizedNerd 4 years ago
Thanks, man! Well, appending input-output pairs is like using if-else instead of machine learning, so it's a big no. However, you can try online learning: train the model periodically after you have gathered enough new data.
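A rough sketch of that periodic update in Keras; `model`, `tokenizer`, and `max_len` are assumed to come from the original training run (e.g. loaded from disk), and the new examples below are made up.

```python
import numpy as np
from tensorflow.keras.utils import pad_sequences

# Hypothetical freshly collected examples gathered since the last training run
new_texts = ["oh great, another monday", "local team wins championship"]
new_labels = np.array([1, 0])

# Continue training from the current weights rather than starting over
new_seqs = pad_sequences(tokenizer.texts_to_sequences(new_texts), maxlen=max_len)
model.fit(new_seqs, new_labels, epochs=2, batch_size=32)
model.save("sarcasm_model_updated.keras")
```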
@somunath1713 3 years ago
Dada, along with the ML content, please make a playlist on how we can apply for data science jobs at some good India-based data science companies.
@debanshumajumdar9883 4 years ago
Your video should be more famous; it's simple and solves many issues for people struggling with GloVe model usage and LSTMs. Amazing! One question: for a multi-class use case, do you recommend any specific changes?
@NormalizedNerd 4 years ago
Thanks, man! For the multi-class case, just change the activation function in the last layer (softmax) and the loss function (categorical cross-entropy).
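In Keras terms, that change looks roughly like the sketch below (placeholder layer sizes; only the output layer and loss differ from the binary setup):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

num_classes = 3  # example value; set to your number of classes

model = Sequential([
    Embedding(input_dim=10000, output_dim=100),  # placeholder sizes
    LSTM(64),
    Dense(num_classes, activation="softmax"),    # was Dense(1, activation="sigmoid")
])
# Was loss="binary_crossentropy"; use "sparse_categorical_crossentropy"
# instead if the labels are integer class ids rather than one-hot vectors.
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
```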
@thetensordude 4 years ago
Bhaiya, please make some seaborn tutorials...
@NormalizedNerd 4 years ago
Sure, I'll try to make some.
Text Summarization & Keyword Extraction | Introduction to NLP
14:59
Normalized Nerd
51K views
Introduction to NLP | GloVe & Word2Vec Transfer Learning
21:12
Normalized Nerd
11K views
Why Does Diffusion Work Better than Auto-Regression?
20:18
Algorithmic Simplicity
409K views
Introduction to NLP | Word Embeddings & Word2Vec Model
23:10
Normalized Nerd
38K views
NLP Tutorial in Python - Spam Classification
20:49
Greg Hogg
9K views
Naive Bayes, Clearly Explained!!!
15:12
StatQuest with Josh Starmer
1.1M views
Watch this to learn Machine Learning in 2021!
11:58
Normalized Nerd
3.7K views
8. Text Classification Using Convolutional Neural Networks
16:28
Weights & Biases
89K views