Day 9-Word Embedding Layer And LSTM Practical Implementation In NLP Application|Krish Naik

38,141 views

Krish Naik

1 day ago

Today materials: colab.research...
All materials will be added in the below dashboard. Enroll for free
ineuron.ai/cou...
Hello Guys,
Finally, we at iNeuron are happy to announce a 6-month Live Full Stack Blockchain Development Course, which will be starting from 23rd July 2022 and will be available for lifetime. The instructors of the course will be myself, Navin Sir from the Telusko YouTube channel, and Sanjeevan. The main focus of the course is to become a Full Stack Blockchain developer, which is an in-demand role at many companies. The course fee is really affordable, at about 4000 INR including GST. You can find the entire course syllabus at the below link.
Course syllabus: bit.ly/3OQwdGL
You can get an additional 10% discount by using Krish10 as the coupon code.
Don't miss this opportunity and grab it before it's too late. Happy Learning!!

Comments: 45
@tarun4705
@tarun4705 1 year ago
26:13 Honestly, your free content is much better than other paid content. I would even pay twice or thrice as much to watch your videos.
@RavitejaGundimeda
@RavitejaGundimeda 1 year ago
26:13 I value free content. Thanks for making it free and it was a great learning experience!
@ajaynegi6278
@ajaynegi6278 2 years ago
Happy Birthday Krish! You are a Messiah for students like us....and thank you so much for these videos!
@prerequisitechannel
@prerequisitechannel 2 years ago
Great content Krish, Thank you! Waiting for sessions on transformers and BERT.
@avbendre
@avbendre 1 year ago
yes
@Gokulhraj
@Gokulhraj 2 years ago
Deprecated: tf.keras.preprocessing.text.one_hot does not operate on tensors and is not recommended for new code. Prefer tf.keras.layers.Hashing with output_mode='one_hot', which provides equivalent functionality through a layer that accepts tf.Tensor input. See the preprocessing layer guide for an overview of preprocessing layers.
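Both the old one_hot helper and its Hashing replacement are hashing tricks: each word is hashed to an integer index in [0, vocabulary_size), with no vocabulary actually stored. A minimal pure-Python sketch of that idea (illustrative only, not the TensorFlow implementation; the example sentence is made up):

```python
import hashlib

def hash_to_index(word, vocab_size):
    # Deterministic hash (unlike Python's built-in hash(), which is salted
    # per process), so the same word always maps to the same index.
    digest = hashlib.md5(word.lower().encode("utf-8")).hexdigest()
    return int(digest, 16) % vocab_size

sentence = "the glass of milk"
indices = [hash_to_index(w, 500) for w in sentence.split()]
print(indices)  # four indices, each in the range [0, 500)
```

As with the real layers, distinct words can collide onto the same index; a larger vocab_size makes collisions rarer.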
@shubhsharma4016
@shubhsharma4016 2 months ago
can you share some code on how to proceed further with the steps?
@srishtisharma5101
@srishtisharma5101 5 months ago
26:25 I spent more than 1L, and now I am studying from your free content, Krish. Don't ever say nobody values it.
@milindtakate5987
@milindtakate5987 2 years ago
Waiting for sessions on transformers and BERT.
@sandipansarkar9211
@sandipansarkar9211 2 years ago
finished watching
@osikoyaadeola2530
@osikoyaadeola2530 2 years ago
Thanks so much sir
@Ayoub.Naderei
@Ayoub.Naderei 10 days ago
amazing man
@ratnak1058
@ratnak1058 2 years ago
Amazing session
@hargovind2776
@hargovind2776 2 years ago
Great explanation
@mrityunjayupadhyay7332
@mrityunjayupadhyay7332 2 years ago
Nice explanation
@bilalshabbir1343
@bilalshabbir1343 2 years ago
Awesome series
@mohankrishna2188
@mohankrishna2188 2 years ago
I am eagerly waiting for Transformers and BERT
@kulbhushansingh1101
@kulbhushansingh1101 2 years ago
Sir, eagerly waiting for the next session.
@bilalsidiqi9992
@bilalsidiqi9992 2 years ago
What are the true/target values that are used in calculating the loss function?
@lakshaysain7375
@lakshaysain7375 1 month ago
@047nupurai2
@047nupurai2 3 months ago
After adding the embedding layer, my model summary shows the output shape as a question mark and the total params and trainable params as zero. Why is that happening? Any specific reason or solution for that?
@garvsekahohumhinduhain6085
@garvsekahohumhinduhain6085 4 months ago
Thank you sir
@oindriladas5807
@oindriladas5807 1 year ago
Sir, when will you upload the next NLP classes on Transformers and BERT?
@irshadali3515
@irshadali3515 1 year ago
Sir, please make a video on character-level LSTM
@anirudh7150
@anirudh7150 2 months ago
How to decide the vocabulary size?
@victorpoudel4358
@victorpoudel4358 2 years ago
You are a 💎
@kirankoshy209
@kirankoshy209 2 years ago
So, it's not necessary to use Word2Vec or Glove for embedding, right?
@geekyprogrammer4831
@geekyprogrammer4831 2 years ago
It's been almost 2 weeks, yet no new content. You said this would go on for a month.
@chinnibngrm272
@chinnibngrm272 11 months ago
Hi sir, thanks for the wonderful NLP sessions. I just want to know how I can generate word embeddings for code-mixed data (like Telugu-English code-mixed), because we can't directly use pre-trained Word2Vec or FastText models for code-mixed data. So if I train my data using this embedding layer, can I get an accurate numeric-vector representation of my words? If anyone knows about this, please share your thoughts here!
@__MahendranD
@__MahendranD 1 year ago
Does this work for other languages, like Tamil text, etc.?
@shopinghaul___
@shopinghaul___ 2 years ago
@Afeez, drop it here... please, if you can
@litonpaul6133
@litonpaul6133 2 years ago
Sir, when will you take the next session?
@thepresistence5935
@thepresistence5935 2 years ago
Assignment solution:
--------------------code---------------------------
# imports
import tensorflow as tf
from tensorflow.keras.layers import Embedding
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.text import one_hot
from tensorflow.keras.preprocessing.sequence import pad_sequences

### Assignment
sent = ["The world is a better place",
        "Marvel series is my favourite movie",
        "I like DC movies",
        "the cat is eating the food",
        "Tom and Jerry is my favourite movie",
        "Python is my favourite programming language"]

vocabulary_size = 300   # total vocabulary size
sentence_length = 20    # padded sentence length for the one-hot sequences
max_length = 10         # embedding vector length (feature dimensions)

# convert each sentence to a one-hot (index) representation :)
one_hot_assignment = [one_hot(words, vocabulary_size) for words in sent]

# pad the index sequences to a fixed length
padded_assignment = pad_sequences(one_hot_assignment, padding='pre',
                                  maxlen=sentence_length)

# build a model with an embedding layer
model = Sequential()
model.add(Embedding(vocabulary_size, max_length, input_length=sentence_length))
model.compile('adam', 'mse')

# let's see the word embeddings for the first sentence
# (predict on the whole batch, then index, so the input keeps its 2-D shape)
print(model.predict(padded_assignment)[0])
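For readers without TensorFlow installed, what pad_sequences(..., padding='pre', maxlen=...) does in the solution above can be sketched in plain Python (an illustrative reimplementation of the default behaviour, not the Keras source):

```python
def pad_pre(seq, maxlen, value=0):
    # Keep only the last `maxlen` items (pad_sequences also truncates from
    # the front by default), then left-pad with `value` up to `maxlen`.
    seq = list(seq)[-maxlen:]
    return [value] * (maxlen - len(seq)) + seq

print(pad_pre([5, 9, 2], 6))  # [0, 0, 0, 5, 9, 2]
```

Pre-padding keeps the real tokens at the end of the sequence, closest to the final LSTM timestep.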
@datasciencegyan5145
@datasciencegyan5145 2 years ago
How do we decide whether to use pre- or post-padding?
@rekha9314
@rekha9314 2 years ago
I'm looking for a job as a data scientist. Can I join the hackathon, and what's the procedure to join, please?
@dswithanand
@dswithanand 2 years ago
Hello Sir, when is the next session?
@shankarbasu9357
@shankarbasu9357 4 months ago
But why have pizza and burger changed?
@muizzkhalak
@muizzkhalak 2 years ago
Next session please
@dovie_thebeauty3449
@dovie_thebeauty3449 2 years ago
Hello sir, any update on the next session?
@siddavatamthirumalareddy973
@siddavatamthirumalareddy973 2 years ago
Hello Krish Naik sir, can you please restart the NLP live sessions?
@khalidal-reemi3361
@khalidal-reemi3361 2 years ago
LIKE LIKE LIKE LIKE 👍👍👍
@poonkodivijay9595
@poonkodivijay9595 2 years ago
Can I get the job guarantee program in data science?
@ravish5387
@ravish5387 2 years ago
Sir, when is next session?
@zainulabideen9758
@zainulabideen9758 2 years ago
Bro, any update on the next session?
@karthiksundaram544
@karthiksundaram544 1 year ago