The video is helpful. Can you please tell me something about the "score" (the confidence score) and how it is calculated? Is there a formula, or is it based on some metric?
@prashun6957 · 2 months ago
I want to get answers from a table in an image. How can I do that?
@flaskapp9885 · 3 years ago
subscribed :)
@khaoyaa · a year ago
Is it possible to implement this as a mobile app?
@alfafa6657 · 2 years ago
How do I input a question directly? I'd also like to know how to connect this model to messaging apps like Telegram or WhatsApp. Do you know how?
@niloozh6098 · 3 years ago
Thank you for your helpful video 😍. How can we recognize that a question is not answerable?
@xuejingfu3861 · 2 years ago
What will happen if the question we ask has nothing to do with the context? And if we ask the same question multiple times, how can we make the program recognize that?
@jamesbriggs · 2 years ago
The answer score will (hopefully) be very low, and it will likely return nonsense. As for recognizing the same question multiple times, I suppose you could keep track of past questions and perform a check for repetition.
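A minimal sketch of both checks suggested above. The threshold value, the set-based repetition tracker, and all function names are my own illustrative assumptions, not from the video:

```python
SCORE_THRESHOLD = 0.1  # assumed cutoff; tune against your own data

seen_questions = set()

def is_repeat(question: str) -> bool:
    """Track past questions and flag exact repeats (after normalizing
    case and whitespace)."""
    normalized = " ".join(question.lower().split())
    if normalized in seen_questions:
        return True
    seen_questions.add(normalized)
    return False

def filter_answer(result: dict, threshold: float = SCORE_THRESHOLD):
    """Treat low-score predictions as 'no answer'.

    `result` is the dict a Hugging Face QA pipeline returns,
    e.g. {'score': 0.03, 'answer': '...', 'start': 5, 'end': 12}.
    """
    return result["answer"] if result["score"] >= threshold else None

# usage with a QA pipeline (not run here):
#   qa = pipeline("question-answering", model="deepset/bert-base-cased-squad2")
#   answer = filter_answer(qa(question=q, context=ctx))
```

Exact-string matching will miss paraphrased repeats; embedding-based similarity would catch those, at the cost of an extra model.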
@dato007 · a year ago
Is there a way to pre-load a context? I.e., I don't want the model to read in the entire context (e.g. a book) every time I want to ask a question.
@imdadood5705 · 3 years ago
I am enrolled in your NLP in Python course. I haven't been actively participating yet, but it is really good!
@jamesbriggs · 3 years ago
hey that's awesome to hear, glad you're enjoying it!
@shanefeng3215 · a year ago
Can you talk a bit about how to further train (fine-tune) the model? Do I just pass the start index and end index as labels to BertForQuestionAnswering? I followed the Hugging Face page's example, but it isn't working: the loss is decreasing, but the accuracy is very low. I calculated accuracy so that a prediction counts as correct only if the predicted start index matches the label start index AND the predicted end index matches the label end index.
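For what it's worth, `BertForQuestionAnswering` does compute its training loss from `start_positions` and `end_positions` labels, so that part is right. A minimal sketch of the exact-match metric described above, over plain lists of token indices (names are illustrative; the predictions would come from `outputs.start_logits.argmax(-1)` and `outputs.end_logits.argmax(-1)`):

```python
def exact_match_accuracy(start_preds, end_preds, start_labels, end_labels):
    """Exact match: a prediction counts as correct only if BOTH the
    predicted start index and the predicted end index equal the labels."""
    correct = sum(
        1
        for sp, ep, sl, el in zip(start_preds, end_preds, start_labels, end_labels)
        if sp == sl and ep == el
    )
    return correct / len(start_labels)

# two examples: the first matches on both ends, the second only on the end
print(exact_match_accuracy([12, 5], [15, 9], [12, 7], [15, 9]))  # → 0.5
```

Note that token-level exact match is a strict metric; SQuAD evaluations usually report an overlap-based F1 alongside it, which may explain accuracy looking low even while the loss falls.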
@charmz973 · 3 years ago
I'm just enjoying the channel.
@jamesbriggs · 3 years ago
Haha that's awesome, reading all your comments now!
@charmz973 · 3 years ago
@jamesbriggs We can't thank you enough for this gold mine.
@mariiii97 · 3 years ago
Great video! Do you know how to increase the number of words in the answer? Thanks :)
@jamesbriggs · 3 years ago
It depends on how the model has been pretrained and fine-tuned. You would need to fine-tune the model on your own Q&A dataset (with longer answers) for it to start outputting longer answers, which you'll need the transformers train_qa.py script for. This guide will help too: huggingface.co/transformers/custom_datasets.html#qa-squad. Good luck!
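For reference, that guide expects SQuAD-style training data: each example pairs a context with a question and a character-level answer span. The sample text below is purely illustrative:

```python
# One SQuAD-style training example (the text is made up for illustration):
example = {
    "context": "The Amazon rainforest covers much of the Amazon basin of South America.",
    "question": "What does the Amazon rainforest cover?",
    "answers": {
        "text": ["much of the Amazon basin of South America"],
        "answer_start": [29],  # character offset of the answer in the context
    },
}

# sanity check: the recorded offset must recover the answer text exactly
start = example["answers"]["answer_start"][0]
text = example["answers"]["text"][0]
assert example["context"][start:start + len(text)] == text
```

Mismatched `answer_start` offsets are a common source of silently bad fine-tuning, so a check like the one above is worth running over a whole dataset.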
@abhishekprakash9803 · 2 years ago
@jamesbriggs I always get stuck generating long answers. I saw your Pinecone web pages and read about the abstractive model, but it only generates long answers for SQuAD datasets. How can I generate long answers for my own documents? Long-answer generation is so challenging; I even tried searching the internet. I hope you can help me.
@meylyssa3666 · 3 years ago
Great introduction to Q&A!
@jamesbriggs · 3 years ago
Happy you liked it :)
@mamunurrahman5341 · a year ago
This video is so simple to understand and so helpful for my project idea.
@user-wr4yl7tx3w · a year ago
Video idea: what are the challenges of running transformers at scale?
@siamakshams1923 · a year ago
Excellent work. Even with some minor changes needed to the HF code, I found the tutorial very helpful. Thank you.
@vitocorleone1991 · 3 years ago
Thank you!
@shakshuka94 · 2 years ago
What if we want long answers? For example, if we need to extract all preferred qualifications from a job-description context, what should we do? I tried this, but it gives a very short answer and doesn't pull all the details or requirements. This is just one example. Any suggestions?
@jamesbriggs · 2 years ago
You can try either pulling just the context, e.g. a doc_store -> retriever pipeline, or you can look into long-form question answering (LFQA), which uses a doc_store -> retriever -> generator pipeline, where the generator is trained to produce multi-sentence answers. You should be able to find info on how to do both of these online; I'll also be publishing a video on LFQA around Tuesday next week.
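A toy sketch of that doc_store -> retriever -> generator shape. Both components here are trivial stand-ins I wrote for illustration; a real system would use a dense retriever (e.g. via Haystack or sentence-transformers) and a seq2seq generator such as BART:

```python
# Toy document store: a real one would hold thousands of passages.
doc_store = [
    "BERT is a transformer model pretrained with masked language modeling.",
    "Extractive QA models predict start and end token positions in a context.",
    "LFQA generators produce multi-sentence, free-form answers.",
]

def retrieve(question, docs, top_k=2):
    """Stand-in retriever: rank documents by crude word overlap with
    the question (a real pipeline would use dense embeddings)."""
    q_words = set(question.lower().split())
    return sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))[:top_k]

def generate(question, contexts):
    """Stub generator: a real LFQA system feeds the question plus the
    retrieved contexts into a seq2seq model trained to write
    multi-sentence answers."""
    return f"Answer to '{question}', grounded in: {' | '.join(contexts)}"

contexts = retrieve("what do lfqa generators produce", doc_store)
print(generate("what do lfqa generators produce", contexts))
```

The point is only the data flow: the document store narrows the search space, the retriever picks the relevant passages, and the generator writes a long answer instead of extracting a short span.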