BERTScore Explained in 5 minutes (5:45)
Comments
@zerofive3699 25 days ago
👍
@marvinacklin792 1 month ago
What language is this in?
@AboniaSojasingarayar 1 month ago
We used Python to create this LLM app.
@Bumbblyfestyle 1 month ago
Very useful
@AboniaSojasingarayar 1 month ago
Glad it was helpful :)
@johannes7856 1 month ago
Nice tutorial, thanks. 😊
@AboniaSojasingarayar 1 month ago
Thank you so much! 😊 I’m glad you found the tutorial helpful!
@johannes7856 1 month ago
@AboniaSojasingarayar Do you know if there is a tool that can convert the annotated JSON from the AnyLabeling tool to the YOLO format?
@AboniaSojasingarayar 1 month ago
@johannes7856 Hi Johannes, you may try the following library: github.com/rooneysh/Labelme2YOLO. If that doesn't work, you can convert any labeling JSON to COCO JSON and then convert that to YOLO using the library above. Hope this helps.
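For anyone curious what that conversion actually does, here is a minimal sketch, assuming the usual labelme-style fields (`imageWidth`, `imageHeight`, `shapes` with `label` and `points`) that AnyLabeling also emits; the Labelme2YOLO library handles many more cases, so treat this only as an illustration of the format change:

```python
def labelme_to_yolo(ann, class_ids):
    """Convert one labelme-style annotation dict into YOLO txt lines.

    YOLO format per object: "class_id x_center y_center width height",
    with all four coordinates normalized to [0, 1] by the image size.
    """
    w, h = ann["imageWidth"], ann["imageHeight"]
    lines = []
    for shape in ann["shapes"]:
        xs = [p[0] for p in shape["points"]]
        ys = [p[1] for p in shape["points"]]
        x_min, x_max = min(xs), max(xs)
        y_min, y_max = min(ys), max(ys)
        cx = (x_min + x_max) / 2 / w      # normalized box center x
        cy = (y_min + y_max) / 2 / h      # normalized box center y
        bw = (x_max - x_min) / w          # normalized box width
        bh = (y_max - y_min) / h          # normalized box height
        cls = class_ids[shape["label"]]
        lines.append(f"{cls} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
    return lines

# Hypothetical annotation: one rectangle labeled "cat" in a 100x200 image.
ann = {
    "imageWidth": 100, "imageHeight": 200,
    "shapes": [{"label": "cat", "points": [[10, 20], [50, 20], [50, 100], [10, 100]]}],
}
print(labelme_to_yolo(ann, {"cat": 0}))  # → ['0 0.300000 0.300000 0.400000 0.400000']
```

Polygons are reduced to their bounding box here; segmentation-style YOLO labels would instead keep the normalized polygon points.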
@DevSingh-v2h 1 month ago
Can you please share the Colab notebook?
@AboniaSojasingarayar 1 month ago
Sure, here it is: gist.github.com/Abonia1/fc442374e1c20c86db8effbf95d93eb6
@khlifimohamedrayen1303 1 month ago
Thank you very much for this tutorial! I was having many problems running the Ollama server on Colab without colabxterm... You're such a lifesaver!
@AboniaSojasingarayar 1 month ago
You are most welcome! Glad it helped.
@Bumbblyfestyle 1 month ago
Good info 😊
@AboniaSojasingarayar 1 month ago
Glad it helped 🙂
@mohamadadhikasuryahaidar7652 1 month ago
Thanks for the tutorial.
@AboniaSojasingarayar 1 month ago
Happy to help
@anandrajgt3602 1 month ago
Please post a video about GitHub Actions.
@AboniaSojasingarayar 1 month ago
Sure! Thanks for your suggestion.
@Nabeel27 2 months ago
I get this error: Runtime.ImportModuleError: Unable to import module 'lambda_function': Error importing numpy: you should not try to import numpy from its source directory; please exit the numpy source tree, and relaunch your python interpreter from there. I followed all the steps in your video.
@Nabeel27 2 months ago
Looks like I had to set up the Lambda as arm64 and the layer (created with Docker on a Mac) also as arm64. It also requires Bedrock setup and requesting access to a Llama model. Llama 2 is no longer available; you have to request Llama 3 8B or something else.
@AboniaSojasingarayar 2 months ago
Hello Nabeel, are you still facing the above issue?
@Nabeel27 2 months ago
@AboniaSojasingarayar Thank you so much for following up! The error I am getting now is this: "errorMessage": "Error raised by bedrock service: An error occurred (AccessDeniedException) when calling the InvokeModel operation: User: arn:aws:sts::701934491353:assumed-role/test_demo-role-sfu6wu6d/test_demo is not authorized to perform: bedrock:InvokeModel on resource: arn:aws:bedrock:us-east-1::foundation-model/meta.llama3-8b-instruct-v1:0 because no identity-based policy allows the bedrock:InvokeModel action"
@Nabeel27 2 months ago
@AboniaSojasingarayar I was able to solve it. I got permission to use Llama 3 and also had to update the role permissions to use Bedrock.
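That AccessDeniedException means the Lambda execution role has no identity-based policy allowing bedrock:InvokeModel. A statement along these lines attached to the role would grant it (the resource ARN here is the one from the error message; scope it to whichever model you actually invoke):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:InvokeModel",
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/meta.llama3-8b-instruct-v1:0"
    }
  ]
}
```

Note that model access must also be granted separately in the Bedrock console, as mentioned above; the IAM policy alone is not enough.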
@AboniaSojasingarayar 2 months ago
@Nabeel27 Great 🎉
@Bumbblyfestyle 2 months ago
@zerofive3699 2 months ago
Awesome, Abo, keep up the good work.
@AboniaSojasingarayar 2 months ago
Thanks!
@enia123 2 months ago
Thank you! I was studying something related, but my computer's performance was very poor due to lack of money. I had a problem with Ollama not working in Colab, but it was resolved! Thank you. I would like to test a model created in Colab. Is there a way to temporarily run it as a web service?
@AboniaSojasingarayar 2 months ago
Most welcome, and glad to hear it finally worked.
1. You can use the Flask API with the ColabCode package to serve your model via an ngrok temporary URL: github.com/abhishekkrthakur/colabcode
2. Another way is Flask with flask-ngrok: pypi.org/project/flask-ngrok/ and pypi.org/project/Flask-API/
Sample code for reference:

from flask import Flask
from flask_ngrok import run_with_ngrok

app = Flask(__name__)
run_with_ngrok(app)  # exposes the app on an ngrok URL when run in Colab

@app.route("/")
def home():
    return "Hello World"

app.run()

If needed, I'll try to do a tutorial on this topic in the future. Hope this helps :)
@enia123 2 months ago
@AboniaSojasingarayar Thank you! Have a nice day~
@tapiaomars 2 months ago
Hi, is it possible to integrate DynamoDB to store and retrieve the context of the last user prompts in the Lambda function?
@AboniaSojasingarayar 2 months ago
Hello, yes: DynamoDB, S3, or in-memory storage, depending on requirements. Each piece of context is associated with a user ID and a conversation ID, ensuring that contexts are isolated per user. Hope this helps.
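A minimal sketch of that idea, using an in-memory dict as a stand-in for the table (with DynamoDB you would use a composite key — user_id as partition key, conversation_id as sort key — and boto3 put_item/query calls instead; the class and parameter names here are hypothetical):

```python
from collections import defaultdict

class ContextStore:
    """In-memory stand-in for a conversation-context table.

    Contexts are keyed by (user_id, conversation_id), so different users
    and different conversations never see each other's history.
    """
    def __init__(self, max_turns=10):
        self.max_turns = max_turns
        self._store = defaultdict(list)

    def append(self, user_id, conversation_id, role, text):
        key = (user_id, conversation_id)
        self._store[key].append({"role": role, "text": text})
        # Keep only the most recent turns to bound the prompt size.
        self._store[key] = self._store[key][-self.max_turns:]

    def history(self, user_id, conversation_id):
        return list(self._store[(user_id, conversation_id)])

store = ContextStore(max_turns=2)
store.append("u1", "c1", "user", "Hi")
store.append("u1", "c1", "assistant", "Hello!")
store.append("u1", "c1", "user", "What is RAG?")
print(len(store.history("u1", "c1")))  # 2: the oldest turn was evicted
print(store.history("u2", "c1"))       # []: contexts are isolated per user
```

In a real Lambda, the handler would load the history for the incoming (user, conversation) pair, prepend it to the prompt, and write the new turn back after the model responds.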
@tapiaomars 2 months ago
@AboniaSojasingarayar Thanks, I'll try it and let you know how it goes.
@ziaullah2115 2 months ago
Please create a video on breast cancer detection with the YOLOv10 model.
@AboniaSojasingarayar 1 month ago
Absolutely, I’ll work on getting it ready shortly. If there are specific areas you want me to concentrate on, just let me know! Also, do you have any custom dataset you'd like to use for this tutorial? Thanks
@iroudayaradjcalingarayar317 2 months ago
Super
@AboniaSojasingarayar 2 months ago
Glad it helped
@VenkatesanVenkat-fd4hg 2 months ago
Great discussion....
@AboniaSojasingarayar 2 months ago
Thank you Venkatesan. I'm glad you enjoyed the discussion.
@mayshowgunmore5269 2 months ago
Hi, I'm trying to run these processes, but at 12:36 in the video, how do you create and execute the file named ".env"? It always shows an error; I can't figure it out. Thanks!
@AboniaSojasingarayar 2 months ago
Hello, you can use local VS Code or any IDE to create the .env file: New File -> name it .env, and add your API key as follows: ROBOFLOW_API_KEY=your_api_key. Once done, drag and drop it into Colab. Hope this helps.
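For reference, the .env format is just KEY=VALUE lines; in practice the python-dotenv package loads it, but a minimal sketch of what that loading amounts to (the function name here is made up):

```python
import os

def load_dotenv_file(path=".env"):
    """Minimal .env loader: read KEY=VALUE lines into os.environ.

    Skips blanks and comment lines; does not override variables that
    are already set (mirroring python-dotenv's default behavior).
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Create the file the way the reply describes, then load it.
with open(".env", "w") as f:
    f.write("ROBOFLOW_API_KEY=your_api_key\n")
load_dotenv_file()
print(os.environ["ROBOFLOW_API_KEY"])  # → your_api_key
```

If the error came from trying to "execute" .env, note that it is not a script: it is only read by a loader like this (or python-dotenv) at startup.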
@VenkatesanVenkat-fd4hg 3 months ago
Great share, insightful as always... Are you using OBS Studio for recording?
@AboniaSojasingarayar 3 months ago
Glad it helped. Not really! Just using the built-in recording and iMovie to edit.
@alvaroaraujo7945 3 months ago
Hey, Abonia. Thanks for the amazing content. I just had one issue: when executing the 'map_reduce_outputs' function, I got ConnectionRefusedError: [Errno 61]. Hope someone knows what it is.
@AboniaSojasingarayar 3 months ago
@alvaroaraujo7945 Hello, thanks for your kind words. It may be related to your Ollama server. Are you sure Ollama is running?
@machinelearningzone.6230 3 months ago
Nice explanation and walkthrough. Could you provide a link to the code repo for this exercise?
@AboniaSojasingarayar 3 months ago
Glad it helped. As mentioned in the description, you can find the code and an explanation in this article walkthrough: medium.com/@abonia/deploying-a-rag-application-in-aws-lambda-using-docker-and-ecr-08e246a7c515
@zerofive3699 3 months ago
It is very helpful, ma'am, and useful to apply.
@zerofive3699 3 months ago
Nice video, ma'am.
@user-wr4yl7tx3w 3 months ago
Can we rely on open source only, without using Amazon? What if it is just prototyping?
@AboniaSojasingarayar 3 months ago
Yes, we can use open source completely.
@World-um5vo 4 months ago
Hi, thank you for the video. If we want to fine-tune the model and evaluate it on videos, how do we do that?
@AboniaSojasingarayar 4 months ago
You're most welcome. Here I introduced basic usage of the SAM 2 models. If you want to evaluate your fine-tuned model, you may try the mean IoU score over a set of predictions and targets, or Dice, precision, recall, and mAP.
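The IoU metric mentioned above is simple to compute for axis-aligned boxes; a minimal sketch (mean IoU is just this averaged over matched prediction/target pairs, and Dice can be derived from IoU as 2·IoU/(1+IoU)):

```python
def box_iou(a, b):
    """Intersection-over-Union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 2x2 boxes overlapping in a 1x1 square: IoU = 1 / (4 + 4 - 1) = 1/7.
print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))
```

For segmentation masks the same formula applies with pixel counts: intersection and union of the binary masks.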
@Basant5911 4 months ago
Streaming doesn't work this way. I wrote the code from scratch without LangChain.
@AboniaSojasingarayar 4 months ago
@Basant5911 Can you share your code base and the error or issue you are facing, please?
@DenisRothman 4 months ago
❤Thank you for this fantastic educational video on my book!!! 🎉
@AboniaSojasingarayar 4 months ago
@DenisRothman Thank you for your kind words. I'm grateful for the opportunity to review the book and share my thoughts. Your work is well-deserved and truly one of the most insightful books I've read.
@MohamedMohamed-xf7wh 4 months ago
You used a webpage as the data source for the RAG app. What if I add a PDF file instead of the webpage as the data source? How can I deploy it in AWS Lambda?
@AboniaSojasingarayar 4 months ago
To build RAG with PDFs in the AWS ecosystem, you need to upload the PDF to an S3 bucket, extract text from the PDF, and then integrate this data with your RAG application.
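The "integrate this data" step usually means chunking the extracted text before embedding it into the vector store. A minimal sketch of that chunking (assuming the text has already been extracted, e.g. with a PDF library; the function name is illustrative):

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split extracted PDF text into overlapping chunks for embedding.

    The overlap keeps sentences that straddle a chunk boundary visible
    in both neighboring chunks, which helps retrieval.
    """
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
    return chunks

print(len(chunk_text("x" * 1000)))  # → 3 (chunks start at 0, 450, 900)
```

Each chunk is then embedded and stored (FAISS, Chroma, etc.), and the rest of the RAG pipeline is unchanged from the webpage case.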
@MohamedMohamed-xf7wh 4 months ago
@AboniaSojasingarayar Can I locally extract text from the PDF and build the vector DB locally using VS Code, and then build the Docker image and push it to AWS ECR like you did in the video?
@AboniaSojasingarayar 4 months ago
@MohamedMohamed-xf7wh Yes, you can locally extract text from PDF files, build a vector database, and then prepare your application for deployment on AWS Lambda by building a Docker image and pushing it to ECR. Which vector DB are you using? Is it accessible via an API?
@MohamedMohamed-xf7wh 4 months ago
@AboniaSojasingarayar FAISS. What is the problem with the vector DB?
@AboniaSojasingarayar 4 months ago
@MohamedMohamed-xf7wh Great!
@htayaung3812 4 months ago
Really nice! Keep going. You deserve more subscribers.
@AboniaSojasingarayar 4 months ago
@htayaung3812 Thank you so much for your support! I'm working to bring more tutorials.
@raulpradodantas9386 5 months ago
Saved my life creating Lambda layers... I had been trying for days. Thanks!
@AboniaSojasingarayar 4 months ago
@raulpradodantas9386 Glad to hear that! You're most welcome.
@SidSid-kp4ij 5 months ago
Hi, I'm trying to run my trained model with an interface to a webcam but getting an error. Can you share any insight on it?
@AboniaSojasingarayar 5 months ago
@SidSid-kp4ij Hello Sid, sure. Can you post your error message here, please?
@gk4457 6 months ago
All the best
@RajuSubramaniam-ho6kd 6 months ago
Thanks for the video. Very useful for me as I am new to AWS Lambda and Bedrock. Can you please upload the Lambda function source code? Thanks again!
@AboniaSojasingarayar 6 months ago
Glad it helped. Sure, you can find the code and the complete article on this topic in the description. In any case, here is the link to the code: medium.com/@abonia/build-and-deploy-llm-application-in-aws-cca46c662749
@jannatbellouchi3908 6 months ago
Which version of BERT is used in BERTScore?
@AboniaSojasingarayar 6 months ago
Since we are using lang="en", it uses roberta-large. We can also customize it using the model_type param of the BERTScorer class. For the default models for other languages, see: github.com/Tiiiger/bert_score/blob/master/bert_score/utils.py
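Whatever encoder is chosen, the scoring step works the same way: each token embedding in one sentence is greedily matched to its most similar token in the other. A toy sketch of that matching on hand-made 2-D vectors (real BERTScore gets these vectors from the model and also applies optional IDF weighting and baseline rescaling, which are omitted here):

```python
import math

def bertscore_f1(cand_vecs, ref_vecs):
    """Toy BERTScore F1: greedy cosine matching between token embeddings."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.hypot(*u) * math.hypot(*v))
    # Recall: each reference token matched to its best candidate token.
    recall = sum(max(cos(r, c) for c in cand_vecs) for r in ref_vecs) / len(ref_vecs)
    # Precision: each candidate token matched to its best reference token.
    precision = sum(max(cos(c, r) for r in ref_vecs) for c in cand_vecs) / len(cand_vecs)
    return 2 * precision * recall / (precision + recall)

# Identical token embeddings give a perfect score.
print(bertscore_f1([(1, 0), (0, 1)], [(1, 0), (0, 1)]))  # → 1.0
```

This is why the choice of encoder (roberta-large vs. a multilingual model) matters: it changes the embeddings being matched, not the matching itself.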
@jagadeeshprasad5252 6 months ago
Hey, great content. Please continue to do more videos and real-time projects. Thanks!
@AboniaSojasingarayar 6 months ago
Glad it helped. Sure, I am already on it.
@zerofive3699 6 months ago
Awesome, ma'am, very easy to understand.
@NJ-hn8yu 6 months ago
Hi Abonia, thanks for sharing. I am facing this error; can you please tell me how to resolve it? "errorMessage": "Unable to import module 'lambda_function': No module named 'langchain_community'"
@AboniaSojasingarayar 6 months ago
Hello, you are most welcome. You must prepare your ZIP file with all the necessary packages. You can refer to the instructions starting at 09:04.
@humayounkhan7946 6 months ago
Hi Abonia, thanks for the thorough guide, but I'm a bit confused about the lambda_layer.zip file. Why did you have to create it through Docker? Is there an easier way to provide the dependencies in a zip file without going through Docker? Thanks in advance!
@AboniaSojasingarayar 6 months ago
Hi Humayoun Khan, yes we can, but Docker facilitates the inclusion of the runtime interface client for Python, making the image compatible with AWS Lambda. It also ensures a consistent and reproducible environment for the Lambda function's dependencies, which is crucial for avoiding discrepancies between development, testing, and production environments. Hope this helps.
@evellynnicolemachadorosa2666 7 months ago
Hello! Thanks for the video. I am from Brazil. What would you recommend for large documents, averaging 150 pages? I tried map-reduce, but the inference time was 40 minutes. Are there any tips for these very long documents?
@AboniaSojasingarayar 7 months ago
Thanks for your kind words, and glad this helped. Implement a strategy that combines semantic chunking with K-means clustering to address the model's contextual limitations. By employing efficient clustering techniques, we can extract key passages effectively, reducing the overhead of processing large volumes of text. This approach not only significantly lowers costs by minimizing the number of tokens processed, but also mitigates the recency and primacy effects inherent in LLMs, ensuring balanced consideration of all text segments.
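A minimal sketch of the clustering step described above, assuming chunk embeddings have already been computed (here tiny hand-made vectors; the function name is illustrative, and a library k-means such as scikit-learn's would normally replace this loop):

```python
import numpy as np

def pick_representative_chunks(embeddings, k, iters=20, seed=0):
    """Cluster chunk embeddings with k-means and keep one chunk per cluster.

    Returns the index of the chunk closest to each centroid; summarizing
    only those k chunks bounds the tokens the LLM has to process.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(embeddings, dtype=float)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign every chunk to its nearest centroid, then recenter.
        dists = np.linalg.norm(X[:, None] - centroids[None, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    dists = np.linalg.norm(X[:, None] - centroids[None, :], axis=2)
    labels = dists.argmin(axis=1)
    picks = []
    for j in range(k):
        members = np.where(labels == j)[0]
        if len(members):
            picks.append(int(members[dists[members, j].argmin()]))
    return sorted(picks)

# Four chunk embeddings forming two clear topic clusters.
picks = pick_representative_chunks([[0, 0], [0.1, 0], [5, 5], [5.1, 5]], k=2)
print(picks)  # one chunk index from each cluster
```

Only the picked chunks are then sent to the summarizer, which is what cuts the 40-minute map-reduce pass down.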
@VirtualMachine-d8x 2 months ago
@AboniaSojasingarayar The video was great and very useful. Can you make a short video on this clustering method using embeddings?
@AboniaSojasingarayar 2 months ago
@VirtualMachine-d8x Sure, will do. Happy to hear from you again. Thanks for the feedback.
@Coff03 7 months ago
Did you use an OpenAI API key here?
@AboniaSojasingarayar 7 months ago
Here we use the open-source Mixtral from Ollama, but yes, we can use OpenAI models as well.
@MishelMichel 7 months ago
Very informative, and your voice is very clear.
@AboniaSojasingarayar 7 months ago
Glad it helped!
@fkeb37e9w0 7 months ago
Can we use OpenAI and ChromaDB on AWS?
@AboniaSojasingarayar 7 months ago
Yes, we can! In the tutorial below I demonstrate how to create and deploy a Lambda layer via a container for larger dependencies: kzbin.info/www/bejne/nZrGpJVvpZyooJYsi=F_X7-6YCAb0Kz3Jc
@fkeb37e9w0 7 months ago
@AboniaSojasingarayar Yes, but can this be done without EKS or containers?
@AboniaSojasingarayar 7 months ago
Yes! You can try creating a custom Lambda layer. If you face issues, use only the required libraries and remove any unnecessary dependencies from your zip file. Hope this helps.
@vijaygandhi7313 7 months ago
In the abstractive summarization use case, a lot of focus is usually given to the LLMs being used and their performance. Limitations of LLMs, including context length, and ways to overcome them are often overlooked. It's important to make sure our application is scalable when dealing with large document sizes. Thank you for this great and insightful video.
@AboniaSojasingarayar 7 months ago
Thank you, Vijay Gandhi, for your insightful comment! You've raised an excellent point about the limitations of LLMs in abstractive summarization, especially their context length and scalability when dealing with large documents. Indeed, one of the significant challenges in using LLMs for abstractive summarization is their inherent limitation in processing long texts due to the maximum token limit imposed by these models. This constraint is particularly problematic when summarizing lengthy documents or articles, where the full context may not fit within the model's capacity.
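The standard workaround discussed in this thread is map-reduce summarization: split the document to fit the context window, summarize each piece, then summarize the combined partial summaries. A skeleton of that flow, where `summarize` is any callable standing in for the LLM call (here a toy first-sentence "summarizer" so the sketch runs without a model):

```python
def map_reduce_summarize(document, summarize, max_chars=1000):
    """Map-reduce summarization skeleton for texts beyond an LLM's context.

    Map: summarize each piece independently.
    Reduce: summarize the concatenated partial summaries.
    """
    pieces = [document[i:i + max_chars] for i in range(0, len(document), max_chars)]
    partials = [summarize(p) for p in pieces]   # map step
    return summarize(" ".join(partials))        # reduce step

def first_sentence(t):
    # Toy "summarizer": keep only the first sentence; stands in for an LLM call.
    return t.split(".")[0].strip() + "."

doc = "Alpha is first. More detail. Beta is second. Even more."
print(map_reduce_summarize(doc, first_sentence, max_chars=30))  # → Alpha is first.
```

Token-based splitting (and the clustering approach mentioned earlier in the thread) refines this, but the map/reduce shape stays the same.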
@zerofive3699 7 months ago
Really useful info, ma'am. Keep up the good work.
@AboniaSojasingarayar 7 months ago
It's my pleasure.
@Bumbblyfestyle 7 months ago
👍👍
@akshaykotawar5816 8 months ago
Very informative, thanks for uploading.
@AboniaSojasingarayar 8 months ago
Glad it helped!
@akshaykotawar5816 8 months ago
Nice video
@AboniaSojasingarayar 8 months ago
Thanks, Akshay. Glad it helped!
@MishelMichel 8 months ago
Nice, ma'am 😍
@AboniaSojasingarayar 8 months ago
Glad it helped 😊
@zerofive3699 8 months ago
Very nice video, I learnt a lot.
@AboniaSojasingarayar 8 months ago
Thank you! Glad it helped🤓
@appikumar-d8l 8 months ago
Please do more on AWS Bedrock for developing RAG applications... your explanation is simple and effective. Stay motivated and upload more videos about LLMs!
@AboniaSojasingarayar 8 months ago
Thanks for your kind words! Sure, I will do it.
@akshaykotawar5816 8 months ago
@AboniaSojasingarayar Yes, that's the same thing I want.
@AboniaSojasingarayar 7 months ago
Here is the tutorial link for deploying a Retrieval-Augmented Generation (RAG) app in AWS: kzbin.info/www/bejne/nZrGpJVvpZyooJY