Thank you so much! 😊 I’m glad you found the tutorial helpful!
@johannes7856 (a month ago)
@@AboniaSojasingarayar Do you know if there is a tool that can convert the annotated JSON from the AnyLabeling tool to the YOLO format?
@AboniaSojasingarayar (a month ago)
@johannes7856 Hi Johannes, you may try the following library: github.com/rooneysh/Labelme2YOLO. If not, you can convert any labeling JSON to COCO JSON and then convert that to YOLO with the library above. Hope this helps.
@DevSingh-v2h (a month ago)
Can you please share the Colab notebook?
@AboniaSojasingarayar (a month ago)
Sure, here it is: gist.github.com/Abonia1/fc442374e1c20c86db8effbf95d93eb6
@khlifimohamedrayen1303 (a month ago)
Thank you very much for this tutorial! I was having many problems running the Ollama server on Colab without colabxterm... You're such a lifesaver!
@AboniaSojasingarayar (a month ago)
You are most welcome! Glad it helped.
@Bumbblyfestyle (a month ago)
Good info 😊
@AboniaSojasingarayar (a month ago)
Glad it helped 🙂
@mohamadadhikasuryahaidar7652 (a month ago)
Thanks for the tutorial!
@AboniaSojasingarayar (a month ago)
Happy to help
@anandrajgt3602 (a month ago)
Please post a video regarding GitHub Actions.
@AboniaSojasingarayar (a month ago)
Sure! Thanks for your suggestion.
@Nabeel27 (2 months ago)
I get this error: Runtime.ImportModuleError: Unable to import module 'lambda_function': Error importing numpy: you should not try to import numpy from its source directory; please exit the numpy source tree, and relaunch your python interpreter from there. I followed all the steps in your video.
@Nabeel27 (2 months ago)
Looks like I had to set up the Lambda as arm64 and the layer (created with Docker on a Mac) as arm64 as well. Next, it also requires Bedrock setup and an access request for a Llama model. Llama 2 is no longer available; you have to request Llama 3 8B or something else.
@AboniaSojasingarayar (2 months ago)
Hello Nabeel, are you still facing the above issue?
@Nabeel27 (2 months ago)
@@AboniaSojasingarayar Thank you so much for following up! The error I am getting now is this: "errorMessage": "Error raised by bedrock service: An error occurred (AccessDeniedException) when calling the InvokeModel operation: User: arn:aws:sts::701934491353:assumed-role/test_demo-role-sfu6wu6d/test_demo is not authorized to perform: bedrock:InvokeModel on resource: arn:aws:bedrock:us-east-1::foundation-model/meta.llama3-8b-instruct-v1:0 because no identity-based policy allows the bedrock:InvokeModel action"
@Nabeel27 (2 months ago)
@@AboniaSojasingarayar I was able to solve it. I got permission to use Llama 3 and also had to update the role permissions to use Bedrock.
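The fix described above, letting the Lambda execution role call Bedrock, usually comes down to attaching an identity-based policy like the following to the role (a minimal sketch: the model ARN is taken from the AccessDeniedException message above and should match whichever foundation model you were granted access to):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:InvokeModel",
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/meta.llama3-8b-instruct-v1:0"
    }
  ]
}
```

Model access itself is requested separately in the Bedrock console, as noted in the reply.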
@AboniaSojasingarayar (2 months ago)
@@Nabeel27 Great 🎉
@Bumbblyfestyle (2 months ago)
❤
@zerofive3699 (2 months ago)
Awesome, Abo, keep up the good work!
@AboniaSojasingarayar (2 months ago)
Thanks!
@enia123 (2 months ago)
Thank you! I was studying something related, but my computer's performance was very poor due to lack of money. I had a problem with Ollama not working in Colab, but it was resolved! Thank you. I would like to test a model created in Colab. Is there a way to temporarily run it as a web service?
@AboniaSojasingarayar (2 months ago)
Most welcome. Glad to hear it finally worked.
1. You can use a Flask API with the ColabCode package to serve your model via an endpoint on a temporary ngrok URL: github.com/abhishekkrthakur/colabcode
2. Another way is using Flask with flask-ngrok:
pypi.org/project/flask-ngrok/
pypi.org/project/Flask-API/
Sample code for reference:

from flask import Flask
from flask_ngrok import run_with_ngrok

app = Flask(__name__)
run_with_ngrok(app)  # opens an ngrok tunnel when the app starts

@app.route("/")
def home():
    return "Hello World"

app.run()

If needed, I'll try to do a tutorial on this topic in the future. Hope this helps :)
@enia123 (2 months ago)
@@AboniaSojasingarayar Thank you! Have a nice day~
@tapiaomars (2 months ago)
Hi, is it possible to integrate DynamoDB to store and retrieve the context of the last user prompts in the Lambda function?
@AboniaSojasingarayar (2 months ago)
Hello, yes: DynamoDB, S3, or in-memory storage, depending on your requirements. Each piece of context is associated with a user ID, so contexts are isolated per user, together with a conversation ID. Hope this helps.
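A minimal sketch of the per-user, per-conversation isolation described above, using an in-memory dict (the `ContextStore` class and its methods are illustrative, not from the video; in production the same `(user_id, conversation_id)` key scheme maps onto a DynamoDB table with a composite primary key):

```python
from collections import defaultdict

class ContextStore:
    """Illustrative in-memory store; swap the dict for a DynamoDB
    table keyed on (user_id, conversation_id) in production."""

    def __init__(self, max_turns=10):
        self.max_turns = max_turns
        self._store = defaultdict(list)  # (user_id, conversation_id) -> turns

    def append(self, user_id, conversation_id, role, text):
        key = (user_id, conversation_id)
        self._store[key].append({"role": role, "text": text})
        # keep only the most recent turns to bound prompt size
        self._store[key] = self._store[key][-self.max_turns:]

    def history(self, user_id, conversation_id):
        return list(self._store[(user_id, conversation_id)])

store = ContextStore(max_turns=2)
store.append("u1", "c1", "user", "Hello")
store.append("u1", "c1", "assistant", "Hi!")
store.append("u1", "c2", "user", "Other thread")
```

Note that a plain in-memory dict does not survive Lambda cold starts, which is exactly why DynamoDB or S3 is the safer choice there.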
@tapiaomars (2 months ago)
@@AboniaSojasingarayar Thanks, I'll try it and let you know how it goes.
@ziaullah2115 (2 months ago)
Please create a video on breast cancer detection with the YOLOv10 model.
@AboniaSojasingarayar (a month ago)
Absolutely, I’ll work on getting it ready shortly. If there are specific areas you want me to concentrate on, just let me know! Also, do you have any custom dataset you'd like to use for this tutorial? Thanks
@iroudayaradjcalingarayar317 (2 months ago)
Super
@AboniaSojasingarayar (2 months ago)
Glad it helped
@VenkatesanVenkat-fd4hg (2 months ago)
Great discussion....
@AboniaSojasingarayar (2 months ago)
Thank you Venkatesan. I'm glad you enjoyed the discussion.
@mayshowgunmore5269 (2 months ago)
Hi, I'm trying to run these processes, but at 12:36 in this video, how do you create and execute the file named ".env"? It always shows an error; I can't figure it out. Thanks!
@AboniaSojasingarayar (2 months ago)
Hello, you can use local VS Code or any IDE to create the .env file: New File -> name it .env, and add your API key as follows:
ROBOFLOW_API_KEY=your_api_key
Once done, drag and drop it into Colab. Hope this helps.
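If the key still is not picked up in Colab, a small hand-rolled loader can confirm the file parses (an illustrative sketch that mirrors what the python-dotenv package does; the file name and variable come from the reply above):

```python
import os

def load_env(path=".env"):
    """Parse simple KEY=VALUE lines and export them to the environment."""
    loaded = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, malformed lines
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip()
            os.environ[key.strip()] = value.strip()
    return loaded

# write a sample .env, then load it
with open(".env", "w") as f:
    f.write("ROBOFLOW_API_KEY=your_api_key\n")

env = load_env(".env")
```

A common cause of the error is saving the file as "env.txt" or ".env.txt"; the name must be exactly ".env".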
@VenkatesanVenkat-fd4hg (3 months ago)
Great share, insightful as always... Are you using OBS Studio for recording? ...by a Senior Data Scientist...
@AboniaSojasingarayar (3 months ago)
Glad it helped. Not really! Just using the built-in screen recording and iMovie to edit.
@alvaroaraujo7945 (3 months ago)
Hey, Abonia. Thanks for the amazing content. I just had one issue, though: on executing the 'map_reduce_outputs' function, I got ConnectionRefusedError: [Errno 61]. Hope someone knows what it is.
@AboniaSojasingarayar (3 months ago)
@@alvaroaraujo7945 Hello, thanks for your kind words. It may be related to your Ollama server. Are you sure Ollama is running?
@machinelearningzone.6230 (3 months ago)
Nice explanation and walkthrough. Could you provide the link to the code repo for this exercise?
@AboniaSojasingarayar (3 months ago)
Glad it helped. As mentioned in the description, you can find the code and explanation in this article walkthrough: medium.com/@abonia/deploying-a-rag-application-in-aws-lambda-using-docker-and-ecr-08e246a7c515
@zerofive3699 (3 months ago)
It is very helpful, ma'am, and useful to apply.
@zerofive3699 (3 months ago)
Nice video, ma'am!
@user-wr4yl7tx3w (3 months ago)
Can we rely on open source only, without using Amazon? What if it is just for prototyping?
@AboniaSojasingarayar (3 months ago)
Yes, we can use open source completely.
@World-um5vo (4 months ago)
Hi, thank you for the video. If we want to fine-tune the model and evaluate it on videos, how do we do that?
@AboniaSojasingarayar (4 months ago)
You're most welcome. Here I have introduced basic usage of the SAM 2 models. If you want to evaluate your fine-tuned model, you may try the mean IoU score over a set of predictions and targets, or DICE, precision, recall, and mAP.
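As a sketch of the mask-level metrics mentioned above (an illustrative implementation over boolean NumPy masks; it is not tied to any particular SAM 2 output format):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union for boolean segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

def dice(pred, target):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0
    return 2 * np.logical_and(pred, target).sum() / total

# toy 2x2 example: the masks overlap on one pixel
p = np.array([[1, 1], [0, 0]])
t = np.array([[1, 0], [0, 0]])
```

Mean IoU over a validation set is then just the average of `iou` across all prediction/target pairs.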
@Basant5911 (4 months ago)
Streaming doesn't work by doing this. I wrote the code from scratch without LangChain.
@AboniaSojasingarayar (4 months ago)
@@Basant5911 Can you share your code base and the error or issue you are currently facing, please?
@DenisRothman (4 months ago)
❤Thank you for this fantastic educational video on my book!!! 🎉
@AboniaSojasingarayar (4 months ago)
@@DenisRothman Thank you for your kind words. I'm grateful for the opportunity to review the book and share my thoughts. The recognition your work receives is well deserved; it's truly one of the most insightful books I've read.
@MohamedMohamed-xf7wh (4 months ago)
You used a webpage as the data source for the RAG app. What if I add a PDF file instead of the webpage as the data source? How can I deploy it on AWS Lambda?
@AboniaSojasingarayar (4 months ago)
To build RAG with PDFs in the AWS ecosystem, you need to follow steps that involve uploading the PDF to an S3 bucket, extracting text from the PDF, and then integrating this data with your RAG application.
@MohamedMohamed-xf7wh (4 months ago)
@@AboniaSojasingarayar Can I locally extract text from the PDF and build the vector DB locally using VS Code, and then build the Docker image and push it to AWS ECR like you did in the video?
@AboniaSojasingarayar (4 months ago)
@@MohamedMohamed-xf7wh Yes, you can locally extract text from PDF files, build a vector database, and then prepare your application for deployment on AWS Lambda by building a Docker image and pushing it to ECR. But which vector DB are you using? Is it accessible via an API?
@MohamedMohamed-xf7wh (4 months ago)
@@AboniaSojasingarayar FAISS. What is the problem with the vector DB?
@AboniaSojasingarayar (4 months ago)
@@MohamedMohamed-xf7wh Great!
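The build-locally, query-at-runtime flow discussed in this thread can be sketched as follows. This is an illustrative pure-NumPy stand-in for a FAISS flat index (cosine similarity over embedded chunks); the `embed` function is a deterministic placeholder, not a real embedding model:

```python
import hashlib
import numpy as np

def embed(text, dim=8):
    """Placeholder embedding: deterministic hash-seeded unit vector.
    Replace with a real sentence-embedding model in practice."""
    seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

class FlatIndex:
    """Minimal stand-in for an inner-product flat index over normalized vectors."""

    def __init__(self):
        self.vectors, self.chunks = [], []

    def add(self, chunk):
        self.chunks.append(chunk)
        self.vectors.append(embed(chunk))

    def search(self, query, k=1):
        # cosine similarity = dot product, since all vectors are normalized
        sims = np.array(self.vectors) @ embed(query)
        top = np.argsort(-sims)[:k]
        return [self.chunks[i] for i in top]

index = FlatIndex()
for chunk in ["Lambda supports container images.",
              "FAISS does similarity search.",
              "ECR stores Docker images."]:
    index.add(chunk)
```

With a real FAISS index you would serialize it at build time, bake it into the Docker image, and load it once per Lambda cold start.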
@htayaung3812 (4 months ago)
Really nice! Keep going; you deserve more subscribers.
@AboniaSojasingarayar (4 months ago)
@@htayaung3812 Thank you so much for your support! I'm working on bringing more tutorials.
@raulpradodantas9386 (5 months ago)
Saved my life creating Lambda layers... I had been trying for days. Thanks!
@AboniaSojasingarayar (4 months ago)
@@raulpradodantas9386 Glad to hear that! You're most welcome.
@SidSid-kp4ij (5 months ago)
Hi, I'm trying to run my trained model with an interface to a webcam, but I'm getting an error. Can you share any insight on it?
@AboniaSojasingarayar (5 months ago)
@@SidSid-kp4ij Hello Sid, sure. Can you post your error message here, please?
@gk4457 (6 months ago)
All the best
@RajuSubramaniam-ho6kd (6 months ago)
Thanks for the video. Very useful for me, as I am new to AWS Lambda and Bedrock. Can you please upload the Lambda function source code? Thanks again!
@AboniaSojasingarayar (6 months ago)
Glad it helped. Sure, you can find the code and the complete article on this topic in the description. In any case, here is the link to the code: medium.com/@abonia/build-and-deploy-llm-application-in-aws-cca46c662749
@jannatbellouchi3908 (6 months ago)
Which version of BERT is used in BERTScore?
@AboniaSojasingarayar (6 months ago)
As we are using lang="en", it uses roberta-large. We can also customize it using the model_type param of the BERTScorer class. For the default models for other languages, see: github.com/Tiiiger/bert_score/blob/master/bert_score/utils.py
@jagadeeshprasad5252 (6 months ago)
Hey, great content. Please continue to do more videos and real-time projects. Thanks!
@AboniaSojasingarayar (6 months ago)
Glad it helped. Sure, I'm already on it.
@zerofive3699 (6 months ago)
Awesome, ma'am, very easy to understand.
@NJ-hn8yu (6 months ago)
Hi Abonia, thanks for sharing. I am facing this error; can you please tell me how to resolve it? "errorMessage": "Unable to import module 'lambda_function': No module named 'langchain_community'"
@AboniaSojasingarayar (6 months ago)
Hello, you are most welcome. You must prepare your ZIP file with all the necessary packages. You can refer to the instructions starting at 09:04.
@humayounkhan7946 (6 months ago)
Hi Abonia, thanks for the thorough guide, but I'm a bit confused by the lambda_layer.zip file. Why did you have to create it through Docker? Is there an easier way to provide the dependencies in a zip file without going through Docker? Thanks in advance!
@AboniaSojasingarayar (6 months ago)
Hi Humayoun Khan, yes we can, but Docker facilitates the inclusion of the runtime interface client for Python, making the image compatible with AWS Lambda. It also ensures a consistent and reproducible environment for the Lambda function's dependencies, which is crucial for avoiding discrepancies between development, testing, and production environments. Hope this helps.
@evellynnicolemachadorosa2666 (7 months ago)
Hello! Thanks for the video. I am from Brazil. What would you recommend for large documents, averaging 150 pages? I tried map-reduce, but the inference time was 40 minutes. Are there any tips for these very long documents?
@AboniaSojasingarayar (7 months ago)
Thanks for your kind words, and glad this helped. Implement a strategy that combines semantic chunking with K-means clustering to address the model's contextual limitations. By employing efficient clustering techniques, we can extract key passages effectively, reducing the overhead of processing large volumes of text. This approach not only significantly lowers costs by minimizing the number of tokens processed, but also mitigates the recency and primacy effects inherent in LLMs, ensuring a balanced consideration of all text segments.
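A rough sketch of the chunk, cluster, and select idea described above. Everything here is illustrative: the hash-seeded `embed` stands in for a real embedding model, and the tiny k-means is only there to show the selection step (cluster the chunk embeddings, then keep the chunk nearest each centroid as a representative passage to summarize):

```python
import hashlib
import numpy as np

def embed(text, dim=16):
    # placeholder: deterministic pseudo-embedding seeded from the text;
    # swap in a real sentence-embedding model in practice
    seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def kmeans(X, k, iters=20, seed=0):
    """Tiny Lloyd's-algorithm k-means, enough for illustration."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def key_passages(chunks, k):
    """Pick the chunk closest to each cluster centroid."""
    X = np.stack([embed(c) for c in chunks])
    centers, _ = kmeans(X, k)
    picked = []
    for c in centers:
        idx = int(np.argmin(((X - c) ** 2).sum(-1)))
        if chunks[idx] not in picked:
            picked.append(chunks[idx])
    return picked

chunks = [f"chunk {i}" for i in range(12)]
selected = key_passages(chunks, k=3)
```

Only the selected passages are then sent to the LLM, which is what cuts the token count compared with summarizing all 150 pages via map-reduce.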
@VirtualMachine-d8x (2 months ago)
@@AboniaSojasingarayar The video was great and very useful. Can you make a short video on this clustering method using embeddings?
@AboniaSojasingarayar (2 months ago)
@@VirtualMachine-d8x Sure, will do; happy to hear from you again. Thanks for the feedback.
@Coff03 (7 months ago)
Did you use an OpenAI API key here?
@AboniaSojasingarayar (7 months ago)
Here we use the open-source Mixtral from Ollama. But yes, we can use OpenAI models as well.
@MishelMichel (7 months ago)
Very informative, and your voice is very clear, Dr.
@AboniaSojasingarayar (7 months ago)
Glad it helped!
@fkeb37e9w0 (7 months ago)
Can we use OpenAI and ChromaDB on AWS?
@AboniaSojasingarayar (7 months ago)
Yes we can! In the tutorial below, I have demonstrated how to create and deploy a Lambda layer via a container for larger dependencies: kzbin.info/www/bejne/nZrGpJVvpZyooJYsi=F_X7-6YCAb0Kz3Jc
@fkeb37e9w0 (7 months ago)
@@AboniaSojasingarayar Yes, but can this be done without EKS or containers?
@AboniaSojasingarayar (7 months ago)
Yes! You can try creating a custom Lambda layer. If you face issues, try to include only the required libraries and remove any unnecessary dependencies from your zip file. Hope this helps.
@vijaygandhi7313 (7 months ago)
In the abstractive summarization use case, a lot of focus is usually given to the LLMs being used and their performance. The limitations of LLMs, including context length, and ways to overcome them are often overlooked. It's important to make sure our application is scalable when dealing with large documents. Thank you for this great and insightful video.
@AboniaSojasingarayar (7 months ago)
Thank you, Vijay Gandhi, for your insightful comment! You've raised an excellent point about the limitations of LLMs in abstractive summarization, especially their context length and scalability when dealing with large documents. Indeed, one of the significant challenges in using LLMs for abstractive summarization is their inherent limitation in processing long texts, due to the maximum token limit imposed by these models. This constraint can be particularly problematic when summarizing lengthy documents or articles, where the full context might not fit within the model's capacity.
@zerofive3699 (7 months ago)
Really useful info, ma'am. Keep up the good work!
@AboniaSojasingarayar (7 months ago)
It's my pleasure.
@Bumbblyfestyle (7 months ago)
👍👍
@akshaykotawar5816 (8 months ago)
Very informative, thanks for uploading!
@AboniaSojasingarayar (8 months ago)
Glad it helped!
@akshaykotawar5816 (8 months ago)
Nice video
@AboniaSojasingarayar (8 months ago)
Thanks Akshay. Glad it helped!
@MishelMichel (8 months ago)
Nice, ma'am 😍
@AboniaSojasingarayar (8 months ago)
Glad it helped 😊
@zerofive3699 (8 months ago)
Very nice video, learnt a lot!
@AboniaSojasingarayar (8 months ago)
Thank you! Glad it helped🤓
@appikumar-d8l (8 months ago)
Please do more on AWS Bedrock for developing RAG applications... Your explanation is simple and effective... Stay motivated and upload more videos about LLMs!
@AboniaSojasingarayar (8 months ago)
Thanks for your kind words! Sure, I will do it.
@akshaykotawar5816 (8 months ago)
@@AboniaSojasingarayar Yes, that's the same thing I want.
@AboniaSojasingarayar (7 months ago)
Here is the tutorial link for deploying a Retrieval-Augmented Generation (RAG) application in AWS: kzbin.info/www/bejne/nZrGpJVvpZyooJY