Python Advanced AI Agent Tutorial - LlamaIndex, Ollama and Multi-LLM!

  44,497 views

Tech With Tim

1 day ago

Interested in AI development? Then you are in the right place! Today I'm going to be showing you how to develop an advanced AI agent that uses multiple LLMs.
If you want to land a developer job: techwithtim.net/dev
🎞 Video Resources 🎞
Code: github.com/techwithtim/AI-Age...
Requirements.txt: github.com/techwithtim/AI-Age...
Download Ollama: github.com/ollama/ollama
Create a LlamaCloud Account to Use LlamaParse: cloud.llamaindex.ai
Info on LlamaParse: www.llamaindex.ai/blog/introd...
Understanding RAG: • Why Everyone is Freaki...
⏳ Timestamps ⏳
00:00 | Video Overview
00:42 | Project Demo
03:49 | Agents & Projects
05:44 | Installation/Setup
09:26 | Ollama Setup
14:18 | Loading PDF Data
21:16 | Using LlamaParse
26:20 | Creating Tools & Agents
32:31 | The Code Reader Tool
38:50 | Output-Parser & Second LLM
48:20 | Retry Handle
50:20 | Saving To A File
Hashtags
#techwithtim
#machinelearning
#aiagents

Comments: 94
@257.4MHz
@257.4MHz 11 days ago
You are one of the best explainers I've come across in 50 years of listening to thousands of people explain thousands of things. Also, it's raining and thundering outside while I'm creating this monster; I feel like Dr. Frankenstein.
@justcars2454
@justcars2454 7 days ago
50 years of listening and learning; I'm sure you have great knowledge.
@bajerra9517
@bajerra9517 10 days ago
I wanted to express my gratitude for the Python Advanced AI Agent Tutorial - LlamaIndex, Ollama and Multi-LLM! This tutorial has been incredibly helpful in my journey to learn and apply advanced AI techniques in my projects. The clear explanations and step-by-step examples have made it easy for me to understand and implement these powerful tools. Thank you for sharing your knowledge and expertise!
@Batselot
@Batselot 13 days ago
I was really looking forward to learning this. Thanks for the video!
@samliske1482
@samliske1482 5 days ago
You are by far my favorite tech educator on this platform. Feels like you fill in every gap left by my curriculum and inspire me to go further with my own projects. Thanks for everything!
@seanbergman8927
@seanbergman8927 13 days ago
Excellent demo! I liked seeing it built in VS Code with loops, unlike many demos that are in Jupyter notebooks and can't run this way. Regarding more demos like this... yes, most definitely! I could learn a lot from more, and more advanced, LlamaIndex agent demos. It would be great to see a demo that uses their chat agent and maintains chat state for follow-up questions. Even more advanced and awesome would be an example where the agent asks a follow-up question if it needs more information to complete a task.
@ravi1341975
@ravi1341975 12 days ago
Wow, this is absolutely mind-blowing. Thanks, Tim.
@techgiant__
@techgiant__ 11 days ago
Just used your code with llama 3, and made the code generator a function tool, and it was fvcking awesome. Thanks for sharing👍🏻
@ChadHuffman
@ChadHuffman 10 days ago
Amazing as always, Tim. Thanks for spending the time to walk through this great set of tools. I'm looking forward to trying this out with data tables and PDF articles on parsing these particular data sets to see what comes out the other side. If you want to take this in a different direction, I'd love to see how you would take PDFs on how different parts of a system work and their troubleshooting methodology and then throw functional data at the LLM with errors you might see. I suspect (like other paid LLMs) it could draw some solid conclusions. Cheers!
@jorgitozor
@jorgitozor 10 days ago
This is very clear and very instructive, so much valuable information! Thanks for your work
@garybpt
@garybpt 11 days ago
This was fascinating, I'm definitely going to be giving it a whirl! I'd love to learn how something like this could be adapted to write articles using information from our own files.
@AlexKraken
@AlexKraken 5 days ago
If you keep getting timeout errors and happen to be using a somewhat lackluster computer like me, increasing `request_timeout` in these lines helped me out: llm = Ollama(model="mistral", request_timeout=3600.0) ... code_llm = Ollama(model="codellama", request_timeout=3600.0). (3600.0 is one hour; a response usually takes only 10 minutes.) Thanks for the tutorial!
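For context, the two model objects the commenter is referring to come from the tutorial's setup step; a minimal sketch, assuming the llama-index Ollama integration is installed (the exact import path varies between llama-index versions, so treat it as an assumption):

```python
# Import path in recent llama-index releases; older versions use `from llama_index.llms import Ollama`.
from llama_index.llms.ollama import Ollama

# A generous request_timeout (in seconds) keeps slow machines from aborting mid-generation.
llm = Ollama(model="mistral", request_timeout=3600.0)         # general-purpose LLM
code_llm = Ollama(model="codellama", request_timeout=3600.0)  # code-generation LLM
```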
@ft4jemc
@ft4jemc 12 days ago
Great video. Would really like to see methods that didn't involve reaching out to the cloud but keeping everything local.
@camaycama7479
@camaycama7479 7 hours ago
Awesome video, man thx a big bunch!
@nour.mokrani
@nour.mokrani 12 days ago
Thanks for this tutorial and your way of explaining; I've been looking for this. Could you also make a video on how to build enterprise-grade generative AI with NVIDIA NeMo? That would be so interesting. Thanks again!
@samwamae6498
@samwamae6498 12 days ago
Awesome 💯
@Ari-pq4db
@Ari-pq4db 13 days ago
Nice ❤
@SashoSuper
@SashoSuper 13 days ago
Nice one
@vaughanjackson2262
@vaughanjackson2262 5 days ago
Great vid. The only issue is the fact that the parsing is done externally. For RAGs ingesting sensitive data this would be a major issue.
@Pushedrabbit699-lk6cr
@Pushedrabbit699-lk6cr 13 days ago
Could you also do a video on infinite world generation using chunks for RPG type pygame games?
@seanh1591
@seanh1591 13 days ago
Tim, thanks for the wonderful video. Very well done, sir! Is there an alternative to LlamaParse to keep the parsing local?
@stevenheymans
@stevenheymans 12 days ago
pymupdf
@henrylam4934
@henrylam4934 11 days ago
Thanks for the tutorial. Is there any alternative to LlamaParse that allows me to run the application completely locally?
@JNET_Reloaded
@JNET_Reloaded 3 days ago
nice
@sethngetich4144
@sethngetich4144 7 days ago
I keep getting errors when trying to install the dependencies from requirements.txt
@blissfulDew
@blissfulDew 6 days ago
Thanks for this! Unfortunately I can't run it on my laptop; it takes forever and the AI seems confused. I guess it needs a powerful machine...
@meeFaizul
@meeFaizul 12 days ago
❤❤❤❤❤❤
@billturner2112
@billturner2112 11 hours ago
I liked this. Out of curiosity, why venv rather than Conda?
@kodiak809
@kodiak809 9 days ago
So Ollama runs locally on your machine? Can I make it cloud-based by adding it to my backend?
@anandvishwakarma933
@anandvishwakarma933 7 days ago
Hey, can you share the system configuration needed to run this application?
@camaycama7479
@camaycama7479 7 hours ago
Will Mistral Large be available? I'm wondering whether LLM availability stays up to date or whether there's another step to do.
@unflappableunflappable1248
@unflappableunflappable1248 9 days ago
Cool
@adilzahir9921
@adilzahir9921 9 days ago
Can I use this to make an AI agent that can call customers, interact with them, and take notes on what happens? Thanks!
@WismutHansen
@WismutHansen 12 days ago
You obviously went to the Matthew Berman School of I'll revoke this API Key before publishing this video!
@armandogayon2128
@armandogayon2128 6 days ago
what keyboard are you using? 😊
@willlywillly
@willlywillly 12 days ago
Another great tutorial... thank you! How do I get in touch with you, Tim, for consulting?
@TechWithTim
@TechWithTim 12 days ago
Send an email to the address listed on my YouTube about page.
@Marven2
@Marven2 13 days ago
Can you make a series?
@avxqt966
@avxqt966 10 days ago
I can't install the llama-index packages on my Windows system. Also, the 'guidance' package is showing an error.
@Pyth_onist
@Pyth_onist 13 days ago
I did one using Llama2.
@giovannip.6473
@giovannip.6473 12 days ago
are you sharing it somewhere?
@bigbena23
@bigbena23 3 days ago
What if I don't want my data to be manipulated in the cloud? Is there an alternative to LlamaParse that can be run locally?
@danyloustymenko7465
@danyloustymenko7465 11 days ago
What's the latency of models running locally?
@ofeksh
@ofeksh 12 days ago
Hi Tim! Great job on pretty much everything! But I have a problem: I'm running on Windows with PyCharm, and it shows an error when installing the requirements. Because it's PyCharm, I have two options for installing them: one from within PyCharm and one from the terminal. With both options I see an error (similar, but not exactly the same). Can you please help me with it?
@diegoromo4819
@diegoromo4819 12 days ago
You can check which Python version you have installed.
@ofeksh
@ofeksh 12 days ago
@@diegoromo4819 Hey, thank you for your response. Which version should I have? I can't find it in the video.
@neilpayne8244
@neilpayne8244 9 days ago
@@ofeksh 3.11
@ofeksh
@ofeksh 9 days ago
@@neilpayne8244 shit, that's my version...
@amruts4640
@amruts4640 13 days ago
Can you please do a video about making a GUI in Python?
@mayerxc
@mayerxc 13 days ago
What are your MacBook Pro specs? I'm looking for a new computer to run LLMs locally.
@techgiant__
@techgiant__ 13 days ago
Buy a workstation with a very good Nvidia GPU so you can use CUDA. If you still want to go for a MacBook Pro, get the M2 with 32 GB or 64 GB of RAM. I'm using a 16" MacBook M1 with 16 GB of RAM, and I can only run 7B-13B LLMs without crashing it.
@TechWithTim
@TechWithTim 12 days ago
I have an M2 Max.
@GiustinoEsposito98
@GiustinoEsposito98 11 days ago
Have you ever thought about using Colab as a remote web server with a local LLM such as Llama 3, and calling it from your PC to get predictions? I have the same problem and was thinking about solving it like this.
@DomenicoDiFina
@DomenicoDiFina 12 days ago
Is it possible to create an agent using other languages?
@adilzahir9921
@adilzahir9921 9 days ago
What's the minimum laptop needed to run this model? Thanks!
@joshuaarinaitwe8351
@joshuaarinaitwe8351 9 days ago
Hey Tim, great video. I have been watching your videos for some time, though I was definitely young then. I need some guidance: I'm 17 and want to do an AI and machine learning course. Can somebody advise me?
@radheyakhade9853
@radheyakhade9853 10 days ago
Can anyone tell me what basic things one should know before going into this video?
@_HodBuri_
@_HodBuri_ 4 days ago
[FIX] Error 404 not found - localhost - api - chat: if anyone else gets an error like that when trying to run the CodeLlama agent, just run the CodeLlama LLM in the terminal once to download it; it didn't download automatically for me, as he says around 29:11. Similar to what he showed at the start with Mistral (`ollama run mistral`), you can run this in a new terminal to download CodeLlama: `ollama run codellama`
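One way to confirm a model is actually downloaded before running the agent is to inspect `ollama list`. A small hedged sketch in Python (the helper names are illustrative, and the sample listing below is a plausible shape of the CLI output, which may differ by Ollama version):

```python
import subprocess

def parse_model_names(listing: str) -> list[str]:
    """Extract the NAME column from `ollama list`-style output (skips the header row)."""
    rows = [line for line in listing.splitlines() if line.strip()]
    return [row.split()[0] for row in rows[1:]]

def installed_models() -> list[str]:
    """Run the ollama CLI (assumes it is on PATH) and return installed model names."""
    out = subprocess.run(["ollama", "list"], capture_output=True, text=True, check=True)
    return parse_model_names(out.stdout)

# Example with canned output (illustrative, not captured from a real machine):
sample = (
    "NAME              ID            SIZE    MODIFIED\n"
    "mistral:latest    61e88e884507  4.1 GB  2 days ago\n"
    "codellama:latest  8fdf8f752f6e  3.8 GB  1 day ago\n"
)
print(parse_model_names(sample))  # → ['mistral:latest', 'codellama:latest']
```

If `codellama` is missing from the list, `ollama pull codellama` (or `ollama run codellama`) downloads it.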
@aishwarypatil8708
@aishwarypatil8708 1 day ago
Thanks a lot!!
@technobabble77
@technobabble77 12 days ago
I'm getting the following when I run the prompt:
Error occured, retry #1: timed out
Error occured, retry #2: timed out
Error occured, retry #3: timed out
Unable to process request, try again...
What is this timing out on?
@coconut_bliss5539
@coconut_bliss5539 8 days ago
Your Agent is unable to reach your Ollama server. It's repeatedly trying to query your Ollama server's API on localhost, then those requests are timing out. Check if your Ollama LLM is initializing correctly. Also make sure your Agent constructor contains the correct LLM argument.
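The "retry #1 ... #3" messages in the question come from a retry loop like the one built in the video's "Retry Handle" section (48:20); a minimal sketch in plain Python (function and variable names here are illustrative, not the video's exact code; the printed strings mirror the output quoted above):

```python
import time

def call_with_retries(fn, max_retries=3, delay=0.0):
    """Call fn(); on any exception, print the error and retry up to max_retries times."""
    for attempt in range(1, max_retries + 1):
        try:
            return fn()
        except Exception as exc:
            print(f"Error occured, retry #{attempt}: {exc}")
            time.sleep(delay)
    print("Unable to process request, try again...")
    return None

# Demo: a stand-in query that times out twice, then succeeds on the third try.
state = {"calls": 0}
def flaky_query():
    state["calls"] += 1
    if state["calls"] < 3:
        raise TimeoutError("timed out")
    return "answer"

print(call_with_retries(flaky_query))  # prints two retry lines, then "answer"
```

Seeing all three retries fail with "timed out", as in the question, means every attempt to reach the Ollama API timed out, which matches the server-unreachable diagnosis above.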
@TballaJones
@TballaJones 1 day ago
Do you have a VPN like NordVPN running? Sometimes that can mess up local servers.
@Aiden-rz6vf
@Aiden-rz6vf 12 days ago
Llama 3
@dr_harrington
@dr_harrington 11 days ago
DEAL BREAKER: 17:20 "What this will do is actually take our documents and push them out to the cloud."
@257.4MHz
@257.4MHz 6 days ago
Well, I can't get it to work. It gives 404 on /api/chat
@omkarkakade3438
@omkarkakade3438 5 days ago
I am getting the same error
@mrarm4x
@mrarm4x 2 days ago
You are probably getting this error because you are missing the codellama model; run `ollama pull codellama` and it should fix it.
@dolapoadefisayomioluwole1341
@dolapoadefisayomioluwole1341 13 days ago
First to comment today 😂
@Meir-ld2yi
@Meir-ld2yi 2 days ago
Ollama Mistral works so slowly that even "hello" takes like 20 minutes.
@ases4320
@ases4320 10 days ago
But this is not completely "local" since you need an API key, no?
@matteominellono
@matteominellono 9 days ago
These APIs are used within the same environment or system, enabling different software components or applications to communicate with each other locally without the need to go through a network. This is common in software libraries, operating systems, or applications where different modules or plugins need to interact. Local APIs are accessed directly by the program without the latency or the overhead associated with network communications.
@levinkrieger8452
@levinkrieger8452 13 days ago
First
@neiladriangomez
@neiladriangomez 13 days ago
I'll come back to this in a couple of months. It's too advanced for me; my head is spinning and I can't grasp a single thing 😵‍💫
@TechWithTim
@TechWithTim 13 days ago
Haha no problem! I have some easier ones on the channel
@cocgamingstar6990
@cocgamingstar6990 3 days ago
Me too😅
@alantripp6175
@alantripp6175 1 hour ago
I can't figure out which AI agent vendor is open for me to sign up to use.
@dezly-macauley
@dezly-macauley 13 days ago
I want to learn how to make an AI agent that auto-removes/auto-deletes these annoying spam s3x bot comments on useful YouTube videos like this.
@kazmi401
@kazmi401 13 days ago
Why does YouTube not add my comment? F*CK
@NathanChambers
@NathanChambers 13 days ago
Using a module that requires you to upload the files or data (LlamaParse/LlamaCloud) totally defeats the purpose of self-hosting your own LLM models... Dislike just for that! It makes as little sense as putting your decentralized currency in a centralized bank.
@skyamar
@skyamar 13 days ago
stupid orc
@ivavrtaric913
@ivavrtaric913 13 days ago
How is that an issue? You want the ability to parse files into the model. Are you sure you've grasped the concept of agents and tools? The whole point is to have RAG locally. The decentralized-currency comparison is simply unrelated to what has been done here.
@NathanChambers
@NathanChambers 13 days ago
@@ivavrtaric913 It is the same thing being done. You're taking something that allows you or your business to do things without a third party... but adding a third party for no reason, a third party where your data can be hacked, stolen, or man-in-the-middle attacked. So the comparison IS VALID!
@NathanChambers
@NathanChambers 13 days ago
@@ivavrtaric913 The whole point of things like Ollama and local LLMs is to keep things IN-HOUSE. Using a third party defeats the purpose of using these models. Same thing as putting decentralized money in central banks. So they really are the same type of thing to do! It's like saying cocaine is bad for you, but let's go do some crack. :P
@TechWithTim
@TechWithTim 13 days ago
Then simply don't use it and use the local loading instead. I'm just showing a great option that works incredibly well; you can obviously tweak this, and that's the idea.
@jaivalani4609
@jaivalani4609 11 days ago
Hi Tim, it's really simple to understand. One question: is LlamaParse free to use, or does it need a subscription key?
@jaivalani4609
@jaivalani4609 11 days ago
Can we use LlamaParse locally?
@TechWithTim
@TechWithTim 11 days ago
It's free to use!
@jaivalani4609
@jaivalani4609 9 days ago
@@TechWithTim Thanks, but does it require data to be sent to the cloud?