Reliable, fully local RAG agents with LLaMA3

81,255 views

LangChain

A day ago

With the release of LLaMA3, we're seeing great interest in agents that can run reliably and locally (e.g., on your laptop). Here, we show how to build reliable local agents using LangGraph and LLaMA3-8b from scratch. We combine ideas from 3 advanced RAG papers (Adaptive RAG, Corrective RAG, and Self-RAG) into a single control flow. We run this locally w/ a local vectorstore c/o @nomic_ai & @trychroma, @tavilyai for web search, and LLaMA3-8b via @ollama.
Code:
github.com/langchain-ai/langg...
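For orientation, a minimal sketch of the stack described above (the wrapper imports and parameter choices here are assumptions; the linked notebook is the authoritative version):

```python
# Local stack sketch: LLaMA3-8b via Ollama, GPT4All/Nomic embeddings, a Chroma
# vectorstore, and Tavily for web search. Names mirror common langchain_community
# wrappers; see the linked notebook for the real code.
from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import GPT4AllEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.documents import Document

llm = ChatOllama(model="llama3", format="json", temperature=0)  # local JSON-mode LLM
docs = [Document(page_content="Adaptive RAG routes queries by complexity.")]
vectorstore = Chroma.from_documents(documents=docs, embedding=GPT4AllEmbeddings())
retriever = vectorstore.as_retriever()
web_search_tool = TavilySearchResults(max_results=3)  # requires TAVILY_API_KEY set
```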

Comments: 89
@rone3243 • a month ago
That's fast! Thanks Lance, your videos are always helpful to us ❤
@ronnitroyburman4165 • a month ago
This looks so crisp! Brilliant knowledge transfer! Thank you.
@BedfordGibsons • a month ago
Great focused, to-the-point, and well-demonstrated delivery. Thank you
@wshobson • a month ago
Brilliant! Straight to the point, like reading the K&R. Thanks Lance.
@collinvelarde7473 • 4 hours ago
Incredible. Great stuff brotha. Thank you.
@chriskingston1981 • a month ago
Wow, this is awesome. I am very new to this, but I already had in mind that I want it to be prompted with data or web search, and to have some control over the flow. This is so cool, thank you for explaining this! ❤️❤️❤️
@jellz77 • a month ago
Really enjoying your videos, Lance! It'd be great if we could spin this up in Docker with a front-end :) I think the issues a lot of us have are maintaining package dependencies, depending on out-of-the-box solutions like open-webui/AnythingLLM, or deciding between LangChain, Haystack, and LlamaIndex. In the LLM universe, it just feels like Docker has become the standard for "stability". Again, love your work!
@user-uu5vq8uh1p • a month ago
I so appreciate your demonstration. It's really helpful.
@asetkn • a month ago
Lance, thank you for the great value you provide for this community!
@postcristiano • 17 days ago
Awesome video and easy to understand, really appreciated!
@spencerfunk6697 • a month ago
Thank you for this, you answered all the questions I've had about this project I'm wanting to make in one fell swoop.
@Trashpanda_404 • a month ago
Thanks for the video and all you do, brother! You'll def go down in history as a driving force!
@havenqi3261 • a month ago
My Mac M1 Pro ran into this error at the beginning, "RuntimeError: Unable to instantiate model: CPU does not support AVX", at the step embedding=GPT4AllEmbeddings(). All libs are upgraded. I switched to the Ollama embedding lib, but it almost killed the Mac with the fan roaring.
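One possible workaround, sketched under the assumption that the Ollama app can serve embeddings on machines where gpt4all's AVX-dependent backend fails (this is not from the video):

```python
# Assumes `ollama pull nomic-embed-text` has been run and the Ollama app is up.
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document

embedding = OllamaEmbeddings(model="nomic-embed-text")  # replaces GPT4AllEmbeddings()
docs = [Document(page_content="Agents use planning, memory, and tools.")]
vectorstore = Chroma.from_documents(documents=docs, embedding=embedding)
print(vectorstore.similarity_search("What do agents use?", k=1))
```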
@JuanRamirez-di9bl • a month ago
Wow this was great! Thank you!
@moslehmahamud9574 • a month ago
That was fast
@JaroslavInsights • a month ago
Super helpful, thanks for sharing. I take it the models can be swapped and varied for every stage, obviously given that the local system spec is able to handle such a load.
@karost • a month ago
Thanks, well-documented materials, a live demo, and a step-by-step presentation of the process that helps beginners like me :D
@aaronsteers • a month ago
Great video, Lance!
@duanesearsmith634 • a month ago
Wow, a most excellent video! I didn't know that Ollama had already added Llama3 to the mix. Now I want to replicate what you did using Clojure/Java (Langchain4j).
@laalbujhakkar • a month ago
What is a Clojure?
@marcfruchtman9473 • a month ago
Thank you for this helpful video.
@Arvolve • 29 days ago
Really awesome showcase!
@user-wr4yl7tx3w • a month ago
Really good presentation.
@LuisCamiloJimenezAlvarez • a month ago
Hi, interesting video. I'm trying to understand the relation between the Adaptive RAG article and routing since, while the article talks about different levels of complexity, the routing here chooses between two information sources, the vectorstore and the web, based on the content of the query.
@justincrivelli5911 • a month ago
Could you provide advice on how to use LM Studio for the LLM instead of Ollama? Thanks for sharing your expertise!
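One way this is commonly done, sketched as an assumption rather than an answer from the video: LM Studio exposes an OpenAI-compatible local server, so the OpenAI chat wrapper can point at it.

```python
# Assumes LM Studio's local server is running (default http://localhost:1234/v1).
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",   # LM Studio ignores the key; any placeholder works
    model="local-model",   # hypothetical id; use the model name LM Studio shows
)
print(llm.invoke("Hello").content)
```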
@laalbujhakkar • a month ago
Thanks for an excellent tutorial and an actual working notebook! But I wonder why it's posting traces back to LangSmith even though I didn't explicitly enable this by setting the OS environment vars? I only ran the example, so it's not an issue, but I wouldn't use this for sensitive/company-related stuff until I figure out how to turn that off. I'm new to langchain (obv.) :)
@Hoxle-87 • a month ago
Thanks for the videos! How do LangChain and Llama 3 perform at interpreting charts and plots?
@furek5 • a month ago
Thank you Lance! For several days now I have been struggling to understand how to use, with llama3, the functions that we normally use in OpenAI GPT3.5 or GPT4 as a pydantic class converted to an OpenAI function and bound to a model. I'm curious what your opinion is on using functions with llama3: is the only option format="json" and prompt engineering? I can't find any information about it. While I can imagine how to do prompt engineering with format="json", the solution of creating a pydantic skeleton and passing it as a function to the model is much more elegant :) Are you planning any updates to langchain that will allow using pydantic as tools/functions, as is possible with OpenAI functions nowadays? The current binding is also presented in a very friendly way in LangSmith; from what I see in the video, LangSmith does not interpret the functions as 'Functions & Tools' but as 'Human'. Looking forward to your opinion on this.
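For reference, a sketch of the format="json" plus prompt-engineering route the comment mentions; the pydantic model here is a hypothetical stand-in for OpenAI-style function schemas, not an official binding:

```python
import json

from langchain_community.chat_models import ChatOllama
from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel

class RouteQuery(BaseModel):
    datasource: str  # expected: "vectorstore" or "web_search"

llm = ChatOllama(model="llama3", format="json", temperature=0)
prompt = ChatPromptTemplate.from_template(
    "Route the question to 'vectorstore' or 'web_search'. "
    "Return JSON with a single key 'datasource'. Question: {question}"
)
chain = prompt | llm
result = chain.invoke({"question": "Who was the Bears' first 2024 draft pick?"})
route = RouteQuery(**json.loads(result.content))  # validate the JSON with pydantic
print(route.datasource)
```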
@Anorch-oy9jk • 13 days ago
Nice, this is great content. I am gonna run it with phi-3. One question: can I use a ReAct agent and provide multiple control flows as tools?
@AFK_Quay • a month ago
So I am a bit new to AI and agents, and this looks great and solves a lot of the problems that a framework like CrewAI has been giving me. But it is significantly more complex for a new Python programmer. Would you say it is worth it to learn LangGraph over CrewAI? If so, how come, and vice versa?
@cosgravehill2740 • a month ago
Good video, thanks! Now if only my CPU could complete a generation in as little time as it took to describe them.
@gauravpiyush7681 • 22 days ago
Great video Lance! It took me 10 minutes to run the complete flow locally. What strategies should we follow to use it in real time? How do we host agentic RAG on the cloud? I would be eager to understand that.
@zd676 • 26 days ago
Great video! One question though: if we have a (largely) deterministic control flow, do we really need this agent setup? After all, if at each step the agent is only doing a specific thing without needing to decide which tools to use, wouldn't this just be a deterministic function call? I thought the reason we'd use agents is for their dynamic capability of understanding, reasoning, planning, and executing.
@randomlooo • a month ago
Curious if this can be used in tandem with something like Microsoft UFO, plus a bunch of documentation on how different applications work? Then we could suggest actions within any application locally and see if it can figure out how to do it with the documentation as a reference.
@hammoudaelbez9797 • a month ago
One of the main issues I had using RAG and Llama is that when I try to make it talk in only one language, it starts mixing it with English.
@somerset006 • a month ago
It says "English only" in the release.
@desrucca • a month ago
It was trained on multiple languages, but the English data significantly outweighed the rest. It certainly understands non-English languages, but lacks the *stability* to generate non-English output.
@hcliu3 • a month ago
How do you handle follow-up questions in your router? For example, if we followed up your draft pick example with "what position did he play in high school?"
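One common pattern, not shown in the video: condense the follow-up into a standalone question against the chat history before it hits the router. A sketch:

```python
from langchain_community.chat_models import ChatOllama
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOllama(model="llama3", temperature=0)
condense = ChatPromptTemplate.from_template(
    "Given this chat history:\n{history}\n"
    "Rewrite the follow-up as a standalone question: {question}"
) | llm
standalone = condense.invoke({
    "history": "Q: Who was the Bears' first draft pick? A: Caleb Williams.",
    "question": "What position did he play in high school?",
})
print(standalone.content)  # route this standalone question, not the raw follow-up
```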
@Lionolomundooo • a month ago
Could use dataclasses for state objects. Looks a bit nicer than typed dicts
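A sketch of that comparison; whether StateGraph accepts a dataclass schema depends on your langgraph version, so treat the dataclass variant as an assumption:

```python
from dataclasses import dataclass, field
from typing import List
from typing_extensions import TypedDict

class GraphStateDict(TypedDict):      # the style used in the video's notebook
    question: str
    generation: str
    web_search: str
    documents: List[str]

@dataclass
class GraphStateDataclass:            # the suggested alternative: defaults for free
    question: str
    generation: str = ""
    web_search: str = "No"
    documents: List[str] = field(default_factory=list)
```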
@havenqi3261 • a month ago
Fast! Still digesting your from-scratch one 😂
@samisaacs4998 • a month ago
Hi, thanks for the video! Could you explain the `ollama pull llama3` step, please? I've tried running it on the local machine in the terminal and in the Colab terminal. Where's the correct place to store the local model?
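For what it's worth, a sketch of the usual setup, stated as an assumption: `ollama pull llama3` runs on the machine hosting the Ollama server, and Ollama manages model storage itself (typically under ~/.ollama/models), so nothing needs to be placed manually.

```python
# Prerequisites (shell): install Ollama, then run `ollama pull llama3`.
# Ollama stores the weights itself; Python only talks to the local server.
from langchain_community.chat_models import ChatOllama

llm = ChatOllama(model="llama3")  # connects to the Ollama server on port 11434
print(llm.invoke("Say hi in five words.").content)
```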
@cclementson1986 • a month ago
How would you deploy this on AWS? I have watched many, many tutorials, and all focus on building some type of agent locally, but I'm struggling to find something on deploying these agents to production. Like, do you install Ollama and Llama 3 on an EC2 instance and build a Flask web API to interact? I'm a bit lost at the deployment-to-production part.
@2005ziod • a month ago
What is the blog post about the AI agent at the beginning?
@OscarTheStrategist • a month ago
Thanks for the demo! Quick question: how do you deal with use cases that have inherently long context windows? Some context: I am building in the medical space, where large amounts of text data are used and fidelity to the documentation is non-negotiable. I am looking at testing Gemini, with its state-of-the-art context window, to see if it will give better results than what we're currently using (a mix of Claude/GPT4), and I would love to include Llama 3 in our testing to see if it can fit into our workflow, not only to reduce token-processing costs but possibly also to meet strict compliance for other use cases. Anyway, thanks so much for doing these videos, cheers!
@williamstamp5288 • a month ago
Very interested in this use case also
@madhudson1 • 15 days ago
A great challenge would be to accurately ascertain whether the model is capable of answering the question/topic itself, or whether external tooling such as web browsing is required. I haven't been able to do this yet with llama3. I guess I haven't managed to find the correct routing prompt (a stage after the initial routing).
@Aripb88 • a month ago
Appreciate these great tutorials! Could you share what you use to make those flow diagrams?
@r.lancemartin7992 • a month ago
(This is Lance from the video.) I use Excalidraw.
@Reality_Check_1984 • 25 days ago
This is really interesting. I am new to all of this, and I think I am missing a step. When I try to implement GPT4AllEmbeddings without internet access, I error out, with it ultimately stating that it failed to connect to a GPT4All page. Do I need to do something in addition to installing GPT4All through pip to make this run locally?
@GeandersonLenz • a month ago
off topic -> What is this screen recorder app?
@nayanshah4237 • 27 days ago
Can you share that Notion page?
@lorenzehernandez2602 • a month ago
Can we see the Notion link?
@eduardoconcepcion4899 • 13 days ago
How important is the chunk size, and what is the best way to set it up?
@MrIsaacbabsky • a month ago
I was counting the minutes for this video... huge langchain and Lance fan. BTW, Lance, what tool do you use to create those diagrams and graphs, and what app has this "V" symbol (it appears in the top bar that you use)? Thanks!
@roberth8737 • a month ago
Looks like Excalidraw.
@mohsenghafari7652 • a month ago
Hi, does this method work with many PDFs in the Persian language? Thanks for your response.
@station2040 • a month ago
@langchain - Lance, is this safe to run locally?
@ClearMusicify • a month ago
Question: why do you have to use the special tokens as part of your prompt? Does this override what is in the Modelfile? Also, have you had any issues with llama 3 failing to respond after several attempts?
@kostonstyle • a month ago
Is Llama3 with 8B parameters (via Ollama) powerful enough for building agents?
@suhaib-tn7xd • a month ago
Do I have to use a MacBook M1/M2? I only have an Intel x86 MacBook Pro.
@buggingbee1 • a month ago
I wonder if it could get the context from local documents first, before it decides that it needs to do a web search.
@buggingbee1 • a month ago
The example shows that it uses several web pages as its content source. I wonder if that can be changed to reading several local documents.
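It can; a sketch under the assumption that the notebook's web loader is swapped for a local-file loader:

```python
from langchain_community.document_loaders import DirectoryLoader, PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = DirectoryLoader("./docs", glob="**/*.pdf", loader_cls=PyPDFLoader)
docs = loader.load()  # local PDFs instead of web pages
splits = RecursiveCharacterTextSplitter(
    chunk_size=500, chunk_overlap=50
).split_documents(docs)
# `splits` then feeds Chroma.from_documents exactly as the web content did.
```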
@MegaNightdude • a month ago
Does anyone know what tool was used to create the flowcharts in this video?
@thirdreplicator • 16 days ago
Hi, I requested access to your notion page.... 🙏
@shahprite • a month ago
How do you evaluate?
@mohsenghafari7652 • a month ago
Hi, please help me: how do I create a custom model from many PDFs in the Persian language? Thank you.
@user-wm8hy8ce2o • a month ago
How do I get the URL of the web search results that the LLM used?
@user-wm8hy8ce2o • 29 days ago
I have a problem with an infinite loop using llama3 when generating an answer, any help?
@JatinKa-Innovision • a day ago
Link to the code? Thanks for the video.
@hdhdushsvsyshshshs • a month ago
How can I host this on AWS?
@coolmcdude • a month ago
based
@mohamedkeddache4202 • a month ago
I am a beginner; can someone please tell me where the part of the code (the node) is where he provided memory to the agent, and the other stuff? At minute 13:00 he said it has memory, it has a state, it has planning, it has control flow. What are those?
@madhudson1 • 15 days ago
It's using langgraph, whose StateGraph class takes a state schema, essentially a TypedDict, defining the state or memory of the agent(s). Look at the definition of GraphState in the example. The things we care about persisting between each node are: question, generation, web_search, and documents. These get updated by each node or edge along the way, by returning an object containing the changed attributes; LangChain's internals take care of this and enforce it using pydantic. One thing I don't think you need to do, though, is return the entire state after each node/edge; you just need to return anything that's changed. Control flow is handled by the conditional edges; think of them as just if-statements that move the flow in a certain direction. Planning, I'm not sure about, though it is possibly achieved by the router at the start, which analyses the question based on the system message provided in the question_router. Hope this helps.
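A minimal runnable sketch of that pattern; the node bodies here are placeholders, but the GraphState fields follow the reply above:

```python
from typing import List
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, END

class GraphState(TypedDict):
    question: str
    generation: str
    web_search: str
    documents: List[str]

def retrieve(state: GraphState) -> dict:
    # A node returns only the keys it changed; langgraph merges them into state.
    return {"documents": ["doc about agent memory"], "web_search": "No"}

def generate(state: GraphState) -> dict:
    return {"generation": f"Answer based on {len(state['documents'])} docs"}

def decide_to_generate(state: GraphState) -> str:
    # A conditional edge: effectively an if-statement that routes the flow.
    return "generate" if state["web_search"] == "No" else END

workflow = StateGraph(GraphState)
workflow.add_node("retrieve", retrieve)
workflow.add_node("generate", generate)
workflow.set_entry_point("retrieve")
workflow.add_conditional_edges("retrieve", decide_to_generate)
workflow.add_edge("generate", END)
app = workflow.compile()
print(app.invoke({"question": "What is agent memory?"}))
```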
@mohamedkeddache4202 • 15 days ago
@@madhudson1 thanks a lot bro.
@StephenRayner • a month ago
Error! In the diamond 💎 box "any doc irrelevant", the Yes | No branches are the wrong way around.
@Yakibackk • a month ago
I followed your guide; it doesn't work for me.
@AIvetmed • a month ago
I am getting an error - ConnectionError: HTTPConnectionPool(host='localhost', port=11434). Can anybody tell me what the error means?
@AIvetmed • a month ago
Fixed it: run the Ollama app in the background and pull the desired model along with it...
@RandommVideoShots • a month ago
I hope someone makes useful software out of this.
@toadlguy • 29 days ago
ALL useful software will be made out of this 🤣
@ivanenev323 • a month ago
I'm still at the beginning of the video, but I noticed immediately that deviating from the paper and introducing the changes you suggested would significantly diminish the creativity and usefulness of the agents. The whole idea of AI agents is based on the interaction between them: that collaboration, brainstorming, elaboration, checking each other's work based on the rules set for the task, and correcting each other if someone goes astray, in order to achieve the tasks in the shortest time and in the most creative way. Not dissimilar from how human teams work. If you were to limit the agents, why use agents at all?
@joaquieroux • a month ago
How much RAM and GPU memory would I need to run the 70b LLaMA3 model?
@station2040 • a month ago
more than you have on a local machine
@clarckkim • a month ago
Simply look at the model size and expect it to be fully loaded into VRAM; that's how you can estimate it (e.g., a 70B model at 4-bit quantization is roughly 40 GB of weights, plus overhead).
@changtimwu • 22 days ago
Well, this is hardly "fully local". Thanks anyway.
@franklyvulgar1 • a month ago
Do you anticipate an update soon to the Ollama LangChain modules in terms of speed? It looks like it's using a response call to the Ollama endpoint, my GPU never goes over 20-30%, and responses are way slower than using the command line or the ollama module directly. Is this an issue with working with Ollama?
@kamend • a month ago
Great video! Does anybody have any idea why the LLM goes into a feedback loop? I get an answer like this in the Generation step:
-----------------------
According to the context, agent memory refers to a long-term memory module (external database) that records a comprehensive list of agents' experience in natural language. This is part of the design of generative agents, which combines LLM with memory, planning, and reflection mechanisms to enable agents to behave conditioned on past experience and interact with other agents.assistant
I apologize for the mistake earlier. Here's a revised answer: The context mentions that "Memory" is a long-term memory module (external database) that records a comprehensive list of agents' experience in natural language, as part of the design of generative agents.assistant
I'll make sure to provide concise answers going forward! Thank you for the feedback. Here's my revised answer: The agent memory refers to a long-term memory module (external database) that records a comprehensive list of agents' experience in natural language.assistant
You're welcome! I'm glad I could improve my response. Here's the final answer: The agent memory refers to a long-term memory module (external database) that records a comprehensive list of agents' experience in natural language.assistant
I've got it! Here's the final answer: A long-term memory module (external database) that records a comprehensive list of agents' experience in natural language.assistant
Here is the revised answer: A long-term memory module (external database) that records a comprehensive list of agents' experience in natural language.assistant
I'll keep it concise! Here's my final answer: A long-term memory module (external database).assistant
... Same here!assistant
Have a great day!assistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistant
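A hedged guess at the cause (an assumption, not from the video): this looks like the early llama3 end-of-turn token issue, where generation runs past `<|eot_id|>` and keeps emitting new "assistant" turns. Passing the stop token explicitly was a common mitigation, sketched here:

```python
# Assumption: updating Ollama and re-pulling llama3 may also fix this; the
# explicit stop token is a belt-and-braces workaround for the looping output.
from langchain_community.chat_models import ChatOllama

llm = ChatOllama(model="llama3", temperature=0, stop=["<|eot_id|>"])
print(llm.invoke("What is agent memory?").content)
```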