Great explanation. It would be great to do one more tutorial on multimodal local RAG, handling different chunk types like tables, text, and images, using unstructured, Chroma, and MultiVectorRetriever completely locally.
@ytaccount9859 · a month ago
Awesome stuff. LangGraph is a nice framework. Stoked to build with it; working through the course now!
@leonvanzyl · a month ago
The tutorial was "fully local" up until the moment you introduced Tavily 😜😉. Excellent tutorial, Lance 👍
@sergeisotnik · a month ago
Any internet search is, by definition, no longer local. However, the embeddings here come from a third-party service (where only the first 1M tokens are free).
@starbuck1002 · a month ago
@@sergeisotnik He's using the nomic-embed-text embedding model locally, so there is no token cap at all.
@sergeisotnik · a month ago
@@starbuck1002 It looks like you're right. I saw that `from langchain_nomic.embeddings import NomicEmbeddings` is used, which usually means an API call. But in this case, the initialization is done with the parameter `inference_mode="local"`. I didn’t check the documentation, but it seems that in this case, the model is downloaded from HuggingFace and used for local inference. So, you’re right, and I was wrong.
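For anyone curious, a minimal sketch of that local setup, assuming the `langchain-nomic` package and its local inference backend are installed (the model name shown is an assumption based on the library's common default):

```python
# A minimal sketch: local Nomic embeddings, no API key or token cap involved.
from langchain_nomic.embeddings import NomicEmbeddings

# inference_mode="local" downloads the model and runs embedding on your machine.
embeddings = NomicEmbeddings(
    model="nomic-embed-text-v1.5",  # assumed model name
    inference_mode="local",
)

vector = embeddings.embed_query("What is agentic RAG?")
print(len(vector))  # dimensionality of the embedding vector
```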
@ravivarman7291 · a month ago
Amazing session, and the content was explained very nicely in just 30 minutes. Thanks so much!
@adriangpuiu · a month ago
@lance, please add the LangGraph documentation to the chat; the community will appreciate it. Let me know what you think.
@joxxen · a month ago
You are amazing, as always. Thank you for sharing.
@becavas · a month ago
Why did you use llama3.2:3b-instruct-fp16 instead of llama3.2:3b?
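Not the author's answer, but for context: as far as I know, the default `llama3.2:3b` tag is a quantized build, while the `-instruct-fp16` tag is the unquantized 16-bit version, which uses more memory but tends to follow structured-output prompts a bit more reliably. A minimal sketch of swapping either tag in, assuming both models have already been pulled with `ollama pull`:

```python
# A minimal sketch, assuming `langchain-ollama` is installed and the models
# have been pulled locally (e.g. `ollama pull llama3.2:3b-instruct-fp16`).
from langchain_ollama import ChatOllama

# Full-precision instruct model: larger memory footprint, no quantization loss.
llm_fp16 = ChatOllama(model="llama3.2:3b-instruct-fp16", temperature=0)

# Default tag (quantized): smaller and faster, usually slightly lower quality.
llm_q4 = ChatOllama(model="llama3.2:3b", temperature=0)

print(llm_fp16.invoke("Reply with one word: ready?").content)
```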
@LandryYvesJoelSebeogo · 23 hours ago
May God bless you, bro.
@henson2k · 28 days ago
You make the LLM do all the hard work of filtering candidates.
@sunderrajan6172 · a month ago
Beautifully done; thanks
@SavvasMohito · a month ago
That's a great tutorial that shows the power of LangGraph. It's impressive you can now do this locally with decent results. Thank you!
@davesabra4320 · 27 days ago
Thanks, it is indeed very cool. Last time you used 32 GB of memory; do you think this will run with 16 GB?
@Togowalla · a month ago
Great video. What tool did you use to illustrate the nodes and edges in your notebook?
@user-wr4yl7tx3w · a month ago
Could you consider doing an example of the contextual retrieval approach that Anthropic recently introduced?
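Not covered in the video, but as a rough illustration of the idea: contextual retrieval prepends a short LLM-written summary of where each chunk sits in its document before the chunk is embedded. A hedged sketch using the local model from this tutorial; the prompt wording and truncation length are assumptions:

```python
# A rough sketch of contextual retrieval with a local model: prepend an
# LLM-generated situating context to each chunk before it is embedded.
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.2:3b-instruct-fp16", temperature=0)

def contextualize(full_document: str, chunk: str) -> str:
    """Return the chunk prefixed with a short, LLM-written context."""
    prompt = (
        "Here is a document:\n" + full_document[:4000] +  # truncated to fit the context window
        "\n\nHere is a chunk from it:\n" + chunk +
        "\n\nWrite one or two sentences situating this chunk within the document, "
        "to improve retrieval. Answer with the context only."
    )
    context = llm.invoke(prompt).content
    return context + "\n\n" + chunk

# The contextualized chunks are then embedded and indexed as usual.
```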
@marcogarciavanbijsterveld6178 · 23 days ago
I'm a med student interested in experimenting with the following: I'd like to have several PDFs (entire medical books) from which I can ask a question and receive a factually accurate, contextually appropriate answer, thereby avoiding online searches. I understand this could potentially work using your method (omitting the web searches), but am I correct in thinking it would require a resource-intensive, repeated search process? For example, if I ask a question about heart failure, the model would need to sift through each book and chapter until it finds the relevant content, which would likely be time-consuming initially. However, if I then ask a different question, say on treating systemic infections, the model would go through the entire set of books and chapters again rather than narrowing down based on previous findings. Is there a way for the system to 'learn' where to locate information after several searches? Ideally, after numerous queries, it would access the most relevant information efficiently without needing to reprocess the entire dataset each time, while maintaining factual accuracy and avoiding hallucinations.
@JesterOnCrack · 2 days ago
I'll take a minute to try and answer your question to the best of my ability. Basically, the ideas you describe sound reasonable for your specific application, but they are not useful everywhere. Whenever you restrict search results, there is a chance you miss the one correct answer you needed; speaking from experience, even a tiny chance of not finding what you need is enough to deter many customers. Of course, your system would gain efficiency in return, completing queries more quickly. Bottom line: there are ways to achieve this with clever data and AI engineering, but I don't think there is a single straightforward fix to your problem.
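To add a concrete note: with the vector-store approach from the video, the books are embedded once up front; after that, every question is answered by a fast similarity search over the stored index rather than by re-reading the PDFs, so there is no repeated full scan. A minimal sketch, assuming `langchain-chroma`, `pypdf`, and the local embeddings from the video; file names and parameters are placeholders:

```python
# A minimal sketch: index the PDFs once, then reuse the persisted index
# for every question. File names and parameters are placeholders.
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_chroma import Chroma
from langchain_nomic.embeddings import NomicEmbeddings

embeddings = NomicEmbeddings(model="nomic-embed-text-v1.5", inference_mode="local")

# One-time indexing step (repeat only when the book collection changes).
docs = []
for path in ["cardiology.pdf", "infectious_diseases.pdf"]:  # placeholder files
    docs.extend(PyPDFLoader(path).load())
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_documents(docs)
Chroma.from_documents(chunks, embeddings, persist_directory="./med_index")

# Later sessions: load the persisted index and query it directly.
retriever = Chroma(
    persist_directory="./med_index", embedding_function=embeddings
).as_retriever(search_kwargs={"k": 4})
print(retriever.invoke("first-line treatment for heart failure")[0].page_content[:200])
```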
@VictorDykstra · a month ago
Very well explained.😊
@thepeoplesailab · a month ago
Very informative ❤❤
@AlexEllis · a month ago
Thanks for the video and sample putting all these parts together. What did you use to draw the diagram at the beginning of the video? Was it generated by a DSL/config?
@blakenator123 · a month ago
Looks like Excalidraw.
@ericlees5534 · 24 days ago
Why does he make it look so easy…
@fernandobarros9834 · a month ago
Great tutorial! Is it necessary to add a prompt format?
@skaternationable · a month ago
Using PromptTemplate/ChatPromptTemplate works as well. It seems that the `.format` here is equivalent to the `input_variables` param in those two classes.
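A small sketch of the equivalence being described, with illustrative variable names:

```python
# Two equivalent ways to fill the same prompt; variable names are illustrative.
from langchain_core.prompts import ChatPromptTemplate

template = "Answer the question using only this context:\n{context}\n\nQuestion: {question}"

# 1) Plain Python string formatting, as in the notebook.
prompt_str = template.format(context="...retrieved docs...", question="What is RAG?")

# 2) ChatPromptTemplate, where {context} and {question} become input variables.
prompt_tmpl = ChatPromptTemplate.from_template(template)
messages = prompt_tmpl.invoke({"context": "...retrieved docs...", "question": "What is RAG?"})

# Either result can then be passed to the chat model's invoke().
```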
@fernandobarros9834 · a month ago
@@skaternationable Thanks!
@andresmauriciogomezr3 · a month ago
thank you
@beowes · a month ago
Question: you have operator.add on the loop step, but then increment the loop step in the state too… am I wrong, or would that make it incorrect?
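I might be misreading the notebook, but the usual pattern with an `operator.add` reducer is that a node returns only the delta and LangGraph does the accumulation, so nothing is double-counted. A small sketch of that reducer behaviour (node and key names are illustrative):

```python
# A small sketch of how an operator.add reducer accumulates a state key.
import operator
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    # Each value a node returns for loop_step is ADDED to the current value.
    loop_step: Annotated[int, operator.add]

def generate(state: State):
    # Return only the delta; the reducer computes current + 1.
    return {"loop_step": 1}

graph = StateGraph(State)
graph.add_node("generate", generate)
graph.add_edge(START, "generate")
graph.add_edge("generate", END)
app = graph.compile()

print(app.invoke({"loop_step": 0}))  # {'loop_step': 1}
```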
@sidnath7336 · a month ago
If different tools require different keyword arguments, how can these be passed in for the agent to access?
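One common pattern, sketched here with hypothetical tools: each tool declares its own arguments in its signature and docstring, and the model fills in the matching keyword arguments per call, which you can read back from `tool_calls`:

```python
# A sketch with hypothetical tools: each tool's signature defines the keyword
# arguments the model is asked to supply when it calls that tool.
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def web_search(query: str, max_results: int = 3) -> str:
    """Search the web for a query."""
    return f"results for {query!r} (top {max_results})"

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    return str(eval(expression))  # demo only; never eval untrusted input

llm = ChatOllama(model="llama3.2:3b-instruct-fp16", temperature=0)
llm_with_tools = llm.bind_tools([web_search, calculator])

msg = llm_with_tools.invoke("What is 17 * 23?")
for call in msg.tool_calls:
    # call["args"] holds the keyword arguments for that specific tool.
    print(call["name"], call["args"])
```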
@developer-h6e · 22 days ago
Is it possible to make an agent that, when provided with a few hundred links, extracts the info from all of them and stores it?
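That should be possible with the same pieces used here; a rough sketch, assuming a list of URLs and the local embeddings (a few hundred pages will simply take a while, so some error handling and batching helps):

```python
# A rough sketch: load a list of URLs, split the pages, and store them locally.
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_chroma import Chroma
from langchain_nomic.embeddings import NomicEmbeddings

urls = ["https://example.com/page1", "https://example.com/page2"]  # placeholder links

docs = []
for url in urls:
    try:
        docs.extend(WebBaseLoader(url).load())
    except Exception as err:  # some of a few hundred links will inevitably fail
        print(f"skipping {url}: {err}")

chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_documents(docs)
Chroma.from_documents(
    chunks,
    NomicEmbeddings(model="nomic-embed-text-v1.5", inference_mode="local"),
    persist_directory="./link_index",  # persisted store for later querying
)
```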
@serychristianrenaud · a month ago
thanks
@hari8568 · a month ago
Is there an elegant way to handle recursion errors?
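One option: set an explicit `recursion_limit` in the run config and catch `GraphRecursionError`, falling back to a best-effort answer instead of crashing. A small sketch, assuming `app` is the compiled graph and the state keys match the notebook's:

```python
# A small sketch, assuming `app` is a compiled LangGraph graph.
from langgraph.errors import GraphRecursionError

inputs = {"question": "What is agent memory?"}
try:
    result = app.invoke(inputs, config={"recursion_limit": 10})
except GraphRecursionError:
    # The graph looped more than 10 steps without finishing; fail gracefully.
    result = {"generation": "Sorry, I couldn't converge on an answer for that question."}
```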
@jamie_SF · a month ago
Awesome
@johnrogers3315 · a month ago
Great tutorial, thank you
@ephimp3189 · a month ago
Is it possible to add a "fact checker" method? What if the answer is obtained from a document that gives false information? It would technically answer the question, it just wouldn't be true.
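A grounding check can at least verify that the answer is supported by the retrieved documents; it can't catch a source that is itself wrong, which would need a separate trusted reference. A hedged sketch of such a grader using the same local JSON-mode pattern as the tutorial (the prompt wording is an assumption):

```python
# A sketch of a grounding check: ask the local model whether the answer is
# supported by the retrieved documents and return a yes/no verdict.
import json
from langchain_ollama import ChatOllama

grader = ChatOllama(model="llama3.2:3b-instruct-fp16", temperature=0, format="json")

def is_grounded(documents: str, answer: str) -> bool:
    prompt = (
        "You are a grader checking whether an answer is supported by a set of facts.\n"
        f"FACTS:\n{documents}\n\nANSWER:\n{answer}\n\n"
        'Return JSON with a single key "binary_score" whose value is "yes" or "no".'
    )
    verdict = json.loads(grader.invoke(prompt).content)
    return verdict.get("binary_score") == "yes"

# Note: this only checks consistency with the documents; if a document itself
# contains false information, the check will still pass.
```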
@aiamfree · a month ago
it's sooooo fast!
@ghostwhowalks2324 · a month ago
Amazing stuff that can be done with a few lines of code. Disruption is coming everywhere.
@HarmonySolo · a month ago
LangGraph is too complicated; you have to implement State, Node, etc. I would prefer to implement the agent workflow myself, which is much easier; at least I don't need to learn how to use LangGraph.
@generatiacloud · 24 days ago
Any repo to share?
@RazorCXTechnologies · 23 days ago
Excellent tutorial! An easier alternative is to use n8n, since it has a built-in LangChain integration with AI agents and needs almost no code to achieve the same functionality. n8n also provides an automatic chatbot interface and webhooks.
@kgro353 · 14 days ago
Langflow is the best solution.
@_Learn_With_Me_EraofAI · a month ago
Unable to access ChatOllama.
@HELLODIMENSION · a month ago
You have no idea how much you saved me 😂 Salute 🫡 Thank you.