You are transforming the LLM market and its adoption. I'm a fan, and we use it widely in my projects and products. Feedback from someone who follows the project commits, the blogs, and all the videos: PLEASE invest in a microphone! Congratulations on the great work over the last year.
@EJMConsutlingLLC 10 months ago
You are a very smart and talented developer. I found this to be humbling, educational, and very insightful. I especially appreciated the insider developer aspect of the video. It was a treat to get to follow the workflow and build process. Well done! Thank you!
@hectorcastro2467 10 months ago
Great video! I appreciate the insights. Could you clarify when I should use StateGraph versus MessageGraph in my projects?
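(Not from the video, just a hedged sketch of the difference as I understand it: MessageGraph tracks only a list of messages, while StateGraph takes any state schema you define, with per-field reducers. All node and field names below are illustrative.)

```python
from typing import Annotated, TypedDict

from langgraph.graph import END, MessageGraph, StateGraph
from langgraph.graph.message import add_messages

# StateGraph: you define the state schema yourself, with a reducer per field.
class State(TypedDict):
    messages: Annotated[list, add_messages]  # appended to on every update
    attempts: int                            # plain field, overwritten on update

def step(state: State) -> dict:
    return {"attempts": state["attempts"] + 1}

builder = StateGraph(State)
builder.add_node("step", step)
builder.set_entry_point("step")
builder.add_edge("step", END)
state_graph = builder.compile()

# MessageGraph: the entire state is just the list of messages.
chat = MessageGraph()
chat.add_node("echo", lambda messages: [("ai", "echo: " + messages[-1].content)])
chat.set_entry_point("echo")
chat.add_edge("echo", END)
message_graph = chat.compile()
```

Rule of thumb (again, just my reading): MessageGraph when a chat history is all you need, StateGraph as soon as you want extra bookkeeping fields.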
@Cdaprod 10 months ago
Y'all are a godsend to open source 😘
@levbszabo 6 months ago
Great video! I've been loving this new framework
@urd4651 8 months ago
Thank you very much for sharing. Very informative and clear.
@Cam0814 10 months ago
I have tried the LLM Compiler paradigm. The joiner seems to lose context of the available tools. Not sure how to handle this.
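(One hedged workaround sketch, not the actual LLMCompiler joiner prompt: render the tool names and descriptions into the joiner's system prompt so the replanning step keeps seeing them. The prompt wording below is entirely illustrative.)

```python
from langchain_core.prompts import ChatPromptTemplate

def build_joiner_prompt(tools) -> ChatPromptTemplate:
    # Render each tool's name and description into the system prompt so the
    # joiner never loses sight of what it can plan with.
    tool_text = "\n".join(f"- {t.name}: {t.description}" for t in tools)
    return ChatPromptTemplate.from_messages(
        [
            (
                "system",
                "You merge the task results and decide whether to answer or replan.\n"
                "The available tools are:\n" + tool_text,
            ),
            ("placeholder", "{messages}"),
        ]
    )
```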
@abirdutta326 10 months ago
Please also provide similar tutorials for LangChain JS; I think the community lags there compared to the Python SDK.
@itay7036 10 months ago
How do you handle human-in-the-loop in this case?
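(Not an answer from the video, but a minimal sketch of the interrupt-based approach LangGraph exposes: compile with a checkpointer and `interrupt_before` a sensitive node so the run pauses for review. The graph shape and node names are assumptions, and the MemorySaver import path varies across langgraph versions.)

```python
from typing import Annotated, TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, StateGraph
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]

def agent(state: State) -> dict:
    return {"messages": [("ai", "I intend to call the cleanup tool")]}

def tools(state: State) -> dict:
    return {"messages": [("ai", "cleanup tool ran (simulated)")]}

builder = StateGraph(State)
builder.add_node("agent", agent)
builder.add_node("tools", tools)
builder.set_entry_point("agent")
builder.add_edge("agent", "tools")
builder.add_edge("tools", END)

# Pause before the tool node so a human can inspect and approve the pending action.
graph = builder.compile(checkpointer=MemorySaver(), interrupt_before=["tools"])

config = {"configurable": {"thread_id": "demo"}}
graph.invoke({"messages": [("user", "please clean up old records")]}, config)
print(graph.get_state(config).next)  # ('tools',) -- waiting for approval
graph.invoke(None, config)           # resume once the human signs off
```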
@LjaDj5XQKey9mSDxh4 2 months ago
Would be cool to have a LangGraph LLM Compiler implementation in TypeScript.
@beratcimen1954 10 months ago
Can LangSmith trace native LLM API calls?
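(For what it's worth, LangSmith can trace plain SDK calls outside of LangChain; a hedged sketch assuming the `langsmith` and `openai` packages, with `LANGCHAIN_TRACING_V2` and `LANGCHAIN_API_KEY` set in the environment. The model name is illustrative.)

```python
import openai
from langsmith import traceable
from langsmith.wrappers import wrap_openai

# Wrapping the raw OpenAI client sends each completion call to LangSmith.
client = wrap_openai(openai.OpenAI())

@traceable  # also traces this plain Python function as a parent run
def answer(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

print(answer("What does LangSmith trace?"))
```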
@MrEnriqueag 9 months ago
The last two agents are wrong. ReWOO never sees the responses in the solver; we only pass the plan and the tasks, not the responses to the plan. And LLMCompiler doesn't handle task replacement; it calls everything in parallel with the tool-result references instead of the data itself.
@zzzzzzzzzzzzzzzzzz1g 9 months ago
All of these should be reshot for JS as well.
@MrEnriqueag 10 months ago
Using prompts from LangChain Hub is a mess because the LangChain documentation doesn't cover how to use them, modify them, etc. To top it off, sometimes you can't even make changes to them in the hub, because committing changes to multimodal prompts is broken. I had to reverse-engineer them and then create my own class to manage prompt updates. Please show documentation on how to edit them, or at least offer an alternative.
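(A hedged sketch of the local-editing workaround described above: pull the prompt once, then rebuild a modified copy in code rather than committing back to the hub. The handle is just a public example, and the slicing assumes the first message is the system message.)

```python
from langchain import hub
from langchain_core.prompts import ChatPromptTemplate

# Pull once and inspect what you actually got back.
prompt = hub.pull("hwchase17/openai-functions-agent")  # example public handle
print(prompt.messages)

# Rebuild a local copy with a swapped system message; no hub commit needed.
local_prompt = ChatPromptTemplate.from_messages(
    [("system", "You are a terse assistant.")] + list(prompt.messages[1:])
)
```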
@nabilmuthanna6083 10 months ago
This is a video of “you can see …”
@xspydazx 4 months ago
The thing with agents, tools, chains, and graphs: they are all the same. A chain is a specialized agent performing a single task, which can be called to perform a step in a more complex graph, which is itself a route or branch. This is a multi-agent workflow, so we should be determining best practices.

For tools: although it's easy to build a single function for one small component, for tasks such as coding a graph is the better fit, and that graph should be exposed as a single tool! So after the agent receives the query, it rewords it into the best format to hand to the graph. If requirements gathering is needed, that agent acts as the human in the loop and only defers to the real human when absolutely necessary: the agent can approve rudimentary actions that don't need sign-off, while consequential decisions are forwarded back to the human in the loop.

So if we give the model multiple graphs plus a requirements-gathering tool, the agent can initiate agentic workflows. We can design entire chains of thought, or methodologies like these, to be implemented by a single front agent. From the user's perspective the model is performing all the tasks and returning quality outputs. In fact, each agent can even be the same model deployed behind an API surface! How is this possible without the model getting confused? Each agent is its own agent and the model is just the backbone, so each query is a new chat unless you pass the previous content to the model. Agents are just wrappers around the model.

We could have multimedia RAG and a single model performing all tasks: our tokenizers can tokenize any format, and the input can be handled either by a tool or by the input process itself, e.g. tokenizing images to be captioned and added to the context as base64 plus a caption. This can be passed, if required, from the state to the generator models or agents. Or we could have multiple models, one per modality, with the input determining which agent is used; the input model would then be an agent without an LLM backbone, a simple chatbot or intent model (only detecting the input intent, not the task intent) that forwards the right media to the right tokenizer or graph.

LangGraph is quite good! There are many graph frameworks out there, but conceptually a graph is a tree with pointers to the next node. We should treat each route as a single execution and each conditional node as a pause, or an end node that begins a new route, either to the end or to another start point (the conditional node). Hence graph complexity, and a methodology of executing one set of adjacent nodes (a route), then the next connected set.
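(A minimal sketch of the "graph as a single tool" idea above: compile a small LangGraph workflow and wrap it with the `@tool` decorator so a front agent can invoke the whole workflow as one tool. The example graph, its nodes, and the tool name are assumptions, not anything from the video.)

```python
from typing import Annotated, TypedDict

from langchain_core.tools import tool
from langgraph.graph import END, StateGraph
from langgraph.graph.message import add_messages

class CodeState(TypedDict):
    messages: Annotated[list, add_messages]

def plan(state: CodeState) -> dict:
    return {"messages": [("ai", "plan: write the function, then tests")]}

def write_code(state: CodeState) -> dict:
    return {"messages": [("ai", "def add(a, b): return a + b")]}

builder = StateGraph(CodeState)
builder.add_node("plan", plan)
builder.add_node("write_code", write_code)
builder.set_entry_point("plan")
builder.add_edge("plan", "write_code")
builder.add_edge("write_code", END)
coding_graph = builder.compile()

@tool
def coding_workflow(task: str) -> str:
    """Run the multi-step coding graph on a task and return its final message."""
    result = coding_graph.invoke({"messages": [("user", task)]})
    return result["messages"][-1].content

# A front agent would get [coding_workflow] in its tool list and route coding
# requests through it, which is the single-front-agent pattern described above.
```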