Conditional Edges in LangGraph
25:02
28 days ago
Comments
@AtalUpadhyay
@AtalUpadhyay 16 hours ago
Two videos in the playlist are showing as unavailable. I'd appreciate it if you could make them publicly available.
@AISoftwareDevelopers
@AISoftwareDevelopers 13 hours ago
Thanks, they are coming online as soon as the coding exercises are complete and fully tested. Thanks for the nudge. Happy new 2025 - may this year be the year of learning 😁🎁🎄
@darshanasamanpura9325
@darshanasamanpura9325 16 hours ago
Thank you so much, it's a very clear explanation.
@AISoftwareDevelopers
@AISoftwareDevelopers 13 hours ago
I am glad you found the content useful. Happy 2025 and may this year be the year of learning 😁🎁🎄
@intelligentestate
@intelligentestate 1 day ago
Hey, I am really curious how you were able to get Hermes in GPT4All to view online photos. I've tried using LocalDocs to connect my models to web content, but no dice...
@AISoftwareDevelopers
@AISoftwareDevelopers 1 day ago
What version of gpt4all are you using? It was reported earlier that certain versions after 3.4.2 had issues with some of the models. In my case, no special configuration was required.
@intelligentestate
@intelligentestate 12 hours ago
@AISoftwareDevelopers I updated to their latest version (1.6.1). It's on Windows. I can get my visual models to hallucinate an answer (based on whatever the address says), but other than that, no dice.
@jeblincoln9032
@jeblincoln9032 2 days ago
Exciting! Thanks for the tutorial.
@AISoftwareDevelopers
@AISoftwareDevelopers 2 days ago
I am glad you find it exciting, too! It's basically one of the most exciting frameworks to come out in 2024. It's so cool to see a trend towards simplicity. In my opinion, frameworks will shift from DSLs to idiomatic language constructs, and Pydantic is leading the way. Thanks for chiming in 😁👍🎄🎁
@AngusLou
@AngusLou 2 days ago
Thanks for the video. One quick question, what are the pros and cons for Pydantic and CrewAI agent framework?
@AISoftwareDevelopers
@AISoftwareDevelopers 2 days ago
@AngusLou Great question. While not an expert, I can share from experience that CrewAI is more established, and it uses domain-specific structures to build and configure agents. PydanticAI is newer, and it relies on idiomatic Python code, aiming to simplify agent management. Perhaps others with more experience can also chime in.
@micbab-vg2mu
@micbab-vg2mu 3 days ago
Thanks :)
@AISoftwareDevelopers
@AISoftwareDevelopers 3 days ago
You're welcome! Cheers 😁
@dreamphoenix
@dreamphoenix 4 days ago
Thank you.
@AISoftwareDevelopers
@AISoftwareDevelopers 4 days ago
@dreamphoenix Thanks for the support.
@CiaoKizomba
@CiaoKizomba 4 days ago
Why AzureOpenAI?
@AISoftwareDevelopers
@AISoftwareDevelopers 4 days ago
Many corporate users use Azure-based OpenAI APIs as a way to control access, costs and privacy. I used it as an example of an OpenAI-compatible model passed as a client to PydanticAI. Any other model will do just fine. Thank you for your comment! 👍
@CiaoKizomba
@CiaoKizomba 4 days ago
I noticed that in the LangChain documentation they use a configuration.py file. I understand that this is good practice for software engineers. Can you do a video explaining why this is a good practice for LangGraph?
@AISoftwareDevelopers
@AISoftwareDevelopers 4 days ago
Roger that
@CiaoKizomba
@CiaoKizomba 4 days ago
Wow, this is quality content. Why did it take so long for YouTube to recommend it, or even just have this video show up in YouTube search?
@AISoftwareDevelopers
@AISoftwareDevelopers 4 days ago
Thanks, I'm glad you enjoyed the content. It is normal for small creators to struggle with visibility when they first start. Over time, and with comments like this, YouTube starts showing it to more people. Thank you so much for your support. ❤️
@fruitshopowner2505
@fruitshopowner2505 4 days ago
This has nothing to do with agents lol. This is completely normal, sequential, explicit calling of wrappers around random LLMs. This is what LangChain was doing 3 years ago.
@AISoftwareDevelopers
@AISoftwareDevelopers 4 days ago
Ha! Exactly as you said - PydanticAI uses plain Python, making it look like just a wrapper. Stay tuned for future videos where I'll show how to customise the system prompts and add tools and agent memory. Then the power of this simple yet powerful framework will become more visible. Thanks for the comment. Ciao! 😁🎁🎄
@orafaelgf
@orafaelgf 5 days ago
Great video. If possible, it would be great to see something like this but with agents interacting with multiple calendars (Google Calendar) through the prompt: asking to create, delete or update events. It would be amazing with your teaching method.
@AISoftwareDevelopers
@AISoftwareDevelopers 5 days ago
Yes, I will keep that in mind for future videos. Thanks for the suggestion!
@SamiSabirIdrissi
@SamiSabirIdrissi 5 days ago
Extremely simple demo
@AISoftwareDevelopers
@AISoftwareDevelopers 5 days ago
Thanks for the support. More complicated demos are on the way. As an introduction, I wanted to keep it simple and show how easy it is to create an agent. 🕵️‍♂️
@ErginSoysal
@ErginSoysal 5 days ago
Thanks for the code, it's quite helpful. But both of your agents return `HumanMessage` instances, so you needed to re-sort/reorganize the messages to overcome that issue, although it's not required for this project. I guess this was a fix for a misordered conversation from a former project, since all your agents act as a human.
@AISoftwareDevelopers
@AISoftwareDevelopers 5 days ago
Thanks for the comment! I'll double-check the code and see if there's a fix. In the meantime, if you already solved the problem, feel free to submit a pull request so others don't face the same issue. It will be much appreciated ❤️
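For anyone hitting the same issue, here is a rough sketch of one possible fix (not the repo's actual code): tag each agent's reply as an assistant-role message carrying the agent's name, instead of reusing HumanMessage. With LangChain you would return AIMessage(content=..., name=agent_name); the framework-free equivalent is:

```python
# Sketch: tag agent replies so the supervisor can tell them apart from
# real user input. The dict shape mirrors OpenAI-style chat messages;
# with langchain you would build AIMessage(content=..., name=agent_name).

def tag_agent_reply(content: str, agent_name: str) -> dict:
    """Build a message marked as coming from an agent, not a human."""
    return {"role": "assistant", "name": agent_name, "content": content}

def is_user_message(message: dict) -> bool:
    """Only 'user' messages should count as new human input."""
    return message.get("role") == "user"
```

With messages tagged this way, the supervisor can skip agent replies when deciding whether new user input has arrived.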
@Baleion
@Baleion 5 days ago
I like that you start with the simple examples and get more robust. Well organized, valuable content. Thanks! Subscribed!
@AISoftwareDevelopers
@AISoftwareDevelopers 5 days ago
Thank you for the support. I am glad you found the content helpful. 😄
@mahajanravish
@mahajanravish 5 days ago
Hi, this is great, but I hope this is your first and not your last video. I hope to see how to use tools, which are the crux of any agentic AI framework. I went through the code but I am stuck on how to register a tool (a Python function) with the Agent. It uses a class called RunContext. I am not really getting the code.
@AISoftwareDevelopers
@AISoftwareDevelopers 5 days ago
There will be more videos, including tools. I have all the code examples ready; it's just a matter of recording, editing and posting. Please stay tuned and thank you for the support!
@MajesticGeek
@MajesticGeek 5 days ago
This was well done, thank you! Standing by for future PydanticAI tutorials from you, please and thank you!
@AISoftwareDevelopers
@AISoftwareDevelopers 5 days ago
@MajesticGeek Roger that! I will do my best to get them out on a relatively accelerated schedule 🤞🎄🎁
@garethjax
@garethjax 5 days ago
Oh, very interesting, thanks! Subscribed!
@AISoftwareDevelopers
@AISoftwareDevelopers 5 days ago
Thanks for the sub!
@60pluscrazy
@60pluscrazy 5 days ago
Very good 🎉🎉🎉
@AISoftwareDevelopers
@AISoftwareDevelopers 5 days ago
Thank you for the feedback. I am glad you enjoyed the video 😀
@SenthilKumaran4u
@SenthilKumaran4u 5 days ago
Deserves subscription. Done.
@AISoftwareDevelopers
@AISoftwareDevelopers 5 days ago
Thank you for the support. I am glad you found the content helpful. Cheers, mate! 😄
@SenthilKumaran4u
@SenthilKumaran4u 5 days ago
This is super cool - another good framework to try and explore. Thanks for sharing. Your explanation is clear and to the point, especially not wasting time typing out the code.
@AISoftwareDevelopers
@AISoftwareDevelopers 5 days ago
Hey, thanks for the feedback. I will try to stick to this format if it helps get the points across more easily.
@Akshatgiri
@Akshatgiri 5 days ago
Great intro. More videos on pydantic ai agents please.
@AISoftwareDevelopers
@AISoftwareDevelopers 5 days ago
Definitely! A good framework deserves more videos 😁
@JNET_Reloaded
@JNET_Reloaded 6 days ago
Nice, ty :D
@AISoftwareDevelopers
@AISoftwareDevelopers 5 days ago
You are welcome. I am glad you found the content helpful.
@capt2026
@capt2026 10 days ago
Newbie here. Very interesting, thanks. You have several models downloaded - are they on an external drive? And what is the configuration of your machine?
@AISoftwareDevelopers
@AISoftwareDevelopers 10 days ago
The models are stored on my local hard drive: a 14-inch MacBook Pro (late 2023) with an M3 and 1TB of storage. You can store them on an external drive if space is an issue; the operations will be slower, but it will work. Thanks for the comment!
@pritshupanda7262
@pritshupanda7262 10 days ago
Awesome video. One issue I am encountering: when I type "Hi", the HumanMessage is "Hello, how can I assist you today?". This gets fed back to the supervisor node, which treats it as a user message and calls the LLM tool again. This starts an endless conversation where the agents keep talking to themselves until I have to force-stop it. However, if I try "What is Python?", I get back a straightforward answer and the program ends normally. Any idea how to fix this? Is it something that can be fixed at the prompt level?
@AISoftwareDevelopers
@AISoftwareDevelopers 10 days ago
Yes, it can be fixed. Take a look at the pinned comment: you can enhance the system prompt or introduce a counter in the graph state to prevent runaway conversations among agents. If you need more help, jump on the Discord server and DM me in the tutorial-help channel. Cheers!
@dognini
@dognini 10 days ago
Thank you. This is great content.
@AISoftwareDevelopers
@AISoftwareDevelopers 10 days ago
Glad you enjoyed it
@naitik_patel
@naitik_patel 11 days ago
Can I use this as a local server and use its locally hosted API for my other projects? If so, that would be awesome; if not, I think that's a good feature for the next iteration ❤
@themax2go
@themax2go 11 days ago
Already done - see my other post.
@AISoftwareDevelopers
@AISoftwareDevelopers 11 days ago
Yes, as @themax2go pointed out, you can configure and expose an API endpoint and have other apps use the models.
@naitik_patel
@naitik_patel 11 days ago
@AISoftwareDevelopers Thanks, I will surely try it.
@mohameddonia6544
@mohameddonia6544 12 days ago
Hey, thanks for sharing! Does it have a limit on PDF file size? I've got some files that are almost 5GB. Will it work?
@AISoftwareDevelopers
@AISoftwareDevelopers 12 days ago
I am not aware of any limits, but parsing a PDF of that size will be a challenge for any application, not just GPT4All. A powerful CPU, tons of RAM and a GPU may help. Otherwise, you may want to parse the PDFs into Markdown first, using something like LlamaParse (paid), and then process the MD files in GPT4All. The embeddings will still take time, though.
@SteveHodgkiss1
@SteveHodgkiss1 12 days ago
As it's an installed application, is there any way to use the local models inside an editor such as Visual Studio or Windsurf?
@themax2go
@themax2go 12 days ago
Yes - in the options, activate the OpenAI endpoint.
@AISoftwareDevelopers
@AISoftwareDevelopers 12 days ago
I don’t see a reason why not. The models are downloaded to a folder you can configure and therefore load and use from anywhere else you need to. Great question!
@albertcurtis1201
@albertcurtis1201 13 days ago
If you use their 3.5.0 and above, you won't be able to side-load models... Downgrade to 3.4.2, which rocks.
@AISoftwareDevelopers
@AISoftwareDevelopers 13 days ago
I wasn't aware of this, but after checking, they have already released three minor updates since the video was recorded. A fast-paced team, for sure 😃
@albertcurtis1201
@albertcurtis1201 12 days ago
@AISoftwareDevelopers Those minor updates still don't load most HF models out of the box. Your luck may vary. I use 3.4.2.
@adamtreat7582
@adamtreat7582 12 days ago
You can use side-loaded models just fine, but it might require tweaking the chat template. The latest version, 3.6.0, which was just released, does have replacements and examples for several well-known side-loaded models.
@AISoftwareDevelopers
@AISoftwareDevelopers 12 days ago
@adamtreat7582 Thanks for chiming in. What is a good link to learn more about this? If there's enough interest, maybe I can throw together a quick tutorial on how to side-load models.
@geelws8880
@geelws8880 14 days ago
I would love a video on how to build custom high-quality datasets with Nomic.
@albertcurtis1201
@albertcurtis1201 13 days ago
You really can't with their PC models... they are very stupid.
@AISoftwareDevelopers
@AISoftwareDevelopers 13 days ago
Can you elaborate on the use case and the tools? Is this with Nomic Atlas or GPT4All?
@AISoftwareDevelopers
@AISoftwareDevelopers 16 days ago
As pointed out by a few viewers, for certain queries the agent network may go into a loop where none of the agents can provide a sufficient answer and the supervisor gets stuck in redirection paralysis. The way to prevent this is through a revised system prompt for the supervisor and the individual agents, plus a circuit breaker: explicit instructions that if an agent is not providing the answer, the flow moves on to the LLM agent. This prevents unnecessary round trips and makes the graph perform faster.
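A rough sketch of the circuit-breaker idea (the state keys and the hop budget here are illustrative assumptions, not the tutorial's exact code): the routing function you pass to add_conditional_edges consults a hop counter kept in the graph state and falls back to the LLM agent once the budget is exhausted.

```python
# Illustrative circuit breaker for a LangGraph supervisor. The "hops",
# "answered" and "next_agent" state keys and the MAX_HOPS budget are
# assumptions for this sketch, not the video's actual implementation.

MAX_HOPS = 3  # assumed redirection budget before falling back

def route_next(state: dict) -> str:
    """Routing function for graph.add_conditional_edges on the supervisor."""
    if state.get("answered"):
        return "END"            # an agent produced a sufficient answer
    if state.get("hops", 0) >= MAX_HOPS:
        return "llm_agent"      # circuit breaker: stop the redirection loop
    return state.get("next_agent", "llm_agent")
```

Each supervisor pass would also increment state["hops"], so the counter actually advances between redirections.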
@jayjayjay000
@jayjayjay000 18 days ago
Great content! Any chance we can test the finished product?
@AISoftwareDevelopers
@AISoftwareDevelopers 17 days ago
Yes. The finished product can be tested by checking out the code from GitHub, adding your OpenAI keys and running the Streamlit script. Have you checked out the code? Any issues?
@SimonBransfieldGarth
@SimonBransfieldGarth 19 days ago
Really great video, thank you!
@AISoftwareDevelopers
@AISoftwareDevelopers 19 days ago
Glad it helped! Care to elaborate on what you enjoyed about the video? It helps small creators like myself make better content.
@SimonBransfieldGarth
@SimonBransfieldGarth 19 days ago
@AISoftwareDevelopers Really clearly explained, and a very logical build-up from scratch. Great to follow along. Thank you!
@SimonBransfieldGarth
@SimonBransfieldGarth 19 days ago
@AISoftwareDevelopers I will clone the repo next week and try it out!
@AISoftwareDevelopers
@AISoftwareDevelopers 18 days ago
Great - feel free to DM me on the Discord server if you run into any issues.
@Condinginsight
@Condinginsight 20 days ago
Wow... classic... many thanks for the in-depth knowledge, bro.
@AISoftwareDevelopers
@AISoftwareDevelopers 20 days ago
@Condinginsight, thank you for the comment.
@Jose-d5h4c
@Jose-d5h4c 20 days ago
Thanks for the tutorial. You have to explain how you arrive at these complex solutions! Specific use cases can be very disruptive - for example, a real-time travel manager that is capable of searching for activities, flights, hotels...
@AISoftwareDevelopers
@AISoftwareDevelopers 20 days ago
@Jose-d5h4c, you're absolutely right: multi-agent networks can open possibilities for disruptive use cases, like the travel example you provided. A few years ago Andreessen Horowitz published the now-classic "Why Software Is Eating the World" a16z.com/why-software-is-eating-the-world. We are entering a phase where AI will be eating the world, and much sooner than people predict. It took computers and software 70 years to start eating the world, but AI is already almost there in less than 10.

But that's not the scariest part. When quantum computing goes mainstream, a tsunami will be unleashed, changing paradigms about work and society well beyond the currently prognosticated scenarios. The final blow will come from developments in nanotechnology and robotics, where we will reach the point of being able to grow organic matter and shape it into any robotic being. Quantum chips running AGI inside completely human-looking and human-acting robots: that's the true disruption.

So, what's to be done? While this may sound dark, it's not necessarily bad news. We are nearing the end of the knowledge era that started with Descartes and Gutenberg's press. From the 15th century onwards, we've worshipped at the altar of knowledge. As recently as 20-30 years ago, hoarding knowledge meant big business. Take the MLS real estate listing services in the States: the fact that they held the information meant a whole industry profited huge sums just by locking everyone else out of valuable knowledge. But now knowledge is everywhere, and it is almost free, so it has become a commodity. The king is dead, long live... wait a second, what's next?

We've entered the age of wisdom. Knowledge alone is not enough anymore; the age of AI will bring us unlimited wisdom. Example: while the MLS can tell a buyer the available homes for sale, their physical properties and price, a true multi-agent AGI will combine millions of data points and understand whether a home purchase - that home purchase - is the right or best choice for us. Knowledge has transformed into wisdom. The next natural question, then, with all that wisdom coming [almost] for free: what do we do with it? Well, that will be a deeply personal choice. For my part, I choose to remain optimistic and think all of this will benefit humanity and make us live in a better world. Thanks for the comment and sorry for the long answer. I hope it made a little bit of sense.
@Lirim_K
@Lirim_K 22 days ago
The legend is at it again! Going to watch this later - thanks a lot, man! A video on how to build local agents using Ollama would be great. Many European companies have to adhere to GDPR and don't want their data to leave the EU. If you use OpenAI, they will store your data in the US for 30 days, which is a no-go for these EU companies. What tools and hardware are needed for local inference?
@AISoftwareDevelopers
@AISoftwareDevelopers 21 days ago
Local agents with Ollama is a great idea. The key will be to find a model that supports agent calling with structured outputs and tools. On small machines the 7B models typically struggle with these tasks, but the 37B and 80B models do a better job, for which one needs GPUs. Thanks for the valuable suggestion!
@tradertube
@tradertube 23 days ago
Great video. How can I get the Azure API key?
@AISoftwareDevelopers
@AISoftwareDevelopers 23 days ago
Thanks for the comment. You can start here: techcommunity.microsoft.com/blog/educatordeveloperblog/getting-started-with-azure-ai-studio/4095602. You may also consider using an OpenAI model/key, which may be a bit easier than setting up Azure AI Studio. Both will allow selecting a model and performing the RAG operations in the tutorial, for roughly the same price. I hope this helps.
@albertoavendano7196
@albertoavendano7196 27 days ago
Man, you are a lifesaver... I will give you an infinite out of 10... thanks a lot. I just have a question: can I use GPT models locally, offline (such as GPT4All, Azure, etc.), just to compare the models myself?
@AISoftwareDevelopers
@AISoftwareDevelopers 26 days ago
pip install --upgrade --quiet langchain-community gpt4all

Open models.py and add these lines:

from langchain_core.callbacks import BaseCallbackHandler
from langchain_community.llms import GPT4All
from gpt4all import Embed4All

local_path = (
    "./models/Meta-Llama-3-8B-Instruct.Q4_0.gguf"  # replace with your local file path
)

count = 0

class MyCustomHandler(BaseCallbackHandler):
    # Print the first 10 streamed tokens, to confirm the model is responding
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        global count
        if count < 10:
            print(f"Token: {token}")
            count += 1

self.model_gpt4all = GPT4All(
    model=local_path,
    callbacks=[MyCustomHandler()],
    streaming=True)
self.model_gpt4all_embeddings = Embed4All()

In ingest.py and chat.py, use model_gpt4all_embeddings and model_gpt4all. That should be it!
@AISoftwareDevelopers
@AISoftwareDevelopers 26 days ago
Check out discord.gg/5RS7uVpN - it may be easier to message longer text on Discord. Thanks for the question! 😀
@AISoftwareDevelopers
@AISoftwareDevelopers 27 days ago
Adding memory to the graph can also significantly improve the quality of the outcomes, although it will cost extra tokens:

from langgraph.checkpoint.memory import MemorySaver

# Set the configuration needed for the state
config = {"configurable": {"thread_id": "1"}}

memory = MemorySaver()
graph = builder.compile(checkpointer=memory)
graph.invoke({"messages": [HumanMessage(content=user_input)]}, config)
@Lirim_K
@Lirim_K 28 days ago
Again, awesome video! I really appreciate these tutorials! One question: at around 22:18, the quality score does not improve (as you say) after the reviewer checks it - it's 850 again, and it still went to the summary despite the threshold being 900. If this score is not 100% reliable, isn't there a risk in putting this into production?
@AISoftwareDevelopers
@AISoftwareDevelopers 28 days ago
Hi @Lirim_K, you bring up an excellent point that I should have addressed in the video. In their current state (as of 12/24), I would not trust any AI-generated code to put straight into production. No matter how advanced the tool features are - Bolt.new, Cursor, Windsurf... - or how much the LLMs claim to understand the intent of the user, as of today I wouldn't put AI-generated code directly into production. What one could hope for is that AI reduces the burden on developers or testers by some percentage; that might be a win.

With that said, in the example the quality score indeed didn't improve between two runs, and the developer agent regenerated the code to get to 950. Final point: the "Quality Score" in the tutorial is a representative measure, somewhat removed from the comprehensive quality metrics you might find in static or dynamic code analyzers, such as cyclomatic complexity, potential defects, adherence to requirements, etc. It was used as an example to illustrate conditional edges in LangGraph and how one can control the agent flow.

I hope this answers your question. Thanks so much for chiming in. Make sure to check out the Discord server at discord.gg/5RS7uVpN for more conversations and insights. Cheers, mate.
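The threshold check itself is just a conditional-edge routing function. A sketch, with assumed node names ("developer", "summary") and state key rather than the repo's actual code:

```python
# Sketch of quality-score routing for a conditional edge. The node names
# and QUALITY_THRESHOLD are illustrative assumptions, not the repo's code.

QUALITY_THRESHOLD = 900

def route_on_quality(state: dict) -> str:
    """Loop back to the developer agent until the score clears the bar."""
    if state.get("quality_score", 0) >= QUALITY_THRESHOLD:
        return "summary"      # good enough: proceed to the summary node
    return "developer"        # below threshold: regenerate the code
```

A score of 850 should route back to "developer"; if a run reaches the summary at 850, the comparison or the state key being read is the first thing to check.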
@WASIF0332
@WASIF0332 28 days ago
Can you tell me which Ollama model I can use for this? I tried llama3.2:1b and llama3.1:8b and I am getting this error:

File "/venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 1228, in with_structured_output
    raise ValueError(msg)
ValueError: Received unsupported arguments {'method': 'json_mode'}
@AISoftwareDevelopers
@AISoftwareDevelopers 28 days ago
Right - with_structured_output is a special method, and unfortunately it is not universally supported by all LLMs. The 1B and 8B Ollama models will struggle with this; however, the 40B+ models might be able to give you an answer. You can also try the Mistral models, but I doubt the result will be any different. That is one of the reasons I used an OpenAI model for this tutorial. Normally I prefer local models, but to get structured output you really need a beefed-up LLM. Consider joining the Discord server at discord.gg/5RS7uVpN and we might be able to DM for a deeper discussion.
@MZeus-sq2pp
@MZeus-sq2pp 28 days ago
Such an incredible tutorial! Thank you~
@AISoftwareDevelopers
@AISoftwareDevelopers 28 days ago
I am glad you enjoyed the video!
@mentos.0
@mentos.0 1 month ago
Very nice tutorial. Gonna be super useful for all those super-long PDFs.
@AISoftwareDevelopers
@AISoftwareDevelopers 1 month ago
@mentos.0 Thank you for the comment. Let me know if you have any suggestions or questions. Cheers, mate! 😁👍
@tuddy6652
@tuddy6652 1 month ago
Amazing, man, thank you. I am currently struggling with a use case. I want to build a multi-agent system for data processing across different data samples. The user would ask a question, and an agent would have to split the task into subtasks that are easier to code. Next, an agent generates code for each subtask, executes it on the data and saves the result to another file. The next task is then picked up and the result from the previous iteration is analyzed like the one before it; this goes on until no tasks remain. Do you think this is feasible? I'm aiming to use LangGraph and some local LLMs like Llama or qwen2.5-coder. If you have some advice I would really appreciate it. Thanks again, man!
@AISoftwareDevelopers
@AISoftwareDevelopers 1 month ago
Yes, it could be feasible. Mileage will vary based on the complexity of the tasks, the implementation language and the LLM you use. But the idea is solid: analyse the big task, break it up into smaller tasks, create code, run the code, export a data file, move to the next agent. Between tasks, I would suggest a quality-control agent and conditional edges, to ensure each data export step is done per spec. Start with Python, if that's an option, then use LangGraph's Plan-and-Execute template, then integrate a REPL into the executor agents, and finally use a powerful LLM such as Sonnet 3.5, GPT-4o or Qwen. Watch this explained with a simple example: kzbin.info/www/bejne/Y6rFmISBZad4Y6s. More complex examples will require more rigor, but you get the idea. This is the LangGraph code for the graph: github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/plan-and-execute/plan-and-execute.ipynb
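The plan-then-execute loop can be sketched framework-free (the planner and executor shapes here are assumptions; in LangGraph each would be a graph node):

```python
# Framework-free sketch of a plan-and-execute loop: a planner splits the
# request into subtasks, and each executor step consumes the previous
# step's result. Both functions are illustrative stand-ins.
from typing import Callable, List

def plan(question: str) -> List[str]:
    """Stand-in planner; a real one would call an LLM to decompose the task."""
    return [f"step {i}: {question}" for i in range(1, 3)]

def run_plan(question: str, execute: Callable[[str, str], str]) -> str:
    """Run each subtask, feeding the previous result into the next one."""
    result = ""
    for task in plan(question):
        result = execute(task, result)  # e.g. generate code, run it, save output
    return result
```

In the real system, execute would be the code-generating agent plus a REPL run, and a quality-control check would sit between iterations.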
@AISoftwareDevelopers
@AISoftwareDevelopers 1 month ago
To add: consider passing structured output from agent calls and starting with a well-designed GraphState model. Study up on Pydantic; it will help greatly when passing structured data among agents.
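For illustration, a rough shape of such a state model (the field names are made up for this sketch); a pydantic.BaseModel would validate and coerce the fields automatically, while this stdlib dataclass shows the same idea with a manual check:

```python
# Illustrative GraphState for a multi-agent pipeline. Field names are
# assumptions. With Pydantic, type validation would happen automatically;
# the manual __post_init__ check below mimics a small part of that.
from dataclasses import dataclass, field
from typing import List

@dataclass
class GraphState:
    messages: List[str] = field(default_factory=list)
    pending_tasks: List[str] = field(default_factory=list)
    quality_score: int = 0

    def __post_init__(self):
        # Pydantic would enforce constraints like this declaratively.
        if not 0 <= self.quality_score <= 1000:
            raise ValueError("quality_score must be between 0 and 1000")
```

Having one typed state object that every agent reads from and writes to keeps the hand-offs between agents explicit and easy to debug.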
@tuddy6652
@tuddy6652 1 month ago
@AISoftwareDevelopers Thanks again, I appreciate your response. I will check out the resources and see how it goes.
@mushtaqueansari2
@mushtaqueansari2 1 month ago
Excellent explanation. I am trying to use it and I am getting a Chroma error:

Traceback (most recent call last):
  File "/home/ec2-user/RAG/POC_PDF/ingest.py", line 24, in <module>
    vector_store = Chroma(
  File "/home/ec2-user/python312_env/lib/python3.12/site-packages/langchain_chroma/vectorstores.py", line 313, in __init__
    self._client = chromadb.Client(_client_settings)
  File "/home/ec2-user/python312_env/lib/python3.12/site-packages/chromadb/__init__.py", line 334, in Client
    return ClientCreator(tenant=tenant, database=database, settings=settings)
  File "/home/ec2-user/python312_env/lib/python3.12/site-packages/chromadb/api/client.py", line 58, in __init__
    super().__init__(settings=settings)
  File "/home/ec2-user/python312_env/lib/python3.12/site-packages/chromadb/api/shared_system_client.py", line 19, in __init__
    SharedSystemClient._create_system_if_not_exists(self._identifier, settings)
  File "/home/ec2-user/python312_env/lib/python3.12/site-packages/chromadb/api/shared_system_client.py", line 26, in _create_system_if_not_exists
    new_system = System(settings)
  File "/home/ec2-user/python312_env/lib/python3.12/site-packages/chromadb/config.py", line 352, in __init__
    raise RuntimeError(
RuntimeError: Chroma is running in http-only client mode, and can only be run with 'chromadb.api.fastapi.FastAPI' or 'chromadb.api.async_fastapi.AsyncFastAPI' as the chroma_api_impl. See docs.trychroma.com/guides#using-the-python-http-only-client for more information.