Here is the source code repo: github.com/aidev9/tuts/tree/main/langchain-ollama
@TimHoffeller · a month ago
Hi, pretty cool stuff and very helpful! You mentioned the RAG approach. A follow-up with this approach would be very cool 😊. Thanks a lot for your work!
@AISoftwareDeveloper · a month ago
Hi @TimHoffeller, yes RAG with LangChain would provide a more scalable solution. Luckily, my next video will be covering that very topic with LangChain, Supabase and a local Ollama. Check back in the next few days and let me know your thoughts. Thank you for the comment.
@AISoftwareDeveloper · 28 days ago
@@TimHoffeller the RAG video is now available. Any feedback is appreciated.
@arunbhati101 · a month ago
Great explanation and easy to understand example. Thanks for sharing your knowledge.
@AISoftwareDeveloper · a month ago
Thank you @arunbhati101, I am glad you got something out of it. What videos would you like to see in the future?
@Zenith_pop · a month ago
One thing: in the case of bigger models you need more memory, so VRAM becomes an issue and you need quantization. But then accuracy suffers at 8-bit or 4-bit, and code errors happen in that case too. Smaller models are useless for being applicable in real-life projects. But good video.
@AISoftwareDeveloper · a month ago
Yes, for bigger models, VRAM and a GPU can be highly beneficial. And you're right, bigger models will deliver better results for real-life projects. Thank you for the comment.
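To put rough numbers on the VRAM trade-off discussed above, here is a back-of-the-envelope sketch. It covers model weights only (the KV cache and activations add more on top), and the 8B parameter count is just an illustrative example:

```python
# Rough VRAM needed just for model weights (ignores KV cache and activations).
# Illustrative numbers only: an 8B-parameter model at common precision levels.

def weight_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB for a given parameter count and precision."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for label, bits in [("fp16", 16), ("8-bit", 8), ("4-bit", 4)]:
    print(f"8B model @ {label}: ~{weight_vram_gb(8, bits):.0f} GB")
# fp16: ~16 GB, 8-bit: ~8 GB, 4-bit: ~4 GB
```

This is why 4-bit quantization is attractive on consumer GPUs: it cuts weight memory to a quarter of fp16, at the accuracy cost the comment describes.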
@AaronBlox-h2t · a month ago
Cool video. The source code is great. Also, including the relevant URLs in the video would be good. Thanks.
@AISoftwareDeveloper · a month ago
Thanks, here are the links, also available in the video description now. github.com/pillarstudio/standards/blob/master/reactjs-guidelines.md gist.github.com/nlaffey/99fdb37c0ba286f38a0582564061dea8
@adriangpuiu · a month ago
What would be nice is a meta-agent that creates dynamic tools and reinserts them into the flow when needed.
@AISoftwareDeveloper · a month ago
Yeah, that would be cool. Based on the user prompt, the meta-agent creates its own tools and executes them as needed. Controlling that agent will be a beast.
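The core of such a meta-agent could be sketched as a runtime tool registry: the agent registers new callables mid-flow and dispatches to them later. This is a toy, dependency-free illustration; all names here are made up for the sketch:

```python
# Toy sketch of a meta-agent's dynamic tool registry (illustrative only).
# Idea: the agent can register new tools at runtime and dispatch to them later.
from typing import Callable, Dict

class ToolRegistry:
    def __init__(self) -> None:
        self._tools: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        """Called by the meta-agent when it decides a new tool is needed."""
        self._tools[name] = fn

    def run(self, name: str, arg: str) -> str:
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](arg)

registry = ToolRegistry()
registry.register("shout", lambda s: s.upper())  # tool "created" mid-flow
print(registry.run("shout", "hello"))  # HELLO
```

The hard part, as the reply notes, is control: guarding what generated tools are allowed to do before they get executed.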
@umarkhan8787 · a month ago
Nice, man 👍🏻. Just wanted to know: where should I append my tool/function response, in system or in tools? 😅
@AISoftwareDeveloper · a month ago
You'd want to append the tool response into the messages array. You can then filter that array by instance type, passing ToolMessage as the filter. That will do the trick 👍
@genzprogrammer · a month ago
Thanks, bro 🎉 Perfect timing. I was looking for something like this.
@AISoftwareDeveloper · a month ago
Thanks for the comment. What would you like to see next?
@genzprogrammer · a month ago
@@AISoftwareDeveloper How can we load a complete codebase from a Git repo and implement RAG for that codebase?
@AISoftwareDeveloper · a month ago
@@genzprogrammer Easy. What would the RAG do: are you thinking of a chatbot to talk to the codebase, or something more advanced, like artifact generation?
@genzprogrammer · a month ago
@@AISoftwareDeveloper Thinking of building something like a chat that takes my query and searches the vector DB for which file I should change and what I should change.
@AISoftwareDeveloper · a month ago
@@genzprogrammer That's a great idea. So it will tell you which files need to be updated based on a feature change you're considering. Can you give me one or two examples of queries you'd ask?
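The retrieval half of that "which file should I change?" chat can be sketched without any dependencies. A real setup would embed chunks into a vector DB (e.g. Chroma or Supabase pgvector) and use cosine similarity; here a naive token-overlap score stands in for vector search, and the file contents are made up:

```python
# Dependency-free sketch of "ask the codebase which file to change".
# Token overlap stands in for vector similarity; real RAG would embed the chunks.
import re
from typing import List, Tuple

def chunk_file(path: str, text: str, size: int = 40) -> List[Tuple[str, str]]:
    """Split a file into line-based chunks tagged with their source path."""
    lines = text.splitlines()
    return [(path, "\n".join(lines[i:i + size])) for i in range(0, len(lines), size)]

def score(query: str, chunk: str) -> int:
    """Count shared lowercase tokens between query and chunk (toy similarity)."""
    q = set(re.findall(r"\w+", query.lower()))
    c = set(re.findall(r"\w+", chunk.lower()))
    return len(q & c)

def which_files(query: str, corpus: List[Tuple[str, str]], k: int = 2) -> List[str]:
    """Return the source paths of the k best-matching chunks."""
    ranked = sorted(corpus, key=lambda pc: score(query, pc[1]), reverse=True)
    return [path for path, _ in ranked[:k]]

corpus = chunk_file("src/button.tsx", "export function Button() { /* render a button */ }")
corpus += chunk_file("src/api.ts", "export async function fetchUsers() { /* call backend API */ }")
print(which_files("where do I change the button styling?", corpus, k=1))  # ['src/button.tsx']
```

The generation half would then pass the top chunks plus the query to the LLM and ask it to explain what to change in those files.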
@AlexK-xb4co · a month ago
Ollama models are configured with a 2k context size by default, FYI.
@AISoftwareDeveloper · a month ago
Thanks, that’s great to know.
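For readers hitting that limit: Ollama exposes the context window as the `num_ctx` option. A quick sketch of where it goes (the prompt and model tag are placeholders):

```python
# Raising Ollama's ~2k default context window via the num_ctx option.
# 1) As a raw payload for POST http://localhost:11434/api/generate:
payload = {
    "model": "llama3.2",
    "prompt": "Summarize this long document: ...",
    "options": {"num_ctx": 8192},  # override the 2048 default
}

# 2) From LangChain, ChatOllama accepts the same knob as a constructor arg:
#    ChatOllama(model="llama3.2", num_ctx=8192)
print(payload["options"]["num_ctx"])  # 8192
```

Larger `num_ctx` values consume more memory, so this interacts with the VRAM discussion earlier in the thread.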
@sathishbabu3322 · a month ago
Awesome video. Can you teach me how to convert this code to a .py file so I can run it on my local machine in the Visual Studio Code app and check it with other compatible models?
@AISoftwareDeveloper · a month ago
Thank you for the comment. Yes, you can save a Jupyter notebook and run it as a Python script locally. Watch this to learn how: Jupyter Notebooks in VS Code on MacOS kzbin.info/www/bejne/aaHFd5VtjZeCmLc
@sathishbabu3322 · a month ago
@@AISoftwareDeveloper I am using a Windows machine. Hope that video covers it as well. And thank you so much for the quick response. Really appreciated.
@AISoftwareDeveloper · a month ago
The video doesn't cover Windows, but once you have VS Code installed, running Python is the same. I hope that helps. Thank you for your comment.
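For the conversion itself: a `.ipynb` file is just JSON, so extracting its code cells into a script needs only the standard library. (In practice `jupyter nbconvert --to script notebook.ipynb`, or VS Code's "Export as Python Script" command, does the same job; the tiny fake notebook below is purely for illustration.)

```python
# A notebook is JSON: pull the source of every code cell into one script string.
import json

def notebook_to_script(ipynb_text: str) -> str:
    """Concatenate the source of every code cell in a .ipynb JSON document."""
    nb = json.loads(ipynb_text)
    cells = [
        "".join(c["source"])
        for c in nb.get("cells", [])
        if c.get("cell_type") == "code"
    ]
    return "\n\n".join(cells)

# Tiny fake notebook for illustration; markdown cells are skipped:
fake = json.dumps({"cells": [
    {"cell_type": "markdown", "source": ["# Title"]},
    {"cell_type": "code", "source": ["print('hello')"]},
]})
print(notebook_to_script(fake))  # print('hello')
```

Writing the returned string to a `.py` file gives you something runnable from any editor, on Windows or macOS alike.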
@Aksafan · a month ago
Hey, this was surprisingly helpful!! Thank you so much! Can I ask for a source code please?
@AISoftwareDeveloper · a month ago
Here you go: github.com/aidev9/tuts/tree/main/langchain-ollama
@AISoftwareDeveloper · a month ago
Thanks for the comment. What topic would you like to see next?
@Aksafan · a month ago
@@AISoftwareDeveloper Some tuning would be great, because the quality of the code (those React components) is so bad, tbh. I asked several frontend engineers to look at it and it was not the best. Maybe it's because of the models I used (llama 3.2-3b and 3.1-7b). Also, it would be great to know how to prepare a proper guideline for a model to use, because links (even to a GitHub raw MD file) are not working when there are a bunch of other links on that page.
@AISoftwareDeveloper · a month ago
Here is the repo: github.com/aidev9/tuts/tree/main/langchain-ollama
@agraciag · a month ago
Next step? Using voice to give it the prompt :-)
@AISoftwareDeveloper · a month ago
Nice one, @agraciag
@MubashirullahD · a month ago
Build it in 10 minutes, waste days in the future dealing with dependency issues
@AISoftwareDeveloper · a month ago
Not to worry, mate. We’ll have AI to fix all dependency issues 😉
@codelinx · a month ago
Sounds like someone has SDE… small developer energy 😂😂🫰
@MubashirullahD · a month ago
@codelinx XD
@NanoGi-lt5fc · a month ago
I have one question: can I somehow create a GitHub Copilot that just gives me suggestions on what I need to do, using this?
@AISoftwareDeveloper · a month ago
Hey, definitely. You can use this to create your own GitHub Copilot as a VS Code extension. Here's how to get started: code.visualstudio.com/api/get-started/your-first-extension
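The extension itself would be TypeScript, but its suggestion backend can simply hit Ollama's local REST API. A sketch of that backend: `http://localhost:11434` is Ollama's default address and `/api/generate` its real endpoint, while the prompt format, function names, and model tag here are illustrative assumptions.

```python
# Sketch of a Copilot-style suggestion backend a VS Code extension could call.
# Requires a running local Ollama server for suggest(); build_prompt() is pure.
import json
import urllib.request

def build_prompt(file_text: str, cursor_context: str) -> str:
    """Assemble a plain completion prompt from editor state (toy format)."""
    return (
        "You are a coding assistant. Given this file:\n"
        f"{file_text}\n"
        f"Suggest what to do next at: {cursor_context}\n"
    )

def suggest(prompt: str, model: str = "llama3.2") -> str:
    """POST to Ollama's /api/generate endpoint and return the model's text."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

prompt = build_prompt("def add(a, b):", "the function body")
print(prompt.splitlines()[0])  # You are a coding assistant. Given this file:
# suggestion = suggest(prompt)  # uncomment with Ollama running locally
```

The extension would call something like `suggest()` on a keystroke or command, then surface the returned text as an inline suggestion via the VS Code API.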
@NanoGi-lt5fc · a month ago
@@AISoftwareDeveloper Thanks, sir, I will try this. Will it give me suggestions like Copilot does? And can I publish that extension as well?