Just build your own abstraction for your specific project. It is far easier than making LangChain fit your needs. Also, think about how much LangChain struggles with streaming, multimodal content, prompt caching, and structured output, and how much delay and lost opportunity LangChain would cost you before they squeeze these useful features into their already clumsy design, if they ever manage to. Pointless. Just grab what you need and build your own chain. The APIs are easy enough to use.
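A minimal sketch of such a hand-rolled "chain", assuming the OpenAI Python SDK as the provider (the comment names no specific API, so the client and model name below are illustrative):

```python
# Hypothetical minimal "chain": one prompt template plus one direct API call.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def run_chain(question: str, context: str) -> str:
    # Build the prompt yourself instead of going through a framework abstraction.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat-completion model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(run_chain("What grew last quarter?", "The report says revenue grew 12% last quarter."))
```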
@noway2488 · 3 days ago
Could you tell me whether it is okay to work with an already-made LLM rather than one we build ourselves?
@LuisJavierStudio · 4 days ago
It's already been a year since this video; how is it holding up now?
@AizenMD · 10 days ago
You explained what to consider when hosting an open-source LLM, not how to do it. Technically it's not clickbait, since it is a sort of "how to", but it's more "how to do it well" from a high-level perspective, not how to actually take a model and host it.
@pgshock · 14 days ago
Can you link the interview you mentioned you did with (Lusha?) about dashboards/BI/SQL?
@shanebluebutterfly · 26 days ago
I don't understand why you posit that the context window is a limitation in NLQ. The LLM only needs context about the database structure (table and column names and types); feeding the actual data into the context window seems ridiculously counter-productive. At that point, why not just have your data ready as JSON and use a model with a massive context? Let the LLM explore via SQL and leverage the relational structure of your data.
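A rough sketch of that schema-only approach (assumptions: the OpenAI Python SDK for the model call, SQLite standing in for the real database, and an invented "sales" table):

```python
# Sketch: give the LLM only the schema, let it write SQL, and run the SQL ourselves.
# Assumes the OpenAI Python SDK; the 'sales' table is a made-up example.
import sqlite3
from openai import OpenAI

client = OpenAI()
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (id INTEGER, region TEXT, amount REAL, sold_at TEXT)")

# Only the schema (not the rows) is sent to the model.
schema = "\n".join(row[0] for row in db.execute("SELECT sql FROM sqlite_master WHERE type = 'table'"))

def answer(question: str):
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption
        messages=[{
            "role": "user",
            "content": f"SQLite schema:\n{schema}\n\nReturn a single SQL query, with no prose or code fences, that answers: {question}",
        }],
    )
    sql = response.choices[0].message.content.strip()
    # In real code, validate/sandbox the generated SQL before executing it.
    return db.execute(sql).fetchall()

print(answer("What is the total sales amount per region?"))
```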
@avinashnair5064 · a month ago
Can we use this for open-source, locally hosted Ollama models?
@АндрейГалицков-б7я · 24 days ago
looks like this project is dead, no commits for 7 months
@velocirapture89 · a month ago
Front Page! Suddenly repressed trauma coming up
@velocirapture89 · a month ago
I’ve tried to get it to do my job for me! But it can’t. 😂
@Larimuss · a month ago
I'm learning, but I want to start simple and understand the code as much as possible. I wish there were a good RAG tutorial with a full code explanation.
@MstRima-f7r · a month ago
Hello sir, are you looking for a professional YouTube thumbnail designer and video SEO expert?
@starlord7526 · a month ago
I totally agree. Yesterday, I created a RAG project and added transcripts from all the videos of a certain YouTuber who has over 1,000 videos. However, when I tested it, the answers were vague and unclear, which defeated the entire purpose of building the project. I wanted to understand the concepts the creator explained in his videos, but I never could. RAG failed in that aspect.
@prolegoinc · a month ago
This happens in every RAG project. Did you attempt to quantify "the answers were vague?" That is the starting point. Start with a set of questions and expected answers organized in a spreadsheet. Then generate answers and paste them into your spreadsheet. Identify where it is falling short and why. You will make faster progress if you also capture the context from the original transcripts - then you can evaluate whether the LLM is getting confused via reasoning limitations, or whether it isn't getting the right context. See the performance report github.com/prolego-team/pdd/blob/main/Example-RAG-Formula-1/Performance-Report/Performance%20Report.md - download and look at the spreadsheet for an example.
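A bare-bones sketch of that spreadsheet-driven loop (the CSV layout and the generate_answer stub are invented; swap in your actual RAG pipeline):

```python
# Evaluation loop: read questions and expected answers, generate answers, save everything for review.
# Assumes eval_questions.csv has columns: question, expected_answer.
import csv

def generate_answer(question: str) -> tuple[str, str]:
    # Stub: replace with your real pipeline; return (answer, retrieved_context).
    return "placeholder answer", "placeholder retrieved context"

rows = []
with open("eval_questions.csv", newline="") as f:
    for row in csv.DictReader(f):
        answer, context = generate_answer(row["question"])
        row["generated_answer"] = answer
        row["retrieved_context"] = context  # lets you separate retrieval failures from reasoning failures
        rows.append(row)

with open("eval_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```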
@liluhlamperuh3747 · a month ago
You've described a really interesting idea for a talking assistant! A chatbot or voice assistant with the capabilities you mentioned could significantly improve the gaming experience and simplify managing mods and technical problems. A few key features that could be useful in such an assistant: Content downloading: the assistant could analyze web pages and download the needed files, automating the process of downloading mods. Mod installation: a simple command to install mods without having to manually move files or change settings. Performance optimization: the assistant could automatically tune game and mod settings to avoid lag, for example by managing graphics configuration, screen resolution, and performance settings. Troubleshooting: when errors or bugs occur, the assistant could suggest solutions, adjust the configuration, or even fix problems automatically. Keeping the system up to date: regular system health checks, driver and software updates, and removal of unneeded software to keep the computer performing well. Support and guidance: answering questions about how to use mods, resolving common problems, and providing learning materials for modding beginners. Such an assistant could greatly simplify interacting with games and mods and save players time. Although implementing such a project would require significant technical effort and resources, it would be a remarkable achievement for the gaming world!
@prolegoinc · a month ago
Info on scratchpads and overcoming LLM context window limitations: www.prolego.com/reports/report-conquering-llm-context-window-constraints
@prolegoinc · a month ago
Thank you Shanif! Follow him at www.linkedin.com/in/shanifdhanani/ and check out Locusive, www.locusive.com/
@cartwrittewambua5533 · a month ago
Very insightful. For someone with Data Engineering and Architecture skills, I think that optimizing the data model in the backend would greatly help address this. For example, you could use optimized data marts and table partitioning. Is this a solution?
@prolegoinc · a month ago
Yes, definitely. If you're willing to make data and infrastructure investments, then pretty much anything in AI is much easier. We've talked to some companies that are creating new flattened DB tables so that everything fits in the LLM context. Unfortunately, this type of data work can get really, really expensive to build and maintain. As LLMs get better, it will be more economical to offload as much data analysis as possible to the LLM instead of making these investments. But we're not there yet.
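For illustration, a toy sketch of what that flattening can look like (the tables are invented; SQLite via Python keeps the example self-contained):

```python
# Toy example of denormalizing ("flattening") relational data so a compact,
# LLM-friendly table can be pasted straight into the context window.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
INSERT INTO customers VALUES (1, 'Acme', 'EMEA'), (2, 'Globex', 'NA');
INSERT INTO orders VALUES (10, 1, 1200.0), (11, 2, 450.0);

-- One wide table that joins everything the LLM might need.
CREATE TABLE flat_orders AS
SELECT o.id AS order_id, c.name AS customer, c.region, o.total
FROM orders o JOIN customers c ON c.id = o.customer_id;
""")

for row in db.execute("SELECT * FROM flat_orders"):
    print(row)
```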
@vandaloart7131 · 2 months ago
What are some interesting interfaces for LLMs you have seen recently?
@vandaloart7131 · 2 months ago
Thank you for the videos
@austinlee6028 · 2 months ago
Thanks for sharing these good insights. I wonder whether my database, which is written in a foreign language, will still work well with complex queries. Should I map the column names to English individually?
@Keytotransition · 2 months ago
The fact that this video of yours has less views than some others is wild. Thanks for the straightforward and direct instructions
@ananyamishra382 · 2 months ago
I recently read about RAG, which is retrieval-augmented generation. It proposes to deal with large contextual data, like documents, by converting them into a vector database. We also convert our prompt into a vector embedding and then find the relevant vector embeddings from within the huge document. Then this retrieved information is added as context to the prompt, rather than the entire huge database. This seems to be applicable in your case as well. Is there a reason why you decided not to go that way?
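A compressed sketch of the retrieval step described above (assumes the OpenAI SDK for embeddings and numpy; chunking and the vector database are reduced to an in-memory list):

```python
# Minimal RAG retrieval sketch: embed chunks, embed the question, take the closest chunks.
# Assumes the OpenAI Python SDK; in practice a real vector database replaces the list below.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

chunks = ["Chunk one of the document...", "Chunk two...", "Chunk three..."]
chunk_vectors = embed(chunks)

def retrieve(question, k=2):
    q = embed([question])[0]
    scores = chunk_vectors @ q / (np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]  # top-k chunks become prompt context

print(retrieve("What is in chunk two?"))
```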
@ananyamishra382 · 2 months ago
Who is running the SQL query?
@shanif · 2 months ago
Great chat. I love the focus on clarifying why coding with LLMs is different, and also the authenticity from both of you!
@RealKyleMeredith · 2 months ago
Thanks Kevin! I love your content. Great video!
@GriffinBrown-tq9jz · 2 months ago
This sounds great! How do I know this framework is authoritative?
@prolegoinc · 2 months ago
What do you mean by "authoritative?" This is what we're doing for our client projects. What is the alternative?
@slightlyarrogant · 2 months ago
Really good advice, thank you for sharing your process!
@AnthatiKhasim-i1e · 2 months ago
Fraud Detection: Advanced analytics tools in SmythOS improve fraud detection systems by analyzing transaction patterns and identifying anomalies with greater accuracy.
@muizzuddinrazakedu7047 · 3 months ago
Golden advice
@pedropcamellon · 3 months ago
Great advice
@iykazorji8171 · 4 months ago
Thank you! This channel is a Godsend!
@MysticScapes · 4 months ago
Not valid anymore with the new GPT models.
@filipgasiorowski334 · 3 months ago
Why is that?
@MohammadrezaMokhtari-qh2yg · 4 months ago
wow, just wow!
@MudroZvon · 4 months ago
I used RAG in a YouTuber's AI Telegram bot. I created a lot of AI-processed character data with fine-grained segments. The prompt itself is also crafted according to the latest Claude prompting guidance. The most amazing AI life coach experience for me yet. JulienHimselfBot
@morespinach9832 · 4 months ago
Cute, but how would this work in production with, say, very large data sets? Or with 2-3 data files we need to combine?
@eclecticshenanigans · 5 months ago
This video series is literally saving my career. Thank you for your thoughtful delivery of all things AI. 5 stars on yelp!
@prolegoinc · 5 months ago
@eclecticshenanigans - we love hearing that! Thanks for sharing!
@vineetsingh4042 · 5 months ago
Your content on LLM-based applications is a gold mine 🎉🎉🎉
@prolegoinc · 5 months ago
@vineetsingh4042 - glad you enjoy our work!
@muhammadrezafatehinia8523 · 5 months ago
Thanks for this insightful overview.
@prolegoinc · 5 months ago
Our pleasure!
@prabhugururaj1988 · 5 months ago
Does it support Ollama models running locally?
@user-wr4yl7tx3w · 5 months ago
I don't know how Sam gets away with saying the most obvious things as if there were some nugget of wisdom there.
@sorabhutube · 6 months ago
Love how these 4 minutes were spent. Thank you for cutting to the chase. Instant subscribe!
@prolegoinc · 5 months ago
Thanks for the sub!
@jamescody7440 · 6 months ago
Just came across your channel and it is a gold mine. I'm learning a lot from you. Thank you.
@prolegoinc · 6 months ago
Glad you found us! Let us know if there's ever a specific question you'd like us to address.
@OwnOpinions · 6 months ago
Never explained anything related to the title 😂
@nealbarrett2774 · 6 months ago
Promo'SM
@jonron3805 · 6 months ago
The challenge is not that they do not believe in the technology; they have data security concerns that no amount of prototyping will resolve. What they need is some authority to tell them that data sent to LLMs will not be used to re-train the LLMs.
@NishmaShah02 · 6 months ago
Great approach! Is this killer app still relevant after 8 months? I have a problem that needs a solution like this. What is your thinking after 8 months?
@Shishiranshoku · 6 months ago
I was wondering if you have some suggestions on optimizing the documentation being used for RAG. We're using RAG linked to our Notion wiki, and I want to implement guidelines for the info being added, to ensure it is "AI-friendly".
@AITester-j3u · 7 months ago
Best advice on this topic!
@jamesyoungerdds7901 · 7 months ago
Wow, what a great conversation, thank you! First-time viewer; I typed "using rag for a codebase" into Google and your video showed up, and I'm so glad it did. Honestly, anyone who doesn't see how visionary this video is just isn't getting it. Have you seen @indydevdan? You guys are awesome, all the best 💪
@kwngo69 · 7 months ago
I love you guys' content!! Please continue to make it; it's helping me make a lot of decisions behind my AI apps.