Build your own Local Mixture of Agents using Llama Index Pack!!!

3,742 views

1littlecoder
1 day ago

Comments: 16
@jaivalani4609 · 2 months ago
Thanks, this was in my thoughts way back in May. Thankfully somebody implemented it as the next step for me :)
@lrkx_ · 3 months ago
Nice tutorial, thank you.
@asrjy · 2 months ago
Great video as always. I have a question. I'm trying to build a RAG system that answers questions about a dataset I have, using create_pandas_dataframe_agent. I also have a long list of sample questions and answers that I want the RAG to imitate, but not copy exactly. These questions contain some domain knowledge, and I've also added some information about the columns at the end of the sample questions and answers. I'm currently passing all of this as the prefix parameter, but I'm not sure that's the best way to do it. The idea is for this pandas agent to also be able to answer questions that don't require pandas. What's the best way to build this? Thanks in advance!
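For context on the question above: the `prefix` parameter does exist on `create_pandas_dataframe_agent` in `langchain_experimental`, and one common pattern is to assemble the few-shot Q&A pairs and column notes into a single prefix string. The helper and sample data below are hypothetical illustrations, not from the video:

```python
# Hypothetical sketch: pack few-shot Q&A pairs and column descriptions into
# one prompt prefix string. Only the string-building part runs here; the
# agent construction is shown in comments at the end.

def build_prefix(qa_pairs, column_notes):
    """Format sample Q&A pairs and column descriptions as one prompt prefix."""
    lines = [
        "Answer questions about the dataframe. Imitate the style of",
        "these examples without copying them verbatim:",
        "",
    ]
    for q, a in qa_pairs:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
        lines.append("")
    lines.append("Column notes:")
    lines.extend(f"- {col}: {note}" for col, note in column_notes.items())
    return "\n".join(lines)

prefix = build_prefix(
    qa_pairs=[("How many rows have churned?", "Filter on churn == 1 and count.")],
    column_notes={"churn": "1 if the customer left, 0 otherwise"},
)
print(prefix)

# The prefix would then be passed to the agent, roughly:
# from langchain_experimental.agents import create_pandas_dataframe_agent
# agent = create_pandas_dataframe_agent(llm, df, prefix=prefix, verbose=True)
```

For questions that don't need pandas at all, a common design is to route between this agent and a plain chat chain with a small router step, rather than forcing everything through the dataframe agent.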
@bilaljamal-e1t · 2 months ago
Fantastic, thanks so much. I have an issue: for some reason, at Round 3/3 while collecting reference responses, I'm getting the error "An error occurred:"
@jaivalani4609 · 2 months ago
How does it manage the memory when we use multiple LLMs?
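For context on the memory question: in the mixture-of-agents pattern there is no shared per-model chat memory; each round's "reference responses" are simply appended to the next round's prompt. The sketch below illustrates that flow with stub models (plain functions). It is hypothetical conceptual code, not the LlamaIndex pack's actual implementation:

```python
# Conceptual sketch of mixture-of-agents layering with stub "models".
# State flows between rounds only through the prompt text: each round's
# proposer outputs become reference context for the next round.

def run_moa(question, proposers, aggregate, num_layers=3):
    """Run `num_layers` rounds of proposers, then aggregate the final round."""
    references = []  # previous round's responses, carried forward as context
    for _ in range(num_layers):
        prompt = question
        if references:
            prompt += "\n\nReference responses:\n" + "\n".join(references)
        references = [propose(prompt) for propose in proposers]
    return aggregate(question, references)

# Stub models; real usage would call local LLMs (e.g. via Ollama) instead.
# (i=i binds the loop variable at definition time, a classic closure fix.)
proposers = [lambda p, i=i: f"model-{i} saw {len(p)} chars" for i in range(2)]
aggregate = lambda q, refs: f"final answer from {len(refs)} references"

print(run_moa("What is MoA?", proposers, aggregate))
# → final answer from 2 references
```

The error mentioned in an earlier comment ("Round 3/3 ... collecting reference responses") corresponds to the per-round proposer step in this flow: each round queries every proposer model again with the accumulated references.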
@abhisheknema · 9 days ago
Thanks! Which local machine setup do you use, and which one do you recommend?
@1littlecoder · 9 days ago
I've got a simple Mac with 36 GB RAM.
@abhisheknema · 9 days ago
@1littlecoder Thanks, is it an M3 Pro?
@1littlecoder · 9 days ago
@abhisheknema It's an M3 Max, but the Pro should be okay for many use cases.
@abhisheknema · 7 days ago
@1littlecoder Appreciate your quick response. Any throttling issues while working with big LLMs like Mixtral 8x7B (28 GB) for long runs, if you have the 14" version?
@SonGoku-pc7jl · 3 months ago
Thanks! What is the title of the previous video? I'm hoping the next video is about mixture of experts! :)
@1littlecoder · 3 months ago
Could you elaborate? You mean like mixture of experts with LlamaIndex?
@Cingku · 3 months ago
Can we customize the system prompt for both the aggregator and the proposers?
@staticalmo · 3 months ago
On which GPU did the models run? Integrated GPU?
@1littlecoder · 3 months ago
Yep, integrated, but mostly on CPU.