LM Studio + AnythingLLM: Process Local Documents with RAG Like a Pro!

6,326 views

CodingAI

A day ago

Comments: 22
@nikhils7583 a day ago
Great one, seriously a proper human teaching us AI 😅, subbed
@PallaviChauhan91 a month ago
Great teaching style. Please also share more tutorials on model fine-tuning and agents.
@danielmacedo2154 7 days ago
Thanks! Very useful for personal projects with very specific information. Amazing video, great work.
@Jzguan 2 months ago
OMG, I've found you!!!! I've been searching all over the net and none of it was legit. Yours is true value.
@coding-ai-now 2 months ago
Thank you for such encouraging words. It makes the effort worth it.
@Jzguan 2 months ago
@coding-ai-now Please don't give up, I can't wait for more. I'm learning and implementing what you have taught. I was lost... now I'm clear =)
@arooly 19 days ago
Thanks, great video, very clear. It worked perfectly.
@marshallodom1388 5 days ago
Thanks for such a great video. Can I use this method to read construction documents (PDF) and have it answer questions about the content relevant to specific trades? I understand it can't read AutoCAD construction drawings, but can it do take-offs (enumerate quantities across multiple drawings labeled with text)? Can it count or do addition? Can I load multiple files? What are my limitations: the total token count of the docs (4096), or is that the question-response size limit? Or must the total size of the loaded docs be no greater than some certain amount? Which LLMs are best for reading docs: vision models, coding models, math models, or instruct models? Why can't LM Studio handle this on its own? If it CAN do all of this, how much longer before I'm forced into retirement? Got any more vids about this?
@Mixdreamer 4 days ago
How can you check the model performance in the AnythingLLM GUI, like tokens per second? I know the LM Studio interface has it, but AnythingLLM doesn't.
@anonymous_friend a day ago
I tried this, but AnythingLLM says 404 Failed to load model "nomic-ai/nomic-embed-text-v1.5-GGUF". Error: Model is not llm.
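That 404 usually means the embedding model was selected as the chat model: nomic-embed-text-v1.5 can only serve LM Studio's embeddings endpoint, not chat completions, so it belongs under AnythingLLM's embedder setting rather than the LLM setting. A minimal sketch of the two request shapes, assuming LM Studio's default local server at `localhost:1234` and a hypothetical chat model name:

```python
# Sketch: LM Studio's OpenAI-compatible server exposes two endpoints that
# take different kinds of models. Pointing a chat request at an embedding
# model produces the "Model is not llm" error. The base URL and chat model
# name below are assumptions based on LM Studio defaults.
BASE = "http://localhost:1234/v1"

def embed_request(texts):
    # POST {BASE}/embeddings -- embedding models (nomic-embed) belong here
    return {
        "url": f"{BASE}/embeddings",
        "json": {"model": "nomic-ai/nomic-embed-text-v1.5-GGUF", "input": texts},
    }

def chat_request(prompt):
    # POST {BASE}/chat/completions -- only a chat-capable LLM works here
    return {
        "url": f"{BASE}/chat/completions",
        "json": {
            "model": "llama-3.2-3b-instruct",  # hypothetical chat model
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

In AnythingLLM, set the chat model under LLM Preference and the nomic model under Embedding Preference, and the error should go away.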
@alqaimyouth 15 days ago
Hi, and thanks for the info. Quick question: why use LM Studio when you can run Ollama and AnythingLLM?
@coding-ai-now 13 days ago
It's just a nice GUI for managing your models, plus you can get other models as well. If you're only using Meta's models and aren't going to use LM Studio's features, then you don't need it. This was just a continuation from my LM Studio video.
@2LlamasENT 22 days ago
Is anyone having issues with AnythingLLM? I enter the LM Studio base URL and max tokens; however, under LM Studio Mode it says "Loading available models". I am running the server in LM Studio, but the models do not load in AnythingLLM. Thanks!
@sathishchinthana 7 days ago
Hey, what happens if I turn off the server? Can you explain how to close these safely and restart?
@coding-ai-now 7 days ago
If you have your volumes set up, then it's just like restarting your computer: everything that was saved will still be there. If volumes aren't set up, then the storage is ephemeral and everything will be gone.
@jordonkash 2 months ago
It seems the retrieval worked flawlessly despite you using the smaller Llama 3.2 3B in LM Studio. Do you see any RAG performance difference with larger models?
@coding-ai-now 2 months ago
I haven't seen a big difference, but I haven't done a lot of testing to compare either.
@wgabrys88 a month ago
I encourage you both to try the 14B Qwen instruct 8K version; the difference between 7B models and that one is amazing. For me it's on par with ChatGPT and the other paid models.
@Mixdreamer 4 days ago
So the way this works is: AnythingLLM does the API query to the vector database, which has your files, and pipes it through the Llama server? Let me know if I stated it right 😂
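That is roughly the flow: AnythingLLM embeds your documents into its built-in vector database, retrieves the chunks most similar to your question, and sends question plus chunks to LM Studio's OpenAI-compatible server for the answer. A minimal sketch of that retrieve-then-prompt loop, with toy 2-D vectors standing in for real embeddings (the model name and endpoint are assumptions based on LM Studio defaults):

```python
# Sketch of the RAG flow: rank stored chunks by cosine similarity to the
# query embedding, then build a chat request that carries the top chunks
# as context to LM Studio's /v1/chat/completions endpoint.
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    # store: list of (chunk_text, embedding) pairs, as a vector DB holds them
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_request(question, chunks, model="llama-3.2-3b-instruct"):
    # Payload for LM Studio's OpenAI-compatible server
    # (POST http://localhost:1234/v1/chat/completions)
    context = "\n\n".join(chunks)
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    }

# Toy 2-D embeddings stand in for real nomic-embed vectors
store = [
    ("LM Studio serves models over an OpenAI-compatible API.", [1.0, 0.1]),
    ("AnythingLLM stores document embeddings in a vector DB.", [0.2, 1.0]),
]
chunks = retrieve([0.9, 0.2], store, k=1)
payload = build_request("How does LM Studio expose models?", chunks)
```

So LM Studio never sees the vector database at all; it only receives the final prompt with the retrieved text already stitched in.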
@KenBob52 3 months ago
It would be great if your monitor was in focus; then we could see what you are doing.
@rikmoran3963 3 months ago
I could see it fine.