Please do a dedicated video on training minimal base models for specific purposes. You're a legend. Also a video on commercial use and licensing would be immensely valuable and greatly appreciated.
@Al-Storm9 ай бұрын
+1
@akram59609 ай бұрын
Where should one start on the path of learning AI (LLMs, RAG, generative AI, ...)?
@fxstation13299 ай бұрын
+1
@vulcan4d8 ай бұрын
Yes!
@nasirkhansafi86348 ай бұрын
Very nice question, I'm waiting for the same. I wish Tim would make that video soon.
@PCFix411TechGamer9 ай бұрын
I'm just about to dive into LM Studio and AnythingLLM Desktop, and let me tell you, I'm super pumped! 🚀 The potential when these two join forces is just out of this world!
@frankdenweed6456Ай бұрын
Are you a crypto bro?
@_lull_5 ай бұрын
You deserve a Nobel Peace Prize. Thank you so much for creating Anything LLM.
@kylequinn196310 ай бұрын
This is exactly what I've been looking for. Now, I'm not sure if this is already implemented, but if the chat bot can use EVERYTHING from all previous chats within the workspace for context and reference... My god that will change everything for me.
@TimCarambat10 ай бұрын
It does use the history for context and reference! History, system prompt, and context, all at the same time, and we manage the context window for you on the backend.
@IrakliKavtaradzepsyche10 ай бұрын
@TimCarambat But isn't history actually constrained by the active model's context size?
@TimCarambat10 ай бұрын
@IrakliKavtaradzepsyche Yes, but we manage the overflow automatically so you at least don't crash from token overflow. This is common for LLMs: truncating or manipulating the history for long-running sessions.
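A rough illustration of what that kind of history management can look like (a minimal sketch of the general technique, not AnythingLLM's actual code; the 4-characters-per-token heuristic and the budget numbers are assumptions):

```python
# Sketch of context-window management for chat history.
# NOTE: illustration only; the token heuristic and budgets below are assumptions.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token)."""
    return max(1, len(text) // 4)

def fit_history(system_prompt: str, context: str, history: list,
                max_tokens: int = 4096, reserve_for_reply: int = 512) -> list:
    """Drop the oldest turns until system prompt + context + history fit the window."""
    budget = max_tokens - reserve_for_reply
    budget -= estimate_tokens(system_prompt) + estimate_tokens(context)
    kept = []
    # Walk history newest-first so the most recent turns are preserved.
    for turn in reversed(history):
        cost = estimate_tokens(turn["content"])
        if budget - cost < 0:
            break  # older turns are truncated away
        budget -= cost
        kept.append(turn)
    return list(reversed(kept))
```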
@avepetro83803 ай бұрын
So is this strictly for LLMs? Is it like an AI assistant?
@VanSocero6 ай бұрын
The potential of this is near limitless so congratulations on this app.
@sitedev10 ай бұрын
I'd love to hear more about your product roadmap, specifically how it relates to the RAG system you have implemented. I've been experimenting a lot with Flowise, and the new LlamaIndex integration is fantastic, especially the various text summarisation and content refinement methods available with a LlamaIndex-based RAG. Are you planning to enhance the RAG implementation in AnythingLLM?
@JzguanАй бұрын
OMG, I've found you!!!! I've been searching all over the net. None of it was legit. Yours is true value.
@autonomousreviews252110 ай бұрын
Fantastic! I've been waiting for someone to make RAG smooth and easy :) Thank you for the video!
@bradcasper48239 ай бұрын
Thank you, I've been struggling for so long with problematic things like privateGPT etc., which gave me headaches. I love how easy it is to download models and add embeddings! Again, thank you. I'm very eager to learn more about AI, but I'm an absolute beginner. Maybe a video on how you would learn from the beginning?
@JohnRiley-r7j9 ай бұрын
Great stuff. This way you can run a good smaller conversational model like a 13B or even a 7B, like Laser Mistral. The main problem with these smaller LLMs is massive holes in some topics, or information about events, celebs, or other stuff; this way you can build your own database about the stuff you want to chat about. Amazing.
@continuouslearner10 ай бұрын
So if we need to use this programmatically, does AnythingLLM itself offer a 'run locally on server' option to get an API endpoint that we could call from a local website, for example? i.e. local website -> POST request -> AnythingLLM (local server + PDFs) -> LM Studio (local server - foundation model)
@clinbrokers10 ай бұрын
Did you get an answer?
@FisVii7710 ай бұрын
[00:00] 💻 Introduction to Local LLM Setup - Overview of setting up a local LLM on your device using LM Studio and Anything LLM. - Single-click installation process for both applications. - Highlighting the benefits of GPU over CPU for running these applications.
[00:26] 🛠 Tools and Compatibility - Discussion on compatibility and installation for Windows OS. - Introduction to Anything LLM as a versatile, private chat application. - Emphasis on open-source nature and contribution possibilities of Anything LLM.
[01:36] 🔧 Installation Guide - Step-by-step guide on installing LM Studio and Anything LLM. - Highlighting the simplicity and halfway mark of the installation process.
[02:04] 🌐 Exploring LM Studio - Navigating LM Studio's interface and downloading models. - Discussion on model compatibility and the importance of GPU offloading for better performance.
[03:14] 📊 Model Selection and Download - Detailed overview of different model qualities (Q4, Q5, Q8) and their implications on performance. - Advice on model selection based on size and download considerations.
[04:23] 🤖 Chatting with Models in LM Studio - How to use the internal chat client of LM Studio. - Limitations of LM Studio's chat feature and how Anything LLM can enhance the experience.
[05:47] 🔗 Integrating Anything LLM with LM Studio - Setting up Anything LLM to work with LM Studio. - Configuration details for connecting the two applications for a more powerful local LLM experience.
[06:12] 🛠 Configuring Anything LLM for Enhanced Functionality - Detailed instructions on configuring Anything LLM settings to connect with LM Studio. - Emphasizing the customization options for optimizing performance and user experience.
[06:45] 🎨 Customizing User Experience - Exploring the customization features within Anything LLM, including themes and plugins. - How to personalize the application to fit individual needs and preferences.
[07:15] 🔄 Syncing with LM Studio - Step-by-step guide on ensuring seamless integration between Anything LLM and LM Studio. - Tips on troubleshooting common issues that may arise during the syncing process.
[07:58] 🚀 Launching Your First Session - Initiating a chat session within Anything LLM to demonstrate the real-time capabilities. - Showcasing the smooth and efficient operation of the model after configuration.
[08:34] 💡 Advanced Features and Tips - Introduction to advanced features available in Anything LLM, like voice recognition and command shortcuts. - Advice on how to utilize these features to enhance the overall experience and productivity.
[09:10] 🌟 Conclusion and Encouragement for Exploration - Encouraging users to explore the full potential of Anything LLM and LM Studio. - Reminder of the open-source community's role in improving and expanding the software's capabilities.
[09:45] 🤝 Invitation to Contribute - Invitation for viewers to contribute to the development of Anything LLM and LM Studio. - Highlighting the importance of community feedback and contributions for future enhancements.
[10:20] 📚 Resources and Support - Providing resources for additional help and support, including community forums and documentation. - Encouragement to reach out with questions or for assistance in maximizing the use of the tools.
[10:55] 🎉 Final Thoughts and Farewell - Reflecting on the ease and power of deploying LLMs locally with Anything LLM and LM Studio. - Wishing viewers success in their projects and explorations with these tools.
With love, From a Brother in Christ
@Augmented_AI9 ай бұрын
How well does it perform on large documents? Is it prone to the lost-in-the-middle phenomenon?
@TimCarambat9 ай бұрын
That is more of a "model behavior" and not something we can control.
@jimg82969 ай бұрын
Just got this running and it's fantastic. Just a note that LM Studio uses the API key "lm-studio" when connecting using Local AI Chat Settings.
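For anyone wanting to wire this up programmatically (as asked earlier in the thread): LM Studio's local server speaks the OpenAI chat-completions format, so a minimal sketch looks like the following. The port (1234 is LM Studio's default), the "lm-studio" placeholder key, and the model name are assumptions; use whatever your own LM Studio server window shows.

```python
# Minimal sketch: calling LM Studio's OpenAI-compatible local server.
# Assumptions: the server is started in LM Studio on its default port 1234,
# "lm-studio" is accepted as a placeholder API key, and a model is already loaded.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="local-model",  # LM Studio serves whichever model you loaded
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what AnythingLLM does in one sentence."},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
```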
@thegoat10.79 ай бұрын
Does it provide a script for YouTube?
@TazzSmk10 ай бұрын
thanks for the tutorial, everything works great and surprisingly fast on M2 Mac Studio, cheers!
@stanTrX8 ай бұрын
IMO AnythingLLM is much more user-friendly and really has big potential. Thanks, Tim!
@xevenau10 ай бұрын
A software engineer with AI knowledge? You've got my sub.
@NigelPowell10 ай бұрын
Mm...doesn't seem to work for me. The model (Mistral 7B) loads, and so does the training data, but the chat can't read the documents (PDF or web links) properly. Is that a function of the model being too small, or is there a tiny bug somewhere? [edit: got it working, but it just hallucinates all the time. Pretty useless]
@vivekkarumudi10 ай бұрын
Thanks a ton... you are giving us the power to work with our local documents. It's blazingly fast to embed the docs, responses are super fast, and all in all I am very happy.
@ashleymusihiwa10 ай бұрын
That's liberating! I was really concerned about privacy, especially when coding or refining internal proposals. Now I know what to do.
@BarryFence9 ай бұрын
What type of processor/GPU/model are you using? I'm using version 5 of Mistral and it is super slow to respond. i7 and an Nvidia RTX 3060ti GPU.
@claudiantenegri261210 ай бұрын
Very nice tutorial! Thanks, Tim.
@properlogic6 ай бұрын
00:01 Easiest way to run locally and connect LMStudio & AnythingLLM
01:29 Learn how to use LMStudio and AnythingLLM for a comprehensive LLM experience for free
02:48 Different quantized models available on LMStudio
04:14 LMStudio includes a chat client for experimenting with models.
05:33 Setting up LM Studio with AnythingLLM for local model usage.
06:57 Setting up LM Studio server and connecting to AnythingLLM
08:21 Upgrading LMStudio with additional context
09:51 LM Studio and AnythingLLM enable private end-to-end chatting with open source models
Crafted by Merlin AI.
@olivierstephane92329 ай бұрын
Excellent tutorial. Thanks a bunch😊
@williamsoo85008 ай бұрын
Awesome, man. Hope to see more videos with AnythingLLM!
@continuouslearner10 ай бұрын
Also, how is this different from implementing RAG on a base foundation model, chunking our documents, and loading them into a vector DB like Pinecone? Is the main point here that everything runs locally on our laptop? Would it work without internet access?
@cosmochatterbot10 ай бұрын
Absolutely stellar video, Tim! 🌌 Your walkthrough on setting up a locally run LLM for free using LM Studio and Anything LLM Desktop was not just informative but truly inspiring. It's incredible to see how accessible and powerful these tools can make LLM chat experiences, all from our own digital space stations. I'm particularly excited about the privacy aspect and the ability to contribute to the open-source community. You've opened up a whole new universe of possibilities for us explorers. Can't wait to give it a try myself and dive into the world of private, powerful LLM interactions. Thank you for sharing this cosmic knowledge! 🚀👩🚀
@PswACC9 ай бұрын
The biggest challenge I am having is getting the prompt to provide accurate information that is included in the source material; the interpretation is just wrong. I have pinned the source material and played with the LLM temperature, but I still can't get a chat response that aligns with the source material. I also tried setting chat mode to Query, but then it typically doesn't produce a response at all. Another thing bothering me is that I can't delete the default thread that sits under the workspace as the first thread.
@Helios1st8 ай бұрын
Wow, great information. I have a huge number of documents, and every time I search for something it turns into such a difficult task.
@morespinach98325 ай бұрын
And what have you found with this combination of dumb tools? Searching through documents is crazy slow with LM Studio and AnythingLLM.
@viveks21710 ай бұрын
I have tried, but could not get it to work with the files that were shared as context. Am I missing something? It gives answers like "the file is in my inbox, I will have to read it," but it does not actually read the file.
@_skiel9 ай бұрын
I'm also struggling. Sometimes it refers to the context, and most of the time it forgets it has access even though it's referencing it.
@SpeedRacer24X11 күн бұрын
Thanks so much for this excellent video! A lot is different since you recorded it, though. LM Studio is now at version 0.3.5 and none of the LLM models that you showed in the video are there anymore. Your AnythingLLM program is also a newer version and downloaded Llama 3. The screens in both programs are totally different now, and I am completely unable to follow your video in the revised versions of these programs to get any results with my own documents. Is there any chance that you could create an updated video, please?
@TheHeraldOfChange9 ай бұрын
OK, I'm confused. If I were to feed this a bunch of PDF documents/books, would it then be able to draw on the information contained in those files to answer questions, summarise the info, or generate content based on that info in the same literary/writing style as the initial files? And all 'offline' on a local install? (This is the Holy Grail that I am seeking.)
@holykim43529 ай бұрын
You can already do this with ChatGPT custom GPTs.
@TheHeraldOfChange9 ай бұрын
@holykim4352 Got a link or reference? I've not found any way to do what I want so far. Maybe I misunderstand the process, but I can't seem to find the info I need either. Cheers.
@pabloandrescaceresserrano2637 ай бұрын
Absolutely great!! thank you!!!
@MusicByJC8 ай бұрын
I am a software developer but am clueless when it comes to machine learning and LLMs. What I was wondering is: is it possible to train a local LLM by feeding in all of the code for a project?
@rowbradley9 ай бұрын
Thanks for building this.
@monbeauparfum14522 ай бұрын
Bro, this is exactly what I was looking for. Would love to see a video of the cloud option at $50/month
@TimCarambat2 ай бұрын
@monbeauparfum1452 Have you tried the desktop app yet (free)?
@craftedbysrs8 ай бұрын
Thanks a lot! This tutorial is a gem!
@2010SiskoАй бұрын
That was a really good video. Thank you so much.
@thualfiqar879 ай бұрын
That's really amazing 🤩, I will definitely be using this for BIM and Python
@fieldpictures13068 ай бұрын
Thanks for this; about to try it to query legislation and case law for a specific area of UK law, to see if it is effective in returning references to relevant sections and key case law. Interested in building a private LLM to assist with specific repetitive tasks. Thanks for the video.
@gamer_br3 ай бұрын
🎯 Key points for quick navigation:
00:00:12 💻 Easy local execution of language models using LM Studio and Anything LLM on your computer.
00:01:36 🧩 Simple, fast installation of LM Studio and Anything LLM on Windows; half the process is already done once the programs are installed.
00:02:59 📥 Downloading models can be the slowest part, but it is essential to get started.
00:04:23 🖥️ Using a GPU speeds up responses and reaches ChatGPT-like speeds.
00:06:14 🔗 Connecting LM Studio and Anything LLM unlocks powerful local, in-context usage.
00:09:25 📚 Augmenting models with additional contextual data improves answer accuracy.
00:09:54 🔒 Local chat is private and secure, with no monthly costs, using LM Studio and Anything LLM.
00:10:49 ⭐ Popular models like Llama 2 or Mistral ensure a better experience.
Made with HARPA AI
@giovanith10 ай бұрын
I loaded a simple TXT file, embedded it as presented in the video, and asked a question about a topic within the text. Unfortunately it seems the model doesn't know anything about the text. Any tips? (Mistral 8-bit, RTX 4090 24 GB).
@MrZaarco10 ай бұрын
Same here, plus it hallucinates like hell :)
@jakajak19918 ай бұрын
I get this response every time: "I am unable to access external sources or provide information beyond the context you have provided, so I cannot answer this question." Mac mini M2 Pro, cores: 10 (6 performance and 4 efficiency), memory: 16 GB.
@icometofightrocky5 ай бұрын
Great video, very well explained!
@catwolf2567 ай бұрын
To operate a model comparable to GPT-4 on a personal computer, you would currently need around 60GB of VRAM. That roughly means three 24GB graphics cards, each costing between $1,500 and $2,000. Therefore, equipping a PC to run a similar model would cost roughly 19 to 25 years' worth of a ChatGPT subscription at $20 per month, or $240 per year. Although there are smaller LLMs (large language models) available, such as 8B or 13B models requiring only 4-16GB of VRAM, they don't compare favorably even with the freely available GPT-3.5. Furthermore, with OpenAI planning to release GPT-5 later this year, the hardware requirements to match its capabilities on a personal computer are expected to be even more demanding.
@TimCarambat7 ай бұрын
Absolutely. Closed-source and cloud-based models will always have a performance edge. The kicker is: are you comfortable with their limitations on what you can do with them, paying for additional plugins, and the exposure of your uploaded documents and chats to a third party? Or would you rather get 80-90% of the same experience with whatever the latest and greatest OSS model is, running on your own CPU/GPU, with none of that concern? They're just two different use cases; both should exist.
@catwolf2567 ай бұрын
@@TimCarambat While using versions 2.6 to 2.9 of Llama (dolphin), I've noticed significant differences between it and ChatGPT-4. Llama performs well in certain areas, but ChatGPT generally provides more detailed responses. There are exceptions where Llama may have fewer restrictions due to being less bound by major company policies, which can be a factor when dealing with sensitive content like explosives or explicit materials. however, while ChatGPT has usage limits and avoids topics like politics and explicit content, some providers offer unrestricted access through paid services. and realistically, most users-over 95%-might try these services briefly before discontinuing their use.
@Naw1dawg6 ай бұрын
Get a PCIe NVMe SSD. I have 500 GB of "swap" that I labeled as ram3. Ran a 70B like butter with the GPU at 1%, only running the display. Also, you can use a $15 riser and add graphics cards. You should have something like 256 GB on the GPU, but you can also VRAM-swap, though that isn't necessary because you shouldn't rip anywhere near 100 GB at once. Split up your processes. Instead of just CPU and RAM, use the CPU to send commands to anything with a chip, and attach a storage device directly to it. The PC has 2 x 8 GB of RAM naturally. You can even use an HDD; it's just a noticeable drag at under 1 GB/s. There are many more ways to do it; once I finish the seamless container pass I will have an out-of-the-box software solution for you. Swap rate and swappiness will help if you have solid-state storage.
@catwolf2566 ай бұрын
@Naw1dawg Yes, you can modify your PC or add parts to run an LLM on it, but it's still not worth doing, because most people who play around with LLMs only use them for a short period of time, a month or so max. What I am saying is that paying over $5,000 for a build just for the LLM is not worth it, compared to paying $20 per month and enjoying the fun.
@PaulCuciureanu4 ай бұрын
@catwolf256 Could be worth it if you make it available to all your friends and get them to pay you instead ;-)
@codygaudet807110 ай бұрын
This video is gold. Push this to the top people.
@BudoReflex9 ай бұрын
Thank you! Very useful info. Subbed.
@MCSchuscha10 ай бұрын
Changing the embedding model would be a good tutorial! For example, how to use a multilingual model!
@AC-go1tp8 ай бұрын
Thank you so much for your generosity. I wish the very best for your enterprise. God bless!
@drew58349 ай бұрын
Great work, Tim. I'm hoping I can introduce this or anything AI into our company.
@alfata726 ай бұрын
thank you for your simple explanation
@cee70049 ай бұрын
Thank you for making this video. This helped me a lot.
@another_dude_online9 ай бұрын
Thanks dude! Great video
@immersift785610 ай бұрын
Looks so good! I have a question: is there some way to add a chat flow diagram, like Voiceflow or Botpress? For example, guiding the discussion for an e-commerce chatbot and giving multiple choices when asking questions?
@TimCarambat10 ай бұрын
I think this could be done with just some clever prompt engineering. You can modify the system prompt to behave in this way. However, there is no Voiceflow-like experience built in for that. That is a clever solution, though.
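As a rough sketch of the prompt-engineering approach Tim describes (the prompt wording and the option list below are illustrative assumptions, not a built-in AnythingLLM feature):

```python
# Sketch: steering a chat into a guided, multiple-choice flow purely via the
# system prompt. The prompt text is illustrative, not an AnythingLLM API.
GUIDED_SYSTEM_PROMPT = """You are a shopping assistant for an e-commerce store.
After every answer, offer the user exactly three numbered follow-up choices, e.g.:
1) Browse categories  2) Track an order  3) Talk to a human.
If the user replies with a number, continue down that branch."""

messages = [
    {"role": "system", "content": GUIDED_SYSTEM_PROMPT},
    {"role": "user", "content": "Hi, I'm looking for running shoes."},
]
# `messages` can then be sent to any OpenAI-compatible endpoint,
# such as the LM Studio server shown earlier in this thread.
```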
@CrusaderGeneral9 ай бұрын
That's great; I was getting tired of the restrictions in the common AI platforms.
@LiebsterFeind10 ай бұрын
LM Studios TOS paragraph: "Updates. You understand that Company Properties are evolving. As a result, Company may require you to accept updates to Company Properties that you have installed on your computer or mobile device. You acknowledge and agree that Company may update Company Properties with or WITHOUT notifying you. You may need to update third-party software from time to time in order to use Company Properties. Company MAY, but is not obligated to, monitor or review Company Properties at any time. Although Company does not generally monitor user activity occurring in connection with Company Properties, if Company becomes aware of any possible violations by you of any provision of the Agreement, Company reserves the right to investigate such violations, and Company may, at its sole discretion, immediately terminate your license to use Company Properties, without prior notice to you." Several posts on LLM Reddit groups with people not happy about it. NOTE: I'm not one of the posters, read-only, I'm just curious what others think.
@TimCarambat10 ай бұрын
Wait, so their TOS basically says they may or may not monitor your chats, with no notification, in case you are up to no good? Okay, I see why people are pissed about that. I don't like that either, unless they can verifiably prove the "danger assessment" is done on-device, because otherwise this is no better than cloud hosting, except you're paying for it with your own resources.
@TimCarambat10 ай бұрын
Thanks for bringing this to my attention btw. I know _why_ they have it in the ToS, but I cannot imagine how they think that will go over.
@LiebsterFeind10 ай бұрын
@TimCarambat It's the ancient clash between wanting to be a good "software citizen" and the unfortunate fact that their intent is still to "monitor" your activities. As you said in your second reply to me, "monitoring" does not go over well with some people, and the consideration of the intent for doing so, even if potentially justified, is a subsequent thought they will refuse to entertain.
@alternate_fantasy9 ай бұрын
@TimCarambat Let's say there is monitoring in the background; what if we set up a VM that is not allowed to connect to the internet, would that keep our data safe?
@TimCarambat9 ай бұрын
@alternate_fantasy It would prevent phone-homes, sure, so yes. That being said, I have Wiresharked LM Studio while running and did not see anything sent outbound that would indicate they can view anything like that. I think that's just their lawyers being lawyers.
@djkrazay77919 ай бұрын
This is an amazing tutorial. Didn't know there were that many models out there. Thank you for clearing the fog. I have one question though, how do I find out what number to put into "Token context window"? Thanks for your time!
@TimCarambat9 ай бұрын
Once you pull the model into LM Studio, it's in the sidebar once the model is selected. It's a tiny little section on the right sidebar that says "n_ctxt" or something similar to that. You'll then see it explain how many tokens your model can handle at max, RAM permitting.
@djkrazay77919 ай бұрын
@TimCarambat You're the best... thanks... 🍻
@proflead4 ай бұрын
Well explained! Thanks!
@Dj-Mccullough9 ай бұрын
I had a spare 6800 XT sitting around that had been retired due to overheating for no apparent reason, as well as a semi-retired Ryzen 2700X, and I found 32 GB of RAM sitting around for the box. Just going to say flat out that it is shockingly fast. I actually think running ROCm to enable GPU acceleration for LM Studio runs LLMs better than the 3080 Ti in my main system, or at the very least, so similarly that I can't perceive a difference.
@MrAmirhk8 ай бұрын
Can't wait to try this. I've watched a dozen other tutorials that were too complicated for someone like me without basic coding skills. What are the pros/cons of setting this up with LMStudio vs. Ollama?
@TimCarambat8 ай бұрын
If you don't like to code, you will find the UI of LM Studio much more approachable, but it can be an information overload. LM Studio has every model on Hugging Face. Ollama is only accessible via terminal and has limited model support, but it is dead simple. This video was made before we launched the desktop app. Our desktop app comes with Ollama pre-installed and gives you a UI to pick a model and start chatting with docs privately. That might be a better option since it is one app: no setup, no CLI, no extra application.
@Mursaat1009 ай бұрын
Thanks for the video! I did it as you said and got the model working (the same one you picked). It ran faster than I expected and I was impressed with the quality of the text and the general understanding of the model. However, when I uploaded some documents (in total just 150 KB of downloaded HTML from a wiki), it gave very wrong answers (overwhelmingly incorrect). What can I do to improve this?
@TimCarambat9 ай бұрын
Two things help by far the most! 1. Changing the "Similarity Threshold" in the workspace settings to "No Restriction". This basically allows the vector database to return all remotely similar results, with no filtering applied. Scoring is based purely on the vector-database distance between your query and the stored text; depending on the documents, query, embedder, and other variables, a relevant text snippet can be marked as "irrelevant". Changing this setting usually fixes the issue with no performance decrease. 2. Document pinning (the thumbtack icon in the UI once a doc is embedded). This does a full-text insertion of the document into the prompt. The context window is managed in case it overflows the model; this can slow your response time by a good factor, but coherence will be extremely high.
@Mursaat1009 ай бұрын
@TimCarambat Thank you! But I don't understand what you mean by "thumbtack icon in UI once doc is embedded". Could you please clarify?
@lmt1252 ай бұрын
This is great! So would we always have to run LM Studio before running AnythingLLM?
@TimCarambat2 ай бұрын
If you wanted to use LM Studio, yes. There is no specific order, but both need to be running, of course.
@wingwing26839 ай бұрын
It's very helpful. Thank you!
@O-8-152 ай бұрын
Can you make a tutorial on how to get either one to do TTS for the AI response in a chat? I don't mean speech recognition, just AI voice output.
@karlwireless9 ай бұрын
This video changed everything for me. Insane how easy to do all this now!
@Chris.88810 ай бұрын
Nice one, Tim. It's been on my list to get a private LLM set up. Your guide is just what I needed. I know Mistral is popular. Are those models listed by capability, with the top being most efficient? I'm wondering how to choose the best model for my needs.
@TimCarambat10 ай бұрын
Those models are curated by the LM Studio team. IMO they are based on popularity. However, if you aren't sure what model to choose, go for Llama 2 or Mistral; you can't go wrong with those models as they are all-around capable.
@Chris.88810 ай бұрын
Thanks Tim, much appreciated.
@WestW3st9 ай бұрын
I mean, this is pretty useful already. Are there plans to increase the capabilities to include other document formats, images, etc.?
@Equality-and-Liberty9 ай бұрын
I want to try it in a Linux VM, but from what I see you can only make this work on a laptop with a desktop OS. It would be even better if both LM Studio and AnythingLLM could run in one or two separate containers with a web UI.
@TheDroppersBeats8 ай бұрын
@Tim, this episode is brilliant! Let me ask you one thing. Do you have any ways to force this LLM model to return the response in a specific form, e.g. JSON with specific keys?
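One common way to approach this, without any special tooling, is to demand JSON in the system prompt and validate the reply. A hedged sketch follows: the keys, prompt wording, and local endpoint are assumptions, and whether the model complies reliably depends heavily on the model.

```python
# Sketch: coaxing an OpenAI-compatible local model into returning JSON with
# fixed keys, then validating it. Keys, prompt wording, and endpoint are assumptions.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

SYSTEM = ('Reply ONLY with a JSON object of the form '
          '{"answer": string, "confidence": number between 0 and 1}. No prose.')

def ask_json(question: str) -> dict:
    resp = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": question}],
        temperature=0,
    )
    text = resp.choices[0].message.content.strip()
    return json.loads(text)  # raises ValueError if the model ignored the format

print(ask_json("What is AnythingLLM?"))
```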
@rosenvladev965410 ай бұрын
How can I use .py files? It appears they aren't supported.
@TimCarambat10 ай бұрын
If you change them to .txt it will be okay. We basically just need to have all "unknown" types attempt to parse as text to allow this, since there are thousands of programming text types.
@frfr2023frfr9 ай бұрын
Unfortunately not running on Intel Macs. Insane how fast these machines have become obsolete.
@TBMazembe9 ай бұрын
It was really disappointing for me too.
@TimCarambat9 ай бұрын
I run this on an Intel Mac daily. It works, albeit slowly. I have an Intel Windows machine with an NVIDIA 3090 that lets me offload to the GPU, which is much faster. You can't expect blazing speeds on a CPU alone (even with small quantized models) for LLM inferencing. That's just not a reality we are in currently, but one I think will manifest soon. The GPU barrier is definitely hurting the proliferation of LLMs on devices.
@TBMazembe9 ай бұрын
@TimCarambat LM Studio only supports Apple Silicon Macs (M1/M2/M3).
@frfr2023frfr9 ай бұрын
@TimCarambat I have indeed found another way to run LLMs on my MacBook (16", 2019). It just works quite slowly, and it amazes me all the more how much the Macs with M-series chips surpass the Intel Macs in terms of performance.
@TimCarambat9 ай бұрын
@frfr2023frfr It blew me away too; even the "older" M1 chips are blazing fast with a 5-bit quantized Llama 2 7B. It's crazy.
@muhammadbintang65885 ай бұрын
What matters more: higher parameter count or higher quantization bits (Q)?
@TimCarambat3 ай бұрын
Models tend to "listen" better at higher quantization.
@Djk0t10 ай бұрын
Hi Tim, fantastic. Is it possible to use AnythingLLM with GPT-4 directly, for local use? Like the example you demonstrated above.
@thedeathcake10 ай бұрын
Can't imagine that's possible with GPT-4. The VRAM required for that model would be in the hundreds of GB.
@alanmcoll10110 ай бұрын
Thanks mate. Had them up and running in a few minutes.
@morespinach98325 ай бұрын
Just like Ollama and many others.
@shattereddnb32685 ай бұрын
I've been playing around with running local LLMs for a while now, and it's really cool to be able to run something like that locally at all, but it does not come even close to replacing ChatGPT. If there actually were models as smart as ChatGPT to run locally, they would require a very expensive bunch of computers...
@Jascensionvoid10 ай бұрын
This is an amazing video and exactly what I needed. Thank you! I really appreciate it. Now the one thing: how do I find the token context window for the different models? I'm trying out Gemma.
@TimCarambat10 ай бұрын
Up to 8,000 (depends on available VRAM; 4096 is safe if you want the best performance). I wish they had it on the model card on Hugging Face, but in reality it is sometimes just better to Google it :)
@Jascensionvoid10 ай бұрын
@TimCarambat I gotcha. So for the most part, just use the recommended one. I got everything working, but I uploaded a PDF and it keeps saying "I am unable to provide a response to your question as I am unable to access external sources or provide a detailed analysis of the conversation." But the book was loaded, moved to the workspace, and saved and embedded?
@TimCarambat10 ай бұрын
For what it's worth, in LM Studio there is an `n_cntxt` param on the sidebar that shows the maximum you can run. Performance will degrade if your GPU is not capable of running the max token context, though.
@BotchedGod9 ай бұрын
AnythingLLM looks super awesome; can't wait to set it up with Ollama and give it a spin. I tried Chat with RTX, but the YouTube upload option didn't install for me, and that was all I wanted it for.
@JasonStorey8 ай бұрын
Hey, great video. For some reason I don't have LM Studio as an optional provider in AnythingLLM. Any thoughts? Thanks.
@TimCarambat8 ай бұрын
That certainly isn't right... Where are you in the UI where you do not see lmstudio?
@uwegenosdude8 ай бұрын
Thanks, Tim, for the good video. Unfortunately I do not get good results for uploaded content. I'm from Germany, so could it be a language problem, since the uploaded content is German text? I'm using the same Mistral model from your video and added two web pages to AnythingLLM's workspace, but I'm not sure if the tools are using this content to build the answer. In the LM Studio log I can see a very small chunk of one of the uploaded web pages, but overall the result is wrong. To get good embedding values I downloaded nomic-embed-text-v1.5.Q8_0.gguf and use it for the Embedding Model Settings in LM Studio, which might not be necessary, since you didn't mention such steps in your video. I would appreciate any further hints. Thanks a lot in advance.
@HugoRomero-mq7om8 ай бұрын
Very useful video!! Thanks for the work. I still have a question about the chats that take place: is there any record of the conversations? For commercial purposes it would be nice to generate leads with the chat itself!
@TimCarambat8 ай бұрын
Absolutely, while you can "clear" a chat window you can always view all chats sent as a system admin and even export them for manual analysis or fine-tuning.
@boomerstrikeforce10 ай бұрын
Great overview!
@Namogadget_9 ай бұрын
Love you for your explanation. Love from India 😊
@moreloveandjoy9 ай бұрын
Brilliant. Thank you.
@bhushan80b8 ай бұрын
Hi Tim, the citations shown are not correct; it's just showing random files... is there any way to sort this out?
@ezdeezytube2 ай бұрын
Instead of dragging files, can you connect it to a local folder? Also, why does the first query work but the second always fail? (It says "Could not respond to message. An error occurred while streaming response.")
@scchengaiah49042 ай бұрын
Really awesome stuff. Thank you for bringing such quality work and making it open-source. Could you please help me understand how efficiently the RAG pipeline in AnythingLLM works? For example, if I upload a PDF with multimodal content, or if I want my document to be embedded semantically, or want to use multi-vector search, can we customize such advanced RAG features?
@fxstation13299 ай бұрын
Thank you so much for the concise tutorial. Can we use both Ollama and LM Studio with AnythingLLM? It only takes one of them. I have some models in Ollama and some in LM Studio, and would love to have them both in AnythingLLM. I don't know if this is possible, though. Thanks!
@TheExceptionalState10 ай бұрын
Many thanks for this. I have been looking for this kind of solution for 6+ months now. Is it possible to create an LLM based solely on, say, a database of 6,000 PDFs?
@TimCarambat10 ай бұрын
A workspace, yes. You could then chat with that workspace over a period of time and then use the answers to create a fine-tune and then you'll have an LLM as well. Either way, it works. No limit on documents or embeddings or anything like that.
@TheExceptionalState10 ай бұрын
@TimCarambat Many thanks! I shall investigate "workspaces". If I understand correctly, I can use a folder instead of a document and AnythingLLM will work with the content it contains. Or was that too simplistic? I see other people are asking the same type of question.
@Peter-bi4hm8 ай бұрын
Did not work for me on Windows 11. I tried to add a local document, but save-and-embed always throws an error: "The specified module could not be found. -> \AppData\Local\Programs\anythingllm-desktop\resources\backend\node_modules\onnxruntime-node\bin\napi-v3\win32\x64\onnxruntime_binding.node". Then I tried to download the Xenova all-MiniLM-L6-v2 model manually, but the error remains.
@milorad930110 ай бұрын
Hello Tim, can you make a video on connecting Ollama with AnythingLLM?
@stephanh108310 ай бұрын
Why am I having a problem with docs (PDF, TXT)? I get "Could not respond to message. An error occurred while streaming response, network error." Without docs it works fine with LM Studio. What am I doing wrong? I use an M1 with 16 GB of RAM.
@opensource100010 ай бұрын
maybe watch the video and start your server and connect to it
@stephanh108310 ай бұрын
@opensource1000 The server is started and works fine with chat. When I use a PDF or TXT I get that error message and AnythingLLM crashes. Yes, I will watch the video again; maybe I missed something.
@frosti76 ай бұрын
Awesome, but AnythingLLM won't see scanned PDFs that need OCR the way ChatGPT would. Is there a multimodal model that can do that?
@TimCarambat6 ай бұрын
We need to support vision first so we can enable OCR!
@nightmisterio9 ай бұрын
Adding PDFs in the chat and making pools of knowledge to select from would be great.
@NaveenKumar-vj9sc10 ай бұрын
Thanks for the insights. What's the best alternative for a person who doesn't want to run locally yet wants to use open-source LLMs for interacting with documents and web scraping for research?
@TimCarambat10 ай бұрын
OpenRouter has a ton of hosted open-source LLMs you can use. I think a majority of them are free and you just need an API key.
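OpenRouter exposes the same OpenAI-style chat API, so the hosted route looks almost identical to the local one. A minimal sketch (the model slug below is an assumption; check OpenRouter's model list for what is actually available and free):

```python
# Sketch: using a hosted open-source model via OpenRouter instead of running locally.
# The model slug is an assumption; pick any model from openrouter.ai/models.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="mistralai/mistral-7b-instruct",  # assumed slug; verify on OpenRouter
    messages=[{"role": "user", "content": "Summarize this document chunk: ..."}],
)
print(resp.choices[0].message.content)
```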
@juhamattilaАй бұрын
For me the app does not open (1.7.0), or the window of the app does not appear. When I quit it and try to restart it, it automatically shuts down. I tried trashing the cache files with AppCleaner; then it behaves like a first-time start: the app is on but the window does not appear. I tried re-installing and also tried the Intel version, same behaviour. I have the LuLu outgoing firewall; sometimes it picks up some attempts and I allow those, but other times it does not. I also tried with the VPN turned off. The weird thing is that one time it started fine; I downloaded a model, it got stuck unpacking the model for two hours, so I ended up closing the app, and it did not open again after that. I wonder what might be blocking it on my computer?
@Trendish_channel10 ай бұрын
Ohh... this Mistral model has been trained on data up to October 2021... it's so old. 😟
@TimCarambat10 ай бұрын
If I recall, there is a Mistral OpenHermes model, and it has more recent info.
@Trendish_channel10 ай бұрын
@TimCarambat Yeah... I tried it already; it says it was trained on data up to December 2021.
@jongdonglu9 ай бұрын
Hey Tim, I'm on Debian. LM Studio runs, and wonderfully well too; however, I'm having a small issue: in the sidebar, the only icons that aren't weird squares are the local server icons... what icon pack or font do I need from the repo?
@TimCarambat9 ай бұрын
phosphor icons
@NicolasCadilhac2 ай бұрын
Followed the instructions. At my first question in chat with mistral-7b, I get "Only user and assistant roles are supported!"
@TimCarambat2 ай бұрын
Using LMStudio?
@bennguyen131310 ай бұрын
I notice some of the models are 25GB+: BLOOM, Meta's Llama 2, Guanaco 65B and 33B, dolphin-2.5-mixtral-8x7b, etc. Do these models require training? If not, but you wanted to train one with custom data, does the size of the model grow, or does it just change and stay the same size? Aside from LM Studio and AnythingLLM, any thoughts on other tools that attempt to make it simpler to get started, like Oobabooga, GPT4All, Google Colab, llamafile, or Pinokio?
@avantigaming16279 ай бұрын
I am trying to access PDFs and documentation present on a website I have given AnythingLLM, but it seems not to be working. Is it possible to do this, or do I need to manually download them from the website and attach them in AnythingLLM?
@matheus-mondaini8 ай бұрын
How can I add data into ChromaDB for AnythingLLM to read?
@TimCarambat8 ай бұрын
By uploading via the UI while Chroma is your selected vector DB
@matheus-mondaini8 ай бұрын
@TimCarambat I need to do this via code; is it possible?
@TimCarambat8 ай бұрын
@matheus-mondaini Yes, if you have an instance running, or on desktop, there is an API, and you can see the endpoint "update-embeddings" for a workspace. Docs are usually at localhost:3001/api/docs.
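As a hedged sketch of what that can look like from code: the exact paths, payload fields, and auth header below are assumptions and must be checked against the Swagger page Tim mentions at localhost:3001/api/docs; only the existence of that docs page and the "update-embeddings" endpoint name come from the thread.

```python
# Sketch: driving the AnythingLLM developer API from code instead of the UI.
# All endpoint paths, payload fields, and headers here are assumptions; confirm
# them against the docs served at http://localhost:3001/api/docs.
import requests

BASE = "http://localhost:3001/api/v1"  # assumed API base path
HEADERS = {"Authorization": "Bearer YOUR_ANYTHINGLLM_API_KEY"}  # key generated in the app's settings

# Assumed shape of the "update-embeddings" call: tell a workspace which
# already-uploaded documents to embed (adds) or remove (deletes).
resp = requests.post(
    f"{BASE}/workspace/my-workspace/update-embeddings",
    headers=HEADERS,
    json={"adds": ["custom-documents/my-notes.txt"], "deletes": []},
)
resp.raise_for_status()
print(resp.json())
```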