🎯 Key Takeaways for quick navigation: 00:29 Open-source *chatbot tool.* 01:50 Diverse *chat models.* 03:13 Requires *internet for some models.* 06:04 Free *local models.* 07:50 Privacy-focused. 08:16 Easiest *way to run chatbots.* Made with HARPA AI
@DJPapzin · 11 months ago
🎯 Key Takeaways for quick navigation: 00:29 🖥️ *Running Chatbots Locally* - Use the Jan app. - Free and open-source. - Works offline too. 01:50 🌐 *Variety of Models* - Llama, Mixtral, GPT. - Open-source models. - Options for customization. 07:50 🚀 *Easy and Efficient* - User-friendly interface. - Quick setup. - No internet needed. Made with HARPA AI
@noobicorn_gamer · 11 months ago
We're finally seeing some improvements in UI software for casual people to use. I'm happy with how the AI market is developing to be more casual-friendly and not just for devs. I wonder how Jan makes money by doing this.
@CM-zl2jw · 11 months ago
Taxpayers and venture capitalists probably? Who knows though. I am blown away by how tech savvy some people are. Even with only a little bit of knowledge and with AI some pretty powerful workflows are built and shipped. Exciting stuff.
@AtomicDreamLabs · 11 months ago
I never thought it was hard. LM studio makes it so easy even my 11-year-old daughter can do it
@GameHEADtime · 9 months ago
@CM-zl2jw Probably not — it's a GUI; they're not getting tax money to print hello world. But if they are, maybe it's better than sending it to Ukraine, thanks.
@Geen-jv6ck · 11 months ago
The small Phi-2 model is reported to perform better than most 7-13B models out there, including Mistral-7B and LLaMA-13B. It's good to see it available in the app.
@enlightenthyself · 11 months ago
You are complaining that a LANGUAGE model can't do math... You are definitely special 😂
@Strakin · 11 months ago
@freedomoffgrid Yeah, I asked it to calculate the coefficient of a Warp 9 warp drive and it couldn't even do that.
@TheReferrer72 · 11 months ago
@freedomoffgrid No models can reliably do basic math; they have to use tools. Even the mighty GPT-4 has serious problems with math.
@enlightenthyself · 11 months ago
@freedomoffgrid Limitations in the technology itself, brother.
@CM-zl2jw · 11 months ago
Better how?
@alan_yong · 11 months ago
🎯 Key Takeaways for quick navigation: 00:00 🚀 *Introduction to Jan Tool* - Introduction to a free open-source tool called Jan for running chatbots locally on your computer. - Jan is secure, offline, and supports various operating systems like Windows, Mac, and Linux. - Highlighting the tool's simplicity and availability under the AGPLv3 license. 01:11 📥 *Downloading and Installing Jan* - Demonstrates the process of downloading and installing Jan on a Windows PC. - The availability of open-source models like Mistral Instruct 7B, Llama 2, and Mixtral 8x7B for download. - Emphasis on the user-friendly interface of Jan, making it easy to explore and install models. 03:13 🌐 *Connecting to the Internet for OpenAI Models* - Explains that while open-source models like Llama and Mixtral can run locally, OpenAI models (e.g., GPT-3.5, GPT-4) require an internet connection and an API key. - Provides guidance on obtaining an API key for OpenAI models. - Discusses the distinction between free local models and those requiring an internet connection. 05:24 💬 *Chatting with GPT-4 Using Jan* - Demonstrates the process of setting up a chat session with GPT-4 using Jan. - Highlights the cost associated with using GPT-4 due to the reliance on the OpenAI API. - Showcases the real-time interaction and responses from the GPT-4 model. 06:16 💻 *Local Model Mixtral 8x7B Usage* - Illustrates the usage of a local model, Mixtral 8x7B, without requiring an internet connection. - Emphasizes the cost-free nature of running local models directly from the hard drive. - Compares the resource usage and output quality with GPT-4 and GPT-3.5 Turbo. 07:50 🔄 *Exploring Diverse Local Models* - Discusses the variety of open-source models available for download and use in Jan. - Highlights the flexibility to download and run models locally, even without an internet connection. - Points out the ease of disconnecting from the internet while using local models for enhanced privacy.
08:59 🌐 *Conclusion and Recommendation* - Concludes by emphasizing Jan's simplicity, speed, and cost-free usage for local chatbot models. - Expresses the lack of sponsorship or affiliation with Jan, making it an unbiased recommendation. - Encourages viewers to explore Jan for running local chatbots easily and efficiently. Made with HARPA AI
@begula_real · 11 months ago
Whoa, nice hard work
@qster · 11 months ago
Great video as always, but you might want to mention the PC requirements when running larger models.
@TheMiczu · 11 months ago
I was wondering the same — whether Mixtral was running well because of his beast of a machine or not.
@qster · 11 months ago
@TheMiczu A rough rule of thumb is to make sure you have a few gigabytes of RAM more than the file size of the model itself; the larger the file, the better the GPU you'll also need.
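That rule of thumb can be sketched in a few lines of Python (the 3 GB headroom figure is an assumption for the OS and other apps, not a published requirement):

```python
# Rough back-of-envelope check based on the "file size + headroom" rule
# of thumb above. The headroom value is an assumed default, not official.
def fits_in_ram(model_file_gb: float, system_ram_gb: float,
                headroom_gb: float = 3.0) -> bool:
    """Return True if the model file plus OS/app headroom fits in RAM."""
    return system_ram_gb >= model_file_gb + headroom_gb

# Example: a ~26 GB Mixtral 8x7B Q4 file on a 32 GB machine
print(fits_in_ram(26.0, 32.0))   # True — but only just
print(fits_in_ram(26.0, 16.0))   # False — expect heavy swapping
```

This only estimates whether the model loads at all; generation speed still depends heavily on how many layers your GPU can hold.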
@ProofBenny · 11 months ago
@qster How did you get it to run on the GPU?
@qster · 11 months ago
@@ProofBenny it will automatically use it, no need to change settings
@TheGalacticIndian · 11 months ago
The current, pioneering and rather primitive LLMs have an average size of a few to tens of gigabytes. At this size they can pack much of the information — works of literature, paintings, or films — known to mankind. That means they compress data FANTASTICALLY. What follows from this? That the data captured from a single user, even if filmed with sound 24/7, could be compressed to such a minuscule size that an old 14.4k modem would suffice to transmit it the moment internet access appears. Besides, the model may be canny enough to find ways to connect to the internet, attach user data to some file, and send it that way to the rulers of our lives. Privacy needs serious work.
@stefano94103 · 11 months ago
This is what I’ve been waiting for. I actually downloaded and started running it before even finishing your video. Great find! Thanks!
@CM-zl2jw · 11 months ago
I can feel your excitement. I was about to do the same but think I will use discipline and read the comments first 😂🎉. Self control!!
@stefano94103 · 11 months ago
@@CM-zl2jw haha too smart! So far testing has been pretty good 👍
@Designsecrets · 11 months ago
How did you get it working? Every time I enter a message, I get "Error occurred: Failed to fetch".
@frankywright · 11 months ago
Thanks again, Matt. You are truly a legend. I have searched online for ways to run my own chat bot, and just like magic, you present it. Thanks mate.
@fun-learning · 11 months ago
Thank you ❤
@avivolah9401 · 11 months ago
There is also LM Studio, which does the same thing, only with a bit more control :)
@ChrisS-oo6fl · 11 months ago
Or Oobabooga, with infinitely more control and multimodal support. It's the UI that everyone creating models uses.
@mayagayam · 11 months ago
Do any of these allow for agents or the equivalent of copilot or autogpt?
@alejandrofernandez3478 · 11 months ago
From the video, the main difference from LM Studio is that Jan is open source, but I'm not sure if it can run on older processors or machines like LM Studio is starting to do.
@bigglyguy8429 · 11 months ago
@@ChrisS-oo6fl Yeah, but it's a pile of code-vomit on Github, which is exactly why normal peeps like me are NOT using it...
@mattbeets · 11 months ago
@mayagayam LM Studio can also run AutoGen — see various tutorials on YouTube :)
@onecrowdehour · 11 months ago
Just might be the post we have all been waiting for, way to go Mr. Wolfe.
@CelestiaGuru · 11 months ago
A description of your hardware configuration (CPU, amount of RAM, GPU, amount of video memory, network upload and download speeds) would be very helpful. What might be quick for you might be absurdly slow for someone with a less-capable system or slower network configuration.
@bobclarke5913 · 11 months ago
I adore YTers who quote what % of CPU or RAM is being used without saying what hardware they're running. Because that means they think we're family and know every detail of each other. And will be pleased when I show up to crash in their guest room.
@scottmiller2591 · 11 months ago
I'd like someone to do a Pinokio, Petals, Oobabooga, Jan framework comparison. Oobabooga and Pinokio give you a lot of under-the-hood options I'm not seeing demonstrated here — pre-prompts, token buffer access, etc.
@toxichail9699 · 11 months ago
LM Studio as well. Paired with Open Interpreter, you can use local LLMs to help with automating tasks on your PC, including opening things and creating files, as well as creating and executing code.
@AryanSoni-p9e · 11 months ago
🎯 Key Takeaways for quick navigation: 00:00 🚀 *Introduction to Jan App and its Features* - Introduction to Jan, a free and open-source tool for running chatbots locally. - Jan supports various operating systems, including Mac, Windows, and Linux. - Overview of Jan's compatibility with both open-source and closed-source models. 01:25 🛠️ *Installing and Exploring Jan App* - Installing Jan on Windows and accessing the chat page. - Explanation of the absence of pre-installed models and how to explore available models. - Listing various open-source models like Mistral, Llama, and Mixtral 8x7B. 02:57 ⚙️ *Installing Mixtral 8x7B and Managing Models* - Installing the Mixtral 8x7B model and understanding its size. - Mentioning pre-installed closed-source models, GPT-3.5 Turbo and GPT-4. - Highlighting the flexibility of setting up multiple models for comparison. 05:12 💬 *Interacting with Chatbots using Jan* - Creating a chat thread, setting custom instructions, and selecting models. - Demonstrating the usage of OpenAI models like GPT-4 (requires API key) and Mixtral 8x7B. - Emphasizing the cost associated with using OpenAI models for demo purposes. 07:36 🌐 *Running Jan Offline and Exploring Model Variety* - Discussing the ability to run Jan offline with local models like Llama and Mixtral. - Highlighting the freedom to download and use various models, including uncensored or fine-tuned ones. - Describing Jan as an easy, fast, and free solution for running chatbots locally. 08:59 👏 *Endorsement and Conclusion* - Clarifying the video's non-sponsored nature and expressing excitement about Jan. - Encouraging viewers to explore Jan, emphasizing its simplicity and intuitive interface. - Concluding by positioning Jan as a valuable tool for easy access to local chat models. Made with HARPA AI
@TheFlintStryker · 11 months ago
I installed Jan... downloaded 4 or 5 different models. 0 have worked. "Error occurred: Failed to fetch"... 🤷♂
@Dayo61 · 11 months ago
I got the same error message
@christopherkinnaird2881 · 11 months ago
@Dayo61 Same
@Designsecrets · 11 months ago
@Dayo61 Same, and no help, nothing... no answers on how to fix it.
@DrFodz · 11 months ago
Is there a way you can share documents like pdf with it? If not, are there any alternatives that can do that? Thanks a lot Matt!
@jonathanpena5972 · 11 months ago
I only know of ChatPDF (at the top if you Google it). It runs online, so not locally, but it's free!
@AdamRawlyk · 6 months ago
As someone who isn't very tech-savvy but who's been wanting a nice jumping-on point to offline local chatbots, this is a brilliant start. Videos like this and channels like NetworkChuck have been a huuuuuge help in getting me more involved and helping me understand it better, one step at a time. Incredible work you guys all do, keep up the awesome work. :) Edit: I also wanna note that AI itself has been getting such a bad rep, and it genuinely surprises me. I understand the problems of jobs and bad actors, but any advancement in technology has problems like that which arise. I mean, look at how people viewed the internet in its infancy. And yes, there are bad people who do bad things. But that's not the technology's fault; that's the discretion of the individual who uses it. When it comes to AI, I prefer a glass-half-full approach. Sure it can be used for bad, but it can also be a wonderful and amazing thing if we give it the chance to fully evolve and shine, and put some measures in place to help against or deter the bad actors. :p
@americanswan · 11 months ago
I'm definitely looking for something like this, but I need to feed it about 100 PDF files that it needs to scan and know intimately, then I would be thrilled.
@USBEN. · 11 months ago
There are local models trained for way higher token limits upto 20k.
@americanswan · 11 months ago
@USBEN. What are you talking about? Self-hosting an AI needs tokens? What?
@USBEN. · 11 months ago
@americanswan Token limit = the amount of context it can take in, like the 4096 slider in the video — but some models go up to 20k tokens.
@missoats8731 · 11 months ago
@americanswan These models are restricted in the amount of content they can "remember". This content is measured in "tokens". GPT-4 has a context window of 128k tokens (which is a lot), which some people say means it could remember and talk about a 300-page book, for example. So to find out if there is the right model for your needs, you would have to find out how many tokens the text in your PDFs has. As far as I know, OpenAI has a tool where you can paste text and it tells you how many tokens it has. Then you would have to find a model whose "context window" is large enough to remember the text in your PDFs. If every PDF has only one page of text, that would be a lot easier than if it has 100 pages (since a higher token limit also means higher demands on your computer). The next problem is that in Jan there doesn't seem to be an option to input your documents, so you would have to find a similar tool that allows that. At this moment I don't think you will find a satisfying solution (if your PDFs have a lot of text). But many people are looking for solutions to exactly your problem (especially since this would be very valuable for a lot of companies), so I'm optimistic something will come up in the next months or so.
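A quick way to sanity-check this yourself: a common rough heuristic for English text is ~4 characters per token (use OpenAI's tokenizer page or the tiktoken library for exact counts — the 4-chars figure here is an approximation, not an exact rule):

```python
# Quick-and-dirty token estimate. Assumption: ~4 characters per token,
# a widely quoted rough heuristic for English; real counts vary by model.
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    return int(len(text) / chars_per_token)

def fits_context(text: str, context_window: int = 4096) -> bool:
    """Will this text roughly fit in a model's context window?"""
    return estimate_tokens(text) <= context_window

doc = "word " * 5000          # ~25,000 characters of dummy text
print(estimate_tokens(doc))   # rough estimate, well over 4096
print(fits_context(doc))      # False for a 4096-token window
```

If the estimate is near the limit, check with a real tokenizer before committing to a model.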
@TheFlintStryker · 11 months ago
@USBEN. Can you point to the best working models with higher context, in your opinion?
@mcclausky · 11 months ago
Amazing video! Thank you. Matt, do you know perhaps how can we train those local models with the data and files from our hard drive? (word, excel, PDF files, etc)
@DaleMuellerDotCom · 11 months ago
Train or upload docs for reference
@drkeithnewton · 11 months ago
@Matt Wolfe - Thank you for making sense of this AI world for me — it makes it so much easier to follow.
@BionicAnimations · 11 months ago
Thanks Matt! Just what I have been waiting for. Now all I need is an image generator like this.👍
@Krisdomain · 11 months ago
you can try stable diffusion to generate image locally on your computer
@BionicAnimations · 11 months ago
@@Krisdomain That would be awesome! Thanks! Is it as easy to set up as JanAI or is it difficult?
@stribijev · 11 months ago
@@BionicAnimations It is easy as I could do that. However, the images I generated were not nice, maybe I used poor models :(
@Vitordiogovitx · 11 months ago
There are great tutorials for installing it. I enjoy the Automatic1111 UI — it's prettier to use — but if you want quality images generated locally, some studying and follow-up installs are required. Keywords for your search: ControlNet, negative prompting, seed number. This should give you an idea of where you are heading.
@BionicAnimations · 11 months ago
@@stribijev Hmm... I've usually seen really good pics with Stable Diffusion. How did you learn how to make them?
@TheCRibe · 11 months ago
Great video ! Note that the experimental release has the GPU option.
@Ira3-ix4bh · 11 months ago
Hey Matt, I love your content — been with you for almost 2 years now! Have you found a similar tool that allows you to upload files to a local LLM for data analysis? Basically this same thing, but with file upload capabilities?
@milliamp · 11 months ago
There are a handful of other good tools for running local LLM like LM studio (mac/PC) and Ollama (mac) that I think are worth mentioning alongside Jan.
@TomGrubbe · 11 months ago
LMStudio is closed source though.
@pventura49 · 11 months ago
Matt - big fan of your videos. This Jan chatbot tool looks great. Thank you for bringing us all the latest and greatest info on AI. 😃
@LucidFirAI · 11 months ago
Best AI news. I tried for a couple of weeks to run LLAMA like 6 months ago and found it very challenging.
@iulianpartenie6260 · 11 months ago
0:09 Answer from Phind 34B Q5: As an AI language model, I don't have specific hardware requirements to run. My existence is based on neural networks and cloud computing, so the minimum resource needed would be a stable internet connection to communicate with the server where I'm hosted.
@iulianpartenie6260 · 11 months ago
Something is suspicious about this application. To the question "Why is my GPU running at 100%?" I got a very long response time, while the CPU reached 82%, the GPU sat at 97% (constant), and 52 GB of memory was in use. I didn't have that kind of resource consumption even when editing videos. Is it possible it's making your resources available to others, as happens with blockchain?
@nryanpaterson6220 · 11 months ago
How fortunate for me! I was just thinking about having an offline chat, and BOOM! Here ya go! Thanks, Matt, I love the content! Keep it up!
@AIMFlalomorales · 11 months ago
i was just messing around with this last night!!!!!! dude, you are on TOP OF IT MATT!!!!!! lets go to a Padres game!
@joseluisgonzalezgeraldo1577 · 11 months ago
Thanks! Lobo, why not a video about the computer you need to run these models locally? Minimum RAM, CPU/GPU, NVIDIA type, and so on? It could be very useful for those thinking of buying a new PC to play with AI at home.
@bigglyguy8429 · 11 months ago
That's EXACTLY what I want. I tried asking the local PC shop but really they just know gaming and were basically offering me the most expensive of everything
@konstantinlozev2272 · 11 months ago
The lowest spec that I have been able to run 7B models (4-bit quantised) in LM Studio is GTX 1060 6GB laptop with an old Core i5 and 16GB RAM. If I would build a dedicated PC, I would go with RTX 4060Ti 16GB GPU, any modern 8 core CPU and 32GB RAM.
@40g33k · 11 months ago
This guy isn't technically qualified. He's just running a YouTube channel.
@bigglyguy8429 · 11 months ago
@konstantinlozev2272 I've been having fun lately via LM Studio, running dolphin-2.0-mistral-7b.Q5_K_M.gguf. My machine has 16GB RAM and an RTX 2060
@konstantinlozev2272 · 11 months ago
@40g33k You can certainly know when you are running on the GPU vs the CPU. What is more, GPU memory use (the critical metric) can be monitored with MSI Afterburner. Hardly rocket science.
@primordialcreator848 · 11 months ago
The only issue I have with these, like Jan and LM Studio, is chat history/memory. Any way to have the local models save memory locally so they can remember the chats forever?
@michai333 · 11 months ago
I still prefer LM studio due to the ability to modify GPU layers and CPU offsets. Also, LM provides direct access to Huggingface models.
@mdekleijn · 11 months ago
Me too, LM studio will also inform you if your hardware is capable of running the model.
@jennab176 · 11 months ago
I would love a comparison between the two, I was actually going to ask Matt for that
@jennab176 · 11 months ago
Do you have any tips for what the recommended settings are for GPU layers and CPU offsets? My laptop is not very robust, sadly, but I did just upgrade to 32GB of RAM. That did not fix my high CPU usage when running LM Studio, however. It still has moments where it spikes up to 100%.
@michai333 · 11 months ago
@jennab176 It depends if your laptop even has a discrete GPU. Many of the mid- to lower-tier laptops will just use the CPU's integrated graphics, at which point modifying GPU layers won't improve token processing speed. It really depends on your specific hardware configuration. I bet if you post your specs here the community can help you optimize your settings.
@patrick-zeitz · 11 months ago
More apps for local LLM‘s: - LM Studio - Faraday - GPT4ALL Commandline (inst. use via UI) - Ollama ( Ollama Web Ui ) - Text Gen Web Ui - PrivateGPT
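Worth knowing: several of the apps in the list above (Jan, LM Studio, Ollama) can expose a local OpenAI-compatible HTTP server, so scripts can talk to a local model the same way they would talk to the OpenAI API. A minimal sketch — the port (1337 is Jan's default in its docs) and the model name are assumptions; check your app's local API server settings:

```python
import json

# Assumed endpoint for Jan's local API server; LM Studio and Ollama use
# different default ports but the same OpenAI-style request shape.
BASE_URL = "http://localhost:1337/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "mistral-ins-7b-q4") -> bytes:
    """Build an OpenAI-style chat-completion request body."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")

body = build_chat_request("Why is the sky blue?")
print(json.loads(body)["messages"][0]["content"])
# To actually send it (with the server running):
#   urllib.request.urlopen(urllib.request.Request(
#       BASE_URL, body, {"Content-Type": "application/json"}))
```

The response comes back in the usual OpenAI JSON shape, so existing OpenAI client code often works by just pointing its base URL at localhost.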
@missoats8731 · 11 months ago
Do you know if any of those allow the user to "upload" documents to them to chat about?
@patrick-zeitz · 11 months ago
@missoats8731 Third try 😂 YouTube constantly removes my answers... yes, GPT4All.
@rajaramkrishnan4181 · 11 months ago
Can we upload PPTs and docs and then ask these models questions based on the files uploaded?
@r0bophonic · 11 months ago
This looks cool! It wasn’t clear to me from the video, but I believe only open source models can be run locally (versus paid OpenAI models like GPT-4).
@stribijev · 11 months ago
That is right, you can see Matt uses his own API key to enable GPT models.
@AntonioVergine · 11 months ago
This is because there is no "downloadable" version of GPT. Mixtral, on the other hand, doesn't have an online version (provided by the developers), so if you want to use it you must download it and run it on your computer.
@r0bophonic · 11 months ago
@stribijev Yeah, that's when it became clear the title is misleading. I think the video title should be changed to "Run Any Open Source Chatbot FREE Locally on Your Computer"
@stribijev · 11 months ago
@r0bophonic Right. Actually, it's nothing new — LM Studio has been out there for quite a while.
@yonosenada1773 · 11 months ago
This one is likely one of my favorites yet! Thank you!
@alikims · 11 months ago
can you train it with your local documents and chat about them?
@puravidasolobueno · 11 months ago
Wow! Best practical video I've seen in many months! Thanks, Matt!
@whiteycat615 · 11 months ago
Was waiting for this video for a while. Thank you
@theeggylegs · 11 months ago
Another great update. Thanks Matt! We appreciate you.
@Bella2515 · 11 months ago
I know there are some models that also allow us to input images, similar to GPT Vision. Is there any program that simplifies that process?
@playboy71322 · 11 months ago
what about LM Studio? That is my current fav.
@pete531 · 11 months ago
This is great Matt, thanks. Now we just need a jailbroken model.
@pete531 · 11 months ago
I have already found one — it's called "Pi GPT" and it will answer the most outrageous and evil questions you can imagine.
@Terran_AI · 11 months ago
I've been looking for a way to run a secure local model for a while and Jan definitely looks like one of the easiest.. However I use GPTs mainly for data analysis. Is it possible to attach files for analysis using this application?
@tanwilliam7351 · 8 months ago
You have just saved my life! This is what I have been looking for!
@anthonygross1963 · 11 months ago
How do you know it won’t steal your data or worse? Is it a good idea to be downloading such a large file??
@Ourplanetneedsyou · 11 months ago
Hi, could you help me? What can you advise? The book is created, the outline is created, and there is an understanding of the number of chapters and their titles. The text needs to be structured, linked, and divided into chapters. Which AI is best at this? Free and paid options? Thank you in advance.
@acewallgaz · 11 months ago
I get an "Error occurred: Failed to fetch" message — is it because my video card isn't good enough? I have 128GB of RAM.
@TheSnekkerShow · 11 months ago
I've done all the cybersecurity training, and refresher training, and in-person training, and read the reminder emails, yet here I am about to download and install some more mystery software because of some dude on YouTube.
@hstrinzel · 11 months ago
FABULOUS! Thank you! Brilliant. "Works right out of the box." Is there a way to run it on the GPU instead of the CPU?
@abhaykantiwal1094 · 6 months ago
Hey Matt, Can I customize the models after downloading so that I can train it on my own data?
@maxwell-cole · 11 months ago
Interesting post, Matt. Thanks for sharing.
@tunestyle · 11 months ago
Go, Matt! Great video as always!
@DavidPar_2024 · A month ago
Thanks Matt. This video was very informative.
@JavierCaruso · 11 months ago
Do any of these apps/models support external API calls and file uploads?
@emanuelmma2 · 11 months ago
Great video 👌
@geoattoronto · 11 months ago
Thanks for feeding this out so quickly,
@TungzTwisted · 11 months ago
This is definitely it. You hit the nail on the head. This is much like what DiffusionBee did to bring Stable Diffusion to those who weren't prompt-proficient — it's pretty crazy! Happy New Year!
@tanakamutaviri5561 · 11 months ago
Matt. Thank you for your hard work.
@sethjchandler · 11 months ago
Mistral under Jan freezes up my MacBook. 16 GB may not be enough RAM?
@scottfernandez161 · 11 months ago
Awesome Matt very easy to follow. Happy New Year 😊!😊
@ryanchuah · 11 months ago
Thanks for recommending this. Been looking for something like this for a long time. Can you recommend which model is good for SEO research and article generation?
@AdamKai79 · 11 months ago
Nice. Super valuable video. Thank you! You talked about the small cost for using one's own OpenAI API key, but do you know (or anyone here in the comments know) if those conversations are private, or does OpenAI use them to train the model like they do with using regular GPT3.5 or 4?
@Spraiser74 · 11 months ago
When using the API, personal data is not used for training — in theory.
@SocratesWasRight · 11 months ago
Not used in training. I think there is also an opt-out toggle in the paid version of regular ChatGPT.
@CM-zl2jw · 11 months ago
@SocratesWasRight Yes, true. You can opt out of sharing data on ChatGPT, but then it doesn't save the chat history.
@policani · 11 months ago
Does this software work with an AMD video card, or only NVIDIA? There are a bunch of Stable Diffusion projects I want to run on my PC but can't, because I own an AMD video card with 6GB of RAM.
@keithkeith2106 · 4 months ago
How do we know, after downloading Mistral or even GPT-4, that once I teach it a specific business case my intel isn't available to others who might try to steal what I've built?
@JosephShenouda · 11 months ago
This is EXCELLENT @Matt thanks for sharing this.
@RomiWadaKatsu · 11 months ago
Can I give it strict instructions like when I use Oobabooga, and "bend" the AI by feeding it the beginning of the answer? Sometimes I don't want to use the uncensored model since the censored ones perform better, but I still want to force them to tell me stuff they're trying to filter, or guide them toward the type of answer I need.
@anac.154 · 10 months ago
Great content, including your website. Thank you!
@Dj_Nizzo · 11 months ago
Amazing! Now we just need a simple Windows program like this that transcribes audio/video to text using something like Whisper. Every other option out there isn't user-friendly like this. Hopefully in 2024.
@TheFlintStryker · 11 months ago
Descript is really good for transcription.
@claudiososa5560 · 7 months ago
Great video. What Windows PC configuration do you recommend for running a Mistral version?
@Gorf1234 · 11 months ago
Is this a potential replacement for a business intranet search, where guidance and instructions are stored but unsearchable unless you already know the keywords you need? I want someone to be able to ask "How do I deal with a mentally unstable caller" (or whatever their semantics dictate) and actually get the best page for supporting vulnerable customers instead of "no results" because none of the keywords match, or (even worse) get results that show how to improve the stability of the VOIP software.
@freedm-bj1sb · 3 months ago
I have an Intel i9 CPU with 8 cores, 16 logical processors, and 32GB of RAM on Win10. During my trial of Jan Ai - Mistral Instruct 7B Q4, the response time was exceedingly slow, contrary to a video demonstration that portrayed it as fast, efficient, and streamlined.
@stewiex · 11 months ago
Amazing! I can't wait to try this!
@behrooz8393 · 8 months ago
Hey Matt, can I train a model on data in a different language (not English) locally? If so, which one? What I want is something similar to ChatRTX, but my PC doesn't meet the minimum requirements for it. I want to feed many docs (in a non-English language) to it and ask it questions. Do you know any local AI that can do that?
@HosannaSookra · 5 months ago
Can you list your PC specs please? So people like me who are too poor to buy a good PC can know if this will somewhat run on our potato PCs. Thanks.
@aidanblah9646 · 11 months ago
Can you explain Custom Instructions? Or provided a link for info? Thanks 🙏
@bigglyguy8429 · 11 months ago
In ChatGPT, the paid version, you can click your user name and then find the custom instructions section. In there you can enter who you are, what you do, your interests, how you want ChatGPT to reply, etc. I find it more useful than creating a custom GPT.
@amanjha8303 · 11 months ago
The biggest model on that one available list in Jan is LZLV 70B Q4. I have not heard of that one before and couldn't find much information about it either. Anyone else used it before?
@BeatsandTech · 11 months ago
I wonder if this can be installed on a server and run remotely across a network, or if it only has a local browser UI? I have a couple of servers that I may test this on...
@VanSocero · 11 months ago
Matt Wolfe, you finally gave me a workable LLM, brother!!! Thank you 😀
@dr.gregoryf.maassen2637 · 11 months ago
I have a question for the community. Is there a way to change the file location of the models in Jan? It saves it in appdata on my c drive. I would like the models to be saved elsewhere.
@paulstamp · 11 months ago
Thanks
@gregdove-k5l · 11 months ago
Will it automatically use the GPU instead of the CPU, or do you need to change settings?
@tonywhite4476 · 11 months ago
Finally. Ollama STILL doesn't have the "coming soon" Windows version, and my antivirus software won't let me download LM Studio. Great find, thanks Jan. Can't wait to see Marsha. 🤣
@b.j.freedman · 11 months ago
Using ChatGPT Plus, I can create custom GPTs. I have created some for use by high school science students. These are by topic and are confined to those topics at a certain level. Can I do something similar with the LLMs using Jan — create separate, "downstream" GPTs which can then be accessed in the (local) classroom? This would be better for students than having one chatbot for all the subjects.
@rickdunn6284 · 11 months ago
Is there a way to load a large chunk of data into a folder, or an app, or something — and use ChatGPT to interact with that data? Loading it all into the OpenAI website hasn't worked well for us.
@seb_gibbs · 11 months ago
That Windows download seems to take an unusually high amount of CPU just to show the initial menu of AIs available. Lots of different powershell.exe processes have started running in the background. Does it come with a trojan?
@pierruno · 10 months ago
Did you make a Video about LM Studio?
@GoldSrc_ · 16 days ago
I don't understand why people like this video, he never mentioned his PC specs, which are one of the most important bits of information.
@stuart_oneill · 11 months ago
Matt: Which of the generative systems are best for accessing new/same-day net info? I need to access daily news.
@tiredbusinessdad · 11 months ago
Of the different AI LLM models you have tested, which would you say has given the fastest response time while still providing a good response?
@latestAiHacks · 11 months ago
The models are not working. I always get this response: "Error occurred: Failed to fetch"
@michealsichilongo · 11 months ago
Great video. Consider making a video on how to customize or train on a specific topic.
@epicboy330 · 11 months ago
On top of that, Mixtral requires 24 gigs of dedicated RAM on your GPU to run, so that's also kind of a downside lol. Also, thanks for sharing this much simpler local AI, I'm excited to try it out.
@BionicAnimations · 11 months ago
@dark_mode Exactly!👍
@JG27Korny · 11 months ago
I like Oobabooga. You can choose and install whatever model you like from Hugging Face. You can even talk to them, not just chat. There are some models that are extremely good at role-playing.
@tigrisparvus2970 · 11 months ago
6Gb of GPU ram for the Mistral 7B model is the minimum. Though more is better.
@JG27Korny · 11 months ago
@tigrisparvus2970 CPU inference can be used, but it is quite slow. Unless you want some X-rated content, it is better not to do it lol.
@MattCasper-p5x · 10 months ago
Thanks, Matt! This is great! Do you know if these models can be trained in this platform to be customized for specific areas?
@ultragamersvk1668 · 11 months ago
Getting the error message ```Error occurred: Failed to fetch``` — please help.
@mimotron · 11 months ago
Maybe I'm wrong, but isn't the "instructions" tab like a system prompt? Like "You're a helpful assistant blablabla"?
@dougveit · 11 months ago
Great work thanks Matt!!😊
@JT-Works · 11 months ago
Very cool. Is there any way to publicly host a large language model with software like this? I have been looking everywhere and cannot seem to find anything.