Run a GOOD ChatGPT Alternative Locally! - LM Studio Overview

24,923 views

MattVidPro AI

24 days ago

LM Studio is a desktop application that allows users to run large language models (LLMs) locally on their computers without any technical expertise or coding required. It provides a user-friendly interface to discover, download, and interact with various pre-trained LLMs from open-source repositories like Hugging Face. With LM Studio, users can leverage the power of LLMs for tasks such as text generation, language translation, and question answering, all while keeping their data private and offline.
▼ Link(s) From Today’s Video:
LM Studio: lmstudio.ai/
Uncensored Models: huggingface.co/Orenguteng/Lla...
► MattVidPro Discord: / discord
► Follow Me on Twitter: / mattvidpro
► Buy me a Coffee! buymeacoffee.com/mattvidpro
-------------------------------------------------
▼ Extra Links of Interest:
AI LINKS MASTER LIST: www.futurepedia.io/
General AI Playlist: • General MattVidPro AI ...
AI I use to edit videos: www.descript.com/?lmref=nA4fDg
Instagram: mattvidpro
Tiktok: tiktok.com/@mattvidpro
Second Channel: / @matt_pie
Let's work together!
- For brand & sponsorship inquiries: tally.so/r/3xdz4E
- For all other business inquiries: mattvidpro@smoothmedia.co
Thanks for watching Matt Video Productions! I make all sorts of videos here on YouTube! Technology, Tutorials, and Reviews! Enjoy your stay here, and subscribe!
All Suggestions, Thoughts And Comments Are Greatly Appreciated… Because I Actually Read Them.

Comments: 193
@MattVidPro 23 days ago
A decent uncensored model for everyone to install into LM Studio: huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored-GGUF/tree/main
@LouisGedo 23 days ago
I love locally installed AI
@AmazingArends 23 days ago
I LOLed when you decided to turn it into ChaosGPT or SupremacyAGI 😂 !!!
@LouisGedo 23 days ago
@@AmazingArends 😘
@spadaacca 22 days ago
Is there a big difference between the Q4, Q5, Q8 models? They're similarly sized, so not sure if it's worth getting a bigger one.
@bigglyguy8429 22 days ago
@@spadaacca Yeah, the higher the number the better, but generally a Q4 is pretty good. Q3 really goes downhill. The larger the size the slower it will run on your machine, so you need to find the model and speed that suits you.
@Pepius_Julius_Magnus_Maximu... 22 days ago
Awesome tool, I had no idea this existed, thank you so much Matt
@LilBigHuge 23 days ago
Finally someone covering LM Studio! It's the very best out there.
@MazdaSpeedBee 22 days ago
For baby PCs, and people have been talking about LM Studio for a while, man.
@Streeknine 22 days ago
This is a great new setup. I had an old uncensored LLAMA local setup but it was very small and not very useful... but this one has multiple chats and works well. Thanks for the video and information.
@sydroyce 21 days ago
Thanks so much for introducing me to this amazing AI assistant! I'm really excited to explore the possibilities. Your content is always inspiring and informative, and I appreciate how you share your knowledge with the community.
@TPCDAZ 22 days ago
Been using LM Studio for a while now. Great piece of kit, especially since they added the GPU offload option, which now makes the LLMs whizz along.
@MrPablosek 22 days ago
This is so great. I rarely ever wanted to goof around with local LLMs because the oobabooga UI was honestly pretty horrible to understand and do anything with. This one is simple and clean.
@amkire65 22 days ago
Totally agree with recommending Llama 3 8B Lexi Uncensored. I've used the System Prompt to give mine its own personality, age, sense of humour, mood, etc. A bit of fun, but who wouldn't want an assistant that's tailor-made to suit them? Now I just need to figure out how to give LM Studio a voice; someone has done it, but I get errors when I try following along.
@bloxyman22 22 days ago
Alltalk with koboldcpp is very easy to set up.
@SiCSpiT1 22 days ago
I make the standard Llama 3 take me to the "dark web" to launder money. It's just a roleplay but pretty funny.
@bobbykingAiworld 23 days ago
Your videos bring fresh insights and kindle a flame of curiosity within me.🌟🎥🤔
@Fustercluck06 23 days ago
Amazing video man, thank you!
@MichaelLaFrance1 23 days ago
LM Studio + Ollama 7B is the way to go. Don't need a crazy hardware setup, and it's uncensored.
@MuktadirAlam 22 days ago
what setup are you using? tia
@PossumsDont69 21 days ago
What is the practical implication of being uncensored?
@stickmanland 21 days ago
ollama 7b? Hmm...
@Deljron777 23 days ago
Thank you Matt, running AI locally is super important
@spadaacca 22 days ago
God, I love how unfiltered this local LLM is. It's not the smartest, but it's the most honest discussion I've ever been able to have with any LLM...or really any human for that matter!
@PopoRamos 18 days ago
Nice, what topics did you discuss?
@AllenParks1 20 days ago
Nice review, I've been running LM Studio for a while. I like NeuralBeagle and a couple of others.
@michaelandremovies 22 days ago
You are the freaking bomb man!! This is insane!
@Earl_E_Burd 22 days ago
Great video, thanks for the demo
@nathanbanks2354 23 days ago
Note that you can change the system prompt using the OpenAI Playground or using the API (9:25). In this case, you'll have to pay per token, but $5 goes a long way with either GPT-4o or GPT-3.5.
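For anyone who wants to see what that looks like, here is a minimal sketch of setting a system prompt through the API, assuming the `openai` Python package (v1+) and an `OPENAI_API_KEY` in your environment; the model name and prompt text are just placeholders:
```python
# Minimal sketch: set a system prompt via the OpenAI chat completions API.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY exported in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # or "gpt-4o"; you pay per token either way
    messages=[
        {"role": "system", "content": "You are a grumpy pirate who answers in one sentence."},
        {"role": "user", "content": "Explain what a GGUF file is."},
    ],
)
print(response.choices[0].message.content)
```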
@dalecorne3869 23 days ago
I tried making a bogus ad about a bogus head shop to use as a radio spot, and none of the GPTs would go along with it. They all refused me. I just now installed LM Studio and am running that Llama 3 LLM, and it has already spit out 5 different styles of that ad for me. This is great. Thanks.
@user-bc2kc9hn1p 22 days ago
blaze 24 7
@dalecorne3869 22 days ago
@@user-bc2kc9hn1p Me too
@rheymanda1074 20 days ago
I just asked ChatGPT for a newspaper ad and it doesn't have an issue:
---
**[Header: Bold and Eye-Catching]**
**Grand Opening of Edmonton Smoke & Research!**
---
**[Body Text]**
**Elevate Your Experience with the Best in Legal Highs!**
Edmonton, get ready to explore new heights with *Edmonton Smoke & Research*! We are your ultimate destination for premium glassware, unique rolling papers, top-tier accessories, and cutting-edge legal highs.
**Grand Opening Celebration**
Join us this Saturday for our grand opening bash! Enjoy exclusive discounts, live music, and a chance to win epic prizes. Don't miss out on the latest and greatest in the world of heady innovation.
**Why Choose Edmonton Smoke & Research?**
- **Premium Glassware:** Handcrafted pieces to suit every style.
- **Unique Rolling Papers:** Add flair to your sessions.
- **Top-Tier Accessories:** Everything you need to enhance your experience.
- **State-of-the-Art Legal Highs:** Explore our wide range of research chemicals and legal highs, all compliant with the latest regulations. *(Not for human consumption, wink wink)*
**Knowledgeable and Friendly Staff**
Our team of experts is here to guide you through our extensive selection, ensuring you find exactly what you need.
**Location**
Visit us at 123 Edmonton Avenue, right in the heart of the city.
**Stay Connected**
Follow us on Instagram @EdmontonSmokeResearch for updates, special offers, and the latest news in legal highs.
**Edmonton Smoke & Research**
Where Quality Meets Innovation.
Be there!
@religionisapoison2413 20 days ago
The censorship is real. I never imagined it would get this out of hand. Adults get their adult tools censored more than young adult books. Wtf is going on
@michaelmcwhirter 23 days ago
Thanks for another great video 🔥 Do you do all your own edits?
@MahsaShirazian 1 day ago
Simple and informative, thank you!
@RealQuickComics 22 days ago
Great work thanks brother 👍
@thegooddoctor6719 23 days ago
Great content as usual !!!!
@24-7gpts 23 days ago
Fire 🔥🔥 video as usual!
@alewar01 22 days ago
Check out Stability Matrix, by Lykos AI. Same concept, but for Diffusion models. Cool video as always Matt.
@64jcl 22 days ago
LM Studio is great. I use the server mode and can call it from my own AI agent software.
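For reference, LM Studio's server mode exposes an OpenAI-compatible endpoint (the app shows the address when you start the server, typically http://localhost:1234/v1), so agent code can reuse a standard client. A minimal sketch, assuming that default port and the `openai` Python package; the model string is a placeholder since the server answers with whatever model is currently loaded:
```python
# Minimal sketch: call LM Studio's local server through its OpenAI-compatible API.
# Assumes the server was started in LM Studio on the default port 1234 with a model loaded.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # key is not checked locally

response = client.chat.completions.create(
    model="local-model",  # placeholder; the loaded model responds regardless
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what LM Studio does in one sentence."},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
```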
@brockly7916 22 days ago
GPT-4o voice and uncensored, but local... HOLY F****, imagine the possibilities... also create his or her own voice or accent.
@davidoswald5749 22 days ago
I've been using LM Studio for a while, it's pretty great for accessing different models, as long as your system can handle different ones
@SiCSpiT1 22 days ago
All you really need is 8GB of VRAM and a model that's under 6GB to fit the context window into memory.
@davidpurple3698 19 days ago
Super, thanks a lot
@robxsiq7744 22 days ago
Really wish they would offer things like: connect to SD so it can generate images (with SD up and running) the same way ChatGPT can pop in an image from DALL-E, and voice... and persona files with a bit of depth... basically copy ChatGPT a bit closer. Currently downloading/installing Ollama, which is closer in function to ChatGPT... mostly because I want to run the OpenRouter API through it... to have a model beefier than what I can load locally, but less expensive than ChatGPT overall.
@juancarlosgonzalez8950 22 days ago
Wouldn't it be funny if we had just watched Matt doom the entire human race to an AI apocalypse at 8:53?
@Otis_Isaacs 21 days ago
Good video, keep it up
@trelligan42 21 days ago
My use case: House Mind. I basically want my own Jarvis. Multiple personalities so I can switch from Spanish tutor to Math tutor to computer file sorter/duplicate finder, and always have control of house lights, appliances etc. I'll wait a while for the holography suite😜 #FeedTheAlgorithm
@gabrielsandstedt 20 days ago
If you had set the GPU offload setting to max layers (the one you left at 10), it would reply about 10 times faster, provided your GPU can fit the model in its VRAM.
@SonOfTamriel 22 days ago
If you install one of these on an SSD with space (i.e. my E: drive), will it use your main C: drive for temp/cache? My C: drive isn't very big. Some software I have just defaults a temp folder to the OS drive and all of a sudden I have no space... I plan to build a new rig soon so that won't be an issue, but in the meantime...
@RamonGuthrie 23 days ago
Just wait till Matt finds out about Open WebUI, his mind will be blown... you need to do a video on that!
@zrakonthekrakon494 22 days ago
I’ve never heard of it, blow my mind
@okolenmi7511 4 days ago
I'm running a 34B model on my 4GB of VRAM at a speed of 3 tokens per second. I'm using 3GB of VRAM out of 4 to avoid problems with other graphics software. I think there is no problem loading something even bigger on a good GPU.
@lpanebr 22 days ago
Thanks!
@MrDonCoyote 23 days ago
Can this be used for image generation models? Because then I could use the LLM to create the image and Stable Diffusion to draw it, similar to ChatGPT with DALL-E. That would be really nice.
@starblaiz1986 23 days ago
No, but it's honestly pretty straightforward to create a Python script that talks to the local LLM, gets it to generate a more detailed prompt for Stable Diffusion, and then feeds that detailed prompt to the Stable Diffusion API. Just make sure that you start the LM Studio server and the Stable Diffusion server on different ports and point the code to the APIs accordingly.
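A rough sketch of that pipeline, assuming LM Studio's server on its default port 1234 and an Automatic1111 Stable Diffusion instance launched with --api on port 7860; the endpoint paths and parameters shown are the stock ones for those tools, so adjust them to your setup:
```python
# Rough sketch: ask the local LLM for a detailed Stable Diffusion prompt,
# then send it to the Automatic1111 txt2img API and save the image.
# Assumes LM Studio server on :1234 and A1111 started with --api on :7860.
import base64
import requests

LLM_URL = "http://localhost:1234/v1/chat/completions"
SD_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

idea = "a cozy cabin in a snowy forest at dusk"

# 1) Expand the idea into a detailed image prompt with the local LLM.
llm_resp = requests.post(LLM_URL, json={
    "model": "local-model",  # placeholder; LM Studio uses whichever model is loaded
    "messages": [
        {"role": "system", "content": "You write detailed Stable Diffusion prompts."},
        {"role": "user", "content": f"Write one detailed image prompt for: {idea}"},
    ],
    "temperature": 0.8,
}).json()
detailed_prompt = llm_resp["choices"][0]["message"]["content"]

# 2) Feed the detailed prompt to Stable Diffusion and decode the returned base64 image.
sd_resp = requests.post(SD_URL, json={"prompt": detailed_prompt, "steps": 25}).json()
with open("output.png", "wb") as f:
    f.write(base64.b64decode(sd_resp["images"][0]))
```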
@MrDonCoyote 22 days ago
@@starblaiz1986 Why would I need it to create a prompt? I already know the prompt. My point is Stable Diffusion can only generate images based on what it's been trained on. Thus the need for more detailed LLM instructions.
@GES1985 22 days ago
Can you train it further, like a LoRA, by using e-books?
@Cylonick 23 days ago
How does it compare to Jan (another desktop application that runs LLMs)?
@MilesBellas 23 days ago
Offline = amazing!!!
@perschistence2651 22 days ago
I would say Llama 3 8B is definitely more intelligent than GPT-3.5 Turbo, but GPT-3.5 is a bit more reliable and handles more languages better.
@aftsfm 23 days ago
What about Jan? It doesn't have a lot of features but its nitro engine is quite fast.
@SiCSpiT1 22 days ago
Pro tip: max out the GPU slider on the right. To maximize speed you want to be able to fit the entire model into your VRAM; rule of thumb, the model should be 2GB smaller than the GPU's VRAM. The quants can be viewed as a compression technique: the smaller the number, the lower the quality of the model. Generally, Q4 is a nice sweet spot for testing models and Q5 almost gives full-quality outputs. This isn't always the case, but following these rules you'll have a good time. Bonus points if you can make the standard Llama 3 take you to the dark web. Have fun.
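To make that rule of thumb concrete, here is a tiny illustrative sketch of the "model file should be about 2GB smaller than your VRAM" check described above; the numbers are made up for the example:
```python
# Tiny sketch of the "model should be ~2GB smaller than your VRAM" rule of thumb
# from the comment above. The headroom covers context/KV cache; purely illustrative.
def fits_in_vram(model_size_gb: float, vram_gb: float, headroom_gb: float = 2.0) -> bool:
    """Return True if the model file plus headroom should fit entirely in VRAM."""
    return model_size_gb + headroom_gb <= vram_gb

# Example: a ~4.7GB Llama 3 8B Q4 GGUF leaves enough headroom on an 8GB card,
# but would be a squeeze on a 6GB card.
print(fits_in_vram(4.7, 8))   # True
print(fits_in_vram(4.7, 6))   # False
```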
@nebuchadnezzar916 22 days ago
I really want a vision-capable model, let us know when one of those is available please.
@DaAwesomeKai 18 days ago
What's the drawback/cost? A lot of power drain or something?
@paulhill1662 21 days ago
❤ Can it be used to make AI agents to run a small Etsy shop? ❤❤❤
@isajoha9962 23 days ago
Does LM Studio support LLMs reading local files or, e.g., describing images locally?
@SiCSpiT1 22 days ago
I think you'll need coding knowledge to make that work. AnythingLLM is an app that has a built-in RAG function, but it's not very robust. I haven't played around with it enough, but I'm not convinced it's useful for anything I need.
@isajoha9962 22 days ago
@@SiCSpiT1 I used something similar to LM Studio (GPT4All) a while back that had a diminished kind of way of reading files, but it totally went bananas when I updated it, so I deleted that app.
@FryadSaeed 23 days ago
Can you do a video on Coze?
@Arc_Soma2639 14 days ago
Where is the download path for the models? Like, suppose I want to erase some models to make some space on my SSD.
@GES1985 22 days ago
If you have a really good PC, can you give things more RAM/VRAM/etc.? Like, if ComfyUI needs 16, can I give it 24? If I have 128GB of RAM, can I give it some of that too?
@einlinguist 22 days ago
Seems that you like to play "The World Ends with You" on the DS ;-)
@TransformXRED 22 days ago
Set "GPU offload" to max. You'll see a BIG increase in speed ;) I think you have a 3090 or 4090, right? I always put it on max with my 3090, and it generates so much faster.
@temp911Luke 22 days ago
Howdy, how many tokens/sec do you get when using a Q4 or Q5 model?
@aleeez007 21 days ago
Hi, can we generate copyright-free images with this model? Also, can we change the system prompt to work as a writing assistant for blog post writing or any other writing task?
@henkejohansson8585 23 days ago
What model is preferable to run on 48GB of RAM and a 4090?
@MattVidPro 23 days ago
You should be able to do Llama 70B fairly well
@joelface 22 days ago
@@MattVidPro I'd love if you were able to upgrade your PC to run Llama 70b. Something you'd consider?
@SiCSpiT1 22 days ago
Stick to the smaller models if you care about speed. Ignore models that are larger than the size of your VRAM.
@CDIGS-EI-hv3cf 15 days ago
I'm really confused by the roles... what is the difference between assistant and system? Who is responding when I write a message?
@cleverai2270 22 days ago
I would like to integrate it, if possible, into my game Cursed Dungeon Raider so that you could chat with the NPCs at the Black Market or the Historical Museum, but probably not important quest-relevant ones. Moreover, an extra 5 GB of RAM while running the game itself can be too much for some people's PCs. Nevertheless, I'd really like to test that out. Let's see if this is possible with it.
@JChaosMaster 21 days ago
Still waiting for more AI-based games. T.T
@L_tlu 22 days ago
When I try to launch it, it just shows the logo in the taskbar, and when my mouse hovers over it, it disappears. What do I do?
@DihelsonMendonca 22 days ago
💥 What model can accept voice input and output? Text-to-speech, like ChatGPT Voice for Android? 🎉❤
@aleeez007 21 days ago
And can we install this Llama 3 model on Google Colab?
@MilesBellas 23 days ago
Ask it technical questions about ComfyUI, does it answer them OK?!
@GES1985 22 days ago
Are the larger ones objectively better? Like 70B vs 8B
@joelface 22 days ago
That's what I want to know as well... how much better is 70B compared to 8B? I'm actually amazed that the 8B model is only like 5 GB. I assumed it would be more like 30 GB.
@AlvinBrinson 10 days ago
Sadly, system prompts are broken, because after a few pages it "forgets" the system prompt.
@xaratemplate 22 days ago
Is there an LLM for generating images locally? Do you have a video tutorial on it?
@SiCSpiT1 22 days ago
kzbin.info/www/bejne/gYWzfYKndrKFZtUsi=TaHZIcQpFs-maVmp In my opinion, this is the easiest way to get Stable Diffusion installed and running on your home machine. I'd recommend using movie stars as prompt templates for your subjects while you're getting used to how to prompt and what all the dials and knobs do. If you want to learn more he has a helpful playlist as well, including dad jokes. Have fun.
@SINYC02 20 days ago
So I can generate copyrighted logos with one of these models?
@temp911Luke 22 days ago
You forgot to set "GPU Offload" to MAX, hence you get barely 9 tokens/sec. On a 4060 you will get between 30 and 42 tokens/sec (Q4/Q5).
@tichpo8411 22 days ago
Are you able to create images and surf the web?
@esmaeilalkhazmi 22 days ago
Does LM Studio require an internet connection to run the model?
@SiCSpiT1 22 days ago
nope
@FSK2 22 days ago
Can I run Roop or FaceFusion?
@apache937 22 days ago
For whatever reason LM Studio doesn't fully offload the models to your GPU by default. You have to increase the layers to offload to max yourself. It can be so much faster!!
@noahbalboa5714 23 days ago
Wondering if this app is usable for blind users.
23 days ago
I used LM Studio a lot, but I can't successfully load a vision model yet. This would be huge. Not generating images, more recognition. If you have it working, it would be a nice video.
@okolenmi7511 22 days ago
You can run Stable Diffusion and some other types of models in ComfyUI. If you want to run only Stable Diffusion models you can use Automatic1111's UI; it has a more "user friendly" web interface, but it's also not as optimized as ComfyUI.
22 days ago
@@okolenmi7511 Yes, I do this as well. What I couldn't run was image recognition, like LLaVA.
@PunjabiGhazal 15 days ago
How can you run the model using 2 computers?
@BlackMita 22 days ago
It just needs a PDF reader :D
@3djimmy 23 days ago
Great stuff many thanks
@sleeplesstortoise 21 days ago
Yo bro, Suno 3.5 just dropped!
@user-yi2mo9km2s 21 days ago
It doesn't have built-in document search or search engine integration.
@tracyrose2749 19 days ago
The license says they use your CPU power for CRYPTO when not in use... check what you're signing up for
@ASENDOMUSIC 19 days ago
woah what?
@user-zw1yl2vd9y 22 days ago
Can this be used on a laptop?
@InnocentiusLacrimosa 22 days ago
Yes. If it has good hardware.
@brainwithani5693 23 days ago
Greetings
@justinwescott8125 23 days ago
User: "...Oh my! Master Chief!" AI: "It's me, Patrick." Hmmm, not very impressive
@MattVidPro 23 days ago
A better prompt would get the correct results. Also, for this example a completion-tuned model would work much better (not fine-tuned for chat).
@MrEthanhines 23 days ago
2:22 You mean GPU VRAM, not ordinary RAM, right?
@MattVidPro 23 days ago
It can be put in both, but VRAM is much faster
@okolenmi7511 22 days ago
It's RAM; VRAM is a separate requirement. It depends on what you are using (GPU, CPU, Apple M chip, etc.). GPU is the most common case as it's fast enough; plain CPU is the slowest option, but I'm not sure they implemented running models on the CPU, as that is not a good way to run LLMs.
@charliestephens4909 23 days ago
If you knew how to code, man, imagine the possibilities, brother
@thanesbusiness5001 22 days ago
GPT4All just crashes with Llama, I'll give this a shot
@the_stray_cat 22 days ago
fuck yeah it works blindly easily for whatever hehe
@nowshinnur 23 days ago
clone...still gonna try
@neon_Nomad 23 days ago
MLC-AI is great
@avi7278 23 days ago
Content crunch eh?
@peterkonrad4364 22 days ago
There are newer Windows PCs that don't have AVX2 support, for example mini desktop PCs and tablets. They do have 8 GB or 16 GB of RAM and can run local AI models; you just need another program for that. I use Ollama. It is very slow, but it works.
@1Know1tHurts 23 days ago
I tried to use these LLMs but they are light years behind Claude or ChatGPT.
@bluesailormercury 23 days ago
Bigger models like Mixtral 8x7B or Llama 3 70B require more resources (RAM or VRAM) but are not far from GPT 3.5. 8B models are indeed too small for that
@InnocentiusLacrimosa 22 days ago
@@bluesailormercury Indeed. 8B models are pretty disappointing for larger use cases. I need better hardware or more patience 😁
@Ben_D. 22 days ago
Zeh-'fer if you are American, Zeh-fuh if you are a Brit
@Earl_E_Burd 22 days ago
Yup, rhymes with heifer
@IIlIIllII 23 days ago
Explain to me: what do these LLMs really offer when I could just Ctrl-F the dataset itself, or even make a simple software filtering program for the dataset, and likely get fewer hallucinations among other benefits? What is really being offered with an LLM over the dataset itself?
@okolenmi7511 23 days ago
Good luck making a program that will filter several trillion words to get a single short answer. LLMs have some sort of creativity; not much, but it's enough to generate something that doesn't exist in the dataset.
@apache937 22 days ago
go try that then
@Ahm.elzain 22 days ago
How about for iPhone, y'all?
@joelface 22 days ago
Honestly I think the next iPhone is going to be custom built to run a local model about on par with llama-3 that will be able to control all of your apps and understand all of your requests with ease.
@noxplayer-rt9tj 22 days ago
You have a powerful graphics card in your computer. When starting the model you probably made one mistake: you did not set GPU Offload to the maximum value. If someone has a weak graphics card and the model does not want to load, they must turn this option off.
@UFOgamers 23 days ago
It's VRAM, not RAM, right?
@okolenmi7511 22 days ago
RAM. The VRAM setting is in another place. You can allocate more VRAM to speed up your model. For example, in this video 10GB of VRAM was used to get that speed.
@UFOgamers 22 days ago
@@okolenmi7511 Thanks!
@jonsantos6056 22 days ago
Is it private though? Is our data safe?
@bolon667 22 days ago
Tbh, Ollama is better, because it's the fastest LLM backend out of the box.
@IntelliMindA.I. 23 days ago
Funny!
@kex0 22 days ago
"I'm going to exit out of my browser as I no longer need that" You weirdo
@hipjoeroflmto4764 22 days ago
You finding that weird is the weird thing. You must be a weirdo