Run DeepSeek R1 Privately on Your Computer

127,241 views

Skill Leap AI

1 day ago

Comments: 367
@SkillLeapAI 1 day ago
Check out my Private AI Chatbot On Your Computer course as well as 20+ AI courses on the new Skill Leap. Start a free trial here: bit.ly/skill-leap
@AdamsTaiwan 1 day ago
What I'd need to see working is VS Code using Copilot to access this model via LM Studio. I wonder if there is a plug-and-play way to add the model and use the UI features like code completion, code edit, code chat, etc.
@boxmybrain 1 day ago
What config is needed to run this on Mac and PC?
@GhostEmblem 1 day ago
Why did you tell us to install Llama 3 and only mention later that we don't need to? I deleted 25 GB to do that.
@mikekidder 23 hours ago
@@GhostEmblem Listen to the video again: "Llama 3.3 is optional"... "If you don't have the space, skip it."
@pditty8811 12 hours ago
@SkillLeapAI What are your computer specs?
@jamessullenriot 1 day ago
Just wanted to comment this: thanks for being one of the few AI channels that doesn't use those ridiculous thumbnails with clickbait text in the image and a shocked look on your face.
@SkillLeapAI 1 day ago
Haha, thanks. I try to focus only on practical AI applications and tutorials and stay away from the news side of things.
@alestbest 1 day ago
That's why it's important to give every channel that has this kind of click-bait stuff a thumbs down. That way you won't be bothered by these channels in the suggestion list anymore. The more people do this, the sooner this nonsense will stop.
@ishaanpotnis 22 hours ago
🤣🤣
@TerbrugZondolop 12 hours ago
Honestly, the shocked face is such a disgusting trait. I have stopped watching certain channels for that reason alone.
@realismatitsfinest1 11 hours ago
Yes, those are getting irritating. And I'm now seeing more thumbnails created with AI image generators, which will soon wear me down too.
@ChristiaanRoest79 1 day ago
Great tool. I have been using DeepSeek V3 for weeks now. For text analysis it's way above 4o: more precise in its analysis, and it also listens better to prompts.
@SkillLeapAI 1 day ago
Oh, interesting. I've really only focused on R1. I'll have to test out V3 some more.
@ChristiaanRoest79 1 day ago
I love all the constant new AI releases. It makes it possible to experiment and look for the most effective workflows. Real fun and challenging 👍 @@SkillLeapAI
@cryptomancer2927 1 day ago
What hardware are you running to do that?
@1truthseeking8 4 hours ago
Do you have a video on the BEST ways to use AI for investing & speculation on crypto, etc.? @@SkillLeapAI
@Metarig 1 day ago
Hey man, that's insane, thanks so much. I've got a 3090, and every model runs super fast. I'm just gonna install the 70B one next to see how it goes. But wow, it's so quick it barely even takes a second to reply.
@MrNelahem 16 hours ago
Won't you need more than a single 24 GB VRAM GPU to run a 70B model?
@Metarig 16 hours ago
@@MrNelahem Yeah, I needed more memory for that model, but the lower tier worked fine. What's really interesting is that memory is the only thing I'm lacking, not GPU power. I had no idea the GPU would be enough to run that model smoothly and fast. Technically, you could add another 3090 and run the bigger model.
@ktms1188 1 day ago
I'm seriously impressed with how well my M1 Max handles running AI models locally. Huge thanks for sharing all this information; it's incredible what Apple Silicon is capable of. I think a lot of us underestimated the potential when Apple made the switch to their own chips, especially with AI integration in mind. I can run both the 14B and 32B models locally on my 32 GB laptop, though I stick with 14B for smoother performance. It's crazy to think that a three-year-old M1 Max laptop can hold its own against an RTX 4090 desktop for locally hosted AI.
@SamFrysteen 20 hours ago
I was hoping to find out how well it would run on an M2 Max Mac Studio. Would the 32B run well on it, or should I stick with smaller versions? And is the 14B good enough for creating unique content?
@ktms1188 20 hours ago
@SamFrysteen You honestly would not know the difference between 14B and 32B if no one told you. It really comes down to RAM: with 64 GB of RAM and the way Apple does unified memory, you could easily run the 32B model. Benchmarks put the two models within a couple of percentage points of accuracy, so really not much difference at all.
@SamFrysteen 19 hours ago
@ Thanks for the speedy reply... just checked my specs, and it should run smoothly, I'm guessing: Apple M2 Ultra with 24-core CPU, 60-core GPU, 32-core Neural Engine, 128 GB unified memory.
@realismatitsfinest1 11 hours ago
I'm new here and have no idea what you are talking about. "M Max"? "RTX 4090"? "R2D2?!?" (Like 'Christine', Julia Louis-Dreyfus, in the TNAOOC sitcom: "I don't know who you're talking about. I don't know any of the people in your story!") That said, running AI locally is the only way I'd ever want to play with AI, as I'm too scared about the consequences of my actions (i.e., what is the AI doing to me, and what data is it collecting and sending back 'home'?).
@ktms1188 1 hour ago
@realismatitsfinest1 The M1 is the model of Apple computer that I have. They typically upgrade every year or two, and the newest variant is the M4. The RTX 4090 is a graphics card used in desktops. Apple uses a thing called unified memory, so if you plan on running AIs locally, you might want to look at Apple computers with the highest amount of RAM you can afford. The value proposition is very high there, and they run the models well. When it comes to running these AI models locally, Apple's system uses its RAM in a way that you would need a massive graphics card in a desktop computer to equal. That's why, if anyone wants to get into running these big models locally, you really need to focus on the RAM, which never used to be a big thing. The more RAM you can get, the better, and the bigger the model you can run. That's just a quick summary; hope that helps some.
@Hall 1 day ago
I don’t plan to do this, but it’s fascinating to see how easy it is. Thanks for the helpful instructions. ❤
@realismatitsfinest1 10 hours ago
I'm too scared to try!
@VoloBuilds 1 day ago
Thanks, Saj! Very helpful. I've always used API services for AI models, but it seems we may be approaching a world where local models can actually perform well enough to be worth using for some use cases. This was a great intro to doing that!
@julianinskip 21 hours ago
Super useful video with simple step-by-step instructions. I have just subscribed to keep up to date with LLMs and AI. The only thing is it would be nice to have chapters for each step, but really no biggie. Thanks, and keep the videos coming.
@dermathe-df7yq 6 hours ago
After viewing dozens of videos on this topic, this is the only one I consider a "must-upthumb." Thanks.
@joen5000 1 day ago
Amazing. They effectively cut out huge server use worldwide. In other words, they immensely reduced electricity use and initial investment, and made it possible to achieve similar performance with much smaller resources than others' huge ones. I've heard it was only a few highly intelligent Chinese mathematicians and scientists who made this breakthrough. All respect.
@AT-os6nb 1 hour ago
They didn't cut any power use; they just used someone else's power bill and system to do it. Without the innovation and infrastructure of other AI models, DeepSeek is nothing. Chinese companies are great at this, aren't they? Exploit the resources of others first.
@tombyrer1808 1 day ago
I wish I knew the RAM & VRAM requirements for the models...
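As a rough rule of thumb (an assumption on my part, not something stated in the video): a 4-bit-quantized model needs about 0.5 GB of RAM or VRAM per billion parameters, plus roughly 20% overhead for context. A quick sketch of the arithmetic:

  # Heuristic only: Q4 quantization is ~0.5 bytes per parameter, plus ~20% for context/KV cache.
  awk 'BEGIN { n = split("7 14 32 70", s); for (i = 1; i <= n; i++) printf "%sB params -> roughly %.0f GB\n", s[i], s[i] * 0.5 * 1.2 }'

Those estimates line up with the download sizes Ollama reports; the 14B file, for instance, is about 9 GB.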
@Nobledidakt 11 hours ago
Installed the DeepSeek R1 14B model (the 9 GB version) on my laptop with an Intel UHD 620 integrated GPU (8 GB RAM, 500 GB SSD, Core i5 10th gen), and it literally took 2 minutes to answer me back after I said hi 😭😭😭. Safe to say I'm downscaling to the 7B model. Thank you for the video nonetheless.
@khalidmajeed298 1 day ago
Once again you came up with a superb video. Whatever you have shown is reality, but some goofies are still in shock and not believing it. Keep going! 🎉❤🎉
@byronfriesen7647 1 day ago
I am amazed. I now have an LLM running on my Mac. Well taught.
@teddyperera8531 1 day ago
Great video. Waiting for the comparison video of R1 and o1.
@akshaypatwa879 1 day ago
Subscribed purely because you made it super easy to install and understand.
@thabulos 1 day ago
Wow... I had no idea it was this easy. Instant subscribe.
@ArjunaManas 14 hours ago
Thank you very much; this is extremely helpful in empowering everyday people to have affordable access to high-quality AI tools.
@mikekidder 1 day ago
Great video. One thing to mention on the side-by-side comparison: Ollama will keep the first model loaded but unload/load the second model, so expect a delay for startup/response on the second model. You can observe this in a terminal with "ollama ps". I've been using LM Studio, but was reminded that I already had an ollama / open-webui setup.
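For reference (assuming the standard Ollama CLI):

  ollama ps   # shows which models are loaded in memory right now, their size, and when they unload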
@robinmountford5322 1 day ago
14B works very well on a 32 GB RAM / 12 GB VRAM PC. Fast enough. Phi-4 is my default go-to model. It's incredible.
@basiludeh811 1 day ago
Great video. I would love a video for those who work on Windows-based computers. Thanks.
@markmulvehill815 1 day ago
Just got this set up, so awesome, thanks! FYI, I have an M1 Max MacBook Pro with 32 GB RAM. Using Docker per your video, 7B runs super fast; Llama 3.3 runs horribly (if at all). Not sure if it's just the UI or the whole model, but I will be exploring running it natively vs. through Docker in hopes it finds the GPU on my laptop. Thank you again!!
@geroginy 1 day ago
Quick question: how do you uninstall a model in case it was too much for the computer? 32B, for example?
@SkillLeapAI 1 day ago
Try this in the terminal: ollama rm deepseek-r1 (change the model name to whichever one you downloaded).
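A sketch of the full cleanup flow (the deepseek-r1:14b tag is just an example; use whatever names ollama list shows on your machine):

  ollama list                 # every model you have downloaded, with its size on disk
  ollama rm deepseek-r1:14b   # remove one tag; repeat for any other sizes you pulled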
@SkillLeapAI 1 day ago
Open WebUI also has an admin panel under Settings, and there is an option to see all the models you installed and delete them from there.
@geroginy 1 day ago
@@SkillLeapAI Awesome, thank you for the reply!
@hashanhemachandra4071 1 day ago
Exactly what I have been looking for. Thanks so much!
@ChrisM541 7 hours ago
Thank you! This was brilliantly explained. I am very surprised that the 32B version of DeepSeek runs so fast... and on your laptop! Then again, they did say the code was super optimized (rather than standard brute-force GPU). No wonder Nvidia et al. are running scared. Investors in the USA are pumping billions of dollars into what are clearly MASSIVELY overinflated current OpenAI etc. models!!!
@jb510 3 hours ago
Thanks. One of the best comparisons of model sizes. Perhaps I missed it, but it would have been nice to hear the exact specs of the laptop you were running these on.
@emranba-abbad8335 24 minutes ago
Excellent! I would appreciate your recommended hardware best suited for running DeepSeek, a scalable configuration please. As soon as I build such a machine, I'll sign up for the course and start AIing!
@SkillLeapAI 17 minutes ago
Do you have a budget in mind?
@oc-tech1666 2 hours ago
Excellent! Is there a way in this setup to use your own local documents as a knowledge source for DeepSeek?
@ngana8755 1 day ago
Is there an advantage to locally installing DeepSeek using Ollama vs. LM Studio? Also, regarding the GB needed for local installation, is this RAM or hard drive space?
@SkillLeapAI 1 day ago
No, just a personal preference. The GB needed is hard drive space. RAM is memory, which shouldn't be a major issue with this.
@ngana8755 1 day ago
@@SkillLeapAI Thanks. When can we expect you to put together a course on DeepSeek R1 on Futurepedia?
@striker44 1 day ago
@@SkillLeapAI More RAM will help with faster responses more than a pure compute performance increase, no?
@elksalmon84 5 hours ago
2:45 Disk space is only part of the problem here; you also need that much RAM to work with the model.
@yCaptures 1 day ago
Hey, I'm really glad to have come across this video and wanted to thank you.
@ChristiaanRoest79 1 day ago
Btw, what's also pretty good is MiniMax AI, which can analyze text with 4 million tokens (compared to Gemini's 1 million tokens). Quite impressive.
@edwardferry8247 15 hours ago
You need the 400 GB-plus model if you want the one that matches OpenAI.
@ThePoushal 8 hours ago
Lol
@Lech_Robakiewicz 1 hour ago
What is your actual hardware (during filming), and what are you going to build? It's a very important question, and you should define it before starting anything.
@SkillLeapAI 1 hour ago
I'm on a MacBook Pro M3 Max. Building a PC with the NVIDIA RTX 5090 GPU, but I haven't figured out the rest of the PC yet.
@Lech_Robakiewicz 1 hour ago
Thanks. That (what you would suggest for the platform) would be very helpful.
@ChicagoBob123 3 hours ago
Excellent, fact-filled content.
@I_Mackenzie 1 day ago
About the extra step you took to download and install Llama 3.3 at 42 GB: what is the advantage of doing that instead of just downloading the R1 model?
@SkillLeapAI 1 day ago
Just to have another model that I like. You can skip it if you don't need another model. I like to have several models, so that's the other one I recommend.
@I_Mackenzie 1 day ago
@ Ok. I’m downloading it, but is it easy to uninstall?
@Gemax-hope 1 day ago
Yes @@I_Mackenzie
@realismatitsfinest1 9 hours ago
@@I_Mackenzie I'm no expert but I found this from @SkillLeapAI above in another comment: "Try this in terminal. ollama rm deepseek-r1 change the name of the model to whichever you downloaded" ... and... "OpenwebUI also has a admin panel under settings and there is an option to see all the models you installed and delete them from there." ...(I was worried about this too!) 😉
@tortysoft 6 hours ago
I started doing this, then thought: why bother? It will be out of date by the time I've downloaded it. Anyway, the web version works well enough. Secrecy? Well, that horse has long gone. Incredible to know it can be done, though. Thanks. Subscribed.
@rookking153 4 hours ago
Question: I don't have an extra computer or room on my current one. What do I do?
@JessieS 1 day ago
If an updated version is released, how do you go ahead and use the latest model without losing previous prompts?
@realismatitsfinest1 10 hours ago
Well, that's the downside of using it locally. However, it is safer. If that computer is kept offline forever, your data will never be transferred back to Chinese Communist Party servers to be used against you for nefarious ends. Personally, I'd prefer safety over convenience. (It's also the reason I don't own/use a cell phone.)
@jaymeoliveirajr 3 hours ago
Thanks for sharing. Really nice video. Is it possible to train one of these models with content available on my PC?
@b3owu1f 1 day ago
Is this better/faster than using LM Studio on Windows? Or is it the same? LM Studio is a slick interface: easy to search for models, download, use, etc.
@TheAmaterasu 18 hours ago
Would you happen to have a video on configuring the local install to utilize the GPU? Any chance it would work with an AMD GPU?
@MoeSays 1 hour ago
Excellent stuff, thanks man!
@MikeMaker851 1 day ago
What are the specs of the PC you are running it on?
@SkillLeapAI 1 day ago
I'm on an Apple M3 Max with 64 GB of RAM.
@busyworksbeats 1 day ago
Thank you for the CLEAR tutorial!!!
@declandux3693 31 minutes ago
With the setup shown in the video, does it support uploading files such as XLS, PDF, JPEG, etc.? What file types does it support? Thank you.
@MohamedAshik-t3l 1 day ago
What is the difference between 7B, 8B, 14B, and 671B? Are they distilled from the 671B model, or is each a separate model?
@jeffwads 1 day ago
They are totally different models and not DeepSeek R1, which causes massive confusion. The big one you noted is the real DeepSeek R1.
@davesmith7658 9 hours ago
It's the performance and accuracy you get. You can see the benchmarks of each model's performance.
@SytheZN 3 hours ago
The smaller ones are Llama or Qwen with some fine-tuning using data generated by the big model.
@scientiest12 1 day ago
Cool. I tried the 1.5B model on an M2 Mac and the model sucks. It can't answer questions like "make a 5 word sentence that lists all 24 letters" and instead started a never-ending thinking process without even answering the question. I'm sure the bigger models are much better.
@billereses4935 7 hours ago
The smaller models tend to think too much... :-) Maybe you have to reduce the "temperature" value from the default 0.8 to something like 0.5-0.6, as stated in the R1 documentation. This value can be set temporarily for the currently chosen model (an icon at the top right) or permanently somewhere in the model settings.
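For the permanent route, a minimal sketch using Ollama's Modelfile mechanism (the base tag deepseek-r1:14b and the 0.6 value are example choices, not from the video):

  # Register a variant of the model whose default temperature is lower.
  printf 'FROM deepseek-r1:14b\nPARAMETER temperature 0.6\n' > Modelfile
  ollama create deepseek-r1-lowtemp -f Modelfile
  ollama run deepseek-r1-lowtemp   # chat with the cooler variant as usual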
@anta-zj3bw 1 day ago
I love install vids... thanks!!
@ElevatedARFitness 14 hours ago
Awesome video! Thanks for sharing!
@MultiAsdf1234 4 hours ago
Can Open WebUI do web search when using the DeepSeek model? And how do you set that up? Thanks.
@AliTweel 22 hours ago
@7:52 A note before the docker command step: remove the "-d" so you can see any dependency downloads in the terminal before you open localhost:3000.
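In other words, the run command would look something like this (a sketch based on Open WebUI's commonly documented Docker command; your exact flags may differ from the video's):

  # Without -d the container runs in the foreground and its logs stream to the terminal.
  docker run -p 3000:8080 --add-host=host.docker.internal:host-gateway \
    -v open-webui:/app/backend/data --name open-webui \
    ghcr.io/open-webui/open-webui:main
  # When the log output settles, open http://localhost:3000 in a browser.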
@Tarila-18 1 day ago
What's the minimum system requirement to install this locally?
@ChasquiSoy 1 day ago
What is the minimum requirement to run this?
@nivmhn 1 day ago
Thanks for the tutorial.
@itint 1 day ago
Great video! How can you uninstall it in a clean way?
@SkillLeapAI 1 day ago
Open WebUI has an admin settings panel for uninstalling models. The other things you can uninstall as you would regular apps.
@groen2820 1 day ago
@@SkillLeapAI Is it in the browser under Admin Panel | Open WebUI? I can't find that option. I want to uninstall Llama 3.3; I followed your video, but it's not compatible with my PC, and it seems like it wouldn't run on yours either. How do I uninstall it? For anyone reading this, DO NOT INSTALL Llama 3.3 unless you have a really good system that can run it. Edit: I uninstalled it by running this in the terminal: ollama rm llama3.3
@TheLightworkz 1 day ago
@@groen2820 From the terminal, type ollama list, then ollama rm plus the name of the LLM (for example, ollama rm llama3.3:7b).
@portalpacific4578 1 day ago
I never got the popup on Windows after installing. It's running and I can use the "ollama" command, but how do I get the UI?
@NoodlesTBograt 1 day ago
Same problem, please help.
@julianinskip 21 hours ago
Mine also came up with the command prompt (might be a Windows thing). Just run the "ollama run llama3.2" or "ollama run llama3.3" command in the command prompt that opens.
@portalpacific4578 13 hours ago
@ The command prompt doesn't actually open, but after installing Ollama the command is recognized, similar to installing npm or Python, so I tested it there. All good; I decided to use Cline in VS Code instead.
@NoodlesTBograt 6 hours ago
@@julianinskip Not going to bother. I ran it on the site and asked it the date; it came back with 09 July 2024.
@charliebravo8114 1 day ago
Can you install these models on an external drive and then run them on your local computer?
@trilokatmadasa3180 1 day ago
Running LLMs from an external drive is feasible but only recommended if the drive is a high-speed NVMe SSD connected via a fast interface such as Thunderbolt or USB 3.2. Slower ports or drives will bottleneck performance. For best results, use an internal SSD to avoid any potential delays or inefficiencies.
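If you do go the external-drive route, one way to point Ollama at it (an assumption on my part: the OLLAMA_MODELS environment variable, which recent Ollama builds read to locate model storage; the path is just an example):

  export OLLAMA_MODELS=/Volumes/FastSSD/ollama-models   # example path for an external SSD on macOS
  ollama serve                                          # restart the server so it picks up the new location
  ollama pull deepseek-r1:14b                           # new downloads now land on the external drive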
@realismatitsfinest1 10 hours ago
@@trilokatmadasa3180 Great question from the original poster, and thanks for the answer.
@MrJfitchett 12 hours ago
Excellent, really enjoyed it. One question: how do you turn on what looked like an optional reasoning part? Or is that built into the model and on by default? I saw on the site that it had to be activated.
@eclass4349 1 day ago
Thanks for this information. May I know what the hardware requirements are for this to work well on a local PC?
@Javaman92 4 hours ago
DeepSeek runs smoothly from the command line but very slowly from Docker. I'm guessing all the extra stuff running to produce pretty output pushed my laptop over the edge of running it well.
@ArtificialExperience 1 day ago
I'm trying to figure out the benefits of installing this. I'm using OpenAI for coding, documents, spreadsheets, questions, etc., and I feel like I'd be moving backwards.
@rsstnnr76 14 hours ago
Does the offline, local version of R1 stop collecting keystrokes and sending that info to China somehow?
@realismatitsfinest1 9 hours ago
Yes. But if that computer ever goes online, EVER, all your data could be sent back to CCP servers in China. So if you do this, once you download it, you can never use that computer to go online while it is present. Better still, before you go back online on that computer, my suggestion is to format the hard drive several times. However, this is not the R1 version; to load R1 you would need to download the 671B version, which is usually too big for a regular, run-of-the-mill computer. You would need a more expensive and powerful machine to run the 671B version locally.
@I_Mackenzie 1 day ago
On the local version, I asked what its cut-off date was, and it said 2023 and gave me a ton of mumbo jumbo about it depending on which model (I did mention model R1). When I asked online at the DeepSeek website, it told me its knowledge was current up until July 2024. Not sure about this local model.
@ilkerkeklik6122 1 day ago
Is it necessary to install llama3.3 before installing deepseek-r1 as shown in the video, or is it enough to install just deepseek-r1 and skip the first step?
@spiralofhope 1 day ago
It is not necessary; you can install DeepSeek only.
@bibbidi_bobbidi_bacons 5 hours ago
Docker is super cool software
@canarese 5 hours ago
What is your laptop's configuration, in terms of memory, cores, free HD space, etc.?
@balonm 1 day ago
What's the difference between installing it that way vs. downloading it in LM Studio…?
@SkillLeapAI 1 day ago
Not much, just preference. I like the web UI version, so I usually show that in my videos.
@balonm 1 day ago
Thanks, one more question. Since it runs locally, can it still browse the web for answers? Or can you upload PDFs/docs, etc.? I'm thinking something like NotebookLM, but local.
@3dparagon 1 day ago
@SkillLeapAI If it's in offline mode and you ever go back online, is there any chance it can push all the information and attachments back to DeepSeek? Also, if DeepSeek comes out with, say, R2 and you have trained R1, will R2 pick up where you left off with R1, or does it have to be retrained?
@GabrielsGalaxy 1 day ago
I'm also curious about this.
@realismatitsfinest1 10 hours ago
I don't definitively know the answer to your questions, but knowing what I know about online privacy (namely, that you don't have any!), my answer to #1 is "yes, it will push everything back to the DS servers." That's what it's built for, after all. As for #2... I have no idea. But because I'm so safety-conscious, I would personally just buy a new computer every time I wanted to update the AI (every 2 years?), keep the new one offline, and format the old computer's hard drive (several times over!) so it can be used online again as either a main computer or a backup.
@RHH1095 22 hours ago
I was able to follow along and can play with some of the smaller models. What are the startup steps after I shut down my computer and want to play again another day? Start Ollama, then Docker, and open the link I saved from Docker?
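For reference, a sketch of that restart routine (assuming the container was created with --name open-webui, as in the commonly documented install command; check docker ps -a for the actual name):

  # 1. Launch the Ollama app; its background server normally starts with it.
  # 2. Start Docker Desktop, then restart the existing Open WebUI container:
  docker start open-webui
  # 3. Open the saved link, typically http://localhost:3000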
@lionvictor9944 5 hours ago
Excellent, thanks for sharing.
@spiralofhope 1 day ago
Installed. It has a tray icon with nothing but View Logs and Quit; the documentation has no hints. It has no interface.
@spiralofhope 1 day ago
Oh, it opened a new tab in my terminal... so it's command-line, but that wasn't mentioned.
@spiralofhope 1 day ago
Docker is garbage and doesn't run the container.
@edwardm9975 15 hours ago
Are you using a Mac Pro with the card?
@GaryIV 1 day ago
About to try to install the smallest model on a potato laptop with an Intel i3-8081U, 6 GB of DDR3 RAM, and no GPU at all. I'll let you guys know how it goes 👌
@mohmbilly 1 day ago
How did it go?
@AndreaDingbatt 1 day ago
Please let us know how you get on! (I'm on an old, tiny potato laptop and don't even know the RAM, etc.!!)
@mohmbilly 1 day ago
@@AndreaDingbatt The smallest model (1.5B) works fine but gives rather redundant answers; I haven't tried the 7B models.
@AndreaDingbatt 1 day ago
@@mohmbilly Thank You!!🙂
@GaryIV 19 hours ago
UPDATE FOR REPLIERS: The 1.5B DeepSeek model works extremely smoothly, as does the 3B Llama model. The 7B DeepSeek model was much slower, probably the absolute limit my machine can handle in terms of AI, but it was technically capable of running on my laptop and produced output without a problem.
@JimsworldSanDiego 13 hours ago
I followed the instructions and can run DeepSeek from the terminal, but Open WebUI refuses to see the model. Stuck at the moment.
@chemical_wala 12 hours ago
Do we need a graphics card of 8 GB or above? Any specifications for RAM, storage, VRAM, etc.?
@JDSpartan2007 1 day ago
Is this safe to run locally?
@realismatitsfinest1 10 hours ago
I don't know the definitive answer, but as far as I know, as long as that computer NEVER connects to the internet (after everything is downloaded), I believe you'll be safe. From my understanding, as soon as you connect that computer online again, everything you've done on DS will be uploaded to CCP servers. So buy a computer that you only get online once, for the downloads, and just use it for DS and AI exploration. When you want to update, buy a new computer and do the same thing with it. Then take the old one, format the hard drive several times, and use it as either your new main computer or a backup. (That's how I plan to play this AI "game," anyway.)
@JDSpartan2007 10 hours ago
@@realismatitsfinest1 Couldn't there be code to connect itself to the internet, though? I guess I could download it all and then have the passwords wiped from accessing the internet. This is well above my pay grade. Thank you for replying.
@jamesreichert5637 1 day ago
It would be really good to see a walkthrough of using the above with the ability to read PDFs or other documents from a local folder (doing this needs something called RAG, which I am still getting my head around). I'm not sure if that content is added into the LLM or read into memory, such that when you close down Ollama you have to re-read it at launch.
@nehaa_0105 1 day ago
Is it necessary to download Llama 3.2 or 3.3? It's taking so much space 🙂. Can't I directly install the R1 model and follow the other steps you taught?
@darkkingastos4369 1 day ago
So what kind of limitations and barred questions does this model have?
@realismatitsfinest1 10 hours ago
"How can Taiwan remain independent from China?" and "What massacre occurred in 1989 at Tiananmen Square?" would be two that come to mind. 😁
@onboard3441 1 day ago
Awesome, thanks. It took a while to download DeepSeek R1; I kept having to redo it. Guess the servers are busy, lol.
@realismatitsfinest1 10 hours ago
It could have been due to the hack they experienced. They said they were limiting user connectivity while they solved the problem.
@Keeplearning92 16 hours ago
I have installed the 7B model on a MacBook M1 Pro, and its performance is decent. I first tried 14B; my Mac was struggling, and each prompt took more than 30 seconds to process. 7B is good; 4B would be faster. You can try and see what your machine can support.
@fotografm 23 hours ago
When you ran the two AIs side by side, did they come up with identical answers or wildly different ones? If different, which one is correct?
@DaveEtchells 1 day ago
Great, quick, and clear how-to, thanks! So this uses Docker; is there a way to do it in LM Studio? It seems like I already have so many different environments on my computer… :-/
@malimal4972 1 day ago
I have an Apple M1 on Sequoia 15.2. I tried running 8B; it didn't give me a response in Open WebUI but did in the terminal.
@gormixes 3 hours ago
Hi guys, please help; I am a noob. How can I install the Llama and DeepSeek models on a different local disk? Right now Llama installs on the system disk by default, and there seems to be no way to change the installation destination. Thanks.
@jobsearchtv 1 day ago
Can you be sure there is no backdoor? Probably not.
@felixgraphx 1 day ago
Respectfully, your question kind of makes no sense. It's running in LM Studio or Ollama on the GPU; it takes in text tokens and outputs text. The GPU has only matrix-math commands and cannot interact with the rest of the computer.
@darkkingastos4369 1 day ago
@@felixgraphx Your GPU is always interacting with your CPU; it uses the CPU's PCIe lanes.
@felixgraphx 1 day ago
@darkkingastos4369 The CPU interacts with the GPU to set up its internal state and read its output buffers through those lanes, yes. But the GPU does not control anything.
@darkkingastos4369 1 day ago
@@felixgraphx I'm just saying, if there is something nefarious buried deep in it, how could we know?
@felixgraphx 1 day ago
@darkkingastos4369 You made me realize that's the next big scare they're going to go with: "there could be something in the weights that will output nefarious code when asked to generate code for a game or something."
@DJKav 1 hour ago
So which DeepSeek model would be best for me? PC: Ryzen 9 5900X, 64 GB RAM, RTX 4080 Super, with lots of TBs of NVMe drive space.
@TJ-hs1qm 35 minutes ago
Only the VRAM on the GPU counts… even the 5090 has only 32 GB of VRAM. An M4 with 64 GB of shared memory or a Mac mini local cluster is currently the best option. Anything below that will deliver poor results (e.g., any 7B model).
@rajivs007 1 day ago
I think Ollama has to be running in the background for the Docker container to get results.
@rajivs007 1 day ago
OK, there is support for Ollama bundled with Open WebUI.
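For reference, a sketch of that variant (assuming the :ollama image tag from the Open WebUI docs, which bundles Ollama inside the container; drop --gpus=all if you have no supported GPU):

  docker run -d -p 3000:8080 --gpus=all \
    -v ollama:/root/.ollama -v open-webui:/app/backend/data \
    --name open-webui ghcr.io/open-webui/open-webui:ollama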
@bljack 12 hours ago
What are your laptop specs? How are you able to run multiple huge models like that?
@anselrod5699 21 hours ago
But do you trust the code?
@LifeLightTruths 7 hours ago
Thanks for this post! I am having problems opening Docker on my MacBook; it comes up with "Incompatible CPU detected."
@RichardBond5566 1 day ago
Is it possible to link it to the internet to get updated info when it is installed locally?
@mbezik 1 day ago
Why not use LM Studio?
@spiralofhope 1 day ago
This. I went through the video's instructions, and stuff doesn't work right. I'll just go back to LM Studio.
@attilahagen 21 hours ago
Thanks for the tutorial. It works, but extremely slowly; it takes 5 minutes to answer, though I have a good PC. What is wrong?
@SkillLeapAI 21 hours ago
Try the smaller version. If it's slow, it's a graphics card issue.
@kekkaisenn6497 22 hours ago
Could any size of this model run on a phone locally?
@dbenwit4321 19 hours ago
I got as far as getting Open WebUI installed and running, but there are no models to select. Am I missing a step?
@pacwest1000 23 hours ago
Is this a Trojan horse surveillance tool, tracking everything you do? Thanks.
@burtburtist 23 hours ago
Run it in a VM.
@professor_thunder 1 day ago
Can someone explain why a fast GPU is so important, as opposed to CPU and RAM?
@joelbosshoss9029 20 hours ago
The GPU (Graphics Processing Unit) does the massively parallel math these models run on, while the CPU feeds data to the GPU, and RAM holds often-used data for quick access.
@Laxpowertoo 1 day ago
What if Ollama doesn't give you a popup after installation? I see others with the same problem. How about an explanation?