Check out our updated course on running private AI chatbots on your computer. bit.ly/skillleap
@mal-avcisi9783 • 3 months ago
Are you serious? Using macOS? No one is using macOS. Stop making videos showing how-tos with macOS.
@joeldowner2991 • 3 months ago
Try AIJoel - Multi Generator: text, code, image (create stickers, remove image backgrounds, add color to black-and-white images, image to video, logo design), and (music and video are in beta mode).
@IndieGamesSpotlight • 24 days ago
So I installed Ollama and asked it, "Are you cloud-based or are you running locally on my computer?" It replied that it is a cloud-based AI. Why is that?
@marcusstone6273 • 3 months ago
Hey bro, I just want to say that your grind is on another level. So good that you can keep going for so many years and still create new channels and succeed. Nice transition to AI content, and your views are great too. Hope you get a lot of sponsorship and affiliate deals.
@SkillLeapAI • 3 months ago
I appreciate that!
@mr.cannon • 3 months ago
PC USERS IF YOU ARE GETTING THE WSL ERROR WHEN RUNNING DOCKER - Enable virtualization in BIOS: This process varies depending on your computer manufacturer and model. Generally, you'll need to restart your computer and enter the BIOS settings (often by pressing F2, F10, or Del during startup), then look for an option related to virtualization or VT-x and enable it.
@joeduffy52 • 2 months ago
I get this error but can't see anything in BIOS like you mention. My M/B is the Gigabyte X570 Aorus Elite.
@mr.cannon • 2 months ago
@@joeduffy52 Enter the BIOS in Advanced Mode, go over to the Tweaker tab, go down to Advanced CPU Settings, and enable SVM Mode.
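For Windows users following this thread, a quick sketch for checking whether virtualization and WSL 2 are already in order before digging through the BIOS (run in PowerShell; the exact wording of the output varies by Windows version):

```
# Reports "Virtualization Enabled In Firmware: Yes" under the Hyper-V
# section when the BIOS setting is already on
systeminfo | findstr /C:"Virtualization"

# Docker Desktop uses WSL 2; check it, and update if the error persists
wsl --status
wsl --update
```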
@faridgasimov1742 • 2 months ago
Just returned to leave this comment
@Alvin-i2t7o • 1 month ago
One new SSD and a full install later.... It works!! Docker was giving me an error which I couldn't resolve but it was all straightforward on a clean installation!
@AliHassan-wc6nb • 24 days ago
Any luck?
@tikkivolta2854 • 3 months ago
The really interesting part around 07:15 is that in a few months, computing power and the size of these models will make it possible to run them on practically anything. When they get more efficient we'll have them in our phones.
@SantiagoAbud • 3 months ago
That's the future of this technology. Not that I endorse it or judge it in any way, but it's the way the development is heading.
@thinhngo7244 • 3 months ago
OpenELM from Apple is already runnable on mobile devices, I think.
@tikkivolta2854 • 3 months ago
@@thinhngo7244 I ran Llama 3.1 8B on my laptop with Docker and Ollama. Like a breeze.
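For anyone wanting to reproduce that, a minimal sketch, assuming Ollama is already installed; the model downloads (roughly 5 GB) on first run:

```
# Pull and chat with the 8B model interactively
ollama run llama3.1:8b

# Or fire a single prompt without entering the chat loop
ollama run llama3.1:8b "Explain Docker volumes in one paragraph."
```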
@muhammadasad8549 • 3 months ago
Brilliant. I have been looking for this video since Meta announced 3.1. Hats off.
@muhammadasad8549 • 3 months ago
@SkillLeapAI I would be really thankful if you could make a video on deploying it on a server.
@b34k97 • 3 months ago
"We need to go to an app that a lot of people have never used before... it's called 'Terminal'." OMG, that line had me dying!
@SkillLeapAI • 3 months ago
I've made videos for 8 years and 99% of people have never used the terminal.
@b34k97 • 3 months ago
@@SkillLeapAI No, I understand, and it makes perfect sense. Just as someone who's used a terminal at school and at work for the past decade... the delivery of that line just tickled my funny bone.
@ZavierBanerjea • 3 months ago
Absolutely hilarious...
@danielrossy7453 • 3 months ago
Same, dude :) And then my wife came into the room and said "Are you really?" HAHAHA
@skywalkerjedi95 • 3 months ago
Thanks! This video was awesome and really detailed! Can’t wait to start trying this.
@GmanBB • 3 months ago
You have great teaching skills. Thank you for making it so simple!
@FastWReX • 3 months ago
No joke. I've always hated Docker. Hated everything about it. However, seeing you run the Open WebUI command and it just randomly showing up in the Docker app is making me reconsider. Holy moly! Might have to reinstall it on my Raspberry Pi 5.
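For reference, the Open WebUI command being described is roughly the one from the project's README; a sketch, since the flags can change between releases:

```
# Run Open WebUI in Docker, wired to an Ollama server on the host
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main

# The container then appears in the Docker app, and the UI at http://localhost:3000
```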
@BeyondBigFood • 1 month ago
Hrmph. I can't even get Docker to run. It fails a required compatibility check: the group membership check.
@iskandarhussain • 3 months ago
Perfect ‼️ Just the video I was looking for with intro to how to upload files to the model‼️ THX‼️
@Ilan-Aviv • 2 months ago
Great video for local AI. Easy and clear explanation. Thank you!
@naeemulhoque1777 • 2 months ago
Nice, straightforward video. Can you make a hardware buying guide, please?
@Hilmz • 3 months ago
Not private; it's a hybrid model. It caches data when offline, and when you're connected again you can see it sends data. Use Wireshark; it will show you it's sending data.
@longboardfella5306 • 2 months ago
This is an important comment, because many channels are saying Open WebUI and Llama 3 are private to your machine. Is there any way to turn off the cache-sending process? I would like to analyse documents that are not permitted to be publicly uploaded.
@MihaMartini • 2 months ago
@@longboardfella5306 Ollama and Llama 3 are private, but Open WebUI might send some analytics and other data.
@bryanjuho • 2 months ago
Is this true? Any resource that supports this statement?
@DudethatGross • 2 months ago
@@longboardfella5306 Run it in a VM or container that has the network adapter disconnected from the internet, or with Wi-Fi off entirely.
@andresshamis4348 • 2 months ago
Llama is literally private; maybe Open WebUI isn't... But what would the UI need to cache and send over the internet? Doesn't make sense to me.
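One way to settle this empirically instead of by debate: watch the wire while prompting the model. A rough sketch for macOS/Linux; the interface name is machine-specific:

```
# List network interfaces, then watch the active one (e.g. en0 on a Mac)
sudo tcpdump -D
sudo tcpdump -i en0

# If no packets appear while you prompt the model, inference itself is not
# phoning home; any traffic that does appear can be traced back to a process.
```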
@kevinmiole • 2 months ago
I hope they give it access to the internet. Thank you for this.
@micbab-vg2mu • 3 months ago
Great, I will try 70B :) Thanks for the instructions on how to do it :)
Sounds like step one is to buy a $5,000 M3 Mac if you want to run it locally, at least until they release smaller quantizations of Llama 70B and 8B.
@melaronvalkorith1301 • 3 months ago
Llama 3.1 8B runs much faster than you can read on an RTX 2060 Super. Not dirt cheap, but I built my PC for $1.4K 4 years ago; it should be cheaper now, and I built it for gaming, not AI. You can host it on a desktop and connect your other devices with a VPN like Tailscale. Don't spend extra money for less performance by going for a small-form-factor machine or laptop.
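A sketch of that remote-access setup: OLLAMA_HOST is Ollama's documented way to listen beyond localhost, and the Tailscale IP shown is a placeholder (check `tailscale ip` on the desktop):

```
# On the desktop: expose Ollama on all interfaces instead of just localhost
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# From a laptop/phone on the same tailnet (100.x.y.z = desktop's Tailscale IP)
curl http://100.x.y.z:11434/api/generate \
  -d '{"model": "llama3.1:8b", "prompt": "Hello!", "stream": false}'
```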
@HitsInSandbox • 3 months ago
Go grab one of the older servers that can be had for $100-$150 with dual 10-core Xeon CPUs and boost it from 64 GB to 128 GB. Running 20 cores it does well, but you won't be able to stick a modern video card in it. It still does a good job for a system that can be built for $200, much better than three i10 systems running. Fair warning, though: get ready for a bigger hydro bill.
@CodyAvant • 3 months ago
I run 8B on my 2020 M1 MacBook Air (8 GB RAM and 7-core GPU) and the output token rate is much faster than a normal speech cadence.
@mcombatti • 3 months ago
I have a 9-year-old Windows computer that runs 13B and 8B models at the same speed as your brand-new $5000 Mac 👀 It was purchased at a Walmart for $270 in 2015 😂
@bassamel-ashkar4005 • 3 months ago
Bullshit detected @@mcombatti
@Kevin-fp5zo • 3 months ago
Hello! What kind of hardware setup do you need to run open-source LLMs? LLMs are actually quite heavy, and they require a lot of GPU, RAM, and CPU power. Can you do a YouTube video about which computer specs or PC brands are optimal for running them smoothly? I love your content. Keep up the good work! :)
@betabishop3144 • 1 month ago
They are indeed quite heavy, but in case you haven't noticed, each LLM's documentation should have a section dedicated to hardware requirements; Google has it, Meta has it, and the others probably do too. You can look them up like "Llama 3.1 hardware requirements" and the first link should take you there.
@MoonyongKim • 3 months ago
Hi. First of all, thanks for the video. It's really useful and easy to follow step by step. I am running an M1 MacBook Air and it seems like it's not good enough to run Llama 3.1, as it seems to freeze my computer. Which model would you recommend for an M1 MacBook Air?
@mediatechtube • 1 month ago
Nice video. What are the use cases for running AI locally on your computer at home? What's the purpose when people can get a subscription? I can think of a few, but I would like to know what others think. Cheers!
@MrAshwin27 • 18 days ago
Huge respect
@pertsiya • 3 months ago
Thank you for your willingness to share with us!
@womble_1034 • 3 months ago
Subscription incoming!! Great content, keep up the good work.
@mikemaldanado6015 • 14 days ago
Nice video. I don't like that you have to sign up for the web UI, and I don't like Docker for security reasons, but your first two steps helped a lot. By the way, you can upload or pass large files on the command line. Finally, Meta's Llama does not meet the traditional definition of open source, because it's not. What they did is create a new definition, their definition, and put it in the terms of service... must be nice to be able to change the definition of words willy-nilly. Also, nothing we have today meets the official comp-sci definition of AI, not by a long shot.
@PowerON-Tech • 2 months ago
Thank you very much for this video.
@II-qh7xn • 2 months ago
Worked, with some issues. Hats off.
@FusionX690 • 3 months ago
Amazing tutorial 👍
@tikkivolta2854 • 3 months ago
I would love for you to create a tutorial on how to train these models on specific data. Any chance?
@SkillLeapAI • 3 months ago
Adding to the list
@nohandle8008 • 1 month ago
@@SkillLeapAI Awesome, thank you. I have a specific use case requiring very specific data to train the model; I would love to see how effective it can be. I'm also concerned with data flowing back online. Can you elaborate on what is sending data back out when the machine is reconnected to the internet, as others have mentioned?
@fangeming1 • 3 months ago
How much VRAM is needed to run the model depends on whether the model is quantized or not. This should be explained in the video instead of giving contradictory information.
@tikkivolta2854 • 3 months ago
As much as you are correct, I am fairly certain "giving contradictory information" wasn't the intent.
@moonduckmaximus6404 • 3 months ago
Hey, do you know where we can get a comprehensive explanation of what we downloaded? I can't afford his course.
@robwin0072 • 2 months ago
Great video. Question: can I redirect all models downloaded (installed) in Ollama to a secondary drive inside my laptop? C: primary system M.2 SSD (2 TB); D: secondary SSD (2 TB).
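Yes: Ollama reads the documented OLLAMA_MODELS environment variable for its model directory. A sketch for Windows; the D:\ path is only an example:

```
# PowerShell: point Ollama's model store at the secondary drive
setx OLLAMA_MODELS "D:\ollama\models"

# Quit Ollama from the system tray and relaunch so it picks up the change;
# new pulls land on D:, and existing models can be moved over manually.
```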
@yetkindev • 2 months ago
You have great internet :D
@vladyslavklochan4181 • 3 months ago
Thank you for the tutorial.
@andyli541 • 3 months ago
Is there a way to bring this locally running Llama 3.1 onto my website? I want to share my trained AI with other people. Thanks!
@bilza2023 • 2 months ago
There are special servers on DigitalOcean... but simply put, you install it on your web server and make it available through an API.
@wettingfairy6764 • 3 months ago
Very clearly explained; a great beginner's guide.
@walter3663 • 2 months ago
Thanks for the great tutorial. Can you let me know the default path where chat histories are stored? Is it possible to change it?
@terrysh7264 • 2 months ago
Hi. Thank you for this video. I'm wondering, do I need to train the model that I install?
@SkillLeapAI • 2 months ago
No, you can just use it after install.
@yetkindev • 2 months ago
You are the perfect man, thank you.
@rafaeel731 • 25 days ago
Thanks for the vid. A couple of confusions to share: at 12:18, how can a Llama 3.1 model not know anything about Llama 3 because of a training-data cutoff? It doesn't make a lot of sense. Plus, you compared the 8B on a text exchange while you gave the 70B model Python code to decipher, then gave the 8B a text file to summarise. We can't compare execution times unless the task is identical.
@blackgptinfo • 3 months ago
Great video. Did you test how many documents you can upload at once and have it summarized?
@SkillLeapAI • 3 months ago
I haven't tried yet, but I think it's a good amount.
@hydron7150 • 3 months ago
Wanted to try 70B with a 4070 Ti (12 GB), a Ryzen 5 7600X, and 64 GB of 6000 MHz RAM, and it is pretty slow; it takes 20 seconds to respond to a "hi" prompt 😄
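A quick diagnostic for cases like this: `ollama ps` shows how much of the loaded model fit in VRAM versus spilling into system RAM. A sketch; the column layout varies by Ollama version, and the output lines are illustrative:

```
# With the model loaded, check where its layers ended up
ollama ps
# NAME           SIZE    PROCESSOR          <- illustrative output
# llama3.1:70b   42 GB   80%/20% CPU/GPU    <- mostly CPU = slow responses
```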
@null4624 • 1 month ago
Thanks dude, I was able to run Llama 3.1 8B on a Linux laptop with 8 GB RAM and am impressed.
@shampaghosh1241 • 1 month ago
Wow, did you do any special modifications? My device also has 8 GB RAM and an Intel i3 processor; do you think I can possibly run it at a decent speed?
@null4624 • 1 month ago
@@shampaghosh1241 No special mods, just selected the smallest model and followed the steps from this video and ran some prompts.
@digigoliath • 3 months ago
I do appreciate this informative walkthrough though! TQVM!!
2 months ago
Perfect. Thanx
@ArekMateusiak • 1 month ago
Hi, does anyone know what is needed to run the full 70B Llama 3.1 model well, so it responds quickly?
@puccaso • 3 months ago
9:20 I believe there is also a Docker credentials helper package that works and doesn't require the GUI bloat.
@sabuein • 3 months ago
Thank you.
@gaganmadhan733 • 1 month ago
Can we store the files on cloud storage like AWS S3 and then run it or deploy it?
@hiteshdesai2152 • 3 months ago
This is great; thanks for putting it in such a simple and understandable way. I can run it locally now. Is there a way I can point these local models to my Python code, or my LangChain/llama_index application code?
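There is: Ollama serves a local REST API on port 11434, and the LangChain/LlamaIndex Ollama integrations default to that same base URL. A minimal sketch:

```
# Plain REST call to the locally running model
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.1:8b", "prompt": "Why is the sky blue?", "stream": false}'

# LangChain's ChatOllama and LlamaIndex's Ollama classes talk to this
# same http://localhost:11434 endpoint out of the box.
```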
@kick_kisu • 2 months ago
Great video. How can I run the Docker container for Open WebUI on my local Linux server, with Ollama running on my Windows PC?
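A sketch of that split: expose Ollama on the Windows side, then hand Open WebUI its address via the OLLAMA_BASE_URL variable. 192.168.1.50 is a placeholder for the Windows PC's LAN IP:

```
# On Windows: set OLLAMA_HOST=0.0.0.0:11434 as an environment variable,
# then restart Ollama so it listens on the LAN rather than just localhost.

# On the Linux server: point Open WebUI at the Windows machine
docker run -d -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://192.168.1.50:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```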
@arnolda7150 • 3 months ago
Thank you so much. Can you tell me how to access OpenAI with Wi-Fi on or off?
@karthikb.s.k.4486 • 3 months ago
Inference looks fast locally. What configuration of Mac laptop are you using? Please let me know. Thank you for the nice tutorial.
@SkillLeapAI • 3 months ago
It's the top-of-the-line M3 Mac with 64 GB of RAM.
@HitsInSandbox • 3 months ago
The 405B version needs at least 768 GB of RAM, as it uncompresses the 200+ GB in memory and runs uncompressed vector links from memory to work effectively. But should you do it, it beats OpenAI hands down. Fully uncompressed it could be 2 to 4 terabytes in size.
@gaathastory • 3 months ago
Reminds me of the last two seasons of the TV series Person of Interest... storing the "AI" on massive-capacity RAM sticks.
@frankbaron1608 • 2 months ago
The terminal is called "cmd" (Command Prompt) on Windows.
@Kingkimabdu4090 • 1 month ago
Everything seemed fine until I clicked the link in Docker. The website page opened with an error message stating, "This page isn’t working." Can anyone offer assistance?
@naveenkumarmurugan1962 • 2 months ago
lovely
@filipskerik1477 • 2 months ago
Is the Docker "thing" only for hooking the previously installed model into the web UI? Or is it something that pushes all the stuff to the cloud, so I don't need that NASA computer? Thanks.
@nosuchthing8 • 24 days ago
Wait, what does Docker do while the LLM is running?
@gRosh08 • 3 months ago
Crazy cool.
@gRAVItation1988 • 3 months ago
Great job! I have an M1 Mac with 16 GB RAM. Can I run 8B?
@SkillLeapAI • 3 months ago
I think so
@tikkivolta2854 • 3 months ago
@@uncannyrobot Do you also train models, and would you care to elaborate? I'd be all ears.
@tikkivolta2854 • 3 months ago
@@uncannyrobot I will find one, thank you!
@dawnbunty7 • 3 months ago
I have a MacBook M3 Pro with 18 GB RAM and a 500 GB hard disk. Will this be sufficient?
@thevoice6853 • 20 days ago
Can you do a tutorial on how to do it on Windows? Thanks.
@SimonFeay • 17 days ago
When I go to Workspace, I don't seem to have the "Documents" tab.
@nessim.liamani • 3 months ago
Can we locally remove restraints on Llama models, including ethical safeguards? Thanks.
@HolographicKode • 1 month ago
What's the hardware setup you run this model on?
@SkillLeapAI • 1 month ago
I'm on an M3 Mac with 64 GB RAM. I can run the 70B model, and the small models respond almost instantly.
@HolographicKode • 1 month ago
@@SkillLeapAI MacBook Pro? Mac Pro? How much VRAM? (Complete configuration.) This has to be a $5K+ setup, I suspect.
@zunairakhalid7358 • 3 months ago
Can we make an API call to this local LLM from my code?
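Yes: besides its native API, Ollama exposes an OpenAI-compatible endpoint, so most OpenAI client libraries work by swapping the base URL. A minimal sketch with curl:

```
# OpenAI-style chat completion against the local server (no real key needed)
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3.1:8b",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```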
@NVX_Ink • 3 months ago
What would be an affordable, yet ideal, desktop workstation?
@cortomaltese1 • 22 days ago
When I try to run Open WebUI, Flowise opens up. I guess they are listening on the same port =/ Can I solve this in any way?
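If two apps want the same port, the usual fix is remapping Open WebUI's host port when creating the container; a sketch, with 3001 chosen arbitrarily:

```
# See what already owns port 3000
lsof -i :3000

# Recreate the container on a different host port (3001 -> container's 8080)
docker rm -f open-webui
docker run -d -p 3001:8080 -v open-webui:/app/backend/data \
  --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```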
@swarnimdubey • 2 months ago
How much total storage does it require, anyway?
@andreac7389 • 3 months ago
Hi, is this model multimodal like GPT-4 Omni? I mean, can it generate code, solve mathematical problems, etc., or is it purely a linguistic model capable of easy conversation but unable to handle complex issues? In other words, my question is: do only the models hosted on the servers of OpenAI, Anthropic, or Meta have the capability to manage complex problems, or does this offline model also have that capability? Thank you.
@SkillLeapAI • 3 months ago
They have some of that capability, but online models are going to be much better. It's very difficult to run the better models offline. The best version of Llama 3.1 is too big to run on a personal computer, and OpenAI and Anthropic don't have an open-source model that you can run locally.
@mohdalki727 • 19 days ago
Can I train it on my data?
@HitsInSandbox • 3 months ago
You can use Msty without Docker.
@RiftWarth • 3 months ago
Wow the 405B model file size is still smaller than some of the Call of Duty games. LOL
@SkillLeapAI • 3 months ago
Yeah, that's true, but it's basically text. If it was video, it would be a million times bigger.
@HitsInSandbox • 3 months ago
It is layer-compressed, and thanks to AI it is the tightest compression of any type of computer file in existence. It may be 250 GB compressed, but fully uncompressed it might be more like 2 to 4 terabytes, while still using a pointer index for all the words in all the languages it supports. The same compression would take a 4.5 GB Blu-ray movie and compress it down to like 50 or 60 MB of space. But the layout of the data is totally different for different needs.
@jitendravishwakarma7949 • 2 months ago
gemma2:2b is an insane model for its size.
@ndidiahiakwo7412 • 3 months ago
Will the website version be capable of uploading documents anytime soon? My computer isn't powerful enough to support running the offline models.
@SkillLeapAI • 3 months ago
Not sure. I hope so
@prajwalm.s7976 • 3 months ago
Can I fine-tune the 70B model and use Open WebUI?
@jeremy4510 • 1 month ago
Can you do a video like this on using Llama in Python?
@moonduckmaximus6404 • 3 months ago
Thank you for your effort. Any reason why my Llama 3.1 70B would respond one letter at a time?
@HitsInSandbox • 3 months ago
You need more memory, and likely an AMD or Nvidia video card with at least 8 GB of VRAM; 12 to 16 is better. The 70B needs at least 64 GB of RAM, not 32 or less. Otherwise you're just using virtual memory, which will eventually crash it as it chokes on larger inputs. If you have the specs, then your antivirus is likely bogging it down by checking all data movement in memory.
@moonduckmaximus6404 • 3 months ago
@@HitsInSandbox A 4090 with 128 GB of RAM... I don't use an antivirus.
@frankvasquez4827 • 2 months ago
If I installed the 8B model first and then I want to install the 70B, will I have both installed, or will the larger one overwrite the 8B? Can I uninstall the models, just to save some storage space? 😅 (Asking because I'm not too technical about it; I'm using Windows, btw.) Thanks in advance.
@SkillLeapAI • 2 months ago
It will install both and you can choose between them. It won't overwrite. And yes, you can remove them. My new video covers that and some new upgrades.
@frankvasquez4827 • 2 months ago
@@SkillLeapAI Thank you, I will watch it. I managed to install them with this video!
@Caged_Monuments-x6p • 3 months ago
So I got it working, and my CPU is running at 75%, and I have a 16-core AMD 3950X. My GPU is barely being used.
@SkillLeapAI • 3 months ago
Nice
@karlpedersen3342 • 2 months ago
Is there a how-to for Windows 11?
@moonduckmaximus6404 • 2 months ago
Hey, I keep getting a notification that Llama 3.5 is ready to update from Llama 3.1. I click the notification in Windows 11 but nothing happens. How can I verify or update Llama 3.1?
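That notification likely isn't coming from Ollama (there is no Llama 3.5 release); checking and updating through Ollama itself takes two commands (a sketch):

```
# Show installed models, tags, and sizes
ollama list

# Re-pull a model to update it to the latest published weights
ollama pull llama3.1
```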
@Carlzora • 3 months ago
Are you able to upload images to use with prompts?
@SkillLeapAI • 3 months ago
Most open models don't have vision, and if they do, they are not good. I would use ChatGPT for that.
@longboardfella5306 • 2 months ago
LLaVA models are pretty good for image analysis.
@volebien • 3 months ago
Thanks a lot. That's what I wanted to do. Now I can upload files and not pay for ChatGPT Plus. But anyway, it is very slow and uses a lot of CPU. Do you know any tweaks where I can share the workload with the GPU?
@SkillLeapAI • 3 months ago
Only to use smaller models, not Llama: something like Phi-3 or the smaller ones.
@lucifergaming9491 • 3 months ago
I use Ubuntu; my web UI doesn't show any models after a correct installation.
@IbrahimAli-hf7mq • 3 months ago
+1
@ankurkumarsrivastava6958 • 3 months ago
I installed Llama 3.1. Now how do I remove the previously installed Llama 3?
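A sketch of the cleanup, assuming the old model was pulled under the default tag; `ollama list` shows the exact names on your machine:

```
# Confirm what is installed
ollama list

# Remove the old Llama 3 weights to reclaim disk space
ollama rm llama3
```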
@NeptuneGadgetBR • 3 months ago
Hi, I couldn't get Docker to run on my GPU. I have an RTX 4090, which should help a lot, but on the CPU it is slow. Do you have any idea how to enable my GPU in Docker on Windows 11?
@fl028 • 3 months ago
Use the --gpus all option :)
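Spelled out, since the flag is easy to mistype: it is `--gpus`, and on Windows 11 it relies on Docker Desktop's WSL 2 GPU support (on Linux, the NVIDIA Container Toolkit). A sketch of running Ollama in Docker with the GPU attached:

```
# Run Ollama in a container with all GPUs attached
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama

# Sanity check: the GPU should be visible inside the container
docker exec -it ollama nvidia-smi
```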
@extremelylucky999 • 3 months ago
Would like to learn how to use Llama + Groq + iPhone Shortcuts to run Llama.
@OutperformThemAllii • 3 months ago
Is there a way to select the hard drive during install? My C: drive is almost full; how can I select my other drive?
@RedwanRiyadYT • 3 months ago
Same issue. Please reply here if you find a solution.
Could you put the file on your web server and then use it as your search and/or help?
@Repz98 • 3 months ago
*Requirements* at 7:56
@Repz98 • 3 months ago
Get 25 "AMD Radeon RX 7600 XT 16GB" cards. When not using Llama, go back to mining crypto.
@SelvamuthuMR • 3 months ago
The Hugging Face Llama 3.1 model repo takes 60 GB of storage but runs very slowly for a single response, while Ollama runs the same Llama 3.1 model faster and its download is only around 5 GB. What is the difference?
@aboubevwic2880 • 20 days ago
How is it offline when you have to log in to the web UI?
@SkillLeapAI • 20 days ago
You just have to create the account. You can turn off your Wi-Fi after that.
@christerjohanzzon • 3 months ago
So, you don't need a fancy GPU to run Llama locally? It does say that you need an Nvidia GPU... but you're running a Mac? Please elaborate.
@SkillLeapAI • 3 months ago
I have the built-in Apple GPU. These are my specs: Chipset Model: Apple M3 Max; Type: GPU; Bus: Built-In; Total Number of Cores: 40.
@SkillLeapAI • 3 months ago
The 8B model should run on a variety of GPUs.
@christerjohanzzon • 3 months ago
@@SkillLeapAI Ah, I see. Thanks for explaining. :)
@longboardfella5306 • 2 months ago
I believe modern Macs use a unified memory model which combines and distributes GPU and CPU memory as needed. PCs don't do this, so they need a specific amount of GPU RAM on a dedicated Nvidia GPU to run models. I have an RTX 8000 with 24 GB VRAM which runs all 8B models fine, but it completely chokes on 70B models regardless of quantisation. For Macs it's all about your total memory available and having enough modern GPU cores to do the processing, as I understand it.
@nguyenphamduy3386 • 2 months ago
How do I upload an *.xlsx file? When I try, the app can't upload the file. Help me.
@moonduckmaximus6404 • 3 months ago
I can try to help you find information on the web, but I'm a large language model, I don't have direct access to the internet..
@CristianaCosta-gi9is • 3 months ago
Thanks for the video. Do you have any tutorial on how to install the LLM in the cloud? I can't install the models locally because my computer is already short on space.
@SkillLeapAI • 3 months ago
Most cloud providers already offer these LLMs by default, but I'll look into making a video.
@NeoDon1 • 3 months ago
Those specs for the 405B are not right. I have 64 GB of RAM and mine flies, with a 4080 Super and an AMD 5900X.
@piska4f • 1 month ago
Well, if you try to use the same token length that's suggested on their website, it would require much more computing power than what you already have... it's set to 2048 by default...
@Eldorado66 • 14 days ago
Well it's probably walking, not flying.
@0reo2 • 3 months ago
I see ChatGPT is becoming the gateway to LLMs: "Hey ChatGPT, how do I install this other LLM I want to have?"
@pankajkhatnani2564 • 3 months ago
By the way, how can this be connected to the internet to get real-time answers?