How to Get Started with Ollama
Comments
@GiovanniBalestrieri 20 hours ago
I wouldn't trust a service like this. Honestly... Am I alone?
@gabrielmorales930 2 days ago
Thank you sir, or ma'am. Have a nice day!
@danil5948 6 days ago
Hi, thank you for the video. I have a problem as some apps require sudo access, and I need the password for that. How can I obtain it?
@TechXplainator 5 days ago
Hey there! For sudo access, you'll typically use the same password as your user account. If you don't remember it, you can reset it by opening a terminal and using the passwd command. Just make sure you're logged into your user account first. If you're having trouble, most Linux distributions have recovery methods like booting into recovery mode or using a live USB to reset the password. Hope this helps!
@archilecteur 8 days ago
Instead of the Terminal, can the call to the model running in Google Colab be made from within an app?
@aravindhhere 11 days ago
Thanks for the demo. A lot of other videos were already outdated, using the CrewAI CLI and Ollama. The video was crisp and saved time, and that's all I need!
@saabirmohamed636 12 days ago
Thanks for the video! I use Lightning too... it's awesome, and the paid version is worth every cent.
@MohamedYosry-s7r 13 days ago
Thank you
@Rising_Volt_Tacklers 13 days ago
Flux Dev + ComfyUI + Lightning AI: please create a video!
@TechXplainator 13 days ago
Thanks! Will do!
@Omar-mn1eb 14 days ago
Concise, works, and simple. Thanks!
@Alex_Brezhnev 15 days ago
super content. thank you.
@mushiburrahamanmallick1408 17 days ago
Today I enjoyed the playlist "Leonardo - the AI image generator", and I must say it is amazing. Keep doing this great job. More power to you. 🧡🧡👍👍
@jayescreations489 18 days ago
I don't have a Mac. Will this work for me? I'm just trying to configure Ubiquiti equipment. I was sent an email suggesting WebUI. I am lost.
@TechXplainator 17 days ago
Both Ollama and Docker work on Linux and Windows, so this setup should work on those operating systems as well.
@sarahk13peace 19 days ago
Thanks a lot for this tutorial! I have the edu package; is there a way to run ComfyUI on a CPU after that?
@TechXplainator 18 days ago
Thanks so much! It is possible, though with significantly reduced performance.
@sarahk13peace 18 days ago
@@TechXplainator True, but they offer 4 hours of running the studio for free, and, to my understanding, you can keep getting these chunks of 4 hours, which is not bad for free stuff :D
@eduardomolina-n8f 19 days ago
I always used Gradio, but thanks to your excellent explanation I installed ComfyUI, and it is everything you said: a marvelous tool. Thank you for your dedication in sharing your knowledge. Many thanks.
@TechXplainator 18 days ago
Thank you so much for your lovely comment!
@EoinMoore-e5s 25 days ago
very informative :-), thank you!
@nightwingvyse 27 days ago
I got up to about 5:30, but then I got an error saying "Form data requires "python-multipart" to be installed." It tells me to enter "pip install python-multipart", but when I do, it just says "command not found".
@tails_the_god 28 days ago
I prefer a Jupyter notebook. Do you have that version too?
@TechXplainator 28 days ago
No, I don't, sorry
@stableArtAI 1 month ago
Great stuff as always. 😂 It's almost like when we try to hold a conversation with Siri: she never can answer the hard stuff, and sometimes she is the only one listening. 😂
@TechXplainator 1 month ago
Haha, glad you enjoyed it! 😂
@jamesshelt890 1 month ago
Great video, thank you! How can I access this studio from a front end like AnythingLLM?
@TechXplainator 1 month ago
Thanks for watching! Yes, you can access the studio from a front end, but it does require some extra steps in the studio. I'll definitely plan a video on that soon!
@janielj5963 1 month ago
That was a great video. I was searching for how to get this issue sorted, and this video helped me. It would also be helpful to see how to use LitGPT, from chat to model usage, sometime.
@TechXplainator 1 month ago
Glad the video helped! Thanks so much for the suggestion! I'll definitely consider making a video on using LitGPT. I haven't tried it out yet, but it sounds promising!
@SunShinesBlack 1 month ago
Very helpful, thanks! It's easy to run the coding 27b instruct model with the Continue extension there.
@devon9374 1 month ago
Crystal Clear, great video
@Joelz_photogram 1 month ago
How do I install this on a Mac that uses Intel, please?
@sergeyosenniy8769 1 month ago
Thanks. It all works. I did the last part with Docker. Don't use the Windows version; its extraction fails.
@wowfielder101 1 month ago
HELP PLEASE: the "export OLLAMA_HOST=" command is not working in cmd. Please help!
@TechXplainator 1 month ago
You mean on Windows? This is how it should work (I haven't verified it, since I'm using a Mac):
1. Open Command Prompt as Administrator.
2. Run the command below, replacing `<paste_url_here>` with your Ngrok URL: setx OLLAMA_HOST "<paste_url_here>"
3. Close and reopen Command Prompt to apply the changes.
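For anyone hitting the same "export" confusion: `export` is a Unix shell builtin, which is why it fails in cmd. A minimal sketch of the difference is below; the URL is a placeholder, not a real endpoint, so substitute your own Ngrok forwarding URL.

```shell
# macOS/Linux: `export` sets the variable for the current shell session only
export OLLAMA_HOST="https://example.ngrok-free.app"  # placeholder URL
echo "$OLLAMA_HOST"

# Windows Command Prompt has no `export`; `setx` persists the variable instead.
# Run in an Administrator prompt, then close and reopen cmd:
#   setx OLLAMA_HOST "https://example.ngrok-free.app"
```

Note that `setx` affects new Command Prompt windows, not the one you typed it in, which is why reopening the prompt is required.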
@attilavass6935 1 month ago
I love the content and find it very useful, though I'm not a huge fan of the AI voice, so I read the blog post instead. No offense, just feedback :) BTW, could you add info on how we can expose Ollama's API using Lightning AI's API Builder or, e.g., LitServe?
@TechXplainator 1 month ago
Hey, thanks for the feedback and for checking out the blog post! I totally get the preference for reading over listening. The voice is actually my own, which I used to train an AI model, but no worries if it's not your thing. Great idea about covering how to expose Ollama's API with Lightning AI! I'll definitely look into that for a future post or video. Thanks again for watching and for the suggestions. I really appreciate it!
@AliAlias 1 month ago
Thanks 🎉🙏 very helpful
@stableArtAI 1 month ago
Great stuff as always! Thanks.
@incrediblekullu7932 1 month ago
Great help, thanks! I think people will be interested in using the Flux model on Lightning AI with their own trained LoRA.
@TechXplainator 1 month ago
Thanks! I'll definitely do a video on that 😊
@amerikyuss 1 month ago
@@TechXplainator Please do!!! ❤❤❤
@ApoorvKhandelwal 7 days ago
@@TechXplainator yes please make this video
@stableArtAI 1 month ago
We did finally get a working version of FLUX and ComfyUI running locally. However, we like the way Automatic1111 works: it's easy to adjust a workflow with styles, and there's a large selection of extensions, with built-in support for SD 3 and SDXL. We hope to post our system specs on our site soon. We just added this video about sampling methods with SD: kzbin.info/www/bejne/fWPGhqiAjZKdqpI. We have been working with a stable install using: version: v1.10.1 • python: 3.10.14 • torch: 2.1.2 • xformers: N/A • gradio: 3.41.2 • checkpoint: 76be5be1b2. We have also run with torch 2.2.2. We did have Python 3.12 once (lol, but we forgot how to update Automatic1111 to use it, as it defaults to 3.10, which we moved from x.x.6 to x.x.14 without issues so far). It is also great not to have an hour limit, since we have generated over 15k images while learning and testing features. We are now at a point where we can dive into some of the other features SD and Automatic1111 offer, with both txt2img and img2img (with and without inpaint) as well as Extras.
@thegreatcerebral 1 month ago
I'm running Ubuntu Server and can't access Open WebUI from another PC on the local LAN. How do you do that?
@TechXplainator 1 month ago
You'll need to expose the local URL to the internet, for example using Ngrok (the free plan is sufficient). You'll find a how-to on my blog: techxplainator.com/how-to-set-up-ollama-and-open-webui-for-remote-access-your-personal-assistant-on-the-go/
@hrmanager6883 1 month ago
Hi, can you please help answer my query about multiple users: can this URL be used by multiple users simultaneously, or can only one user access it remotely at a time? Kindly reply.
@TechXplainator 1 month ago
As far as I know, a free Ngrok URL can be accessed by multiple users at the same time, but there are still limits on monthly usage. Just keep that in mind!
@marcelocruzeta8822 1 month ago
Great, thank you for the class. I installed Open WebUI from the GitHub repo, without Docker. Can I configure it to run with the remote Ollama? ... I found it: you have to change it in the settings. Never mind.
@TheSamwilliams33 1 month ago
Hi, can you offer any assistance with enabling voice in the Docker or Ollama WebUI?
@TechXplainator 1 month ago
Sorry, no, I haven't tried that out yet.
@AxeFxMike 1 month ago
HTTP won't allow mic/cam; it has to be HTTPS, which is the reason for the Ngrok forwarding part in this video. What a mess.
1 month ago
Great video mate
@rd-cv4vm 1 month ago
Ngrok isn't free
@TechXplainator 1 month ago
They have a free tier. What I show in the video uses the free tier
@rd-cv4vm 1 month ago
@@TechXplainator Yes, I heard it is limited, and when you've used up the bandwidth it is time to pay.
@TechXplainator 1 month ago
You're right, the free tier is limited to 1GB and 20,000 requests, which is often sufficient. You can monitor usage on their site, and no payment info is needed to use it.
@rd-cv4vm 1 month ago
@@TechXplainator Thanks! Does it renew every month, or is it a one-time thing?
@TechXplainator 1 month ago
It resets every month ☺️
@edwassermann8368 1 month ago
cool
@edwassermann8368 1 month ago
Excellent, thank you. Would it be possible and reasonable to run this on Lightning AI (CPU) full time, and have permanent mobile access on the go while my desktop machine is sleeping?
@TechXplainator 1 month ago
Honestly, I'm not sure how it will behave when the studio is sleeping; I'll have to try that out. Also, from what I know, the lightning.ai free tier disconnects after 4 hours (even on CPU). But that is definitely something I will try and put in a future video ☺️ Thanks for the suggestion!
@edwassermann8368 1 month ago
Fucking awesome. I don't like Meta, but this is very very cool.
@edwassermann8368 1 month ago
Excellent. Thanks very much. Very helpful.
@kaafirTamatar 1 month ago
Thank You!
@nicocabanna 2 months ago
Hi! Great tutorial! But the .gitignore step didn't work for me either :C Is there another way?
@TechXplainator 1 month ago
Thanks a lot! Sorry to hear about the issue. The files might not be showing because the folders are listed in the .gitignore. If changing that setting doesn't help, try restarting the studio and checking again. If it still doesn't work, open the .gitignore file in the ComfyUI folder and remove the lines for /custom_nodes/ and /models/.
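If editing the file by hand is awkward in the studio, the removal can also be done from the terminal. This is just a sketch: it assumes ComfyUI's .gitignore contains plain `/custom_nodes/` and `/models/` lines, and it demonstrates the edit on a scratch copy rather than the real file.

```shell
# Demo on a scratch copy; in practice, run the sed line inside your ComfyUI folder.
mkdir -p /tmp/comfy_demo && cd /tmp/comfy_demo
printf '/custom_nodes/\n/models/\n/output/\n' > .gitignore  # stand-in for the real file

# Delete the two ignore rules in place; -i.bak keeps a .gitignore.bak backup.
# (# is used as the sed address delimiter because the patterns contain slashes.)
sed -i.bak -e '\#^/custom_nodes/#d' -e '\#^/models/#d' .gitignore

cat .gitignore  # only the /output/ rule remains
```

The `-i.bak` form works with both GNU sed (Linux studios) and BSD sed (macOS), so the same line should behave in either environment.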
@enzobrand571 2 months ago
Hello, I tried to follow your tutorial, but I got stuck after installing the ComfyUI Manager in the second step, which is about dependencies. I can't figure out how to refer to the requirements file. I tried writing what you wrote, but it doesn't work.
@TechXplainator 2 months ago
No worries. If "pip install -r ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt" didn't work, navigate to the folder where the requirements file resides via the `cd` command:
cd ComfyUI/custom_nodes/ComfyUI-Manager/
Then install the requirements:
pip install -r requirements.txt
@enzobrand571 2 months ago
@@TechXplainator thanks
@cekuhnen 2 months ago
Great direction and guide!
@burncloud-com 2 months ago
This video really helped me use Lightning AI.
@burncloud-com 2 months ago
I'm having trouble using Lightning AI. Can I contact you?
@TechXplainator 2 months ago
What's the trouble?
@burncloud-com 2 months ago
@@TechXplainator I can’t type the message here, can you send e to me?
@Billyn0treally 2 months ago
This was incredibly useful and very well made, thank you! ❤
@TechXplainator 2 months ago
You're so welcome!
@Abhinayanagarajan-x8p 2 months ago
This guide is so insightful! Fine-tuning AI models can feel like trying to solve a puzzle: every piece matters! I'm super curious about how tools like SmythOS can simplify this process.
@TechXplainator 2 months ago
Thank you so much! And... good point! So far, I have been using CrewAI for agents, but I'm trying to explore other options. I'll definitely give SmythOS a try ☺️
@GetawayVisions 2 months ago
So far, this is the best video covering every aspect of Leonardo AI! Thank you so much for the clear explanation! I subscribed because I know I will find any answer I'm looking for in one of your videos. Stay blessed <3
@TechXplainator 2 months ago
Thank you sooo much for your kind words ☺️
@DanielChongvideos 2 months ago
Perfect. This should be the gold standard for tutorials. Well done, TechXplainator. This is so much better than 90% of the RunPod ComfyUI installation tutorials.
@TechXplainator 2 months ago
Spread the word 😉 - Just joking... Thank you soooo much for your kind words! This really made my day ☺️
@TechXplainator 2 months ago
Good to know! Never heard of it before! I'll be sure to try it out ☺️
@bnermine9780 2 months ago
Thank you for the great video! Could the model then be used inside local Python code? I am writing a classification script using an LLM, but running it on my CPU takes ages. Can I edit my local Python code so that the classification is done with the model running on Google Colab, but the results are stored locally? This would also help me apply the same model to different use cases. Thank you!!
@TechXplainator 2 months ago
Thank you so much for your kind words! And yes, you can definitely do that. Here is how that could work:
1. Keep the Colab notebook running with Ollama and Ngrok set up as shown in the tutorial.
2. In your local Python script, use the 'requests' library to send classification requests to the Ollama model via the Ngrok URL.
3. Process the responses and store the results locally.
I hope that helps. Happy coding ☺️
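A rough sketch of that local script is below. It uses only the standard library (urllib) so there is nothing to install; the `requests` version is analogous. The URL, model name, and prompt wording are placeholders to adapt, and it assumes Ollama's `/api/generate` endpoint with `"stream": False` for a single JSON reply.

```python
import json
import urllib.request

OLLAMA_URL = "https://example.ngrok-free.app"  # placeholder: your Ngrok URL from the notebook


def build_payload(text: str, model: str = "llama3") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,  # placeholder: use whichever model you pulled in Colab
        "prompt": f"Classify the sentiment of this text as positive or negative: {text}",
        "stream": False,  # ask for one JSON reply instead of a token stream
    }


def classify(text: str, model: str = "llama3") -> str:
    """Send one classification request to the remote Ollama server, return its reply."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(build_payload(text, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]


# Usage sketch: classify remotely, keep the results on your machine.
# results = {t: classify(t) for t in my_texts}
# with open("results.json", "w") as f:
#     json.dump(results, f)
```

Swapping use cases then only means changing the prompt in `build_payload`, which matches the "apply the same model to different use cases" idea from the question.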