Thanks for leaving all the errors in and correcting them. Excellent.
@datpspguy 1 year ago
I was using Ubuntu Desktop running Mixtral on Ollama so I could make API calls from my FastAPI app in VS Code, but realized I should separate them out and run Ollama headless. I didn't realize that CORS was preventing outside calls from my dev machine, and this video helped once I found the GitHub page as well. Thanks for sharing.
@IanWootten 1 year ago
Glad to hear you sorted it!
@datpspguy 1 year ago
Thank you. I ended up storing the environment variable in the .conf file to bind the IP address, so it handles this automatically. @@IanWootten
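For anyone wanting to do the same, here is a minimal sketch of the kind of systemd override the commenter describes, assuming the standard Linux install where Ollama runs as a systemd service (the 0.0.0.0 bind and the wide-open OLLAMA_ORIGINS value are example choices for this scenario, not a recommendation):

# /etc/systemd/system/ollama.service.d/override.conf, created via: sudo systemctl edit ollama.service
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=*"

# then apply the change:
sudo systemctl daemon-reload
sudo systemctl restart ollama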
@DataDrivenDailies 1 year ago
Just what I was looking for, thanks Ian!
@IanWootten 1 year ago
No problem!
@sto3359 1 year ago
This is amazing news! I'm limited to 16GB RAM on my Macs, but not so on my Linux machines!
@trapez_yt 7 months ago
I can't run it with service ollama start. It says the following:
$ sudo service ollama start
ollama: unrecognized service
@SMCGPRA 1 month ago
What is the minimum PC or laptop configuration needed to run the Llama 3 and Mixtral models on Ollama? Please let us know.
@atrocitus777 11 months ago
How does this scale for multiple users sending multiple requests at a time? Do you need to use a load balancer / reverse proxy? I don't think Ollama supports batch inference yet.
@jakestevens3694 11 months ago
You would have to launch and run the application multiple times; the best way is to just use something like Docker. Otherwise, I believe there's the "screen" command. If I remember correctly, on Linux this lets you run applications in the CLI in multiple virtual "screens", or rather sessions. You would then want to make sure each instance uses a different port from the others. Also note that the RAM each instance uses is dedicated to it, while CPU can be shared. Sharing RAM might be possible with some tricks, but it's unlikely.
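As a rough sketch of the multiple-instance idea above, each copy of Ollama can be bound to its own port with the OLLAMA_HOST variable mentioned later in the thread (the port numbers here are arbitrary examples, and every instance keeps its own copy of the model in RAM, so memory requirements multiply):

# first instance on the default port
OLLAMA_HOST=127.0.0.1:11434 ollama serve &
# second instance on a different port
OLLAMA_HOST=127.0.0.1:11435 ollama serve &
# a reverse proxy or load balancer could then spread requests across the two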
@atrocitus777 11 months ago
What about pulling from a custom endpoint where I have my own hosted models? I want to run this on an air-gapped network that doesn't have any access to the internet, so if I could point it to an on-prem server I have, that would be awesome. @@jakestevens3694
@wcgbr 10 months ago
Hello. I'm developing an on-premises application that consumes Ollama via its API. However, after a few minutes, the Ollama server stops automatically. I would like to know if there is any way to keep it running until I stop it. Thank you very much.
@PengfeiXue 9 months ago
Can we use Ollama to serve in production? If not, what is your suggestion?
@timjx3675 1 year ago
Mistral 7B is running really sweet on my old Asus (16GB RAM) laptop.
@IanWootten 1 year ago
Runs really fast on my MBP too, just started playing with it yesterday.
@timjx3675 1 year ago
@@IanWootten sweet
@bigsmoke4568 4 months ago
Are you running it without a GPU? I have an old laptop with 16GB as well, with a beefy CPU, but I'm not sure if it'll be able to run somewhat smoothly with just the specs I have.
@receps.8396 3 months ago
Thank you, indeed. It worked.
@peteprive1361 1 year ago
I got an error while executing the curl command: "Failure writing output to destination".
@IanWootten 1 year ago
Weird. Perhaps try running it from a directory you are certain you have write access to.
@ITworld-gw9iy 9 months ago
For a 70B model, what server would I need to rent? The docs say at least 64GB of RAM, but there are no minimum specs for the NVIDIA card in the docs. Who has experience with this?
@amjadiqbal5353 4 months ago
You are a real hero.
@VulcanOnWheels 9 months ago
0:08 How did you get to your pronunciation of Linux? 10:53 How could one correct the error occurring here?
@SuperRia33 6 months ago
How do you connect to the server via a Python client or FastAPI for integration with projects/notebooks?
@IanWootten 6 months ago
If you simply want to make a request to an API from Python, there are plenty of options. You can use a package from the standard library like urllib, or a popular third-party library like requests.
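For instance, a minimal sketch using the requests library against Ollama's /api/generate endpoint, assuming Ollama is running locally on its default port 11434 and a model such as mistral has already been pulled:

import requests

# Ask the local Ollama server for a single, non-streamed completion
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "Why is the sky blue?", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])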
@rishavbharti5225 11 months ago
This was a really helpful video, Ian! But I am facing one issue: after running ollama serve, the server shuts down when I close the terminal. Please tell me if there is a way to prevent this. Thanks!
@perschinski 7 months ago
Great stuff, thanks a lot!
@JordanCassady 9 months ago
Which version of Ubuntu did you choose? It seems to be missing from the video.
@74Gee 8 months ago
RunPod is very affordable too. From 17c per hour for an NVIDIA 3080.
@IanWootten 8 months ago
Yeah, I wanted to do a comparison of all the new services appearing.
@sugihwarascom 1 year ago
How come the model runs in 8GB of RAM? The docs themselves say it needs at least 16GB for Llama 2.
@IanWootten 1 year ago
No idea - I was going on experience using ollama rather than the model itself.
@GenerativeAI-Guru 1 year ago
How do I change the IP and port for Ollama?
@IanWootten 1 year ago
Use the env var OLLAMA_HOST. e.g. OLLAMA_HOST=127.0.0.1:8001 ollama serve
@GenerativeAI-Guru 1 year ago
Thanks
@AdarshSingh-rm6er 7 months ago
Hello Ian, it's a great video. I have a query and would be very thankful if you could help me; I have been stuck for 3 days. I am trying to host Ollama on my server. I am very new to Linux and don't understand what I am doing wrong. I am using nginx to proxy Ollama and have configured the nginx file, yet I am getting an access denied error. I can show you the config if you want. Please respond.
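For reference, a minimal sketch of the kind of nginx reverse-proxy block being described, assuming Ollama is listening locally on its default port 11434 (the server_name is a placeholder):

server {
    listen 80;
    server_name example.com;  # placeholder domain

    location / {
        # forward all requests to the local Ollama server
        proxy_pass http://127.0.0.1:11434;
    }
}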
@blasandresayalagarcia3472 1 year ago
What is the cost of web hosting Ollama or these kinds of LLM models?
@IanWootten 1 year ago
In this case, it'll be the price of the virtual machine you choose to install it on, so it depends on the provider.
@petermarin 1 year ago
Benefits of running it like this vs Docker?
@IanWootten 1 year ago
Running anything within a container will always mean the app runs slower.
@user-wr4yl7tx3w 1 year ago
Do you think it is safe to install on your own laptop instead of a cloud server?
@IanWootten 1 year ago
Yes. Ollama has desktop versions too, and it doesn't send anything externally when you query it that way. I have another video where I do this on my Mac.
@nickholden585 1 year ago
Right now there is an issue with Ollama where, if you create a model, it spams you with "do not have permission to open Modelfile". It's super odd, because even if you give full read and execute rights to every user, or run the command with sudo, it still fails. The only viable workaround is to run it in /tmp.
@IanWootten 1 year ago
This is an issue with the current user not having access to the ollama group. There's a recommended solution posted here (though it sounds like it might not be completely resolved): github.com/jmorganca/ollama/issues/613#issuecomment-1756293841
@nickholden585 1 year ago
@@IanWootten Saw that. Even after running sudo usermod -a -G ollama $(whoami) it still won't work. The idea to run it in /tmp came from that thread, haha. Outside of this issue, the rest of the project is pretty cool IMO. Local LLMs with reinforcement learning, wifi and direct brain integration will be the future.
@davidbl1981 1 year ago
Even if the killer is dead on the floor, the killer is still there and would still be a killer 😅 so the correct answer would be 3.