This video explains how to run your own LLM AI server, like ChatGPT, but locally on your home network. That means it runs only on your computer and no one besides you can access that information, provided your network is secure of course ;).
The tools I'm using are LM Studio and Open WebUI on Fedora Linux 39.
I do NOT use Docker for the Open WebUI installation; I use a Python virtual environment instead.
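The Docker-free install described above boils down to a few commands. A minimal sketch, assuming Python 3.11+ is installed and using `pip` to install the `open-webui` package (the venv path here is just an example; pick any directory you like):

```shell
# Create and activate an isolated Python virtual environment
# (example path; adjust to taste)
python3 -m venv ~/openwebui-venv
source ~/openwebui-venv/bin/activate

# Install Open WebUI from PyPI and start it
# (it serves the web interface on port 8080 by default)
pip install open-webui
open-webui serve
```

Using a venv keeps Open WebUI's dependencies separate from Fedora's system Python packages, so you can upgrade or remove it without touching anything else.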
PC:
AMD RX 7700 XT
Fedora Linux 39
Links:
LM Studio - lmstudio.ai/
OpenWebUI repository - github.com/ope...
Chapters:
0:00 Intro
0:10 The idea
0:54 Install LM Studio
2:30 Install Open WebUI
5:24 Config Open WebUI with LM Studio API
6:30 IP address in your network
6:44 Running the Open WebUI on your mobile
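For the configuration step in the chapters above, Open WebUI can talk to LM Studio through LM Studio's OpenAI-compatible local server. A hedged sketch, assuming LM Studio's server is running on its default port 1234 and using Open WebUI's `OPENAI_API_BASE_URL` / `OPENAI_API_KEY` environment variables (the same values can instead be entered in Open WebUI's connection settings; the key value is arbitrary since LM Studio does not validate it):

```shell
# Point Open WebUI at LM Studio's OpenAI-compatible endpoint
# (default LM Studio server address; change if you moved the port)
export OPENAI_API_BASE_URL="http://localhost:1234/v1"
export OPENAI_API_KEY="lm-studio"   # placeholder; LM Studio ignores the value
open-webui serve
```

To reach it from your phone, find your PC's LAN address (e.g. with `ip addr` on Fedora) and browse to `http://<that-address>:8080` from any device on the same network.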