@@m4saurabh seriously? that's not for everyone T_T
@LinuxTex 5 months ago
For Phi-3 3.8B, 8 GB RAM; no GPU needed. For Llama 3.1 8B, 16 GB RAM. Most consumer GPUs will suffice. H100 not necessary.😉
@乾淨核能 5 months ago
@@LinuxTex thank you!
@go0ot 4 months ago
Do more local LLM install videos
@nothingbutanime4975 5 months ago
Do make a video on local models for AI-generated images
@LinuxTex 5 months ago
Definitely bro👍
@obertscloud 21 days ago
Thanks. How do I get models that are not censored?
@MrNorthNJ 5 months ago
I have an old file/media server which I am planning on rebuilding as a future project. Would I be able to run this on that server and still access it with other computers on my network or would it just be available on the server itself?
@LinuxTex 5 months ago
That's a great idea actually. You could make it accessible to other devices on your network, as Ollama supports that.
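For anyone setting this up later, here is a rough client-side sketch of what that looks like. It assumes the server runs Ollama with OLLAMA_HOST set to 0.0.0.0 so it listens on the LAN rather than only 127.0.0.1, and the IP address and model name below are placeholders, not anything from the video.
```python
# Minimal sketch: query an Ollama server running on another machine on the LAN.
# Server side: start Ollama with OLLAMA_HOST=0.0.0.0 so it listens on all
# interfaces (default port 11434). "192.168.1.50" and "llama3.1" are placeholders.
import requests

OLLAMA_URL = "http://192.168.1.50:11434"

resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llama3.1", "prompt": "Say hello from the server.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # single JSON object because stream=False
```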
@MrNorthNJ 5 months ago
@@LinuxTex Thanks!
@shubhamshandilya6160 4 months ago
How do I make API calls to these offline LLMs, for use in projects?
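One way to do that, as a hedged sketch rather than anything shown in the video: Ollama exposes a local REST API on port 11434, so a project can talk to it like any other HTTP service. The "llama3.1" model name below is an assumption; use whichever model you have pulled.
```python
# Minimal sketch of calling a locally running Ollama model from a project.
# Assumes the Ollama service is running and the model has already been pulled.
import requests

def ask_local_llm(question: str, model: str = "llama3.1") -> str:
    """Send one chat turn to the local Ollama API and return the reply text."""
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": question}],
            "stream": False,  # one JSON object instead of a token stream
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    print(ask_local_llm("Summarise what RAG is in two sentences."))
```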
@MuhammadMuaazAnsari-l1b 5 months ago
0:39 The Holy Trinity 😂😂🤣
@nehalmushfiq141 5 months ago
Okay, that's good and appreciable content
@LinuxTex 5 months ago
Thanks Nehal. 👍
@kennethwillis8339 4 months ago
How can I have my local LLM work with my files?
@LinuxTex 4 months ago
Sir, you need to set up RAG for that. In the MSTY app that I've linked in the description below, you can create knowledge bases easily by just dragging and dropping files. Then you can interact with them using your local LLMs.
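For readers who want the same thing without MSTY, here is a very rough sketch of the RAG idea against Ollama's API: embed your file chunks, pick the chunk closest to the question, and paste it into the prompt. The nomic-embed-text embedding model, the notes.txt file, and the paragraph chunking are illustrative assumptions, not how MSTY works internally.
```python
# Very rough RAG sketch against a local Ollama instance (not MSTY's internals).
# Assumes `ollama pull nomic-embed-text` and `ollama pull llama3.1` were run,
# and that notes.txt is a plain-text file you want to query.
import requests

BASE = "http://localhost:11434"

def embed(text: str) -> list[float]:
    # Embeddings endpoint; returns {"embedding": [...]}
    r = requests.post(f"{BASE}/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text}, timeout=120)
    r.raise_for_status()
    return r.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

# Naive "knowledge base": one embedding per paragraph of the file.
chunks = [c for c in open("notes.txt", encoding="utf-8").read().split("\n\n") if c.strip()]
index = [(c, embed(c)) for c in chunks]

question = "What does the document say about backups?"
q_vec = embed(question)
context = max(index, key=lambda pair: cosine(q_vec, pair[1]))[0]  # best-matching chunk

r = requests.post(f"{BASE}/api/generate",
                  json={"model": "llama3.1", "stream": False,
                        "prompt": f"Answer using this context:\n{context}\n\nQuestion: {question}"},
                  timeout=300)
print(r.json()["response"])
```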
@atharavapawar3272 5 months ago
Which Linux is perfect for my Samsung NP300E5Z with 4 GB RAM and an Intel Core i5-2450M processor? Plz reply
@caststeal 4 months ago
Go with MX Linux with the Xfce desktop environment.
@decipher365 4 months ago
Yes, we need many more such videos
@Soth0w76 4 months ago
I already use Ollama on my Galaxy A54 with Kali NetHunter