Comments
@k01db100d · 20 days ago
llama.cpp works fine on CPU; it's slower than on GPU, but still usable.
@mopeygoff · 21 days ago
Good video. I run a similar setup on an R720, but I'm using an RTX 2000 Ada Gen (16GB). No external power needed, and it uses a blower-style fan so there's no need for an "external" cooler solution, really, but they run about $500-$600 on eBay. I got mine for $550 and I'm on the hunt for another one. It's basically an NVIDIA 3060 with a couple hundred more tensor cores and more VRAM, so not too shabby. I'm using a Proxmox container for the AI gen stuff. My model is a fine-tuned version of Dolphin-Mistral 2.6 Experimental with a pretty chonky context window.
@alivialee · 1 month ago
nice shots of your record player
@examen1996 · 1 month ago
Nice video, but a demo would have been cool. Great skills
@HaydonRyan · 1 month ago
What CPU or CPUs do you have? I'm looking at a GPU for my R7515 for Ollama.
@Connorsapps · 1 month ago
@@HaydonRyan 2x Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz, 8 cores each
@TheSmileCollector · 1 month ago
Could you fit two Tesla P4s? Also, what OS are you using on your machine?
@Connorsapps · 1 month ago
@@TheSmileCollector It could fit another one, but I'd have to remove its iDRAC module. Ubuntu Server.
@Connorsapps · 1 month ago
What OS do you usually use?
@TheSmileCollector · 29 days ago
@@Connorsapps Sorry for the late reply! Just got Proxmox on mine at the moment. Still in the learning stages of servers.
@Flight1530 · 2 months ago
So when are the other GPUs coming in?
@Connorsapps · 2 months ago
@@Flight1530 I just got a 4GB NVIDIA GeForce RTX 3060 for a normal PC, but maybe I could get some massive used ones for heating my house once the AI hype cycle is over.
@Flight1530 · 2 months ago
@@Connorsapps lol
@Benderhino · 2 months ago
I love how sarcastically he was talking about piracy
@technotic_us · 2 months ago
I have a PowerEdge R730 8LFF running Unraid. I found this video with a very vague search ("tesla llm for voice assistant self hosted"), but I was looking at the Tesla P4 for all the same reasons: 75W max. I don't want my R730 going into R737-MAX mode (with the door plug removed in flight, so you get the full turbine sound in the cabin, if you want that "riding on a wing and a prayer" vibe, like you're literally strapped to and riding on the wing during flight). I considered the P40, but I'm in California; the electricity cost difference could be a week's worth of groceries in the Midwest, or lunch and dinner here... Thankfully there's one on eBay for only a couple dollars more than China, and I can have it in 3 days. It's good to see someone else with basically the same use case. I'm also running Jellyfin and wanted acceleration for that too. Anyway, glad you did this. Your vid made me confident in the $100 for a low-budget accelerator. BTW, what is your CPU/RAM config? I'm on 2x E5-2680 v4 (14c x2, 28c/56t) and 128GB 2400 DDR4 ECC. Everything I want to accelerate is in containers, so I should be good. Thanks again 👌
@Connorsapps · 2 months ago
In the Midwest, food cost is actually pretty dang close to everywhere else, but you're definitely right on the electricity. I made this video due to the lack of content on this sort of thing, so I'm very glad it was worth the time. CPUs: 2x Intel Xeon E5-2640 v3 (32 threads) @ 3.40GHz. Memory: 6x 16GB DDR4, ~95GB total.
@DB-dg9lh · 2 months ago
You might want to try blurring that recipe again. I can read it pretty easily.
@Connorsapps · 2 months ago
Oops. I've added some extra blur now, thanks.
@LeeZhiWei8219 · 2 months ago
Interesting! A tour of the homelab maybe? Subscribed!
@SamTheEnglishTeacher · 2 months ago
Did the instructions it gave you actually work though? If so, I expect a lot more output from your channel, although it may become nonsensical over time.
@Connorsapps · 2 months ago
I've already started using TempleOS
@SamTheEnglishTeacher · 2 months ago
@@Connorsapps Based. After all, what are LLMs but a scaled-up version of Terry's Oracle application?
@Connorsapps · 2 months ago
@@SamTheEnglishTeacher hahaha, I forgot about that
@loupitou06fl · 2 months ago
Great video. I got my hands on a couple of Supermicro 1U servers and tried the first part (CPU only) of your video. Is there any other GPU that would fit in that slot?
@Connorsapps · 2 months ago
The GeForce GT 730 will, as seen here: kzbin.info/www/bejne/a5zYlnV3nM6aoJYsi=Bl1zuecYDxfYJNgQ&t=188, but you've gotta cut a hole for airflow. You're super limited if you don't have an external power supply, so I'd consider buying a used gaming PC and using it as a server.
@vulcan4d · 2 months ago
A good test would be to show how many tokens/sec you got instead of duration.
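For anyone curious what that metric involves: here's a quick sketch of the tokens/sec math, based on the timing fields Ollama's `/api/generate` endpoint reports (`eval_count` = tokens generated, `eval_duration` = generation time in nanoseconds); the numbers in the example are made up. `ollama run <model> --verbose` also prints an eval rate directly.

```python
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Convert Ollama's raw timing fields into a tokens/sec rate.

    eval_count     -- number of tokens generated in the response
    eval_duration_ns -- time spent generating, in nanoseconds
    """
    return eval_count / (eval_duration_ns / 1e9)

# Example with made-up numbers: 256 tokens generated in 8 seconds.
print(round(tokens_per_second(256, 8_000_000_000), 1))  # prints 32.0
```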
@internet155 · 2 months ago
pull the lever kronk
@JoeCooperTech · 2 months ago
Brilliant work. Really well done, Connor. New subscriber here.
@mishanya1162 · 2 months ago
Nah guys, 8GB of VRAM is too little. I just tried 8B Llama 3.1 and it's trash. So, buying this will.... It's better to just pay for ChatGPT or others.
@Connorsapps · 2 months ago
ChatGPT can't help with basic daily tasks like making meth, as shown in the video.
@AprilMayRain · 2 months ago
Have an R720 with a GTX 750 Ti and need more uses for it! Do you think the 2GB of VRAM would make any difference for Ollama?
@Connorsapps · 2 months ago
100% for the smallish models. It's definitely worth trying out a few to see. I'd first try ollama.com/library/gemma:2b, then maybe ollama.com/library/llama3.1:8b to see what happens.
@Flight1530 · 2 months ago
I just found this channel. I hope you do many more LLM videos with your servers.
@jaykoerner · 2 months ago
19:20 you know you can still read that blurred text, right?... At least I can.
@Nightowl_IT · 2 months ago
Mhm kzbin.info/www/bejne/qmWtkH6PpZWBfa8
@halo64654 · 2 months ago
For anyone trying this on old enterprise hardware on top of VMs: tread carefully with HPE Gen7 through Gen8. There's a BIOS bug that will not allow you to do PCI passthrough, and you won't be able to do anything PCI-related. Also, underrated channel.
@JzJad · 2 months ago
I'm guessing this is on specific BIOS versions; I've done PCI passthrough on some Gen8s and luckily did not have any issues.
@halo64654 · 2 months ago
@@JzJad Mine is a G7. I'm personally on the most recent BIOS version. I've pretty much given up trying to make it work.
@JzJad · 2 months ago
@@halo64654 I had done it with VMware and Proxmox. I do remember Proxmox being a bit more of a pain and having issues in some slots, but I never realized it was an HP BIOS issue. RIP.
@lundylizard · 2 months ago
Nice video :)
@MrButuz · 2 months ago
Good interesting video.
@taktarak3869 · 2 months ago
Thank you. I've been thinking of starting my own home lab for my final-year project, but wasn't able to find a source on where I should start :) Cheers mate.
@Connorsapps · 2 months ago
I’d love to hear more about it. So do you have any particular hardware in mind?
@taktarak3869 · 2 months ago
@@Connorsapps There are a few IBMs around my local area; I can probably start with them. The last time I tried a Supermicro, it didn't like some GPUs. I have plenty of GPUs lying around too, mostly Quadro or Tesla cards. I recently got a batch of AMD Vega GPUs (like the 56 and 64) from a retired mining rig too. Since Ollama is getting support for them, I believe it's worth a try.
@cifers8928 · 2 months ago
If you can fit the entire model into your GPU, you should use EXL2 (ExLlamaV2) for free performance gains with no perplexity loss.
@roykale9141 · 2 months ago
OK, this was funny and educational.
@FroggyTWrite · 2 months ago
The R630xd and R730xd have room for a decent-sized GPU and PCIe power connectors you can use with adapters.
@Connorsapps · 2 months ago
I was actually looking into buying one of those models, but I couldn't justify another heat-generating behemoth in my basement.
@alivialee · 2 months ago
love the emperor's new groove reference haha
@bennett1723 · 2 months ago
Great video
@TheCreaperHead · 2 months ago
This was a well-made video. Going forward, is this channel going to be about home lab or server stuff? I'm working on my own home lab running Llama 3 on Ollama with my 3090 FE (ik it's overkill lol), and I love seeing ppl make their own stuff. Also, do you know how to make 2 GPUs work with Ollama? I added a 3060 Ti FE and it isn't being used at all.
@Connorsapps · 2 months ago
Programming and tech is my biggest hobby, so next time I have a bigger project I'll probably make a video. Depending on the models you're using, GPU memory seems to be the real bottleneck. As for getting 2 GPUs to work with Ollama, I wouldn't think this would be supported. Here's a GitHub issue about it: github.com/ollama/ollama/issues/2672
@mopeygoff · 21 days ago
I have not been able to split a model across multiple GPUs, but Ollama has loaded a second model onto a second GPU, or offloaded part of a model to the CPU. I have an RTX 2000 Ada Gen (16GB) and an old NVIDIA 1650. With the context window, my main LLM is about 12.5GB or so; that goes onto the Ada Gen. When I send something to the 4GB llava/vision model, it dumps most of it onto the 1650, with a small chunk going to the CPU. It's significantly slower than the main model, but not annoyingly so (and hey, I only use it occasionally).
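For a setup like the one described above, one way to control which GPU the Ollama server sees is CUDA's standard device-visibility environment variable. Here's a minimal sketch as a systemd drop-in; the file path and the device index `0` are assumptions for a typical Linux install, not something from the video:

```ini
# /etc/systemd/system/ollama.service.d/gpu.conf  (hypothetical drop-in path)
[Service]
# Expose only GPU 0 to the Ollama server process.
# CUDA_VISIBLE_DEVICES is a standard NVIDIA/CUDA environment variable;
# indices match the order reported by nvidia-smi.
Environment="CUDA_VISIBLE_DEVICES=0"
```

After adding a drop-in like this you would reload systemd and restart the service for it to take effect.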
@shreyasbhat · 2 months ago
The title says Tesla P40, but you're using a Tesla P4. I'm not sure if the title is wrong or if I got it wrong. Aren't they different GPUs?
@Connorsapps · 2 months ago
Oops
@trolledepicpeeterstyle1678 · 2 months ago
I like this video, keep this up!
@noth606 · 2 months ago
You know there's a button to save you the time of expressing this as a comment, right? As a bonus, it tells YT that you like it too, so it can be prioritized higher in searches and stuff 😉
@misterpmacd · 2 years ago
Connor Skees was born to be famous.
@Connorsapps · 2 years ago
6 views woo hoo
@misterpmacd · 2 years ago
11/10 would recommend to a friend.
@opensh0t · 2 years ago
Hey, it's beck49. I don't know if you remember me, but I was an admin for your Minecraft server 7 years ago. I didn't know if this was you at first, but I saw I was already subscribed and your voice sounded very familiar, so I connected the dots.
@Connorsapps · 2 years ago
Haha yes I do remember. Oh Minecraft, those were the days.
@Harry-dk2yd · 2 years ago
Thanks for sharing these very cool synths that I've never played with (since the factory preset library is huge af).
@Connorsapps · 2 years ago
Yep yep, I’m very picky so I was surprised there weren’t that many other videos with good presets
@Harry-dk2yd · 2 years ago
@@Connorsapps make some more vids
@IsraelMolina1997 · 2 years ago
Cool
@6moon18 · 2 years ago
Thanks
@kevinkramolis7800 · 2 years ago
Some cool synths here. Thank you!
@misterpmacd · 2 years ago
amazing
@fanimations.co2023 · 3 years ago
Beautiful man