Best video about the subject so far. Plus the comparison at the end. Really valuable information.
@Rushil69420 · 21 days ago
Those GPUs all together cost more than my car omg
@Rushil69420 · 21 days ago
Nvm I thought those were A100s lmao
@SB-qm5wg · 19 days ago
The ram is worth more than my car 😆
@greymalkin · 16 days ago
@TechnicallyUnsure, are these LLMs replying with or without internet access? - fan greymalkin
@TechnicallyUnsure · 16 days ago
Without Internet; once you download the models, you can disable the Internet on the server.
@_symmetry_ · 18 days ago
You have an impressive rig.
@jj-icejoe6642 · 21 days ago
Electricity bill?
@bluesquadron593 · 20 days ago
Yep, where is the power meter? 🤣
@glenswada · 20 days ago
Wow, what an incredible setup. Thanks for showing all this. I imagine AI really cannot determine what is a joke or not, so I presume it's a canned response. But a database full of canned responses to questions that most people would ask is still an incredible thing to have on hand. I imagine something like the Minisforum BD790i SE with lots of RAM and a decent GPU would run llama3.3 without costing much in electricity. Thanks man.
@bluesquadron593 · 13 days ago
DeepSeek is a Chinese LLM. This may or may not be your concern.
@ye-1723 · 6 days ago
Interesting content. BTW, do these LLMs receive auto updates, or do you have to do that manually?
@TechnicallyUnsure · 6 days ago
Manually: you can pull newer versions and use the updated models.
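For anyone wondering what that manual pull looks like in practice, here is a minimal sketch, assuming the models are served with Ollama (the video doesn't say which runtime is used, and the `update_models` helper name is hypothetical; adapt it to your own setup):

```shell
# Hypothetical helper, assuming an Ollama runtime: re-pull every
# installed model so each tag is refreshed to its latest published
# version. `ollama pull` is a no-op when the tag is already current.
update_models() {
  # `ollama list` prints a header row, then one installed model per
  # line; the first column is the model tag (e.g. llama3.3:latest).
  ollama list | awk 'NR > 1 { print $1 }' | while read -r model; do
    echo "Updating $model"
    ollama pull "$model"
  done
}
```

You would run `update_models` on the server whenever you want to refresh everything; nothing updates on its own.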
@_A_lex_K_ · 19 days ago
You don't seem that unsure
@TienNguyen-ky4dx · 14 days ago
Don't want to buy a ticket so buy a jet plane 😂😂😂
@TechnicallyUnsure · 14 days ago
More than that: privacy, offline access, use with LM Studio, and solving problems and challenges that other public paid LLMs can't solve.
@ghost_of_you_tube · 21 days ago
Bro 😂😂 ... I'm technically unsure if you are batman 😂😂😂
@anujparekh752 · 19 days ago
😂
@anujparekh752 · 19 days ago
He is Batman
@Ibey_01 · 18 days ago
Good! Great video, very useful. I'm thinking about putting together something similar in the future; could you please tell me the approximate power consumption of hardware like that?
@SB-qm5wg · 19 days ago
Nice server rack.
@RayhanSardar-tj3ji · 17 days ago
Are you Batman?
@TechnicallyUnsure · 17 days ago
Batman? No. But my rig can definitely run Batman: Arkham Knight.
@BinaryCloudChaser · 20 days ago
Hardware? NVIDIA Jetson Orin Nano
@ghost_of_you_tube · 21 days ago
Questions... 1. Is an NPU PCIe card better than this many GPUs (in the current market)? 😅 2. Can you try image generation models? 😮 3. How many minutes does it take to compile, for example, the Linux kernel? 🤔 4. Can it run Minecraft with ultra-realistic shaders, or any sort of game that can use multiple GPUs, if there is one? 😂😂 5. Are you Batman? 😂😂
@TechnicallyUnsure · 20 days ago
I don't know of any NPU card that's as powerful as the cards I have. As for image generation models, I haven't tried; I'm not sure if there are any good open-source ones. Compiling the kernel doesn't use the GPU, and yes, I can probably run any sort of shaders as long as they use the GPU.
@airbeast5671 · 18 days ago
Tenstorrent
@SinisterSpatula · 21 days ago
I'm interested in hosting my own LLM server, but so far I don't use it enough to justify the cost. I'm thinking of making my Mac mini M2 into a self-hosted llama server. I'd be interested in automations for myself and family, stuff that would be useful to us around the clock. I just don't want to drop lots of money on hardware that I know is already behind the curve. There have got to be some consumer inference accelerator cards coming to market in the next few years.
@dieselphiend · 21 days ago
Why not install something like LM Studio?
@SinisterSpatula · 21 days ago
🎉a new Technically Unsure!😸 And I'm curious about this topic too. 🍿🍿🍿
@pavelyankouski4913 · 7 days ago
Sphere
@Jamer1Smith · 19 days ago
Not even o1 solved it?
@TechnicallyUnsure · 19 days ago
Unfortunately no; it kept making changes to the code, fixing one part and corrupting another. I went back and forth with it but couldn't get it to work properly.
@Yves_Cools · 20 days ago
@TechnicallyUnsure: your "Hello World" boot message in DeepSeek can't be compared with what you did in ChatGPT or other online LLMs, since you didn't do the exact same thing in DeepSeek (you started from another reference point), so your argument about the DeepSeek boot-message machine code is completely useless and means nothing.
@TechnicallyUnsure · 20 days ago
No, I did the same with DeepSeek as well, and it gave me the correct code on the first try.
@shawnvines2514 · 19 days ago
Happy for you, but completely useless for almost everyone else.
@TechnicallyUnsure · 19 days ago
Hmmm... care to elaborate? Useless how? You can train your own models, fine-tune open-source models, run unfiltered models, keep your data private, etc. There are many use cases for having such a server; how is this "completely useless"?