Cheap mini runs a 70B LLM 🤯

221,089 views

Alex Ziskind


A day ago

Comments: 421
@shapelessed
@shapelessed 2 ай бұрын
I wouldn't necessarily say that this PC can "run" a 70B model. It can walk one for sure...
@warsin8641
@warsin8641 2 ай бұрын
Still replying faster than all my people trying not to reply right away to not look desperate 🤣
@quinxx12
@quinxx12 2 ай бұрын
I didn't quite get what the limiting factor is for running the model faster. Processor speed?
@edism
@edism 2 ай бұрын
​@@quinxx12 concurrency and memory bandwidth.
@sascha6841
@sascha6841 2 ай бұрын
With a rollator walker 😁
@cwiniuk2778
@cwiniuk2778 2 ай бұрын
@@quinxx12 AFAIK, to run LLMs effectively the data needs to be held in VRAM, since graphics cards can process it significantly faster than CPUs. I didn't fully understand this video, btw, so I assume it's some kind of hack to run the 70B model in system memory, processed on the CPU.
@pythonlibrarian224
@pythonlibrarian224 2 ай бұрын
This is my favorite subsubsubgenre because figuring out how to run LLMs on consumer equipment with fast & smart models is hard today. Gaming GPUs (too small), Mac Studios (too expensive) are stop gap solutions. I think these will have huge application in business when Groq-like chips are available and we don't have to send most LLM requests to frontier models.
@univera1111
@univera1111 2 ай бұрын
Excellent explanation
@haganlife
@haganlife 2 ай бұрын
Been running a local LLM (Mistral 7B and gemma-2-2B) on my iPhone 15 Pro for about a year. Output is instant.
@justtiredthings
@justtiredthings 2 ай бұрын
​@@haganlife and virtually useless, I presume
@dicktucker-o6g
@dicktucker-o6g Ай бұрын
@@justtiredthings you didnt have to kill'em like that. lolololol
@flrn84791
@flrn84791 11 күн бұрын
I'll take a smaller model running faster in VRAM over a larger model at 2 tok/s because it's running on CPU and RAM, any day.
@kiloabnehmen2592
@kiloabnehmen2592 Ай бұрын
I wonder if it would be capable of running Mixtral 8x22B. Does anybody have experience with it? How fast would it be if it can run it?
@perelmanych
@perelmanych 2 ай бұрын
I don't want to disappoint you, but I am quite sure you would get the same 1.4 t/s running a 70B-parameter model purely on the CPU, and it would use half of the memory. So theoretically you would be able to run 180B models on the CPU (q4_K_M version). The thing is that on current PCs the limiting factor isn't compute power but memory bandwidth, and since the iGPU and CPU use the same memory, you get very similar speeds. Make a follow-up video; maybe I am wrong, and if so I will be happy to learn that.
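A quick back-of-the-envelope check of that claim (a minimal sketch; the bandwidth and model-size figures below are assumed typical values, not measurements from this machine):

```python
# Rough upper bound on token-generation speed for a memory-bandwidth-limited LLM.
# Token generation must stream essentially all model weights from RAM once per token,
# so tokens/s <= memory bandwidth / model size (ignoring caches and KV-cache traffic).

def max_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

# Assumed numbers: dual-channel DDR5-5600 peaks at ~89.6 GB/s,
# and a 70B model at q4_K_M is roughly 40 GB in RAM.
ddr5_dual_channel = 2 * 8 * 5.6   # channels * bytes/transfer * GT/s -> ~89.6 GB/s
model_q4_70b = 40.0               # GB

print(f"Theoretical ceiling: {max_tokens_per_second(ddr5_dual_channel, model_q4_70b):.1f} tok/s")
# ~2.2 tok/s in theory; real-world losses make the observed ~1.4 tok/s plausible,
# whether the ALUs doing the math sit in the CPU or the iGPU.
```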
@Fordance100
@Fordance100 2 ай бұрын
Strix halo will have 256-bit memory controller. DDR6 will be 2xDDR5 speed. Potential we will 4X memory bandwidth in 2-3 years. Expensive Mac studio ultra has 800 GB/memory bandwidth right now.
@perelmanych
@perelmanych 2 ай бұрын
@@Fordance100 I was talking about Ultra 5 125H. Strix Halo iGPU will be in its own league and I am impatient to see its test results. Mac Studio has unified memory, which is basically soldered on the chip. As I understand Intel is also going to employ this approach for ultra thin series. Let's see, let's see.
@criostasis
@criostasis Ай бұрын
@@perelmanych I ran models on a 13900K and on a mobile Quadro RTX 5000. They both ran about the same, roughly 20-30 seconds for responses. With an RTX 4080, though, it was way faster, with responses in about a second or two. This was on a self-hosted website with a FastAPI backend using GPT4All, with LangChain for local docs, memory, and context awareness, and TorchServe for fast model loading and to help with concurrency.
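For anyone wondering what that kind of stack looks like, here is a minimal sketch of a FastAPI + GPT4All endpoint. The model filename is a placeholder, and the real setup described above also layers in LangChain and TorchServe:

```python
# Minimal self-hosted inference endpoint: FastAPI + GPT4All.
# pip install fastapi uvicorn gpt4all
from fastapi import FastAPI
from pydantic import BaseModel
from gpt4all import GPT4All

app = FastAPI()
# Hypothetical model file; any GGUF supported by GPT4All works here.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

class Prompt(BaseModel):
    text: str
    max_tokens: int = 256

@app.post("/generate")
def generate(req: Prompt):
    with model.chat_session():
        reply = model.generate(req.text, max_tokens=req.max_tokens)
    return {"reply": reply}

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```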
@msmith323
@msmith323 Ай бұрын
​@@criostasis May I ask which models and what API you managed to get working?
@infini_ryu9461
@infini_ryu9461 Ай бұрын
You can "run" it like that, but you are not going to get any kind of decent speed. That's the issue with running these models on CPU. Yes, the 4090 only has a space of 24GB, but that 24GB is super fast. The more layers you give to your GPU, the faster it will be. So I doubt it will be faster purely on CPU.
@tyanite1
@tyanite1 Ай бұрын
Thank you. I learned a lot. With respect, we have a truly vastly different idea of what cheap means. U.S. $700-$800 total for this unit (after tax ~$900.00) is a whole lot to me. I get it that it's cheaper than other new stuff by comparison.
@allyouracid
@allyouracid Ай бұрын
@@tyanite1 Right now they sell them (barebone) for USD 400. Still not nothing (I could imagine better uses for my money too), but getting the barebone, I'll buy one 48GB RAM stick first for ~100 and play with it, and whenever I feel like upgrading I'll buy another 48GB stick. This should cost a bit over USD 600 tops. Yes, that's still a chunk of money, and I'm not saying this to prove you wrong, but since you said money is a limiting factor, who knows... maybe it helps. I'm curious AF. Can't wait. There are situations where I can't bring myself to send certain info (mostly work/code related) over to some overseas company, so whatever brings me closer to a usable local LLM is greatly appreciated, even if I have to wait around 30 seconds for a reply. Everything is better than waiting 10 minutes, or getting faster replies that are of no use whatsoever.
@meateaw
@meateaw 10 күн бұрын
the alternative is 6000-8000 usd + for just the GPU with enough memory
@WonderSilverstrand
@WonderSilverstrand 2 ай бұрын
In 10 years, videos like this will be nostalgic
@fusseldieb
@fusseldieb 2 ай бұрын
It will be like watching people spinning up 56k modems and getting amazed at the internet.
@WonderSilverstrand
@WonderSilverstrand 2 ай бұрын
@@fusseldieb haha yes
@imacg5
@imacg5 2 ай бұрын
it will be a reminder of what humans look like for the machines
@lockin222
@lockin222 2 ай бұрын
How old are you guys? I assume 20 or younger, because 10 years ago 16 GB of RAM was okay, and today MacBooks still ship with 8 GB of RAM. 10 years in the future this will still hold up.
@fusseldieb
@fusseldieb 2 ай бұрын
@@lockin222 You forget that most technological advancements aren't linear, but logarithmic.
@isbestlizard
@isbestlizard 28 күн бұрын
Mini PCs are amazing!!! I got a SER8 last week with 96GB of memory and a 4TB NVMe, and it matches my old Threadripper 1950X in multicore, but has more memory and storage, BLOWS it away in single core, and fits in the palm of my hand. I literally am in love with it now o.o
@isbestlizard
@isbestlizard 28 күн бұрын
I might get another and connect them via the usb4 40gbps and cluster them if that's possible o.o
@_RobertOnline_
@_RobertOnline_ 2 ай бұрын
Yes, keep exploring these alternatives to running expensive GPU cards or Apple silicon
@univera1111
@univera1111 2 ай бұрын
That's why I like this channel.
@infini_ryu9461
@infini_ryu9461 Ай бұрын
This is not an exploration into alternatives to GPUs, more so getting a model to even fit on such a tiny machine. We know that the more RAM you slap on a machine, the larger the models you can "run", but as you can see it slowed to a snail's pace. VRAM will always be king over RAM. That 24GB in the 4090 is incredibly fast. If you can fit a model solely on the 4090, there's no bigger model you could really need. 70B models are quite underwhelming for their weight.
@rbus
@rbus 2 ай бұрын
Running Ollama with Phi3.5 and multimodel models like minicpm-v on an Amazon DeepLens, basically a camera that Amazon sold to developers that is actually an Intel PC with 8GB of RAM and some Intel-optimized AI frameworks built in. Amazon discontinued the cloud-based parts of the DeepLens program so these perfectly functional mini-PCs are as cheap as $20 on eBay. I have 10. :)
@kiloabnehmen2592
@kiloabnehmen2592 Ай бұрын
Well, but Phi-3.5 has such low-quality output that it's basically useless.
@rbus
@rbus Ай бұрын
@@kiloabnehmen2592 No, compared to prior >3GB LLMs, the fact that it wasn't rambling with incomplete sentences, repeating sentences, and inventing new questions to answer was beyond a fsking miracle, and it does often produce high-quality output. And now there are even smaller LLMs like IBM's Granite MoE 1B, only freaking 862Kb, and it's a __mixture of experts__ model; it was even able to output functional VHDL, and being a mixture-of-experts model it's perfect for embedded devices. The point of tiny LLMs is not their ability to recall esoteric facts but to provide a way to do menial tasks by way of voice conversations. Function calling is a nifty way to give LLMs knowledge that may not be in their training data, as well as up-to-date info, but being able to ask an LLM running on Home Assistant to turn on a light for 10 minutes, or add breakfast cereal to a shopping list, conversationally, with a semblance of humor & cultural awareness, in real time, on an embedded chip is a freakin' game changer.
@danielselaru7247
@danielselaru7247 24 күн бұрын
did you try running a 4 bit quantized larger model on those? what's the best tok/s you got?
@iheuzio
@iheuzio 17 күн бұрын
those are using an intel atom cpu with only 100+ GFLOPS of power.
@AnOldMansView
@AnOldMansView Ай бұрын
You can always pick up a secondhand Tesla K80 and run it side by side with your 3090/4090 or other GPU. I have a 3090 and the Tesla K80; sure, it's old, but heck, I have 48 GB of VRAM to play with and things just run smoothly. Sure, I'm not going to break the 100-meter sprint record, but coming last in an Olympics out of 8 billion people is good enough for me. There are lots of alternative ways to leverage big-company clean-outs of servers that no longer have value to them but are of value to us consumers running AI on the smell of an oily rag. Love the videos.
@deucebigs9860
@deucebigs9860 2 ай бұрын
Please keep this series going!
@monoham1
@monoham1 2 ай бұрын
I run 103B on 4 slots of RAM and also get about 3 T/s, and this is almost exactly half that with 2 slots. The rate LLMs run at, 1~20 T/s until they get a decent GPU, is entirely dependent on memory bandwidth. The best machine for a 70B is actually a 256GB, 12-slot, dual-CPU circa-2015 Xeon, which runs about $2000 total with eBay parts (90% of the cost is the motherboard and CPUs). In other words, no GPU is required at all, just as many RAM slots as you can find.
@NetrunnerAT
@NetrunnerAT 7 күн бұрын
Uhm ... i need to Test my HP DL380G9 -> 768gb RAM. Cost 200€ THX for the idear 😁
@ryanchappell5962
@ryanchappell5962 2 ай бұрын
They need to start making GPUs with DDR slots. It would be slower for gaming but great for LLM and image generation
@DunckingTest
@DunckingTest 2 ай бұрын
soooo excited to see you testing the new lunar lake intel cpus
@AZisk
@AZisk 2 ай бұрын
me too. coming soon hopefully
@ptrxwsmitt
@ptrxwsmitt 2 ай бұрын
Thanks for testing this out. I thought about testing this for myself using the Minisforum version of this mini PC. There seems to be another way of running LLMs using the actual NPU of the Ultra CPUs instead of the Arc GPU, by running it via OpenVINO. I would be very interested in some more testing on Linux+OpenVINO.
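For reference, the OpenVINO GenAI path mentioned here looks roughly like the sketch below. This is untested on this exact box; the model directory is a placeholder and must first be exported to OpenVINO IR (for example with optimum-cli), and NPU support depends on the model and driver:

```python
# Sketch: running a chat model through OpenVINO GenAI and picking the device.
# pip install openvino-genai
# Export the model to OpenVINO IR first, e.g.:
#   optimum-cli export openvino --model meta-llama/Llama-3.1-8B-Instruct --weight-format int4 llama-3.1-8b-ov
import openvino_genai as ov_genai

# "GPU" targets the Arc iGPU; "NPU" targets the AI Boost NPU on Core Ultra (support varies by model/driver).
pipe = ov_genai.LLMPipeline("llama-3.1-8b-ov", device="GPU")
print(pipe.generate("Explain memory bandwidth in one paragraph.", max_new_tokens=128))
```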
@davidtindell950
@davidtindell950 2 ай бұрын
Thank you very much! I have the newest 70B model on both an MSI laptop AND an MSI desktop, each with 64 GB DDR5. They run somewhat slow, but usable, and FASTER than your demo! 😮
@everythingofgames4658
@everythingofgames4658 2 ай бұрын
@@davidtindell950 so, what are the gpus use in there
@paultparker
@paultparker 2 ай бұрын
Same CPU and GPU? Anyway, please give token rates, quantization, and relevant machine specifications. I would assume better performance in the desktop at least because of better cooling versus a mini PC.
@vaibhavbv3409
@vaibhavbv3409 2 ай бұрын
Which CPU and GPU
@davidtindell950
@davidtindell950 2 ай бұрын
@@paultparker Sorry for Delayed Response to Replies. Have Muti-Project DEADLINES ! MSI CodR2B14NUD7092: Intel Core i7-14700F, NVIDIA GeForce RTX 4060Ti. 64GB DDR5 5600 and 2TB M.2 NVMe Gen3 !
@davidtindell950
@davidtindell950 2 ай бұрын
@@vaibhavbv3409 Sorry for Delayed Response to Replies. Have Muti-Project DEADLINES ! MSI CodR2B14NUD7092: Intel Core i7-14700F, NVIDIA GeForce RTX 4060Ti. 64GB DDR5 5600 and 2TB M.2 NVMe Gen3 !
@ps3301
@ps3301 2 ай бұрын
Most hardware is still not designed for running AI. The average Joe won't buy a 192GB Mac to run LLMs. A 4090 doesn't have enough VRAM to run most LLMs.
@ThePgR777
@ThePgR777 2 ай бұрын
Maybe 5090 will have 48 gb VRAM
@frankwong9486
@frankwong9486 2 ай бұрын
My average-Joe uni classmate bought a maxed-out MacBook Pro with nearly a hundred GB of RAM to run LLMs, and he is happy with it 😂
@MiesvanderLippe
@MiesvanderLippe 2 ай бұрын
Apple is the ONE company to actually push local LLM's. Surely they'll upsell you to a 40GB language model if it makes any sense.
@andikunar7183
@andikunar7183 2 ай бұрын
It's not just the computation speed of the 4090; its VRAM has extremely high bandwidth (and is therefore so expensive, but also crazily power-hungry). Apple Silicon doesn't just have "more RAM": Pro/Max/Ultra each double the base M-series memory width/bandwidth. So the M2 Ultra gets close to a 4090's >1TB/s with its 800GB/s of bandwidth. LLM token generation is mainly dependent on memory bandwidth. THIS (and power consumption) is why many buy a Mac Studio instead of multiple 4090s if they do just LLM inference and not machine learning. But NVIDIA is nearly without peers for ML because of its raw compute power.
@Raskoll
@Raskoll 2 ай бұрын
@@ThePgR777 Nah it's 28gb
@ChadZLumenarcus
@ChadZLumenarcus 2 ай бұрын
I have a core i7 9750H and am running llama3.1 model pretty well. I'm just now getting into AI models and learning about this stuff and it's pretty crazy. I want to scale up and mess with this stuff but finances are the limit lol. It's crazy to think that in 8 or so years, we'll likely have something far better running on our phones without a problem. It doesn't have to be perfect, just "good enough. " to help people with their work.
@andikunar7183
@andikunar7183 2 ай бұрын
Hmmm, besides RAM/VRAM size, it's mostly RAM bandwidth that determines llama.cpp's token-generation speed (the 4090 has >1TB/s, the M2 Ultra has 800GB/s). GPU horsepower is mainly useful for (batched) prompt processing and learning. And for RAM size, it's not just the model! With the large-context models like Llama 3.1, RAM requirements totally explode if you try to start the model with its default 128k token limit. But cool video, thanks!!!!
@AZisk
@AZisk 2 ай бұрын
i definitely need to include bandwidth in my next vid in the series
@xpowerchord12088
@xpowerchord12088 2 ай бұрын
Is there a bottleneck for discrete cards moving memory, though, versus a shared memory bus that can load it faster? As in, the 4090's bandwidth is only within itself, and the regular RAM used to feed it is 3-5x slower? Unfortunate that Nvidia will most likely never make something in the middle of the 4090 - A4400 for ML and AI people.
@andikunar7183
@andikunar7183 2 ай бұрын
@@xpowerchord12088 It's simpler, yet complicated: a transformer has to go through its entire compute graph for each and every token it generates. So it has to pump ALL of the billions of parameters, as well as the transformer's KV cache (which can add many GBs for 128k context sizes), from memory (RAM or VRAM) via the (multi-level but small) on-chip caches to the ALUs (in the GPU, CPU, or NPU). Token generation (unlike prompt processing) is not batched, so this has to be repeated for EACH and every token it generates. Modern CPUs (with their matrix instructions), GPUs, and NPUs have very many ALUs calculating in parallel. Because of this, it's not the calculating but pumping the parameters/KV cache from memory to these ALUs that becomes the bottleneck. Current NVIDIA (e.g. the 4090) is able to pump more than 1TB/s with ultra-fast and wide RAM. Apple Silicon uses 128-bit-wide RAM in the M, 256 in the M Pro (except the M3 Pro, which is crippled), 512 in the M Max, and 1024 bits for the M Ultra. Combined with the RAM's transactions/s, this yields 120-133GB/s for the M4 with its LPDDR5X (new Intel/AMD and Snapdragon X do similarly), faster for the Pro, and up to 800GB/s for the M2 Ultra (with its older LPDDR5). Hope this clarifies; sorry for the long-winded explanation.
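To put rough numbers on the KV-cache point, here is a small estimate using Llama 3.1 70B's published shape (80 layers, 8 KV heads with grouped-query attention, head dimension 128). Treat it as a sketch; exact figures depend on the runtime:

```python
# Estimate KV-cache size: 2 (K and V) * layers * kv_heads * head_dim * bytes_per_value * context_length
def kv_cache_gb(layers, kv_heads, head_dim, context, bytes_per_value=2):  # 2 bytes = fp16
    return 2 * layers * kv_heads * head_dim * bytes_per_value * context / 1e9

# Llama 3.1 70B: 80 layers, 8 KV heads, head_dim 128
print(kv_cache_gb(80, 8, 128, 128_000))       # ~42 GB at fp16 with the full 128k context
print(kv_cache_gb(80, 8, 128, 8_192))         # ~2.7 GB at a more modest 8k context
print(kv_cache_gb(80, 8, 128, 128_000, 0.5))  # ~10 GB with 4-bit KV-cache quantization
```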
@xpowerchord12088
@xpowerchord12088 2 ай бұрын
@@andikunar7183 Thanks for the concise and educational response! Appreciate your time.
@dmitrymatora442
@dmitrymatora442 2 ай бұрын
It only explodes if you are not using context quantisation
@yewhanlim8916
@yewhanlim8916 Ай бұрын
7:05 Always try powering up before screwing the cover of the device closed. On rare occasions you need to reseat the DIMMs.
@rootnotez
@rootnotez 2 ай бұрын
Probably good form to put a link in the description to the post you based this video off of. 👍
@shawnparker2692
@shawnparker2692 2 ай бұрын
Alex doing a commercial made me laugh- plus never knew he was in bare feet 😂
@deucebigs9860
@deucebigs9860 Ай бұрын
I bought this setup and 96GB of memory after seeing this so I'm hoping you do more in the future.
@4.0.4
@4.0.4 2 ай бұрын
1.43 t/s is kinda OK, but realistically, it's not very useful. I think a bang for buck situation would be to use a couple Tesla P40s to get like 5t/s. It won't look pretty, but if you chuck it in the garage or something it's not a problem.
@destiny_02
@destiny_02 2 ай бұрын
10:48 Not impossible; llama.cpp can do partial acceleration, running some layers on the GPU and the remaining layers on the CPU.
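With the llama-cpp-python bindings, that partial offload is a one-parameter change. A minimal sketch, assuming a Q4_K_M GGUF file; the path and layer count are illustrative, and n_gpu_layers should be tuned until VRAM is full:

```python
# Partial GPU acceleration: put as many transformer layers as fit in VRAM on the GPU,
# keep the rest on the CPU. pip install llama-cpp-python (built with CUDA/SYCL/Vulkan support).
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.1-70b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=40,   # e.g. ~half of the 80 layers; -1 would offload everything
    n_ctx=8192,
)
out = llm("Q: Why is token generation bandwidth-bound?\nA:", max_tokens=200)
print(out["choices"][0]["text"])
```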
@maxxflyer
@maxxflyer 2 ай бұрын
bro, these are the videos we need. Why is everyone just talking and never making videos like this?
@Y0UTUBEADMIN
@Y0UTUBEADMIN 12 күн бұрын
@@maxxflyer watch your mouth
@maxxflyer
@maxxflyer 12 күн бұрын
@@Y0UTUBEADMIN no
@jelliott3604
@jelliott3604 2 ай бұрын
My mini PC, for general usage, is a Ryzen 7 APU with integrated AMD graphics and 64GB of DDR4 RAM, 56GB of which has been set as dedicated to graphics in the BIOS. It's slow, it's AMD, but it runs stuff on the GPU and is still a lot faster than CPU-only (it still sucks at running Cyberpunk 2077).
@jelliott3604
@jelliott3604 2 ай бұрын
.. keep wondering how a modern AMD desktop CPU (*G model), with a load of CPU cores, a decent integrated GPU, and 128 or 256GB of fast DDR memory available, would handle things. Certainly the cheapest way I can think of to get (close to) 256GB of memory on a GPU; you could have a rack of them for the cost of the Nvidia GPUs you would need to get to that 256GB.
@whodis5438
@whodis5438 2 ай бұрын
@@jelliott3604 More cores don't help; you'll actually get better performance with hyperthreading off. Single-core benchmarks are a better indicator for LLM/AI, as it's about clock speed/turbo boost x RAM bandwidth throughput.
@whodis5438
@whodis5438 2 ай бұрын
@@jelliott3604 AMD keeping AVX512 in their consumer line is gonna make the competition really interesting for CPU-centric builds tho maybe as soon as this next gen refresh. Intel making all the wrong moves
@jelliott3604
@jelliott3604 2 ай бұрын
@whodis5438 My gaming box, the one in the nice case with all the ARGB lighting, is another Haswell-E CPU (i7-5960X) and LGA 2011 board with hyperthreading turned off and that octa-core 3GHz processor clocked up to just under 4.6 GHz on all cores.
@mehregankbi
@mehregankbi 2 ай бұрын
Technically, nvidia could allow the user to use normal system ram as vram. Similar to swapping. They could also use memory compression for vram. It’s pretty usual for system ram. Maybe they already do this for vram too. I’m not sure. Yes it’d be slower if they used these techniques, but it’d be better than not being able to run the task at all.
@TommieHansen
@TommieHansen 2 ай бұрын
Technically you could just mount X amount of RAM as a volume and use that as a swap disk as well, though; no need for Nvidia to do anything. If they allow swapping, any volume could be swapped to.
@mehregankbi
@mehregankbi 2 ай бұрын
@@TommieHansen swap won’t help if nvidia doesn’t support offloading video memory pressure to system memory pressure.
@Techonsapevole
@Techonsapevole 2 ай бұрын
Also, an AMD Ryzen 7 7700 can run 70B without a GPU, but at 3 tokens/s.
@AaronBlox-h2t
@AaronBlox-h2t 23 күн бұрын
Let me get this straight: spend nearly USD 700 on the K6 mini, which has a lobotomized Intel Arc iGPU (2.2 GHz, 8 Xe cores, 112 EUs), then spend another USD 200 on ~100GB of DDR5 RAM. So about USD 1,000 with tax to run a QUANTIZED 70B model, meaning reduced accuracy and precision. I didn't have USD 1,000 to throw away, but I am a gamer and coder, and I have a Windows machine with an Intel CPU, an Intel Arc A770 16GB, 64GB of DDR4 RAM, and a 5TB 7,400 MB/s NVMe SSD with DirectStorage, all bought for gaming by the way. So I set about coding and set up my system to run inference on a Qwen2.5 72B model, and it runs fine. OK, it takes minutes to warm up and load the first time, but after that it runs well, and it runs BF16, not quantized, exactly as it is on Hugging Face, so no reduced accuracy or precision. In contrast, running a Q8 Qwen2.5 32B model (moderately reduced accuracy and precision via the Q8) through LM Studio and doing inference was SLOW... I mean, I could count the letters being printed out if I wanted to. haha Yes, LM Studio on the same system.
@adnctz
@adnctz 2 ай бұрын
Wow, we’re having a Hack Week, and I was thinking of this-nice timing!
@S-Technology
@S-Technology 2 ай бұрын
I’ve been using an Intel i7-1255u in a mini pc to run GPT4All with some pretty good results, as long as you stick with smaller highly quantized models.
@PeterKoperdan
@PeterKoperdan Ай бұрын
GTP-4 ? I thought GPTs are all proprietary and not released to the public…
@HashimHS
@HashimHS 2 ай бұрын
While the 4090 can't run the whole model, it can still speed up the process significantly as you offload some layers to the GPU. BTW, a year ago I was able to run Llama 2 70B on my laptop with a 6900HS 8-core CPU, and I only have 24GB of RAM, so it was using swap (virtual memory), aka the internal SSD. I was getting one token of output every 10 seconds. I only had a 3060 6GB, so I couldn't offload much to the GPU.
@tdreamgmail
@tdreamgmail 2 ай бұрын
Totally not worth, but thanks for the information.
@dmitrymatora442
@dmitrymatora442 2 ай бұрын
It can load the whole model (Google exllama2). And it can do so much more, particularly using q4_0 on the KV cache, bringing it down from 40GB to 10GB at 128k context.
@Unineil
@Unineil 16 күн бұрын
My Beelink's on the way to test this. I'll probably go smaller than 70B though. Super excited.
@maxvamp
@maxvamp 2 ай бұрын
I am a huge fan of the minisforums PCs. Extremely similar in form factor. Sounds like soon we will be having a AMD/ARM/Intel AI benchmark race. :-)
@AZisk
@AZisk 2 ай бұрын
I've got one of them too, video coming soon :)
@TommieHansen
@TommieHansen 2 ай бұрын
@@AZisk Cool, especially since you do some development @ Win but for some reason use WSL2 even when not needed (it can be quite a bit slower for some stuff then "native"...)
@univera1111
@univera1111 2 ай бұрын
I knew Windows was going to win at this LLM thing. I knew that Intel Arc would also win. Nice one. I want you to know that some of us might not ever have access to a Mac due to location. And the fact that you can just buy a RAM upgrade is amazing.
@YouTubeGlobalAdminstrator
@YouTubeGlobalAdminstrator 2 ай бұрын
Miniforum are unreliable and have BIOS bugs.
@AZisk
@AZisk 2 ай бұрын
@@YouTubeGlobalAdminstrator sounds like a little update and good to go
@brymstoner
@brymstoner 2 ай бұрын
If your only unit of measure for success is that it can run it, regardless of how quickly, I made Llama 3 7B run on a Khadas VIM4 Pro using Ollama. Every CPU core spikes and pins at 100 for the majority of the output, but that's expected of an IoT SBC.
@billmarshall3763
@billmarshall3763 Ай бұрын
I have a 7W Intel Core N300 mini PC with just 8GB of LPDDR5 RAM; it runs Q4-Q8 quants of Qwen2.5 and Llama 3.2 models very well. I mostly use GPT4All and Ollama, but the newer 7B Q4 models with embeddings knock the socks off all of Google's models, and aren't slow. I usually use Qwen2.5 7B Instruct Q4 and the new WhiteRabbitNeo 3B at Q8. The machine cost $180.
@AaronBlox-h2t
@AaronBlox-h2t 23 күн бұрын
Hey that's cheap for what it can do. Cool.
@vin.k.k
@vin.k.k 2 ай бұрын
While you're at it, install LM Studio. It now supports Vulkan.
@mareck6946
@mareck6946 2 ай бұрын
To view GPU utilization, set Task Manager to show Compute (right-click one of the graphs).
@alexo7431
@alexo7431 2 ай бұрын
thank you for your experiment, great job
@timeflex
@timeflex 2 ай бұрын
I can feel mobile phones with 128Gb+ of RAM approaching already.
@lincebranco1520
@lincebranco1520 2 ай бұрын
This is very true. It works very nicely. Would be interested to see it with an RTX 4090 or any other card like an RTX 4060 or RTX 4070.
@unokometanti8922
@unokometanti8922 2 ай бұрын
The same library seems to support multi gpu setups. Hence an 8x Intel Arc Pro A60 totalling 96 GB of VRAM could in theory be attempted and still be more competitive than a MacStudio from a TOPS-per-dollar perspective. Don’t expect the same size, silence and power efficiency though…
@xpowerchord12088
@xpowerchord12088 2 ай бұрын
For a small lab, maybe. Functionality would be way down though. An M3 Max with 96GB would be a better all-around deal for an individual. You should see the pro-level Nvidia cards that can be linked, each with 48GB of RAM. Too bad Nvidia will not jump into this world when it has the gaming and pro sectors nailed.
@mclab33
@mclab33 2 ай бұрын
GEM12, AMD Ryzen 7 8845HS, 32GB DDR5-5600, Radeon 780M (fixed 8GB for iGPU), LM Studio:
Llama 3.1 8B Q5 => 7.4 tok/sec
Llama 3.1 8B Q6 => 6.78 tok/sec
Llama 3.1 8B Q8 => 5.548 tok/sec
@mystealthlife6991
@mystealthlife6991 2 ай бұрын
I was running Llama 3.1 70B on an old server with 2x Xeon chips and 128GB of RAM running at 1333 MHz... total cost for the server = $125 off Facebook Marketplace (PowerEdge R710). Responses took a while, but it ran.
@djayjp
@djayjp 2 ай бұрын
Why not use the new processors with huge TOPS perf instead...?
@mzamroni
@mzamroni Ай бұрын
You should close the browsers, as they use significant RAM. And instead of the GPU, the NPU should be faster in that Intel APU. For more iGPU RAM, you can try a desktop AMD 8700G with 4x full-size DIMMs: the CPU can go up to 256GB, but the largest UDIMM on the market is 48GB, so it will be 192GB max. The integrated GPU is rated around 17 TFLOPS FP16/BF16, and the NPU around 16 TOPS INT8.
@SnowDrift-bh7wb
@SnowDrift-bh7wb Ай бұрын
That's the video I was waiting for! Very impressive and well done. However, it's still a bit too slow. I wish it could do 15+ tokens/sec; maybe I would be happy even with 10. It's too bad it doesn't use the NPU.
@SK-bl1lp
@SK-bl1lp 2 ай бұрын
Alex, try to check it out with minipc or laptop together with eGPU like 4080 or 4090. Thank you!
@РустемСиразов-м9м
@РустемСиразов-м9м 2 ай бұрын
Actually, installing a model is much easier now. You can even have UI for free. Msty, for example. Or use ollama directly, if you prefer CLI.
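For reference, the Ollama route is only a few lines with its Python client (or the equivalent `ollama pull` / `ollama run` commands). The model tag below is assumed; check the Ollama library for the exact name:

```python
# Pull and chat with a model through a local Ollama server.
# pip install ollama   (and have `ollama serve` running)
import ollama

ollama.pull("llama3.1:70b-instruct-q4_K_M")  # assumed tag; any tag from the Ollama library works
resp = ollama.chat(
    model="llama3.1:70b-instruct-q4_K_M",
    messages=[{"role": "user", "content": "Summarize what memory bandwidth means for LLMs."}],
)
print(resp["message"]["content"])
```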
@SmirkInvestigator
@SmirkInvestigator 8 күн бұрын
Serious question: how's the chair holding up? That springy lumbar, armrest adjustment, and mesh seat for summer are what I'm looking for.
@hansofmadata3565
@hansofmadata3565 2 ай бұрын
Great vid Alex! Please 🙏 consider another vid where you do an Ubuntu install. I’m running ollama on the GMKTek M3 16GB RAM and am very pleased with inference with Gemma2 and Llama3.1 using open webui as a front end on any device. Thanks for your awesome content 👏
@imadelachiri5475
@imadelachiri5475 Ай бұрын
Could you make a video comparing the iGPU vs the NPU?
@mentalmarvin
@mentalmarvin 2 ай бұрын
Now I'm curious how well your mac can run the 70b model
@Corteum
@Corteum 2 ай бұрын
"but what's impressive is that this tiny little bo can run a 70B LLM... like a snail. So if you're really REAAALLY patient, this is possibly a solution for you" Lol
@denvera1g1
@denvera1g1 2 ай бұрын
In theory, DDR5 should allow for up to 256GB with 2 sticks, or 512GB with 4 sticks, EVEN MORE with ECC RDIMMS and LR-DIMMS but those for sure would require a different processor and motherboard.
@blackhorseteck8381
@blackhorseteck8381 2 ай бұрын
Running Llama 3 (8B Q4) on Ubuntu 24.04 with a Ryzen 7 6800H and its iGPU (680M)! I get about 30 tok/s consistently. Ditch Windows if you want the best results.
@HitsInSandbox
@HitsInSandbox 2 ай бұрын
Yes, I also find Linux allows for unique configurations that cannot be done in Windows, in order to run large LLMs with lower memory requirements. Windows 11 is still bloatware that hogs resources that could be better used for LLMs. Heck, I can run Windows faster in a Linux virtual machine than Windows installed directly.
@alexanderzikal7244
@alexanderzikal7244 29 күн бұрын
I think Apple has a big advantage here, because Ollama supports GPU acceleration via Metal --> 128GB running on an M3 Max or the coming M4 Max is much faster.
@RomanKiprin
@RomanKiprin 2 ай бұрын
That is so weird, I wanted to ask you to do exactly that! I was thinking about upgrading my lab box to run Llama, but I was not sure if that was any better than a paid subscription, or if it would work at all. Thank you! The only thing I would love you to try is to set up Llama inside a virtual machine on this very box. That might be a significantly more challenging task though. :)
@Puneeth-d6h
@Puneeth-d6h 2 ай бұрын
Alex, Intel Lunar Lake CPU laptops are out. Please review and share your experience in a development environment.
@emiliochang3734
@emiliochang3734 2 ай бұрын
I don't think they're for sale right now. What we've seen so far are reviews thanks to brands like Asus and Acer partnering with some YouTubers.
@PatrickOMara
@PatrickOMara 2 ай бұрын
Got this working last year on the AMD chip of their mini pcs.
@GeraldBryant
@GeraldBryant Ай бұрын
Happy Friday! You could have done the same thing with 70 to 79 GB of memory and spent less time on the responses. Anything over 80 is going to give you performance hits. Balance.
@NuncNuncNuncNunc
@NuncNuncNuncNunc 21 сағат бұрын
Nice chair, but why was the HM lumbar support part removed? Seriously though, I think monitor size affects me more than my chair. I've never used a curved screen, and looking at them in stores never impressed me, so I'd like to get a sense of whether or not switching is worthwhile.
@waroonh4291
@waroonh4291 2 ай бұрын
yeah, 2022 hardware you have a problem with VRAM on nVidia Card.. that cost like gold and always out of stock.
@74357175
@74357175 2 ай бұрын
A bit disappointing that we can't use the NPU ?
@TazzSmk
@TazzSmk 2 ай бұрын
M2 Max Mac Studio (64GB) runs Reflection 70B about 7-10x faster than any PC that lacks 48GB graphics vram, just tried yesterday
@GVankov
@GVankov 5 күн бұрын
What's the deal with the integrated NPU in Intel Ultra CPU's. Do you take advantage of it in this setup? I couldn't find detailed information about it, in various articles it usually just says "designed to accelerate artificial intelligence (AI) tasks" which is pretty vague.
@ganon4
@ganon4 2 ай бұрын
Nice video, can't imagine how good it should be with the new GMKTEC M7 Mini PC it's a joke.
@husanaaulia4717
@husanaaulia4717 2 ай бұрын
If you want to use that external GPU, I think an MoE model is better, with the KTransformers library.
@aaronbanks3673
@aaronbanks3673 14 күн бұрын
Great video!
@Nemesis-db8fl
@Nemesis-db8fl 2 ай бұрын
I just discovered these mini PCs (sue me) and I gotta tell you, these are pretty darn good. I just ordered one to run my website, cuz DigitalOcean's prices are too high compared to these in the long run.
@goblinphreak2132
@goblinphreak2132 24 күн бұрын
AMD's new laptop part with the AI NPU and 96GB would perform better, and most likely at the same price or cheaper. Just run LM Studio; you can run 70B Llama no issue.
@AaronBlox-h2t
@AaronBlox-h2t 23 күн бұрын
Interesting...but then, all it's good for is running inference on that quantized LLM. Anyways, how much did you pay for it? Pretty cool machine for running inference.
@Carambolero
@Carambolero 2 ай бұрын
So, doing what already was done is the thing. Nice.
@AZisk
@AZisk 2 ай бұрын
yep, but video! cool concept
@gnsdgabriel
@gnsdgabriel Ай бұрын
about the "GPU spike" change the track parameter to CUDA
@vchong8
@vchong8 5 күн бұрын
I run the Llama 3.2 7B model on my Mac mini M1 with 16GB; it actually runs, not walks at 1.4 steps. Can't wait to run it on the $599 M4 Mac mini 😂😅
@DeepThinker193
@DeepThinker193 2 ай бұрын
Jensen can breathe a sigh of relief. It's slow.
@sirflimflam
@sirflimflam 2 ай бұрын
Pretty sure the limiting factor here is just the amount of memory. I can run a 70B model with my 3070 Ti; only a very small portion of it is offloaded to the GPU since it's a 12GB card, the rest is on the CPU, and it's... comparably slow. Maybe a bit faster.
@royambrose6363
@royambrose6363 2 ай бұрын
your promo cooler than the content !!! I love you man !!
@justtiredthings
@justtiredthings 2 ай бұрын
Can you run the model in something with a better UI like LM studio? Or serve it up as an endpoint? How much does that quant reduce quality?
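Both LM Studio and Ollama can expose an OpenAI-compatible local server, so serving it as an endpoint is mostly configuration. A client-side sketch, assuming LM Studio's default port; the model identifier is whatever the server lists:

```python
# Query a locally served model over the OpenAI-compatible API.
# LM Studio defaults to http://localhost:1234/v1; Ollama exposes http://localhost:11434/v1.
# pip install openai
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="local-model",  # placeholder; use the identifier the server lists
    messages=[{"role": "user", "content": "How much does Q4_K_M quantization hurt a 70B model?"}],
)
print(resp.choices[0].message.content)
```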
@mrdali67
@mrdali67 Ай бұрын
I'd like to see how the Intel fares against the newest AMD minis with the 780M iGPU... it just needs to be added to the support list. I just set up a Minisforum with 96GB of RAM and 2x 4TB Samsung 990 Pros. Definitely gonna test this once they add the 780M to the support list. I'd guess they'll be fairly even since they both use DDR5, but I'm anxious to see how much the CPU and GPU actually matter to the resulting speed. It seems the amount of RAM available to the GPU influences performance much more. These mini PCs should actually be able to pull off the 70-billion-parameter model, and even if they probably won't beat a big Mac Mini, those are also several times more expensive than Intel or AMD mini PCs.
@JoeVSvolcano
@JoeVSvolcano 2 ай бұрын
With everybody talking about 70B and looking for 2x 24GB video cards to run them, I wouldn't be surprised if Nvidia or AMD start releasing 64GB VRAM cards next year...
@supercurioTube
@supercurioTube 2 ай бұрын
Although it's cool to see that it works at all, I can't think of how it would be usable with such low output speed. Besides maybe confirming that the model runs? I'm comparing it with my M1 Max MacBook as a reference for 70b, which provides usable generation speeds (reading speed between 5-8 token/s depending on quantization)
@goodcitizen4587
@goodcitizen4587 2 ай бұрын
Would this PC support two 128GB modules? Maybe those modules are too large.
2 ай бұрын
I am mesmerized by the keyboard,
@dvpzy
@dvpzy 2 ай бұрын
What a nice chair!
@steve_jabz
@steve_jabz Ай бұрын
would like to see u try putting the 96gb in the 4090 machine so the 4090 can process the whole model
@AZisk
@AZisk Ай бұрын
you can’t add ram to the 4090
@steve_jabz
@steve_jabz Ай бұрын
@@AZisk No, I mean adding it to the regular RAM slots of the PC that has the 4090 in it. With modern Nvidia drivers it increases the shared VRAM by 96GB, but with a text generator like oobabooga it makes even more efficient use of it for offloading, and DeepSpeed will make further use of RAM with special optimization techniques only PCs with >64GB of RAM can handle. Not much different from what you're currently doing by just holding the model in RAM and processing with the CPU, although there is some overhead swapping between RAM and VRAM, so experimentation is necessary.
@GenshinYuppe
@GenshinYuppe 2 ай бұрын
i'm gonna end up building one of these. hopefully amd doesn't pull a nvidia and gives us at least 32gb of vram soon. hopefully 48. otherwise, idk i might start modding my cards with more vram
@mirkamolmirobidov1991
@mirkamolmirobidov1991 2 ай бұрын
Awesome video. I think LM Studio also uses llama.cpp as its engine.
@BlazingVictory
@BlazingVictory 2 ай бұрын
aside from the ability to run the 70B LLM, is there a practical use case here when the tokens/second is pretty slow?
@fontenbleau
@fontenbleau 2 ай бұрын
You need to try the EXO cluster project for Macs; it's kinda the only way to juice out that uber-expensive hardware as an AI server, but it's a super pricey way.
@renatomartins5901
@renatomartins5901 2 ай бұрын
Could AMD APUs be even better?
@GoddamnAxl
@GoddamnAxl 2 ай бұрын
I've never seen anyone sell a chair that well to his target audience I'm in tears. 😭
@vannoo67
@vannoo67 2 ай бұрын
Nvidia cards can now use system RAM as VRAM. Of course it is quite a bit slower, but it makes some tasks possible that previously were not. My RTX 4070 Ti Super has 16GB of VRAM, but if I look in Settings | Display | Advanced Display Settings | Display adapter properties for Display 1, I see:
Total Available Graphics Memory: 32729 MB
Dedicated Video Memory: 16384 MB
System Video Memory: 0 MB
Shared System Memory: 16385 MB
Note: I have 32GB of system RAM. So don't feel that you will be limited to only 24GB of VRAM on your 4090 if you have some system memory available.
@IOIOI10101
@IOIOI10101 2 ай бұрын
They always can use RAM. Nothing new here.
@vannoo67
@vannoo67 2 ай бұрын
@@IOIOI10101 Well, actually it is. It has only been recently (since driver 536.40, released in October 2023), and only on Windows, that system RAM can be used in addition to on-card VRAM.
@eduardvendrell9136
@eduardvendrell9136 Ай бұрын
@@vannoo67 does the same apply to the laptop version of 4090 with 16GB?
@vannoo67
@vannoo67 Ай бұрын
@@eduardvendrell9136 I don't know for sure. Probably. You can tell by opening Task Manager, then going to the Performance tab, GPU page. Mine says GPU memory N.N/32 GB, Dedicated GPU memory N.N/16 GB, Shared GPU memory N.N/16 GB (where N.N is a number that represents the current actual use).
@aqeelaliani
@aqeelaliani Ай бұрын
Cool... Is it really practical though??
@Bicyclesidewalk
@Bicyclesidewalk 2 ай бұрын
Would love to see some Linux content. Perhaps you already have something - will check your vids~ Neat stuff here.
@JoeBurnett
@JoeBurnett 2 ай бұрын
That would also be a great cheap option to use Stable Diffusion, Flux.1, ComfyUi, etc. for aspiring artists.
@shApYT
@shApYT 2 ай бұрын
Artists want real time generation like SD turbo to actually be able to draw with it. Something like a 3060 would be better.
@大支爺
@大支爺 2 ай бұрын
@@shApYT Even my Ryzen9+4090 can not do it. lol
@TJ-hs1qm
@TJ-hs1qm 2 ай бұрын
Thanks!
@davidtindell950
@davidtindell950 2 ай бұрын
Sorry for Delayed Response to Replies. Have Muti-Project DEADLINES ! MSI CodR2B14NUD7092: Intel Core i7-14700F, NVIDIA GeForce RTX 4060Ti. 64GB DDR5 5600 and 2TB M.2 NVMe Gen3 !
@cwbh10
@cwbh10 24 күн бұрын
Man, I just want an M4 Ultra to hit the market already
@dr.ignacioglez.9677
@dr.ignacioglez.9677 2 ай бұрын
It's a software issue. Ultra processors are optimized to use the NPU for artificial intelligence, NOT the GPU. You're using the wrong part (a very slow GPU instead of a very fast NPU), but I understand it's because the software you're using doesn't allow you to use the NPU. You should also SHOW how many tokens the CPU alone can process (NOT using the GPU) to compare the performance. I insist, you’re using a very slow GPU, maybe even the CPU is better. In science, you always have to check all the factors experimentally and not take anything for granted. Good luck 🍀
@AZisk
@AZisk 2 ай бұрын
thank you 🙏
@dr.ignacioglez.9677
@dr.ignacioglez.9677 2 ай бұрын
@@AZisk NO, thank to you for all your hard work in making such an interesting and useful video.
@TaFeiYen
@TaFeiYen 2 ай бұрын
Some models won't run on NPU. We still need some time for all the software and hardware to align