As another addendum, the locking-up issue was due to two things: running too large a model, and the SSD I was using in the test. I swapped it for another SSD after seeing some read/write problems, and it's far more stable now.
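For anyone debugging similar lockups, here's a minimal sketch of how to check whether the drive is at fault, assuming an NVMe SSD at /dev/nvme0n1 and the smartmontools package installed:

    # Watch the kernel log for storage errors while a model loads
    sudo dmesg --follow | grep -iE "nvme|i/o error"
    # Print the drive's SMART health report (look for media errors)
    sudo smartctl -a /dev/nvme0n1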
@energyideas (20 days ago)
More details, please? What were the specs of the SSD that did not work, and what are the specs of the one that did? Thanks.
@ajinkyax (24 days ago)
Sorry, I don't understand. What can you build with it? We already have powerful Pis.
@cykovisuals (a month ago)
Great video! Thanks for uploading it! Cool device. 😎
@JeremyMorgan (a month ago)
Thanks bro!
@johncarr123 (28 days ago)
Can you do a segment on installing a notebook LM on the system?
@sureshdurairaj9316 (a month ago)
Thank you for showing it in detail. How are the thermals?
@JeremyMorgan (a month ago)
Thermals are great on it so far. I have some other tasks to throw at it as well, though!
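If you want actual numbers, NVIDIA's tegrastats utility ships with JetPack and prints temperatures and power draw; a quick sketch (the exact field names vary a bit between JetPack releases):

    # Report thermal zones, RAM use, and power stats once per second
    sudo tegrastats --interval 1000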
@brianboan9601 (26 days ago)
Are you running it at 25 watts?
@JeremyMorgan (25 days ago)
No. I am currently running it at 15 watts. I am not sure how to turn it up to 25, or even if that is possible right now. But I'll look! Thanks for asking.
@abcdesignstudio101 (a month ago)
Thank you. That’s very cool!
@JeremyMorgan (a month ago)
Thank you!
@johncardussi (a month ago)
Why didn't you install the SSD on the board? It comes with connectors.
@JeremyMorgan (a month ago)
Thank you, you're right. I didn't know there were mounts on the bottom for SSDs; I haven't gotten that far in the manuals yet! Thanks again.
@johntdavies (a month ago)
Just FYI, that's not a European plug; that type is used in Australia, New Zealand, and China. I was keen to buy one of these to complement my several Raspberry Pi 5s, which run almost all of the models you're trying, but incredibly slowly (around 4 tokens/sec). I was looking for something a little more powerful. The performance is about half of what you get from a new Mac M4 Mini with 16 GB RAM, and I've yet to find a Jetson Nano for under $400 in the EU/UK, so the Mac is effectively a lot better value. There's no external power supply, it runs 7-8B Ollama models out of the box, and it's twice the speed with no issues. Like you, I've got some more serious hardware, but these palm-sized gadgets are fun to play with. Had the Mac M4 Mini not been available, this would be a cool gadget, but you'd need two of these to come close to the base model Mac Mini, and they're more expensive. I enjoyed the video, thanks for posting.
@JeremyMorgan (a month ago)
Yeah, that is a great point. I think the Mac is a better value if you can't get your hands on one of these, and it's also a bit more capable. It depends on what you're prototyping as well; having CUDA and NVIDIA hardware is valuable, as are the GPIO and MIPI camera ports. If you do end up using the Mac for this, share your results here!
@johntdavies (a month ago)
@JeremyMorgan I've got an M4 MacBook Pro, so I don't really need the Mini. I've got a few clients who have purchased them for local inference, and they're loving them. They're getting about 40 tokens/sec on the llama3.2 Q4 model, so about twice the speed of the Jetson Nano, but they can also run two models in parallel, which is a plus.
@aminemooh7192 (a month ago)
Hmm, can I use multiple models on it? Also, what about files, can we feed them to it? Thanks for the vid :D
@JeremyMorgan (a month ago)
Yes, you can use multiple models; so far I've tried out quite a few. You can also feed files to it through SCP or other means. It's just a Linux system running everything, so there are a ton of possibilities.
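As a minimal sketch of the SCP route (hostname and paths here are made up):

    # Copy a file from your workstation to the Jetson over SSH
    scp ./report.pdf user@jetson.local:/home/user/docs/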
@aminemooh7192 (a month ago)
@JeremyMorgan :o Thank you so much ❤
@MukulTripathi (a month ago)
See on the top right: you're running at 15 watts. You can switch it to 25 watts and you won't get that throttling, at least theoretically.
@JeremyMorgan (a month ago)
Thank you! I'm going to check that out for sure. I only used the desktop for a short time and never noticed that. Thanks for saying something!
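For reference, power modes on Jetson boards are managed by nvpmodel; a sketch, noting that the numeric mode IDs for 15 W and 25 W differ by board and JetPack release, so query first:

    # Show the current power mode
    sudo nvpmodel -q
    # Switch to another predefined mode by ID (a reboot may be prompted)
    sudo nvpmodel -m 0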
@teriannet7163 (28 days ago)
The best way to check speed with Ollama is the --verbose flag; it tells you the details.
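For example, a minimal run (the model tag here is just an illustration):

    # Prints total duration, load time, and eval rate (tokens/s) after each reply
    ollama run llama3.2 --verbose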
@yorkan213swd6 (a month ago)
8 GB is a joke. It would be a huge opportunity for RISC-V to build a device like this without the fear of cannibalizing other revenue. Memory slots for up to 64 GB of VRAM would be killer.
@JeremyMorgan (a month ago)
As an addendum, I am able to run qwen2 7B at a reasonable rate on this! Largest model yet. 13.50 tokens/second.
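If anyone wants to reproduce that, roughly (assuming Ollama's default quantization for this tag):

    # Pull and run the 7B Qwen2 model, printing timing stats
    ollama run qwen2:7b --verbose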
@pfeliciano5976 (5 days ago)
Would you recommend it for coding purposes, such as using it with VS Code and Qwen2.5-Coder 7B?
@JeremyMorgan (2 days ago)
@pfeliciano5976 Yeah, it would work for that. You can run VS Code remotely from it.
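For anyone curious, "remotely" here usually means something like VS Code's Remote-SSH extension pointed at the board; a sketch, with host and user as placeholders:

    # ~/.ssh/config entry that Remote-SSH can pick up
    Host jetson
        HostName 192.168.1.42
        User jeremy

From there, "Remote-SSH: Connect to Host..." in the command palette opens a session on the Jetson, and a coding-assistant extension can point at the local Ollama server (default port 11434).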