Great video. I liked the comparison with the 3060. 45 tokens for a low-cost lab setup is not bad.
@OminousIndustries · 9 hours ago
Thanks very much! Yes, the 3060 is still a kick-ass local AI option where VRAM is still king.
@Dave_S · a day ago
I really like this channel. Can see it becoming a very large channel.
@OminousIndustries · 9 hours ago
I really appreciate this!
@marcomerola4271 · a day ago
Community engagement: great!
@OminousIndustries · 9 hours ago
Hahah I appreciate it!
@nickthompson2052 · 14 hours ago
You should collab with Dave's Garage as he already has the OS running from the NVME and is the original developer of Windows Task Manager, so he knows a thing or two about Windows.
@OminousIndustries · 9 hours ago
That is awesome, though his channel is way bigger haha!
@setuverma2311 · a day ago
I want to add a network interface card along with the NVMe, a card with 4x 1 Gbps copper ports. Is it possible to add one to this board?
@OminousIndustries · 9 hours ago
I unfortunately can't answer this. The Nvidia developer forums may be a good place to ask, as a lot of those folks have more knowledge about the actual board itself.
@warr3ngon · a day ago
Nice video! I particularly appreciated the comparison with the 3060. Regarding that, was the 45 tokens/s result you showed from the dual 3060s, or just one? Also, do you already have a video explaining the differences between the various models, and what causes the performance difference between them?
@OminousIndustries · 9 hours ago
Thanks very much. The 3060 demo shown was using only one of the 3060 cards. As for the second question, which models are you referring to: different Nvidia GPUs or the different Jetson models?
@pmokeefe · 17 hours ago
Have you looked at any camera options?
@OminousIndustries · 9 hours ago
I have a cheap Raspberry Pi camera I am going to test it with, but aside from that I have not seriously looked into anything yet.
@JeanPierreLavoie · 20 hours ago
I think the real benefit or use of this device is, like you said, running local object detection systems that read RTSP camera feeds on your network, for instance. Can you do some videos on this?
@OminousIndustries · 9 hours ago
I very much agree. I am going to do a video related to this, perhaps comparing it to the Pi 5 with the AI Kit.
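For reference, the kind of RTSP pipeline being discussed often gates an expensive detector behind a cheap motion check. A minimal numpy-only sketch (the threshold is a made-up placeholder, and in practice frames would come from OpenCV's `cv2.VideoCapture` feeding a real detector such as a YOLO model):

```python
import numpy as np

def motion_score(prev_frame: np.ndarray, curr_frame: np.ndarray) -> float:
    """Fraction of pixels whose brightness changed noticeably between frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float((diff > 25).mean())

def should_run_detector(prev_frame, curr_frame, threshold=0.01) -> bool:
    # Only wake the (expensive) object detector when enough pixels changed.
    return motion_score(prev_frame, curr_frame) > threshold

# In a real pipeline the frames would come from the camera, e.g.:
#   cap = cv2.VideoCapture("rtsp://<camera-ip>/stream")  # placeholder URL
#   ok, frame = cap.read()
if __name__ == "__main__":
    still = np.zeros((480, 640), dtype=np.uint8)
    moved = still.copy()
    moved[100:200, 100:200] = 255  # a bright "object" appears
    print(should_run_detector(still, still))   # False: nothing changed
    print(should_run_detector(still, moved))   # True: a region changed
```

On an 8 GB device this kind of gating matters, since the detector competes with everything else for memory and compute.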
@nickthompson2052 · 14 hours ago
My Super is in the mail, but there are a couple things I am hoping to do with it. For one, I would like to open up remote access to friends and family. I would also like to use this remote access with something like Apollo AI on iOS for mobile. The other thing I wonder is if there is any possibility to have something like a remote cluster / p2p cluster of these. If I have 5-10 buddies with Orins, can we somehow remotely cluster these in a useful manner?
@OminousIndustries · 9 hours ago
That's an interesting scenario. While I haven't tried this myself, this may be of interest to you: github.com/exo-explore/exo I have personally used distributed inference with vLLM over my local network, but if the devices were not in the same location there would be a lot of networking configs to wrangle with. Here is the vLLM page as well: docs.vllm.ai/en/latest/serving/distributed_serving.html
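For context, vLLM's multi-node serving path runs on top of a Ray cluster. A rough sketch of the setup (the model name and `HEAD_IP` are placeholders, and whether this path works across Jetson devices specifically is not something the source confirms):

```shell
# On the head node: start a Ray cluster.
ray start --head --port=6379

# On each worker node: join the cluster (replace HEAD_IP with the head's address).
ray start --address=HEAD_IP:6379

# Then launch the OpenAI-compatible server, splitting the model across GPUs.
vllm serve meta-llama/Llama-3.1-8B-Instruct \
    --tensor-parallel-size 2
```

As the reply notes, once the nodes leave the same LAN, the Ray networking (open ports, reachable addresses) becomes the hard part.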
@TheArchitect101 · a day ago
I don't think you can install straight to NVMe. You need a machine running Ubuntu or similar with the Nvidia SDK Manager or something like that, then connect the Jetson via USB in recovery mode to flash it.
@OminousIndustries · 9 hours ago
Thanks for the info, that would make things a bit more frustrating if true haha
@cflhardcorekid · a day ago
Are you seeing faster responses with on-board AI? I love Grok, and I guess I would use Llama on-board?
@cflhardcorekid · a day ago
Never mind, I see the token speed part.
@OminousIndustries · 9 hours ago
Yeah, local is most likely to be slower, but it has benefits like data privacy, etc. I haven't played much with Grok myself, but I have heard it is good in terms of less censorship.
@johnnybehappy9663 · a day ago
Can it rewrite code, or "create" code for Python or similar?
@OminousIndustries · 9 hours ago
At a basic level, yes, though this depends on the model being run on it; for example, it would depend on Llama 3.1 8B's ability to generate lucid code.
@AspenVonFluffer · a day ago
I have 2 of these backordered at 2 different sellers. Anyone know when additional units will be sent out to sellers by Nvidia?
@OminousIndustries · a day ago
I wish I could answer this!
@nickthompson2052 · 14 hours ago
My backorder from SparkFun was shipped today, ordered 12/17
@AspenVonFluffer · 9 hours ago
@nickthompson2052 Thank you, Nick!
@FUTShorts-fs · 12 hours ago
Can unMineable detect the GPU to mine digital currency?
@OminousIndustries · 9 hours ago
I am not familiar with current crypto mining so I couldn't say.
@CDanieleLudus · a day ago
Theoretically, the Jetson Orin Nano Super, with 202 TFLOPs, is likely superior for training models compared to the RTX 3060 Ti with 16.3 TFLOPs, though I cannot currently confirm the speed difference through testing.
@OminousIndustries · 9 hours ago
I am not seeing the 202 number anywhere for this 8GB Orin Nano Super. These are the numbers Nvidia has posted for it: "67 TOPS (sparse), 33 TOPS (dense), 17 FP16 TFLOPs".
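As a quick sanity check on those quoted figures: on Ampere-class GPUs, 2:4 structured sparsity roughly doubles dense INT8 throughput, which is exactly the relationship between the two TOPS numbers (and the 202 figure doesn't fit any of them):

```python
# Nvidia's published numbers for the Orin Nano Super 8GB, as quoted in the reply.
sparse_tops = 67
dense_tops = 33
fp16_tflops = 17

# 2:4 structured sparsity ~doubles dense throughput.
ratio = sparse_tops / dense_tops
print(ratio)  # roughly 2.0
assert round(ratio) == 2
```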
@PankajDoharey · a day ago
Running Exo for distributed inference on it.
@OminousIndustries · 9 hours ago
You are running it with exo? If so that is awesome. I have only used vLLM for distributed inference and it was cool to be able to do that, I should try exo at some point.
@expensivetechnology9963 · a day ago
#OminousIndustries I'm hoping you'll purchase at least one more Jetson Orin Nano and try to create a cluster that allows you to pair them for a single workload.
@OminousIndustries · 9 hours ago
Perhaps when they are available again this would be an interesting experiment, though at the price of two it would likely just make more sense to get a 16GB 4060 Ti and a desktop build.
@Eng.Mohammad_Alotaiby · a day ago
Hi, can you make a tutorial on it with OpenCV?
@OminousIndustries · 9 hours ago
I am going to make a video on some tasks like this with the raspberry pi AI kit as well and will try to include a small tutorial!
@OneWithIPDAlgo · 20 hours ago
Yes, it's possible to boot off NVMe. You'll need to have an SD card installed, boot into recovery mode (jumper pins 9/10), and connect over USB to a separate physical computer running Ubuntu and the Nvidia SDK Manager. Using a virtual machine will not work. Much better performance.
@OminousIndustries · 9 hours ago
Thanks for this information. It sounds like more work to get set up, but as you say, better performance, so worth it!
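Besides the SDK Manager GUI route described above, Nvidia's L4T BSP also ships a command-line flashing script. A very rough sketch of flashing to NVMe from the unpacked BSP on the x86 Ubuntu host (the exact script arguments and config filenames vary by JetPack release, so treat this as an outline and check the Quick Start guide for your release, not a copy-paste recipe):

```shell
# Jetson in recovery mode, connected over USB to an x86 Ubuntu host.
# From the top of the unpacked L4T BSP directory:
sudo ./tools/kernel_flash/l4t_initrd_flash.sh \
    --external-device nvme0n1p1 \
    -c tools/kernel_flash/flash_l4t_external.xml \
    --showlogs \
    jetson-orin-nano-devkit external
```

Either way, the flash has to be driven from a physical Ubuntu host; as the comment notes, a VM's USB passthrough tends to break the recovery-mode re-enumeration.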
@yorkan213swd6 · 19 hours ago
Thanks for the video, but this device is a waste of time: only 8 GB of VRAM.
@MukulTripathi · 11 hours ago
Not really. I'm building a personal agent that needs high-quality voice input and output using Whisper large-v3 turbo and opendaivoice + a tiny LLM. For edge computing I need approximately 6 GB of VRAM for the quantized models I use. This is perfect for my use case!
@OminousIndustries · 9 hours ago
Yes, for edge use cases it is definitely a good device.
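The ~6 GB figure above is easy to sanity-check with back-of-the-envelope math: a model's weight footprint is roughly parameter count times bytes per parameter. A small sketch (the model sizes below are illustrative placeholders, not measurements of the commenter's actual stack):

```python
def weight_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB (ignores KV cache and activations)."""
    return params_billion * bits_per_param / 8  # 1B params at 8 bits ~= 1 GB

# Illustrative edge-agent budget: an ~0.8B-param speech model in FP16
# plus a ~3B-param LLM quantized to 4 bits.
speech = weight_gb(0.8, 16)   # ~1.6 GB
llm = weight_gb(3.0, 4)       # ~1.5 GB
print(speech + llm)           # ~3.1 GB of weights, leaving headroom within 8 GB
```

KV cache, activations, and the OS eat into the remainder, which is why a quantized stack that fits in ~6 GB is a comfortable match for an 8 GB board.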
@aminemooh7192 · a day ago
I can't find any unit lol
@OminousIndustries · 9 hours ago
It seems they are quite popular haha, hopefully soon!