Answering Your Questions On The NVIDIA Jetson Orin Nano Super

  3,927 views

Ominous Industries

1 day ago

47 Comments
@Giannis_D. 1 day ago
Great video. I liked the comparison with the 3060. 45 tokens per second for a low-cost lab setup is not bad.
@OminousIndustries 9 hours ago
Thanks very much! Yes, the 3060 is still a kick-ass local AI option where VRAM is still king.
@Dave_S 1 day ago
I really like this channel. Can see it becoming a very large channel.
@OminousIndustries 9 hours ago
I really appreciate this!
@marcomerola4271 1 day ago
Community engagement: great!
@OminousIndustries 9 hours ago
Hahah I appreciate it!
@nickthompson2052 14 hours ago
You should collab with Dave's Garage, as he already has the OS running from the NVMe and is the original developer of Windows Task Manager, so he knows a thing or two about Windows.
@OminousIndustries 9 hours ago
That is awesome, though his channel is way bigger haha!
@setuverma2311 1 day ago
I want to add a network interface card alongside the NVMe, one with 4x 1 Gbps copper ports. Is it possible to add it to this board?
@OminousIndustries 9 hours ago
I unfortunately can't answer this; the NVIDIA developer forums may be a good place to ask, as a lot of those folks have more knowledge about the actual board itself.
@warr3ngon 1 day ago
Nice video! I particularly appreciated the comparison with the 3060. Regarding that, was the 45-token result with the dual 3060s or just one? Also, do you already have a video explaining the differences between the various models, and what causes the performance difference between them?
@OminousIndustries 9 hours ago
Thanks very much. The 3060 demo shown was only using one of the 3060 cards. As for the second question, which models are you referring to: different NVIDIA GPUs or the different Jetson models?
@pmokeefe 17 hours ago
Have you looked at any camera options?
@OminousIndustries 9 hours ago
I have a cheap RPi camera I am going to test it with, but aside from that I have not seriously looked into anything yet.
@JeanPierreLavoie 20 hours ago
I think the real benefit of this device is, like you said, running local object detection systems that read RTSP camera feeds on your network, for instance. Can you do some videos on this?
@OminousIndustries 9 hours ago
I very much agree. I am going to do a video related to this, perhaps comparing it to the Pi 5 with the AI Kit.
@nickthompson2052 14 hours ago
My Super is in the mail, but there are a couple of things I am hoping to do with it. For one, I would like to open up remote access to friends and family. I would also like to use this remote access with something like Apollo AI on iOS for mobile. The other thing I wonder is whether there is any possibility of a remote / p2p cluster of these. If I have 5-10 buddies with Orins, can we somehow remotely cluster them in a useful manner?
@OminousIndustries 9 hours ago
That's an interesting scenario. While I haven't tried this myself, this may be of interest to you: github.com/exo-explore/exo. I have personally used distributed inference with vLLM over my local network, but if the devices were not in the same location there would be a lot of networking configs to wrangle with. Here is the vLLM page as well: docs.vllm.ai/en/latest/serving/distributed_serving.html
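For anyone curious what the vLLM multi-node setup looks like in practice, here is a rough sketch of the Ray-based workflow from the vLLM distributed serving docs; the IP address and model name are placeholders, and the exact commands depend on your vLLM/Ray versions:

```shell
# On the head node: start a Ray cluster (IP/port are placeholders).
ray start --head --port=6379

# On each additional node: join the cluster over the local network.
ray start --address=192.168.1.10:6379

# Back on the head node: serve one model split across the cluster.
# --tensor-parallel-size should equal the total number of GPUs.
vllm serve meta-llama/Llama-3.1-8B-Instruct --tensor-parallel-size 2
```

As the reply notes, this assumes the nodes can reach each other directly, which is easy on a LAN but takes real networking work (VPN/tunnels, open ports) across different locations.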
@TheArchitect101 1 day ago
I don't think you can install straight to NVMe; you need a machine running Ubuntu or similar with the NVIDIA SDK Manager, then connect the Jetson via USB in recovery mode to flash it.
@OminousIndustries 9 hours ago
Thanks for the info, that would make things a bit more frustrating if true haha
@cflhardcorekid 1 day ago
Are you seeing faster responses with on-board AI? I love Grok, and I guess I would use Llama as the on-board model?
@cflhardcorekid 1 day ago
Nvm, I see the token speed part
@OminousIndustries 9 hours ago
Yeah, local is most likely to be slower, but it has benefits like data privacy etc. I haven't played much with Grok myself, but I have heard it is good in terms of less censorship.
@johnnybehappy9663 1 day ago
Can it rewrite code, or "create" code for Python or similar?
@OminousIndustries 9 hours ago
At a basic level, yes, though this is dependent on the model being run on it; for example, it would depend on Llama 3.1 8B's ability to generate lucid code.
@AspenVonFluffer 1 day ago
I have 2 of these backordered at 2 different sellers. Anyone know when additional units will be sent out to sellers by NVIDIA?
@OminousIndustries 1 day ago
I wish I could answer this!
@nickthompson2052 14 hours ago
My backorder from SparkFun was shipped today; ordered 12/17
@AspenVonFluffer 9 hours ago
@@nickthompson2052 Thank you Nick!
@FUTShorts-fs 12 hours ago
Can Unminable detect the GPU to mine digital currency?
@OminousIndustries 9 hours ago
I am not familiar with current crypto mining so I couldn't say.
@CDanieleLudus 1 day ago
Theoretically, the Jetson Orin Nano Super, with 202 TFLOPs, is likely superior for training models compared to the RTX 3060 Ti with 16.3 TFLOPs, though I cannot currently confirm the speed difference through testing.
@OminousIndustries 9 hours ago
I am not seeing the 202 number anywhere for this 8GB Orin Nano Super; these are the numbers NVIDIA has posted for it: "67 TOPS (Sparse), 33 TOPS (Dense), 17 FP16 TFLOPs"
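One reason spec-sheet comparisons go wrong is that sparse TOPS, dense TOPS, and FP16 TFLOPs are different units. Taking the quoted figures at face value, a quick sanity check of the ratios (the 2x relationships below are the usual pattern for NVIDIA's 2:4 structured sparsity and INT8-vs-FP16 throughput, not a measurement):

```python
# Spec-sheet figures quoted above for the Orin Nano Super (8 GB).
sparse_int8_tops = 67   # with 2:4 structured sparsity
dense_int8_tops = 33    # without sparsity
fp16_tflops = 17        # dense FP16

# 2:4 structured sparsity skips half the multiply-accumulates,
# so the sparse figure is roughly double the dense one.
sparsity_speedup = sparse_int8_tops / dense_int8_tops
print(f"sparse/dense ratio: {sparsity_speedup:.2f}")  # ~2.03

# INT8 throughput is likewise about double FP16 on the same tensor cores.
int8_vs_fp16 = dense_int8_tops / fp16_tflops
print(f"INT8/FP16 ratio: {int8_vs_fp16:.2f}")  # ~1.94
```

So the headline 67 number is the most optimistic unit; compared against a GPU's dense FP32 TFLOPs it overstates the gap considerably.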
@PankajDoharey 1 day ago
Running Exo for distributed inference on it.
@OminousIndustries 9 hours ago
You are running it with exo? If so, that is awesome. I have only used vLLM for distributed inference, and it was cool to be able to do that. I should try exo at some point.
@expensivetechnology9963 1 day ago
#OminousIndustries I'm hoping you'll purchase at least one more Jetson Orin Nano and try to create a cluster that allows you to pair them for a single workload.
@OminousIndustries 9 hours ago
Perhaps when they are available again this would be an interesting experiment, though at the price of two it would likely just make more sense to get a 16GB 4060 Ti and a desktop build.
@Eng.Mohammad_Alotaiby 1 day ago
Hi, can you make a tutorial on it with OpenCV?
@OminousIndustries 9 hours ago
I am going to make a video on some tasks like this with the Raspberry Pi AI Kit as well and will try to include a small tutorial!
@OneWithIPDAlgo 20 hours ago
Yes, it's possible to boot off NVMe. You'll need an SD card installed, boot into recovery mode (jumper pins 9/10), and connect via USB to a separate physical computer running Ubuntu and the NVIDIA SDK Manager. Using a virtual machine will not work. Much better performance.
@OminousIndustries 9 hours ago
Thanks for this information. Sounds like more work to get set up, but as you say, better perf, so worth it!
@yorkan213swd6 19 hours ago
Thanks for the video, but this device is a waste of time: only 8 GB of VRAM.
@MukulTripathi 11 hours ago
Not really. I'm building a personal agent that needs high-quality voice input and output using Whisper large-v3 turbo and opendaivoice + a tiny LLM. For edge computing I need approximately 6GB of VRAM for the quantized models I use. This is perfect for my use case!
@OminousIndustries 9 hours ago
Yes, for edge use cases it is definitely a good device
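As a rough illustration of why 8 GB can be enough for quantized edge workloads like the one above, here is a back-of-envelope VRAM estimate; the 1.2x overhead factor for activations/KV cache is an assumption for the sketch, not a measured number:

```python
def model_vram_gb(params_billions: float, bits_per_weight: float,
                  overhead: float = 1.2) -> float:
    """Rough VRAM needed to hold a model's weights, with a fudge
    factor for activations/KV cache (the 1.2 default is an assumption)."""
    bytes_for_weights = params_billions * 1e9 * bits_per_weight / 8
    return bytes_for_weights * overhead / 1e9

# An 8B-parameter model at 4-bit quantization vs. full FP16.
print(f"8B @ 4-bit: {model_vram_gb(8, 4):.1f} GB")   # ~4.8 GB -> fits in 8 GB
print(f"8B @ FP16:  {model_vram_gb(8, 16):.1f} GB")  # ~19.2 GB -> does not fit
```

This is why the quantized ~6 GB setup described above lands comfortably inside the Orin Nano Super's 8 GB, while unquantized models of the same size do not.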
@aminemooh7192 1 day ago
I can't find any unit lol
@OminousIndustries 9 hours ago
It seems they are quite popular haha, hopefully soon!