Jetson AI Labs - Generative AI on the Edge

5,054 views

JetsonHacks

1 day ago

NVIDIA recently launched the Jetson Generative AI Lab, a gathering place for tutorials and resources for running generative AI applications on the edge.
If you have a Jetson Xavier or Jetson Orin, this is a great place to explore the development environments and capabilities of Generative AI on the Jetson.
In the video, we use an NVIDIA Jetson AGX Orin 64 GB Developer Kit:
amzn.to/45VEaTp
The tutorial in the video covers LLaVA, a multimodal model which combines vision transformers and LLMs.
NVIDIA Jetson Generative AI Lab: www.jetson-ai-...
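To give a feel for what the LLaVA tutorial enables, here is a minimal sketch of asking a locally served multimodal model about an image from Python. It assumes you already have one of the Jetson AI Lab containers running and that it exposes an OpenAI-compatible chat endpoint on localhost:8000; the URL, port, and model name are illustrative placeholders, not values taken from the tutorial.

```python
# Minimal sketch: query a locally served multimodal (LLaVA-style) model on the Jetson.
# Assumes a container is already running and exposes an OpenAI-compatible
# /v1/chat/completions endpoint on localhost:8000. The URL, port, and model
# name are illustrative placeholders, not values from the tutorial.
import base64
import requests

def ask_about_image(image_path: str, question: str) -> str:
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    payload = {
        "model": "llava",  # placeholder model name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    }
    resp = requests.post("http://localhost:8000/v1/chat/completions",
                         json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_about_image("fridge.jpg", "What could I make for lunch with this?"))
```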
As an Amazon Associate I earn from qualifying purchases.
Visit the JetsonHacks storefront on Amazon: www.amazon.com...
Visit the website at jetsonhacks.com
Sign up for the newsletter! newsletter.jet...
Github accounts: github.com/jet...
github.com/jet...
Twitter: @jetsonhacks
Some of these links here are affiliate links. As an Amazon Associate I earn from qualifying purchases at no extra cost to you.

Comments: 25
@bertbrecht7540 11 months ago
It's the early days of this technology. It will become part of our daily lives (self-driving cars, assistants for the blind, weed-pulling machines, garbage sorters, etc., times a million). Thanks for presenting this. I will be putting my Jetson to work.
@JetsonHacks 11 months ago
You're welcome. I like your analysis! Even though it's early on, there are glimmers of what could be. Thanks for watching!
@VicVegaTW 1 month ago
I'm watching all your videos. Is there a Pi equivalent of your channel?
@JetsonHacks 1 month ago
You're very brave to sit through these videos. I don't know how many are strong enough not to end up twisted afterwards. There are a lot of good RPi channels; it really depends on what subject you're trying to learn about. There are many more channels devoted to the RPi, so a lot more subjects are covered. @JeffGeerling has a very good, popular channel. @Dronebotworkshop is great if you're more interested in hardware projects, many of which incorporate the RPi. @paulmcwhorter has some great tutorials on the RPi if you like sit-down walkthroughs. Good luck finding what you're looking for!
@robrever 9 months ago
This might be a stupid question, but is it performing these analyses with or without internet access?
@JetsonHacks 9 months ago
Once you download the docker images and models, everything runs on the Jetson with no further need of an Internet connection. Thanks for watching!
@robrever 9 months ago
@JetsonHacks Great, thanks a lot! I have some big plans in the next year utilizing the Jetson for computer vision/image recognition. I look forward to your videos, as this is all very new to me. Thank you for doing what you do.
@JetsonHacks 9 months ago
@robrever You are welcome. Hopefully you can share some of your work.
@suryanshu3724 11 months ago
Will this work on the 4 GB Jetson Nano, or only the Orin line?
@JetsonHacks 11 months ago
The generative AI is centered on the Jetson Orin and Xavier machines. The memory requirements of these models rule out using smaller machines. Thanks for watching!
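For a rough sense of why memory is the limiting factor, here is a back-of-the-envelope estimate of a model's weight footprint (parameters times bytes per parameter, ignoring activations, the KV cache, and runtime overhead); it already puts 7B-class models out of reach of a 4 GB Nano.

```python
# Back-of-the-envelope LLM weight footprint: parameters * bytes-per-parameter.
# Ignores activations, KV cache, and runtime overhead, so real usage is higher.
def weights_gb(params_billions: float, bits_per_param: int) -> float:
    return params_billions * 1e9 * (bits_per_param / 8) / 1e9

for params in (7, 13):
    for bits in (16, 8, 4):
        print(f"{params}B model @ {bits:>2}-bit ~ {weights_gb(params, bits):5.1f} GB")

# 7B model @ 16-bit ~ 14.0 GB -> far beyond a 4 GB Nano
# 7B model @  4-bit ~  3.5 GB -> weights barely fit, leaving no headroom for
#                                the OS, CUDA runtime, and activations
```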
@leibaleibovich5806 11 months ago
Greetings, Jim! I would love to hear your opinion on the following question: often, when I read or watch videos about hands-on deep learning (i.e. neural networks, NNs), people discuss the hardware requirements, because most deep learning is done on GPUs. Someone told me that one has to have a graphics card with a minimum of 16 GB of memory, and high-end graphics cards are expensive! Recently I was browsing through the tech specs of different Jetson Orin products. For example, the reComputer J4012 (based on the Jetson Orin NX 16GB) is capable of up to 100 TOPS. If memory serves, the Jetson Orin Dev Kit is capable of up to 40 TOPS. My question is: how does one compare graphics cards vs. Jetson Orin in terms of performance for deep learning? In the case of Orin, you get a mini-PC, a complete system. On the other hand, you need a decent rig to fit it with a capable graphics card. What are the trade-offs? I have not found much info on this subject, so your take will be much appreciated.
@JetsonHacks 11 months ago
Big question! Here's the way I usually look at it. If you compare the NVIDIA RTX 30 series (i.e. 3050, 3090, or anything in between), they use the same GPU architecture (Ampere) as the Orins. The RTX 40 series (e.g. 4090) is a generation newer. Depending on which graphics card you pick, they use 350 watts or more; the Jetsons run on roughly a tenth of that. The number of GPU cores on the graphics cards runs from roughly 3,000 to over 10,000. On the Jetson side, the AGX Orin has 2,048, and the Orin NX has half that.

The memory in the graphics card is GDDR6, which is much faster than the Orin's LPDDR5. Remember, the LP stands for Low Power, meaning the memory is tuned for power management rather than performance, while the G in GDDR6 means Graphics, which optimizes for speed. The memory bus is also wider on the graphics card. The Orin NX has 16 GB of memory, but it is unified, meaning the CPU and the GPU share it. The GPU clocks on the graphics card are faster as well. As you note, there's the uncomfortable bit about everything fitting into graphics memory and swapping things in and out. There are various strategies around this, but it depends on the machine learning models you are running.

What that tells you is that it's an apples vs. oranges comparison. One would expect the graphics cards to be *much* faster than the mobile processors, but the tradeoff is that they use more than 10x the power. If low power consumption isn't your main goal (as it is in embedded systems like the Jetson), then there's no reason for that constraint. And vice versa: if you're looking at low power, it doesn't make sense to run the big iron. I'm sure there are people who know way more than me about this and can address it more eloquently. Thanks for watching!
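To make the apples-vs-oranges point concrete, here is a small script that compares rough core counts and power draws. The numbers are approximate, publicly quoted figures, and the cores-per-watt metric ignores clock speeds and memory bandwidth, so treat the output as illustrative only.

```python
# Illustrative comparison of discrete GPUs vs Jetson modules using rough,
# publicly quoted figures. Approximate; ignores clocks and memory bandwidth.
devices = {
    "RTX 3090 (Ampere)":    {"cuda_cores": 10496, "watts": 350},
    "RTX 3050 (Ampere)":    {"cuda_cores": 2560,  "watts": 130},
    "Jetson AGX Orin 64GB": {"cuda_cores": 2048,  "watts": 60},   # max power mode
    "Jetson Orin NX 16GB":  {"cuda_cores": 1024,  "watts": 25},
}

for name, d in devices.items():
    print(f"{name:22s} {d['cuda_cores']:>6} cores  {d['watts']:>4} W  "
          f"{d['cuda_cores'] / d['watts']:6.1f} cores/W")
```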
@leibaleibovich5806 11 months ago
@JetsonHacks Thank you very much for this comprehensive answer! I appreciate the time and effort you put into it, Jim! That's really valuable information for someone not so tech savvy, like myself! Thank you, Jim!
@JetsonHacks 11 months ago
@leibaleibovich5806 You are welcome.
11 months ago
@JetsonHacks Apples are for apple pie and oranges are for orange juice. Application-specific devices ;-)
@JetsonHacks 11 months ago
@ This is a much more elegant way of saying the same thing. Thanks for watching!
@allvisualmedia7575 11 months ago
Can you please make a video on Llama 2, MemGPT, and AutoGen running on this machine?
@JetsonHacks 11 months ago
Thank you for the suggestions. Here's a video on running Llama 2: kzbin.info/www/bejne/fp2rZIShiJZ5a7ssi=149llDVY3LH_6Sl0&t=436
The other two are still a little bit too young to be able to work with. Is there something in particular you want to know?
@ArtificialDNA 11 months ago
Hi, big fan of your channel. I know this is not directly related, but somewhat... When I try to flash the AGX to NVMe, it only uses 32 GB of the disk, and many people have this problem without finding a solution. I tried your hack of flashing to eMMC and moving to NVMe, and other methods, but they all seem to get stuck at 32 GB, which is nothing when trying to do this generative AI work because it keeps running out of space.
@JetsonHacks 11 months ago
Thank you for the kind words. Unfortunately I don't have a good answer for you on this. There apparently is an issue in the flashing scripts that confuses the partitions on the NVMe drive. People have workarounds, but I can't say they're easy to implement. A workable solution might be to flash to eMMC and set everything to store to the SSD. Here are some instructions from the Jetson AI Lab: www.jetson-ai-lab.com/tips_ssd-docker.html Thanks for watching!
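For reference, the linked tip boils down to pointing Docker's storage at the SSD via the data-root key in /etc/docker/daemon.json. Here is a minimal sketch of that edit; the /mnt/nvme mount point is an assumption, so substitute wherever your SSD is actually mounted, and restart Docker afterwards.

```python
# Minimal sketch: point Docker's image/container storage at an SSD by setting
# "data-root" in /etc/docker/daemon.json. Assumes the NVMe drive is already
# mounted at /mnt/nvme (adjust to your mount point) and that this runs with
# root privileges; restart the docker service afterwards.
import json
import pathlib

DAEMON_JSON = pathlib.Path("/etc/docker/daemon.json")
SSD_DOCKER_ROOT = "/mnt/nvme/docker"   # assumed mount point

config = json.loads(DAEMON_JSON.read_text()) if DAEMON_JSON.exists() else {}
config["data-root"] = SSD_DOCKER_ROOT
DAEMON_JSON.write_text(json.dumps(config, indent=4) + "\n")
print("Updated", DAEMON_JSON, "- restart Docker with: sudo systemctl restart docker")
```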
@ArtificialDNA 11 months ago
@JetsonHacks Thank you very much, I guess I will try this workaround. Yes, before the latest release it worked fine.
@JetsonHacks 11 months ago
@ArtificialDNA Another workaround which will get you to where you want to be is to downgrade JetPack to 5.1.1 and flash the AGX. Once the Jetson boots and you set it up, install JetPack. Then do sudo apt update && sudo apt upgrade, which will update everything on the system. You should end up in the same place that a correct 5.1.2 install would have put you.
@ArtificialDNA 11 months ago
@JetsonHacks Yes, it works by installing 5.1.1; however, to upgrade to 5.1.2 I needed to change the sources list from R35.3 to R35.4. Thanks a lot for the help.
@dggcreations 11 months ago
So where is this even remotely useful?
@JetsonHacks 11 months ago
Unfortunately you have to use future goggles to predict where and how this technology will be applied. The demo shown here is one of many on the website, and it's a little esoteric; it requires some imagination on how you might apply it. "Look in the refrigerator and tell me what I could fix for lunch" may not be useful to you. However, people may find it useful for assistance when analyzing images of natural phenomena, analyzing medical images, or asking for advice in an industrial setting.

You're probably familiar with a web UI to an LLM chat, such as GPT-4. The Jetson AI website has an open source version (LLaMA) which runs everything locally, and there are advantages to being able to run these types of models on a local device. There's also the low-hanging fruit of automatic speech recognition and text to speech (ASR and TTS); combining those with a GPT provides conversational AI in a more natural way than a web browser. One application which already has a lot of traction is object segmentation, which applies to many areas such as robotics and security. There are several tutorials on the Jetson AI website about how to leverage these features. It seems increasingly clear that much of computer vision processing will be done with machine learning, as shown on the website. Thanks for watching!
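As a sketch of what combining ASR and TTS with a locally running LLM might look like, here is a minimal voice round trip in Python. It assumes the openai-whisper and pyttsx3 packages are installed and that a local LLM server exposes an OpenAI-compatible endpoint on localhost:8000; the endpoint and model name are placeholders, and the Jetson AI Lab tutorials use their own containers, which may differ.

```python
# Minimal voice round-trip sketch: speech -> text -> local LLM -> speech.
# Assumes the openai-whisper and pyttsx3 packages are installed, and that a
# local LLM server exposes an OpenAI-compatible endpoint on localhost:8000.
# Endpoint, port, and model name are placeholders.
import requests
import whisper     # ASR: openai-whisper
import pyttsx3     # TTS: offline text-to-speech

def transcribe(audio_path: str) -> str:
    model = whisper.load_model("base")          # small model to fit in memory
    return model.transcribe(audio_path)["text"]

def ask_local_llm(prompt: str) -> str:
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",
        json={"model": "llama-2-7b-chat",       # placeholder model name
              "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def speak(text: str) -> None:
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    question = transcribe("question.wav")
    speak(ask_local_llm(question))
```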