Yes, please! Surely it can be run as a plug-in. I've got lots of free bays now that I'm all SSD.
@jw4659 (a month ago)
2nd that - a tutorial would be excellent!
@smurththepocket2839 (a month ago)
Yep, we do! Also, what about the NVIDIA Orin AGX?
@carvierdotdev (a month ago)
If it's possible, run image and video generation models. Thanks in advance.
@brikka (a month ago)
You don't even need to ask, he will do it anyway. That's how he makes a living.
@GregoryWilnau (a month ago)
Would LOVE to see a video of you setting it up!
@desmond-hawkins (a month ago)
"This tiny supercomputer can run *275 trillion* operations per second (TOPS)" → This is incorrect: the 275 TOPS number is for the Jetson AGX Orin, which costs *$2,000* and certainly not $250. The $250 model is the Jetson Orin Nano, which reaches *67 TOPS.* As the NVIDIA page shows, there are 7 Jetson Orin modules in total, spanning a whole range of performance. For comparison, an RTX 4090 has 16,384 CUDA cores while this $250 board has 1,024. It also has "only" 8GB of VRAM, which is an important limiting factor when it comes to LLMs. Llama 3.2 should work on it, but maybe not much more. For most people who already have a PC, a gaming GPU makes way more sense. This specialized hardware is mostly interesting for robotics and edge compute, not much else.
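A quick back-of-the-envelope check of the 8GB point above: an LLM's weight footprint is roughly parameter count times bytes per weight, and quantization shrinks it proportionally. The sketch below uses illustrative model sizes and ignores KV-cache and runtime overhead, which add a gigabyte or more on top.

```python
# Rough LLM weight-memory estimate vs. the Jetson's 8 GB.
# Illustrative only: real runtimes also need KV-cache and framework overhead.

def weight_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for name, params in [("3B model", 3.0), ("8B model", 8.0), ("14B model", 14.0)]:
    print(f"{name}: fp16 ~{weight_gb(params, 16):.1f} GB, "
          f"4-bit ~{weight_gb(params, 4):.1f} GB")
```

At 4-bit quantization an 8B model needs only ~4 GB for weights, which is why 8B-and-under models are the practical ceiling people cite for this board, while fp16 or larger models blow past 8 GB.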
@shirowolff9147 (a month ago)
What do you mean gaming? This is literally made for AI and robotics, what are you on about?
@generalawareness101 (a month ago)
@shirowolff9147 I dunno, his own farts. This is the first stepping stone of what is to come. With this freedom, they could make one the size of my PC and video cards could go back to being for gaming. I am not excited about this release, I am excited about what will now come, and what this means in the grand scheme of it all.
@DennisForbes (a month ago)
@shirowolff9147 - You seem to be angrily misreading their comment. They are pointing out that a lot of the "wow, now I can run a quantized, terrible LLM!" comments are probably better served by a GPU. You know, a "gaming" GPU, though they clearly aren't talking about gaming. Even pretty mid to low GPUs offer higher TOPS and TFLOPS than this, much higher memory bandwidth, etc. The Jetsons are neat *embedded* devices. They're relatively low-power, lower-cost devices that you can use for automation, robotics, etc. Maybe you want to hook up your security cameras to your garage opener, using some license-plate-reading code to auto-open, etc. NVIDIA's purpose for the Jetson line is mostly industrial automation, not for enthusiasts to run Llama. Yet strangely, so many of the comments are talking about running LLMs or the like, which simply would be the wrong use for this. "AI" is a lot, lot more than LLMs, yet that seems to be all that many can think about.
@HKue74 (a month ago)
@shirowolff9147 He is just saying that even older mid-range gaming GPUs like an RTX 3060 12GB have a much bigger and faster Ampere GPU with more and faster VRAM. So just for experimenting with small LLMs or stuff like image recognition, you don't need this Jetson Orin Nano devkit. It only makes sense if you are building something like a robot (small, low power consumption, GPIOs, ...).
@isedwhatwhat (a month ago)
Spot on. And it isn't a supercomputer, and NVIDIA isn't calling it that. Supercomputers are big and power-hungry. Embedded AI computers are tiny and efficient so they can be embedded in robots. A PC is somewhere in the middle, but more performant than this dev kit. That being said, this is super useful and handy.
@kevinschmidt2210 (a month ago)
I think you need to do an entire training course for this supercomputer.
@haroldasraz (a month ago)
This tiny computer is fascinating. Please make a video on running 2 or more of NVIDIA's Jetson Orins together as one supercomputer.
@dustinhightower7110 (a month ago)
Still couldn't even run an 8B model.
@hrmd3537 (a month ago)
@dustinhightower7110 What about 5 of these?
@lllongreen (a month ago)
There is NOTHING "supercomputer" about these devices at all! Stop all this deceptive overhyping!
@AntonBrazhnyk (a month ago)
Just went to the NVIDIA specifications and, well... the Nano with 8GB, which is probably what they mean when talking about $249, is just 67 TOPS. 275 TOPS is the model with 64GB and twice as many cores (of both types): a $2,398 devkit on Amazon. The usual deceptive advertisement.
@fx-studio (a month ago)
And you expected more from Ngreedia??
@testales (a month ago)
Thanks, I already expected this to be fake since he didn't mention how much RAM or even VRAM it has. A price tag of about $2,400, on the other hand, gets you instantly into the territory of high-end gaming GPUs. That toy he showed is as much a tiny "super computer" as the Rabbit r1.
@UnchartedDiscoveries (a month ago)
Really? So misleading.
@BeriahsHTxRealty (a month ago)
@testales It is; the r1 was a flop. NVIDIA is first in class.
@FlyingPhilUK (a month ago)
Don't see what you're upset about - yesterday, the 8GB Orin Nano was $599 and did 40 TOPS; today, it's $249 and does 67 TOPS. And, yes, that's the Jetson Orin Nano Super...
@robs0070 (a month ago)
Yes, sir. Would love to see a tutorial on how to set this computer up. Looking forward to it.
@psypher01 (a month ago)
Yes, make a tutorial to set it up, and elaborate on the expansive spectrum of use cases to build upon this. Thanks very much!
@donniejohn76 (a month ago)
Excellent and descriptive video. A video with instructions on how to set it up and use this would be AWESOME!!
@jksoftware1 (a month ago)
The amount of RAM is pitiful. 8GB is NOT ENOUGH; it should be at least 16GB. I would not even mind if it cost a bit more if we got 16GB of RAM.
@spleck615 (a month ago)
Would love it to be 16GB, but 8GB is enough to run most 8B-and-below models. Still extremely useful for its use case.
@Nworthholf (a month ago)
It can run Llama 8B, and it's a very capable model. But yeah, 12-16 would have been so much better.
@michaelwoodby5261 (a month ago)
Wonder if you can just network 4.
@punk3900 (a month ago)
You want double the RAM and to pay... a bit... more? How generous!
@blisphul8084 (a month ago)
Can't call it a mini supercomputer if it can't even match a basic graphics card. I think the new Intel cards make a lot more sense for AI than this thing.
@ThaiNeuralNerd (a month ago)
Yes, definitely make a video on setup!
@frankjohannessen6383 (a month ago)
These are Ampere GPUs... 30-series. And it uses regular LPDDR memory @ 100GB/s. Just get a 3060 instead. It will have 50% more memory and be 260% faster.
@UTJK. (a month ago)
At what cost and at what power consumption?
@cryora (a month ago)
But a 3060 is not a single-board computer.
@TomislawDalic (a month ago)
We need a tutorial on how to set it up. I just placed my order.
@RichardHarlos (a month ago)
Dave's Garage went into a bit more detail on the setup. The video title is "NVIDIA's $249 Secret Weapon for Edge AI - Jetson Orin Nano Super: Driveway Monitor".
@BeriahsHTxRealty (a month ago)
@RichardHarlos I watched this yesterday; so many footnotes.
@kals1284 (a month ago)
Where did you order? I need one to test LLM training, which is expensive in AWS.
@Sven_Dongle (a month ago)
Sure, just download the 32 gigs of CUDA crap and watch the install errors.
@LunarEchox (a month ago)
If you have to ask, it's not for you.
@LucaMurgia-j7b (4 days ago)
I think it's important to stick to stocks that are immune to economic policies. AI stocks that have the potential to power and transform future technologies. It seems AI is the trajectory most companies are taking, including even established FAANG companies. Maybe there are other recommendations?
@JoeWilmoth-k2w (4 days ago)
I bought into NVIDIA around September last year because my financial advisor recommended it to me. She said the company is selling shovels in a gold rush. It accounted for almost 80% of my market return this year.
@TerrencesSheldons (4 days ago)
@JoeWilmoth-k2w That's a great analogy and I love the insight. Professionals can make a really big difference in investing, and I think everyone should have one. There are aspects of market trends that are difficult for untrained eyes to see.
@LucaMurgia-j7b (4 days ago)
@TerrencesSheldons That's a great tip. I'm setting aside 50k to invest in the market this year. Any particularly useful tips you could offer me?
@TerrencesSheldons (4 days ago)
@LucaMurgia-j7b There are many independent advisors to choose from. But I work with MARGARET MOLLI ALVEY; we've been working together for almost four years and she's fantastic. You could pursue her if she meets your requirements. I agree with her.
@LucaMurgia-j7b (4 days ago)
@TerrencesSheldons Thank you for this pointer. It was easy to find your handler; she seems very proficient and flexible.
@philreese (a month ago)
Yes, please do a full rundown of this seemingly amazing product! I'm thinking home agentic configuration possibilities.
@Luiblonc (11 days ago)
I already have their earlier LLM board in my vehicle running an LLM. But I will definitely pick one of these boards up. Thanks for the video.
@TheeBadTake (a month ago)
I mean, we need to know how to chain them together. A tutorial on that would be cool. And we need to know how many we can chain together. Can we do it up to 80GB of VRAM? What is doable and what isn't. The limitations.
@paulmichaelfreedman8334 (a month ago)
Yes, plus we need to know if they can be chained together. You forgot that.
@BinaryFrameProductions (a month ago)
Yes! A tutorial would be cool!
@TheStickyBusiness (a month ago)
1:52 I remember the time I had something that size in my pocket; it was called a WALKMAN! The world has changed just a little bit LOL
@xenuburger7924 (a month ago)
Your Walkman didn't dissipate 25W.
@ShieldsWebDesign (a month ago)
A tutorial video will most definitely be something I would love to see from you. Thank you for always being on top of this!
@spleck615 (a month ago)
Some confusion here... the tiny computer is the Jetson Nano. The super Jetson Nano is capable of 67 TOPS… the larger, more expensive Jetson hardware (not the Nano) can get up to 275 TOPS. The tiny computer can NOT get 275 TOPS.
@MemeSandwiches (a month ago)
Jetson Orin Nano 8GB Module - AI perf: 67 TOPS - £289.80
Jetson AGX Orin 64GB Module - AI perf: 275 TOPS - £1,413.44
@noproofforjesus (a month ago)
I'm getting one.
@hqcart1 (a month ago)
It's not $249; everywhere I checked it's 3-4x the advertised price. Let me know if there is an authentic agent out there.
@revealing1372 (a month ago)
Definitely would like to see a setup video.
@Alex-pm8wr (a month ago)
Wow, this is the most informative video about the Jetson thus far. Thank you.
@rpetrilli (a month ago)
Thanks for your fantastic job here on the channel; I vote for the tutorial. I would also suggest an additional updated tutorial on fine-tuning cutting-edge open-source models.
@gerardiai (a month ago)
Thank you. Yes, please grab one and show us. In fact, grab a couple and show us how to link them up too. Appreciate all you do every day 💯
@mendthedivide (a month ago)
Super interested to see you set this thing up!!
@nassirama2009 (a month ago)
Great video, Matthew. Please make a video testing the NVIDIA Jetson Orin ASAP; I am so excited about it.
@avioz8901 (a month ago)
Great video. I would like to see a comprehensive video tutorial on how to set this up, and several use cases.
@vinxmod793 (a month ago)
YES Matthew, a Jetson Orin tutorial would be wonderful. Great video. Well done!
@quincy1048 (a month ago)
Dave's Garage did an install and walkthrough video of the device... it ships with an SD card preloaded... but he didn't see that in the bottom of the box... so he downloaded an image and went through a bunch of steps, only to later fish the box out of the trash... it runs Ubuntu... I just wish it had a bit more power... 8 gigs of RAM would never ingest the PDFs I need... and 1,024 CUDA cores is about 3x off the mark... and with 8 gigs I'm not sure how much GPU memory there is; I could not find a spec for that. But at this price and power point I think it gives Raspberry Pi concern... as it is more capable for AI than their hardware.
@rickhunt3183 (19 days ago)
If you can't find specs on a product, it's because they don't want you to know the specs and walk away thinking you just took it in the ass. With the availability of cheap RAM, 8 gigs is laughable. It should be 32GB at that price and come with a 500GB M.2 drive. It's clearly for industrial automation and not computationally intensive applications. At 99 dollars I'd buy it, but at 249 I'm going to pass and wait for something else.
@leonwinkel6084 (a month ago)
When setting it up, I would be really interested in seeing how, from a picture input for example, the LLM goes from analysing the picture to, say, moving an arm. Or just sending a current to a port of the microcontroller, to keep it simpler.
@alrasch4829 (a month ago)
A tutorial on how to set it up would be very helpful. Merry Xmas 🎉
@ngana8755 (a month ago)
Can you explain what this means for those of us who are laypersons without degrees in computer science? Does it mean we can plug this processor into our current laptop and our computing speed will be increased to warp speed?
@Sven_Dongle (a month ago)
Lol, nope.
@tomaszx3867 (a month ago)
No, I think an RTX 3050/3060 is much faster. This device is a standalone low-power device.
@RobotechII (a month ago)
You're not plugging this into a car to have it drive autonomously; you're vastly underestimating the hardware requirements. Also, with just 8GB of RAM this is just a toy.
@clarencejones4717 (a month ago)
@RobotechII I'm sure you could probably do that exactly once.
@TheAlastairBrown (a month ago)
Imagine unleashing a team of autonomous humanoid robots running on a 2B multimodal vision model. Human: "I order you to STOP". Robot: "Queen Victoria was born in Australia in 1358". 🤣
@youngop (a month ago)
Ah, NVIDIA dumping millions of dollars into just a toy, says the random commenter on YouTube. Perhaps you should be the CEO of NVIDIA.
@apache937 (a month ago)
@youngop The Tesla HW4 has 16 GB of LPDDR4 RAM. NOT 8!
@youngop (a month ago)
What does that have to do with what I'm talking about? Are you saying that makes this a toy?
@tigs9573 (a month ago)
I would love to see a tutorial and you testing it.
@e11e7en (a month ago)
Definitely interested in seeing it get set up.
@LehtusBphree2flyFPV (a month ago)
Definitely make a video showing all the potential things this can do and what applications you can put it to. Teach us how to program this thing, for the newbies.
@chrisliddiard725 (a month ago)
This would be great for real-time 2D-to-3D video conversion, and I don't mean using the Pulfrich effect, but using AI to learn depth using actual 3D movies as source data.
@davieslacker (a month ago)
Excited to see you try one... maybe go a step further and get 2 or 3 and cluster them using EXO, because this feels like the most cost-effective performant option at the moment, and running larger models on a cluster of these seems particularly exciting.
@BAD_CONSUMER (a month ago)
LLMs are usually limited by memory, so the RAM is really all that matters. 8GB is pretty low, and I would not say the models that can fit in that constraint are "very capable" except for some narrow use cases. We're going to see a lot of LPDDR5X and LPDDR6 devices that can do better. So the real news here is the price.
@nathanbanks2354 (a month ago)
Yeah... hopefully their 64GB Jetson comes down in price.
@apache937 (a month ago)
8GB should be enough for Flux? I wonder how fast it would be at that.
@dukefleed9525 (a month ago)
Yes please, a setup tutorial would be great!
@AutomateTopicalAuthority (a month ago)
Yes, please. I'm interested, but would like to see your setup video before I pull the trigger. Also, its capabilities and use cases. Thanks in advance!
@davocc2405 (a month ago)
This is one of the pillars of the next-gen home environment I've been looking for: an edge-computing solution to enable AI functionality without exposing the network to the outside or the cloud. I can see voice being captured and sent to this thing for interpretation, potentially (or something like it). As they're CUDA cores, could this actually be tasked with things like video recompression (e.g. conversion of video sources from MPEG-2 to HEVC/H.265)? That would be a potentially huge use case too.
@philq01 (a month ago)
Looking forward to setting it up.
@newchannel-gl4ez (a month ago)
Yes, you should get a few, stack them, and see how they do together. And then see if they can use different agents on each one to do things together.
@RetiredEE (a month ago)
I will wear mine like Twiki, around my neck. BEDEBEDEBEDE pretty cool Buck, BEDEBEDEBEDE
@cmelgarejo (a month ago)
wut
@Justin_Arut (a month ago)
Theo was a boss. But tbh, a human wearing one like that would look too much like Flavor Flav.
@chriswatts3697 (a month ago)
Great device. Agents local at home, multimedia, control of the smart home. Think about old people having these at home, asking them questions to organize stuff around the house; everyone can afford it. Local AI is the future. I am now running the Phi-4 14B model in my local Unity game, and the quality of this LLM is amazing on a medium gaming PC.
@mariuscg (a month ago)
The first 20 seconds of the video convinced me to get one 😁 Anyone else feel the same? 🤔
@gonreebgonreeb (a month ago)
It's been for sale on Amazon for months.
@voncolborn9437 (a month ago)
@gonreebgonreeb The Super version at $249 has not been.
@apache937 (a month ago)
Yes, but then I saw 8GB RAM.
@Saad14dboyz (a month ago)
If you get the Mac mini base model with a student discount, it will be 2x the price but you get 2x the memory and 3-4x the inference speed. So I do not see the point.
@paulwolff2121 (a month ago)
Embedded.
@ChigosGames (a month ago)
15 watts.
@G4GUO (a month ago)
I am using the predecessor of the Orin series, the Jetson Xavier 16 GB model, sitting on my network running Ollama + Open WebUI in a Docker container provided by Dusty at NVIDIA. It works well running Mistral, and it is nice to have something that runs 24/7 that doesn't gobble up all my electricity. What Jensen seems to have announced is simply an overclocked version of something he launched a year ago, at a more reasonable price. I did try to run EXO on my Jetson devices but ran into issues with the Clang compiler and CUDA toolkit. No doubt someone smarter than me will figure out how to network Jetsons.
@ricowallaby (a month ago)
Yes Matthew, buy one and show us what it can do, can't wait. Cheers.
@chrismachabee3128 (a month ago)
Just wow, dude! Total mind blown. One minute LLMs, next minute space travel, hands-free, or should I say cloud-free. Matt, I like to follow you because I can understand what you're saying. Matt, this Jetson Orin is all-encompassing, and as usual we are at stage 1; the future is unbelievable. Matt, we have to make choices; as individuals we aren't going to be able to do everything Orin can do, so how can a person choose? I have been procrastinating and meandering with my LLM course; this is the signal to cut the BS. The salaries are going to be something, because not everyone is going to be able to develop in this space, and imagine the companies that won't have the personnel to build on this platform. I feel lucky to be a subscriber, Matt. Rock on, bro, a new world's a-coming!
@j0shj0shj0sh (a month ago)
Could you connect it to a desktop or laptop as an external NPU? Similar in concept to an external GPU?
@Groeeta8879 (a month ago)
Yes, a tutorial on how to set it up, showing the differences you mentioned. Example: connect it to an Arduino car robot and show the differences before and after. Please and thank you.
@HaraldEngels (a month ago)
We need a full course with this AI mini PC!
@RogueExplorer75 (a month ago)
So cool. Please do a tutorial on this. Maybe we could build the next generation of Lego Mindstorms robots 🙂
@Stephane-w8l (a month ago)
Very powerful, but what about VRAM?
@MickeyValentineLondon (a month ago)
I imagine that this is just the motherboard and core components; you could add VRAM, I would imagine. Still, with that amount of trillions of operations per second... is it possible it won't need much, or hardly any?
@MickeyValentineLondon (a month ago)
Here, found the specs. Jetson Nano system specs and software. Key features of the Jetson Nano include:
GPU: 128-core NVIDIA Maxwell™ architecture-based GPU
CPU: Quad-core ARM® A57
Video: 4K @ 30 fps (H.264/H.265) / 4K @ 60 fps (H.264/H.265) encode and decode
Camera: MIPI CSI-2 DPHY lanes, 12x (module) and 1x (developer kit)
Memory: 4 GB 64-bit LPDDR4; 25.6 GB/s
Connectivity: Gigabit Ethernet
OS support: Linux for Tegra®
Module size: 70mm x 45mm
Developer kit size: 100mm x 80mm
@HarryHeck2020 (a month ago)
I think it shares the 8GB between the ARM processor and the GPU. So, 8GB.
@blisphul8084 (a month ago)
It's basically a trash graphics card with a cheap ARM core attached to it.
@blisphul8084 (a month ago)
Okay, actually, now that I think of it, maybe it could be used as a way to get a cheap gaming PC. While 8GB is outdated, $250 for the whole system could make it a decent gaming box running a lightweight Linux OS.
@MakilHeru (a month ago)
Yes! Please do a video. I would love to see what you can do with one.
@garyb6577 (a month ago)
Yes, you should 100% do a tutorial / a cool project with this.
@kevinl20082008 (a month ago)
Yes, PLEASE make a tutorial. To be honest I have no idea what I am gonna do, but I am so excited 😅
@sumitkumar-iq5sj (15 days ago)
Could we play games on this?
@meanwhiles432 (a month ago)
Would it be possible to use a model that just runs off the drive and not in RAM? Or would that obliterate the inference speed?
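For intuition on the question above: token generation for an LLM is largely memory-bandwidth-bound, since roughly the whole model is read once per token, so streaming weights from storage instead of RAM caps speed at the drive's bandwidth. A rough sketch with assumed, illustrative bandwidth figures (not benchmarks):

```python
# Rough decode-speed ceiling for a memory-bound LLM:
# tokens/sec ≈ effective bandwidth / bytes read per token (~model size).
# Bandwidth figures are assumptions for illustration, not measurements.

def tokens_per_sec_ceiling(model_gb: float, bandwidth_gb_s: float) -> float:
    return bandwidth_gb_s / model_gb

model_gb = 4.0  # roughly an 8B model at 4-bit quantization
for medium, bw in [("LPDDR5 RAM, ~100 GB/s", 100.0),
                   ("NVMe SSD, ~3 GB/s (assumed)", 3.0)]:
    print(f"{medium}: ceiling ~{tokens_per_sec_ceiling(model_gb, bw):.1f} tok/s")
```

Under those assumptions the same model drops from a ~25 tok/s ceiling in RAM to under 1 tok/s from disk, so it would run, but inference speed would indeed be obliterated.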
@CircusOfFive (a month ago)
Yes please - I deffo want a tutorial, Matt!!
@karankatke (a month ago)
Yes, need a setup video.
@vidfan1967 (a month ago)
@Matt, I understand your enthusiasm. I will get mine tomorrow. BUT: I think you got a little carried away this time! It is not just compute power that you need to drop into a car to make it self-drive! Sensors, training data, AI models, software and control motors are required as well, plus compliance of some kind.
@Djwhynotlove (a month ago)
Not just an unboxing and setup, but list out the things you just said and try to tackle a project with it. Your first project, with a montage of errors, haha. That would be fun.
@rogerbruce2896 (a month ago)
Yes, please do a build that I can run a script on. Would love to figure out how to load a workable local AI onto it.
@ericvanartsdalen161 (a month ago)
One of the problems with running models on a local or edge device is that GPUs and SBCs have a limited amount of memory (VRAM) to work with. What would be great is if NVIDIA could develop a way for a GPU or this device to allow memory, like SODIMM cards, to be upgraded, increasing the ability to load larger models into GPU-utilized memory. In my opinion, this would be the single most useful revolution for GPUs used for AI. I think for a majority of hobbyists and students, it's incredibly hard to afford compute that requires cards costing $1000s, which might still not allow the loading of some of the cutting-edge open-source models.
@patrickheneghan1228 (a month ago)
This is exactly how Terminator starts. Except this time it's the AI in your coffee maker trying to kill you.
@djmango2124 (a month ago)
Yes, please make a tutorial on how to set it up. Thank you.
@5StepTraining (a month ago)
Yes, a tutorial would be great, thank you! This is really interesting. It's definitely moving things in the right direction. The human neocortex doesn't require a nuclear reactor for power.
@phillydogsarmagameplay (a month ago)
Yes... please, a tutorial! Thank you, Matthew! 😁
@jackwilson1818 (a month ago)
That kitchen promo is very "Miles Dyson" (Terminator 2).
@kbqvist (a month ago)
Looking forward to seeing what one can realistically do with this 😊
@luizcamillo9933 (a month ago)
Would love to see a tutorial and examples of real-life use cases.
@AINEET (a month ago)
My Rabbit r1 just shrieked.
@spleck615 (a month ago)
@AINEET Mine just jumped with joy, as it will have a new play friend soon when I implement teach-mode skills that call web services running on my Jetson Nano Super dev kit to access some custom local agents ♥️
@mroberts7519 (a month ago)
@spleck615 Can you expound a bit? Sounds interesting.
@kaseycarpenter73 (a month ago)
Mine is still in post-purchase hibernation, lol.
@stal1963 (a month ago)
A tutorial would be fine. Thanks for the awesome video. I have been working on edge AI running on embedded devices for many years now. SBCs like the Raspberry Pi are not capable of processing AI workloads, though you could integrate a Coral TPU. The NVIDIA board provides what I have been missing. By the way, the 8 GB version, which I'll buy, is more expensive.
@ryandmaal (a month ago)
Please set it up with a Mac mini M4 as the host computer for development and the Nano for the specialized LLM tasks.
@patrickwhite9902 (a month ago)
Seems these Jetson kits have been around for like 10 years; who knew! I'd dig a cluster tutorial - that would be neat. But maybe wait for a 32 or 64GB variant, or use the older Jetson AGX Orin 64GB from 2022.
@GiorgioTomasi (a month ago)
Great work, thank you. Yes, please show how to set up a local machine to run local agents and how to plug in expansions. Great videos, thanks.
@erniea5843 (a month ago)
I find it hilarious how much press this new Jetson upgrade is getting. We've had different flavors of Nanos for some time. Also, this "super" upgrade can be applied to Orins you already own. But hey, I'm a fan of these boards.
@exumatronstudios (a month ago)
Happy for this. Will be ordering one.
@jackstrawful (a month ago)
I've been thinking about getting an external GPU for my laptop so I could run local image generation - could I plug this in instead? And take advantage of my laptop's 32GB of RAM?
@smith6058 (a month ago)
I would definitely love to hear more about this subject. How would I connect it to an ordinary, everyday Dell desktop... and allow it to log in?
@themagnusgoeshisownway2393 (a month ago)
Definitely make a tutorial, Matthew. Thank you.
@machsolid6402 (a month ago)
Yes, please do the setup video, thank you.
@aanthanyj (a month ago)
Definitely a video on setup and use cases, please, please.
@LuizPeixoto (a month ago)
Do the tutorial. Buy it, please. Last week we were discussing using an edge device, but... a lot of Pis... flavors. Seems this one is a killer!
@jpoole4931 (a month ago)
The $249 price is for the Jetson Orin Nano Developer Kit, which includes more than just the Orin Nano module itself. Specifically, the developer kit typically includes:
Jetson Orin Nano module: the core processing unit, containing the NVIDIA GPU and other essential components.
Carrier board: provides the necessary interfaces and connectors for the Orin Nano module, including ports for power, display, USB, networking, and expansion.
Thermal solution: often a basic heatsink is included to help with thermal management.
Documentation and resources: you'll get access to documentation, software, and other resources to help you get started.
So you're getting a complete development platform for $249, not just the bare module. This makes it much easier to start developing and prototyping your projects. You don't have to worry about sourcing a compatible carrier board or other essential components separately.
@Doppelrocking (a month ago)
Of course, dude. Get a handful of these and smart up a bunch of regular stuff. Heck yeah, please make a fun vid on this; it's exciting / accessible.
@patrickmchargue7122 (a month ago)
A tutorial would be nice. The memory is too small for me and the LLMs I run (70B Q6). I would like to see how this integrates with LM Studio, if possible.
@simeonnnnn (a month ago)
Hey Matthew, question here. Could this NVIDIA tiny PC run a 14B-parameter or a 70B-parameter model with a sensible speed of inference?
@ronbridegroom8428 (a month ago)
Yes, please make a tutorial video about how to set this up. Thanks in advance.
@Unineil (a month ago)
I'll be testing one very soon next to a cluster of Raspberry Pi 5s and clustered Beelinks. I was just playing with the concept of indoor drones.
@BestCodes_Official (a month ago)
8 GB RAM? It's recommended to have 8 GB just to run Windows 11; any large AI model beyond like 20B params takes more RAM than that...
@ChigosGames (a month ago)
Why run Windows 😅
@BestCodes_Official (a month ago)
@ChigosGames Yeah, to be fair, running Win 11 is probably more intensive than running an AI model lol
@TundaMAsega (a month ago)
YES YES YES! Do a setup vid! 😆
@mei-hsienhsu6899 (a month ago)
Yes, please make a tutorial teaching how to set it up. Thanks.
@sahilx4954 (a month ago)
Yes, we want that tutorial. Demonstrate with a robot if possible. 😬 🙏