Hot mug of cold coffee is ready to watch the magic show Paul, great going 👍
@hann96302 жыл бұрын
At the end of this lesson, I decided to create a separate model and training folder to test with your single board computer data, and while I get no errors running it all, I can't detect any of the boards, even when pointing my camera at your video stills. It's detecting other random items from the pretrained model though, so something works. Until now I have successfully gone through all your lessons in this series on JetPack 4.5 with minor tweaks here and there. It's a testament to how good you are, as I am not a developer or computer scientist whatsoever.
@somebody90334 жыл бұрын
For me, this is the lesson I have been most looking forward to as this is where I can harness the power of AI and use it to detect more obscure objects like playing cards or musical instruments! Also, really want to start learning Numpy and Pytorch whenever those series of lessons come out!
@thomascoyle37154 жыл бұрын
Alex, I believe that the jetson-inference is using PyTorch for the training.
@somebody90334 жыл бұрын
@@thomascoyle3715 Thomas, Paul announced that he is thinking of doing a video series on Numpy which would flow into PyTorch
@thomascoyle37154 жыл бұрын
@@somebody9033 Paul is having trouble just keeping up with the Xavier NX tutorials and isn't doing any Xavier NX Premieres so I wonder if he isn't overloading himself.
@somebody90334 жыл бұрын
@@thomascoyle3715 I think the Numpy/Pytorch series is only going to happen after the Jetson Nano series ends (and probably the Elegoo smart car will end too as there isn't much to do with it).
@opalprestonshirley17004 жыл бұрын
Tedious but exciting. Now we can go and create the images we want and really go to town. Thanks, Paul have a good weekend.
@patis.IA-AI Жыл бұрын
Deep respect, thanks Mister McWhorter
@eranfeit3 жыл бұрын
Hi, Important remark for the labels.txt file: double-check that you did not add any extra line after the last item. The Python code will return an error if the count of the items is not equal to the number of classes in the model.
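(A quick way to check this; a minimal sketch, where labels.txt stands for whatever labels file your own training run produced:)

labels_path = "labels.txt"   # hypothetical path; point this at the labels.txt next to your model
with open(labels_path) as f:
    labels = f.read().splitlines()
print(len(labels), "labels:", labels)
# a stray blank line after the last item shows up here as an empty '' entry,
# and the count will no longer match the number of classes the model was trained with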
@ismaelachekar39412 жыл бұрын
Your teaching skills are amazing! Thank you for all your efforts so far
@jaredthomas29573 жыл бұрын
Paul, you seem like a fun guy to talk to based off your cryptic desktop folders. Thanks for the great tutorial, almost done with the jetson series!
@ricardobjorkeheim7753 жыл бұрын
Great lesson Paul, thank you. I trained on different Swedish birds (I have wood models). I used the Swedish names, and the names used the Swedish letters ÖÄÅ. The program does not recognize those letters, so the names were difficult to read. I used my webcam with autofocus and some pictures were blurred, so the recognition did not work so well, but now I know exactly how to make it work! Thanks a lot Paul :)
@David-yg7hx4 жыл бұрын
It would be interesting to learn how to do object detection in the same way. E.g. you could have every board on the table and then label every part with a rectangle and a label. Thanks for those great videos!
@thomascoyle37154 жыл бұрын
Nice work Paul. Very impressive as to the classification accuracy after the training.
@r1rmndz5073 жыл бұрын
you have given me serious skills and I want to learn more. Thanks Paul!
@paulmcwhorter3 жыл бұрын
Glad to hear it!
@marksholcomb3 жыл бұрын
Thanks Paul! I have used your teaching as a launching pad for deeper understanding of 'deep learning'. The Arduino and Jetson classes are a game changer for those of us who can learn but don't know where to start. I'm really looking forward to the Python classes, now that I have a hint of what it can do.
@paulmcwhorter3 жыл бұрын
Fantastic!
@Mircea0073 жыл бұрын
Hi Paul, Did not get the pictures taken but watched the lesson to the end to see your results. I don't own that many single board computers, but I have a shelf with tools that I can use. I will post my results tomorrow. This is really exciting, looking forward to tomorrow.
@paulmcwhorter3 жыл бұрын
Excellent
@Mircea0073 жыл бұрын
@@paulmcwhorter Hi Paul, it did not work so well for me. I took some really bad photos on a white background and now it thinks almost everything it sees is a digital multimeter. I will try again with photos taken with my phone camera.
@mikethompson51193 жыл бұрын
The Code-OSS menu item View > Toggle Word Wrap (Alt+Z) is your friend. Both for Paul, so we can still see long lines as he switches his attention or scrolls, and for ourselves, so we can see the surrounding code and manage long lines...
@paulmeistrell17263 жыл бұрын
An excellent lesson Paul. The little 2GB is showing its lack of memory now. No way to put in OpenCV... interesting results. I did get it to work by staying with jetson.utils. My camera images are not as sharp as I would like them to be. Thanks
@pralaymajumdar78224 жыл бұрын
What knowledge you have... incredible...
@OZtwo3 жыл бұрын
Can't wait to watch this video. This is what I'm going to be looking into more. I want to build a robot which will be able to explore the house and detect objects, and for any object it may not know, have it automatically learn the new object. This could even be a very cool new line of videos you can do.
@geeksatlarge3 жыл бұрын
For those of us who have made it to this point, where in the NX series would you suggest we jump in at, to minimize the overlap?
@rantheone97894 жыл бұрын
Hey, is it possible to adapt this training into net = jetson.inference.detectNet('ssd-mobilenet-v2', ['--model=/.....'])?
@javiervargas26513 жыл бұрын
Dear Paul, you have been my best teacher. I have seen all your videos about the jetson nano and I learned a lot. I would like you to help me create a chicken detection network. Please can you return to this series of videos. Greetings from Bolivia
@insidiousmaximus3 жыл бұрын
This trained on a ResNet model; how do we select a different model to use? So if I wanted to use the jetson-inference trainer but with MobileNet, where would I specify that? I am used to building my own CNNs, but I have not been successful in finding out how to convert a TF 1.15 h5 file to a pure TRT file for jetson-inference, so I am doing these tutorials to use the Jetson utilities instead. I have a model working on a Jetson but it's nowhere near fast enough because I am running inference using a TensorFlow h5. Thanks. Also... 35 epochs is nowhere near enough for deployable models. How do we change that, and what about learning rate and patience? Restore best weights? This does not seem very thorough... am I missing something?
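(For the architecture and epoch count: a sketch only, assuming the jetson-inference train.py keeps the standard PyTorch ImageNet-example arguments; run python3 train.py --help to confirm the exact flag names and supported models. Note the lesson's ONNX export step assumes resnet18, so other architectures may need extra handling.)

python3 train.py --model-dir=myModel --arch=resnet34 --epochs=100 --lr=0.01 ~/Downloads/jetson-inference/myTrain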
@xacompany4 жыл бұрын
Can’t wait!
@CodingScientist4 жыл бұрын
Hey Paul, which ELP camera did you mention? Could you share a link?
@codecage93334 жыл бұрын
Only watched today! Running a little behind on keeping up with the lessons. And as I'm typing this I see two 'thumbs down,' so per usual there are folks that just drop by to say they don't like the video. More than likely they have no clue as to what is being taught or what are the uses of this instruction!
@paulmcwhorter4 жыл бұрын
More accurately, they specifically do not like me, independent of the topic or content of the video.
@tanjiro32854 жыл бұрын
@@paulmcwhorter you are awesome
@codecage93334 жыл бұрын
@@paulmcwhorter For the life of me I can not see why anyone could dislike you, at least from only getting to know you from YouTube videos! Or maybe it is the ones that had their messages deleted for repeating themselves or being disrespectful.
@markpritchett35434 жыл бұрын
Another great video, thank you, it worked nicely for me with pi camera, logitech webcam and an HP webcam. I'm unclear about what is happening when selecting alexnet, or another net. I'll need to do a bit of research.
@salvadorcobos40194 жыл бұрын
Hello Paul, I did this example but I got this error:
import cv2
ImportError: /usr/local/lib/libopencv_cudaarithm.so.4.1: undefined symbol: _ZN2cv4cuda14StreamAccessor9getStreamERKNS0_6StreamE
Do you know how I can fix this error? I also did example 53 and got the same error. Thank you
@bontecristian39182 жыл бұрын
Is it good to train sign language gestures by using this approach?
@thomascoyle37154 жыл бұрын
Paul, when you referred to your ELP USB Camera, is it the 5-50 mm Fl one on Amazon Prime? That is the one that I have presently because the Logitech 920C hasn't been available for a reasonable price for a very long time.
@petersobotta36014 жыл бұрын
Was really exciting to do this true AI for my own custom objects, thanks Paul. Would be great if we could locate the objects.. then we could think about using this for tracking etc. Is it possible to do transfer learning to train for custom object detection? Would be so useful. Thanks
@sansarlkhagva11414 жыл бұрын
Hello. Can you use an alternative path to jetson-inference/build/aarch64/bin/camera-capture?
@maxlof44473 жыл бұрын
First question: Can I have other stuff in the background in the test and validation images? Or will that only confuse it? Second question: If the images I want to train on can't have blank backgrounds, can I use something to pinpoint a square in the image to tell it what I want to focus on?
@ismaelachekar39412 жыл бұрын
After successfully capturing the training images I want to make myModel. But at this point I get a syntax error: future feature annotations is not defined. I'm using Python 3.6.9. Do you know what the problem might be?
@ismaelachekar39412 жыл бұрын
~/Downloads/jetson-inference/python/training/classification$ python3 train.py --model-dir=myModel ~/Downloads/jetson-inference/myTrain
Traceback (most recent call last):
  File "train.py", line 24, in <module>
    import torchvision.transforms as transforms
  File "/usr/local/lib/python3.6/dist-packages/torchvision-0.7.0a0-py3.6-linux-aarch64.egg/torchvision/__init__.py", line 6, in <module>
    from torchvision import datasets
  File "/usr/local/lib/python3.6/dist-packages/torchvision-0.7.0a0-py3.6-linux-aarch64.egg/torchvision/datasets/__init__.py", line 1, in <module>
    from .lsun import LSUN, LSUNClass
  File "/usr/local/lib/python3.6/dist-packages/torchvision-0.7.0a0-py3.6-linux-aarch64.egg/torchvision/datasets/lsun.py", line 2, in <module>
    from PIL import Image
  File "", line 971, in _find_and_load
  File "", line 955, in _find_and_load_unlocked
  File "", line 656, in _load_unlocked
  File "", line 626, in _load_backward_compatible
  File "/usr/local/lib/python3.6/dist-packages/Pillow-9.2.0-py3.6-linux-aarch64.egg/PIL/Image.py", line 52, in <module>
  File "", line 971, in _find_and_load
  File "", line 951, in _find_and_load_unlocked
  File "", line 894, in _find_spec
  File "", line 1157, in find_spec
  File "", line 1131, in _get_spec
  File "", line 1112, in _legacy_get_spec
  File "", line 441, in spec_from_loader
  File "", line 544, in spec_from_file_location
  File "/usr/local/lib/python3.6/dist-packages/Pillow-9.2.0-py3.6-linux-aarch64.egg/PIL/_deprecate.py", line 1
SyntaxError: future feature annotations is not defined
@ismaelachekar39412 жыл бұрын
It's the same error after running import torchvision in python3.
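(One thing worth checking, based on the traceback above: Pillow 9.x dropped Python 3.6 support, and the SyntaxError is coming from a Pillow 9.2.0 file, so pinning an older Pillow release, e.g. pip3 install "pillow<9", may get past it. This is an assumption from the traceback, not something verified on this exact setup.)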
@victorezekiel53742 жыл бұрын
How can we download the jetson-inference folder?
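(The repo in question is linked later in this thread: github.com/dusty-nv/jetson-inference. A minimal sketch of getting it, from memory of that repo's README; check the repo itself for the current build steps:)

git clone --recursive https://github.com/dusty-nv/jetson-inference.git
cd jetson-inference && mkdir build && cd build
cmake ../ && make && sudo make install && sudo ldconfig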
@sharifdynamics46844 жыл бұрын
Hi, I have an error: "assertion failed: tensor.count(input_name)" "failed to parse ONNX model"
@marommeir54994 жыл бұрын
I have this problem too. I think it's related to your JetPack 4.3.
@OZtwo3 жыл бұрын
Man I love your videos and would love to support you, BUT all my money went to all the very cool items you suggested I buy for your videos! (Yes, most likely in the end I will support you -- if not in the Autodesk videos) :)
@codecage93334 жыл бұрын
Which ELP camera is it that you have? The one in a housing, with a focus adjustment and the threads to mount to a tripod? What Sony IMX sensor does it use?
@paulmcwhorter4 жыл бұрын
It is an older camera . . . Elp 2 Megapixel HD digital. The model was MF40. Not sure if this model is still offered.
@codecage93334 жыл бұрын
@@paulmcwhorter Maybe this one is pretty close. www.amazon.com/gp/product/B07226JNFX It does some higher resolution to boot.
@asdfds67524 жыл бұрын
Hi Paul, do you plan to run transfer learning on object detection as well? That would be super cool! And BTW, how different is it? I understand you will need to provide bounding-box ground truth, but more or less the complexity of the procedure should be similar, right?
@jrickyramos2 жыл бұрын
Yes, a video on transfer learning for object detection would be super useful. NVIDIA's object detection demo has many restrictions.
@thomascoyle37154 жыл бұрын
Paul, there is an error in the program at line 36: IS: cam.releast() S/B cam1.release()?
@melvinsajith44483 жыл бұрын
I already have 7.9 GB of swap; do I want to add 4 GB more?
@majidalahmadi21333 жыл бұрын
Please make lessons in the field of ROS robotics 🤖
@camel15023 жыл бұрын
Could anyone tell me why my result turns into "Segmentation fault (core dumped)", please?
@keithemerson93494 жыл бұрын
Hi, Despite having changed the path for the model to /home/username .... I'm still getting the "imageNet failed to load network" error. I've checked the path three times and can't find an error. When I search your website for Jetson Nano lesson 55 I can't get to lesson 55 to copy the path??
@paulmcwhorter4 жыл бұрын
Also make sure your images are exactly where the path says they are. The path and the images have to match each other. If your path command is exactly like mine, but your pictures are not exactly in the same place, you will have problems.
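(Along those lines, a minimal sketch for checking that the files the long imageNet() command points at actually exist; the paths below are hypothetical, so substitute whatever you actually passed:)

import os

model_path = "/home/yourname/Downloads/jetson-inference/python/training/classification/myModel/resnet18.onnx"   # hypothetical
labels_path = "/home/yourname/Downloads/jetson-inference/myTrain/labels.txt"                                    # hypothetical
for p in (model_path, labels_path):
    print(p, "->", "found" if os.path.exists(p) else "MISSING")
# if either prints MISSING, imageNet() will fail to load the network no matter how carefully the long command is typed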
@keithemerson93494 жыл бұрын
@@paulmcwhorter Paths are correct as per your tutorial. Lesson 52 also ran fine using googlenet. I removed the myModel directory and re-trained on my images, rechecked the long command but still get an error ? I am running on Jetpack 4.3
@jevylux4 жыл бұрын
Hi community, does anyone else experience a 'Segmentation Fault', probably due to a lack of available memory, when running the training script (at the end of the last epoch calculations) as well as when running the conversion tool? The files are created, however, but I'm not sure if they are correct; at least the program does recognise the boards. In order to advance quickly, I used Paul's captures, but I am in the process of creating my own.
@somebody90334 жыл бұрын
Hello. I had the same problem. I didn't use a swap file though as I was using a 32GB SD card and didn't have enough space for one. 64GB SD is coming so when I get that I will try again. EDIT: I have got the card and have run the testing and now it is perfect.
@IMSezer4 жыл бұрын
Can we find the location of an object from this code?
@CodingScientist4 жыл бұрын
Hey Paul, how do we put a bounding box on a recognized object in this lesson?
@paulmcwhorter4 жыл бұрын
Can't really do that here . . . this is image recognition, not object detection.
@CodingScientist4 жыл бұрын
Paul McWhorter got it thanks Paul
@CodingScientist4 жыл бұрын
Paul McWhorter hello, I just found out that by using the YOLO object detection model we can put a bounding box on the detected object; need to figure out a way to install it on the Nano or NX.
@sttagelaboneh86403 жыл бұрын
I can not find your repository
@quintonditmore1639 Жыл бұрын
ERROR: ModelImporter.cpp:296 In function importModel:
[5] Assertion failed: tensors.count(input_name)
Does anybody know how to solve this error? I followed the steps exactly throughout the lesson.
@keithemerson93494 жыл бұрын
Hi Paul, I've quadruple-checked my code, paths and locations but still get an 'imageNet failed to load network' error? (I've updated the paths to include /home/myusername.) I've tried searching toptechboy.com for lesson 55 so that I can copy your code and see if that runs, but the search only finds up to lesson 53? I'm currently reloading the NVIDIA models but I'm not confident that that is where the problem lies. Really frustrating as I want to move on with training my own models. Hope you can help
@paulmcwhorter4 жыл бұрын
Keith, the challenge is that there are many parts to this, and if you are not on the same software versions as me, there can be problems. First, I am on JetPack 4.3, and don't do the 'system upgrades' offered by Ubuntu on the Jetson Nano. There is also the possibility that NVIDIA updated the inference libraries you downloaded and they are different than what I used when I made the video. If you watched my video, then you can see the process of how to make it work. Since that is not working for you, you might go to Dusty's page, move to the latest versions of all the software, and go through the steps he outlines. Here is the page. github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-transfer-learning.md All of that info is simple to follow, and then you will want to go to the link on that page with 'collecting your own classification datasets'. The challenge I have with these videos is I have hundreds of videos on YouTube, and when something changes, I am not able to remake every video. I have a similar challenge with my Fusion 360 videos. It was a really good set of training, and then Autodesk changed all types of things in the interface, and now the videos do not work perfectly for people. Not sure what the solution is, given there is only one of me, and things changing quickly in the high-tech world.
@keithemerson93494 жыл бұрын
Hi Paul, thank you for your reply. I fully appreciate the number of variables that could affect the process and I'll keep plugging away to try and find a solution. I also appreciate the amount of effort you put in to your wide range of tutorials. I recommend them to anyone who will listen. I have learnt a tremendous amount and pass this knowledge on whenever I have the opportunity. thanks again, Keith
@daviddoidge12524 жыл бұрын
@@keithemerson9349 Hi, I have the same issue; did you find a solution? Looking at Paul's terminal in the video, it shows Producer name: Pytorch and Producer version: 1.1..... Mine is on version 1.3 in the terminal window when running the program?
@daviddoidge12524 жыл бұрын
@@paulmcwhorter Uninstalled 1.3 and installed 1.1 with no luck....... but uninstalled torchvision and installed dusty-nv's forked version:
$ sudo pip uninstall torchvision
$ python -c "import torchvision"   # should give an error if successfully uninstalled
$ git clone -b v0.3.0 github.com/dusty-nv/vision
$ cd vision
$ sudo python setup.py install
I removed the "myModel" directory and re-ran the 2 commands..... all working :)
@keithemerson93494 жыл бұрын
@@daviddoidge1252 Hi David, No, I haven't found a solution. I went through Dusty's tutorials from scratch and everything worked OK until I got to the transfer learning lesson. I re-loaded Torch etc so I'm now on version 1.4 and 0.5.0 for torchvision. I tried to run the cat_dog model and got errors. I'm really out of my depth here but the common problem (Paul & Dusty's tutorials) seems to be in loading the onnx model. I also get warnings saying that "the onnx model has a newer ir version (0.0.4) than this parser was built against (0.0.3)" . Also Dusty's tutorials mention Torch version 1.1 and torchvision version 0.3. All a bit frustrating as I'd really like to create my own models. Please add a comment if you get anywhere with this, Keith
@CodingScientist4 жыл бұрын
Hi Paul, what do I need to do if I want to view all these premiere videos much ahead of actual release ?
@paulmcwhorter4 жыл бұрын
Just not set up for that. I can only make videos so fast, and if I release them all at once, then there are no new ones for a long time.
@CodingScientist4 жыл бұрын
Paul understood, keep up the great work 👍
@melvinsajith44483 жыл бұрын
Jetson Nano L-55: I couldn't get it working; the error is below. I am on JetPack 4.5, and I got all the things working till now.
'jetson.inference -- imageNet failed to load built-in network 'alexnet'
Traceback (most recent call last):
  File "/home/melvin/Desktop/AI on jetson nano/Nvidia/3deep_learing-2_opencv.py", line 21, in <module>
    net = jetson.inference.imageNet('alexnet',['--model=/home/melvin/Downloads/jetson-inference/python/training/classification/mySBC_COMPUTERS/resnet18.onnx','--input_blob=input_0','--output_blob=output_0','--labels= /home/melvin/Downloads/jetson-inference/myTrain/labels.txt'])
Exception: jetson.inference -- imageNet failed to load network'
@mikethompson51193 жыл бұрын
The code copied from lesson 52 has a bug in the clean-up. The second-to-last line is:
cam.releast()
It should be (cam vs. cam1, and 't' vs. 'e' at the end of release):
cam1.release()
toptechboy.com/ai-on-the-jetson-nano-lesson-52-improving-picture-quality-on-the-raspberry-pi-camera/
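(For reference, a minimal sketch of the corrected clean-up; cam1 is assumed to be the cv2.VideoCapture object from the lesson, and only the last two lines are the actual fix:)

import cv2

cam1 = cv2.VideoCapture(0)   # hypothetical source; the lesson builds its own capture object
# ... the capture/display loop from lesson 52 goes here ...
cam1.release()               # the copied code has cam.releast(), which raises a NameError
cv2.destroyAllWindows()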
@pranav30414 жыл бұрын
Can you also teach the Qualcomm 450c? Because you teach really well and I do not find any good content on that.
@davewesj4 жыл бұрын
I appreciate your lessons, but on this one (just did it this week), apparently I have a different version of jetson.inference, because instead of (model, [,,,]) this one needs (model, (,,,)), changed into a tuple from brackets. Once that was fixed, every procedure succeeded, but no recognition of custom objects. I repeated it with your data set and still am not able to see anything custom, i.e. Arduino, Raspberry, etc. On hold for the time being. dwj
@paulmcwhorter4 жыл бұрын
I did this on jetpack 4.3. I suggest trying it with the same setup I am using.
@ke4est4 жыл бұрын
@@paulmcwhorter I also did this on JetPack 4.3 and followed what you did to a T. Tried everything; it would not run. Even tried on JetPack 4.4. This too is the one I have waited and waited for. So I painstakingly got a fresh new card, put JetPack 4.3 on it, and went through each lesson from the lesson-30-something about upgrading to 4.3. Followed everything as you said, careful not to install anything else. The only thing I did was do updates as needed. Got back to lesson 55 and got the same errors. Will not run. Everything else has gone great! So, yeah, there is something going on somewhere. Something has upgraded somewhere doing installs. Paul, I really hope you revisit this lesson very soon. I know you are overly busy, but please put this on your list. Thanks
@paulmcwhorter4 жыл бұрын
The possibilities I see are that an upgrade occurred on your machine and put you on different versions of some software. The other possibility is that they changed the Jetson Inference library since I made the video. You might try it from the NVIDIA site and see if it works. Here is the link to their instructions. github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-collect.md It is hard for me to be helpful, as the jetpacks are changing, updates change things and the underlying libraries can change.
@ke4est4 жыл бұрын
@@paulmcwhorter Oh I understand, Paul! I understand that with teaching something like this on YouTube, things change so fast that even if someone finds and downloads JetPack 4.3 two years from now, the other libraries will have changed so much it will make many of the videos un-usable. That is why I said I hope you re-visit this. Maybe when you have time at a later date, you can make a new SD card from scratch and then see if you can get it to run. If not, maybe figure out what needs to be changed and let us know in an addendum video or something. I also hope you do it on the Xavier NX with 4.4 and maybe get it working on that. Thanks for everything Paul! You can only do the best you can do and we thank you for it! Hope you have a great weekend and sure did miss seeing you on the Friday hangout.
@paulmcwhorter4 жыл бұрын
Yes. I had the same problem with my Fusion 360 lessons. I put a lot of work into that series, and then Autodesk makes lots of silly changes to the user interface, so now people are confused trying to take the lessons. The challenge is I have hundreds of videos and can not keep them all updated. The nice thing is that the Arduino does not change, so those stay up to date. It seems things are changing VERY fast on the Jetson Nano. I even notice now that they are adding sub-numbers to the JetPack . . . like JetPack 4.3.2. I also noticed recently that if you take the offered Ubuntu upgrade on the Jetson Xavier on JetPack 4.4, it changes the CUDA libraries, and then the face recognizer completely breaks. So, really not sure what the solution is. It is hard to teach if the platform is changing so quickly. Wish we could have the option to lock into a stable distribution and stay with it a few years.
@Cnys1004 жыл бұрын
Hi, my Pi cam won't work, my old Logitech cam won't work, my cheap Chinese cam (boom, it works!)?? I didn't expect that! Thanks for a most excellent education on the Jetson Nano
@daviddoidge12524 жыл бұрын
Hi all. Getting the following errors, any help would be appreciated (nothing has been upgraded)
david@david-nano:~/Downloads/jetson-inference/python/training/classification$ python3 train.py --model-dir=myModel ~/Downloads/jetson-inference/myTrain
Use GPU: 0 for training
=> dataset classes: 6 ['ArduinoNano', 'ArduinoUno', 'JetsonNano', 'JetsonXavierNX', 'RaspberryPiThree', 'RaspberryPiZero']
=> using pre-trained model 'resnet18'
=> reshaped ResNet fully-connected layer with: Linear(in_features=512, out_features=6, bias=True)
Epoch: [0][ 0/79] Time 18.514 (18.514) Data 0.708 ( 0.708) Loss 2.3538e+00 (2.3538e+00) Acc@1 25.00 ( 25.00) Acc@5 87.50 ( 87.50)
Epoch: [0][10/79] Time 0.681 ( 2.288) Data 0.000 ( 0.069) Loss 2.1609e+01 (2.1136e+01) Acc@1 12.50 ( 18.18) Acc@5 87.50 ( 82.95)
Epoch: [0][20/79] Time 0.685 ( 1.524) Data 0.000 ( 0.056) Loss 2.0765e+01 (1.7702e+01) Acc@1 0.00 ( 19.05) Acc@5 75.00 ( 83.93)
Epoch: [0][30/79] Time 0.682 ( 1.252) Data 0.000 ( 0.052) Loss 1.5673e+00 (1.3999e+01) Acc@1 25.00 ( 18.95) Acc@5 100.00 ( 84.27)
Epoch: [0][40/79] Time 0.681 ( 1.113) Data 0.000 ( 0.049) Loss 7.0146e+00 (1.1985e+01) Acc@1 12.50 ( 20.43) Acc@5 87.50 ( 85.67)
Epoch: [0][50/79] Time 0.683 ( 1.029) Data 0.000 ( 0.048) Loss 5.8061e+00 (1.0671e+01) Acc@1 25.00 ( 21.57) Acc@5 87.50 ( 86.76)
Epoch: [0][60/79] Time 0.682 ( 0.972) Data 0.000 ( 0.047) Loss 4.0776e+00 (9.6286e+00) Acc@1 25.00 ( 22.13) Acc@5 75.00 ( 86.07)
Epoch: [0][70/79] Time 0.683 ( 0.931) Data 0.000 ( 0.046) Loss 4.4129e+00 (9.2949e+00) Acc@1 25.00 ( 22.36) Acc@5 100.00 ( 86.97)
Epoch: [0] completed, elapsed time 75.001 seconds
Test: [ 0/18] Time 1.386 ( 1.386) Loss 2.5063e+00 (2.5063e+00) Acc@1 37.50 ( 37.50) Acc@5 100.00 (100.00)
/media/nvidia/WD_BLUE_2.5_1TB/pytorch/20200116/pytorch-v1.4.0/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, signed long *, Dtype *, int, int, int, int, signed long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
/media/nvidia/WD_BLUE_2.5_1TB/pytorch/20200116/pytorch-v1.4.0/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, signed long *, Dtype *, int, int, int, int, signed long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
  File "train.py", line 506, in <module>
    main()
  File "train.py", line 135, in main
    main_worker(args.gpu, ngpus_per_node, args)
  File "train.py", line 280, in main_worker
    acc1 = validate(val_loader, model, criterion, num_classes, args)
  File "train.py", line 383, in validate
    losses.update(loss.item(), images.size(0))
RuntimeError: CUDA error: device-side assert triggered
Sorted: when I cloned the GitHub files there were only 5 directories in the "test" directory instead of 6; the "JetsonXavierNX" directory is missing. As you advised in the video/lecture, I should have done my own images...... lazy and impatient.... story of my life !!!!
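(A quick way to catch that kind of mismatch before training; a minimal sketch, assuming the dataset layout from the lesson, with the root path as a hypothetical example:)

import os

root = os.path.expanduser("~/Downloads/jetson-inference/myTrain")   # hypothetical dataset root
for split in ("train", "val", "test"):
    classes = sorted(d for d in os.listdir(os.path.join(root, split)) if not d.startswith("."))
    print(split, len(classes), classes)
# all three splits should list the same class folders (matching labels.txt);
# a split with a missing class folder is one way to end up at the 't >= 0 && t < n_classes' assertion above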
@eranfeit3 жыл бұрын
Hi Paul, Thanks for the wonderful lessons. I have two remarks/questions: 1. What is printing the log of the class ID? The code does not print anything, so which part of the code prints it? 2. The jetson.inference.imageNet() function has slightly different syntax for the arguments. The input is 'input-blob' and not 'input_blob'. Thank you Eran
@bmckenzie3871 Жыл бұрын
I have followed along until now, with my Nano running JetPack 4.6.1, and surprisingly I have been able to get most everything to work. However, the training here does not. I get this error:
Traceback (most recent call last):
  File "train.py", line 29, in <module>
    from torch.utils.tensorboard import SummaryWriter
  File "/home/user/.local/lib/python3.6/site-packages/torch/utils/tensorboard/__init__.py", line 1, in <module>
    import tensorboard
ModuleNotFoundError: No module named 'tensorboard'
The problem seems to be this line: from torch.utils.tensorboard import SummaryWriter
Indeed, tensorboard does not exist in torch.utils? Has anyone else seen this? Not sure how I can get around this?
@bmckenzie3871 Жыл бұрын
Doh! I just had to pip install tensorboard. Not sure why that wasn't already there? But, training is progressing now :)