Comments
@noumanijaz5353 2 months ago
Thanks for the great explanation. Question: if we do episodic training and create a large number of few-shot tasks to train the prototypical network, doesn't that also require a large amount of labeled data? How can we then say that few-shot learning needs less labeled data? Please guide.
@dakshnakumar1816 2 months ago
Is the model able to retain the knowledge it gained in the past, or can it only predict on the new dataset's images?
@etaifour2 4 months ago
Genius. The way you explain this is epic.
@daritzateheran8570 5 months ago
pptk is taking too long. It doesn't show anything.
@prakruthikoteshwar1821 8 months ago
Hi, can you provide an example of few-shot learning for object detection? How long does the learning take to complete? Is it faster than regular model training?
@kvnptl4400 8 months ago
Nice video. Thank you for the clear and easy-to-understand explanation of PointNet. It was super helpful to see the side-by-side comparison of the code and network block. It helped 👍
@rohinr7 9 months ago
I have lidar point cloud data in .xyz format. How do I visualize it?
@naveedarif3565 4 months ago
Hello, I need some help.
@rohinr7 4 months ago
Yup bro @naveedarif3565
@user-ls5rg3wp1w 11 months ago
Thanks for this video. I need help with object detection code for few-shot learning.
@ganjarulez009 1 year ago
Hey, one question: I often see the terms "episodic learning" and "meta-learning" used interchangeably in the context of few-shot learning. Are there any substantial differences between these terms, or are they identical in this context?
@aiswaryaunni8437 1 year ago
Finally found an algorithm with more intelligence than the usual object detection algorithms. Thanks a ton.
@bernhardvoggenberger9850 1 year ago
I found this video helpful ;) so I liked it 👍
@bernhardvoggenberger9850 1 year ago
But I was wondering how the information flow works, i.e. an interpretation of why it is able to see shapes. I did some research and made this summary of my findings: kzbin.info/www/bejne/mpqYmGeEl5mbgdU
@EzequielBolzi 1 year ago
Hello, your video is really useful! But I have a question. I'm doing a project on image classification of problems in wind turbines. I have 3 classes with 3 different problems (lightning impacts / pitting / fissures), and in each class I have 7 images. Is that OK?
@aninditamohanta2310 1 year ago
How do I run the EasyFSL code on my own custom dataset?
@salmadiary2991 1 year ago
I couldn't install either open3d or pptk in my web-based CS50 environment.
@karthikm2941 1 year ago
Really fantastic video. But how can I use my own dataset (loading it into the model and doing the train/test split)? Please give details or an example. Thank you.
@Naveen-jr2io 1 year ago
Thanks a lot, sir. It was a really useful video for visualizing point clouds from a public dataset. It would be helpful if you could make a video on point cloud segmentation processing.
@LightsCameraVision 1 year ago
Thank you for the kind words. I have plans to make segmentation videos. ✌️
@AssamSahsah 1 year ago
Great! Thank you! Can you please make a video on few-shot learning with graph neural networks? :)
@_shreya.ramakrishnan_ 1 year ago
Hey! It's a great video. I'm trying to classify some house images with this method. My images are in google drive. It'll be great if you can make a video on how to use custom data from drive!
@LightsCameraVision 1 year ago
Thank you. I’ll try to make one or share some resources in a few days.
@_shreya.ramakrishnan_ 1 year ago
@LightsCameraVision That'll be great :)
@nehaejaz2556 1 year ago
Hi, thanks for the great video, but I don't understand one thing. We train the network with 5 images per support set, 5 images per query set, and 40,000 tasks, which means we use about 400,000 images. But the Omniglot dataset consists of 1623 images per class, so for 5 classes the total would be 8115 images. How can we have 40,000 tasks? Or can images be repeated across tasks? Also, I have a dataset that consists of 100 good images and just 4 or 5 bad images, so should I use a 2-way 1-shot approach?
@LightsCameraVision 1 year ago
Hello, thank you. Yes, there are image and class repetitions across training tasks, but the meta-training classes/images are different from the meta-testing classes/images. Btw, the Omniglot data has 1623 characters from 50 different alphabets, and each character has 20 images, so the total number of images is 32,460. Could you please explain a little more what you mean by good images and bad images?
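As a rough illustration of the repetition described in the reply above (an editor's sketch, not the video's code; the pool size and names are made up), each task re-draws its classes and images from the same labelled pool, so 40,000 tasks do not require 400,000 distinct images:

```python
import random

def sample_episode(images_by_class, n_way=5, n_shot=5, n_query=5):
    # Draw the classes for this task; different tasks can re-use the same classes.
    classes = random.sample(list(images_by_class), n_way)
    support, query = [], []
    for c in classes:
        picked = random.sample(images_by_class[c], n_shot + n_query)
        support += [(img, c) for img in picked[:n_shot]]
        query += [(img, c) for img in picked[n_shot:]]
    return support, query

# Hypothetical pool: 20 classes x 20 images = 400 labelled images in total.
pool = {c: [f"img_{c}_{i}" for i in range(20)] for c in range(20)}
# 40,000 tasks are fine because images repeat across (but not within) tasks.
episodes = [sample_episode(pool) for _ in range(40_000)]
```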
@NehaEjaz29 1 year ago
@LightsCameraVision I have 2 kinds of vehicle parts, and each part has two types: good and faulty. The problem is that for the faulty parts I have just 5 images, while the good parts are around 700. Therefore, I am looking for solutions that require less training data. Do you think few-shot learning will work in this case?
@LightsCameraVision 1 year ago
5 is very low. Of course, the best solution is to get more images if possible. Otherwise, you can look for data that is similar to what you are trying to do, do the meta-training on that, and then meta-test on your data. You can also look at zero-shot learning.
@LightsCameraVision 1 year ago
Feel free to comment here if you have any questions or if you're getting any errors.✌
@teetanrobotics5363 1 year ago
The animation and content are great, but the audio quality, accent, and clarity of speech are really bad.
@LightsCameraVision 1 year ago
Noted. Thanks for the feedback. Didn’t have access to my regular mic, probably that’s the reason. Hope it’s not that bad. ✌️
@youssefmaghrebi6963 1 year ago
@LightsCameraVision The accent is good enough bro, keep it up. But having a better mic would be a nice boost.
@m.alfatehmurkaz7247 1 year ago
Audio is OK though.
@user-qq4hm9ov3o 10 months ago
@LightsCameraVision Accent is OK, no issue.
@prasidhsriram6649 1 year ago
Thank you!
@LightsCameraVision 1 year ago
Appreciate it. ✌️
@user-dk8hr8xs7v 1 year ago
Great! I have one question. I want to use your model with my own data, but I'm facing a problem: I want to organize the train_set and test_set from my own data, not the Omniglot data. How can I modify that part? Please share your wisdom, thanks.
@LightsCameraVision 1 year ago
Thank you. I'm sharing two links here: the first is a PyTorch dataloader for few-shot learning and the second is for TensorFlow. You may have to modify them for your case. Hope it helps. ✌️ github.com/sicara/easy-few-shot-learning/blob/master/easyfsl/samplers/task_sampler.py github.com/schatty/matching-networks-tf/blob/master/matchnet/data/mini_imagenet.py
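For reference, here is a minimal sketch of how a custom image folder might be plugged into the first link's sampler. It assumes the easyfsl TaskSampler API from that repo (a sampler built with n_way/n_shot/n_query/n_tasks that expects the dataset to expose get_labels() and provides an episodic_collate_fn); the folder path and image size below are hypothetical:

```python
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder
from easyfsl.samplers import TaskSampler

class FewShotFolder(ImageFolder):
    # TaskSampler needs a get_labels() method returning one label per image.
    def get_labels(self):
        return self.targets

train_set = FewShotFolder(
    "data/my_train",  # hypothetical path: one sub-folder per class
    transform=transforms.Compose([transforms.Resize((84, 84)), transforms.ToTensor()]),
)

sampler = TaskSampler(train_set, n_way=5, n_shot=5, n_query=5, n_tasks=40_000)
loader = DataLoader(
    train_set,
    batch_sampler=sampler,
    collate_fn=sampler.episodic_collate_fn,  # groups each batch into a support/query task
)
```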
@Alaah576 1 year ago
Can I apply NLP with meta-learning without using deep learning? I mean machine learning algorithms + NLP + meta-learning?
@LightsCameraVision 1 year ago
Yes, you can. But I'm skeptical about the performance of classical ML algorithms with meta-learning on complex NLP tasks. The recent papers are built on neural networks or transformers. But you can definitely try.
@Alaah576 1 year ago
Is meta-learning different from triplet loss?
@LightsCameraVision 1 year ago
At a high level, the goal of triplet loss is the same as metric learning: both are used to learn a representation function. But they do it in different ways. Triplet loss is usually used in self-supervised (contrastive) learning, where the model learns by comparing embedded vectors. Metric learning algorithms, on the other hand, learn a function that maps instances to a new space; at test time an instance is projected into that space and classified by its closest distance to a learned class. Other types of meta-learning algorithms are different from contrastive learning (triplet loss).
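A toy sketch of the contrast described above (editor's addition; `embed` is a stand-in for any embedding network, and all shapes are arbitrary):

```python
import torch

embed = torch.nn.Linear(16, 8)  # placeholder embedding function

# Triplet loss: pull an anchor towards a positive, push it away from a negative.
anchor, positive, negative = torch.randn(4, 16), torch.randn(4, 16), torch.randn(4, 16)
triplet_loss = torch.nn.TripletMarginLoss(margin=1.0)
loss = triplet_loss(embed(anchor), embed(positive), embed(negative))

# Metric/prototypical view: average each class's support embeddings into a
# prototype, then label a query by its nearest prototype in the learned space.
support = embed(torch.randn(5 * 5, 16)).view(5, 5, -1)  # 5 classes x 5 shots
prototypes = support.mean(dim=1)                         # one prototype per class
queries = embed(torch.randn(10, 16))
predictions = (-torch.cdist(queries, prototypes)).argmax(dim=1)
```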
@Alaah576 1 year ago
Thanks for explaining, great video. Can I apply random forest with meta-learning?
@LightsCameraVision 1 year ago
Appreciate the kind words. ✌️ Yes, you can. Meta-learning algorithms are model-agnostic in general. There is some work on this; check these out: arxiv.org/pdf/2203.01482.pdf edoc.ub.uni-muenchen.de/24557/1/Probst_Philipp.pdf ieeetv.ieee.org/video/meta-algorithms-in-machine-learning
@saeed577 1 year ago
Great explanation 👌, thanks a lot. Hope to see more videos about few-shot learning.
@LightsCameraVision 1 year ago
Appreciate the kind words. Thank you. ✌️
@maker72460 2 years ago
Hi, I recently reviewed PointNet too as part of my research. There are three main takeaways: permutation invariance, canonical space transformation, and local-global knowledge. For permutation invariance, the authors use a symmetric function such as max(). For canonical space transformation, a T-Net is used. For local-global sharing, the per-point features learned by the second MLP (N, 64) are concatenated with the global feature (1024) pooled from the last MLP and passed to another MLP network. This combines local knowledge with the original global knowledge. Really liked your concise and clear explanation. Perhaps a more detailed (~20 min) video just about the theory and another implementation video would be awesome. Regardless, if you plan to cover more networks, I would like more videos on PointNet++, GradSLAM and DeepGMR. Awesome! Thanks!
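For readers who want to see those two ideas in code, here is a stripped-down sketch (editor's addition, not the paper's or the video's implementation) of max pooling as the symmetric function and of the local-global concatenation used in the segmentation branch:

```python
import torch

n_points = 1024
point_feats = torch.randn(1, 64, n_points)    # per-point "local" features (second MLP)
high_feats = torch.randn(1, 1024, n_points)   # per-point features after the last shared MLP

# Symmetric function: max over the point dimension, so point order does not matter.
global_feat = torch.max(high_feats, dim=2).values                     # (1, 1024)

# Local-global sharing: tile the global feature and concatenate it with
# the per-point features, giving the (1, 1088, N) tensor used for segmentation.
local_global = torch.cat(
    [point_feats, global_feat.unsqueeze(2).expand(-1, -1, n_points)], dim=1
)
```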
@LightsCameraVision 1 year ago
Thanks for your kind words. It's great to see that you are also passionate about point cloud processing. Thanks for the suggestions, I'll keep them in mind. ✌️
@modx5534 2 years ago
Great video! I have one question: can you use few-shot learning in combination with 1D CNNs? I have some acceleration data I want to classify, but my "traditional" CNNs have a very hard time with it because I (intentionally) don't use a lot of data. I don't want to use classical machine learning algorithms like random forest, and few-shot learning looks very promising so far.
@LightsCameraVision 2 years ago
Thank you. Yes, you can use it for your case. I have only seen people using FSL in computer vision and NLP. But I don't see why you can't use it in other domains, people may already have. ✌️
@sumedhvidhate3383 2 years ago
conda install -c anaconda -c conda-forge mayavi is the updated command for installing mayavi; it will solve all the errors.
@LightsCameraVision 2 years ago
I would suggest you fix your "conda not found" error first and then install the packages/dependencies. You can run "conda --version" to check if it's installed properly.
@sumedhvidhate3383 2 years ago
I installed it 3 to 4 times
@sumedhvidhate3383 2 years ago
And checked, but I still got the same error
@sumedhvidhate3383 2 years ago
I'm getting this error: conda: command not found
@LightsCameraVision 2 years ago
So this may mean many things. You may not have Anaconda installed, or maybe conda isn't added to the PATH variable. If you don't have Anaconda installed, go to www.anaconda.com/ and follow the instructions to download it. If you already have it downloaded, follow something like this monovm.com/blog/conda-command-not-found-fixed/ or other StackOverflow posts that fit your case. Then follow the video to create the environment and install the necessary dependencies. ✌️
@sumedhvidhate3383 2 years ago
I installed conda, and after installing the mayavi package I was still getting another error, an OpenGL error.
@user-mr9he1or4u 2 years ago
Thank you so much, it was perfect.
@LightsCameraVision 2 years ago
Appreciate the kind words. Thank you. ✌️
@prasidhsriram6649 2 years ago
Very useful video
@LightsCameraVision 2 years ago
Thank you. I'm glad you found it useful. ✌️
@danielmathew6961 2 years ago
Thanks for the video, it is very helpful and I really appreciate it! Is there a way to download just a subset of the KITTI dataset so I don't have to download the entire dataset?
@LightsCameraVision 2 years ago
I'm glad that you found it helpful. Here is a small subset of the KITTI dataset that you can download. pl-flash-data.s3.amazonaws.com/KITTI_tiny.zip ✌️
@danielmathew6961 2 years ago
@LightsCameraVision Thanks so much for the dataset! Is it also possible to get the images for the subset you gave me? The script doesn't run without the images corresponding to the labels, calibration, and point cloud data. If not, is there another way to run the script without the images?
@LightsCameraVision 2 years ago
Hi Daniel, try this subset of KITTI data. drive.google.com/drive/folders/1Y6QKWAEN0mUuz2lq7D1pJNCsg2cnwhjC?usp=sharing
@camtrik3686 2 years ago
This tutorial is very helpful, but I still have some problems. I tried to run it on Colab, but there seem to be some problems when using TaskSampler and I cannot figure them out. Can you check it out?
@LightsCameraVision 2 years ago
I'm glad that you found it helpful. Thanks for pointing out the error. I have fixed it. Check out the updated Colab link in the video description. ✌️
@camtrik3686 2 years ago
@LightsCameraVision Thank you!!
@focused9227 2 years ago
You are awesome
@LightsCameraVision 2 years ago
Appreciate the kind words. 🙂✌️
@ginogoossens8952 2 years ago
How can I make my own point cloud dataset? What tools can I use to classify parts of the point clouds?
@LightsCameraVision 2 years ago
If you mean how to generate point cloud data, then you need a LiDAR device to capture point clouds, or you can use a simulator to augment real point clouds with synthetic obstacles and environments. There are simulators like CARLA you can check out. For data annotation, you can try this tool (supervise.ly/lidar-3d-cloud/); it supports the KITTI format. Since the KITTI data/format is used as a benchmark by many algorithms, generating annotations for custom data in KITTI format is a good idea. The tool is well documented. You can also read this Medium post (medium.com/deep-systems/releasing-first-online-3d-point-cloud-labeling-tool-in-supervisely-4faca42b5d6e) about it. If you are looking for segmentation models, you can look into PointNet, PointSIFT, SqueezeSegV3, and many more. Hope it helps. ✌️
@birdropping 2 years ago
Thank you for the amazing video! I am working on a project that is exploring the estimation of age using 3D facial depth maps. Is it possible for this PointNet implementation to be used for regression instead of classification?
@LightsCameraVision 2 years ago
Appreciate it. You definitely can do it. You just need to tweak a little at the end. I remember this paper using PointNet for regression. Check it out. arxiv.org/pdf/2010.04865.pdf ✌️
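A hedged sketch of that "tweak at the end": assuming a PointNet-style backbone that produces a 1024-d global feature, the k-class classifier can be swapped for a single-output head trained with an L2 loss (the names below are placeholders, not the paper's or the video's code):

```python
import torch
import torch.nn as nn

regression_head = nn.Sequential(
    nn.Linear(1024, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 1),                  # one continuous output, e.g. predicted age
)

global_feature = torch.randn(8, 1024)   # stand-in for a batch of backbone outputs
pred_age = regression_head(global_feature).squeeze(1)
target_age = torch.rand(8) * 80
loss = nn.functional.mse_loss(pred_age, target_age)
```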
@birdropping 2 years ago
@LightsCameraVision Thanks so much for taking the time to point me in the right direction! Will check it out.
@LightsCameraVision 2 years ago
Happy to help.
@houdaa1810 2 years ago
Hello, thanks for this video. I want to know if we can use the KITTI dataset with the PointNet model?
@LightsCameraVision 2 years ago
Appreciate it. You can use it; you may need to change the code a little bit. Please check this repo: github.com/kargarisaac/PointNet-SemSeg-VKITTI3D They used PointNet for semantic segmentation on the Virtual KITTI dataset. It should give you some idea. ✌️
@mohammedy.salemalihorbi1210 2 years ago
Great! You have made my day. Thanks a lot for this wonderful video!
@LightsCameraVision 2 years ago
I’m glad it helped you. Thanks for the kind words. ✌️
@nayansarkar6952 2 years ago
Hello sir, I tried to run your code in Google Colab with the Omniglot dataset and it worked fine, but I couldn't make it work with a different dataset (FashionMNIST or others); it shows an error. I know a bit of Python and am new to FSL. Will you please guide me through your code so I can run it with another dataset?
@LightsCameraVision 2 years ago
Hello, I'm a little busy right now with some deadlines. I'll look into the issue soon. Please comment the error you are getting when you try it with FashionMNIST. Thanks!
@matheusfilipemartins8309 2 years ago
I work with this, and I don't know whether it has limitations in an industrial environment with many different things to classify.
@LightsCameraVision 2 years ago
It is no longer the most accurate model; there are newer models with better accuracy. With enough data, it should be able to handle a significant number of classes. However, it does not work well on complex scenes. But this is one of the first models that started it all. Some of the recently published models still use PointNet as a backbone for feature extraction or other reasons. Since it is so easy to use, I'm sure many people in industry start with it and then build on it according to their needs.
@arooshmishra234 2 years ago
Does this work well for large point cloud files, say in the GBs? Can it detect all the occurrences of each class?
@LightsCameraVision 2 years ago
It should work, assuming you have enough computational resources. The original PointNet architecture takes 1024 points as input, which is definitely not much. You can also modify it for your project so that it can handle more points. Depending on what you are trying to classify, you may not need many points or a big point cloud for each object. You can always downsample points.
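A minimal downsampling sketch (editor's addition; the file name is hypothetical), since PointNet-style models expect a fixed number of points per cloud:

```python
import numpy as np

points = np.loadtxt("cloud.xyz")  # hypothetical file; whitespace-separated x y z per line
idx = np.random.choice(len(points), size=1024, replace=len(points) < 1024)
sampled = points[idx]             # fixed-size cloud ready for a PointNet-style model
```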
@cocoarecords 2 years ago
I have seen many vids, but yours are the best. Straightforward.
@LightsCameraVision 2 years ago
Thank you for the kind words. I really appreciate it. 🙂 Thanks for watching. ✌️
@nayansarkar3462 2 years ago
Thank you sir for making such a wonderful video.
@LightsCameraVision 2 years ago
Appreciate the kind words. Thanks for watching. ✌️
@vishnupradeep6113 2 years ago
Thank you so much !
@yasminbanu2597 2 years ago
Sir, I tried the same, but while running the show command my output window disappears within a second. I can't see anything in the window.
@LightsCameraVision 2 years ago
Are you getting any error?
@yasminbanu2597 2 years ago
@LightsCameraVision No error, sir.
@LightsCameraVision 2 years ago
@yasminbanu2597 I'm assuming you are using the same GitHub repository for the KITTI data. Then make sure you have installed all the necessary libraries and are using the correct command (like below) in the terminal/shell: python kitti_object.py --show_lidar_with_depth --vis
@yasminbanu2597 2 years ago
Hello sir, what I had searched for on the internet I found in your video, thanks for that. It would be helpful if you make a video on how to feed point cloud segmented images to a YOLO network.
@LightsCameraVision 2 years ago
I’m very glad that you found this video helpful. 🙂 Thanks for watching and for the suggestion. A point cloud segmentation video is in the making. ✌️
@yasminbanu2597 2 years ago
Thank you for your reply
@parthasarathyk5476 2 years ago
Superb... Thank you for such a great knowledge-sharing video.
@LightsCameraVision 2 years ago
Appreciate the kind words. 🙂✌️
@LightsCameraVision 2 years ago
If you know a better repository/package/software for such visualization, please do share here. Thanks for watching. ✌️