Hi Florent, can you provide the exact GitHub repo of KPConv used? I am unable to find a KPConv repo with a .yml file in it. Thank you.
@FlorentPoux · 10 months ago
Hey! Yes of course! Here is the repo: github.com/HuguesTHOMAS/KPConv
@krishnamurthy-ng3fb · 3 months ago
Hi Florent, I need to train a semantic segmentation model with custom data. Can you suggest the best annotation tool? Also, could you give any recommendations and cautions for processing data and training models?
@FlorentPoux · 2 months ago
Hey! So the best tool is your creativity :). But for a packaged tool, I would start with CloudCompare and Python to add some automation. Maybe watch the last tutorial to get a hint of how you can get quicker at labelling. Otherwise, if the classes you want are not too unique (e.g. aerial LiDAR), you can always use open data to train your model.
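To make the CloudCompare + Python idea concrete, here is a minimal sketch (not from the original thread) of loading a labeled cloud exported from CloudCompare as ASCII. The column order (X Y Z R G B Intensity Label) and the file name are assumptions; adjust them to whatever you export.

```python
# Minimal sketch: load a labeled point cloud exported from CloudCompare as ASCII.
# Assumptions (hypothetical): space-separated columns X Y Z R G B Intensity Label,
# file named "labeled_cloud.txt".
import numpy as np

data = np.loadtxt("labeled_cloud.txt")     # shape: (N, 8)
xyz = data[:, 0:3]                         # point coordinates
features = data[:, 3:7]                    # RGB + intensity
labels = data[:, 7].astype(np.int64)       # per-point class labels

# Quick sanity check before training: class distribution
classes, counts = np.unique(labels, return_counts=True)
for c, n in zip(classes, counts):
    print(f"class {c}: {n} points")
```

Checking the class distribution like this also tells you early on whether you need to rebalance or relabel before spending time on training.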
@krishnamurthy-ng3fb · 2 months ago
@@FlorentPoux Thank you
@PhilippHeld_phi · 3 months ago
Hi Florent, thanks for this great video. I am very interested in machine learning on 3D point clouds. I would have loved to attend your workshop. Regarding the KPConv repository, you linked Hugues Thomas' GitHub in another comment, but that doesn't seem to be the same one your colleague is working with in the video. The linked repo does not contain the .yml files that were used in the video. Did your colleague use a different one?
@FlorentPoux · 3 months ago
Hi there! Thanks for reaching out. I saw your comment on the video. 87% of my students struggle to find the right KPConv implementation, so I get you. The repo shown in the video is a heavily modified version, adapted to work well with any dataset. My colleague added functionalities for easier experimentation. The `.yml` files likely define experiment parameters (batch size, learning rate, etc.), making reproduction and tweaking easier (a minimal sketch follows below). We often create these internal versions for our projects: this avoids messing with the original repo and lets us easily compare changes. Unfortunately, I can't share our internal version at this time, but it is part of the 3D Deep Learning Course. Here is a plan to help you on the path:

* Start with the original KPConv GitHub repository and get familiar with its structure.
* Understand the KPConv architecture described in the original paper. Pay attention to its strengths and limitations.
* Review the training scripts in the original repo. Understand the data loading and training process.
* Reproduce the results from the original paper. Make sure you can achieve the baseline performance.
* Look at the different configuration options (YAML or other config files).
* Start making small changes and experiment with different configurations.
* Focus on understanding how the changes affect performance. This includes modifying hyperparameters, data augmentation, and other settings.
* Document your experiments thoroughly. This will help you track your progress and make it easier to analyze your results.
* Consider exploring other point cloud libraries and tools. CloudCompare, PDAL, and PCL are great complements for processing.
* If you need labeled data for training, look into using or adapting public datasets like ShapeNet, ModelNet, or ScanNet.

1. KPConv paper: arxiv.org/abs/1904.08889
2. KPConv GitHub: github.com/HuguesTHOMAS/KPConv
3. CloudCompare: www.danielgm.net/cc/

Best, Florent
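As a rough illustration of the `.yml` experiment files mentioned above, here is a hypothetical sketch (not the actual internal KPConv setup): a small Python loader that reads batch size and learning rate from a YAML file. The field names, defaults, and file name are all assumptions for illustration.

```python
# Hypothetical sketch of a YAML-driven experiment config; not the internal KPConv version.
# Assumed file "experiment.yml" with fields such as:
#   batch_size: 8
#   learning_rate: 0.001
#   max_epochs: 300
#   augment_rotation: true
import yaml  # pip install pyyaml


def load_config(path: str) -> dict:
    """Load experiment parameters from a YAML file, falling back to defaults."""
    defaults = {
        "batch_size": 8,
        "learning_rate": 1e-3,
        "max_epochs": 300,
        "augment_rotation": True,
    }
    with open(path) as f:
        cfg = yaml.safe_load(f) or {}
    defaults.update(cfg)  # values in the YAML file override the defaults
    return defaults


if __name__ == "__main__":
    cfg = load_config("experiment.yml")
    print(f"Training with batch_size={cfg['batch_size']}, lr={cfg['learning_rate']}")
```

Keeping each run's parameters in its own YAML file is what makes experiments easy to reproduce and compare, which is the point of the modified repo described above.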
@PhilippHeld_phi · 3 months ago
Hi Florent, Thanks for your answer and your hints. I will try them out. Best, Philipp
@MesutKöroğlu-n7t · 3 months ago
Do we need to have intensity and RGB values to train that model?
@FlorentPoux · 3 months ago
Not necessarily; you can adapt the features you give to the model.
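As a rough sketch of what "adapting the features" can look like in practice (hypothetical function and array names, not tied to a specific KPConv version): you can train on geometry alone, or stack RGB and/or intensity as extra per-point features when they exist.

```python
# Hypothetical sketch: assemble per-point input features from whatever is available.
# xyz: (N, 3) coordinates; rgb: (N, 3) colors in [0, 1]; intensity: (N,) values.
# Any optional channel may be None if the sensor did not record it.
import numpy as np


def build_features(xyz, rgb=None, intensity=None):
    """Stack the available per-point attributes; keep a constant '1' feature so the
    network still receives a feature tensor when only geometry is available."""
    feats = [np.ones((xyz.shape[0], 1), dtype=np.float32)]  # constant feature
    if rgb is not None:
        feats.append(rgb.astype(np.float32))
    if intensity is not None:
        feats.append(intensity.reshape(-1, 1).astype(np.float32))
    return np.concatenate(feats, axis=1)


# Example: geometry-only training (no RGB, no intensity)
xyz = np.random.rand(1000, 3).astype(np.float32)
features = build_features(xyz)  # shape: (1000, 1)
```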
@krishnamurthy-ng3fb · 3 months ago
Please try to make a video on PointNet++ semantic segmentation models.
@FlorentPoux · 2 months ago
It is planned! The solution with code (with a commercial license) is in the 3D Deep Learning course at the 3D Geodata Academy.