Thanks for the video! I am not an expert in this, but the basic intuition is that point clouds are sets, so there is no natural way of ordering them, i.e. you cannot consistently identify which is point 0, 1, 2, ..., n between samples. Grid convolutions, on the other hand, assume a very precise local ordering, so I believe the regularization applied in PointNet is trying to somehow learn an ordering in the first half of the network and then use it in the second half. I believe GNNs and Transformers are much better suited to this task than CNNs, since they naturally operate on sets. Architectures like the SE(3) Transformer even (try to) encode 3D rotational symmetries directly into the network. A good data augmentation here is random 3D rotations, so the network can learn to be invariant to them, just as CNNs are trained with rotated images.
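To make the rotation-augmentation point concrete, here is a minimal NumPy sketch (my own illustration, not from the video; the shapes and function names are assumptions): every point in a cloud of shape (N, 3) is multiplied by the same random rotation matrix, so the labelled shape stays the same while its orientation varies between samples.

import numpy as np

def random_z_rotation(rng: np.random.Generator) -> np.ndarray:
    # Sample a rotation about the z-axis; sampling full SO(3) rotations works too.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def augment(points: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    # points has shape (N, 3); rotating the whole set keeps the label intact.
    return points @ random_z_rotation(rng).T

rng = np.random.default_rng(0)
cloud = rng.normal(size=(1024, 3))   # stand-in for a real point cloud sample
rotated = augment(cloud, rng)        # same set of points, new orientation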
@connor-shorten 3 years ago
Hey Cristian, thanks for this information! I agree, the Transformer definitely seems better suited for this problem! I'll check out the SE(3) Transformer, still very new to point cloud research haha! Interesting to see data augmentation in the geometric DL space; 3D-rotation augmentation, like in neural radiance fields, seems like it could be interesting for 2D image data as well!
@stnmtambat9374 3 years ago
@@connor-shorten check out the "Point Transformer" published in 2020
@zddroy1025 3 years ago
Thank you for your video! I wonder where we could access the notebook.
@basithAA 3 years ago
thanks for the knowledge
@blackeagleff 3 years ago
Hi, could you make a video on PointNet++ or more recent networks (SalsaNext, SPVNAS) for 3D semantic segmentation with lidar point clouds? I have my own point cloud data captured with a Velodyne lidar and I want to know how to use one of these networks to predict semantic segmentation on my own data, thank you!
@alexsteiner6103 2 years ago
How can I save the model to a .h5 file?
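In case it helps, a minimal sketch assuming the notebook builds a standard tf.keras Model (the variable name model below is a placeholder): the built-in save call writes HDF5 directly when the filename ends in .h5.

from tensorflow import keras

model.save("pointnet.h5")   # architecture + weights in one HDF5 file
# To reload later; any custom layers or regularizers defined in the notebook
# would need to be passed via the custom_objects argument.
restored = keras.models.load_model("pointnet.h5")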
@ahmadatta66 2 years ago
Why is the validation loss so high?
@mehermanoj45 3 years ago
😀
@connor-shorten 3 years ago
Thanks, hope you find this useful!
@jessar82 1 year ago
But how can you explain and review work you did not understand? Did you check the validation accuracy? Did you plot the loss? Just take a moment to plot the training and validation loss, at least! Mate, this Keras work is basically a fake replication of the original paper: the model is overfitting from start to finish, and the results are just random.
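For anyone who wants to run that check, a minimal sketch assuming the usual Keras workflow (model, data and epoch count below are placeholders, not the notebook's actual names): fit() returns a History object whose history dict holds the per-epoch curves, and a widening gap between the two lines is the overfitting described above.

import matplotlib.pyplot as plt

history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=20)

plt.plot(history.history["loss"], label="train loss")
plt.plot(history.history["val_loss"], label="val loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()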