I bet seeing those loop closures happen was an amazing feeling and vindication of your hard work! :)
@klaus-udokloppstedt6257 (4 years ago)
Awesome how the map (and position) is adjusted at 9:30 to fix the accumulated error.
@matlabbe (4 years ago)
After the loop closure is detected (visually), the underlying pose graph of the map is optimized with the new constraint, re-adjusting all the clouds created so far and moving the robot back to its initial position.
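To illustrate the idea with a toy example (a minimal 1-D sketch only, with made-up odometry values; RTAB-Map's actual optimizer works on SE(3) pose graphs with backends such as TORO, g2o or GTSAM), a single loop-closure constraint is enough to spread the accumulated drift over the whole trajectory:

```python
# Minimal 1-D pose-graph illustration of loop-closure correction.
# This is NOT RTAB-Map's optimizer; it only shows how one loop-closure
# constraint redistributes accumulated odometry drift over all poses.
import numpy as np

# Robot drives four ~1 m segments and physically returns to the start.
odom = np.array([1.02, 0.97, -1.05, -0.99])  # noisy odometry increments
n = len(odom) + 1                            # poses x0..x4

rows, rhs = [], []
# Anchor the first pose at 0.
r = np.zeros(n); r[0] = 1.0
rows.append(r); rhs.append(0.0)
# Odometry constraints: x[i+1] - x[i] = odom[i]
for i, u in enumerate(odom):
    r = np.zeros(n); r[i] = -1.0; r[i + 1] = 1.0
    rows.append(r); rhs.append(u)
# Loop-closure constraint: the last pose coincides with the first one.
r = np.zeros(n); r[0] = -1.0; r[-1] = 1.0
rows.append(r); rhs.append(0.0)

A, b = np.vstack(rows), np.array(rhs)
x_optimized, *_ = np.linalg.lstsq(A, b, rcond=None)

print("dead-reckoned poses:", np.concatenate(([0.0], np.cumsum(odom))))
print("optimized poses:    ", x_optimized)
```

The dead-reckoned trajectory ends at a non-zero position (the drift); after adding the loop-closure constraint and solving, the error is spread over the intermediate poses, which is why all the clouds in the video shift slightly when the loop closes.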
@aadityaasati868 (8 years ago)
Fantastic work, very impressive...
@sabtvg (3 years ago)
Great! Impressive! Show us more please, please, please.
@andreasthegreat8509 (a month ago)
This is a really cool video. What scanner did you use?
@matlabbe (a month ago)
Follow the link in the description; it is a stereo camera.
@gulhankaya5088 (2 years ago)
Hello, how do I use the map I created? I am using the application in an autonomous vehicle. How will it navigate the map I created when I put the vehicle at the starting point?
@keshav2136 (3 years ago)
Awesome
@zyxwvutsrqponmlkh (5 years ago)
At 3:05 it had some trouble closing the loop. Are you perhaps only using the tachometers for localization?
@matlabbe (5 years ago)
Visual loop closures could not be detected when travelling in the reverse direction in this test because only the front camera was used. A 360° camera or an additional rear-facing camera would be needed to detect those loop closures.
@TestSubject2000 (8 years ago)
Very nice, how large does the map get?
@matlabbe (8 years ago)
The resulting database is 193 MB. The RAM used to generate the 3D map depends on the number of points created (which is adjustable).
@chrisvonwielligh1962 (3 years ago)
Would it be possible to fuse GPS localisation to link the odometry frame to lat-long coordinates?
@matlabbe (3 years ago)
Yes, see for example official-rtab-map-forum.67519.x6.nabble.com/Is-this-the-way-to-use-lidar-IMU-efficiently-tp7496p7515.html
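As a rough, generic illustration (a sketch of the usual idea, not rtabmap's actual GPS handling; coordinates and names are placeholders), linking the odometry frame to lat/long usually starts by converting each fix into local east/north offsets in metres relative to the map origin:

```python
# Hypothetical helper (not part of rtabmap's API): convert GPS fixes to
# local east/north offsets in metres relative to the first fix, using an
# equirectangular approximation (fine over a few kilometres).
import math

EARTH_RADIUS_M = 6378137.0  # WGS84 equatorial radius

def latlon_to_local_en(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
    """Return (east, north) metres of a fix relative to the map origin."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    lat0, lon0 = math.radians(origin_lat_deg), math.radians(origin_lon_deg)
    east = (lon - lon0) * math.cos(lat0) * EARTH_RADIUS_M
    north = (lat - lat0) * EARTH_RADIUS_M
    return east, north

# Example fix roughly 30 m north-east of the starting point:
print(latlon_to_local_en(45.50027, -73.61461, 45.50000, -73.61500))
```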
@rahul122112 (4 years ago)
@matlabbe Nice mapping. But is the system able to localize given this map?
@matlabbe (4 years ago)
This is what happens at 9:27 (the robot detected that it has come back to the beginning).
@rahul122112 (4 years ago)
@@matlabbe Awesome! I didn't notice that snippet. Another question that I have: which would be better for outdoors in your experience/opinion, depth-based RTAB-Map SLAM or stereo RTAB-Map SLAM? I am thinking of using a ZED2 camera for SLAM and navigation in an unstructured environment.
@matlabbe (4 years ago)
@@rahul122112 For outdoor environments, I would go for stereo cameras (like the ZED2).
@rahul122112 (4 years ago)
@@matlabbe Ah yes. But I was asking whether a ZED2 would be better suited to RGB-D RTAB-Map SLAM or stereo RTAB-Map SLAM in an outdoor unstructured environment?
@mathAI42 (4 years ago)
@@rahul122112 The ZED can compute dense disparity on the GPU, which can save CPU time. Pose estimation will be more accurate in stereo mode though, if some features are outside the depth range.
@pauldatche8410 (8 years ago)
Wow! Awesome stuff! I would love to experiment with this on a 3D imaging project. How can I get your SLAM stereo-modelling software?
@matlabbe (8 years ago)
You can start looking here: introlab.github.io/rtabmap/
@rafcins (6 years ago)
What sensor do you use for depth, a ZED stereo or a Bumblebee?
@matlabbe (6 years ago)
Raf, it is a Bumblebee2 stereo camera; with a ZED you may also get similar results.
@rafcins (6 years ago)
matlabbe I use a ZED with a Jetson TX2 but it slows down heavily.
@-factos6519 (3 years ago)
Hey! Really awesome functionality! But I have a question: what processor runs this algorithm? I'm really curious!
@matlabbe (3 years ago)
The bag was processed on a MacBook Pro 2010 if I remember correctly (I think it was a 2010 i7).
@JabiloJoseJ (3 years ago)
What camera are you using?
@matlabbe (3 years ago)
In this video it was an old Bumblebee2 stereo camera (yes, the same name as the Transformer).
@刘盼盼-g8h (8 years ago)
Great job! Could you please tell me what type of robot you are using in this video? And did you do the SLAM with a single stereo camera?
@matlabbe (8 years ago)
The robot is AZIMUT3; I added the link in the referenced tutorial. The SLAM was done with only one stereo camera. Cheers
@Shaban_Interactive (4 years ago)
This is amazing. Can I capture a whole city with this? Can an ultrabook handle that much data? It is about a 60 km² area.
@matlabbe (4 years ago)
It depends on whether you want to do it in real time. It is, however, possible to record the data and process it offline. Example with stereo on a car: kzbin.info/www/bejne/rnqqfJR7lNeNlbM, example with lidar on a car: official-rtab-map-forum.67519.x6.nabble.com/Ouster-drive-under-the-rain-td6496.html
@Shaban_Interactive (4 years ago)
matlabbe Excellent. Does it support the iPhone X TrueDepth camera? The Record3D app can provide real-time RGB-D streaming via a USB cable. Is it possible to map with an iPhone X?
@jacksonkr_ (8 years ago)
I am here because I was viewing your blog. I am looking for information on your model of stereo camera but I'm not finding anything. Do you have that info listed anywhere online? Is it a Bumblebee?
@matlabbe (8 years ago)
It is a Bumblebee2. Indeed, it is not named anywhere. I updated the tutorial above, thanks for pointing it out!
@alexandergrau887 (9 years ago)
Interesting point: when there is only lawn (so all 'normal' vectors point up, as in your first frames), RTAB-Map can already find enough feature points for matching with a stereo camera? My experiments on lawn were not that successful so far (not enough feature points on lawn...) grauonline.de/wordpress/?page_id=1282
@matlabbe (9 years ago)
+Alexander Grau RTAB-Map uses visual features for odometry, not geometry (an ICP-like motion estimation would use normal vectors). In your test on lawn, the problem is not the lack of visual features but the lack of valid depth values: the Xtion Pro Live or Kinect cannot be used outdoors unless there is no sunlight. In the video above, the stereo camera can work outdoors because the depth is computed by feature triangulation between the left and right RGB images (no IR image is used).
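As a rough sketch of that left/right triangulation (plain OpenCV here, not RTAB-Map's internal code; the file names and calibration values are placeholders), depth follows from disparity as z = f·B/d:

```python
# Minimal stereo-depth sketch with OpenCV (not RTAB-Map's internal pipeline):
# depth is triangulated from left/right image correspondences, so it works in
# sunlight, unlike IR-projector RGB-D cameras. File names and calibration
# values below are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left image
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right image

fx_px = 700.0      # focal length in pixels (from calibration)
baseline_m = 0.12  # distance between the two cameras, in metres

# Semi-global block matching; disparities are returned as fixed-point (x16).
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=9)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# depth = f * B / d ; invalid (non-positive) disparities become inf.
with np.errstate(divide="ignore"):
    depth_m = np.where(disparity > 0, fx_px * baseline_m / disparity, np.inf)

print("median depth of valid pixels [m]:",
      np.median(depth_m[np.isfinite(depth_m)]))
```

On low-texture surfaces like uniform lawn, the matcher simply finds few valid disparities, which is the same limitation described above for feature-based odometry.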
@5sweatingpalm (3 years ago)
holy shit!
@LowLightVideos (8 years ago)
This video is perfectly good. So many views and so few 👍s; hmmm, wonder why that is... Geniuses saw "Outdoor stereo SLAM with RTAB-Map" and read it as "an outdoor slam, and RTAB's playing; featuring Buddy Map" - s-man, that's gonna rock, gotta see it. All they were able to figure out was that they didn't have a clue what they were looking at, so they hit the back button. Unfortunate, since SLAM is a valid technique to build these images and this video demonstrates it perfectly well.
@tokyowarfare6729 (8 years ago)
Wow. Would it be possible to build a point cloud offline from a stereo video or from left/right video files? Under Windows? I have a stereo video recorded from inside my car; the bonnet is slightly visible but it's mostly road and surroundings.
@matlabbe (8 years ago)
You may be interested in the side-by-side video section on this page: github.com/introlab/rtabmap/wiki/Stereo-mapping#process-a-side-by-side-stereo-video-with-calibration-example
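If you prefer to pre-process the video yourself, a minimal sketch (independent of rtabmap's own side-by-side importer; the file name is a placeholder) is to split each frame into its left and right halves:

```python
# Sketch (not rtabmap's importer): split a side-by-side stereo video into
# left/right image pairs that any stereo pipeline can consume.
# "stereo_sbs.mp4" is a placeholder; the left/ and right/ directories
# must already exist.
import cv2

cap = cv2.VideoCapture("stereo_sbs.mp4")
frame_id = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    half = frame.shape[1] // 2          # width of one view
    left, right = frame[:, :half], frame[:, half:]
    cv2.imwrite(f"left/{frame_id:06d}.png", left)
    cv2.imwrite(f"right/{frame_id:06d}.png", right)
    frame_id += 1
cap.release()
```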
@tokyowarfare6729 (8 years ago)
Awesome! I did not expect this to exist! All the videos on other channels referenced complex code that had to be run in Linux, do this, do that... oh, thanks! In the past I used for fun a nice program named Project Video Scanner, similar to this one but aimed at road mapping. It was unstable and inaccurate, but fun. I'll test my dataset with yours and tell you how it goes. So far I've tested my dataset with SfM programs; some do capture quite large portions of road, and when they manage to densify these the results are interesting. Usually these SfM apps miss a lot of road details. These SfM apps probably use a similar approach to yours for the initial camera locations. Maybe the camera poses can be exported as well as the point cloud. If it is dense enough I could try to extract some breaklines (manually), mesh them with a clean topology and, if there is a way to import the camera poses with the model, try to project the textures onto the meshes.
@tokyowarfare6729 (8 years ago)
[SOLVED] Following the database tutorial, I get this error when I press Play. Ini file loaded, database loaded, check 'unuse existing odometry', uncheck 'stamps', Apply, OK, New database... Play --> error: [FATAL] (2016-09-09 01:47:50) CameraModel.cpp:76::rtabmap::CameraModel::CameraModel() Condition (fx > 0.0) not met! [fx=0.000000] Tomorrow I'll print the checkerboard to try a custom dataset. OK, the calibration files were in the .zip file of the stereo dataset. Maybe they should also be available for those testing only the database tutorial.
@tokyowarfare6729 (8 years ago)
Finally managed to run it. In the stereo images section you forgot to mention that creating a new database is needed; after this the Play button is available. I did a test with your sample images, this is quite impressive! I never saw a reconstruction happen that fast XD
@tokyowarfare6729 (8 years ago)
If you run the processing more times you get extra points. I like it :)
@oldcowbb (4 years ago)
Is it possible to use this with two regular cameras?
@matlabbe (4 years ago)
In general no, unless they can be hardware-synchronized (linked with a connector that triggers a picture at exactly the same time on both cameras, like the industrial Point Grey or FLIR cameras). Fortunately, you can find stereo cameras that are not very expensive, like the RealSense D435, ZED or Mynteye.
@oldcowbb (4 years ago)
@@matlabbe Wow, thank you so much for the quick response.
@tienhoangngoc7867 (4 years ago)
Hi, can you tell me what algorithm this project uses? The RTAB-Map algorithm? Thank you.
@matlabbe (4 years ago)
Yes, RTAB-Map
@theolix9938 (5 years ago)
Can I use a Raspberry Pi 4 Model B for this project? Is there any way to contact you for help with our research? Thank you and God bless.
@mathAI42 (5 years ago)
It has not been tested on an RPi4 yet. I know rtabmap works on an RPi3 with limited capability. For the RPi4, I don't know if you can get ROS on it easily (with binaries). You may look at this page: ubuntu.com/blog/roadmap-for-official-support-for-the-raspberry-pi-4; if you install Ubuntu, you may not have to rebuild ROS from source (which could take very long!). If you have more questions or installation problems, look at the Troubleshooting section on the project's page: introlab.github.io/rtabmap/
@이종법-o2m (6 years ago)
Are you using a laptop? Or another MCU product?
@matlabbe (6 years ago)
In this setup, we recorded a bag, then played it back on a laptop computer to record this video. However, the mapping node (without visualization) could have run on the robot's Mini-ITX without problems.
@surbhipal985 (7 years ago)
Quite impressive!! Would this be able to create a 3D model of a complicated outdoor object such as a temple carving? Can you suggest a camera that would fulfil my requirement?
@matlabbe (7 years ago)
It depends on the precision you want for the resulting model and on the time/cost you are willing to put into getting a final model. For very complex objects, and if you don't care about real-time processing, look at photogrammetry approaches (which can be done with a simple camera). Otherwise, for reconstructing an environment live, I would go for ToF cameras indoors and stereo cameras outdoors.
@surbhipal985 (7 years ago)
Sir, can you name some stereo cameras that would be good enough for temple carvings?
@matlabbe (7 years ago)
Try RealSense 3D cameras, the ZED camera...
@muratkoc4693 (3 years ago)
Thanks for this work. Can we do image processing in OpenCV with the constructed 3D map?
@matlabbe (3 years ago)
Well, you can subscribe to the cloud_map topic, which is a sensor_msgs/PointCloud2 topic, and do whatever you want!
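For example, a minimal rospy listener (assuming the default /rtabmap namespace used by the rtabmap_ros launch files; remap the topic to your setup) could pull the cloud into NumPy for further processing with OpenCV or anything else:

```python
#!/usr/bin/env python
# Minimal rospy sketch: read rtabmap_ros's cloud_map (sensor_msgs/PointCloud2)
# and convert it to a NumPy array for further processing.
import numpy as np
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

def on_cloud(msg):
    # Extract x, y, z for every point, skipping NaNs.
    pts = np.array(list(pc2.read_points(msg, field_names=("x", "y", "z"),
                                        skip_nans=True)), dtype=np.float32)
    rospy.loginfo("cloud_map has %d points", pts.shape[0])

if __name__ == "__main__":
    rospy.init_node("cloud_map_listener")
    rospy.Subscriber("/rtabmap/cloud_map", PointCloud2, on_cloud, queue_size=1)
    rospy.spin()
```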
@maatsche (9 years ago)
Can you upload the project to GitHub?
@matlabbe (9 years ago)
Marcel Cohrs You can find it here: github.com/introlab/rtabmap_ros
@shrijank522 (6 years ago)
Can it be used for dynamic SLAM as well?
@matlabbe (6 years ago)
The algorithm doesn't segment dynamic objects, so if something is moving while the robot is mapping, the object will appear multiple times in the 3D point cloud. In the occupancy grid output format, the object can be cleared.
@shrijank522 (6 years ago)
@@matlabbe So for a dense dynamic environment, would an occupancy grid with a laser scan matching technique work?
(5 years ago)
Is this possible with the Intel RealSense T265?
@matlabbe (5 years ago)
Yes, see this post: official-rtab-map-forum.67519.x6.nabble.com/Slam-using-Intel-RealSense-tracking-camera-T265-td6333.html
@FutureAIDev2015 (8 years ago)
How do you deal with THIS much data? I bet if I tried to store all that on my Arduino, it'd probably not like it very well.
@matlabbe (8 years ago)
For visual odometry, at least a laptop's CPU is required to get a decent odometry update frequency (>10 Hz). If your odometry is computed externally to the Arduino, for the 3D map there are options to save the RGB-D images at a lower resolution to save some space, or not to save them at all and just keep the occupancy grid with the visual words used for loop closure detection. EDIT: Note also that you don't need to visualize the map on the Arduino; the resulting database can be opened afterwards on a desktop computer for visualization.
@blackvalley007 (6 years ago)
These stereo cameras surely obliterate lidar systems when it comes to 3D mapping processing speed. They probably won't have depth information as accurate as a lidar, but that won't be much of an issue for my project (a low-speed robot with obstacle avoidance). Interesting video!
@tokyowarfare6729 (8 years ago)
Unable to calibrate from video. When I click "Calibrate" a "Camera initialization failed" error appears. I'm testing with a new camera, a 3D camera instead of a dual-camera rig, and got this issue. I also tried to calibrate with your calibration sample video after loading the tutorial *.ini file and I get the same error; as soon as I switch to Images mode, no errors appear. I believe that in SBS video mode it looks for a camera instead of a file, as if it were looking for a USB cam. I'll extract frames from the video to try to calibrate the new camera.
@matlabbe (8 years ago)
With the calibration sample video, it works here. What is the full error message in the terminal or in the console of the main window? What is the output of the 3D camera?
@tokyowarfare6729 (8 years ago)
This is what I get: [ERROR] (2016-09-21 20:13:04) CameraStereo.cpp:1237::rtabmap::CameraStereoVideo::init() CameraStereoVideo: Failed to create a capture object! [ WARN] (2016-09-21 20:13:04) PreferencesDialog.cpp:4275::rtabmap::PreferencesDialog::createCamera() init camera failed...
@matlabbe (8 years ago)
OpenCV cannot create a capture object for the video. This could be a video codec issue. The codec used for the sample videos is "H264 - MPEG-4 AVC (part 10) (H264)" (from VLC codec info). The Windows binaries should have H.264 support (OpenCV built with the x264 library).
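A quick standalone check (the file path is a placeholder) to confirm whether your OpenCV build can decode the file at all, before digging further into codecs:

```python
# Quick check (standalone script, not part of rtabmap): can this OpenCV build
# actually decode the video? If isOpened() is False, it is usually a missing
# codec (e.g. no H.264 support) or a wrong file path.
import cv2

cap = cv2.VideoCapture("stereo_calibration_sample.mp4")  # placeholder path
print("opened:", cap.isOpened())
if cap.isOpened():
    ok, frame = cap.read()
    print("first frame read:", ok, "size:", None if not ok else frame.shape)
cap.release()
```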
@tokyowarfare6729 (8 years ago)
OK, I'll check, but I'm on Windows 10 and I can open the videos without issues from Explorer.
@tokyowarfare6729 (8 years ago)
Hmm, still no luck. I installed OpenCV (the pre-built one for Windows), added the environment variables and the Path as explained in this install guide for dummies: homepages.ed.ac.uk/cblair2/opencv/opencv_tutorial.pdf (adapting the path). I guess what is truly necessary is to build it in Visual Studio with the x264 library... a bit out of the reach of mortals. Over the weekend I'll try to make in-car footage with the new stereo camera and, if I have time, with the dual CCD cameras too.