The quality of this video and audio is crystal clear. Thank you for sharing this interesting knowledge.
@SohailSiadat 1 year ago
Also the quality of explanations
@SohailSiadat 1 year ago
Thank you Cyrill for teaching this and providing it for free. It is also well explained.
@CoinedBeatz 4 years ago
Great introduction to graph-based SLAM. Refreshing to see high-quality content about this topic accessible on YouTube. Highly appreciated!
@aidankennedy6973 3 years ago
Thank you so much for continuing to share such high quality education. This is by far some of the best content on SLAM available.
@CyrillStachniss 3 years ago
Thanks!
@olegzatulovsky5324 2 years ago
Thanks for sharing your knowledge on Pose Graphs.
@dimitrihaas 2 years ago
Best video on this topic. Thank you, Cyrill.
@eyalfriedman5972 3 years ago
Thank you, your lectures are amazing!
@444haluk 4 years ago
You are literally saving my life with this content.
@geethanarayanan2896 7 months ago
So much wisdom in these videos. Bedtime entertainment!
@janzim4640 2 years ago
Awesome videos! Thank you very much.
@jeffrey-antony 2 years ago
Thanks for the high-quality content. I really respect the effort you have taken to create these lectures. Thank you once again.
@CyrillStachniss 2 years ago
Thank you
@wouladjecabrelwen1006 3 months ago
I am new to SLAM, and this is the first course that helped me understand what SLAM is, the idea behind it, where to start, and where to go. Such a course should be preserved and taught to the next generation. Thank you Cyrill Stachniss.
@dave4148 4 years ago
I had to stick with it until the end before things cleared up for me, but great lecture. Thanks!
@dealeras2143 3 years ago
Thank you for these amazing videos. Could I by any chance find the algorithm for ORB-SLAM?
@shravanshravan4402 4 years ago
Thank you for sharing this presentation.
@BrunoSantos-ov1sw 3 years ago
This content is gold.
@BrunoSantos-ov1sw 3 years ago
Thanks for sharing this amazing content.
@CyrillStachniss 3 years ago
Thanks
@andreaspletschko8404 1 year ago
Hey Cyrill, thanks for the great materials you post on KZbin! I have a question regarding the slide "Building the linear system" (50:22) as well as the subsequent "Algorithm" slide (51:38). Do the indices i,j at the matrix H_ij bar and the vector b_ij bar correspond to matrix/vector indices, or are these completely different matrices/vectors? And what exactly is the form of those matrices and vectors? As I understand it, A_ij and B_ij are matrices themselves, so H_ij bar is itself a matrix and b_ij bar a vector. Furthermore, I'd like to understand how exactly we obtain the matrix H and the vector b on the latter slide, given the matrices H_ij bar and vectors b_ij bar. Are these the same? I greatly appreciate your help!
@CyrillStachniss 1 year ago
Yes, i,j are indices in the matrices/vectors.
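For anyone else puzzling over that slide, here is a minimal sketch of the accumulation (my own toy version assuming 1D poses, scalar Jacobians, and identity information matrices, not the lecture's actual code): each edge contributes small blocks that are added into the big H and b at the positions given by its indices i and j.

```python
import numpy as np

# Each edge (i, j) with measurement z_ij has error e_ij = (x_j - x_i) - z_ij
# and Jacobians A_ij = de/dx_i = -1, B_ij = de/dx_j = +1 (scalars in 1D).
# The per-edge blocks go into H and b at positions (i,i), (i,j), (j,i), (j,j).
def build_linear_system(x, edges):
    n = len(x)
    H = np.zeros((n, n))
    b = np.zeros(n)
    for i, j, z in edges:
        e = (x[j] - x[i]) - z      # error of this edge
        A, B = -1.0, 1.0           # Jacobians w.r.t. x_i and x_j
        H[i, i] += A * A           # A^T * Omega * A with Omega = 1 here
        H[i, j] += A * B
        H[j, i] += B * A
        H[j, j] += B * B
        b[i] += A * e              # A^T * Omega * e
        b[j] += B * e
    return H, b

x = np.array([0.0, 1.0])                       # initial guess for two poses
H, b = build_linear_system(x, [(0, 1, 2.0)])   # one edge: x_1 - x_0 should be 2
H[0, 0] += 1.0                                 # fix the gauge: anchor x_0
dx = np.linalg.solve(H, -b)
print(x + dx)                                  # [0. 2.]: x_1 moved to match z
```

For full matrices A_ij and B_ij the scalar products simply become block products, which is how the H_ij bar and b_ij bar blocks form the big H and b on the "Algorithm" slide.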
@kevinr3798 8 months ago
Thank you for the video! I have one question. Consider that there is only one virtual measurement e_{i,j}. Given that the other poses do not appear in the error, how can these other poses be optimized as shown at 1:01:27?
@CyrillStachniss 8 months ago
With only one relative measurement, you constrain only two poses relative to each other. You will not fix them globally. For that you need an external reference, or you set your initial pose to something arbitrary. Otherwise, your H matrix will have a rank deficiency (a gauge freedom).
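The rank deficiency is easy to see numerically; here is a tiny 1D sketch of my own (not from the lecture):

```python
import numpy as np

# One relative measurement between two 1D poses: e = (x_1 - x_0) - z.
# Its Jacobian is J = [-1, 1], so H = J^T J is singular: shifting both
# poses by the same amount leaves the error unchanged (gauge freedom).
J = np.array([[-1.0, 1.0]])
H = J.T @ J
print(np.linalg.matrix_rank(H))   # 1, not 2: H cannot be inverted

H[0, 0] += 1.0                    # anchor x_0 (an arbitrary external reference)
print(np.linalg.matrix_rank(H))   # 2: the system is now solvable
```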
@Jahaniam 4 years ago
Thank you for making these videos publicly available. Awesome lectures. I was wondering if there is a way to access the homework too?
@CyrillStachniss 4 years ago
The homeworks are part of the exam admission process in Bonn, and thus sharing them is a bit tricky.
@hobby_coding 4 years ago
Best regards from Algeria, thank you!
@milingzhang6181 4 years ago
Thank you sir for sharing such wonderful lectures.
@zftan0709 4 years ago
Awesome video! Thanks for all the great explanation. It would be great if you could talk about marginalization of the Hessian Matrix in a sliding window method.
@GeorSala213 3 years ago
Prof. Cyrill, many thanks for uploading this wonderful set of videos. If I may, I would like to ask a few questions (responses from the audience are also well appreciated):
a) I found it hard to understand the physical meaning of the matrix H, hence the addition of 1 to H11 to fix x1 feels somewhat arbitrary to me. Although I am not sure, it could potentially be possible to fix x1 by introducing a fictitious measurement such as e11 = z1 - x1, where z1 = 0 (to fix it to zero) and Omega_11 is a very high number (infinity?), as we want to express confidence in this measurement. However, by doing so we also add a b11 term (not only H11), which seems quite different from what has been described in the video.
b) My understanding is that the optimization algorithm used is called Newton's method (Newton-Raphson), whereas in the slides I see it named the Gauss-Newton method. Am I mistaken?
c) I was trying to find this t2v Matlab function but did not really manage to. Could anyone provide a link (or any other source of information) where I can find what it does and how it does it?
d) Regarding the mapping, does the vehicle store all the measurements and regenerate the map from scratch every time the optimization step is performed?
I am sorry for the quite long comment. Thank you.
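Regarding (c): I do not have the original course file, but t2v and v2t are small helpers that convert between a pose vector and a homogeneous transformation matrix. A sketch of what they typically do for 2D poses (my own Python version, with behavior assumed from the common Matlab convention):

```python
import numpy as np

def v2t(v):
    """Pose vector (x, y, theta) -> 3x3 homogeneous transformation matrix."""
    x, y, th = v
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def t2v(T):
    """3x3 homogeneous transformation matrix -> pose vector (x, y, theta)."""
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

v = np.array([1.0, 2.0, 0.5])
print(t2v(v2t(v)))   # the round trip recovers the pose (1, 2, 0.5)
```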
@emerydergro5332 9 months ago
Awesome video!
@CyrillStachniss 8 months ago
Thanks!
@jeffreydanowitz3083 3 years ago
Hi Prof. Stachniss. Another great video. You've taught me a lot. I do have one question. In the 1D problem, when you need to "fix" the first node x1 since everything is relative, H is clearly rank deficient since it is the outer product of a 2D vector; the rank has to be 1. This, as you said, is due to the relative nature of the update. Then you add (in Matlab notation) [1 0; 0 0] to H. Can you explain why this is equivalent to fixing the first node to some value? I see that indeed in the end delta_x = (0 1), so the first node does not change position. But how did you know that adding [1 0; 0 0] to H would achieve this? My only inclination is to say that for any [a; b], [1 0; 0 0] * [a; b] = [a; 0] (again in Matlab notation), so that indeed "a" remains the same. Is this the rationale? Beyond that (and even including this issue), this is an amazing video, as are all your videos. I'm a huge fan! Thanks.
@abhishekgoel1687 3 years ago
Hello Jeffrey, by any chance were you able to understand the reason for it?
@maciejtrzeciak9249 2 years ago
Hi @Jeffrey! A great question! If you found an answer, please let us know.
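One way to see it (my own sketch, not an official answer): adding [1 0; 0 0] to H is exactly the Gauss-Newton contribution of a prior "measurement" e0 = x1 - 0 on the first node with unit information, provided the initial guess already has x1 = 0, in which case the extra b term is zero.

```python
import numpy as np

# A prior "measurement" e0 = x1 - 0 on the first node, with information 1,
# has Jacobian J0 = [1, 0] with respect to the state (x1, x2).
# Its Gauss-Newton contributions are J0^T * J0 to H and J0^T * e0 to b.
J0 = np.array([[1.0, 0.0]])
e0 = 0.0                 # prior error at an initial guess where x1 = 0

print(J0.T @ J0)         # [[1, 0], [0, 0]]: exactly the block added to H
print(J0.T * e0)         # zeros: b is unchanged, matching the video
```

So the "+1 trick" acts as a prior with unit information that pins the first node to its initial value; with a different initial guess it would also contribute to b.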
@akshayka9454 2 years ago
In the video at 29:50, there is an explanation as follows: "There is a difference between what the observation tells me and what the current graph configuration tells me." Where did this current graph configuration come from? How do we derive/calculate/arrive at this current graph configuration?
@yousofebneddin7430 4 years ago
Thanks for the video. Do you have extra resources about the information matrix: what it is and how to compute it?
@CyrillStachniss 4 years ago
The information matrix from the observation basically tells you how good (precise) your sensor is. Either the sensor has specs (precision of the depth reading for example) or it depends on how accurately you can determine the orientation to the object, which may relate directly to image/feature points that you extract.
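As a concrete illustration of this (with made-up sensor numbers): the information matrix is the inverse of the measurement covariance, so more precise sensor axes get larger weights in the error function.

```python
import numpy as np

# Assumed sensor: a 2D relative-pose measurement with standard deviations
# of 0.1 m in x and y and 0.05 rad in theta (illustrative numbers only).
Sigma = np.diag([0.1**2, 0.1**2, 0.05**2])   # measurement covariance
Omega = np.linalg.inv(Sigma)                 # information matrix
print(np.diag(Omega))                        # [100. 100. 400.]: precise axes weigh more
```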
@dorielar 3 years ago
Thanks for the great video and series! One question I have is related to the definition of the error function in the pose-graph setting: it is the norm of the residual rotation plus translation, but it seems that specifying a covariance is a must, since rotation and translation have different units. Are there good examples/mechanisms to construct the covariance?
@Jrang88 1 year ago
Can you recommend how to update the map after optimizing the graph? Should one update the current node or all nodes in the graph? Thank you so much.
@ThibaultNeveu 3 years ago
Thanks!!
@ShubhamKumarpro1 3 years ago
For detailed mathematical steps, please see this lecture - kzbin.info/www/bejne/fJnad6x3ZbOEoac
@SimmySimmy 2 years ago
Thanks for your comment! This link helped me a lot :)
@dr-x-robotnik 2 months ago
Love u
@ztyu-007 4 years ago
Thank you for making and sharing these videos. I can understand the measurement Z_ij^-1 in the error function, but I am curious about how we can get X_i^-1 X_j. Is it random, or is it deduced from loop closing?
@oldcowbb 2 years ago
So the main difference between graph-SLAM and EKF-SLAM is that graph-SLAM does not assume the Markov property and allows updating old estimates upon receiving new information?
@CyrillStachniss 2 years ago
No, not really. The LS solution can relinearize; the EKF cannot. That is the main difference in its basic form. Advanced LS (e.g. robust estimators) can however do more…
@oldcowbb 2 years ago
@@CyrillStachniss So if the system were completely linear, the two methods would give us the same result?
@mirellamelo 3 years ago
Thanks so much for sharing. The illustration of the pose graph with two poses helped me understand this specific case. But this is related to the moment relocalization is detected, right? So the graph says you are at x, but the measurements of the loop closure say you should be around "omega". And the difference between these positions is the error, right? But in this case, due to the relocalization, I can know where I should be, but how can I find the error for the other, previous pose estimates? Thanks in advance!
@pavancherukuri2824 8 months ago
Hello Prof, where can I find the homework assignments?
@CyrillStachniss 8 months ago
Send me an email
@pavancherukuri2824 8 months ago
Mailed, thanks for the reply, Professor.
@kanumarlapudisahithi5553 3 years ago
Thanks for the video, but I have a small question. In the example you have given, you calculated the actual edge as the distance between two nodes. What should the criteria be for 2D or 3D nodes?
@durandthibaud9445 4 years ago
Thank you for your course. As the error function always has the same form between two nodes, are the Jacobians always the same? And do they only take the values 1 or -1? It's hard to understand this mathematical abstraction.
@CyrillStachniss 4 years ago
No, they can vary for every measurement, and they are different from -1 and 1 in nearly all real-world cases (if the functions are non-linear).
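This can be checked numerically for 2D poses. A sketch of my own (using my own v2t/t2v helpers and a central-difference Jacobian, not the lecture code): the Jacobian of the pose-graph error changes with the linearization point.

```python
import numpy as np

def v2t(v):
    # pose vector (x, y, theta) -> 3x3 homogeneous transform
    x, y, th = v
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1]])

def t2v(T):
    # 3x3 homogeneous transform -> pose vector (x, y, theta)
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

def error(xi, xj, z):
    # e_ij = t2v(Z_ij^-1 * (X_i^-1 * X_j))
    return t2v(np.linalg.inv(v2t(z)) @ np.linalg.inv(v2t(xi)) @ v2t(xj))

def num_jacobian_xi(xi, xj, z, eps=1e-6):
    # central-difference Jacobian of e_ij with respect to x_i
    J = np.zeros((3, 3))
    for k in range(3):
        d = np.zeros(3)
        d[k] = eps
        J[:, k] = (error(xi + d, xj, z) - error(xi - d, xj, z)) / (2 * eps)
    return J

z = np.zeros(3)
xj = np.array([1.0, 0.0, 0.0])
J_a = num_jacobian_xi(np.array([0.0, 0.0, 0.0]), xj, z)  # heading 0
J_b = num_jacobian_xi(np.array([0.0, 0.0, 1.0]), xj, z)  # heading 1 rad
print(np.allclose(J_a, J_b))   # False: the Jacobian depends on the estimate
```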
@asafdahan6811 4 years ago
Hello, thanks for the great lectures! One question: how is it that e_{i,j} is not a matrix but a vector instead?
@CyrillStachniss 4 years ago
Because e_ij is the error vector (difference between what you should measure and what you actually measured) and not a matrix
@bankssurveyors 4 years ago
Truly great content. I enjoy the "light bulb" moments with the new content 2.0, "5 Minutes with Cyrill". I can only imagine the model for learning in content 3.0. Maybe paid online access to seminars (a one-day class) for solving these equations in depth for each of the topics? Keep going!
@florianwirnshofer6814 4 years ago
Hi Cyrill! Great lecture! I have two minor questions: 1. Are there any conceivable benefits to using this method prior to having achieved the first loop closure? 2. Assuming I have finished the mapping process, do you have a reference on how to use the pose graph for pure localization? Or does one usually just use the corrected map for things such as MCL?
@anascharoud4540 2 years ago
I have thought a little bit about these questions and realised that: 1) I need to check more for question 1. 2) Pose graphs, MCL, KF, and the like are just correction tools for the localization purpose; the main part of localization is motion tracking between states t-1 and t, which can be done with ICP, NDT, or any other motion tracking (registration) algorithm on lidar scans or matched camera images. This is what I understand. Please feel free to correct me if you have more information.
@edissonfabriciocanarortiz3487 4 years ago
Excuse me, could you explain the H matrix to me and how it is built?
@oldcowbb 2 years ago
I wonder why they are called constraints if they are basically meant to be violated.
@CyrillStachniss 2 years ago
It comes from "soft constraints". They are not hard constraints as in CSPs.