One of the unfortunate general problems with stereo disparity and optical flow solutions is their behavior in the presence of occlusions.
@vanillagorilla8696 · 2 years ago
Can it be used with video?
@ugocapeto3d · 2 years ago
No, it's only for a pair of static images.
@vanillagorilla8696 · 2 years ago
Fair enough. I found some other stuff in my studies.
@importon · 2 years ago
I find I can get better results than DMAG by using RAFT optical flow and changing the colors in flow_viz to greyscale.
@ugocapeto3d · 2 years ago
Do you have a link to what you use?
@importon · 2 years ago
@@ugocapeto3d It keeps deleting my comments when I try to tell you where to find it. I'm not really sure what to do.
@ugocapeto3d · 2 years ago
@@importon Oh yeah, I forgot about that. You can't put links in comments. Maybe give the keywords to find it on Google, or send me the link (my address is in the "about me" box on the sidebar of my website, 3dstereophoto.blogspot.com). Thanks in advance.
@ugocapeto3d · 2 years ago
Below is the email that importon was kind enough to send me:

"Here's the link to RAFT o-flow. To get the result into greyscale, mess with the colors in the flow_viz code. github.com/princeton-vl/RAFT

Oh, and there's also this, which looks super promising, but I was not able to get it running locally. If you manage to figure it out, I'm sure it would make another excellent tutorial video ;) github.com/princeton-vl/raft-stereo"
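For readers who want to try the trick described above without touching RAFT's flow_viz internals: the idea is that, for a rectified stereo pair, the horizontal component of the optical flow approximates disparity, so mapping it to greyscale yields a depth-map-like image instead of the usual color wheel. Here is a minimal NumPy sketch of that post-processing step. The function name `flow_to_grayscale` and the min-max normalization are my own assumptions, not RAFT code; it just takes the (H, W, 2) flow array that RAFT produces.

```python
import numpy as np

def flow_to_grayscale(flow):
    """Map the horizontal flow component to an 8-bit greyscale image.

    For a rectified stereo pair, horizontal flow approximates disparity,
    so this image can stand in for a depth map. `flow` is an (H, W, 2)
    array as produced by RAFT (u = horizontal, v = vertical).
    """
    u = flow[..., 0]                      # horizontal component only
    u_min, u_max = u.min(), u.max()
    if u_max - u_min < 1e-9:              # avoid divide-by-zero on flat flow
        return np.zeros(u.shape, dtype=np.uint8)
    norm = (u - u_min) / (u_max - u_min)  # rescale to [0, 1]
    return (norm * 255).astype(np.uint8)  # 8-bit greyscale "depth"
```

Note the min-max normalization is per-image, so greyscale values are only relative depth within one pair, not metric depth.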
@valdoraudsepp953 · 2 years ago
Multi-photo differentiable diffusion also seems to give very reliable depth maps (GitHub: diffdiffdepth). I tried to get it working on my Linux (Ubuntu, Mint) desktop PC but failed. It seems deep knowledge of Python is needed.
@ugocapeto3d · 2 years ago
Here is the link: github.com/brownvc/diffdiffdepth. That looks really good. It's multi-view, though; I would be interested in seeing what you typically get with just 2 views. Hopefully, they will set up a Google Colab so that the rest of us can try it.