One of the unfortunate general problems with stereo disparity and optical flow solutions is how they behave in the presence of occlusions.
@oursigns6318 4 months ago
The free version of Leonardo is nothing like your video any more. Perhaps a year ago it was relevant.
@ugocapeto3d 4 months ago
Yeah, things change very fast in the world of AI.
@MarioMarinho-j4s 4 months ago
Hello, my name is Mario. I live in Brazil, and this is exactly what I was looking for. Thank you.
@allourep 5 months ago
Is there a way to take a 2D video and make it into 3D?
@ugocapeto3d 5 months ago
Yes, I believe so. You may have to google that one, though. Note that it will most likely be done frame by frame. Maybe Hugging Face has something, or somebody might have to put up a Python notebook on Google Colab to do just that. I know some people do video conversions, like Philip Heggie (he's on youtube, so you can ask him about his process).
@abhinavmohan2657 6 months ago
So, I have a 94 LPI lenticular sheet and a 600 dpi Brother 2320D printer. Can I expect to make and print something at home? This is just for testing purposes; if it works, I can always get it printed by a professional high-quality printer. I am looking to have 10-15 frames of animation. What is the best combination for this use case?
@ugocapeto3d 6 months ago
94 lpi??? Are you sure??? You can do good stuff with inkjet if you are under 50 lpi. At 600 dpi on a 94 lpi sheet, you only get about 6.4 printer dots per lens, so 10-15 frames would leave less than one dot per frame. For animation, it's difficult if the images vary rapidly, like a batter swinging a bat: you ain't gonna see the bat moving, basically. If it's slow moving, it should be ok. You can definitely try stuff at home, just use good quality paper and make sure you do a pitch test first. I am really not an expert at doing lenticulars.
@thehulk0111 6 months ago
Could you try CREStereo and put it on Colab? 😁
@alexmejias6158 7 months ago
Hello! First of all, thank you for sharing this amazing project... I have a question about how to change the number of brush strokes for large images, because in my projects the image does not have as much detail as I expected.
@ugocapeto3d 7 months ago
Thanks. In this day and age, it's probably easier to use AI to generate painterly images from photos. But, to try to answer your question: if you look at the input file github.com/ugocapeto/thepainter/blob/main/main/test/waterlilykiwi/thepainter_input.txt, you can see that the number of brushes used is 5, starting with a radius of 128 pixels, then 64, then 32, then 16, then 8. The lower the brush radius, the more detail you are gonna get. Also, the 4th number, here 50.0, indicates how close the painting should be to the original photo, so if you put 20.0 instead of 50.0, you are gonna get more brushstrokes. What I used to do is create several output paintings at different levels of resolution and then use Gimp to combine the paintings. Hope that makes sense. Note that the code is available on github at github.com/ugocapeto/thepainter. Also, note that I haven't touched this stuff for a few years, so I don't remember all the details, but I do try my best to explain what I recall :)
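For anyone who wants to see the general idea in code: below is a heavily simplified coarse-to-fine painterly pass in Python. This is NOT thepainter's actual algorithm or input format, just a sketch of the big-brushes-first idea described above; the 50.0 threshold here plays the same role as the fidelity number (lower = more strokes), and the file names are made up.

    import numpy as np
    from PIL import Image, ImageDraw, ImageFilter

    src = Image.open("photo.jpg").convert("RGB")
    canvas = Image.new("RGB", src.size, "white")

    for radius in [128, 64, 32, 16, 8]:
        # Target at this scale: the photo blurred so only features >= radius remain.
        ref = src.filter(ImageFilter.GaussianBlur(radius))
        err = np.abs(np.asarray(canvas, float) - np.asarray(ref, float)).sum(axis=2)
        draw = ImageDraw.Draw(canvas)
        ys, xs = np.nonzero(err > 50.0)                    # fidelity knob: lower -> more strokes
        for i in np.random.permutation(len(xs))[:2000]:    # cap strokes per pass
            x, y = int(xs[i]), int(ys[i])
            draw.ellipse([x - radius, y - radius, x + radius, y + radius],
                         fill=ref.getpixel((x, y)))

    canvas.save("painting.png")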
@rafatsheikh8442 9 months ago
What is this app's name?
@rafatsheikh8442 9 months ago
How do I download this app?
@Axis23 9 months ago
👍 Thanks for the information! 👌
@viktorreznov1687 9 months ago
Hi, thank you for the video. Even though I uploaded 2 stereo images and followed the same steps you did, it gives an output with effects instead of a depth map. No matter what I did, I couldn't fix it. Can you help me?
@produccionessobrinas7594 9 months ago
Man, I really appreciate your video, but sadly I have to say this is not working for me. It seems Leonardo AI has changed its model, and now your tutorial, at least for me, does not work. Is there some other way to get this oil painting effect now?
@ugocapeto3d 9 months ago
Things change very rapidly. You can try Hugging Face spaces that relate to Stable Diffusion: huggingface.co/spaces?sort=trending&search=stable+diffusion. You are gonna need an image-to-image one, and in the prompt, put "oil painting" or something like that.
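If the spaces keep breaking, a minimal local sketch with the diffusers library does the same image-to-image trick. The checkpoint name, strength value, and file names below are just example choices, not anything from the video:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    # Load a Stable Diffusion img2img pipeline (any SD 1.5-class checkpoint works).
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    init = Image.open("photo.jpg").convert("RGB").resize((512, 512))
    # strength controls how far the result drifts from the input photo:
    # lower keeps more of the photo, higher paints more freely.
    out = pipe(prompt="oil painting", image=init, strength=0.6).images[0]
    out.save("oil_painting.png")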
@TheMediaMachine 3 months ago
I answered in detail, but YouTube deleted it. It was a step-by-step comment I posted here. A pity, because then you would have known how to do it.
@MyHe-art 10 months ago
Please help. When I try to open DPT Large as in the tutorial, I get a runtime error. What should I do about it?
@ugocapeto3d 10 months ago
Yeah, I don't know what's going on with those runtime errors. All the MiDaS stuff on Hugging Face has runtime errors. Does anybody know why???
@1qzurliu23 10 months ago
Hi! I've triple-checked my pitch test, etc. Printing at 600 dpi with 75 lpi lenticulars (I tried 2/4/8-image flips), it just doesn't flip nicely; I can always see part of the other images in my view. Any idea what went wrong?
@ugocapeto3d 10 months ago
A 2-image flip should work as long as the lpi is not too high. Higher lpi (like 75) is difficult to deal with. If you have more images to flip, you're likely to get ghosting, but a 4-image flip should be ok, I think. 8 is pushing it, especially if the images are completely different. With 60 lpi or lower, you should be able to do animations with 8 images, but that assumes the images are not too different from each other. If you do an animation of a bat swing in baseball, the bat movement will not be sharp. Note that I am not an expert in making flips, but I have tried to do animations with 60 lpi, and it's quite difficult if not impossible with an inkjet printer.
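To put numbers on this: each lenticule only has printer_dpi / lpi printed dots under it, shared by all the interlaced frames. A quick back-of-the-envelope sketch (generic arithmetic, not tied to any particular interlacing tool):

    # How many printer dots each frame gets under one lenticule.
    def dots_per_frame(printer_dpi, sheet_lpi, num_frames):
        dots_per_lens = printer_dpi / sheet_lpi   # total printed dots under one lens
        return dots_per_lens / num_frames         # dots left for each interlaced frame

    print(dots_per_frame(600, 75, 2))   # 4.0  -> a 2-image flip is comfortable
    print(dots_per_frame(600, 75, 8))   # 1.0  -> one dot per frame, ghosting likely
    print(dots_per_frame(600, 60, 8))   # 1.25 -> why 60 lpi or lower is easier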
@1qzurliu23 10 months ago
Hi~ I see Grape and SuperFlip only do vertical lenticulars. How can I use them horizontally? Rotate my original images 90 degrees?
@ugocapeto3d 10 months ago
Yes, correct. The only rule to follow, if you use an inkjet printer, is that you need to print so that the "stripes" are at 90 degrees with respect to the print head carrier. In other words, you don't want the stripes running in the same direction the print head carrier moves. This gives you a better print for lenticulars. At least, that's what I have always been told.
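So in practice, if your interlacer only does vertical stripes, rotating the source frames 90 degrees before interlacing is the whole trick. A tiny sketch with Pillow (the folder and file names are made up):

    import glob
    from PIL import Image

    # Rotate each source frame 90 degrees so a vertical-only interlacer
    # produces stripes that end up horizontal on the final piece.
    for path in sorted(glob.glob("frames/*.png")):
        img = Image.open(path)
        img.rotate(90, expand=True).save(path.replace(".png", "_rot.png"))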
@1qzurliu23 10 months ago
Got it, many thanks @ugocapeto3d.
@jordanlotus188 10 months ago
nice
@leoncioresende6955 10 months ago
How do I download this article as a PDF? Where can I find it? I searched here on Google, went into that blog, and did not find the article there.
@ugocapeto3d 10 months ago
Check www.dropbox.com/s/wsuelhwgxnj8a6n/ugosoft3d-11-x64.rar?dl=0 This archive contains SfM10 and MVS10 and includes the manuals in PDF form. 3dstereophoto.blogspot.com/2016/04/structure-from-motion-10-sfm10.html 3dstereophoto.blogspot.com/2016/04/multi-view-stereo-10-mvs10.html
@yuvrajsinghrajpurohit3341 11 months ago
Thanks, it helped me a lot 😁
@ugocapeto3d 10 months ago
Glad it could be of use. I had fun making the video as I love making "paintings" from photos.
@fellowkrieger457 11 months ago
Weird subject to choose :S, nice reconstruction though.
@jordanlotus188 11 months ago
nice
@teresa6775 a year ago
words would be nice
@derekhaller1835 a year ago
I get this every time I use the server: TypeError: expected size to be one of int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int], but got size with types [<class 'numpy.int64'>, <class 'numpy.int64'>]
@redman458 a year ago
Looks like Colab broke the notebook with its update. I fixed the issue by reverting to a previous version of Torch.
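If reverting Torch is inconvenient, the error message above points to another workaround: newer Torch versions reject numpy.int64 in size arguments, so casting to plain Python int at the call site usually clears it. A sketch (the variable names are hypothetical, not from the actual notebook):

    import numpy as np
    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 3, 240, 320)          # stand-in input tensor
    target = np.array(x.shape[-2:]) * 2      # numpy.int64 values, as in the error
    # Cast to plain int before handing the sizes to torch:
    out = F.interpolate(x, size=(int(target[0]), int(target[1])),
                        mode="bilinear", align_corners=False)
    print(out.shape)                         # torch.Size([1, 3, 480, 640])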
@kinnguyen635 a year ago
Thank you for your video! Please let me know if there is an update so that I no longer need to adjust the image size to 512x512.
@jordanlotus188 a year ago
Very nice. Thanks!!
@faqu2gamer566 a year ago
I just stumbled onto this YouTube channel; I'm very interested in the potential here to create bas reliefs. The test will be how something like ZBrush, or a CNC carving program like Aspire, handles the greyscale height map once it is imported. So far most such programs fail, as there are too many artifacts that have to be attended to, which makes it pretty much impossible to machine with a CNC. Hope this works. Thank you for the tutorial.
@ugocapeto3d a year ago
Hi, don't get your hopes up too high. The depth maps obtained will, in most cases, require hand-tuning. But MiDaS v3.1 is the most advanced AI tool for getting depth maps from single images. As it gets updated, it gets better and better, but the updates don't come out too often.
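If the Hugging Face spaces keep erroring out, you can also run MiDaS yourself with a few lines of Python. This sketch follows the usage shown in the isl-org/MiDaS README, so treat it as a paraphrase of their documented API rather than anything from this video (the image file name is made up):

    import cv2
    import torch

    # Load the DPT-Large MiDaS model and its matching input transform via torch.hub.
    midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")
    midas.eval()
    midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

    img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
    batch = midas_transforms.dpt_transform(img)

    with torch.no_grad():
        pred = midas(batch)
        # Upsample the low-resolution prediction back to the input size.
        pred = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=img.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze()

    depth = pred.numpy()   # relative inverse depth: larger values = closer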
@faqu2gamer566 a year ago
@ugocapeto3d I suspected as much; over the many past years, things have gotten close but not close enough. Hand-tuning was a must with everything else I have seen previously (non-AI). ZBrush does a wonderful job of creating an almost perfect bas relief from a 3D scene setup. I have seen Midjourney images of depth models/bas reliefs that look amazing, but I suspect they are only for viewing and not for real-world application. Ty for your comment.
@yonahbs a year ago
Great tutorial! 👏🏾👏🏾👏🏾
@ugocapeto3d a year ago
Thanks a lot!
@MundusVR a year ago
Hello Ugo. I've only relatively recently been researching how to make lenticular images. I am trying to get a sequence of photos from a 2D image and a depth map image. From what I saw on your blog, you also use a program called Frame Sequence Generator 6 (FSG6), but I found it a bit complicated, with many steps. I've been playing around with StereoPhoto Maker 6.25a, and I think there is an easier way to make a sequence of photos from a 2D image + the depth map. Have you used it? Oh! And thank you very much for your generosity in sharing all your knowledge and material online!
@milchoiliev4824 a year ago
The idea is very good, but the video is useless: the screen is blurry, you can't see what he is doing, and he does not speak, so there is no sound. The video will probably help people who are already good with Photoshop or GIMP, but it will be a waste of time for newcomers.
@ugocapeto3d a year ago
Sorry, this was done at a time when annotations were still a thing on YouTube. Anyway, this is old tech. I recommend using automatic AI tools now, like MiDaS V3.1; see this vid: kzbin.info/www/bejne/iGi7mKB5lL2opaM. If you want more ease of use, you can use the LeiaPix converter, although it is based on an earlier version of MiDaS: kzbin.info/www/bejne/oKLGlGuFlrtpeNE
@pranav_k__ a year ago
Do you by chance have any direction I can take in order to actually learn how these monocular depth map models work? I would appreciate it if you could point me in the right direction if you have any info.
@ugocapeto3d a year ago
Check the github repo: github.com/isl-org/MiDaS. At the end, there is a link to their latest paper.
@pranav_k__ a year ago
@ugocapeto3d Well, I went through said paper, but I still struggled to actually understand how they got their loss function and such.
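For what it's worth, the loss is easier to digest stripped down. As I recall from the MiDaS paper (so double-check me against it), the models predict in disparity space, and because ground truth from mixed datasets comes with unknown scale and shift, the loss first aligns the prediction d to the ground truth d* per image by least squares and only penalizes what survives that alignment:

    (\hat{s}, \hat{t}) = \arg\min_{s,t} \sum_{i=1}^{M} (s\,d_i + t - d_i^{*})^2,
    \qquad
    \mathcal{L}_{\mathrm{ssi}} = \frac{1}{2M} \sum_{i=1}^{M} \rho(\hat{s}\,d_i + \hat{t} - d_i^{*})

Here M is the number of valid pixels and ρ is a robust penalty (they trim the largest residuals). The alignment has a closed-form solution, so it's cheap to compute inside the loss, and a separate multi-scale gradient-matching term keeps depth edges sharp.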
@dankdreamz a year ago
I appreciate you taking the time to make videos. They have always been interesting.
@ugocapeto3d a year ago
Thanks a lot for your comment!
@Der_X_Buddne a year ago
Great tech and thanks for sharing! Is there a way to get those colored depth maps too?
@luchoprata a year ago
Thanks a lot! I've been searching a long time for a simple explanation!
@acadvideoart a year ago
NOTHING WORKS FOR ME, AND I HAVE TRIED IT SEVERAL TIMES ALREADY!
@jordanlotus188 a year ago
nice
@thelightsarebroken a year ago
Thanks for this, really enjoying playing with this effect this eve!
@thes3Dnetwork a year ago
Owl3D does this too. I found that it's good to test both LeiaPix and Owl3D, because one of them might do better than the other at converting.
@Omnifonist a year ago
I am not able to get a higher resolution than 1024 px. Do you have any ideas on this? How can I access a higher output resolution?
@tomaskrejzek9122 a year ago
Can I generate in batch for multiple images?
@AnasQiblawi a year ago
I agree 👍💯
@AnasQiblawi a year ago
Can you try ffmpeg and review its results?
@nuvotion-live a year ago
I recreated this using ffmpeg minterpolate. It's much slower and the results are more datamosh-y. But still cool.
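In case anyone wants to reproduce that, the filter call looks roughly like this (wrapped in Python here; mi_mode=mci selects ffmpeg's motion-compensated interpolation, and the fps value and file names are just example choices):

    import subprocess

    # Motion-compensated frame interpolation with ffmpeg's minterpolate filter.
    subprocess.run([
        "ffmpeg", "-i", "input.mp4",
        "-vf", "minterpolate=fps=60:mi_mode=mci",
        "output.mp4",
    ], check=True)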
@lucifergaming839 a year ago
Hello, can you help me? I want to use 3D inpainting on Google Colab, and I have used it quite a lot in the past, but there is some error regarding cynetworkx or other things like torch. Can you try to update them and make a working Colab? Please.
@jordanlotus188 a year ago
very nice!!
@tombradford7035 a year ago
Your sound is crap.
@vinayaka.b1494 a year ago
Thank you for the tutorial, it was really helpful.
@ugocapeto3d a year ago
You're welcome!
@EditArtDesign a year ago
NOTHING WORKS FOR ME, AND I HAVE TRIED IT SEVERAL TIMES ALREADY!
@BrawlStars-jd7jh a year ago
Thanks for sharing this. Do you know if there is any way to download the .obj model with the point cloud render mode?
@ugocapeto3d a year ago
You can download an obj with depthplayer.ugocapeto.com, but you will lose all the colors.
@BrawlStars-jd7jh a year ago
@ugocapeto3d Yes, I already did that, but when I open the .obj model in Blender, it appears in the "solid" render mode. For the colors, I just have to apply the base image as a material.
@tombradford7035 a year ago
You're so long-winded...
@vanillagorilla8696 a year ago
I wish I could use a depth map I made with it.
@ugocapeto3d a year ago
If I remember correctly, you can use your own depth maps if you use the implementation that's on Google Colab. I remember making a video about it.
@ugocapeto3d a year ago
This one: kzbin.info/www/bejne/q4PXoKVrepKdpMk. At around 18:52, I use my own depth map.
@VirtualTurtle a year ago
When I run Create Depth Map, it opens up the GUI for dmag5 and asks me to input images and disparities, seemingly disregarding the entire host program. Any reason why this might happen? Thanks!
@ugocapeto3d a year ago
You need to download the nogui archive; you must have downloaded the other one. Follow this tutorial by the SPM creator if you have difficulty (in particular, step 4): www.stereo.jpn.org/eng/stphmkr/makedm/index.html. Note that this is for getting a depth map from a stereo pair. If you have a single image, I recommend MiDaS. See this video, where I compare SPM dmag5/dmag9b and MiDaS V3.1: kzbin.info/www/bejne/jXuxgKGCmb5sgck