Just speculation, but I'm guessing you combat the motion blur either by using a really high shutter speed, or by having the lights strobe at a really high rate synced to the camera shutter. This is awesome, Robert; it's great to see your videogrammetry pipeline.
@BlenderBob · 10 months ago
High shutter speed. :-)
@jamess.7811 · 9 months ago
Why would a strobe be necessary? Why wouldn't you just have the lights on constantly?
@PhotiniByDesign · 9 months ago
@jamess.7811 It all depends on the camera, the lights, and the final output. Continuous lights aren't always suitable because of limits on output and flickering, especially if they aren't specifically designed for cinematography. A few years back I used synchronized strobes to shoot bats flying overhead, capturing several images of the bat in one photo: with a long exposure of 1.3 seconds and the strobes programmed to flash 5 times within it, I shot the same bat in mid-flight 5 times in one shot with no motion blur. Some sonar devices use the same principle to freeze frames.
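The multi-flash timing described above is just arithmetic: five flashes evenly spread over a 1.3 s exposure. A minimal sketch (a hypothetical helper, not anyone's actual firmware):

```python
def strobe_schedule(exposure_s, flashes):
    """Evenly space N flashes across one long exposure; returns
    firing times in seconds after the shutter opens."""
    interval = exposure_s / flashes
    # Fire each flash at the centre of its sub-interval so every
    # frozen image lands inside the open-shutter window.
    return [round(interval * (i + 0.5), 4) for i in range(flashes)]

# 1.3 s exposure, 5 flashes: five frozen bats in one photo
print(strobe_schedule(1.3, 5))
```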
@AliasA1 · 9 months ago
@jamess.7811 The idea is to keep the camera shutter open longer and let the strobing light be what limits motion blur. It's not necessary, just another way to do it that you might pick depending on the equipment you have on hand. Studio photography is often done this way, controlling the effective shutter duration with the flash duration instead of the camera setting.
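The effective-exposure point can be put in numbers: the streak length is subject speed times the time light is actually hitting the sensor. A tiny sketch (hypothetical figures, not from the video):

```python
def blur_px(speed_px_per_s, effective_exposure_s):
    """Streak length in pixels: how far the subject travels
    while light is actually reaching the sensor."""
    return speed_px_per_s * effective_exposure_s

# Shutter open for 1/50 s, but a 1/10000 s flash does the exposing:
print(blur_px(2000, 1 / 50))     # continuous light: roughly a 40 px smear
print(blur_px(2000, 1 / 10000))  # flash-limited: ~0.2 px, effectively frozen
```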
@zachhoy · 10 months ago
Bob, this is QUALITY! I can't wait to start getting into video production in the near future. I'm sure the 60k poly upper limit will eventually increase to 1M
@JorisPlacette-e5c · 26 days ago
Awesome video! @BlenderBob you may have figured it out already, but the 64k vertex cap can be disabled, allowing higher mesh resolution, which is critical when capturing 3+ people at the same time. The cap is there by default because of an encoding/decoding optimization for real-time playback in Unity and Unreal, which is irrelevant in your use case. I hope you are having fun with your capture studio! (PS: I may be one of the guys who came to your office to install your volumetric capture studio ;) )
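For context on where a "64k" cap usually comes from: a 16-bit index buffer can only address 2**16 = 65,536 vertices, so real-time mesh formats often default to that limit. A minimal sketch (a hypothetical helper, not any engine's actual API):

```python
def pick_index_format(vertex_count):
    """Choose the smallest index-buffer format that can address
    every vertex; uint16 tops out at 2**16 = 65,536 vertices,
    the usual source of a '64k' default cap."""
    if vertex_count <= 1 << 16:
        return "uint16"  # 2 bytes per index: compact and decode-friendly
    return "uint32"      # needed once the cap is lifted

print(pick_index_format(60_000))
print(pick_index_format(250_000))
```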
@BlenderBob · 26 days ago
Really? Cool! Have you ever set up a system in Montreal?
@willowproduction · 10 months ago
Man, what the actual frack. BRAVO
@luciox2919 · 10 months ago
Thank you, Blender Bob, for sharing the professionalism of Real by FAKE with us.
@MediaWayUKLtd · 10 months ago
Really impressive Blender Bob! I hope this is really successful for you!
@llbsidezll · 10 months ago
I'd be interested in seeing how this could be implemented in VR. Current 3D video breaks immersion as soon as you try to move and look around.
@BlenderBob · 10 months ago
Most of the videogrammetry systems have been developed for VR, so you can find lots of information on the web.
@AyushBakshi · 10 months ago
Interesting!
@Ruan3D · 9 months ago
That's pretty AMAZING Robert!! Thanks for sharing.
@electronicmusicartcollective · 9 months ago
WOW
@kidfl4sh295 · 10 months ago
I see a lot of possibilities for game stuff and for some VFX sequences, like simulations applied to the body and whatnot. But for background characters, how usable is this? On a set, wouldn't it be less trouble to have extras on set?
@scottesplin4426 · 10 months ago
Amazing, Mr. Bob! Busy pushing the boundaries as always... while your cat lives the high life. 😹
@themightyflog · 10 months ago
I want more information! Wow!
@BlenderBob · 10 months ago
Ask away
@PrinceWesterburg · 10 months ago
Wow. I remember seeing CSO (Colour Separation Overlay) done on the BBC in the early '70s as a child; now, 50 years later, that era is home-movie tech and you've moved on to the next generation. With AI this will become easier and easier: look at the one-image-to-3D-model tech that exists now. This is going to grow and grow. Amazing to see!
@BlenderBob · 10 months ago
Yep. As director of innovation and technology, it's my job to check out all the new stuff.
@vinnypassmore5657 · 10 months ago
Looks fantastic, nice job. Thanks for sharing.
@SquirrelTheorist · 9 months ago
This is absolutely brilliant! I wonder if this will eventually handle reflective surfaces, as with Instant NGP NeRFs, by using radiance instead of meshes. Still, it's insane that something like this exists, and you guys handle it really well. Thank you for sharing these developments. Although I probably couldn't afford it, I would love to test the limits of this system, like tossing objects and watching them appear and disappear from the 3D output. Could make for some nice 3D magic tricks!
@starwars9191 · 10 months ago
If you extend the scenes, do you have to reshoot the videogrammetry, or are they looped in some magical way?
@BlenderBob · 10 months ago
We can morph two animations together, to a certain limit. What exactly do you mean by extend?
@EdLrandom · 10 months ago
This is sick. If you need close-ups, you might be able to give these characters actual CG hair particle systems, if only you could find a way to mount a tiny camera close to the actor's face, paint or key it out, and project that sequence back onto the character's face.
@BlenderBob · 10 months ago
That would actually be possible, but the geometry wouldn't be high-res enough anyway.
@ZeroBudgetDevelopments · 2 months ago
Hi Blender Bob, how do I get in touch with you? I would like to speak with you, please :)
@BlenderBob · 2 months ago
Tiki.movie.bb at gmail dot com
@amazinggraphicsstudios · 10 months ago
You are always super, thank you. But please, what software do you use for the videogrammetry?
@FireAngelOfLondon · 10 months ago
It's their own custom software; that's the whole point of this video: they are promoting their services for 3D capture. It isn't for sale and probably won't be.
@amazinggraphicsstudios · 10 months ago
@FireAngelOfLondon Ok, thank you.
@MellowMelodiesHub612 · 10 months ago
Looking forward to hearing more from you, Bob.
@Voicetaco · 10 months ago
Why are you using a green screen? In my experience with photogrammetry, you wouldn't necessarily need a green screen to key a person out of the background, as that already happens when capturing the person with multiple cameras. What is your reason for using a green screen when I've seen others do videogrammetry effectively without it and get the same results?
@BlenderBob · 10 months ago
It's the most efficient way to extract the character from the BG. Check the BCON 2023 clips on the Blender channel on YouTube; I have a more detailed explanation there. But I know the goal is to eliminate it.
@unrealengine1enhanced · 10 months ago
Imagine the ability to doctor other people's videos with this technology, rofl. This tech gives a whole new meaning to the term "trick photography".
@BlenderBob · 10 months ago
Isn’t that the definition of VFX?
@Nicollaos · 10 months ago
Amazing technology!
@Vassay · 10 months ago
Looks pretty nice! How many cameras are you using, and how much data does one second of a character's performance produce?
@BlenderBob · 10 months ago
32 cams. The files are huge: 8 GB for the guy juggling.
@Vassay · 10 months ago
@BlenderBob The big size is to be expected =) Quite good quality for only 32 cams; great job!
@GaryParris · 10 months ago
Well done, I hope it's a success for you.
@uttula · 9 months ago
I guess the next step for even higher fidelity and further options would be to implement Gaussian-splatting principles, just like the recent evolution from simple photogrammetry => NeRFs => Gaussian splats :)
@BlenderBob · 9 months ago
You can't shade splats.
@uttula · 9 months ago
The Blender plugins I've seen are admittedly still quite limited, but based on what I've already seen done in other engines, I'm feeling positive that eventually we should get to a point where they become highly useful for all sorts of things. We might not be there yet, but Rome wasn't built in a day; it could well be worth at least keeping an eye open. The road from research papers and proofs of concept to this day has been staggeringly fast, and people are continuing to make things better all the time. Of course, I could simply be hopelessly optimistic :D
@unrealengine1enhanced · 10 months ago
Amazing work, guys.
@superkaboose1066 · 10 months ago
Very cool! The crowd demo looked insane.
@vassilidario8029 · 10 months ago
Hey that's pretty neat
@S9universe · 10 months ago
I'm curious about the tool :)
@BlenderBob · 10 months ago
What do you want to know?
@S9universe · 10 months ago
Pricing, conditions, and what format does the app come in, please?
@BlenderBob · 10 months ago
The price depends on the project: how many characters, how long the sequences are. We generate Alembic files, or FBX if you need a skeleton. If you have a project that could use this tech, please contact us at Real by FAKE. :-)
@S9universe · 10 months ago
Thank you.
@johntnguyen1976 · 10 months ago
So next level!
@xalener · 10 months ago
How the hell did you get motion blur working here?
@BlenderBob · 10 months ago
Secret recipe
@keithtam8859 · 10 months ago
Clever.
@bomosley9226 · 10 months ago
Whoa
@tgavel4691 · 10 months ago
Wow - very cool!
@thenout · 10 months ago
Bam! Does the Head of Innovation need an intern by any chance?
@BlenderBob · 10 months ago
Do you live in Quebec?
@thenout · 10 months ago
@BlenderBob Narp, Berlin. But hey, ready when you are. I'd even make coffee (in Blender, that is).
@keysignphenomenon · 10 months ago
Thanks, Bob 👏
@davebulow2 · 10 months ago
Very impressive, Bob! I have to ask, how on earth did you do the motion blur? Surely the mesh is a different mesh from frame to frame, and the vertices don't have a reference point from the previous frame?
@BlenderBob · 10 months ago
Secret recipe ;-)
@Vassay · 10 months ago
I would do it AFTER rendering the 3D person: calculate motion vectors from the rendered 2D image and use those to drive the motion blur. Easy, and it should be more than enough for mid-to-far characters.
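As a toy illustration of that post-render idea (a hypothetical 1-D sketch, not anyone's actual pipeline): once each pixel has a motion vector, you can smear it by averaging samples taken backwards along that vector.

```python
# Minimal 1-D vector motion blur: each output pixel averages samples
# stepped back along its own motion vector (a horizontal offset in px).
def vector_blur(row, motion, samples=4):
    n = len(row)
    out = []
    for x in range(n):
        total = 0.0
        for s in range(samples):
            # step back along this pixel's motion vector and resample,
            # clamping to the image bounds
            src = min(max(int(round(x - motion[x] * s / samples)), 0), n - 1)
            total += row[src]
        out.append(total / samples)
    return out

row = [0, 0, 0, 10, 0, 0, 0]        # one bright pixel
print(vector_blur(row, [2] * 7))    # the highlight smears into a streak
```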
@spitfirekryloff744 · 9 months ago
The first thing that comes to mind would be to turn all the individual captures into a single animated mesh with 100+ shape keys (one shape key per capture) and thus get motion blur when rendering inside Blender. But that seems like a very tedious method, unless there is a way to automate the process.
@Vassay · 9 months ago
@spitfirekryloff744 That would work if the topology were consistent between frames, and it's not; it literally cannot be, because each frame is a totally different mesh =)
@BlenderBob · 9 months ago
I'll give you a hint: water simulation. The geometry changes every frame, yet it's still possible to get motion blur. The vectors are not computed in Blender; it's done in the proprietary software.
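A sketch of the fluid-sim trick (an assumption about the general approach, not Real by FAKE's actual code): even when every frame is a different mesh, you can estimate a per-vertex motion vector by matching each vertex of frame N to its nearest point on frame N+1, then let the renderer blur along those vectors.

```python
def nearest(p, points):
    """Brute-force nearest point by squared distance."""
    return min(points, key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))

def velocity_vectors(frame_a, frame_b, fps=24):
    """Approximate per-vertex velocity between two meshes with
    unrelated topology, in units per second."""
    vel = []
    for p in frame_a:
        q = nearest(p, frame_b)                       # closest surface point
        vel.append(tuple((b - a) * fps for a, b in zip(p, q)))
    return vel

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # frame N
b = [(0.1, 0.0, 0.0), (1.1, 0.0, 0.0)]   # frame N+1: moved +0.1 in X
print(velocity_vectors(a, b))            # ~2.4 units/s along X for both
```

Brute-force matching is quadratic; a real implementation would use a spatial index, but the principle is the same.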
@rekad8181 · 10 months ago
The future is definitely Gaussian splats, and even prompt generation. If I were you, I would spend a week doing thousands of shots and feeding that data into an AI, to then generate the action you want on any skeleton from a prompt. ChatGPT could probably guide you through the process 🎉
@BlenderBob · 10 months ago
Try to rig, key, and shade Gaussian splats, and then we'll talk. ;-)