Awesome to see more progress on this. I might have to see about getting a vive set now that they've come down in price so that I can pitch in.
@CNLohr (6 years ago)
that would be really cool.
@CharlesVanNoland (6 years ago)
A few months ago I was excited to set my sights on acquiring a Vive and start working on getting solid tracking into LibSurvive, and also OpenHMD... but then I got a Rift, and now I just want to make cool games for the first time in years. Hooray for VR.

As far as my thoughts on the lighthouse tracking: I thought it was rather telling of the production implementation that it updates the position with each sweep, not after a pair of (pseudo-)orthogonal sweeps to deduce an actual position. That indicates to me that it just fudges around a last best-known position along a perpendicular plane, and is otherwise clearly relying very heavily on coasting on IMU data. The external tracking system is invariably going to suffer from all kinds of noise and glitchiness, and if you have something as solid and fluid as IMU data to go on (after filtering, granted) it'd be a shame not to extract every single bit of value from it. When I was looking at all this back during the holidays and hanging out in the Discord back in January, I was dead set on milking that IMU for all it was worth.

Nowadays I've resigned to just waiting for OpenXR (don't hate me), especially if all the major HMD vendors are willing to write their driver implementations to support it properly. As far as I can tell, the proprietary SDK/runtime nonsense is just a growing pain of XR, and will be a thing of the past just like 90-degree FOVs and the screen-door effect. We're in the days of VR that are like the Atari days of console gaming. There's a lot that's going to change just over the next decade, and it's super exciting.
@KeithYipKW (6 years ago)
You may want to test your view projections using a more realistic scene, such as a house or a dense city. Scenes matter for the 3D effect: floating transparent objects in infinite space produce a very weak sense of depth, and they are so unlike everyday experience that it is hard to judge whether a projection is correct. If the glitch is intrinsic to the hardware, it may be solvable with existing glitch-resistant filters. I suspect this has been a common problem for a long time and that people have already solved it.
@willrandship (6 years ago)
I would recommend making a tracked model of the boundaries of the room. That way, settings can be tweaked until the view inside the headset matches the view with it taken off. Nothing too fancy, just a basic wireframe of the floor and wall edges would be enough.
@rhoen8075 (6 years ago)
Perhaps the implementation of a Kalman filter would help with the position estimate of the controllers?
@ThereminHero (9 months ago)
Any update on this? It's been 5 years, but I noticed the GitHub repo is still active.
@dorbie (4 years ago)
The IMU should be much lower latency than the lighthouse scanner, so even with a robust lighthouse approach you want to perform low-latency correction using the IMU.
@CNLohr (4 years ago)
That was a core tenet of my charlesrefine driver.
@jacobdavidcunningham1440 (1 year ago)
0:55 my god that acronym haha great
@CNLohr (1 year ago)
EPnP or SBA?
@jacobdavidcunningham1440 (1 year ago)
@@CNLohr EPnP, mouthful ha
@drink__more__water (6 years ago)
Man, I really need to put my big boy pants on and get better at C...
@dorbie (4 years ago)
If you are asking about convergence, then you are conceptualizing the display geometry wrong. You need to put the pixels where they belong for each eye independently; the convergence is an emergent property. You do not rotate the view in for convergence.

You draw the frustum to match the display intrinsics for each eye (a.k.a. field of view, but supporting asymmetric frusta), and you position the camera extrinsics (a.k.a. the relative viewing matrix, or eye-space inverse model matrix for the display positions) relative to the tracking origin correctly for each display. Extrinsics can include rotation, but it is not a fudge to force convergence artificially, and on a display like the Vive you are likely to have parallel viewing vectors and perhaps asymmetric frusta. This process produces a correct display geometry, and the information is in the public domain.

In addition, you need to warp the rendered image to compensate for the display optics (using render-to-texture and a transfer-to-screen warp); this requires the warp center and the radial distortion warp function for each eye. All this information is available for various headsets, and you can probably estimate it if you know what you're doing. Valve will give you this information if you just ask the right questions, and it exists in libraries like OpenVR.

Finally, during this final warp you can compensate for the render time elapsed since you measured position, and time-warp the rendered image to apply the latest estimated camera pose to each eye, minimizing latency. Lots of tricks have been tried in this last stage, like handling hand tracking independently from head tracking and compositing in the warp, or accounting for depth and doing a parallax-friendly warp that handles changes in head translation, not just rotation.
@CNLohr (4 years ago)
LibSurvive is still being developed by others, but I had to stop developing on it for conflict-of-interest reasons. Many of your insights are accurate and describe an excellent path forward. You may want to join the libsurvive Discord.
@troylee4171 (6 years ago)
Awesome man
@DerSolinski (6 years ago)
Waaaaait, that's the reason no sensor fusion has been done yet? I mean, I followed pretty much from the beginning, and it struck me as a bit odd that nobody had used the IMU yet, but since my real life keeps me occupied I never really read all the discussions on the GitHub and in Discord -_-. I always thought you didn't want to use the IMU because you liked the challenge... I knew from the beginning that the main tracking in the Vive is done via the IMU, since it has a 1000 Hz polling rate; the lighthouses are for drift correction. That's how most absolute positioning systems work (even the fancy smartphone AR stuff), with a few exceptions.
@ineedtodrive (6 years ago)
Reflection: is there any difference when you are close to the wall versus far from it?