EI 2020 Plenary: Quality Screen Time: Leveraging Computational Displays for Spatial Computing

22,932 views

IS&T Electronic Imaging (EI) Symposium


This Plenary presentation was delivered at the 33rd annual Electronic Imaging Symposium (26-30 January 2020), held in Burlingame, CA, USA. For more information see: www.electronici...
Title: Quality Screen Time: Leveraging Computational Displays for Spatial Computing
Speaker: Douglas Lanman, Director of Display Systems Research, Facebook Reality Labs (USA)
Abstract: Displays pervade our lives and take myriad forms, spanning smart watches, mobile phones, laptops, monitors, televisions, and theaters. Yet, in all these embodiments, modern displays remain largely limited to two-dimensional representations. Correspondingly, our applications, entertainment, and user interfaces must work within the limits of a flat canvas. Head-mounted displays (HMDs) present a practical means to move forward, allowing compelling three-dimensional depictions to be merged seamlessly with our physical environment. As personal viewing devices, head-mounted displays offer a unique means to rapidly deliver richer visual experiences than past direct-view displays that must support a full audience. Viewing optics, display components, rendering algorithms, and sensing elements may all be tuned for a single user. It is the latter aspect that most differentiates from the past, with individualized eye tracking playing an important role in unlocking higher resolutions, wider fields of view, and more comfortable visuals than past displays. This talk will explore such “computational display” concepts and how they may impact VR/AR devices in the coming years.
Biography: Douglas Lanman is the Director of Display Systems Research at Facebook Reality Labs, where he leads investigations into advanced display and imaging technologies for augmented and virtual reality. His prior research has focused on head-mounted displays, glasses-free 3D displays, light-field cameras, and active illumination for 3D reconstruction and interaction. He received a BS in applied physics with honors from Caltech in 2002, and his MS and PhD in electrical engineering from Brown University in 2006 and 2010, respectively. He was a Senior Research Scientist at NVIDIA Research from 2012 to 2014, a Postdoctoral Associate at the MIT Media Lab from 2010 to 2012, and an Assistant Research Staff Member at MIT Lincoln Laboratory from 2002 to 2005. His most recent work has focused on developing Half Dome: an eye-tracked, wide-field-of-view varifocal HMD with AI-driven rendering.
© 2020, Society for Imaging Science and Technology (IS&T)

Comments: 67
@yulcreates 8 months ago
Still an incredible watch 3 years later. Grateful that this is free on YouTube.
@Dht1kna 4 years ago
Such an amazing talk. Can't believe Facebook allows them to talk about all this openly and for free!
@RomanGuro 4 years ago
I can't believe Facebook allows them to work for years on all these different solutions that aren't needed for the final commercial product :)
@objectivemillennial2117 4 years ago
@Prime Technophilia Actually, Facebook revenue has been increasing every year.
@pumpuppthevolume 3 years ago
@@RomanGuro It's absolutely needed... they're just growing the number of consumers at the moment... they will integrate this type of tech slowly.
@JoinUsInVR 4 years ago
Came for the VR, stayed for the science :)
@cmdrdrdeath6624 4 years ago
As a scientist. Well done. Exceptional talk. Good job Douglas Lanman.
@Mike-bg3cf 4 years ago
As a layman. Well done. Exceptional talk. Good job Douglas Lanman.
@Navhkrin 4 years ago
A scientist with the name "DrDeath". I hope your field of specialty is not what your nickname suggests.
@anishadgaming 4 years ago
Navhkrin Lol
@cestchouette6167 4 years ago
This is freaking incredible, can't wait to get my hands on a VR headset with eye tracking + varifocal.
@snozzmcberry2366 4 years ago
Varifocal is great, but it's far from the star of the show when it comes to eye tracking. That trophy goes to foveated rendering. Human vision is only 20/20 in a tiny spot right where we look, corresponding to a small depression in the retina with the highest concentration of light receptors, called the fovea. Objects outside of this spot *rapidly* degrade in detail. Try looking a couple of inches to the side of some text and try to read it. It's impossible.
Oculus/Facebook Reality Labs have devised a means of using deep learning to only render an image corresponding to the resolution of the retina: render at full detail only in the small, roughly 3° spot of the fovea, then render at progressively coarser quality the further into the periphery you get. Done right, Michael Abrash, head of FRL, estimates a potential *10x* reduction in the number of pixels rendered compared to traditional rendering at full detail across the whole scene.
If we ever want the horsepower to render photorealistic worlds to make use of these amazing display technologies, foveated rendering is the only way it's gonna happen. If you already know about this stuff, welp, I was in a writing mood. If you don't, check out Abrash's keynote talk from Oculus Connect 6. He demonstrates and talks about this stuff.
@elijahlucian 3 years ago
@@snozzmcberry2366 Yes, but we are also always fighting the human instinct to "look closely" at something in VR, which is where the immersion fails. I would rather have varifocal than foveated rendering, tbh.
@manmythlegend3084 2 years ago
@Luqmaan Rashid I think he was just arguing that foveated rendering will have a bigger impact.
@contigo121 4 years ago
Hats off to Facebook Reality Labs for allowing us lay folk a fascinating peek into some of the current issues and the future direction of VR. I hope the fruits of this research come to the marketplace sooner rather than later.
@atulsalgaonkar6222 2 years ago
Excellent. Especially liked how VAC was explained.
@dragossorin85 4 years ago
Varifocal blew my mind when I first saw it; this ability is a must-have in VR. Excellent job. By the way, save those test headsets and that big multifocal machine: they need to be placed in a museum, because this is history in the making.
@robertweekes5783 3 years ago
Awesome lecture!! People don't know how cool this tech will look in the VR headsets that are right around the corner.
@mikellyvr 4 years ago
Super interesting. Very thought-provoking, and I learned a ton. Thanks!
@andreisperid 4 years ago
Amazing, and absolutely inspiring!
@TomPeterson44 4 years ago
This is so interesting to me. Unfortunately, my eyes are messed up. Retinal surgery has given me double vision, and I'm hoping at some point someone will invent a headset that would allow me to use my computer with perfect vision. Love the games (early adopter on the Rift), but solving my vision problem with technology rather than glass is what I'm hoping for...
@OctaviusGeorge 1 year ago
For me, the obvious problem was not being able to focus on close-up objects like books and text. I hope the next headset includes these inventions.
@starchaser28 4 years ago
Great presentation. Interestingly, vergence conflict doesn't seem to be an issue for me. I tried the test @33:15 with my Rift S and Quest, and the text looks as sharp a few inches away as it does at arm's length. My wife said the same, although for her it looks slightly blurry at all distances, but no more up close than at arm's length. A user survey might be interesting to find out how many people perceive it and to what extent.
@TimoBirnschein 4 years ago
I agree. My eyes focus on the display no matter where the displayed object is. To that extent, I can see objects in VR sharply much closer than I can see them in real life, even though I have 20/20 vision.
@johnly_ 4 years ago
Incredible talk. Thank you for publishing this.
@dhagn 4 years ago
This had me thinking about finding "the correct blur" that could be used so that people wouldn't need prescription eyeglasses to view a VR headset, or even just a phone. It made me wonder what the eye does (hopefully something you can measure) when something is not in focus, and then when it achieves proper focus.
@ShaulKedem 4 years ago
Mesmerizing
@inceptional 4 years ago
The one person who voted this video down is a moron: this talk touches on one of the most exciting and inspiring ideas and dreams in human history (not the specific problem his team is trying to solve, but the bigger picture of aiming toward creating that ultimate version of "virtual reality").
@GeneralKenobi69420 3 years ago
"Don't ask about Half Dome 2, you can read about it online." I don't understand, did something happen? Or was there drama that I missed?
@skypickle29 3 years ago
Where can I go to ask a question? What's missing from D. Lanman's 'TED Talk' is the observation (forgive the pun) that when we converge to look at a close thing, not only do far things get out of focus, we also see them doubly. Pull your finger close to your eye while I am in the background and you will see I have two heads. This diplopia de-emphasizes the out-of-focus image. Similarly, moving the pixels of each eye's screen laterally based on the z-depth info of the model will force our eyes to converge at the appropriate distance and provide the depth cue that gives us a sense of 'closeness'. No?
@ErikMiddeldorp 4 years ago
From about 25 minutes to 37 minutes was exciting to watch, because I'm hoping Oculus will release a new headset with varifocal displays and eye tracking, which would then allow dynamic foveated rendering and possibly a larger FOV and higher resolution (I want both!). The "beyond varifocal displays" section after 37 minutes kind of dampened my excitement, as it drew into question the feasibility of eye tracking. I'm hoping they were just being thorough in investigating other options, and that eye tracking and all the things it enables (varifocal, dynamic foveated rendering, higher resolution and FOV) will happen. I love that Oculus/Facebook is this open about their VR development. No-one else is, that I'm aware of.
@addisonwoods9367 3 years ago
Valve has been open-sourcing all of their VR API work, but they have been notoriously secretive about their hardware development.
@Grunzel 4 years ago
At 29:40 he looks so angry.
@DuskyQed 4 years ago
Very interesting talk, and I do understand the increased realism for close-up objects. However, I don't see gamers using this feature... it is an advantage to have everything in focus. I would not trade refresh rate, resolution, or FOV for this.
@VrsverigeSe 4 years ago
Like he said, blur is a depth cue. Your whole perception system gets more "fooled" or immersed in the scene, and that lets you make more correct spatial assumptions for your actions in the game.
@xalener 4 years ago
What if I want to manually refocus to different depths without moving my gaze? I've found myself doing that on the rare occasions when I fire a firearm in real life, to make sure the sights are lined up.
@Gaveno112 4 years ago
My guess is that your eyes are still making small adjustments in positioning, unless you are completely using your peripheral vision, in which case the natural blur he was talking about would still apply. But that's just a guess.
@snozzmcberry2366 4 years ago
I had this thought too. I have a degree of independent, decoupled control of my vergence and accommodation muscles (for example, I can diverge my eyes while bringing my focus closer, as if I were converging them instead), but I guess the varifocal depth effect built with the technology they're aiming for here would break if I pulled off my little eye tricks.
@xalener 4 years ago
@@snozzmcberry2366 Yeah, we're just fuckin' mutants with superpowers, I suppose.
@timmoore5574 4 years ago
You'll be able to accommodate (sorry) this with some form of eye tracking, I would think. It surprised me that he thought eye tracking was so hard compared to the other stuff they are doing... It seems like the kind of problem an ML system will be very good at, even if it needs a bit of training for each set of eyeballs.
@StormBurnX 4 years ago
Just wanted to pop in and say that the final headset they built is essentially the solution to this problem. They present a few layered displays, and your eyes scan forward and back as you consciously adjust your focal point without moving your eyes. The next step up would simply be a full-on hologram, as he mentioned, i.e. creating the light paths correctly rather than manipulating them to be "close enough".
@TimoBirnschein 4 years ago
At 55 minutes you introduce computational displays and describe, with DeepFocus, a new way of blurring the rendered image based on machine learning and random training data. Maybe I missed it and someone can point me to it, but what display and lens technology are you proposing to use with this? Liquid-crystal varifocal lenses with 64 layers, with eye tracking and DeepFocus? It doesn't seem to be holograms, Focal Surface, Adaptive Multifocal, or Fixed Multifocal.
@Reticuli 1 year ago
Anything that needs blur added, which doesn't include holograms/light-field displays.
@linkup901ify 4 years ago
They basically need to solve soft-body tracking of the eye and optimize their blur algorithm so it works on mobile-level hardware. It seems like deep learning is going to play a role in solving both. Unfortunately, he didn't give a timeline for how long this usually takes. I just worry that hardware acceleration for machine learning on mobile isn't going to be able to handle it, and the computational idea will have to wait on hardware.
@osten222312 4 years ago
Yeah, but the wait may not be that long: the next Snapdragon is around the corner, and then there is probably just one more generation until eye tracking is implicit.
@linkup901ify 4 years ago
@@osten222312 I hope so; machine learning seems to have gotten a serious boost, with Nvidia's RTX line making ray tracing and 4K a reality in some games. That kind of power likely isn't going to be in mobile for a few years yet, but I do recognize that optimizations happen, and many of these things will evolve to work better and take less processing power. I don't think FB is going to do multiple headsets anymore, and there are simply fewer reasons to do so, but it would really suck having to wait some 3-4 years for eye tracking, as that would be about the generation of hardware you are talking about.
@arnaudsm 4 years ago
Awesome talk! I really hope Oculus will use this in production some day; they're really falling behind the competition in terms of graphics because of their (legitimate) obsession with mobile.
@jibcot8541 4 years ago
I always wondered why close things were blurry on the Oculus Quest; I thought it might have been a problem with my headset or my eyes!
@lllllREDACTEDlllll 3 years ago
I'm not a scientist, but I'm just going to stretch this problem out to its logical conclusion and say... there is no spoon.
@vvv2k12 11 months ago
What do you mean?
@geogan2 4 years ago
After watching all this, I am still a bit confused as to what Oculus is actually going to use for their next PC-powered Rift 2 headset. No idea; I get the impression he said varifocal (even electronic) is not good enough. I'm sure Mark Z is just as confused about what to sign off on. I don't understand that multi-focus thing at all.
@VrsverigeSe 4 years ago
Good thing they still have Carmack as consulting CTO, then.
@geogan2 4 years ago
@Immersivt Oh, is he? I thought he left completely to concentrate on AI.
@michaelhartjen3214 4 years ago
Your effort is commendable; however, it is seriously lacking in diversity and practicality, as happens when you inadvertently narrow your vision after focusing on solving the convergence-accommodation problem for a long time. I could have told you literally five years ago that deep learning is the approach you should be taking. Look at every other scientific field: deep learning has provided solutions for issues many thought absolutely impossible, and it is only the beginning.
@Gaveno112 4 years ago
They used deep learning for natural blur and for foveated rendering. They are clearly aware of the potential of deep learning. How do you propose they use deep learning to solve convergence and accommodation?
@Navhkrin 4 years ago
@@Gaveno112 You can't use deep learning to solve a focus problem, because computer displays cannot generate "negative light", as said in the video (I don't remember at which second). You have to change the way light is bent and make it converge on your eyes properly.
@holymolys7692 4 years ago
Biggest failure: they didn't bring any of this to market.
@Gaveno112 4 years ago
He gave this talk in January of this year. Research comes before bringing things to market, and there are additional things to consider when mass-producing; like he was saying, eye tracking isn't perfect. Those deciding whether to bring one of these concepts to market have to weigh whether moving parts would be acceptable or would incur too many repair costs, and whether 99% of users is enough or the 1% returning their headsets because they can't track their eyes would be too costly. To call all of this progress and learning a failure is incredibly short-sighted.
@holymolys7692 4 years ago
@@Gaveno112 That's BS; that's why one of the Oculus founders left the firm. All we need is the first Half Dome with just wide FOV and higher res; the rest doesn't really matter, they can take their time cooking them.
@inceptional 4 years ago
Except it seems pretty clear at this point that their next headset is coming, probably this year or next, and will very likely have much of this stuff in it: kzbin.info/www/bejne/mIDTgJ-sZqmmrMk So unless you are beyond reason, that's a pretty decent time from research and solving the various tech problems to implementing them in an actual consumer VR headset. My expectation is that a Quest 2 or Rift 2 will have both the higher field of view they mention (up to around 140 degrees) and a version of the varifocal lenses with the computational interpretation stuff (the simpler version), as well as the obvious jump in resolution and reduction in size/weight that any new headset is going to see, and maybe even an increase in refresh rate too. That would be a pretty impressive next step for a Quest/Rift 2, imo, and it's, no doubt in my mind, right around the corner. Let's just see what they announce in the coming months...
@barbuceanu2005 4 years ago
So he could have developed a 200-degree-FOV, high-resolution HMD, but instead he concentrated his team on these quasi-science-fiction, over-complicated concepts for 5 years? Imagine having today a Rift with 200 degrees FOV, high resolution, and foveated rendering. Instead we have these experiments and theories. Now we know why the development of Rift 2 goes sooo slow and why there is no Rift Generation 2 yet. No worries, the HP Reverb G2 is just around the corner. Instead of doing research to fix god rays, SDE, low FOV, foveated rendering, and wireless, they spent 5 years on convergence and depth blur. These are probably the last things on a priority list of what VR technology needs today. Probably they are not even needed at all; personally, I'm not bothered at all that all objects are in focus. I can perceive depth in VR very well without depth blur. And the fact that he tried to eliminate the need for eye tracking is just another major step back. We absolutely need eye tracking for foveated rendering; instead of concentrating on how to eliminate the need for eye tracking, better to concentrate on how to perfect it. I just see all this as a lost opportunity to have a much better Rift today.
@tylerrosales7783 4 years ago
If you seriously think more FOV/res = more realistic, you are pretty damn ignorant. VR is a complicated, multilayer problem to solve that needs many subtle working parts, rather than MORE PIXELS or MORE SCREEN. Varifocal displays will absolutely increase the potential for headsets to recreate reality. Go ahead and buy your Reverb G2 and find yourself disappointed after a month. Even Valve is starting to research their own solutions to the vergence problem. You are severely understating its importance.
@barbuceanu2005 4 years ago
@@tylerrosales7783 I'm sorry, but if you really think more FOV/res is not more realistic, you are missing out on a lot of recent VR developments. There are many hands-on videos on youtube from people who tried large FOV (StarVR One) or retina displays (Varjo), and their conclusions speak for themselves. I doubt I will be disappointed by the G2 after a month, based on current reviews; we'll see, but I was surely very disappointed with the Oculus Rift CV1, HTC Vive, Rift S, Quest, and Odyssey+ after using them for much less than a month, due to their long list of flaws. If someone is ignorant, it is this guy, who used a research team for 5 years pursuing theoretical goals just to answer silly questions like "Do we really need eye tracking?". Of course we need eye tracking, for so many reasons (gaze for avatars, UI interaction, foveated rendering, etc.). No need to spend years answering silly questions like that when he could work on correcting the long list of flaws in the Rift and Quest. That will push VR forward in the short/mid term, not some theoretical ideas that will not come to fruition in the next 10 years or so... Sure, once they fix most of the current issues it's fine to look further, but to ignore all the major issues VR has today and dream of fixing one particular issue (not so relevant by all accounts) in the next 10 years, that is indeed pure ignorance. I have followed the VR space very closely since 2015, and the vast majority of user complaints are about the pixelated image and small FOV. Now tell me, who is the ignorant one? The user who is disappointed by the pixelated image and small FOV, or this guy who doesn't really give a damn about users' complaints? Let's just say Michael Abrash's prediction from 2016 about 4K x 4K panels and 140 FOV has been missed by a long shot by the Rift, and that says a lot about the slowness of its development.
@realpoorman3154 3 years ago
@@barbuceanu2005 You're slow.
@Reticuli 1 year ago
Pimax has high-res 200-degree VR. VAC is the right path to be dumping money into. Research already shows that with VAC gone, there's basically little nausea, eyestrain, or headache from the imaging system itself. So, even ignoring the realism aspect, much of this technology's comfort and usability problems stem from VAC. Address VAC and you get comfort, usability, and realism all hugely improved.