PROJECTIONS, Episode 13: Avegant's Lightfield Augmented Reality Prototype!

105,265 views

Adam Savage’s Tested

Comments: 187
@MRTOWELRACK 7 years ago
This man's communication skills are top-notch. He has so many ways of saying yes while also exuding business confidence. I don't even know if the product is good, but hell, he could make a shaky real estate proposition sound rock solid.
@suPerrduPerrX 7 years ago
Love this style of in-depth videos. Really enjoyed it. I hope you guys don't change. Keep up the good work!
@JohnnyDoeDoeDoe 7 years ago
I liked that you guys got into details here! I don't watch many of the other Tested videos, but if I come across more which go into the technical side of cutting edge tech, I'll definitely watch through the whole video
@disky01 7 years ago
This is what I've been waiting for since the Glyph. To me, it's the holy grail for AR/VR. It solves so many of the problems that current VR technology has. I'm so excited for a time when this is the norm for HMDs.
@SPcamert 7 years ago
I like seeing how excited you guys get when you're shown something confounding.
@tommihommi1 7 years ago
It really was just a question of time until someone did this. It fixes the most important problem of VR/AR, and is much more elegant than eyetracking + fake DoF
@vividhkothari1 7 years ago
Your videos are awesome. I understood less than 10% of what you guys said, but I still enjoyed it.
@SardiPax 7 years ago
I think that if they are using micro-mirrors (one mirror per pixel), then when each pixel is illuminated, the depth info is used to determine the angle of that mirror, which sets the apparent focus.
@NeutroWorld 7 years ago
You would only need two layers of video. One for each eye, as you can't see what is behind any object. They would be wrapping the two video images around two 3D moulds. Like 2 vacuum formed video images pointing at your eyes, hollowed out from behind. The crests and hollows can be in jagged discrete steps, because it's only the focus information that has a limited number of layers. The smooth "look" of the 3D image comes from the resolution of the 3D rendered stereoscopic images.
@Randyh9 7 years ago
OK then, I'll just show you how simple it is... "PROJECTIONS - Hands-On with Avegant's Lightfield Augmented Reality Prototype!" Don't get me wrong, I love the show. I just don't want to miss it.
@KentAugust 7 years ago
I think the way that DLP works, by reflecting light from a light source selectively (using a DMD chip), is the key to manipulating the focus range. When you see a reflection, your eyes accommodate to focus on the object, not the mirror. So if you have different light sources that are at different distances from the mirror, you can turn on a specific light source in sync with specific mirrors (pixels). By multiplexing the light sources fast enough, we can see different focus ranges at the same time. They did say the focus range is sliced into several layers, since the number of light sources can't be infinite. All we need now is the distance information for each pixel. Of course this may produce flicker. Or they can sacrifice some grayscale bit depth (since the DMD produces grayscale by switching the mirror on and off very fast). I don't know. And one thing: the light sources don't need to be separated by a great distance, since they are behind a lens. A slight position change in an object behind a lens can make a big difference in focus.
@KentAugust 7 years ago
And of course this is just a big maybe. I know very little about DLP projectors.
@michaelhackl8358 7 years ago
LEDs and DMDs switch fast enough. They'd have a set of lights with a corresponding set of mirrors. That'd also explain their narrow FOV and very discrete focal planes. Moving lenses would endure very high stresses to do it alone with the DMD, and liquid lenses are probably too new for a nearly finished product. Both would work better in a continuous depth spectrum. Maybe three light sources, with slow focus changes for the two nearest planes and the third always at infinity.
@shimlaDnB 7 years ago
That camera rail pan thing during the interview is sooo cool; never seen it done like this before. Works really well.
@cintron3d 7 years ago
I think I can help explain the depth confusion better. It's not that they're sending layers in the sense of multiple renders of the same frame at different depths. Rather, think of it like this: in the world of pixels we have 8-bit RGB, we have 16-bit RGBA, and then there's 32-bit RGBAZ (red, green, blue, alpha, z-depth). I believe what he's saying is that their rendering plugin outputs a 32-bit image signal which is interpreted by their display to render each individual pixel at the correct depth in the volumetric space.
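A minimal sketch of the per-pixel "RGBZ" idea in the comment above, assuming an 8-bit-per-channel RGB image plus a normalized depth buffer. The packing layout and 8-bit depth quantization are illustrative, not Avegant's actual format.

```python
import numpy as np

def pack_rgbz(rgb, depth):
    """Pack 8-bit R, G, B and an 8-bit quantized Z into one 32-bit word per pixel."""
    z8 = np.clip(depth * 255.0, 0, 255).astype(np.uint32)   # quantize depth (0..1) to 8 bits
    r, g, b = (rgb[..., i].astype(np.uint32) for i in range(3))
    return (r << 24) | (g << 16) | (b << 8) | z8             # one uint32 per pixel

rgb = np.zeros((720, 1280, 3), dtype=np.uint8)               # dummy colour frame
depth = np.random.rand(720, 1280).astype(np.float32)         # dummy depth buffer
packed = pack_rgbz(rgb, depth)                                # shape (720, 1280), dtype uint32
```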
@cr4zyw3ld3r 5 years ago
That's the best explanation I have seen for this so far!
@mikasuhonen7773 7 years ago
Can you tell me where I could find a shirt like Jeremy's? The Space Invaders / pilot helmet theme.
@elev8dity 7 years ago
Would love to see these light field optics integrated into VR HMDs. Seems like a much better solution than OLED panels.
@samueln.3828 7 years ago
This guy seems to know a lot about the stuff he's talking about. The problem is that I cannot even see the thing being described. Good vid, would've hoped for some more footage of the product.
@samueln.3828 7 years ago
There is actually some more detailed stuff at the end, but the point is it would've been nice to see it at some other points as well, especially at the start to grab attention
@Exevium 7 years ago
Samuel N. Feels like Tested vids are getting shittier and shittier.
@Humma_Kavula 7 years ago
They do this in literally every video about VR. Talk about what they are experiencing the entire time without showing a thing.
@saintmain 7 years ago
"footage of the product." Prototype would be the right description.
@SANTARII 7 years ago
They start showing footage of them using the device at 1:36, what are you talking about?
@jellevm 7 years ago
Great interview, this tech is really cool.
@guspaz 7 years ago
The depth data is probably either passed on alternating frames which aren't displayed, or just sent as extra bits per pixel. As of HDMI 1.3, you can have up to 48 bits per pixel, and only 24 of them are needed for a "normal" RGB image, meaning you've got 24 bits of depth data per pixel. That's probably super overkill, because it doesn't sound like they have many layers, and you'd only need enough bits to represent the number of layers. For example, just using 6 bits per pixel of depth information (the equivalent of "10-bit" colour since it'd be 2 bits per subchannel) gives you enough depth data for 64 layers.
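A quick sketch of the bit-budget arithmetic above: only ceil(log2(layers)) extra bits per pixel are needed to tag each pixel with its focal layer. Purely illustrative; not a claim about Avegant's actual signal format.

```python
import math

def depth_bits_needed(num_layers):
    """Bits required to address num_layers discrete focal layers."""
    return math.ceil(math.log2(num_layers))

def layer_index(depth, num_layers):
    """Map a normalized depth value (0..1) to a discrete layer index."""
    return min(int(depth * num_layers), num_layers - 1)

print(depth_bits_needed(64))   # 6 bits, as in the comment above
print(layer_index(0.37, 64))   # 23 -> which of the 64 layers this pixel falls in
```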
@CyanOgilvie 7 years ago
My take on how they're doing the optics for multiple depths of field:
- A micromirror array is fast. Really fast: pixel brightness is achieved by toggling a pixel on and off many times during a single frame (per colour channel). Therefore it is possible to tightly control the timing of the illumination of individual pixels.
- You probably don't need too many "layers" of DoF to create a convincing effect. Adjacent layers will (by definition) have very similar focal lengths, and I suspect the layer intervals can be quite wide before we would notice "popping" as objects move between layers. Possibly as few as 10 or 15 layers would do.
- If you have an optics system that sweeps through the required focal planes once (or probably several times) per frame, the micromirror array can selectively paint only those pixels that are at that depth during the time that the optics system is configured for that focal length. This could be done with a series of lenses on a spinning wheel, analogously to the colour wheel used in a DLP projector.
I know this contradicts the claims that it doesn't require moving parts (although micromirror (DLP) displays are arrays of moving mirrors, so some license is being taken here already) and that it isn't locked to their display technology (it isn't really, except that whatever technology is used would have to have very fine time resolution). If I'm right, the tech would show some artefacts if the head or eye (or projected object) moved rapidly across the visual field: it would appear to bend in the depth direction, analogous to the rainbow colour separation artefact seen with DLP projectors in similar circumstances.
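A sketch of the time-multiplexed focal sweep speculated on above: per frame, step through a handful of focal planes and let the fast mirror array paint only the pixels whose depth belongs to the plane currently in focus. set_focal_plane and flash_pixels are hypothetical hardware hooks, not a real DMD or Avegant API.

```python
import numpy as np

NUM_PLANES = 12  # illustrative; the comment guesses 10-15 would be enough

def render_frame(image, depth, set_focal_plane, flash_pixels):
    """image: HxWx3 frame, depth: HxW normalized depth buffer (0..1)."""
    plane_of = np.minimum((depth * NUM_PLANES).astype(int), NUM_PLANES - 1)
    for p in range(NUM_PLANES):
        set_focal_plane(p)           # optics configured for this focal length
        mask = (plane_of == p)       # only the pixels sitting in this depth band
        flash_pixels(image, mask)    # mirrors illuminate just those pixels
```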
@arickpalaganas5894 7 years ago
this guy knows his shit and he knows this shit is gonna change vr.
@GivenGRaven 7 years ago
Watching them nerd out about how they don't know exactly how the depth data works is great.
@willhendrix86 7 years ago
Really liking the backing music guys
@joanahkirk338 7 years ago
I think this is a great piece of VR/AR tech, important for displaying depth and that sort of thing, but it doesn't have enough outside of that to stand on its own, so it would work better as part of another VR/AR headset.
@jonathanrobson7716 7 years ago
They might be projecting different layers of the same 2D pixel from different points and reflecting the light off the tinted mirror to the same point in your eye, thereby giving multiple layers that will come into focus depending on your eye's focus. Although this would mean that adding layers will thicken the projector, making it harder to make thin glasses from the tech?
@gummibaer3597 7 years ago
That guy is a marketing monster! Good job!
@aohige 7 years ago
I love interviews where the person clearly knows what he's talking about and answers questions in depth, instead of spewing vague PR nonsense.
@gregoriussoedharmo1206 7 years ago
If you have a couple of DMD chips that can be actuated in several discrete positions (I think the rep slipped the term "micro mirror" into the interview at 14:50), have their position modified based on the depth buffer frame sent over HDMI, and then bounce the 2D image into the eye, you can modify the light path each pixel produces on the retina. You then modulate the path from each DMD to the retina so that it falls farther or shorter of the retina, forcing the eye's lens to compensate for said modulation, thus mimicking object distance. Problem is, I can't find any DMD chips that have precise positional control; all that I've seen are binary, where each pixel is either on or off.
@gregoriussoedharmo1206 7 years ago
Depending on the number of layers you aim for with the system you're describing, the cost of the high-speed display needed would be astronomical. Say we want a fairly acceptable, constant 25 fps display: you would need a display speed of 25 * layers. Even with 4 depth layers, you would need a 100 Hz refresh rate display.
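The arithmetic above, spelled out: a display that time-multiplexes depth layers must refresh (perceived fps) x (number of layers) times per second. Numbers are the comment's own examples plus one extra illustrative case.

```python
def required_refresh_hz(perceived_fps, num_layers):
    """Refresh rate needed to time-multiplex num_layers depth layers per frame."""
    return perceived_fps * num_layers

print(required_refresh_hz(25, 4))    # 100 Hz, the figure in the comment
print(required_refresh_hz(60, 12))   # 720 Hz for 60 fps with 12 layers
```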
@gregoriussoedharmo1206 7 years ago
Well, my comments were based on a single term said by the rep, they could've used anything in that visor, so I could be way off myself. Guess someone should get their hands on a prototype visor and hack it to bits to see how it actually works XD
@vailias 7 years ago
I think they're being purposefully obtuse. The LAYERS thing, I think, has to do with the digital scene. I.e., they only take so many depth slices, as that allows for discrete mirror angles for the end photons. And it makes sense, as you'd want more data about that first 1 meter than you would for the rest of the world. And even if they aren't doing the angling directly with a DMD, a similar angular change could be done, and likely has to be done, with a secondary mirror component.
@gregoriussoedharmo1206 7 years ago
Yes, I understand that completely, as the rep actually talks about a "fixed planes volumetric display" with "digital focus planes". Let's say you take the average Joe's eye, which can focus from 20 cm to 100 cm. That gives you 80 cm of working depth to cover, and you split that 80 cm into discrete layers, say 12 slices: that gives you about 10 slices of 8 cm focus plane resolution, a near slice for pixels that fall nearer than 20 cm, and a far slice for all pixels that fall farther than 100 cm. You then take the depth buffer, which is usually a 32-bit floating point bitmap image with values between 0 and 1, take the portion that represents those 12 regions of depth, filter the pixels for each slice, and process them through some light processing magic which gives you the illusion of depth. That's the image filtering process; the more interesting part is how they actually do the light processing magic...
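A sketch of the slicing described above, assuming a metric depth buffer: everything nearer than 20 cm goes into a "near" catch-all slice, everything farther than 100 cm into a "far" slice, and the 80 cm in between is split into 10 evenly spaced planes. The distances and slice counts are the comment's illustrative numbers, not Avegant's.

```python
import numpy as np

NEAR_M, FAR_M, MID_SLICES = 0.20, 1.00, 10

def slice_index(depth_m):
    """Map an array of metric depths (metres) to slice indices 0..MID_SLICES+1."""
    depth_m = np.asarray(depth_m, dtype=np.float32)
    idx = np.full(depth_m.shape, MID_SLICES + 1, dtype=np.int32)   # default: far catch-all
    idx[depth_m < NEAR_M] = 0                                       # near catch-all
    mid = (depth_m >= NEAR_M) & (depth_m <= FAR_M)
    frac = (depth_m[mid] - NEAR_M) / (FAR_M - NEAR_M)               # position within the band
    idx[mid] = 1 + np.minimum((frac * MID_SLICES).astype(np.int32), MID_SLICES - 1)
    return idx

print(slice_index([0.1, 0.25, 0.6, 0.99, 3.0]))   # [ 0  1  6 10 11]
```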
@KentAugust 7 years ago
Exactly my thought. Even if the mirror can be positioned in several steps rather than just two (on and off), it can simulate a reasonably decent blur effect (a single out-of-focus pixel would look like several dots spread over a radius that changes as we accommodate).
@MaximilienRobespierre1 7 years ago
Looks good
@dylandailey3191 6 years ago
It'd be nice if these interviews were in 2 parts; one with a marketing guy (as seen in this vid), and one with an engineer who can actually answer the technical questions instead of dodging them, or making huge abstractions/analogies.
7 years ago
Let me tell you how it's done. It's really simple. The magic is in the capability of a MEMS mirror array DLP. A normal display (e.g. LCD) is 2D: one plane on which the image resides. If you project an image using a 2D source (e.g. LCD), the in-focus target projection will also be 2D (preferably on your projection screen). Now, normally a MEMS mirror array DLP is set up to create a 2D projection plane. But the thing is, it does not have to. Enter the depth metadata. The metadata is used to guide the mirrors to create different in-focus target projections depending on the depth of the pixels in the image. If you were to project a scene onto a piece of paper, you would then have to move the paper back and forth in order to get an in-focus image of the different objects in the scene. Analogous to moving the paper is changing your focus when the image is projected into your eye. The mass-market DMDs are currently mostly binary flipping ones (+/- 12 degrees or so), so they are probably tight pals with TI to get to play with their more advanced chips. (Or perhaps they do it some completely different way.)
@RC-1290 7 years ago
The multiple layers thing sounds more like an explanation of how digital things work, just like computer game depth is divided into multiple layers. But all it is, in that case, is a digital depth number. So while in theory there's a limit to the precision you can see, if the number can be large enough, you won't notice that it's digital.
@daniellelevy8056 7 years ago
Hey! So for people who would like to get started or are just really interested in being a maker, but don't necessarily know how to use some of the tools and materials, would you guys be willing to make videos going through some of the generic tools and materials with a brief overview of how/when they would be used? Thank you!
@LyleAllenCairns 7 years ago
Super cool tech. They're going to be some integral part of this future tech, and they know it.
@eddokter 7 years ago
For how they send the data, think of each image as having an extra piece of data that is the depth to the eye; only the slice that you see has to be sent for each pixel. The base machine needs to have all that distance for every object all of the time, but by virtue of you only having one point of view, you only need to be sent one small slice of it. You're still only looking at a window into the rendered world; what gets blocked off can be ignored.
@goatcuteomgg 7 years ago
This is some Oasis shit right here
@cyberchin 7 years ago
I think it must be all about projecting the light into your eye, pretty much as real life light does when it's reflected from objects. Your eye then focuses naturally. It must work somehow like those rainbow holograms do. On standard displays the light is not being projected, it's just being emitted all around from the same surface on which your eyes focus.
@JurassicCollectables 7 years ago
20:30 So if it's depth information, then YES it is data that can be displayed visually on a TV easily as a value between 0 and 1. Search depth maps and you'll see exactly what it looks like.
@andrut 7 years ago
Maybe there's an array of lenses on the DLP chip. Each lens would be configured for focusing at a different depth, so different parts of the chip would display pixels for different layers. Only a few layers would suffice to give the illusion. Basically a simplified light-field projector, kind of a reverse of the arrays of microlenses on sensor chips in lightfield cameras. It would be like using an array of projectors, pointing in one direction, but each set to a different focal length. I wonder how that would work with the edges of neighboring areas of different depths, since depth wouldn't be continuous. Would there be a huge need for preprocessing to account for that? Probably different focus for neighboring parts of layers would blend nicely thanks to blurring on the edges of layers that are out of focus.
@lukasjp11 7 years ago
Great vid
@davidkucerminiatures7851 7 years ago
HEY TESTED guys, check out how bifocal contact lenses work. With one lens that is static in the eye, the focal depth of the image the brain sees is partially dependent on what the brain is concentrating on.
@zushiba 7 years ago
My guess as to how they are passing that extra data is either via the audio channel or they are cutting a small strip off the bottom/top/sides of the frame and using it to pass data via some glyph style system.
@vailias 7 years ago
It's not just "depth", it's the angle of incidence of photons to your viewing plane. That's the additional data and how you are able to focus on different points. Take the tech platform as a whole: they're using DLP projectors, which work via micromirror reflectance, and digital scene generation with depth. So rather than just toggle the DLP micromirrors between on and off, you control their angle to sync up with what a real incident photon's angle would be, coming through the final mirror plane from a point of depth in the scene. So you get this mass of photons that have different angular components from across the scene space. If you project that on a flat plane, you get nothing but blur, but you have an active lens element (your eye), and you get selective focus.
@cr4zyw3ld3r 5 years ago
But that seems like it's highly computationally expensive and complex. Unless this is a feature that TI's DLP chips have baked in, and then it's just a matter of turning this feature on and it adjusts the angles based on your HMD orientation and the info sent from the game engine/renderer. And Norm mentions at the end that they are not utilizing the TI tech for this, despite the fact that the chips are capable of it; at 31:38 he says this solution is not DLP-dependent.
@RussCottier 7 years ago
I suspect they are blurring the various depth layers in a manner that is inverse to the difference in focus between the screen distance and the virtual object layer's distance. You focus at that layer's distance and effectively focus past the eye-to-projector distance. However, the virtual object is projected blurred, so at that distance your eye brings it back into sharp focus. The "layers" of the VR objects are not stacked; they only show what is visible in that layer, not the whole thing. The principle of this compensated focusing is like someone who wears glasses using a telescope, microscope, or binoculars: they each have a focus adjuster to accommodate spectacle wearers, who would usually take off their spectacles to use such devices. Anyone agree?
@Dunkle0steus 7 years ago
When (if at all) do they show what it looks like to wear one of these things?
@zenithquasar9623 3 years ago
Why is this not a tech we follow for VR and AR?!
@Clovenlife 7 years ago
It's simple really. Take a cube and slice it up. Each slice only has exactly what you can see when it is in focus, not the inside of the cube, just that thin slice of the cube around the outside. You still see all the other layers but they're out of focus, giving you the experience of depth. I think the confusion is that you think each slice is the entire rendered scene.
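A tiny illustration of the "slices of the cube" picture above: a depth buffer already holds only the front-most visible surface per pixel, so splitting it into bands gives thin masks, each covering just the part of the scene in that band, never the whole rendered scene. Numbers are illustrative.

```python
import numpy as np

def slice_masks(depth, num_slices):
    """Return one boolean mask per depth band for a normalized (0..1) depth buffer."""
    band = np.minimum((depth * num_slices).astype(int), num_slices - 1)
    return [band == s for s in range(num_slices)]

depth = np.random.rand(4, 4)
masks = slice_masks(depth, 3)
assert sum(m.sum() for m in masks) == depth.size   # every pixel lives in exactly one slice
```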
@trejkaz 7 years ago
I would do it somewhat differently. Store an array of light from different directions at each individual pixel, like the reverse of a light field camera, so the screen is shooting the light into each pixel from multiple directions simultaneously.
@dykam 6 years ago
But then he mentions at one point that he ends up sending the 2D image and depth information. Which solves the focus problem, but not that some things are occluded behind others; he didn't seem to answer how they solved that, if they did.
@cr4zyw3ld3r 5 years ago
@dykam They clearly did solve it, since the rover covers the planet. Look at the review from The Verge with Lauren Goode.
@Przemo-c 5 years ago
Look up the Nvidia talk about lightfield displays. It might not be the tech used here, but it illustrates the principle of creating a lightfield image.
@Lion_McLionhead 7 years ago
That conference room looks familiar. It can't be Belmont across the street from EA.
@jedijeremy 7 years ago
Yes! I've been predicting for years that Light Field displays were coming for VR. I assume it's a fairly standard micro-lens array in front of a super-resolution display arrangement. I wondered if we might get them for FPV first (since depth matters when you're piloting a drone, and you can skip any computational processing by just using a matched light-field camera) but this tech was pretty much inevitable in this space. Good to see it at last. Just one thing; foveated rendering is a BAD idea and I wish people would work that out. Yes, cones are less common outside the fovea. But that merely spaces out the sample points - the eye uses super-resolution techniques which make use of tricks like the timing of when sharp edges cross those (sparse) sample points. Pre-blurring the image in those zones will remove information the eye uses, for minor computational gain. I'd have thought anyone into light-fields would know about super-resolution as well. Don't remove a major source of visual conflict, while replacing it with another!
@jedijeremy 7 years ago
Heh... just watching your mind-blown puzzlement at the end about how the light-field display works. I've done the math and read the papers, so if you want to know "What Sorcery is This?" and how it does it (basically, by having a more complete understanding of light; what you learned in school about lenses was a lie, sorry, an "oversimplification"), hit me up on Twitter at @JediJeremy and I'll break it down for you. It's really not as complicated as you think. (Oh, and as for software, nVidia has you covered there.)
@WastingMyPotential 7 years ago
Was that a fart at 2:26?
@zushiba 7 years ago
Asking the real questions.
@pumpuppthevolume 7 years ago
awesome
@DoItAfterSmoking 7 years ago
Sounds like a Honda to me... You can hear it for a while..
@estebandufanzo5530 7 years ago
That was just Norm. When his hands aren't flailing randomly on camera he gets gassy
@DelorianKruz 7 years ago
Wasting My Potential jajajaaja lol
@puppycatpony 7 years ago
Can you do more One Day Builds?
@oj4127 7 years ago
I think you guys are looking at it wrong: you could output it to a conventional display, but each layer would need its own. When it's projected onto the retina, all the layers are projected simultaneously, and from that your brain creates one discrete image.
@TheNiters 7 years ago
I am guessing they are rendering the scene in 720p, for instance, and then setting up a 1080p signal from the computer to the headset. That means you need to transfer 921,600 pixels of actual image data, but you have 2,073,600 pixels of available space to do so. You can actually pack these two 720p frames of data sequentially inside the 1080p frame and then unpack them on the headset. Since you only need to transfer the depth of the closest pixel (whatever is behind the pixel will be occluded), you can probably get away with just using the normal HDMI interface. Now, he did say that he didn't know what it would look like on a normal TV. I am guessing that was either a way to not tell us too much, or they do some additional hacking on the signal stream so it is incompatible with a TV.
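A sketch of the packing idea above: a 1080p frame carries more pixels than a 720p image needs, so a 1280x720 colour image and a 1280x720 depth map can both ride inside one 1920x1080 signal frame and be unpacked on the headset side. The flat row-major layout here is purely illustrative, not the actual Avegant scheme.

```python
import numpy as np

N_COLOR = 1280 * 720 * 3   # bytes of 8-bit colour data
N_DEPTH = 1280 * 720       # bytes of 8-bit depth data

def pack_1080(color_720, depth_720):
    frame = np.zeros(1080 * 1920 * 3, dtype=np.uint8)   # 6,220,800 bytes: plenty of room
    frame[:N_COLOR] = color_720.reshape(-1)
    frame[N_COLOR:N_COLOR + N_DEPTH] = depth_720.reshape(-1)
    return frame.reshape(1080, 1920, 3)

def unpack_1080(frame):
    flat = frame.reshape(-1)
    color = flat[:N_COLOR].reshape(720, 1280, 3)
    depth = flat[N_COLOR:N_COLOR + N_DEPTH].reshape(720, 1280)
    return color, depth

color = np.zeros((720, 1280, 3), dtype=np.uint8)
depth = np.zeros((720, 1280), dtype=np.uint8)
c2, d2 = unpack_1080(pack_1080(color, depth))
assert c2.shape == (720, 1280, 3) and d2.shape == (720, 1280)
```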
@gatekeeperUS 7 years ago
So, if I've decoded this convergence and accommodation combination technique from the interview, it sounds like the HDMI image is sent with distance data embedded in it for each object in the 2D field. The image is then recreated so that each distance layer's objects appear (when close) closer together for each eye, making my eyes cross to see them, which is the convergence part; my eye then focuses on the object, which now appears closer based on my eyes' angle of view. When that occurs, objects at a distance will appear doubled. Objects which are further away are moved to sit right in front of my eye when it is not at the cross-eyed angle and is looking straight ahead (more than 1 meter out there), with a diopter value of 1 to 0. When focused on objects straight out in front of you, the near objects will then appear doubled, just like in reality. So the near objects are squeezed in closer together for each eye, and the further-out objects are stretched out to sit directly in front of your eye. Then the image is packaged up into a 2D image and projected on the almost-transparent screen, which allows your eye and brain to decide what you're going to focus on, and convergence and accommodation take over physically in your eyes and mentally in your brain. I have heard them say they are using 20 layers of distance in other interviews, and once they said 10 layers. Obviously in real life we have infinite layers of diopter focal distance, but for most applications and people, 10 to 20 would be plenty. Amazing work they are doing. I will get their video headset to see the DLP mirror image on my retina and follow this development closely.
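A small worked example of the diopter point above: focal layers spaced evenly in diopters (1 / distance in metres) pack most of the planes into the nearest metre, which matches the idea that near objects need finer focus resolution than far ones. The layer count and the 4-diopter (25 cm) near limit are illustrative assumptions, not Avegant's figures.

```python
def layer_distances_m(num_layers, max_diopters=4.0):
    """Focal distances (metres) for layers evenly spaced in diopter space,
    from optical infinity (0 D) out to max_diopters."""
    step = max_diopters / (num_layers - 1)
    return [float('inf') if i == 0 else 1.0 / (i * step) for i in range(num_layers)]

print(layer_distances_m(9))
# roughly: inf, 2.0, 1.0, 0.67, 0.5, 0.4, 0.33, 0.29, 0.25 metres
```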
@starspawn507 7 years ago
How many optical engineers did this take?
@YoRCreator 7 years ago
There was an interesting AR game I played in either the late 80s or early 90s that was a shooter and used mirrors for hologram imaging. Just throwing that out there.
@trejkaz 7 years ago
Wouldn't it work like a light field camera, only in reverse? And judging from what they were saying themselves, using many tiny mirrors, instead of many tiny lenses?
@LenLen2u 7 years ago
Still having trouble visualizing what a micro mirror array looks like, but exciting!! Please please PLEASE label the videos with "Projections".
@Change-Maker 7 years ago
Lenny Glionna, it's DLP; search Google. It has been around for many years in older TVs.
@luket9386 7 years ago
They've both got flying tops on
@jahurska 7 years ago
My guess is that they take the 2D HDMI image and the metadata carries depth information for each and every pixel, kind of like voxels. They talked about micromirrors, so if I understand correctly they actually have multiple beams for each pixel? I'm not an expert on optics, but if you divide the real world into small units (i.e. pixels) that refract light in all directions, then multiple beams do hit your eye from each one small unit. So to get that into focus, the eye lens takes those multiple beams and focuses them onto your retina. Like in here: upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Accommodation_%28PSF%29.svg/220px-Accommodation_%28PSF%29.svg.png So it could be something like a convex mirror that is showing one pixel, which effectively then projects multiple beams from that one light source. And then you can change the curvature of the mirror to get different focal planes (the metadata tells which focal plane that pixel belongs to). Probably they have a system of mirrors that each have a set focal plane, and they somehow refract the pixel from the one that corresponds to that focal plane. A micromirror that can change its curvature would probably be too difficult to make or take too much space. That could be the reason why they are limiting themselves to a set number of focal planes. Anyway, thanks for this very interesting video. I had not heard of lightfield technology before at all :)
@MikeTrieu 7 years ago
Did Norman Chan misspeak when he called Avegant's technology "snake oil" at 32:00? Just because he doesn't understand light field displays doesn't mean it's a farce.
@astapenell 7 years ago
What's the consumer application for AR?
@gavinw77 7 years ago
AR games will be huge. Tourism will use AR to give information about anything on your journey. AI assistants will want to live in AR. Even desktop computing will start using AR to extend beyond the display, creating novel workspaces. It seems easy to imagine consumer applications.
@Hyncharas 7 years ago
Considering that Magic Leap STILL hasn't really demonstrated their prototype, it looks like Avegant is going to replace that company at the forefront of light field mixed reality... In the past couple of days Microsoft launched two of their partners' Mixed Reality headsets for pre-order; shame they decided nobody outside the US was allowed to purchase them.
@KlingonCaptain 7 years ago
I saw the thumbnail, and for some reason beyond me, I thought it was John Campea.
@stankwho 7 years ago
In order for this to replace smartphones as he says, they will have to compress the headset into something like a pair of regular glasses, because no one is going to walk around with a battle helmet on their head!
@christianserrano7344 7 years ago
But will this fuck up my eyes?
@sadbucket 7 years ago
CGS Sir lol thinking the same thing,
@42tancho 7 years ago
We all learned not to look into projectors, but the world around you is also reflecting light into your eyes. You just need a dim projector.
@tunaware 7 years ago
If your eyes focus to a normal distance, I don't see why it would.
@jellevm 7 years ago
No, there's light entering your eyes all the time, it would do no more harm than natural light.
@toyeadeniran6394 7 years ago
I actually think this is better for your eyes than regular displays, because it's tricking your brain to see light like you normally would, rather than just looking at a display. When you look around, light is already getting projected into your eyes naturally; this sort of does the same thing.
@godsofthesingularity8308 7 years ago
So if they've got the whole lightfield thing down (at least an approximation of it)... why is Magic Leap worth eight billion dollars and they aren't? They've already got a working model of what Magic Leap was promising like two years ago.
@ZoniesCoasters 7 years ago
serious question. has anyone developed AR pörn yet?
@SonicBelchFire 7 years ago
anyone else think the thumbnail looks like the Infant Memory Generator from Donnie Darko?
@jimwilliams1536 7 years ago
your brain is doing most of the 'computations'.
@AceSeptre 7 years ago
It looks like a futuristic robot cow mask.
@MurphyArtPrints 7 years ago
I think I understand how it works. But what I can't understand is why every company working on this problem is focused on AR rather than VR.
@skagerstrom 7 years ago
This guy, in the way he speaks, sounds like a VR Christian Von Koenigsegg :P Exciting interview :P
@frankiesomeone 7 years ago
Still not quite sold on why this show is called Projections when 99% of HMDs don't use projectors.
@TheBlackstarrt 7 years ago
Well, what's the FOV? Did they discuss this? I am cleaning while listening.
@johnnyaxon_ 7 years ago
narrow 28:22
@occhamrazor 7 years ago
Could it be something based on this: en.wikipedia.org/wiki/Light-field_camera
@dongordayiii6976 7 years ago
How do they dynamically switch focus between objects at different ranges? My guess is that they don't. You do. They are projecting light directly into your eyes, and eyes come with a built-in lens that changes focus dynamically. They take advantage of this and project images into the eyes that will only be in focus when your eye's focus point matches the distance information attached to the virtual object. At least, that's what I would do.
@dongordayiii6976 7 years ago
I don't think that is the case. The discrete planes of focus would be in software, not the display. Since it is the reshaping of the lens of your eye interacting with the angle of the photons being projected that determines what is in focus, and they are using micromirrors to control the angle of light for every pixel individually, it stands to reason that they can project different parts of the field of view at different focal distances without losing resolution. It can all happen in the rendering engine.
@dongordayiii6976 7 years ago
They said at the beginning that the technology is similar to the retinal projection system they had developed earlier, which used micromirrors to project images into the eyes. I guess I am making a bit of a leap... but since it would be impractical to separate focus planes in the hardware without having separate projection systems for each, it just seems like the simplest answer. In the video they never fully explain how they do it. This is just me speculating.
@RussCottier 7 years ago
Don Gorday III just saw this...I agree. I tried to say this in my answer.
@NinetooNine 7 years ago
Don Gorday III The separation is done in the lens. There are multiple separate focus lenses built into a single lens. Projected light is shown through these lenses at the same time, creating a single light field. Your eye then shifts through these different focal points at will, just as it would in real life.
@cr4zyw3ld3r 6 years ago
I don't see how that would work, nine2nine. It seems to me that this approach of a multifocal optic would make the image warp or swim; plus, they said the optical system was not very expensive. I think they are simply sending multiple depth planes to the display system at the same time and letting your eyes focus on them naturally. I think the system is set up like so: at a few meters out there is one plane, and then as the object gets closer to your eyes you get a myriad of them, like this: | |||| Unless what they have done is downscale an optic system like Lytro's www.dpreview.com/files/p/articles/5867769785/fig_3.2_ng_dissertation.png in which case you would be correct.
@rtkiiiprod 7 years ago
Magic
@greyareaRK1 7 years ago
Will the HoloLens be using lightfield technology?
@gavinw77 7 years ago
I don't think it's possible to know at this time. Currently the HoloLens doesn't use it. And the next version of HoloLens isn't going to be released for consumers. So we'll have to wait for the version after that to see. There are probably better problems to work on, like field of view. This is a novel feature of AR, but it's not going to get AR over the line.
@TapoutTommy11 7 years ago
Got the dell augmented reality ad 😂
@prof.m.ottozeeejcdecs9998 6 years ago
You can see only so much with your eyes (and the brain behind it, which has to create the dimensions).
@sniperkitty3000xx 7 years ago
I'm at 10:30. Watch him not actually tell them how they manage to simulate the light coming into your eyes as if it came from different distances, because they want to keep it a trade secret xD
@lewisjames4187 7 years ago
Did anyone notice that everyone was wearing an Apple Watch?
@zekedoesyt9735 7 years ago
Lewis James so?
@lewisjames4187 7 years ago
Nothing just thought it was strange
@FindecanorNotGmail 7 years ago
So, Tested has product-placement now? ;)
@nooneknowsnothing 7 years ago
paid & bought in iHoopla noise. Now Stuck with it for another 1 year or so until it dies.
@jeffh4581 7 years ago
Lewis James yeah, well, they're kind of awesome
@Levi-Friss 7 years ago
If AR kicks off like I hope it will, the first thing I want to do is use magic!
@JoTokutora 7 years ago
Looks like the CTO lost a lot of weight.
@thinkofwhy 7 years ago
Ah,... it's called CastAR. This fall. Supposedly.
@OzFaxFlyer 4 years ago
What a shame that Avegant's hype never actually delivers on its promise. I made the mistake of investing my dollars in the Glyph; what a fiasco! And yet they still advertise it on their webpage!
@MrChief101 7 years ago
I'll betcha a nickel they're taking info about your foveal focus (area of intent, not image) out of a return image. Even if it's a very simple pattern that only it can see.
@D1yude 7 years ago
I love Tested, I watch most of your content and it is great. That said, please change the intro for Projections... it sucks!!
@WangoTiags 6 years ago
Love your stuff guys, but could you please ask your guests not to speak too fast, especially when talking about terms we viewers might not be familiar with? For instance, I had to do a thorough (and rather difficult) search to figure out that he was saying "diopters", as I had no clue what the unit of measurement was for refractive power (focus, as he said). Just ended up replaying certain sections repeatedly in order to catch what he was saying lol. About 12:53 he says something like "...so we get thing like FA#$*%*# rendering." (That wasn't cursing... it was just the part of the word that I couldn't understand for the life of me lol.) Again, you guys do a great job, and your videos are very entertaining and educational! Thx!!!
@thesystemera 7 years ago
Clearly using Z depth. Clever.
@benjaminds7465 6 years ago
Golden rule of videos: Show, don't tell. The audience needs to _see the thing_ you're talking about.
@MrHeliMan 7 years ago
So according to some comments it is impossible to see shadows / black color, but others say there shouldn't be any problem. I'd be glad to have an answer to this from someone who actually knows for sure how it works, rather than a few anons on YT. So, Tested team, is it possible?
@hpmj999 5 years ago
subtitles please
@Yewbzee 7 years ago
Technically you can't fault the info being discussed here, but jesus, put some demo video on, ffs! That's the difference between an amateurish interview discussion and a top-end documentary. I mean, how many times did we need a close-up of the headset being worn?
@MotleySchu 7 years ago
Why in the world do you guys always introduce each person as "____ from Tested"? Hi, I'm Norm from Tested. Hi, I'm Jeremy from Tested... Unless they are from some other show... YOU ARE ALL FROM TESTED. End of rant.
@joshuahowson22 7 years ago
He reminds me of Nate Mitchell from Oculus.
@Kitmaker 7 years ago
4:38 Lolz
@3ATIVE 7 years ago
Looks like someone (Gunther) got a new slider!!! And decided to overuse it. Back-and-forth, back-and-forth shaky slider shots [not good].
@tjesse 7 years ago
TALKING!