OpenGL - deferred rendering

30,126 views

Brian Will

4 years ago

Code samples derived from work by Joey de Vries, @joeydevries, author of learnopengl.com/
All code samples, unless explicitly stated otherwise, are licensed under the terms of the CC BY-NC 4.0 license as published by Creative Commons, either version 4 of the License, or (at your option) any later version.

26 comments
@krytharn · 4 years ago
Good video. Quick note: to solve the issue of rendering the light volume when the camera is inside it (7:55), the standard solution is to only render the back faces (and cull the front faces).
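For reference, a minimal CPU-side sketch (plain C++, hypothetical names) of the fragile alternative that back-face culling makes unnecessary: testing per light whether the camera sits inside the light's bounding sphere, which misbehaves when the near plane straddles the sphere's surface.

```cpp
#include <cassert>

// Hypothetical helper: decide whether the camera is inside a point light's
// bounding sphere. Flipping cull state based on this test breaks near the
// sphere boundary; always culling front faces avoids the test entirely.
bool cameraInsideLight(const double cam[3], const double light[3], double radius)
{
    double dx = cam[0] - light[0];
    double dy = cam[1] - light[1];
    double dz = cam[2] - light[2];
    return dx * dx + dy * dy + dz * dz < radius * radius;
}
```

Rendering only back faces (`glEnable(GL_CULL_FACE); glCullFace(GL_FRONT);`) sidesteps the problem, since a closed volume always presents back faces to a camera inside it.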
@arsnakehert · 2 years ago
Damn, this was a great video. I like how you show every relevant part of the code in its due time; it's useful even as a kind of reference.
@ramoncf7 · 2 years ago
Thank you for the whole openGL series, you've helped me a lot to understand many concepts.
@kampkrieger · 2 years ago
Oh my god. This video is so crisp and cleanly cut, raw all-in information. True genius!
@CodeParticles · 1 year ago
@Brian Will, with all due respect, I apologize for being 3+ years late to comment on this terrific video. But according to Joey de Vries, one of the disadvantages of deferred shading is that "deferred shading forces us to use the same lighting algorithm for our scene's lighting", mentioned about halfway down the disadvantages section of his deferred shading page. However, it can be alleviated by including more material-specific data in the G-buffer. I'm encountering exactly this tricky situation: I have a ground object I don't want affected by specular lighting, but all my teapot objects are fine with specular. I'm not sure how to approach allowing specular only on the teapots and not on the ground in the final buffer...
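A toy model of the fix the comment itself suggests: store one extra per-pixel material flag in the G-buffer and let the light pass gate specular on it. All names here are hypothetical; a real G-buffer would typically pack the flag into a spare channel of an existing attachment rather than a separate struct.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical per-pixel G-buffer contents, reduced to scalars for clarity.
struct GBufferPixel {
    double diffuseAlbedo;    // from the albedo attachment
    double specularStrength; // e.g. the alpha channel of the albedo attachment
    bool   allowSpecular;    // the added material flag (false for the ground)
};

// Light-pass shading, gated by the material flag read from the G-buffer.
double shadePixel(const GBufferPixel& p, double diffuseLight, double specularLight)
{
    double color = p.diffuseAlbedo * diffuseLight;
    if (p.allowSpecular)
        color += p.specularStrength * specularLight;
    return color;
}
```

In a shader, the flag would be sampled from the G-buffer texture in the light pass, so the ground and the teapots can share one lighting shader while only the teapots receive the specular term.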
@jeroen3648 · 2 years ago
Thank you for this video, it really helped me understand the differences between deferred rendering and forward rendering
@raghul1208 · 2 years ago
Excellent
@Mazarhan · 4 years ago
excellent
@Mcs1v · 4 years ago
Nice and detailed video! ;) You can save a lot of memory and memory bandwidth if you don't store a separated "position layer" in your gbuffer, because you can recalculate the positions from the depth buffer (and you write and use the depth buffer anyway).
@briantwill · 4 years ago
Does the extra fragment work outweigh the bandwidth savings? I'd think that for high-end rendering on higher-end hardware, the computation cost would outweigh the bandwidth savings.
@Mcs1v · 4 years ago
@@briantwill The main problem with deferred rendering is the memory bandwidth cost, which is huge. Doing some math on the GPU usually costs less than hammering memory (there are exceptions, ofc ;)). Reconstructing the position from depth is essentially one multiply, and it's faster than sampling a 32F texture. On the other hand, you also have to write the texture (which costs more than a sample), and for position this implementation does it twice (once for the depth buffer and once for the position buffer). You can do the same with normals: convert them to screen space and store only the X/Y values, saving more bandwidth. Dropping the Z channel also buys extra precision; the trade-off is that you lose the ability to store normals for polygons that face away from the camera.
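A minimal CPU-side sketch of the position reconstruction, assuming a standard symmetric OpenGL perspective projection (camera looking down -z, depth in [0,1]); in a shader the same math runs per fragment from the sampled depth and gl_FragCoord-derived NDC coordinates.

```cpp
#include <cassert>
#include <cmath>

// Map view-space z (negative in OpenGL view space) to the [0,1] depth that a
// standard perspective projection writes into the depth buffer.
double depthFromViewZ(double zView, double n, double f)
{
    double zNdc = (f + n + 2.0 * f * n / zView) / (f - n);
    return 0.5 * zNdc + 0.5;
}

// Invert it: recover view-space z from the stored depth value.
double viewZFromDepth(double d, double n, double f)
{
    double zNdc = 2.0 * d - 1.0;
    return -2.0 * f * n / (f + n - zNdc * (f - n));
}

// Recover the full view-space position from NDC x/y plus depth, given the
// projection parameters (symmetric frustum assumed).
void viewPosFromDepth(double xNdc, double yNdc, double d,
                      double n, double f, double tanHalfFovY, double aspect,
                      double out[3])
{
    double z = viewZFromDepth(d, n, f);
    out[0] = xNdc * aspect * tanHalfFovY * -z;
    out[1] = yNdc * tanHalfFovY * -z;
    out[2] = z;
}
```

This is what lets the G-buffer drop its 32F position attachment entirely: the depth buffer, which the geometry pass writes anyway, already contains everything needed.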
@eddek6141 · 3 years ago
@@Mcs1v nice!! Thx
@user-dh8oi2mk4f · 2 years ago
@@Mcs1v How can you store a normal with only 2 values?
@Mcs1v · 2 years ago
@@user-dh8oi2mk4f Hey! You can convert it to screen space (in screen space you only need the horizontal and vertical components), and you can convert it back to a 3D normal after that.
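A tiny sketch of the trick under the front-facing assumption: a unit-length view-space normal of a surface visible to the camera has a non-negative z component (it points toward the camera), so z can be rebuilt from x and y. All names are illustrative.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Two stored channels of a unit-length view-space normal; z is dropped.
struct EncodedNormal { double x, y; };

EncodedNormal encodeNormal(double nx, double ny, double nz)
{
    (void)nz; // discarded; assumed non-negative (surface faces the camera)
    return {nx, ny};
}

// Rebuild z from the unit-length constraint: x^2 + y^2 + z^2 = 1.
void decodeNormal(const EncodedNormal& e, double out[3])
{
    out[0] = e.x;
    out[1] = e.y;
    out[2] = std::sqrt(std::max(0.0, 1.0 - e.x * e.x - e.y * e.y));
}
```

More robust two-channel schemes (e.g. octahedral encoding) also handle normals that point away from the camera, at slightly more decode cost.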
@franesustic988 · 4 years ago
Amazing video! I do have a funny hypothetical question, though. Take a fixed camera with a 2D pre-rendered background (as in RE2 or FF9). How would someone go about having a pre-rendered G-buffer, so that the first pass skips the static elements and only updates the buffer where dynamic 3D objects are found?
@oonmm · 2 years ago
Sorry for a very late answer, and for the fact that I have never actually done this. But you could simply render the static background to the screen buffer, render the 3D objects to a texture, and then draw that texture to the screen buffer on top of the background.
@movax20h · 4 years ago
Just a question about 5:40 (I am no OpenGL expert): does it make a difference to use glBlitFramebuffer here (with GL_NEAREST and identical source/destination dimensions, which basically disables resizing), versus glCopyTexSubImage2D or glCopyImageSubData? I think the primary intent of glBlitFramebuffer is to resize buffers and convert texture formats. I know for a fact that on some older hardware and older drivers glBlitFramebuffer can be slower.
@keptleroymg6877 · 11 months ago
Because it's hard to find what I need
@andreafasano4755 · 4 years ago
Does it make any sense to have another G-buffer attachment that stores values indicating which shader to use for each pixel? That way it would be possible to use multiple fragment shaders, right?
@briantwill · 4 years ago
Yeah, you might put more pixel info in the G-buffer, including a value that governs which shader should process that pixel. You wouldn't necessarily need a separate G-buffer, just an added attachment on the same G-buffer, or an added channel on an existing attachment. Keep in mind, though, that skipping over code with a branch doesn't really spare the GPU any work except in cases where all 64 cores in a group happen to skip it; if even one core doesn't skip the code, all the other cores have to wait. So your idea is doable, but it requires branching in the light-pass shader and so carries that performance drag. It generally won't be quite as expensive as processing all pixels with all N of your light-pass shaders, but it will often be close.
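The divergence point above can be illustrated with a toy cost model (hypothetical costs; real GPUs schedule lanes in fixed-size groups such as 32 or 64): within one SIMD group, every shader variant whose branch is taken by at least one lane effectively costs its full body for the whole group.

```cpp
#include <cassert>
#include <set>
#include <vector>

// Toy divergence model: the group pays for each distinct material's shader
// body that any lane in the group selects.
int groupCost(const std::vector<int>& laneMaterial,
              const std::vector<int>& shaderCost)
{
    std::set<int> materialsPresent(laneMaterial.begin(), laneMaterial.end());
    int cost = 0;
    for (int m : materialsPresent)
        cost += shaderCost[m];
    return cost;
}
```

A fully uniform group pays for one shader body; a group with even one divergent lane pays for both, which is why per-pixel shader selection tends toward the worst case when materials are interleaved on screen.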
@gnorts_mr_alien · 1 year ago
How would light occlusion work with this? If there is a model between the light and the target model, that light's contribution might be zero, but there is no way to find that out from this setup, I presume. So is that the job of a separate shadow pass? Thank you for the series, by the way; amazing content.
@pytchoun140 · 3 years ago
Hello, can you share the source code?
@zentyrant · 2 years ago
Came from annie's video
@tezza48 · 4 years ago
Captions look like this. Good video though :)
@Cheesecannon25 · 4 years ago
9:30 Did you just mistake something 2D for 3D?