Comments
@LeYang-uj6jo 6 days ago
I can reproduce the diffuse part with the environment map sampling method from this video, but how should I handle the specular part? Can you provide some relevant articles? Thank you very much!
@gsn-composer 4 days ago
For the specular part, it is usually better to sample the BRDF rather than the envmap. You can watch Shaders Monthly #11. The most general solution would be "multiple importance sampling", which is mentioned at the end of the video: graphics.stanford.edu/courses/cs348b-03/papers/veach-chapter9.pdf
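For reference, the balance heuristic from Veach's thesis (chapter 9, linked above) combines the two strategies by weighting each sample with its own PDF relative to all PDFs; with $n_f$, $n_g$ samples drawn from the BRDF PDF $p_f$ and the envmap PDF $p_g$, the weight for a BRDF sample at $x$ is:

$$w_f(x) = \frac{n_f\, p_f(x)}{n_f\, p_f(x) + n_g\, p_g(x)}$$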
@LeYang-uj6jo 22 days ago
In the `directionToSphericalEnvmap` and `sphericalEnvmapToDirection` functions, the z-axis is used as the up direction. But in OpenGL, the y-axis is the up direction. Do we need to swap the y and z axes?
@gsn-composer 21 days ago
Yes, the y-axis points up in the OpenGL camera coordinate system. However, you can set the orientation of the environment map to any direction in the global coordinate system of the scene, which is what we did here.
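For illustration, a minimal GLSL sketch of a z-up spherical mapping and the swizzle that would convert it to a y-up convention; the exact parameterization used in the video's `sphericalEnvmapToDirection` may differ, so treat the details as assumptions:

```glsl
const float PI = 3.14159265359;

// z-up spherical mapping (assumed parameterization)
vec3 sphericalEnvmapToDirection(vec2 tex) {
  float theta = PI * (1.0 - tex.t);
  float phi = 2.0 * PI * (0.5 - tex.s);
  return vec3(sin(theta) * cos(phi),
              sin(theta) * sin(phi),
              cos(theta));
}

// If a y-up convention were wanted instead, a single swizzle of the
// result would do it, e.g. direction.xzy.
```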
@LeYang-uj6jo 24 days ago
Hi, very nice video! I just have a simple question: is the output value of the fragment shader an RGB color or linear radiance? I saw that `rgb2lin` was called for the light ball's `outColor`, but `lin2rgb` was also called when coloring the object, so I was confused.
@gsn-composer 24 days ago
Inputs and outputs of the presented shaders are in the sRGB color space. The inputs are converted from sRGB to linear with the function "rgb2lin()" (gamma expansion). Then the shading is calculated in linear space. Finally, the result is converted from linear to sRGB with the "lin2rgb()" function (gamma compression).
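As a minimal sketch of such a conversion pair (assuming the simple gamma-2.2 approximation; the functions in the video may instead use the exact sRGB transfer curve):

```glsl
const float gamma = 2.2;

// gamma expansion: sRGB -> linear
vec3 rgb2lin(vec3 c) {
  return pow(c, vec3(gamma));
}

// gamma compression: linear -> sRGB
vec3 lin2rgb(vec3 c) {
  return pow(c, vec3(1.0 / gamma));
}
```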
@LeYang-uj6jo 22 days ago
Thanks!
@mahalis 1 month ago
Thank you for publishing this; really appreciate the clear explanation and walkthrough.
@peterburgess9735 1 month ago
One thing I should say: if anyone is referencing the linked GSN repro in their own project, beware of the 0 alpha value when storing the final frag colour. Took me a while to debug that :D I guess GSN Composer, or that particular project file, has alpha blending turned off or something.
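For anyone hitting the same thing, a hypothetical fix is simply to write an explicit alpha of 1.0 with the final colour (names below are placeholders, not from the project file):

```glsl
out vec4 fragColor;

void main() {
  vec3 shadedColor = vec3(1.0); // placeholder for the result of the shading computation
  // write the alpha channel explicitly so the output is fully opaque when blending is enabled
  fragColor = vec4(shadedColor, 1.0);
}
```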
@peterburgess9735 1 month ago
Great video, thanks! I've been messing around with hard-to-understand Shadertoy noise functions, and this has helped a lot. Am I right in thinking that to get the gradient for the actual pixel, I just need to interpolate between the 4 gradients, just as you did with the values you calculated from the gradients?
@gsn-composer 1 month ago
Yes, correct. At each grid cell, we generate a gradient of random size and orientation. Then we get the four gradient values for the current pixel, one gradient value for each of the four neighboring grid cells. These gradient values are called d11, d12, d21, and d22 in the code. And then we perform interpolation based on the location of the pixel within the grid cell, using the same code as in the value noise example.
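As a rough GLSL sketch of that last interpolation step (assuming, following the naming above, that d11, d12, d21, d22 are the four gradient values and `f` is the pixel's fractional position inside the grid cell; the exact weighting in the video may differ):

```glsl
// smoothstep-style interpolation weights, as in the value noise example
vec2 w = f * f * (3.0 - 2.0 * f);

// bilinear interpolation of the four gradient values
float x1 = mix(d11, d21, w.x); // along x at the lower edge
float x2 = mix(d12, d22, w.x); // along x at the upper edge
float p  = mix(x1, x2, w.y);   // along y
```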
@peterburgess9735 1 month ago
@@gsn-composer Thanks very much!
@cialk 1 month ago
This is so good thank you
@DasAntiNaziBroetchen 1 month ago
Some German snuck into your slide #21.
@user-kl9wj3dh2d 1 month ago
Very nice tutorial!
@Mcs1v 2 months ago
Nice and detailed video! Just a hint: you didn't need the position buffer, it's much faster to reconstruct the 3D position from the depth buffer with the inverse camera matrix (it saves a lot of GPU bandwidth).
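A hedged sketch of that reconstruction (assumed inputs: `uv` is the screen-space coordinate in [0,1], `depth` the value read from the depth buffer in [0,1], and `invViewProj` the inverse of the combined view-projection matrix):

```glsl
vec3 reconstructPosition(vec2 uv, float depth, mat4 invViewProj) {
  // back to normalized device coordinates (OpenGL convention, all components in [-1,1])
  vec4 ndc = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
  // unproject and apply the perspective divide
  vec4 world = invViewProj * ndc;
  return world.xyz / world.w;
}
```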
@Khytau 2 months ago
thanks!
@HaonanHe-ep5vs 2 months ago
Excellent video! But how do you render shadows when using image-based lighting in physically based rendering?
@gsn-composer 2 months ago
When we do rasterization and perform LOCAL shading, shadows are always difficult because they are a GLOBAL effect that we cannot decide locally. We need to know if the path to the light source is occluded or not. This is why ray tracing approaches are so popular, because they can solve this problem by shooting a ray towards the light source (or environment map) and can check if the light is occluded by some other object.

Here is an image-based lighting example for ray tracing. It has shadows and other "global" effects: gsn-lib.org/apps/raytracing/index.php?name=example_transmission I have some slides about raytracing here: www.uni-marburg.de/en/fb12/research-groups/grafikmultimedia/lectures/graphics2

In a rasterization approach, we need to cheat somehow. For example, we could look for the brightest location in the environment map and assume that this direction is the only one that matters for shadows (a very rough approximation, of course). Then we could add a cascaded shadow map for that direction: developer.download.nvidia.com/SDK/10.5/opengl/src/cascaded_shadow_maps/doc/cascaded_shadow_maps.pdf
@csanadtemesvari9251 2 months ago
I just got this recommended: kzbin.info/www/bejne/h4fFgKNnnZJ9mJY
@roozbubu 2 months ago
the goat has returned
@iNkPuNkeR 2 months ago
Thank you for the work you put out!
@DigitalDuo2211 2 months ago
Excellent video as always! ♥
@neil_m899 2 months ago
Great to see you back man! Loved your IBL videos!
@thegeeko1 2 months ago
thanks for the very helpful videos
@oSteMo 2 months ago
Excellent explanation, thanks for all the effort you put in those videos :)
@xgamer-od7mb 3 months ago
thank you for sharing
3 months ago
I have a second question. Thanks for calculating these radiometric quantities for a point light! Are you going to do the same for directional lights later in the course? I'm having difficulty wrapping my head around visualizing the radiant intensity of a directional light (i.e. a light model that does not have a position and whose direction is constant through space, a good approximation of sunlight on Earth's surface), because I can't surround its source with a spherical shell.
@gsn-composer 3 months ago
Good point. Point lights could be controlled in a user interface with the parameter "radiant flux" (in watts). For an idealized directional light, such a parameter makes no sense because the directional light covers an infinitely large area, which would require an infinite radiant flux. Instead, for a directional light, the user should be allowed to directly specify the perpendicular irradiance (in watts per square meter).
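As a small sketch of the two parameterizations (function names and the isotropic-emission assumption for the point light are mine, not from the video):

```glsl
const float PI = 3.14159265359;

// Point light given by its radiant flux 'phi' (watts), assumed to emit isotropically:
// irradiance at distance r on a surface whose normal is tilted by theta against the light direction.
float irradiancePointLight(float phi, float r, float cosTheta) {
  return phi / (4.0 * PI * r * r) * cosTheta;
}

// Directional light given directly by its perpendicular irradiance 'ePerp' (W/m^2).
float irradianceDirectionalLight(float ePerp, float cosTheta) {
  return ePerp * cosTheta;
}
```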
3 months ago
Hi professor! Thank you so much for making these very high quality resources publicly available. I'm studying PBR rendering at the moment and found your videos through a KZbin search. Yours is the best resource I was able to find that involves both math/theory and code/implementation. ^_^

I got confused by the meaning of theta, and by the definitions of and relations between irradiance and radiance. AFAIU, on slide 14 when you explain irradiance, around t=18:55, theta is the angle between the incoming light direction and the surface normal. Again on slide 14, when you explain radiance, around t=20:30, theta is the angle between the outgoing/reflected light direction (view direction) and the surface normal; you say that's why we multiply dA by cos(theta), to get the projected area. Again on slide 14, when you explain the relationship from incoming radiance to irradiance, around t=22:18, theta again becomes the angle of incidence.

Oh! I think I got it now! In the third case we are talking about the incoming radiance, but in the second case we were talking about outgoing radiance. We always use theta, but the meaning of theta changes depending on whether the radiance we are interested in is incoming or outgoing with respect to the surface. Lol, it took me a few days to finally grasp this, and that happened while I was writing this question. :-) I'm glad I posed the question. Still gonna leave my comment here in case anyone else got confused about the same concepts.

One tiny confusing aspect that was left in my mind: the outgoing radiance, the essential quantity that we are calculating in our shaders, by definition has a cos(theta_prime) in it (I call it prime to distinguish it from the angle of incidence). However, AFAIU, by definition, the BRDF handles the "projected area" for that case... I mean, the integral to calculate L_o(v) is kind of a transformation of irradiance from l-space to v-space via the BRDF kernel, and by definition the direction v is into our eye/camera, so theta_prime is always 90. At least, this is how I'm going to soothe my mind. :-)
@gsn-composer 3 months ago
It is normal to be a little confused. This is a difficult topic and the video would need to be even longer to fully explain everything. You can watch this video: kzbin.info/www/bejne/i6u0iYucfpV1otEsi=ctmtY_JDbwqOaiJx&t=1299 Matthias Teschner (University of Freiburg) goes into more detail and may answer your remaining questions about radiance starting at 21:39 min:sec.
@WatchInVR1 4 months ago
Hi. I was typing "understanding registers in shaders" in the search bar in KZbin and got to your video. r1, r2, etc. are temporary registers; o0, o1, etc. are output registers; v1, v2, etc. are input registers (for vertex shaders 3.0). I would really like to understand this because I want to stereorise shaders in games. This means I want to edit the shaders of games through DLL wrappers. Do you have any videos or sources that explain what these registers do and what they mean in ASM or HLSL?
@gsn-composer 4 months ago
Sorry, writing shaders at assembly/register level is very uncommon and I have never done it myself, so I cannot help you. As far as I know, writing shaders directly in assembly is only possible in DirectX/HLSL, not in OpenGL/GLSL, which I cover in these videos. learn.microsoft.com/en-us/windows/win32/direct3dhlsl/dx9-graphics-reference-asm It is strange that you found this video, which covers the very basics, while searching for such an advanced topic. Normally, shaders are compiled at runtime by the graphics card driver to generate machine code for the specific graphics card. These compilers are pretty good at optimizing the code, so I think you really need to know what you are doing to beat them.
@WatchInVR1 4 months ago
@@gsn-composer Thank you very much for your response! Yeah, it is a very complicated topic, and while I have thousands of examples of what is done, very few are able to explain exactly how it's done. Even KZbin and Google are dry. I've been to the Microsoft reference, but it's also only scratching the surface. The issue is primarily to alter the shader through alterations in the matrices, but the back and forth between the vertex shader and pixel shader, with the complexity on top of using the registers, is very hard to understand.
@camerbot 4 months ago
This is an awesome video series, thank you so much. At 7:05 you said that the use of three primary colors has something to do with the fact that we have 3 cones; where can we get more info on this topic? I thought that we use the RGB model mostly for technical reasons (I'm guessing it's easier/better to produce red, green and blue diode screens than, let's say, CMYK). Is there really a link of some kind between how our cones work and how RGB models are defined?
@gsn-composer 4 months ago
Yes, the RGB color model is certainly motivated by the fact that humans are trichromats. Have you seen the section about the CIE RGB color space from 1931 at 7:20 min:sec? You cannot cover the entire space of perceivable colors with three primary colors, but you can cover a large area: en.wikipedia.org/wiki/CIE_1931_color_space . Unlike the additive RGB model, CMYK is a subtractive color model typically used by printers: en.wikipedia.org/wiki/CMYK_color_model
@doriancorr 4 months ago
A truly great work, thank you for sharing your work with us. I have learned so much from your series.
@roozbubu 5 months ago
This series is a truly phenomenal resource. Thank you!
@wkxvii 5 months ago
You deserve more views and likes friend! Awesome content!!! Thank you very much!
@azbukaChisel 6 months ago
Best of the best, thank you teacher!
@xinyanqiu3601 6 months ago
This is the best class I've ever heard, thank you!
@unveil7762 7 months ago
I was thinking of precomputing the BRDF integration map as an attribute. This can maybe save GPU resources, since then the texture is an attribute at, I guess, point level! 🎉 Thanks for all those lessons. Anytime I have time I come here to become a better artist! Little pearls each time. Thanks!!
@gsn-composer 7 months ago
Thank you very much. The result of performing shading (or parts of shading) in the vertex shader depends on the number of vertices in your mesh. Of course, if you have control over the tessellation of your input mesh, you can use it for low-frequency shading components, but I cannot recommend it as a general solution, especially in combination with high-frequency (detailed) textures. The texture coordinates are interpolated by the rasterizer when they are passed from the vertex to the fragment shader. This means that the texture coordinates and material properties read from textures, such as roughness (see the last example at 39:33), may change per pixel, not per vertex.
@unveil7762 7 months ago
This is so cool!! Is there a way to output the result on an external screen? Having a quad mapper as a web app can be pretty handy!!
@gsn-composer 7 months ago
It is not possible to have two synchronized instances of the GSN Composer in two different browser tabs, one running on an external screen and the other on the primary screen. However, it is possible to view the output in full screen mode. In the GSN Composer interface, you can use the "View Control" panel to set the output of any node as the background. But you will still see some of the interface. Currently there are two ways to get a "clean" full screen mode:

1) If you make the project "public", you will find a "GalleryLink" in the project dialog (next to the graph name). If you append "&fs=1" to this link, you will get a full screen output, see www.gsn-lib.org/docs/gallery.php To switch the browser to full screen mode, press "F11" on Windows.

2) In the project dialog, use the "Export as Website" button. If you are familiar with basic CSS, you will be able to place/scale the output as you like on your website.
@lambcrystal583 7 months ago
That's the most clear and convincing learning resource for CG I have ever seen.
@4AneR 7 months ago
Many times before I got stuck with radiance and BRDF math, but this video finally allowed me to understand it. This is the best video on lighting computations on KZbin. Big thank you!
@unveil7762 8 months ago
Thanks!! I am trying to recreate all the lessons you do here in Touchdesigner!! Your composer is the best web madness I ever saw. Thanks for what you share. You made me a better artist. ❤
@gsn-composer 8 months ago
Thanks! I have never used Touchdesigner, but judging from the documentation at derivative.ca/UserGuide/Write_a_GLSL_Material it seems possible to reproduce most of the examples. I am very interested, so please share your project if it works. 👍
@user-zd7pl5pv5h 8 months ago
At 22:07 you return p + 0.5 to avoid negative values, but since p is in [-1, 1], shouldn't it be p*0.5 + 0.5? I am sorry, just checking if I am not understanding it right. I am learning random noise, and by far this is top-notch content on the internet. You deserve so much appreciation.
@gsn-composer 8 months ago
Thank you. Because of the linear interpolation, the value of p is approximately in the interval [-0.5, 0.5]. Therefore, (p + 0.5) maps to [0, 1].
@unveil7762 8 months ago
Thanks, all this is pure gold!
@unveil7762 8 months ago
🥰
@DigitalDuo2211 8 months ago
Love these videos! Keep up the great work!🎉
@lksxxtodin5292 8 months ago
I loved the video! Great slides and presentation. But I think that you went a bit too fast with the shader explanation. I think that you could go into more detail on the "normal" and "position" attributes (do these come from the mesh?), what rasterization is, etc.
@user-zd7pl5pv5h 8 months ago
Why did you make t2p when you could have used gl_FragCoord?
@gsn-composer 8 months ago
Yes, that is true. Thank you. The difference is only an offset of 0.5. For example, if the output is 4 pixels wide, gl_FragCoord would return "0.5, 1.5, 2.5, 3.5" while t2p would return "0.0, 1.0, 2.0, 3.0". But since we round to the nearest integer with uvec3 later, using gl_FragCoord would be perfectly fine. When I think about it, in this particular use case at 4:48 min:sec using gl_FragCoord would be even better because rounding the result of t2p is prone to floating-point inaccuracies.
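To illustrate the 0.5 offset, a small sketch (with `tc` as the interpolated texture coordinate, `res` the output resolution in pixels, and the shown t2p definition only an assumption, not necessarily the exact code from the video):

```glsl
// gl_FragCoord.xy is already in pixel units, sampled at pixel centers: 0.5, 1.5, 2.5, ...
vec2 pFromFragCoord = gl_FragCoord.xy;

// t2p maps texture coordinates in [0,1] to pixel indices: 0.0, 1.0, 2.0, ...
vec2 pFromT2p = tc * res - 0.5;

// rounding to integer indices is where floating-point inaccuracies can bite
uvec2 idx = uvec2(round(pFromT2p));
```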
@user-zd7pl5pv5h 8 months ago
@@gsn-composer Thank you for the response. When I checked the documentation, I found that you can assign the center of the pixel wherever you want, and if you assign pixel_center_integer, it will give you 0.0, 1.0, 2.0 and 3.0. And not only this, but all of the videos you have on GLSL or shaders are top notch!! Thank you for your contribution.
@antontretyakov486 9 months ago
This series is pure gold. Thank you, sir, for providing such great examples!
@theman7050 9 months ago
Finally makes sense after looking around at other tutorial videos and articles. 😊
@pauloviniciuscoral1128 9 months ago
Wow. I'm constantly using noise to do some procedural stuff but I always struggled to understand what was the math behind it. That was super cool. Very clear and paced instructions. Cheers!
@emperorpalpatine6080 10 months ago
Hello! I have a question: how do you deal with aliasing in envmaps that have very bright and small lights, like the sun? Using the split sum approximation, I need an extremely high number of samples to only minimally improve the prefiltered cubemap, and, when using shaders, you quickly reach the point where the GPU crashes and the driver resets. In this situation, I'm about to bake the environment map totally offline on the CPU side. Any advice?
@gsn-composer 10 months ago
In this case, you can perform importance sampling based on the envmap (incoming radiance), not based on the material. I am currently working on the next episode, which will cover exactly this topic. In the meantime, here are some slides: www.mathematik.uni-marburg.de/~thormae/lectures/graphics2/graphics_3_4_eng_web.html The other option is to compute the prefiltered envmap incrementally, doing a few hundred samples per render pass. You can just compute an incremental average: www.mathematik.uni-marburg.de/~thormae/lectures/graphics2/graphics_2_1_eng_web.html#42 This way the "watch dog timer" is not triggered and the GPU driver does not crash/reset.
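A minimal sketch of such an incremental average across render passes (the uniform names and setup are assumptions, not the code from the slides): each pass averages one batch of samples and blends it into the running mean stored in a texture.

```glsl
#version 300 es
precision highp float;

uniform sampler2D prevTex; // running mean after N batches (output of the previous pass)
uniform float batchCount;  // N, number of batches accumulated so far
in vec2 tc;
out vec4 outColor;

void main() {
  vec3 prevMean  = texture(prevTex, tc).rgb;
  vec3 batchMean = vec3(0.0); // placeholder: mean of the current batch of a few hundred samples
  // incremental average: mean_{N+1} = mean_N + (x_{N+1} - mean_N) / (N + 1)
  vec3 newMean = prevMean + (batchMean - prevMean) / (batchCount + 1.0);
  outColor = vec4(newMean, 1.0);
}
```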
@emperorpalpatine6080 10 months ago
@@gsn-composer Thank you so much!
@emperorpalpatine6080 10 months ago
really good video
@yuliana4924 10 months ago
Such a great video! Anyway, could you provide the literature references (i.e. a book or journal article) that explain the Phong model equation in this video? Because I saw the equation in a slightly different form in several papers. Thank you!
@gsn-composer 10 months ago
Yes, there is a slight difference between "Phong shading" (from Phong's original paper) and the "(modified) Phong BRDF". The difference is that the cos(theta) term from the rendering equation is also used for the specular term (e.g., compare the first two rows of the table "The Normalization Zoo" on this website: www.thetenthplanet.de/archives/255 ). The video uses the (modified) Phong BRDF. Here is a technical report: Lafortune and Willems, "Using the modified Phong reflectance model for physically based rendering", www.cs.princeton.edu/courses/archive/fall03/cs526/papers/lafortune94.pdf
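For reference, the (modified) Phong BRDF with the normalization from Lafortune and Willems reads roughly as follows, where $k_d$ and $k_s$ are the diffuse and specular reflectivities, $n$ is the shininess exponent, and $\alpha$ is the angle between the ideal mirror direction and the outgoing direction; in the rendering equation it is then multiplied by the $\cos\theta$ term for both the diffuse and the specular part:

$$f_r(\omega_i, \omega_o) = \frac{k_d}{\pi} + k_s\,\frac{n+2}{2\pi}\,\cos^n\alpha$$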
@yuliana4924 10 months ago
@@gsn-composer Noted, thank you! I'll dive deep into these sources.
@konstantinbondarenko5235 10 months ago
Thank you, wonderful video.
@wewakemartin 10 months ago
Thanks for making these tutorials. Very helpful to visualise what's going on in the background of games and 3D animation software. I tried mapping a texture on a plane object, and the texture stretches and distorts when I increase the number of segments to more than one. Why is this happening?
@gsn-composer 10 months ago
Thank you for your comment. In which application? Blender? GSN Composer?

In the GSN Composer, I have tried to replace the triangle with a plane (3D.Compute.Generate.Plane) in the provided example ( www.gsn-lib.org/index.html#projectName=ShadersMonthly05&graphName=TexturedTriangle ) and it is possible to increase the number of segments without any problems.

In general, you can debug the texture coordinates by assigning them to the output color (see 6:19 min:sec in the video). Or, if you do not have access to the shader, you can simply assign a UV color grid as a texture for debugging. In the GSN Composer, a color grid image can be generated with the ImageProcessing.Compute.Generate.ColorGrid node. For Blender, you can follow this tutorial: kzbin.info/www/bejne/m6LdhZSAedWhmc0

Does this answer your question?
@wewakemartin 10 months ago
@@gsn-composer Hello, thank you for your response. :) I'm using one of the examples in GSN Composer. I have increased the segmentX of a Cube to 2, effectively increasing the number of faces to 10. Then I used the texCoordEdit node to map different areas of the labelled colorGrid texture to different faces of the cube using count. However, the texture is distorting in certain areas. What am I doing wrong? Please advise. GSN Composer Graph: #projectName=CubeMappingTest&graphName=CubeMapping
@gsn-composer 10 months ago
​@@wewakemartin Okay, now I get it. Yes, in the second example, when you increase the segments of the cube, the texture coordinates are messed up by the 3D.Compute.Mesh.TexCoordEdit node. You can either bypass this node by connecting the mesh from the Cube node directly to the Shader node, or you can "reset" the TexCoordEdit node by selecting its "offset" output and deleting the text. Otherwise, the TexCoordEdit node assigns offsets to the initial texture coordinates and they end up at wrong locations that cause this distortion.
@wewakemartin 10 months ago
@@gsn-composer I see, thank you, I'll try that out. I was hoping to create some sort of animated mosaic by sampling random parts of a single texture image onto different segments using shaders... but if they are going to stretch/distort, I will have to figure out another way...
@gsn-composer 10 months ago
@@wewakemartin I created this "mosaic" example for you: www.gsn-lib.org/index.html#projectName=ForumShadersMonthly05&graphName=RandomTextureLocations Distortion can be avoided by using the same similarity transformation for the texture coordinates for all vertices of a quad (or triangle).
@Jarmony 10 months ago
Thanks for sharing, a wonderful course!
@maxlykos940 11 months ago
What's the reason behind choosing that PDF for the GGX normal distribution function? Where do cos(θh) and sin(θh) come from?
@maxlykos940 11 months ago
9:27
@gsn-composer 11 months ago
Theoretically, you can choose it freely. The only constraint is that the integral of the PDF for the entire domain must be one. At 10:03 min:sec in the video, we check that the normalization to one is fulfilled for our particular choice. This is not a coincidence. Walter et al. constructed the GGX normal distribution function in this way, as described in their paper "Microfacet Models for Refraction through Rough Surfaces", section "3.1. Microfacet distribution function"; Equation 4: www.cs.cornell.edu/~srm/publications/EGSR07-btdf.pdf
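As a sketch of where those factors typically come from (this is the standard construction; the notation is mine): the PDF over half-vector directions is chosen as the NDF weighted by the cosine, and the sine only appears as the Jacobian of the solid-angle measure when the PDF is written in spherical coordinates:

$$p(\omega_h) = D(\omega_h)\cos\theta_h, \qquad \int_{\Omega} D(\omega_h)\cos\theta_h\, d\omega_h = 1, \qquad p(\theta_h,\varphi_h) = D(\omega_h)\cos\theta_h\sin\theta_h$$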