All the source code of this series can be downloaded from: iki.fi/bisqwit/jkp/polytut/ It also includes, as patches, fixes for _all_ the bugs I mentioned in this video. *A reminder of what I said at **31:25**: Do not reply to **_this post_** if you want me to see your comment. Post your words as a new comment (the line for that is above), not as a reply, unless you are addressing the contents of **_this comment_** specifically. YouTube does not show creators new replies; it only shows new comments. If you reply here **_I will not see it_** unless I manually check for it.* If you are addressing a comment someone else wrote, then you _should_ reply to it, though. Note: Luxels are also sometimes called _lumels._
@awesomekling4 жыл бұрын
I love how this series is teaching me what all those game settings I’ve tweaked in my life actually mean :)
@sZenji4 жыл бұрын
so you live with vsync?
@MESYETI4 жыл бұрын
Hello SerenityOS person ;)
@Mad30114 жыл бұрын
Ayy, you're here too.
@luckylove724 жыл бұрын
Or maybe you could have read some books on graphics instead of wasting your time on YouTube.
@sharoyveduchi4 жыл бұрын
Andreas Kling and Bisqwit collab when?
@unevenprankster4 жыл бұрын
I think this has truly been the greatest series of videos on the channel so far. No one else has explained these concepts better in video format before, and hopefully this will pave the way for other creators to cover the topic better. Thank you very much. Onto the future we go!
@duuqnd4 жыл бұрын
Oh boy, I'm gonna have to watch this a few times
@ashleyoliver31934 жыл бұрын
Watching those lightmaps get progressively recomputed as you move the lights around is absolutely fascinating.
@logins4 жыл бұрын
I am amazed at how you can take on any topic in programming and just implement it, especially something like lighting in graphics. And after all that, you are also able to explain it! Great work.
@Bisqwit4 жыл бұрын
You have to remember I only do videos about topics I know about.
@vegardertilbake13 жыл бұрын
I love the dry, witty humor you manage to throw in. Truly a master at work!
@kruruneiwyn21074 жыл бұрын
I watch a lot of stuff here on YouTube but nothing on here can or ever will match one of your uploads. Been a subscriber for a while & I don't work with C++ or even anything remotely close to game-related libraries or what have you... But thank you so much for making these videos! Always look forward to watching this stuff when I see your uploads in my feed. Even though I may not work with C++ or graphics libraries, I'll always learn something, which is always good. Tonight this video was accompanied by a pizza, after waking up at ~6pm. Continue being awesome!
@gsuberland4 жыл бұрын
Absolutely loving this series. I know some superficial information about 3D rendering but it's great to see the actual details and mathematics broken down in a practical sense. Much of this topic is often presented in a manner that makes it feel very daunting to even begin, but you've really done it justice here.
@DanielLopez-up6os3 жыл бұрын
Yeah super easy to understand.
@SuperSpeed524 жыл бұрын
Your mental strength to come up with these kinds of solutions is what motivates me to keep going and never think I have learned enough. It's also quite depressing that my university doesn't push our potential toward projects like this; in my context it's either impossible or really, really hard, mathematically speaking, to put it all together in code. You're basically what my companions and I aspire to be; very thankful for this showcase of pure math and code skill. I love your work and the Lufia music you fit in :)
@Je3f0o4 жыл бұрын
Amazing work. I'm amazed by how hard you worked on this video. Many years ago I watched your NES emulator video and it inspired me to learn C++. Now I'm already a fairly decent programmer across a wide range of computer science. A few years ago I tried to create my own 3D game engine using OpenGL. I learned a lot about rendering, some game physics, lighting, etc., but I haven't figured out how to animate characters yet. And for the last few years I haven't written any game-engine code (busy working in JavaScript all the time). But a 3D game engine is still my favorite subject in computer science. It is very challenging and teaches me a lot about math, physics and computer science. That is why I love it so much. By the way, great work again. You are still inspiring me to grow more and work harder like you. :P Thank you.
@therewasblood4 жыл бұрын
Google/YouTube must not know me very well. I saw this channel for the first time after years of watching programming, hacking, and gaming channels, and they didn't even realize THIS is my new favorite channel. Google, get your act together! Friggin' awesome, amazing channel Bisqwit! Subscribed.
@timetravelbeard35883 жыл бұрын
Thank you so much for these videos. Your voice and personality are so comforting and your coding is inspiring. Your solution to lighting here is oddly elegant even though it's processor heavy. Using cameras at every surface is something I wouldn't even consider even though it solves every lighting problem at once. To solve the leaking light between polygons I would add one pixel to the rasterizer x and y loops, as in "for(x=left,x
@frahohen2 жыл бұрын
You are so good that you can explain it simply and keep it interesting. I swear to god I miss people like you on youtube, who take hard topics, break them down into simple, understandable pieces, and present them in a visually amazing way.
@ipaqmaster4 жыл бұрын
I love your videos and they're very entertaining! Especially upgrading from a single rendering thread to full 48-core CPU multithreading in the middle of the video!
@Acuebed4 жыл бұрын
Hey Bisqwit! Just wanted to say thank you for posting all these crazy high quality videos. I'm not in the same domain, or of the same caliber as you at this, but it's motivated me to document myself writing code as well. Best wishes, and cannot wait to see more of your content!
@ricardo.mazeto4 жыл бұрын
1 - You're finally taking the time to explain your code! And you're explaining it well! Big thumbs up for that! 2 - Wouldn't it be faster and better if, instead of putting a camera on every single pixel of the light map, rendering a small image and setting the light accordingly, you checked whether the lines between the center of each pixel and the light sources are intersected by any objects?
@Bisqwit4 жыл бұрын
That would only account for direct lighting, and is essentially the same as raytracing. It would not create indirect lighting. For example, the tunnel near the ceiling (which I apparently did not traverse in this video) which has no light sources, would be pitch-black - which is not realistic. It should still receive _indirect_ (reflected) lighting from walls that are illuminated. Also raytracing towards the center of light sources creates ugly razor sharp shadows, as if the light source is very far away (like the sun) or tiny and directly pointed at the object. You _can_ add indirect lighting by also casting a few hundred rays in random directions (not just towards light sources) and getting whatever pixel color the ray hits - and this is in fact exactly what I did when generating the lightmaps for the OpenGL video - but then you’ve lost any performance advantages over the method I described in this video.
@GoldenThumbs Жыл бұрын
@@Bisqwit To add to this (two year old) answer here, this is also not like some random concept he came up with either... Or maybe it is... But it *is* also an existing technique for generating lightmaps. It's called "radiosity", or "radiosity lightmapping" if you prefer. It's been used in video games for decades at this point. The first game *I* personally know of that used it is Quake (1996), but more recent games such as Half-Life (1998), Half-Life 2 (2004) and Portal (2007) did too (you probably get where I'm going with this, it's part of Source's mapping tools lol... Also Unity (the game engine) used it, in the form of a third-party library called "Enlighten"). There are several papers about this algorithm; one which I see cited a lot but have a hard time tracking down is by Hugo Elias. There's *also* the open source... Library? I guess I'll call it a library... The open source library lightmapper (github.com/ands/lightmapper), which is a single-file C & OpenGL implementation of this effect.
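For anyone curious what the "cast a few hundred rays in random directions" idea mentioned above looks like in practice, here is a minimal C++ sketch. It is not code from the video or from any of the engines named here; the scene query is a stub and the names are made up for illustration.

```cpp
#include <algorithm>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Stub standing in for a real scene query: trace a ray and return the color it hits.
// It returns a constant here only so the sketch compiles on its own.
static Vec3 TraceSceneColor(Vec3 /*origin*/, Vec3 /*direction*/) { return {1, 1, 1}; }

// Estimate indirect light at a surface point by shooting random rays over the
// hemisphere around the normal and averaging whatever each ray hits.
// A single ray toward each light source (plain raycasting) would miss all of this.
static Vec3 GatherIndirectLight(Vec3 point, Vec3 normal, Vec3 tangent, Vec3 bitangent,
                                unsigned nRays = 256)
{
    std::mt19937 rng{12345};
    std::uniform_real_distribution<float> uni(0.f, 1.f);
    Vec3 sum{0, 0, 0};
    for (unsigned i = 0; i < nRays; ++i)
    {
        // Cosine-weighted hemisphere sample: pick a point on the unit disk,
        // then project it up onto the hemisphere (Malley's method).
        float r = std::sqrt(uni(rng)), phi = 6.2831853f * uni(rng);
        float dx = r * std::cos(phi), dy = r * std::sin(phi);
        float dz = std::sqrt(std::max(0.f, 1.f - dx * dx - dy * dy));
        Vec3 dir = tangent * dx + bitangent * dy + normal * dz;
        sum = sum + TraceSceneColor(point, dir);
    }
    return sum * (1.f / nRays); // average incoming color
}
```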
@mateusarruda234 жыл бұрын
You explain things in a way that is easy and pleasant to understand. I hope you continue to do these amazing videos. Congrats!
@henryso44 жыл бұрын
GI is always fun - I've been doing it in "real time" (runs at like 20 fps right now which isn't very acceptable) using a voxel structure - by cone tracing with slightly randomized cones aligned with any diffuse surface's normal. The results are surprisingly decent but I'm still working on optimization and reducing flickering from voxelization
@DoctorGester4 жыл бұрын
You can check out Handmade Hero series for voxel-based GI running in realtime (implemented from scratch)
@chetana98024 жыл бұрын
Can you share a link to your work?
@sirpizza20444 жыл бұрын
The amount of knowledge this man has is insane
@peacefulexistence_4 жыл бұрын
he's like a walking library lol
@TheSenorTuco4 жыл бұрын
Actually your series motivated me to give up on C++17 and start re-learning with C++20
@captainshitbrix72714 жыл бұрын
i love a good graphics lecture/video essay/knowledge explosion from bisqwit
@vitaliyforij5205 Жыл бұрын
Thanks for explaining) I searched for a long time for an explanation of lightmaps) You are amazing)
@3DSage3 жыл бұрын
28:30 I like the 64x64 pixels! It's like the 3D Minecraft I programmed on the GBA.
@PietroNardelli4 жыл бұрын
It is a blast to see how the quality of the videos has improved; I remember when you could not record your screen directly. I am very happy for you. Your content is among the best regarding programming. It is a bit weird hearing you say hello and not shalom. Could you do more videos on obfuscated programming?
@Bisqwit4 жыл бұрын
If I get ideas in that area, maybe.
@HugRunner4 жыл бұрын
So glad you take the time to create these videos and projects. It's truly amazing and very inspiring. Hearing about (real time) global illumination these days, it's really hard not to think about Unreal Engine 5 and the demo they showed there. You talk about your code not being optimized etc. and being CPU intensive and I understand that's not the prime purpose of this series, but it could be really interesting if you could make a video trying to describe what kind of techniques or differences it would take for your project to obtain similar result to the light demoed in UE5. Thanks a lot, no matter if you have time to make something like that!
@Bisqwit4 жыл бұрын
An engine such as UE5 uses a combination of dozens of different techniques to achieve its result. I could not hope to catch up with that. However, I do try to keep doing this series and covering progressively more complex themes. The next thing that I will cover will probably be HDRi, and after that, maybe portal rendering. But before I get there I may need to take a short break and do a less demanding video first so I don’t burn out.
@MissNorington4 жыл бұрын
29:47 "It's a bug in my engine". If you have the book Michael Abrash's Graphics Programming Black Book Special Edition, go to page 1066, chapter 57 Figure 57.1: "Gaps caused by mixing fixed-point and all-integer math". And if you know it is not your polygon edge interpolation math that is causing it, then you might be running into the other problem: Texture sampling, interpolating from outside the texture, or reading outside the texture (clamp texture edge doesn't solve everything). Many games use textures that expand the borders of the polygon, so that when being rendered from far away, will still look okay. Even Michael Abrash admitted in his book that he completely skipped over his own advice of polygon rendering, and had to go back and fix his code in order to solve the problems.
@de_generate4 жыл бұрын
Big fan of the practical light experiment, thanks for the effort!
@ihspan68924 жыл бұрын
I LOVE how deep you go with each topic. I salute you!
@Arti9m4 жыл бұрын
Videos like this motivate me to keep working on my own rather complicated projects. Thank you!
@mikec34124 жыл бұрын
Wonderful video Bisqwit! You have always been one of my favorite programmers to watch. Never afraid to dive deep and explore different ideas.
@petacreepers234 жыл бұрын
Extremely nice video; I'd probably have to watch it again to fully comprehend it, as I am not an expert programmer. It is really good that this content is created, since most "tutorials" are just basic stuff and the really complicated things written in the underlying libraries are rarely explained. Really nice content, again
@SomeRandomPiggo9 ай бұрын
Great video! I finally understand how lightmaps are calculated, even if, as you said, it isn't the most efficient method. I might try calculating them with an FBO on the GPU instead. Thank you very much
@starc0w4 жыл бұрын
Very, very impressive Bisqwit! Keep going! It's very interesting and entertaining!
@matthewconte4 жыл бұрын
[cool music intensifies] Love your videos and dedication, Bisqwit!
@janmroz3483 жыл бұрын
Wow, that's a great video about lightmapping! Some time ago I tried to bake ambient occlusion for 3D models using a very similar technique. I faced the same problem as described at 28:37, because I was baking a 32x32 texture with a single camera shot with a 170-degree FOV (with fisheye remapping), but I solved it by using 8x MSAA on the bake texture plus a lightmap denoise algorithm. Also, the 32x32 texture worked really well, because it fit nicely into my GPU cache, so the mean computation was done on the GPU almost without any cache misses and without using the CPU-GPU bus. With this approach I could bake a high quality 4k AO map and still measure the bake time in seconds, not minutes! From my attempt I've learned that lightmapping is not so much about writing the lightmapper logic, but mainly about fixing small details and fighting for milliseconds in optimization. But I highly recommend it - if you are a graphics programmer, give it a shot, it's a great journey. It's great that someone created such an intuitive explanation, keep it up!
@BRUXXUS4 жыл бұрын
This is absolutely incredible! I've recently gotten into the Demoscene and love seeing what people can do with code. I've also been mapping in Source since HL2 was released. My skills definitely lean more to the actual design and mechanics side rather than coding, although I really wish I knew how to code. Someday I will learn and watching videos like this really inspire me to finally just get started.
@TripleBla4 жыл бұрын
bang on, looks great in 4K .. I want to start creating content in 4K .. all I need is a camera, can't wait to see more of your new 4K content. Keep up the great work!
@ians.23494 жыл бұрын
Thank you for this series, I've loved it. I may try to implement some of the techniques from this series in a C based software renderer sometime in the future.
@terraria99344 жыл бұрын
i love this series so far. keep going dude you are doing amazing.
@az0r223 жыл бұрын
Amazingly well worked video! You are the best bisqwit. They are so nice to watch.
@shire79494 жыл бұрын
Amazing video, thank you Bisqwit as always for such valuable knowledge
@kadiyamsrikar95654 жыл бұрын
Great work please keep doing it
@SimulatedWarfare2 жыл бұрын
This guy is a genius
@田中くま-f1i4 жыл бұрын
From my point of view you are the image of power. I want to be like you in the future.
@GIJOEG364 жыл бұрын
I like the new "intro animation"/"transition animation"
@l3p34 жыл бұрын
Nice video. The shift from voiceover to live footage felt really weird. A good reason for keeping that style in future videos. My understanding: We need to loop over all those luxels over and over again since the processor can only look at one luxel at a time. Over the number of iterations, it gets closer and closer to the ideal value. (Like the man racing the turtle.) We don't have many other choices here. I was reading about analog computing in the last few days and I thought: how could we model this problem so it is solved without atomic stepping and looping? Then I thought about using light in a room and a camera. I feel stupid now, because I basically concluded by building the scene physically and taking a photograph of it. Nice solution Bisqwit, try it out! Perfect and realtime, programmed and offered by the best programmer ever.
@ni.ko38693 жыл бұрын
your video made lightmapping understandable to a rube like me and got me to think of how to apply a lightmapping algorithm myself
@007LvB2 жыл бұрын
You are a great teacher!
@AcmeMeca4 жыл бұрын
nomenclature-wise i prefer to think of it as "pixel" = picture element, and "texel" = texture element.
@jasdeepsinghgrover24704 жыл бұрын
Really amazing video man!!
@LambOfDemyelination4 жыл бұрын
Looking pretty good!
@humanman9514 жыл бұрын
This demo looks fantastic! I guess bounce lighting could be done by repeating this process a couple of times while reading the light map calculated previously. Then all surfaces can become light emitters. Keep up the fab work
@Bisqwit4 жыл бұрын
I’m not sure how what you are describing differs from what I am already doing in this episode. This technique already does radiosity perfectly. That is, surfaces that are only illuminated _indirectly_ by other walls that are lit.
@humanman9514 жыл бұрын
Bisqwit ah, so it will eventually converge on a total light level or will it continue to get brighter forever? Given each quad will get more and more light each iteration.
@Bisqwit4 жыл бұрын
It converges on the total light level. The total sum of light reflected by all walls can never exceed the brightness of the lightsource times its surface area, or something to that effect. One particular factor that makes this true is how the weightmap in lightmap rendering is normalized to 1. That is, unless you get full brightness of the light on _every possible pixel_ in the lightmap camera view, the brightness on the wall will always be less than the brightness of the light source. If even _one_ of those pixels does not see the light source, or sees just its reflected light from a wall (that is already dimmed), the luxel will be dimmed too.
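A minimal sketch of the normalization described above, assuming (for illustration only; this is not the video's actual code) a single 90-degree camera face looking along the surface normal. Because the weights are scaled to sum to 1, the luxel is a weighted average of what the camera sees and can never exceed the brightest pixel in that view.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Build a weight table for one W*H camera face (90-degree FOV assumed).
// Each pixel is weighted by Lambert's cosine term and by the solid angle it
// covers, and the whole table is then scaled so the weights sum to exactly 1.
std::vector<float> MakeWeightMap(int W, int H)
{
    std::vector<float> weight(std::size_t(W) * H);
    float total = 0;
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
        {
            // Direction through the pixel center on the z = 1 image plane.
            float u = (x + 0.5f) / W * 2 - 1, v = (y + 0.5f) / H * 2 - 1;
            float len2 = u * u + v * v + 1;
            float cosTheta = 1.f / std::sqrt(len2);    // angle to the surface normal
            float solidAngle = cosTheta / len2;        // pixel coverage (up to a constant)
            weight[std::size_t(y) * W + x] = cosTheta * solidAngle;
            total += cosTheta * solidAngle;
        }
    for (float& w : weight) w /= total;                // normalize: weights sum to 1
    return weight;
}

// A luxel is then a weighted average of the rendered pixels; since the weights
// sum to 1, it can never be brighter than the brightest pixel the camera saw.
float Luxel(const std::vector<float>& pixels, const std::vector<float>& weights)
{
    float sum = 0;
    for (std::size_t i = 0; i < pixels.size(); ++i) sum += pixels[i] * weights[i];
    return sum;
}
```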
@carlospaz85644 жыл бұрын
Bisqwit, I always watch your videos to get inspired!! :)
@thefastjojo4 жыл бұрын
Amazing video like always, thank you Bisqwit, take care of yourself man
@Mautar554 жыл бұрын
one of the most beautiful tutorials!
@gustavoandrade584 жыл бұрын
your smartness is frightening
@Hatsune_Miku2 жыл бұрын
I love Bisqwits!
@nosuchthing84 жыл бұрын
One million thumbs up
@TheBlackMusicBox4 жыл бұрын
This is gold!
@yigitpolat4 жыл бұрын
hello bisqwit thanks for the beautiful content! a normal map is not the same thing as a bump map. a bump map encodes bumps as a map in which each pixel value corresponds to the elevation at that particular location.
@Bisqwit4 жыл бұрын
I see.
@abcxyz58064 жыл бұрын
These names are a complete mess. These elevation maps are also often called displacement maps and I have seen normal maps referred to as bump maps
@emperorpalpatine60803 жыл бұрын
yeah, the naming convention is a mess... I always considered the bump map to be a normal map, whereas I know the elevation map to be the height map instead. I think there's a way to compute a normal map from a height map, but not the other way around though.
@yigitpolat3 жыл бұрын
@@emperorpalpatine6080 you can, you would lose precision though.
@MrDavibu3 жыл бұрын
This isn't true. Bump maps and normal maps are the same thing. What you are referring to are height maps or displacement maps. Normal maps create the illusion of bumps, thus the name. I just read the Wikipedia article and they refer to bump mapping as a category of texture mapping, but I personally never read this in actual CG literature; and even then they explicitly say bump mapping doesn't change the geometry of the object. So height maps still wouldn't be considered bump mapping either way.
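Since computing a normal map from a height map came up above, here is a minimal sketch of the usual central-difference approach (a generic illustration with made-up names, not code from the video; heights are assumed to be grayscale values in 0..1). Going the other way, from normals back to heights, requires integrating the slopes and loses information, which is why it is rarely done.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Normal { float x, y, z; };

// Derive a tangent-space normal map from a grayscale height map using central
// differences. 'strength' controls how pronounced the bumps appear.
std::vector<Normal> HeightToNormals(const std::vector<float>& height,
                                    int W, int H, float strength = 2.0f)
{
    auto at = [&](int x, int y) {                       // clamp lookups at the edges
        x = std::clamp(x, 0, W - 1);
        y = std::clamp(y, 0, H - 1);
        return height[std::size_t(y) * W + x];
    };
    std::vector<Normal> normals(std::size_t(W) * H);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
        {
            float dx = (at(x + 1, y) - at(x - 1, y)) * strength;  // slope along x
            float dy = (at(x, y + 1) - at(x, y - 1)) * strength;  // slope along y
            float len = std::sqrt(dx * dx + dy * dy + 1.f);
            normals[std::size_t(y) * W + x] = { -dx / len, -dy / len, 1.f / len };
        }
    return normals;
}
```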
@MKVideoful4 жыл бұрын
Every programmer should: create a compiler, a 3D graphics renderer, a voice synthesiser and an AI. ... Then the programmer will probably ascend to another dimension. =D
@islilyyagirl4 жыл бұрын
i'm currently coding a CNN
@SuperSpeed524 жыл бұрын
I have only done the compiler part; my university won't even touch anything related to graphics or voice synthesisers, and AI only comes at the very end of the degree. Guess I will just have to ascend to another dimension in satanic ways
@peacefulexistence_4 жыл бұрын
add to it an MPM simulator and an OS, the list is too short lol
@mariobrother18024 жыл бұрын
Don't forget emulators!
@greatsaid52714 жыл бұрын
as always great, thank you
@RahulJain-wr6kx4 жыл бұрын
Awesome series. I learnt some good things from this. Do you plan to explain color spaces like sRGB and the conversions as well? It could be a good follow-up 😀
@alexpaww4 жыл бұрын
Bisqwit, you can use RenderDoc to change the OpenGL state in applications. It's normally used to debug things, but you can ofc also change things like texture filtering on a per-texture basis
@Bisqwit4 жыл бұрын
Thank you. It won’t help this video anymore, but I will keep that in mind, and study how to use it.
@alexpaww4 жыл бұрын
@@Bisqwit Yea, I just wanted you to know it exists. It's a nice tool to have in ones toolbox.
@Centurion2564 жыл бұрын
Great video, as always, Bisqwit. On a side note, how familiar are you with the topics of memory consistency and lock-free programming? I find them quite intriguing, however, there doesn't seem to be nearly enough high quality content on these topics, especially lock-free programming, and I don't feel qualified enough to produce any myself. In case that you are familiar with them, would you perhaps consider making a brief video series about this sometime in the future?
@Bisqwit4 жыл бұрын
Not very familiar to be honest. I study when I need something, and I haven’t much needed to delve into complex thread-safety topics. The whole c++20 memory_order thing is still an unexplored land to me, for instance. But in case I do get intimate with the topic, it may make it into a new video some day.
@taza994 жыл бұрын
Looks good. It's heavy to run, of course, but it looks good. The texture explanations at the beginning were also easy to follow; I knew what they meant beforehand, of course, but they were well explained, and surely others can make sense of them too.
@Bisqwit4 жыл бұрын
Thanks. Actually, half of the performance is consumed by that gamma correction alone. pow() is not an efficient function by any means…
@octopustophat33974 жыл бұрын
This is basically the coolest programming video I've ever seen. Thanks for putting so much work into this! I just have one question. If the calculations are continually being done on each wall, how do they not just keep getting brighter until everything is white? Wouldn't there always be more brightness accumulating every time it renders the views?
@octopustophat33974 жыл бұрын
@UCKTehwyGCKF-b2wo0RKwrcg Ah okay, I was under the impression each luxel is repeatedly being added to, as it loops. That makes more sense. Thanks!
@Bisqwit4 жыл бұрын
Looks like YouTube glitched there; my comment disappeared as soon as I posted it. You were able to reply nonetheless, although the @-tag glitched too. For posterity, I said the lightmap calculation replaces luxels rather than adding to them constantly.
@EnriquePage91 Жыл бұрын
Mr. Bisqwit also known as “Render Daddy” 😎
@szymoniak754 жыл бұрын
20:17 lol, actually using gamma symbol in code. looks strange
@Bisqwit4 жыл бұрын
Yeah, C++ allows plenty of Unicode characters in identifiers. en.cppreference.com/w/cpp/language/identifiers#Unicode_characters_in_identifiers As does C since C99. This page does not mention it, but the feature was introduced in C++11. However, compiler support has been incomplete for a long time. Only as recently as GCC 10 was support added for those symbols presented verbatim in UTF-8 encoding, rather than having to type them as escapes, such as \u03B3 for γ.
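For illustration, a tiny hypothetical snippet of what such an identifier looks like in use (not taken from the video's source; it needs a UTF-8 encoded source file and, as noted above, GCC 10 or newer if you want to write the character verbatim):

```cpp
#include <cmath>

constexpr float γ = 2.2f;            // the Greek letter gamma, used as an identifier

// Simplified pure power-law gamma encoding (not the exact sRGB curve).
float LinearToGamma(float linear)
{
    return std::pow(linear, 1.0f / γ);
}
```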
@Legnog8222 жыл бұрын
love the accent!
@alexandru-florinene41734 жыл бұрын
What font are you using for the editor? (I've always wondered.)
@Bisqwit4 жыл бұрын
The editor does not deal with fonts at all. It’s a terminal program. It only deals with inputs and outputs. Visual representation is entirely the terminal’s job. Within the terminal, various fonts are used at different times.
@Bisqwit4 жыл бұрын
Followup: Answered in kzbin.info/www/bejne/q3q3oYFjhL-Wq9E
@kojoig4 жыл бұрын
Amazing!
@arrangemonk4 жыл бұрын
antibitangent! also how comfortable are shiny spandex longsleeves?
@Bisqwit4 жыл бұрын
Pretty nice. Not ideal for hot weather though.
@zerdagdir19884 жыл бұрын
would this be faster if you used frustum culling?
@Bisqwit4 жыл бұрын
Already done. kzbin.info/www/bejne/nqmyqJKmZdB_nKsm41s A significant loss of performance actually happens in the gamma correction. pow() is a rather slow function, and calling it three times for every pixel at 1280x720 is not exactly efficient.
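One common way around that cost is to precompute the gamma curve into a lookup table once and index it per pixel. This is a generic sketch, not a change taken from the video's code; it assumes 8-bit output and a pure power-law curve.

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <cstdint>

// Precompute linear -> gamma-corrected 8-bit values once; per-pixel gamma
// correction then becomes one table lookup per channel instead of a pow() call.
struct GammaLUT
{
    std::array<std::uint8_t, 4096> table{};

    explicit GammaLUT(float gamma = 2.2f)
    {
        for (std::size_t i = 0; i < table.size(); ++i)
        {
            float linear = float(i) / float(table.size() - 1);                // 0..1
            table[i] = std::uint8_t(255.f * std::pow(linear, 1.f / gamma) + 0.5f);
        }
    }

    std::uint8_t operator()(float linear) const
    {
        linear = linear < 0.f ? 0.f : (linear > 1.f ? 1.f : linear);          // clamp
        return table[std::size_t(linear * float(table.size() - 1) + 0.5f)];
    }
};
```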
@franciscoaguilar82344 жыл бұрын
Bisqwit, your content is amazing, however, your handwriting skills using just a mouse are overwhelming!
@rlenclub4 жыл бұрын
Bisqwit: we are going to write a graphics engine with global illumination and raytracing
me in Unity: well it only took 5 hours to figure out how delegates work
@therealdutchidiot4 жыл бұрын
The bug you ran into sounds very much like a note in the Quake III engine, where as a fix some vertices are drawn next to each other to prevent seams from showing.
@LittleRainGames4 жыл бұрын
Why not use true color × intensity? And rays instead of cameras? Wouldn't it be cheaper to just send a ray from each texel to each light, instead of a camera in 5 directions? Love your work btw; I just found out a few weeks ago that you also had a big part in SNES development, and I just started delving into that.
@Bisqwit4 жыл бұрын
You may have to elaborate a little on your proposal. EDIT: As for rays, that would only account for direct lighting, and is essentially the same as raytracing. It would not create indirect lighting. For example, the tunnel near the ceiling (which I apparently did not traverse in this video) would be pitch-black, because none of the light sources are directly visible from it. It should still receive indirect (reflected) lighting from walls that are illuminated. You can add indirect lighting by also doing a couple hundred lines in random directions (not just towards light sources) and getting whatever pixel color the ray hits - and this is in fact exactly what I did when generating the lightmaps for the OpenGL video - but then you’ve lost any performance advantages over the method I described in this video.
@skaruts3 жыл бұрын
One good thing about the Source Engine (I suppose the same can be said of GoldSrc and the Quake engine) was that you could specify the lightmap resolution for each surface separately while editing the maps. I don't see this possibility in Godot, and I suspect it's not in Unity or Unreal either, though I could be wrong about the latter two. Basically, a HL2 map defaulted to low resolution lightmaps all around, and you specified the surfaces where higher resolutions were needed.
@sznio4 жыл бұрын
Could you do something like an edge-detect on the lightmap to find areas with streaking and decide to increase the camera resolution for them? This could also be used for optimization: run a bad low quality render, if that render is completely uniform then most likely increasing the resolution won't add more detail, if the render is noisy/streaky then discard the result and increase the resolution. This also would increase resolution around shadow boundaries, and reduce resolution where it's not as needed.
@Bisqwit4 жыл бұрын
It would need a custom storage format for bitmaps that have varying resolution in various parts of the bitmap. I don’t know any approach to do that efficiently, neither in writing nor in reading.
@szymoniak754 жыл бұрын
are you planning on making a video where you would rewrite the code to use the GPU, using CUDA for example?
@Bisqwit4 жыл бұрын
It is not in plans at the moment.
@PankajDoharey3 жыл бұрын
Genius!
@tiagotiagot3 жыл бұрын
For the fisheye light-probe for diffuse lighting, what if you do it with rectilinear projection, but temporarily apply a distortion factor to the coordinates of the vertexes just for the light-probes, matching the approximate look of the true fisheye rendering?
@Bleenderhead4 жыл бұрын
If you wanted to get real time dynamic lighting, rather than constantly running what amounts to path tracing in a background thread, you could use the hemisphere cameras to precompute the radiosity form factor matrix, which encodes the (cosine-weighted) visibility from every element to every other element. Then computing the global illumination amounts to solving a sparse linear system, and the form factor matrix does not need to be recomputed if you change the emission of various surfaces. It wouldn't handle moving lights, though. So I guess it's only dynamic with respect to which surfaces are glowing.
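For reference, the linear system mentioned above is the classic radiosity equation B = E + ρ F B. A minimal Jacobi-style iteration over a dense form-factor matrix might look like the sketch below (hypothetical names, illustrative only; real implementations keep F sparse, as the comment notes).

```cpp
#include <cstddef>
#include <vector>

// Solve B = E + rho * F * B by fixed-point iteration.
// B: radiosity per patch, E: emission, rho: reflectivity,
// F[i][j]: form factor from patch i to patch j (fraction of light leaving i
// that arrives at j). Rows of F sum to at most 1 and rho < 1, so this converges.
std::vector<float> SolveRadiosity(const std::vector<float>& E,
                                  const std::vector<float>& rho,
                                  const std::vector<std::vector<float>>& F,
                                  int iterations = 50)
{
    std::size_t n = E.size();
    std::vector<float> B = E, next(n);
    for (int it = 0; it < iterations; ++it)
    {
        for (std::size_t i = 0; i < n; ++i)
        {
            float gathered = 0;
            for (std::size_t j = 0; j < n; ++j) gathered += F[i][j] * B[j];
            next[i] = E[i] + rho[i] * gathered;   // emission + reflected light
        }
        B.swap(next);
    }
    return B;
}
```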
@suarezvictor772 жыл бұрын
Awesome series! I'd like to know why triangles aren't rendered with antialiasing, particularly prefiltering antialiasing to avoid the high overhead of supersampling
@Bisqwit2 жыл бұрын
Antialias is difficult to implement because it involves transparent pixels (reading what’s underneath and modifying the pixel such that its new color is something between the old color and new color), and transparency is sensitive to rendering order. For example, suppose that there is a red polygon and a blue polygon that share an edge, and first the red polygon is drawn. Its edge pixels are a mixture of black (background) and red, i.e. darker shades of red. Then the blue polygon is drawn. Its edge pixels are a mixture of those dark-red pixels and blue pixels, even though they should be a mixture of red and blue. This effectively means that the black leaks through. If the polygons are drawn in opposite order, then the edge pixels would be a mixture of red and dark-blue. Different result, but still wrong. It is difficult to avoid this problem. Additionally, antialias requires drawing more pixels. An aliased line from (1,1) to (2,2) would be two pixels. An antialiased line would be four pixels: a square with bright pixels in two corners and dark pixels in other corners. The mathematics of drawing antialiased polygons are heavy: One needs to calculate the bounding box of the triangle with rounding up and down for all corners, and the blending proportion of color for every edge pixel and its neighbor and perform the blending (read-modify-write) for each of those edge pixels. Supersampling, such as drawing the entire screen at 2x size, and then downscaling, is a mathematically simple way to solve all these problems.
@suarezvictor772 жыл бұрын
@@Bisqwit I agree with and appreciate your explanation. I'm trying to write some rasterizing code to run on an FPGA and I plan to sort out those problems. One possible solution I see to the blue+red polygon case is to use a 4th byte to store alpha for each polygon's pixels and then blend colors considering the alpha generated from each polygon; this should solve the mixing with black you explained, since the blending is not done first. I plan to use some of your really nice code to test that. Hopefully there's interest in improving the rendering and avoiding supersampling
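A minimal sketch of the supersampling approach described above: render at twice the target width and height, then average each 2x2 block down to one output pixel. This is a generic illustration, not the video's code.

```cpp
#include <cstddef>
#include <vector>

struct Color { float r, g, b; };

// 2x2 box-filter downscale. The scene is rendered at 2x size into 'big'
// (dimensions outW*2 by outH*2); each output pixel is the average of its
// four source pixels, which sidesteps the draw-order problems of blending
// antialiased polygon edges directly.
std::vector<Color> Downscale2x(const std::vector<Color>& big, int outW, int outH)
{
    std::vector<Color> out(std::size_t(outW) * outH);
    int bigW = outW * 2;
    for (int y = 0; y < outH; ++y)
        for (int x = 0; x < outW; ++x)
        {
            Color sum{0, 0, 0};
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx)
                {
                    const Color& c = big[std::size_t(y * 2 + dy) * bigW + (x * 2 + dx)];
                    sum.r += c.r; sum.g += c.g; sum.b += c.b;
                }
            out[std::size_t(y) * outW + x] = { sum.r / 4, sum.g / 4, sum.b / 4 };
        }
    return out;
}
```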
@Jonnio4 жыл бұрын
Awesome, makes me wanna do some lighting stuff too.
@KishoreG23964 жыл бұрын
I am planning on learning OpenGL, but I have some questions regarding the mathematical background I would need for this. Are linear algebra and differential geometry needed to work with OpenGL? Or can I get by with just knowledge of basic 3D vector mathematics (cross products, etc.)?
@pigworts24 жыл бұрын
Basic linear algebra is helpful (for projection and transformation matrices etc) but differential geometry is almost never required (there is some advanced stuff that uses it but not that much). A good grasp on thinking about coordinate systems is all you really need.
@KishoreG23964 жыл бұрын
@@pigworts2 Thanks. Do you have any examples where you might need differential geometry?
@pigworts24 жыл бұрын
@@KishoreG2396 non-Euclidean rendering is probably the most common example.
@theboy1814 жыл бұрын
Still love your voice, and your videos!
@95vinoth4 жыл бұрын
How many years of learning does one need to achieve this level of knowledge?
@josedejesuslopezdiaz4 жыл бұрын
you can learn it; you may need to invest more time in the right things, but you can do it if you really want to
@Bisqwit4 жыл бұрын
As I’ve written before, IQ has nothing to do with it. Different people just have brains working differently, with talent for different things. For example, I am _very_ dumb when it comes to learning by observing and repeating. I am a dance teacher, but unlike most of my pupils, _I_ cannot learn dances by repeating what others are doing. If there are no explanatory words involved, in most cases I cannot learn it. I have to process it in words, even if just in my mind, to learn it. Another example is that I cannot throw a ball very far. It perplexed me to no end when I was a child how my peers could throw a snowball to the topmost floors of a six-floor apartment building, while I could hardly make it reach the second one. I never figured out the trick. Yes, I know the theory of assisting the motion with your whole upper body. Nope, not getting it.
@95vinoth4 жыл бұрын
@@Bisqwit Thanks for the reply. I really admire your work and knowledge.
@birdbrid93914 жыл бұрын
@@Ljosi blunt but correct
@stickfigure424 жыл бұрын
@@Ljosi IQ is widely recognized as basically worthless at predicting anything at all besides how good you are at taking IQ tests.
@lusiaa_4 жыл бұрын
I think that at 22:48 you wanted to implement something like a probability density function. I used PDFs (like the Lambertian distribution for completely matte surfaces or the GGX distribution for rough/glossy surfaces) in my path tracer so I could "weigh" how important a ray would be in a calculation. I might be completely wrong about this though, so please take it with a grain of salt; it might be completely irrelevant to your project 😅
@the-guy-beyond-the-socket6 ай бұрын
Hi! What is the filter for the lightmap? The way it overlays on the texture and changes the color. Is it the same as "soft light" in Photoshop?
@Bisqwit6 ай бұрын
Multiplication.
@the-guy-beyond-the-socket6 ай бұрын
@@Bisqwit oh ok, thanks!
@seesoftware3 жыл бұрын
I am wondering how you keep the lightmap thread-safe with the main loop? I was looking at the code but I couldn't find any synchronization; I might be missing something...
@Bisqwit3 жыл бұрын
The only aspect of thread-safety demanded by this code is that writing a float value to a memory address is atomic, i.e. another thread that simultaneously reads the memory address will receive either the previous value or the new value, but never a bitwise mishmash of those two, and never a bus error or some other fatal signal. To my knowledge, this is true on the x86_64 platform.
@seesoftware3 жыл бұрын
@@Bisqwit (you might not get this reply because of youtube) I understand, but just FYI and for people stumbling on this post, I would suggest using a std::atomic with std::memory_order_relaxed, which would most likely boil down to a single move instruction on x86_64, but would also show better intent and be safe on any platform.
@Bisqwit3 жыл бұрын
Fair point, but it would also make the code rather cumbersome, because the canvas is an array of pixels and the pixel is a structure that contains several floats. Any time you want to read or write the canvas you would need to cast/convert between structs containing floats and structs containing atomic floats. I realize it’s a correctness thing, but there are times where I sacrifice pedantic correctness, if it would get way too verbose. I think such a verbose and intrusive change would also compromise the “intent” aspect. After all, in this situation, the intent and focus is in the calculations and rendering, not in the multithreading which only serves as lubrication.
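For completeness, the std::atomic variant suggested above would look roughly like this hypothetical fragment (not the project's actual code). On x86_64 a relaxed store or load of a float compiles down to an ordinary move, so it mainly buys portability and documented intent.

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// One lightmap channel shared between the baking thread and the render loop.
// Relaxed ordering suffices: a reader only needs to see a complete float value
// (old or new), not any particular ordering relative to other writes.
std::vector<std::atomic<float>> lightmap(64 * 64);

void WriteLuxel(std::size_t index, float value)
{
    lightmap[index].store(value, std::memory_order_relaxed);
}

float ReadLuxel(std::size_t index)
{
    return lightmap[index].load(std::memory_order_relaxed);
}
```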
@destinydonut4 жыл бұрын
Can you try out the Vulkan API?
@Bisqwit4 жыл бұрын
It is a frequent request, but _so far_ I have been putting it off, because Vulkan is an epitome of boilerplate. You need like 200 lines of code to do even the equivalent of “hello world”. It is _extremely_ dull reading, and doesn’t have ingredients for a good video in my opinion.
@razielsunlimited3 жыл бұрын
Hello! What is the music you use here? Thanks and awesome videos!
@Bisqwit3 жыл бұрын
The music is listed in the video description.
@razielsunlimited3 жыл бұрын
@@Bisqwit thanks master!
@paark49804 жыл бұрын
I'm a novice in programming and I found your channel a few days ago. And I have 2 questions: which Linux distribution do you use, and why did you choose Joe as a text editor instead of Emacs or Vim or even an IDE?
@Bisqwit4 жыл бұрын
The latter question is answered in kzbin.info/www/bejne/kH6lgqCehJ1-p6s, and the first one in kzbin.info/www/bejne/opSViKmeeMusqJY (the description of it anyway).
@Crossbow1232 жыл бұрын
Most lightmappers that are used in production use some sort of raytracing to render the lightmaps (I used path tracing for mine). However, the camera approach seems really simple. Is there any reason this is not more widely adopted?
@Bisqwit2 жыл бұрын
The camera approach actually _is_ raytracing. It just does it for multiple rays simultaneously. In traditional raytracing, the bounces for single rays are traced and the data is collected from those bounces. The multiple rounds of rendering using the already-calculated lightmaps, which are then progressively improved, achieves the same effect - converges towards the same result, and nearly all game engines that do prebaked lightmaps do exactly that; for example the Source engine.
@Crossbow1232 жыл бұрын
@@Bisqwit Thank you for your quick reply. Although I'm not sure that many engines do the camera approach. For example, both Unity and Frostbite from EA (there are GDC slides from 2013) use path tracing. The old Enlighten lightmapper that Unity still includes uses radiosity to solve for GI. The UberBaking system from Activision uses raytracing by means of shooting rays individually. And it seems that Lightmass is also using raytracing directly. The camera approach might have the same output as raytracing, but really it is just rasterizing the scene. And rendering the scene using raytracing through a camera seems like a huge waste of performance.
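To make the "multiple rounds of rendering" idea concrete, the outer loop amounts to something like the sketch below (hypothetical names, not the project's actual code). 'renderLuxel' stands in for rendering the hemisphere view from a luxel's position, lit with the previous pass's lightmaps, and returning the normalized weighted average; each pass therefore propagates light by one more bounce.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Progressive lightmap refinement: every pass recomputes all luxels from the
// previous pass's maps and replaces them (no accumulation), so the result
// converges toward the global-illumination solution one bounce per pass.
void BakeLightmaps(std::vector<float>& lightmaps, int passes,
                   const std::function<float(std::size_t,
                                             const std::vector<float>&)>& renderLuxel)
{
    for (int pass = 0; pass < passes; ++pass)
    {
        std::vector<float> next(lightmaps.size());
        for (std::size_t i = 0; i < lightmaps.size(); ++i)
            next[i] = renderLuxel(i, lightmaps);   // lit by the previous pass's maps
        lightmaps.swap(next);                      // replace, then iterate again
    }
}
```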
@HA7DN4 жыл бұрын
Damn I love this series. And you just mentioned a raytracing one... I won't watch it for now, at least not before I try doing that on my own. Have you tried doing electronics? I can imagine you having lots of fun with digital electronics, ESPECIALLY FPGA stuff...
@Bisqwit4 жыл бұрын
I have electronics education from vocational school, and I deal with embedded programming for my work, but I haven’t really done much with electronics. This was maybe the most complex electronics project I have done. kzbin.info/www/bejne/fIq7g35rhZWkgJY It is a NES music player running on a PIC16F628A, which has 128 bytes of EEPROM memory, 224 bytes of RAM, and 3.5 kilobytes of program flash. It has no signal generator hardware suitable for this purpose, so the program generates the audio as PCM. I also wrote an emulator for it. kzbin.info/www/bejne/hmmVi5lpZs-ihs0 I have never done FPGA stuff. I would probably just need some getting-started material, but aside from reading through the entire VHDL specification in 1996 or so and skimming through a couple of VHDL/Verilog source codes in the years, I have absolutely zero experience about FPGA programming.
@HA7DN4 жыл бұрын
@@Bisqwit Wow! Impressive project as always the case with you. Electronics is very fun!
@voxbine40054 жыл бұрын
So difficult for my brain but very spectacular for my eyes
@Mad30114 жыл бұрын
Will you also implement path-traced lightmap generation?
@Bisqwit4 жыл бұрын
It is not currently under plans, as I have never tried that method before. If I ever do try it, I might feature it together with Intel Open Image Denoise.
@Mad30114 жыл бұрын
@@Bisqwit Sounds good
@abcxyz58064 жыл бұрын
I wonder how a path tracer would compare to the perspective projections in this video. One could then increase the probability of shooting a ray to a light source, which could solve the problem with too small light sources.
@Mad30114 жыл бұрын
@@abcxyz5806 Unity3D uses path tracing for static lightmap generation. I believe other engines do too. The algorithm shown in this video is the classic radiosity method that had its video game debut in Quake 1(?), I think. Path tracing is the more generic method but I bet it's also slower; apart from solving the perspective problem it also allows taking light reflected from specular surfaces into account, i.e. caustics. The problem with path tracing is the noise in the generated image, hence the recent efforts to create powerful denoisers.
@Bisqwit4 жыл бұрын
It would solve the problem with too small light sources, yes. However, you also don’t want your nice sphere light sources to turn into point light sources either, or you will get nasty sharp shadows. Essentially you would need to predict where the most significant concentrations of light are, but simultaneously also counter the bias when averaging the light.