This guy is simply the best at what he does on YouTube.
@paap2517 · 3 months ago
Simply the best! 💯
@Bunny99s · 10 months ago
I did most of this stuff 25 years ago, so this video didn't really teach me anything new. However, I have to agree with all the other commenters: this has to be the best and most comprehensive video on rasterization. Most videos skim over it because they target an actual graphics API like OpenGL or DirectX, which does all of that for you. So they usually focus just on the vector and matrix stuff, and many even mix up some terminology (NDC and clip space are what most confuse, along with when the homogeneous divide actually happens). I'm sure many will struggle sitting through this video, as some concepts are explained in quite a tight format. Though I think you really mentioned every little detail that is necessary, even providing some visual and mental hints for certain concepts (barycentric coordinates, for example), which is certainly helpful for many. So I'm really impressed that you managed to pack all this into one video.
@sng6392 · a year ago
I was so sad when the previous video was removed, but now it is back with more stuff! Thank you!!!
@andrew_lim · a year ago
Note that the diagrams at 47:29-47:54 only work for y-up Cartesian coordinates, and only if the vertices are defined and passed to the cross() function in counter-clockwise (CCW) order. They do not work for y-down screen coordinates. However, the edge_cross() function in the C code works because the vertices are passed in clockwise (CW) order, which is correct for y-down screen coordinates, so the w0, w1, w2 >= 0 test works.
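For reference, a minimal C sketch of that sign convention (edge_cross() mirrors the helper from the video's code; the vec2_t type and the is_inside_cw() wrapper are illustrative names, not from the repo):

    typedef struct { int x, y; } vec2_t;

    /* z-component of the 2D cross product of edge (a->b) with (a->p) */
    static int edge_cross(vec2_t a, vec2_t b, vec2_t p) {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    /* y-down screen coordinates, vertices given in clockwise order */
    static int is_inside_cw(vec2_t v0, vec2_t v1, vec2_t v2, vec2_t p) {
        int w0 = edge_cross(v0, v1, p);
        int w1 = edge_cross(v1, v2, p);
        int w2 = edge_cross(v2, v0, p);
        return w0 >= 0 && w1 >= 0 && w2 >= 0;  /* the test flips for CCW / y-up */
    }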
@pikuma · a year ago
Yes! Thank you. My life is a constant tug of war between traditional y-up math notation and formulas that work in screen coordinates with y-down.
@Yuvaraj-pd6ng · 8 months ago
The best explanation of rasterization on YouTube.
@ulysses_grant · a year ago
This is like someone explaining my childhood and adolescence to me, playing games that I could copy onto floppy disks and play on my friends' computers, because I had none back in the day. I'm definitely gonna cry and hug people after this.
@hapboyman · 10 months ago
It was an impeccable performance. I'm so grateful for the motivation you've given me to study.
@mandasartur · a year ago
By far the best video on triangle rasterization I have seen, professionally made and explained. It made me rework, in the middle of the night, my naive and slow hybrid Bresenham/scanline solution into this one based on half-planes, which is in fact theoretically simpler. Fantastic tutoring skills; I will most probably buy the course when I'm done with my current pile of shame.
@pikuma · a year ago
Hahaha. Thanks. PS: We all have our pile of shame. 😅
@braveitor · a year ago
Really good stuff. If my college math teachers had taught these kinds of equations and formulas this way, I'd have loved math eagerly. I understood everything in it, and I hope I can show my son this video when the time to study this subject comes. Thank you, you're a wonderful communicator. :)
@pekka8605 · 5 months ago
Even though I'm only halfway through watching right now, I just had to thank you for this. It's been such a great presentation so far. The length of this video might seem intimidating at first, but it strikes a great balance between not overexplaining everything and not skipping important things.
@paulooliveiracastro · a year ago
I just bought the full course because of this amazing video. I'm very glad I've found it. I was reading the book "Computer Graphics from Scratch" and although they have different approaches to the subject, they go very well together. I hope one day you make a lecture on Raytracing from scratch as well. Thank you :)
@yuriorkis_scream · 7 months ago
Great work! Thank you for the detailed explanation of such an important topic for everyone doing stuff in computer graphics!
@undofix · a year ago
The HUGEST thanks for this absolutely comprehensive tutorial! I've spent a lot of time writing rasterizers and always wanted to, but couldn't, make a gapless rasterizer because of the lack of information on this topic. Your video finally solves the problem! You explained everything as clearly as possible!
@pikuma · a year ago
Thanks for the kind words. It's an extremely fun topic to study. 🙂
@martincohen28 · a year ago
Yaaay! It's back!
@tsumimityan1152 · 3 months ago
What an awesome guy! I'm not a native English speaker, but I can understand him very easily.
@tenthlegionstudios1343 · a year ago
Epic walkthrough! Thanks for linking all the articles as well!
@rafaelsantana9946 · 11 months ago
Thanks Gustavo, your video is being used for my class here at SFSU. Congrats!
@pikuma · 11 months ago
How cool! Which course is it?
@rafaelsantana9946 · 10 months ago
@@pikuma COMPUTER GRAPHICS SYSTEM
@AllanSavolainen · 23 days ago
Hmm, what I did in the good old DOS days was to first split the triangle into two so that in the middle there would be one horizontal edge. In case the top or bottom was already horizontal, this step could be skipped. Then I would calculate the slope per row for the left side, e.g. 0.3 px/row. Then do the same for the right side, let's say 0.4 px/row. Then I would calculate the delta length per row, which was how much each rasterized line would grow per row, and would be the sum of both slopes. Then I would just iterate from the top, fill cur_row_length pixels, move start_x_pos by left_slope, and inc/dec cur_row_length by delta_length. And loop until I got to the middle horizontal edge, or the end vertex of the triangle. To get Gouraud shading on this, I would calculate similar deltas and slopes for the color gradients and inc/dec those per pixel and row as needed. Simple rasterization on an MCGA 320x200x256c DOS screen. Well, there was an extra step to map the current RGB to a palette/dithering color.
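A minimal C sketch of the flat-bottom half of that loop (my own naming; draw_span() is a hypothetical helper that fills one row of pixels):

    typedef struct { float x, y; } vec2f;

    void draw_span(int x_start, int x_end, int y);  /* fills row y from x_start to x_end */

    /* v0 is the top vertex; v1 and v2 sit on the same (flat) bottom row,
       so the divisors below are nonzero. */
    void fill_flat_bottom(vec2f v0, vec2f v1, vec2f v2) {
        float left_slope  = (v1.x - v0.x) / (v1.y - v0.y);  /* px per row */
        float right_slope = (v2.x - v0.x) / (v2.y - v0.y);
        float x_start = v0.x;
        float x_end   = v0.x;
        for (int y = (int)v0.y; y <= (int)v1.y; y++) {
            draw_span((int)x_start, (int)x_end, y);
            x_start += left_slope;    /* the row width changes by the */
            x_end   += right_slope;   /* difference of the two slopes */
        }
    }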
@pikuma · 23 days ago
Yes, splitting the triangle in two and finding the slopes to rasterize each scanline is the approach we use in our course as well. I just thought it was important to also explain this approach, since it can take better advantage of parallelization.
@araarathisyomama787 · a year ago
Instantly subscribed! I have functionally rewritten the PS1 GPU in C, so I felt called out when you mentioned how the PS1 handled rasterization ;). Even with multithreading I still had performance problems on weaker devices like the PS Vita. Calculating barycentric coordinates "the proper way" on every pixel is just out of the budget. This video solved most of my problems with this, though I may add some empty-space skipping later if the profiler says so, but I want to avoid divisions somehow... Speaking of which, at 1:25:49 you could've calculated invArea instead of area at line 68. This way you could replace the divisions at lines 86-88 with multiplications. Now I just have to understand the bit-arithmetic sorcery the DuckStation project has in their `GPU_SW_Backend::ShadePixel` function (GitHub) and maybe I can finally squeeze out the performance I need for this thing... if I can understand it. I recommend checking out some projects with software renderers (especially emulators targeting weaker devices); some of those are literal gems and maybe you'll come up with more video ideas. There is very little good content on YT, and the internet in general, regarding that topic. My two cents. Keep up the great work you do!
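The suggested tweak, sketched with the variable names used in the video (assuming area is nonzero after the usual degenerate-triangle check):

    float inv_area = 1.0f / (float)area;  /* one division per triangle...   */
    float alpha = w0 * inv_area;          /* ...instead of one division per */
    float beta  = w1 * inv_area;          /* weight per pixel               */
    float gamma = w2 * inv_area;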
@pikuma · a year ago
Loved your comment. Thanks for the tips on that division. Divisions really were 'bad hombres' back in the day. :)
@aviator1472 · 3 months ago
You can use gradients for interpolating across the triangle. You calculate them once for the entire triangle, and then you simply increment the interpolant (texture coordinates, color, depth, and so on). You can find this method in Chris Hecker's texture mapping articles and in the thebennybox channel's software 3D renderer series.
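A minimal sketch of the per-scanline form of that idea (my own naming; put_texel() is a hypothetical output helper, and the gradient dudx comes from the attribute's plane equation as derived in Hecker's articles):

    void put_texel(int x, int y, float u);  /* hypothetical: writes the shaded pixel */

    /* u_row is the attribute's value at the left end of row y; dudx is
       its precomputed per-pixel step, constant across the whole triangle. */
    void shade_span(int x0, int x1, int y, float u_row, float dudx) {
        float u = u_row;
        for (int x = x0; x <= x1; x++) {
            put_texel(x, y, u);
            u += dudx;  /* one addition per pixel, no per-pixel weights */
        }
    }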
@osmancanernurdag · 5 months ago
An amazing explanation. Congrats bro :)
@krakulandia · a year ago
You can scan-convert the edges using floating point without issues if you just use an algorithm that ensures connected edges of two triangles are calculated the same way. Then you won't get any black pixels at all. Back in the 486/Pentium days I used to do the edge scan conversion so that both left and right edges were calculated simultaneously, which forced the algorithm to keep track of which was the left/right edge. A month ago I wrote a new polygon filler algorithm and decided that the benefits of doing those things on a modern CPU are minimal. So these days I simply use edge buffers: 2 floats per row --> the left X and right X coordinate of the polygon. Now I can render real polygons instead of just triangles, and the algorithm itself is simpler than if I were drawing triangles only. And the speed is really good, and there are never any overlapping pixels or holes between polygon edges. No biases of any kind are needed.
@pikuma · a year ago
That's interesting! Thanks for taking the time to explain. Now that you mention it, I see many programmers writing engines that work with quads (and polys), and they all mention the same benefits you did. My engines usually work with tris, but I will give this poly approach a try soon. Just one question: do you always keep track of triangles in "pairs" of left-right? How do you reason about their connectivity?
@krakulandia · a year ago
@@pikuma You don't need to keep track of which triangles are connected. You only need to make sure you calculate the edge X coordinates for each line the exact same way for both triangles/polygons that share that edge. The easiest way to do this is to take points P1 and P2. Before you calculate the edge P1-->P2 X coordinates for each line on screen, just sort those vertices (P1 & P2) by their Y coordinates so that P1.Y <= P2.Y.
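A sketch of that rule (illustrative names; edge_x is assumed to hold one X slot per screen row):

    typedef struct { float x, y; } vec2f;

    /* Scan-convert an edge top-to-bottom, always in the same vertex order,
       so both polygons sharing the edge produce identical X values per row. */
    void scan_edge(vec2f p1, vec2f p2, float *edge_x) {
        if (p1.y > p2.y) { vec2f tmp = p1; p1 = p2; p2 = tmp; }  /* P1.Y <= P2.Y */
        int y0 = (int)p1.y, y1 = (int)p2.y;
        if (y0 == y1) return;                /* horizontal edges add no rows */
        float dxdy = (p2.x - p1.x) / (p2.y - p1.y);
        float x = p1.x;
        for (int y = y0; y < y1; y++) {
            edge_x[y] = x;
            x += dxdy;
        }
    }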
@harshguptaxg2774 · a year ago
Awesome video Gustavo sir
@Plrang · a year ago
Great stuff. Took me some time to make it work on Commodore Plus/4, but it was an awesome ride.
@pikuma · a year ago
Oh that's great! Pics please. 🙂
@MissPiggyM976 · a year ago
Very well done, many thanks!
@normaalewoon6740 · a year ago
1:59:20 Taking this a step further, you can offset the rasterization point randomly for every pixel and every frame, as long as it stays inside the pixel area, instead of using only the pixel centers. This turns jagged edges into a noisy approximation of an endlessly supersampled image, if done in real time. Compared to SSAA, MSAA, FXAA, DLAA, and TSR, this could be the cheapest and most detail-preserving way of anti-aliasing in gaming. Blending the current frame with previous frames takes place inside our eyes due to a phenomenon called persistence of vision, which suppresses the noise by a lot, depending of course on the framerate. There is a GitHub project called gaussian anti aliasing that does this. I have implemented it in a ray marching shader and it works really well. Now only the gaming industry needs to pick it up.
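The core change is tiny; a sketch (rand() is used purely for illustration, a real renderer would want a faster, better-distributed sequence):

    #include <stdlib.h>

    /* Pick a random sample position inside pixel (x, y), replacing the
       fixed center (x + 0.5f, y + 0.5f) used in ordinary rasterization. */
    static void jittered_sample(int x, int y, float *sx, float *sy) {
        *sx = (float)x + (float)rand() / ((float)RAND_MAX + 1.0f);
        *sy = (float)y + (float)rand() / ((float)RAND_MAX + 1.0f);
    }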
@pervognsen_bitwise · a year ago
This is called stochastic rasterization in the literature and you are drastically overstating its virtues. Even if it was a perfect solution to jaggies (it isn't but it's a useful tool), the major aliasing problems for the last 15 years in games and real-time graphics are primarily about lighting, shadows and material shading. That's why TAA/TSR has won--it integrates samples over time, so it naturally filters temporal aliasing, and combined with intentional temporal subpixel jittering (similar idea to stochastic rasterization) you can turn spatial aliasing into temporal aliasing and filter that too. And it's a big hammer that can be used to address all the major sources of aliasing, not just jaggies. Jaggies just haven't been top of mind for graphics programmers for a long time and for a good reason (1080p -> 1440p -> 2160p). There's a reason Apple got rid of subpixel antialiased text rendering when they moved to Retina/1440p displays. In games, the shift towards deferred shading made MSAA unavailable/impractical and TAA picked up the slack. The one case where I think MSAA/anti-aliased rasterization is a big win is in VR because of how visible edge aliasing can be with the low relative resolution. But that's a niche. Outside of that, it's too far down the list of aliasing problems to be a major concern.
@normaalewoon6740 · a year ago
@@pervognsen_bitwise Thanks for your comment. The literature on stochastic rasterization looks quite complicated to me, but as far as I can tell, its main focus is multi-pixel blur. Besides the gaussian anti aliasing project, I can't find any literature on random-rasterization anti-aliasing.

Other than that, I think it all comes down to personal preference. Real-time rendering has its limitations, so even with random rasterization points you won't hide aliasing artefacts. That is not the primary goal of it, though, which is showing more accurately what is going on inside the pixel, while preserving the finest detail during movement, without additional cost. Lots of people would rather disable anti-aliasing than use TAA. As would I, but I noticed that jagged edges stand out the most on stills and slow camera movement, especially on grass fields with a high polygon/edge density. Faster camera movement looks a lot better to me, as the jagged edges are pretty much random, as is texture undersampling of course. Random rasterization can emulate this at any time. If the noise gets unbearable because there is too much sub-pixel detail, then this should be addressed with LODs or texture mipmaps. TAA can only blur the noise away, together with a lot of precious detail; after all, TAA doesn't see a difference between the two. DLAA is an improvement, but significantly more expensive and still not as crisp as seeing the picture as it is.

There is also foveated adaptive resolution, which works with deferred rendering. If you have an eye tracker in VR goggles, or goggle mounts without glasses to look at a regular monitor, you can render at a lower resolution in the periphery of vision to improve performance. It also allows for supersampling in the center of vision. This reduces random-rasterization noise to very acceptable levels. Still not noise-free, but lots of people don't care too much, myself included. It's always possible to include reprojection, but the player should have full control of the most recent frame's contribution.

Then there is the problem of effects relying on TAA to look smoother (at the cost of washing out details). These effects often use dither patterns to emulate translucency, mostly by turning off the opacity mask or by pushing pixels forward so that a part of them is hidden behind other objects. I'd rather use a random number generator instead. Without reprojection, noise looks a lot better to me than dither patterns. I have made a swamp water shader with random pixel depth offsets in Unreal Engine. This is not only a noisy approximation of translucency, but it projects volumetric shadows inside the water, and it looks really awesome. It also works properly with cloud shadows, unlike real translucency. Other than that, I tend to disable smooth LOD changing and object blending as soon as I can, as unnecessary and problematic as they are. I really don't mind small changes in geometry and hard edges between objects.

There is an even bigger problem than TAA blur, though: sample-and-hold motion blur, due to the way modern monitors work. Even at 240 Hz with a 1 ms response time or less, you see every frame for 4.2 ms. This produces a significant amount of motion blur during eye tracking, as the picture doesn't track your eye movement in that time. At 120 and 60 Hz it gets even worse. OLED won't save us from this. At this time, only a CRT monitor or a strobing-backlight LCD has a pixel visibility time small enough for sharp movement. This makes TAA blur even more obvious during movement, so I can confidently say that I don't need reprojection anymore. As well as variable refresh rates, which went out overnight, as they aren't compatible with backlight strobing. A constant framerate is always the smoothest and most predictable, unlike multiplying in-game movements by the previous frametime. In order to reach the target framerate all the time, it's quite possible to do runtime view-distance optimizations based on GPU utilization, if you can get it.
@mrtitanhearted3832 · a year ago
That was really awesome and useful!!! 😄👍
@aviator1472 · 3 months ago
Thank you for the video. Actually, there's a more optimized algorithm for interpolation within a triangle: we can do it using gradients. It is more optimized because you don't need to calculate weights*interpolant for every interpolant at every pixel. I understand the idea, but I don't really understand the mathematics of it. You can find this algorithm in the thebennybox software renderer tutorial and in the perspective texture mapping article from Chris Hecker. P.S. Now I understand the mathematics of it. :))
@vitorpmh · a month ago
Congratulations on the video! At the start I was sure you were Brazilian, so I went to check your Twitter and voilà, hahaha. Best video on rasterization on YouTube!!! If you ever want to make a video about Gaussian splatting or NeRFs some day, you can contact me.
@Felipekimst · 11 months ago
But how do you turn the alpha, beta, and gamma for a given point P into UV coordinates?
@@pikuma haha thanks for taking long to reply... I was trying to figure out what you said but I kind of failed, and I didn't want to make you explain it once again haha. But considering another part of my algorithm, I was trying to use that approach with quadrilaterals. Do you think it is possible to use the cross product just like you did to find the correct weights for the 4 vertices? I just need to know if that is possible; if it is, I'll try to figure out how to interpolate them haha, so don't feel obligated to answer that once again ahah
@Felipekimst · 3 months ago
@@pikuma lol I'm really sorry. I just read what I wrote and didn't mean to be rude!!! What I wanted to say was: thanks for replying, and sorry for (me) taking too long to reply.
@Felipekimst · 3 months ago
And I did manage to do it with the information you gave me!!! Thanks🙏
@HTMangaka · 8 months ago
Most of these concepts work on a GPU as well, with a bit of mathematical finagling. My current hobby is coding crazy-efficient GPGPU kernels with CUDA. ^^
@said-rv1er · a year ago
Thanks a lot! So actually we are doing a cross product with 3D vectors (with the z component being 0) and only care about the *sign* of the z component of the resulting vector.
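In code that observation collapses to the familiar scalar helper (a one-line sketch; the vec2f type is illustrative):

    typedef struct { float x, y; } vec2f;

    /* z-component of (a.x, a.y, 0) x (b.x, b.y, 0); its sign tells on
       which side of a the vector b lies. */
    static float cross_z(vec2f a, vec2f b) {
        return a.x * b.y - a.y * b.x;
    }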
@Felipe_f · a year ago
I did something like that. My project is a piece of 3D rendering software. I'm using a version of the scanline algorithm. The program runs, but it's not finished yet.
@jamessouza6927 · a year ago
Sensational!!!!
@johnhajdu4276 · 10 months ago
On the GitHub repo, the integer version of the triangle rasterizer's main.c is wrong: the edge_cross() calls are outside the two for-loops, so it cannot check individual pixels.
@cryptogaming9052 · a year ago
Thanks! I'm a technical artist and this is gold.
@GeorgiyChipunov · 2 months ago
Cool, thanks for the video!
@TheALEXMOTO · 3 months ago
Finally the reptiloids have allowed these ancient technologies to be revealed to the world :)
@KafkaesqueCthulhu · 7 months ago
Hi Gustavo! First of all, thank you so much for the content! I've always dreamed of learning computer graphics and, who knows, maybe working in the field, and from what I can tell from this video, your courses are going to be my starting point. Seriously, thank you! But I have a question; could you help me, please? I got to the fill convention part and, in theory, understood everything. The only thing I'm unsure about is the sign of w0, w1, and w2. Shouldn't they come out negative instead of positive? Following the animation you showed of the cross product growing and shrinking (at 42:42), when the moving vector (let's say it's vector b) is on the left side of vector a (the one standing still), the cross product is positive; if it were on the right side, the cross product would be negative. In the case of the triangle we want to fill, vector b (which would go, say, from point v2 to the point p inside the triangle) is on the right side of vector a (which goes from v2 to v0), so in principle the cross product should be negative, because we are computing a x b, not b x a; the problem is that in the video the opposite happens, hence the confusion. Could you tell me what I'm missing? Again, thank you so much for the content! I know I'll spend my university break taking your 3D course. :)
@KafkaesqueCthulhu · 7 months ago
After finishing a university assignment and a few other little things, I got back to the problem. It's 1:01 AM and I have to be up at 6:00 AM for class, but I finally found the reason! I forgot that y on the screen, unlike in the Cartesian plane, grows from top to bottom. (I worked it out several times using the Cartesian plane and w always came out negative.) Man, what a *silly* detail! Anyway, despite the small headache this problem caused, it's been a while since I was this excited about something. I hope the break comes soon! Big hug! :)
@laminak1173 · 7 months ago
It reminds me of the days of the demomakers in the '90s.
@legeorgelewis3530 · 11 months ago
Something fun you can do is implement this in a compute shader and render normal vertices with textures and all that.
@tylervandermate · a year ago
This is FANTASTIC!!! Thank you! holy moly insta-sub
@demon_hunter9547 · a year ago
At 1:26:00 these are biases added to the w's; doesn't this affect the results of alpha, beta, and gamma?
@pikuma · a year ago
Hm, good question. I'll have to do some proper thinking about this, but off the top of my head I'd say that it respects what we consider to be inside or outside. For example, changing the w's by a bias modifies which points we consider inside or outside. So, when we compute alpha, beta, and gamma, we are computing the barycentric coords for a point that we consider inside the triangle. Again, I'll revisit the code and think about this properly, but that's my initial quick thought.
@demon_hunter9547 · a year ago
@@pikuma Thank you for replying! Maybe texture mapping something using alpha, beta, gamma would make things clearer...?
@giggles8593 · a year ago
Hello sir, I was wondering what font you are using in your text editor?
@paulooliveiracastro · a year ago
@pikuma What's the performance difference between this triangle-filling algorithm and the flat-top/flat-bottom one that you teach in the paid course?
@pikuma · a year ago
Compared to that implementation (the scanline rasterizer), this one is faster! You can easily replace the triangle_fill() function with this one and measure it on your machine. Since it's just a simple addition per pixel, it's better than having to compute the slope and the start/end points per scanline.
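For reference, a sketch of that per-pixel addition (illustrative; only one of the three edge functions is shown, and the deltas follow from the edge_cross() definition for edge v1->v2):

    typedef struct { int x, y; } vec2_t;

    static int edge_cross(vec2_t a, vec2_t b, vec2_t p) {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    /* w = edge_cross(v1, v2, p) is linear in p, so stepping one pixel right
       adds one constant and stepping one row down adds another; the inner
       loop needs no multiplications. A full rasterizer does the same for
       the other two edge functions and tests all three signs. */
    void step_edge_function(vec2_t v1, vec2_t v2, vec2_t p_min,
                            int x_min, int x_max, int y_min, int y_max) {
        int delta_col = v1.y - v2.y;            /* change in w per +1 in x */
        int delta_row = v2.x - v1.x;            /* change in w per +1 in y */
        int w_row = edge_cross(v1, v2, p_min);  /* evaluated once, at the corner */
        for (int y = y_min; y <= y_max; y++) {
            int w = w_row;
            for (int x = x_min; x <= x_max; x++) {
                /* the sign of w (with w1, w2) decides pixel coverage here */
                w += delta_col;
            }
            w_row += delta_row;
        }
    }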
@paulooliveiracastro · a year ago
@@pikuma Any tips on how to optimize this? Maybe I'm not being reasonable, but I was expecting to reach 60 fps with this when drawing a few thousand triangles on screen. In reality I'm achieving ~38 fps (even with backface and frustum culling turned on). I tried skipping a line when going out of the triangle, and I pre-computed the inverse of the area to avoid divisions per pixel, but that only got me so far.
@paulooliveiracastro · a year ago
I just compiled with the -O2 flag and... surprise! 140 fps. Those compiler optimizations are dope.
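For anyone reproducing this, a typical optimized build line for an SDL2 project like this one (a sketch; file names and flags depend on your Makefile):

    gcc -O2 main.c -o renderer `sdl2-config --cflags --libs`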
@lt_henry820 · a year ago
@@paulooliveiracastro Software rasterizers are bound by fill rate rather than triangle count. It will bottleneck if you are using a high resolution and the model is close to the near clip plane. A single triangle filling the screen will strain your CPU more than several thousand far away from the camera in a small 64-pixel square. Knowing this... 140 fps is a lot for a textured Sponza model, but you should achieve 200-300 fps for a small rendering area.
@AllanSavolainen · 23 days ago
Also, I don't think we ever cared about the small overlap; the z-buffer usually handled that.
@nikefootbag · a year ago
I'm a Windows user and not able to compile. The "make" command is not recognized; I have gcc/MinGW installed but can't seem to work out what I need to resolve this. If I just run the gcc command, it complains about sdl2-config: No such file, and about the unrecognized commands '--libs' and '--cflags'. I've been tempted to get your full 3D graphics programming course, but am wondering if it's more comprehensive in the project setup than this video. I've also recently followed another video of yours about setting up SDL in a Visual Studio project on Windows, and I feel like that might be what I'm missing here, but I don't seem to have the experience to combine this video's source code with a Visual Studio project set up with SDL. Your videos are great, no doubt, but any help getting this example running on Windows would be greatly appreciated!
@pikuma · a year ago
I don't have a Windows machine with me, but if I recall correctly MinGW comes with an executable called "mingw32-make", which should behave similarly to GNU make on Linux. But my suggestion would be to simply use Visual Studio. It has not only a better build process (as I show in my SDL+Windows video), but you also get a great debugger with it.
@pikuma · a year ago
Using Visual Studio also means you don't need a Makefile (or make).
@nikefootbag · a year ago
@@pikuma Thanks for the reply! I'll try again with Visual Studio.
@johnhajdu4276 · 10 months ago
At 1:14:27 you are using Greek letters to denote areas, which is misleading. By general math convention, Greek letters are used for angles (degrees or radians).
@pikuma · 10 months ago
What about PI? Or delta? 🤔 I've seen alpha, beta, and gamma being used by one book and I always liked that. Feel free to call them whatever you want though. 👍🙂
@PrecisionzFPS · a year ago
thank you!
@pikuma · a year ago
You're welcome! 🙂
@TheBitProgress · a year ago
Can you compare it to the scanline algorithm? -Is it slower? At first look it should be slower because of the math.- My bad, now I have watched that part of the video. Brilliant stuff! Thank you!
@pikuma · a year ago
🙂👍❤️
@blinded6502 · 2 months ago
The 2D cross product you speak of is really just the 2D wedge product. Unlike the regular cross product, it also works in 3D, 4D, 5D, and so on to infinity.
@pikuma · 2 months ago
@@blinded6502 Exactly. Thanks for writing it down. ❤️
@DiThi · a year ago
There's another solution for this bias besides fixed-point numbers: if all your floats are positive, you can just reinterpret them as integers for the comparison, and the bias can be just -1 like before.
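A sketch of that reinterpretation trick in C (memcpy avoids strict-aliasing problems; the ordering guarantee holds only for non-negative IEEE-754 floats, as the comment says):

    #include <stdint.h>
    #include <string.h>

    /* For non-negative IEEE-754 floats, the raw bit pattern read as a
       signed integer is ordered the same way as the float values, so the
       comparison (and a -1 bias) can be done on the integer view. */
    static int32_t float_bits(float f) {
        int32_t i;
        memcpy(&i, &f, sizeof i);
        return i;
    }
    /* a <= b  <=>  float_bits(a) <= float_bits(b), for a, b >= 0 */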
@GonziHere · a year ago
Building up for your own version of Nanite? :D
@patrickpeer7774 · a year ago
It says "for beginners", but while I did generally understand the visualized concept of rasterizing, I didn't understand the code overview part too well. I think I'm missing certain prior knowledge. Is there a video or course here that is "more for beginners"? 😅
@1u8taheb6 · a year ago
To understand the code you need to be a little bit familiar with programming languages like C. There's no specialist knowledge related to this specific topic of rendering that you need in order to understand this code; it's functionally quite simple. You just need to be more familiar with C-like languages in general and their syntax, and then you'll be able to follow along much more easily. Code always looks more complicated than it is, because all the keywords and boilerplate distract the untrained eye from the actual relevant bits.
@DeafMan1983 · a year ago
Hello, great idea, but I use something similar with "uint32_t inter_color = (a
@vxcute0 · 10 months ago
I did it that way :) It's called type punning: Color color = {r, g, b, a}; DrawPixel(x, y, *((u32*)&color));
@anthonypace5354 · a year ago
I like your vids, but this barycentric approach is actually a bit slow. The better approach really is to get the lines first: find the leftmost and rightmost edge per triangle in 2-slot buckets, using the bounding box so your y buckets can start at 0, and LERP the colour in a scanline approach from the leftmost point to the rightmost point for each row. The barycentric technique, having to calculate the cross product with multiple multiplications per point, is much slower than a properly written Bresenham, which limits its operations to a few branches prior to the main loop, which itself has only 1 branch and a + or - operation; which means finding the edges first can outpace the barycentric approach completely. Also, finding the edges first allows you to skip computing 1/3rd of the edges for successive connected triangles, and lets you know the top-left immediately. In either technique, that is 1/3rd less computation right away if you properly memoize edges to be shared with surrounding triangles and render outward; thus finding the edges first significantly helps, and it helps parallelize too. Not only can multiple triangles be done per thread, but you can break aspects of the triangle down into threads too. If we were talking very large triangles, each line can be sent to a different thread, and so too can bucket comparisons for line segments; and when you have your 2-slot buckets of leftmost and rightmost points figured out, you can segment/subdivide the triangle and have a texture rendered or colours LERPed by multiple threads per division of rows. Both approaches of course benefit from sharing a cache for texture/fill application, and from discarding ranges of triangles that would be covered or backfacing right from the beginning, before doing any rendering at all. What I do like about your vid is that you can extrapolate some of the concepts you teach to other applications, less specific to rendering.
@pikuma · a year ago
Thank you so much for this breakdown. You're correct. If I were creating a software renderer I'd probably approach it from this angle. I guess my idea was to give students an overview of how GPUs see this problem, and barycentric coords currently play a part in the modern pipeline.
@anthonypace5354 · a year ago
@@pikuma Well, I do agree that it is smart to teach what the current pipeline is, and what you are teaching is the common technique out there; yet what is popular is not always the most efficient. Scanline rasterization, finding the edges first, can lead to a giant boost in performance. Segmentation is easy given the constrained boundaries, requiring less work, and it balances very efficiently. But I'm not expecting you to take the word of a rando; I did a Google search and found that work has been done on this, and it's about 2.5x faster than the current popular techniques. E.g. an interesting paper about efficient GPU path rendering using scanline rasterization, by Kun Zhou, came right up.
@pikuma · a year ago
@@anthonypace5354 Great stuff, Anthony. Agreed! 🙂👍
@lt_henry820 · a year ago
@@anthonypace5354 This approach is known as the Pineda algorithm. It is known to have been used, at least, on early 3dfx GPUs. Last decade, Intel tried some sort of CPU-based GPU, and this algorithm was selected instead of Bresenham's. I also implement this algorithm in my rasterizers because it makes side clipping easy (and faster). Isn't the Kun Zhou paper about glyph rasterization on modern GPUs? I find it kind of off-topic.
@ashwithchandra2622 · a year ago
Are you using OpenGL or what?
@pikuma · a year ago
No OpenGL, just a window with a framebuffer of pixels to be painted. The source code is in the description. I use SDL to create the operating system window.
@tocatocastudio · 9 months ago
Are you Brazilian?
@pikuma · 9 months ago
Yes.
@tocatocastudio · 9 months ago
@@pikuma It's really hard to find good computer graphics content in Portuguese, but I found your video and I can understand everything. Congratulations, you're really good at what you do.