Dithering is still used in printing. In fact, modern-day printers use a clever combination of computed dithering (like Floyd-Steinberg) and seamless tiling (like Penrose tiles).
@SerBallister 3 years ago
Cheap LCD panels too
@Smittel 3 years ago
^ As SerBallister says, some cheap panels use 6-bit color, exploiting the fact that we're less receptive to high-frequency spatial color changes. CG graphics also sometimes use dithering to reduce banding, which can be noticeable in dark gradients even at 8 bits; it looks a bit like the quantized image, just not as exaggerated. Dithering is basically free color precision, which I believe is also why printers do it, but I'm not too knowledgeable about them.
@landmanland 3 years ago
@@Smittel Printers have the problem of having only a very limited set of colors, usually six in your typical inkjet printer. Fortunately, printers today have an extremely high density of "pixels", so dithering doesn't affect the end result; you only see it under a microscope. Straight dithering on its own is actually not used, as the end result can make the image muddy because of how ink droplets flow and mix with each other. I use it as a first-stage filter, but only for photos, since it's relatively CPU-expensive.
@guitart 3 years ago
Welcome back, Maestro!
@javidx9 3 years ago
lol Thanks Perini!
@renhoeknl 3 years ago
What I love about this channel is that you can just watch the video and learn something without actually having to code along. You certainly can if you want to, but just watching and learning some new algorithms is really nice too.
@brentgreeff1115 3 years ago
I love this channel - this is the year I take a few months off to actually try to implement all this code.
@suzuran451 3 years ago
Very nice! I've wanted to learn about color quantization and dithering for a long time and this video explained them in a very understandable way! Thank you!
@setharnold9764 3 years ago
The look of my childhood! I'm always amazed at how your framework makes the topic of the video flow so smoothly. Nice stuff.
@thomas3754 3 years ago
A new video! You call this 'Pog' these days, I think. Very high quality as always; excited for the next one already.
@Pariatech 3 years ago
As always, a great tutorial. I like that you start with the demo; that's a nice hook to keep watching. I'm curious whether I could use this algorithm to emulate old 16-bit art using high-res pictures. I'll have to try it. One more project on my bucket list, hahaha.
@javidx9 3 years ago
lol thanks! Yeah, it's a great way to make pictures look retro. Most art software will have an equivalent "filter". In fact, I tested a version of my implementation against Affinity Photo and got exactly the same results, so we know how they're doing it :D
@SergiuszRoszczyk 3 years ago
My thought on that would be to take VGA output, dither it and connect it to an EGA 64-color monitor. That could be interesting: something that back in the day video cards weren't capable of (at least not 60 times a second).
@ric8248 3 years ago
It's fascinating that you're doing some DSP! I hope you enter the world of audio effects one day.
@tjw_ 3 years ago
new javidx9 video?! christmas came slightly late this year I see!
@kiefac 3 years ago
Or extremely early, if you don't handle the overflow correctly
@anonanon3066 3 years ago
Wow. This is amazing. How did I not know about this? Never would I have imagined it to be such a simple algorithm.
@crazykidsstuff 3 years ago
Best part about this weekend? Working through this video! Very entertaining and very informative. Thanks so much!
@pskry 3 years ago
So good to see you're back! Hope you all are well!
@wesleythomas6858 3 years ago
Glad to see you back!!!
@javidx9 3 years ago
lol cheers Wesley, not as frequent this year, but I'm hoping for once a month.
@carlphilip4393 3 years ago
Hey javid, you're a great guy! I'm currently at university and I look up to you! It's amazing that you share all your knowledge with us for free, and you're an excellent teacher!
@javidx9 3 years ago
Hey that's very kind of you Carl, good luck with your studies, and you can aim much much higher than me!
@aropis 3 years ago
So great to have you back! Really awesome for people new to image processing. If you had linked dithering to printing you would have completed the circle; I can imagine their AHA moment, especially if you mentioned the CMYK color space. Awesome stuff, keep it up! Really, this video opens up many interesting topics in signal processing. Reducing a dithered image shows the limits of nearest-neighbour/bilinear filtering; this could be the starting point of an image sampling video. All the very best for 2022!
@teucay7374 3 years ago
The best video I've seen since the year started. I am working on a program to produce pixel art from high def images, and this is super useful for that! Thank you javid!
@SergiuszRoszczyk 3 years ago
I used this technique to display pictures on a white/black/yellowish-brown E-ink display. I limited the palette to RGB values mimicking the three colors of the display and then dithered the picture. Works great for photos.
@PumpiPie 7 months ago
Very good video, good explanation ;D Keep up the good work :D
@treyquattro 2 years ago
This was another superb tutorial. Old (Robert) Floyd was certainly one of the giants of 20th-century computer science (e.g. Floyd's algorithm for finding cycles in lists, the Floyd-Warshall shortest path algorithm, program correctness, work with Knuth, etc.). BTW, with modern C++ and class template argument deduction, if you're creating a std::array you can leave out the item count - and even the type, if the elements are all of the same type - when initializing from an initializer list, e.g. std::array a{1, 2, 3, 4, 5}; // creates a 5-element array of type int
@Komplexitet 3 years ago
Yay new video!
@StarLink149 2 years ago
I love your videos. :) I always learn something interesting and can't wait for you to release more. On another note, I've always found old pixel art using Bayer dithering to look very nice.
@ianmoore322 2 years ago
I've always wondered how to implement this algorithm. Thank you, OLC. You always have the answers I need; console game engines and pixel game engines, for example.
@Maxjoker98 3 years ago
Dithering still has uses in "modern" applications. You can still get some more dynamic range out of a specific display using it. Floyd-Steinberg is rarely used for this nowadays, but the principle remains. Think of things like displaying higher-bit-depth images or videos on "normal" 24bpp monitors, or display stream compression, etc. EDIT: Also, dithering is not a scanline algorithm. Floyd-Steinberg is, but not all dithering algorithms are. Most of them are simple matrix operations!
@wes8190 3 years ago
Agreed; I used dithering just a few years ago on a graphics project to get impossibly smooth gradients with no banding. It was like magic.
@infinitesimotel 2 years ago
If you want to see some impressive dithering, have you seen the presentation by the LucasArts guy who only used 16 colours but could get crazy colour ranges, and even cycled the palette to make them seem animated?
@SquallSf 2 years ago
@@infinitesimotel The name of the guy is Mark Ferrari, and he is no longer at LucasArts; he left long, long ago.
@dimarichmain 3 years ago
So good to finally see you back!
@geehaf 3 years ago
You're back!! Great explanation and demonstration - as ever. :)
@_tzman 3 years ago
Thank you so much for introducing this brilliant algorithm to us. My mind is blown
@anoomage 3 years ago
I just did my own Floyd-Steinberg dithering for displaying photographs on an ePaper screen :D (a connected photo frame, where you can choose an image on your smartphone to be displayed on the ePaper, sent to the Arduino over Bluetooth). Can't wait to see how you did it!
@TheButcher58 3 years ago
Very interesting video. I once wrote an algorithm to apply a weight to a pixel (related to A*), where it would look at its neighbours. One of the problems I had was that when you iterate from top left to bottom right, it affects the results, and I needed to do it again in the reverse direction, which was quite inefficient. This algorithm made me think of that.
@Ethanthegrand 3 years ago
I love your videos, man. Even though your channel is based around C++, and I know nothing about it, I in fact watch a lot of these tutorials and program in Lua with my own pixel engine. That's the great thing about your videos: you visualise everything. Keep up the great work!
@davidwilliss5555 3 years ago
Years ago I developed dithering algorithms for printing and Floyd Steinberg is one of the algorithms we used. There was a similar algorithm called Stucki which worked the same way but distributed the error to more pixels using different weights and produced a more pleasing image. There's another problem that arises in printing in that often your pixels are not square and a printed pixel will overlap the neighboring white pixels so you have to weight them differently. We had one printer where this was so bad that if you printed a 50% gray by painting pixels like a checker board, the black pixels completely overlapped the white pixels and you got black. For that we ended up using a completely different algorithm.
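The Stucki kernel davidwilliss mentions can be written down as a weight table. The layout below follows the commonly published kernel (divisor 42, current pixel at offset (0,0)); the `Tap` struct and names are just for illustration:

```cpp
#include <cassert>

// Stucki error diffusion: offsets relative to the current pixel and
// the numerator of each weight; the common divisor is 42. The weights
// sum to 42, so the entire quantization error is redistributed, just
// over a wider neighbourhood (and more gently) than Floyd-Steinberg.
struct Tap { int dx, dy, w; };

constexpr Tap stucki[12] = {
                                        { 1, 0, 8}, { 2, 0, 4},
    {-2, 1, 2}, {-1, 1, 4}, { 0, 1, 8}, { 1, 1, 4}, { 2, 1, 2},
    {-2, 2, 1}, {-1, 2, 2}, { 0, 2, 4}, { 1, 2, 2}, { 2, 2, 1},
};

constexpr int SumWeights()
{
    int s = 0;
    for (const Tap& t : stucki) s += t.w;
    return s;
}
static_assert(SumWeights() == 42, "Stucki weights sum to the divisor");
```

A diffusion loop would use this table exactly like the Floyd-Steinberg one in the video, just with more taps per pixel.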
@secondengineer9814 3 years ago
Really cool video! Always fun to see a simple algorithm that does so much!
@ianbarton1990 3 years ago
Another really good video about a subject I've always found really intriguing. I remember the first time I came across dithering, playing around with The GIMP to convert full colour images to true black and white, and I thought it was some magic voodoo algorithm that must be beyond mere mortal levels of comprehension. That's why it's so satisfying to find out that the algorithm is very accessible and intuitive to understand, but there is still a solid level of mathematical thinking and nuance behind it. I think there's probably a natural follow-on video about generating 'optimised' palettes (where the computer decides which colours will best approximate the source image) too, if you're so inclined. :)
@super_jo_nathan 3 years ago
Am I correct in thinking that when clamping to 0 and 255 you lose some of the error propagation? Of course it's better than wrapping around, but wouldn't storing the altered value and only clamping when actually assessing the pixel result in better dithering?
@javidx9 3 years ago
You are! In fact I was intrigued by this too, and created a version where this doesn't happen. What I observed was the error propagation goes out of control and quickly saturates, so the bottom right of the image is garbled. I thought about including it in the video, but then I'd have to explain the new custom pixel type required and it didn't really fit. I would guess that the clamping is required to keep things under control; this could probably be achieved by other means, however, if you're prepared to go beyond just the basic Floyd-Steinberg algorithm.
@DFPercush 3 years ago
Seems like you could have a moving window of floating point values, like maybe 2 or 3 horizontal lines at a time.
@super_jo_nathan 3 years ago
@@javidx9 thank you for the detailed response and the informative video! Hope to see more videos like this from you in the future!
@nobody8717 3 years ago
@@javidx9 We'd probably have to halt at the place where the clamping would potentially kick in, to investigate what is happening: information overflowing, miscalculating, or translating unexpectedly. Partial dividends accumulating a discrepancy from rounding, or something like that. Debug when "clamp" is used and peek at the memory values of the vars.
@eformance 3 years ago
@@javidx9 That makes sense, since errors would propagate and propagate diagonally, and since the algorithm's bias is towards "brightness" it would go out of control. It seems the clamping was a fortuitous side effect that the algorithm needs. Did you try altering the bias constants too, to see if you could produce something more interesting?
@clamato2010 3 years ago
Greetings from Mexico, teacher. I am a fan of your channel and I have learned a lot from your videos.
@javidx9 3 years ago
Hey thanks Sam! Greetings from the UK!
@Cypekeh 3 years ago
love this dithering
@mehulajax21 2 years ago
David, your content is awesome. The information you present is pure gold; keep up the good stuff. I have a similar background to you (minus the game development, and with 10 years of automotive development), and I find a lot of the content carries over to auto dev for experimentation. I would like to know if you have some book recommendations.
@WillBourne999 3 years ago
Fantastic video thanks javid.
@Unit_00 3 years ago
Interesting topic as always
@javidx9 3 years ago
Thanks Mateo!
@kweenahlem6161 3 years ago
best teacher ever
@radojedom8300 3 years ago
Excellent. Interesting and educative.
@arrangemonk 3 years ago
Dithering is still everywhere, in every conversion for audio/image resampling (rgb32 float -> rgb8). I also used Floyd-Steinberg to distribute service fees applied to a whole document across its line positions.
@ElGnomoCuliao 3 years ago
Finally!
@therealchonk 3 years ago
Great Video. I'll try it out myself.
@Jade-Cat 3 years ago
A big factor in the brightening of the shadows might be not the dithering algorithm itself, but using it on sRGB (I assume) data with a linear distance function. Two pixels, one set to 0 and the other to 64, will emit more light than two pixels both set to 32.
@Tordek 3 years ago
Indeed! A gamma adjustment is necessary to linearize the image in between processing steps.
@bubuche1987 3 years ago
Exactly. And to test that, you can take pictures of your screen from some distance while it displays either a solid (127,127,127) color or a pattern of alternating black and white pixels.
@diskoBonez 2 years ago
really fascinating video!
@Rouverius 3 years ago
25:30: CMYK! And sure enough, it looks like a photo from a color newspaper! What's amazing to think about is that back in the 1930s, the first fax machines did a similar operation with vacuum tubes, using capacitors to hold the error values.
@vytah 3 years ago
The company I work in uses Floyd-Steinberg dithering to allow our users to print arbitrary images on B&W thermal printers. It works reasonably well.
@will1am 3 years ago
The return of the King! :)
@brainxyz 3 years ago
Very Nice! Thanks
@SoederHouse 3 years ago
Thanks for bringing back the youtube::olc::candy
@orbik_fin 3 years ago
The brightening effect is caused by doing arithmetic with gamma-compressed values instead of linear ones. E.g. middle gray (128) actually encodes a brightness of about 22%, not 50%. See sRGB on Wikipedia.
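The sRGB transfer curves orbik_fin refers to are standardised piecewise functions; a sketch of both directions (the function names are mine):

```cpp
#include <cmath>

// Standard sRGB <-> linear-light conversions, for values in [0, 1].
// Diffusing error in linear light avoids the shadow brightening that
// comes from treating gamma-encoded values as if they were linear.
double SrgbToLinear(double s)
{
    return s <= 0.04045 ? s / 12.92
                        : std::pow((s + 0.055) / 1.055, 2.4);
}

double LinearToSrgb(double l)
{
    return l <= 0.0031308 ? l * 12.92
                          : 1.055 * std::pow(l, 1.0 / 2.4) - 0.055;
}
```

Middle gray, 128/255, decodes to roughly 0.216 in linear light, which is why a "50%" pixel value carries much less than 50% of the light.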
@TomCarbon 3 years ago
A great advantage of Floyd-Steinberg is also that the ratios were chosen to sum to 16, so the division can be resolved with a 4-bit shift and be very efficient!
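In integer code that observation looks something like the sketch below. One caveat: right-shifting a negative value is only guaranteed to be an arithmetic shift from C++20 onward, though mainstream compilers have long behaved that way.

```cpp
// Floyd-Steinberg weights 7, 3, 5, 1 sum to 16, so err * w / 16 can
// be a shift. With >> the result rounds toward negative infinity
// rather than toward zero, which is usually acceptable for error
// diffusion.
inline int Share(int err, int weight)
{
    return (err * weight) >> 4;   // == err * weight / 16 for err >= 0
}
```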
@Moonz97 3 years ago
Great insightful video! I wonder, how do you handle pixels that are out of bounds at 18:42?
@catalyst5434 3 years ago
Amazing video! I really like your explanation; it is so clear and very easy to understand. Thanks for the nice content. I was looking for cache optimization videos but couldn't find a good one; maybe you could make a video about it, that would be awesome!
@hermannpaschulke1583 3 years ago
I'd say dithering still has uses today. Even with 24bpp you can still see banding in darker areas.
@OrangeDied 3 years ago
I know nothing about and have no interest in image processing, but I will watch this whole thing because, yeah.
@janPolijan 2 years ago
Hello there. I'm using plain C at the moment for basic graphics programming, so all the C++ lambda goodies feel like some sort of black magic, ha ha ha! But still, from your good explanations I understood most of your video and the Floyd-Steinberg dithering, and it's very interesting. While watching the B&W dithering at 18m15s, when you add the clamping, I started to wonder: I understand pixel values must not wrap around when diffusing the error, but isn't the clamping a little problematic, for potentially two reasons? #1) A slight decrease in dithering quality when we delete part of the error to be diffused in the next steps. #2) A significant amount of branching is added to perform clamping of all four adjacent pixels for every pixel we scan. I thought perhaps it could be avoided by simply computing Floyd-Steinberg in a signed buffer? Or, for in-place dithering, maybe one could add a simple preprocessing step to halve the intensity of the input buffer and then cast that pixel array to a signed type during processing. I dunno, maybe it sounds too "hacky", but it's an idea I'd like to explore.
@s4degh 2 years ago
I was fascinated by the last dithering showcase with only 5 colors.
@BudgiePanic 3 years ago
Another cool video 👍
@alrutto 3 years ago
Loved the video, I'm glad you're back. In terms of file size, how small would it be after filtering?
@javidx9 3 years ago
Thanks! It entirely depends on how many bits per pixel you filter to; for an uncompressed memory surface of 100x100x3x8 you can get it down to 100x100x3xN, where N is the number of bits.
@SreenikethanI a year ago
@@javidx9 Or we could also use an indexed format where applicable, saving even more space.
@АлексейБаскинов 3 years ago
Thank you. ☺
@dennisrkb 2 years ago
You should perform the dithering in a linear color space.
@thorham1346 a year ago
No, you need gamma correction, and the more bits per channel you have, the less gamma correction you need. sRGB to linear color space is already too much for even one bit per channel.
@eddiebreeg3885 2 years ago
Looking at the very distinct artifacts, it does look like the dithering algorithm used in the GIF format, which also uses indexed colors to reduce file size. I had no idea this algorithm could be that efficient! By the way, I noticed the use of sqrt and pow to find the closest match by minimizing the Euclidean distance: it would be far more efficient to ditch the square root completely (useless if you just want to find the shortest distance; minimizing the squared distance is enough) and replace the pow with a good old multiplication :D
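eddiebreeg's point in code form: a hypothetical nearest-palette-colour search (names are mine, not the video's) where the squared distance preserves the argmin, so neither sqrt nor pow is needed:

```cpp
struct Rgb { int r, g, b; };

// Squared Euclidean distance. It is monotonic in the true distance,
// so the closest palette entry is the same, with no sqrt/pow.
inline int Dist2(Rgb a, Rgb b)
{
    int dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
    return dr * dr + dg * dg + db * db;
}

// Index of the palette colour with the smallest squared distance.
int Closest(Rgb p, const Rgb* pal, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (Dist2(p, pal[i]) < Dist2(p, pal[best])) best = i;
    return best;
}
```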
@Roxor128 a year ago
Dithering isn't part of the GIF format. If you've got a GIF file with dithering, that was done by whatever program saved it. Also, it really should be avoided as it doesn't play well with the format's compression. Data compression is all about finding patterns that can be represented more simply. Noise doesn't contain any patterns, so you can't compress it. That's why it ruins compression for images in PNG and GIF format. Dithering makes a trade-off between spatial resolution and colour resolution. It fakes there being more colour by using patterns or noise that average out when viewed from a distance. And there's why it doesn't play well with compressed image formats: many dithering approaches rely on noise. One form of dithering that isn't terrible for use in compressed images is ordered dither, which relies on a regular pattern. Of course, if anything ends up resizing the image, that'll introduce a whole new set of problems to ruin things.
@eddiebreeg3885 a year ago
@@Roxor128 Although dithering isn't part of the GIF format, color indexing is. If you take an image originally encoded in some 24-bit format and convert it down to 8 bits, you'd probably want some form of dithering if you want it to look anything like the original. As for compression, I would argue that if you're using dithering, your goal is to reduce the size of the total pixel space you're using, so you're ALREADY kind of compressing the image in some way. As to whether it's the best way to compress, that's beside the point.
@Roxor128 a year ago
@@eddiebreeg3885 Indexed colour was designed for saving memory by separating the colour representation from the pixels while still allowing good colour accuracy. Yes, it does have the downside of limiting the number of colours you can have, but you can usually pick _which_ colours you'll have in your palette, and those will be the full accuracy of the display device (18-bit for VGA). When you only have 256KB of memory for the framebuffer, you're not going to waste it directly specifying the RGB values for every pixel. For some numbers: 640*480 with 16 colours takes about 150KB of memory (and was the highest resolution supported by plain old VGA). If you wanted to directly encode the 18-bit RGB values VGA uses for it, you'd need nearly 700KB (and it would have been a pain to program because 18-bit values do not fit neatly into byte-addressed memory). Compressed file formats are for saving disk space given a certain kind of image data. GIF was designed in the late 1980s when everyone was using indexed colour for everything. If you had an image saved as a GIF back then, it would almost certainly have been created from scratch with an artist-chosen palette, not converted from an RGB form. That's what GIF's compression was designed to work with. Dithering undermines that.

Just tried an experiment. Starting with a photograph resized to 640*480, I reduced the colour depth to 256 colours, generating the palette by the same method but mapping the colours differently. One image used nearest neighbour, the other used error diffusion. The uncompressed image was 301KB, the nearest-neighbour match was 157KB, and the error-diffused dither was 192KB. Okay, that's not really fair, given GIF wasn't designed with photographic content in mind. So I tried another experiment with an image downloaded from FurAffinity that'd be a closer fit, even though it needed conversion. Same process, but left at original size. The uncompressed 256-colour version was 1.1MB; nearest neighbour was 339KB, error-diffused was 519KB. Also tried 16-colour versions: uncompressed was 605KB, nearest neighbour 80KB, and error-diffused 199KB.

As for how they look: while the banding is a lot worse in the 16-colour version than the 256-colour one for nearest neighbour, it really doesn't look too bad. I could buy an artist producing something similar from scratch (though obviously they'd do a better job). In all three cases, error diffusion makes the compression significantly worse: 22%, 53% and 148% larger than nearest neighbour, respectively, for each of the test cases. You really have to ask yourself "Is it really worth potentially more than doubling the file size for the sake of some nice dithering?"

EDIT: Realised that Paint Shop Pro can do ordered dither if you limit your results to a standard web-safe palette. Results: Photograph: Uncompressed: 301KB, Nearest: 46KB, Ordered: 70KB, Diffused: 107KB. Drawn image: Uncompressed: 1.1MB, Nearest: 97KB, Ordered: 209KB, Diffused: 492KB. While ordered dither isn't as compression-friendly as nearest neighbour, it looks a hell of a lot better, and compresses significantly better than error diffusion, with error diffusion coming out 52% bigger for the photograph and 135% bigger for the drawing.
@eddiebreeg3885 a year ago
@@Roxor128 This is the part where I have to concede I am no expert on dithering; I haven't looked at all possible algorithms, although I know there are a few. The only thing I wanted to point out in my original comment was that the look of this specific algorithm strongly resembles what you see on *modern* GIFs that have been converted from RGB formats, even though the format wasn't designed for that originally. Thank you for taking the time with the experiments, I did learn from them :)
@frankgrimes9299 2 years ago
We could fine-tune the lambda at 9:35: we should get better code optimization if, instead of the branch, we mask out the MSB and shift it down.
@SreenikethanI a year ago
Same thought haha
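A sketch of the branchless clamp this thread is hinting at. This is the classic sign-mask trick, not necessarily what the video's lambda compiles to, and it assumes arithmetic right shift of negative ints (only guaranteed by the standard from C++20, though mainstream compilers behave this way):

```cpp
// Branchless clamp of an int to [0, 255].
// v >> 31 is all ones when v is negative, so ANDing with its
// complement zeroes negative inputs; the second line saturates
// anything above 255 by ORing in all ones and masking to a byte.
inline int Clamp255(int v)
{
    v &= ~(v >> 31);        // v < 0   -> 0
    v |= (255 - v) >> 31;   // v > 255 -> all ones in the low byte
    return v & 255;
}
```

Whether this actually beats a plain `std::clamp` depends on the compiler; modern optimizers often emit branchless min/max instructions for the straightforward version anyway.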
@yutdevmahmoud5271 2 years ago
Can you make a video on how to set up Visual Studio, import your engine, and work with it?
@javidx9 2 years ago
Yes! kzbin.info/www/bejne/m4WqhIeKrbdgidU
@GNARGNARHEAD 3 years ago
Nice one; I've been meaning to go back and have a look at the optical flow video, try and figure something out for horizon tracking on the ESP32-Cam. A nice refresher 😁
@barmetler 3 years ago
I want to point something out about pointers. In C++, the star is part of the declarator: int *i, j; will create one int pointer and one int. This is why we put the star on the right; it is not a style choice, since the star is part of the declarator, not the type specification. The same goes for references. This is in contrast to unsafe C# code, where the above snippet would create two pointers. Hope this helps!
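barmetler's example, made concrete (standard C++; the static_asserts just confirm which declarator gets the pointer):

```cpp
#include <type_traits>

// The '*' belongs to the declarator 'i', not to the type 'int', so
// this single declaration produces one pointer-to-int and one int.
int *i, j;

static_assert(std::is_same_v<decltype(i), int*>, "i is a pointer to int");
static_assert(std::is_same_v<decltype(j), int>,  "j is a plain int");
```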
@SchalaArchives2023ish 3 years ago
Been using SDL2 with my follow-alongs, as it's a tried and true frontend to the standard graphical APIs, with a few additional goodies such as a render scaling function
@trim7911 3 years ago
RGB error dithering... but after you said there's no cross-dithering between colours, all I can think is: what happens if you dither Hue/Saturation/Brightness (HSB, or HSL/HSI/HSV if you prefer)? I wonder what sort of funky things would happen... In theory it should still work, but with the possibility of rotating all the way round to a complementary colour. Then again, converting to HSB and back after dithering might just be too much of a pain. Still, you'd get some funky results... Very much something that might have been tried on TV signals, or maybe games consoles like the Sega Genesis, Super Nintendo or Amiga (assuming you're using composite out). Edit: wait, no, that's Y'CbCr... So much technology that's mostly gone and that I'd forgotten about.
@nanoic2964 3 years ago
I've noticed that the quality of the front-facing camera on a standard 9th-gen (2021) iPad is quite poor; this video has shown me that it's because it dithers quite a lot.
@evennot 3 years ago
The author of Return of the Obra Dinn has in-depth research on dithering on his blog, if anyone wants even more admiration of the topic.
@GregoryTheGr8ster 3 years ago
Nice, but what should the algorithm do when the current pixel is the last on a scanline?
@oschonrock 3 years ago
Is there a "bounds check" bug here (kzbin.info/www/bejne/oqTIg2mQnNp1hLs, on line 78)? I.e., do the coordinates of vPixel + vOffset fall off the end/beginning of the row/column, using image information from a part of the image that is VERY far away, or even worse, "outside the image"? (I hadn't checked the implementation of GetPixel and SetPixel to see whether, and if so how, they are bounds-checked.) Update: I just downloaded, compiled and checked. SetPixel does the bounds checking and simply ignores out-of-bounds pixel coordinates, so this is "OK"...
@FaridAnsari1 3 years ago
I got my first IBM-compatible PC in the early 90s, with a monitor that could only display 256 colors in Windows 3.1. I remember that when I wanted to save images or videos, I would play around with the quantization and dithering options in whatever graphics program to make them look right on my display. After watching this video, I really appreciate what dithering does (approximate with far less information and still get the idea of the image across!). I think it would make a cool post-processing effect for Pixel Game Engine based games, but I'm not sure if it's speedy enough for a good FPS?
@SianaGearz 3 years ago
If you have something CPU-rendered, then you can make Floyd-Steinberg work, it's fine, but it also looks terrifyingly bad in motion: when you have moving and non-moving parts of the image, every little movement causes a ripple of value changes to the right of and below it (assuming you process from the top left), while everything above and to the left stays static. It distracts you from the actually moving parts of the image and pulls your attention towards noise at the bottom right. You can use blue noise instead to achieve a similar-looking dither effect. With precomputed blue noise, diffusion-style dither is insanely fast on the GPU (or CPU), trivially parallelisable, and you can control the behaviour: you can make it stable frame-to-frame or vary it uniformly between frames. There are even 3D or spatiotemporal blue noises specifically for the purpose. Computing optimised noise is extremely slow, but it can be precomputed such that it wraps around seamlessly and simply shipped as a texture or array.
@smartito_97 3 years ago
Is this the algorithm that printers use?
@hackerman8364 2 years ago
Hey, can you make a video about headers?
@normwaz2813 2 years ago
Hi, a little off-topic, but I wonder if you could explain an anti-aliasing algorithm?
@nyyakko 3 years ago
welcome back! :D
@sunnymon1436 3 years ago
MYST had a lot of this in it, as I recall.
@RockTo11 3 years ago
I wish dithering were still used these days, even with 24-bit palettes. For example, the splash screen of the Hulu app (on Samsung TVs) uses a teal gradient but has a lot of posterization banding. Dithering would eliminate that.
@bubuche1987 3 years ago
In general, I think it would be easy to have shaders (I am talking about GLSL here; if you don't know what that is, this comment is going to make little to no sense) output colors in a much broader range. Everything is calculated not with integers between 0 and 255, but with "reals" between 0 and 1. The precision of those "reals" is invisible to the programmer, so it could be very high. Then, in the last step, when it's time to display the result on a screen with only 24 bits per pixel, the GPU could dither the whole frame (it would have the real result of what the color should be in those "reals", and the transformation to 24 bits would be the sampling). Invisible to the programmer (maybe a boolean to set to true), backward compatible with a lot of games, and improving the result a lot.
@Roxor128 a year ago
The serial nature of Floyd-Steinberg dithering isn't the only problem with it. It's also not a good fit for animations. The way FS dithering propagates the error through the image means that if you change a single pixel, everywhere after it will change as well, resulting in shimmering noise in an animation, which looks pretty bad. An animation-safe form of dithering needs to be localised and keep its pattern still relative to the screen. A Bayer-matrix ordered dither works quite nicely. Well-enough that the software renderer for the original Unreal from 1998 uses a 2*2 version of it on environmental textures to fake bilinear filtering. Interestingly, it's not dithering between colour values, but texture coordinates. Which makes sense as a way to save on performance. Much easier to add offsets to the coordinates of the texel to look up than to do bilinear filtering. Note that it only applies to the environment. Objects such as enemies and your weapon models are unaffected. Those just use nearest-neighbour texture filtering.
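A tiny sketch of the ordered-dither idea Roxor128 describes: the threshold is a pure function of (x, y), so every pixel is independent (parallel-friendly) and the pattern stays fixed relative to the screen (animation-friendly). The 2x2 Bayer matrix is standard; the helper name is made up:

```cpp
// 1-bit ordered (Bayer) dither. Unlike error diffusion, no state is
// carried between pixels: each output depends only on its own value
// and its screen position.
int Bayer1Bit(int x, int y, int v /* input brightness, 0..255 */)
{
    static const int M[2][2] = { {0, 2},
                                 {3, 1} };              // 2x2 Bayer matrix
    int threshold = (M[y & 1][x & 1] * 255 + 2) / 4;    // map 0..3 onto 0..255
    return v > threshold ? 255 : 0;
}
```

Larger Bayer matrices (4x4, 8x8) give more gray levels with the same structure; Unreal's 2x2 trick mentioned above applies the same matrix to texture coordinates instead of brightness.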
@Kaltinril 3 years ago
I wonder: if you had the input parameters (bits, ending error), could you use this as a lossless compression algorithm, working your way back from the bottom right to the top left?
@javidx9 3 years ago
Hmm, working backwards would require you to store which direction the error comes from. I had success some time ago dithering to low bit counts to compress, then Gaussian blurring and applying a 3x3 sharpening convolution to reconstruct, and this had such a shockingly low error that I went on to build a commercial product with it.
@Kaltinril 3 years ago
@@javidx9 that's a good point, I was forgetting about all the other error values that we don't know.
@allmycircuits8850 3 years ago
Once I tried to implement undo operations on an image in a "clever way": not just keeping an older version of the image, but implementing the inverse operation (as precisely as possible) and then storing the difference between the original image and the one inverted after the operation was performed. Classic "predictor/corrector". For example, for an image rotation by 1 degree, it first rotated, then rotated back, subtracted that from the original, and stored the difference inside the "undo" structure. I looked at what these diff images look like: almost uniformly gray areas (for these images I used an offset of 128, so black is the maximum difference to one side, white to the other) with a regular grid of pretty small noise. Of an original image using all levels from 0 to 255, so little remained that I cheered: the diff image would take just 1/10 or less of the original, still lossless, leading to an "svn for images" with small overhead. But alas, lossless compression is a ruthless bitch. The range of each pixel was lowered from 256 to just 16, but that's not a reduction by 16 times, just by 2, because 8 bits are replaced with 4 bits. What's more, that "residual noise" is almost incompressible, as any white noise should be. I'm afraid compression based on dithering will suffer from the same problems. But a very interesting topic anyway; there is still something magical about compression algorithms...
@bogdanstrohonov8310 3 years ago
Good evening Mr. Barr, how about a video on localization in games? Greetings, B S
@bubuche1987 3 years ago
Some thoughts after seeing this video:

I am curious about how we should handle the borders of the image. You assume it's always possible to delegate the error to the four pixels you mention, and that's not always the case.

I also strongly disagree with your statement that the number of colors is greater than what we can see. I think it's false in general (can you provide a pair of colors which, put side by side over even an immense area, would be indistinguishable?), and I know it is false for a special yet frequently encountered case: dark shades. There are only 256 variations of hueless colors, and that's quite a small number. Create a picture made of bands of those shades, put it on any screen, and you'll see the bands. There is a reason why so many games have banding issues (at least old games).

I also do not understand what you said about storing negative values. Yes, if you try to do IN PLACE dithering with a fixed amount of memory, you may face this problem. But in your case you are already creating a whole new array anyway; nothing stops you from having an array of signed shorts to contain the errors.

I am not an expert in dithering at all, but I am quite sure there must be alternatives that would allow for parallel dithering (with a random function based on _blue_ noise to avoid artifacts?). And dithering is still useful today (for printing, for example, and, as I mentioned, to deal with dark areas in movies or games).

Last but not least, your approach seems to assume linearity of brightness. If you have a field made of gray pixels of, let's say, (127, 127, 127), it's incorrect to approximate it with a field of alternating black and white pixels. Doing so will result in a much brighter image, and it has nothing to do with negative values or anything like that. (To check that, and to avoid sampling issues that would merge the black and white pixels, display your gray image full screen, stand away from your screen, and take a picture of it. Then do the same with the black-and-white pattern.) I banged my head on this problem for quite a long time.
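To make the signed-error and border points concrete, here is a minimal sketch (assuming an 8-bit grayscale source quantised to 1 bit; the function name and layout are illustrative). The error lives in a separate signed buffer, so negative values are no problem, and each neighbour write is guarded so border pixels simply drop the share that would fall outside the image:

```c
#include <stdlib.h>

/* Floyd-Steinberg to 1-bit output with a separate signed error buffer
   and explicit bounds checks at the image borders. */
static void fs_dither_1bit(const unsigned char *src, unsigned char *dst,
                           int w, int h)
{
    int *err = calloc((size_t)w * h, sizeof *err);   /* signed accumulator */
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int i = y * w + x;
            int v = src[i] + err[i];
            int q = v < 128 ? 0 : 255;               /* quantise to 1 bit */
            dst[i] = (unsigned char)q;
            int e = v - q;                           /* may be negative */
            /* distribute error only to neighbours inside the image */
            if (x + 1 < w)     err[i + 1]     += e * 7 / 16;
            if (y + 1 < h) {
                if (x > 0)     err[i + w - 1] += e * 3 / 16;
                               err[i + w]     += e * 5 / 16;
                if (x + 1 < w) err[i + w + 1] += e * 1 / 16;
            }
        }
    }
    free(err);
}
```

For example, a 2x2 field of mid-gray (128) comes out as an alternating on/off pattern rather than a solid block.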
@bubuche1987 3 years ago
Forgot to mention: is it possible to create an image that would look like a grid of black and white (like a chess board) but with slightly modified pixels, so that WHEN you dither with the right colors you actually obtain a totally different result? In general: is it possible to hide an image inside another one so you can only see the hidden picture with the right dithering algorithm?
@zxuiji 3 years ago
12:55, doesn't seem that hard to parallelise on the CPU side; at worst, deliberately yield thread execution time until a column's thread has started processing before launching the next column's thread... so long as it doesn't need to access the next column's pixels anyway.
@SianaGearz 3 years ago
I don't understand how you want to accomplish that. Every pixel depends on output from processing the neighbours to the left, to the top, and diagonally in between, and recursively so. If you want a parallel algorithm similar in effect and appearance to diffusion dither, you just use a precomputed high-quality blue noise and apply it as threshold offset.
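A minimal sketch of that threshold-offset idea, using a 4x4 Bayer matrix for brevity where a real implementation would load a precomputed blue-noise texture (names are illustrative; the lookup structure is the same either way). Each pixel is quantised independently, so the loop parallelises trivially, one thread or GPU lane per pixel:

```c
/* Ordered/threshold dithering: no dependency between pixels at all. */
static const int bayer4[4][4] = {
    { 0,  8,  2, 10},
    {12,  4, 14,  6},
    { 3, 11,  1,  9},
    {15,  7, 13,  5},
};

static unsigned char ordered_1bit(unsigned char v, int x, int y)
{
    /* per-pixel threshold in 8..248, so pure black and white stay solid */
    int threshold = bayer4[y & 3][x & 3] * 16 + 8;
    return v >= threshold ? 255 : 0;
}
```

A uniform mid-gray (128) field lights exactly half the pixels in each 4x4 tile, which is the 50% coverage you'd expect.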
@zxuiji 3 years ago
@@SianaGearz You would have each thread process one column. As long as each thread is launched in sequence, then by the time subsequent threads are ready for their next pixel, the pixel adjacent to it is already done with. At worst you would add another buffer filled with counts; when a count matches the thread's pool number, it indicates all the pixels to the left are done with. Since the threads are launched in sequence, it would be very rare for one to need to wait for the others to do their bit. Each thread would effectively be doing a subsection of the scan line, and only after the previous thread has already gotten to it; by waiting for the first pixel in a column to finish being processed before you let the next thread start on its column, you further reduce the chance that the count buffer would ever serve its purpose.
@SianaGearz 3 years ago
@@zxuiji So you have a triangular wavefront, where let's say you have a 4-thread pool, thread 0 processes pixel 3 of column 0, thread 1 processes pixel 2 of column 1, thread 2 processes pixel 1 of column 2, and thread 3 processes pixel 0 of column 3? Or alternatively instead of one pixel, you have each thread process a span of pixels but advancing in that same triangular wavefront? Yeah, with a decent batch size, it could work.
@zxuiji 3 years ago
@@SianaGearz Yeah, roughly like that. Things can be further optimised by keeping all buffers in one: if the dst is at the front of the shared buffer, the src is kept directly after it, and each row & column is prepended with a pixel that emits 0 light, then no thread would need to check whether its column is column 0; instead they would just subtract 1 from it prior to using it for the dst image. Having the dst at the front also means it can be used directly for sending to the graphics card.
@zxuiji 3 years ago
@@SianaGearz Now that I'm back home with a keyboard in front of me, I'll do a pseudo example of what the thread code would roughly look like:

```
void *dither_col( void *obj )
{
    DITHER *dither = obj;
    PIXEL *dst = dither->buff;
    PIXEL *src = dst + (dither->cols * dither->rows);
    uint *counts = (uint*)(src + ((dither->cols + 1) * (dither->rows + 1)));
    uint y = 1, X = dither->col - 1, Y = 0;
    for ( ; y < dither->rows; ++y, ++Y )
    {
        uint row = dither->cols * Y;
        PIXEL nxt = {0},
              a = src[(dither->cols * y) + X],
              b = src[row + dither->col],
              c = src[row + X];
        /* wait until the thread to our left has finished this row */
        while ( counts[y] != X )
            pthread_yield();
        ...
        dst[row + X] = nxt;
        counts[y]++;
    }
}
```
@giorgioguglielmone6528 2 years ago
Sorry if I write to you here. Could you do a tutorial on how to write a program in Visual Studio 2022 C++ to connect to a Firebird 4.0.1 database (maybe using Boost.Asio, or another library like SOCI or IBPP)?
@Lattamonsteri 3 years ago
I remember hearing an interview where a LucasGames employee told how, when he went to work there, dithering wasn't used in the games because it didn't compress well. But after he drew a dithered image and it looked so much better than the standard (EGA?) image, the coders were forced to implement dithering. Now, I wonder... is this Floyd-Steinberg dithering easy to compress? Could we use the standard posterized image as the compressed image, and then just store another array where the error amount is stored for each pixel? Then at runtime the algorithm would go through the image and recreate the dappled effect? Or is there a better way?
@SianaGearz 3 years ago
I don't see much use; you might as well compress the original high-colour-depth image instead. For the say 8 bits of your original colour image, if you're converting it to a 2-bit dithered representation, all you've done is store, for each pixel, a separate plane with 2 bits corresponding to the thresholded image, and another plane with 6 bits corresponding to the remaining error. You have not at all decorrelated the data but duplicated it: say you store differences between neighbouring pixels and give them a variable storage size depending on the stored value; then both planes encode the same general trend, and you'd be better off doing this on the whole pixel, since when there are substantial magnitude changes, you store them once rather than twice.

At low bit depths, the dithered image itself is probably pretty much incompressible, while at higher ones, nothing speaks against compressing the dithered image directly with local differences. I think someone else could come up with a better approach in terms of compression, but I wager a guess it wouldn't be simple at all.

On the other hand, if you knew you'd be diffusion-dithering the image for display, you could use a lossy compression algorithm on the high-bit-depth image, the artefacts of which would be particularly well hidden by the dither, as they're similar in appearance. If you know the ADPCM algorithm for compressing audio, there are 2D generalisations of it. It would even spit out data in the same order as consumed by the dithering algorithm, so you could have a pretty optimised implementation that simultaneously decodes and dithers. But I really don't know whether it would beat storing the dithered image uncompressed at lower bit depths, or trivial compression of the dithered image at higher ones. Sounds like a subject for a scientific paper or something, but maybe someone has done that before.
@Lattamonsteri 3 years ago
@@SianaGearz thanks for a thorough answer :D I'm not familiar with audio compression, or compression in general, but I think I understood most of what you said! x) As for my original idea, I forgot how many bits are needed for the error values xD I guess I thought they could also be rounded to 8 values or something, but that would probably cause very weird rounding-error artefacts!
@JoshRosario310 3 years ago
Audio Dithering next?
@yonis9120 3 years ago
[In the voice of Cornelius Fudge in Harry Potter 5:] He's back!
@samuelecanale5463 3 years ago
Hello, I'm trying to make my own pixel game engine, but I encountered compiler error C2089 ("class too large") on the big PixelGameEngine class. Did you encounter the same error? If so, how did you solve it? Hope you'll find the time to answer. Great video btw, I'm learning so much from you!
@javidx9 3 years ago
Thanks Samuele, sounds like you are allocating too much memory on the stack. Big areas of memory need to be allocated on heap and accessed via pointers or other appropriate interfaces.
@samuelecanale5463 3 years ago
@@javidx9 thank you very much. I'll try to fix it like this
@philtoa334 3 years ago
Nice.
@deathreus 2 years ago
Instead of writing out the long clamp fn, you could bitwise AND it with 255, no?
@javidx9 2 years ago
No, that performs something equivalent to modulus.
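A small illustration of the difference (the helper name is just for the example): the clamp saturates at the ends of the range, while `& 255` keeps only the low 8 bits, which is wrap-around, modulus-like behaviour.

```c
/* After error diffusion a channel can overshoot past 255 or dip below 0.
   Clamping pins such values to the nearest end of the range; AND-ing with
   255 wraps them to the opposite extreme instead. */
static int clamp255(int v)
{
    return v < 0 ? 0 : (v > 255 ? 255 : v);
}
```

For instance `clamp255(300)` is 255 while `300 & 255` is 44, and `clamp255(-20)` is 0 while `-20 & 255` is 236: an overshot bright pixel would wrap to near-black, and an undershot dark one to near-white.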
@densming 3 years ago
No Vimto can??
@mworld 10 months ago
CGA is back hehe.
@FrostGamingHype 3 years ago
I'm getting closer to building something like the console game engine you made in VS Code; I'm designing one in Code::Blocks.
@Ochenter 3 years ago
Hello David, Daddy. Long time no see; miss your lessons. Stay safe, Mister.
@javidx9 3 years ago
Hi Daniel, thanks as always, and yes stay safe indeed!
@watercat1248 3 years ago
This dithering method would be amazing for hardware or software that has limited color support, for example the NES, GB, GBC, etc. For people that create games or other software for those systems, I believe this information is very useful. Personally I'm not that good with code and algorithms, but I appreciate the video; honestly, the only way I'm able to create video games is because game engines exist.
@akimpus 3 years ago
Javidx9, hi. Do you want to touch on the topic of neural networks and artificial intelligence? I think with your teaching skills, I and other viewers could easily understand this topic.
@javidx9 3 years ago
Thanks, but sadly no. My academic background is actually in machine learning and network construction/simulation... I'm done with it. I find it quite dull.
@johnsports_iii 7 months ago
I've seen some recent games fake transparency by dithering stuff out.