One of the best (maybe the best) explanations of histograms on the internet. I truly appreciate your time and devotion, Cyrill.
@raminmdn 3 years ago
There are four terms that continuously come to my mind while watching your videos: Clear, Concise, Comprehensive, Attractively Presented
@DeniseNepraunig 2 years ago
Wow - I wish I could have been your student while studying! Great explanations and examples. Thank you for providing those lectures on YouTube!
@CyrillStachniss 2 years ago
Thanks
@luthfiaminulloh8177 2 years ago
Thank you, prof. Your lectures really help me a lot. 🙇🏻‍♂️
@oldshiloh9061 2 years ago
When calculating the histogram of a 24-bit color image, how would you do it, since the intensity range is 0 to 16,777,215 values and not just 0 to 255? For example, if you wanted to generate a palette of 256 colors to represent the 24-bit image, you could use maybe the octree method. This would not produce correct results if you used 3 separate histograms (one per color channel: R, G, B); you would need to consider all the channels combined. Is that correct, or am I missing something about the difference between a grayscale histogram vs. a true-color histogram? I suppose you could reduce each channel to 5 bits, which fits into an array of 32,768 elements (or 65,536 with a 5-6-5 split), but then you would be losing a lot of color range right off the bat.
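[Editor's note: a minimal sketch of the quantized joint-histogram idea described in the comment above, assuming an 8-bit RGB image. The function name, the 5-bits-per-channel choice, and the NumPy usage are illustrative only, not anything from the lecture.]

```python
import numpy as np

def joint_rgb_histogram(img, bits_per_channel=5):
    """Joint color histogram of an RGB uint8 image.

    Each channel is quantized to `bits_per_channel` bits and the three
    quantized values are packed into a single bin index, so the histogram
    counts combined colors rather than per-channel intensities.
    """
    shift = 8 - bits_per_channel                       # drop the low bits of each channel
    q = img.astype(np.uint32) >> shift                 # shape (H, W, 3), quantized channels
    idx = (q[..., 0] << (2 * bits_per_channel)) | \
          (q[..., 1] << bits_per_channel) | q[..., 2]  # packed bin index per pixel
    n_bins = 1 << (3 * bits_per_channel)               # 32,768 bins for 5 bits per channel
    return np.bincount(idx.ravel(), minlength=n_bins)

# Toy usage: random 100x100 RGB image
img = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
hist = joint_rgb_histogram(img)
print(hist.shape, hist.sum())  # (32768,) 10000
```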
@CyrillStachniss 2 years ago
It depends on your application. Often, you basically use three histograms, one for each channel. If you need a full one over all 2^24 values, I would use a hash table as the internal data structure.
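[Editor's note: a small sketch of both options mentioned in the reply above, assuming 8-bit RGB input. The hash-table variant simply uses a Python Counter keyed on the packed 24-bit color, so it only stores colors that actually occur in the image; the function names are made up for illustration.]

```python
import numpy as np
from collections import Counter

def per_channel_histograms(img):
    """Three 256-bin histograms, one per R/G/B channel of a uint8 image."""
    return [np.bincount(img[..., c].ravel(), minlength=256) for c in range(3)]

def full_color_histogram(img):
    """Sparse histogram over all 2^24 colors, stored in a hash table.

    The key is the packed 24-bit value R<<16 | G<<8 | B; only colors that
    actually appear in the image get an entry.
    """
    packed = (img[..., 0].astype(np.uint32) << 16) | \
             (img[..., 1].astype(np.uint32) << 8)  | \
              img[..., 2].astype(np.uint32)
    return Counter(packed.ravel().tolist())

# Toy usage
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
r_hist, g_hist, b_hist = per_channel_histograms(img)
color_hist = full_color_histogram(img)
print(len(color_hist), "distinct colors out of", img.shape[0] * img.shape[1], "pixels")
```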
@oldshiloh9061 2 years ago
@@CyrillStachniss Separate histograms are useless when you want to perform an analysis based on the true color of the pixels within the used gamut. Maybe an octree is better.
@michaelpettit1263 1 year ago
@@oldshiloh9061 This is a fun puzzle I'll go think about. Image scientists probably have a good answer other than separate histograms vs. the whole banana all in one. I mean, think about it. A WorldView-3 8-band color image (which can be huge pixel-wise) has up to 11 bits of intensity data per band (in addition to the 4x finer PAN band). If the image was shot with the SWIR sensor also recording, that is another 8 bands at 14 bits of intensity data per band, at 4x coarser resolution. All 17 of these values legitimately contribute to the 'color' of a given point on the ground, even though they are at differing spatial resolutions. And the illumination and look angles contribute to the color when looking at the same point at different times of the day, etc. I've always wanted to highlight a blob of pixels on screen and tell the machine "please find me all the things that are this color, where color means all 17 input values." It has been fun and frustrating to figure out how complicated this really is. While I'm off researching this, thank you to Cyrill for 10 years of photogrammetry, computer vision and robotics videos.
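[Editor's note: just to sketch the "find everything this color" idea from the comment above in the simplest possible form, ignoring the differing spatial resolutions, illumination, and look-angle effects it mentions: stack the bands into one vector per pixel and threshold the distance to the mean spectrum of the highlighted blob. All names, the band count, and the threshold are hypothetical and for illustration only.]

```python
import numpy as np

def find_similar_spectra(cube, seed_mask, max_distance):
    """Return a mask of pixels whose band vector is close to the seed blob.

    cube         : (H, W, B) array, all B bands resampled to a common grid
    seed_mask    : (H, W) boolean mask of the highlighted blob
    max_distance : Euclidean distance threshold in normalized band space
    """
    # Normalize each band so, e.g., an 11-bit band and a 14-bit band are comparable.
    bands = cube.astype(np.float64)
    bands = (bands - bands.mean(axis=(0, 1))) / (bands.std(axis=(0, 1)) + 1e-9)

    mean_spectrum = bands[seed_mask].mean(axis=0)           # (B,) mean spectrum of the blob
    dist = np.linalg.norm(bands - mean_spectrum, axis=-1)   # (H, W) distance per pixel
    return dist <= max_distance

# Toy example: 8 + 8 + 1 = 17 bands on a 50x50 grid
cube = np.random.rand(50, 50, 17)
seed = np.zeros((50, 50), dtype=bool)
seed[20:25, 20:25] = True
mask = find_similar_spectra(cube, seed, max_distance=2.0)
print(mask.sum(), "pixels flagged as the same 'color'")
```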