Are longer subs really better? An experiment in broadband and light pollution.

  10,445 views

Deep Sky Detail

1 day ago

Comments: 109
@deepskydetail 2 days ago
Thanks, everyone! If you'd like to support the channel even more, consider becoming a buymeacoffee member: buymeacoffee.com/deepskydetail
@FrankSD76 3 days ago
Thanks! Nice to see some numbers instead of just intuition.
@deepskydetail 3 days ago
Thanks for the support! I like looking at the numbers too. I find it really interesting seeing where intuition agrees or doesn't agree with the numbers, and then trying to figure out why. And, of course, I hope people let me know when I've done something incorrectly.
@astrofalls 3 days ago
Anecdotally, from what I've found doing ultra-faint narrowband imaging from very dark skies, sub length absolutely becomes important for details near the read noise limit. It is so important that I'm totally fine with clipping dozens of my stars just to expose long enough to remove the blotchiness. The satisfactory exposure times I've found for these objects depend entirely on the f-ratio of the scope: for my FSQ85 at f/5.6 I typically expose for 1200s, for my FSQ106 at f/3.6 I go for 600-900s, and for my RASA8 system at f/2 I expose for 300s. This is all for narrowband.
@nickambrose8606 3 days ago
@@astrofalls and you are probably not using the narrowband stars anyway so blowing them out doesn’t matter
@deepskydetail 3 days ago
Great comment, and thanks for sharing! Your experience is in line with what I understand as well: in dark skies going longer is definitely better! I bet there are a ton of astrophotographers at Starfront who have found similar things :)
@monkeypuzzlefarm 3 days ago
I am so glad that you are here to do the hard work! Fascinating video as usual!
@deepskydetail 3 days ago
I'm glad you're enjoying the videos!
@gregmac8268 3 days ago
Get back to work... ;)
@btornqvist 3 days ago
Thanks!
@deepskydetail 3 days ago
Thank you! I appreciate it!!!
@CuivTheLazyGeek 3 days ago
Amazing work! I'm not surprised about the SNR result; it does align with our understanding (phew!). I think what was referred to as blotchiness was really quantization error, which can indeed happen depending on the gain. If the gain is 0.5 e-/ADU, we'd always count electrons, so no quantization error. But if you have a gain of 10 e-/ADU and your pixel collected 19 electrons in a single exposure, tough luck, it's only one ADU. And if your shot noise is just 1 e- for instance, then you'll only rarely count 2 ADU for that pixel, so as we stack the average will still be 1 ADU. Maybe an area nearby is just slightly brighter and almost always counts 2 ADU; I can indeed envision blotchiness. If you were to use gain 0 on some cameras in a Bortle zero zone with no light pollution, or with narrowband imaging, maybe the blotchiness would appear? Not sure.
@AF29007 3 days ago
It's worth considering that the IMX294 sensor is notorious for banding noise issues with narrowband filters. I wonder whether this would have any impact on these kinds of experiments.
@deepskydetail 3 days ago
Thanks, Cuiv! That's a good question. Good points about gain. Yes, I think you're right that the blotchiness in the video is better described as quantization error. It'd be nice to go to dark skies to figure this out ;)
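[Editor's note: a minimal sketch of the quantization effect Cuiv describes, assuming a simple Poisson photon model and the hypothetical gains he mentions (0.5 vs 10 e-/ADU). Not from the video.]

```python
import numpy as np

rng = np.random.default_rng(42)

def stacked_mean_electrons(mean_e, gain_e_per_adu, n_subs):
    """Simulate one pixel over n_subs exposures: Poisson photon counts are
    quantized to integer ADU at the given gain, averaged across the stack,
    then converted back to electrons for comparison with the true level."""
    electrons = rng.poisson(mean_e, size=n_subs)
    adu = np.floor(electrons / gain_e_per_adu)  # the quantization step
    return adu.mean() * gain_e_per_adu

for gain in (0.5, 10.0):
    est = stacked_mean_electrons(mean_e=19, gain_e_per_adu=gain, n_subs=500)
    print(f"gain {gain:>4} e-/ADU -> recovered mean ~ {est:.1f} e- (true value 19)")
```

At 0.5 e-/ADU every electron is counted and the stack recovers the true level; at 10 e-/ADU the coarse quantization biases the recovered level, which is the blotchiness mechanism described above.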
@leonidous888 3 days ago
You're such a super nerd and I love it. This type of detailed analysis speaks to me so much more than Astrobackyard (he did a "comparison" of filters by downloading random, uncontrolled shots from AstroBin).
@deepskydetail 3 days ago
Thanks for the positive feedback! I definitely like these types of experiments, as you can tell. I think there is benefit to looking at astrobin to get a rough idea of things (I do it a lot, tbh!). But, you're right, there are a lot of uncontrolled variables that make it hard to determine what is going on.
@adventuresofshadowdog 3 days ago
Wow! You really did put a lot of time and effort into this video. The results and information you shared are super helpful. Thanks!
@deepskydetail 3 days ago
Thanks! Glad you enjoyed it. I hope one day my dog can meet Shadow :)
@AF29007 3 days ago
Love the stats approach to these topics, please keep it up.
@deepskydetail 3 days ago
Thank you!
@dlrager 3 days ago
I rarely go over 300s for any sub, because going longer increases the risk of losing much more data to a single bad sub. If you have a tracking error on a sub, for instance, a 5-minute sub costs less of your total integration time. So I think there's a bigger picture to consider for the majority of folks in this hobby. Also, your comment about worrying over planes and satellites is not an issue with stacking. Those trails are treated as noise and eliminated with enough subs, which makes the argument for a greater number of shorter subs vs. fewer, longer ones.
@deepskydetail 3 days ago
Great points! I also usually never go over 300s for the same reasons :)
@jango71 2 days ago
@@dlrager Ah ok so I do not have to eliminate subs with plane tracks…
@Naztronomy 2 days ago
This is an AMAZING video. Really nice work with the experiment and you confirmed my experience in my bortle 7/8 location, I just never had the numbers to show it. Keep it up, I love videos like this!
@deepskydetail 2 days ago
Thanks, Naz!! Glad to know the data match your experience :)
@elbass0 3 days ago
Thanks for all the hard work you're putting into your videos.
@deepskydetail 3 days ago
Glad to do it. Thank you! :)
@dersupra 3 days ago
Oh, I need to watch this. The issue I found personally: at 10 min exposures you end up with 6 subs an hour instead of 12. It's the SNR I am wondering about.
@ubit64 3 days ago
There is an error in the part of your video starting at about 16:00: you compare 10x300s subs with 10x600s subs, so the 600s data has a higher total exposure time. You should have used 20 subs of 300s for the comparison. In theory you could then do a "temporal binning" by adding pairs of subs (1+2 => a, 3+4 => b, 5+6 => c etc.) and stack them with a 16-bit result. This would give the same average values as 10x600s subs.

But what exactly happens during stacking heavily depends on the algorithms used and the implementation. I think most stacking software internally works with (at least) 32-bit or 64-bit floating point precision and only converts the result into the requested output format (16/32 int/float). This means that during the stacking process the values 5.5 vs 6 ARE distinguished. If you normalize the images during stacking, the results should be nearly identical for the same total exposure time, because the 300s subs are "doubled" compared to the 600s subs during normalization (if there are no hot pixels left after calibration and the stars are not burned out in the 600s subs).

In general you should use an output format for stacking that matches the precision of your sensor and the number of subs, to get the best out of your data. If your camera works with 12 bits, stacking 16 subs leads to a 16-bit image. With a 14-bit camera, stacking more than 4 subs into a 16-bit output loses information. With a 16-bit sensor and 32-bit output you can stack up to 65536 images without losing information. Therefore you should always use a precision for the stacked image that matches your data. If you do so, there should be no significant differences even in the fainter parts of the image, as the "fractional values" are preserved during the stacking process.
@deepskydetail 3 days ago
Thanks for the comment! I agree, the hypothetical example should have controlled for total exposure time. Good catch! The real-world data did control for total exposure time though. I agree about the 32/64 bit data distinguishing between 5.5 and 6.0. Hopefully, that was clear, but it might not have been!
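[Editor's note: a minimal sketch of the precision point in this exchange, using the 5.5 vs 6 ADU example from the video. The numbers are illustrative only.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical faint pixel whose true level is 5.5 ADU: individual 16-bit
# subs dither between the neighboring integer readings around that value.
subs = rng.poisson(5.5, size=20).astype(np.uint16)  # twenty short subs

mean_f32 = subs.astype(np.float32).mean()   # stacker's internal float precision
mean_u16 = np.uint16(round(mean_f32))       # same stack forced into 16-bit integer output

print("subs:", subs)
print(f"float32 stack: {mean_f32:.3f} ADU, 16-bit integer output: {mean_u16} ADU")
```

The float stack lands near 5.5; the integer output snaps to 5 or 6, discarding exactly the fractional information stacking created.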
@soaringfranz 2 days ago
Very interesting video! I have a question about the 32-bit example... where do the 32 bits come from? Is it the unsigned integer space used by Siril for stacking? Also, although the camera driver may remap the output values to always span the full 16-bit range, the 294MM is a 14-bit camera in bin2 and a 12-bit camera in bin1, so if you examine the ADU values of an uncalibrated sub you will see that only one value in 4 (for bin2) or 16 (for bin1) is represented.
@stevenmiller5452 2 days ago
The precision of the 32 bits (which is floating point, so actually only a 24-bit mantissa) comes from the stacking. Averaging all of these discrete values creates new intermediate values. You gain levels at a rate of log2(N), where N is the number of lights (subs). It's easy to visualize: with two subs you can get values of 20 and 21, which average to 20.5, an intermediate value, so you double the number of levels and now you have halves. With four subs you have quarters: you can get 20.25, 20.5, 20.75. So with 256 lights (log base 2 of 256 is eight) you gain eight bits of precision, because you can now create 256 intermediate levels. This is why 32-bit precision is needed during stacking; otherwise these fractional values generated by the averaging get truncated.
@soaringfranz 2 days ago
@@stevenmiller5452 yes I thought it could be 32-bit float - in PixInsight that would be the norm, but I’m not familiar with SiriL. Also the examples given were all unsigned integers, hence my question. But 32-bit unsigned integers would be IMO too limited to represent the result of stacking.
@stevenmiller5452 2 days ago
@soaringfranz It's academic at this point, but you could, in theory, use 32-bit integers: the stacker could use a summing method instead of averaging, and then simply rescale to a 32-bit float (divide the final result by the number of subs) afterward to save in the proper format. The nice thing about floats is that it's easier to do weighted stacking by applying a non-integer weighting factor, for instance. Lots of math is just easier to program with floats.
@soaringfranz 2 days ago
@stevenmiller5452 agreed, but the fact remains that the example in the video performs an average in unsigned integer space, or all the discussion about rounding wouldn’t have any meaning
@stevenmiller5452 2 days ago
@@soaringfranz Yup, that was a problem. Good thing he caught it.
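[Editor's note: a tiny sketch of the log2(N) point made in this thread. It enumerates every possible average of N subs for a pixel that dithers between 20 and 21 ADU; the values are hypothetical.]

```python
from itertools import product

# A pixel whose true value lies between 20 and 21 ADU dithers between those
# two integer readings from sub to sub. Averaging N subs yields levels spaced
# 1/N ADU apart -- roughly log2(N) extra bits of precision, but only if the
# stack is stored at higher precision than the subs themselves.
for n in (1, 2, 4, 8):
    means = sorted({sum(combo) / n for combo in product((20, 21), repeat=n)})
    print(f"N={n}: {means}")
```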
@ondraskala1233 1 day ago
Wow, great job! Such a fun video
@deepskydetail 1 day ago
Thank you 🤗
@Farathus 3 days ago
Very nice video! I totally understand the "blockiness" issue because I have run into it. But nowadays we can process at very high bit depth, eliminating that issue. Most people don't process at 32-bit yet, so for them it is still relevant to use longer subs. Just this week I reprocessed my Cocoon Nebula images and was able to eliminate the blockiness I got before.
@deepskydetail 3 days ago
Thanks for sharing! Good info :)
@EliasHansenu7f 3 days ago
1. The sensor's dynamic range is best used when the range from minimum to maximum brightness in an image covers the whole range of your sensor. The minimum brightness has to be higher than the read noise, so you have to collect enough photons for each sensor pixel. Hence more exposure time is better for dynamic range, until the brightest object of interest saturates.
2. Now comes stacking. Every image should be stretched for brightness before the averaging to avoid numerical issues, which means a pass over all subs to get the max and min brightness of all images and calculate an offset and gain. The offset and gain are then used to stretch the brightness of all images. The last part is averaging over all subs.
3. This last step removes some banding from the processing.
4. But the sensor isn't perfect, which means there are always leakage currents causing a background level. This is the reason for dark frames. The leakage current reduces the dynamic range of the camera.
5. There is an optimum exposure time depending on the camera (see the sketch below). Cooled sensors have the advantage of reducing read noise and leakage currents. Larger pixels collect more photons. When the object of interest is dim, saturating the brightest objects is the way to go.
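[Editor's note: related to point 5, a common rule of thumb (not from the video) picks the shortest sub where sky shot noise swamps read noise. A minimal sketch, with hypothetical sky rates and read noise:]

```python
def min_sub_length(sky_e_per_px_per_s, read_noise_e, swamp_factor=10.0):
    """Shortest sub (seconds) where sky shot-noise variance exceeds
    read-noise variance by swamp_factor: t * sky_rate >= k * RN^2."""
    return swamp_factor * read_noise_e**2 / sky_e_per_px_per_s

# Hypothetical rates: 2 e-/px/s (broadband under a bright suburban sky) vs
# 0.02 e-/px/s (narrowband at a dark site), with 1.5 e- read noise.
for rate in (2.0, 0.02):
    print(f"sky {rate} e-/px/s -> sub >= {min_sub_length(rate, 1.5):.0f} s")
```

Under these assumed numbers the bright sky is sky-limited after a few seconds, while the narrowband/dark-site case calls for subs of many minutes, which matches the long narrowband exposures reported elsewhere in this thread.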
@comeraczy2483 1 day ago
Thanks for the good work. I fully agree that sub length won't affect blotchiness, based on the properties of Poisson distributions. However, I believe your measurement of blotchiness isn't going to be very representative of what the eye would see. My understanding of the effect you are trying to measure is that it is similar to banding: almost always irrelevant, but very noticeable on smooth gradients.
@deepskydetail 1 day ago
Thanks! Yes, I don't think it will be very representative. Still trying to figure out how to measure blotchiness better! Any suggestions are welcome!
@benjaminolry5849 3 days ago
I think many of these discussions must be held with reference to the sensor tech being used. Although CCD and CMOS sensors are both dominated by photon shot noise, CCD sensors often have more read and dark current noise. For a sensor with more read noise, it is favorable to seek out the longest sensible exposure time, as read noise does not scale with exposure time. I have only ever worked with CMOS sensors, and most of the time I went with the simple rule that long exposures are good, but cut back if the sensor's dynamic range clips more than the cores of bright stars. This establishes a sensible upper limit, and the rest is down to other factors like guiding accuracy etc.
@deepskydetail 3 days ago
So true! If an older CCD camera has a read noise of 10, then a lot of things could change with these analyses. Thank you!
@JeffHorne 3 days ago
Amazing. Thank you for this. Narrowband next! 😊
@deepskydetail 3 days ago
Yes! I hope I have clear skies soon :)
@tostativerdk 3 days ago
+1 for a narrowband version! :)
@TevisC 2 days ago
Narrowband analysis would be awesome... I appreciate your deep-dive analysis vs. just a few-minute clip from a guru. For a dual-scope setup, I believe it's the best way to halve your acquisition time. Faster than f/4 comes with a ton of strings attached. In many ways, I'd rather have, for example, two (or three) 80mm refractors at f/6 than an f/4 scope (or f/4.8 vs. f/3.4 by the math). I've considered three 80mm SVBONY triplet refractors and three mono IMX533 sensors. That way no data gets stacked from multiple telescopes; it's just combined in post-processing.
@deepskydetail 2 days ago
Thank you! Totally agree: dual rig setups seem to have a lot of benefits :)
@nikivan 3 days ago
I love the dedication and attention to detail. Please let me know if you'd like to get access to a year's worth of data collected from a dark site in NM. Both broadband and narrowband targets were shot with 2600MM and 294MM cameras. I am confident that some good analysis can be performed on the data.
@deepskydetail 3 days ago
That would be amazing if you are able to! You can email me at deepskydetail at gmail if you'd like. Thank you!
@KCM25NJL 3 days ago
This feels like a problem that could be modeled (supervised machine learning) if enough data could be captured across a range of cameras and scopes. I'm not an astrophotographer myself, so I'm not sure whether much of the data collection could be largely automated, but if it could, something like tracking a playlist of targets and pulling the same range of sub exposures across various hardware... the premise being to tune the model to give you the ideal exposure times for your intended target and setup. I get the feeling the data pile would have to be a crowd-sourced effort just to cover a wide enough range of gear to inform the model.
@deepskydetail 3 days ago
This is a great comment! I think a lot of the optics/measuring is fairly well understood in general. Some developers have made calculators based on those equations. But machine learning might be able to add nuances that are tough to model with just the equations. Thank you!
@pcboreland1 3 days ago
I've been looking into this myself while developing a new stacking program. In your analysis of blockiness, are you taking into account that the final displayed image is 0-255 (8 bits)? Although you did mention the monitor bit depth in passing at the end. When the image is stretched, you only have 8 bits of dynamic range to play with, so I'm not sure that having more precision (data values) with 32-bit versus 16-bit numbers makes for a better image. Although, in my software I do work entirely with 32-bit floating point values through the entire image processing pipeline. I'd be very interested in your and others' thoughts on this.
@deepskydetail 3 days ago
Yeah, I was thinking about monitors' dynamic range the entire time I was making the video. And I feel it's something I need to think about more, tbh! There are so many moving parts with the video. Having worked with older versions of Gimp that could only open 8-bit images, 8-bits is definitely not enough! It's a good question, and one that I don't (yet) have an answer for.
@pcboreland1 3 days ago
@@deepskydetail 255 levels of gray scale is likely more than human eyes can see. It's easy to get stuck on the wrong thing! You can draw false conclusions. From my googling it is between 450 and 900 gray-scale levels while at your prime! It is a pity one cannot post links in a YouTube comment. Perhaps I could email you directly?
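[Editor's note: a small sketch of why bit depth before the stretch still matters even for an 8-bit display. The data are synthetic; only the order of stretch vs. quantization changes.]

```python
import numpy as np

rng = np.random.default_rng(1)
# Faint linear data occupying only the bottom of a 16-bit range (0..500 of 65535).
img16 = rng.integers(0, 500, size=100_000).astype(np.float64)

# Quantize to 8 bits first, then stretch: the faint end is already crushed.
early = (img16 / 65535 * 255).astype(np.uint8)
print("gray levels, 8-bit conversion before stretch:", np.unique(early).size)

# Stretch at full precision, then quantize for display: many more usable levels.
late = ((img16 / 65535) ** 0.25 * 255).astype(np.uint8)
print("gray levels, stretch before 8-bit conversion:", np.unique(late).size)
```

So even though the monitor only shows 256 levels, which 256 levels you get depends on having the extra precision available when the stretch is applied.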
@evilkidm93b 3 days ago
You can also do these experiments without a tracker, in a dark room at night with a faint artificial light source.
@deepskydetail 3 days ago
Yes! I've done something similar in the past. I thought I'd do a real life example this time so I can examine star shapes and things. But you're 110% correct :)
@xe1zlgg 3 days ago
Hi... nice video and interesting data... but the point of getting the longest sub frame is to lower the gain, surpass the read noise, and get deep into the image without saturating the stars... that's where filters come in... the STC Multispectra works very well under Bortle 6-7 skies without destroying natural colors.
@luboinchina3013 3 days ago
Exactly. I put my f/10 Edge HD in an observatory and spent around a month capturing M33 along with other objects, with an ASI2600MC and the Optolong L-Ultimate very narrow filter. I had to run it at gain 300 in order to separate the histogram from zero at the black point. I got disappointing results and really lost hope in that telescope with that filter on. Can you please help?
@herrcay0323 3 days ago
@@luboinchina3013 You should definitely get a reducer for your scope
@deepskydetail 3 days ago
Thanks for the clarification! I think there is a very good argument for that in darker skies. But, if there is substantial light pollution, I'm not sure you can really go that deep into an image with just LRGB. I've never used the STC filter. Do you think it helps with galaxies?
@Dionaeatrap 3 days ago
@@luboinchina3013 I use an 8" RC, 1600mm f/8, with an ASI2600MC and 3nm L-Ultimate and 4nm Altair SII/OIII. Not exactly your setup, but close. My subs are always 300 seconds with filters and 60 seconds without, and I always use a gain of 101 with an offset of 50, in Bortle 7+ skies. At that exposure length the histogram may not look like it has come off the left edge, but the numbers show separation from the left edge and few oversaturated pixels on the right, so it's good. My results with 10-30 hrs on target have been pretty danged good. f/10 is a bit slower, so you may need slightly longer than 300 seconds. Just my 2 cents.
@DSOImager 3 days ago
Interesting... I used to run relatively short subs with both OSC and mono, we're talking 2-3 min (5 min for NB) subs from my Bortle 5 backyard, and I was for sure seeing blotchy backgrounds that seemed to get worse with increased total integration time. I've been experimenting with different sub lengths lately, currently 6 min subs with the OSC. Narrowband does indeed change things, especially with the IMX492. On bright narrowband targets, 10 min subs worked well, but on targets with less signal (Ha regions in galaxies or minimal amounts of OIII) I run into the banding issues the IMX492 is infamous for. I've had to crank up the gain and go longer on the exposures, running 15 min subs in some cases. I recently sent one of my rigs to Starfront (65mm refractor and an ASI1600MM) and I'm experimenting with sub exposures there now, although I'm not taking measurements, just eyeballing the results. How does sensor well depth come into play here? Would a camera with a deeper well take advantage of longer subs?
@deepskydetail 3 days ago
Great comment, and great observation. To my understanding, a deeper well (all things being equal) should help expose for longer without clipping. It's something I probably need to think about more. Also, are you using your imx492 in bin1 or bin2 mode?
@DSOImager 3 days ago
@@deepskydetail I've been playing with both bin1 and bin2. I have an ASI294MM on my 8" Edge that I use in bin2. I also have a QHY294MM that one of my friends is letting me borrow. I had it on a 115mm refractor in my backyard, but I'm planning to swap out the ASI1600MM I have at Starfront with it. With that one I have tested both bin1 and bin2. I'll have to do additional testing once it's on the 65mm scope, but I could see myself using bin1 for lum and bin2 for RGB (and probably NB).
@starpartyguy5605 2 days ago
I take test images at 3 and 5 minutes. I make sure not to saturate my stars; otherwise they won't show color. One of the points mentioned by Adam Block.
@deepskydetail 2 days ago
Agree. There are so many variables to think about when choosing a good exposure time.
@stevenmiller5452 3 days ago
Very well produced video and a lot of work represented by your analysis, well done! Your conclusion is correct, and Tim's statement was incorrect. As long as you're overcoming read noise, it is the total integration time that matters, not the additional length of even longer subs. This is well established. Tim's statement goes against the current understanding, and if he was going to make such an extraordinary claim, he needed to back it up with some analysis.

And yes, it's absolutely critical that the data is stacked into a higher bit depth than the capture bit depth, because more subs give you more intermediate levels, as pixels will statistically bounce between two nearby values; if you don't stack into higher precision, you truncate that data, since there is no way to create the intermediate values. I was worried when I saw you stacking into 16 bit, thinking "No!!! Don't do that! You are throwing away all of your intermediate levels!" Thank goodness you recovered and actually stacked into 32 bit to explain why you need higher precision to create the additional intermediate levels that multiple exposures enable. What Tim doesn't seem to realize is that intermediate levels are created at a rate of log base 2 of N, where N is the number of exposures. I call this "single exposure mentality syndrome," where people extrapolate from a single exposure, thinking that a stack exhibits the same behavior. It doesn't; it's fundamentally different in its final precision.

Here is another analysis you can do which goes the opposite direction: when you take longer exposures at a gain with multiple photons per ADU, do you lose levels in the dimmer regions because you aren't distinguishing between individual photons? Would it be better to shoot at a gain that is never lower than one photon per ADU and just shoot more exposures to create more intermediate levels? I think the answer is probably yes, and both the math and testing would support this premise. I think in this case having more exposures at one ADU per electron (per photon) or better delivers the highest number of intermediate levels in a stack.
@deepskydetail 3 days ago
Thanks for the comment! That's an interesting analysis to do with exposing at lower gain settings. I might have to do a follow up video on that. I'm also glad that I caught the 32-bit thing. I'll admit this: I originally stacked the files thinking they were 32 bit. When I did the analysis, I realized they weren't! I was actually pretty surprised that the 16-bit stacks were so different. So, I had to do everything again.
@luboinchina3013 3 days ago
Can you please do the same experiment on an f/10 setup with a narrowband filter and a dim object? I put my f/10 Edge HD in an observatory and spent around a month capturing M33 along with other objects: ASI2600MC, 300 s subs, and the Optolong L-Ultimate very narrow filter. I had to run it at gain 300 in order to separate the histogram from zero at the black point. I got disappointing results and really lost hope in that telescope with that filter on. Do you think that in my case longer exposures would help? Can you please help?
@deepskydetail 3 days ago
You've convinced me to use my C8 at f/10 on my next experiment. I'll try my best!
@StarlancerAstro 3 days ago
I wouldn't use a duo-band filter on a target like M33. That's a broadband target and should at most use a basic light pollution filter, and ideally just a UV/IR cut. I wouldn't expect much of an image of a galaxy with a filter like the L-Ultimate.
@DSOImager 3 days ago
The L-Ultimate has a pretty tight bandpass at 3nm. It makes sense that you would need to increase gain and expose very long. On a target like M33, that filter would only be useful for grabbing Ha data (and maybe OIII) for an HaLRGB image. How does M33 look at f/10 in broadband?
@luboinchina3013 2 days ago
@@DSOImager Quite dim. It is a dim object indeed. And you would be surprised to find out how many amazing nebulae M33 has, not just Ha but OIII too. Just look up M33 in narrowband...
@luboinchina3013 2 days ago
@@StarlancerAstro Just google M33 nebulosity or M33 narrowband and you will be amazed how many great nebulae you will see.
@dmitribovski1292 3 days ago
Blocking has nothing to do with SNR. Blocking when stretching is caused by the captured bit depth. In an extreme case, you may have a 16-bit sensor on your camera, but if the dark area of your capture is only receiving 2 bits' worth of photons, capturing multiple shots and stacking is still going to average out at 2 bits, and no matter how you stretch it you will still only have the 4 levels that those 2 bits produce. The longer the exposure, the more photons hit the sensor and the more bits you fill. This is why noise isn't an issue in studio photography: you can add as much light as you need to fill the available bits.
@deepskydetail 3 days ago
Thanks for the comment! I was hoping in the video it was clear that SNR wasn't being used to measure blockiness, and that stacking in 32-bits can help compensate for shorter exposures.
@GrouchoDuke 3 days ago
Nicely done, as always. Data are good!
@deepskydetail 3 days ago
Thank you!
@rashie 3 days ago
👍👍 - phenomenal! thanks!
@deepskydetail 3 days ago
Thank you!
@naveenravindar 3 days ago
I think looking at the number of grays makes perfect sense, and always using a 32-bit stack will help reduce blockiness. 16 bits is just not enough for faint dusty targets where I have multiple hundreds of images going into a stack.

The general argument, that shot noise swamping the read noise dictates the maximum useful exposure time, holds no matter what. If one noise source dominates, it becomes the primary noise source regardless of your ability to measure it. For the exposure lengths 99% of people are using, and the photon rates with most optics on most targets, a 16-bit ADC over ~45-70 ke- of full well depth is sufficient. With high-QE cameras you are almost never going to collect more photons than your ability to measure them with "reasonable exposure lengths" (ignoring that some brighter stars will likely blow out). In other words, your quantization error is controlled by how dim the object is, not by the bit depth of the camera.

When I taught photometry at university (and to my knowledge this is the way it is still done), statistics on counts were done on the sum of the values in all of the images, with some type of kappa-sigma clipping used to reject outliers before the sums were taken. If you are using the standard CCD SNR equation from photometry for your SNR calculation, the statistics technically only apply to summed values, not averages (16 bit must involve some type of averaging or scaling), but I'm sure the results are close enough to work fine. Averages or bit-depth scaling are only used for images for convenience of file sizes and speed.

Using only sums of pixel values prevents the bit depth problem in a stack and mitigates the blockiness problem, since the individual values that make up the images can be stored as 32- or 64-bit values, and even for HUGE stacks there are more than enough numbers to go around. With sufficient images, using only sums, the quantization error from low photon rates "eventually" goes away, as the tails of the distributions are captured and differences in counts can be seen. If bit-depth scaling happens, or is needed because we run out of numbers, before that point is reached, the quantization error and hence blockiness may not be sufficiently mitigated, and more images going into a stack with greater bit depth are needed.

A 32-bit image is basically always sufficient. An unsigned 32-bit stack needs 65536 16-bit images with pixel values at their max before scaling occurs. Even if we lose a bit to the sign, we still have 32k images before we run out of numbers. A 16-bit stack of images from a 16-bit camera wastes a huge benefit of stacking: the additional dynamic range you get. If you want to use Photoshop or another program that limits many operations to 16-bit images, stretch the image before converting to 16 bit so that the dynamic range of the dim low end can be recovered and used. Hope this rambly thing made sense!
@deepskydetail 3 days ago
Makes sense to me! Good info! Quick question though: would averaging and summing be essentially equivalent as long as you've got the bit depth needed? Thanks again :)
@naveenravindar 3 days ago
Yeah, I don't see why not, but then the average would just be a scaled sum. Dividing by the number of frames is what is destructive to dynamic range and the ability to mitigate blockiness.
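[Editor's note: a simplified sketch of the sum-stacking-with-kappa-sigma-rejection workflow described above, on hypothetical data. Real stackers also handle normalization, registration, and weighting.]

```python
import numpy as np

def kappa_sigma_sum(stack, kappa=3.0, iters=3):
    """Sum-stack along the sub axis after iterative kappa-sigma rejection."""
    data = stack.astype(np.float64)
    keep = np.ones(data.shape, dtype=bool)
    for _ in range(iters):
        kept = np.where(keep, data, np.nan)
        mean, std = np.nanmean(kept, axis=0), np.nanstd(kept, axis=0)
        keep &= np.abs(data - mean) <= kappa * std
    # Rescale so pixels that lost subs to rejection stay comparable.
    n_kept = keep.sum(axis=0)
    return np.where(keep, data, 0.0).sum(axis=0) * len(data) / n_kept

# Hypothetical example: 50 subs of a 4-pixel patch, one sub hit by a satellite.
subs = np.random.default_rng(3).poisson(100, size=(50, 4)).astype(np.float64)
subs[10, 2] += 5000  # satellite trail on one pixel of one sub
print(kappa_sigma_sum(subs))  # the outlier is rejected, not averaged in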
@desbarry8414 3 days ago
Start using the SharpCap Pro Brain, i.e. the smart histogram functionality, which will stop you using longer subs when you no longer get any benefit from doing so. I always have SharpCap measure my sky and work out my optimal exposures.
@deepskydetail 3 days ago
It's a great tool! Thanks for sharing :)
@willowail 11 hours ago
200 shorter subs take wayyyy longer to process than 40 long subs, it's as simple as that.
@deepskydetail 8 hours ago
Yes! And they take up a lot more space on your hard drive!
@JuanRodriguezArchitect 1 day ago
Great video! However, it all boils down to money. It would seem to me that more megapixels give you better subs, so 20 subs taken with a 102-megapixel QHY461-PH will be better than 2000 subs taken with a 5, 10, or 15-megapixel camera. I'll be honest in saying that would be a true test. It's the difference between using a DSLR and a dedicated astro camera. It boils down to cost. I appreciate trying to walk around that, but unfortunately there is no walking around it.
@bamsemh1 6 hours ago
Well, my exposures are controlled by the clouds and weather 😬
@deepskydetail 3 hours ago
You're not the only one! 😅
@rosaluks644 1 day ago
Light pollution does not overwhelm the signal at long exposures. From the camera's standpoint, light pollution is indistinguishable from the signal and can only be subtracted based on our subjective judgment.
@deepskydetail 1 day ago
Thanks for the comment. I'm not exactly sure I understand what you're trying to say. The point I was trying to make is that shot noise from light pollution is random, which destroys SNR. It destroys faint signal's SNR faster than bright signal's. You can't just subtract light pollution from an image to restore the SNR because of the randomness. Longer subs won't help.
@rosaluks644 1 day ago
I see what you mean, but the signal is just as random as the light pollution. Both are noisy. The noise is due to the random nature of photons hitting the photodetector. This noise is the square root of the number of photons, so if the number of photons is large, the signal-to-noise ratio for both the light pollution and the signal increases. At long exposures, the signal-to-noise ratio can be large enough that light pollution (assuming it is uniform across the field of view, or can somehow be approximated) can simply be subtracted out.
@deepskydetail 20 hours ago
Unfortunately, I don't think that's the case. The DSO signal and LP signal are not equally noisy. In fact, the light pollution signal is generally much noisier. Exposing longer increases both the LP noise and faint signal noise. Its randomness makes it so you cannot approximate it and subtract it out (i.e., the LP noise and DSO noise are not correlated).
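[Editor's note: a small simulation of the point in this exchange, with hypothetical rates. Subtracting the mean sky level removes the pedestal but leaves its shot noise behind.]

```python
import numpy as np

rng = np.random.default_rng(7)
t = 300                 # sub length in seconds
dso, sky = 0.05, 5.0    # hypothetical e-/px/s: faint target vs heavy light pollution

signal = rng.poisson(dso * t, size=100_000).astype(np.float64)
pollution = rng.poisson(sky * t, size=100_000).astype(np.float64)
frame = signal + pollution

residual = frame - pollution.mean()  # subtract the (perfectly known) mean sky pedestal
print("true signal:         ", dso * t)
print("recovered mean:      ", round(residual.mean(), 2))
print("residual noise (std):", round(residual.std(), 2),
      " vs target-only shot noise:", round(np.sqrt(dso * t), 2))
```

The mean comes back correctly, but the residual noise is set by the bright sky's shot noise, roughly ten times the target's own shot noise here, and no subtraction can remove it.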
@LearningAstrophotography-jj9en 3 days ago
Sooo, with shorter subs you will never spot the difference then, so why not just say that? I get what you did, but look at the images; if anything, shorter subs mean less noise.
@deepskydetail 3 days ago
Thanks for the comment! As to why I don't just say it, I enjoy doing these experiments and sharing what I learn. I can say anything, but the experiments help make it more concrete, imo.
@LearningAstrophotography-jj9en 2 days ago
@@deepskydetail Well if nothing else, you got a new subscriber today. :)
@deepskydetail 2 days ago
@@LearningAstrophotography-jj9en thank you!
@robertking3098 3 days ago
Good video, but no audio.
@docterroxxo 3 days ago
I don't know if you know, but you have a lump on your neck you might want to have a doctor look at.
@deepskydetail 3 days ago
Thank you. I just had shoulder surgery. It might be related to that. But, I'll definitely check with my doctor.
@cadenseward2054 3 days ago
Nice video! Also first
@deepskydetail 3 days ago
Thanks, glad you liked it!
@awnstar2139 3 days ago
Thanks!
@deepskydetail 3 days ago
Thank you! I really appreciate the support :)