I'm very early in my PixInsight journey. Having watched these three videos, all I'm left with is WOW. Even with minimal experience in both image capture and processing, I found the difference in outcomes presented to be immediately obvious. Videos like this are the reason I keep checking Adam's channel (and why I signed up for the FastTrack Training). Not just the "how" but, more importantly, the "why".
@wandaconde3696 (3 years ago)
Thank you for putting together this 3-part video tutorial. This whole question of which frames should make the higher contribution to the final result has always been a mystery to me. I have never been inclined to use Noise Evaluation because: (1) I image from light-polluted skies, (2) the obstructions in my backyard force me to image objects that are not very high in the sky, and (3) many times I have high clouds that I don't see until I blink the images later on. Your explanation confirmed what I always thought: with noise evaluation, the frames with high clouds or with the object low in the sky may get a higher weight than the best ones. For this reason I decided to use SFS instead, but I couldn't come up with a formula that produced results that made sense, and many times I found it assigned very high weights to images that visually struck me as among the worst. After watching these videos, everything I've experienced makes sense!!! I will definitely give this script a try if and when the weather allows me to image again.
@kitward3632 (2 years ago)
Great, informative video - thank you so much for taking the time to do this. I will definitely be giving this a go; it looks like a game changer.
@paulnaquet (3 years ago)
Thanks a lot Adam for the great explanations! Cheers!
@yosmith1 (3 years ago)
Already starting to do comparisons. Thanks for sharing.
@adamgray1991 (3 years ago)
Thanks for the video. It would be great, though, to see the effect this has on the actual final processed image.
@AdamBlock (3 years ago)
I think I do demonstrate this sufficiently. Normalization happens early on. The effect would be better S/N and fewer issues with rejection and gradients. All of this adds up to an image with better contrast (given the same steps). But... what you are asking for is something akin to: "Show me a final image with and without calibrating the data... spend 3 hours making the image that you know will not turn out well, and use the exact same steps to make the properly-processed image." That is quite an ask.
@alexandervarakin9478 (3 years ago)
Thank you Adam for all the effort in creating your deeply informative videos. I was on the fence about using this script, but I am convinced now. I wonder if this script can be enhanced to do gradient reduction. Also, it would be very nice to integrate it into WBPP.
@AdamBlock (3 years ago)
Normalization and gradient removal are different things. If it is possible to make NSG or something like it computationally feasible... putting it into ImageIntegration is what needs to be done. Then WBPP will use ImageIntegration as normal and you get what you want. I predict something like this will happen in the future... just don't know how long...
@waynescave3844 (3 years ago)
Awesome set of videos. Top-drawer stuff as usual from you Adam, thanks! LP isn't a huge deal for me, but for those times when I still want to shoot broadband with, say, a 20% moon setting throughout the night and changing the sky brightness values... IDEAL! Looking forward to trying this new script! I have been promoting your help and tutorials to everyone! Please keep up the brilliant work! 👍👏
@AdamBlock (3 years ago)
Thanks Wayne. Just to highlight... a by-product of normalization - by cleverly matching the gradient of a reference - is a simplification. However... in my mind the big benefits are the improved rejection and weighting.
@waynescave3844 (3 years ago)
@@AdamBlock Absolutely - weighting by the actual quality of the signal, and not just (almost) by exposure, is a game changer for sure! 🙂
@CuivTheLazyGeek (3 years ago)
Thanks for this - I am like you, I really don't like the way PI does the weighting of frames (including the WBPP methods). It almost assumes no light-pollution gradients or clouds - and worse, it will rank images with more LP or more clouds higher, because the SNR is computed as higher! I've personally been using a weighting (in SubframeSelector) based on the number of stars detected and nothing else. Clouds? Fewer stars detected. Light pollution? Fewer stars detected. Poor tracking? Fewer stars detected. Poor focus? Fewer stars detected. Imaging through an obstacle? Fewer stars detected. A single indicator for all my needs! Plus it lets me be very lazy - I don't have to blink through my millions of frames. It looks like, indirectly, this script performs this (though in a more advanced manner), and as a side effect deals with gradients (to be able to compare apples to apples). I will be interested to see whether it performs well for me.
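To make that star-count approach concrete, a minimal SubframeSelector setup along these lines could use the Stars property by itself as the weight, with an optional approval expression to cull obvious failures (the numeric thresholds below are placeholders, not recommendations):

    Weighting expression:  Stars
    Approval expression:   Stars > 100 && FWHM < 4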
@AdamBlock (3 years ago)
Unfortunately star count has some weaknesses. One of the most common is that for undersampled data (which are often wide-field images taken under varying sky conditions), hot pixels and other things can mess up the star counts and give poor results. This is particularly true for undersampled data with *short* exposures - which is what most people shoot, especially with CMOS sensors. And, as you mention... one of the BEST things about this is that you can ACTUALLY SEE THE NORMALIZED IMAGES!! This in and of itself is pretty cool.
@CuivTheLazyGeek (3 years ago)
@@AdamBlock True, although I've yet to see the hot-pixel filter fail me on that - I always check the star numbers to see that I get a nice curve across each night (more stars near the meridian, etc.).
@AdamBlock (3 years ago)
@@CuivTheLazyGeek There are so many users out there with "warm pixels" that CosmeticCorrection just doesn't take care of... really noisy stuff.
@cheloniachris (3 years ago)
@@AdamBlock Maybe I am wrong, but hot pixels should be constant across all your frames, so why not use star count as an indicator of quality?
@AdamBlock (3 years ago)
@@cheloniachris Not true. You are assuming data taken at equal exposure times with the same calibration files. (Hot pixels change through time; the population today is different than in the past, for example.) Different methods of CosmeticCorrection will result in different numbers of hot pixels being removed per image. There are many variables that affect the number of hot pixels that are detected, and this trips up "star count". This is particularly true when people use star count on short exposures, where the hot-pixel population may equal or exceed the number of "good" (detected) stars in the frame.
@michaellewis5921 (3 years ago)
Adam, great presentation, much appreciated. A question: it seems like one other effect of using this script is that it breaks the ability to use Drizzle Integration later, as I am unable to add my drizzle files to the integration tool when using the _nsg files? Mike L.
@AdamBlock (3 years ago)
At the moment this is generally true. In the future the developer intends to incorporate Drizzle into the process- but it requires programming the script to work with PixInsight in a more integrated way. So this is a future thing.
@z28rgr8 (3 years ago)
G'day Adam. I enjoyed the three-part video explaining your views on the NSG script. I have been playing with it for a month or so now, because SubframeSelector gives some questionable results and this script promises a better normalization result than Local Normalization without the need to create perfect reference frames. Now that SubframeSelector has been updated, do you feel it is capable of these kinds of weightings? I have to admit that the results of this script have been very good. Cheers mate!
@AdamBlock (3 years ago)
No, not yet. You will soon see another update to PixInsight concerning weighting in particular. This chapter is not yet closed....
@Unavidadevideos (3 years ago)
Great videos Adam! What happens if you set as the reference image for this script an integrated image with a good DBE process applied? Does this script replace Local Normalization?
@AdamBlock (3 years ago)
Normalization is a matching process. You will not get the proper scaling factors and other things if you "adjust" the images (like using DBE beforehand). It is *supposed* to match the gradient of your reference image. So NSG's job isn't to remove gradients - it just makes images equal to one another for better rejection and weighting. Applying DBE beforehand messes this up. Concerning Local Normalization - my opinion is yes, basically. LN can be used in some extreme problematic cases - but 90% of the time NSG -> ImageIntegration and then DBE is far better than LN through ImageIntegration.
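As a rough sketch of what that matching means (a paraphrase of the videos, not the script's exact internals): for each target frame T, the normalization estimates a photometric scale factor s from star flux measurements against the reference R, plus a smooth additive offset surface g(x, y) from background samples, so that in the background

    s * T(x, y) + g(x, y) ≈ R(x, y)

The frame's weight then follows from the measured signal scale and noise (roughly, how much real signal per unit of noise the frame contributes), which is why running DBE beforehand - altering both the background and the relative levels - spoils the fit.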
@Unavidadevideos (3 years ago)
@@AdamBlock Thanks Adam! I will add this procedure to my workflow.
@desmcmorrow2978 (3 years ago)
Many thanks for the very informative videos on NSG, which looks to be a game changer. I have a question on how best to combine different exposure lengths. Currently I use WBPP to calibrate and register, and then use ImageIntegration in the normal way to combine, with noise evaluation for the weights. I could use the method described in your videos for each exposure length separately, and then combine the masters in PixelMath (weighting by total integration time for each exposure), or even HDR combine them, but this seems less optimal for pixel rejection. Advice appreciated, and apologies if any of the foregoing is nonsense. Thanks in advance, Des
@AdamBlock (3 years ago)
When you have different exposure lengths there are two things people want. One is to create an HDR composite image. This requires an individual combined image for each exposure time. The other situation is combining all of the images of different exposures to create a single combined result. I think THIS is what you are asking about. In this case you load all of the images into NSG to be normalized and then combine the output files in ImageIntegration. The normalization process will properly assign weights to each of the exposure lengths.
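For comparison, the PixelMath combine described in the question would look something like the expression below, where master300 and master600 are hypothetical view identifiers for the two stacks and 7200 and 18000 are their total integration times in seconds; as the reply notes, though, normalizing everything in NSG and letting ImageIntegration weight the frames avoids this manual step and keeps proper pixel rejection:

    (master300*7200 + master600*18000) / (7200 + 18000)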
@desmcmorrow2978 (3 years ago)
@@AdamBlock Thanks, Adam. That's very clear. You're right - loading all images into NSG is probably what I want. Good to know that it handles the weighting correctly. For some reason my version of NSG does not load ImageIntegration. Is there a flag to set somewhere? I have the latest version, but maybe you have a development one.
@AdamBlock (3 years ago)
@@desmcmorrow2978 *smile* My version of the script is still a little ahead of everyone... the new update will make everyone current with me. Just a few more days I think.
@brandlc (3 years ago)
Another great tutorial - thanks Adam. Love the way ImageIntegration opens automatically on NSG exit and preselects the frames, plus makes a suggestion on rejection. One question on reference frame selection in NSG - it can be quite challenging to decide with Blink, so presumably altitude is an OK proxy. What about using the reference frame chosen by WBPP for registration? It would also be interesting to see the correlation between the WBPP weights and the NSG NWEIGHTs, if any.
@AdamBlock (3 years ago)
WBPP relies on a version of SNRWeight. Thus, it will suffer from the same disease... NSG is now the gold standard. The choice of reference isn't super critical... normalization matches frames - but choosing a reference with the smallest gradient might help upstream. I suspect the WBPP choice of registration reference gives stars more importance... so I would not expect it to choose the frame with the simplest gradient.
@alecalden216 (2 years ago)
Hi Adam, great video and thanks for all the work you do. I have a further question if I may. I started to use it on some old data from 2 different cameras, obviously with different pixel sizes, but on the same scope. Any advice on how to use NSG and add all the output subs from both cameras into the same image? I simply ran NSG for each camera's batch of subs separately, and used PixelMath to integrate them. However, I'm not sure this gives a correct answer. Feedback would be appreciated. Very many thanks
@AdamBlock (2 years ago)
Yeah... the different pixel sizes matter quite a bit. You need to compensate for this difference in plate scale: the interpolation during registration will match everything geometrically... but the signal per pixel between the data sets is actually different (which affects the relative weights). This is true for any weighting scheme.
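A quick sketch of the size of that effect (same telescope and exposure assumed): the sky and extended-object signal landing on one pixel scales with the pixel's area on the sky, i.e. with the square of the plate scale p in arcsec per pixel:

    S1 / S2 ≈ (p1 / p2)^2

So a camera sampling at 1.0"/px collects roughly 4x the signal per pixel of one sampling at 0.5"/px on the same scope, and any per-pixel weighting scheme has to account for that difference.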
@alecalden216 (2 years ago)
@@AdamBlock Thanks Adam. What is the process for getting the sets of data from the 2 cameras together after they have been run through NSG, or isn't it possible? Thanks again
@Calzune (3 years ago)
Amazing! Could this be implemented in WBPP? If not, in what order should I do the steps? 1. Blink (to remove very bad frames?), and then?
@AdamBlock (3 years ago)
Run WBPP for pre-processing and have it output the registered frames (don't do ImageIntegration). Then you take those frames into NSG for normalization... and then combine with ImageIntegration.
@nicolast3499 (3 years ago)
Adam, would you then recommend not using the weighting in WBPP and preferring the weighting from this script?
@alexgti9345 (3 years ago)
Once again, thanks a lot for this comprehensive outlook on this new function, Adam! Still, I was wondering about drizzle integration... is it possible to do it after exiting NSG and opening ImageIntegration? Do we have to load the .xdrz files that are created by WBPP before launching the final integration? Thanks for your feedback!
@barrytrudgian4514 (3 years ago)
Thanks for the introduction to the script. I plan to use SFS to weed out frames with eccentric stars before moving on to NSG, unless NSG's flux calculations already do that. Do you think my two-pronged approach is worthwhile?
@AdamBlock (3 years ago)
Sure - you can add additional constraints that measure the quality of an image. Culling images that are poorly tracked is fine. The flux measurement would account for some - but not all - of the smeared stars (it depends how much of the smeared light is in the aperture).
@barrytrudgian4514 (3 years ago)
@@AdamBlock Thank you. I will now set to work and apply the new approach to some NGC 6888 data.
@flyingairedale (3 years ago)
I often struggle with individual color channels having different gradients, resulting in an uneven background color across the frame. So my question is: should I choose reference frames in each color channel with similar/the same gradient geometries?
@AdamBlock (3 years ago)
To be clear for anyone who reads this answer: in NSG the normalization reference is always chosen within a particular set of filtered images (e.g. all RED images). Your question is whether to choose references in each channel that have a similar gradient between channels - so that later, when the images are used to make a color image, it is easier to remove the color gradients. I see where you are going with this... but the reality is that PixInsight operates on the color channels individually. So when you DBE, it really doesn't matter much if the gradient of one color is different than another in its channel. Your strategy could be helpful in generating good samples that accommodate all channels - but I see this as a secondary benefit. So... your strategy would not hurt anything at all! I am not convinced that in "theory" it is very helpful... but I think you can do the experiment!
@normanhey8016 (3 years ago)
Very impressive results, and clear demonstration and explanation--thank you, Adam. A very naive question: is this going to work for OSC data, or do you have to extract channels and run it on each? I suppose I could just try it...(smile)
@AdamBlock (3 years ago)
I believe it will do its thing on each channel for you. I have not tried... but pretty certain.
@normanhey8016 (3 years ago)
@@AdamBlock John Murphy says it works on OSC data in PixInsight Forum--just reading his post there now.
@AdamBlock (3 years ago)
@@normanhey8016 Yes. Additionally the scaling factor is taken from the most appropriate channel. So everything is good.
@gclaytony (3 years ago)
Should drizzle integration (drizzle data generated by the WBPP script) be done before or after the NSG script?
@AdamBlock (3 years ago)
At the moment you need to choose one or the other if you want to do this correctly. Drizzle will be incorporated later, once NSG or some form of it is part of ImageIntegration. So... the answer to your question is neither at the moment. It is either/or... not both. NSG requires registered images. Drizzle requires unregistered images plus the transformation information to make a drizzled result.
@gclaytony (3 years ago)
@@AdamBlock Thanks. I'm a subscriber to your website/Fundamentals/Horizon but did not see (or overlooked) a way to ask a question on this video there. John has been very responsive on the PixInsight forum and this AM provided me a link to a discussion that had covered this topic as well.
@AdamBlock (3 years ago)
@@gclaytony When you log into my site, you can ask questions on the Forum (link at the top right)
@nightskyimaging (3 years ago)
My widefield setup creates undersampled images. Is there a way to apply NSG with drizzled images?
@AdamBlock (3 years ago)
Since we begin with registered images in the normal way... no, not at the moment. This will likely become possible in the future with further improvements to the code... likely when, I predict, it becomes part of ImageIntegration in some form.
@nightskyimaging (3 years ago)
@@AdamBlock Let's predict!
@lindathomas-fowler6486 (3 years ago)
Adam, would it make sense to use this for the scaling and use WBPP's or SubframeSelector's weighting? Or is this photometry-based approach superior to that?
@AdamBlock (3 years ago)
SFS relies on SNRWeight, which is similar to the noise evaluation calculation. My approach is to use WBPP to pre-process and register. Then I use this script to properly calculate the weights. Unlike some of the other methods (star count, FWHM, SNRWeight...), it seems really hard to beat this method. It is very early in the application of this script - I doubt many have done a deep dive on it - so perhaps there is a gotcha I am unaware of, but I don't see it right now. Did you find this explanation valuable?
@CuivTheLazyGeek (3 years ago)
@@AdamBlock We can customize the formula used by SFS to not use SNRWeight (I absolutely abhor SNRWeight and can't understand why it is even used at all), right? At first glance, though, this script seems to beat everything else for weighting. Although I wonder how well it deals with things like poor tracking (it should deal with it well; star flux per pixel will be lower). Can't wait to test this out!
@lindathomas-fowler6486 (3 years ago)
@@AdamBlock Thanks for the response! I was using SNRWeight along with FWHM, Stars and Eccentricity to make a weighting expression in SFS; however, I had previously weeded out subs with artificially high SNR (typically finding them by looking for subs with low star counts). This does seem worth exploring in more depth. Thanks for your always helpful videos!
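For readers curious what such a combined SubframeSelector weighting expression looks like, here is a simple sketch of the usual pattern: each metric is rescaled to roughly 0-1 using that session's measured min/max (the numbers below are placeholders you would read off your own measurements table), and the terms are then mixed with whatever emphasis you prefer:

    30*(1 - (FWHM - 1.8)/(3.5 - 1.8))
    + 30*(1 - (Eccentricity - 0.40)/(0.65 - 0.40))
    + 40*(Stars - 250)/(900 - 250)

As the thread suggests, NSG's NWEIGHT replaces this kind of hand-tuned formula for the signal part, though culling on FWHM or Eccentricity beforehand can still be useful.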
@HerpMcDerpington (3 years ago)
Does this script make SubframeSelector obsolete now? Since I'm guessing ImageIntegration will use the weights generated by NSG rather than SS.
@AdamBlock (3 years ago)
You can tell ImageIntegration to use either set of weights. My opinion, however, is that the weighting done based on signal strength and noise (NSG) is far better than the other metrics SS uses.
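In practice that choice shows up in ImageIntegration's Weights setting; to use the script's weights you point the process at the keyword NSG writes (NWEIGHT, mentioned earlier in the thread). A sketch of the relevant settings, not a full walkthrough:

    Image Integration
      Weights:         FITS keyword
      Weight keyword:  NWEIGHT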
@TheAlros100 (3 years ago)
@@AdamBlock Great video and explanation; this was my question too. I have been rating subframes based on eccentricity and FWHM, so I guess this method won't rate on those criteria. I wonder how slightly defocused stars or eccentricity affect the new method's weighting? Also, too bad it doesn't allow drizzle integration.
@AdamBlock (3 years ago)
@@TheAlros100 Drizzle is coming. Out of focus stars will have a smaller flux measure (since less of the light is in the aperture) and so will receive smaller weights.
@YuntaoLu (3 years ago)
Thanks for the video. I am not sure how this works with drizzle. Drizzle uses the files generated by image registration. Is it possible to run integration with the NSG output together with the drizzle files from registration?
@AdamBlock (3 years ago)
Right now you need to provide the script regularly registered data, so there isn't an easy route for drizzle. There will likely be a future update that will make this possible. Even more likely (or hopefully)... Juan will incorporate this script into ImageIntegration in some way that isn't a computational burden... and then drizzle suddenly becomes possible.
@johnadastra1754 (3 years ago)
Hmmm.... I recently did my first successful run with WBPP 2.1.1 and got a nice OSC light master using lights and flats from multiple sessions, which was really nice. Now, to do this normalization process, I could use the previously created registered files. Must they be normalized by session here, or can all files be normalized together? And if using monochrome cameras, must I normalize each filter channel separately, or can all channels and sessions be done together? Sorry if the answer is directly obvious.
@AdamBlock (3 years ago)
You are normalizing the same set of data you are putting into (combining in) ImageIntegration. (That is the simplest statement!) So you normalize a single color (taking a reference from within that set). You can normalize across any number of nights/sessions, since you will (presumably) be combining all data in that filter across all nights together.
@johnadastra1754 (3 years ago)
That scenario works for me. Thanks Adam!
@dankuchta5170 (3 years ago)
How can this be used along with drizzle? My registered images have associated drizzle files, but the output from NSG does not. I tried to add the drizzle files from the Registered folder in the ImageIntegration process after NSG, but it would not allow those to be associated with the NSG files.
@AdamBlock (3 years ago)
At the moment it can't (at least not in the proper way). Drizzling works on files that are *not* registered. At the moment NSG requires registered files. The interpolation that registration does is what messes up the proper usage of Drizzle. The developers are going to update PixInsight to make drizzle possible with NSG.
@ibnulhussaini3791 (2 years ago)
Hi Adam. Would you still recommend using this method, or does the PSF Signal Weight change things? Also, when I was following the Light Vortex tutorials, I was generating drizzle data when registering (my pixel scale is 0.4") and then generating local normalization files before ImageIntegration. So if I were to use NSG, I'd obviously not need local normalization, but would it be okay if I can't drizzle, considering my pixel scale? Edit: Do you think I should be binning my images to avoid the oversampling?
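For context on that last question, a quick sketch of the arithmetic, assuming the usual plate-scale formula (the seeing figure is just an illustrative value):

    scale ["/px] ≈ 206.265 × pixel size [µm] / focal length [mm]

2x2 binning doubles the effective pixel size, so 0.4"/px becomes 0.8"/px; against, say, 2.5" seeing that is still about 3 pixels per FWHM, i.e. adequately sampled, which is why binning (or integer downsampling after calibration) is a common answer to oversampling when drizzle is not part of the workflow.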