Hey Sascha! Very interesting video mate, well done! There are so many facets to this particular question that I take my hat off to anyone tackling the subject haha! :-D Thanks for the mention by the way! I'm glad my old vid was some use :-) Clear skies!
@viewintospace · 1 year ago
And thanks for inspiring this video with yours! I think it's always great how we can build on each other's work and so push the thought process further one video at a time...
@Astro_Px · 5 months ago
Hi guys - yes, two super guys. For @lukomatico: I just saw one video of yours (LRGB galaxy processing) where you combine all channels and then stretch, whereas Sascha stretches first and then does the combination. Either way, the reason is unclear to me - and when I did the LRGB combination before stretching, adding the lum actually gave me less satisfactory results. Cheers, let's make America a Bortle 1 again - with you leading it 🙂
@OigresZevahc · 1 year ago
Thank you very much for all you do for us, Sasha!
@viewintospace · 1 year ago
My pleasure!
@bobc3144L · 1 year ago
Outstanding explanation! Thank you.
@davewilton6021 · 1 year ago
Synthetic luminance can be very useful for SHO images and RGB images where you don't have real luminance subframes. After stretching, you create the synthetic lum and apply all your sharpening to that. Then you apply convolution (not deconvolution) to the color image. This blurs out the color noise. Then you recombine the images, with the sharpened synthetic lum as the new luminance channel. This sharpens the structure without sharpening the color noise. It doesn't increase your integration time, but it results in a better image without adding much complexity to the workflow.
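The recombination step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not anyone's exact pipeline: `sharpen` and `blur` are hypothetical stand-ins for whatever sharpening and convolution tools you actually use, and the plain channel mean is just one simple way to build a synthetic luminance.

```python
import numpy as np

def synthetic_lum_recombine(rgb, sharpen, blur):
    """Sharpen a synthetic luminance, blur the color, recombine.

    rgb     : H x W x 3 float array in [0, 1] (already stretched)
    sharpen : callable applied to the 2-D synthetic luminance
    blur    : callable applied to the color image (convolution)
    """
    # 1. Build a synthetic luminance from the color channels.
    synth_lum = rgb.mean(axis=2)
    # 2. Apply all sharpening to the luminance only.
    sharp_lum = sharpen(synth_lum)
    # 3. Blur the color image to suppress chroma noise.
    soft_rgb = blur(rgb)
    # 4. Rescale each pixel so the blurred color image takes on the
    #    sharpened luminance while keeping its (smooth) color ratios.
    old_lum = np.maximum(soft_rgb.mean(axis=2), 1e-6)
    return np.clip(soft_rgb * (sharp_lum / old_lum)[..., None], 0.0, 1.0)
```

With identity functions for `sharpen` and `blur` this returns the input unchanged, which is a handy sanity check before plugging in real tools.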
@viewintospace · 1 year ago
Great input Dave! You describe nicely what is preached by those promoting synthetic LUM. The issue is, the only tangible advantage you can claim for doing this is blurring out the color noise. And if THAT is really the only tangible effect synth LUM has (and I don't know of any other), then I know a MUCH faster way of achieving it.
@charliemiller3884 · 1 year ago
After 10 years of shooting mono LRGB-SHO frames, I have switched to using only an OSC camera plus a filter wheel with UV/IR-cut and Optolong L-eXtreme filters. This provides excellent RGB imaging and narrowband imaging with less imaging time and less processing time.
@BruceMallett · 1 year ago
Somewhere around 3:30 you say that luminance provides the detail and contrast. I've read this claim elsewhere, along with the idea that it's sufficient to shoot RGB at a lower resolution (say bin 2x2) as long as you keep the luminance at full resolution (bin 1x1). Do you do this? It would save a lot of session time, would it not?
@viewintospace · 1 year ago
I don't do this, but yes, it makes sense to me and should work fine. I've also heard of people who shoot the lum with a mono cam and the RGB with an OSC cam - also a way to save time.
@paulbenoit249 · 1 year ago
Great video... this is why I'm planning to keep using my color camera to capture the color, while the equivalent mono camera is on its way to shoot luminance only (or Ha in some cases) - to try to get the same results as shooting fully mono LRGB, but with the benefit of needing no filters, filter wheel, ...
@viewintospace · 1 year ago
Great strategy!
@pcboreland1 · 1 year ago
You're on the right track with IR, I think. Perhaps a blend of the two. This is what a number of people doing lucky DSO imaging have been doing for some time.
@starpartyguy5605 · 1 year ago
For many years, going back to the early 2000s, I shot long lum and short color using very small (compared to today) cameras: ST7, ST8, STF-8300. This year I moved to the QHY268M with 50 mm filters. I'm using a C9.25 on a G11 with Gemini 2. I switched from MaxIm to NINA, along with all the extra stuff to learn, including PixInsight. So learning curve, culture shock... I got my Optec Lepus f/6.3 focal reducer configured with a special spacer that Optec made for me. Now I shoot 3-minute subs and no luminance. Images seem OK so far. But wow, so much new stuff to learn!
@MrPedalpaddle · 1 year ago
The argument for synthetic luminance with narrowband would come from those who stretch and colorize each channel before combining - e.g., Steve @EnteringintoSpace would then apply convolution to the colored channels to remove noise, then restore the structure lost through the convolution with a synthetic luminance. Not sure offhand if @paulyman also does this.
@PaulymanAstro · 1 year ago
I do, exactly as you described. I do think carefully about how I do it and whether I do it, though, as Sascha says. Sometimes I use the Ha data; sometimes I create a synthetic lum by integrating multiple channels if I feel they add structure. RGB stretching is to me 90% about maintaining good colour contrast; synthetic luminance is all about maximising contrast and sharpness, as well as highlighting interesting structures.
@MrPedalpaddle · 1 year ago
Thanks very much for the comment. I'm finding your tutorials very helpful. I hope you can update your Foraxx script to play nicely with the new PI version. Cheers!
@darkrangersinc · 1 year ago
Great video and explanation! I've never been a huge fan of synthetic L, or of Ha doubling as a luminance layer - I would rather just add more actual data. But I think you did a nice job highlighting when it can make sense to use a luminance layer.
@dbakker7219 · 1 year ago
Hi Sascha, very good explanations! Thank you. I experimented with IR too, and another reason for less detail in your Andromeda is that IR has a longer wavelength, and thus always a lower resolution than visible light in our amateur scopes. Also, using a refractor for IR does not work well (I think you used your FRA400?) - the glass messes with your IR signal and diminishes its strength. I always use a reflector for IR imaging: no glass, no glass correctors in between. I get more small galaxies/clusters, but the resolution is less.
@viewintospace · 1 year ago
That is really helpful - thanks!!!!
@davecurtis8833 · 1 year ago
Great video. Pretty much matches my experience. For a very bright nebula with bright stars like M42, would you use Lum as well as RGB?
@viewintospace · 1 year ago
If you shoot RGB and not Narrowband, then Lum should be used.
@Astro_Px · 4 months ago
Hello Sascha - I forgot a couple of points and rewatched the video... this time I took notes :-) Anyway, regarding your IR experiment at the end: I only shoot with ZWO cameras, especially the 2600 and 6200. I purchased a Baader IR-pass filter - I think it was the 685, meaning it "blocks" everything (or most) below 685 nm. Then I looked up the quantum efficiency specs for both the 2600 Mono and Color: for the Mono I was shocked that transmission falls below 10% after 700 nm, and for the Color it falls below 60% at 700 nm and keeps falling quite steeply at higher wavelengths - so that is a factor as well, I suppose.
@viewintospace · 4 months ago
I agree - definitely not optimal....
@pcboreland1 · 1 year ago
As a British English speaker: it's loo-minance. English is so messed up! Great video, awesome!
@Astro_Px · 5 months ago
Hi Sascha, another awesome, well-presented video on a very important topic. All new and critical information - I never knew those details after doing this for 9 years... Lesson learned: take advantage and study it, as it takes too long (or maybe forever) to learn on your own. Follow-up please: you mentioned stretching before recombining, let's say in RGB, but I don't know the rationale behind it... I've seen some folks do the combination before stretching - I've done that before, because I didn't know any better, and the results were disappointing to the degree that the picture looked better without luminance. So again: why stretch first, and how best to stretch - (a) combine RGB in linear, stretch, stretch the lum, then add it non-linear, (b) combine RGB+L in linear, then stretch, or (c) stretch each of R, G, B and L separately, then combine non-linear? Thanks
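On why the order matters at all: any real stretch is non-linear, so combining before stretching and stretching before combining generally give different pixel values (and therefore different color balance). A toy Python example with a square-root "stretch" (a hypothetical stand-in - real tools use histogram or arcsinh stretches, but any non-linear curve behaves the same way):

```python
import math

def stretch(x):
    # Toy non-linear stretch; stands in for a real histogram
    # or arcsinh stretch.
    return math.sqrt(x)

r, g, b = 0.04, 0.16, 0.36   # made-up linear pixel values

# Combine first (simple mean), then stretch the result:
combine_then_stretch = stretch((r + g + b) / 3)

# Stretch each channel first, then combine:
stretch_then_combine = (stretch(r) + stretch(g) + stretch(b)) / 3

print(round(combine_then_stretch, 3))  # 0.432
print(round(stretch_then_combine, 3))  # 0.4
```

Which order is *better* is the judgment call discussed in the video; the point here is only that the two pipelines are not equivalent, so the choice genuinely changes the result.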
@larryfine4719 · 11 months ago
Ah, lots of things make sense here. While deriving luminance from RGB is not technically a bad idea, the reduced imaging time of LRGB definitely makes it the more efficient option :-)
@Phenolisothiocyanate · 10 months ago
One thing that confuses me about luminance is: If the color data is good enough to assign a value to a pixel then why do you need lum? Conversely, if color data isn't good enough then won't lum just bring out noisy colors?
@viewintospace · 10 months ago
There is no such thing as color data - it is simply light that passes a filter. What matters is the signal-to-noise ratio: where there is light at all (and how much), and where it is dark. Once I know that, I only need to know how to color it, and that is easier. In very dark areas, even if a color signal is there, it will still be black, so there's no real issue.
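The signal-to-noise point can be illustrated with a tiny Monte-Carlo sketch (all numbers here are invented for illustration): averaging N subs shrinks the noise by roughly √N, which is why "where is light, and how much" is mostly a question of integration time.

```python
import numpy as np

rng = np.random.default_rng(42)

def snr_of_stack(n_subs, signal=10.0, noise_sigma=5.0, n_pix=100_000):
    """Average n_subs noisy frames of a flat signal and return the
    measured SNR (mean / std) of the stacked result."""
    frames = signal + rng.normal(0.0, noise_sigma, size=(n_subs, n_pix))
    stack = frames.mean(axis=0)
    return stack.mean() / stack.std()

# One sub gives SNR ~ 2; sixteen subs give roughly 4x that (~8),
# matching the sqrt(N) rule of thumb.
```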