Comments
@mikehardy8247 5 hours ago
The band Hawkwind had it right in "Master of the Universe": "I am the centre of this universe / The wind of time is blowing through me / And it's all moving relative to me."
@refetastro 11 hours ago
Good video, but to me the stars don't look good on the Boole Nebula. PI should do a better job with blur ext.
@BurgerOosthuizen 13 hours ago
Touché! Great reply! I like your comment about data and theory, a bit like how data has debunked the theory of evolution. 😊
@petergrant1348 1 day ago
I am really getting into your methods. Thank you for your time and effort. Could you please explain why you take 1-minute exposures as opposed to, say, 5 minutes, and also how often you dither with the 1-minute subs?
@TevisC 2 days ago
Very helpful, thanks!
@Hilmi12 2 days ago
I really would have thought this was obvious to people: with LRGB you capture all of the light, or at least most of it. I suspect that even in mild light pollution it's superior to narrowband. And then there are people who love to torture themselves with ultra-narrow bandpass filters, and then they image at f/7 😮
@robvandenwijngaart1970 2 days ago
Hi, Merry Christmas and a happy New Year with lots of clear skies for all of us. Thanks for making all the videos; I enjoy them all. Hopefully you keep making interesting videos for a long time! Thanks again! Saludos from Spain, Rob.
@TevisC 3 days ago
Is just under 3 hrs enough data? I typically go for 5 hrs of each of R, G, and B (I don't bother with L). I find the longer stacks bring out the detail of the overlapping stars. With my OSC 585MC I've done a stack of a thousand 1 s subs and the results were promising, but 17 min of data was not enough. ... I'd like to get 10+ OSC hours and pare down to the best 20%.
@SKYST0RY 2 days ago
The more integration you have, the fainter the detail you can pull out. More time on target would have been nice, but the full moon was rising in the east, so I had to shoot targets low in the west. I shot 3 targets that night, each for about 4 hours, keeping to the opposite side of the sky from the moon.
@aw7425 3 days ago
Thank you, and Merry Christmas
@Peter-mg2ex 3 days ago
Thank you so much for this. I have recently started applying your methods. I have one question though... I shoot OSC. So, after splitting the channels, can I also extract the luminance channel and continue working with your methods, please?
@SKYST0RY 2 days ago
With an OSC, you do not get an L channel. Use the same procedure, just with RGB.
@BurgerOosthuizen 3 days ago
How good of you to always take the time to answer my incessant questions. Thank you.
@chrislee8886 3 days ago
I normally like your strategies, but here I feel this is overly complex. For globulars I also use 60 s captures with my ASI533 (I only use OSC), but when I process, I stretch three versions: one that just stretches the core stars (using arcsinh), one that stretches the main globular (the core is blown out), and one that stretches out the faint outer stars of the halo (the main cluster is blown out). I then bring all three into Affinity and merge them using the eraser brush. For me it gives great structure and an even balance, whereas I felt your core looked "dulled" compared to the surrounding stars?
@SKYST0RY 2 days ago
That's also a valid strategy. I used that method to make the HDR image of the Great Orion Nebula, covered in the link below. Some people don't like the multiple-stretch and paint-to-merge method, though, so I wanted to provide an alternative. There are several ways to do this DSO, but luminosity masking of unconventionally combined linear masters gave the best results here. No strategy fits all targets, though; with another DSO (even another star cluster), I may well have taken a different approach. kzbin.info/www/bejne/rmqqaKx4iaele6M
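As a rough illustration of what both approaches boil down to (blending differently stretched renditions of the same linear master, weighted by brightness), here is a minimal numpy sketch. It is not the exact workflow from the video or from the comment above; the arcsinh strengths, mask blur, and weighting are arbitrary placeholder values.

# Minimal sketch: blend three stretches of one linear master using
# brightness-based (luminosity) masks. Placeholder values throughout.
import numpy as np
from scipy.ndimage import gaussian_filter

def arcsinh_stretch(img, strength):
    """Simple arcsinh stretch of a linear image scaled to [0, 1]."""
    return np.arcsinh(strength * img) / np.arcsinh(strength)

def blend_three_stretches(linear, s_core=30, s_mid=300, s_halo=3000, blur=8):
    core = arcsinh_stretch(linear, s_core)   # gentle: keeps the core from blowing out
    mid = arcsinh_stretch(linear, s_mid)     # medium: main body of the cluster
    halo = arcsinh_stretch(linear, s_halo)   # hard: pulls up the faint outer halo
    lum = gaussian_filter(mid, blur)         # smoothed luminosity mask
    w_core = lum**2                          # favor the gentle stretch where it is bright
    w_halo = (1 - lum)**2                    # favor the hard stretch where it is faint
    w_mid = 1 - w_core - w_halo              # the medium stretch fills in the rest
    return np.clip(w_core*core + w_mid*mid + w_halo*halo, 0, 1)

# Example with synthetic data standing in for a linear cluster master
linear = np.clip(np.random.default_rng(1).gamma(0.3, 0.02, (256, 256)), 0, 1)
result = blend_three_stretches(linear)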
@captainfruitbatify 3 days ago
That linear histogram in NINA freaks people out the first time they see it, particularly if they've been used to DSLR histograms and the "keep the peak in the lower third but don't touch the left edge" rule. I know it did me when I first saw it. Your advice is good: look at the min and max values to make sure you're not clipping the darks at all and have minimal or no clipping on the light side.
@SKYST0RY 2 days ago
Yep, as long as the photosites are getting photons, it'll work out in the stack. That's my experience, anyway. I do wish NINA provided a way to zoom in on the histogram and see graphically what's happening in the light curve, but the Min/Max values mostly tell you what you need to know.
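For anyone who wants to make the same Min/Max check outside NINA, here is a minimal sketch using astropy. The filename is hypothetical, and the saturation value is an assumption; set it to your camera's actual maximum ADU.

# Minimal clipping check on a single sub, independent of NINA.
# "light_0001.fits" is a hypothetical filename; adjust the saturation value
# to your camera's real maximum ADU (many 16-bit-output cameras use 65535).
import numpy as np
from astropy.io import fits

data = np.asarray(fits.getdata("light_0001.fits"))
saturation = 65535

print("min:", data.min(), " max:", data.max())
print("black-clipped pixels:", int(np.count_nonzero(data <= 0)))
print("white-clipped pixels:", int(np.count_nonzero(data >= saturation)))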
@BurgerOosthuizen 3 days ago
Thank you very much for explaining the histogram. I appreciate your approach to sub times; many APs get excited about long sub exposures, but as you have stated before, 60 s is the sweet spot.
@SKYST0RY 3 days ago
You're very welcome. My general feeling is that as short as you can get away with is the sweet spot, lol. I go with 60 s to avoid losing time to writes between the camera and the mini PC, which isn't super fast.
@d.fresh.750 3 days ago
This is quite an interesting take on star cluster processing! I have some OSC data on M3 that I've never fully processed, so maybe I'll give these techniques a shot with that data! Clear skies, Merry Christmas, & Happy Holidays!
@SKYST0RY 3 days ago
Let me know how it goes with OSC data, please. I haven't tried this with data from an OSC yet.
@billblanshan3021 3 days ago
You should NOT do LRGB combination on linear data. The L is lightness, and lightness should only be used in a nonlinear state, because the conversion from RGB to LAB and back can actually hurt your data if the backgrounds don't match perfectly. Even the creator of PixInsight (Juan) says never to do this, and I really wish he would remove the LRGB Combination process from PixInsight and just use standard channel combination instead, and then create a process for adding lightness only, with a disclaimer saying to use it on nonlinear data only. If you dig deep into the math that converts between these two color modes, you will see why any type of LAB manipulation should not be done in a linear state.
@SKYST0RY 3 days ago
Unfortunately, following PI's "proper" method produces many artifacts that are much trickier to remove. I've posted an example of both methods on Astrobin: the recommended method vs. my unconventional approach. The proper method also yields some odd saturation effects, like saturation belts in the image. At least those can be removed more easily using PL8 to extract the relevant color ranges. I stopped worrying about PI's recommendations long ago, as I've found they often yield less-than-ideal results. www.astrobin.com/xeh0ik/B/
@billblanshan3021 3 days ago
@SKYST0RY I really don't care how you do things or about conventional PixInsight methods, but the software was designed to work in a nonlinear state, they specifically state this a few times in the forums, and there's a reason behind it. The fact of the matter is that you cannot just tell people on KZbin to do the LRGB combination when you don't actually know how it works, because you can potentially mess up people's data by doing improper things like this. When converting from RGB to XYZ and then to LAB, different math is used for different values of the image, so values below 0.003 will actually be curve-stretched lower and values above it higher. And we all know that most linear data falls below this range, so you can't just simply add the luminance data to the LRGB if it doesn't share the same background value, because it will completely alter the RGB data, in many cases in a negative way. Trust me on this. If you are going to use linear fit on the RGB data, then you might as well use it on the luminance so you can get the backgrounds to match the RGB, but I still think this is a bad idea.
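For readers who want to see the nonlinearity being described, here is a minimal sketch of the standard CIE XYZ-to-L*a*b* transfer function. This is the textbook CIE definition, not PixInsight's internal code; in the CIE formula the piecewise threshold is (6/29)^3, about 0.0089, and typical linear astro backgrounds sit well below it on the linear segment.

# Standard CIE XYZ -> L*a*b* companding function f(t) (textbook definition,
# not PixInsight's implementation). Values below DELTA**3 fall on a linear
# segment; values above it are cube-rooted, so low-background linear data
# is treated very differently from brighter signal.
import numpy as np

DELTA = 6 / 29   # CIE constant; DELTA**3 is about 0.008856

def f(t):
    t = np.asarray(t, dtype=float)
    return np.where(t > DELTA**3,
                    np.cbrt(t),                      # above the threshold
                    t / (3 * DELTA**2) + 4 / 29)     # linear segment below it

for t in (0.0005, 0.005, 0.05, 0.5):   # background-level vs. bright values
    print(f"t = {t:<7} f(t) = {float(f(t)):.4f}")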
@SKYST0RY 3 days ago
@billblanshan3021 I'm sorry, Bill, but I processed this data several times before settling on this method and making this video. The conventional way yielded inferior results (shown in the link I sent you): the color was odd, and artifacts were plentiful and difficult to suppress. I even tried it using my usual workflow, which is to combine and stretch RGB in PI, stretch L in PI, then combine in Affinity Photo using the Luminosity composite mode. None of those methods, including my own, gave the best results. So I began experimenting, and this approach yielded the best results, which I define as clean space, balanced color, and minimal artifacts. I've gone back and re-tested the data using some of the insights you have shared. Thank you for them, but they also have not yielded the best results; applying LF to the L master yielded especially bad results. As always, I am going to follow the outcomes to the method that produces clean space, balanced color, and minimal artifacts, and this method does it. If people find that combining linear LRGB in the LRGB Combination tool gives them results they are unhappy with, they can opt to undo it; it's not as if their computers will explode or their data will vanish. In fact, I have no doubt some people will need different methods, if for no other reason than having different hardware, software, and drivers, or maybe simply different preferences. There are many ways to any given photographic outcome, and this is the way I am going to go about it because it gets me where I want to go most directly.
@BurgerOosthuizen 3 days ago
How interesting this is 😊
@janelubenskyi1177 3 days ago
Excellent…I will use this technique
@SKYST0RY 3 days ago
Let me know how it goes.
@janelubenskyi1177 3 days ago
Merry Christmas ❤
@SKYST0RY 3 days ago
🎄
@rashmibhatt7274 5 days ago
Hi, how do I use it with a flat master and NINA? Thanks.
@bobs_photo 6 days ago
Great video. I knew about the outer crop feature, but didn't think about combining it with the inner crop feature to make a donut. I did see in one of your previous videos the approach of moving off-center of a nebulous region or galaxy, focusing, and then reframing, but the issue I see with that is you have to disable any AF instruction in your sequence to stay automated... and then you have no autofocus. This makes much more sense. Thanks again... you rock!
@SKYST0RY 1 day ago
The other method usually works because once you get a good initial focus, NINA is often able to find focus when it does its focus checks. However, that doesn't pan out with huge targets that defy focusing, like Andromeda. This method is generally more consistent.
@pompeymonkey3271 6 days ago
It's worth mentioning that LEO satellites are mainly visible only after sunset and before dawn. Between those periods, especially in the winter, the satellites are in the Earth's shadow and don't show up. This doesn't happen for geostationary satellites, or for those with significantly elliptical orbits (high-polar, for example).
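To put a rough number on that (an illustration only, ignoring the penumbra and assuming a simple cylindrical shadow and a circular orbit): a satellite passing directly overhead stays sunlit until the Sun is roughly arccos(R/(R+h)) below the horizon, which for a Starlink-like 550 km orbit is about 23 degrees.

# Rough illustration: how far below the horizon the Sun must be before a
# satellite passing directly overhead falls into Earth's shadow. Assumes a
# circular orbit and a cylindrical shadow; real entry/exit also depends on
# the penumbra and on where the satellite actually is in the sky.
import math

R_EARTH = 6371.0  # km, mean Earth radius

def sun_depression_for_eclipse(altitude_km):
    """Sun depression angle (degrees) at which an overhead satellite is eclipsed."""
    return math.degrees(math.acos(R_EARTH / (R_EARTH + altitude_km)))

for h in (420, 550, 1200, 35786):   # ISS-like, Starlink-like, higher LEO, GEO
    print(f"{h:>6} km  ->  Sun about {sun_depression_for_eclipse(h):.0f} deg below the horizon")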
@BurgerOosthuizen 7 days ago
Thanks for your help! So kind.
@vidholf 7 days ago
Thank you! I did not know about this option in N.I.N.A.
@freeflysi1707 7 days ago
I need to rename my folder of NINA tips videos to sky story tips. Always such helpful information. Thanks.
@astrofromhome 7 days ago
Very nice and helpful trick to overcome autofocus issues caused by a blurry central target. For the initial AF it is OK to focus somewhere else in the sky, but at some point another AF run is going to happen. Your hint definitely helps to avoid a lost night of imaging.
@SKYST0RY 7 days ago
It can help to get the initial focus from another location since NINA needs a starting place relatively close to ideal focus. I usually just move the telescope a degree or so off the target and focus on stars in empty space, if necessary.
@jtepsr 7 days ago
Very informative. So if I own a 65 mm f/5 refractor with an APS-C sensor and I want closer views of DSOs, e.g. galaxies, what scope should I be considering?
@SKYST0RY 7 days ago
Personally, I'd go with an SCT. A Celestron 203 mm (8") SCT will give great results and not break the bank.
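For a rough sense of why the longer focal length helps (an illustration only, assuming an approximately 23.5 x 15.6 mm APS-C sensor and the 2032 mm native focal length typical of an 8-inch f/10 SCT): the field of view drops from about 4 degrees wide with the 325 mm refractor to well under 1 degree with the SCT, which is what frames small galaxies more tightly.

# Rough field-of-view comparison (illustration only). The sensor size is a
# typical APS-C value; the SCT focal length assumes the common 2032 mm 8" f/10 design.
import math

SENSOR_W_MM, SENSOR_H_MM = 23.5, 15.6   # approximate APS-C dimensions

def fov_deg(sensor_mm, focal_mm):
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

for name, fl in [("65 mm f/5 refractor (325 mm)", 325),
                 ('8" f/10 SCT (2032 mm)', 2032)]:
    print(f"{name:<30}  {fov_deg(SENSOR_W_MM, fl):.2f} x {fov_deg(SENSOR_H_MM, fl):.2f} deg")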
@ritacastil 7 days ago
Very useful tip; I couldn't figure out what the crop in the autofocus settings meant. Thanks!
@Your_Imagin8ion 7 days ago
Wish ASIAIR would add features like this
@SKYST0RY 7 days ago
To the best of my knowledge, only NINA offers this.
@robvandenwijngaart1970 7 days ago
Thanks for another great tip! I have only had an EAF for a short time. The first time I used it, everything worked fine; the second time it went crazy. I don't remember what I was shooting at that moment, but maybe your solution would have been a solution for me too. So next time it goes crazy, I'll use an inner crop box too.
@SKYST0RY 7 days ago
It might help. If the autofocus software mistakes haze for a star, it can try some pretty unusual adjustments.
@scottbadger4107 8 days ago
Thanks for the video! I also ran into the issue and was stumped. I eventually came up with the same solution, but figured it was gremlins until I saw your video! ... I had a sort-of-similar issue while imaging the Snowball Nebula (NGC 7662). Focus didn't fail, but the error bars were huge, I assume because NINA was seeing the nebula core as a very diffuse star. The first time it happened, I almost gave up for the night, thinking the seeing was especially bad, but decided to try anyhow, and I'm happy that I did, since the stars were actually close to half the size the focus curve said they'd be.
@SKYST0RY 7 days ago
You can also try putting a hole in the focus field: kzbin.info/www/bejne/jH_FdmyXm9GmqdEsi=tRVGgHqgx6u-oll4
@stephanjurgens3766 8 days ago
Incredible images and information! Thank you so much for sharing :) If you had to image from a B5 sky, or even B8, would you have to double the integration time for each Bortle class?
@SKYST0RY 7 days ago
I would have to increase integration time much more than that, because I would have to use narrowband, and narrowband only uses a few percent of the visible light spectrum. I find that friends who shoot narrowband often need 3-5 times as much integration time as I do shooting LRGB from dark skies.
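As a back-of-the-envelope check on that "few percent" figure (the filter bandwidths here are typical catalog values, not any specific product): a 3 nm Ha filter passes about 1% of the roughly 300 nm wide visible band.

# Fraction of the visible band passed by typical narrowband filters versus LRGB.
# Bandwidths are representative values, not measurements of any specific filter.
VISIBLE_BAND_NM = 700 - 400   # roughly 300 nm of visible spectrum

filters = {
    "3 nm Ha": 3,
    "7 nm Ha": 7,
    "dual-band Ha + OIII (2 x 7 nm)": 14,
    "L filter (roughly full visible)": 300,
}
for name, bandwidth_nm in filters.items():
    print(f"{name:<32} about {100 * bandwidth_nm / VISIBLE_BAND_NM:.1f}% of visible light")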
@TheHelicapt 8 days ago
I'm wondering why, when I look at histograms, I don't see the color curve at all. I even zoom in and out... why is this?
@SKYST0RY 7 days ago
Are you shooting with a color camera? If so, you need to separate the color channels.
@TheHelicapt 7 days ago
@ I figured out why. I didn’t select the image to edit correctly.
@traceymonroe9026 8 days ago
Thanks for this video. I have read many people raving about Player One's support, and your video was the final push I needed to go with them.
@SKYST0RY 7 days ago
I am sure you won't regret it. They have been a great company. My next astrocam will probably be their Poseidon.
@traceymonroe9026 7 days ago
@SKYST0RY I had a question about one of their promotions (i.e., should I get the 1.25" filter wheel or the 2" filter wheel with the Ares-M)... and since I am not a popular KZbinr, I didn't think I would hear from them... but they answered me the same day. I'm sold and IMPRESSED. Thanks for all your content!
@SKYST0RY 7 days ago
@traceymonroe9026 For the Ares-M, I use the 36 mm. It's a good compromise on cost and works great! I am using the Phoenix filter wheel from Player One.
@Hilmi12 9 days ago
I'm surprised NINA doesn't have this as a standard feature; FocusMax has had this since the Stone Age.
@SKYST0RY 8 days ago
I agree. Some applications have this capability. Perhaps someone will make a plug-in to do it. NINA does have another trick up its sleeve to help with this, which I am making a video on now.
@ahiemstra 9 days ago
You speak about drizzling as if it can be done on a single image, e.g. by looking at neighboring pixels. But my understanding of drizzling is that it uses the entire stack. Basically, it puts the images of a stack onto a matrix that is twice as large, so a star that is 1 pixel wide sometimes lights up 1 pixel, sometimes 4 pixels at 25% each, sometimes 2 pixels at 50% each, etc. It uses this information to fill the larger matrix. That is also why it is important to dither: without dithering it would not have this information. I therefore think it would be better to speak of "the image from the drizzled stack" rather than "the drizzled image".
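For anyone curious, here is a heavily simplified numpy sketch of the idea described above: a point-kernel "drizzle" of a dithered stack onto a 2x grid. Real drizzle implementations (for example in PixInsight or DrizzlePac) also spread each pixel's flux over its footprint (pixfrac) and handle rotation, distortion, and proper weighting.

# Heavily simplified sketch of drizzle integration: each frame's pixels are
# deposited onto a 2x finer output grid according to that frame's dither offset.
# Real implementations also spread each pixel's flux over its footprint (pixfrac),
# handle rotation/distortion, and use per-pixel weights and rejection.
import numpy as np

def drizzle_stack(frames, offsets, scale=2):
    """frames: list of 2D arrays; offsets: per-frame (dy, dx) dither in input pixels."""
    h, w = frames[0].shape
    out = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(out)
    yy, xx = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, offsets):
        # map each input pixel center onto the finer output grid
        oy = np.clip(np.round((yy + dy) * scale).astype(int), 0, h * scale - 1)
        ox = np.clip(np.round((xx + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(out, (oy, ox), frame)   # accumulate flux
        np.add.at(weight, (oy, ox), 1.0)  # accumulate coverage
    return np.divide(out, weight, out=np.zeros_like(out), where=weight > 0)

# Toy usage: random dithers mean different frames land on different fine pixels,
# which is exactly why dithering is required for drizzle to work.
rng = np.random.default_rng(0)
frames = [rng.poisson(100, (64, 64)).astype(float) for _ in range(20)]
offsets = [tuple(rng.uniform(-0.5, 0.5, 2)) for _ in range(20)]
result = drizzle_stack(frames, offsets)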
@charlespacer7421 9 days ago
I love your videos; they are very informative. You mentioned using the levels, brightness, and curves tools about nine and a half minutes into the video. I would love to see how you handle those.
@SKYST0RY 7 days ago
Sorry for the delay. I live in the Canadian backwoods, and this is a very busy time, getting ready for the big winter snow to come in. I hope to make a video shortly showing the comprehensive addition of PL8 to my workflow, and probably a separate video on PL8's light and color management power.