I assume this may clarify something for those who are already familiar with the topic, but for me it's unclear how this is actually predicting anything...
@sjpbrooklyn7699 • 24 days ago
Why, you ask, is it called Benford’s Law when it was discovered earlier by someone else? Stigler’s Law of Eponymy: “No discovery or invention is named after its first discoverer.” In 1983 statistician-historian Stephen Stigler published a paper worthy of Arthur Conan Doyle titled “Who Discovered Bayes’ Theorem?” Since Bayesian analysis, which is at the heart of inductive reasoning, is rapidly becoming the dominant statistical paradigm of the 21st century, this is of considerable scientific interest. Stigler suggests that the Reverend Thomas Bayes (1702-1761) may NOT be the true originator of the theorem named after him. In a tour de force of statistical deduction (or maybe induction) he uses Bayes’ Theorem itself to produce 3 to 1 odds that the real author was Nicholas Saunderson, a blind mathematician who became the fourth Lucasian professor at Cambridge (Newton was the second). He does leave the door open to other possibilities by invoking Damon Runyon’s rule (nothing in life is more than 3 to 1) and Laplace’s principle of indifference (absent contradictory evidence, give equal weight to all possible outcomes).
@kashyap5186 • 1 month ago
4:35 💀 Only Aric sir can teach in this way. Thanks, sir, for this series. I have learned time series from your channel and it really helped me in my exams.
@kashyap5186 • 1 month ago
I can't thank him enough for making this series and making it so much fun.
@TheBlueFluidBreathe • 1 month ago
You are a phenomenal educator
@shreshthasarkar991 • 1 month ago
You are amazing.. thanks a lot
@pol-aurelienhay5050 • 1 month ago
Thank you very much, Mr. Labarr!
@HadbbdbdDhhdbd • 1 month ago
Your video is helpful.
@ArsalanMS-s6j • 1 month ago
Best presentation ✅👌
@AricLaBarr • 1 month ago
Thanks for watching!
@itohanakpasubi4511 • 2 months ago
Seeing myself learn this in 5 minutes is still a shock. Thank you ❤
@AricLaBarr • 1 month ago
Happy to hear that!
@LTVictorg • 2 months ago
Excellent. Thank you, mate!
@AricLaBarr • 1 month ago
My pleasure!
@sasik1472 • 2 months ago
Sooo good. Thanks a lot!
@AricLaBarr • 1 month ago
Thanks for watching!
@peasant12345 • 2 months ago
Can we apply ARCH/GARCH to an ARMA model? It seems the MA part has already modeled the noise/volatility. Will adding MA give us more reliable volatility estimates?
@AricLaBarr • 1 month ago
So the fun part of these is they can be combined with ARMA models. ARMA models the mean, while ARCH/GARCH models the volatility!
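For anyone who wants to try that combination in code, here is a minimal sketch using Python's arch package (the AR(1) mean plus GARCH(1,1) variance specification and the simulated returns are illustrative assumptions, not the only way to combine them):

```python
import numpy as np
from arch import arch_model

# Simulated daily returns stand in for real data.
rng = np.random.default_rng(42)
returns = rng.standard_normal(1000) * 0.01

# AR(1) mean model combined with GARCH(1,1) volatility:
# the mean and variance equations are estimated jointly.
am = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1)
res = am.fit(disp="off")
print(res.summary())
```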
@art1041 • 2 months ago
Keep up the good work!!!!!!!!! Thanks a lot!
@AricLaBarr • 1 month ago
Glad you liked it!
@TheBlackNight971 • 3 months ago
I think there is an error. The "term" in the Fourier series is considered as a pair, sin(x) + cos(x), for each n. So if we choose n = 3 (number of terms), we will have 3 pairs of sin(x) + cos(x). @Aric LaBarr
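For readers who want to check this themselves, here is a small sketch of how Fourier seasonality terms are typically constructed (the period of 12 and K = 3 are illustrative assumptions); note that each harmonic k contributes a sin/cos pair, so K = 3 yields 6 columns:

```python
import numpy as np

def fourier_terms(t, period=12, K=3):
    """Return 2*K columns: one sin/cos pair per harmonic k = 1..K."""
    features = []
    for k in range(1, K + 1):
        features.append(np.sin(2 * np.pi * k * t / period))
        features.append(np.cos(2 * np.pi * k * t / period))
    return np.column_stack(features)

t = np.arange(48)          # four years of monthly time steps
X = fourier_terms(t)       # shape (48, 6): 3 pairs -> 6 columns
print(X.shape)
```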
@Southtiger80 • 3 months ago
Oh, I just struck a gold mine. Thanks, Aric!
@AricLaBarr • 1 month ago
Glad I could help
@prishaputri9745 • 3 months ago
Thank you so much, sir, you're such a lifesaver!
@AricLaBarr • 1 month ago
Glad it helped!
@deepakparmar96 • 3 months ago
Super useful for understanding a complex subject. Hope to see the rest of the machine learning approaches videos soon.
@AricLaBarr • 1 month ago
Thanks! I plan on making more videos, but can't promise when!
@CynthiaJiyane-z8m • 3 months ago
😀😀😀 @ "Kids, that's called marketing!" Thank you so much for this video, it helped a lot. You are great at this!
@AricLaBarr • 1 month ago
Thanks for watching, glad it helped!
@simoncha8733 • 4 months ago
This is what YouTube should be. No fluff, informative, and entertaining. Thank you, Doc!
@nikitaegorov7349 • 4 months ago
Great vid, many thanks
@anoriginalnick • 4 months ago
It sounds like a fancy wrapper over Python's time series decomposition with structural breaks embedded. It does feel very overfit. Any thoughts on this?
@AricLaBarr • 1 month ago
There are definitely some similarities with time series decomposition. Most of the time, in TS decomposition we use LOESS to estimate the underlying trend, as compared to just piecewise regression. Seasonality is also estimated differently, but you are correct in the idea that they handle the pieces individually!
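For reference, here is a minimal sketch of the LOESS-based decomposition mentioned in the reply, using statsmodels' STL (the toy monthly series and period of 12 are assumptions for illustration):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Toy monthly series: linear trend + yearly seasonality + noise.
idx = pd.date_range("2015-01-01", periods=96, freq="MS")
rng = np.random.default_rng(0)
y = pd.Series(0.5 * np.arange(96)
              + 10 * np.sin(2 * np.pi * np.arange(96) / 12)
              + rng.standard_normal(96), index=idx)

# STL estimates the trend via LOESS and the seasonality separately.
result = STL(y, period=12).fit()
print(result.trend.head())
print(result.seasonal.head())
```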
@Atrix256 • 4 months ago
A lot of overlap here with an infinite impulse response filter from DSP. I'm about to watch the moving average model video, but am wondering if that is the finite impulse response equivalent :)
@AricLaBarr • 1 month ago
Not familiar with the infinite impulse response filter! Let me know what you think after watching the MA model video!
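For what it's worth, the analogy can be made precise with a short calculation (a sketch, not from the video): an AR model's response to a single shock decays but never reaches exactly zero, which is the infinite impulse response behavior, while an MA(q) model's response is exactly zero after q lags, the finite impulse response behavior:

```latex
\text{AR(1): } Y_t = \phi Y_{t-1} + e_t
\;\Rightarrow\; \frac{\partial Y_{t+h}}{\partial e_t} = \phi^h
\quad \text{(nonzero for every } h\text{)}

\text{MA}(q)\text{: } Y_t = e_t + \theta_1 e_{t-1} + \dots + \theta_q e_{t-q}
\;\Rightarrow\; \frac{\partial Y_{t+h}}{\partial e_t} = 0
\quad \text{for } h > q
```

So the MA model is indeed the FIR counterpart the commenter is guessing at.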
@CP-tq1ue • 4 months ago
I like your videos, thanks!
@AricLaBarr • 1 month ago
Thank you so much!
@CP-tq1ue • 4 months ago
Thanks!
@andresgonzalez-nl8or • 4 months ago
Shouldn't it be "if Φ > 1" and not "Φ < 1"?
@AricLaBarr • 1 month ago
Not if you want stationarity. To be stationary, we want the value of phi to be less than 1 so that, when raised to higher powers, it has less and less impact on the observation the further back in time we go.
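A quick way to see this is to simulate it (a hedged sketch; the value of phi and the series length are arbitrary choices): with phi = 0.9 shocks die out and the series stays bounded, while with phi = 1.1 the same recursion explodes:

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1(phi, n=200):
    # Simulate Y_t = phi * Y_{t-1} + e_t starting from zero.
    y = np.zeros(n)
    e = rng.standard_normal(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + e[t]
    return y

print(np.abs(ar1(0.9)).max())   # stays bounded: stationary
print(np.abs(ar1(1.1)).max())   # grows without bound: non-stationary
```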
@artyCrafty4564 • 4 months ago
@3:23 a lot of bad memories from my calculus days 🤣🤣🤣🤣.
@TheBlueFluidBreathe • 5 months ago
Bro God bless you!
@zoheirelhouari5286 • 5 months ago
The playlist was amazing. Thank you!
@robin5453 • 6 months ago
so clear
@MaxGroßeHerzbruch • 6 months ago
So is it possible to read out the algebraic form of the fitted linear model explicitly? :) If not, is there another approach where this is possible?
@AricLaBarr • 5 months ago
It definitely is possible! The more complicated the model (more knots, more Fourier terms, more holidays), the more complicated the equation is.

y = beta0 + TREND + FOURIER + HOLIDAY

TREND = piecewise linear regression on trend; that could be as simple as beta1*time (no knots) or more complicated with knots.
FOURIER = a beta term multiplied by each of the Fourier terms described in the video.
HOLIDAY = a beta term multiplied by a dummy variable that is 1 for the holiday and 0 otherwise.
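To make that concrete, here is a hedged sketch of building such a design matrix by hand and reading the betas off an ordinary linear regression (the knot location, single Fourier pair, December holiday dummy, and simulated data are all illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

n = 120
t = np.arange(n)

# TREND: piecewise linear with one knot at t = 60.
trend = np.column_stack([t, np.maximum(t - 60, 0)])
# FOURIER: one sin/cos pair for seasonality with period 12.
fourier = np.column_stack([np.sin(2 * np.pi * t / 12),
                           np.cos(2 * np.pi * t / 12)])
# HOLIDAY: dummy that is 1 every December, 0 otherwise.
holiday = ((t % 12) == 11).astype(float).reshape(-1, 1)

X = np.hstack([trend, fourier, holiday])
rng = np.random.default_rng(7)
y = 5 + 0.3 * t + 8 * fourier[:, 0] + 4 * holiday[:, 0] + rng.standard_normal(n)

model = LinearRegression().fit(X, y)
print(model.intercept_, model.coef_)  # beta0 and the betas, read off explicitly
```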
@ShihYangLin • 6 months ago
Thank you for this excellent video!
@tochoXK3 • 6 months ago
I like how you go straight to the point while still making it understandable. Both fast and well-explained.
@tochoXK3 • 6 months ago
So, what if there's integration and also seasonal integration?
@AricLaBarr • 5 months ago
Then you would take two differences first: the seasonal difference as in the video, then a regular difference, before you start worrying about the AR and MA terms.
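In code, that ordering is just two chained differences (a minimal sketch, assuming monthly data with a seasonal period of 12):

```python
import pandas as pd

y = pd.Series(range(1, 61))          # placeholder for a monthly series
# Seasonal difference first (lag 12), then a regular first difference:
stationary = y.diff(12).diff(1).dropna()
print(stationary.head())
```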
@aryashahdi2790 • 6 months ago
salute to this dude for the clarity of his explanations
@alisavictory2969 • 6 months ago
Great and concise! Very engaging especially with the witty titles! Thank you for sharing :)
@kafuu1 • 6 months ago
This video is amazing
@kafuu1 • 6 months ago
nice video!
@SyedMohommadKumailAkbar • 6 months ago
This was an excellent series, though it would be fantastic if you could do deep-dive videos as well! Thanks for these regardless.
@arkadiuszkulpa • 6 months ago
Great, informative, funny! Thanks for those videos. I must say, however, some (like this one) are stretching my brain a bit too thin... some components could do with a little more explanation, and maybe you got rushed by the 5-minute limit you placed on yourself :)
@alejandrogallardo1414 • 6 months ago
Best series of videos I've seen on anomaly detection. Great work!
@rajatautensute3271 • 6 months ago
Does this mean that we can use Bayesian statistics to impose a non-negativity constraint on our statistical models like ARIMA or Holt-Winters, to ensure that the forecasted values can never be negative? Assuming it doesn't make sense for the forecasted values to be below zero.
@AricLaBarr • 5 months ago
That would more be a transformation on the target variable - like a log transformation for example. The Bayesian piece is to set prior values on the estimation of the coefficients themselves, not restrict the target variable.
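A minimal sketch of that transformation idea (the mean on the log scale stands in for whatever model you fit; a full treatment would also add a bias correction when back-transforming):

```python
import numpy as np

y = np.array([3.0, 5.0, 4.0, 6.0, 8.0])   # strictly positive series

# Model on the log scale, forecast, then exponentiate:
log_forecast = np.log(y).mean()            # stand-in for any model's forecast
forecast = np.exp(log_forecast)            # exp(anything) > 0: never negative
print(forecast)
```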
@katarabo • 6 months ago
Kids, that's called marketing... I subscribed so fast.
@AynazAbdollahzadeh • 6 months ago
I was super lost, thanks for explaining it amazingly!
@ditdit-dahdah-ditdit-dah • 6 months ago
Love from Asia! Topics made easy.
@szymonch6662 • 6 months ago
Video from 4 years ago, so idk if you read it man, but just so you know, I sincerely consider you a genuinely wonderful person for doing this series
@AricLaBarr • 6 months ago
I do try to still check comments. Thank you for the kind words!
@baruite • 7 months ago
So well explained! Thank you.
@michalkiwanuka938 • 7 months ago
3:35 So when I predict Y_t+1, how will I know the value of the error e_t+1? I need it for predicting tomorrow's value, yet I don't know it, and it's not based on the previous errors.
@AricLaBarr • 7 months ago
Happy to help! e_t+1 will never be known ahead of time. That is the point of the random error. Your prediction will never be perfect because of that. You can use all the information up until then, but that e_t+1 accounts for the differences between what you predict and what actually happens.
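To spell that out for an MA(1) (a sketch consistent with the reply above): when forecasting, the unknown future error is replaced by its expected value of zero, so only the already-observed residual enters the prediction:

```latex
Y_{t+1} = \mu + e_{t+1} + \theta_1 e_t,
\qquad \mathbb{E}[e_{t+1}] = 0
\;\Rightarrow\;
\hat{Y}_{t+1} = \mu + \theta_1 e_t
```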
@michalkiwanuka938 • 7 months ago
The underlying assumption is that we know the data up to time t-1, and we use the observed data to estimate the parameters (ϕ1, ϕ2, …, ϕp and e_t), right?
@AricLaBarr • 7 months ago
Correct!
@michalkiwanuka938 • 7 months ago
Just some clarifications. When you say "model the lack of consistency in variance", do you mean model the variance in a consistent way? When you say they are Lazy, do you mean they are using a method that has statistically incorrect properties for the sake of simplicity?
@AricLaBarr • 7 months ago
Happy to help! I mean that there are models to actually model the variance, especially when it is changing over time. The lazy methods aren't statistically incorrect in terms of the mean; they will still do everything they need to predict the means (averages) well.
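For context, here is a sketch of the sort of "lazy" variance estimate typically contrasted with ARCH/GARCH, a plain rolling-window variance (whether this matches the video's exact method is an assumption, and the 30-observation window is arbitrary):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
returns = pd.Series(rng.standard_normal(500) * 0.01)

# "Lazy": variance from a fixed rolling window, equal weight on every point.
rolling_var = returns.rolling(window=30).var()
print(rolling_var.tail())
```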