What are k Nearest Neighbors
4:58
What are Mahalanobis Distances
4:58
What are Z scores
5:05
2 years ago
What is Benford's Law
4:55
2 years ago
How are Time Series Models Evaluated
4:53
What are ARCH & GARCH Models
5:10
2 years ago
What is the Prophet Model
4:41
2 years ago
What are Seasonal ARIMA Models
4:52
3 years ago
Learning from Mistakes - Fail Fast
9:20
What are ARIMA Models
5:07
4 years ago
What are Moving Average (MA) Models
5:01
What are Autoregressive (AR) Models
5:01
What is Stationarity
5:01
5 years ago
What is Time Series Decomposition
4:54
What is Time Series Data
5:01
5 years ago
Comments
@0xoRial 13 days ago
I assume this may clarify something for those who are already familiar with the topic, but for me it's unclear how this is actually predicting anything...
@sjpbrooklyn7699 24 days ago
Why, you ask, is it called Benford’s Law when it was discovered earlier by someone else? Stigler’s Law of Eponymy: “No discovery or invention is named after its first discoverer.” In 1983 statistician-historian Stephen Stigler published a paper worthy of Arthur Conan Doyle titled “Who Discovered Bayes’ Theorem?” Since Bayesian analysis, which is at the heart of inductive reasoning, is rapidly becoming the dominant statistical paradigm of the 21st century, this is of considerable scientific interest. Stigler suggests that the Reverend Thomas Bayes (1702-1761) may NOT be the true originator of the theorem named after him. In a tour de force of statistical deduction (or maybe induction) he uses Bayes’ Theorem itself to produce 3 to 1 odds that the real author was Nicholas Saunderson, a blind mathematician who became the fourth Lucasian professor at Cambridge (Newton was the second). He does leave the door open to other possibilities by invoking Damon Runyon’s rule (nothing in life is more than 3 to 1) and Laplace’s principle of indifference (absent contradictory evidence, give equal weight to all possible outcomes).
@kashyap5186 1 month ago
4:35 💀 Only Aric sir can teach in this way. Thanks, sir, for this series. I have learned time series from your channel and it really helped me in my exams.
@kashyap5186 1 month ago
I can't thank him enough for making this series and making it so much fun.
@TheBlueFluidBreathe 1 month ago
You are a phenomenal educator
@shreshthasarkar991 1 month ago
You are amazing... thanks a lot!
@pol-aurelienhay5050 1 month ago
Thank you very much, Mr. LaBarr!
@HadbbdbdDhhdbd 1 month ago
Your video is helpful.
@ArsalanMS-s6j 1 month ago
Best presentation ✅👌
@AricLaBarr 1 month ago
Thanks for watching!
@itohanakpasubi4511 2 months ago
Seeing myself learn this in 5 minutes is still a shock. Thank you ❤
@AricLaBarr 1 month ago
Happy to hear that!
@LTVictorg 2 months ago
Excellent. Thank you, mate!
@AricLaBarr 1 month ago
My pleasure!
@sasik1472 2 months ago
Sooo good. Thanks a lot!
@AricLaBarr 1 month ago
Thanks for watching!
@peasant12345 2 months ago
Can we apply ARCH/GARCH to an ARMA model? It seems the MA part has already modeled the noise/volatility. Will adding MA give us more reliable volatility estimates?
@AricLaBarr 1 month ago
The fun part of these is that they can be combined with ARMA models. ARMA models the mean, while ARCH/GARCH models the volatility!
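For readers who want to try the combination, here is a minimal sketch, assuming the Python `arch` package (my choice of tool, not something from the video): an AR(1) models the mean while GARCH(1,1) models the volatility of the residuals.

```python
# Sketch: AR(1) mean + GARCH(1,1) volatility, fit jointly with the arch package.
import numpy as np
from arch import arch_model

returns = np.random.default_rng(0).normal(size=1000)  # toy return series
am = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1)
res = am.fit(disp="off")
print(res.params)  # AR (mean) coefficients alongside GARCH (volatility) coefficients
```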
@art1041 2 months ago
Keep up the good work!!!!!!!!! Thanks a lot!
@AricLaBarr 1 month ago
Glad you liked it!
@TheBlackNight971 3 months ago
I think there is an error. A "term" in the Fourier series is considered a pair, sin(x) + cos(x), for each n. So if we choose n = 3 (number of terms), we will have 3 pairs of sin(x) + cos(x). @Aric LaBarr
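A sketch of how Prophet-style Fourier seasonality features are typically built (my illustration, not the video's code), which shows the commenter's point: each term n contributes one sin/cos pair, so 3 terms yield 6 columns.

```python
import numpy as np

def fourier_terms(t, period, n_terms):
    """One sin/cos pair per term n, as in Prophet-style seasonality."""
    cols = []
    for n in range(1, n_terms + 1):
        cols.append(np.sin(2 * np.pi * n * t / period))
        cols.append(np.cos(2 * np.pi * n * t / period))
    return np.column_stack(cols)

t = np.arange(365)
print(fourier_terms(t, period=365.25, n_terms=3).shape)  # (365, 6): 3 pairs
```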
@Southtiger80 3 months ago
Oh... I just struck a gold mine... thanks, Aric!
@AricLaBarr 1 month ago
Glad I could help!
@prishaputri9745 3 months ago
Thank you so much, sir, you're such a lifesaver!
@AricLaBarr 1 month ago
Glad it helped!
@deepakparmar96 3 months ago
Super useful for understanding a complex subject. Hope to see the rest of the machine learning approaches videos soon.
@AricLaBarr 1 month ago
Thanks! I plan on making more videos, but can't promise when!
@CynthiaJiyane-z8m 3 months ago
😀😀😀 "Kids, that's called marketing!" Thank you so much for this video, it helped a lot. You are great at this!
@AricLaBarr 1 month ago
Thanks for watching, glad it helped!
@simoncha8733 4 months ago
This is what YouTube should be: no fluff, informative, and entertaining. Thank you, Doc!
@nikitaegorov7349 4 months ago
Great vid, many thanks
@anoriginalnick 4 months ago
It sounds like a fancy wrapper over Python's time series decomposition with structural breaks embedded. It does feel very overfit. Any thoughts on this?
@AricLaBarr 1 month ago
There are definitely some similarities with time series decomposition. Most of the time, in TS decomposition we use LOESS to estimate the underlying trend, as compared to just piecewise regression. Seasonality is also estimated differently, but you are correct in the idea that they handle the pieces individually!
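For contrast with Prophet's piecewise-linear trend, a minimal sketch of the LOESS-based decomposition mentioned in the reply, assuming statsmodels' STL (my illustration, with hypothetical monthly data):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

idx = pd.date_range("2020-01-01", periods=48, freq="MS")
y = pd.Series(np.arange(48) + 5 * np.sin(2 * np.pi * np.arange(48) / 12), index=idx)
res = STL(y, period=12).fit()  # splits into LOESS trend + seasonal + remainder
print(res.trend.iloc[:3], res.seasonal.iloc[:3])
```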
@Atrix256 4 months ago
A lot of overlap here with an infinite impulse response filter from DSP. I'm about to watch the moving average model video, but am wondering if that is the finite impulse response equivalent :)
@AricLaBarr 1 month ago
Not familiar with the infinite impulse response filter! Let me know what you think after watching the MA model video!
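A sketch of the commenter's analogy (my illustration, not the video's framing and not confirmed in the reply): an MA(q) is a finite convolution of white noise, FIR-like, while an AR(1) feeds its own past output back in, IIR-like, so shocks echo indefinitely.

```python
import numpy as np

e = np.random.default_rng(1).normal(size=200)  # white-noise shocks

# MA(2): y_t = e_t + 0.5*e_{t-1} + 0.25*e_{t-2} -- finite memory of shocks
ma = np.convolve(e, [1.0, 0.5, 0.25], mode="full")[: len(e)]

# AR(1): y_t = 0.8*y_{t-1} + e_t -- every shock decays but never fully leaves
ar = np.zeros_like(e)
for t in range(1, len(e)):
    ar[t] = 0.8 * ar[t - 1] + e[t]
```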
@CP-tq1ue 4 months ago
I like your videos, thanks!
@AricLaBarr 1 month ago
Thank you so much!
@CP-tq1ue 4 months ago
Thanks!
@andresgonzalez-nl8or 4 months ago
Shouldn't it be Φ > 1 and not Φ < 1?
@AricLaBarr 1 month ago
Not if you want stationarity. To be stationary, we want the value of phi to be less than 1, so that when it is raised to higher powers, it puts less and less weight on observations the further back in time we go.
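A quick worked example of the reply (my numbers): the weight an AR(1) places on a shock from k steps back is phi**k, which fades out only when |phi| < 1.

```python
# 0.8**k shrinks toward 0 (stationary); 1.2**k grows without bound (explosive).
for k in (1, 5, 10, 20):
    print(k, round(0.8**k, 4), round(1.2**k, 4))
```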
@artyCrafty4564 4 months ago
@3:23 brings back a lot of bad memories from my calculus days 🤣🤣🤣🤣
@TheBlueFluidBreathe 5 months ago
Bro God bless you!
@zoheirelhouari5286 5 months ago
The playlist was amazing. Thank you!
@robin5453 6 months ago
So clear!
@MaxGroßeHerzbruch 6 months ago
So is it possible to read out the algebraic form of the fitted linear model explicitly? :) If not, is there another approach where this is possible?
@AricLaBarr 5 months ago
It definitely is possible! The more complicated the model (more knots, more Fourier terms, more holidays), the more complicated the equation is:

y = beta0 + TREND + FOURIER + HOLIDAY

- TREND = piecewise linear regression on trend. That could be as simple as beta1*time (no knots) or more complicated with knots.
- FOURIER = a beta term multiplied by each of the Fourier terms described in the video.
- HOLIDAY = a beta term multiplied by a dummy variable that is 1 for the holiday and 0 otherwise.
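A hedged sketch of reading those betas out explicitly (my construction; the knot location, period, and the toy "holiday" dummy are all assumptions): fit the same trend + Fourier + holiday design with plain OLS, so every coefficient is a visible number.

```python
import numpy as np

t = np.arange(200, dtype=float)
X = np.column_stack([
    np.ones_like(t),                # beta0
    t,                              # beta1 * time
    np.maximum(t - 100.0, 0.0),     # slope change after a knot at t = 100
    np.sin(2 * np.pi * t / 7),      # one Fourier pair with period 7
    np.cos(2 * np.pi * t / 7),
    (t % 7 == 0).astype(float),     # toy holiday dummy: 1 on the holiday, else 0
])
y = 1 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 7) \
    + np.random.default_rng(2).normal(size=t.size)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(betas)  # the explicit coefficients of y = beta0 + TREND + FOURIER + HOLIDAY
```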
@ShihYangLin 6 months ago
Thank you for this excellent video!
@tochoXK3 6 months ago
I like how you go straight to the point while still making it understandable. Both fast and well-explained.
@tochoXK3 6 months ago
So, what if there's integration and also seasonal integration?
@AricLaBarr 5 months ago
Then you would take two differences FIRST. First you take the seasonal difference as in the video, then another difference before you start worrying about the AR and MA terms.
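A minimal sketch of that order of operations (my illustration, assuming monthly data with a yearly season): seasonal difference first, then a regular difference, before fitting AR and MA terms.

```python
import numpy as np
import pandas as pd

y = pd.Series(np.random.default_rng(3).normal(size=48).cumsum(),
              index=pd.date_range("2020-01-01", periods=48, freq="MS"))
y_dd = y.diff(12).diff(1).dropna()  # seasonal (lag-12) difference, then first difference
```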
@aryashahdi2790 6 months ago
Salute to this dude for the clarity of his explanations.
@alisavictory2969 6 months ago
Great and concise! Very engaging especially with the witty titles! Thank you for sharing :)
@kafuu1 6 months ago
This video is amazing
@kafuu1 6 months ago
Nice video!
@SyedMohommadKumailAkbar 6 months ago
This was an excellent series, though it would be fantastic if you could do deep-dive videos as well! Thanks for these regardless.
@arkadiuszkulpa 6 months ago
Great, informative, funny! Thanks for these videos. I must say, however, some (like this one) stretch my brain a bit too thin... some components could do with a little more explanation; maybe you got rushed by the 5-minute limit you placed on yourself :)
@alejandrogallardo1414 6 months ago
Best series of videos I've seen on anomaly detection. Great work!
@rajatautensute3271 6 months ago
Does this mean we can use Bayesian statistics to impose a non-negativity constraint on statistical models like ARIMA or Holt-Winters, to ensure that the forecasted values can never be negative? Assuming it doesn't make sense for the forecasted values to be below zero.
@AricLaBarr 5 months ago
That would be more of a transformation on the target variable, like a log transformation for example. The Bayesian piece is to set priors on the estimation of the coefficients themselves, not to restrict the target variable.
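A sketch of the log-transform idea from the reply (my illustration, assuming statsmodels): fit on log(y) and exponentiate the forecast, so the back-transformed values can never be negative.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

y = np.random.default_rng(4).gamma(5.0, 2.0, size=120)  # strictly positive toy data
fit = ARIMA(np.log(y), order=(1, 0, 0)).fit()
forecast = np.exp(fit.forecast(steps=12))  # guaranteed > 0 after back-transform
print(forecast[:3])
```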
@katarabo 6 months ago
"Kids, that's called marketing"... I subscribed so fast.
@AynazAbdollahzadeh 6 months ago
I was super lost, thanks for explaining it amazingly!
@ditdit-dahdah-ditdit-dah 6 months ago
Love from Asia! Topics made easy.
@szymonch6662 6 months ago
Video from 4 years ago, so I don't know if you'll read this, man, but just so you know, I sincerely consider you a genuinely wonderful person for doing this series.
@AricLaBarr 6 months ago
I do try to still check comments. Thank you for the kind words!
@baruite 7 months ago
So well explained! Thank you.
@michalkiwanuka938 7 months ago
3:35 So when I predict Y_t+1, how will I know the value of the error e_t+1? I need it to predict tomorrow's value, yet I don't know it, and it's not based on the previous errors.
@AricLaBarr 7 months ago
Happy to help! e_t+1 will never be known at t+1. That is the point of the random error. Your prediction will never be perfect because of that. You can use all the information up until then, but that e_t+1 accounts for the difference between what you predict and what actually happens.
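My illustration of the reply (assuming statsmodels): when forecasting an MA(1) one step ahead, the unknown future shock e_{t+1} is replaced by its expected value, 0, so only the estimated mean and theta * e_t enter the prediction.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

y = np.random.default_rng(5).normal(size=300)
fit = ARIMA(y, order=(0, 0, 1)).fit()        # MA(1) with a constant mean
print(fit.forecast(steps=1))                  # uses E[e_{t+1}] = 0; e_{t+1} is never known
```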
@michalkiwanuka938 7 months ago
The underlying assumption is that we know the data up to time t-1, and we use the observed data to estimate the parameters (ϕ1, ϕ2, …, ϕp and e_t), right?
@AricLaBarr 7 months ago
Correct!
@michalkiwanuka938 7 months ago
Just some clarifications. When you say "model the lack of consistency in variance", do you mean model the variance in a consistent way? When you say they are Lazy, do you mean they are using a method that has statistically incorrect properties for the sake of simplicity?
@AricLaBarr 7 months ago
Happy to help! I mean that there are models that actually model the variance, especially when it is changing over time. The methods aren't statistically incorrect in terms of the mean, and they will still do everything they need to predict the means (averages) well.