What are Autoregressive (AR) Models

129,650 views

Aric LaBarr

1 day ago

Comments: 81
@enock_elk 4 years ago
Came here after being confused by my lecturer. Thank you very much for simplifying this!
@AricLaBarr 4 years ago
Glad it helped!
@pettirto 1 year ago
Thanks Mr. LaBarr, I'm studying for my exam in time series and your videos are very helpful. Greetings from Italy!!!
@AricLaBarr 1 year ago
Grazie! Glad to hear it was helpful! Ciao!
@oren2234 3 years ago
My statistics is very basic and I just needed a forecasting algorithm; this video explained it so well.
@arnonym5995 9 months ago
I like the way you convey the intuition behind AR and MA models. One thing that might be confusing, however, is the terminology, in particular with regard to short and long memory, which differs from the common literature. There, AR, MA, and ARMA models are considered short-memory models because their autocovariances are summable. Even an AR model, whose autocovariance function (ACVF) decays quite quickly toward zero for increasing lags but never fully reaches zero, has summable autocovariances. In contrast, long-memory behavior is indicated by a hyperbolically decaying ACVF, whose elements are no longer summable. A popular example is the fractionally integrated ARMA model, often denoted FARIMA or ARFIMA, which can still have ACVF values of notable magnitude at large lags.
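A quick numeric sketch of the summability point above (illustrative Python, not from the thread; phi and the memory parameter d are made-up values): geometric decay like an AR(1) ACVF sums to a finite limit, while a hyperbolic k**(2d-1) decay keeps accumulating.

```python
# Compare short-memory (geometric) vs long-memory (hyperbolic) decay shapes.
phi = 0.6  # hypothetical AR(1) coefficient
d = 0.3    # hypothetical long-memory parameter, d in (0, 0.5)

geometric = [phi ** k for k in range(1, 200)]           # AR(1)-style ACVF shape, summable
hyperbolic = [k ** (2 * d - 1) for k in range(1, 200)]  # ~ k^(2d-1), not summable

print(round(sum(geometric), 4))   # close to phi / (1 - phi) = 1.5
print(round(sum(hyperbolic), 4))  # partial sums keep growing as the lag range extends
```

Extending the hyperbolic range (say to 400 lags) adds a noticeable amount to its partial sum, while the geometric sum has already converged.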
@hugoagudo4282 3 years ago
Great video. I've had a textbook about time series that's been gathering dust because I was afraid of all the symbols. This helps a lot.
@oq88 3 years ago
One of the best teachers I've ever seen! Thank you.
@clickbaitpolice9792 2 years ago
Just become my lecturer lol. I love the enthusiasm you put in. Makes learning more fun.
@williamgomez6226 2 years ago
Thank you, I had seen this equation when I was studying reinforcement learning; it's like the value function weighted by a discount factor. Great explanation!
@felipedaraujo_ 3 years ago
Excellent teaching! Thanks for your good work Aric!
@economicsfriendly7425 3 years ago
Wow, your teaching style is really amazing! Please make more videos on time series analysis. We really need your help!
@ahsanshabbir16 3 years ago
Hi Dr. Aric LaBarr, your work is amazing. Please continue this; the under-5-minute concept is great.
@vadimkorontsevich1066 2 years ago
God bless you for your efforts to explain!
@rossijuan9548 3 years ago
Excellent contribution, thank you very much.
@bend0596 1 year ago
Super clearly explained, thanks!
@josealeman5008 2 years ago
Simple and beautifully explained! Thanks!
@Atrix256 2 months ago
A lot of overlap here with an infinite impulse response filter from DSP. I'm about to watch the moving average model video, but am wondering if that is the finite impulse response equivalent :)
@AricLaBarr 23 hours ago
Not familiar with the infinite impulse response filter! Let me know what you think after watching the MA model video!
@elisesauvary8174 3 years ago
You are a godsend!!
@valdompinga 1 year ago
Man, you are incredible! I'm learning ARIMA like I'm building Legos!
@AricLaBarr 1 year ago
Thank you!
@MrSk8L8 4 years ago
Great explanation
@AricLaBarr 4 years ago
Thank you! Glad you liked it!
@vaishnavikhiste7841 1 year ago
WELL EXPLAINED
@robin5453 1 year ago
Best ever, thank you!!
@kumaratuliitd 3 years ago
Hi Aric, thanks for the explanatory video. Can it be said that AR(1) is equivalent to the single exponential smoothing algorithm, because it too depends on the previous forecast and error?
@AricLaBarr 3 years ago
Actually, a single exponential smoothing model is equivalent to a moving average of order 1 after taking a single time difference (more formally, an ARIMA(0,1,1) model, or sometimes an IMA(1,1))! This is because of the structure of the single exponential smoothing model: it is a combination of past and prediction, but the prediction is itself more past, and so on. Hope this helps!
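A quick numerical check of that "combination of past and prediction" structure (an illustrative sketch with made-up numbers, not from the video): unrolling the smoothing recursion shows the forecast is an exponentially weighted sum of all past observations.

```python
alpha = 0.4                          # hypothetical smoothing weight
y = [10.0, 12.0, 11.0, 13.0, 12.5]   # made-up series

# Recursive form: new forecast = alpha * latest observation + (1 - alpha) * old forecast
forecast = y[0]
for obs in y[1:]:
    forecast = alpha * obs + (1 - alpha) * forecast

# Unrolled form: weight alpha * (1 - alpha)**k on the observation k steps back,
# with the leftover (1 - alpha)**(n - 1) weight on the very first value
n = len(y)
unrolled = sum(alpha * (1 - alpha) ** k * y[n - 1 - k] for k in range(n - 1))
unrolled += (1 - alpha) ** (n - 1) * y[0]

print(abs(forecast - unrolled) < 1e-12)  # True: the two forms agree
```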
@eengpriyasingh706 2 years ago
At 3:51, the manipulation done should be explained a little. Since I am not from this background, it is difficult for me to follow what is happening and how.
@ArunKumar-yb2jn 2 years ago
Maybe you should make some effort to gather a little background before asking that question?
@eengpriyasingh706 2 years ago
@ArunKumar-yb2jn You are so smart, that's why I am asking... If he had mentioned some references or shown a bit of the manipulation... If I already had some background, I definitely would not be here.
@ArunKumar-yb2jn 2 years ago
@eengpriyasingh706 Maybe you should not act so entitled.
@magtazeum4071 11 months ago
At 3:31, in the 2nd term on the right-hand side of the last equation, shouldn't the power of phi be (t-1) instead of t (and so on)?
@AricLaBarr 10 months ago
Completely correct! In all honesty, I should have had the left hand side be Y_(t+1) to make the math work better.
@ΜιχαήλΣκιαδάς-γ8β 2 years ago
I could not understand how you calculate φ, because I've seen a lot of correlation types and I do not know which one to use. Thank you for your time.
@AricLaBarr 2 years ago
It actually isn't a correlation directly (unless it is an AR(1) model and then it is the Pearson correlation if the variables are standardized). The best way to think about it is that it is a weight in a regression model. The model chooses the weight that maximizes the likelihood (MLE) of the model and predictions. Hope this helps!
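A minimal sketch of that "weight in a regression model" idea (illustrative code, not from the video; the coefficient value and series are simulated): for an AR(1) with zero intercept, conditional least squares gives φ in closed form, and it recovers the simulated value.

```python
import random

random.seed(42)
true_phi = 0.7  # hypothetical coefficient to recover
y = [0.0]
for _ in range(5000):
    y.append(true_phi * y[-1] + random.gauss(0, 1))  # simulate AR(1) with omega = 0

# Regression of Y_t on Y_{t-1} without intercept:
# phi_hat = sum(Y_{t-1} * Y_t) / sum(Y_{t-1}^2)
num = sum(y[t - 1] * y[t] for t in range(1, len(y)))
den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
phi_hat = num / den

print(phi_hat)  # lands near 0.7 for a long simulated series
```

In practice software maximizes the full likelihood rather than using this closed form, but for large samples the two estimates are very close.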
@ΜιχαήλΣκιαδάς-γ8β 2 years ago
@AricLaBarr It helped a lot, thank you.
@michalkiwanuka938 5 months ago
The underlying assumption is that we know the data up to time t-1, and we use the observed data to estimate the parameters (ϕ1, ϕ2, …, ϕp and e_t), right?
@AricLaBarr 5 months ago
Correct!
@dipenmodi1807 4 years ago
Can you explain the difference between static, dynamic, and autoregressive probit models?
@Rundtj45 3 years ago
Excellent explanation, thanks.
@mirroring_2035 1 year ago
Okay, you're a genius. Thanks!
@Pewpewforyou0 3 years ago
This was very helpful.
@pjy1006 2 years ago
Love your videos! I am on a quest to find out why we need stationarity for the ARIMA model (there are many explanations online, but I cannot say I have a very clear understanding). Is stationarity necessary for simple exponential smoothing?
@AricLaBarr 2 years ago
We need stationarity because the structure of ARIMA models is such that they revert to the average of the series if you predict out far enough. That wouldn't work very well at all if we had trending or seasonal data! Simple ESMs don't need stationarity, but they do require no trend or seasonality to work best. Stationarity is more mathematically rigorous than just no trend or seasonality. Hope this helps!
@roym1444 4 years ago
Is there any online resource you know of that demonstrates how to code some of the concepts you've spoken about?
@kafuu1 5 months ago
Nice video!
@dineafkir5184 4 years ago
Nice video. Will you be making something about the ARCH/GARCH model? :-)
@mengsupeng6541 3 years ago
Thank you. Already subscribed.
@amirhoseinbodaghi9527 3 years ago
Thank You Dear
@josephgan1262 2 years ago
If I am using an AR(1) model and I have the data Y_{t-1}, do I need to recurse back all the way to the starting point to predict Y_t, or can I just use the formula shown at 1:17?
@AricLaBarr 2 years ago
You just use the formula! The recursive piece is to just show what is happening in concept if you keep plugging in what each lag truly represents. All you need for an AR(1) is just the lagged values (for each time point) to build the model!
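What "just use the formula" looks like in code (an illustrative sketch with made-up fitted values, not from the video): a one-step AR(1) prediction needs only the single lagged value, with no recursion back to the start of the series.

```python
omega, phi = 1.5, 0.6  # hypothetical fitted AR(1) parameters
y_prev = 20.0          # the observed lagged value Y_{t-1}

# One-step forecast: Y_hat_t = omega + phi * Y_{t-1}
y_hat = omega + phi * y_prev
print(y_hat)  # 13.5
```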
@梁馨月-m7c 4 years ago
I hope there is a video about the MA model!
@AricLaBarr 4 years ago
Just uploaded this morning! Enjoy!!
@梁馨月-m7c 4 years ago
@AricLaBarr Thanks a lot!
@NishaSingh-qf2it 2 years ago
Hi Aric! This was such a splendidly explained video. I have a doubt, though, about NARX. Does it function the same way as the model explained in the video, since NARX is also an autoregressive model? If not, could you please explain NARX as well?
@razzlfraz 4 years ago
Does anyone know where the line between autoregression and regression is? Because, e.g., lowess and loess functions are called local regression, yet local regression looks like a form of autoregression from a 10,000 ft view. My guess at the moment is that local regression does not add stochastic noise, making it just barely miss the definition, but I am only guessing here. It could also be that local regression is a form of autoregression but everyone is too lazy to write it all out. Whatever it is, I would like to know!
@PhilosophySoldier 4 years ago
Good question; I'm also wondering about the answer. @Aric LaBarr, can you help?
@statisticianclub 4 years ago
Really beneficial.
@Rundtj45 3 years ago
What is the difference between the long and short run? Do you have any class about that?
@sidharthmohanty6434 3 years ago
Thanks
@anupamagarwal3976 1 year ago
Perfect 5 minutes to understand any topic.
@AricLaBarr 1 year ago
Thank you!
@insideonionyt 4 years ago
It's damn awesome!
@Tomahawk1999 4 years ago
Dear Aric, can an AR model have other predictors? And if yes, what class of models is that?
@AricLaBarr 4 years ago
Yes they can! AR models are long memory models, but there are also short memory models (think quick shocks that don't last long in time) called Moving Average (MA) models. That is the next video about to come out! If you are talking about normal predictors (think X's in linear regression) then this class of model is called an ARIMAX model. I'll have a video on these coming soon!
@Tomahawk1999 4 years ago
@AricLaBarr Thanks for the quick reply! I had to review a paper last week which used predictors (like X's) to examine stock prices in a time series model, and I really had no clue. If and when you make a video, please do include how to run and evaluate these models. Thanks a lot, stay safe.
@ValentinLeLay 11 months ago
Hi! At 3:33 you wrote Yt = w/(1-ø) + ø^t Y_1 + ..., but shouldn't it be Yt = w/(1-ø) + ø^t Y_0 + ..., since it's basically ø^t Y_(t-t) = ø^t Y_0?
@AricLaBarr 10 months ago
You are correct! That should be Y_0 or phi^(t-1). I should have had the left hand side equal Y_t+1 and then my math would work better :-)
@makting009 4 years ago
Sir, one video about moving averages, please.
@AricLaBarr 4 years ago
Definitely! Be on the look out this week!
@GameinTheSkin 3 years ago
You are a more level-headed StatQuest; wouldn't mind singalongs though.
@zubairkhan-hz1vz 5 years ago
Please do an ARIMA model.
@waimyokhing 5 years ago
What is an exponential autoregressive model?
@razzlfraz 4 years ago
Like this? en.wikipedia.org/wiki/Exponential_smoothing
@andresgonzalez-nl8or 3 months ago
Shouldn't it be Φ > 1 and not Φ < 1?
@AricLaBarr 23 hours ago
Not if you want stationarity. To be stationary, we want the value of phi to be less than 1 so that when raised to higher powers we have lower and lower impact on that observation the further back in time we go.
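A tiny check of that "less than 1" point (illustrative numbers, not from the video): raising φ to higher powers shrinks the weight on older observations only when |φ| < 1; with φ > 1, old shocks blow up instead of fading.

```python
# Weights phi**k on observations k steps back, sampled every 5 lags.
stationary = [0.8 ** k for k in range(0, 21, 5)]  # |phi| < 1: weights fade
explosive = [1.2 ** k for k in range(0, 21, 5)]   # phi > 1: weights grow

print([round(w, 3) for w in stationary])  # decays toward 0
print([round(w, 3) for w in explosive])   # grows without bound
```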
@batolhashimi6863 2 years ago
I wish you were my professor instead of him.
@andreneves6064 4 years ago
Slides, please.
@abderrahimba7390 2 years ago
Wooow
@HardKore5250 4 years ago
GPT-3