great explanation & visuals, especially concerning the alpha vectors!
@TheTenorChannel · 2 years ago
Very good video, thank you sir. Loved that tree in the end :)
@chandinivelilani2383 · 2 years ago
Very lucidly explained. Thank you!!
@秋三杯 · 1 year ago
Thank you. Very easy to understand.
@mariiakozlova · 2 years ago
Robert, thank you for the videos! Easy to follow even for newcomers to the field. I am trying to replicate the Crying Baby problem. Julia is installed, Pluto works, and the concise definition of the problem provided at the end works too! Now I want to go step by step with the lecture flow, but I get an error right at the beginning: '@with_kw not defined'. What am I missing?
@mariiakozlova · 2 years ago
That one was solved by moving the line 'using POMDPs, QuickPOMDPs, POMDPModelTools, BeliefUpdaters, Parameters' above the cell calling '@with_kw'. Am I on the right track?
@robertmoss2692 · 2 years ago
@mariiakozlova Yes, that's right. The @with_kw macro is defined in the Parameters package, so you'll need to run "using Parameters" before calling @with_kw.
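For example, a minimal sketch of a cell ordering that works (the struct name and fields below are only illustrative, not necessarily those from the notebook):
```julia
# Load the packages first; Parameters.jl is what provides the @with_kw macro.
using POMDPs, QuickPOMDPs, POMDPModelTools, BeliefUpdaters, Parameters

# Once Parameters is loaded, @with_kw can define a parameter struct with defaults.
@with_kw struct BabyProblemParams
    r_hungry::Float64 = -10.0        # hypothetical penalty while the baby is hungry
    r_feed::Float64 = -5.0           # hypothetical cost of feeding
    p_become_hungry::Float64 = 0.1   # hypothetical chance of becoming hungry per step
end

params = BabyProblemParams()         # all fields take their default values
```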
@mariiakozlova · 2 years ago
I hit a problem calling the QMDP solver as written in the notes. When running the line "qmdp_solver = QMDPSolver(max_iterations=qmdp_iters);" I get the error "UndefVarError: qmdp_iters not defined".
@mariiakozlova · 2 years ago
If I drop the expression inside the parentheses and just call "qmdp_solver = QMDPSolver();", everything goes fine. But then I will probably have trouble playing with the slider later 🤔
@robertmoss2692 · 2 years ago
@mariiakozlova You'll have to define the qmdp_iters variable, either by adding a cell like "qmdp_iters = 100" or by adding a PlutoUI slider with "using PlutoUI" followed by "@bind qmdp_iters Slider(1:100)".
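As a rough sketch of how those cells could look (each statement in its own Pluto cell; the slider range and the `pomdp` variable name are assumptions about the notebook):
```julia
# Cell 1: load PlutoUI for interactive widgets.
using PlutoUI

# Cell 2: bind the slider's value to qmdp_iters (here 1 to 100 iterations).
@bind qmdp_iters Slider(1:100, default=50, show_value=true)

# Cell 3: the solver line from the notes now works, since qmdp_iters exists.
using QMDP
qmdp_solver = QMDPSolver(max_iterations=qmdp_iters)
qmdp_policy = solve(qmdp_solver, pomdp)   # `pomdp` = the Crying Baby problem defined earlier
```
Moving the slider re-runs the dependent cells, so the policy is re-solved with the new iteration count.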
@Usernamestate9317 · 3 years ago
Hi, are there other good videos you would suggest for POMDPs?
@robertmoss2692 · 2 years ago
Pieter Abbeel from UC Berkeley has a nice lecture on POMDPs here: kzbin.info/www/bejne/aJWxoWqHrtR5lc0 (skip to about 37 minutes)
@dikshie · 3 years ago
Can you upload the slides?
@robertmoss2692 · 3 years ago
Absolutely! The slides are posted at the GitHub link in the video description.
@dikshie · 3 years ago
@robertmoss2692 Thank you.
@StevenSiew2 · 3 years ago
This thing feels Bayesian.
@SaMusz73 · 3 years ago
Yes, and I was also wondering about the difference from hidden Markov models. (I am not math trained, but I use stats.) All these applications of statistics and automation models are really blooming and really interesting for biology modelling (the baby example is great). I'll go watch the full course for clarification.
@NoctisCaelus · 3 years ago
What exactly do you mean by "feels Bayesian"? That Bayes' rule is used in belief updating, or that it is Bayesian RL?
@SaMusz73 · 3 years ago
@NoctisCaelus I believe it was about the updating. What isn't Bayesian is the total ignorance of the past, which in the Bayesian "world" is incorporated into the prior. P.S. I didn't get the acronym RL.
@NoctisCaelus · 3 years ago
@SaMusz73 Yes, the belief updating is Bayesian. RL: reinforcement learning. For your question about the difference from HMMs: POMDPs are controlled HMMs. In other words, the state transition is affected by actions. In HMMs you have latent variables (or system states) that emit observations; in POMDPs the state transition additionally depends on actions/control/external input to the system, and you receive a reward for each transition.
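To make that concrete, here is a minimal sketch of a toy two-state POMDP in QuickPOMDPs.jl (an illustrative example, not the problem from the video):
```julia
using POMDPs, QuickPOMDPs, POMDPModelTools

# As in an HMM, the true state is hidden and only emits noisy observations,
# but here the transition also depends on the chosen action, and each step yields a reward.
toy_pomdp = QuickPOMDP(
    states = [:good, :bad],
    actions = [:wait, :fix],
    observations = [:ok, :alarm],
    discount = 0.9,
    transition = function (s, a)
        a == :fix && return Deterministic(:good)   # the action drives the state (unlike an HMM)
        return SparseCat([:good, :bad], s == :good ? [0.9, 0.1] : [0.0, 1.0])
    end,
    observation = (a, sp) -> SparseCat([:ok, :alarm], sp == :good ? [0.8, 0.2] : [0.2, 0.8]),
    reward = (s, a) -> (s == :bad ? -10.0 : 0.0) + (a == :fix ? -5.0 : 0.0),
    initialstate = Deterministic(:good)
)
```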
@SaMusz73 · 3 years ago
@NoctisCaelus Thanks for the clarifications (and the fast reply). So if I understand correctly, you're saying it's a more elaborate model of an "actor" through its perceived observable variables.