SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning

19,301 views

Steve Brunton

1 day ago

Comments: 22
@deltax7159 8 months ago
You guys are so brilliant. Such a great idea; I would love to hear a podcast with you talking about how you came up with these ideas and the life cycle of SINDy-RL.
@musicarroll 8 months ago
Nick: Excellent work! This is genuine progress in AI: integrating state-of-the-art state estimation with decision making (RL). Would love to see this further refined using POMDPs (Partially Observable Markov Decision Processes).
@sai4007 8 months ago
Check out the PlaNet and Dreamer models.
@Idonai 8 months ago
Thanks for the presentation. Do I understand correctly that this whole process could be automated, making highly efficient agents, or do some aspects of it require manual work? Also, how well does it scale to significantly harder RL problems? Does this technique stay computationally efficient (e.g. compared to PPO) in these harder environments? Could it be combined with reinforcement learning from human feedback (RLHF) in a practical manner?
@JoshuaSheppard-pp5iz 8 months ago
Bold steps ... thrilling work! I look forward to working through the implementation.
@srikanthtupurani6316 8 months ago
This is so amazing I don't have words. DeepMind made computers play Go and chess using reinforcement learning. It is simply superb.
@brianarbuckle2284 8 months ago
Great work. This is fantastic!
@TheRubencho176 8 months ago
Impressive! Thank you very much for sharing and for the inspiration.
@jimlbeaver 8 months ago
Great presentation, very interesting approach. I’m curious about the intuition behind the ensemble…eager to read more. Thanks!
@Eigensteve 8 months ago
Thanks Jim! The ensembling gives us much more robustness to noisy data and to very small numbers of samples, so it lets us train models much more quickly than NN models.
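A minimal sketch of the ensembling idea mentioned in this reply, in the spirit of ensemble SINDy (not the paper's implementation): fit sparse regressions on bootstrap resamples of noisy data and aggregate coefficients with the median. The toy dynamics, dictionary, and threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: dx/dt = -2x + 0.5x^3, observed with measurement noise
x = rng.uniform(-1, 1, 200)
dx = -2 * x + 0.5 * x**3 + 0.05 * rng.standard_normal(x.size)

# Dictionary of candidate terms: [x, x^2, x^3]
Theta = np.column_stack([x, x**2, x**3])

def stlsq(Theta, dx, thresh=0.1):
    """Sequentially thresholded least squares (one pass, for brevity)."""
    xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
    small = np.abs(xi) < thresh
    xi[small] = 0.0
    big = ~small
    if big.any():  # refit only the surviving terms
        xi[big] = np.linalg.lstsq(Theta[:, big], dx, rcond=None)[0]
    return xi

# Bagging: fit on bootstrap resamples, aggregate with the median
coefs = []
for _ in range(50):
    idx = rng.integers(0, x.size, x.size)
    coefs.append(stlsq(Theta[idx], dx[idx]))
xi_med = np.median(coefs, axis=0)
print(xi_med)  # expect roughly [-2, 0, 0.5]
```

The median aggregation is what buys robustness: a few bootstrap fits corrupted by noise or unlucky resampling do not drag the consensus coefficients off the true values.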
@xueqiu6384 7 months ago
Curious how the model fitting can accelerate the training process. Are there any assumptions on the action space, state space, or environment? Thanks for your attention.
@drj92 8 months ago
Has your lab considered experimenting with Kolmogorov-Arnold Networks in combination with SINDy? It feels like a potentially excellent match. Their approach to network sparsification, in particular, seems like it could be automated in a very interesting way via SINDy. In the recent paper they fix and prune activation functions by hand, but it seems you could instead use SINDy to automatically fix a particular activation function once it fits a dictionary term beyond some threshold. Love the presentation!
@Eigensteve 8 months ago
Neat idea -- definitely thinking about ways of connecting these topics. Thanks!
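The commenter's thresholding idea could be sketched roughly as follows. This is purely hypothetical (it is not from the SINDy-RL paper or the KAN paper): sample a learned 1-D activation, score each candidate dictionary term by how well a scaled version of it explains the activation (R^2), and "freeze" the activation to the best symbolic term if the fit clears a threshold. The dictionary, threshold, and function names are illustrative assumptions.

```python
import numpy as np

def best_dictionary_fit(z, y, dictionary):
    """Return (name, r2) of the single dictionary term that best explains y(z)."""
    best = (None, -np.inf)
    for name, f in dictionary.items():
        phi = f(z)
        a = np.dot(phi, y) / np.dot(phi, phi)  # least-squares scale factor
        resid = y - a * phi
        r2 = 1.0 - resid.var() / y.var()
        if r2 > best[1]:
            best = (name, r2)
    return best

z = np.linspace(-2, 2, 100)
y = np.sin(z)  # stand-in for a learned KAN edge activation
dictionary = {"sin": np.sin, "tanh": np.tanh, "cubic": lambda z: z**3}

name, r2 = best_dictionary_fit(z, y, dictionary)
if r2 > 0.99:  # fix the activation symbolically when the fit is strong
    print(f"freeze activation as {name}")
```

In a full version, activations that never clear the threshold would stay learnable, and the frozen ones would shrink the network toward an interpretable symbolic model.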
@ClicheKHFan 8 months ago
Amazing. I've been looking for something like this.
@awsomeguy563 8 months ago
Absolutely brilliant
@Student-ve5ug 8 months ago
Dear Sir, if we want to use reinforcement learning (RL) in a specific environment, I am concerned that the trial-and-error method will result in many errors, some of which may have negative consequences. Furthermore, I am unsure how many attempts the RL model will need to reach an optimal and correct decision. How can this challenge be addressed?
@kevinarancibiacalderon9039 8 months ago
Thanks for the video!
@Pedritox0953 8 months ago
Great video!
@alexxxcanz 8 months ago
Great!
@SrZonne 8 months ago
Amazing
@juleswombat5309 8 months ago
Interesting
@nvjt101 8 months ago
Real AI is RL