SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning

14,041 views

Steve Brunton

1 day ago

SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning
by Nicholas Zolman, Urban Fasel, J. Nathan Kutz, Steven L. Brunton
arxiv paper: arxiv.org/abs/2403.09110
github code: github.com/nzolman/sindy-rl
Deep reinforcement learning (DRL) has shown significant promise for uncovering sophisticated control policies that interact in environments with complicated dynamics, such as stabilizing the magnetohydrodynamics of a tokamak fusion reactor or minimizing the drag force exerted on an object in a fluid flow. However, these algorithms require an abundance of training examples and may become prohibitively expensive for many applications. In addition, the reliance on deep neural networks often results in an uninterpretable, black-box policy that may be too computationally expensive to use with certain embedded systems. Recent advances in sparse dictionary learning, such as the sparse identification of nonlinear dynamics (SINDy), have shown promise for creating efficient and interpretable data-driven models in the low-data regime. In this work we introduce SINDy-RL, a unifying framework for combining SINDy and DRL to create efficient, interpretable, and trustworthy representations of the dynamics model, reward function, and control policy. We demonstrate the effectiveness of our approaches on benchmark control environments and challenging fluids problems. SINDy-RL achieves comparable performance to state-of-the-art DRL algorithms using significantly fewer interactions in the environment and results in an interpretable control policy orders of magnitude smaller than a deep neural network policy.
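For readers who want a concrete picture of the SINDy half of the pipeline, below is a minimal, self-contained sketch (not the authors' implementation; see the GitHub repo above for that). It fits a sparse polynomial dynamics surrogate with sequentially thresholded least squares (STLSQ) and then steps the learned model forward as a cheap stand-in environment, the kind of surrogate a model-based RL loop could train a policy against. The function names and the toy system are illustrative assumptions, not code from the paper.

```python
import numpy as np

def library(x, u):
    """Quadratic polynomial dictionary Theta([x, u]): constant, linear, and pairwise terms."""
    z = np.column_stack([x, u])
    feats = [np.ones(len(z))]
    n = z.shape[1]
    for i in range(n):
        feats.append(z[:, i])
    for i in range(n):
        for j in range(i, n):
            feats.append(z[:, i] * z[:, j])
    return np.column_stack(feats)

def fit_sindy(X, U, Xdot, threshold=0.05, iters=10):
    """Sequentially thresholded least squares: Xdot ~= Theta(X, U) @ Xi with sparse Xi."""
    Theta = library(X, U)
    Xi = np.linalg.lstsq(Theta, Xdot, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold              # prune near-zero coefficients
        Xi[small] = 0.0
        for k in range(Xdot.shape[1]):              # refit the surviving terms per state
            keep = ~small[:, k]
            if keep.any():
                Xi[keep, k] = np.linalg.lstsq(Theta[:, keep], Xdot[:, k], rcond=None)[0]
    return Xi

def surrogate_step(x, u, Xi, dt=0.01):
    """One Euler step of the learned surrogate dynamics (a cheap 'environment' step)."""
    xdot = library(x[None, :], u[None, :]) @ Xi
    return x + dt * xdot.ravel()

# Toy data: a damped, driven linear oscillator standing in for the real environment.
rng = np.random.default_rng(0)
dt, x = 0.01, np.array([0.5, 0.0])
X, U, Xdot = [], [], []
for _ in range(500):
    u = rng.uniform(-1.0, 1.0, size=1)
    xdot = np.array([x[1], -x[0] - 0.1 * x[1] + u[0]])   # true (unknown) dynamics
    X.append(x); U.append(u); Xdot.append(xdot)
    x = x + dt * xdot
Xi = fit_sindy(np.array(X), np.array(U), np.array(Xdot))
print(Xi)   # sparse coefficient matrix; each column gives one state derivative
```

In the paper the same dictionary-learning idea is also used to build surrogate rewards and sparse polynomial policies; the sketch above only covers the dynamics surrogate.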
%%% CHAPTERS %%%
00:00 Intro
01:25 What is Reinforcement Learning?
03:12 Reinforcement Learning Drawbacks
05:20 Dictionary Learning and SINDy
06:55 SINDy-RL: Environment
11:42 SINDy-RL: Reward
23:25 SINDy-RL: Agent
14:48 SINDy-RL: Uncertainty Quantification
20:07 Recap and Outro

Comments: 22
@deltax7159
@deltax7159 29 days ago
You guys are so brilliant. Such a great idea; I would love to hear a podcast with you all talking about how you came up with these ideas and the life cycle of SINDy-RL.
@brianarbuckle2284
@brianarbuckle2284 25 days ago
Great work. This is fantastic!
@JoshuaSheppard-pp5iz
@JoshuaSheppard-pp5iz 24 days ago
Bold steps ... thrilling work! I look forward to working through the implementation.
@TheRubencho176
@TheRubencho176 28 days ago
Impressive! Thank you very much for sharing and for the inspiration.
@jimlbeaver
@jimlbeaver 29 days ago
Great presentation, very interesting approach. I'm curious about the intuition behind the ensemble and eager to read more. Thanks!
@Eigensteve
@Eigensteve 25 days ago
Thanks Jim! The ensembling gives us much more robustness to noisy data and to very few data samples, so it lets us train models much more quickly than NN models.
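(For the curious: a minimal bootstrap-ensemble sketch built on the toy fit_sindy above, illustrative only and not the authors' code. Fitting many models on resampled data gives a spread of coefficients that can be used to flag where the surrogate dynamics are untrustworthy.)

```python
def fit_sindy_ensemble(X, U, Xdot, n_models=20, threshold=0.05, seed=0):
    """Bagging over samples: the coefficient spread acts as an uncertainty estimate."""
    rng = np.random.default_rng(seed)
    coefs = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))   # bootstrap resample with replacement
        coefs.append(fit_sindy(X[idx], U[idx], Xdot[idx], threshold=threshold))
    coefs = np.stack(coefs)
    return coefs.mean(axis=0), coefs.std(axis=0)     # mean model and per-term spread
```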
@ClicheKHFan
@ClicheKHFan 28 days ago
Amazing. I've been looking for something like this.
@drj92
@drj92 26 days ago
Has your lab considered experimenting with Kolmogorov-Arnold Networks in combination with SINDy? It feels like a potentially excellent match. Their approach to network sparsification, in particular, seems like it could be automated in a very interesting way via SINDy. In the recent paper they fix and prune activation functions by hand, but it seems you could instead use SINDy to automatically fix a particular activation function once it fits a dictionary term beyond some threshold. Love the presentation!
@Eigensteve
@Eigensteve 25 days ago
Neat idea -- definitely thinking about ways of connecting these topics. Thanks!
@Idonai
@Idonai 29 days ago
Thanks for the presentation. Do I understand correctly that this whole process could be automated to produce highly efficient agents, or do some aspects still require manual work? Also, how well does it scale to significantly harder RL problems? Does the technique stay computationally efficient (e.g. compared to PPO) in these harder environments? Could it be combined with reinforcement learning from human feedback (RLHF) in a practical manner?
@awsomeguy563
@awsomeguy563 28 days ago
Absolutely brilliant
@Pedritox0953
@Pedritox0953 29 days ago
Great video!
@musicarroll
@musicarroll 27 days ago
Nick: Excellent work! This is genuine progress in AI, integrating state-of-the-art state estimation with decision making (RL). I would love to see this further refined using POMDPs (Partially Observable Markov Decision Processes).
@sai4007
@sai4007 26 days ago
Check out the PlaNet and Dreamer models.
@kevinarancibiacalderon9039
@kevinarancibiacalderon9039 29 days ago
Thanks for the video!
@xueqiu6384
@xueqiu6384 1 day ago
Curious how the fitting can accelerate the training process. Are there any assumptions on the action space, state space, or environment? Thanks for your attention.
@alexxxcanz
@alexxxcanz 29 days ago
Great!
@SrZonne
@SrZonne 28 days ago
Amazing
@juleswombat5309
@juleswombat5309 27 days ago
Interesting
@Student-ve5ug
@Student-ve5ug 22 days ago
Dear Sir, if we want to use reinforcement learning (RL) in a specific environment, I am concerned that the trial-and-error approach will produce many errors, some of which may have negative consequences. I am also unsure how many attempts the RL model will need to reach the optimal decision. How can this challenge be addressed?
@srikanthtupurani6316
@srikanthtupurani6316 29 days ago
This is so amazing I don't have words. DeepMind made computers play Go and chess using reinforcement learning. It is simply superb.
@nvjt101
@nvjt101 18 days ago
Real AI is RL