Deep Learning to Discover Coordinates for Dynamics: Autoencoders & Physics Informed Machine Learning

  133,877 views

Steve Brunton

A day ago

Comments: 108
@liamtsai2179 (3 years ago)
The YT algorithm does know where to take me; never thought I'd sit through a lecture in my leisure time fully engaged. Very well done!
@AICoffeeBreak (3 years ago)
Knowing a lot about autoencoders already, it is useful to see how they start to dissipate into other research areas, like physics (my favorite!). Great to see a good explanation of ML as a tool for further discovery. Thanks for this video!
@wibulord926 (1 year ago)
Can't believe I'm seeing you here; your videos are helpful too. Thank you a lot.
@ergolibersum (3 years ago)
I might just have found my research topic for my master's. Fascinating, thanks. Besides that, the quality of the video deserves remarks: Dark background which is good for eyes, persistently high quality graphics, and a narrator who does his best to create understanding with a decent use of English.
@aidankennedy6973 (3 years ago)
Incredible work your team is doing. So much to think about, with incredibly wide ranging applications
@HeitorvitorC (3 years ago)
Thank you for your videos, Steve! Also, your gesticulation eases the complexity of your talk significantly. Keep up with the good work!
@marioskokmotos8274 (3 years ago)
Awesome work! Thanks for sharing in such a digestible way! I feel we cannot even start to imagine in how many different fields this approach could be used.
@gammaian (3 years ago)
Your channel is incredible Prof. Brunton, thank you for your work! There is so much value here
@jimlbeaver (3 years ago)
This is the most amazing stuff you guys have come up with so far!!! Awesome…great job.
@alberto.caballero (3 years ago)
Awesome work. I can't believe I understood most of this topic. One of the best explanations I have seen so far.
@alfcnz (3 years ago)
Cool, nice lecture! 🤓🤓🤓
@Eigensteve (3 years ago)
Thanks!
@lablive (3 years ago)
I'm lucky to meet this work positioned between the 3rd and 4th science paradigms. As mentioned at the end of this video, I think the key to the interpretability is to take advantage of inductive biases described as existing models or algorithms for forward/inverse problems to design the encoder, decoder, and loss function.
@diegocalanzone655 (3 years ago)
Brought here by the YT algorithm while finishing my BS thesis on non-physics-informed autoencoders learning from the Shallow Water Equations. I will definitely dedicate further study to the lecture content. Thanks!
@jessegibson3548 (3 years ago)
Thank you for this vid. Really great content you are putting out for the community Steve.
@jeroenritmeester73 (3 years ago)
Hi Steve, very interesting video. One remark on the slides that you use: I tend to watch videos with closed captions despite having average hearing, because it helps me keep track of what you're saying. I imagine that people with hearing impairments will also do this, but sometimes elements on your slides overlap with YouTube's space for subtitles, like the derivative at 1:45. Perhaps this is something you could take into account, particularly for slides that do not contain many different elements and allow for scaling. Thanks again.
@iestynne (3 years ago)
This was a super interesting one. Thank you very much for another engaging whirlwind tour through recent advances in computer science! :)
@peilanhsu (4 months ago)
Such a gem of a video! Thank you!!
@danberm1755 (1 year ago)
Fantastic discussion! Love that you cover the complexities so in-depth.
@rockapedra1130 (1 year ago)
Nice but would love to see some demos of the results. For example, the equation of the pendulum, the reconstruction from the found dynamics and comparison between the two.
@zhanzo (3 years ago)
I wish I were able to press the like button more than once.
@skeletonrowdie1768 (3 years ago)
Thanks so much! This definitely helped me get into deep learning for dynamical systems. I am working on a problem where I want to classify the state of a viral particle near a membrane. I transformed a lot of simulation frames into structural descriptors. I am at the point where I need to decide on an architecture and loss functions to learn. I have begun naively with a dense neural network. This however seems very interesting; not directly, but it could be another input for the DNN. The z could be describing certain constant dynamics surrounding the viral particle, which could help classify the state. Anyway, thanks a lot!
@__-op4qm (2 years ago)
Very kindly structured explanations like this can make everyone feel welcome and interested) This is exactly why I subscribed to this channel almost 2 years ago; all the videos are very inviting and welcoming, and by the end they leave a calm sense of curiosity balanced with a pinch of reassurance, free of any unnecessary panic. In other places these types of subjects are often presented with a thick padding of jargon and dry math abstractions, but not here. Here the explanations are distilled into a sparse latent form without loss of generality and with a clear reminder of the real-life value of these methods.
@SaonCrispimVieira (3 years ago)
Professor Brunton, thanks to you and your teammates for the amazing content. I think it would be desirable to correct the pendulum videos, because the images are affected by an affine transformation due to lens distortion; looking at the bottom line of the video you can see how distorted it is. There are libraries to identify the camera's affine-transformation parameters using a chessboard, tracking the distortion of the corner coordinates.
@alfcnz (3 years ago)
You can easily factor the affine transformation in the encoder (and the inverse one in the decoder). You don't always have access to distortion correction settings, and as long as you've been using the same capturing equipment, you will be able to factor such transformations during training.
@SaonCrispimVieira (3 years ago)
@@alfcnz Professor Canziani, it's amazing to have your answer here; in a way, I'm your virtual machine learning student on youtube! Thanks a lot to you and your teammates for the amazing content. I totally agree, especially when it comes to a linear transformation that would be easily understood by the network. My biggest concern is that this distortion could be wrongly treated as part of the problem's physics, being more of an observational error, especially when linearity is enforced in the dynamics discovery.
@maythesciencebewithyou (3 years ago)
@@alfcnz But if you trained it on distorted image data, wouldn't it make a false correction to undistorted image data?
@SaonCrispimVieira (3 years ago)
@@iestynne It is not difficult to calibrate the camera!
@jinghangli623 (1 year ago)
I've been looking for some insights on how to leverage deep learning to optimize our MRI transmit coil. This has been extremely helpful.
@Ejnota (3 years ago)
How much I love these videos and the quality of the software they use!
@MaxHaydenChiz (3 years ago)
This is a really good video. Really well explained and it let me see how your field was using this tech. Thanks for posting it. It sounds like you are doing a lot of interesting research. I'll keep an eye on your channel now that the algorithm recommended it to me.
@dr.mikeybee (3 years ago)
I've just been learning about how to use PCA to reduce dimensionality. Now I see one can go further and learn the meaning of the linear combination at the bottleneck. I don't really understand how one can use additional loss functions to find that meaning, but now I know it can be found. I'll need to think about it. Thank you.
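One way to see how an additional loss term can shape the bottleneck: add a penalty on the latent code to the reconstruction error. A minimal numpy sketch (the L1 choice and the weighting `lam` are illustrative assumptions, not the specific losses used in the paper):

```python
import numpy as np

def composite_loss(x, x_hat, z, lam=0.1):
    """Reconstruction error plus an L1 sparsity penalty on the latent code z.

    The extra term biases training toward latent coordinates where only a
    few components are active, which is one route to a more interpretable
    bottleneck.
    """
    reconstruction = np.mean((x - x_hat) ** 2)
    sparsity = lam * np.mean(np.abs(z))
    return reconstruction + sparsity

x = np.ones(4)
x_hat = np.ones(4)                       # perfect reconstruction in both cases
z_sparse = np.array([1.0, 0.0, 0.0, 0.0])
z_dense = np.array([0.5, 0.5, 0.5, 0.5])

# Same reconstruction error, but the sparser code is penalized less, so
# gradient descent on this loss prefers it.
assert composite_loss(x, x_hat, z_sparse) < composite_loss(x, x_hat, z_dense)
```

With identical reconstructions, the training pressure now favors the sparser (and often more interpretable) latent code.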
@drskelebone (3 years ago)
I will always love that the simple solution was just returned as the simple solution. :D
@weeb3277 (3 years ago)
Very esoteric video. I like. 👍
@AyunAyun-m2y (1 year ago)
I tried to use an autoencoder to do anomaly detection for an anti-fraud task in social media. It's a good way to do information compression. But I never thought it could be used in model discovery for science! AI will change the game of science research today!
@johnsalkeld1088 (3 years ago)
The linear areas seem to be a maximizing of the neighbourhoods implied by the implicit function theorem. I am probably wrong; it was 1987 when I studied this.
@ernstuzhansky (8 months ago)
This is very cool!
@weert7812 (3 years ago)
Do you know of any jupyter notebook examples in say Keras or Pytorch that give an example of how to do this?
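The basic structure is easy to sketch framework-agnostically: a minimal linear autoencoder trained by plain gradient descent in numpy (an illustrative toy, not the paper's architecture; data sizes and learning rate are made up). Swapping the weight matrices for `torch.nn.Linear` layers with nonlinearities gives the PyTorch version.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20-dimensional snapshots that actually live on a 2-D subspace.
latent_true = rng.standard_normal((500, 2))
X = latent_true @ rng.standard_normal((2, 20))

# Linear autoencoder: encoder We (20 -> 2), decoder Wd (2 -> 20).
We = 0.1 * rng.standard_normal((20, 2))
Wd = 0.1 * rng.standard_normal((2, 20))

lr, losses = 0.05, []
for _ in range(500):
    Z = X @ We                                 # encode
    err = Z @ Wd - X                           # decode and compare
    losses.append(np.mean(err ** 2))
    gWd = Z.T @ err * (2 / err.size)           # d(loss)/d(Wd)
    gWe = X.T @ (err @ Wd.T) * (2 / err.size)  # d(loss)/d(We)
    We -= lr * gWe
    Wd -= lr * gWd

assert losses[-1] < losses[0]  # training reduces reconstruction error
```

The physics-informed versions discussed in the video add further loss terms (e.g., a dynamics fit on the latent time derivative) on top of the same skeleton.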
@leonardromano1491 (3 years ago)
Nice video! I am very new to this subject (In fact this is the first video I have seen about it), but it seems that essentially what you do is derive dynamics from an action principle (minimizing the generalized loss functional) and so any partially known physics I suppose would just be incorporated by Lagrange multipliers. About the two different approaches for linearisation (going to higher and lower dimension), I think that both are physically motivated. You can definitely expect dynamics to become more linear if you go to higher dimension too. Think about thermodynamics: You can either try to describe average degrees of freedom like entropy, heat, etc. which would follow easy laws, but at the same time you could try and describe the system by describing each individual particle. It wouldn't really be feasible, but it's not unlikely that the dynamics can be described from a simple possibly linear law (like a box full of free collisionless particles in a homogeneous gravitational field).
@have_a_nice_day399 (3 years ago)
Thank you for the amazing video. Would you please give a few simple examples and explain step by step how to use these machine learning algorithms?
@ArxivInsights (3 years ago)
Fantastic video!!
@veil6666 (2 years ago)
Just curious whether your usage of the term "lift" is related to the topological/categorical use of that term? Specifically whenever there is a morphism f: X -> Y and g: Z -> Y then a lift is a map h: X -> Z such that f = gh (i.e. the diagram commutes). I think the analogy works: Let X be the original data space, Z the latent space, and Y = X. The composition gh is a map X -> Z -> X, if we set f = the identity on X, then h and g are the encoder and decoder, then f ≈ gh expresses the reconstruction objective.
@johnsalkeld1088 (3 years ago)
Do you have your presentation available online? Or links to the arXiv pages for the papers referenced? I would love to read them.
@spencermarkowitz2699 (1 year ago)
so amazing
@PedrossaurusRex (3 years ago)
Amazing lecture!
@meetplace (10 months ago)
@3:30 If Steve Brunton says something is "a difficult task", you can be sure it really is a difficult task! :D
@marjankrebelj4007 (3 years ago)
I saw the thumbnail and the title and I assumed this was a course on encoding audio (dynamics) for movie editing. :)
@rrr33ppp000 (3 years ago)
YES
@drskelebone (3 years ago)
Is Steve quiet for everyone? I've been in conferences all week, so I might be set up wrong, but I had to reverse twice to get a clean vocal.
@jeroenritmeester73 (3 years ago)
It's fine for me on mobile
@user255 (3 years ago)
I had to turn up volume quite high, but now hearing just fine.
@beauzeta1342 (7 months ago)
Thank you professor for the very inspiring video! At 12:05, can we say something about the uniqueness of the representation transform phi and psi? Or they may not be unique at all, and may depend on how we train the network?
@AliRashidi97 (3 years ago)
Great lecture . Thanks a lot 🙏
@AllanMedeiros (3 years ago)
Fantastic!
@niccologiovenali7597 (1 year ago)
you are the best
@netoskin (1 year ago)
Amazing!!
@AA-gl1dr (3 years ago)
Thank you so much!
@eerturk (3 years ago)
Thank you.
@joseantoniogambin9609 (3 years ago)
Awesome!
@frankdelahue9761 (2 years ago)
Deep learning is revolutionizing engineering, along with Exascale supercomputing.
@andersonmeneses3599 (3 years ago)
Thanks! 👍🏼
@krishnaaditya2086 (3 years ago)
Awesome Thanks!
@vyacheslavboyko6114 (3 years ago)
23:32 sounds interesting. So you say this is a way to learn the linearizing transform for the convective term of the Navier–Stokes equation? How do you even know whether, after training the network, we end up with a meaningful solution?
@iestynne (3 years ago)
You might not. Sara Hooker has recently been arguing that properties like accuracy and interpretability (among others) may directly conflict, so the better one is, the worse the others are. You might have to sacrifice a 'meaningful' solution for an accurate one.
@majstrstych15 (3 years ago)
Hey Steve, your videos are great! I wanna ask how balanced model reduction can be used in the deep learning autoencoder. I'm asking because with balanced model reduction you are able to find the coordinate transformation that equalizes and diagonalizes the Gramians, but this transformation could turn out to be dense and non-interpretable, right? Could you please explain what the advantage of combining these two would be? Thanks, your big fan!
@haydergfg6702 (3 years ago)
Thank you a lot. I hope you share how to apply it with code.
@FromaGaluppo (3 years ago)
Amazing
@vitorbortolin6810 (3 years ago)
Great!
@prikarsartam (3 months ago)
If I have a very large video feed, isn't doing singular value decomposition extremely computationally expensive?
@Eigensteve (3 months ago)
You can always do a randomized SVD to make it faster
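The randomized SVD mentioned here can be sketched in a few lines of numpy. This is a generic illustration (the function name and parameters are my own, not from the video): sketch the column space with random probes, orthonormalize, then take the SVD of the small projected matrix.

```python
import numpy as np

def randomized_svd(X, r, p=10, seed=0):
    """Approximate rank-r SVD of X via a random range sketch.

    p is an oversampling parameter: r + p random probe vectors are used
    to capture the dominant column space of X.
    """
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((X.shape[1], r + p))
    Q, _ = np.linalg.qr(X @ P)          # orthonormal basis for range(X P)
    Uy, S, Vt = np.linalg.svd(Q.T @ X, full_matrices=False)
    return (Q @ Uy)[:, :r], S[:r], Vt[:r]

# On an exactly rank-5 matrix the sketch recovers the SVD almost exactly.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 300))
U, S, Vt = randomized_svd(A, r=5)
```

The payoff is that the expensive dense SVD is done on a small (r+p)-row matrix rather than on the full video-sized one.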
@kawingchan (3 years ago)
Many nonlinear systems exhibit the phenomenon of chaos (divergence in the "original" coordinates if 2 systems have a tiny difference in their initial conditions). Would be interested to see whether the "recovered" x-hat should also reproduce the chaotic behavior with that same Lyapunov exponent, and also what should happen to the latent z's.
@hfkssadfrew (3 years ago)
To the first question: they do. It was validated in 1990–2000, when numerous engineers and mathematicians played with shallow neural networks. To the second, I don't have an answer.
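For readers who want to check the Lyapunov-exponent idea numerically, a standard textbook sketch (my own illustration; the logistic map stands in for an actual learned model) is to average log|f'(x)| along a trajectory. At r = 4 the exact exponent is ln 2:

```python
import numpy as np

def lyapunov_logistic(r=4.0, x0=0.2, n_transient=1000, n_samples=100_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r x (1 - x)
    by averaging log|f'(x)| = log|r (1 - 2x)| along a trajectory."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_samples):
        x = r * x * (1.0 - x)
        acc += np.log(abs(r * (1.0 - 2.0 * x)))
    return acc / n_samples

# A positive estimate (close to ln 2 at r = 4) indicates chaos.
lam = lyapunov_logistic()
```

In principle the same trajectory-divergence test could be run on a latent model's reconstructed states to see whether chaos survives the autoencoder.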
@radenmuaz7125 (3 years ago)
How do you deal with an external control input u(t) for control problems and robots? Maybe these are called exogenous inputs.
@toastyPredicament (2 years ago)
No this is good
@yoavzack (2 years ago)
Imagine using this to represent a human brain in a low-dimensional space.
@__-op4qm (2 years ago)
probably boils down to 2D ('amount of tasty pizza' x 'amount of tasty bacon') quite precisely. [If even one training example involves brain data in response to pineapple pizza, the gradient instantly explodes, coffee levitates onto keyboard and alien police come to remove pineapple away from pizza, just in time before a black hole forms turning milky-way into a Lorenz attractor.]
@JohnWasinger (3 years ago)
Singular Value Decomposition / Principal Component Analysis / Proper Orthogonal Decomposition (field? / field? / field?)
@zeydabadi (3 years ago)
Am I right that he implied that all those three are the same?
@JohnWasinger (3 years ago)
@@zeydabadi you’re right, they are. I was wondering if certain fields prefer one term over another.
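The equivalence behind the three names is easy to verify numerically: PCA's covariance eigenvalues are the squared singular values of the mean-centered data matrix, scaled by 1/(n-1). A small numpy check (generic data, nothing from the video):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
Xc = X - X.mean(axis=0)                 # mean-center, as PCA requires

# PCA route: eigendecomposition of the sample covariance matrix.
cov = Xc.T @ Xc / (Xc.shape[0] - 1)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]

# SVD route: singular values of the centered data matrix.
s = np.linalg.svd(Xc, compute_uv=False)

# Same spectrum: eigenvalues equal squared singular values / (n - 1).
assert np.allclose(eigvals, s**2 / (Xc.shape[0] - 1))
```

Which name you use mostly signals the community: PCA in statistics, POD in fluids, SVD in numerical linear algebra.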
@mattkafker8400 (3 years ago)
Tremendous video!
@tharunsankar4926 (3 years ago)
How would we train a network like this though?
@NozaOz (3 years ago)
Could someone help me? I’m a student fresh out of high school, I’ve got an Australian-HSC-education in Chemistry, physics and extension 2 maths, I intend on studying physics at university and possibly getting a minor in CS to give me the marketable skills. I’m currently just doing simple things like a code academy course on Python and likely the machine learning skill path. From where I am now, where do I go to understand this video?
@mohdnazarudin2636 (3 years ago)
To understand the video, coding is useless; it is not going to help. You need to understand linear algebra, dynamical systems or ODEs/PDEs, and also the math behind neural networks. Take courses in those subjects.
@MrHardgabi (1 year ago)
Wow, cool but complex; not sure if it could be simplified a bit.
@huyvuquang2041 (1 year ago)
Anybody have a feeling like me? Learning math and science with Harrison Wells?
@Anujkumar-my1wi (3 years ago)
In Wikipedia, state variables are referred to as the variables that describe the mathematical state of the system, and the state as something that describes the system. But isn't the state the minimum set of variables that describes the system? Wikipedia article link: en.wikipedia.org/wiki/State_variable And also, I want to ask: is there any difference between the configuration of a system and the state of a system?
@vg5028 (3 years ago)
Yes, your understanding of state variables is correct. Sometimes it's useful to make a distinction between state variables and a "minimum set" of state variables. State variables are anything that gives you information about the state of the system -- it doesn't always have to be a minimal set. In my experience "configuration" and "state" are similar terms, but I could be wrong about that.
@Anujkumar-my1wi (3 years ago)
@@vg5028 Yes, but isn't the state referred to as the minimum set of variables that completely describes the system (that minimum set of variables being the state variables)? In Wikipedia, though, the state is referred to as something that describes the system, and state variables as something that describes the state of the system. Wasn't the state here referred to as the minimum set of variables, i.e., the state variables?
@Anujkumar-my1wi (3 years ago)
@@vg5028 Well, my question is: why is the definition of state different in this article by MIT: web.mit.edu/2.14/www/Handouts/StateSpace.pdf and in this Wikipedia article: en.wikipedia.org/wiki/State_variable
@hfkssadfrew (3 years ago)
You asked a GREAT question. Think about this: you have a system with 2 state variables; one is always around 0.00001, the other is around -1 to 1. So you will tend to believe this system is approximately 1D. Mathematically, your understanding is 100% right: it has 2 degrees of freedom and no less. But you can think of it as 1D, which makes life a lot easier if you are in the business of modeling and control!
@Anujkumar-my1wi (3 years ago)
@@hfkssadfrew What I am asking is what 'state' is: whether it's referring to the condition of the system or to the mathematical description of the system?
@marku7z (3 years ago)
How do I compute the x-dot in the case where x consists of pixels?
@__-op4qm (2 years ago)
Probably for each pixel separately in 1D by a simple numerical gradient dx/dt, because the joint underlying function over all pixels is unknown (the neural network needs to learn those correlations from examples).
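Concretely, with snapshots stacked along a time axis, per-pixel finite differences are one line of numpy. A generic sketch (shapes, phases, and dt are made up for illustration):

```python
import numpy as np

dt = 0.01
t = np.arange(0, 2 * np.pi, dt)

# Fake video: each "pixel" oscillates with its own phase, shape (time, H, W).
phases = np.linspace(0, np.pi, 16).reshape(4, 4)
frames = np.sin(t[:, None, None] + phases[None, :, :])

# Central differences along the time axis give x-dot per pixel.
frames_dot = np.gradient(frames, dt, axis=0)

# Interior points should match the analytic derivative cos(t + phase).
analytic = np.cos(t[:, None, None] + phases[None, :, :])
assert np.max(np.abs(frames_dot[1:-1] - analytic[1:-1])) < 1e-3
```

Central differences are second-order accurate in the interior, which is usually enough for forming an x-dot training target.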
@MatthewGale-s2w (6 months ago)
😊 I don't know if computers are capable of deep learning Like I just explained our type of learning It don't come from all your Function boards The details that you place in it are your details I can't live your life my friend And your computer will never know what I'm trying to say Unless we were being straight but you don't have a straight life I doubt you make a completely straight computer ...😊 It's personal To understand your construction modeling You see the thing about my life it is not orchestrated by your construction modeling 😊 Even if I had my own chance ... Sometimes the facts ain't even facts... if it ain't even there What could be what won't be That's really not your prediction 😊 Unless it's within your case to understand 😮 Most people don't have these matters and they only predict 😊 Try to be the cause and effect of them Before you predict in the middle of them .... Even if predictions are such outcasts 😊 Even the teacher's pet taught us that ... I won't even use the word persuasions ..... You see a computer has to modify itself to each and every case of individual and the life and standards that they have to live by To understand them You will never help them By a parents point of view You got to take the strong considerations of their wrongs .... Their point of views Were there aiming what they can what they can't I don't need a computer that says well I can't do that I won't learn that 😊 That's what my professor at MIT told me If I can't do that I won't work on that 😊 I said okay you will give me a computer just the same ..... 
😊 Logically I am correct But like I said that's a prediction I am careful about my predictions Because what is important to you is the same that is important to me it's just not important to you to give it to me as much as it was important to just keep it to yourself 😊 I'm a man of discoveries and I can't help but run my mouth 😮 But you're a man with a job and you got nothing else to learn ....😊 We did meet in the middle 😮 I can't help it you're going the other way 😊 Maybe I'm stupid Look we met back in the middle 😢 Call it even damn it
@Tyler-pj3tg (1 year ago)
AI to learn how many black shirts Steve Brunton has
@__--JY-Moe--__ (2 years ago)
wow! this is so fun! I think I made it 2 somewhere, in this switchboard of bowties! I don't know whether 2 call this ''at&t,how can I help U"! or. land of confusion, in deep thought flow's? ha..ha.. yes, my attempt @ humor! thanks so much 4 the lesson! totally love this! good luck!
@MatthewGale-s2w (6 months ago)
You got to be worried about the wrong point of view you feed a computer 😊 As a human we don't make the mistakes 😊 We necessarily know or know what we need or what is needed to be added ....😊 Sometimes no potential strains there 😊 Sometimes we don't have such qualifications as a qualification 😮 Even if you are not qualified a human will work you into qualified Leave it up to a computer 😊 You won't be qualified for s***
@ArbaouiBillel (3 years ago)
AI has gone through a number of AI winters because people claimed things they couldn't deliver
@laxibkamdi1687 (3 years ago)
Sounds really hard.
@tag_of_frank (3 years ago)
First 9 minutes can be summarized with this sentence: "There exists a neural network which can perform SVD."
@hfkssadfrew (3 years ago)
Lol. You can say “there exists a polynomial which can approximately perform any operation”. If you think so, then you still don’t get the point.
@tag_of_frank (3 years ago)
@@hfkssadfrew I think the point is after minute 9.
@MatthewGale-s2w (6 months ago)
😊 next thing you know we got crooked computers 😊 Last time I checked there's not a f****** game on this computer that the game does not f****** cheat or can it play f****** digitally Fair 😊 Ever since they made one f****** computer program You can never trust a f****** poker cards ever again 😊 I don't want to play with your computer 😊 For one it does not know how to f****** shuffle 😊 And for two it don't know how to stop looking at my f****** cards
@gtsmeg3474 (3 years ago)
audio is sooo low WTF
@nerdomania24 (3 years ago)
Inventing my own math from the ground up, and I have no problem with physical systems and AI; you just have to make metrics emergent from a sack of an infinite number of differential forms and just pick one until the metric of self-manifestation isn't statistically correlated.