Physics Informed Neural Networks (PINNs) [Physics Informed Machine Learning]

71,635 views

Steve Brunton · A day ago

Comments: 76
@juandiegotoscano_brown · A month ago
Thank you so much, Prof. Brunton, for recommending my video on PINNs! It's an honor to have my work mentioned on your channel. I appreciate your support and the incredible work you do making advanced topics accessible to the community!
@jiaminxu7275 · 5 months ago
Hi Prof. Brunton, I am a Ph.D. student at UT Austin majoring in Mechanical Engineering with a specialization in dynamical systems and control. Your videos have been helping me ever since I began my Ph.D., either by giving me a deeper understanding of fundamental knowledge or by broadening my horizons. I just want to express my great gratitude to you again, and I hope I can meet you at a conference someday so that I can say thank you in person.
@The_Quaalude · 5 months ago
Getting a PhD and learning from YouTube is wild 😭
@arnold-pdev · 5 months ago
@@The_Quaalude Why?
@The_Quaalude · 5 months ago
@@arnold-pdev bro is paying all that money just to learn something online for free
@kaihsiangju · 5 months ago
@@The_Quaalude Usually, PhD students in the U.S. get paid and don't need to pay tuition.
@Sumpydumpert · 5 months ago
I threw some concepts up on Reddit (grand unified theory) and a few other places for a binary growth function based on how the internet works across all these different platforms.
@rehankhan-gn2jr · 6 months ago
The way of teaching is highly beneficial and outstanding. Thank you, Steven!
@alessandrobeatini1882 · 6 months ago
This is hands down one of the best videos I've seen on YouTube. Great work, keep it up!
@markseagraves5486 · 5 months ago
Very helpful, Steven. I work in consciousness studies and too often find that the math is written off as too complicated. On the other side, many computational scientists may write off consciousness studies as too ethereal to be of much value. Bridging these two worlds with insight and rigor, I feel, advances our understanding of both artificial and human intelligence. You have contributed to this effort here. Thank you.
@code2compass · 5 months ago
Steve your videos are always helpful, clear and concise. Thank you so much for such amazing content. You are my hero
@ryansoklaski8242 · 5 months ago
I would love to see a video on Universal ODEs (which leverage auto-diff through diffEQ solvers). Chris Rackauckas' work on these methods in the Julia language has been striking; I would love to see your take on it.
@Eigensteve · 5 months ago
Already filmed and in the queue :)
@ryansoklaski8242 · 5 months ago
@@Eigensteve I'm so excited to hear this. I recommend you so highly to my students and colleagues. I just wish I had your lessons when I was a college student way back when. Thanks for everything.
@mithundeshmukh8 · 5 months ago
Please share the references; only one link is visible.
@tillsteh7273 · 5 months ago
Dude, they are literally in the video. Just use Google.
@DrakenRS78 · 5 months ago
Also - take a look at his textbook for further reference
@aliabdollahian1465 · 2 months ago
Truly great explanation! It really helps me understand the concepts deeply. You're a hero, Steve! Thank you for your highly beneficial, outstanding, and, most importantly, free teaching! ❤
@nandhumon2377 · 2 months ago
Great video, and I always enjoy your presentations. I think loss balancing for PINNs should have been included here too.
@clementboutaric3952 · 4 months ago
The fact that writing the physics into the loss function doesn't enforce it, but rather suggests it, can be a good thing if the hypotheses that led to the Navier-Stokes equations (incompressible Newtonian fluid) start to become less solid.
@abhisheksaini5217 · 6 months ago
Thank you, Professor.😃
@pantelisdogoulis8662 · A month ago
Thanks a lot for the video! I would like to ask whether you have encountered PINNs applied to systems described by simple algebraic equations, with no time parameter present.
@MLDawn · 3 months ago
At 29:25, the problem lies in the way backpropagation works! Even though the loss function is physics-informed, the learning algorithm, backpropagation, is far from physics-informed; the neuronal message passing in a traditional neural net does not resemble how the brain works. More specifically, the gradient trajectories used in backprop are shared by both terms of the PINN loss. This means that while minimizing term 1, the network forgets term 2, and vice versa. That is why you need to artificially balance the MLP and physics parts with some coefficient! This is not a proper solution, as it addresses the problem after it has already occurred. I would suggest a fundamental alteration of the training dynamics: NOT using backprop, but instead the Free Energy Principle and, in short, local Hebbian learning. This should create meaningfully factorised portions of the network that specialise in minimising different parts of your loss without constantly being overwritten (i.e., no catastrophic forgetting).
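The loss-balancing issue this comment describes is visible even in a one-parameter toy problem. Below is a minimal NumPy sketch (an illustration, not code from the video; the model, data, and coefficient values are made up): both loss terms act on the same shared parameter, so the coefficient `lam` decides which term wins.

```python
import numpy as np

# Toy version of the two-term PINN loss.
# Model: u(t) = a * t; "physics": the ODE u'(t) = 1, so the residual is (a - 1).
# The data suggests a slope of about 1.2, so the two terms pull the shared
# parameter `a` in different directions and `lam` arbitrates between them.

t_data = np.array([0.0, 0.5, 1.0])
u_data = np.array([0.0, 0.6, 1.2])   # synthetic measurements with slope ~1.2

def fit(lam, a=0.0, steps=500):
    lr = 0.1 / (1.0 + lam)           # step size scaled so both runs converge
    for _ in range(steps):
        g_data = np.mean(2.0 * (a * t_data - u_data) * t_data)  # d/da of data MSE
        g_phys = 2.0 * (a - 1.0)                                # d/da of physics MSE
        a -= lr * (g_data + lam * g_phys)
    return a

a_data_heavy = fit(lam=0.01)   # data term dominates: a ends up near 1.2
a_phys_heavy = fit(lam=100.0)  # physics term dominates: a ends up near 1.0
print(a_data_heavy, a_phys_heavy)
```

With a single parameter the trade-off is transparent; in a real PINN the same tug-of-war plays out across millions of shared weights, which is why the balancing coefficient matters so much in practice.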
@blacklabelmansociety · 5 months ago
Hi Professor Steve. I’d love to see a series on Transformers. Thanks for your content, greetings from Brazil.
@reversetransistor4129 · 5 months ago
Nice, kinda gives me ideas to mix control theories together.
@THEPAGMAN · 5 months ago
This is really helpful; if only you had posted this sooner! Thanks
@calvinholt6364 · 5 months ago
This is much easier to comprehend than the course given by the author GK. He should just point us to you. 😅
@MariaHeger-tb6cv · 5 months ago
I was thinking about your comment that the rules of physics become expressions to be optimized. Unfortunately, I think they are absolute rules that should be enforced at every stage of the process. Or maybe only at the last step? It's like allowing an accountant to make errors, knowing that the overall performance is better.
@drozdchannel8707 · 5 months ago
Great video! It may be useful to do another video about Neural Operators. They are more stable and faster on many physical tasks, as far as I know.
@luc-nh5lo · 2 months ago
Good video! I'm starting to see more about PINNs. I hope one day I'll do a master's degree at an American university like MIT or Stanford, and your video helped me. Thanks (:
@nafisamehtaj8779 · 5 months ago
Prof. Brunton, it would be a great help if you could cover neural operators (DeepONets) in one of your videos. Thanks for all the amazing videos, though; they make learning easier for grad students.
@sedenions · 5 months ago
Have you made a video on embedding and fitting networks for running simulation inference?
@rudypieplenbosch6752 · 5 months ago
I was waiting for this; I hope to see more about these subjects. Thanks a lot.
@alshahriarbd · 5 months ago
I think you forgot to put the link to the PyTorch example tutorials in the description.
@AndrewConsroe · 5 months ago
PINN foundation models, even if domain-specific at first, would be really cool. I see one paper from a quick Google search with some early positive results. Even if you have to fine-tune to your problem, it would beat training from scratch for every new application. I wonder if the architecture could be modified to separate the physics from the data to make the fine-tuning more effective/efficient. Do we have more insight into the phase space of nets with low/zero physics loss?
@moisesbessalle · 5 months ago
Can't you also clip/trim the search space to the possible range of output values to speed things up before inference? So, for example, the velocities will be positive values below some threshold that depends on your setting.
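One common way to impose a known output range, sketched here as an assumption rather than anything shown in the video, is to make the constraint architectural: squash the raw output through a scaled sigmoid so every prediction lands in [lo, hi] by construction, instead of clipping after the fact.

```python
import numpy as np

# Bake a known physical range into the model output itself: whatever the raw
# network output is, the scaled sigmoid maps it into [lo, hi], so the
# optimizer never explores out-of-range predictions.
lo, hi = 0.0, 30.0  # illustrative bounds, e.g. on a flow speed

def bounded_output(raw):
    return lo + (hi - lo) / (1.0 + np.exp(-raw))

raw = np.array([-100.0, 0.0, 100.0])   # stand-in for raw network outputs
v = bounded_output(raw)                # every value is guaranteed inside [lo, hi]
```

Unlike a hard clip, whose gradient is zero outside the range, this transform keeps gradients well-defined everywhere, so training is not affected by dead regions.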
@mostafasayahkarajy508 · 5 months ago
Thank you very much for the lecture. I am looking forward to your next lecture on this topic.
@Obbe79 · 5 months ago
PINNs usually require more training. A lot of attention must be given to activation functions.
@valgorbunov1353 · A month ago
Great video as always. Quick question: you said you would include resources in the description, but I don't see any links to the tutorials, only a link to the original paper describing PINNs. Am I looking in the wrong section? I was able to search for the sources you referenced thanks to the description, but I think actual links would help other viewers.
@tshepisosoetsane4857 · 3 months ago
Wooooooow, I am back in class: physics, maths, chemistry, electrical, control systems.
@caseybackes · 5 months ago
I knew someone would end up working on this soon. Really excited to see some sophisticated applications!
@arbor318 · 4 months ago
The idea is cool, but I wonder how truly effective it is, because once you add a penalty function based on physics, you have probably removed a lot of solutions suggested by neural networks.
@zfrank3777 · 4 months ago
Will there be a problem if the real system is chaotic?
@alexanderskusnov5119 · 5 months ago
What about Kolmogorov-Arnold networks (KAN)?
@thepanzymancan · 5 months ago
Specifically asking with regard to the spring-mass-damper system: how well does the trained NN perform when you give it different initial values than the ones used for training? In general, when you have ODEs of a mechanical system, can you train the NN (or another architecture) with just one data set of the system doing its thing (one that captures both transients and steady-state dynamics), or do you need different "runs" of the system exploring many combinations of states for the NN to be generalizable in the end? I want to start exploring the use of PINNs for my research and would like to hear PINN users' opinions and experiences. Thanks!
@Jononor · 5 months ago
I recommend testing it out yourself! Great way of getting into it, building intuition and experience on simplified problems
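As a concrete starting point for experimenting with the spring-mass-damper setup asked about above, here is a self-contained NumPy sketch of the "data misfit plus ODE residual" objective. To stay dependency-free it uses a polynomial surrogate and linear least squares in place of a neural network and gradient descent; the parameter values, the single training "run", and the weight `lam` are all illustrative assumptions. Note that a model fit this way is tied to the trajectory's initial conditions, so it will not generalize to different initial values unless you train across many runs or feed the initial conditions in as extra inputs.

```python
import numpy as np

# Spring-mass-damper: m*u'' + c*u' + k*u = 0 (illustrative parameter values).
m, c, k = 1.0, 0.5, 4.0

# Analytic underdamped solution for u(0) = 1, u'(0) = 0, used as "measurements".
wn = np.sqrt(k / m)
zeta = c / (2.0 * np.sqrt(m * k))
wd = wn * np.sqrt(1.0 - zeta**2)
def u_true(t):
    return np.exp(-zeta * wn * t) * (np.cos(wd * t) + (zeta * wn / wd) * np.sin(wd * t))

deg = 8                               # surrogate model: u(t) = sum_j coef_j * t**j
t_data = np.linspace(0.0, 2.0, 5)     # one short "run" of sparse measurements
t_coll = np.linspace(0.0, 2.0, 30)    # collocation points for the ODE residual

def basis(t, d=0):
    """Matrix whose columns are the d-th derivatives of the monomials t**j."""
    T = np.zeros((len(t), deg + 1))
    for j in range(deg + 1):
        if j >= d:
            scale = np.prod(np.arange(j, j - d, -1)) if d > 0 else 1.0
            T[:, j] = scale * t ** (j - d)
    return T

# Stack data-misfit rows and weighted physics-residual rows into one linear
# least-squares problem (the quadratic analogue of the two-term PINN loss).
lam = 1.0
A_phys = m * basis(t_coll, 2) + c * basis(t_coll, 1) + k * basis(t_coll, 0)
A = np.vstack([basis(t_data), np.sqrt(lam) * A_phys])
b = np.concatenate([u_true(t_data), np.zeros(len(t_coll))])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

# Evaluate on a dense grid inside the training window.
t_test = np.linspace(0.0, 2.0, 200)
err = np.max(np.abs(basis(t_test) @ coef - u_true(t_test)))
print(err)
```

Swapping the polynomial for an MLP and the least-squares solve for a stochastic optimizer recovers the usual PINN formulation; the structure of the objective (data misfit plus weighted ODE residual at collocation points) is unchanged.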
@muthukamalan.m6316 · 5 months ago
Wonderful content; any code sample would be helpful.
@anthonymiller6234 · 5 months ago
Awesome video again Steve. Thanks so much.
@ayushshukla9959 · A month ago
I am really very sorry, sir, but I am unable to work out how PINNs replace CFD and what the difference is, as I have to use them in a project.
@Sumpydumpert · 5 months ago
Loved the video ❤️❤️
@cfddoc · 5 months ago
No audio?
@notu483 · 5 months ago
What if you use KAN instead of MLP?
@arnold-pdev · 5 months ago
Sounds like the start of a research question
@Anorve · 5 months ago
Fantastic, as always!
@victormurphy3511 · 5 months ago
Great video. Thank you.
@mintakan003 · 5 months ago
Is there anything that works well for chaotic systems?
@arnold-pdev · 5 months ago
Think about what the definition of "chaos" is, and you'll have your answer.
@MyrLin8 · 5 months ago
Excellent, thanks :)
@commonwombat-h6r · 5 months ago
Very nice!
@googleyoutubechannel8554 · 2 months ago
This feels kind of backwards relative to what (I'd guess) NNs could do for physics. Wouldn't you want to use NNs to discover better fundamental relationships by letting them have a go, tabula rasa, at a huge amount of raw, 'agnostic' data? So many physics models have problems being useful, are statistical, or are hand-waving spherical-cow models; heck, most physics is a bunch of properties and operators developed before computers even existed. Why not use the power of NNs to try to discover better, more useful dynamics, better _fundamental properties and operators_, instead of using them as sort of a shitty solver?
@johnmorrell3187 · 2 months ago
Two thoughts in response. First, for a lot of the problems mentioned here, like fluid flow, we do have very good PDEs that describe the problem very intuitively but are very difficult to solve. So the existing equation is good, and we're not really struggling to explain the physics; it's just hard to work with. Second, even if an NN can learn some novel equation from, for example, lots of measured data, there's usually no useful way to get the equation OUT of the NN. Say I'm looking at some particle physics problem, I have tons of data but no good equation, and I manage to get an NN to predict new data well. That NN has clearly learned some useful relationship, but there's nothing a physicist could take from the NN's parameters and generalize; the solution is not useful or human-readable beyond its predictive power.
@googleyoutubechannel8554 · 2 months ago
@@johnmorrell3187 You're being tricked by math notation and a hundred years of hubris. You can formulate almost any relationship as a PDE, regardless of how well you understand it, if you can find a single relation between two (made-up) properties; 'PDEs that are hard to solve' is identical to 'shitty model'.
@The_Quaalude · 5 months ago
Who else is high af rn⁉️
@Sumpydumpert · 5 months ago
Wonder how AI is gonna use this?
@alexroberts6416 · 5 months ago
I'm sorry, what? 😁
@arnold-pdev · 5 months ago
PINNs have to be one of the most over-hyped ML concepts... and that's stiff competition.
@arnold-pdev · 5 months ago
On one level, it's an unprincipled way of doing data assimilation. On another level, it's an unprincipled way of doing numerical integration. Yawn. Great vid tho!
@SylComplexDimensional · 4 months ago
All of your shit from yesterday forward won’t get seen