Thank you so much, Prof. Brunton, for recommending my video on PINNs! It's an honor to have my work mentioned on your channel. I appreciate your support and your incredible work in making advanced topics accessible to the community!
@jiaminxu7275 · 5 months ago
Hi Prof. Brunton, I am a Ph.D. student at UT Austin majoring in Mechanical Engineering with a specialization in dynamical systems and control. Your videos have been helping me, either by giving me a deeper understanding of fundamental knowledge or by broadening my horizons, ever since I began my Ph.D. I just want to express my great gratitude to you again, and I hope I can meet you at a conference so that I can say thank you in person.
@The_Quaalude · 5 months ago
Getting a PhD and learning from YouTube is wild 😭
@arnold-pdev · 5 months ago
@@The_Quaalude Why?
@The_Quaalude · 5 months ago
@@arnold-pdev bro is paying all that money just to learn something online for free
@kaihsiangju · 5 months ago
@@The_Quaalude Usually, PhD students in the U.S. get paid and do not need to pay tuition.
@Sumpydumpert · 5 months ago
I threw some concepts up on Reddit (grand unified theory) and some other places for a binary growth function based on how the internet works across all these different platforms.
@rehankhan-gn2jr · 6 months ago
The way of teaching is highly beneficial and outstanding. Thank you, Steven!
@alessandrobeatini1882 · 6 months ago
This is hands down one of the best videos I've seen on YouTube. Great work, keep it up!
@markseagraves5486 · 5 months ago
Very helpful, Steven. I work in consciousness studies and too often find that the math is written off as too complicated. On the other side, many computational scientists may write off consciousness studies as too ethereal to be of much value. Bridging these two worlds with insight and rigor, I feel, advances our understanding of both artificial and human intelligence. You have contributed to this effort here. Thank you.
@code2compass · 5 months ago
Steve your videos are always helpful, clear and concise. Thank you so much for such amazing content. You are my hero
@ryansoklaski8242 · 5 months ago
I would love to see a video on Universal ODEs (which leverage auto-diff through diffEQ solvers). Chris Rackauckas' work on these methods in the Julia language has been striking; I would love to see your take on it.
@Eigensteve · 5 months ago
Already filmed and in the queue :)
@ryansoklaski8242 · 5 months ago
@@Eigensteve I'm so excited to hear this. I recommend you so highly to my students and colleagues. I just wish I had your lessons when I was a college student way back when. Thanks for everything.
@mithundeshmukh8 · 5 months ago
Please share the references; only one link is visible.
@tillsteh7273 · 5 months ago
Dude, they are literally in the video. Just use Google.
@DrakenRS78 · 5 months ago
Also - take a look at his textbook for further reference
@aliabdollahian1465 · 2 months ago
Truly great explanation! It really helps me understand the concepts deeply. You're a hero, Steve! Thank you for your highly beneficial, outstanding, and most importantly, free teaching! ❤
@nandhumon2377 · 2 months ago
Great video, and I always enjoy your presentations. I think loss balancing for PINNs should have been included in this too.
@clementboutaric3952 · 4 months ago
The fact that writing the physics into the loss function doesn't enforce it but rather suggests it can be a good thing if the hypotheses that led to the NS equations (incompressible Newtonian fluid) start to become less solid.
@abhisheksaini5217 · 6 months ago
Thank you, Professor.😃
@pantelisdogoulis8662 · a month ago
Thanks a lot for the video! I would like to ask whether you have encountered PINNs applied to systems described by simple algebraic equations, with no time parameter present.
@MLDawn · 3 months ago
At 29:25, the problem lies in the way backpropagation works! That is, even though the loss function is physics-informed, the learning algorithm, backpropagation, is far from physics-informed, which means the neuronal message passing in a traditional neural net does not resemble how the brain works. More specifically, the gradient trajectories used in backprop are shared by both terms of the PINN loss! This means that while minimizing term 1, the network forgets term 2, and vice versa. That is why you need to artificially balance the MLP and physics parts with some coefficient! This is not a proper solution, as it addresses the problem after it has already occurred! I would suggest a fundamental alteration of the dynamics of training: NOT using backprop, but instead the Free Energy Principle and, in short, local Hebbian learning. This should create meaningfully factorised portions of the network that specialise in minimising different parts of your loss without constantly being overwritten (i.e., no catastrophic forgetting).
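For readers following along, the coefficient being debated here is the weight on the physics term of the loss. A minimal numpy sketch of such a weighted two-term loss (the surrogate `predict`, the toy ODE dy/dx = -y, and all names are illustrative assumptions, not code from the video):

```python
import numpy as np

def pinn_loss(predict, params, x_data, y_data, x_colloc, lam=1.0):
    """Two-term PINN loss: data fit + lam * physics residual.

    `predict(params, x)` stands in for the neural network; the toy
    physics constraint enforced here is the ODE dy/dx = -y.
    """
    # Term 1: ordinary data-fit loss on observed points.
    data_loss = np.mean((predict(params, x_data) - y_data) ** 2)

    # Term 2: physics residual at collocation points. The derivative is
    # taken by central finite differences for simplicity; a real PINN
    # would use automatic differentiation through the network.
    h = 1e-4
    dy_dx = (predict(params, x_colloc + h) - predict(params, x_colloc - h)) / (2 * h)
    phys_loss = np.mean((dy_dx + predict(params, x_colloc)) ** 2)

    # Both terms are minimized through the same shared parameters, which
    # is why the balancing coefficient `lam` matters so much in practice.
    return data_loss + lam * phys_loss

# y = 2 * exp(-x) satisfies dy/dx = -y exactly, so both terms vanish.
f = lambda p, x: p * np.exp(-x)
x = np.linspace(0.0, 1.0, 20)
loss = pinn_loss(f, 2.0, x, 2.0 * np.exp(-x), x, lam=10.0)
```

With a mismatched parameter (e.g. `params=1.0`), the data term dominates while the physics term stays near zero, which is exactly the kind of imbalance between the two terms that this comment describes.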
@blacklabelmansociety · 5 months ago
Hi Professor Steve. I’d love to see a series on Transformers. Thanks for your content, greetings from Brazil.
@reversetransistor4129 · 5 months ago
Nice, kinda gives me ideas to mix control theories together.
@THEPAGMAN · 5 months ago
This is really helpful; if only you had posted this sooner! Thanks
@calvinholt6364 · 5 months ago
This is much easier to comprehend than the course given by the author GK. He should just point us to you. 😅
@MariaHeger-tb6cv · 5 months ago
I was thinking about your comment that the rules of physics become expressions to be optimized. Unfortunately, I think they are absolute rules that should be enforced at every stage of the process. Or maybe only at the last step? It's like allowing an accountant to make errors on the grounds that the overall performance is better.
@drozdchannel8707 · 5 months ago
Great video! It might be useful to make another video about Neural Operators. As far as I know, they are more stable and faster in many physical tasks.
@luc-nh5lo · 2 months ago
Good video! I'm starting to see more about PINNs. I hope one day I'll do a master's degree at an American university like MIT or Stanford, and your video helped me. Thanks (:
@nafisamehtaj8779 · 5 months ago
Prof. Brunton, it would be a great help if you could cover neural operators (DeepONets) in one of your videos. Thanks for all the amazing videos, which make learning easier for grad students.
@sedenions · 5 months ago
Have you made a video on embedding and fitting networks for running simulation inference?
@rudypieplenbosch6752 · 5 months ago
I was waiting for this; I hope to see more about these subjects. Thanks a lot.
@alshahriarbd · 5 months ago
I think you forgot to put the link to the PyTorch example tutorials in the description.
@AndrewConsroe · 5 months ago
PINN foundation models, even if domain-specific at first, would be really cool. I see one paper from a quick Google search with some early positive results. Even if you do have to fine-tune to your problem, it would beat training from scratch for every new application. I wonder if the architecture could be modified to separate the physics from the data to make the fine-tuning more effective/efficient. Do we have more insight into the phase space of nets with low/zero physics loss?
@moisesbessalle · 5 months ago
Can't you also clip/trim the search space to the possible range of output values to speed things up before inference? So, for example, the velocities will be positive, with values less than some threshold that depends on your setting?
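A minimal sketch of the kind of output clipping being suggested (the bounds and names are hypothetical; a sigmoid rescaling is used instead of a hard clip so the mapping stays differentiable during training):

```python
import numpy as np

def bounded_output(raw, lo=0.0, hi=50.0):
    """Squash an unbounded network output into a known physical range [lo, hi].

    np.clip(raw, lo, hi) would also work at inference time, but this
    sigmoid rescaling is smooth, so gradients still flow during training.
    """
    return lo + (hi - lo) / (1.0 + np.exp(-raw))

mid = bounded_output(0.0)  # a raw value of 0 maps to the midpoint of the range
```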
@mostafasayahkarajy508 · 5 months ago
Thank you very much for the lecture. I am looking forward to your next lecture on this topic.
@Obbe79 · 5 months ago
PINNs usually require more training. A lot of attention must be given to activation functions.
@valgorbunov1353 · a month ago
Great video as always. Quick question: you said you would include resources in the description, but I don't see any links to the tutorials, only a link to the original paper describing PINNs. Am I looking in the wrong section? I was able to search for the sources you referenced thanks to the description, but I think actual links would help other viewers.
@tshepisosoetsane4857 · 3 months ago
Woooow, I am back in class: Physics, Maths, Chemistry, Electrical, Control Systems.
@caseybackes · 5 months ago
I knew someone would end up working on this soon. Really excited to see some sophisticated applications!
@arbor318 · 4 months ago
The idea is cool, but I wonder how truly effective it is. Once you add a penalty function based on physics, you have probably removed a lot of the solutions the neural network would suggest.
@zfrank3777 · 4 months ago
Will there be a problem if the real system is chaotic?
@alexanderskusnov5119 · 5 months ago
What about Kolmogorov-Arnold networks (KAN)?
@thepanzymancan · 5 months ago
Asking specifically about the spring-mass-damper system: how well does the trained NN perform when you give it different initial values than the ones used for training? In general, when you have the ODEs of a mechanical system, can you train the NN (or another architecture) with just one data set of the system doing its thing (one that captures both the transients and the steady-state dynamics), or do you need different "runs" of the system exploring many combinations of states for the NN to end up generalizable? I want to start exploring the use of PINNs for my research and would like to hear PINN users' opinions and experiences. Thanks!
@Jononor · 5 months ago
I recommend testing it out yourself! It's a great way of getting into it, building intuition and experience on simplified problems.
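One concrete way to run such a test (a minimal sketch with made-up parameters m = 1, c = 0.4, k = 2, not code from the video): integrate the spring-mass-damper ODE m x'' + c x' + k x = 0 for several initial conditions, train on one run, and score on the held-out ones.

```python
import numpy as np

def smd_rhs(state, m=1.0, c=0.4, k=2.0):
    """Spring-mass-damper m*x'' + c*x' + k*x = 0 as a first-order system."""
    x, v = state
    return np.array([v, -(c * v + k * x) / m])

def simulate(x0, v0, dt=0.01, steps=1000):
    """Classic RK4 integration; returns a trajectory of shape (steps+1, 2)."""
    traj = np.empty((steps + 1, 2))
    traj[0] = (x0, v0)
    for i in range(steps):
        s = traj[i]
        k1 = smd_rhs(s)
        k2 = smd_rhs(s + 0.5 * dt * k1)
        k3 = smd_rhs(s + 0.5 * dt * k2)
        k4 = smd_rhs(s + dt * k3)
        traj[i + 1] = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj

# Several "runs" with different initial conditions: train a network on
# runs[0] only, then measure its error on runs[1] and runs[2] to see how
# far it generalizes beyond the training initial condition.
runs = [simulate(x0, v0) for x0, v0 in [(1.0, 0.0), (0.0, 1.0), (0.5, -0.5)]]
```

Because the system is damped, every run decays toward the origin, so a single training run mostly samples one shrinking spiral in phase space; held-out initial conditions probe exactly the generalization question asked above.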
@muthukamalan.m6316 · 5 months ago
Wonderful content; a code sample would be helpful.
@anthonymiller6234 · 5 months ago
Awesome video again Steve. Thanks so much.
@ayushshukla9959 · a month ago
I am really very sorry, sir, but I am unable to work out how PINNs replace CFD and what the difference is, as I have to put them in a project.
@Sumpydumpert · 5 months ago
Loved the video ❤️❤️
@cfddoc · 5 months ago
no audio?
@notu483 · 5 months ago
What if you use a KAN instead of an MLP?
@arnold-pdev · 5 months ago
Sounds like the start of a research question
@Anorve · 5 months ago
Fantastic! As always.
@victormurphy3511 · 5 months ago
Great video. Thank you.
@mintakan003 · 5 months ago
Is there anything that works well for chaotic systems?
@arnold-pdev · 5 months ago
Think about what the definition of "chaos" is, and you'll have your answer.
@MyrLin8 · 5 months ago
excellent. thanks :)
@commonwombat-h6r · 5 months ago
very nice!
@googleyoutubechannel8554 · 2 months ago
This feels kind of backwards relative to what (I'd guess) NNs could do for physics. Wouldn't you want to use NNs to discover better fundamental relationships by letting them have a go, tabula rasa, at a huge amount of raw 'agnostic' data? So many physics models have problems being useful, are statistics, or are hand-waving spherical-cow models; heck, most physics is a bunch of properties and operators developed before computers even existed. Why not use the power of NNs to try to discover better, more useful dynamics, better _fundamental properties and operators_, instead of using them as sort of a shitty solver?
@johnmorrell3187 · 2 months ago
Two thoughts in response. First, for a lot of the problems mentioned here, like fluid flow, we do have very good PDEs that describe the problem very intuitively but are very difficult to solve. So the existing equations are good, and we're not really struggling to explain the physics; it's just hard to work with. Second, even if the NN can learn some novel equation from, for example, lots of measured data, there's usually no way to get the equation OUT of the NN in any useful form. Say I'm looking at some particle physics problem, I have tons of data but no good equation, and I manage to get an NN to predict new data well. That NN has clearly learned some useful equation, but there's nothing a physicist could take from the NN's parameters and generalize; the solution is not useful or human-readable beyond its predictive power.
@googleyoutubechannel8554 · 2 months ago
@@johnmorrell3187 You're being tricked by math notation and a hundred years of hubris. You can formulate almost any relationship as a PDE, regardless of how well you understand it, if you can find a single relation between two (made-up) properties; "PDEs that are hard to solve" is identical to "shitty model".
@The_Quaalude · 5 months ago
Who else is high af rn⁉️
@Sumpydumpert · 5 months ago
Wonder how AI is gonna use this?
@alexroberts6416 · 5 months ago
I'm sorry, what? 😁
@arnold-pdev · 5 months ago
PINNs have to be one of the most over-hyped ML concepts... and that's stiff competition.
@arnold-pdev · 5 months ago
On one level, it's an unprincipled way of doing data assimilation. On another level, it's an unprincipled way of doing numerical integration. Yawn. Great vid tho!
@SylComplexDimensional · 4 months ago
All of your shit from yesterday forward won’t get seen