AI/ML+Physics Part 3: Designing an Architecture [Physics Informed Machine Learning]

  41,666 views

Steve Brunton

1 day ago

Comments: 58
@pavodindoyi3415 8 months ago
Thank you, Professor Brunton. I have been following you since the very beginning of my PhD program. Though I am still in my first year, I have been able to write a conference paper on discrete-time modeling of physics-informed neural networks. I presented my work at the Machine Learning Summer School 2024 in Okinawa, Japan just today. And whenever I come on KZbin, if you have new content, your notification always shows up first. I am glad that I have been following your videos over and over again to build my own intuition behind machine learning and dynamical systems. I am a PhD student in Japan; thank you for your valuable content. If anyone is interested in my research and how I used RNNs for physics-informed neural networks in discrete-time modeling, let me know.
@BreakingMathPod 8 months ago
I’d be interested in hearing about your research!
@Septumsempra8818 8 months ago
Any economic applications?
@pavodindoyi3415 8 months ago
@@BreakingMathPod Thank you for your interest. I’ll share a public link so you can download my work. In case you have questions or need further clarification, I will be happy to connect and talk about it in detail. My work is split into two parts: one for the Machine Learning Summer School (MLSS 2024), a very concise version of the paper, only 5 pages, omitting lots of details; and one for the International Joint Conference on Neural Networks (IJCNN 2024) under WCCI, which is 8 pages long with all the details on the RNN and the training process. For now I can only share the 5-page paper; by next week I hope IJCNN will accept the paper, and then I will be able to share the 8-page copy. I will be happy to connect if you are interested in my research.
@pavodindoyi3415 8 months ago
@@Septumsempra8818 I am not sure about applications in economics. I have read interesting papers on PINNs with applications in chemistry, gas dynamics, etc. I guess some work might be ongoing in that direction, but I’m not sure, given my limited knowledge.
@pavodindoyi3415 8 months ago
@@BreakingMathPod Please find the link to our conference paper, PINN with RNN: drive.google.com/file/d/109iTml7Qxi7Z-JHZYqceSH9l3dph29Gb/view?usp=sharing
@videos-de-fisica 8 months ago
Whoa! I am blown away by your insights here, thank you for sharing them and making an effort to spread knowledge.
@tibyanmustafa2014 7 months ago
I am starting my Master’s thesis work using PINNs, and this channel has been a great start and mind-opener. Thanks, Prof. Steve!
@anthonybernstein1626 8 months ago
This series has been a real eye opener so far, looking forward to those architecture deep dives!
@rudypieplenbosch6752 8 months ago
Studying PINNs at the moment, your videos are helping a lot.
@allenlu2007 8 months ago
I am working on ML methods that embed prior knowledge into the model. Your AI/ML+Physics series is a great example that I may be able to leverage.
@anton9690 8 months ago
This series is pure gold. Thank you for the effort. Looking forward to the next lectures
@reversetransistor4129 8 months ago
Nice, Dr. Brunton. Looking forward to those videos. Happy to have some problem-solving and predictive tools that fit everywhere in electrical engineering.
@RichardGriffithsD 3 months ago
Fantastic. I'm a newbie to the physics side but I have to say, this makes me wanna get involved. Thank you Steve!
@SassePhoto 8 months ago
Excellent! It would be great to go through an example with code. Can’t wait, thanks!
@alisultan3174 8 months ago
Just discovered the channel and I feel that I just found a treasure
@azeemishaq8240 8 months ago
Great job! I was looking for this kind of lecture on PINNs and couldn’t find one, but finally I am here.
@gebrilyoussef6851 8 months ago
Prof. Brunton, thank you so much, and thanks to your team at UW, for all the effort you put into these videos. I’m really waiting to see how you are going to bake Lie groups and differential manifolds into the architecture of those neural networks.
@marc-andredesrosiers523 8 months ago
Steven, I think that, on top of references to Judea Pearl and his work on causality, with the language/algebra he codified to compute around it, you’d want to consider Sherri Rose and Mark van der Laan’s Targeted Learning and their discussion of which data is actually useful for estimating a causally interpreted coefficient, amongst other things 🙂
@erhanturan 8 months ago
Amazing series. Part 2 is missing though, looks like it is kept as Private?
@AhmedThahir2002 8 months ago
Yepp same here
@iitbhustructure 8 months ago
Same Here
@bennasralaa5964 8 months ago
kzbin.info/www/bejne/nV62YaBor8h-i8ksi=JGUoahbFUcNj5ZjV
@abdulbaset0 7 months ago
Another exceptional lecture as expected! I'm curious if you could consider delivering a lecture on LSTMs. There's some research highlighting their application in designing virtual sensors for wind energy applications. Thank you!
@theminertom11551 2 months ago
I love this lecture series so far. Is there a text version or some published "book" that would help in describing PINNs? I have seen lots of disparate articles but not someone’s seminal work on the topic. Just too new, I suppose.
@et4493 7 months ago
Thank you so much for this series ❤
@RahulAhire 7 months ago
Just one question: how does one develop the intuition to know exactly what kind of neural architecture they need and how to code it?
@martin_hristov 5 months ago
Steve Brunton keeps mentioning hours and hours of material, but I don’t see it linked anywhere... Does anyone know how we can access the mentioned material, or whether it is still being made?
@virtuous8 8 months ago
this is so valuable, please keep making these videos
@denizcanbay6312 8 months ago
Great lecture series, thank you very much for all of your hard work! I just have a quick question: I have always thought of the convolution operation as equivariant, since we still keep the input structure but apply a filter to it. We cannot have invariant classifiers using only convolution layers unless we introduce an invariant layer, since the invariance absorbs the equivariance. I apologise if my thinking process is wrong.
@climbscience4813 8 months ago
I think you are right. My understanding was that when talking about equivariance, he was referring to other use cases that generate outputs that contain spatial information such as autoencoders, semantic segmentation & object detection. In that context equivariance makes more sense for the use case too.
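A minimal NumPy sketch of the distinction discussed in this exchange (the signal, kernel, and shift size here are made up for illustration): a circular convolution commutes with shifts, i.e. it is equivariant, while adding a global pooling on top absorbs the shift and makes the composite invariant.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)            # 1-D input signal
k = rng.normal(size=5)             # convolution filter

def conv(signal, kernel):
    # 'same'-size circular convolution, so shifts wrap around cleanly
    n = len(signal)
    return np.array([
        sum(kernel[j] * signal[(i + j) % n] for j in range(len(kernel)))
        for i in range(n)
    ])

def shift(s):
    return np.roll(s, 3)           # the group action g: translate by 3 samples

# Equivariance: convolving a shifted input equals shifting the convolved
# output, f(g(x)) == g(f(x)).
assert np.allclose(conv(shift(x), k), shift(conv(x, k)))

# Invariance: global (sum) pooling absorbs the shift, f(g(x)) == f(x).
assert np.isclose(conv(shift(x), k).sum(), conv(x, k).sum())
```

So a plain stack of convolutions is equivariant, and a classifier built from CNN layers only becomes invariant once a pooling (or similar) layer discards the spatial structure, which matches the point made in the question.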
@isuryanarayanan506 8 months ago
I can't believe this is free
@okhan5087 8 months ago
Another excellent video as always!
@rainie_876 7 months ago
Great video and I learned a lot! One question open for anyone, what do you think about the prevalence of foundation models in vision and language modeling? Nowadays the state-of-the-art is to take a foundation model and fine-tune it to an application, which involves no problem-specific choice in architecture. Do you think there will be a large physics foundation model or that the choice of architecture is fundamentally application-specific? Cheers from someone working on vision.
@reyes09071962 8 months ago
You mentioned Asimov, the great explainer. That you are.
@TheNewton 6 months ago
32:34 Invariance vs. equivariance in a neural network architecture (NNA): would the transformation g be in situ or a post-process? By that I mean: (A) the transformation g after f() is IN the NNA itself, or (B) "the output of my neural network is also run through" means a process is run on the NNA’s output AFTER it leaves the NNA?
If it’s (A), does that mean the transformations themselves can be identified/labeled by equivariant NNAs separately from the content (i.e. this dog is facing down a hill; this isn’t the number one, it’s a dash character; etc.)? And if it can’t label a transformation, what’s the point of the NNA transforming its subject internally before output, if the original wasn’t transformed?
If it’s (B), where transformations have to be done afterwards, why bother mentioning it, or doing it at all, if the subject isn’t transformed in the first place?
The explanation of the motivation, reducing the data needed by using an equivariant architecture (due to symmetry groups, a.k.a. Lie groups), helps a lot for choosing approaches and makes a lot of sense. I’m just missing some intuition on what happens when: transformations happening internally seem like a waste of processing, or a source of hallucinations, if you’re not just trying to generate data.
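For what it’s worth, the two properties being contrasted are usually stated as relations between two separate forward passes of the same network f, with g the group transformation (standard definitions, not a quote from the lecture):

```latex
\text{invariance:}\qquad   f(g \cdot x) = f(x)         \qquad \text{(e.g. classification)}
\text{equivariance:}\qquad f(g \cdot x) = g \cdot f(x) \qquad \text{(e.g. segmentation)}
```

So neither (A) nor (B) quite holds: nothing transforms the subject inside the network, and nothing is run after it. Equivariance only promises that if the input had been transformed by g, the output would come out transformed by g in the same way.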
@TheGmr140 8 months ago
Nice video on machine learning 😊😊
@lucidboy9436 8 months ago
So far I have found only 1 Yt account for Physics ML
@Septumsempra8818 8 months ago
Any Economists following this series?
@johnk7025 8 months ago
For machine learning applications to stocks or the economy, do you try to bake in, say, the Black-Scholes equation, the way this series bakes in physical laws?
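In principle, yes: the same PINN recipe carries over with the Black-Scholes PDE standing in for the physical law. A minimal PyTorch sketch of what that residual loss could look like (hypothetical: V_net, r, sigma, and the collocation ranges are all made up for illustration, and this is not a method from the video):

```python
import torch

r, sigma = 0.05, 0.2                   # illustrative rate and volatility

V_net = torch.nn.Sequential(           # stand-in model for the price V(t, S)
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

def black_scholes_residual(t, S):
    # Residual of: V_t + 0.5*sigma^2*S^2*V_SS + r*S*V_S - r*V = 0
    t = t.requires_grad_(True)
    S = S.requires_grad_(True)
    V = V_net(torch.stack([t, S], dim=-1)).squeeze(-1)
    V_t = torch.autograd.grad(V.sum(), t, create_graph=True)[0]
    V_S = torch.autograd.grad(V.sum(), S, create_graph=True)[0]
    V_SS = torch.autograd.grad(V_S.sum(), S, create_graph=True)[0]
    return V_t + 0.5 * sigma**2 * S**2 * V_SS + r * S * V_S - r * V

# Mean-squared residual at random collocation points becomes the "physics"
# term of the loss, added to whatever data / boundary-condition terms exist.
t_c, S_c = torch.rand(256), 100.0 * torch.rand(256)
loss_pde = black_scholes_residual(t_c, S_c).pow(2).mean()
loss_pde.backward()
```

Whether Black-Scholes is a good enough prior for real market data is a separate question from whether the machinery transfers; the machinery itself is agnostic about where the PDE comes from.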
@martingeier5732 8 months ago
Just by the way: symmetries and conservation laws are two sides of the same coin; every continuous symmetry corresponds to a conserved quantity. This is Noether’s theorem.
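A concrete instance of that correspondence, for readers new to it (a standard result, not from the video): if the Lagrangian has no explicit time dependence, i.e. the dynamics are symmetric under time translation, then the energy function is conserved:

```latex
\frac{\partial L}{\partial t} = 0
\quad \Longrightarrow \quad
\frac{\mathrm{d}}{\mathrm{d}t}\left( \sum_i \dot{q}_i \,\frac{\partial L}{\partial \dot{q}_i} - L \right) = 0
```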
@reyes09071962 8 months ago
Can a loss function be a PID loop?
@radwizard 8 months ago
Thank you
@hubstrangers3450 8 months ago
Thank you....
@GeoffryGifari 8 months ago
One problem that I think could come up in the case of the harmonic oscillator/pendulum is that you can always take a time derivative of a previous derivative (θ̇, θ̈, θ⁽³⁾, ...). They alternate between sine and cosine and are related through the frequency ω. Could the order of the derivatives in our output variable set (the variables in an equation) be constrained from the beginning, so our architecture won’t just output infinitely many variables?
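For the linear pendulum specifically, the derivative hierarchy closes at second order, which suggests why capping the derivative order in the candidate library would lose nothing (a standard calculation, not from the video):

```latex
\theta(t) = A\sin(\omega t)
\;\Rightarrow\;
\dot{\theta} = A\omega\cos(\omega t), \qquad
\ddot{\theta} = -A\omega^{2}\sin(\omega t) = -\omega^{2}\,\theta
```

so θ⁽ⁿ⁺²⁾ = −ω²θ⁽ⁿ⁾ for every n: all higher derivatives collapse back onto θ and θ̇, and a library capped at second-order derivatives already contains the closed equation θ̈ = −ω²θ.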
@shreygandhi7157 8 months ago
GNNs seem interesting
@myelinsheathxd 8 months ago
Great!
@howeichin4103 8 months ago
Nicee
@ingilizanahtar644 8 months ago
Turkish please
@lw4423 8 months ago
We get it, you are a "white guy with Asian wife". You don't have to mention it every time.
@xinyaoyin2238 8 months ago
but can we do some actual coding though