Thank you, Professor Brunton. I have been following you since the very beginning of my PhD program. Though I am still in my first year, I have been able to write a conference paper on discrete-time modeling with physics-informed neural networks, and I presented my work at the Machine Learning Summer School 2024 in Okinawa, Japan just today. Whenever I come on YouTube, if you have new content, your notification always shows up first. I am glad that I have watched your videos over and over again to build my own intuition about machine learning and dynamical systems. I am a PhD student in Japan; thank you for your valuable content. If anyone is interested in my research and how I used RNNs for physics-informed neural networks in discrete-time modeling, let me know.
@BreakingMathPod · 8 months ago
I'd be interested in hearing about your research!
@Septumsempra8818 · 8 months ago
Any economic applications?
@pavodindoyi3415 · 8 months ago
@@BreakingMathPod Thank you for your interest. I'll share a public link so you can download my work. In case you have questions or need further clarification, I will be happy to connect and talk about it in detail. My work is split into two parts: one for the Machine Learning Summer School (MLSS 2024) on scientific computing and machine learning, which is a very concise five-page version of the paper that omits a lot of detail, and the other for the International Joint Conference on Neural Networks (IJCNN 2024) under WCCI, which is eight pages long with all the details on the RNN and the training process. For now I can only share the five-page paper; I hope IJCNN will accept the paper by next week, and then I will be able to share the eight-page copy. I will be happy to connect if you are interested in my research.
@pavodindoyi3415 · 8 months ago
@@Septumsempra8818 I am not sure about applications in economics. I have read interesting papers on PINNs applied to chemistry, gas dynamics, etc. I guess some work might be ongoing in that direction, but I can't say for sure given my limited knowledge so far.
@pavodindoyi3415 · 8 months ago
@@BreakingMathPod Please find the link to our conference paper, PINN with RNN: drive.google.com/file/d/109iTml7Qxi7Z-JHZYqceSH9l3dph29Gb/view?usp=sharing
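Not a substitute for the paper, but for readers who want the flavor of discrete-time PINN + RNN before downloading it: a recurrent cell steps the state forward, and the physics enters as a penalty on the discretized dynamics. A minimal sketch, assuming PyTorch and a forward-Euler pendulum; the cell type, step size, and loss are illustrative guesses, not taken from the linked paper:

```python
import torch

dt, omega2 = 0.01, 1.0                   # step size and pendulum frequency^2 (assumed)
cell = torch.nn.GRUCell(input_size=2, hidden_size=2)  # hidden state ~ (theta, theta_dot)

def rollout(x0, steps=50):
    """Unroll the RNN, reading each hidden state as the physical state."""
    h, traj = x0, [x0]
    for _ in range(steps):
        h = cell(h, h)                   # autonomous system: feed the state back in
        traj.append(h)
    return torch.stack(traj)             # (steps+1, batch, 2)

x0 = torch.tensor([[1.0, 0.0]])          # initial angle 1 rad, at rest
traj = rollout(x0)
theta, theta_dot = traj[..., 0], traj[..., 1]

# Physics loss: successive states should satisfy the discretized pendulum
#   theta_{k+1}     = theta_k     + dt * theta_dot_k
#   theta_dot_{k+1} = theta_dot_k - dt * omega2 * sin(theta_k)
res1 = theta[1:] - (theta[:-1] + dt * theta_dot[:-1])
res2 = theta_dot[1:] - (theta_dot[:-1] - dt * omega2 * torch.sin(theta[:-1]))
physics_loss = (res1 ** 2 + res2 ** 2).mean()
physics_loss.backward()                  # add a data loss on measured states in practice
```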
@videos-de-fisica · 8 months ago
Whoa! I am blown away by your insights here, thank you for sharing them and making an effort to spread knowledge.
@tibyanmustafa2014 · 7 months ago
I am starting my Master's thesis work using PINNs, and this channel has been a great start and mind-opener. Thanks, Prof. Steve!
@anthonybernstein1626 · 8 months ago
This series has been a real eye-opener so far; looking forward to those architecture deep dives!
@rudypieplenbosch6752 · 8 months ago
Studying PINNs at the moment; your videos are helping a lot.
@allenlu2007 · 8 months ago
I am working on ML methods that embed prior knowledge into the model. Your AI/ML+Physics series is a great example that I may be able to leverage.
@anton9690 · 8 months ago
This series is pure gold. Thank you for the effort. Looking forward to the next lectures!
@reversetransistor4129 · 8 months ago
Nice, Dr. Brunton. Looking forward to those videos. Happy to have some problem-solving and predictive tools that fit everywhere in electrical engineering.
@RichardGriffithsD · 3 months ago
Fantastic. I'm a newbie to the physics side, but I have to say, this makes me wanna get involved. Thank you Steve!
@SassePhoto · 8 months ago
Excellent! It would be great to go through an example with code. Can't wait, thanks!
@alisultan3174 · 8 months ago
Just discovered the channel, and I feel like I just found a treasure.
@azeemishaq8240 · 8 months ago
Great job! I was looking for this kind of lecture on PINNs and couldn't find one, but finally I am here.
@gebrilyoussef6851 · 8 months ago
Prof. Brunton, thank you so much to you and your team at UW for all the effort you put into these videos. I'm really waiting to see how you are going to bake Lie groups and differential manifolds into the architecture of those neural networks.
@marc-andredesrosiers523 · 8 months ago
Steven, I think that, on top of references to Judea Pearl and his work on causality, with the language/algebra he codified for computing about it, you'd want to consider Sherri Rose and Mark van der Laan's Targeted Learning, and their discussion of which data is actually useful for estimating a causally interpreted coefficient, among other things 🙂
@erhanturan · 8 months ago
Amazing series. Part 2 is missing, though; it looks like it is set to private?
Another exceptional lecture as expected! I'm curious if you could consider delivering a lecture on LSTMs. There's some research highlighting their application in designing virtual sensors for wind energy applications. Thank you!
@theminertom11551 · 2 months ago
I love this lecture series so far. Is there a text version or some published book that would help describe PINNs? I have seen lots of disparate articles but not someone's seminal work on the topic. Just too new, I suppose.
@et4493 · 7 months ago
Thank you so much for this series ❤
@RahulAhire · 7 months ago
Just one question: how does one develop the intuition to know exactly what kind of neural architecture they need, and how to code it?
@martin_hristov · 5 months ago
Steve Brunton keeps mentioning hours and hours of material, but I don't see it linked anywhere. Does anyone know how we can access the mentioned material, or if it is still being made?
@virtuous8 · 8 months ago
this is so valuable, please keep making these videos
@denizcanbay6312 · 8 months ago
Great lecture series, thank you very much for all of your hard work! I just have a quick question: I always thought of the convolution operation as equivariant, since we keep the input structure but apply a filter to it. We cannot have (invariant) classifiers using only CNNs without introducing an invariant layer, since invariance absorbs the equivariance. I apologize if my thinking is wrong.
@climbscience4813 · 8 months ago
I think you are right. My understanding was that when talking about equivariance, he was referring to other use cases that generate outputs containing spatial information, such as autoencoders, semantic segmentation, and object detection. In that context, equivariance makes more sense for the use case too.
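A quick way to check both claims is a minimal sketch in PyTorch (assumed; the layer sizes and the circular padding, which makes the shift-equivariance exact, are illustrative choices):

```python
import torch

torch.manual_seed(0)
# 3x3 conv with circular padding, so a pixel shift commutes with it exactly
conv = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1,
                       padding_mode="circular", bias=False)

x = torch.randn(1, 1, 16, 16)                  # a random single-channel image
x_shift = torch.roll(x, shifts=5, dims=-1)     # translate 5 pixels horizontally

# Equivariance: shift-then-convolve equals convolve-then-shift.
print(torch.allclose(conv(x_shift),
                     torch.roll(conv(x), shifts=5, dims=-1), atol=1e-5))  # True

# Invariance: a global average pool absorbs the equivariance,
# exactly the "invariant layer" the comment describes.
print(torch.allclose(conv(x_shift).mean(), conv(x).mean(), atol=1e-5))   # True
```

So the convolution itself is equivariant, and a classifier becomes invariant only once a pooling (or similar) layer is appended.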
@isuryanarayanan506 · 8 months ago
I can't believe this is free
@okhan5087 · 8 months ago
Another excellent video as always!
@rainie_876 · 7 months ago
Great video, and I learned a lot! One question, open to anyone: what do you think about the prevalence of foundation models in vision and language modeling? Nowadays the state of the art is to take a foundation model and fine-tune it for an application, which involves no problem-specific choice of architecture. Do you think there will be a large physics foundation model, or is the choice of architecture fundamentally application-specific? Cheers from someone working in vision.
@reyes09071962 · 8 months ago
You mentioned Asimov, the great explainer. That you are.
@TheNewton · 6 months ago
32:34 Invariance vs. equivariance in a neural network architecture (NNA): would the transformation g be in situ or a post-process? By that I mean: (A) the transformation g after f() is IN the NNA itself, or (B) "the output of my neural network is also run through" means the NNA's output goes through a process run AFTER it leaves the NNA?

If it's (A), does that mean transformations themselves can be identified/labeled by equivariant NNAs separately from the content (i.e., this dog is facing down a hill; this isn't the number one, it's a dash character; etc.)? If it can't label a transformation, what's the point of the NNA transforming its subject internally before output, when the original wasn't transformed?

If it's (B), where the transformation has to be done afterwards, why mention it, or do it at all, if the subject isn't transformed in the first place?

The explanation of the motivation, reducing the data needed by using an equivariant architecture (due to symmetry groups, a.k.a. Lie groups), helps a lot for choosing approaches and makes a lot of sense. I'm just missing some intuition on what happens when; transformations happening internally seem like a waste of processing, or a source of hallucinations, if you're not just trying to generate data.
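One way to untangle (A) versus (B): the equation in the lecture states a property the architecture is built to satisfy, not a stage that runs inside or after the network. In standard notation (assumed here, not quoted from the video):

```latex
% f is the network; a group element g acts on inputs via T_g and on
% outputs via a corresponding T'_g. Equivariance is the identity
f(T_g\, x) \;=\; T'_g\, f(x) \qquad \text{for all } g \in G,
% and invariance is the special case T'_g = \mathrm{id}:
f(T_g\, x) \;=\; f(x).
```

So neither (A) nor (B) happens at inference time; no transformation is computed at all. The identity only describes how the output would co-vary if the input were transformed, and the architecture (e.g., convolution for translations) guarantees it by construction, which is where the data savings come from.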
@TheGmr140 · 8 months ago
Nice video on machine learning 😊😊
@lucidboy9436 · 8 months ago
So far I have found only one YouTube account for physics ML.
@Septumsempra8818 · 8 months ago
Any economists following this series?
@johnk7025 · 8 months ago
In machine learning applications to stocks or the economy, do you try to bake in, say, the Black-Scholes equation, the way physical laws are baked in in this series?
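In the PINN spirit it would work the same way: the Black-Scholes PDE residual just replaces the physics residual in the loss. A minimal sketch, assuming PyTorch; the network size, rate, volatility, and collocation points are all illustrative placeholders, not anything from the video:

```python
import torch

r, sigma = 0.05, 0.2                              # illustrative rate and volatility
net = torch.nn.Sequential(                        # V(t, S) as a small MLP
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def bs_residual(t, S):
    """Black-Scholes residual: V_t + 0.5*sigma^2*S^2*V_SS + r*S*V_S - r*V."""
    t = t.requires_grad_(True)
    S = S.requires_grad_(True)
    V = net(torch.stack([t, S], dim=-1)).squeeze(-1)
    V_t, V_S = torch.autograd.grad(V.sum(), (t, S), create_graph=True)
    V_SS = torch.autograd.grad(V_S.sum(), S, create_graph=True)[0]
    return V_t + 0.5 * sigma**2 * S**2 * V_SS + r * S * V_S - r * V

t, S = torch.rand(256), 100.0 * torch.rand(256)   # random collocation points
loss = (bs_residual(t, S) ** 2).mean()            # drive the PDE residual to zero
loss.backward()   # in practice, add payoff/boundary-condition losses as well
```

Whether that helps in markets is another matter: unlike a physical law, Black-Scholes is a modeling assumption rather than a conservation law.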
@martingeier5732 · 8 months ago
Just by the way, symmetries and conservation laws are two sides of the same coin; this is Noether's theorem.
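For concreteness, the standard statement (textbook Lagrangian notation, not from the video):

```latex
% If the action S[q] = \int L(q, \dot q, t)\,dt is invariant under a
% continuous transformation q \mapsto q + \epsilon\,\delta q, then
Q \;=\; \frac{\partial L}{\partial \dot q}\,\delta q
% is conserved along solutions: dQ/dt = 0. Time-translation symmetry
% gives energy conservation; spatial translation gives momentum;
% rotation gives angular momentum.
```

Strictly, the theorem maps each continuous symmetry of the action to a conservation law, so "the same" should be read as "in correspondence".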
@reyes09071962 · 8 months ago
Can a loss function be a PID loop?
@radwizard · 8 months ago
Thank you
@hubstrangers3450 · 8 months ago
Thank you....
@GeoffryGifari · 8 months ago
One problem that I think could come up in the case of the harmonic oscillator/pendulum is that you can always take a time derivative of a previous derivative (θ̇, θ̈, θ⁽³⁾, ...). They alternate between sine and cosine and are related through the frequency ω. Could the order of the derivatives in our output variable set (the variables in an equation) be constrained from the beginning, so our architecture won't just output infinitely many variables?
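Written out, the cycling terminates quickly. For the linear oscillator θ(t) = A sin(ωt):

```latex
\dot\theta = A\omega\cos(\omega t), \qquad
\ddot\theta = -A\omega^{2}\sin(\omega t) = -\omega^{2}\theta, \qquad
\theta^{(n+2)} = -\omega^{2}\,\theta^{(n)} .
```

Every derivative beyond the second is ±ω^k times θ or θ̇, so constraining the variable set to (θ, θ̇), i.e., second order in time, already closes the system; no infinite tower of derivatives is needed. (For the nonlinear pendulum the higher derivatives are no longer simple multiples, but a second-order state is still enough.)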
@shreygandhi7157 · 8 months ago
GNNs seem interesting
@myelinsheathxd · 8 months ago
Great!
@howeichin4103 · 8 months ago
Nicee
@ingilizanahtar644 · 8 months ago
Turkish please
@lw4423 · 8 months ago
We get it, you are a "white guy with Asian wife". You don't have to mention it every time.