Neural Networks for Solving PDEs

25,680 views

Fields Institute

1 day ago

Comments: 31
@bikmeyevAT 3 years ago
Great presentation and one of the most understandable explanations of an AI-based PDE solver! Many thanks!!!
@alexeychernyavskiy4193 3 years ago
Thank you, Anastasia. The approach of trying to find those collocation points that have the most effect on the final solution could indeed be a very promising direction of research. While you demonstrated a couple of model examples, it would be great to see these methods applied one day to, e.g., fluid flows for reservoir modelling, gas dynamics, etc.
@anastasiaborovykh120 3 years ago
Agree; those are very interesting future directions we are thinking about!
@shailendrakaushik9281 4 years ago
An excellent review of PINNs and a very fascinating way to choose lambda to optimally weigh the losses on boundary versus interior points. Do you have a tutorial problem with code that exemplifies this approach? Please let me know. Thanks!
@anastasiaborovykh120 4 years ago
Thank you :) I am happy to hear you found it interesting! Our code is available on Github: github.com/remcovandermeer/Optimally-Weighted-PINNs
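For readers who want a minimal picture before diving into the repository, here is a hedged sketch (my own illustration, not the authors' actual code) of a λ-weighted PINN-style loss for a 1-D Poisson problem. A finite-difference quotient stands in for automatic differentiation, and all function names (`interior_residual`, `weighted_pinn_loss`) are hypothetical:

```python
import numpy as np

def interior_residual(u, x, f, h=1e-4):
    # Finite-difference stand-in for the autodiff residual u''(x) - f(x).
    return (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2 - f(x)

def weighted_pinn_loss(u, lam, x_int, x_bnd, f, g):
    # lam trades off the boundary-condition loss against the interior (PDE) loss.
    loss_int = np.mean(interior_residual(u, x_int, f) ** 2)
    loss_bnd = np.mean((u(x_bnd) - g(x_bnd)) ** 2)
    return lam * loss_bnd + (1.0 - lam) * loss_int

# Sanity check: u(x) = x(1 - x) solves u'' = -2 with u(0) = u(1) = 0,
# so both loss terms vanish (up to floating-point error).
u_exact = lambda x: x * (1.0 - x)
x_int = np.linspace(0.1, 0.9, 9)
x_bnd = np.array([0.0, 1.0])
loss = weighted_pinn_loss(u_exact, 0.5, x_int, x_bnd,
                          f=lambda x: -2.0 * np.ones_like(x),
                          g=lambda x: np.zeros_like(x))
```

In a real PINN, `u` would be a neural network and the derivatives would come from autodiff; the role of λ is the same.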
@bingli1918 3 years ago
Thanks for sharing this excellent presentation
@Eta_Carinae__ 1 year ago
Have you heard of SINDy from Brunton's lab at UW?
@keeperofthelight9681 1 year ago
Steve Brunton is my favorite teacher when it comes to Machine learning meets dynamical systems
@mohammedaajaji2265 3 years ago
Hi @Anastasia Borovykh, thanks for this presentation. I read the article and I'm playing around with the code, and I wonder whether we can solve PDEs that depend on both time and space, or whether this method is limited to the spatial dimensions only. I would like to apply the approach to solve PDEs in finance (for example, the Black-Scholes PDE), where only the boundary value at the final time is available and we are interested in the solution value at the initial time. It would be helpful if you could comment on this.
@anastasiaborovykh120 3 years ago
Hi! Thank you for your interest :) Yes, definitely! In that case you would just create the collocation points also over your time variable. I have not worked on the financial applications of this method myself, but my collaborators have a paper where they use the weighting of the loss function to compute various option prices: arxiv.org/pdf/2005.12059.pdf Specifically in section 3.1 the Black Scholes model is discussed. Hope this helps! Anastasia
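To make the "collocation points also over your time variable" idea concrete, here is a small illustrative sketch (my own, not from the paper or repository) of sampling collocation points jointly over time and space for a terminal-value problem of Black-Scholes type; the domain bounds and strike are arbitrary placeholder values:

```python
import numpy as np

rng = np.random.default_rng(0)
T, S_min, S_max, K = 1.0, 0.0, 200.0, 100.0

# Interior collocation points: sampled over BOTH the time and the space
# (asset-price) variable, so the PDE residual is enforced on the whole
# time-space domain [0, T] x [S_min, S_max].
t_int = rng.uniform(0.0, T, size=1000)
s_int = rng.uniform(S_min, S_max, size=1000)

# Terminal-condition points: fixed at t = T, where the value is known
# (here a call payoff max(S - K, 0) as a placeholder terminal condition).
s_term = rng.uniform(S_min, S_max, size=200)
payoff = np.maximum(s_term - K, 0.0)
```

The training loss would then combine the PDE residual evaluated at `(t_int, s_int)` with the mismatch against `payoff` at the terminal points, weighted as discussed in the talk.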
@samuelauerbacher7982 3 years ago
A really good and well-structured talk! It helped me a lot in preparing my bachelor's thesis, which will be about this topic.
@ionlipsiuc8608 1 year ago
Hey Samuel, I was wondering if I could get some form of contact information from you, as I am also working on my bachelor's thesis on the same topic and was hoping to get some insights from others. Thank you.
@AhmedEmamAI1 3 years ago
Great explanation. Can you make a video on hidden physics models (HPM)?
@leon-tjomb 2 years ago
Hello Anastasia, your presentation is so interesting. I'm Leon, and I'm currently working on PINNs for a vibration problem: the case of a beam bridge. I would like to know, if we are dealing with a time-dependent PDE where we have both boundary and initial conditions, how we can define the loss function, since we would like to weight the terms optimally. Best regards.
@gauravbokil8 3 years ago
Thanks Anastasia. If you ever see this comment, THANK YOU SO MUCH!
@anastasiaborovykh120 3 years ago
Thank you for watching!
@arshadalam-xm1ht 3 years ago
Appreciated. Can you provide the code in Python?
@abderrahmaneouachouach926 1 year ago
Could you please provide a citation for the theorem (MOB, 2020) that you mentioned at 5:09? I couldn't find it anywhere.
@edvinbeqari7551 4 years ago
Can you let lambda be a parameter and use gradient descent to find its optimal value? Meaning, at each training step, take the gradient of the loss with respect to lambda.
@oliverhennigh451 4 years ago
If you did this and optimized lambda on the same loss function then lambda would converge to either 1 or 0. The network would learn either the zero solution (a constant) which would satisfy the PDE but not the boundary conditions or it would only satisfy the boundary conditions but not the PDE at all.
@edvinbeqari7551 4 years ago
@oliverhennigh451 Thanks for the comment. My setup is slightly different: I am trying the inverse problem of fitting the parameters of an ODE, i.e. x'' + bx' + kx = 0. I sampled and perturbed the real solution and used that data as domain data. Hence, I have three losses: the ODE loss (loss_f), the initial-condition loss (loss_ic), and the loss between predicted and sampled data (loss_u). I let the loss be λ² * (loss_f + loss_ic) + (1 - λ²) * loss_u, and take derivatives of the loss with respect to b, k, and λ. I square lambda so the weight remains positive. It is true that lambda becomes pretty small but not zero, and I am getting good results: b and k approach the actual values. Perhaps what I am doing does not make sense, but I am experimenting on my own. I would love some friends who know the material. Happy to share what I have.
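The λ² parameterization described in this comment can be sketched as follows (a hedged illustration with made-up loss values, not results from the actual experiment):

```python
def combined_loss(lam, loss_f, loss_ic, loss_u):
    # Squaring lambda keeps the weight non-negative, as described above.
    w = lam ** 2
    return w * (loss_f + loss_ic) + (1.0 - w) * loss_u

def dloss_dlam(lam, loss_f, loss_ic, loss_u):
    # Analytic derivative: d(loss)/d(lam) = 2 * lam * (loss_f + loss_ic - loss_u).
    return 2.0 * lam * (loss_f + loss_ic - loss_u)

# One gradient-descent step on lambda with hypothetical loss values.
lam, lr = 0.5, 0.1
g = dloss_dlam(lam, loss_f=0.2, loss_ic=0.1, loss_u=0.8)
lam_new = lam - lr * g  # weight shifts toward the currently smaller losses
```

This also makes Oliver Hennigh's point visible: holding the individual losses fixed, the gradient always moves λ toward whichever group of losses is currently smaller, so without some counter-pressure λ can drift toward 0 or 1.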
@anastasiaborovykh120 4 years ago
@edvinbeqari7551 That sounds interesting. The way I see it, if we optimize lambda while training, then we just select the lambda that makes it easiest for the NN to make the loss small (what Oliver Hennigh also mentions). In our case it is not just about making the loss small, but about finding a weighting between interior and boundary such that a small loss implies a solution close to the true PDE solution. In your case I would view loss_f + loss_ic as a regularization-like term. But exactly what optimizing it while training would mean is something I'd have to think about a bit more...
@edvinbeqari7551 3 years ago
@anastasiaborovykh120 Hi Anastasia, do you have a document where I can see the full derivation of the optimal lambda? Perhaps a simple example. I would love to learn your method.
@anastasiaborovykh120 3 years ago
Yes, definitely. The derivation is in our paper: arxiv.org/pdf/2002.06269
@BryceChudomelka 4 years ago
Bravo
@999nilbog 3 years ago
Whoa, wait, you're so pretty