Great presentation and one of the most understandable explanations of PDE AI-solvers! Many thanks!!!
@alexeychernyavskiy4193 · 3 years ago
Thank you, Anastasia. The approach of trying to find those collocation points that have the most effect on the final solution could be a very promising direction of research indeed. While you demonstrated a couple of model examples, it would be great to see these methods one day applied to, e.g., fluid flows for reservoir modelling, gas dynamics, etc.
@anastasiaborovykh120 · 3 years ago
Agree; those are very interesting future directions we are thinking about!
@shailendrakaushik9281 · 4 years ago
An excellent review of PINNs and a very fascinating way to choose lambda to optimally weigh the losses on boundary versus interior points. Do you have a tutorial problem with code that exemplifies this approach? Please let me know. Thanks
@anastasiaborovykh120 · 4 years ago
Thank you :) I am happy to hear you found it interesting! Our code is available on Github: github.com/remcovandermeer/Optimally-Weighted-PINNs
@bingli1918 · 3 years ago
Thanks for sharing this excellent presentation
@Eta_Carinae__ · 1 year ago
Have you heard of SINDy from Brunton's lab at UW?
@keeperofthelight9681 · 1 year ago
Steve Brunton is my favorite teacher when it comes to Machine learning meets dynamical systems
@mohammedaajaji2265 · 3 years ago
Hi @Anastasia Borovykh, thanks for this presentation. I read the article and I'm playing around with the code, and I wonder: can we solve PDEs that depend on both time and space, or is the application of this method limited to spatial dimensions only? I would like to apply the approach to PDEs in finance (for example the Black-Scholes PDE), where only the boundary value at the final time is available and we are interested in the solution value at the initial time. It would be helpful if you could comment on this.
@anastasiaborovykh120 · 3 years ago
Hi! Thank you for your interest :) Yes, definitely! In that case you would just create the collocation points over your time variable as well. I have not worked on the financial applications of this method myself, but my collaborators have a paper where they use the weighting of the loss function to compute various option prices: arxiv.org/pdf/2005.12059.pdf. Specifically, the Black-Scholes model is discussed in Section 3.1. Hope this helps! Anastasia
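To make "create the collocation points over your time variable as well" concrete, here is a minimal sketch (my own illustration, not from the paper's code; the Black-Scholes-style domain bounds and variable names are invented) of sampling interior points jointly over time and space, plus points on the terminal-time slice where a payoff condition would be enforced:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical domain for a time-dependent PDE such as Black-Scholes:
# time t in [0, T], asset price s in [s_min, s_max].
T, s_min, s_max = 1.0, 0.0, 200.0
n_interior, n_terminal = 1000, 200

# Interior collocation points: sample jointly over (t, s) -- exactly as
# for a purely spatial problem, but with one extra column for time.
t = rng.uniform(0.0, T, size=(n_interior, 1))
s = rng.uniform(s_min, s_max, size=(n_interior, 1))
interior_pts = np.hstack([t, s])          # shape (1000, 2)

# Points on the terminal slice t = T, where the PDE residual term in the
# loss is replaced by a fit to the known terminal condition (the payoff).
s_T = rng.uniform(s_min, s_max, size=(n_terminal, 1))
terminal_pts = np.hstack([np.full_like(s_T, T), s_T])   # shape (200, 2)
```

The network input is then the 2-column (t, s) array instead of a 1-D spatial coordinate; everything else in the PINN loss stays the same.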
@samuelauerbacher7982 · 3 years ago
A really good and well-structured talk! It helped me a lot in preparing my bachelor thesis, which will be on this topic.
@ionlipsiuc8608 · 1 year ago
Hey Samuel, I was wondering if I could get some form of contact information from you as I am also working on my Bachelor Thesis about the same topic and was hoping to get some insights from others. Thank you.
@AhmedEmamAI1 · 3 years ago
Great explanation! Could you make a video on hidden physics models (HPM)?
@leon-tjomb · 2 years ago
Hello Anastasia, your presentation is very interesting. I'm Leon, and I'm currently working on PINNs for a vibration problem: the case of a beam bridge. If we are dealing with a time-dependent PDE, where we have both boundary and initial conditions, how can we define the loss function, given that we would like to minimise the weight? Best regards
@gauravbokil8 · 3 years ago
Thanks Anastasia. If you ever see this comment: THANK YOU SO MUCH!
@anastasiaborovykh120 · 3 years ago
Thank you for watching!
@arshadalam-xm1ht · 3 years ago
Appreciated. Can you provide the code in Python?
@abderrahmaneouachouach926 · 1 year ago
Could you please provide a citation for the theorem (MOB, 2020) that you mentioned at 5:09? I couldn't find it anywhere.
@edvinbeqari7551 · 4 years ago
Can you let lambda be a parameter and use gradient descent to find its optimal value? Meaning, at each training step, take the gradient of the loss with respect to lambda.
@oliverhennigh451 · 4 years ago
If you did this and optimized lambda on the same loss function then lambda would converge to either 1 or 0. The network would learn either the zero solution (a constant) which would satisfy the PDE but not the boundary conditions or it would only satisfy the boundary conditions but not the PDE at all.
@edvinbeqari7551 · 4 years ago
@@oliverhennigh451 Thanks for the comment. My setup is slightly different: I am trying the inverse problem of fitting the parameters of an ODE, i.e. x'' + bx' + kx = 0. I sampled and perturbed the real solution and used that data as domain data. Hence I have three losses: the ODE loss (loss_f), the IC loss (loss_ic), and the loss between predicted and sampled data (loss_u). I let the loss be λ² * (loss_f + loss_ic) + (1 - λ²) * loss_u, and take derivatives of the loss with respect to b, k, and λ. I square lambda so the loss remains positive. It is true that lambda becomes pretty small, but not zero, and I am getting good results: b and k approach the actual values. Perhaps what I am doing does not make sense, but I am experimenting on my own. I would love some friends who know the material. Happy to share what I have.
@anastasiaborovykh120 · 4 years ago
@@edvinbeqari7551 That sounds interesting. The way I see it: if we optimize lambda while training, then we just select the lambda which makes it easiest for the NN to make the loss small (which is what Oliver Hennigh also mentions). In our case it is not just about making the loss small, but about finding a weighting between interior and boundary such that a small loss implies a solution close to the true PDE solution. In your case I would view loss_f + loss_ic as a regularization-like term. But exactly what optimizing it while training would mean, I'd have to think about a bit more...
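A tiny numeric sketch of the λ²-weighted loss from the exchange above (the loss values are made-up constants; in a real PINN they would be computed from the network), which also illustrates why plain gradient descent on λ alone pushes the weight toward whichever term is currently smaller:

```python
# Toy illustration of the loss  L = λ²(loss_f + loss_ic) + (1 - λ²) loss_u.

def total_loss(lam, loss_f, loss_ic, loss_u):
    # Squaring lambda keeps the physics weight non-negative, as described
    # in the comment above.
    return lam**2 * (loss_f + loss_ic) + (1.0 - lam**2) * loss_u

def dloss_dlam(lam, loss_f, loss_ic, loss_u):
    # Hand-derived gradient of total_loss with respect to lambda.
    return 2.0 * lam * (loss_f + loss_ic - loss_u)

# One gradient-descent step on lambda: whenever the physics terms exceed
# the data term, the gradient is positive and lambda shrinks (and vice
# versa), shifting weight onto whichever term is smaller -- the collapse
# Oliver Hennigh describes when lambda is optimised on the same
# objective as the network.
lam, lr = 0.5, 0.1
loss_f, loss_ic, loss_u = 0.8, 0.3, 0.2
lam -= lr * dloss_dlam(lam, loss_f, loss_ic, loss_u)   # 0.5 -> 0.41
```

In Edvin's experiment the data term loss_u also depends on b and k, which is presumably what keeps λ from collapsing all the way to zero.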
@edvinbeqari7551 · 3 years ago
@@anastasiaborovykh120 Hi Anastasia, do you have a document where I can see the full derivation of the optimal lambda? Perhaps a simple example. I would love to learn your method.
@anastasiaborovykh120 · 3 years ago
Yes, definitely. The derivation we did is in our paper: arxiv.org/pdf/2002.06269