Fourier Neural Operators (FNO) in JAX

9,291 views

Machine Learning & Simulation

A day ago

Comments: 24
@enrikosmutje6201 7 months ago
Do I understand correctly that you add the mesh coordinates as a feature to the input data (somewhere in the beginning you concatenate the mesh to a)? Is that really necessary? I imagine that this will just add the Fourier transform of equally spaced point coordinates to the latent space, which will be the same for all test data.
@MachineLearningSimulation 7 months ago
Yes, that's right :). I also think that it is not necessary, but I wanted to follow the original FNO implementation closely, see here: github.com/neuraloperator/neuraloperator/blob/af93f781d5e013f8ba5c52baa547f2ada304ffb0/fourier_1d.py#L99
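For reference, a minimal sketch of what this concatenation looks like in JAX (names are illustrative, not the exact ones from the video):

import jax.numpy as jnp

def append_mesh_channel(a):
    # a: discretized initial condition on equispaced points, shape (n_points, 1)
    n_points = a.shape[0]
    mesh = jnp.linspace(0.0, 1.0, n_points).reshape(-1, 1)
    # concatenate along the channel axis -> shape (n_points, 2):
    # channel 0 holds the state, channel 1 the (per-sample constant) coordinates
    return jnp.concatenate([a, mesh], axis=-1)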
@beincheekym8 7 months ago
I was asking myself the same question: you are basically adding a constant input to your network, which will likely be ignored. For the sake of staying close to the original implementation I understand it, but you could likely drop this channel entirely and it would work just as well. EDIT: Now that I think of it, maybe it helps with the zero-shot upsampling? But at the same time, you always train with the same grid, so the second channel was probably ignored (as it is constant).
@beincheekym8 7 months ago
Thank you for the great video; a very clean implementation as well, satisfying to watch. Eagerly waiting for your video on the use of FNOs for next-state prediction / recurrent NNs.
@MachineLearningSimulation 6 months ago
Thanks a lot 😊 Yes, definitely, the autoregressive prediction of transient problems will be one of the future videos 👍
@kanishkbhatia95 5 months ago
Super cool as always. Some feedback to enhance clarity: when writing the modules (SpectralConv1d, FNOBlock1d, FNO1d), overlaying the flowchart on the right-hand side to show the block the code corresponds to would be really helpful. I felt a bit lost in these parts.
@MachineLearningSimulation 4 months ago
That's a great idea 👍 I will try to include this in a future video.
@lightfreak999 6 months ago
Very cool video! The walkthrough of this alternative implementation of the 1D FNO is super useful for newcomers like myself :)
@MachineLearningSimulation 6 months ago
Great to hear! 😊 Thanks for the kind feedback ❤️
@GUYPOLO1 8 months ago
Thank you for the detailed video! I have a question regarding the zero-shot super-resolution. If we train for 1 second as shown in your video, is it possible to run the test for, let's say, 5 seconds of propagation? Additionally, is it possible to plot the propagation from t=0 to t=1, or can it only provide the final result? Since the training data can include the propagation as well, perhaps this information could be utilized in the training, not just the starting point. Thanks for your help!
@MachineLearningSimulation 8 months ago
Hi, thanks a lot for the kind feedback :). Here, in this video, the FNO is just trained to predict 1 time unit into the future, and the dataset also only consisted of these input-output pairs. I assume you are interested in the state at times between 0 and 1 and beyond 1. There are two ways to obtain spatiotemporal information: either the FNO is set up to return not only the state at t=1 but the states at multiple time levels (t=0.1, 0.2, ..., 2.9, 3.0), or we use an FNO with a fixed dt (either the 1.0 of this video or a lower value) to obtain a trajectory autoregressively (kzbin.info/www/bejne/hJ20YoFpjJiKo9U). For this, a corresponding dataset (or a reference solver to actively create the references we need) is required.
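A minimal sketch of such an autoregressive rollout, assuming a trained fno that maps the state at time t to the state at t + dt (names are illustrative):

import jax

def rollout(fno, u_0, n_steps):
    # feed the FNO its own prediction to advance n_steps into the future
    def step(u, _):
        u_next = fno(u)
        return u_next, u_next
    _, trajectory = jax.lax.scan(step, u_0, xs=None, length=n_steps)
    return trajectory  # stacked states, shape (n_steps, *u_0.shape)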
@Machine_Learner 6 months ago
Awesome stuff. I am wondering if you could do a similar video on the new Neural Spectral Methods paper.
@MachineLearningSimulation 6 months ago
Thanks 🤗 That's a cool paper; I just skimmed over it. It's probably a good idea to start covering more recent papers. I'll put it on my to-do list; I still have some other content in the pipeline that I want to do first, but I will come back to it later. Thanks for the suggestion 👍
@amirmahdijafary2734 10 months ago
Where in the code is the Burgers equation used? Is it possible to mix PINN models with FNO models?
@MachineLearningSimulation 10 months ago
Hi, thanks for the questions 😊 1) The Burgers equation is "within the dataset": it contains the (discretized) state of the Burgers equation at two time levels. So, say you had a simulator for the Burgers equation and used it to integrate the dynamics for 1 time unit starting at the first state; you would arrive at the second state. We want to learn this mapping. 2) Yes, there are definitely approaches to incorporating PDE constraints via autodiff into the neural operator learning process (see, for instance, the paper by Wang et al. on physics-informed DeepONets or the Li et al. paper on physics-informed neural operators). This can be helpful but does not have to be. In their original conception, FNOs are trained purely data-driven.
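To make 1) concrete, here is a sketch of the purely data-driven objective (fno_apply and all names are hypothetical):

import jax
import jax.numpy as jnp

def loss_fn(params, fno_apply, a_batch, u_batch):
    # a_batch: states at t=0; u_batch: reference states at t=1,
    # both produced beforehand by a classical Burgers solver
    prediction = fno_apply(params, a_batch)
    return jnp.mean((prediction - u_batch) ** 2)

# gradients for any optimizer, e.g. Adam from Optax:
# grad_fn = jax.grad(loss_fn)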
@lineherz 5 months ago
I was looking at the reference code you mentioned in the Jupyter notebook and found something in the 2D case that I can't understand:

out_ft[:, :, : self.mode1, : self.mode2] = self.compl_mul2d(x_ft[:, :, : self.mode1, : self.mode2], self.weights1)
out_ft[:, :, -self.mode1 :, : self.mode2] = self.compl_mul2d(x_ft[:, :, -self.mode1 :, : self.mode2], self.weights2)

I don't understand why there are two weight tensors (weights1, weights2) and why they take the upper mode1 frequencies. Can you explain this? Thanks for your video.
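For context, the indexing can be reproduced in JAX: jnp.fft.rfft2 (like torch.fft.rfft2 in the reference code) exploits Hermitian symmetry only along the last axis, so the first frequency axis still contains both non-negative and negative frequencies. A small sketch (shapes are illustrative):

import jax.numpy as jnp

x = jnp.zeros((64, 64))             # a real-valued 2D field
x_ft = jnp.fft.rfft2(x)             # shape (64, 33): only the last axis is halved
mode1, mode2 = 8, 8
corner_pos = x_ft[:mode1, :mode2]   # non-negative frequencies along axis 0
corner_neg = x_ft[-mode1:, :mode2]  # negative frequencies along axis 0 (FFT ordering)
# both corners carry independent low-frequency content, hence the two
# separate weight tensors (weights1 and weights2) in the reference code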
@gitfted_by_AI 11 months ago
I have a question about R: only low frequencies are kept, but in some cases keeping the high frequencies can be relevant.
@MachineLearningSimulation 11 months ago
Hi, yes, that's indeed a valid concern. In the paper by Li et al., it is argued that the nonlinearities recover the higher modes. However, it is a matter of experimental evidence how well they do this. For benchmarks against UNets, check for instance the PDE Arena paper ("Towards Multiscale...") by Gupta and Brandstetter.
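For reference, the truncation in question, sketched for the 1D case (simplified to a single channel; the actual SpectralConv1d additionally mixes channels):

import jax.numpy as jnp

def spectral_conv_1d(u, weights, n_modes):
    # u: (n_points,) real signal; weights: (n_modes,) complex multipliers
    u_ft = jnp.fft.rfft(u)                      # (n_points//2 + 1,) coefficients
    out_ft = jnp.zeros_like(u_ft)
    # keep (and transform) only the lowest n_modes, zero out the rest
    out_ft = out_ft.at[:n_modes].set(u_ft[:n_modes] * weights)
    return jnp.fft.irfft(out_ft, n=u.shape[0])  # back to physical space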
@starshipx1282 11 months ago
Great effort!
@MachineLearningSimulation 11 months ago
Thanks a lot 😊
@SahbaZehisaadat 9 months ago
Great, thanks!
@MachineLearningSimulation 9 months ago
You are welcome! 😊
@soudaminipanda 9 months ago
I am having a very hard time understanding your data preparation. It should have been taught more slowly.
@MachineLearningSimulation 9 months ago
Hi, thanks for letting me know. I'm sorry to hear that. 😐 What was too fast for you? The speed of my talking, or did I spend too little time explaining the involved commands? Can you also pinpoint specific operations that were hard for you to comprehend? Then I can cover them in greater detail in a future video. 😊