I want this channel to grow with the order of O(Re^3)! Great content, sir, really appreciated!!!!
@MachineLearningSimulation · 1 year ago
Thanks for the cool comment 😁😊 Feel free to share the channel/video with your friends and colleagues to help achieve this growth :)
@handyMath · 2 years ago
Really great content! You deserve way more views!
@MachineLearningSimulation · 2 years ago
Thanks so much for the feedback 😊 Feel free to share the videos with your friends and peers, that would be super awesome.
@MrDoraHo · 2 years ago
Great video :D I have a few comments regarding DNS for anyone who wants to dive a bit deeper.
1. In reality, you won't fail by simulating something with a low-resolution DNS; put differently, you don't really need that high a resolution to perform a DNS at all. This is the nature of DNS: the solution usually converges to itself. The main question is how much detail (i.e., which eddy scales) you would like to resolve. So don't be afraid of performing a low-resolution DNS if you just want the large-scale information, as DNS usually gives you the full physical information.
2. There are a few ways to reduce the computational cost of DNS, since people are usually interested in only a fraction of the simulation domain. For example, an astrophysicist simulating a star-forming cloud would only be interested in the regions dense enough for stars to form. In such cases, methods like SMR/AMR (Static/Adaptive Mesh Refinement) help: they place a coarse grid in the regions you are not interested in and focus a fine grid on the dense regions. As a result, you can sometimes save a few orders of magnitude of computational cost with those methods while still getting the information you want.
@MachineLearningSimulation · 2 years ago
Amazing, thanks for the comment and the kind words 😊 And you are right. I think these are some nice points for people who want to dive deeper.
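To give a rough feeling for the kind of savings the SMR/AMR approach mentioned in point 2 above can provide, here is a toy cell-count comparison. All numbers are made-up placeholders; real savings depend strongly on the problem and the refinement strategy.

```python
# Toy comparison: uniformly fine grid vs. refining only the region of interest.
# All numbers below are hypothetical, purely for illustration.
n_fine = 1024                 # fine cells per direction on a uniform grid
n_coarse = 128                # coarse cells per direction elsewhere
fine_fraction = 0.01          # fraction of the volume that actually needs the fine grid

uniform_cells = n_fine**3
refined_cells = n_coarse**3 + fine_fraction * n_fine**3

print(f"uniform fine grid : {uniform_cells:.2e} cells")
print(f"with refinement   : {refined_cells:.2e} cells")
print(f"saving            : ~{uniform_cells / refined_cells:.0f}x fewer cells")
```

With these placeholder numbers, refining only 1% of the volume already cuts the cell count by roughly two orders of magnitude.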
@codeChrit · 1 year ago
Great video. I have a question about timestamp 31:19: why is 10^6 seconds equal to 1 month, and not ~12 days?
@MachineLearningSimulation · 1 year ago
You're welcome 😊 Thanks for the comment. To be precise, 10^6 seconds are only about 11.6 days. Although there is of course a factor of 2 to 3 between this estimate and a month, you could argue it's still the same order of magnitude. The estimates done in the video are extremely rough; we lose many significant digits along the way. Also, the rounding we did in the exponent to Re^3 is mathematically questionable :D. You should also remember that full utilization of a supercomputer requires perfect parallelization of the problem (and perfect programming), which is impossible to achieve. That alone would probably add another order of magnitude to the runtime. In the end, I think the important takeaway is the infeasibility of these simulations with our current (and future) hardware 😊
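As a quick sanity check of just that conversion (only the arithmetic, independent of the rest of the estimate):

```python
# How long is 10^6 seconds, really?
seconds = 1e6
seconds_per_day = 60 * 60 * 24               # 86,400 s in a day

days = seconds / seconds_per_day             # ~11.57 days
print(f"{seconds:.0e} s ≈ {days:.2f} days")  # ~11-12 days, a factor of ~2.6 short of a 30-day month
```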
@alibabatwist · 5 months ago
Thanks for your lecture. The pronunciation of "inviscid" is not [INVISKID]; it is simply [INVISID].
@MachineLearningSimulation · 4 months ago
You're welcome 😊. You spotted the non-native speaker 😉
@alexfoergaard · 1 year ago
Thank you so much for the awesome video. I am not super familiar with this field, so please bear with me a little here. I have Pope's book "Turbulent Flows"; he doesn't, however, use the theta that you do. How should I interpret the theta you are using? Is it purely to say that it is the order of magnitude that is important? Thanks in advance :D
@MachineLearningSimulation · 10 months ago
Hi, thanks a lot for the comment and the kind words 😊 I think you are referring to the "big O" notation. In the context of this video, I wanted to express with it that something grows similarly to whatever is inside the "big O" expression. As an example, take the time scale, which is O(1/sqrt(Re)): if you have a 4 times higher Reynolds number, your time scale halves. Hope that helped 😊
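A tiny numerical illustration of that statement (the constant in front is arbitrary; only the ratio matters):

```python
# Time scale scaling ~ O(1/sqrt(Re)):
# multiplying the Reynolds number by 4 halves the time scale.
def time_scale(Re, C=1.0):
    """Toy time scale proportional to 1/sqrt(Re); C is an arbitrary constant."""
    return C / Re**0.5

Re = 1.0e4
print(time_scale(4 * Re) / time_scale(Re))   # 0.5, i.e. half the original time scale
```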
@ibonitog · 8 months ago
Great video! What software do you use for your handwriting? Cheers!
@MachineLearningSimulation · 8 months ago
Thanks, it's Xournal++.
@ibonitog · 8 months ago
@MachineLearningSimulation No iPad support :( It would be perfect otherwise! Thanks for your time! :)
@benjaminmelde1479 · 2 years ago
This really was an awesome video, thank you so much! If I write my thesis about turbulent flow simulations, is the book you mentioned the only one I need to read?
@MachineLearningSimulation · 2 years ago
Glad it was helpful! :) You're very welcome. I think it's hard to say whether it will be the only thing you have to read, but Pope's "Turbulent Flows" is a classic reference. Also check out Wilcox's "Turbulence Modeling for CFD" if you are interested in turbulence models.
@5ty717 · 7 months ago
Excellent
@MachineLearningSimulation · 6 months ago
Thank you! Cheers!
@mermanstorm3562 · 2 years ago
I note you slipped in an extra Re^0.25. At Re = 10^8, that is a factor of 100. Leave out that factor of 100, and your example of an aircraft computation would not take a month, but 8 hours. In a year you could do all 1000 cases, a reasonable time for a new aircraft design cycle.
@MachineLearningSimulation · 2 years ago
Thanks for the comment :). You are right, the simplification of going from Re^(11/4) to Re^(12/4) = Re^3 is a rather rough one and results in the factor of 100 in runtime for the aerospace example. However, I'd argue it is okay, since our estimates are super rough in the first place: we are not accounting for potential additional effects, and we also build upon the Kolmogorov scales. In the end, it doesn't greatly matter whether it takes 8 hours or a month, as it is still running on (the world's fastest) supercomputer. The estimate also assumed the CFD simulation would scale as well as benchmark software (which it will not, by a large margin); hence, runtimes will be considerably longer. Ultimately, industrial CFD applications don't have access to supercomputer-like resources, and it would also be unreasonable to occupy a (full) supercomputer for an entire year; the costs would be unbearable. Let me know what you think :).
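To make the factor under discussion explicit, here is a rough sketch of just the exponent-rounding effect (not a re-derivation of the video's full estimate):

```python
# Effect of rounding the cost exponent from Re^(11/4) up to Re^3 at Re = 1e8.
Re = 1e8

extra_factor = Re ** (3 - 11 / 4)        # = Re^(1/4) = 100 at Re = 1e8
print(extra_factor)                      # 100.0

# If the rounded estimate gave roughly one month of runtime, removing the
# extra Re^(1/4) factor brings it down to the order of 7-8 hours.
month_in_hours = 30 * 24
print(month_in_hours / extra_factor)     # 7.2 (hours)
```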
@saadmirza2727 · 2 years ago
It's not 500 teraflops, I think it's 500,000 teraflops.
@MachineLearningSimulation · 2 years ago
Hi, thanks for the comment. :) Do you have a timestamp to which point in the video you are referring to?
@saadmirza2727 · 1 year ago
29:50
@saadmirza2727 · 1 year ago
Oh sorry, you're right, you did 10^5, which is petaflops…
@MachineLearningSimulation · 1 year ago
@saadmirza2727 Great. :) Thanks for the clarification. And of course, no problem!
@FerdiTekin · 2 years ago
TURBULENCE_XYZ: Tn (Like Capacitance) Lx (Like Capacitance) Hy (Like Capacitance) Wz (Like Capacitance)
SOLID Position: Check CENTERED
SURFACE Position: Check SURFACED
turbulent_xyz: tn (Capacitance Roll) lx (Capacitance Roll) hy (Capacitance Roll) wz (Capacitance Roll)
ATOMIC VOLUME Position: Check AIR
When TURBULENCE_XYZ take direction, and velocity, turbulent_xyz effected this situation. Influence does not actually reach a higher level than affecting, but the sum of the active dimension stitches can be higher than certain values of the influencer. Mean: turbulent_xyz [tn, lx, hy, wz] total bigger than TURBULENCE_XYZ [Tn, Lx, Hy, Wz].
@MachineLearningSimulation · 2 years ago
Hi, thanks for the comment. I can't quite decipher what your comment is supposed to be about 😅