Thanks for creating these videos. They're very helpful for us.
@FluidNumerics · 28 days ago
Glad you enjoy them. If there is anything specifically you want to see in 2025, just let us know here in the comments :)
@aminezoubir4667 · 2 months ago
Very interesting, thank you. I'm looking for a book to learn MPI with Fortran; any help would be appreciated.
@FluidNumerics · 2 months ago
For getting started, you might be interested in "Introduction to Programming in Fortran" ( link.springer.com/book/10.1007/978-3-319-75502-1 ); chapter 32 covers the basics of MPI in Fortran. For more in-depth coverage of MPI specifically, check out "Parallel Programming with MPI" by Peter Pacheco ( www.amazon.com/gp/product/1558603395/ ).
@Ariccio123 · 2 months ago
Does VTune work with Fortran code?
@FluidNumerics · 2 months ago
VTune does work with Fortran code. For profiling AMD GPUs and getting roofline diagrams on that specific hardware, you'll want to use omniperf.
@TeeTeeNet · 2 months ago
Wouldn't one want each hash grid cell to contain many triangles? Or maybe it's a tuning parameter? Meshes may have tiny triangles in an area of interest that are orders of magnitude smaller than the other triangles, so basing the hash grid size on the small triangles could explode the hash grid's storage requirements, especially in 3D.
@FluidNumerics · 2 months ago
All good points. The size of the hash cells is definitely a tuning parameter that changes how many unstructured grid elements you may need to check for a given particle. To store a hash grid, you really just need a fixed grid spacing size in each direction (two floats) and the position of the lower left corner (two more floats). Even if I have more hash cells, I don't need any more information than this. In 3-D, we just need hash cell size in each direction (three floats) and the position of the lower-left-bottom corner (three more floats). Storage for the hash cell grid is not really an issue.
@ali.y.1978 · 2 months ago
Great work! How can I find your email or contact information? I have some questions.
Amazing video & resource! It was so much fun to follow along. I hope you plan on doing more of these :)
@FluidNumerics · 3 months ago
We have a few things queued up at the moment and really want to keep this as a regular thing. If you have a topic you'd like to see covered or a model you want to see implemented, drop a comment here
@badhatharry4323 · 10 months ago
The two GitHub links in the description are broken; is there a chance to update those?
@FluidNumerics · 10 months ago
@badhatharry4323 - thanks for pointing this out. The links have been fixed. See fluidnumerics.github.io/gpu-programming/codelabs/build-a-gpu-app-openmp-fortran/#0 for the codelab.
@carlosenriquemosqueratruji9559 · a year ago
Amazing!! Great job
@ГордейГойман · a year ago
Thanks for the video! I'm curious whether there would be some performance difference compared to a native implementation of the smoothing kernel in CUDA Fortran. And also, does it make sense to try to parallelize a hydrodynamic code (an atmosphere model, for example) in this OpenMP "offload" manner?
@FluidNumerics · a year ago
It's possible that using a kernel-based approach, like CUDA Fortran (or HIP + hipfort), will give you different performance. Kernel-based approaches to GPU programming give you more direct control over the operations each thread executes. With directive-based approaches, like OpenMP or OpenACC, you hint to the compiler what needs to be done, and the performance is highly dependent on the compiler. The nice thing about directive-based approaches is that you can easily start offloading your code to the GPU. For large hydrodynamic codes, this can be a good way to get started. Then, using your profiler, you can find the hot spots that would benefit from hand-written kernels.
@juancolmenares6185 · a year ago
I thought the "F" word was "Flux" :p
@juancolmenares6185 · a year ago
thanks for sharing this useful information.
@chjesse330 · a year ago
If you want to use 'spot' instances to reduce costs, how would you go about that?
@FluidNumerics · a year ago
Spot instances are currently not supported in the marketplace release of RCC-CFD; however, preemptible instances are supported. When configuring the deployment, check the box for "Preemptible Instances" in the first compute partition. Keep in mind that when preemption occurs, the Slurm job scheduler will requeue the job and simply restart your batch script from the top. Ideally, you'd also want your batch scripts written so that the memory state can be checkpointed and restarted; a good tool for this is DMTCP. If you are currently using our solutions in the marketplace, you can always open a support ticket by reaching out to [email protected]. Happy Computing!
@chjesse330 · a year ago
@@FluidNumerics thank you very much!
@samikmaiti9339 · 2 years ago
In the video, nothing is clearly visible; I couldn't see how you did the setup for OpenFOAM.
@theperfectionist1607 · 2 years ago
What VIM/terminal theme do you use?
@tennisfreak312 · 2 years ago
Hi. Do these coding tutorials assume a working knowledge of SELF? If so, where can I find resources to learn about SELF before diving into these? Thanks
@FluidNumerics · 2 years ago
The older videos did not have any narration. Since then, I've started doing whiteboarding sessions at the beginning to provide context for what is going on. Check out the videos starting in January 2022 in "The 'F' Word" playlist ( kzbin.info/aero/PLRO4xf5MdhAv9CNTETor75rANZtBqPVgQ ). You can also check out the repository for SELF ( github.com/fluidnumerics/self ) and the ReadTheDocs ( self.readthedocs.io/en/latest/ ). Since this code is being actively developed, the documentation is also a work in progress.
@musicmaker99 · 3 years ago
Woah! Fortran!
@FluidNumerics · 3 years ago
Indeed! Fortran! and not the 77 stuff either.
@taarix3320 · 3 years ago
How do I update the NVIDIA drivers? It's version 11.1, which is very old.
@FluidNumerics · 3 years ago
We had some technical difficulties with audio that were fixed about 5 minutes in.
@StephaneCharette · 3 years ago
I will be extremely happy to see this merged. Will open up the user base to many new people, and will give everyone more choices when it comes to buying hardware.