What are the mesh size limits for, say, an A100 40/80 GB? With a "normal" CPU-based solver one needs up to 4 GB of RAM per 1 million cells. Are there common mesh size limits for external automotive aero? Is it valid to estimate GPU memory requirements the same way as RAM for CPU-based solvers?
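Applying that CPU rule of thumb (roughly 4 GB per million cells) directly to GPU memory gives a quick upper bound. Whether the native GPU solver's per-cell footprint actually matches CPU RAM usage is an assumption here, and solver overheads eat into the usable fraction; the sketch below just makes the arithmetic explicit:

```python
# Rough mesh-size estimate from the "4 GB per 1M cells" rule of thumb.
# Assumption: GPU memory scales like CPU RAM for the same solver settings
# (unverified for the native GPU solver; the numbers are illustrative).
GB_PER_MILLION_CELLS = 4.0

def max_cells_millions(gpu_memory_gb: float, usable_fraction: float = 0.9) -> float:
    """Estimate the largest mesh (in millions of cells) that fits in GPU memory,
    reserving some headroom for solver overhead."""
    return gpu_memory_gb * usable_fraction / GB_PER_MILLION_CELLS

for mem in (40, 80):  # A100 40 GB and 80 GB variants
    print(f"A100 {mem} GB -> ~{max_cells_millions(mem):.0f}M cells")
```

Under these assumptions a single 40 GB A100 would top out around 9M cells and an 80 GB card around 18M, which is why multi-GPU setups come up for full-car external aero meshes.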
@erockromulan9329 1 year ago
I am currently setting up six A100 GPUs to run Ansys Fluent 2023 R1's native GPU solver. We have 10 HPC licenses available and one A100 online, but we have not seen the significant speedup that is being advertised... We are coordinating with our Ansys IT rep, but this feature seems too new for him as well. I'm not sure what we need to fix.
@KETIVTechnologies 1 year ago
Hello. KETIV has received your inquiry and is actively working on a response. We apologize for any delay and highly value your engagement. - KETIV
@erockromulan9329 1 year ago
@@KETIVTechnologies Don't worry, it was an obvious fix! If you open Fluent standalone in Enterprise mode, there is an option to enable the "Native GPU Solver." You can't do that from the Workbench version, at least I haven't figured out how yet. The native GPU solver works great, but it doesn't support features like periodic instancing and some physics models, so you have to weigh the benefits of using it.
@erockromulan9329 1 year ago
How are they calculating the solve time per iteration for GPUs? From what I've seen so far, adding GPUs can reduce the actual number-crunching time, but handling the mesh on a low CPU core count is a huge bottleneck for large meshes. Do you need something like 64 Intel Gold cores to 'unleash' the full potential of one A100 and reduce the total end-user time per iteration?
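The bottleneck described above is essentially an Amdahl's-law split: if only the number-crunching moves to the GPU, the serial CPU-side mesh handling caps the overall per-iteration speedup no matter how fast the GPU is. The timings below are made-up illustrative values, not Fluent measurements:

```python
def iteration_time(cpu_time_s: float, gpu_fraction: float, gpu_speedup: float) -> float:
    """Per-iteration wall time when only a fraction of the work is GPU-accelerated.
    cpu_time_s: baseline all-CPU time per iteration (assumed value)
    gpu_fraction: share of that work the GPU can take over
    gpu_speedup: GPU speedup factor on that share
    """
    accelerated = cpu_time_s * gpu_fraction / gpu_speedup  # GPU number crunching
    serial = cpu_time_s * (1.0 - gpu_fraction)             # CPU-bound mesh handling
    return accelerated + serial

# Assumed: 10 s/iteration on CPU, 80% of work GPU-eligible at 20x speedup
print(f"{iteration_time(10.0, 0.8, 20.0):.1f} s/iteration")
```

With these numbers the GPU portion drops to 0.4 s but the 2 s of CPU-side work remains, so the end-user sees only about a 4x improvement; that matches the intuition that you need enough CPU cores to shrink the serial mesh-handling share before one A100 pays off.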