Watch this "lightboard" video led by Frank Denneman, Chief Technologist at VMware, and Johan van Amersfoort, Tech Marketing Architect at ITQ, who present a deep dive on NUMA. Please find the link to the home lab at vhojan.nl
Comments: 11
@drgr33nUK A year ago
Great video, but prepare yourself for some spine-tingling squeaks from the whiteboard :)
@A83r231 5 years ago
Really great video. Thank you both for the effort!
@fooey88 5 years ago
Such an informative video. Thank you!
@AdamJohnson0110 5 years ago
Good content, thanks!
5 years ago
Great video Frank, really awesome.
@brink668 5 years ago
Wow this is fabulous
@navguest1740 3 years ago
Clean explanation
@jamesm.2322 5 years ago
Shame you guys are dead wrong about EPYC and 'MonsterVMs'. Adding "numa.consolidate = FALSE" and "numa.autosize.vcpu.maxPerVirtualNode = 1" to your VMX file allows the VM to span evenly across all of the NUMA nodes. I have several VM-based SQL production databases on EPYC using these flags to run across both sockets, limiting them to 8 cores each. Of course, in MSSQL you need to set 'Max Degree of Parallelism' to the number of NUMA nodes the SQL box is spanning. So while you may be able to talk about NUMA, your stance on EPYC for Monster VMs is ill-founded.
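For readers who want to see what this comment is proposing in concrete terms, a minimal sketch of the VMX additions follows. This is only an illustration of the commenter's approach, not a general recommendation (the reply that follows explains the trade-offs), and whether it helps depends entirely on your workload and CPU topology.

```ini
# Sketch of the .vmx additions described in the comment above,
# for an EPYC-based ESXi host.
# Effect: each vCPU becomes its own vNUMA client, so the VM is
# spread across NUMA nodes instead of being consolidated.
numa.consolidate = "FALSE"
numa.autosize.vcpu.maxPerVirtualNode = "1"
```

On the guest side, the comment pairs this with setting SQL Server's 'Max Degree of Parallelism' to the number of NUMA nodes the VM spans.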
@FrankDenneman 5 years ago
Thanks for your feedback. I read your Reddit posts, and it seems you were bitten quite badly by the new architecture. Unfortunately, the settings you recommend create a NUMA client size of 1, producing extremely shallow pools of local memory and destroying the notion of locality. Your settings do not align with the physical layout of the CPU package and will not benefit guest and application optimizations. You left out action affinity, which you mentioned in your Reddit comments (by the way, action affinity is a lot older than 6.5). If you use these settings combined, you end up with a lot of clients on the same NUMA node while not expressing the correct architecture to the layers above. Sir Squishy is already aware of this article, but if anyone else is interested, this article has more information about EPYC and ESXi: frankdenneman.nl/2019/02/19/amd-epyc-and-vsphere-vnuma/
@jamesm.2322 5 years ago
@@FrankDenneman On EPYC, for giant VMs with large memory pools or large vCPU counts, you need to consider breaking out the NUMA under the VM; otherwise you run into memory bleeding and exhaust your memory availability per NUMA node. As long as the guest OS and its applications are NUMA-aware (Win2008R2+ and MSSQL2010+ are), you can take advantage of parallelism across the 'shallow' NUMA allocations with the VMX changes. I have benchmarked these changes every which way possible, and I see nothing but benefits and performance improvements. Yes, action affinity still needs to drop from 180 down to 0 to entirely stop the virtual memory pools from bleeding. The only thing really left for VMware to consider doing for EPYC1-based hosts (maybe 4-way and 8-way Intel boxes too) is to allow VMs to address NUMA separately, so you can specify how many NUMA nodes the VM lives on. Using the VMX changes I laid out, you can either choose 1-8 NUMA nodes by addressing 1-8 vCPUs, or not apply the changes and let the physical CPU boundaries dictate your NUMA allocations. A side note, to be perfectly clear about the NUMA spread: if you apply the VMX changes and run 8 vCPUs, then to get more you need to double down on the cores (8 to 16), or else you unbalance the NUMA allocation and get erratic compute performance.
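The action-affinity change discussed in this exchange maps to an ESXi host advanced setting. A hedged sketch of how it is typically adjusted follows; this assumes the standard Numa.LocalityWeightActionAffinity option (setting it to 0 disables action affinity, and the default value varies by ESXi version, so verify on your own host before changing it).

```shell
# Disable NUMA action affinity on an ESXi host (host-wide setting).
# Assumes the standard Numa.LocalityWeightActionAffinity advanced option.
esxcli system settings advanced set \
  --option /Numa/LocalityWeightActionAffinity --int-value 0

# Check the current and default values before/after the change.
esxcli system settings advanced list \
  --option /Numa/LocalityWeightActionAffinity
```

As with the VMX flags, this trades NUMA locality decisions made by the scheduler for a fixed policy, so it should be benchmarked against your own workload rather than applied blindly.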