Great explanation, best one I've seen so far. Quick question, though I'm assuming the answer: when the data plane goes over 80% and the control plane is halted, reserving that 80%... when the data plane goes over 100%, does it take part of the control plane's 80%? So in theory the data plane can go as high as 180%?
@devcentral, 7 years ago
No. The TMM thread is scheduled exclusively for the data plane, and the control-plane thread is scheduled exclusively for control-plane tasks. When TMM hits 80%, the control-plane thread is 80% halted, but none of that capacity goes to TMM; otherwise we'd end up back in the original scenario where TMM and the control plane need to context switch.
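The throttling behavior described above can be sketched in a few lines. This is a conceptual illustration only, based on how the reply describes it, and is not F5's actual implementation; the function name and threshold constants are assumptions for illustration.

```python
# Conceptual sketch (NOT F5's actual code) of the "idle enforcer" idea:
# the control-plane thread is forcibly idled once the data-plane (TMM)
# thread crosses the 80% threshold, leaving the control plane a 20%
# floor. TMM never borrows the control plane's cycles, so the two
# never context-switch against each other.

TMM_THRESHOLD = 0.8          # TMM utilization that triggers throttling
CONTROL_PLANE_FLOOR = 0.2    # share the control plane always keeps

def control_plane_share(tmm_utilization: float) -> float:
    """Fraction of its CPU the control-plane thread may use,
    given TMM utilization (both expressed as 0.0 to 1.0)."""
    if tmm_utilization >= TMM_THRESHOLD:
        # Idle enforcer kicks in: control plane is 80% halted.
        return CONTROL_PLANE_FLOOR
    # Below the threshold, the control plane runs unthrottled.
    return 1.0
```

Under this model the data plane never exceeds 100% of its own core; only the control plane's share changes.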
@darshandkd, 7 years ago
I guess the data plane starts yanking CPU cycles when it reaches 80% itself, and that is when you see the "idle enforcer" log message in /var/log/kern.log. So, for the sake of the calculation, yes, it is 180% (100% of the data plane + 80% of the control plane), leaving 20% for the control plane. For more, read support.f5.com/csp/article/K15003
@TroyMurray, 6 years ago
I'm curious how this works with vCMP guests: is it identical? I ask because I know that on certain blades (B2250) you can allocate 1 CPU to a vCMP guest. If the CPU has 10 physical cores, each running HTSplit, for a total of 20 logical cores, does allocating 1 CPU mean allocating just 1 logical core? If so, how do TMM and the Linux subsystem function? Using the 80/20 split?
@robertatfsparrow, 7 years ago
Hello Jason. What happens when the control plane reaches a high CPU percentage? Does it pose any risk? Is there a threshold at which control-plane tasks are impacted? Can the load be shared with the data-plane cores? Is there any limit that can be set on the control-plane cores? Thanks, Roberto