Sockets, Cores and vNUMA

So you've identified a VM in your environment that needs a little boost, and you've decided to up its resources so it can adequately run whatever services it's providing. You go to up the vCPU count and ask, "Do I add more sockets or more cores? What's the difference?"

Firstly, if you're asking this question, you're probably still using the fat client rather than the web client, but that's both a personal decision and an entirely different conversation. Regardless, cores per socket is an advanced setting in the VM configuration menu of the fat client. It has an interesting history, too.

The setting first appeared in vSphere 4.1 and was engineered to get around some guest OS licensing limitations. Windows Server 2008, for example, could only use 4 physical CPUs (sockets), so to get around that, people could up the cores per socket. If you assigned 8 vCPUs as 8 sockets with one core per socket, Windows Server 2k8 would only use 4 of them. However, if you assigned 1 socket with 8 cores per socket, your 2k8 guest OS would see and use all 8 CPUs. Neat, right?
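The arithmetic behind that workaround can be sketched in a few lines. This is purely illustrative: the function name and the 4-socket cap are my own stand-ins for the licensing behavior described above, not any VMware or Microsoft API.

```python
# Illustrative sketch of a guest OS socket-license cap (e.g. a 4-socket
# limit like Windows Server 2008's). Not a real API; just the arithmetic.

def visible_cpus(vcpus: int, cores_per_socket: int, socket_cap: int = 4) -> int:
    """Return how many vCPUs a socket-capped guest OS can actually use."""
    sockets = vcpus // cores_per_socket
    usable_sockets = min(sockets, socket_cap)
    return usable_sockets * cores_per_socket

# 8 vCPUs as 8 sockets x 1 core: the guest caps out at 4 sockets -> 4 CPUs
print(visible_cpus(8, 1))  # 4
# 8 vCPUs as 1 socket x 8 cores: one socket, all 8 cores usable -> 8 CPUs
print(visible_cpus(8, 8))  # 8
```

Same 8 vCPUs either way; only the topology presented to the guest changes what the license lets it use.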

Pump your brakes, though. Upping the cores-per-socket setting isn't the golden ticket to unlimited CPU resources. In fact, doing so messes with vNUMA. NUMA stands for Non-Uniform Memory Access. It's essentially a system with more than one system bus. CPU and memory resources are grouped together to form what's referred to as a NUMA node.

The memory and CPU within a NUMA node can talk to each other with relative ease. When a CPU in one NUMA node has to talk to memory in a different, or remote, NUMA node, it can do so, but it comes at an increased latency that can have a negative impact on guest performance. Since modern CPUs run much faster than memory in most cases, NUMA can help keep CPU cores from entering a stalled state where they're waiting on memory. There are a few different ways to determine the size of your NUMA nodes. Microsoft's is the simplest to understand, in my opinion: divide your total RAM in GB by the number of logical processors and you've got a relatively close estimate of how many NUMA nodes you're working with.
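To make node sizing concrete, here's a rough sketch assuming the common case of one NUMA node per physical socket, so each node gets an even share of the host's cores and RAM. The host numbers are made up for illustration.

```python
# Rough NUMA-node sizing sketch, assuming one NUMA node per physical
# socket (the common case). Host specs below are hypothetical.

def numa_node_size(total_ram_gb: int, total_cores: int, sockets: int) -> dict:
    """Per-node resources when cores and RAM split evenly across sockets."""
    return {
        "cores_per_node": total_cores // sockets,
        "ram_gb_per_node": total_ram_gb // sockets,
    }

# A 2-socket host with 20 cores and 256 GB of RAM:
print(numa_node_size(256, 20, 2))
# {'cores_per_node': 10, 'ram_gb_per_node': 128}
```

On that hypothetical host, a VM wanting more than 10 vCPUs or 128 GB of RAM can't fit inside a single NUMA node, which is where vNUMA comes in.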

vNUMA is the setting at the VM level that exposes the guest operating system of a VM to the physical NUMA topology. This allows the VM itself to benefit from NUMA optimizations, even if it's calling for more resources than an individual NUMA node contains.
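The basic idea of carving a large VM into virtual NUMA nodes can be sketched like this. To be clear, this is not VMware's actual placement algorithm, just the simplest version of the constraint: no virtual node should be bigger than a physical one.

```python
# Illustrative sketch (not VMware's real algorithm): split a VM's vCPUs
# into the smallest number of virtual NUMA nodes such that each node
# still fits within a physical NUMA node.

import math

def vnuma_node_count(vcpus: int, cores_per_physical_node: int) -> int:
    """Smallest node count where no virtual node exceeds the physical node size."""
    return math.ceil(vcpus / cores_per_physical_node)

# A 16-vCPU VM on a host with 10-core NUMA nodes spans 2 virtual nodes:
print(vnuma_node_count(16, 10))  # 2
# An 8-vCPU VM fits in one:
print(vnuma_node_count(8, 10))   # 1
```

A guest that sees those virtual nodes can keep threads and their memory together, instead of unknowingly reaching across to a remote node.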

In short, VMware recommends scaling up vCPUs in a wide and flat topology: adding sockets rather than cores when increasing a VM's vCPU count. If we're talking a VM with fewer than 8 vCPUs, the performance difference between adding cores and adding sockets is negligible. However, the reasons for adding cores over sockets are becoming fewer and farther between. Stick with VMware's best practices and you should be good.
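Put another way, "wide and flat" just means one core per socket by default. A trivial sketch of that default (the function is mine, not a VMware tool):

```python
# "Wide and flat" in one line: each vCPU presented as its own
# single-core socket. Hypothetical helper, not a VMware API.

def wide_and_flat(vcpus: int) -> dict:
    """Default topology: one core per socket, sockets = vCPU count."""
    return {"num_sockets": vcpus, "cores_per_socket": 1}

print(wide_and_flat(12))  # {'num_sockets': 12, 'cores_per_socket': 1}
```

Reserve anything else, like packing cores onto fewer sockets, for cases where licensing genuinely demands it.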
