How Many CPUs Should A VM Have?

Today I thought I’d write about CPU scheduling in VMware. CPU scheduling is the process by which VMware’s hypervisor (ESX and ESXi) allocates physical CPU time to VMs, and understanding it helps to answer questions like:

  • How many CPUs should a VM have?
  • Is there an optimum number of processors / cores to allocate to a VM?
  • How do I get the best performance from my CPUs in VMware?
  • Can you allocate too many CPUs to a VM?
  • What is the best practice for adding CPUs to VMs?

Since I’ve had to explain this topic a number of times, rather than scrambling around for a pen and paper to draw diagrams, I figured that it would be easier to have a concise article explaining this stuff, so I can refer people next time I’m asked how to allocate the correct number of CPUs to a VM.

How VMware CPU Scheduling Works

The first thing you have to understand when allocating CPUs to a VM is how VMware CPU scheduling works. CPU scheduling is the process VMware uses to allocate physical CPU time slices to the vCPUs in VMs. Let me explain with an example:

You have an ESX / ESXi host that has 8 cores. This host has 10 VMs running on it, each with 1 vCPU.

How is that possible? Well, the vCPUs in the VMs take turns using the physical cores. Because there are 8 cores, up to 8 VMs can use CPU resources at a time; the other 2 VMs have to wait their turn. This swapping behaviour is normal (it’s the same time-slicing that happens to processes inside Windows), so it doesn’t cause VMs to crash, but it may slow them down. The longer they have to wait, the slower they will go.
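As a rough illustration of this time-slicing, here is a toy model (not the real ESXi scheduler, just a sketch of the idea): 10 single-vCPU VMs share 8 cores, and each tick the 8 VMs that have waited longest get to run.

```python
# Toy model of time-slice scheduling: 10 single-vCPU VMs sharing 8 physical
# cores. Each tick, the scheduler runs the 8 VMs that have waited longest;
# the other 2 wait. (Hypothetical illustration, not the real ESXi scheduler.)
from collections import deque

CORES, VMS, TICKS = 8, 10, 1000
queue = deque(range(VMS))          # VMs ordered by how long they've waited
run_time = [0] * VMS

for _ in range(TICKS):
    running = [queue.popleft() for _ in range(CORES)]  # longest-waiting first
    for vm in running:
        run_time[vm] += 1
        queue.append(vm)           # go to the back of the line

# Each VM gets roughly CORES/VMS = 80% of the CPU time it would get alone.
print(run_time)
```

No VM ever fails to run, they all just run a bit slower than they would with a dedicated core, which is the point made above.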

Can Adding Too Many CPUs To A VM Make VMware Go Slow?

Now that you know how CPU scheduling works, let me introduce a new concept:

A VM with multiple vCPUs has to wait for that many physical CPU cores to become free before it can gain access to the ESX / ESXi host’s CPU resources.

This is crucial information to help us understand how many CPUs a VM should have. Let’s consider an example in which a VM runs faster with fewer CPUs:

There is a VMware ESX / ESXi host which has 4 physical cores. This host runs 5 VMs, each with 1 vCPU. Here’s how each VM (in this example) accesses the resources over time:

Now imagine how that would look if VM number 4 had been allocated an extra CPU:

As you can see, VM number 4 is forced to wait until 2 cores are simultaneously available, because it has 2 vCPUs. Consequently (in this example) the VM will run slower with more CPUs.
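A back-of-the-envelope way to see why waiting for 2 simultaneously free cores hurts: if each of the 4 cores is busy with some probability at any instant, a 1-vCPU VM only needs one free core, while a 2-vCPU VM (under the strict co-scheduling described here) needs two at once. Here is a toy Monte Carlo sketch with a made-up busy probability:

```python
# Toy model of the wait VM 4 faces: with each of 4 cores independently busy
# with probability p_busy at any tick, compare how often at least 1 core is
# free (what a 1-vCPU VM needs) versus at least 2 cores free at the same
# time (what a 2-vCPU VM needs). Numbers are invented for illustration.
import random

random.seed(42)
CORES, TICKS, p_busy = 4, 100_000, 0.6
one_ok = two_ok = 0
for _ in range(TICKS):
    free = sum(random.random() > p_busy for _ in range(CORES))
    one_ok += free >= 1
    two_ok += free >= 2

print(f"ticks with >=1 free core:  {one_ok / TICKS:.2f}")
print(f"ticks with >=2 free cores: {two_ok / TICKS:.2f}")
```

With these numbers the 1-vCPU VM finds a scheduling opportunity on roughly 87% of ticks, while the 2-vCPU VM only finds one on roughly 52%, so the extra vCPU costs scheduling opportunities.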

What’s The Best Practice For Deciding How Many CPUs To Add To A VM?

So now you are probably thinking: how do I know how many CPUs I should allocate to my VM, and how many CPUs is too many? Well, the best practice is to allocate 1 vCPU (core, if you’re using ESXi 5) to a VM, then do performance testing to ensure that CPU utilization in Windows is acceptable under load. If you have a VM that needs more, add another vCPU and test your VM again. If you have a complex environment, with many VMs requiring multiple CPUs, then you may need to plan your resources more carefully.

The more vCPUs you allocate to a VM, the more likely it is to end up waiting for a spare CPU. Consider this: if you give your VM the same number of CPU cores that your host has, then it would only take one of those cores being in use by another VM to prevent your VM from using any CPU resources at all.


25 thoughts on “How Many CPUs Should A VM Have?”

  1. Great information. 2 questions:

    1. Based on the above scenario of 8 cores and 10 VMs, does that mean that if each of the VMs is allocated 2 vCPUs, potentially 6 VMs will have to wait for their turn?
    2. I am planning to install a SQL Server 2012 with 24 vCPUs; how many physical cores will I need?


    • Oooh, great questions! I love it 🙂

      Ok, first, question 1… No… and yes 😈

      If all 10 VMs are demanding constant and equal CPU resources, then yes… However, we know that a lot of CPU time in a VM is spent idle… So the answer could be no: it might work fine with that configuration, with potentially no resource contention. This is why I recommend testing 🙂

      Question 2: You’re installing SQL Server with 24 vCPUs and want to know how many physical cores you’ll need… Wow, that thing is going to be a beast 😀

      I recommend that you hire an expert to help you here, because there are a lot of virtualisation-specific things to plan for when virtualising high-end SQL Servers… For example, are you allocating a raw LUN as storage in VMware rather than a typical VMware datastore? Have you done proper testing to know how many vCPUs to allocate? Maybe you don’t need 24; maybe that’s too many? How is that going to affect bus traffic on the host, and will that affect other VMs on the host? If it’s the only VM on the host, why are you virtualising it rather than using SQL clustering?

      This is something that really needs a lot of planning, because a SQL Server with 24 vCPUs is obviously very business critical. You are lucky, you have a really awesome project on your hands here 🙂

      But, to answer your question: if you allocate 24 vCPUs, you need at least 24 cores. VMware vSphere 5 even lets you differentiate between cores and physical CPUs when allocating CPU resources to VMs; this should also be considered when thinking about CPU scheduling if you’re using vSphere 5 😉

      Hopefully that gives you something to think about and answers your question 🙂 Best of luck with your project 😀

      • Thanks for the information! Appreciate it.

        The reason it’s got 24 vCores is that the initial plan was to put SQL on a physical server, but then someone along the line realised that HA would not be available, so it went back to being set up as a VM. Since the initial plan was for a 24-core physical server, that spec just got carried over to the VM; with a VM we can potentially still move the image to another physical server if the main one dies.

        • Ok, I figured that this was probably for HA…

          So you will probably find that the CPU just doesn’t get used. You might want to put it in and dial it back (removing vCPUs) until you get the right number. If you do this, be sure to enable the CPU hot add option on the VM 😉

          If you do allocate 24 vCPUs, you will need to make sure that any host it can vMotion to has the resources available, so you don’t get into the CPU locking situation described in the article above. You can do this with DRS VM-to-host affinity rules to make sure certain VMs can only run on certain hosts, etc…

          However, I seriously urge you to look at SQL Clustering rather than VMware HA. The reason for this is that VMware HA won’t protect you from operating system corruption.

          Remember that if you do decide to virtualise SQL Server, allocating too many vCPUs can make it run slower due to resource constraints. For example, if you have a physical server and decide to virtualise it, then allocate all of its CPU cores to 1 VM, you will only be able to run that one VM on the host. This is because any other request for CPU usage would mean that the SQL Server has to wait until ALL the CPUs are free. So with virtualisation, you should really be allocating CPU resources based on requirements rather than over-estimating.

          Good luck with your project 🙂

    • Oooh, you should also be aware that there are CPU-based licensing implications for SQL Server and virtualisation… I’m not a licensing expert, but from memory it works like this:

      You need one SQL CPU license for each physical CPU allocated to SQL Server… If it’s virtualised, you divide the single CPU license price by the number of cores per CPU and multiply that by the number of cores allocated to the SQL Server VM. Therefore it’s cheaper to have fewer CPUs and more cores allocated to a SQL Server. You might want to check this with whoever manages your licenses, because with the price of SQL licenses and the lifespan of your project, it might be cheaper to buy new hardware if you don’t have hardware that offers the optimum price-performance option… 😉
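      To make that arithmetic concrete, here is the calculation described above with made-up numbers (the price, core count, and indeed the licensing model itself are illustrative; as noted, verify the real rules with whoever manages your licenses):

```python
# Worked example of the per-core licensing arithmetic described above.
# All figures are hypothetical, for illustration only.
cpu_license_price = 20_000      # assumed price of one per-CPU SQL license
cores_per_cpu = 8               # physical cores on each socket
vcores_allocated = 24           # cores given to the SQL Server VM

per_core_price = cpu_license_price / cores_per_cpu      # 2500.0 per core
vm_license_cost = per_core_price * vcores_allocated
print(vm_license_cost)  # 20000/8 * 24 = 60000.0
```

      On these assumed figures, packing the 24 cores onto fewer, higher-core-count sockets is what keeps the per-core price (and therefore the VM’s license cost) down.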

  2. I must say, Sir, this is the first time that I have felt such gratitude to someone, enough so to leave a comment just to show how much. I have been struggling with a Terminal Server running on ESXi 4.1 for about 2-3 months; the CPU usage was constantly at 100% no matter what I tried. After reading your post regarding VMware and CPUs, I changed the number of vCPUs to be the same as the number of physical CPUs and voila! It worked like a charm. I crack open a bottle of champagne in your name, Lewis!

    • I’m pleased I was able to help :mrgreen:

      Just one other note about terminal servers as VMs: they don’t seem to work as well as physical terminal servers and tend not to like having more than 20 concurrent user connections. If you’re still having trouble, try clustering more terminal server VMs or going physical 😉

      PS: Don’t forget to send some of that champagne this way 😉

      • I am planning on installing a Windows Server 2012 Remote Desktop Services server on a dual-CPU, 12-core HP server. The RDS server will have 60 users logged into it. How many CPUs do you recommend I assign to it? There will also be a few other VMs on this ESXi 5.5 server. Currently, with those other VMs on the server, CPU utilization is 1% on average. I really don’t want to have any physical servers other than the ESXi hosts. What is the difference between a vCPU and cores?


        • Hi Wayne,

          I’ve found that RDS doesn’t really work well with more than 40 users logged onto a server, no matter how many CPUs you allocate. Maybe this is something to do with CPU context switching? I don’t know.

          Why not try creating an RDS cluster with one CPU per VM and see how the load is?

  3. Great article!

    Would you be able to help me with this query:

    If I have a 4-core (non-HT) Xeon CPU, 32GB RAM and 4 SAS HDDs in RAID 10, what is the maximum number of VMs I can run optimally in a production environment, and how many vCPUs should I allocate to each VM?

    The VMs are not heavily loaded: web services, FTP services, mail services (all RedHat based).

    Many thanks,

    • It’s not possible to answer your question without running tests on the VMs… However, there’s a tool for working out exactly what hardware you will need to run a selection of VMs: VMware Capacity Planner. It will run against your VMs for a period of time, recording peak and average usage of resources such as CPU, I/O, RAM, etc., and then ask you what hardware you have (it knows about most major brand servers). It will then make recommendations 🙂

      As for how many CPUs / cores you will need for each of those VMs, hopefully the article will help answer that, but as a guide: make sure the apps don’t have specific minimum requirements, then try 1 vCPU in each, monitor the performance and increase as necessary 😉

      • Thanks for the answer, I will try the suggested VMware tool.

        Do you think it would be OK to run 5-6 VMs on a quad-core CPU without HT? As I said, the VMs won’t be overloaded.
        I am thinking of assigning a single vCore per VM.

        • As I say, it’s not possible to say exactly, because I don’t have all the details about your hardware and VMs (sorry to give you a wishy-washy answer, but it’s true). The tool will be able to give you the answer.

          Having said that, if you are wondering whether you can run more VMs than you have physical CPUs / cores, then the answer is definitely yes. You can run 6 or more VMs on a quad-core CPU. As an example, I worked at one site that had 4 HP servers, each with 16 cores (Xeon CPUs), and there were over 100 VMs running there (some with up to 4 vCPUs)… which is over 100 vCPUs on servers with a total of 64 CPU cores available for use. The site ran very well.

          However, as I say, run VMware Capacity Planner to help you decide how many CPUs you need, because depending on your hardware and VM resource requirements, it might not be possible. VMware Capacity Planner will be able to help you with all your resource requirements, from SAN requirements through to RAM requirements, as well as deciding how many CPU cores you need 😉

  4. Hi Lewis,

    Thanks for writing this; it was a good read. Can you give any examples of when you would go ahead and allocate more than 1 vCPU? Let’s say I allocate 1 vCPU and don’t quite see the performance I’d expect; what does allocating 2 vCPUs actually do behind the scenes?

    • There are only 2 scenarios I can think of, off the top of my head, for having more than 1 vCPU in a VM:

      • Performance
      • You are running an application that requires multiple CPUs and may require CPU affinity
    • Hey Mike,

      It was written when vSphere 4 was out, but it still has some relevance in vSphere 5 (although they did change the way CPU allocation works: you can now allocate cores and CPUs, while in vSphere 4 you could only allocate cores).

      Probably the main thing to be aware of is that if you allocate a number of vCores across multiple vCPUs, as you might expect from the article above, your VM will have to wait until those resources are available… Note that, at the time of writing, VMware will wait until the cores are available from a single CPU, rather than allocating cores from any CPU to present as a single CPU (if you allocate vCPUs rather than just vCores).

  5. Hi Lewis,

    I also went through the VMware document:

    and on page 8 it describes Strict Co-Scheduling (in ESX 2.x) and Relaxed Co-Scheduling (ESX 3.x and later). Doesn’t your example only apply to Strict Co-Scheduling? You wrote in your last comment that the article referred to ESXi 4.x, but Relaxed Co-Scheduling was already in use by then, wasn’t it? Would that mean that in your example VM 4 could already get 1 vCPU at Time+2?

    Maybe I am mixing something up, or I misunderstood Relaxed Co-Scheduling… I am just starting to dig deeper into the VMware world.

    Thanks in advance & greetings from Germany!


    • Dang,

      I understand that whitepaper the same way you do. Ever since ESX 3.x, this statement has not been true:

      “A VM with multiple vCPUs has to wait for that many physical CPU cores to become free before it can gain access to the ESX / ESXi hosts CPU resources”

      And yet if you search the web for information on the matter, you will find it repeated as if it were still the case in the latest versions.

      I think it is a simple case of people (bloggers) repeating what they have been told without verifying it.


      • Yeah, I’ve been around since ESX 3, and I had read that it still works that way. I guess it’s a case of getting myself updated and then updating the blog.

        … Coming soon πŸ˜‰

  6. The way I read this, relaxed co-scheduling reduces co-stop, but doesn’t eliminate it. A machine can still become co-stopped if the lag between its vCPUs exceeds the threshold.

    It looks like this article is still relevant, though in some (many?) cases the performance hit may be less than in a strictly co-scheduled environment.
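    For the curious, the co-stop idea being discussed can be sketched as follows: under relaxed co-scheduling a multi-vCPU VM’s vCPUs may run independently, but once the progress gap (“skew”) between them exceeds a threshold, the vCPU that is ahead is held back until the lagging one catches up. This is a toy model with invented threshold and probabilities, not ESXi’s actual algorithm:

```python
# Minimal sketch of relaxed co-scheduling for a 2-vCPU VM: vCPUs run
# independently, but a vCPU that gets too far ahead of its sibling is
# co-stopped for the tick. Threshold and run probability are made up.
import random

random.seed(1)
SKEW_LIMIT = 5                      # hypothetical co-stop threshold, in ticks
progress = [0, 0]                   # per-vCPU progress counters
co_stops = 0

for _ in range(1000):
    for v in (0, 1):
        ahead = progress[v] - progress[1 - v]
        if ahead >= SKEW_LIMIT:     # too far ahead: co-stopped this tick
            co_stops += 1
            continue
        if random.random() < 0.7:   # vCPU happened to get a physical core
            progress[v] += 1

print(co_stops)  # some co-stops occur, but neither vCPU waits for both cores
```

    The point matches the reading above: co-stop still happens when skew builds up, but the VM no longer has to wait for all of its vCPUs to be placed simultaneously, so the hit is smaller than under strict co-scheduling.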

  7. Hi

    This is great reading and might still be quite relevant.
    I have a Dell RX720D with ESXi 5.5u2, which has 2 Xeon E5-2650 CPUs and 64GB RAM.

    The host is running 3 servers:
    1. SBS 2011 (running Exchange)
    2. Srv2008 (application server running one SQL instance)
    3. Srv2008R2 (RDS for approx. 14 users)

    So: 2 physical CPUs with 8 cores each.
    Any input on how to assign CPUs?

    I was thinking along the lines of:
    SBS2011 = 1 CPU, 2 cores
    Srv2008 SQLE = 1 CPU, 4 cores
    Srv2008 RDS = 2 CPUs, 4 cores
    How does that sound?

  8. I am setting up a VMware ESXi 5.5 server to host Win7 VMs. It is critical that these Win7 VMs run at peak performance immediately, meaning I have no testing time. If I have an 8-core system, I do not want any of the VMs to experience “wait time”. What is the maximum number of VMs I can create? I have read that I can get the best performance if I give each Win7 VM 2 cores, so it seems that I could create 4 Win7 VMs, but doesn’t that leave no processor power for the host? – thanks, Sam

