

Thanks for your response - much appreciated. Your colleague, Mark Bolinsky, seems to have a different perspective on this question. He writes the following:

"In AWS a given EC2 instance type's reported number of vCPU is an individual thread on the processor as a "logical processor".  The OS (and Ensemble/HealthShare as well for that matter) will only see a given instance's number of vCPUs, and the OS will only schedule jobs on those as it sees them. Ensemble and HealthShare are process based - not thread based, so an instance type of m4.large with 4 vCPUs will mean only 4 jobs in parallel will execute at a time."

This is the link for that post: I was wondering if you and Mark are actually saying the same thing, but perhaps I'm missing something.

If I understand Mark's point, the number of vCPUs is exposed to the OS as the number of logical processors, and can therefore be taken into account by the scheduler (e.g. 4 vCPUs could potentially allow 4 processes to be scheduled to run simultaneously). But if I understand your point, the number of processes guaranteed to execute simultaneously (assuming they are not waiting on non-CPU resources) would be one process per physical core (i.e. vCPUs/2). Thanks to efficiencies introduced by HT, the processor might be able to execute additional threads (i.e. processes) on the physical cores, provided they are not contending for the same execution units or other processor resources (cache, etc.) at the same instant.

So my reduction of all this would be: 8 vCPUs would guarantee parallel execution of at least 4 processes, but there might be some additional premium (though in total < # vCPUs) depending on how well the processor is able to optimize thread execution. Any processes above vCPUs/2 would therefore have to be considered "gravy", and probably very application-dependent. The OS would see 8 CPUs and try to schedule 8 processes, but it would be impossible for 8 processes to run simultaneously on 4 cores. Is this a fair summary?
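Just to make my summary concrete, here is the arithmetic I'm describing as a small sketch (the function name and the numbers are mine; the 2-threads-per-core mapping is what AWS documents for most instance families, t2 and m3.medium excepted):

```python
def parallelism_bounds(vcpus: int, threads_per_core: int = 2):
    """Return (guaranteed, best_case) counts of simultaneously executing processes.

    guaranteed: one process per physical core (vCPUs / threads_per_core)
    best_case:  one process per vCPU, if hyperthreading overlaps perfectly
    """
    physical_cores = vcpus // threads_per_core
    guaranteed = physical_cores   # what the cores can always sustain
    best_case = vcpus             # the "gravy" ceiling, application-dependent
    return guaranteed, best_case

print(parallelism_bounds(8))  # (4, 8): at least 4 in parallel, at most 8
```

The gap between the two numbers is exactly the HT "premium" I'm asking about.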




Thanks for your reply - much appreciated. I actually posted a similar question as a comment on a VMware sizing post from your colleague, Murray Oldfield. I do recognize that "mileage will vary" and there's probably no substitute for actual benchmarking. However, Murray made the following observation regarding EC2 sizing:

"If you know your sizing on VMware or bare metal with hyperthreading enabled and you usually need 4 cores (with hyperthreading) - I would start with sizing for 8 EC2 vCPUs. Of course you will have to test this before going into production."

Here is the link to Murray's post:  You'll find our discussion thread in the comments at the very end of the post. I just want to make sure that you two are not saying different things. My impression from Murray's article is that you can never really have more processes executing at exactly the same instant than the number of physical cores. That's tricky with EC2, because you never know the number of physical cores anyway, only vCPUs. AWS does state that, except for the t2 family and m3.medium, 2 vCPUs = 1 physical core (each vCPU being a hyperthread). Based on Murray's article and his comments, that would lead me to believe that, except for t2 and m3.medium, you can only have one OS process executing at a time for every 2 vCPUs. Am I missing something? I suppose this really revolves around an understanding of how Xeon hyperthreading works more than EC2 topology itself (which I admit I don't have).
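For what it's worth, Murray's rule of thumb reduces to simple arithmetic; here's how I read it (function name is mine, and the 2:1 vCPU-to-core mapping is the AWS default outside t2/m3.medium):

```python
def ec2_vcpus_for(cores_with_ht: int, threads_per_core: int = 2) -> int:
    """Starting-point EC2 vCPU count for a workload sized at `cores_with_ht`
    cores on VMware/bare metal with hyperthreading enabled, per Murray's
    rule of thumb: each physical core maps to threads_per_core vCPUs."""
    return cores_with_ht * threads_per_core

print(ec2_vcpus_for(4))  # 8 vCPUs as a starting point, then benchmark
```

If that's the right reading, the rule and the one-process-per-two-vCPUs claim are the same statement seen from opposite directions.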

We're also very interested in any benchmarking data you have showing differences between dedicated and default (shared-tenancy) instances. (I think you used the terms reserved vs. on-demand, but those are just AWS billing options, not tenancy models.)


