To test processor speed, I usually use pystone.py, which counts the number of complex operations performed per second; it is usually reliable. I wonder if I can use it reliably inside a virtual machine (especially VirtualBox) in the guest OS, to test the performance I can expect inside the virtual machine. I do not mean to test the real hardware, but rather the emulated environment.
What I fear is that the notion of a "second" in the virtual machine does not correspond to a real second, which would obviously make the result pointless. Does anyone know how the kernel and applications measure time, and whether it is correctly emulated in virtual machines?
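One quick sanity check you can run yourself (just a sketch, not pystone-specific): compare wall-clock time against the CPU time the guest actually received while doing some busy work. If the two diverge a lot, the guest is being descheduled by the host and a throughput score like pystone's would be skewed.

```python
import time

def busy_loop(n):
    # Simple CPU-bound work, similar in spirit to what pystone exercises.
    total = 0
    for i in range(n):
        total += i * i
    return total

wall_start = time.perf_counter()   # wall-clock ("real second") time
cpu_start = time.process_time()    # CPU time actually granted to this process
busy_loop(2_000_000)
wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start

# On an otherwise idle machine (physical or virtual) these should be close;
# wall much larger than cpu suggests the hypervisor stole time from the guest.
print(f"wall: {wall:.3f}s  cpu: {cpu:.3f}s")
```

This doesn't tell you whether the guest's clock itself drifts, but it does expose the most common distortion: the guest simply not getting the CPU for stretches of real time.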
My question is of course not pystone.py specific. There are a lot of benchmarks to test graphics cards, etc., and I wonder whether the results are reliable inside a virtual machine.
Last edited by olive (2012-07-11 16:33:00)
Sure, why not? I don't know pystone.py at all but, as far as I know, a second in a VM is still a real second. Heck, you can use NTP in a VM if you really want, but the host time should be enough. Of course, there are a lot of other factors involved with VMs ... how oversold is the system, how much RAM and CPU is available RIGHT NOW versus what is typically available, etc, etc.
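If you want to see how the guest kernel is keeping time, Linux exposes the active clock source through sysfs. A paravirtual source like "kvm-clock" means the hypervisor is feeding the guest its time; "tsc" means the guest is reading the (virtualized) CPU counter directly. Rough sketch, Linux-only path, returns None elsewhere:

```python
from pathlib import Path

# Standard Linux sysfs location for the kernel's active clock source.
CLOCKSOURCE = Path("/sys/devices/system/clocksource/clocksource0/current_clocksource")

def guest_clocksource():
    # Typical values: "tsc", "hpet", "kvm-clock", "acpi_pm".
    try:
        return CLOCKSOURCE.read_text().strip()
    except OSError:
        return None  # not Linux, or sysfs unavailable

print(guest_clocksource())
```

Either way the guest's second is meant to track a real second; the bigger question is whether the benchmark got the CPU for all of those seconds.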
If it's a reasonable benchmark, it should measure available resources, which should be valid with either VMs or physical machines ... if not, the cloud guys are just blowing a bunch of smoke up our ...nevermind.