You know I love to write about performance comparisons, right?
I just came across one I had seen some time ago and then forgot about; it is now linked from the Wikipedia article about OpenVZ. The paper (presented at OLS'08) compares the performance of OpenVZ, Linux-VServer, Xen, KVM and QEMU with that of non-virtualized Linux. The results are shocking! OpenVZ is twice as slow as the reference system when running bzip2 -9 on an ISO image! Even better, on a dbench test OpenVZ delivered about 13% of the reference system's performance (almost eight times slower), while Linux-VServer delivered about 98%.
Well guys, I want to tell you just one thing. If someone offers to sell you a car for 13% of the usual price -- don't buy it; it's a scam and the seller is not to be trusted. If someone tells you OpenVZ is 2 (or 8) times slower than a non-virtualized system -- don't buy that either!
I mean, I do know how complicated it is to get sane test results. I spent about a year leading the SWsoft QA team, mostly testing Linux kernels and trying to make sure test results were sane and reproducible (not dependent on the phase of the moon, etc.). There are lots and lots of factors involved. But the main idea is simple: if the results don't look plausible, if you can't explain them, dig deeper and find out why. Once you do, you will know how to exclude all the bad factors and conduct a proper test.
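One cheap way to spot implausible results is simply to repeat the run and look at the spread. Here is a minimal sketch of such a loop; the workload (gzip -9 over 10 MB of zeroes) is a stand-in of my own invention -- substitute your real job, e.g. bzip2 -9 of an ISO image:

```shell
#!/bin/sh
# Repeat a benchmark a few times and print per-run wall time.
# If the numbers vary wildly between runs, something (cron, caches,
# other load) is interfering -- dig deeper before trusting any of them.

RUNS=3
i=0
while [ "$i" -lt "$RUNS" ]; do
    i=$((i + 1))
    sync                                    # start each run from a similar state
    start=$(date +%s)
    head -c 10000000 /dev/zero | gzip -9 > /dev/null   # stand-in workload
    end=$(date +%s)
    echo "run $i took $((end - start)) s"
done
```

Whole-second resolution is crude; for a real comparison use a workload long enough that a one-second error doesn't matter.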
I wrote about a Xen/OpenVZ comparison last month here -- the one done by a German student as his thesis. I am really glad to see another third-party evaluation. (I would never ever trust any comparison done by or sponsored by vendors.) I also like the level of detail provided. Here are quotes from the last section:
For all the configurations and workloads we have tested, Xen incurs higher virtualization overhead than OpenVZ does, resulting in larger difference in application performance when compared to the base Linux case. <...> For all the cases tested, the virtualization overhead observed in OpenVZ is limited, and can be neglected in many scenarios.
For all configurations, the Web tier CPU consumption for Xen is roughly twice that of the base system or OpenVZ.
Does that mean OpenVZ is better for scenarios such as Linux server consolidation? Yes, much better. Does that mean Xen is no good? No, not really. Xen has its applications as well (say, when you also want to run Windows on the same piece of hardware), and in fact OpenVZ and Xen can co-exist nicely and happily.
From my experience working as the Virtuozzo QA team leader (a few years ago), running all sorts of performance and stress tests on the Virtuozzo kernel, I know that there are very many factors influencing the results. Consider this: if you happen to run your test while cron is running its daily jobs -- slocate's database update, log rotation routines, etc. -- your performance could be 10 to 50 per cent lower. That was a simple and obvious example: just disable the cron daemon before you do your testing. A trickier example: networking performance can increase by 10 to 15% if you bind the NIC interrupt to a single CPU on a two- or four-way SMP box.
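The interrupt-binding trick above can be sketched like this. The IRQ number 19 is a made-up example -- look up the real one for your NIC in /proc/interrupts -- and the actual writes need root, so they are left commented out:

```shell
#!/bin/sh
# /proc/irq/N/smp_affinity takes a hex bitmask of CPUs allowed to
# service IRQ N: bit 0 is CPU0, bit 1 is CPU1, and so on.

CPU=0                                   # pin to CPU0
MASK=$(printf '%x' $((1 << CPU)))       # CPU0 -> 1, CPU1 -> 2, CPU3 -> 8
echo "would write mask $MASK to /proc/irq/19/smp_affinity"

# As root, the actual steps would be:
#   /etc/init.d/crond stop                    # keep cron jobs out of the run
#   echo "$MASK" > /proc/irq/19/smp_affinity  # pin the NIC interrupt
```

Remember to restore the affinity mask and restart cron once the testing is done.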
So, my suggestion is to take those benchmarks and comparisons with a grain of salt. Better yet, do your own comparison using your hardware and your workloads -- and make sure you understand all the results. If something is slow -- find out why. If something is faster than it should be -- find out why, and find out what you did wrong. Perhaps this part -- results analysis -- is the most complex part of performance testing.
Having said that, I'd like to point out a Xen vs. OpenVZ comparison done by a German student, Björn Gross-Hohnacker, whom I met at last year's LinuxWorld Cologne. Björn graciously allowed us to publish his results, so we have translated part of them into English.
Here is the bottom-line summary: IPC and disk I/O performance are better (or much better) under OpenVZ than under Xen, CPU-intensive tasks perform about the same under both, and networking is a bit better under OpenVZ. Conclusion: for homogeneous (i.e. Linux-only) environments, OpenVZ is way better -- as it was designed to be.
You are taking this with a grain of salt, aren't you? ;)