I wrote about a Xen/OpenVZ comparison last month here -- the one that was done by a German student as his thesis. I am really glad to see another third-party evaluation. (I would never ever trust any comparison done by or sponsored by vendors.) I also like the level of detail provided. Here are quotes from the last section:
For all the configurations and workloads we have tested, Xen incurs higher virtualization overhead than OpenVZ does, resulting in larger difference in application performance when compared to the base Linux case. <...> For all the cases tested, the virtualization overhead observed in OpenVZ is limited, and can be neglected in many scenarios.
For all configurations, the Web tier CPU consumption for Xen is roughly twice that of the base system or OpenVZ.
Does that mean OpenVZ is better for scenarios such as Linux server consolidation? Yes, much better. Does that mean Xen is no good? No, not really. Xen has its applications as well (say, when you also want to run Windows on the same piece of hardware), and in fact OpenVZ and Xen can nicely and happily co-exist.
From my experience working as the Virtuozzo QA team leader (a few years ago), doing all sorts of performance and stress tests for the Virtuozzo kernel, I know that there are a great many factors influencing the results. Consider this: if you happen to run your test while cron is running its daily jobs -- slocate's database update, log rotation routines and the like -- your performance could be 10 to 50 per cent worse. That is a very simple and obvious example, with an obvious fix: just disable the cron daemon before you do your testing (see the sketch below). A trickier example: networking performance can increase by 10 to 15% if you bind the NIC interrupt to a single CPU on a two- or four-way SMP box.
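To make that concrete, here is a minimal sketch (in Python, run as root) of those two "quiet the box" steps: check that cron is not running, and bind a NIC interrupt to one CPU through /proc/irq/&lt;N&gt;/smp_affinity. The IRQ number and CPU mask below are placeholders -- look up your NIC's IRQ in /proc/interrupts before using anything like this.

    #!/usr/bin/env python
    # Minimal benchmarking-hygiene sketch: make sure cron is stopped and
    # pin a NIC interrupt to a single CPU. NIC_IRQ is hypothetical.
    import os
    import sys

    NIC_IRQ = 16     # placeholder: find your NIC's IRQ in /proc/interrupts
    CPU_MASK = "1"   # hex CPU bitmask: CPU0 only

    def cron_is_running():
        """Return True if a process named cron/crond is found in /proc."""
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open("/proc/%s/cmdline" % pid) as f:
                    cmd = f.read().split("\0")[0]
            except IOError:
                continue  # the process exited while we were scanning
            if os.path.basename(cmd) in ("cron", "crond"):
                return True
        return False

    if cron_is_running():
        sys.exit("cron is still running -- stop it before benchmarking, "
                 "or daily jobs may skew the numbers")

    # Keep the NIC interrupt from bouncing between CPUs during the run.
    with open("/proc/irq/%d/smp_affinity" % NIC_IRQ, "w") as f:
        f.write(CPU_MASK)
    print("IRQ %d bound to CPU mask %s" % (NIC_IRQ, CPU_MASK))

The same pinning can of course be done with a plain echo into the same /proc file; the point is simply to keep the interrupt handling on one CPU while you measure.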
So, my suggestion is to take those benchmarks and comparisons with a grain of salt. Better yet, do your own comparison using your hardware and your workloads -- and make sure you understand all the results. If something is slow, find out why. If something is faster than it should be, find out why as well -- and what you did wrong. Perhaps this part -- results analysis -- is the most complex part of the whole performance testing field.
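As a starting point for "do your own comparison", here is a minimal sketch of a timing harness: run the same workload several times in each environment you care about (base Linux, an OpenVZ container, a Xen guest) and look at the spread, not just one number. The dd workload and run count are only placeholders -- substitute your own workload.

    #!/usr/bin/env python
    # Minimal sketch: time the same command several times and print the spread.
    import os
    import subprocess
    import time

    CMD = ["dd", "if=/dev/zero", "of=/tmp/testfile",
           "bs=1M", "count=256"]   # placeholder workload -- use your own
    RUNS = 5

    devnull = open(os.devnull, "w")
    times = []
    for i in range(RUNS):
        start = time.time()
        subprocess.call(CMD, stdout=devnull, stderr=devnull)
        times.append(time.time() - start)
        print("run %d: %.2f s" % (i + 1, times[-1]))

    times.sort()
    print("best %.2f s, median %.2f s, worst %.2f s"
          % (times[0], times[len(times) // 2], times[-1]))

A large gap between the best and worst runs usually means something else interfered -- cron, cache state, a neighbouring guest -- which is exactly the "find out why" part.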
Having said that, I'd like to point out a Xen vs. OpenVZ comparison done by Björn Gross-Hohnacker, a German student whom I met at last year's LinuxWorld Cologne. Björn graciously allowed us to publish his results, so we have translated part of them into English.
Here is the bottom-line summary: IPC and disk I/O performance is better (or much better) in OpenVZ than in Xen, CPU-intensive tasks perform about the same on both, and networking is a bit better in OpenVZ. Conclusion: for homogeneous (i.e. Linux-only) environments, OpenVZ is way better -- as it was designed to be.
You are taking this with a grain of salt, aren't you? ;)
Comments
Do you still stand by your opinions above now in 2016?…