While I am writing this, people are discussing the future of containers in the Linux kernel at the containers mini-summit, which is happening in Ottawa at the moment. You can check some rough notes from the event here. Three guys from the OpenVZ team are there: Pavel Emelyanov, Denis Lunev, and Andrey Mirkin.
If you are attending the Linux Symposium in Ottawa, note that this Friday, the 25th, Andrey Mirkin will give a talk on container checkpointing and live migration (12:00, Rockhopper room). It's going to be an interesting talk; do not miss it.
Also, this Wednesday, the 23rd, Balbir Singh will lead a BoF on the memory controller (17:45, Fiordland room). The memory controller is quite important for containers, and while some parts of it are already in the mainline kernel, there is still a lot to be discussed and developed in this area. You can think of this BoF as an extension of the containers mini-summit.


Comments
Almost all virtualization methods do quite well in all of the tests, with the exception of unaccelerated QEMU... which is to be expected. Also, they don't seem to consider scalability and density, just raw benchmark measures... and I'm not sure how relevant those are in real-world deployments.
Once containers make it into mainline, in whatever completed form that might take, Linux-VServer and OpenVZ comparisons will be pretty moot, I think.
I guess I can't really complain too much, because it is quite a task to compare all of the virtualization products in a fair way. I do think that OpenVZ was at a disadvantage in their tests, though.
http://ols.fedoraproject.org/OLS/Reprints-2008/camargos-reprint.pdf
As everyone should know, OpenVZ and Linux-VServer have a number of resource parameters. By default, OpenVZ's are all on and Linux-VServer's are all off. The only resource the paper mentions is memory allocation. What about the rest? Differences in these settings can make for vastly different results.
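To give an idea of what "all on by default" means on the OpenVZ side, here is a minimal Python sketch (not from the paper) that lists, from inside a container, every beancounter parameter whose failure counter is non-zero by reading /proc/user_beancounters. The column layout assumed here is the usual "resource held maxheld barrier limit failcnt" one from 2.6-era kernels; adjust the parsing if your kernel prints something different.

```python
#!/usr/bin/env python
"""Rough sketch: report OpenVZ beancounters that have hit their limits.

Reads /proc/user_beancounters (available inside an OpenVZ container) and
prints every parameter with a non-zero fail counter. Assumes the usual
"resource held maxheld barrier limit failcnt" column layout.
"""

def read_beancounters(path="/proc/user_beancounters"):
    counters = {}
    with open(path) as f:
        for line in f:
            fields = line.split()
            # Skip the "Version:" line and the column header line.
            if not fields or fields[0].startswith("Version") or "resource" in fields:
                continue
            # The first data row carries a leading "CTID:" field, so always
            # take the last six columns, which are the same for every row.
            name, held, maxheld, barrier, limit, failcnt = fields[-6:]
            counters[name] = {
                "held": int(held),
                "maxheld": int(maxheld),
                "barrier": int(barrier),
                "limit": int(limit),
                "failcnt": int(failcnt),
            }
    return counters

if __name__ == "__main__":
    for name, c in read_beancounters().items():
        if c["failcnt"] > 0:
            print("%s: failcnt=%d (barrier=%d, limit=%d)"
                  % (name, c["failcnt"], c["barrier"], c["limit"]))
```

Running something like this after a benchmark quickly shows whether any resource counter was hit during the run, which is exactly the kind of configuration difference that could skew a comparison like the one in the paper.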
For me, this paper raises more questions than it answers, but I acknowledge that the task they took on was rather difficult. How does one pick which kernel to use for all of those virtualization methods? How does one pick a host node distro? Should they have used the exact same hardware in all tests but the configurations / kernels / distros known to work well for each virtualization method? I think that would have been more of a real-world test, because I don't think many people are running OpenVZ on Ubuntu 7.10 systems with a 2.6.22-based kernel. But if they had used different kernel versions and distros... that would have raised a lot of questions too.
It should also be noted that when Linux-VServer's results were inconsistent with what they were expecting, they made a considerable effort to look into it and find ways to explain or correct the inconsistency. I wish they had done that with OpenVZ, but I can say from experience that the Linux-VServer developers are much easier to communicate with in real time (via their IRC channel) than the OpenVZ developers, who are mostly invisible except for bugzilla interactions and mailing lists.
I'm sure this is just one of the early tests among many more to come in the future.
I can only hope the results of this paper are studied further and that, if any real performance issues are found in OpenVZ, they are addressed. My personal experience with OpenVZ has shown it to perform almost exactly like the native system, which is why I'm so puzzled by the results. Does the 2.6.22 release suck that badly, or was it a bad configuration, or are their results valid and reproducible?
One thing that amazed me in the paper: they say that in one test Linux-VServer performs better than "native" Linux. This is a clear hint that they did it all wrong. I mean, this could either be just a variation in results or measurements (and in this case it is necessary to find out how big the deviation is), or Linux-VServer could somehow really perform better (and in this case it is necessary to find out exactly why). If neither is done, I do not trust the results.
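To illustrate the point about deviation, here is a small, purely hypothetical Python 3 sketch. The numbers are made up for illustration only; the idea is simply that a "Linux-VServer beats native" result means nothing until the difference between the means is compared against the run-to-run spread.

```python
# Hypothetical benchmark timings (seconds per run); smaller is better.
# These values are invented purely to show the calculation.
import statistics

native  = [812.4, 809.1, 815.0, 810.7, 813.3]
vserver = [808.9, 811.2, 807.5, 812.0, 809.8]

for label, runs in (("native", native), ("vserver", vserver)):
    print("%-8s mean=%.1f stdev=%.1f"
          % (label, statistics.mean(runs), statistics.stdev(runs)))

# If the gap between the means is within the run-to-run noise,
# the "better than native" result is not a meaningful finding.
diff = statistics.mean(native) - statistics.mean(vserver)
noise = max(statistics.stdev(native), statistics.stdev(vserver))
print("difference %.1f vs. run-to-run noise ~%.1f" % (diff, noise))
```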
As for OpenVZ, using 2.6.22 was a bad idea (unless you want to show how bad it is).