For the last two weeks or so we've been working on vzstats, a way to gather statistics about OpenVZ usage. The system consists of a server, deployed at http://stats.openvz.org/, and clients installed on OpenVZ machines (hardware nodes). It is currently in beta testing, with 71 servers participating at the moment. If you want to participate, read http://openvz.org/vzstats and run yum install vzstats on your OpenVZ boxes.
So far we have some interesting results. We are not sure how representative they are (probably not very, since many more servers need to participate), but they are interesting nevertheless. Let's share a few preliminary findings.
First, it looks like almost no one is using 32-bit host systems anymore. This is reasonable and expected. Indeed, who needs a system limited to 4 GB of RAM nowadays?
Second, many hosts stay on the latest stable RHEL6-based OpenVZ kernel. This is pretty good and above our expectations.
Third, very few run ploop-based containers. We don't understand why. Maybe we should write more about the features you get from ploop, such as instant snapshots and improved live migration.
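To give a taste of what that looks like in practice, here is a sketch of creating a ploop-based container and taking an instant snapshot with vzctl (the container ID 101 and the OS template name are just examples; run as root on an OpenVZ host):

```shell
# Create a container backed by a ploop disk image instead of simfs
# (CT ID 101 and the centos-6 template are placeholders)
vzctl create 101 --layout ploop --ostemplate centos-6-x86_64
vzctl start 101

# Take an instant snapshot of the running container
vzctl snapshot 101 --name before-upgrade

# List snapshots to find the snapshot UUID
vzctl snapshot-list 101

# Roll back later if something goes wrong, using the UUID
# shown by snapshot-list (placeholder below)
vzctl snapshot-switch 101 --id <uuid>
```

Because the snapshot is taken at the block level inside the ploop image, it is near-instant and does not depend on the filesystem inside the container.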


Comments
Because it very well may not be mature enough and we don't want to find out in production.
We got burnt very badly by the initial instabilities of the RHEL6 kernel (which we nevertheless badly needed to upgrade to, because of other bugs in previous vanilla branches that were never going to get fixed), and particularly by its vswap feature, which we promised a customer we would use in their VMs in late 2011 (too early, I now know). We ran every RHEL6 stable update since (30 days without a kernel crash was often considered a success), and only in August 2012 did it become fully stable for us with our workload (another site we manage heavily uses NFS, which was another point of frequent breakage in the kernel). We've reported some issues, some got fixed, and for almost a year now we've been at the point where 100+ days of uptime are achieved predictably, i.e. the kernel is actually *stable* from my perspective. I'm so glad we've finally reached this point, so there is no way we'd use ploop in 2013 I'm afraid... maybe next year :)
Added a node to vzstats, good idea.