First is the memory controller. The code was submitted by Balbir Singh (of IBM), and is mostly based on earlier work by Pavel Emelyanov (of OpenVZ), Balbir, and some others. It uses the "control groups" (cgroups) framework introduced earlier by Paul Menage of Google. Basically, the memory controller (in its current form) lets one control the amount of physical memory used by a group of processes (i.e. by a container). This is a vital feature for containers: since all containers share the same RAM, for them to co-exist nicely none of them should be allowed to use too much memory. Now a system administrator can set per-container memory limits. The whole technology is known as User Beancounters (or just Beancounters) in the OpenVZ world -- it's just that we have more parameters (and thus knobs and dials) in OpenVZ.
But, in a sense, the memory controller that is now in mainstream is better than the one we have in OpenVZ. The mainstream one limits the amount of physical (RSS) pages used by a container, and if this limit is exceeded, pages are swapped out. Well, in fact, they are not swapped out -- that would cause unnecessary disk I/O when it's only a container limit being hit and there is otherwise enough memory on the system. Instead, the container's memory pages are put into the swap cache. In case of a global memory shortage this swap cache is freed, i.e. actually swapped out to disk. To summarize, this cool feature allows having containers with strict memory limits while keeping decent overall system behavior.
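To give an idea of what this looks like from userspace, here is a minimal sketch (my own illustration, not OpenVZ code) of putting a process under a memory limit via the cgroup filesystem. It assumes the memory controller is already mounted at /cgroups (e.g. with "mount -t cgroup -o memory none /cgroups"); the group name "demo" and the 128 MB limit are arbitrary, and root privileges are needed.

/* create a memory cgroup, limit it to 128 MB, and run a shell inside it */
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

static int write_file(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        if (fputs(val, f) == EOF) {
                fclose(f);
                return -1;
        }
        return fclose(f);
}

int main(void)
{
        char pid[32];

        /* Create a new memory cgroup -- a "container" from the
         * controller's point of view. */
        if (mkdir("/cgroups/demo", 0755) && errno != EEXIST) {
                perror("mkdir");
                return 1;
        }

        /* Cap the group's physical memory usage at 128 MB. */
        if (write_file("/cgroups/demo/memory.limit_in_bytes", "134217728")) {
                perror("memory.limit_in_bytes");
                return 1;
        }

        /* Put the current process (and its future children) into the group. */
        snprintf(pid, sizeof(pid), "%d", getpid());
        if (write_file("/cgroups/demo/tasks", pid)) {
                perror("tasks");
                return 1;
        }

        /* Everything started from this shell is charged against the limit. */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");
        return 1;
}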
The second feature (and thus the second step) is network namespaces -- the ability for containers to have their own network stacks. This is still a work in progress. The first bits and pieces appeared in 2.6.24. A lot of network namespaces code (more than 200 changesets, I guess) appeared in 2.6.25, and despite my earlier predictions it's still not the end of the journey. A lot more code (also about 200 changesets) is now in the net-2.6.26 tree (the networking subsystem branch maintained by David Miller), scheduled for inclusion in Linux 2.6.26. At the risk of being wrong for the second time, I still think that in Linux 2.6.26 we will have a fairly complete implementation of net namespaces. A short description of what we will try to have in 2.6.26 as far as networking is concerned is here.
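For the curious, here is a tiny illustration (again my own sketch, not part of the patchset) of what a private network stack means in practice: after unshare(CLONE_NEWNET) the process only sees a fresh loopback device. It needs root and a kernel built with network namespace support.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        /* Before: the process sees all of the host's network interfaces. */
        printf("Interfaces in the original namespace:\n");
        fflush(stdout);
        system("cat /proc/net/dev");

        /* Detach into a brand-new network namespace. */
        if (unshare(CLONE_NEWNET) < 0) {
                perror("unshare(CLONE_NEWNET)");
                return 1;
        }

        /* After: only a fresh (and still down) loopback device is visible. */
        printf("\nInterfaces in the new namespace:\n");
        fflush(stdout);
        system("cat /proc/net/dev");
        return 0;
}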
Speaking of 2.6.26 -- it looks like it will be our next base kernel. We are currently maintaining a 2.6.24-based development branch (which is also used for the OpenVZ-enabled Ubuntu Hardy Heron kernels), and will start porting the OpenVZ patchset to 2.6.26 soon.
Finally, here's a graph showing how many changesets our team has contributed per kernel release. No comment needed, I guess.
Also, here's the list of the top 10 contributors to Linux 2.6.25. Our company is #7.
Top changeset contributors by employer:
  (None)                      1188  (9.3%)
  Red Hat                     1181  (9.3%)
  Novell                       817  (6.4%)
  IBM                          703  (5.5%)
  Intel                        472  (3.7%)
  Bartlomiej Zolnierkiewicz    307  (2.4%)
  Parallels                    278  (2.2%)  <---
  Oracle                       255  (2.0%)
  bunk@kernel.org              227  (1.8%)
  (Academia)                   225  (1.8%)
Pavel Emelyanov has made it into the top 10 developers list.
Developers with the most changesets:
  Bartlomiej Zolnierkiewicz    307  (2.4%)
  Adrian Bunk                  234  (1.8%)
  Patrick McHardy              225  (1.8%)
  Ingo Molnar                  213  (1.7%)
  Paul Mundt                   207  (1.6%)
  Greg Kroah-Hartman           172  (1.4%)
  Thomas Gleixner              166  (1.3%)
  Jesper Nilsson               166  (1.3%)
  Pavel Emelyanov              160  (1.3%)  <---
  Harvey Harrison              150  (1.2%)
Another prominent OpenVZ guy is Denis Lunev, who is number 26 on the list with 87 changesets. The full list of people who contributed to this release is more than 1200 lines long.
The home of the new project is lbvm.sourceforge.net -- if you're an OpenVZ user, check it out! It gets really interesting once you find yourself running multiple servers with OpenVZ.
Quoting the project site:
Virtualization technologies are used to enhance the hardware load on server systems and allow a more efficient use of those servers. Nowadays, there is a wide range of existing HA solutions which guarantee the availability of all virtual machines. There are just a few commercial solutions available for allocating virtual machines during their operation time to optimize the actual server workload (e.g. VMware DRS, VirtualIron LiveCapacity). A generic solution for all kinds of virtualization technologies is non existent today. [...]
The LBVM consists of several scripts that allow to load balance virtual machines (currently preconfigured: Xen and OpenVZ) among physical servers - the algorithm is fully configurable. LBVM uses the Red Hat Cluster Suite to provide high availability and rgmanager (part of the Red Hat Cluster Suite) to perform the actual migration. Developed cluster scripts for Xen and OpenVZ allow the rgmanager to perform live migrations with zero-downtime to provide maximum reliability and uptime. The load balancing algorithm uses preconfigured resources (cpu, mem, load; fully configurable) to decide when and where to move a virtual machine. Reports and migrations are logged and also available in human-readable format.
The initial release was done by Roland Dworschak, Sabine Huber, Alexander Leitner, and Joachim Pöttinger - all students at the Upper Austria University of Applied Sciences, Hagenberg Campus (course of studies: Secure Information Systems).
Better late than never: here are my impressions of SCALE and the Florida Linux Show, which we (my colleague Andrey Mirkin, an OpenVZ kernel developer, and I) went to in February.
Back in 2006 I was a speaker at SCALE4x, so I can compare and say that SCALE is getting bigger and better! This time it ran for three days, with three parallel conference tracks and about 80 booths, one of which was OpenVZ's.
The booth traffic was moderate to high; we were busy explaining OpenVZ to people, handing out booklets and live CDs, and burning more CDs. For the first time we used LightScribe to put a nice image on the CDs, and I can say it works pretty well, but it takes about 15 minutes for the image to be "scribed" (and about the same time for the actual data).
We also gave a talk on live migration, which was quite technical and interesting. The talk was mostly delivered by Andrey, and it was the first time he gave a talk in English. I hope the SCALE people will upload the audio/video from the talk -- it should be interesting enough. Unfortunately we were not able to listen to any other talks -- that's the price of having your own booth.
The last day of the show was Sunday, and overnight we flew to the other coast to deliver an OpenVZ talk to the participants of the Florida Linux Show. FLS is (as yet, I hope) much smaller than SCALE, and it is one day only, but the organisation is about the same: an expo floor and conference tracks. My talk was attended by about 50 people, of whom about 15 asked good questions.
I managed to show live migration of a container running the pacman xscreensaver, but it was interrupted when I lifted the second notebook to show it to the audience -- apparently both the power supply and the battery got disconnected, so it suddenly switched off. I continued with the slides while Andrey fixed the notebook, and then showed the demo again (without touching the second notebook this time). This "demo effect" happens from time to time, and the more people attend, the higher the probability that it will strike. Anyway, all's well that ends well.
In the evening we had dinner with some FLS participants, including Jon "maddog" Hall, who was the keynote speaker, and Dan Trevino, a member of the local Ubuntu community who helps us with OpenVZ/Ubuntu integration.
The next day we were in New York and met with Vasily Tarasov, our colleague who is now a graduate student at Stony Brook University. He is working on various kernel-related projects and may help us a bit with OpenVZ.
There I was interviewed by Toon Vanagt from virtualization.com about the wiki article HA cluster with DRBD and Heartbeat and some background on why that howto was written. You can read the whole interview at virtualization.com or watch the video right here:
OpenVZ is (and has been, for the past few years) a good contributor to the mainline kernel. But in this release we are doing even better than before: 215 patches written by OpenVZ people went into the 2.6.24 kernel during the period of its development (i.e. the last 3½ months). This is about 2% of all the patches merged into 2.6.24.
Most of those patches are for PID namespaces, preliminary support for net namespaces (i.e. network stack virtualization for containers), and various bug fixes.
The PID namespace is now almost complete and quite usable, although it's marked as "experimental" for now. For a technical description of the feature, see this lwn.net article.
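If you want to play with it without reading the article first, here is a small sketch (mine, not taken from the patches) showing the basic effect: a child created with CLONE_NEWPID sees itself as PID 1, while the parent still sees its ordinary PID. Root privileges and a 2.6.24 kernel with the experimental option enabled are assumed.

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[64 * 1024];

static int child(void *arg)
{
        (void)arg;
        /* Inside the new namespace the child is the "init" process. */
        printf("child sees itself as pid %d\n", getpid());
        return 0;
}

int main(void)
{
        /* The stack grows down on x86, so pass the top of the buffer. */
        pid_t pid = clone(child, child_stack + sizeof(child_stack),
                          CLONE_NEWPID | SIGCHLD, NULL);

        if (pid < 0) {
                perror("clone");
                return 1;
        }
        /* Outside, the same child has an ordinary, globally visible PID. */
        printf("parent sees the child as pid %d\n", (int)pid);
        waitpid(pid, NULL, 0);
        return 0;
}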
Net namespace support is still a work in progress; a lot of patches are already stacked in Dave Miller's net-2.6.25 tree, waiting for inclusion into the 2.6.25 mainline kernel. The feature is expected to be complete and usable by the 2.6.25 kernel release, with IPv6 support coming a bit later.
Jon Corbet of LWN.net also wrote about the 2.6.24 kernel statistics (back when it was still at the RC stage) here. Note that OpenVZ's Pavel Emelyanov is number 5 in the "Most active developers" (by changesets) list, with 146 patches contributed.
I like the way he summarizes what OpenVZ is: "a really fantastic lightweight Linux virtualization technology that doesn't have the performance overhead of full OS virtualization systems".
Gentoo templates and other OpenVZ-related stuff from Daniel can be downloaded from www.funtoo.org/linux/openvz/
Cell (Wikipedia article) is a very interesting microprocessor from IBM, based on the 64-bit Power architecture, "but with unique features directed toward distributed processing and media-rich applications". Essentially, it is a hybrid CPU, combining a traditional Power core and eight specialized Synergistic Processing Elements (SPEs). Making the OpenVZ kernel and tools SPE-aware is the main topic of the work being done.
Read the article: part 1, part 2, and wait for part 3.
Now all I need is a PlayStation 3 (which features the Cell BE).
In addition to the information about HA clustering with OpenVZ that is currently available in the OpenVZ wiki article HA cluster with DRBD and Heartbeat, I'll show how the checkpointing feature of OpenVZ can be used for a "live switchover" cluster feature. Thomas Kappelmüller (who attends the Upper Austria University of Applied Sciences, Hagenberg Campus - Computer and Media Security (B.Sc.)) has written some scripts for this purpose (we will add the scripts and some background information about using them to the HA cluster with DRBD and Heartbeat article shortly).
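I won't spoil the talk, but the core idea can be sketched in a few lines: checkpoint the running container into a dump file that lives on the shared (DRBD) storage, then restore it on the standby node. The snippet below is only a hypothetical illustration of that sequence -- the container ID 101 and the dump path are made up, and in a real cluster the restore step is triggered on the other node by the cluster scripts, not run locally.

#include <stdio.h>
#include <stdlib.h>

/* Both values are made up for the example. */
#define CTID     "101"
#define DUMPFILE "/shared/dump.101"   /* lives on the DRBD-backed storage */

static int run(const char *cmd)
{
        printf("+ %s\n", cmd);
        return system(cmd);
}

int main(void)
{
        /* Freeze the container and write its complete state to the dump file. */
        if (run("vzctl chkpnt " CTID " --dumpfile " DUMPFILE))
                return 1;

        /* On the standby node (shown locally here for brevity): bring the
         * container back to life exactly where it was -- processes, open
         * files, network connections and all. */
        if (run("vzctl restore " CTID " --dumpfile " DUMPFILE))
                return 1;

        return 0;
}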
And another brand-new perspective on clustering will be presented: some students of Secure Information Systems (M.Sc.) - also at the Upper Austria University of Applied Sciences, Hagenberg Campus - have worked on "LBVM" (load balancing of virtual machines). LBVM allows distributing virtual machines among the physical servers of a predefined cluster. With the help of load balancing algorithms it is possible to automatically live migrate VEs. Their solution uses a general approach which allows the use of different virtualization technologies (initially they support OpenVZ and Xen).
Of course there are a lot of other interesting talks at the conference too, so it's really worth attending. And it would be really nice to talk to some other OpenVZ users there ;-)
Absolutely nothing. We will keep doing what we do: providing new releases, fixing bugs, supporting our users, and staying focused on integrating container virtualization technology into mainstream Linux.
Separate from the company name change, you'll see us slowly cease using the terms "VE" (virtual environment) and "OS-level virtualization". The terms commonly used in the industry are "containers" and "container-type virtualization" -- and we are already using those.
Remember: a VPS is a VE is a container.
