One of the quotes there is from Werner Fischer; let me reproduce the whole paragraph here:
Werner Fischer in Austria has done development work with OpenVZ and high availability clustering, which he says, “makes it possible to start a virtual machine in seconds after a failover” within the information technology infrastructure. Werner Fischer is a developer at Thomas-Krenn.AG. He recently presented a paper on the subject at the Linux Tag conference in Germany (on May 6).
Again, for those who like pictures, here is a photo taken during Werner's presentation:
(800x600 JPEG, 41K)
It really amazes me how big projects like Wikipedia, built entirely by contributors, keep going, providing excellent content and being so useful. Then again, it amazes me no more than Linux or the whole free software movement does, and I have to admit: we are living in really interesting times!
And so he did. We showed Andrew what OpenVZ is, what it consists of, and what it can do, including our cool "live migration" demo. Then Kirill Korotaev (our OpenVZ kernel team leader) and Andrew moved on to the kernel stuff, showing some lines of code to each other and discussing them. We also discussed the possibility of merging some of the OpenVZ code, and how best to do that.
For those of you who like visuals, here is a photo of (from left to right) Kirill Korotaev, Andrew Morton, and me. (800x670 JPG, 73K)
The next day, Andrew gave a keynote about the Linux 2.6 development process. The funny thing was that part of the presentation explained why such big projects can't be easily merged. Among the reasons cited were out-of-tree development, the length and complexity of patches, the lack of people with the knowledge to review the code, and so on. That doesn't mean OpenVZ can't be merged — it means the process is not going to be easy or fast.
And the story continued just today. Serge Hallyn from IBM sent another set of OS virtualization patches, and Andrew replied with some general thoughts and a request for discussion of how we should proceed with merging some sort of OS-level virtualization. Surely this is not the first round, but as more people become actively involved, the chances of integrating OpenVZ code into the Linux kernel keep getting higher.
SWsoft, a sponsor of the OpenVZ project and a leader in hosting automation solutions, is holding a hosting summit. If you use (or want to use) OpenVZ for hosting, consider visiting this event, which will take place in Northern Virginia/D.C. (very close to Washington Dulles airport) from May 30th to June 1st. More info...
The Hyatt Dulles hotel, where the summit will take place, offers a discount to attendees: the price is $149/night if you reserve before May 22nd. Just mention SWsoft when registering at the hotel.
</shameless-plug>
Kirill Korotaev (a.k.a. dev), our kernel team leader, and I were at the OpenVZ booth, shared with SWsoft and their partner Thomas Krenn, so we had our own "little corner" to show OpenVZ on our two laptops. We also had leaflets (in English and German) and a DVD to give away.
On the laptops we were showing the command line interface: how to use vzctl to create VEs and set their resource limits, how user beancounters and disk quotas work, and so on. But the most amazing thing was definitely the live migration feature, which we demonstrated using an X screensaver running inside a VE, accessible via a VNC client. During the migration the screensaver paused for a few seconds and then continued to run — but on the other laptop! So, from a user's point of view, migration looks like some kind of network delay.
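The command line part of the demo follows the standard vzctl workflow. Here is a rough sketch (the VE ID, OS template name, IP address, hostname, and limit values are made up for illustration, and exact option names may vary between vzctl versions):

```shell
# Create a VE from an OS template (template name is illustrative)
vzctl create 101 --ostemplate fedora-core-5

# Give it an IP address and hostname, saving them to the VE config
vzctl set 101 --ipadd 10.0.0.101 --hostname demo.example.com --save

# Set a couple of user beancounter limits (barrier:limit pairs)
vzctl set 101 --kmemsize 2752512:2936012 --numproc 200:220 --save

# Start the VE and run a command inside it
vzctl start 101
vzctl exec 101 ps ax
```

These commands must be run as root on an OpenVZ-enabled host.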
Also, on May 5th (which, coincidentally, was also my birthday), Kirill and I held a 1.5-hour workshop about virtualization and OpenVZ. Half of the workshop was a presentation on virtualization technologies in general, and OpenVZ in particular. The other half was a live demo of some of the features OpenVZ has, including, of course, the live migration demo.
The workshop was great and attracted about 50 people (or maybe more) — the room was pretty full. We got a lot of good questions, and many people came to our booth after the workshop to ask some more.
I've been bugging LWN to cover OpenVZ for a while now, especially since the checkpointing and live migration features were released. They covered it in the weekly edition released today. The only problem is that only subscribers can see the weekly edition during the week it is published; non-subscribers have to wait a week. Check it out when you can. If you have an LWN subscription, you can read it here:
http://lwn.net/Articles/180775/
Obligatory quote:
As might be expected, the checkpointing code is on the long and complicated side. The checkpoint process starts by putting the target process(es) on hold, in a manner similar to what the software suspend code does. Then it comes down to a long series of routines which serialize and write out every data structure and bit of memory associated with a virtual environment. The obvious things are saved: process memory, open files, etc. But the code must also save the full state of each TCP socket (including the backlog of sk_buff structures waiting to be processed), connection tracking information, signal handling status, SYSV IPC information, file descriptors obtained via Unix-domain sockets, asynchronous I/O operations, memory mappings, filesystem namespaces, data in tmpfs files, tty settings, file locks, epoll() file descriptors, accounting information, and more.
It's a huge pile of code that lets you checkpoint your virtual environments — in the same way you do "suspend to disk" or "hibernation" on your laptop. This is cool, but what's the use of it for a VE? Here it is: you can actually restore ("wake up") a VE on a different machine! The process is called live migration; newer versions of vzctl will include a tool called vzmigrate for that.
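On the command line, checkpointing boils down to two vzctl operations. A sketch (the VE ID and dump file path are illustrative; exact options may differ between vzctl versions):

```shell
# Freeze VE 101 and dump its complete state to a file
vzctl chkpnt 101 --dumpfile /vz/dump/ve101.dump

# Later -- possibly on another machine -- revive the VE from that dump
vzctl restore 101 --dumpfile /vz/dump/ve101.dump
```

Live migration is essentially these two steps, plus transferring the VE's files and dump to the destination host in between.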
Imagine you want to add more RAM to your (physical) server, or upgrade the kernel. Such operations require a reboot of the physical server, which means you have to shut down all your VEs. But now you do not need to do that: instead, you live migrate all the VEs to another OpenVZ box, do the maintenance, and live migrate the VEs back. From the user's point of view, this looks like a delay in processing, not downtime. All the network connections and everything else are preserved — that is why it is called "live" migration.
Another scenario is when your VE has grown considerably and needs more CPUs than the server has. Again, you set up another server with OpenVZ installed and live migrate your VE to it. No hassle, no reconfiguration, no downtime.
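With vzmigrate, both scenarios come down to a single command per VE. A sketch (the destination hostname and VE ID are made up; the --online flag for live migration matches vzmigrate of that era, but check your version's documentation):

```shell
# Live-migrate VE 101 to another OpenVZ host; its processes keep running,
# with only a short pause while the final state is transferred over ssh
vzmigrate --online otherbox.example.com 101
```

After the command completes, VE 101 runs on otherbox.example.com with its network connections intact.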
Isn't it amazing? Yes, it is. The most amazing thing, though, is that the technology itself is free software. Free as in freedom, not as in beer; please do not confuse the two.
As of today, though, I am happy to announce that our git repository now hosts the source code for the vzctl and vzquota utilities. There you can find the latest not-yet-released versions of these utilities.
If you browse through the changesets, you'll notice that we have changed the utilities' license from the QPL to the GNU GPL. The licenses are very similar, but the QPL is not considered free enough by some hard-core free software people. So we listened to the voice of reason and switched to the GPL.
I just can't consider this minor news — hence the question mark in the subject.
As with every other kernel hacker interview posted to kerneltrap.org, this one is a very interesting read, and gives insight into both the people and the technologies.
Cedric Le Goater of IBM France just sent an email to the LKML and openvz-devel lists saying that IBM is testing different virtualization projects. Here is what he wrote:
Recently, we've been running tests and benchmarks in different virtualization environments: openvz, vserver, vserver in a minimal context, and also Xen as a reference in the virtual machine world.
We ran the usual benchmarks — dbench, tbench, lmbench, kernel build — on the native kernel, on the patched kernel, and in each virtualized environment. (Read more...)
I am personally very happy to get this news. Let me explain why.
We do a lot of testing for OpenVZ, including performance and stability tests, trying hard to find and fix all possible problems before they show up on users' boxes. We also do comparisons with other projects — like Xen, UML, or Linux-VServer — to make sure we are at least on par.
It doesn't make much sense for OpenVZ to prepare and publish benchmarks itself. A comparison should be done by a trusted, independent third party that will act in a fair and unbiased manner. By taking this on at its own initiative, IBM is fulfilling a meaningful role here.
Clearly, this work benefits users of virtualization technology. It would be terrific, I think, if the whole process were opened up as a collaborative community effort, with all the projects involved as active participants.
I am looking forward to seeing the test results, and contributing to the testing scenarios, methodologies and cases.
