But while I was having fun, our guys did a great job posting the User Beancounters (UBC) patch set and the network namespaces patches for review. It looks like the UBC stuff caused quite a long discussion. Some valid concerns were raised (and will be addressed).
The other thing is, besides UBC, there is also CKRM (Class-based Kernel Resource Management) out there, which basically has the same (or perhaps broader) goals. As Alan Cox puts it, “[...] OpenVZ has all the controls and resource managers we need, while CKRM is still more research-ish. I find the OpenVZ code much clearer, cleaner and complete at the moment, although also much more conservative in its approach to solving problems”.
That’s right — CKRM is a complex framework for doing resource management, providing an interface to plug in specific resource controllers. The framework is there, but not many controllers are available (and CKRM has been around for a few years already). User beancounters, on the other hand, just do what they were designed for: limiting a container within its boundaries, not letting it abuse kernel resources and cause trouble for other containers.
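To give a feel for what that looks like in practice, here is a minimal sketch of the admin-side UBC interface; the VE ID 101 and the particular limit values are made up for illustration:

    # On the hardware node: show current usage, barriers/limits and
    # fail counters for every container's beancounters
    cat /proc/user_beancounters

    # Tighten the kernel memory beancounter for VE 101 to a
    # barrier:limit pair (in bytes) and store it in the VE config
    vzctl set 101 --kmemsize 2211840:2359296 --save

    # Cap the number of processes the VE may create
    vzctl set 101 --numproc 200:220 --save

If a VE hits one of its limits, the corresponding failcnt column in /proc/user_beancounters grows, which is usually the first place to look when something inside a container mysteriously fails.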
I also hope to have some DVDs with the latest OpenVZ stuff available, if I have time to burn them this weekend. :)
Asked what the plans are to get the OpenVZ technology into the distributions of the leading Linux vendors Red Hat and Novell's SUSE, Kolyshkin said the OpenVZ patchset has already been provided for the SUSE Linux Enterprise Server 10 kernel and that the SUSE engineers are currently evaluating the technology.

I hope Red Hat and SUSE get on board in the not too distant future.
He pointed to comments made earlier this year by Holger Dyroff, vice president of Linux Server product management at Novell, where Dyroff said Novell is committed to bringing the latest advances in virtualization "and will evaluate the technology for possible inclusion in a future release of SUSE Linux Enterprise Server 10."
Kolyshkin also referenced comments made by Brian Stevens, CTO at Red Hat, in a March webcast, where he said the company saw a strong use case for lightweight, container-based virtualization and would "get behind that, absolutely."
With regard to the Xen virtualization technology that both Red Hat and SUSE have agreed to include in their distributions, Kolyshkin said they are employing different virtualization approaches that can "happily coexist."
Last week Kirill Korotaev and I visited Ottawa to take part in the Linux Kernel Summit and the Linux Symposium. It was our first time at these events, so we were in a good mood despite the 16-hour flight from Moscow to Ottawa and the eight-hour time zone change. We went there mostly to discuss containers and their integration into the mainstream kernel.
Containers (VEs, VPSs), or kernel-level virtualization technology (as implemented in OpenVZ), were widely discussed at both events. The topic was presented by three parties:
- OpenVZ
- IBM (ex-Meiosys guys)
- Eric Biederman
The overall feeling among the kernel people is: containers are a good feature to have in the Linux kernel, so let’s merge them into the mainstream. But since several different implementations of the technology are available, and several groups are working on them, the mainstream code should be the result of a consensus between all those implementations.
So, let me describe what all those groups are aiming for:
[...]

To conclude — this is not going to be an easy task, but it is doable, and the fact that we met in person and discussed all that stuff, and that the other kernel developers are all for it, helps a lot. Sooner or later, we will be there.
Both events are really interesting. The Kernel Summit is the place where all the top-notch kernel hackers decide on the future of Linux — and one of the topics this year will be OS-level virtualization. The Linux Symposium is one of the best Linux conferences out there: highly technical, with a lot of interesting talks, sessions, and tutorials. “Virtualization guys” such as Eric Biederman, Jeff Dike, Ian Pratt, Cedric Le Goater, Dave Hansen, and Serge Hallyn will all be there.
Five years ago, in 2001, our kernel developer Andrey Savochkin was there at the “Linux 2.5 Kernel Summit”. Here is a nice description of that event, including a group picture (Andrey is the guy in a light brown sweater, standing between Linus Torvalds and Eric Raymond; if you still can’t find him, here is the annotated photo).
There is a set of “Powered by OpenVZ” images available from our wiki’s Artwork page. If you use OpenVZ, let the world know it: put one of those images on your homepage. The original OpenVZ artwork, done as vector graphics (using Inkscape), is also available from the Artwork page, so you can make your own banners and buttons. If you wish to do so, feel free to add your graphics to the page.
I am happy to report we have some success with our Wiki at wiki.openvz.org. So far we are at almost 50 articles, and the number keeps growing daily.
The most exciting fact is that a few articles were written by the OpenVZ community, for example this Ubuntu template article, or a short troubleshooting note on cPanel quotas.
You can contribute, too: after logging in, you can edit any article or start a new one.
Wiki features
The full entry goes on to cover wiki markup, categories, talk pages, special pages, and CSS for printing.

To conclude — a wiki is a fantastic tool for collaborative development of a project’s knowledge base.
We have just released a new kernel from the development branch — a shiny new 2.6.16-026test014.4. Aside from the usual bunch of fixes and some performance optimizations, it includes three major features:
- Virtual ethernet device (a.k.a. veth) for a VE
- /proc/meminfo virtualization (a quick check is shown right after this list)
- IPv6 virtualization
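As a quick illustration of the /proc/meminfo virtualization (the VE ID 101 is, again, just an example), you can compare what the hardware node and a VE report:

    # On the hardware node: the real amount of RAM in the box
    grep MemTotal /proc/meminfo

    # Inside VE 101: the value is derived from the VE's beancounters,
    # not from the physical memory of the node
    vzctl exec 101 grep MemTotal /proc/meminfo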
I will probably describe the other features some time later; for now, I want to tell you a bit about what veth is.
As each VE is "just like a real server", it needs networking, and thus an IP address. For that to happen, there is a special network device implemented in the OpenVZ kernel, called venet. Venet appears inside a VE, and the physical server admin can set up an IP (or a few) for the VE. Note that a VE's IP should be set from the host system, because the proper route has to be added to the host's routing table.
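In practice that boils down to a single vzctl command on the host (the VE ID 101 and the addresses below are invented for the example):

    # On the hardware node: give VE 101 an address on its venet device,
    # save it to the VE config, and let vzctl add the host-side route
    vzctl set 101 --ipadd 10.0.0.101 --save

    # Sanity check from inside the VE
    vzctl exec 101 ping -c 1 10.0.0.1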
While venet is just fine for most purposes, there are some special cases it simply cannot handle. For example, since venet has no MAC address, it cannot send or receive broadcasts, which makes it impossible to run DHCP software inside a VE. Multicast does not work either. And a VE owner cannot add a new IP to his own system (which is actually good if your VE is untrusted, but in some other cases is a bit inconvenient).
So, to solve all of the above, here comes yet another virtual network device for a VE, called veth. Being human, I am too lazy to repeat work already done, so let me just quote Kirill Korotaev, our kernel team leader:
[...]

So, to conclude, with the virtual ethernet device OpenVZ becomes even more powerful and useful in some advanced networking scenarios. As always, the power comes with responsibility: do not give veth to untrusted VEs, or you'll be b0rked.
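For the impatient, here is a rough sketch of what the host side of a veth setup might look like, assuming the host end of the pair is named veth101.0 and that you bridge it with the node's eth0 (the device names and addresses are illustrative, and the exact steps depend on your vzctl version and network layout):

    # Bring up the host end of the veth pair; it needs no IP of its own
    ifconfig veth101.0 0.0.0.0 up

    # Bridge it with the physical NIC so the VE sits on the LAN
    # (requires the bridge-utils package)
    brctl addbr br0
    brctl addif br0 eth0
    brctl addif br0 veth101.0
    ifconfig br0 192.168.0.1 netmask 255.255.255.0 up

Inside the VE, the other end of the pair shows up as a regular ethernet interface with its own MAC address, so it can be configured statically or even via DHCP, which is exactly the kind of thing venet cannot offer.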
The workshop itself was pretty crowded, and we had a lot of interesting questions from the audience. Here is a photo from the workshop (800x600 JPEG, 81 KB).
Speaking of distributions, I always like to say that Gentoo Linux is the first distro to include and support OpenVZ — since last September! This happened thanks to the endless help from the Gentoo-VPS project. For a Gentoo user, that means installation is almost as easy as running emerge openvz-sources vzctl. Well, actually it involves a few more steps, which are described in detail here.
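For the curious, the extra steps look roughly like this. This is only a sketch: the init script name ("vz") and the kernel build details are assumptions on my part, so do check the linked instructions:

    # Install the OpenVZ kernel sources and the container control tools
    emerge openvz-sources vzctl

    # Point /usr/src/linux at the openvz-sources tree, then configure,
    # build and install the kernel, and reboot into it
    cd /usr/src/linux
    make menuconfig && make && make modules_install install

    # Start the OpenVZ service and have it start on every boot
    /etc/init.d/vz start
    rc-update add vz default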
We are also working with other distribution vendors, and today I am really happy to announce that Mandriva Corporate Server 4.0 will include OpenVZ! The full press release is here.
