IBM developerWorks recently published the second part of the article describing OpenVZ on Cell BE processors.
Cell (Wikipedia article) is a very interesting microprocessor from IBM, based on the 64-bit Power architecture, "but with unique features directed toward distributed processing and media-rich applications". Essentially, it is a hybrid CPU, combining a traditional Power core with eight specialized Synergistic Processing Elements (SPEs). Making the OpenVZ kernel and tools SPE-aware is the main topic of the work being done.
Read the article: part 1, part 2, and wait for part 3.
Now all I need is a PlayStation 3 (which features the Cell BE).
I am happy to announce that OpenVZ is taking part in two Linux events this February. As always, we will be happy to meet with OpenVZ users.
The first event is the Southern California Linux Expo (SCALE), which will be held in Los Angeles, CA, February 8-10. My colleague Andrey Mirkin and I will give a talk titled "Containers Checkpointing and Live Migration". Plus, we will have a booth there with all the usual stuff: demos, live CDs, etc. I went to SCALE back in 2006 and liked it (see an old post here).
The next day, February 11, we will be at the Florida Linux Show in Jacksonville, FL, giving an introductory talk about OpenVZ. Since Florida is quite far from Los Angeles and the event is the very next day, that means a six-hour overnight flight. Anyway, it's still shorter than Moscow to LA. :)
And another brand-new outlook on clustering will be presented: some students of the Secure Information Systems M.Sc. programme -- also at the Upper Austria University of Applied Sciences, Hagenberg Campus -- have worked on "LBVM" (load balancing of virtual machines). LBVM allows virtual machines to be distributed among physical servers in a predefined cluster: with the help of load balancing algorithms, VEs can be live migrated automatically. Their solution uses a general approach that allows the use of different virtualization technologies (initially they support OpenVZ and Xen).
Of course, there are a lot of other interesting talks at the conference, so it's really worth attending. And it would be really nice to talk to some other OpenVZ users there ;-)
SWsoft, the sponsor of the OpenVZ project, has recently announced that it will adopt "Parallels" as its new corporate name moving into next year. So, you might ask: what does this mean for OpenVZ?
Absolutely nothing. We will keep doing what we do: providing new releases, fixing bugs, supporting our users, and staying focused on integrating container virtualization technology into the mainstream Linux kernel.
Separately from the company name change, you'll see us slowly cease using the terms "VE" (virtual environment) and "OS-level virtualization". The terms commonly used in the industry are "containers" and "container-type virtualization" -- and we are already using those.
We have recently started a Partners section on our wiki for those who are working together with the OpenVZ project in one way or another. Our intent is to build this over time to serve as a resource. And it already works -- a couple of companies have added their profiles recently.
If you have created virtual appliances that use OpenVZ, or provide support services, or qualify in some other way, feel free to edit the page and add your profile there. If you have any questions, just go ahead and e-mail me, kir@openvz.org.
Just a note: this section is quite different from the 2006 Contributions section on the wiki, which acknowledges the people who contributed to the OpenVZ project last year.
One of the goals of the OpenVZ project is to integrate containers functionality into the mainstream Linux kernel. As you know, most new kernel code goes through Andrew Morton, the right hand of Linus Torvalds.
I just came across a video of Andrew speaking at LinuxWorld Expo 2007. Among other topics, he describes what is going to be in the kernel in a year or so. It is quite interesting to see what he thinks of containers -- for that part, skip to 40:58.
Update: here's the transcription of the relevant part, provided by dowdle.
The one prediction I am prepared to make is that over the next 1 to 2 years there'll be quite a lot of focus in the core of the Linux kernel on the project which has many names. Some people call it containerization, others will call it operating system virtualization, other people will call it resource management. It's a whole cloud of different features which have different applications.
It can be used for machine partitioning, to partition workloads within one machine, otherwise known as workload management.
Server consolidation. Well, you have a whole bunch of servers which are 30 percent loaded -- move all those things onto one machine without having to tread on each other's toes.
Resource management. A number of people in the high-end numerical computing area want resource management. Other people who are running world famous web search engines also want resource management in their kernel. In fact, the major, central piece of the whole containerization framework is from an engineer at Google. It's in my tree at present and I'm hoping to get it into 2.6.24. It's just a framework for containerization. A whole lot of other stuff is going to plug in underneath it, which is under development at present.
So an example of resource management is: you might have a particular group of processes, and you want to not let it use more than 200 MB of physical memory, and a certain amount of disk bandwidth, network bandwidth, a certain amount of CPU -- so you can just have this little blob, give it the maximum amount of resources it can consume, and let it run without letting it trash everything else that is running on the machine. So that is a resource management application. People also need this feature for high availability... and I'm still not really sure I understand why.
Also the OpenVZ product, which comes out of the development team in Russia -- that's a mature project that is mainly for web server virtualization: having lots and lots of different instances of the web server on one machine, without one excessively taking resources away from another. They've been working very hard and very patiently, and with great accommodation, on this project. I hope slowly we'll start moving significant parts of the OpenVZ product into the Linux kernel in a way which is acceptable to all the other stakeholders, so that those guys don't end up carrying such a patch burden.
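As an aside: that "framework for containerization" is the control groups filesystem, so here is a minimal sketch of how Andrew's 200 MB example could look from userspace. This is my own illustration, not an OpenVZ tool; it assumes the memory controller is available and mounted at /sys/fs/cgroup/memory (both the mount point and the "demo" group name are just assumptions), and it needs root to run.

    /*
     * A minimal sketch of the cgroup interface from userspace, following
     * the 200 MB example above. Assumes the memory controller is mounted
     * at /sys/fs/cgroup/memory (mount -t cgroup can put it anywhere).
     */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        const char *grp = "/sys/fs/cgroup/memory/demo";
        char path[256];
        FILE *f;

        /* Creating a directory in the cgroup filesystem creates a group. */
        mkdir(grp, 0755);

        /* Cap the group's memory at 200 MB, as in the example above. */
        snprintf(path, sizeof(path), "%s/memory.limit_in_bytes", grp);
        if ((f = fopen(path, "w")) == NULL) {
            perror(path);
            return 1;
        }
        fprintf(f, "%ld", 200L * 1024 * 1024);
        fclose(f);

        /* Move the current process (and thus its future children) in. */
        snprintf(path, sizeof(path), "%s/tasks", grp);
        if ((f = fopen(path, "w")) == NULL) {
            perror(path);
            return 1;
        }
        fprintf(f, "%d", (int)getpid());
        fclose(f);

        return 0;
    }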
Here is some good news for SLES users. I'm happy to report that the OpenVZ team resumed work on the SLES10-based OpenVZ kernel a few months ago, and we now have a pretty stable SLES10 OpenVZ kernel. I encourage all SLES users to try it out.
The SLES10 kernel itself is based on Linux kernel 2.6.16, and until SLES11 comes out, it remains the most "enterprise" (read: stable and supported) kernel coming from Novell/SUSE. So what we did is take that kernel and port our OpenVZ patchset to it. The only feature missing is I/O priority support, because the CFQ disk scheduler used in 2.6.16 is way too old. Other than that, it's a pretty decent kernel, and while we haven't declared it stable yet, we will do so really soon.
Last week I went to Cambridge, UK with my colleague Pavel Emelyanov to take part in LinuxConf Europe and the containers mini-summit, as well as the Linux Kernel Summit session devoted to containers. Pavel, who works in the OpenVZ kernel team, is now focused on integrating our technology into the mainstream Linux kernel. To his credit, the memory controller and the PID namespace patchset (see my recent blog post), which were recently integrated into -mm, are mostly his work.
The first event in Cambridge was LinuxConf Europe, where we both presented talks on containers -- mine was a general introduction to virtualization, containers, and OpenVZ, while Pavel described some intimate details of the memory controller (read: "beancounters") implementation.
The next day we had to skip LinuxConf to take part in the containers mini-summit. This was an event for all the containers stakeholders to discuss what and how to present the containers topic at the Kernel Summit. Unfortunately, Eric Biederman (Linux Networx) and Paul Menage (Google) arrived later, and Balbir Singh (IBM) was busy with the VM mini-summit, so we held this mini-summit in two rounds. The first round was with Pavel (OpenVZ), Cedric Le Goater (IBM), Oren Laadan (of Zap, a checkpointing and live migration project), Kamezawa Hiroyuki (of Fujitsu Japan, mostly interested in resource management), and Paul (who joined us over Skype). The second round, with Eric, Paul, and Balbir, took place the next day in the hall. The results of this mini-summit are a few threads on the containers@ mailing list, plus a few documents here.
Finally, there was a 30-minute session at the Kernel Summit devoted to containers. Paul and Eric summarized what we have done so far and what we are going to do next. There was not much discussion, which I think is healthy, because now everybody knows about containers and why they are needed. Slides from the talk are available here. Jonathan Corbet (of Linux Weekly News) also provided a summary of the session (this is still subscriber-only content, but since I'm a subscriber I can share a free link with you).
It feels like we are making good progress and are on the right path to a containers implementation in the Linux kernel. You can see some of the people helping to make this happen in this photo. Click the image for a larger version.
ML: Can you update us on the current status of OpenVZ integration into the mainline kernel? Do you expect anything to happen in the near future regarding integration?
Kir: Most notable is the addition of the PID namespace patchset by Pavel Emelyanov into the -mm (Andrew Morton's) tree -- it means the code will be in Linus' kernel in a few months. PID namespaces are a feature that makes it possible to have different sets of PIDs in different containers. The code was mostly developed by OpenVZ's Pavel Emelyanov, with some pieces from IBM's Sukadev Bhattiprolu. Since the first version was sent back in May, it has been rewritten a few times to incorporate comments, suggestions and feature requests from everyone who was interested.
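For the curious, here is a minimal sketch of what the feature looks like from userspace: a process cloned with the CLONE_NEWPID flag starts a fresh PID namespace and sees itself as PID 1. This is my own illustration (it needs root and a kernel with the patchset applied), not code from the patchset itself.

    /*
     * A minimal sketch of PID namespaces: the child is cloned into a
     * fresh namespace and sees itself as PID 1, while the parent sees
     * the child's global PID.
     */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static char stack[1024 * 1024];          /* child's stack */

    static int child(void *arg)
    {
        /* Inside the new namespace, this prints PID 1. */
        printf("in the new namespace I am PID %d\n", (int)getpid());
        return 0;
    }

    int main(void)
    {
        /* The stack grows down on most architectures, so pass its top. */
        pid_t pid = clone(child, stack + sizeof(stack),
                          CLONE_NEWPID | SIGCHLD, NULL);
        if (pid < 0) {
            perror("clone");
            return 1;
        }
        printf("from the parent's namespace the child is PID %d\n", (int)pid);
        waitpid(pid, NULL, 0);
        return 0;
    }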
I'd also like to add that just a few days ago the memory controller patchset was also accepted into the -mm tree. It does things similar to user beancounters in OpenVZ. So far, the accepted code only provides group-based RSS and page cache accounting, plus a generic infrastructure for adding other accountables. The code was developed by Pavel Emelyanov and Balbir Singh (of IBM) in close collaboration.
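To give a feel for what group-based RSS and page cache accounting means in practice, here is a small sketch that dumps those two counters for a group, using the controller's memory.stat file; the mount point and the "demo" group name are assumptions for illustration.

    /*
     * A small sketch of reading the memory controller's per-group
     * statistics: the rss and cache lines of memory.stat are the two
     * counters mentioned above (values are in bytes).
     */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *path = "/sys/fs/cgroup/memory/demo/memory.stat";
        char line[128];
        FILE *f = fopen(path, "r");

        if (f == NULL) {
            perror(path);
            return 1;
        }
        while (fgets(line, sizeof(line), f) != NULL) {
            /* Print only the RSS and page cache counters. */
            if (strncmp(line, "rss ", 4) == 0 ||
                strncmp(line, "cache ", 6) == 0)
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }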
With that in place, today Pavel sent the first version of the kernel memory controller. The code is not aimed at inclusion yet -- it is mostly meant to spark discussion and help determine the needs.
Back in July, a couple of colleagues (Pavel Emelyanov and Denis Lunev, both from the OpenVZ kernel team) and I were in Canada for the Ottawa Linux Symposium.
OLS is a pretty big event, probably the biggest conference I have ever seen. Unlike in all previous years, this time it was detached from the Linux Kernel Summit (which will be held in Cambridge, U.K. next week). Being detached seemed to have little impact on the event -- it is still large and somewhat kernel-oriented. The facilities for talks and BoFs included one big and five smaller rooms, all named after different species of penguins (the big one is, of course, named Emperor).
We also had our talk there, presented by Pavel and covering some non-trivial aspects of our resource management solution: beancounters, which are part of the OpenVZ kernel. The paper (PDF, 156K, 9 pages) and the slides (ODP, 89K, or PPT, 474K) are available. In short, this is what he talked about:
Current Linux accounting and limiting mechanisms (setrlimit() and some global stats counters) are not enough, as they do not provide any task-group-based counters and limits. OpenVZ's beancounters address this issue, implementing per-group accounting and limiting for about 20 different resources, such as kernel memory, user-space memory, physical memory, network buffers, etc. Some specific implementation details (like shared RSS accounting and kernel slab accounting) are also described.
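To show why setrlimit() alone is not enough, here is a tiny sketch of its per-process nature (my own illustration, using only the standard interface): each process gets its own independent cap, so a group of N processes can still consume N times the limit -- exactly the gap that per-group beancounters close.

    /*
     * A tiny illustration of the per-process nature of setrlimit(): the
     * cap below applies to this one process alone. Each fork()ed sibling
     * would get its own independent 64 MB, so a group of N processes can
     * still use N * 64 MB in total.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl = { 64 * 1024 * 1024, 64 * 1024 * 1024 };

        if (setrlimit(RLIMIT_AS, &rl) != 0) {   /* cap our address space */
            perror("setrlimit");
            return 1;
        }

        void *p = malloc(128 * 1024 * 1024);    /* exceeds our own cap */
        printf("128 MB malloc %s\n", p ? "succeeded" : "failed, as expected");
        free(p);
        return 0;
    }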
It's good to see the high level of interest in containers this year. As at any conference, though, a lot of networking happens away from the formal proceedings. For example, we (I mean everybody who's interested in containers) all had breakfast in a nearby Starbucks to discuss containers, resource management, network virtualization and other subtle aspects of what we do. For us, about half of the famous Blackthorn Party was devoted to the same discussions (while the other half was surely about the beer).
It was a successful event, and I'm looking forward to taking part in the next Blackthorn Party Linux Symposium in Ottawa.