October 8th, 2007


Andrew Morton talk at LinuxWorld Expo 2007

One of the goals of the OpenVZ project is to integrate container functionality into the mainstream Linux kernel. As you know, most of the new kernel code goes through Andrew Morton, the right hand of Linus Torvalds.

I just came across the video of Andrew speaking at LinuxWorld Expo 2007. Among other topics, he talks about what is going to be in the kernel in a year or so. It is quite interesting to see what he thinks of containers -- to see that part, scroll to 40:58.

Update: here's the transcription of the relevant part, provided by dowdle.

The one prediction I am prepared to make is that over the next 1 to 2 years there'll be quite a lot of focus in the core of the Linux kernel on the project which has many names. Some people call it containerization, others will call it operating system virtualization, other people will call it resource management. It's a whole cloud of different features which have different applications.

It can be used for machine partitioning -- to partition workloads within one machine, otherwise known as workload management.

Server consolidation. Well, you have a whole bunch of servers which are 30 percent loaded -- move all those things onto one machine without having them tread on each other's toes.

Resource management. A number of people in the high-end numerical computing area want resource management. Other people who are running world-famous web search engines also want resource management in their kernel. In fact, the major, central piece of the whole containerization framework is from an engineer at Google. It's in my tree at present and I'm hoping to get it in in 2.6.24. It's just a framework for containerization. A whole lot of other stuff is going to plug in underneath it, which is under development at present.

So an example of resource management is you might have a particular group of processes, [and] you want to not let it use more than 200 MB of physical memory, and a certain amount of disk bandwidth, network bandwidth, a certain amount of CPU -- so you can just have this little blob and give it the maximum amount of resources it can consume, and let it run without letting it trash everything else which is running on the machine. So that is a resource management application. People also need this feature for high availability... and I'm still not really sure I understand why.
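The scenario Andrew describes -- capping a group of processes at 200 MB of memory -- corresponds to the control groups framework he mentions as heading for 2.6.24. As a rough sketch only (assuming a kernel with the cgroup framework and its memory controller compiled in; `/cgroups` and `webgroup` are hypothetical names), the setup might look like:

```shell
# Sketch, not a definitive recipe: requires root and a kernel with
# the control groups framework plus the memory controller.
mount -t cgroup -o memory none /cgroups   # expose the cgroup filesystem
mkdir /cgroups/webgroup                   # create a new process group
# Cap the group at 200 MB of physical memory
echo $((200 * 1024 * 1024)) > /cgroups/webgroup/memory.limit_in_bytes
# Move the current shell (and its future children) into the group
echo $$ > /cgroups/webgroup/tasks
```

Any process placed in the group is then subject to the 200 MB limit, while processes outside it are unaffected -- exactly the "little blob" isolation described above.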

Also the OpenVZ product, which comes out of the development team in Russia -- that's a mature project that is mainly for web server virtualization: having lots and lots of different instances of the web server on one machine, without one excessively taking resources away from another. They've been working very hard and very patiently, and with great accommodation, on this project. I hope slowly we'll start moving significant parts of the OpenVZ product into the Linux kernel in a way which is acceptable to all the other stakeholders, so that those guys don't end up carrying such a patch burden.