
Entries by tag: openvz

Andrew Morton on OpenVZ

Andrew Morton gave a keynote at the recent LinuxWorld Expo in San Francisco. A fair portion of his talk was devoted to the need for testing new kernels, and also to what will appear in the kernel soon. A couple of slides were specifically about containers, including OpenVZ.

A nice recap of what he said is at zdnet.com; here's the quote: Additionally, and contrary to popular thinking, the debate over whether open source virtualization engines will fragment the industry is null and void since the kernel supports and will support all open source solutions – be it Xen, KVM, OpenVZ or VMware, Morton said.


one kernel bug story among 305

A few days ago one of the OpenVZ kernel team members, Pavel Emelyanov, posted a one-line patch to fix a bug in the Linux kernel. He received the following reply from Andrew Morton, one of the upstream kernel maintainers:


I'm curious. For the past few months, people@openvz.org have discovered (and fixed) an ongoing stream of obscure but serious and quite long-standing bugs.

How are you discovering these bugs?


Andrew added later:


hm, OK, I was visualising some mysterious Russian bugfinding machine or something.

Don't stop ;)


So, here is the story behind that bug.

A few months ago, in the course of OpenVZ kernel testing, our QA (Quality Assurance) team found a strange issue. The thing is, every container (VE) in OpenVZ has a set of resource usage counters (and limits) called beancounters. All the usage counters should be zero when a VE is stopped, since at that point all the resources are naturally released. The issue was that a resource called kmemsize (the kernel memory used on behalf of a given VE) had a usage counter of 78 bytes after the VE was stopped -- which effectively means 78 bytes of kernel memory were lost (or leaked, as programmers say).
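For the curious, here is a minimal sketch in C of the beancounter idea. This is an illustration only, not the actual OpenVZ kernel code, and all the names are made up; it just shows the charge/uncharge discipline that makes this kind of leak detectable.

/* A toy model of a per-VE resource counter. */
#include <assert.h>
#include <stddef.h>

struct beancounter {
    size_t held;    /* resource currently in use by this VE */
    size_t limit;   /* hard limit for the resource */
};

/* Called whenever kernel memory is allocated on behalf of the VE. */
static int bc_charge(struct beancounter *bc, size_t bytes)
{
    if (bc->held + bytes > bc->limit)
        return -1;          /* over the limit: allocation is denied */
    bc->held += bytes;
    return 0;
}

/* Called whenever that memory is freed again. */
static void bc_uncharge(struct beancounter *bc, size_t bytes)
{
    bc->held -= bytes;
}

/* After a VE stops, every counter must be back to zero;
 * a non-zero value (like those 78 bytes) signals a leak. */
static void bc_assert_clean(const struct beancounter *bc)
{
    assert(bc->held == 0);
}

Note that the limit check in bc_charge() is also what protects OpenVZ from the security implications of such leaks, as explained further below.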

Who cares about 78 bytes, especially on a server with 16 gigabytes (17,179,869,184 bytes) of RAM? We do. Pavel checked the beancounters debug information, which showed that one struct user object had leaked. He then tried to reproduce the leak, but with no luck.

Bugs that cannot be reproduced are tough. The only option left was to audit the kernel source code. That involved finding all the places where a struct user object is referenced, and checking the code for correctness (the term "correctness" in this context means that every object that is allocated must later be released). It took him 4 hours to do the audit, and he found one place where the reference to an object might be lost (which means it could not later be released). It's the same as if you lend a book to a friend and later forget whom you gave it to -- you have lost the reference and you can't get the book back.
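To illustrate the kind of bug the audit was hunting for, here is a hypothetical C sketch of the lost-reference pattern. The names are invented; this is not the actual kernel code.

#include <stdlib.h>

/* A reference-counted object: every get must be paired with a put. */
struct user_obj {
    int refcount;
    /* ... per-user accounting data ... */
};

static struct user_obj *get_user_obj(struct user_obj *u)
{
    u->refcount++;
    return u;
}

static void put_user_obj(struct user_obj *u)
{
    if (--u->refcount == 0)
        free(u);            /* last reference dropped: release it */
}

static void switch_user(struct user_obj **curr, struct user_obj *next)
{
    struct user_obj *old = *curr;

    *curr = get_user_obj(next);
    put_user_obj(old);      /* the buggy code was missing this call,
                             * so the old object's reference was lost
                             * and it could never be freed */
}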

In this case, after the problem was found, fixing it was pretty straightforward. So Pavel wrote a fix and a demo program to trigger the bug, tested the fix, and sent it to the Linux kernel mailing list.

Why is this particular incident so important?
* It's OpenVZ code (beancounters) which helped to detect the leak in the first place -- the bug is very hard to trigger (unless you know how), and the leak is small enough that it might never have been discovered at all.
* It demonstrates the OpenVZ developers' dedication. They never dismiss real bugs as "works for me" or "invalid"; they work to find the root cause and fix the problem.
* This bug is in fact a security issue. An ordinary user (actually two users are needed in this case) could exploit the bug and eat all the kernel memory, thus bringing the whole system down. Worse scenarios are possible as well.
* Incidentally, OpenVZ is protected from this security issue -- because the kmemsize beancounter (which helped to find it) limits kernel memory usage per Virtual Environment.

Most important of all, this is just one of the 305 kernel patches from our team that were accepted into the mainstream Linux kernel over a one-year period -- more than one patch per working day, excluding weekends and holidays. And we are not going to stop! :-)

Join Our Team at LinuxWorld San Francisco

We're very excited that this year OpenVZ will have exhibit space in the dot-org pavilion of LinuxWorld in San Francisco, August 6-9. We will be showing and demoing OpenVZ server virtualization, answering questions, and so on.

Here is the best news of all: we can have up to 7 people at our exhibit. While a few OpenVZ developers will come, there will definitely be fewer than 7 of us -- we do not want to stall OpenVZ development. :)

We would like the community to participate with us in the event. If you live in California (or can come to this LinuxWorld), are an OpenVZ user and would like to be a part of our team at the OpenVZ exhibit -- you are very welcome to join us! Please email me (kir@openvz.org) your details and we'll discuss arrangements.

OpenVZ talk at Linuxtag

I had the opportunity to talk about OpenVZ at this year's Linuxtag in Berlin, Germany. For some reason the slot for the talk was 90 minutes (not 60 minutes, as for the other talks at Linuxtag) -- maybe because Alan Cox had a parallel session. So I thought it would be a hard job keeping the audience from falling asleep after lunch. But OpenVZ seems to be so fascinating that when, after 50 minutes, I mentioned that I had covered the most important things but still had some slides with more details for those who wanted more information, only two people left, and the remaining 70-80 stayed!
I shot a photo of the audience after the live demo. As the flash of my camera did not work the first time, I took a second snapshot. I said, "It seems that taking a picture with a digicam needs more time than OpenVZ needs to start a Virtual Environment", and the whole audience started laughing ;-)
So, here is the pic:

You can find the description of the talk at the Linuxtag website, and the slides on my private website. Thanks to Kir, who prepared most of the slides!


Recently, I had the opportunity to present at a session of the Gelato Itanium Conference and Expo in San Jose. It was a good fit because they had a special track on virtualization, and OpenVZ (and the Virtuozzo product) is the only stable virtualization technology available now for Itanium servers.

Once again, I was able to talk with Andrew Morton (a kernel hacker and the right hand of Linus Torvalds) and was encouraged about the prospects of OS virtualization and OpenVZ in the Linux kernel. That is something we would really like to see and have been working towards. This article summarizes Andrew’s remarks, noting that “OpenVZ already has thousands of systems out there” and that, “as far as containerization standard in mainline goes, ‘most of the stakeholders are playing together quite nicely’”.

Yes, we are and we’ll keep at it so we can realize our goal.

HP Labs compares OpenVZ and Xen

HP Labs has performed and published a performance evaluation of OpenVZ and Xen for server consolidation. The 14-page PDF can be viewed here: HPL-2007-59R1.pdf (418K). Update: links to the paper updated.

I wrote about a Xen/OpenVZ comparison last month here -- one that was done by a German student as his thesis. I am really glad to see another third-party evaluation. (I would never ever trust any comparison done or sponsored by vendors.) I also like the level of detail provided. Here are quotes from the last section:

For all the configurations and workloads we have tested, Xen incurs higher virtualization overhead than OpenVZ does, resulting in larger difference in application performance when compared to the base Linux case.
<...>
For all the cases tested, the virtualization overhead observed in OpenVZ is limited, and can be neglected in many scenarios.

For all configurations, the Web tier CPU consumption for Xen is roughly twice that of the base system or OpenVZ.


Does that mean OpenVZ is better for scenarios such as Linux server consolidation? Yes, much better. Does that mean Xen is not good? No, not really. Xen has its applications as well (say, when you also want to run Windows on the same piece of hardware), and in fact OpenVZ and Xen can nicely and happily co-exist.

benchmarks: Xen vs. OpenVZ

There is a somewhat interesting article at the Infoworld blog discussing the VMware vs. Xen and Xen vs. VMware benchmarks. It appears VMware did a not-quite-good job comparing their ESX to Xen, so Xen came back and presented another comparison, where it is either on par with or a bit better than ESX.

From my experience working as the Virtuozzo QA team leader (a few years ago), doing all sorts of performance and stress tests for the Virtuozzo kernel, I know that there are many factors influencing the results. Consider this: if you happen to run your test while cron is running daily jobs such as slocate's database update or log rotation, your performance could be 10 to 50 per cent lower. That is a very simple and obvious example -- just disable the cron daemon before you do your testing. A trickier example: networking performance can increase by 10 to 15% if you bind a NIC interrupt to a single CPU on a two- or four-way SMP box, as sketched below.
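For reference, here is a small sketch of how one can pin an IRQ to a single CPU through procfs. The IRQ number 19 is a made-up example -- look up your NIC's IRQ in /proc/interrupts first, and run this as root.

#include <stdio.h>

int main(void)
{
    /* CPU bitmask in hex: 0x1 means CPU0 only */
    FILE *f = fopen("/proc/irq/19/smp_affinity", "w");

    if (!f) {
        perror("fopen /proc/irq/19/smp_affinity");
        return 1;
    }
    fprintf(f, "1\n");
    fclose(f);
    return 0;
}

The same effect is usually achieved with a one-line echo into that file from a root shell.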

So, my suggestion is to take those benchmarks and comparisons with a grain of salt. Better yet, do your own comparison using your hardware and your workloads -- and make sure you understand all the results. If something is slow, find out why. If something is faster than it should be, find out why as well -- perhaps you did something wrong. This part, results analysis, is perhaps the most complex part of performance testing.

Having said that, I'd like to point out a Xen vs. OpenVZ comparison done by a German student, Björn Gross-Hohnacker, whom I met at last year's LinuxWorld Cologne. Björn graciously allowed us to publish his results, so we have translated part of his thesis into English.

Here is the bottom-line summary: IPC and disk I/O performance is better (or much better) in OpenVZ than in Xen, CPU-intensive tasks perform about the same in both, and networking is a bit better in OpenVZ. Conclusion: for homogeneous (i.e. Linux-only) environments, OpenVZ is way better -- as it was designed to be.

You are taking this with a grain of salt, aren't you? ;)

Update: comments disabled due to spam

2.6.20 is here

I am happy to report that the OpenVZ project has made an initial port to Linux kernel 2.6.20. The resulting kernel is now available from the git repository.

This is a work in progress -- so far we have only checked that the kernel compiles and passes some tests on x86; other architectures have not even been tried yet. So it is definitely not for the faint of heart.

Note that the OpenVZ versioning scheme was changed (or, rather, simplified) with this branch -- the branch number and the test/stab word were dropped, and now it is just 'ovzNNN' in the RPM release and kernel "extraversion" fields (where NNN is a three-digit number, starting from 001). So, what was meant to be 030test001 has become ovz001. Hopefully this will lead to less confusion.

Binary and source RPMs will be released some time next week.


fun with polls and statistics

At the end of last year, we conducted a poll on the openvz.org web site. The poll was online for about 4 weeks, and more than 1300 people voted. While it is offline now, you can still see the results here.

The question was: "Which virtualization solutions are you using, or plan to use?", and the top three answers were: VMware (580 votes), Xen (504 votes) and OpenVZ (502 votes). Those are the big guys. The medium guys are Linux-VServer (165 votes), QEmu (148 votes) and Virtuozzo (145 votes). All the others are below the 5 percent barrier.

The results are not shocking. VMware is the clear leader, Xen is a recognizable name in virtualization, and OpenVZ is high because it's the OpenVZ site. QEmu is somewhat popular among Linux geeks, as is Linux-VServer.

At about the same time, a German Linux portal ran a similar poll. The only difference was that they allowed only a single answer, while our poll allowed several. No, you don't have to know German to read the results: VMware accounts for 60% (perhaps because only a single option was allowed), Xen comes next with 15%, and OpenVZ is number three with 7%. I'm glad to see we are among the top three.

The fun thing in that poll is that something called "Virtual Server" is number four. Hmm... I find that name too generic -- it could be M*crosoft Virtual Server, or Linux-VServer, or something else.

Finally, I think it's fun to run a poll, so here's another poll for you.

Poll #943152: containers vs. hypervisor

Compare hypervisor and container (OS-level) virtualization. Which one is more important to incorporate into the mainstream (vanilla) Linux kernel?

* Both are important -- 2 votes (50.0%)
* Containers are more important -- 2 votes (50.0%)
* Hypervisor is more important -- 0 votes (0.0%)
* Neither is important -- 0 votes (0.0%)

2006 contributions

My last post talked about the diligent work that goes into security and how many people in the Linux community contribute to that effort.

Well, now it is time for me to acknowledge the many people in the user community who have contributed their talents to the OpenVZ project and helped make the OpenVZ software better.

The list is a long one, and these people all deserve our collective thanks, so I created the "2006 contributions" article on the OpenVZ Wiki. It is a wiki, so if you see that something or somebody is missing, feel free to add the information.

On behalf of the OpenVZ project, we are humbled and thank everybody who made OpenVZ better in one way or another.
