I have been playing around for years with all major virtualization environments, including all VMware products, Microsoft offerings, and several Xen-based solutions. But OpenVZ is the overall winner, and I summarize here just a few features that make it perfect for me.

Basic features:
  • No special hardware necessary; runs on all of my PCs and servers
  • Fast installation of OpenVZ: approx. 20 minutes starting from bare metal
  • Extremely fast and easy installation of guests: within 1 minute
  • Open Source licensed with good support from the OpenVZ team
Advanced features:
  • Online Live migration - YES, it works
  • Consistent online backups: see our vzdump
We run Proxmox Mail Gateway (in a high-availability cluster) on OpenVZ, and we did a lot of benchmarking with real data. We were very impressed with the results.

benchmarks: Xen vs. OpenVZ

There is a somewhat interesting article at the InfoWorld blog discussing the VMware vs. Xen and Xen vs. VMware benchmarks. It appears VMware did a rather poor job comparing their ESX to Xen, so the Xen camp came back and presented another comparison, in which Xen is either on par with or a bit better than ESX.

From my experience working as the Virtuozzo QA team leader (a few years ago), doing all sorts of performance and stress tests on the Virtuozzo kernel, I know that there are very many factors influencing the results. Consider this: if you happen to run your test while cron is running its daily jobs, such as slocate's database update, log rotation routines, etc., your performance could be 10 to 50 per cent lower. That was a very simple and obvious example -- just disable the cron daemon before you do your testing. A trickier example: networking performance can increase by 10 to 15% if you bind a NIC interrupt to a single CPU on a two- or four-way SMP box.
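
For illustration, here is a minimal Python sketch of those two tweaks. It makes a few assumptions: IRQ 16 for the NIC is a hypothetical placeholder (look up the real number in /proc/interrupts), "crond" is the RHEL-style service name, and the script has to run as root.

    #!/usr/bin/env python
    # Minimal sketch: prepare a box for benchmarking by stopping cron and
    # pinning the NIC interrupt to a single CPU.
    # Assumptions: IRQ 16 is a placeholder (check /proc/interrupts),
    # "crond" is the RHEL-style service name, and we run as root.
    import subprocess

    NIC_IRQ = 16      # hypothetical; look up the real number in /proc/interrupts
    CPU_MASK = "1"    # hex CPU bitmask: CPU 0 only

    def stop_cron():
        # keep slocate/logrotate and friends from running mid-benchmark
        subprocess.call(["service", "crond", "stop"])

    def pin_irq(irq, mask):
        # the kernel exposes per-IRQ CPU affinity as a writable bitmask
        with open("/proc/irq/%d/smp_affinity" % irq, "w") as f:
            f.write(mask + "\n")

    if __name__ == "__main__":
        stop_cron()
        pin_irq(NIC_IRQ, CPU_MASK)

Remember to start cron again afterwards, and note that the affinity setting is not persistent across reboots or driver reloads.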

So, my suggestion is to take those benchmarks and comparisons with a grain of salt. Better yet, do your own comparison using your hardware and your workloads -- and make sure you understand all the results. If something is slow -- find out why. If something is faster than it should be -- find out why, and find out what you did wrong. Perhaps this part -- results analysis -- is the most complex part of the performance testing field.

Having said that, I'd like to point out a Xen vs. OpenVZ comparison done by Björn Gross-Hohnacker, a German student whom I met at last year's LinuxWorld Cologne. Björn graciously allowed us to publish his results, so we have translated part of them into English.

Here is the bottom line summary: IPC and disk I/O performance is better (or much better) for OpenVZ than for Xen, CPU-intensive tasks perform about the same on both, and networking is a bit better in OpenVZ. Conclusion: for homogeneous (i.e. Linux-only) environments, OpenVZ is way better -- as it was designed to be.

You are taking this with a grain of salt, aren't you? ;)

Update: comments disabled due to spam

2.6.20 is here

I am happy to report that the OpenVZ project has made an initial port to Linux kernel 2.6.20. The resulting kernel is now available from the GIT repository.

This is a work in progress -- so far we have checked that the kernel compiles and passes some tests on x86; other architectures have not even been tried yet. So it is definitely not for the faint of heart.

Note that the OpenVZ versioning scheme was changed (or, rather, simplified) with this branch -- the branch number and the test/stab word were dropped, and now it is just 'ovzNNN' in the RPM release and kernel "extraversion" fields (where NNN is a three-digit number, starting from 001). So, what was meant to be 030test001 has become ovz001. Hope this will lead to less confusion.
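
To illustrate the change, here is a small Python sketch (the release strings below are just examples) that tells the old branch-plus-test/stab names apart from the new ovzNNN ones:

    #!/usr/bin/env python
    # Illustrative only: recognize old-style (e.g. 030test001, 028stab021)
    # versus new-style (e.g. ovz001) OpenVZ release strings.
    import re

    OLD = re.compile(r"^\d{3}(?:test|stab)\d{3}$")   # branch + test/stab word
    NEW = re.compile(r"^ovz(\d{3})$")                # simplified scheme

    for release in ("030test001", "028stab021", "ovz001"):
        m = NEW.match(release)
        if m:
            print("%s is new-style, OpenVZ release %s" % (release, m.group(1)))
        elif OLD.match(release):
            print("%s is old-style" % release)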

Binary and source RPMS will be released some time next week.

openvz and red hat releases day

Today is definitely the day of releases -- we've got three of those for you.

1. New vzctl, which adds I/O priority setting support, provides bug fixes in bash completion, vzctl destroy, and the man pages, and brings some other improvements along the way (a usage sketch follows after this list).

2. New devel kernel, containing some bugfixes in different subsystems, as well as per-VE I/O priority support. This kernel is getting quite stable, so it's called 028stab021 (it was 028test before).

3. The most important one: the first RHEL5-based OpenVZ kernel. This is pretty much the same as the latest devel kernel; the only difference is that it is based on the kernel from Red Hat Enterprise Linux 5, not on vanilla 2.6.18. Our internal testing shows this RHEL5 kernel is pretty good, so it will replace our current stable branch very soon. If you are currently using the stable OpenVZ kernel, it makes sense to try this one and report any bugs you find soon.
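
As a small usage sketch for the new I/O priority knob -- assuming the --ioprio option added in this vzctl release (priority range 0-7) and a hypothetical container ID 101 -- something like the following could be run as root on the hardware node:

    #!/usr/bin/env python
    # Minimal sketch: set a container's I/O priority with the new vzctl.
    # Assumptions: the --ioprio flag from this release (range 0-7), and
    # container ID 101 is hypothetical; run as root on the hardware node.
    import subprocess

    def set_ioprio(ctid, prio):
        # --save makes the setting persistent in the container config
        subprocess.check_call(
            ["vzctl", "set", str(ctid), "--ioprio", str(prio), "--save"])

    if __name__ == "__main__":
        set_ioprio(101, 6)   # give container 101 above-default I/O priority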

Oh, by the way, Red Hat Enterprise Linux 5 was also released just a few hours ago!

fun with polls and statistics

At the end of last year, we conducted a poll on the openvz.org web site. The poll was online for about four weeks, and more than 1300 people voted. While it is offline now, you can still see the results here.

The question was: "Which virtualization solutions are you using, or plan to use", and the top three answers were: VMware (580 votes), Xen (504 votes) and OpenVZ (502 votes). Those are the big guys. The medium guys are: Linux-VServer (165 votes), Virtuozzo (145 votes) and QEmu (148 votes). All the others are below the 5 percent barrier.

The results are not shocking. VMware is the clear leader, Xen is a recognizable name in virtualization, and OpenVZ is high because it's the OpenVZ site. QEmu, as well as Linux-VServer, is somewhat popular among Linux geeks.

About the same time, a German Linux portal ran a similar poll. The only difference was that they allowed only a single answer, while our poll allowed several. No, you don't have to know German to read the results. VMware accounts for 60% (perhaps because only a single option was allowed), Xen comes next with 15%, and OpenVZ is number three with 7%. I'm glad to see we are among the top three.

The fun thing in that poll is that something called "Virtual Server" is number four. Hmm... I find that name too generic -- it could be M*crosoft Virtual Server, or Linux-VServer, or something else.

Finally, I think it's fun to run a poll, so here's another one for you.

Poll #943152 containers vs. hypervisor

Compare hypervisor and containers (OS-level) virtualization. Which one is more important to be incorporated in the mainstream (vanilla) Linux kernel?

  • Both are important: 2 (50.0%)
  • Containers is more important: 2 (50.0%)
  • Hypervisor is more important: 0 (0.0%)
  • Neither is important: 0 (0.0%)

2006 contributions

My last post talked about the diligent work that goes into security and how many people contribute to that effort in the Linux community.

Well, now, it is time for me to acknowledge those many people in the user community who have contributed their talents to the OpenVZ project and helped make OpenVZ software better.

The list is a long one, and these people all deserve our collective thanks, so I created a "2006 contributions" article on the OpenVZ wiki. It is a wiki, so if you see that something or somebody is missing, feel free to add the information.

On behalf of the OpenVZ project, we are humbled and thank everybody who made OpenVZ better in one way or another.

Quite frequently, people ask me if OpenVZ is secure enough. They reason that because in OpenVZ everything is running under one single kernel (as opposed to, say, Xen or VMware, where each partition runs its own kernel), this single kernel is a single point of failure (SPOF).

The answer is: yes, the OpenVZ stable kernel is secure enough to be used for production workloads and in hostile environments. Why? The long answer involves a comparison of different virtualization techniques and their SPOFs, a description of the OpenVZ architecture, the "denied by default" principle, the fact that it is practically proven on thousands of servers, etc. The short answer is: because we care.

Security is quite a complex field. It's not enough to write secure code once, or to secure your system once. In the real world, security comes from constant care. In other words, it's not enough for a sysadmin to run a good, secure operating system if he doesn't take care of security daily.

The Linux kernel is quite secure. Still, new problems are found and resolved from time to time, by those people who care. Most of them are security experts (like Solar Designer), others just work on Linux.

A few days ago, Red Hat released a new update to the RHEL4 kernel (RHSA-2007-0014). Let me quote: "Red Hat would like to thank Dmitriy Monakhov and Konstantin Khorenko for reporting issues fixed in this erratum."

Both Dmitriy and Konstantin work in our Virtuozzo/OpenVZ team. Dmitriy works in the Quality Assurance department (which I wrote about before), making sure our kernels are rock-solid (by trying to break them badly, that is). Konstantin works in our kernel support team, mostly fixing the causes of kernel oopses. Besides that, as you see, they both care about security (as well as everybody else in our team). They find bugs (including security bugs), they report and fix them, and they send the results to major distribution vendors (and it's not the first time Red Hat has acknowledged our developers), as well as to mainstream Linux (again, I wrote about that before as well).

And this is how Linux wins: with all the parties contributing to everybody's benefit.

LinuxWorld Open Solutions Summit

The LinuxWorld Open Solutions Summit will be held Feb 14-15 in New York. I will be there with a talk titled "Linux Virtualization Technology Alternatives". It gives a short overview of the three main virtualization technologies -- namely emulation, paravirtualization, and OS-level virtualization -- outlining their pros and cons. The second part of the talk is devoted to OpenVZ, its components, features, and applications.

While in New York I'd be happy to meet with anyone using or interested in OpenVZ.

UML author on OpenVZ

There is a nice interview with Jeff Dike, author and maintainer of User-Mode Linux, published recently on linux.com. There are a couple of statements devoted to OpenVZ:

Dike also noted that SWsoft and XenSource are trying to get OpenVZ and Xen technology, respectively, into the mainline kernel, but says that's unlikely. <...> He also says that OpenVZ is unlikely to be adopted in the mainline kernel tree, at least as it is. Dike says that OpenVZ has to have "code sprinkled all over the place" to work, and it violates conventions within the kernel.

I'd like to comment on this.

1. Just to clarify, our goal is not to merge OpenVZ -- we want to have OS-level virtualization (a.k.a. containers, a.k.a. namespaces) in the mainline kernel. Some parts of that will be from OpenVZ, some from Eric Biederman, some from IBM developers, something from somebody else we don't know yet -- and that is what is happening now.

2. We certainly do not want to merge OpenVZ "as it is". The process goes like this: we pick up a piece of OpenVZ functionality, review it, port it to a recent -mm release, and submit patches for review. We then get some comments, take them into account, and release the next version of the same patch set (example: today Dmitry Mishin submitted the third iteration of the L2 network namespace patches). Sometimes somebody else picks up what we submit, reworks it, and sends it for review. After a few iterations, the code is ready to be merged.

3. It is not just SWsoft and the OpenVZ project who want to have OS-level virtualization in the mainline kernel. Different teams are working on it together, so it is a collaborative effort. So far, we have achieved some success.

4. Indeed, the OpenVZ changes affect the kernel code in multiple places, because OpenVZ adds a new and complex feature, which can be compared to, say, multitasking. Still, this code is easily broken down into several features/subsystems, so it is not one huge non-reviewable patch.

5. While the OpenVZ patch is big, compared with the size of changes between minor Linux kernel versions it is actually much smaller. For example, the patch from 2.6.18 to 2.6.19 is 7.7MB (in gzipped form), while the latest OpenVZ patch (028test010, against the same 2.6.18, also gzipped) is only 900KB.

Surely, there is a long way to go. Definitely, we are listening to everybody who would like to be involved. Hopefully, we'll get there.

different types of virtualization

As Scott said in the previous post, "more articles to come". He was quite right — here are a couple of articles, both covering the same subject.

My article in the Enterprise Open Source Magazine (print version) talks about different approaches to virtualization, covering emulation, paravirtualization, and OS-level virtualization. Hopefully, it will give people a better understanding of the different types of Linux virtualization technology. The article describes OpenVZ specifically, and provides examples of usage scenarios.

Another nice article on the same topic has been recently published on the IBM developerWorks site.
