

vzstats in beta

For the last two weeks or so we've been working on vzstats -- a way to get some statistics about OpenVZ usage. The system consists of a server, deployed to http://stats.openvz.org/, and clients installed onto OpenVZ machines (hardware nodes). This is currently in beta testing, with 71 servers participating at the moment. If you want to participate, read http://openvz.org/vzstats and run yum install vzstats on your OpenVZ boxes.

So far we have some interesting results. We are not sure how representative they are -- probably they aren't, as many more participating servers are needed -- but they are interesting nevertheless. Let's share a few preliminary findings.

First, it looks like almost no one is running 32-bit kernels on the host system anymore. This is reasonable and expected. Indeed, who needs a system limited to 4GB of RAM nowadays?

Second, many hosts stay on the latest stable RHEL6-based OpenVZ kernel. This is pretty good and above our expectations.

Third, very few run ploop-based containers. We don't understand why. Maybe we should write more about the features you get from ploop, such as instant snapshots and improved live migration.
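To give a taste of the snapshot feature: with a ploop-based container, a near-instant snapshot is a couple of vzctl commands. A minimal sketch, assuming vzctl 4.x snapshot subcommands; CTID 101 is a hypothetical container, and the run() helper only echoes each command unless VZ_RUN=1 is set, so the sequence can be read without an OpenVZ host:

```shell
# Instant snapshot of a ploop-based container (CTID 101 is hypothetical).
# run() echoes the command in dry-run mode (the default here) instead of
# executing it, so this sketch is safe to read or rehearse anywhere.
run() { if [ "${VZ_RUN:-0}" = 1 ]; then "$@"; else echo "+ $*"; fi; }

run vzctl snapshot 101 --name before-upgrade   # create snapshot, near-instant
run vzctl snapshot-list 101                    # list snapshots and their UUIDs

uuid="SNAPSHOT-UUID"   # placeholder: take the real value from snapshot-list
run vzctl snapshot-switch 101 --id "$uuid"     # roll back to the snapshot...
run vzctl snapshot-delete 101 --id "$uuid"     # ...or drop it when done
```

Because snapshots are copy-on-write at the ploop level, creating one does not copy the container's data.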

Comments

kyzia
Jun. 11th, 2013 11:59 am (UTC)
Thank you for the article about ploop! It's really helpful.

After reading it, I have some new questions:

1. If I migrate a production database server whose database uses a lot of RAM (for example, 15GB of active RAM): before switching the processes to the new place (with, as I understand, the vzmigrate --online command), the OpenVZ kernel must copy the active (and inactive?) RAM to the other machine? And during this operation the processes cannot run in either place, so the container on the old host is not working?

In this case, if vzmigrate could provide the same downtime as the old rsync way, it would be perfect.

Here is meminfo from one of our hosts, for example:

root@:~# cat /proc/meminfo
MemTotal: 22634676 kB
MemFree: 409080 kB
Buffers: 141496 kB
Cached: 10861520 kB
SwapCached: 21796 kB
Active: 15006620 kB
Inactive: 6718756 kB
Active(anon): 9677768 kB
Inactive(anon): 1059764 kB
Active(file): 5328852 kB
Inactive(file): 5658992 kB
Unevictable: 0 kB
........


2. If I remember correctly, VMware has a mechanism (called "quiesced I/O") which can reduce disk I/O usage while a container is being backed up. Does vzmigrate have the same mechanism? If we migrate a machine with a highly loaded disk subsystem, can it happen that blocks in the kernel's list have no time to be copied to the new place?

Thanks a lot for your articles and comments again.
k001
Jun. 11th, 2013 10:16 pm (UTC)
1. We don't have iterative memory migration in OpenVZ yet, but we do have it in Virtuozzo (== commercial OpenVZ). Moreover, we have it in CRIU (the future of OpenVZ checkpointing, to be used with 3.x kernels). See http://criu.org/Memory_changes_tracking and http://criu.org/Iterative_migration for more details if you are curious.

But yes, currently, while RAM is being migrated, the processes are in a frozen state.
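For the meminfo shown in the question, note that only anonymous memory actually has to be transferred; the file-backed page cache can simply be re-read from disk on the destination node. A back-of-the-envelope sketch using the numbers above (an editorial approximation, not the exact checkpoint image size):

```shell
# Rough estimate of migratable memory from the meminfo dump above.
# Anonymous pages must be copied; Cached (file-backed) pages need not be.
active_anon_kb=9677768
inactive_anon_kb=1059764
anon_mb=$(( (active_anon_kb + inactive_anon_kb) / 1024 ))
echo "~${anon_mb} MB (~10.2 GiB) of anonymous memory to transfer"
```

So despite "15GB active RAM", the copy is closer to 10 GiB, since much of the active set is page cache.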

a mechanism (called "quiesced I/O") which can reduce disk I/O usage while a container is being backed up


You can decrease CT I/O priority (vzctl set $CTID --ioprio NNN), but vzmigrate doesn't do that.
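For reference, vzctl's --ioprio takes values 0 (lowest) through 7 (highest). A small sketch; the set_ct_ioprio wrapper is my own hypothetical helper, not part of vzctl:

```shell
# Lower a container's I/O priority before a heavy copy, restore it after.
# set_ct_ioprio is a hypothetical convenience wrapper; it validates the
# priority range (0..7) before handing it to vzctl.
set_ct_ioprio() {
    ctid=$1; prio=$2
    case $prio in
        [0-7]) vzctl set "$ctid" --ioprio "$prio" --save ;;
        *) echo "ioprio must be 0..7, got: $prio" >&2; return 1 ;;
    esac
}
```

For example, set_ct_ioprio 101 2 before the migration and set_ct_ioprio 101 4 afterwards (4 is the default priority).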

If we migrate a machine with a highly loaded disk subsystem, can it happen that blocks in the kernel's list have no time to be copied to the new place?


No, this is impossible. We always copy everything, with a last round done while the CT is frozen.
kyzia
Jun. 13th, 2013 02:55 pm (UTC)
Thanks!

So (because CRIU with kernel 3.9 is, as I understand, not stable yet, and we use the non-commercial free OpenVZ), in our case rsyncing containers without ploop is still the best way to copy highly loaded containers.
