<?xml version="1.0" encoding="utf-8"?>
<!-- If you are running a bot please visit this policy page outlining rules you must respect. https://www.livejournal.com/bots/ -->
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:lj="https://www.livejournal.com">
  <id>urn:lj:livejournal.com:atom1:openvz</id>
  <title>OpenVZ</title>
  <subtitle>OpenVZ</subtitle>
  <author>
    <name>OpenVZ</name>
  </author>
  <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/"/>
  <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom"/>
  <updated>2019-02-12T03:43:42Z</updated>
  <lj:journal userid="9392309" username="openvz" type="community"/>
  <link rel="service.feed" type="application/x.atom+xml" href="https://openvz.livejournal.com/data/atom" title="OpenVZ"/>
  <entry>
    <id>urn:lj:livejournal.com:atom1:openvz:47780</id>
    <author>
      <name>Kir Kolyshkin</name>
    </author>
    <lj:poster user="k001" userid="990679"/>
    <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/47780.html"/>
    <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom/?itemid=47780"/>
    <title>ploop and live migration: 2 years later</title>
    <published>2014-04-04T02:45:52Z</published>
    <updated>2014-04-04T05:59:23Z</updated>
    <category term="ploop"/>
    <category term="optimization"/>
    <category term="openvz"/>
    <category term="vzctl"/>
    <category term="live migration"/>
<content type="html">It has been almost two years since we wrote about &lt;a href="http://openvz.livejournal.com/41835.html" target="_blank"&gt;effective live migration with ploop write tracker&lt;/a&gt;. It's time to write some more about it, since we have managed to make ploop live migration even more effective by means of some pretty simple optimizations. But let's not jump to the conclusion just yet; it's a long and interesting story to tell.

&lt;p&gt;As you know, live migration is not quite live, although it looks that way to a user. There is a short period, usually a few seconds, during which the container being migrated is frozen. This time (shown if the &lt;code&gt;-t&lt;/code&gt; or &lt;code&gt;-v&lt;/code&gt; option to &lt;code&gt;vzmigrate --live&lt;/code&gt; is used) is what needs to be optimized, making it as short as possible. In order to do that, one needs to dig into the details of what happens while a container is frozen.&lt;/p&gt;

&lt;p&gt;Typical timings obtained via &lt;code&gt;vzmigrate -t --live&lt;/code&gt; look like this. We ran a few iterations, migrating a container back and forth between two OpenVZ instances (running inside Parallels VMs on the same physical machine), hence the several columns on the right side.&lt;/p&gt;

&lt;pre&gt;
(Software: vzctl 4.6.1, ploop-1.10)

              Iteration     1     2     3     4     5     6    AVG

        Suspend + Dump:   0.58  0.71  0.64  0.64  0.91  0.74  0.703
   Pcopy after suspend:   1.06  0.71  0.50  0.68  1.07  1.29  0.885
        Copy dump file:   0.64  0.67  0.44  0.51  0.43  0.50  0.532
       Undump + Resume:   2.04  2.06  1.94  3.16  2.26  2.32  2.297
                        ------  ----  ----  ----  ----  ----  -----
  Total suspended time:   4.33  4.16  3.54  5.01  4.68  4.85  4.428
&lt;/pre&gt;
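For the record, the AVG column is just the arithmetic mean over the six runs; for example, for the "Suspend + Dump" row:

```python
# The six "Suspend + Dump" timings from the table above
suspend_dump = [0.58, 0.71, 0.64, 0.64, 0.91, 0.74]
avg = sum(suspend_dump) / len(suspend_dump)
print(round(avg, 3))  # 0.703, matching the AVG column
```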

&lt;p&gt;Apparently, the first suspect to look at is "Undump + Resume". Basically, it shows the timing of the vzctl restore command. Why is it so slow? It turns out that ploop mount takes a noticeable amount of time. Let's dig deeper into that process.&lt;/p&gt;

&lt;p&gt;First, &lt;a href="http://git.openvz.org/?p=ploop;a=commit;h=c17b5ab8f" target="_blank" rel="nofollow"&gt;implement timestamps in ploop messages&lt;/a&gt;, raise the log level, and see what is going on. Apparently, adding deltas is not instant, taking anywhere from 0.1 seconds to almost a full second. After some more experiments and thinking, it becomes obvious that since the ploop kernel driver works with data in delta files directly, bypassing the page cache, it needs to force those files to be written to disk, and this costly operation happens while the container is frozen. Is it possible to do it earlier? Sure, we just need to force-write the deltas we have just copied before suspending the container. Easy: just call fsync(), or better yet fdatasync(), since we don't really care about metadata being written.&lt;/p&gt;

&lt;p&gt;Unfortunately, there is no command line tool to do fsync or fdatasync, so we had to &lt;a href="http://git.openvz.org/?p=vzctl;a=commitdiff;h=842ae637" target="_blank" rel="nofollow"&gt;write one&lt;/a&gt; and call it from vzmigrate. Is it any better now? Yes indeed: delta adding times went down from tenths to hundredths of a second.&lt;/p&gt;
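A minimal Python stand-in for such a helper (the real vzctl tool is written in C) could look like this:

```python
import os
import sys

# Force the data of each file named on the command line to stable
# storage, like fdatasync(2).  fdatasync skips flushing metadata such
# as mtime, which is why it is a bit faster than a full fsync.
_sync = getattr(os, "fdatasync", os.fsync)  # fdatasync is not available on every OS

def force_to_disk(path):
    """Open path read-only and flush its data blocks to disk."""
    fd = os.open(path, os.O_RDONLY)
    try:
        _sync(fd)
    finally:
        os.close(fd)

if __name__ == "__main__":
    for p in sys.argv[1:]:
        force_to_disk(p)
```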

&lt;p&gt;Except for the top delta, of course, which we migrate using ploop copy. Surely we can't fsync it before suspending the container, because we keep copying it afterwards. Oh wait... actually we can! By adding an fsync before CT suspend, we force the data to be written to disk, so the second fsync (which happens after everything is copied) takes less time. This time is shown as "Pcopy after suspend".&lt;/p&gt;

&lt;p&gt;The problem is that ploop copy consists of two sides -- the sending one and the receiving one -- which communicate over a pipe (with ssh as a transport). It's the sending side which runs the command to freeze the container, and it's the receiving side which should do the fsync, so we need to pass some sort of "do the fsync" command. Better yet, do it without breaking the existing protocol, so nothing bad happens in case there is an older version of ploop on the receiving side.&lt;/p&gt;

&lt;p&gt;The "do the fsync" command ended up being a data block of 4 bytes, you can &lt;a href="http://git.openvz.org/?p=ploop;a=commitdiff;h=d56654a2a8" target="_blank" rel="nofollow"&gt;see the patch here&lt;/a&gt;. Older version will write these 4 bytes to disk, which is unnecessary but OK do to, and newer version will recognize it as a need to do fsync.&lt;/p&gt;

&lt;p&gt;The other problem is that the sending side has to wait for the fsync to finish before proceeding with CT suspend. Unfortunately, there is no way to solve this with a one-way pipe, so the sending side just waits for a few seconds. Ugly as it is, this is the best way we could find (let us know if you can think of something better).&lt;/p&gt;

&lt;p&gt;To summarize, what we have added is a couple of fsyncs (actually fdatasync(), since it is faster), and here are the results:&lt;/p&gt;
&lt;pre&gt;
(Software: vzctl 4.7, ploop-1.11)

              Iteration     1     2     3     4     5     6    AVG

        Suspend + Dump:   0.60  0.60  0.57  0.74  0.59  0.80  0.650
   Pcopy after suspend:   0.41  0.45  0.45  0.49  0.40  0.42  0.437 (-0.4)
        Copy dump file:   0.46  0.44  0.43  0.48  0.47  0.52  0.467
       Undump + Resume:   1.86  1.75  1.67  1.91  1.87  1.84  1.817 (-0.5)
                        ------  ----  ----  ----  ----  ----  -----
  Total suspended time:   3.33  3.24  3.12  3.63  3.35  3.59  3.377 (-1.1)
&lt;/pre&gt;

&lt;p&gt;As you can see, both "Pcopy after suspend" and "Undump + Resume" times decreased, shaving off about a second, which gives us roughly a 25% improvement. Now, taking into account that the tests were done on otherwise idle nodes with mostly idle containers, we suspect the benefit will be even more apparent under I/O load. Checking whether that is true will be your homework for today!&lt;/p&gt;

</content>
  </entry>
  <entry>
    <id>urn:lj:livejournal.com:atom1:openvz:46181</id>
    <author>
      <name>Kir Kolyshkin</name>
    </author>
    <lj:poster user="k001" userid="990679"/>
    <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/46181.html"/>
    <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom/?itemid=46181"/>
    <title>vzctl 4.6</title>
    <published>2013-11-07T18:42:26Z</published>
    <updated>2013-11-07T18:44:13Z</updated>
    <category term="donation"/>
    <category term="openvz"/>
    <category term="vzctl"/>
    <category term="support"/>
<content type="html">The vzctl 4.6 build hit download.openvz.org (and its many &lt;a href="http://openvz.org/Download_mirrors" target="_blank" rel="nofollow"&gt;mirrors&lt;/a&gt; around the world) last week. Let's see what's in store, shall we?&lt;br /&gt;&lt;br /&gt;First and foremost, &lt;a href="http://openvz.livejournal.com/45831.html" target="_blank"&gt;&lt;b&gt;I/O limits&lt;/b&gt;, but the feature is already described in great detail&lt;/a&gt;. What I want to add is that the feature was sponsored by GleSYS Internet Services AB, and is one of the first results of the &lt;a href="http://www.parallels.com/support/virtualization-suite/openvz/" target="_blank" rel="nofollow"&gt;OpenVZ partnership&lt;/a&gt; program in action. This program is a great opportunity for you to help keep the project up and running, and also to experience our expert support service in case you ever need it. Think of it as a two-way support option. Anyway, I digress. What was it about? Oh yes, vzctl.&lt;br /&gt;&lt;br /&gt;Second, &lt;b&gt;improvements to UBC settings in VSwap mode&lt;/b&gt;. Previously, if you set RAM and swap, all other UBC parameters not set explicitly were set to unlimited. Now they are simply left unset (meaning that the default in-kernel setting is used, whatever it is). Plus, in addition to physpages and swappages, vzctl now sets lockedpages and oomguarpages (to RAM), and vmguarpages (to RAM+swap).&lt;br /&gt;&lt;br /&gt;Plus, there is a new parameter, vm_overcommit, which works in the following way: if set, it is used as a multiplier to ram+swap to set privvmpages. In layman's terms, this is the ratio of virtual memory (privvmpages) to real memory (ram+swap). Again, physpages limits RAM, and physpages+swappages limits real memory &lt;b&gt;used&lt;/b&gt; by a container. On the other hand, privvmpages limits memory &lt;b&gt;allocated&lt;/b&gt; by a container. 
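The vm_overcommit arithmetic can be sketched in a few lines (a hypothetical helper, not part of vzctl; GB units are used for readability, while the kernel counts pages):

```python
# privvmpages (allocatable virtual memory) implied by vm_overcommit:
# limit = (RAM + swap) * vm_overcommit
def privvmpages_limit_gb(ram_gb, swap_gb, vm_overcommit):
    return (ram_gb + swap_gb) * vm_overcommit

# 2 GB RAM, 4 GB swap, overcommit factor 3
print(privvmpages_limit_gb(2, 4, 3))  # 18 (GB of allocatable memory)
```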
While it depends on the application, generally not all allocated memory is used -- sometimes allocated memory is 5 or 10 times more than used memory. What vm_overcommit gives is a way to set this gap. For example, the command&lt;br /&gt;&lt;br /&gt;&lt;code&gt;vzctl set $CTID --ram 2G --swap 4G --vm_overcommit 3 --save&lt;/code&gt;&lt;br /&gt;&lt;br /&gt;tells OpenVZ to limit container $CTID to 2 GB of RAM, 4 GB of swap, and (2+4)*3, i.e. 18 GB, of virtual memory. That means this container can allocate up to 18 GB of memory, but can't use more than 6 GB. So, vm_overcommit is just a way to set privvmpages implicitly, as a function of physpages and swappages. Oh, and if you are lost in all those *pages, we have extensive documentation at &lt;a target='_blank' href='http://openvz.org/UBC' rel='nofollow'&gt;http://openvz.org/UBC&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;A first version of the &lt;b&gt;vzoversell&lt;/b&gt; utility has been added. This is a proposed vzmemcheck replacement for VSwap mode. Currently it just sums up the RAM and swap limits of all VSwap containers and compares the totals to the RAM and swap available on the host. Surely you can oversell RAM (as long as you have enough swap), but the sum of all RAM+swap limits should not exceed RAM+swap on the node, and the main purpose of this utility is to check that constraint.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;vztmpl-dl&lt;/b&gt; got a new --list-orphans option. It lists all local templates that are not available from the download server(s) (and therefore can't be updated by vztmpl-dl). Oh, by the way, since vzctl 4.5 you can use &lt;code&gt;vztmpl-dl --update-all&lt;/code&gt; to refresh all templates (i.e. download an updated template if a newer version is available from a download server). For more details, see the &lt;a href="http://openvz.org/Man/vztmpl-dl.8" target="_blank" rel="nofollow"&gt;vztmpl-dl(8) man page&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;vzubc&lt;/b&gt; got some love, too. 
It now skips unlimited UBCs by default, in order to improve the signal-to-noise ratio. If you want the old behavior (i.e. all UBCs), use the -v flag.&lt;br /&gt;&lt;br /&gt;Surely, there's a bunch of other fixes and improvements; please &lt;a href="http://openvz.org/Download/vzctl/4.6" target="_blank" rel="nofollow"&gt;read the changelog&lt;/a&gt; if you want to know it all. One thing in particular is worth mentioning: a &lt;b&gt;hack to vzctl console&lt;/b&gt;. As you might know, in OpenVZ a container's console is sort of eternal, meaning you can attach to it before a container is even started, and it keeps its state even if you detach from it. That creates a minor problem, though -- if someone runs, say, vim in the console, then detaches and reattaches, vim does not redraw anything and the console shows nothing. To work around this, one needs to press Ctrl-L (it is also recognized by bash and other software). But it's a bit inconvenient to do that every time after reattaching. This is not required, though, if the terminal size has changed (i.e. you detach from the console, change your xterm size, then run vzctl console again), because in that case vim notices the change and redraws accordingly. So what vzctl now does after reattach is tell the underlying terminal its size twice -- first a wrong size (with the number of rows incremented), then the right size (that of the terminal vzctl is running in). This forces vim (or whatever is running on the container console) to redraw.&lt;br /&gt;&lt;br /&gt;Finally, the new vzctl (as well as the other utilities) is now in our &lt;b&gt;Debian wheezy repo&lt;/b&gt; at &lt;a target='_blank' href='http://download.openvz.org/debian' rel='nofollow'&gt;http://download.openvz.org/debian&lt;/a&gt;, so Debian users are now on par with those using RPM-based distros, and can have the latest and greatest stuff as soon as it comes out. 
Same as &lt;a href="http://openvz.livejournal.com/45345.html" target="_blank"&gt;we did with kernels&lt;/a&gt; some time ago.&lt;br /&gt;&lt;br /&gt;Enjoy, and don't forget to report bugs to &lt;a target='_blank' href='http://bugzilla.openvz.org/' rel='nofollow'&gt;http://bugzilla.openvz.org/&lt;/a&gt;</content>
  </entry>
  <entry>
    <id>urn:lj:livejournal.com:atom1:openvz:45831</id>
    <author>
      <name>Kir Kolyshkin</name>
    </author>
    <lj:poster user="k001" userid="990679"/>
    <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/45831.html"/>
    <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom/?itemid=45831"/>
    <title>Yay to I/O limits!</title>
    <published>2013-10-30T03:12:10Z</published>
    <updated>2013-10-30T03:15:34Z</updated>
    <category term="kernel"/>
    <category term="openvz"/>
    <category term="iolimit"/>
    <category term="vzctl"/>
    <category term="iopslimit"/>
    <content type="html">&lt;p&gt;Today we are releasing a somewhat small but very important OpenVZ feature: &lt;b&gt;per-container disk I/O bandwidth and &lt;a href="http://en.wikipedia.org/wiki/IOPS" target="_blank" rel="nofollow"&gt;IOPS&lt;/a&gt; limiting.&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;OpenVZ has had an I/O priority feature for a while, which lets one set a per-container I/O priority -- a number from 0 to 7. It works in such a way that if two similar containers with similar I/O patterns but different I/O priorities run on the same system, the container with a priority of 0 (lowest) will get about 2-3 times less I/O bandwidth than the container with a priority of 7 (highest). This works for some scenarios, but not all.&lt;/p&gt;

&lt;p&gt;So, I/O bandwidth limiting was introduced in &lt;a href="http://www.parallels.com/products/pcs/" target="_blank" rel="nofollow"&gt;Parallels Cloud Server&lt;/a&gt;, and as of today it is available in OpenVZ as well. Using the feature is very easy: you set a limit for a container (in megabytes per second) and watch it obey the limit. For example, here I first try doing I/O without any limit set:&lt;/p&gt;

&lt;pre&gt;
root@host# vzctl enter 777
root@CT:/# cat /dev/urandom | pv -c - &amp;gt;/bigfile
 88MB 0:00:10 [8.26MB/s] [         &amp;lt;=&amp;gt;      ]
^C
&lt;/pre&gt;

&lt;p&gt;Now let's set the I/O limit to 3 MB/s:&lt;/p&gt;

&lt;pre&gt;
root@host# vzctl set 777 --iolimit 3M --save
UB limits were set successfully
Setting iolimit: 3145728 bytes/sec
CT configuration saved to /etc/vz/conf/777.conf
root@host# vzctl enter 777
root@CT:/# cat /dev/urandom | pv -c - &amp;gt;/bigfile3
39.1MB 0:00:10 [   3MB/s] [         &amp;lt;=&amp;gt;     ]
^C
&lt;/pre&gt;
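Note how the 3M from the command line is echoed back as 3145728 bytes/sec: the suffix is parsed as binary megabytes, i.e. 3*1024*1024:

```python
# "3M" on the vzctl command line means binary megabytes
limit_bytes_per_sec = 3 * 1024 * 1024
print(limit_bytes_per_sec)  # 3145728, as echoed by vzctl above
```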

&lt;p&gt;If you run it yourself, you'll notice a spike of speed at the beginning, before it settles down to the limit. This is the so-called burstable limit at work: it allows a container to exceed its limit (up to 3x) for a short time.&lt;/p&gt;
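The post doesn't describe the in-kernel mechanics, but one plausible mental model (an assumption, not the actual implementation) is a token bucket deep enough to allow roughly 3x the steady rate for a short while:

```python
class TokenBucket:
    """Rough model of a burstable I/O limit: a steady `rate` in
    bytes/sec, with a bucket deep enough to allow a short burst
    above it (here, about burst_factor times the rate for
    burst_seconds)."""
    def __init__(self, rate, burst_factor=3.0, burst_seconds=1.0):
        self.rate = rate
        self.capacity = rate * burst_factor * burst_seconds
        self.tokens = self.capacity  # start full: the first writes can burst
        self.last = 0.0

    def allow(self, nbytes, now):
        # refill proportionally to elapsed time, capped at the bucket depth
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False
```

Once the bucket drains, throughput is pinned to the refill rate, which matches the observed behavior: an initial spike, then a steady 3 MB/s.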

&lt;p&gt;In the example above we tested writes. Reads work the same way, except when the data being read are in fact coming from the page cache (such as when you read the file you have just written). In that case no actual I/O is performed, and therefore no limiting happens.&lt;/p&gt;

&lt;p&gt;The second feature is an &lt;a href="http://en.wikipedia.org/wiki/IOPS" target="_blank" rel="nofollow"&gt;I/O operations per second, or just IOPS&lt;/a&gt;, limit. For more info on what IOPS is, go read the linked Wikipedia article -- all I can say here is that for traditional rotating disks the hardware capabilities are pretty limited (75 to 150 IOPS is a good guess, or 200 if you have high-end server-class HDDs), while for SSDs this is much less of a problem. The IOPS limit is set in the same way as iolimit (&lt;code&gt;vzctl set $CTID --iopslimit NN --save&lt;/code&gt;), although measuring its impact is more tricky.&lt;/p&gt;
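To get a feel for those numbers, some back-of-the-envelope arithmetic using the figures quoted above (illustrative values, not a benchmark):

```python
# For a purely random workload, wall time is roughly operations / IOPS.
def seconds_for_random_ops(num_ops, iops):
    return num_ops / iops

# e.g. 10,000 random reads on a ~100 IOPS rotating disk
print(seconds_for_random_ops(10_000, 100))  # 100.0 seconds
```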

&lt;p&gt;Finally, to play with this stuff, you need:&lt;/p&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="https://openvz.org/Download/vzctl" target="_blank" rel="nofollow"&gt;vzctl&lt;/a&gt; 4.6 (or higher)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://openvz.org/Download/kernel/rhel6-testing" target="_blank" rel="nofollow"&gt;Kernel&lt;/a&gt; 042stab084.3 (or higher)&lt;/li&gt;
&lt;/ul&gt;

Note that the kernel with this feature is currently still in testing -- so if you haven't done so, it's time to read about &lt;a href="http://openvz.livejournal.com/45010.html" target="_blank"&gt;testing kernels&lt;/a&gt;.</content>
  </entry>
  <entry>
    <id>urn:lj:livejournal.com:atom1:openvz:44738</id>
    <author>
      <name>Kir Kolyshkin</name>
    </author>
    <lj:poster user="k001" userid="990679"/>
    <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/44738.html"/>
    <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom/?itemid=44738"/>
    <title>vzctl 4.4</title>
    <published>2013-07-19T00:15:36Z</published>
    <updated>2013-07-19T00:15:36Z</updated>
    <category term="openvz"/>
    <category term="vzctl"/>
<content type="html">A shiny new vzctl 4.4 was released just today. Let's take a look at its new features.&lt;br /&gt;&lt;br /&gt;As you know, vzctl has been able to download OS templates automatically for quite some time now, whenever vzctl create --ostemplate is used with a template that is not available locally. Now, we have just moved this script to the standard /usr/sbin place and added a corresponding &lt;a href="http://openvz.org/Man/vztmpl-dl.8" target="_blank" rel="nofollow"&gt;vztmpl-dl(8)&lt;/a&gt; man page. Note you can use the script to update your existing templates as well.&lt;br /&gt;&lt;br /&gt;The next few features are targeted at making OpenVZ more hassle-free. Specifically, this release adds a post-install script to configure some system aspects, such as changing some parameters in /etc/sysctl.conf and disabling SELinux. This is something that had to be done manually before, and so it was described in the OpenVZ installation guide. Now, it's just one less manual step, and one less paragraph in the &lt;a href="https://openvz.org/Quick_installation" target="_blank" rel="nofollow"&gt;Quick installation&lt;/a&gt; guide.&lt;br /&gt;&lt;br /&gt;Another "make it easier" feature is automatic nameserver propagation from the host to the container. Before vzctl 4.4 there was a need to set a nameserver for each container in order for DNS to work inside it. So the usual routine was to check your host's /etc/resolv.conf, find out what the nameservers are, and set them using something like &lt;code&gt;vzctl set $CTID --nameserver x.x.x.x --nameserver y.y.y.y --save&lt;/code&gt;. Now a special value of "inherit" can be used instead of a real nameserver IP address to instruct vzctl to take the IPs from the host's /etc/resolv.conf and apply them to a container. 
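Under the hood, "inherit" boils down to reading the host's /etc/resolv.conf. A minimal sketch of that lookup (an illustration in Python; vzctl itself is written in C):

```python
def inherit_nameservers(resolv_conf_text):
    """Collect nameserver IPs from resolv.conf-style text, the way
    NAMESERVER=inherit picks them up from the host."""
    servers = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers
```

Applied to a real host, the input would come from `open("/etc/resolv.conf").read()`.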
The same applies to the --searchdomain option / SEARCHDOMAIN config parameter.&lt;br /&gt;&lt;br /&gt;Now, since the defaults for most container parameters can be set in the global OpenVZ configuration file /etc/vz/vz.conf, if it contains a line like &lt;code&gt;NAMESERVER=inherit&lt;/code&gt;, this becomes the default for all containers that don't have a nameserver set explicitly. Yes, we added this line to /etc/vz/vz.conf with this release, meaning all containers with non-configured nameservers will automatically get those from the host. If you don't like this feature, remove the &lt;code&gt;NAMESERVER=&lt;/code&gt; line from /etc/vz/vz.conf.&lt;br /&gt;&lt;br /&gt;Another small new feature is ploop-related. When you start (or mount) a ploop-based container, fsck for its inner filesystem is executed. This mimics the way a real server works -- it runs fsck on boot. Now, there is a 1/30 or so probability that fsck will actually perform a filesystem check (it does that every Nth mount, where N is about 30 and can be changed with tune2fs). For a large container, fsck can be a long operation, so when we start containers on boot from the /etc/init.d/vz initscript, we skip such a check so as not to delay container start-up. This is implemented as a new &lt;code&gt;--skip-fsck&lt;/code&gt; option to vzctl start.&lt;br /&gt;&lt;br /&gt;Thanks to our user and contributor Mario Kleinsasser, vzmigrate is now able to migrate containers between boxes with different VE_ROOT/VE_PRIVATE values. For example, if one server runs Debian with /var/lib/vz and another runs CentOS with /vz, vzmigrate is smart enough to notice that and do the proper conversion. Thank you, Mario!&lt;br /&gt;&lt;br /&gt;Another vzmigrate enhancement is the -f/--nodeps option, which can be used to disable some pre-migration checks. For example, in the case of live migration, the destination CPU capabilities (such as SSE3) are cross-checked against those of the source server, and if some caps are missing, migration is not performed. 
In fact, not many applications are optimized to use all CPU capabilities, so there are moderate chances that live migration can still be done. The --nodeps option is exactly for such cases -- i.e. you can use it if you know what you are doing.&lt;br /&gt;&lt;br /&gt;This is more or less it regarding new features. Oh, it makes sense to note that the default OS template is now centos-6-x86, and the NEIGHBOR_DEVS parameter is commented out by default, because this increases the chances that container networking will work "as is".&lt;br /&gt;&lt;br /&gt;Fixes? There are a few -- to vzmigrate, vzlist, vzctl convert, vzctl working on top of the upstream kernel (including some fixes for &lt;a href="http://criu.org/" target="_blank" rel="nofollow"&gt;CRIU&lt;/a&gt;-based checkpointing), and the build system. Documentation (those &lt;a href="http://openvz.org/Man" target="_blank" rel="nofollow"&gt;man pages&lt;/a&gt;) is updated to reflect all the new options and changes.&lt;br /&gt;&lt;br /&gt;The list of contributors to this vzctl release is quite impressive, too -- more than 10 people.&lt;br /&gt;&lt;br /&gt;As always, if you find a bug in vzctl, please report it to &lt;a href="http://bugzilla.openvz.org/" target="_blank" rel="nofollow"&gt;bugzilla.openvz.org&lt;/a&gt;.</content>
  </entry>
  <entry>
    <id>urn:lj:livejournal.com:atom1:openvz:43060</id>
    <author>
      <name>Kir Kolyshkin</name>
    </author>
    <lj:poster user="k001" userid="990679"/>
    <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/43060.html"/>
    <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom/?itemid=43060"/>
    <title>vzctl / ploop update</title>
    <published>2012-10-16T08:15:31Z</published>
    <updated>2013-01-10T19:17:09Z</updated>
    <category term="yum"/>
    <category term="ploop"/>
    <category term="openvz"/>
    <category term="vzctl"/>
    <category term="rpm"/>
    <content type="html">&lt;p&gt;We have updated vzctl, ploop and vzquota recently (I wrote about vzctl &lt;a href="http://openvz.livejournal.com/42793.html" target="_blank"&gt;here&lt;/a&gt;). Some changes in packaging are tricky, so let me explain why and give some hints.&lt;/p&gt;

&lt;h3&gt;For RHEL5-based kernel users (i.e. ovzkernel-2.6.18-028stabXXX) and earlier kernels&lt;/h3&gt;

&lt;p&gt;Since ploop is only supported in the RHEL6 kernel, we have removed the ploop dependency from vzctl-4.0 (the ploop library is loaded dynamically, when needed and if available). Since you have an earlier vzctl version installed, you also have ploop installed. Now you can remove it, at the same time upgrading to vzctl-4.0. That "at the same time" part is done via yum shell:&lt;/p&gt;

&lt;pre&gt;# yum shell
&amp;gt; remove ploop ploop-lib
&amp;gt; update vzctl
&amp;gt; run&lt;/pre&gt;

&lt;p&gt;That should fix it. In the meantime, think about upgrading your systems to a RHEL6-based kernel, which is better in terms of performance, features, and speed of development.&lt;/p&gt;

&lt;h3&gt;For RHEL6-based users (i.e. vzkernel-2.6.32-042stabXXX)&lt;/h3&gt;

&lt;p&gt;The new ploop library (1.5) requires a very recent RHEL6-based kernel (version 2.6.32-042stab061.1 or later) and is not supposed to work with earlier kernels. To protect ploop from earlier kernels, its packaging says "Conflicts: vzkernel &amp;lt; 2.6.32-042stab061.1", which usually prevents ploop 1.5 from being installed on systems having those kernels.&lt;/p&gt;

&lt;p&gt;To fix this conflict, make sure you run the latest kernel, and then remove the old ones:&lt;/p&gt;

&lt;code&gt;# yum remove "vzkernel &amp;lt; 2.6.32-042stab061.1"&lt;/code&gt;

&lt;p&gt;Then you can run update as usual:&lt;/p&gt;
&lt;code&gt;# yum update&lt;/code&gt;

&lt;p&gt;Hope that helps&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Update:&lt;/b&gt; comments disabled due to spam.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>urn:lj:livejournal.com:atom1:openvz:42793</id>
    <author>
      <name>Kir Kolyshkin</name>
    </author>
    <lj:poster user="k001" userid="990679"/>
    <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/42793.html"/>
    <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom/?itemid=42793"/>
    <title>OpenVZ turns 7, gifts are available!</title>
    <published>2012-10-06T09:31:53Z</published>
    <updated>2012-12-03T10:14:11Z</updated>
    <category term="kernel"/>
    <category term="openvz"/>
    <category term="vzctl"/>
    <category term="criu"/>
    <category term="crtools"/>
    <content type="html">&lt;p&gt;&lt;b&gt;OpenVZ project is 7 years old&lt;/b&gt; as of last month. It's hard to believe the number, but looking back, we've done a lot of things together with you, our users.&lt;/p&gt;

&lt;p&gt;One of the main project goals was (and still is) to get containers support included upstream, i.e. into the vanilla Linux kernel. In practice, the OpenVZ kernel is a fork of the Linux kernel, and we don't like it that way, for a number of reasons. The main ones are:&lt;/p&gt;

&lt;p&gt;&lt;ul&gt;&lt;li&gt;We want everyone to benefit from containers, not just those using the OpenVZ kernel. Yes to world domination!&lt;/li&gt;
&lt;li&gt;We'd like to concentrate on new features, improvements and bug fixes, rather than forward porting our changes to the next kernel.&lt;/li&gt;&lt;/ul&gt;&lt;/p&gt;

&lt;p&gt;So, we were (and still are) working hard to bring in-kernel containers support upstream, and many key pieces are already there in the kernel -- for example, PID and network namespaces, cgroups and memory controller. This is the functionality that lxc tool and libvirt library are using. We also use the features we merged into upstream, so with every new kernel branch we have to port less, and the size of our patch set decreases.&lt;/p&gt;

&lt;h3&gt;CRIU&lt;/h3&gt;

&lt;p&gt;One such feature targeted for upstream is checkpoint/restore: the ability to save a running container's state and then restore it. The main use of this feature is live migration, but there are other &lt;a href="http://criu.org/Usage_scenarios" target="_blank" rel="nofollow"&gt;usage scenarios as well&lt;/a&gt;. While the feature has been present in the OpenVZ kernel since April 2006, it was never accepted into the upstream Linux kernel (nor was the other implementation proposed by Oren Laadan).&lt;/p&gt;

&lt;p&gt;For the last year we have been working on the &lt;a href="http://criu.org/" target="_blank" rel="nofollow"&gt;CRIU&lt;/a&gt; project, which aims to reimplement most of the checkpoint/restore functionality in userspace, with bits of kernel support where required. As of now, most of the additional kernel patches needed for CRIU are already in kernel 3.6, and a few more are on their way into 3.7 or 3.8. Speaking of the CRIU tools, they are currently at version 0.2, released on the 20th of September, which already has limited support for checkpointing and restoring an upstream container. Check &lt;a href="http://criu.org/" target="_blank" rel="nofollow"&gt;criu.org&lt;/a&gt; for more details, and give it a try. Note that this project is not only for containers -- you can checkpoint any process tree -- it's just that a container works better because it is clearly separated from the rest of the system.&lt;/p&gt;

&lt;p&gt;One of the most important things about CRIU is that we are NOT developing it behind closed doors. As usual, we have a wiki and git, but most importantly, every patch goes through the &lt;a href="http://openvz.org/pipermail/criu/" target="_blank" rel="nofollow"&gt;public mailing list&lt;/a&gt;, so everyone can join the fun.&lt;/p&gt;

&lt;h3&gt;vzctl for upstream kernel&lt;/h3&gt;

&lt;p&gt;We have also released vzctl 4.0 recently (on the 25th of September). As you can see by the number, it is a major release, and the main feature is support for non-OpenVZ kernels. Yes, it's true -- now you can get a feel for OpenVZ without installing the OpenVZ kernel. Any recent 3.x kernel should work.&lt;/p&gt;

&lt;p&gt;As with the OpenVZ kernel, you can use the ready-made container images we provide for OpenVZ (so-called "OS templates") or employ your own. You can create, start, stop, and delete containers, and set various resource parameters such as RAM and CPU limits. Networking (aside from route-based) is also supported -- you can either move a network interface from the host system into a container (&lt;code&gt;--netdev_add&lt;/code&gt;) or use a bridged setup (&lt;code&gt;--netif_add&lt;/code&gt;). I personally run this stuff on my Fedora 17 desktop using the stock F17 kernel -- it just works!&lt;/p&gt;

&lt;p&gt;Having said all that, the OpenVZ kernel is surely in much better shape as far as container support goes -- it has more features (such as live container snapshots and live migration), better resource management capabilities, and is overall more stable and secure. But the fact that the kernel is now optional makes the whole thing more appealing (or so I hope).&lt;/p&gt;

&lt;p&gt;You can find information on how to set up and start using vzctl at the &lt;a href="http://wiki.openvz.org/Vzctl_for_upstream_kernel" target="_blank" rel="nofollow"&gt;vzctl for upstream kernel&lt;/a&gt; wiki page. The page also lists known limitations and pointers to other resources. I definitely recommend you give it a try and share your experience! As usual, any bugs found should be reported to &lt;a href="http://bugzilla.openvz.org/" target="_blank" rel="nofollow"&gt;OpenVZ bugzilla&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Update:&lt;/b&gt; comments disabled due to spam&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>urn:lj:livejournal.com:atom1:openvz:42146</id>
    <author>
      <name>Kir Kolyshkin</name>
    </author>
    <lj:poster user="k001" userid="990679"/>
    <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/42146.html"/>
    <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom/?itemid=42146"/>
    <title>vzubc: better cat /proc/user_beancounters</title>
    <published>2012-06-29T12:25:25Z</published>
    <updated>2019-02-12T03:43:42Z</updated>
    <category term="user beancounters"/>
    <category term="vzctl"/>
    <category term="vzubc"/>
    <category term="cli"/>
<content type="html">Managing user beancounters is not for the faint of heart, I must say. One has to read through &lt;a href="http://wiki.openvz.org/UBC" target="_blank" rel="nofollow"&gt;a lot of documentation&lt;/a&gt; and understand all this stuff pretty well. Despite the fact that we recently made a great improvement, a feature called &lt;a href="http://wiki.openvz.org/VSwap" target="_blank" rel="nofollow"&gt;VSwap&lt;/a&gt;, many people still rely on traditional beancounters.&lt;br /&gt;&lt;br /&gt;This post is about a utility I initially wrote for myself in May 2011. I have since added it to vzctl; it is available starting with vzctl 3.0.27. Simply speaking, it is just a replacement for cat /proc/user_beancounters -- and of course it can do much more than cat.&lt;br /&gt;&lt;br /&gt;Here&amp;#39;s a brief list of vzubc features:&lt;ol&gt;&lt;br /&gt;&lt;li&gt;Shows human-readable held, maxheld, barrier, limit, and fail counter for every beancounter, fitting into a standard 80-column terminal (unlike /proc/user_beancounters on an x86_64 system)&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Values that are in pages (such as physpages) are converted to bytes&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Long values are then converted to kilo-, mega-, gigabytes etc., similar to the -h flag of ls or df&lt;/li&gt;&lt;br /&gt;&lt;li&gt;For held and maxheld, it shows how close the value is to the barrier and the limit, in per cent&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Can be used both inside a CT and on the HN&lt;/li&gt;&lt;br /&gt;&lt;li&gt;User can specify CTIDs or CT names to output info about specific containers only&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Optional top-like autoupdate mode (internally using the &amp;quot;watch&amp;quot; utility)&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Optional &amp;quot;relative failcnt&amp;quot; mode (shows the increase in UBC fail counters since the last run)&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Optional quiet mode (only shows &amp;quot;worth a look&amp;quot; UBCs, i.e. ones close to limits and/or with a non-zero failcnt)&lt;/li&gt;&lt;br /&gt;&lt;/ol&gt;&lt;br /&gt;&lt;br /&gt;Now, this is what the default vzubc output for a particular VE (with a non-VSwap configuration) looks like:&lt;br /&gt;&lt;pre&gt;
# vzubc 1009
----------------------------------------------------------------
CT 1009      | HELD Bar% Lim%| MAXH Bar% Lim%| BAR | LIM | FAIL
-------------+---------------+---------------+-----+-----+------
     kmemsize|1.52M  11%  10%|1.81M  13%  12%|13.7M|14.1M|    - 
  lockedpages|   -    -    - |   -    -    - |   8M|   8M|    - 
  privvmpages| 2.6M   1%   1%|4.32M   2%   2%| 256M| 272M|    - 
     shmpages|   -    -    - |   -    -    - |  84M|  84M|    - 
      numproc|  10    4%   4%|  15    6%   6%| 240 | 240 |    - 
    physpages|17.3M   -    - |18.6M   -    - |   - |   - |    - 
  vmguarpages|   -    -    - |   -    -    - | 132M|   - |    - 
 oomguarpages|1.52M   2%   - |1.52M   2%   - | 102M|   - |    - 
   numtcpsock|   2  0.6% 0.6%|   3  0.8% 0.8%| 360 | 360 |    - 
     numflock|   1  0.5% 0.5%|   2    1%   1%| 188 | 206 |    - 
       numpty|   -    -    - |   -    -    - |  16 |  16 |    - 
   numsiginfo|   -    -    - |   6    2%   2%| 256 | 256 |    - 
    tcpsndbuf|34.1K   2%   1%|51.1K   3%   2%|1.64M|2.58M|    - 
    tcprcvbuf|  32K   2%   1%|  48K   2%   2%|1.64M|2.58M|    - 
 othersockbuf|2.26K 0.2% 0.1%|14.6K   1% 0.7%|1.07M|   2M|    - 
  dgramrcvbuf|   -    -    - |   -    -    - | 256K| 256K|    - 
 numothersock|  44   12%  12%|  47   13%  13%| 360 | 360 |    - 
   dcachesize| 618K  18%  17%| 627K  18%  17%|3.25M|3.46M|    - 
      numfile| 114    1%   1%| 125    1%   1%|9.09K|9.09K|    - 
    numiptent|  20   15%  15%|  20   15%  15%| 128 | 128 |    - 
    swappages|   -    -    - |   -    -    - |   - |   - |    - 
----------------------------------------------------------------
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;As you can see, it shows all beancounters in human-readable form. Zero or unlimited values are shown as a dash. It also shows the ratio of held and maxheld to barrier and limit, in per cent.&lt;br /&gt;&lt;br /&gt;Now, let&amp;#39;s explore the functionality available via command-line switches.&lt;br /&gt;&lt;br /&gt;First of all, &lt;b&gt;-q or --quiet enables quiet mode&lt;/b&gt;, which only shows beancounters with fails and those with held/maxheld values close to limits. If vzubc -q produces empty output, that probably means you don&amp;#39;t have to worry about anything UBC-related. There are two built-in thresholds for quiet mode, one for the barrier and one for the limit. They can be changed to your liking using the -qh and -qm options.&lt;br /&gt;&lt;br /&gt;Next, &lt;b&gt;-c or --color&lt;/b&gt; enables the use of colors to highlight interesting fields. In this mode &amp;quot;warnings&amp;quot; are shown in yellow, and &amp;quot;errors&amp;quot; are in red. Here a warning means a parameter close to its limit (the same thresholds are used as for quiet mode), and an error means a non-zero fail counter.&lt;br /&gt;&lt;br /&gt;The following screenshot demonstrates the effect of the -c and -q options. I have run a forkbomb inside a CT to create a resource shortage:&lt;br /&gt;&lt;br /&gt;&lt;img src="https://wiki.openvz.org/images/3/37/Vzubc-c-q-forkbomb.png" /&gt;&lt;br /&gt;&lt;br /&gt;Now, &lt;b&gt;-r or --relative&lt;/b&gt; is the ultimate answer to the frequently asked &amp;quot;how do I reset failcnt?&amp;quot; question. Basically, it saves the current failcnt values during the first run, and shows their deltas (rather than the absolute values) during the next runs. 
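The bookkeeping behind that is simple enough to sketch in a few lines of shell (a toy illustration with made-up numbers, not the actual vzubc code):

```shell
#!/bin/sh
# Toy model of the -r (relative failcnt) bookkeeping: remember the fail
# counter from the previous run, print only the delta on the next one.
# The counter name and numbers are made up for illustration.
state=$(mktemp)

# First run: numproc shows 5 fails -- store it as the baseline.
echo "numproc 5" > "$state"

# A later run: the counter now reads 12, so the relative value is 12 - 5 = 7.
current=12
prev=$(awk '$1 == "numproc" { print $2 }' "$state")
delta=$((current - prev))
echo "numproc +$delta"

rm -f "$state"
```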
This is how it works:&lt;br /&gt;&lt;br /&gt;&lt;img src="https://wiki.openvz.org/images/9/9d/Vzubc-r-failcnt.png" /&gt;&lt;br /&gt;&lt;br /&gt;There is also the &lt;b&gt;-i or --incremental&lt;/b&gt; flag, which activates a mode in which an additional column shows the difference in held values from the previous run, so you can see the change in usage. This option also affects quiet mode: all lines with changed held values are shown.&lt;br /&gt;&lt;br /&gt;Here&amp;#39;s a screenshot demonstrating the &amp;quot;color, relative, incremental, quiet&amp;quot; mode of vzubc:&lt;br /&gt;&lt;br /&gt;&lt;img src="https://wiki.openvz.org/images/1/12/Vzubc-incremental.png" /&gt;&lt;br /&gt;&lt;br /&gt;Finally, you can use &lt;b&gt;-w or --watch&lt;/b&gt; to enable an a-la-top mode for monitoring beancounters. It&amp;#39;s not as powerful as top; in fact, it just uses the watch(1) tool to run vzubc every 2 seconds, and that&amp;#39;s it. Please note that this mode is not compatible with --color, and you have to press Ctrl-C to quit. Oh, and since I am not a big fan of animated gifs, there will be no screenshot.&lt;br /&gt;&lt;br /&gt;The &lt;a href="https://wiki.openvz.org/Man/vzubc.8" target="_blank" rel="nofollow"&gt;vzubc(8)&lt;/a&gt; man page gives a more formal description of vzubc, including some minor options I haven&amp;#39;t described here. Enjoy.&lt;br /&gt;&lt;br /&gt;</content>
  </entry>
  <entry>
    <id>urn:lj:livejournal.com:atom1:openvz:41835</id>
    <author>
      <name>Kir Kolyshkin</name>
    </author>
    <lj:poster user="k001" userid="990679"/>
    <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/41835.html"/>
    <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom/?itemid=41835"/>
    <title>Effective live migration with ploop write tracker</title>
    <published>2012-06-03T10:53:25Z</published>
    <updated>2012-08-22T11:34:57Z</updated>
    <category term="ploop"/>
    <category term="openvz"/>
    <category term="vzctl"/>
    <category term="live migration"/>
<content type="html">During my last holiday, on the sunny seaside of hospitable Turkey, I spent my nights -- instead of quenching my thirst or resting after a long and tedious day at the beach -- sitting in the hotel lobby, where they have free Wi-Fi, trying to make live migration of a container on a ploop device work. I succeeded, with &lt;a href="http://git.openvz.org/?p=ploop;a=history;f=lib/pcopy.c;h=1c8b52c3ecb735c7a47e769da5e7eb8a2363d870;hb=312780a4ba3c6c1bccb6fad619033ebf438eca8e" target="_blank" rel="nofollow"&gt;about 20 commits to ploop&lt;/a&gt; and &lt;a href="http://git.openvz.org/?p=vzctl;a=log;h=7bc785f65d929b57caf4b329e30f21a38413e78e" target="_blank" rel="nofollow"&gt;another 15 to vzctl&lt;/a&gt;, so now I'd like to share my findings and tell the story.&lt;br /&gt;&lt;br /&gt;Let's start with the basics and see how migration (i.e. moving a container from one OpenVZ server to another) is implemented. It's done by vzmigrate, a shell script which does the following (simplified for clarity):&lt;br /&gt;&lt;br /&gt;1. Checks that the destination server is available via ssh without entering a password, that there is no container with the same ID on it, and so on.&lt;br /&gt;2. Runs rsync of /vz/private/$CTID to the destination server&lt;br /&gt;3. Stops the container&lt;br /&gt;4. Runs a second rsync of /vz/private/$CTID to the destination&lt;br /&gt;5. Starts the container on the destination&lt;br /&gt;6. Removes it locally&lt;br /&gt;&lt;br /&gt;Obviously, two rsync runs are needed: the first one moves most of the data while the container is still up and running, and the second one moves the changes made between the first rsync run and the container stop.&lt;br /&gt;&lt;br /&gt;Now, if we need live migration (option --online to vzmigrate), then instead of CT stop we do vzctl checkpoint, and instead of start we do vzctl restore. 
As a result, the container moves to another system without your users noticing (processes are not stopped, just frozen for a few seconds; TCP connections migrate; IP addresses do not change, etc. -- no cheating, just a little bit of magic).&lt;br /&gt;&lt;br /&gt;So this is the way it worked for years, making users happy and singing in the rain. One fine day, though, ploop was introduced, and it was soon discovered that live migration was not working for ploop-based containers. I found a few reasons why (for example, one can't use rsync --sparse for copying ploop images, because the in-kernel ploop driver can't work with files that have holes). But the main thing I found is the proper way of migrating a ploop image: not with rsync, but with ploop copy.&lt;br /&gt;&lt;br /&gt;ploop copy is a mechanism for efficiently copying a ploop image with the help of a built-in ploop kernel driver feature called the write tracker. One ploop copy process reads blocks of data from the ploop image and sends them to stdout (prepending each block with a short header consisting of a magic label, the block position, and its size). The other ploop copy process receives this data from stdin and writes it to disk. If you connect these two processes via a pipe, with ssh $DEST in between, you are all set.&lt;br /&gt;&lt;br /&gt;You could say that the cat utility can do almost the same thing. Right. The difference is, before starting to read and send data, ploop copy asks the kernel to turn on the write tracker, and the kernel starts memorizing the list of data blocks that are modified (written to). Then, after all the blocks are sent, ploop copy politely expresses an interest in this list and sends the blocks from the list again, while the kernel creates another list. The process repeats a few times, and the list becomes shorter and shorter. 
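The iterative part is easy to model. Here is a toy sketch in plain shell (the real implementation is C code in lib/pcopy.c talking to the kernel; all numbers below are made up for illustration):

```shell
#!/bin/sh
# Toy model of ploop copy's iterative phase. The "write tracker" here is just
# a number: how many blocks got dirtied while the previous pass was sent.
dirty=1000   # blocks modified during the initial full copy
iter=1
while [ "$dirty" -gt 0 ] && [ "$iter" -le 5 ]; do
    echo "pass $iter: resending $dirty dirty blocks"
    dirty=$((dirty / 10))   # the CT dirties blocks slower than we resend them
    iter=$((iter + 1))
done
# Here the real ploop copy would run 'vzctl stop' (or 'vzctl checkpoint'),
# send the final short list, and check that the tracker reports nothing new.
echo "after freeze: $dirty blocks left to send"
```

The point of the loop is that the freeze happens only once the list has become short, so the frozen period stays small.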
After a few iterations (either the list is empty, or it is not getting shorter, or we just decide that we have done enough iterations), ploop copy executes an external command which should stop any disk activity on this ploop device. This command is either vzctl stop for offline migration or vzctl checkpoint for live migration; obviously, a stopped or frozen container will not write anything to disk. After that, ploop copy asks the kernel for the list of modified blocks again, transfers the blocks listed, and finally asks the kernel for the list one more time. If this time the list is not empty, something is very wrong: the stopping command hasn't done what it should have, and we fail. Otherwise all is good, and ploop copy sends a marker saying the transfer is over. So this is how the sending process works.&lt;br /&gt;&lt;br /&gt;The receiving ploop copy process is trivial -- it just reads the blocks from stdin and writes them to the file at the specified positions. If you want to see the code of both the sending and the receiving sides, &lt;a href="http://git.openvz.org/?p=ploop;a=blob;f=lib/pcopy.c" target="_blank" rel="nofollow"&gt;look no further&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;All right, in the migration description above, ploop copy is used in place of the second rsync run (steps 3 and 4). I'd like to note that this is more efficient, because rsync has to figure out which files have changed and where, while ploop copy just asks the kernel. Also, because the "ask and send" process is iterative, the container is stopped or frozen as late as possible, so even if the container is actively writing data to disk, the period for which it is stopped is minimal.&lt;br /&gt;&lt;br /&gt;Just out of pure curiosity, I performed a quick non-scientific test, with "od -x /dev/urandom &amp;gt; file" running inside a container, live migrating it back and forth. 
ploop copy time after the freeze was a bit over 1 second, and the total frozen time a bit less than 3 seconds. Similar numbers can be obtained with the traditional simfs+rsync migration, but only if the container is not doing any significant I/O. Then I tried to migrate a similar simfs-based container with the same command running inside, and the frozen time increased to 13-16 seconds. I don't claim these measurements are to be trusted; I just ran them without any precautions, with OpenVZ instances running inside Parallels VMs, and with the physical server busy with something else...&lt;br /&gt;&lt;br /&gt;Oh, one last thing. All this functionality is already included in the latest tool releases: ploop 1.3 and vzctl 3.3.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Update:&lt;/b&gt; comments disabled due to spam</content>
  </entry>
  <entry>
    <id>urn:lj:livejournal.com:atom1:openvz:40599</id>
    <author>
      <name>Kir Kolyshkin</name>
    </author>
    <lj:poster user="k001" userid="990679"/>
    <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/40599.html"/>
    <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom/?itemid=40599"/>
    <title>CT console</title>
    <published>2012-02-23T14:28:15Z</published>
    <updated>2013-01-10T19:13:45Z</updated>
    <category term="kernel"/>
    <category term="openvz"/>
    <category term="console"/>
    <category term="feature"/>
    <category term="vzctl"/>
    <category term="container"/>
<content type="html">Are you ready for the next cool feature? Please welcome the CT console.&lt;br /&gt;&lt;br /&gt;Available in the RHEL6-based kernel since 042stab048.1, this feature is pretty simple to use. Use &lt;code&gt;vzctl attach &lt;i&gt;CTID&lt;/i&gt;&lt;/code&gt; to attach to a container's console, and you will be able to see all the messages CT init writes to the console, run getty on it, or anything else.&lt;br /&gt;&lt;br /&gt;Please note that the console is persistent, i.e. it is available even if the container is not running. That way, you can run &lt;s&gt;&lt;code&gt;vzctl attach&lt;/code&gt;&lt;/s&gt; &lt;code&gt;vzctl console&lt;/code&gt; and then (in another terminal) &lt;code&gt;vzctl start&lt;/code&gt;. That also means that if a container is stopped, vzctl attach still works.&lt;br /&gt;&lt;br /&gt;Press &lt;b&gt;Esc .&lt;/b&gt; to detach from the console.&lt;br /&gt;&lt;br /&gt;The feature (&lt;a href="http://git.openvz.org/?p=vzctl;a=commitdiff;h=a1f523f59a6e321ce2cc6dd42d0f5a660a712339" target="_blank" rel="nofollow"&gt;vzctl git commit&lt;/a&gt;) will be available in the upcoming vzctl-3.0.31. I have just made a nightly build of vzctl (version 3.0.30.2-18.git.a1f523f) available so you can test this. Check &lt;a target='_blank' href='http://wiki.openvz.org/Download/vzctl/nightly' rel='nofollow'&gt;http://wiki.openvz.org/Download/vzctl/nightly&lt;/a&gt; for information on how to get a nightly build.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Update:&lt;/b&gt; the feature has been renamed to &lt;code&gt;vzctl console&lt;/code&gt;.&lt;br /&gt;&lt;b&gt;Update:&lt;/b&gt; comments disabled due to spam.</content>
  </entry>
  <entry>
    <id>urn:lj:livejournal.com:atom1:openvz:39765</id>
    <author>
      <name>Kir Kolyshkin</name>
    </author>
    <lj:poster user="k001" userid="990679"/>
    <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/39765.html"/>
    <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom/?itemid=39765"/>
    <title>Recent improvements in vzctl</title>
    <published>2011-12-27T16:03:39Z</published>
    <updated>2012-11-11T19:36:51Z</updated>
    <category term="openvz"/>
    <category term="vzctl"/>
    <category term="git"/>
    <content type="html">&lt;b&gt;Update: note you need VSwap enabled (ie currently RHEL6-based) kernel for the below stuff to work, see &lt;a target='_blank' href='http://wiki.openvz.org/VSwap' rel='nofollow'&gt;http://wiki.openvz.org/VSwap&lt;/a&gt;&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;VSwap is an excellent feature, simplifying container resource management a lot. Now it's time to also simplify the command line interface, i.e. vzctl. Here is what we did recently (take a look at vzctl git repo if you want to review the actual changes):&lt;br /&gt;&lt;br /&gt;1. We no longer require to set kmemsize, dcachesize and lockedpages parameters to non-unlimited values (this is one of the enhancements in the recent kernels we have talked about recently). Therefore, setting these parameters to fractions of CT RAM (physpages) are now removed from configuration files and vzsplit output.&lt;br /&gt;&lt;br /&gt;2. There is no longer a need to specify all the UBC parameters in per-container configuration file. If you leave some parameters unset, the kernel will use default values (usually unlimited). So, VSwap example configs are now much smaller, with as much as 19 parameters removed from those.&lt;br /&gt;&lt;br /&gt;3. vzctl set now supports two new options: --ram and --swap. These are just convenient short aliases for --physpages and --swappages, the differences being that you only need to specify one value (the limit) and the argument is in bytes rather than pages.&lt;br /&gt;&lt;br /&gt;So, to configure a container named MyCT to have 1.5GB of RAM and 3GB of swap space available, all you need to do is just run this command:&lt;br /&gt;&lt;code&gt;vzctl set MyCT --ram 1.5G --swap 3G --save&lt;/code&gt;&lt;br /&gt;&lt;br /&gt;4. This is not VSwap-related, but nevertheless worth sharing. 
Let's illustrate it by a copy-paste from a terminal:&lt;br /&gt;&lt;code&gt;# vzctl create 123 --ostemplate centos-4-x86_64&lt;br /&gt;Creating container private area (centos-4-x86_64)&lt;br /&gt;Found centos-4-x86_64.tar.gz at &lt;a target='_blank' href='http://download.openvz.org/template/precreated//centos-4-x86_64.tar.gz' rel='nofollow'&gt;http://download.openvz.org/template/precreated//centos-4-x86_64.tar.gz&lt;/a&gt;&lt;br /&gt;Downloading...&lt;br /&gt;-- 2011-11-29 18:54:08 -- &lt;a target='_blank' href='http://download.openvz.org/template/precreated//centos-4-x86_64.tar.gz' rel='nofollow'&gt;http://download.openvz.org/template/precreated//centos-4-x86_64.tar.gz&lt;/a&gt;&lt;br /&gt;Resolving download.openvz.org... 64.131.90.11&lt;br /&gt;Connecting to download.openvz.org|64.131.90.11|:80... connected.&lt;br /&gt;HTTP request sent, awaiting response... 200 OK&lt;br /&gt;Length: 171979832 (164M) [application/x-gzip]&lt;br /&gt;Saving to: `/vz/template/cache/centos-4-x86_64.tar.gz'&lt;br /&gt;&lt;br /&gt;100%[======================================&amp;gt;] 171,979,832 588K/s in 4m 27s &lt;br /&gt;&lt;br /&gt;2011-11-29 18:58:35 (630 KB/s) - `/vz/template/cache/centos-4-x86_64.tar.gz' saved [171979832/171979832]&lt;br /&gt;&lt;br /&gt;Success!&lt;br /&gt;Performing postcreate actions&lt;br /&gt;Saved parameters for CT 123&lt;br /&gt;Container private area was created&lt;/code&gt;&lt;br /&gt;&lt;br /&gt;All this will be available in vzctl-3.0.30, which is to be released soon (next week? who knows). If you can't wait and want to test this stuff, here's a nightly build of vzctl available (version 3.0.29.3-27.git.0535fe1) from &lt;a target='_blank' href='http://wiki.openvz.org/Download/vzctl/nightly' rel='nofollow'&gt;http://wiki.openvz.org/Download/vzctl/nightly&lt;/a&gt;. Please give it a try and tell me what you think.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Update:&lt;/b&gt; comments disabled due to spam</content>
  </entry>
  <entry>
    <id>urn:lj:livejournal.com:atom1:openvz:35500</id>
    <author>
      <name>Kir Kolyshkin</name>
    </author>
    <lj:poster user="k001" userid="990679"/>
    <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/35500.html"/>
    <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom/?itemid=35500"/>
    <title>cpulimit is back in RHEL6 based kernel</title>
    <published>2011-01-20T16:53:15Z</published>
    <updated>2011-01-20T16:53:15Z</updated>
    <category term="kernel"/>
    <category term="openvz"/>
    <category term="vzctl"/>
<content type="html">Hard CPU limit (the ability to specify that you don't want a container to use more than X per cent of CPU no matter what) is back in the latest RHEL6-based kernel, &lt;a href="http://wiki.openvz.org/Download/kernel/rhel6/042test006.1" target="_blank" rel="nofollow"&gt;042test006.1, which has just been released&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;The feature was only available in the stable (i.e. RHEL4 and RHEL5-based) kernels, and was missing from all of our development kernels from 2.6.20 to 2.6.32. So while it was always there in the stable branches, it feels like it's back.&lt;br /&gt;&lt;br /&gt;In order to use the CPU limit feature, set the limit using &lt;code&gt;vzctl set $CTID --cpulimit X&lt;/code&gt;, where X is in per cent of a single CPU. For example, if you have a single 2 GHz CPU and want container 123 to use no more than 1 GHz, use &lt;code&gt;vzctl set 123 --cpulimit 50&lt;/code&gt;. If you have a 2 GHz quad-core system and want the container to use no more than 4 GHz, use &lt;code&gt;vzctl set 123 --cpulimit 200&lt;/code&gt;. Well, in the second case it might be better to just use &lt;code&gt;--cpus 2&lt;/code&gt;. Anyway, see the vzctl man page.</content>
  </entry>
  <entry>
    <id>urn:lj:livejournal.com:atom1:openvz:35029</id>
    <author>
      <name>Kir Kolyshkin</name>
    </author>
    <lj:poster user="k001" userid="990679"/>
    <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/35029.html"/>
    <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom/?itemid=35029"/>
    <title>vzctl 3.0.25 released today, woo-hoo!</title>
    <published>2010-12-20T15:39:10Z</published>
    <updated>2010-12-20T15:39:10Z</updated>
    <category term="openvz"/>
    <category term="vzctl"/>
<content type="html">I thought that Monday is a good day to release a new version of vzctl, which has been worked on for the last six months. There are a few important changes in this release; let me go through them.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;First&lt;/b&gt;, we have finally removed all the cron-script trickery required for CT reboot. The thing is, if the container owner issues the 'reboot' command from inside, the container just stops, and something needs to be done on the host system to start it again. Until this release, this was achieved by a hackish combination of vzctl (which adds an initscript inside the container to set a reboot mark) and a cron script (which checks for stopped containers with that reboot mark and starts them). Yet another cron script takes care of the situation when a CT is stopped from the inside -- in this case some cleanup needs to be done on the host system; namely, we need to unmount the CT private area and remove the routing and ARP records for the CT IP.&lt;br /&gt;&lt;br /&gt;There are a few problems with this cron-based approach. First, initscript handling can differ between distributions, and it's really hard to support all of the distros. Second, the cron script runs every 5 minutes, which means the mean time to reboot (or to clean up network rules) is 2.5 minutes. Simply put, it's all hackish and unreliable.&lt;br /&gt;&lt;br /&gt;Now this hairy trickery has been removed and replaced by a simple and clean daemon called vzeventd, which listens for CT stop and reboot events and runs clean and simple scripts. No more trickery, no more waiting for reboot. The only catch is that this requires support from the kernel (which comes in the form of the vzevent kernel module).&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Second&lt;/b&gt;, the new vzctl is able to start Fedora 14 containers on our stable (i.e. RHEL5-2.6.18) kernels. 
The thing is, Fedora 14 has glibc patched to check for a specific kernel version (&amp;gt;=2.6.32 in this case) and refuse to work otherwise. This is done to prevent glibc from running on old kernels with some required features missing. We patch our kernels to have those features, but glibc just checks the version. So, our recent kernels are able to set the osrelease field of the uname structure to any given value for a given container, and vzctl 3.0.25 comes with a file (/etc/vz/osrelease.conf) which lists different distros and their required kernel versions, which vzctl sets during container start and exec.&lt;br /&gt;&lt;br /&gt;I want to briefly mention yet another feature of recent vzctl (which, again, needs kernel support) -- the ability to delegate a PCI device to a container. It is only supported on the RHEL6 kernel at the moment, and the only devices we have tried are NVidia GPUs.&lt;br /&gt;&lt;br /&gt;Besides these three big things, there are a lot of improvements, fixes, and documentation updates all over the tree. I don't know of any regressions in this release, but I guess it's not entirely bug-free. Fortunately, there's a way to handle that -- if anything really bad shows up in this version, it will be fixed by a quick 3.0.25.1 update. This worked pretty well for vzctl-3.0.24 and should work fine this time, too.&lt;br /&gt;&lt;br /&gt;As always, please report all the bugs found to &lt;a target='_blank' href='http://bugzilla.openvz.org/' rel='nofollow'&gt;http://bugzilla.openvz.org/&lt;/a&gt;</content>
  </entry>
  <entry>
    <id>urn:lj:livejournal.com:atom1:openvz:32466</id>
    <author>
      <name>dowdle</name>
    </author>
    <lj:poster user="dowdle" userid="9725912"/>
    <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/32466.html"/>
    <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom/?itemid=32466"/>
    <title>Just in case you missed it</title>
    <published>2010-07-01T21:29:38Z</published>
    <updated>2010-07-01T21:29:38Z</updated>
    <category term="vzctl"/>
<content type="html">Just wanted to mention a few news items from the OpenVZ Project.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Updated vzctl&lt;/b&gt; - &lt;a href="http://wiki.openvz.org/Download/vzctl/3.0.24" target="_blank" rel="nofollow"&gt;vzctl 3.0.24&lt;/a&gt; has been released.  Even though the version number only changed from 3.0.23 to 3.0.24, there are a ton of changes, fixes, and some feature additions.  Of special interest are the --swappages option and the ability to refer to a container by its name rather than requiring the CTID with vzmigrate.  Overall it was a long overdue, much appreciated update.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Updated Official OS Templates&lt;/b&gt; - The last &lt;a href="http://wiki.openvz.org/News/updates#Precreated_templates_update" target="_blank" rel="nofollow"&gt;wiki notice&lt;/a&gt; is dated April 27th, but looking at the dates on the OS Templates today, they appear to have been updated May 27th.  One thing to note is that there are now OS Templates for Ubuntu 10.04, which I'm sure Ubuntu folks will be happy about.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Beta Fedora 13 OS Templates&lt;/b&gt; - And speaking of OS Templates, Kir just released beta OS Templates for &lt;a href="http://download.openvz.org/template/precreated/beta/" target="_blank" rel="nofollow"&gt;Fedora 13&lt;/a&gt;.  The day Fedora 13 was released, I tried creating my own OS Templates by taking Fedora 12 containers and upgrading them, but ran into a snag.  With Fedora 13, a lot of new stuff has been added to the init setup, and some of it causes a container to just hang during init.  I was glad to see the beta OS Templates released.  I created containers from them, made my own changes, and then uploaded those to the contrib section.&lt;br /&gt;&lt;br /&gt;As luck would have it, later in the afternoon the Fedora Project released a whole bunch of updates, and among them was a new initscripts package.  
I suspected that upgrading the container would wipe out whatever changes the OpenVZ folks had made to the init setup to make it work in a container, and I was correct: upgrading the initscripts package did indeed make the container get stuck in the init process upon container reboot.  I ended up filing two bugs: &lt;a href="http://bugzilla.openvz.org/show_bug.cgi?id=1566" target="_blank" rel="nofollow"&gt;1566&lt;/a&gt; and &lt;a href="http://bugzilla.openvz.org/show_bug.cgi?id=1567" target="_blank" rel="nofollow"&gt;1567&lt;/a&gt;, and await their joyful resolution.&lt;br /&gt;&lt;br /&gt;*** Please note *** Any URLs mentioned (and the information they contain) in this posting are time-sensitive and will surely be outdated not long after posting.</content>
  </entry>
  <entry>
    <id>urn:lj:livejournal.com:atom1:openvz:30163</id>
    <author>
      <name>Kir Kolyshkin</name>
    </author>
    <lj:poster user="k001" userid="990679"/>
    <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/30163.html"/>
    <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom/?itemid=30163"/>
    <title>vzctl-3.0.24 is on its way; some thoughts on code refactoring</title>
    <published>2009-11-17T20:50:43Z</published>
    <updated>2009-11-17T20:50:43Z</updated>
    <category term="programming"/>
    <category term="openvz"/>
    <category term="vzctl"/>
<content type="html">Long time no see. It's 11pm here now and I'm still in the office. I just thought: why not post to the blog right before I drive home? Really, why not?&lt;br /&gt;&lt;br /&gt;I'm mostly working on the new vzctl release lately (you can see the progress in vzctl's git web interface &lt;a href="http://git.openvz.org/?p=vzctl;a=summary" target="_blank" rel="nofollow"&gt;here&lt;/a&gt;). The ultimate task is to fix most of the bugs opened for vzctl in the OpenVZ bugzilla (some date back to 2007 -- no, they are not critical or even major, but anyway). So far &lt;a href="http://bugzilla.openvz.org/buglist.cgi?query_format=advanced&amp;amp;bug_status=NEW&amp;amp;bug_status=ASSIGNED&amp;amp;bug_status=REOPENED&amp;amp;component=vzctl&amp;amp;product=OpenVZ" target="_blank" rel="nofollow"&gt;the list of still-open vzctl bugs&lt;/a&gt; is down to a single page (i.e. the scroll bar has disappeared from the browser window), which is a big improvement.&lt;br /&gt;&lt;br /&gt;The process is somewhat slow because of my attitude of fixing even minor things whenever I notice them -- known as &lt;a href="http://en.wikipedia.org/wiki/Code_refactoring" target="_blank" rel="nofollow"&gt;code refactoring&lt;/a&gt;. The simplest example is when you open a C file in vim (or Emacs, whatever) and see that there are spaces instead of tabs. When you notice it, there are three ways to go: (A) leave it as is; (B) fix it in place, keep working on what you've planned, commit; (C) fix it, make a separate commit, then continue working on what you've planned.&lt;br /&gt;&lt;br /&gt;Way A is not that bad, actually; it's definitely better than B, because the result of B is a spaghetti of functional (e.g. a new feature or a bugfix) and non-functional (e.g. whitespace cleanup) changes. 
Mixing apples and oranges is a bad thing to do: it makes patch review harder, it makes porting between branches harder, and in case you'll want to revert the patch (say because of a bug in new code) you will also revert those cleanups (which are not buggy). So, if you are not a perfectionist, better follow way A but please not B.&lt;br /&gt;&lt;br /&gt;Unfortunately I just can't ignore the bad things I see, so I follow the way C. Sometimes this is very annoying, because instead of implementing a new feature you keep refactoring your code for hours, sometimes even days. No, it's not only whitespace cleanups or fixing typos in comments. Sometimes you see that there are a few identical snippets of code and you put that code into a new function. Sometimes you notice that some function arguments are unused (or always the same) and you remove those. Sometimes you just read the code and see there is a bug in it, and you fix it. Oh, the text of error message is not clear enough -- you fix it. The typesetting of a man page is wrong (e.g. bold is used instead of italic for variable argument) -- you fix it. You see that a function is not used outside of the source file -- you make it static and remove its prototype from a header file. You notice a typo in the function name (and of course also in every place that calls it, since otherwise that code won't compile) -- you fix it. A typo in a comment -- fix it! And so on, and so forth...&lt;br /&gt;&lt;br /&gt;Every single thing from the above paragraph (and some more) happened to me when I was working on the boot order priority feature (the subject of &lt;a href="http://bugzilla.openvz.org/1300" target="_blank" rel="nofollow"&gt;OpenVZ bug #1300&lt;/a&gt;). 
So I ended up with 22 (twenty two) code commits, from which 3 implement the actual feature, 1 documents it, and the other 18 are all code refactoring, preparation and minor bugfixing.&lt;a name='cutid1-end'&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;The way to perfection is never easy. That doesn't mean we shouldn't try.</content>
  </entry>
  <entry>
    <id>urn:lj:livejournal.com:atom1:openvz:28393</id>
    <author>
      <name>Kir Kolyshkin</name>
    </author>
    <lj:poster user="k001" userid="990679"/>
    <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/28393.html"/>
    <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom/?itemid=28393"/>
    <title>Completion in vzctl</title>
    <published>2009-05-22T12:11:43Z</published>
    <updated>2009-05-22T12:23:09Z</updated>
    <category term="openvz"/>
    <category term="vzctl"/>
    <content type="html">There is a nice feature in vzctl (well, technically it's not in the vzctl binary itself; it just comes with the vzctl package) that many people don't know about -- completion. Basically, it saves you a few keystrokes when typing.&lt;br /&gt;&lt;br /&gt;Say you want to create a container. You type &lt;tt&gt;vzct&lt;/tt&gt; and press &lt;tt&gt;&amp;lt;TAB&amp;gt;&lt;/tt&gt; -- it completes that to vzctl with a space after it. This is a standard feature of bash -- it looks at all the binaries available in $PATH and tries to complete their names.&lt;br /&gt;&lt;br /&gt;Now let's see the vzctl completion:&lt;br /&gt;&lt;code&gt;# vzctl cr&amp;lt;TAB&amp;gt;&lt;/code&gt;&lt;br /&gt;completes to&lt;br /&gt;&lt;tt&gt;# vzctl create &lt;/tt&gt;&lt;br /&gt;and then, after yet another &amp;lt;TAB&amp;gt;, it suggests a CT ID which is MAX+1 (i.e. if you have containers 101, 102 and 105 it will suggest 106):&lt;br /&gt;&lt;tt&gt;# vzctl create 106 &lt;/tt&gt;&lt;br /&gt;Now we want to specify an OS template:&lt;br /&gt;&lt;tt&gt;# vzctl create 106 --os&amp;lt;TAB&amp;gt;&lt;/tt&gt;&lt;br /&gt;will get you to&lt;br /&gt;&lt;tt&gt;# vzctl create 106 --ostemplate &lt;/tt&gt;&lt;br /&gt;and then you press &lt;tt&gt;&amp;lt;TAB&amp;gt;&lt;/tt&gt; twice more to see the list of available OS templates:&lt;br /&gt;&lt;tt&gt;# vzctl create 106 --ostemplate &amp;lt;TAB&amp;gt;&amp;lt;TAB&amp;gt;&lt;br /&gt;centos-5-x86        centos-5-x86-devel  fedora-9-x86        suse-11.1-x86&lt;/tt&gt;&lt;br /&gt;Now you type the first few characters of the OS template you want to use:&lt;br /&gt;&lt;tt&gt;# vzctl create 106 --ostemplate f&amp;lt;TAB&amp;gt;&lt;/tt&gt;&lt;br /&gt;and it will complete that to&lt;br /&gt;&lt;tt&gt;# vzctl create 106 --ostemplate fedora-9-x86&lt;/tt&gt;&lt;br /&gt;Now, unless you want to specify &lt;tt&gt;--config&lt;/tt&gt; or some other parameters, just press Enter.&lt;br /&gt;&lt;br /&gt;This completion is smart -- say, if you want to start a 
container, type in&lt;br /&gt;&lt;tt&gt;# vzctl start &amp;lt;TAB&amp;gt;&amp;lt;TAB&amp;gt;&lt;/tt&gt;&lt;br /&gt;and it will give you the list of container IDs that can be started (i.e. all the stopped containers).&lt;br /&gt;&lt;br /&gt;And so on, and so forth. Well, you say, it doesn't work! In that case you have to enable it; here's how.&lt;br /&gt;&lt;br /&gt;On a RHEL, CentOS or Fedora system, run &lt;tt&gt;yum install bash-completion&lt;/tt&gt; and then relogin (i.e. log out and log in again). If your host system is Gentoo, run &lt;tt&gt;emerge bash-completion&lt;/tt&gt; and then &lt;tt&gt;eselect bashcomp enable vzctl&lt;/tt&gt;. I hope someone will comment on how to enable this on Debian/Ubuntu/SUSE or whatever your favorite distro is.</content>
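For the curious, here is a minimal sketch of how a context-aware completion like the one above can be wired up in bash. This is not the actual vzctl completion script (which is more elaborate and queries real container state); the function name and the hard-coded subcommand and CT ID lists are illustrative assumptions only.

```shell
# Minimal bash completion sketch in the spirit of vzctl's completion.
# _vzctl_demo is a made-up name; the real script ships with the vzctl package.
_vzctl_demo() {
    local cur prev
    cur="${COMP_WORDS[COMP_CWORD]}"       # the word being completed
    prev="${COMP_WORDS[COMP_CWORD-1]}"    # the word before it
    case "$prev" in
        vzctl)
            # First argument: complete a subcommand name.
            COMPREPLY=( $(compgen -W "create destroy start stop status set" -- "$cur") )
            ;;
        start)
            # The real script would list only stopped containers here
            # (e.g. by parsing vzlist output); these IDs are placeholders.
            COMPREPLY=( $(compgen -W "101 102 105" -- "$cur") )
            ;;
        *)
            COMPREPLY=()
            ;;
    esac
}
# Register the function so bash calls it when completing "vzctl ...".
complete -F _vzctl_demo vzctl
```

The key idea is simply that `complete -F` hands control to a shell function, which inspects the previous word and fills `COMPREPLY` accordingly; that is what makes the completion "smart" rather than a flat word list.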
  </entry>
  <entry>
    <id>urn:lj:livejournal.com:atom1:openvz:25028</id>
    <author>
      <name>Kir Kolyshkin</name>
    </author>
    <lj:poster user="k001" userid="990679"/>
    <link rel="alternate" type="text/html" href="https://openvz.livejournal.com/25028.html"/>
    <link rel="self" type="text/xml" href="https://openvz.livejournal.com/data/atom/?itemid=25028"/>
    <title>New vzctl and vzquota</title>
    <published>2008-11-11T17:52:07Z</published>
    <updated>2008-11-11T17:54:00Z</updated>
    <category term="tools"/>
    <category term="openvz"/>
    <category term="release"/>
    <category term="vzctl"/>
    <category term="parallels"/>
    <category term="vzquota"/>
    <content type="html">Today is definitely a day of releases.&lt;br /&gt;&lt;br /&gt;The OpenVZ project has released both a new &lt;a href="http://wiki.openvz.org/Download/vzctl/3.0.23" target="_blank" rel="nofollow"&gt;vzctl&lt;/a&gt; and a new &lt;a href="http://wiki.openvz.org/Download/vzquota/3.0.12" target="_blank" rel="nofollow"&gt;vzquota&lt;/a&gt; today.&lt;br /&gt;&lt;br /&gt;The new vzctl has a handful of small new features and a bunch of bugfixes, including compatibility with recent glibc, bash, and kernel headers.&lt;br /&gt;&lt;br /&gt;The new vzquota has only one (but quite useful) new feature -- the ability to explain what's wrong when it cannot turn a container's disk quota on or off. Recent OpenVZ kernels have a feature to report files held open in a container's private area, and with the new vzquota that feature is finally available to mere mortals.&lt;br /&gt;&lt;br /&gt;In the meantime, Parallels has released &lt;a href="http://www.parallels.com/desktop" target="_blank" rel="nofollow"&gt;Parallels Desktop for Mac 4.0&lt;/a&gt; -- and that's just a coincidence; I'm sure they do not sync their release cycles with OpenVZ. Or maybe it's not a coincidence... We're sitting in the same office, and for the last few weeks they've been providing free late dinners because of their release, which may have made me leave the office later and thus given me more time to work on OpenVZ tools. :)</content>
  </entry>
</feed>
