<?xml version='1.0' encoding='utf-8' ?>
<!--  If you are running a bot please visit this policy page outlining rules you must respect. https://www.livejournal.com/bots/  -->
<rss version='2.0'  xmlns:lj='http://www.livejournal.org/rss/lj/1.0/' xmlns:media='http://search.yahoo.com/mrss/' xmlns:atom10='http://www.w3.org/2005/Atom'>
<channel>
  <title>OpenVZ</title>
  <link>https://openvz.livejournal.com/</link>
  <description>OpenVZ - LiveJournal.com</description>
  <lastBuildDate>Tue, 26 Jul 2016 07:22:21 GMT</lastBuildDate>
  <generator>LiveJournal / LiveJournal.com</generator>
  <lj:journal>openvz</lj:journal>
  <lj:journalid>9392309</lj:journalid>
  <lj:journaltype>community</lj:journaltype>
  <image>
    <url>https://l-userpic.livejournal.com/128680075/9392309</url>
    <title>OpenVZ</title>
    <link>https://openvz.livejournal.com/</link>
    <width>100</width>
    <height>100</height>
  </image>

  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/53870.html</guid>
  <pubDate>Tue, 26 Jul 2016 07:22:21 GMT</pubDate>
  <title>OpenVZ 7.0 released</title>
  <author>estetus</author>
  <link>https://openvz.livejournal.com/53870.html</link>
  <description>I&amp;#39;m pleased to announce the release of OpenVZ 7.0. The new release focuses on merging OpenVZ and &lt;a href=&quot;https://src.openvz.org/projects/OVZ&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;Virtuozzo source codebase&lt;/a&gt;, replacing our own hypervisor with KVM.&lt;br /&gt;&lt;br /&gt;Key changes in comparison to the last stable OpenVZ release:&lt;br /&gt;&lt;ul&gt;&lt;br /&gt;&lt;li&gt;OpenVZ 7.0 becomes a complete Linux distribution based on our own &lt;a href=&quot;https://virtuozzo.com/products/virtuozzo-linux/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;VzLinux&lt;/a&gt;.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;The main difference between the Virtuozzo (commercial) and OpenVZ (free) versions are the EULA, packages with paid features, and Anaconda installer.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;The user documentation is &lt;a href=&quot;https://docs.openvz.org/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;publicly available&lt;/a&gt;.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;a href=&quot;https://docs.openvz.org/virtuozzo_7_users_guide.webhelp/_managing_templates.html&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;EZ templates&lt;/a&gt; can be used instead of tarballs with template caches.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Additional features (see below)&lt;/li&gt;&lt;br /&gt;&lt;/ul&gt;&lt;br /&gt;This OpenVZ 7.0 release provides the following major improvements:&lt;br /&gt;&lt;ul&gt;&lt;br /&gt;&lt;li&gt;RHEL7 (3.10+) kernel.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;KVM/QEMU hypervisor.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Guest tools for virtual machines that currently allow the following: to execute commands in VMs from the host, to set user passwords, to set and obtain network settings, to change SIDs, to enter VMs.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Unified management of containers and KVM virtual machines with the prlctl tool and SDK. 
You get a single universal toolset for all your CT/VM management needs.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;UUIDs are used to identify both virtual machines and containers. With containers, prlctl treats the former VEID parameter as the name.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Virtual machine HDD images are stored in the QCOW2 format.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Ability to manage containers and VMs with libvirt and &lt;a href=&quot;https://kb.virtuozzo.com/en/129047&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;virt-manager&lt;/a&gt; or virsh via a &lt;a href=&quot;http://libvirt.org/drvvirtuozzo.html&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;single driver&lt;/a&gt;. Libvirt is an open-source API, daemon, and management tool for managing virtualization platforms. The API is widely used in the orchestration layer of hypervisors for cloud-based solutions. OpenVZ considers libvirt the standard API for managing both virtual machines and containers. Libvirt provides storage management on the physical host through storage pools and volumes, which can be used in OpenVZ containers.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;a href=&quot;https://docs.openvz.org/virtuozzo_7_users_guide.webhelp/_managing_resources.html&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;Memory guarantees&lt;/a&gt;. A memory guarantee is a percentage of a container&amp;#39;s or virtual machine&amp;#39;s RAM that said container or VM is guaranteed to have.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Memory hotplugging for containers and VMs, which allows both increasing and reducing CT/VM memory size on the fly, without the need to reboot.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Kernel same-page merging. To optimize memory usage by virtual machines, OpenVZ uses a Linux feature called Kernel Same-Page Merging (KSM).
The KSM daemon ksmd periodically scans memory for pages with identical content and merges them into a single page.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;a href=&quot;https://openvz.org/Memory_management_in_VZ7&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;VCMMD&lt;/a&gt;, the fourth-generation unified memory manager: the vcmmd daemon manages the memory of both virtual machines and containers. OpenVZ 7 uses memcg; balancing and configuring memcg limits makes it possible to implement OpenVZ-specific parameters such as overcommit, shadow gangs, swap, and page cache overuse.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Container live migration via &lt;a href=&quot;https://criu.org/Main_Page&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;CRIU&lt;/a&gt; and &lt;a href=&quot;https://criu.org/P.Haul&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;P.Haul&lt;/a&gt;. In previous versions of OpenVZ, most operations performed during migration were done in kernel space. As a result, the migration process imposed a lot of restrictions. To improve migration, Virtuozzo launched the CRIU project, which aims to move most of the migration code to user space, make the migration process reliable, and remove excessive restrictions.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Containers use cgroups and namespaces, which limit, account for, and isolate the resource usage of a collection of processes.
The beancounters interface remains in place for backward compatibility and, at the same time, acts as a proxy for the actual cgroups and namespaces implementation.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;SimFS remains in OpenVZ 7.0; however, its support is limited and we don&amp;#39;t have plans to improve it in the future.&lt;/li&gt;&lt;br /&gt;&lt;/ul&gt;&lt;h3&gt;Download&lt;/h3&gt;All binary components as well as the &lt;a href=&quot;https://download.openvz.org/virtuozzo/releases/7.0/x86_64/iso/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;installation ISO image&lt;/a&gt; are freely available at the &lt;a href=&quot;https://download.openvz.org/virtuozzo/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;OpenVZ download server&lt;/a&gt; and &lt;a href=&quot;https://mirrors.openvz.org/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;mirrors&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;https://lists.openvz.org/pipermail/announce/2016-July/000664.html&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;Original announcement&lt;/a&gt;</description>
  <comments>https://openvz.livejournal.com/53870.html?view=comments#comments</comments>
  <category>openvz</category>
  <category>release</category>
  <category>criu</category>
  <lj:security>public</lj:security>
  <lj:poster>estetus</lj:poster>
  <lj:posterid>12957684</lj:posterid>
  <lj:reply-count>0</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/53609.html</guid>
  <pubDate>Mon, 18 Jan 2016 14:18:02 GMT</pubDate>
  <title>Meet OpenVZ at FOSDEM 2016</title>
  <author>estetus</author>
  <link>https://openvz.livejournal.com/53609.html</link>
  <description>&lt;p&gt;&lt;img alt=&quot;&quot; src=&quot;https://ic.pics.livejournal.com/estetus/12957684/594/594_900.jpg&quot; style=&quot;color: rgb(34, 34, 34); font-family: Arial, sans-serif; font-size: 14px; line-height: 1.4;&quot; title=&quot;&quot; fetchpriority=&quot;high&quot; /&gt;&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;color: rgb(34, 34, 34); font-family: Arial, sans-serif; font-size: 14px; line-height: 1.4;&quot;&gt;The most important gathering of free software and open source enthusiasts in Europe is coming on Jan 30-31, in Brussels and OpenVZ will have a table booth there, plus several talks. Come to Containers and Process Isolation devroom (&lt;a href=&quot;https://www.fosdem.org/2016/schedule/track/containers_and_process_isolation/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;schedule&lt;/a&gt;) and &lt;a href=&quot;https://fosdem.org/2016/stands/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;OpenVZ booth&lt;/a&gt; to talk about Virtuozzo, CRIU, Live migration many other things related to containers.&lt;/span&gt;&lt;/p&gt;</description>
  <comments>https://openvz.livejournal.com/53609.html?view=comments#comments</comments>
  <category>openvz</category>
  <category>booth</category>
  <category>conference</category>
  <category>fosdem</category>
  <lj:security>public</lj:security>
  <lj:poster>estetus</lj:poster>
  <lj:posterid>12957684</lj:posterid>
  <lj:reply-count>0</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/53331.html</guid>
  <pubDate>Wed, 16 Sep 2015 08:47:23 GMT</pubDate>
  <title>Join Our Team at OpenStack Summit 2015 Tokyo</title>
  <author>estetus</author>
  <link>https://openvz.livejournal.com/53331.html</link>
  <description>&lt;img src=&quot;https://imgprx.livejournal.net/d8911e99663f757b2a2d744df849aac3da436c00/UihGilZPiLNEuutPaez678Jyuw--HW_55KICbusTY6ODu0CLcjusYvAYvOpVs4aXGGYbWJWSnT38wuVItY-SHkyyVvyOZMJlYGdI2DujyDk&quot; alt=&quot;&quot; align=&quot;left&quot; width=&quot;200&quot; fetchpriority=&quot;high&quot; /&gt; We&apos;re very excited that this year OpenVZ will have exhibit space at &lt;a href=&quot;https://www.openstack.org/summit/tokyo-2015/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;OpenStack Summit&lt;/a&gt; in Tokyo Japan, October 27-30. We will be showing and demoing OpenVZ server virtualization, answering questions and so on. &lt;br /&gt;&lt;br /&gt;We would like the community to participate with us in the event. If you live in Tokyo (or can come to this OpenStack Summit), are an OpenVZ user and would like to be a part of our team at the OpenVZ exhibit -- you are very welcome to join us! Please email me (sergeyb@openvz.org) your details and we&apos;ll discuss arrangements.</description>
  <comments>https://openvz.livejournal.com/53331.html?view=comments#comments</comments>
  <category>openstack</category>
  <category>openvz</category>
  <category>summit</category>
  <category>booth</category>
  <lj:security>public</lj:security>
  <lj:poster>estetus</lj:poster>
  <lj:posterid>12957684</lj:posterid>
  <lj:reply-count>0</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/52010.html</guid>
  <pubDate>Thu, 06 Aug 2015 15:40:33 GMT</pubDate>
  <title>OpenVZ upgrade and migration script</title>
  <author>estetus</author>
  <link>https://openvz.livejournal.com/52010.html</link>
  <description>&lt;i&gt;Author:&lt;/i&gt; Andre Moruga&lt;br /&gt;&lt;br /&gt;Every now and then our team is asked question &quot;How do I move a container created on OpenVZ to Virtuozzo&quot;? This is one of the issues which will be finally resolved in version 7 that we are now working on  (&lt;a href=&quot;https://openvz.org/Virtuozzo_7_Technical_Preview_-_Containers&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;first technical preview&lt;/a&gt; is just out). In this version the compatibility will be on binary and transfer protocol levels. So the regular mechanisms (like container migration) will work out of the box.&lt;br /&gt;In prior version this task, although not technically difficult, is not very straightforward, the data images cannot be simply moved - depending on configuration used in OpenVZ, they may be incompatible.Besides, an OpenVZ based container will have configuration that needs to be updated to fit the new platform.&lt;br /&gt;To facilitate such migrations, we created a script which automates all these operations: data transfer, migrating container configuration, and tuning configuration to ensure container will work on the new platform.&lt;br /&gt;The script is available at &lt;a href=&quot;https://src.openvz.org/projects/OVZL/repos/ovztransfer/browse/ovztransfer.sh&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;https://src.openvz.org/projects/OVZL/repos/ovztransfer/browse/ovztransfer.sh&lt;/a&gt;. Its usage is simple: run it in the source (old) node, and specify destination host IP and list of containers to be migrated:&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
 $ ./ovztransfer.sh TARGET_HOST SOURCE_VEID0[:TARGET_VEID0] ...
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;For example:&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
 ./ovztransfer.sh 10.1.1.3 101 102 103
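 # (illustrative IDs, not from the announcement) the optional :TARGET_VEID
 # suffix assigns a new ID on the destination: keep 101 as-is, renumber 102 to 202
 ./ovztransfer.sh 10.1.1.3 101 102:202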
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;The script has been designed to migrate containers from older OpenVZ versions to v.7; however, it should also work for migrating data to existing Virtuozzo versions (like 6.0).&lt;br /&gt;There is one restriction: containers based on obsolete templates that do not exist on the destination servers will be transferred as &quot;not template based&quot;, which means tools for template management (like adding an application via vzpkg) won&apos;t work for them.&lt;br /&gt;This is the first version of the script; we will have an opportunity to improve it before the final release. That&apos;s why your feedback (or even code contributions) is important here.&lt;br /&gt;If you have tried it and want to share your thoughts, email the OpenVZ users group at &lt;a href=&quot;mailto:users@openvz.org&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;users@openvz.org&lt;/a&gt;.</description>
  <comments>https://openvz.livejournal.com/52010.html?view=comments#comments</comments>
  <category>openvz</category>
  <category>upgrade</category>
  <lj:security>public</lj:security>
  <lj:poster>estetus</lj:poster>
  <lj:posterid>12957684</lj:posterid>
  <lj:reply-count>2</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/51718.html</guid>
  <pubDate>Fri, 31 Jul 2015 17:50:01 GMT</pubDate>
  <title>OpenVZ survey results (May - July 2015)</title>
  <author>estetus</author>
  <link>https://openvz.livejournal.com/51718.html</link>
  <description>Now we are ready to publish results of survey which was in May-July 2015.&lt;br /&gt;There are 91 people participated. votes gathered from 19 May till 1 July 2015.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;h3&gt;How long do you use OpenVZ?&lt;/h3&gt;&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://imgur.com/YqaIwSD&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;&lt;img src=&quot;https://i.imgur.com/YqaIwSD.png&quot; title=&quot;source: imgur.com&quot; fetchpriority=&quot;high&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;h3&gt;What are the reasons for choosing OpenVZ among other container-based solutions?&lt;/h3&gt;&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://imgur.com/UZ802TG&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;&lt;img src=&quot;https://i.imgur.com/UZ802TG.png&quot; title=&quot;source: imgur.com&quot; loading=&quot;lazy&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;h3&gt;What do you use OpenVZ for&lt;/h3&gt;&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://imgur.com/2XtjH6i&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;&lt;img src=&quot;https://i.imgur.com/2XtjH6i.png&quot; title=&quot;source: imgur.com&quot; loading=&quot;lazy&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;h3&gt;How big is your team supporting OpenVZ deployment&lt;/h3&gt;&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://imgur.com/qPd8Bdr&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;&lt;img src=&quot;https://i.imgur.com/qPd8Bdr.png&quot; title=&quot;source: imgur.com&quot; loading=&quot;lazy&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;h3&gt;Further plans with OpenVZ&lt;/h3&gt;&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://imgur.com/H5FHbvZ&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;&lt;img src=&quot;https://i.imgur.com/H5FHbvZ.png&quot; title=&quot;source: imgur.com&quot; loading=&quot;lazy&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;h3&gt;How many hardware servers do you use 
with OpenVZ?&lt;/h3&gt;&lt;br /&gt;&lt;br /&gt;&lt;table&gt;
  &lt;tr&gt;
    &lt;td&gt;Votes&lt;/td&gt;
    &lt;td&gt;Number of servers&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;15&lt;/td&gt;
    &lt;td&gt;&amp;lt;10&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;5&lt;/td&gt;
    &lt;td&gt;10-20&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;3&lt;/td&gt;
    &lt;td&gt;20-30&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;1&lt;/td&gt;
    &lt;td&gt;10&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;1&lt;/td&gt;
    &lt;td&gt;10+&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;1&lt;/td&gt;
    &lt;td&gt;45&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;1&lt;/td&gt;
    &lt;td&gt;50&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;1&lt;/td&gt;
    &lt;td&gt;60&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;1&lt;/td&gt;
    &lt;td&gt;80&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;1&lt;/td&gt;
    &lt;td&gt;~100&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;1&lt;/td&gt;
    &lt;td&gt;100+&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;1&lt;/td&gt;
    &lt;td&gt;160+&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;1&lt;/td&gt;
    &lt;td&gt;1600&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;1&lt;/td&gt;
    &lt;td&gt;None now (using upstream kernels)&lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;br /&gt;&lt;br /&gt;&lt;h3&gt;What features are absent in OpenVZ from your point of view?&lt;/h3&gt;&lt;br /&gt;&lt;br /&gt;Note: Answers with more than one vote&lt;br /&gt;&lt;br /&gt;&lt;table&gt;
&lt;tr&gt;&lt;td&gt;Votes&lt;/td&gt;&lt;td&gt;Answer&lt;/td&gt;&lt;td&gt;Description&lt;/td&gt;&lt;td&gt;Current decision&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;7&lt;/td&gt;&lt;td&gt;Modern kernel support (at least the RHEL7 kernel)&lt;/td&gt;&lt;td&gt;Reasons: There are two main problems with the stable 2.6.32 patches: modern hardware requires 3.x kernels, and the latest user-space utilities (e.g. systemd) don&apos;t work with 2.6.x kernels.&lt;br /&gt;It&apos;s great, I just really hate being stuck with such an old kernel. I know it gets updated regularly but it&apos;s still technically almost 6 years old.&lt;/td&gt;&lt;td&gt;&lt;a href=&quot;https://openvz.org/Virtuozzo_7_Technical_Preview_-_Containers&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;In progress&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;5&lt;/td&gt;&lt;td&gt;Good web interface (cluster management: adding nodes, quorum management, logs, etc.)&lt;/td&gt;&lt;td&gt;An entry-level web panel. OpenVZ Web Panel seems somewhat popular but I&apos;ve always been turned off by its reliance on Ruby... and unsure of its security-related testing. The recent Packt Publishing book, &quot;Essential OpenVZ&quot;, spends half of the book covering OpenVZ Web Panel. It would be nice if OWP was adopted in some way or replaced with something similar to offer an entry-level web-based management system like VMware does with ESXi. If considered, I&apos;d strongly recommend that there is a clear differentiation between the features in the entry-level web panel and those commercially offered. I know a few companies are selling OpenVZ-compatible web interfaces... like SolusVM, Proxmox VE, etc.&lt;/td&gt;&lt;td&gt;There is no final decision about a WebUI in Vz7, but we will have &lt;a href=&quot;https://openvz.org/LibVirt&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;libvirt in the base of Vz7&lt;/a&gt;, which allows using at least oVirt and virt-manager.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;4&lt;/td&gt;&lt;td&gt;ZFS support&lt;/td&gt;&lt;td&gt;* native ZFS support for container files instead of ploop&lt;br /&gt;* ZFS integration (ZFS-aware tools for creation/cloning containers/creating snapshots)&lt;br /&gt;* ZFS integration (e.g. quota support)&lt;/td&gt;&lt;td&gt;There is no final decision regarding ZFS in VZ7.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;4&lt;/td&gt;&lt;td&gt;Upstream kernel support and availability in Linux distributions&lt;/td&gt;&lt;td&gt;Better integration with LXC in the mainline kernel. I think LXC and Docker could be a stepping stone to OpenVZ / Virtuozzo... if the OpenVZ tools worked reasonably well with LXC in the mainline kernel... and it was clear to the user what features they could gain if they moved up to OpenVZ and/or Virtuozzo.&lt;/td&gt;&lt;td&gt;There is no final decision about it.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;2&lt;/td&gt;&lt;td&gt;Better integration with OpenStack (especially networking)&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;&lt;a href=&quot;https://openvz.org/Setup_OpenStack_with_Virtuozzo_7&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;In progress&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;2&lt;/td&gt;&lt;td&gt;Support of Debian 8.0 Jessie&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;There is no final decision yet.&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;&lt;br /&gt;&lt;br /&gt;&lt;h3&gt;What 3rd-party technologies/products do you use with OpenVZ?&lt;/h3&gt;&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Note:&lt;/b&gt; Answers with more than one vote.&lt;br /&gt;&lt;br /&gt;&lt;table&gt;
&lt;tr&gt;&lt;td&gt;Votes&lt;/td&gt;&lt;td&gt;Product&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;9&lt;/td&gt;&lt;td&gt;Proxmox with KVM and OpenVZ&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;4&lt;/td&gt;&lt;td&gt;KVM&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;3&lt;/td&gt;&lt;td&gt;Docker&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;3&lt;/td&gt;&lt;td&gt;Pacemaker (HA management)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;2&lt;/td&gt;&lt;td&gt;Ansible&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;2&lt;/td&gt;&lt;td&gt;cPanel&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;2&lt;/td&gt;&lt;td&gt;SolusVM&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;2&lt;/td&gt;&lt;td&gt;ZFSonLinux&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;2&lt;/td&gt;&lt;td&gt;Zabbix&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;2&lt;/td&gt;&lt;td&gt;Asterisk&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;h3&gt;Do you have plans to buy a commercial version of Virtuozzo (Parallels Server Bare Metal/Parallels Cloud Server)?&lt;/h3&gt;&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://imgur.com/xMaBATk&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;&lt;img src=&quot;https://i.imgur.com/xMaBATk.png&quot; title=&quot;source: imgur.com&quot; loading=&quot;lazy&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;h3&gt;What reasons prevent you from buying a commercial version?&lt;/h3&gt;&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Note:&lt;/b&gt; Answers with more than one vote.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Satisfied with OpenVZ or other open-source solutions (19 answers)&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;br /&gt;&lt;li&gt;I wasn&apos;t actually aware there was a commercial version so I don&apos;t know what the difference is. The free version has been working well for us though.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;We use it primarily in a small internal IT setting, so there is no need for Virtuozzo services - everything we need can be done with the standard OVZ tools.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;We are happy with the open source version.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;OpenVZ is very simple and works well for me.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Not needed, everything works fine without any problem.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;OpenVZ does all I need.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;OpenVZ is good enough for us.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Not needed at this time.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Can&apos;t see why we need it.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;We have all the features we need in OpenVZ.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;OpenVZ is enough.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;No reason to buy; lots of free solutions.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Why?&lt;/li&gt;&lt;br /&gt;&lt;li&gt;We have one and we don&apos;t feel that we need to buy another.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;What is the reason to buy?&lt;/li&gt;&lt;br /&gt;&lt;li&gt;I
tried the commercial version and stayed with the free OpenVZ version.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;The available open-source products are sufficient.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Not needed&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Our current needs are satisfied with Docker, OpenStack + KVM, and OpenVZ for certain deployments where Docker is not enough and KVM is too much.&lt;/li&gt;&lt;br /&gt;&lt;/ul&gt;&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Very high price of the commercial version (10 answers)&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;br /&gt;&lt;li&gt;Strange prices, strange sales, enterprise orientation&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Cost, scale.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Huge cost.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Budget&lt;/li&gt;&lt;br /&gt;&lt;li&gt;No project funding and no use case&lt;/li&gt;&lt;br /&gt;&lt;li&gt;No money&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Limited budget&lt;/li&gt;&lt;br /&gt;&lt;/ul&gt;&lt;br /&gt;&lt;br /&gt;&lt;h3&gt;Have you ever contributed to the OpenVZ project?&lt;/h3&gt;&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://imgur.com/IYUb6Pw&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;&lt;img src=&quot;https://i.imgur.com/IYUb6Pw.png&quot; title=&quot;source: imgur.com&quot; loading=&quot;lazy&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;h3&gt;What commercial products do you use in parallel with OpenVZ (Containers for Windows, Virtuozzo, Plesk, etc.)?&lt;/h3&gt;&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://imgur.com/xztjRBJ&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;&lt;img src=&quot;https://i.imgur.com/xztjRBJ.png&quot; title=&quot;source: imgur.com&quot; loading=&quot;lazy&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;h3&gt;Is there anything that could motivate you to switch to a supported/commercial version (Virtuozzo)?&lt;/h3&gt;&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://imgur.com/9PsnK4O&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;&lt;img src=&quot;https://i.imgur.com/9PsnK4O.png&quot;
title=&quot;source: imgur.com&quot; loading=&quot;lazy&quot; /&gt;&lt;/a&gt;</description>
  <comments>https://openvz.livejournal.com/51718.html?view=comments#comments</comments>
  <category>openvz</category>
  <category>survey</category>
  <lj:security>public</lj:security>
  <lj:poster>estetus</lj:poster>
  <lj:posterid>12957684</lj:posterid>
  <lj:reply-count>0</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/51526.html</guid>
  <pubDate>Mon, 27 Jul 2015 20:12:02 GMT</pubDate>
  <title>Publishing of Virtuozzo 7 Technical Preview - Containers</title>
  <author>estetus</author>
  <link>https://openvz.livejournal.com/51526.html</link>
  <description>We are pleased to announce the official release of Virtuozzo 7.0 Technical Preview - Containers.&lt;br /&gt;&lt;br /&gt;It has been &lt;a href=&quot;http://openvz.org/History&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;more than a decade&lt;/a&gt; since we released Virtuozzo containers. Back then Linux kernel lacked isolation technologies and we had to implement those as a custom kernel patch. Since then we have worked closely with the community to bring these technologies to upstream. Today they are a part of most modern Linux kernels and this release is the first that will benefit significantly from our joint efforts and the strong upstream foundation.&lt;br /&gt;&lt;br /&gt;This is an early technology preview of Virtuozzo 7. We have made some good progress, but this is just the beginning. Much more still needs to be done. At this point we replaced the containers engine and made our tools work with the new kernel technologies. We consider this beta a major milestone on the road to the official Virtuozzo 7 release and want to share the progress with our customers.&lt;br /&gt;&lt;br /&gt;This Virtuozzo 7.0 Technical Preview offers the following significant improvements:&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;br /&gt;&lt;li&gt;Virtuozzo 7 is based on &lt;a href=&quot;https://openvz.org/Download/kernel/rhel7-testing&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;RHEL7 and Kernel 3.10+&lt;/a&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Containers are using kernel features cgroups and namespaces that limit, account for, and isolate resource usage as isolated namespaces of a collection of processes. The beancounters interface remains in place for backward compatibility. At the same time it acts as a proxy for actual cgroups and namespaces implementation.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;UUID instead of VEID for container identification. You can now identify containers by their UUIDs or names. 
By default, vzctl will treat the former VEID parameter as the name.&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;a href=&quot;https://openvz.org/VCMMD&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;VCMMD, the 4th generation of the memory manager&lt;/a&gt;. We switched to memcg. By balancing and configuring memcg limits, we get the exact Virtuozzo parameters: overcommit, shadow gangs, swap, and page cache overuse. This is done by a userspace daemon.&lt;/li&gt;&lt;br /&gt;&lt;/ul&gt;&lt;br /&gt;&lt;br /&gt;Read more details in the &lt;a href=&quot;http://lists.openvz.org/pipermail/announce/2015-July/000617.html&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;official announcement&lt;/a&gt;.</description>
  <comments>https://openvz.livejournal.com/51526.html?view=comments#comments</comments>
  <category>openvz</category>
  <category>virtuozzo</category>
  <lj:security>public</lj:security>
  <lj:poster>estetus</lj:poster>
  <lj:posterid>12957684</lj:posterid>
  <lj:reply-count>0</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/51003.html</guid>
  <pubDate>Fri, 03 Jul 2015 17:03:29 GMT</pubDate>
  <title>Parallels and Docker: Not Just Competition</title>
  <author>estetus</author>
  <link>https://openvz.livejournal.com/51003.html</link>
  <description>&lt;img alt=&quot;&quot; height=&quot;600&quot; src=&quot;https://ic.pics.livejournal.com/estetus/12957684/357/357_900.jpg&quot; title=&quot;Pavel Emelyanov, CRIU project mantainer&quot; width=&quot;900&quot; fetchpriority=&quot;high&quot; /&gt;&amp;lt;/p&amp;gt;&lt;p class=&quot;&quot;&gt;&lt;br /&gt;&lt;span class=&quot;&quot;&gt;One of the questions that people ask us is how Parallels competes with Docker and why we do nothing while Docker is busy conquering the market? Firstly, since we created containers a decade ago, we have been perfecting container virtualization and pushing it to upstream. Secondly, Parallels and Docker operate on different levels: Docker packages and runs applications while Parallels provide virtualization, a low-level technology that Docker uses. This allows us to partner in a number of projects. Moreover, all existing container-related projects in the market do more than just compete with each other. We also try to cooperate in developing shared components.&lt;/span&gt;&lt;/p&gt;&lt;p class=&quot;&quot;&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;line-height: 1.4;&quot;&gt;One good example is the &lt;a href=&quot;https://github.com/docker/libcontainer&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;libcontainer&lt;/a&gt; project that unifies two versions of a library that manages kernel components used in container creation. We are currently trying to standardize how our own &lt;a href=&quot;https://openvz.org/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;OpenVZ&lt;/a&gt; as well as Docker and other projects interact with the Linux kernel. We also want to bind libcontainer to primary programming languages to provide more scenarios of container use in the market. 
Besides, we plan to integrate containers with OpenStack via &lt;a href=&quot;https://github.com/docker/libcontainer&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;libcontainer&lt;/a&gt;.&lt;/span&gt;&lt;/p&gt;&lt;p class=&quot;&quot;&gt;&lt;span class=&quot;&quot;&gt;The libcontainer project has an interesting history, by the way. Docker was initially meant to be a container template management project that used vzctl to run containers. Then its developers moved to LXC and later came up with their own libcontainer library. At the same time we decided to &amp;quot;standardize&amp;quot; the kernel-related part of containers and create a low-level library. In all, there were as many as three such systems at that time: ours, LXC, and libcontainer. We reworked our version and presented it to the public. As it happened, our announcement came very close to the initial release of Docker&amp;#39;s library. Since the projects pursued the same goal, we decided to join forces. Libcontainer has several points of interest for us. Firstly, anyone who wants to use containers currently has to choose between several projects. This is inconvenient for users and costly for developers (as they have to support multiple versions of essentially the same technology). The entire stack will be standardized sooner or later, however, and we&amp;#39;d like to participate so we can influence both the development process and its results. Secondly, we&amp;#39;ll be able to fulfill the dream of many users: running Docker containers on our stable kernel.&lt;/span&gt;&lt;/p&gt;&lt;p class=&quot;&quot;&gt;&lt;span style=&quot;line-height: 1.4;&quot;&gt;Recently, we announced jointly with Docker that &lt;a href=&quot;https://openvz.org/Virtuozzo&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;Virtuozzo&lt;/a&gt; (the successor of OpenVZ and Parallels Cloud Server) supports Docker containers and allows creating &amp;quot;containers within containers&amp;quot;, i.e. 
use Docker inside &lt;a href=&quot;https://openvz.org/Virtuozzo&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;Virtuozzo&lt;/a&gt;.&lt;/span&gt;&lt;/p&gt;&lt;p class=&quot;&quot;&gt;&lt;span class=&quot;&quot;&gt;Another good example of cooperation is live migration of Docker (and LXC) containers made possible by our CRIU project (Checkpoint/Restore In Userspace [mostly]). This technology enables you to save the state of a Linux process and restore it in a different location or at a different time (or &amp;quot;freeze&amp;quot; it). Moreover, this is the first ever implementation of an application snapshot technology that works on unmodified Linux (kernel + system libraries) and supports any process state. It&amp;#39;s available, for example, in Fedora 19 and newer. There were similar projects before, but they had drawbacks: for example, they required specific kernels and customized system libraries, or supported only some process states.&lt;/span&gt;&lt;/p&gt;&lt;p class=&quot;&quot;&gt;&lt;span class=&quot;&quot;&gt;The live migration itself is performed by the &lt;a href=&quot;http://criu.org/P.Haul&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;P.Haul subproject&lt;/a&gt; that uses &lt;a href=&quot;http://criu.org/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;CRIU&lt;/a&gt; to correctly migrate containers between computers. CRIU allows performing two key actions: 1) save process states to files and 2) restore processes from saved data. There are nuances: for example, CRIU can work without stopping processes and can save only the changes in process state if need be.&lt;/span&gt;&lt;/p&gt;&lt;p class=&quot;&quot;&gt;&lt;span class=&quot;&quot;&gt;Migration is more difficult and implies at least three actions: 1) saving process state, 2) transferring it to a different computer, and 3) restoring the saved state. 
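A minimal sketch of those basic actions (an illustration only, not P.Haul itself: the PID, image directory, and helper names are made up, though dump/restore and the --tree, --images-dir, and --shell-job options are standard CRIU flags):

```python
# Sketch of the two basic CRIU operations behind live migration.
# The PID and directory below are hypothetical examples.

def criu_dump_cmd(pid, images_dir):
    """Argv for checkpointing a process tree into image files."""
    return ["criu", "dump", "--tree", str(pid),
            "--images-dir", images_dir, "--shell-job"]

def criu_restore_cmd(images_dir):
    """Argv for restoring a process tree from saved image files."""
    return ["criu", "restore", "--images-dir", images_dir, "--shell-job"]

# Migration then means: run the dump argv on the source host, copy
# images_dir to the destination host, and run the restore argv there.
print(criu_dump_cmd(1234, "/tmp/ckpt"))
print(criu_restore_cmd("/tmp/ckpt"))
```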
In practice, it can also include transferring the file system, stopping the processes on the source computer and destroying them at the end, and reducing freeze time by performing a series of incremental memory transfers, with additional copying of memory after migration.&lt;/span&gt;&lt;/p&gt;&lt;p class=&quot;&quot;&gt;&lt;span class=&quot;&quot;&gt;Migration can also include such actions as transferring the container&amp;#39;s IP address, reregistering it with the management system (e.g., docker-daemon in Docker), and handling the container&amp;#39;s external links. For example, LXC often links files inside a container with files outside it. You can have CRIU relink such files on the destination computer. Development of all these features and nuances was organised into a dedicated project.&lt;/span&gt;&lt;/p&gt;&lt;p class=&quot;&quot;&gt;&lt;span class=&quot;&quot;&gt;Today &lt;a href=&quot;http://criu.org/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;CRIU&lt;/a&gt; is the de facto standard for implementing checkpoint/restore functionality in Linux (even though VMware claimed one should use vMotion for container migration). In this project we also cooperate with developers from Google, Canonical, and Red Hat. They not only send patches but also actively discuss cgroup support in CRIU and successfully use CRIU with Docker and LXC tools.&lt;/span&gt;&lt;/p&gt;&lt;p class=&quot;&quot;&gt;&lt;span class=&quot;&quot;&gt;The &lt;a href=&quot;http://criu.org/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;CRIU&lt;/a&gt; technology has lots of uses aside from live migration: speeding up the start of large applications, rebootless kernel updates, network load balancing, state backup for failure recovery, analysis of application behaviour on different computers, process duplication, and more.&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;https://xakep.ru/2015/06/22/parallels-i-docker/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;via&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;a name=&apos;cutid1-end&apos;&gt;&lt;/a&gt;</description>
  <comments>https://openvz.livejournal.com/51003.html?view=comments#comments</comments>
  <category>p.haul</category>
  <category>openvz</category>
  <category>docker</category>
  <category>libcontainer</category>
  <category>criu</category>
  <lj:security>public</lj:security>
  <lj:poster>estetus</lj:poster>
  <lj:posterid>12957684</lj:posterid>
  <lj:reply-count>0</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/50776.html</guid>
  <pubDate>Fri, 03 Jul 2015 07:41:45 GMT</pubDate>
  <title>Analyzing OpenVZ Components with PVS-Studio</title>
  <author>estetus</author>
  <link>https://openvz.livejournal.com/50776.html</link>
<description>&lt;b&gt;Author:&lt;/b&gt; Svyatoslav Razmyslov&lt;br /&gt;&lt;br /&gt;In order to demonstrate our analyzer&apos;s diagnostic capabilities, we analyze open-source projects and write articles to discuss any interesting bugs we happen to find. We always encourage our users to suggest interesting open-source projects for analysis, and note down all the suggestions we receive via e-mail. Sometimes they come from people closely related to the project we are asked to check. In this article, we discuss a check of the OpenVZ project&apos;s components, which project manager Sergey Bronnikov asked us to analyze.&lt;br /&gt;&lt;br /&gt;&lt;h3&gt;About PVS-Studio and OpenVZ&lt;/h3&gt;&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.viva64.com/en/pvs-studio/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;PVS-Studio&lt;/a&gt; is a static analyzer designed to detect errors in the source code of C/C++ applications. It can be downloaded from the official website but is available for operating systems of the Windows family only. Therefore, to be able to analyze OpenVZ components in Linux, we had to use the PVS-Studio beta version we once &lt;a href=&quot;http://www.viva64.com/en/b/0299/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;used to check the Linux Kernel&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;https://openvz.org/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;OpenVZ&lt;/a&gt; is an operating system-level virtualization technology based on the Linux kernel and operating system. 
OpenVZ allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs), or virtual environments (VEs).&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Analysis results&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;OpenVZ components are small-sized projects, so there were relatively few warnings, yet they were very characteristic of software written in C++.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Troubles with pointers&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.viva64.com/en/d/0205/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;V595&lt;/a&gt; The &apos;plog&apos; pointer was utilized before it was verified against nullptr. Check lines: 530, 531. CPackedProblemReport.cpp 530&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
void
CPackedProblemReport::appendSystemLog( CRepSystemLog * plog )
{
    QString strPathInTemp = m_strTempDirPath + QString(&quot;/&quot;) +
        QFileInfo( plog-&amp;gt;getName() ).fileName();
    if( !plog )
    {
        QFile::remove( strPathInTemp );
        return;
    }
    ....
}
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;It&apos;s a genuine pointer handling bug. The &apos;plog&apos; pointer is dereferenced right after entering the function and only then is checked for being valid.&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.viva64.com/en/d/0205/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;V595&lt;/a&gt; The &apos;d&apos; pointer was utilized before it was verified against nullptr. Check lines: 1039, 1041. disk.c 1039&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
int vzctl2_add_disk(....)
{
    ....
    if (created)
        destroydir(d-&amp;gt;path);
    if (d)
        free_disk(d);
    return ret;
}
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;In this case, when the &apos;created&apos; flag is set, the &apos;d&apos; variable will be dereferenced, though the next code line suggests that the pointer may be null. This code fragment can be rewritten in the following way:&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
int vzctl2_add_disk(....)
{
    ....
    if (d)
    {
        if (created)
            destroydir(d-&amp;gt;path);
        free_disk(d);
    }
    return ret;
}
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.viva64.com/en/d/0205/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;V595&lt;/a&gt; The &apos;param&apos; pointer was utilized before it was verified against nullptr. Check lines: 1874, 1876. env.c 1874&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
int
vzctl2_env_set_veth_param(....,
                          struct vzctl_veth_dev_param *param,
                          int size)
{
    int ret;
    struct vzctl_ip_param *ip = NULL;
    struct vzctl_veth_dev_param tmp = {};
    memcpy(&amp;tmp, param, size);
    if (param == NULL || tmp.dev_name_ve == NULL)
        return VZCTL_E_INVAL;
    ....
}
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;The &apos;memcpy&apos; function copies the contents of one memory area into another. The second function parameter is a pointer to the source address. The function contains a check of the &apos;param&apos; pointer for null, but before that the pointer is used by the &apos;memcpy&apos; function, which may cause a null-pointer dereferencing operation.&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.viva64.com/en/d/0205/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;V595&lt;/a&gt; The &apos;units&apos; pointer was utilized before it was verified against nullptr. Check lines: 607, 610. wrap.c 607&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
int vzctl_set_cpu_param(....)
{
    ....
    if (weight != NULL &amp;&amp;
        (ret = vzctl2_env_set_cpuunits(param, *units * 500000)))
        goto err;
    if (units != NULL &amp;&amp;
        (ret = vzctl2_env_set_cpuunits(param, *units)))
        goto err;
    ....
}
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;The analyzer&apos;s warning about the &apos;units&apos; pointer being used before it is checked for null may hint at a typo in this code. In the first condition, the &apos;weight&apos; pointer is checked but never used. The code should probably have looked like this:&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
if (weight != NULL &amp;&amp;
    (ret = vzctl2_env_set_cpuunits(param, *weight * 500000)))
    goto err;
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;V668 There is no sense in testing the &apos;pRule&apos; pointer against null, as the memory was allocated using the &apos;new&apos; operator. The exception will be generated in the case of memory allocation error. PrlHandleFirewallRule.cpp 59&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
PRL_RESULT PrlHandleFirewallRule::Create(PRL_HANDLE_PTR phRule)
{
    PrlHandleFirewallRule* pRule = new PrlHandleFirewallRule;
    if ( ! pRule )
        return PRL_ERR_OUT_OF_MEMORY;
    *phRule = pRule-&amp;gt;GetHandle();
    return PRL_ERR_SUCCESS;
}
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;The analyzer has detected an issue when a pointer value returned by the &apos;new&apos; operator is compared to zero. If the &apos;new&apos; operator fails to allocate a required amount of memory, the C++ standard forces the program to throw an std::bad_alloc() exception. Therefore, checking the pointer for null doesn&apos;t make sense. The developers need to check which kind of the &apos;new&apos; operator is used in their code. If it is really set to throw an exception in case of memory shortage, then there are 40 more fragments where the program may crash.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Troubles with classes&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.viva64.com/en/d/0247/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;V630&lt;/a&gt; The &apos;malloc&apos; function is used to allocate memory for an array of objects which are classes containing constructors and destructors. IOProtocol.cpp 527&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
/**
* Class describes IO package.
*/
class IOPackage {
public:
/** Common package type */
typedef quint32 Type;
....
};
IOPackage* IOPackage::allocatePackage ( quint32 buffNum )
{
    return reinterpret_cast&amp;lt;IOPackage*&amp;gt;(
        ::malloc(IOPACKAGESIZE(buffNum))
    );
}
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;The analyzer has detected a potential error related to dynamic memory allocation. Using the malloc/calloc/realloc functions to allocate memory for C++ objects means the class constructor is never called. The class fields are therefore left uninitialized, and other important initialization steps probably cannot be performed either. Likewise, when such memory is later freed through the free() function, the destructor is not called, which may cause resource leaks or other troubles.&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.viva64.com/en/d/0300/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;V670&lt;/a&gt; The uninitialized class member &apos;m_stat&apos; is used to initialize the &apos;m_writeThread&apos; member. Remember that members are initialized in the order of their declarations inside a class. SocketClient_p.cpp 145&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
class SocketClientPrivate:protected QThread,
                      protected SocketWriteListenerInterface
{
....
SocketWriteThread m_writeThread;  //&amp;lt;==line 204
....
IOSender::Statistics m_stat;      //&amp;lt;==line 246
....
}
SocketClientPrivate::SocketClientPrivate (....) :
....
m_writeThread(jobManager, senderType, ctx, m_stat, this),
....
{
....
}
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;The analyzer has detected a potential bug in the class constructor&apos;s initialization list. Under the C++ standard, class members are initialized in the constructor in the same order they were declared in the class. In this case, the m_writeThread variable will be the first to be initialized instead of m_stat.&lt;br /&gt;&lt;br /&gt;So it may be unsafe to construct &apos;m_writeThread&apos; using the &apos;m_stat&apos; field as one of the arguments.&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.viva64.com/en/d/0326/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;V690&lt;/a&gt; The &apos;CSystemStatistics&apos; class implements a copy constructor, but lacks the &apos;=&apos; operator. It is dangerous to use such a class. CSystemStatistics.h 632&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
class CSystemStatistics: public CPrlDataSerializer
{
public:
....
/** Copy constructor */
CSystemStatistics(const CSystemStatistics &amp;_obj);
/** Initializing constructor */
CSystemStatistics(const QString &amp;source_string);
....
}
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;There is a copy constructor for this class but its assignment operator has not been redefined.&lt;br /&gt;&lt;br /&gt;The &lt;a href=&quot;https://en.wikipedia.org/wiki/Rule_of_three_(C%252B%252B_programming)&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;&quot;Rule of three&quot;&lt;/a&gt; (also known as the &quot;Law of the Big Three&quot; or &quot;The Big Three&quot;) is violated here. This rule claims that if a class defines one (or more) of the following it should probably explicitly define all three:&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
destructor;
copy constructor;
copy assignment operator.
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;These three functions are special member functions automatically implemented by the compiler when they are not explicitly defined by the programmer. The Rule of Three holds that if the programmer had to define one of them, the compiler-generated version does not fit the needs of the class in that case and will probably not fit in the other cases either, which can lead to runtime errors.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Other warnings&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.viva64.com/en/d/0302/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;V672&lt;/a&gt; There is probably no need in creating the new &apos;res&apos; variable here. One of the function&apos;s arguments possesses the same name and this argument is a reference. Check lines: 367, 393. IORoutingTable.cpp 393&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
bool IOJobManager::initActiveJob (
    SmartPtr&amp;lt;JobPool&amp;gt;&amp; jobPool,
    const IOPackage::PODHeader&amp; pkgHeader,
    const SmartPtr&amp;lt;IOPackage&amp;gt;&amp; package,
    Job*&amp; job,                               //&amp;lt;==
    bool urgent )
{
....
while ( it != jobPool-&amp;gt;jobList.end() ) {
    Job* job = *it;                      //&amp;lt;==
    if ( isJobFree(job) ) {
        ++freeJobsNum;
        // Save first job
        if ( freeJobsNum == 0 ) {
            freeJob = job;
            firstFreeJobIter = it;
        }
    }
    ++it;
}
....
}
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;It is strongly recommended not to declare variables bearing the same names as function arguments. You should generally avoid identical names for local and global variables as well; otherwise, careless use of such variables can cause a variety of errors. Besides broken execution logic, you may face an issue where, for instance, a pointer refers to a local object that is later destroyed; since it takes some time before the memory area is reused, such an error manifests itself only intermittently.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Another issue of this kind:&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.viva64.com/en/d/0302/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;V672&lt;/a&gt; There is probably no need in creating the new &apos;job&apos; variable here. One of the function&apos;s arguments possesses the same name and this argument is a reference. Check lines: 337, 391. IOSendJob.cpp 391&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.viva64.com/en/d/0225/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;V610&lt;/a&gt; Undefined behavior. Check the shift operator &apos;&amp;lt;&amp;lt;&apos;. The left operand &apos;~0&apos; is negative. util.c 1046&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
int parse_ip(const char *str, struct vzctl_ip_param **ip)
{
    ....
    if (family == AF_INET)
        mask = htonl(~0 &amp;lt;&amp;lt; (32 - mask));
    ....
}
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;The analyzer has detected a shift operation leading to undefined behavior. The reason is that &apos;~0&apos; is a signed int, its value after inversion is negative, and left-shifting a negative number is undefined behavior under the C and C++ standards. The type should be made unsigned explicitly.&lt;br /&gt;&lt;br /&gt;Correct code:&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
int parse_ip(const char *str, struct vzctl_ip_param **ip)
{
    ....
    if (family == AF_INET)
        mask = htonl(~0u &amp;lt;&amp;lt; (32 - mask));
    ....
}
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;Two more warnings of this kind:&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.viva64.com/en/d/0225/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;V610&lt;/a&gt; Undefined behavior. Check the shift operator &apos;&amp;lt;&amp;lt;&apos;. The left operand &apos;~0&apos; is negative. util.c 98&lt;br /&gt;&lt;br /&gt;V610 Undefined behavior. Check the shift operator &apos;&amp;lt;&amp;lt;&apos;. The left operand &apos;~0&apos; is negative. vztactl.c 187&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.viva64.com/en/d/0137/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;V547&lt;/a&gt; Expression &apos;limit &amp;lt; 0&apos; is always false. Unsigned type value is never &amp;lt; 0. io.c 80&lt;br /&gt;&lt;br /&gt;&lt;pre&gt;
int
vz_set_iolimit(struct vzctl_env_handle *h, unsigned int limit)
{
    int ret;
    struct iolimit_state io;
    unsigned veid = eid2veid(h);
    if (limit &amp;lt; 0)
        return VZCTL_E_SET_IO;
    ....
}
&lt;/pre&gt;&lt;br /&gt;&lt;br /&gt;The analyzer has detected an invalid conditional expression in this function. An unsigned variable can never be less than zero, so the condition is probably just redundant; it is also possible, however, that detection of an invalid state was meant to be implemented in some other way.&lt;br /&gt;&lt;br /&gt;Another similar fragment:&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.viva64.com/en/d/0137/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;V547&lt;/a&gt; Expression &apos;limit &amp;lt; 0&apos; is always false. Unsigned type value is never &amp;lt; 0. io.c 131&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Conclusion&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;At first, the developers suggested checking an already existing beta version to be released soon - the one &quot;.... where we will at least rewrite all the planned product parts and fix the bugs found during the testing stage&quot;. But that would hugely contradict the static analysis ideology! Static analysis is most efficient and beneficial when used at the early development stages and on a regular basis. A one-time check of your project may help improve the code at some point, but the overall quality will remain at the same low level. Delaying code analysis till the testing stage is the greatest mistake however you look at it - either as a manager or a developer. It is cheapest to fix a bug at the coding stage!&lt;br /&gt;&lt;br /&gt;This idea is discussed in more detail by my colleague Andrey Karpov in the article &lt;a href=&quot;http://www.viva64.com/en/b/0336/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;&quot;How Do Programs Run with All Those Bugs At All?&quot;&lt;/a&gt; I especially recommend reading the sections &quot;No need to use PVS-Studio then?&quot; and &quot;PVS-Studio is needed!&quot; And may your code stay bugless!&lt;br /&gt;&lt;br /&gt;P.S. The full PVS-Studio logs are available &lt;a href=&quot;http://bronevichok.ru/trash/PVS-StudioLogsForOpenVZ.7z&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;here&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;&lt;a name=&apos;cutid1-end&apos;&gt;&lt;/a&gt;</description>
  <comments>https://openvz.livejournal.com/50776.html?view=comments#comments</comments>
  <category>openvz</category>
  <category>static analysis</category>
  <lj:security>public</lj:security>
  <lj:poster>estetus</lj:poster>
  <lj:posterid>12957684</lj:posterid>
  <lj:reply-count>0</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/50678.html</guid>
  <pubDate>Wed, 01 Jul 2015 21:35:00 GMT</pubDate>
  <title>OpenVZ / Virtuozzo 7 First Impressions</title>
  <author>dowdle</author>
  <link>https://openvz.livejournal.com/50678.html</link>
<description>Odin and the OpenVZ Project announced the beta release of a new version of Virtuozzo today. This is also the next version of OpenVZ, as the two are merging closer together.&lt;br /&gt;&lt;br /&gt;There will eventually be two distinct versions... a free version and a commercial version. So far as I can tell they currently call it Virtuozzo 7, but in a comparison wiki page they use the column names Virtuozzo 7 OpenVZ (V7O) and Virtuozzo 7 Commercial (V7C). The original OpenVZ, which is still considered the stable OpenVZ release at this time and is based on the EL6-based OpenVZ kernel, appears to be called OpenVZ Legacy.&lt;br /&gt;&lt;br /&gt;Odin had previously released the source code to a number of the Virtuozzo tools and followed that up with the release of spec-like source files used by Virtuozzo&apos;s vztt OS Template build system. The plan is to migrate away from the OpenVZ-specific tools (like vzctl, vzlist, vzquota, and vzmigrate) to the Virtuozzo-specific tools, although there will probably be some overlap for a while.&lt;br /&gt;&lt;br /&gt;The release includes source code, binary packages and a bare-metal distro installer DVD iso.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Bare Metal Installer&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;I got a chance to check out the bare-metal installer today inside of a KVM virtual machine. I must admit that I&apos;m not very familiar with previous Virtuozzo releases but I am a semi-expert when it comes to OpenVZ. Getting used to the new system is taking some effort but it will all be for the better.&lt;br /&gt;&lt;br /&gt;I didn&apos;t make any screenshots yet of the installer... I may do that later... but it is very similar to that of RHEL7 (and clones) because it is built by and based on CloudLinux... which is based on EL7.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;CloudLinux Confusion&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;What is CloudLinux? CloudLinux is a company that makes a commercial multi-tenant hosting product... 
that appears to provide container (or container-like) isolation as well as Apache and PHP enhancements specifically for multi-tenant hosting needs. CloudLinux also offers KernelCare-based reboot-less kernel updates. CloudLinux is definitely independent from Odin, and the CloudLinux products are in no way related to Virtuozzo. Odin and CloudLinux are partners, however.&lt;br /&gt;&lt;br /&gt;Why is the distro based on CloudLinux, and does one need a CloudLinux subscription to use it? Well, it turns out that Odin really didn&apos;t want to put forth all of the effort and time required to produce a completely new EL7 clone. CloudLinux is already an expert at that... so Odin partnered with CloudLinux to produce an EL7-based distro for Virtuozzo 7. While CloudLinux built it and (I think) there are a few underlying CloudLinux packages, everything included is FOSS (Free and Open Source Software). It DOES NOT and WILL NOT require a CloudLinux subscription to use... because it is not related to CloudLinux&apos;s product line nor does it contain any of the CloudLinux product features.&lt;br /&gt;&lt;br /&gt;The confusion was increased when I did a yum update post-install and it failed with a yum repo error asking me to register with CloudLinux. Turns out that is a bug in this initial release and registration is NOT needed. There is a manual fix: edit a repo file in /etc/yum.repos.d/ and replace the incorrect base and updates URLs with working ones. This and other bugs that are sure to crop up will be addressed in future iso builds, which are currently slated for weekly release... as well as daily package builds and updates available via yum.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;More Questions, Some Answers&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;So this is the first effort to merge Virtuozzo and OpenVZ together... and again... me being very Virtuozzo ignorant... there is a lot to learn. How does the new system differ from OpenVZ? 
What are the new features coming from Virtuozzo? I don&apos;t know if I can answer every conceivable question but I was able to publicly chat with Odin&apos;s sergeyb in the #openvz IRC channel on the Freenode IRC network. I also emailed the CloudLinux folks and got a reply back. Here&apos;s what I&apos;ve been able to figure out so far.&lt;br /&gt;&lt;br /&gt;Why CloudLinux? - I mentioned this already above: Odin didn&apos;t want to engineer their own EL7 clone so they got CloudLinux to do it for them. It was built specifically for Virtuozzo, is not related to any of the CloudLinux products... and you do not need a subscription from either Odin or CloudLinux to use it.&lt;br /&gt;&lt;br /&gt;What virtualization does it support? - Previous Virtuozzo products supported not only containers but a proprietary virtual machine hypervisor made by Odin/Parallels. In Virtuozzo 7 (both OpenVZ and Commercial so far as I can tell) the proprietary hypervisor has been replaced with the one built into the Linux kernel... KVM. See: &lt;a target=&apos;_blank&apos; href=&apos;https://openvz.org/QEMU&apos; rel=&apos;nofollow&apos;&gt;https://openvz.org/QEMU&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;How about libvirt support? - Anyone familiar with EL7&apos;s default libvirtd setup for KVM will be happy to know that it is maintained. libvirtd is running by default and the network interfaces you&apos;d expect to be there, are. virsh and virt-manager should work as expected for KVM.&lt;br /&gt;&lt;br /&gt;Odin has been doing some libvirt development and supposedly both virsh and virt-manager should work with VZ7 containers. They are working with upstream. libvirt has supposedly supported OpenVZ for some time but there weren&apos;t any client applications that supported OpenVZ. That is changing. 
See: &lt;a target=&apos;_blank&apos; href=&apos;https://openvz.org/LibVirt&apos; rel=&apos;nofollow&apos;&gt;https://openvz.org/LibVirt&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Command line tools? - OpenVZ&apos;s vzctl is there, as is Virtuozzo&apos;s prlctl.&lt;br /&gt;&lt;br /&gt;How about GUIs or web-based management tools? - That seems to be unclear at this time. I believe V7C will offer web-based management but I&apos;m not sure about V7O. As mentioned in the previous question, virt-manager... which is a GUI management tool... should be usable for both containers and KVM VMs. virsh / virt-manager support for VZ7 containers remains to be seen but it is definitely on the roadmap.&lt;br /&gt;&lt;br /&gt;Any other new features? - Supposedly VZ7 has a fourth-generation resource management system that I don&apos;t know much about yet. Other than the most obvious stuff (EL7-based kernel, KVM, libvirt support, Virtuozzo tools, etc), I haven&apos;t had time to absorb much yet so unfortunately I can&apos;t speak to many of the new features. I&apos;m sure there are tons.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;About OS Templates&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;I created a CentOS 6 container on the new system... and rather than downloading a pre-created OS Template that is a big .tar.gz file (as with OpenVZ Legacy) it downloaded individual rpm packages. It appears to build OS Templates from current packages on demand BUT it uses a caching system whereby it will hold on to previously downloaded packages in a cache directory somewhere under /vz/template/. If the desired OS Template doesn&apos;t already exist in /vz/template/cache/, the required packages are downloaded, a temporary ploop image is made, the packages are installed, and then the ploop disk image is compressed and added to /vz/template/cache/ as a pre-created OS Template. So the end result for my CentOS 6 container was /vz/template/cache/centos-6-x86_64.plain.ploopv2.tar.lz4. 
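As a rough sketch of that cache layout (Python; the naming pattern is inferred solely from my one CentOS 6 example and is an assumption, not documented behavior):

```python
# Hypothetical helper illustrating the OS Template cache naming
# observed above; the pattern is inferred from a single example
# (centos-6-x86_64) and may not hold for other distros.

def template_cache_path(distro, version, arch):
    """Path where vztt appears to store a pre-created OS Template."""
    name = f"{distro}-{version}-{arch}"
    return f"/vz/template/cache/{name}.plain.ploopv2.tar.lz4"

print(template_cache_path("centos", 6, "x86_64"))
```

On a real VZ7 host the cache gets populated by container creation itself; my guess at the command syntax would be something like prlctl create ct101 --vmtype ct --ostemplate centos-6.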
I manually downloaded an OpenVZ Legacy OS Template and placed it in /vz/template/cache but it was ignored, so at this time I do not think they are compatible / usable.&lt;br /&gt;&lt;br /&gt;The only OS Template available at the time of writing was CentOS 6, but I assume they&apos;ll eventually have all of the various Linux distros available as in the past... both rpm and deb based ones. We&apos;ll just have to wait and see.&lt;br /&gt;&lt;br /&gt;As previously mentioned, Odin has already released the source code to vztt (Virtuozzo&apos;s OS Template build system) as well as some source files for CentOS, Debian and Ubuntu template creation. They have also admitted that, coming from closed source, vztt is a bit over-complicated and not easy to use. They plan on changing that ASAP, but help from the community would definitely be appreciated.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;How about KVM VMs?&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;I&apos;m currently on vacation and only have access to a laptop running Fedora 22... that I&apos;m typing this from... and didn&apos;t want to nuke it... so I installed the bare-metal distro inside of a KVM virtual machine. I didn&apos;t really want to try nested KVM, as that would definitely not have been a legitimate test of the new system... but I expect libvirtd, virsh, and virt-manager to work and behave as expected.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Conclusion&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;Despite the lack of perfection in this initial release, Virtuozzo 7 shows a lot of promise. While it is a bit jarring coming from OpenVZ Legacy... with all of the changes... the new features... especially KVM... really stand out, and I&apos;ll be watching all of the updates as they happen. There certainly is a lot of work left to do but this is definitely a good start.&lt;br /&gt;&lt;br /&gt;I&apos;d love to hear from other users about their experiences.  
If I&apos;ve made any mistakes in my analysis, please correct me immediately.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Congrats Odin and OpenVZ!&lt;/b&gt; I only wish I had a glass of champagne and could offer up a respectable toast... and that there were others around me to clink glasses with. :)</description>
  <comments>https://openvz.livejournal.com/50678.html?view=comments#comments</comments>
  <category>openvz</category>
  <category>virtuozzo</category>
  <lj:security>public</lj:security>
  <lj:poster>dowdle</lj:poster>
  <lj:posterid>9725912</lj:posterid>
  <lj:reply-count>5</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/50178.html</guid>
  <pubDate>Wed, 01 Jul 2015 17:02:51 GMT</pubDate>
  <title>Publishing of Virtuozzo builds</title>
  <author>estetus</author>
  <link>https://openvz.livejournal.com/50178.html</link>
  <description>&lt;pre style=&quot;white-space: pre-wrap; color: rgb(0, 0, 0); line-height: normal;&quot;&gt;


We are ready to announce the &lt;a href=&quot;https://download.openvz.org/virtuozzo/releases/7.0-beta1/x86_64/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;publishing of binaries&lt;/a&gt; compiled from the &lt;a href=&quot;https://src.openvz.org/projects/OVZ&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;open components&lt;/a&gt;:
&lt;/pre&gt;&lt;br /&gt;&lt;ul&gt;&lt;br /&gt;&lt;li&gt;Virtuozzo installation ISO image&lt;/li&gt;&lt;br /&gt;&lt;li&gt;RPM packages (kernel and userspace)&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Source RPM packages (kernel and userspace)&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Debug RPM packages (kernel and userspace)&lt;/li&gt;&lt;br /&gt;&lt;li&gt;EZ templates (CentOS 7 x86_64, CentOS 6 x86_64 etc)&lt;/li&gt;&lt;br /&gt;&lt;/ul&gt;All installation paths are &lt;a href=&quot;https://openvz.org/Quick_installation&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;described in the OpenVZ wiki&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-size:1.4em;&quot;&gt;FAQ (Frequently Asked Questions)&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Q: Can we use the binaries or the Virtuozzo distribution in production?&lt;/b&gt;&lt;br /&gt;A: No. Virtuozzo 7 is at the pre-Beta stage and we strongly recommend avoiding any production use. We continue to develop new features and Virtuozzo 7 may contain serious bugs.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Q: Would it be possible to upgrade from Beta 1 to Beta 2?&lt;/b&gt;&lt;br /&gt;A: Upgrade will be supported only for OpenVZ installed on Cloud Linux (i.e. using the Virtuozzo installation image or OpenVZ installed using yum on Cloud Linux).&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Q: How often will you update the Virtuozzo 7 files?&lt;/b&gt;&lt;br /&gt;A: RPM packages (and the yum repository) - nightly; the ISO image - weekly.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Q: I don&amp;#39;t want to use your custom kernel or distribution. How can I use OpenVZ on my own Linux distribution? &lt;/b&gt; A: We plan to make OpenVZ available for vanilla kernels and we are working on it. If you want it - please help us with testing and contribute patches [2]. Note that using OpenVZ with a vanilla kernel will have some limitations because some required kernel changes are not upstream yet.</description>
  <comments>https://openvz.livejournal.com/50178.html?view=comments#comments</comments>
  <category>openvz</category>
  <category>beta</category>
  <category>virtuozzo core</category>
  <media:title type="plain">Avril Lavigne - What the Hell | Powered by Last.fm</media:title>
  <lj:music>Avril Lavigne - What the Hell | Powered by Last.fm</lj:music>
  <lj:security>public</lj:security>
  <lj:poster>estetus</lj:poster>
  <lj:posterid>12957684</lj:posterid>
  <lj:reply-count>0</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/50063.html</guid>
  <pubDate>Tue, 26 May 2015 23:17:51 GMT</pubDate>
  <title>Building a Fedora 22 MATE Desktop Container</title>
  <author>dowdle</author>
  <link>https://openvz.livejournal.com/50063.html</link>
  <description>Fedora 22 was released today.  Congrats Fedora Project!&lt;br /&gt;&lt;br /&gt;I updated the Fedora 22 OS Template I contributed so it was current with the release today... and for the fun of it I recorded a screencast showing how to make a Fedora 22 MATE Desktop GUI container... and how to connect to it via X2GO.&lt;br /&gt;&lt;br /&gt;Enjoy!&lt;br /&gt;&lt;br /&gt;&lt;lj-embed id=&quot;9&quot; /&gt;</description>
  <comments>https://openvz.livejournal.com/50063.html?view=comments#comments</comments>
  <category>openvz</category>
  <lj:security>public</lj:security>
  <lj:poster>dowdle</lj:poster>
  <lj:posterid>9725912</lj:posterid>
  <lj:reply-count>0</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/49501.html</guid>
  <pubDate>Fri, 01 May 2015 15:59:00 GMT</pubDate>
  <title>Kir&apos;s presentation from LFNW 2015</title>
  <author>dowdle</author>
  <link>https://openvz.livejournal.com/49501.html</link>
  <description>OpenVZ Project Leader Kir Kolyshkin gave a presentation on Saturday, April 25th, 2015 at LinuxFest Northwest entitled, &quot;OpenVZ, Virtuozzo, and Docker&quot;.  I recorded it but I think my sdcard was having issues because there are a few bad spots in the recording... but it is totally watchable.  Enjoy!&lt;br /&gt;&lt;br /&gt;&lt;lj-embed id=&quot;6&quot; /&gt;</description>
  <comments>https://openvz.livejournal.com/49501.html?view=comments#comments</comments>
  <category>openvz</category>
  <category>virtuozzo</category>
  <lj:security>public</lj:security>
  <lj:poster>dowdle</lj:poster>
  <lj:posterid>9725912</lj:posterid>
  <lj:reply-count>0</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/49158.html</guid>
  <pubDate>Fri, 26 Dec 2014 23:57:50 GMT</pubDate>
  <title>OpenVZ past and future</title>
  <author>k001</author>
  <link>https://openvz.livejournal.com/49158.html</link>
<description>Looking forward to 2015, we have very exciting news to share on the future of OpenVZ. But first, let&apos;s take a quick look into OpenVZ history.&lt;br /&gt;&lt;br /&gt;Linux Containers is an ancient technology, going back to the last century. Indeed, it was 1999 when our engineers started adding bits and pieces of container technology to Linux kernel 2.2. Well, not exactly &quot;containers&quot;, but rather &quot;virtual environments&quot; at that time -- as often happens with new technologies, the terminology was different (the term &quot;container&quot; was coined by Sun only five years later, in 2004).&lt;br /&gt;&lt;br /&gt;Anyway, in 2000 we ported our experimental code to kernel 2.4.0test1, and in January 2002 we already had Virtuozzo 2.0 released. From there it went on and on, with more releases, newer kernels, an improved feature set (like the addition of live migration) and so on.&lt;br /&gt;&lt;br /&gt;It was 2005 when we finally realized we had made a mistake in not employing the open source development model for the whole project from the very beginning. This is when OpenVZ was born as a separate entity, to complement commercial Virtuozzo (which was later renamed to Parallels Cloud Server, or PCS for short).&lt;br /&gt;&lt;br /&gt;Now it&apos;s time to admit -- over the course of years OpenVZ became just a little bit too separate, essentially becoming a fork (perhaps even a stepchild) of Parallels Cloud Server. While the kernel is the same between the two, the userspace tools (notably vzctl) differ. This results in slight incompatibilities between the configuration files, command line options etc. What&apos;s more, userspace development efforts need to be doubled.&lt;br /&gt;&lt;br /&gt;Better late than never; we are going to fix it now! 
&lt;b&gt;We are going to merge OpenVZ and Parallels Cloud Server into a single common open source code base.&lt;/b&gt; The obvious benefit for OpenVZ users is, of course, more features and better tested code. There will be other much anticipated changes, rolled out in a few stages.&lt;br /&gt;&lt;br /&gt;As a first step, &lt;b&gt;we will open the git repository of the RHEL7-based Virtuozzo kernel&lt;/b&gt; early next year (2015, that is). This has become possible as we changed the internal development process to be more git-friendly (before that we relied on lists of patches a la quilt, managed with a home-grown set of scripts). We have worked on this kernel for quite some time already, initially porting our patchset to kernel 3.6, then rebasing it to the RHEL7 beta, then to the final RHEL7. While it is still in development, we will publish it so anyone can follow the development process.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Our kernel development mailing list will also be made public.&lt;/b&gt; The big advantage of this change for those who want to participate in the development process is that you&apos;ll see our proposed changes discussed on this mailing list before the maintainer adds them to the repository, not just months later when the code is published; and we&apos;ll consider any patch sent to the mailing list. This should allow the community to become full participants in development rather than mere bystanders, as they were previously.&lt;br /&gt;&lt;br /&gt;Bug tracking systems have also diverged over time. Internally, we use JIRA (this is where all those PCLIN-xxxx and PSBM-xxxx codes come from), while OpenVZ relies on Bugzilla. &lt;b&gt;For the new unified product, we are going to open up JIRA&lt;/b&gt;, which we find to be more usable than Bugzilla. Similar to what Red Hat and other major Linux vendors do, we will limit access to security-sensitive issues in order not to compromise our user base.&lt;br /&gt;&lt;br /&gt;Last but not least, the name. 
We had a lot of discussions about naming, had a few good candidates, and finally unanimously agreed on this one:&lt;br /&gt;&lt;br /&gt;&lt;big&gt;&lt;b&gt;&lt;center&gt;Virtuozzo Core&lt;/center&gt;&lt;/b&gt;&lt;/big&gt;&lt;br /&gt;&lt;br /&gt;Please stay tuned for more news (including more formal press release from Parallels). Feel free to ask any questions as we don&apos;t even have a FAQ yet.&lt;br /&gt;&lt;br /&gt;Merry Christmas and a Happy New Year!</description>
  <comments>https://openvz.livejournal.com/49158.html?view=comments#comments</comments>
  <category>kernel</category>
  <category>openvz</category>
  <category>vzcore</category>
  <category>virtuozzo core</category>
  <category>git</category>
  <category>jira</category>
  <lj:security>public</lj:security>
  <lj:poster>k001</lj:poster>
  <lj:posterid>990679</lj:posterid>
  <lj:reply-count>29</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/49112.html</guid>
  <pubDate>Wed, 26 Nov 2014 22:36:08 GMT</pubDate>
  <title>On kernel branching</title>
  <author>k001</author>
  <link>https://openvz.livejournal.com/49112.html</link>
<description>This is a topic I always wanted to write about but was afraid my explanation would end up very cumbersome. This is no longer the case, as we now have a picture that is worth a thousand words!&lt;br /&gt;&lt;br /&gt;The picture describes how we develop kernel releases. It&apos;s a bit more complicated than the linearity of version 1 -&amp;gt; version 2 -&amp;gt; version 3. The reason is that we are balancing adding new features, fixing bugs, and rebasing to newer kernels, while trying to maintain stability for our users. This is our convoluted way of achieving all this:&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://ic.pics.livejournal.com/k001/990679/928/928_original.png&quot; target=&quot;_blank&quot;&gt;&lt;img src=&quot;https://ic.pics.livejournal.com/k001/990679/928/928_1000.png&quot; alt=&quot;kernel_tree-2.6.32-x&quot; title=&quot;kernel_tree-2.6.32-x&quot;&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;As you can see, we create a new branch when rebasing to a newer upstream (i.e. RHEL6) kernel, as regressions are quite common during a rebase. At the same time, we keep maintaining the older branch, in which we add stability and security fixes. Sometimes we create a new branch to add some bold feature that takes a longer time to stabilize. Stability patches are then forward-ported to the new branch, which either eventually becomes stable or is obsoleted by yet another new one.&lt;br /&gt;&lt;br /&gt;Of course there is a lot of work behind these curtains, including rigorous internal testing of new releases. 
In addition to that, we usually provide those kernels to our users (in the &lt;a href=&quot;http://openvz.org/Download/kernel/rhel6-testing&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;rhel6-testing&lt;/a&gt; repo) so they can test new stuff before it hits production servers, and we can fix more bugs earlier (&lt;a href=&quot;http://openvz.livejournal.com/45010.html&quot; target=&quot;_blank&quot;&gt;more on that here&lt;/a&gt;). If you are not taking part in this testing, well, it&apos;s never too late to start!</description>
  <comments>https://openvz.livejournal.com/49112.html?view=comments#comments</comments>
  <category>kernel</category>
  <category>openvz</category>
  <category>rhel6</category>
  <lj:security>public</lj:security>
  <lj:poster>k001</lj:poster>
  <lj:posterid>990679</lj:posterid>
  <lj:reply-count>0</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/48757.html</guid>
  <pubDate>Wed, 09 Jul 2014 20:05:13 GMT</pubDate>
  <title>Donation: a way to help or just say thanks</title>
  <author>k001</author>
  <link>https://openvz.livejournal.com/48757.html</link>
<description>&lt;p&gt;For many years, when people asked us how they could help the project, our typical answer was something like &quot;just use it, file bugs, spread the word&quot;. Some people were asking specifically about how to donate money, and we had to say &quot;currently we don&apos;t have a way to accept donations&quot;, so it was basically &quot;no, we don&apos;t need your money&quot;. It was not a good answer. Even though &lt;a href=&quot;http://www.parallels.com/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;our big sponsor&lt;/a&gt; is generously helping us with everything we need, that doesn&apos;t mean your money would be useless.&lt;/p&gt;

&lt;p&gt;Today we have opened a PayPal account to accept donations. Here:&lt;/p&gt;

&lt;p&gt;&lt;form action=&quot;https://www.paypal.com/cgi-bin/webscr&quot; method=&quot;post&quot; target=&quot;_top&quot;&gt;
&lt;input type=&quot;hidden&quot; name=&quot;cmd&quot; value=&quot;_s-xclick&quot;&gt;
&lt;input type=&quot;hidden&quot; name=&quot;hosted_button_id&quot; value=&quot;5MN5P6K74HUWE&quot;&gt;
&lt;input type=&quot;image&quot; src=&quot;https://www.paypalobjects.com/en_US/i/btn/btn_donateCC_LG.gif&quot; border=&quot;0&quot; name=&quot;submit&quot; alt=&quot;PayPal - The safer, easier way to pay online!&quot; style=&quot;border-style: none; background-color: transparent;&quot;&gt;
&lt;img alt=&quot;&quot; border=&quot;0&quot; src=&quot;https://www.paypalobjects.com/en_US/i/scr/pixel.gif&quot; width=&quot;1&quot; height=&quot;1&quot;&gt;
&lt;/form&gt;&lt;/p&gt;

&lt;p&gt;How are we going to spend your money? In general:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Hardware for development and testing&lt;/li&gt;
&lt;li&gt;Travel budget for conferences, events etc.&lt;/li&gt;
&lt;li&gt;Accolades for active contributors&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In particular, right now we need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;About $200 to cover shipping expenses for donated hardware&lt;/li&gt;
&lt;li&gt;About $300 for network equipment&lt;/li&gt;
&lt;li&gt;About $2000 for hard drives&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A &lt;a href=&quot;http://wiki.openvz.org/Donations&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;Donations page&lt;/a&gt; has been created on our wiki to track donations and spending. We hope that it will see some good progress in the coming days, with a little help from you!&lt;/p&gt;

&lt;p&gt;NOTE that if you feel like spending $500 or more, there is yet another way to spend it -- &lt;a href=&quot;http://www.parallels.com/support/virtualization-suite/openvz/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;a support contract from Parallels&lt;/a&gt;, with access to their excellent support team and 10 free support tickets.&lt;/p&gt;</description>
  <comments>https://openvz.livejournal.com/48757.html?view=comments#comments</comments>
  <category>paypal</category>
  <category>openvz</category>
  <category>donate</category>
  <lj:security>public</lj:security>
  <lj:poster>k001</lj:poster>
  <lj:posterid>990679</lj:posterid>
  <lj:reply-count>2</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/48634.html</guid>
  <pubDate>Thu, 12 Jun 2014 23:35:15 GMT</pubDate>
  <title>Yet more live migration goodness</title>
  <author>k001</author>
  <link>https://openvz.livejournal.com/48634.html</link>
<description>&lt;p&gt;It&apos;s been two months since we released vzctl 4.7 and ploop 1.11 with
&lt;a href=&quot;http://openvz.livejournal.com/47780.html&quot; target=&quot;_blank&quot;&gt;some vital improvements
to live migration&lt;/a&gt; that made it about 25% faster. Believe it or not,
this is another &quot;make it faster&quot; post, making it, well, faster.
That&apos;s right, we have more surprises in store for you! Read on and be delighted.&lt;/p&gt;

&lt;h2&gt;Asynchronous ploop send&lt;/h2&gt;

&lt;p&gt;This is something that should have been implemented at the very beginning,
but the initial implementation simply carried this comment (in C):&lt;/p&gt;

&lt;p&gt;&lt;pre&gt;
/* _XXX_ We should use AIO. ploopcopy cannot use cached reads and
 * has to use O_DIRECT, which introduces large read latencies.
 * AIO is necessary to transfer with maximal speed.
 */
&lt;/pre&gt;&lt;/p&gt;

&lt;p&gt;Why are there no cached reads (and why, in general, does the ploop library
use O_DIRECT, with the kernel also doing direct I/O, bypassing the cache)?
Note that ploop images are files used as block
devices containing a filesystem. Access to that ploop block device and
filesystem goes through the cache as usual, so if we did the same for the ploop
image itself, it would result in double caching and wasted RAM. Therefore,
we do direct I/O on the lower level (when working with ploop images), and allow
the usual Linux cache to be used on the upper level (when the container accesses
files inside the ploop, which is the most common operation).&lt;/p&gt;

&lt;p&gt;So, the ploop image itself is not (usually) cached in memory, other tricks
like read-ahead are not used by the kernel either, and reading each block from
the image takes time as it is read directly from disk. In our test lab
(OpenVZ running inside KVM with a networked disk) it takes about 10
milliseconds to read each 1 MB block. Then, sending each block to the
destination system over the network also takes time: about 15 milliseconds
in our setup. So, it takes 25 milliseconds to read and send a block of data
if we do it one after another. Oh wait, this is exactly how we do it!&lt;/p&gt;

&lt;p&gt;Solution -- let&apos;s do reading and sending at the same time! In other words,
while sending the first block of data we can already read the second one,
and so on, bringing the time required to transfer one block down to
MAX(Tread, Tsend) instead of Tread + Tsend.&lt;/p&gt;
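The arithmetic here is the classic two-stage pipeline formula; a tiny model of it (illustrative only, using the per-block timings measured above):

```python
def transfer_time(nblocks, t_read, t_send, pipelined):
    """Total time to copy nblocks when each block is first read, then sent.

    Sequential: every block costs t_read + t_send.
    Pipelined:  after the first read, reading block i+1 overlaps sending
    block i, so each further step costs max(t_read, t_send).
    """
    if not pipelined:
        return nblocks * (t_read + t_send)
    return t_read + (nblocks - 1) * max(t_read, t_send) + t_send
```

With the 10 ms reads and 15 ms sends from the test lab, the per-block cost drops asymptotically from 25 ms to 15 ms.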

&lt;p&gt;The implementation uses POSIX threads. The main ploop send process spawns a
separate sending thread and two buffers -- one being read into while the other
is being sent -- which then swap places. This works surprisingly well for
something that uses threads, maybe because it is as simple as it could ever be
(one thread, one mutex, one condition variable). If you happen to know something about pthreads
programming, please review the appropriate
&lt;a href=&quot;http://git.openvz.org/?p=ploop;a=commitdiff;h=a55e26e9606&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;commit 55e26e9606&lt;/a&gt;.&lt;/p&gt;
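For readers who do not speak pthreads, the same two-buffer scheme can be sketched in Python (an illustration of the approach described above, not the actual ploop C code):

```python
import threading

def ploop_send(read_block, send_block):
    """Overlapped copy: read the next block while the previous one is sent.

    read_block() returns the next block, or None at end of image;
    send_block(buf) ships one block to the destination.
    One worker thread, one condition variable, at most two blocks in flight.
    """
    cond = threading.Condition()
    pending = []          # at most one block waiting to be sent
    done = False

    def sender():
        while True:
            with cond:
                while not pending and not done:
                    cond.wait()
                if not pending and done:
                    return
                buf = pending.pop()
            send_block(buf)       # "network" I/O happens outside the lock
            with cond:
                cond.notify()     # a buffer slot is free again

    t = threading.Thread(target=sender)
    t.start()
    while True:
        buf = read_block()        # "disk" I/O overlaps the send above
        with cond:
            if buf is None:
                done = True
                cond.notify()
                break
            while pending:        # wait until the sender took the last one
                cond.wait()
            pending.append(buf)
            cond.notify()
    t.join()
```

Blocks are delivered in order, and the reader is never more than one block ahead of the sender, mirroring the double-buffering described in the post.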

&lt;p&gt;The new async send is used by default, you just need to have a newer ploop
(1.12, not yet released). Clone and compile ploop from git if you are curious.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;As for how much time we save using this, it happens to be 15 milliseconds
instead of 25 per block of data, or 40% faster! Now, the real savings depend
on the number of blocks that need to be migrated after the container is frozen --
it can be 5 or 100 -- so the overall savings can be from 0.05 to 1 second.&lt;/b&gt;&lt;/p&gt;

&lt;h2&gt;ploop copy with feedback&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;http://openvz.livejournal.com/47780.html&quot; target=&quot;_blank&quot;&gt;The previous post on live migration improvements&lt;/a&gt;
described an optimization of doing fdatasync() on the receiving
ploop side before suspending the container on the sending side. It also noted
that the implementation was sub-par:&lt;/p&gt;

&lt;p&gt;&lt;blockquote&gt;
The other problem is that sending side should wait for fsync to finish,
in order to proceed with CT suspend. Unfortunately, there is no way to solve
this one with a one-way pipe, so the sending side just waits for a few
seconds. Ugly as it is, this is the best way possible (let us know if
you can think of something better).
&lt;/blockquote&gt;&lt;/p&gt;

&lt;p&gt;So, the problem is trivial -- there&apos;s a need for a bi-directional
channel between the ploop copy sender and receiver, so the receiver can say
&quot;All right, I have synced the freaking delta&quot; back to the sender. In addition,
we want to do it in a simple way, not much more complicated than it is now.&lt;/p&gt;

&lt;p&gt;After some playing around with different approaches, it seemed
that &lt;a href=&quot;http://lmgtfy.com/?q=openssh+port+forwarding&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;OpenSSH
port forwarding&lt;/a&gt;, combined with tools like netcat, socat, or bash
/dev/tcp feature, can do the trick of establishing a two-way pipe
between ploop copy sides.&lt;/p&gt;

&lt;p&gt;Netcat (or nc) comes in various varieties, which might or might not be
available and/or compatible. socat is a bit better, but one problem with it
is that it ignores its child&apos;s exit code, so there is no (simple) way
to detect an error. A problem with the /dev/tcp feature is that it is
bash-specific, but bash itself might not be universally available
(Debian and Ubuntu users are well aware of that fact).&lt;/p&gt;

&lt;p&gt;So, all this was replaced with a tiny home-grown vznnc (&quot;nano-netcat&quot;) tool.
It is indeed nano, as there are
&lt;a href=&quot;http://git.openvz.org/?p=vzctl;a=blob;f=src/vznnc.c&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;only 200
lines of code&lt;/a&gt; in it. It can either listen on or connect to a specified
port at localhost (note that ssh is doing the real networking for us), and
run a program with its stdin/stdout (or a specified file descriptor)
already connected to that TCP socket. Again, similar to netcat, but
small, to the point, and hopefully bug-free.&lt;/p&gt;
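The essence of such a nano-netcat is small in any language; here is the same idea sketched in Python (illustrative only -- the real vznnc is a C program):

```python
import socket
import subprocess

def nano_netcat(port, cmd, listen=False):
    """Connect to (or listen on) localhost:port, then run cmd with its
    stdin/stdout attached directly to the TCP socket -- the essence of
    vznnc. ssh port forwarding is assumed to do the real networking."""
    if listen:
        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        srv.close()
    else:
        conn = socket.create_connection(("127.0.0.1", port))
    try:
        # the child talks straight to the socket, no extra copying
        return subprocess.run(cmd, stdin=conn.fileno(),
                              stdout=conn.fileno()).returncode
    finally:
        conn.close()
```

Running `nano_netcat(port, ["cat"], listen=True)` on one side gives the other side a simple echo service over the forwarded port, which is enough to carry a "sync finished" acknowledgement.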

&lt;p&gt;Finally, with this in place, we can make the sending side of ploop copy
wait for feedback from the receiving side, so it can suspend the container
as soon as the remote side has finished syncing. This makes the whole migration
a bit faster (by eliminating the previously hard-coded 5-second
wait-for-sync delay), but it also helps to slash the frozen time,
as we suspend the container exactly as soon as we should, so it won&apos;t be given
any extra time to write more data that we would then need to copy while
it&apos;s frozen.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;It&apos;s hard to measure the practical impact of the feature, but in our
tests it saves about 3.5 seconds of total migration time, and from 0
to a few seconds of frozen time, depending on the container&apos;s disk I/O
activity.&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;For the feature to work, you need the latest versions of vzctl (4.8)
and ploop (1.12) on both nodes (if one node doesn&apos;t have the newer tools,
vzmigrate falls back to the old non-feedback way of copying). Note that
these new versions are not yet released at the time of writing,
but you can get them from git and compile them yourself.&lt;/p&gt;

&lt;h2&gt;Using ssh connection multiplexing&lt;/h2&gt;

&lt;p&gt;It takes about 0.15 seconds to establish an ssh connection in our test lab.
Your mileage may vary, but it can&apos;t be zero. Well, unless an OpenSSH feature
for reusing an existing ssh connection is put to work! It&apos;s called connection
multiplexing, and when it is used, once a connection is established,
subsequent ones take practically no time. As vzmigrate is a shell script and
runs a number of commands on the remote (destination) side, using it might save
us some time.&lt;/p&gt;

&lt;p&gt;Unfortunately, it&apos;s a relatively new OpenSSH feature and is implemented
rather awkwardly in the version available in CentOS 6 -- you need to keep one
open ssh session running. This was fixed in a later version by adding a special
daemon-like mode enabled with the ControlPersist option, but alas not
in CentOS 6. Therefore, we have to maintain a special &quot;master&quot; ssh
connection for the duration of vzmigrate. For implementation details,
see commits
&lt;a href=&quot;http://git.openvz.org/?p=vzctl;a=commitdiff;h=06212ea3d&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;06212ea3d&lt;/a&gt;
and
&lt;a href=&quot;http://git.openvz.org/?p=vzctl;a=commitdiff;h=00b9ce043&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;00b9ce043&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This is still experimental, so you need to specify &lt;code&gt;--ssh-mux&lt;/code&gt;
flag to vzmigrate
to use it. &lt;b&gt;You won&apos;t believe it (so, go test it yourself!), but this alone
slashes container frozen time by about 25% (which is great, as we fight for
every microsecond here), and improves the total time taken by vzmigrate
by up to 1 second (which is probably not that important but still nice).&lt;/b&gt;&lt;/p&gt;
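For reference, the OpenSSH options involved look like this (standard ssh_config options; ControlPersist appeared in OpenSSH 5.6, while CentOS 6 ships 5.3, which is why vzmigrate keeps a long-running master connection instead):

```
# ~/.ssh/config -- reuse one TCP connection for several ssh sessions
Host destination-node
    ControlMaster auto
    ControlPath ~/.ssh/mux-%r@%h:%p
    # ControlPersist 10m   # newer OpenSSH only; on CentOS 6 a "master"
                           # ssh session must be kept open instead
```

With the master connection up, each additional ssh command reuses the existing channel instead of performing a fresh TCP and key-exchange handshake.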

&lt;h2&gt;What&apos;s next?&lt;/h2&gt;

&lt;p&gt;Current setup used for development and testing is two OpenVZ instances
running in KVM guests on a
&lt;a href=&quot;http://shop.lenovo.com/us/en/desktops/thinkcentre/m-series-tiny/m93-m93p/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;ThinkCentre
box&lt;/a&gt;. While it&apos;s convenient and oh so very space-saving, it is probably
not the best approximation of a real world OpenVZ usage. &lt;s&gt;So, we need
some better hardware:&lt;/p&gt;

&lt;blockquote class=&quot;twitter-tweet&quot; lang=&quot;en&quot;&gt;&lt;p&gt;Buy us a server to support openvz devel today: $500 &lt;a href=&quot;http://t.co/jZVil7OBOY&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;http://t.co/jZVil7OBOY&lt;/a&gt; or $2000 &lt;a href=&quot;http://t.co/GX9oJUv0oX&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;http://t.co/GX9oJUv0oX&lt;/a&gt;&lt;/p&gt;&amp;mdash; OpenVZ Containers (@_openvz_) &lt;a href=&quot;https://twitter.com/_openvz_/statuses/474652872521842689&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;June 5, 2014&lt;/a&gt;&lt;/blockquote&gt;

&lt;p&gt;&lt;b&gt;If you are able to spend $500 to $2000 on ebay, please
let us know by email to donate at openvz dot org so we can arrange it&lt;/b&gt;.
Now, quite a few people offered hosted servers with a similar configuration.
While we are very thankful to all such offers, this time we are looking
for physical, not hosted, hardware.&lt;/s&gt;&lt;/p&gt;

&lt;b&gt;Update:&lt;/b&gt; Thanks to FastVPS, we got the Supermicro 4 node server. If you want to donate, see &lt;a href=&quot;https://openvz.org/Donations&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;wiki: Donations&lt;/a&gt;.</description>
  <comments>https://openvz.livejournal.com/48634.html?view=comments#comments</comments>
  <category>hardware</category>
  <category>ploop</category>
  <category>openvz</category>
  <category>ebay</category>
  <category>help</category>
  <category>vzmigrate</category>
  <lj:security>public</lj:security>
  <lj:poster>k001</lj:poster>
  <lj:posterid>990679</lj:posterid>
  <lj:reply-count>3</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/47780.html</guid>
  <pubDate>Fri, 04 Apr 2014 02:45:52 GMT</pubDate>
  <title>ploop and live migration: 2 years later</title>
  <author>k001</author>
  <link>https://openvz.livejournal.com/47780.html</link>
<description>It has been almost two years since we wrote about &lt;a href=&quot;http://openvz.livejournal.com/41835.html&quot; target=&quot;_blank&quot;&gt;effective live migration with ploop write tracker&lt;/a&gt;. It&apos;s time to write some more about it, since we have managed to make ploop live migration yet more effective by means of some pretty simple optimizations. But let&apos;s not jump to the resolution yet: it&apos;s a long and interesting story to tell.

&lt;p&gt;As you know, live migration is not quite live, although it looks that way to a user. There is a short time period, usually a few seconds, during which the container being migrated is frozen. This time (shown if &lt;code&gt;-t&lt;/code&gt; or &lt;code&gt;-v&lt;/code&gt; option to &lt;code&gt;vzmigrate --live&lt;/code&gt; is used) is what needs to be optimized, making it as short as possible. In order to do that, one needs to dig into details on what&apos;s happening when a container is frozen.&lt;/p&gt;

&lt;p&gt;Typical timings obtained via &lt;code&gt;vzmigrate -t --live&lt;/code&gt; look like this. We ran a few iterations migrating a container back and forth between two OpenVZ instances (running inside Parallels VMs on the same physical machine), so there are a few columns at the right side.&lt;/p&gt;

&lt;pre&gt;
(Software: vzctl 4.6.1, ploop-1.10)

              Iteration     1     2     3     4     5     6    AVG

        Suspend + Dump:   0.58  0.71  0.64  0.64  0.91  0.74  0.703
   Pcopy after suspend:   1.06  0.71  0.50  0.68  1.07  1.29  0.885
        Copy dump file:   0.64  0.67  0.44  0.51  0.43  0.50  0.532
       Undump + Resume:   2.04  2.06  1.94  3.16  2.26  2.32  2.297
                        ------  ----  ----  ----  ----  ----  -----
  Total suspended time:   4.33  4.16  3.54  5.01  4.68  4.85  4.428
&lt;/pre&gt;

&lt;p&gt;Obviously, the first suspect to look at is that &quot;undump + resume&quot;. Basically, it shows the timing of the vzctl restore command. Why is it so slow? As it turns out, ploop mount takes a noticeable amount of time. Let&apos;s dig deeper into that process.&lt;/p&gt;

&lt;p&gt;First, &lt;a href=&quot;http://git.openvz.org/?p=ploop;a=commit;h=c17b5ab8f&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;implement timestamps in ploop messages&lt;/a&gt;, raise the log level, and see what is going on. It turns out that adding deltas is not instant; it takes anywhere from 0.1 second to almost a second. After some more experiments and thinking, it becomes obvious that since the ploop kernel driver works with the data in delta files directly, bypassing the page cache, it needs to force those files to be written to disk, and this costly operation happens while the container is frozen. Is it possible to do it earlier? Sure, we just need to force-write the deltas right after copying them, before suspending the container. Easy: just call fsync(), or better yet fdatasync(), since we don&apos;t really care about metadata being written.&lt;/p&gt;

&lt;p&gt;Unfortunately, there is no command-line tool to do fsync or fdatasync, so we had to &lt;a href=&quot;http://git.openvz.org/?p=vzctl;a=commitdiff;h=842ae637&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;write one&lt;/a&gt; and call it from vzmigrate. Is it any better now? Yes indeed, delta adding times went down from tenths to hundredths of a second.&lt;/p&gt;
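The helper itself is trivial. Here is a minimal sketch of what it does (the real tool is a small C utility in vzctl; this Python version and its naming are purely illustrative):

```python
import os

def force_to_disk(path):
    """Flush a file's data blocks to stable storage, like the small
    fsync helper vzmigrate calls on each copied delta.
    (Illustrative sketch; the real tool is a C program in vzctl.)"""
    fd = os.open(path, os.O_RDONLY)
    try:
        # fdatasync() is preferred over fsync(): it skips flushing
        # metadata (e.g. mtime) that we do not care about here
        os.fdatasync(fd)
    finally:
        os.close(fd)
```

Syncing each delta this way, right after it is copied and before the container is suspended, moves the expensive flush out of the frozen window.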

&lt;p&gt;Except for the top delta, of course, which we migrate using ploop copy. Surely, we can&apos;t fully fsync it before suspending the container, because we keep copying it afterwards. Oh wait... actually we can! By adding an fsync before CT suspend, we force the data copied so far to be written to disk, so the second fsync (which happens after everything is copied) will take less time. This time is shown as &quot;Pcopy after suspend&quot;.&lt;/p&gt;

&lt;p&gt;The problem is that ploop copy consists of two sides -- the sending one and the receiving one -- which communicate over a pipe (with ssh as a transport). It&apos;s the sending side that runs the command to freeze the container, and it&apos;s the receiving side that should do the fsync, so we need to pass some sort of &quot;do the fsync&quot; command. Better yet, we should do it without breaking the existing protocol, so nothing bad happens if there is an older version of ploop on the receiving side.&lt;/p&gt;

&lt;p&gt;The &quot;do the fsync&quot; command ended up being a data block of 4 bytes; you can &lt;a href=&quot;http://git.openvz.org/?p=ploop;a=commitdiff;h=d56654a2a8&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;see the patch here&lt;/a&gt;. An older version will write these 4 bytes to disk, which is unnecessary but harmless, and a newer version will recognize it as a request to do an fsync.&lt;/p&gt;
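As a toy illustration of this kind of backward-compatible in-band signaling, consider the following sketch (the framing and the magic value here are invented for the example and are not ploop&apos;s actual wire format):

```python
import io
import struct

# Hypothetical 4-byte marker -- NOT ploop's real one. The point is that
# an old receiver treats it as ordinary data and just writes it out,
# while a new receiver treats it as an fsync request.
MAGIC_FSYNC = b"SYNC"

def send_block(stream, offset, data):
    """Frame one block as: 8-byte offset, 4-byte length, payload."""
    stream.write(struct.pack("<QI", offset, len(data)) + data)

def recv_block(stream):
    """Parse one frame. A payload equal to MAGIC_FSYNC means
    'fsync now' to a new receiver; an old one would simply write
    the 4 bytes to disk (unnecessary but harmless)."""
    header = stream.read(12)
    if len(header) < 12:
        return None  # end of stream
    offset, length = struct.unpack("<QI", header)
    payload = stream.read(length)
    if payload == MAGIC_FSYNC:
        return ("fsync", None)
    return ("write", (offset, payload))
```

Because the signal travels as a regular data block, the protocol itself never changes, which is exactly what keeps old receivers working.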

&lt;p&gt;The other problem is that the sending side must wait for the fsync to finish before proceeding with CT suspend. Unfortunately, there is no way to solve this with a one-way pipe, so the sending side just waits for a few seconds. Ugly as it is, this is the best we could do (let us know if you can think of something better).&lt;/p&gt;

&lt;p&gt;To summarize, what we have added is a couple of fsyncs (actually fdatasync(), since it is faster), and here are the results:&lt;/p&gt;
&lt;pre&gt;
(Software: vzctl 4.7, ploop-1.11)

              Iteration     1     2     3     4     5     6    AVG

        Suspend + Dump:   0.60  0.60  0.57  0.74  0.59  0.80  0.650
   Pcopy after suspend:   0.41  0.45  0.45  0.49  0.40  0.42  0.437 (-0.4)
        Copy dump file:   0.46  0.44  0.43  0.48  0.47  0.52  0.467
       Undump + Resume:   1.86  1.75  1.67  1.91  1.87  1.84  1.817 (-0.5)
                        ------  ----  ----  ----  ----  ----  -----
  Total suspended time:   3.33  3.24  3.12  3.63  3.35  3.59  3.377 (-1.1)
&lt;/pre&gt;

&lt;p&gt;As you can see, both &quot;Pcopy after suspend&quot; and &quot;Undump + Resume&quot; times decreased, shaving off about a second, which gives us roughly a 25% improvement. Now, taking into account that the tests were done on otherwise idle nodes with mostly idle containers, we suspect the benefit will be even more apparent under I/O load. Checking whether this statement is true will be your homework for today!&lt;/p&gt;

</description>
  <comments>https://openvz.livejournal.com/47780.html?view=comments#comments</comments>
  <category>ploop</category>
  <category>optimization</category>
  <category>openvz</category>
  <category>vzctl</category>
  <category>live migration</category>
  <lj:security>public</lj:security>
  <lj:poster>k001</lj:poster>
  <lj:posterid>990679</lj:posterid>
  <lj:reply-count>3</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/47610.html</guid>
  <pubDate>Thu, 03 Apr 2014 15:54:39 GMT</pubDate>
  <title>How about an OpenVZ CentOS Variant?</title>
  <author>dowdle</author>
  <link>https://openvz.livejournal.com/47610.html</link>
  <description>I&apos;ve used RHEL, CentOS and Fedora for many years... and as many of you already know... back in January, CentOS became a sponsored project of Red Hat.  For the upcoming CentOS 7 release they are going beyond just the normal release that is an as-perfect-as-possible clone of RHEL.  They have this concept of &lt;a href=&quot;https://www.centos.org/variants/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;variants&lt;/a&gt;... where Special Interest Groups (SIGs) are formed around making special-purpose builds of CentOS... spins or remixes if you will.  I don&apos;t know a lot about it yet but I think I have the basic concept correct.&lt;br /&gt;&lt;br /&gt;Looking at the numbers on &lt;a href=&quot;http://stats.openvz.org/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;stats.openvz.org&lt;/a&gt; I see:&lt;br /&gt;&lt;br /&gt;Top  host   distros&lt;br /&gt;-------------------&lt;br /&gt;CentOS	     56,725&lt;br /&gt;Scientific    2,471&lt;br /&gt;RHEL	        869&lt;br /&gt;Debian	        576&lt;br /&gt;Fedora	        111&lt;br /&gt;Ubuntu	         82&lt;br /&gt;Gentoo	         54&lt;br /&gt;openSUSE         18&lt;br /&gt;ALT Linux        10&lt;br /&gt;Sabayon	          6&lt;br /&gt;&lt;br /&gt;and&lt;br /&gt;&lt;br /&gt;Top 10  CT  distros&lt;br /&gt;-------------------&lt;br /&gt;centos	    245,468&lt;br /&gt;debian	    106,350&lt;br /&gt;ubuntu	     83,197&lt;br /&gt;OR	      8,354&lt;br /&gt;gentoo	      7,017&lt;br /&gt;pagoda	      4,024&lt;br /&gt;scientific    3,604&lt;br /&gt;fedora	      3,173&lt;br /&gt;seedunlimited 1,965&lt;br /&gt;&lt;br /&gt;Although reporting is optional, the popularity of CentOS as both an OpenVZ host and an OpenVZ container surely has to do with the fact that the two stable branches of the OpenVZ kernel are derived from RHEL kernels.&lt;br /&gt;&lt;br /&gt;Wouldn&apos;t it be nice if there were a CentOS variant that has the OpenVZ kernel and
utils pre-installed?  I think so.&lt;br /&gt;&lt;br /&gt;While I have made CentOS remixes in the past just for my own personal use... I have not had any official engagement with the CentOS community.  I was curious whether there are some OpenVZ users out there who are already affiliated with the CentOS Project and who might want to get together in an effort to start a SIG and ultimately an OpenVZ CentOS 7 variant.  Anyone?  I guess if not, I could make it a personal goal to build a CentOS and/or Scientific Linux 6-based remix that includes OpenVZ... as well as working on it after RHEL7 and its clones are released... and after the OpenVZ Project has released a stable branch based on the RHEL7 kernel.&lt;br /&gt;&lt;br /&gt;I will acknowledge up front that some of the top CentOS devs / contributors have historically been fairly nasty to OpenVZ users on the #centos IRC channel.  They generally did not want to help someone using a CentOS system running under an OpenVZ kernel... but then again... their reputation is for being obnoxious to many groups of people. :)  I don&apos;t think we should let that stop us.&lt;br /&gt;&lt;br /&gt;Comments, feedback, questions?</description>
  <comments>https://openvz.livejournal.com/47610.html?view=comments#comments</comments>
  <category>openvz</category>
  <category>centos</category>
  <lj:security>public</lj:security>
  <lj:poster>dowdle</lj:poster>
  <lj:posterid>9725912</lj:posterid>
  <lj:reply-count>2</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/47181.html</guid>
  <pubDate>Wed, 05 Mar 2014 20:22:11 GMT</pubDate>
  <title>N problems of Linux Containers</title>
  <author>k001</author>
  <link>https://openvz.livejournal.com/47181.html</link>
  <description>&lt;a href=&quot;http://lwn.net/SubscriberLink/588309/e430e474f22e673e/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;Seven problems with Linux containers&lt;/a&gt; is a detailed LWN.net description of the talk I gave at &lt;a href=&quot;http://socallinuxexpo.org/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;Southern California Linux Expo (aka SCALE)&lt;/a&gt; last week in Los Angeles. Couldn&apos;t have written it better myself!</description>
  <comments>https://openvz.livejournal.com/47181.html?view=comments#comments</comments>
  <category>lwn.net</category>
  <category>containers</category>
  <category>talk</category>
  <category>openvz</category>
  <category>scale12x</category>
  <category>problems</category>
  <lj:security>public</lj:security>
  <lj:poster>k001</lj:poster>
  <lj:posterid>990679</lj:posterid>
  <lj:reply-count>0</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/47071.html</guid>
  <pubDate>Wed, 05 Feb 2014 18:11:43 GMT</pubDate>
  <title>CRIU hangout on air</title>
  <author>k001</author>
  <link>https://openvz.livejournal.com/47071.html</link>
  <description>In light of the CRIU 1.1 release last week, we will be doing a hangout on air to talk about CRIU&apos;s past and future, and to answer your questions. The &lt;a href=&quot;https://plus.google.com/events/cfj8rg61m1uj6ns3pf6dd8f8me0&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;event page is here&lt;/a&gt;; it is happening as soon as this Friday, Feb 7th, at 6:00am PST / 9:00am EST / 15:00 CET / 18:00 MSK. Feel free to ask your questions now (go to the &lt;a href=&quot;https://plus.google.com/events/cfj8rg61m1uj6ns3pf6dd8f8me0&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;event page&lt;/a&gt; and click on &quot;Play&quot;).</description>
  <comments>https://openvz.livejournal.com/47071.html?view=comments#comments</comments>
  <category>openvz</category>
  <category>hangout on air</category>
  <category>criu</category>
  <lj:security>public</lj:security>
  <lj:poster>k001</lj:poster>
  <lj:posterid>990679</lj:posterid>
  <lj:reply-count>0</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/46644.html</guid>
  <pubDate>Sat, 28 Dec 2013 09:47:42 GMT</pubDate>
  <title>Why do we still use gzip for templates?</title>
  <author>k001</author>
  <link>https://openvz.livejournal.com/46644.html</link>
  <description>&lt;a href=&quot;http://openvz.org/Download/template/precreated&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;OpenVZ precreated OS templates&lt;/a&gt; are tarballs of pre-installed Linux distributions. While there are other ways to create a container, the easiest one is to take such a tarball and extract its contents. This is what takes 99.9% of the &lt;code&gt;vzctl create&lt;/code&gt; command&apos;s execution time.&lt;br /&gt;&lt;br /&gt;To save some space and improve download speeds, those tarballs are compressed with the good ol&apos; &lt;code&gt;gzip&lt;/code&gt; tool. For example, the CentOS 6 template tar.gz is about 200 MB in size, while the uncompressed tar would be about 550 MB. But why don&apos;t we use more efficient compression tools, such as bzip2 or xz? Say, the same CentOS 6 tarball, compressed by xz, is as lightweight as 120 MB! Here are the numbers again:&lt;br /&gt;&lt;br /&gt;centos-6-x86.tar.gz: 203M&lt;br /&gt;centos-6-x86.tar.xz: 122M&lt;br /&gt;centos-6-x86.tar: 554M&lt;br /&gt;&lt;br /&gt;So, why don&apos;t we switch to xz, which apparently looks way better? Well, there are other criteria to optimize for besides file size and download speed. In fact, the main optimization target is container creation speed! I just ran a quick non-scientific test on my notebook to prove my point, measuring the time it takes to run &lt;code&gt;tar xf&lt;/code&gt; on a tarball:&lt;br /&gt;&lt;br /&gt;time tar xf tar.gz: ~7 seconds&lt;br /&gt;time tar xf tar.xz: ~13 seconds&lt;br /&gt;&lt;br /&gt;See, it takes twice the time if we switch to xz!
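The size-versus-speed tradeoff is easy to reproduce in miniature with the Python standard library (synthetic repetitive data; absolute numbers will of course differ from real template tarballs):

```python
import gzip
import lzma
import time

def compare_compressors(data):
    """Compress the same payload with gzip and xz (lzma), recording
    compressed size and decompression time -- the two axes the
    gzip-vs-xz decision trades off."""
    results = {}
    for name, mod in (("gzip", gzip), ("xz", lzma)):
        blob = mod.compress(data)
        start = time.monotonic()
        assert mod.decompress(blob) == data  # round-trip sanity check
        results[name] = {"size": len(blob),
                         "decompress_s": time.monotonic() - start}
    return results
```

On typical input xz wins on size while gzip wins on decompression speed, which matches the measurements above.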
Note that this ratio didn&apos;t change much when I switched from a fast SSD to a (relatively slow) rotating hard disk drive:&lt;br /&gt;&lt;br /&gt;time tar xf tar.gz: ~8 seconds&lt;br /&gt;time tar xf tar.xz: ~16 seconds&lt;br /&gt;&lt;br /&gt;Note that while I call it non-scientific, I still ran each test at least three times, with proper syncs, rms, and cache drops in between.&lt;br /&gt;&lt;br /&gt;Now, do we want to trade a doubling of container creation time for saving 80 MB of disk space? We sure don&apos;t!</description>
  <comments>https://openvz.livejournal.com/46644.html?view=comments#comments</comments>
  <category>xz</category>
  <category>performance</category>
  <category>vzctl create</category>
  <category>openvz</category>
  <category>gzip</category>
  <lj:security>public</lj:security>
  <lj:poster>k001</lj:poster>
  <lj:posterid>990679</lj:posterid>
  <lj:reply-count>10</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/46523.html</guid>
  <pubDate>Thu, 21 Nov 2013 04:13:38 GMT</pubDate>
  <title>Video: Containers and the Cloud</title>
  <author>dowdle</author>
  <link>https://openvz.livejournal.com/46523.html</link>
  <description>James Bottomley, CTO of Server Virtualization at Parallels, gave a presentation entitled &quot;Containers and The Cloud: Do you need another virtual environment?&quot; on Oct 23.  The Linux Foundation recently posted it to YouTube.&lt;br /&gt;&lt;br /&gt;There is a lot of good information in the video, even for us OpenVZ folks.  Enjoy:&lt;br /&gt;&lt;br /&gt;&lt;lj-embed id=&quot;4&quot; /&gt;</description>
  <comments>https://openvz.livejournal.com/46523.html?view=comments#comments</comments>
  <category>containers</category>
  <category>openvz</category>
  <lj:security>public</lj:security>
  <lj:poster>dowdle</lj:poster>
  <lj:posterid>9725912</lj:posterid>
  <lj:reply-count>1</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/46181.html</guid>
  <pubDate>Thu, 07 Nov 2013 18:42:26 GMT</pubDate>
  <title>vzctl 4.6</title>
  <author>k001</author>
  <link>https://openvz.livejournal.com/46181.html</link>
  <description>The vzctl 4.6 build hit download.openvz.org (and its many &lt;a href=&quot;http://openvz.org/Download_mirrors&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;mirrors&lt;/a&gt; around the world) last week. Let&apos;s see what&apos;s in store, shall we?&lt;br /&gt;&lt;br /&gt;First and foremost, &lt;a href=&quot;http://openvz.livejournal.com/45831.html&quot; target=&quot;_blank&quot;&gt;&lt;b&gt;I/O limits&lt;/b&gt;, but the feature is already described in great detail&lt;/a&gt;. What I want to add is that the feature was sponsored by GleSYS Internet Services AB, and it is one of the first results of the &lt;a href=&quot;http://www.parallels.com/support/virtualization-suite/openvz/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;OpenVZ partnership&lt;/a&gt; program in action. This program is a great opportunity for you to help keep the project up and running, and also to experience our expert support service just in case you need it. Think of it as a two-way support option. Anyway, I digress. What was it about? Oh yes, vzctl.&lt;br /&gt;&lt;br /&gt;Second, &lt;b&gt;improvements to UBC settings in VSwap mode&lt;/b&gt;. Previously, if you set RAM and swap, all other UBC parameters not set explicitly were set to unlimited. Now they are just left unset (meaning that the default in-kernel setting is used, whatever it is). Plus, in addition to physpages and swappages, vzctl sets lockedpages and oomguarpages (to RAM) and vmguarpages (to RAM+swap).&lt;br /&gt;&lt;br /&gt;Also, there is a new parameter, vm_overcommit, and it works in the following way: if set, it is used as a multiplier to ram+swap to set privvmpages. In layman&apos;s terms, this is the ratio of real memory (ram+swap) to virtual memory (privvmpages). Again, physpages limits RAM, and physpages+swappages limits real memory &lt;b&gt;used&lt;/b&gt; by a container. On the other hand, privvmpages limits memory &lt;b&gt;allocated&lt;/b&gt; by a container. 
While it depends on the application, generally not all allocated memory is used -- sometimes allocated memory is 5 or 10 times more than used memory. What vm_overcommit gives you is a way to set this gap. For example, the command&lt;br /&gt;&lt;br /&gt;&lt;code&gt;vzctl set $CTID --ram 2G --swap 4G --vm_overcommit 3 --save&lt;/code&gt;&lt;br /&gt;&lt;br /&gt;tells OpenVZ to limit container $CTID to 2 GB of RAM, 4 GB of swap, and (2+4)*3, i.e. 18 GB, of virtual memory. That means this container can allocate up to 18 GB of memory, but can&apos;t use more than 6 GB. So, vm_overcommit is just a way to set privvmpages implicitly, as a function of physpages and swappages. Oh, and if you are lost in all those *pages, we have extensive documentation at &lt;a target=&apos;_blank&apos; href=&apos;http://openvz.org/UBC&apos; rel=&apos;nofollow&apos;&gt;http://openvz.org/UBC&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;A first version of the &lt;b&gt;vzoversell&lt;/b&gt; utility has been added. This is a proposed vzmemcheck replacement for VSwap mode. Currently it just summarizes the RAM and swap limits of all VSwap containers and compares them to the RAM and swap available on the host. Surely you can oversell RAM (as long as you have enough swap), but the sum of all RAM+swap limits should not exceed the RAM+swap on the node, and the main purpose of this utility is to check that constraint.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;vztmpl-dl&lt;/b&gt; got a new --list-orphans option. It lists all local templates that are not available from the download server(s) (and therefore can&apos;t be updated by vztmpl-dl). Oh, by the way, since vzctl 4.5 you can use &lt;code&gt;vztmpl-dl --update-all&lt;/code&gt; to refresh all templates (i.e. download an updated template if a newer version is available from a download server). 
For more details, see the &lt;a href=&quot;http://openvz.org/Man/vztmpl-dl.8&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;vztmpl-dl(8) man page&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;vzubc&lt;/b&gt; got some love, too. It now skips unlimited UBCs by default, in order to improve the signal-to-noise ratio. If you want the old behavior (i.e. all UBCs), use the -v flag.&lt;br /&gt;&lt;br /&gt;Surely, there is a bunch of other fixes and improvements; please &lt;a href=&quot;http://openvz.org/Download/vzctl/4.6&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;read the changelog&lt;/a&gt; if you want to know it all. One thing in particular is worth mentioning: a &lt;b&gt;hack to vzctl console&lt;/b&gt;. As you might know, in OpenVZ a container&apos;s console is sort of eternal, meaning you can attach to it before a container is even started, and it keeps its state even if you detach from it. That creates a minor problem, though -- if someone runs, say, vim in the console, then detaches and reattaches, vim does not redraw anything and the console shows nothing. To work around this, one needs to press Ctrl-L (it is also recognized by bash and other software). But it&apos;s a bit inconvenient to do that every time after reattaching. This is not required, though, if the terminal size has changed (i.e. you detach from the console, change your xterm size, then run vzctl console again), because in that case vim notices the change and redraws accordingly. So what vzctl now does after reattach is tell the underlying terminal its size twice -- first a wrong size (with the number of rows incremented), then the right size (that of the terminal vzctl is running in). 
This forces vim (or whatever is running on the container console) to redraw.&lt;br /&gt;&lt;br /&gt;Finally, the new vzctl (as well as the other utilities) is now in our &lt;b&gt;Debian wheezy repo&lt;/b&gt; at &lt;a target=&apos;_blank&apos; href=&apos;http://download.openvz.org/debian&apos; rel=&apos;nofollow&apos;&gt;http://download.openvz.org/debian&lt;/a&gt;, so Debian users are now on par with those using RPM-based distros, and can have the latest and greatest stuff as soon as it comes out -- the same as &lt;a href=&quot;http://openvz.livejournal.com/45345.html&quot; target=&quot;_blank&quot;&gt;we did with kernels&lt;/a&gt; some time ago.&lt;br /&gt;&lt;br /&gt;Enjoy, and don&apos;t forget to report bugs to &lt;a target=&apos;_blank&apos; href=&apos;http://bugzilla.openvz.org/&apos; rel=&apos;nofollow&apos;&gt;http://bugzilla.openvz.org/&lt;/a&gt;</description>
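For reference, the vm_overcommit arithmetic above boils down to a one-liner (a toy calculator mirroring the (RAM+swap)*vm_overcommit rule from this post, not vzctl&apos;s actual code):

```python
GB = 1024 ** 3

def privvmpages_limit(ram, swap, vm_overcommit):
    """privvmpages (the allocated-memory limit) is derived implicitly
    as (RAM + swap) * vm_overcommit, while physpages + swappages
    (the used-memory limit) stays at RAM + swap."""
    return (ram + swap) * vm_overcommit

# vzctl set $CTID --ram 2G --swap 4G --vm_overcommit 3 --save
# => may allocate up to (2+4)*3 = 18 GB, but use no more than 6 GB
```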
  <comments>https://openvz.livejournal.com/46181.html?view=comments#comments</comments>
  <category>donation</category>
  <category>openvz</category>
  <category>vzctl</category>
  <category>support</category>
  <lj:security>public</lj:security>
  <lj:poster>k001</lj:poster>
  <lj:posterid>990679</lj:posterid>
  <lj:reply-count>0</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/45831.html</guid>
  <pubDate>Wed, 30 Oct 2013 03:12:10 GMT</pubDate>
  <title>Yay to I/O limits!</title>
  <author>k001</author>
  <link>https://openvz.livejournal.com/45831.html</link>
  <description>&lt;p&gt;Today we are releasing a somewhat small but very important OpenVZ feature: &lt;b&gt;per-container disk I/O bandwidth and &lt;a href=&quot;http://en.wikipedia.org/wiki/IOPS&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;IOPS&lt;/a&gt; limiting.&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;OpenVZ has had an I/O priority feature for a while, which lets one set a per-container I/O priority -- a number from 0 to 7. It works like this: if two similar containers with similar I/O patterns but different I/O priorities run on the same system, the container with a prio of 0 (lowest) will get about 2-3 times less I/O speed than the container with a prio of 7 (highest). This works for some scenarios, but not all.&lt;/p&gt;

&lt;p&gt;So, I/O bandwidth limiting was introduced in &lt;a href=&quot;http://www.parallels.com/products/pcs/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;Parallels Cloud Server&lt;/a&gt;, and as of today it is available in OpenVZ as well. Using the feature is very easy: you set a limit for a container (in megabytes per second) and watch it obey the limit. For example, let&apos;s first try doing some I/O without any limit set:&lt;/p&gt;

&lt;pre&gt;
root@host# vzctl enter 777
root@CT:/# cat /dev/urandom | pv -c - &amp;gt;/bigfile
 88MB 0:00:10 [8.26MB/s] [         &amp;lt;=&amp;gt;      ]
^C
&lt;/pre&gt;

&lt;p&gt;Now let&apos;s set the I/O limit to 3 MB/s:&lt;/p&gt;

&lt;pre&gt;
root@host# vzctl set 777 --iolimit 3M --save
UB limits were set successfully
Setting iolimit: 3145728 bytes/sec
CT configuration saved to /etc/vz/conf/777.conf
root@host# vzctl enter 777
root@CT:/# cat /dev/urandom | pv -c - &amp;gt;/bigfile3
39.1MB 0:00:10 [   3MB/s] [         &amp;lt;=&amp;gt;     ]
^C
&lt;/pre&gt;

&lt;p&gt;If you run it yourself, you&apos;ll notice a speed spike at the beginning, after which the rate drops down to the limit. This is the so-called burstable limit at work; it allows a container to exceed its limit (by up to 3x) for a short time.&lt;/p&gt;
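Conceptually, such a burstable limit behaves like a token bucket that starts out full (a toy model with a made-up 3x capacity; OpenVZ&apos;s in-kernel accounting is not necessarily implemented this way):

```python
class BurstableLimit:
    """Toy token-bucket model of a burstable I/O limit: a sustained
    rate plus a budget that permits short bursts above it."""

    def __init__(self, rate_bps, burst_factor=3):
        self.rate = rate_bps                     # sustained bytes/sec
        self.capacity = burst_factor * rate_bps  # burst budget (up to 3x)
        self.tokens = self.capacity              # full bucket: initial spike

    def tick(self, seconds):
        """Refill the budget as time passes, capped at capacity."""
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def try_io(self, nbytes):
        """Allow the I/O only if there is budget left for it."""
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

Once the initial budget is spent, the container is held to the sustained rate, which is exactly the spike-then-plateau shape seen in the pv output above.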

&lt;p&gt;In the above example we tested writes. Reads work the same way, except when the data being read is in fact coming from the page cache (such as when you read a file you just wrote). In that case, no actual I/O is performed, and therefore no limiting takes place.&lt;/p&gt;

&lt;p&gt;The second feature is a limit on &lt;a href=&quot;http://en.wikipedia.org/wiki/IOPS&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;I/O operations per second, or just IOPS&lt;/a&gt;. For more info on what IOPS is, read the linked Wikipedia article -- all I will say here is that for traditional rotating disks the hardware capabilities are pretty limited (75 to 150 IOPS is a good guess, or 200 if you have high-end server-class HDDs), while for SSDs this is much less of a problem. The IOPS limit is set in the same way as iolimit (&lt;code&gt;vzctl set $CTID --iopslimit NN --save&lt;/code&gt;), although measuring its impact is trickier.&lt;/p&gt;

&lt;p&gt;Finally, to play with this stuff, you need:&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://openvz.org/Download/vzctl&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;vzctl&lt;/a&gt; 4.6 (or higher)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://openvz.org/Download/kernel/rhel6-testing&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;Kernel&lt;/a&gt; 042stab084.3 (or higher)&lt;/li&gt;
&lt;/ul&gt;

Note that the kernel with this feature is currently still in testing -- so if you haven&apos;t done so, it&apos;s time to read about &lt;a href=&quot;http://openvz.livejournal.com/45010.html&quot; target=&quot;_blank&quot;&gt;testing kernels&lt;/a&gt;.</description>
  <comments>https://openvz.livejournal.com/45831.html?view=comments#comments</comments>
  <category>kernel</category>
  <category>openvz</category>
  <category>iolimit</category>
  <category>vzctl</category>
  <category>iopslimit</category>
  <lj:security>public</lj:security>
  <lj:poster>k001</lj:poster>
  <lj:posterid>990679</lj:posterid>
  <lj:reply-count>15</lj:reply-count>
  </item>
  <item>
  <guid isPermaLink='true'>https://openvz.livejournal.com/45647.html</guid>
  <pubDate>Tue, 15 Oct 2013 16:11:53 GMT</pubDate>
  <title>Is OpenVZ obsoleted?</title>
  <author>k001</author>
  <link>https://openvz.livejournal.com/45647.html</link>
  <description>Oh, such a provocative subject! Not really. Many people do believe that OpenVZ is obsolete, and when I ask why, the three most popular answers are:&lt;br /&gt;&lt;br /&gt;1. The OpenVZ kernel is old and obsolete, because it is based on 2.6.32, while everyone in 2013 runs 3.x.&lt;br /&gt;2. LXC is the future, OpenVZ is the past.&lt;br /&gt;3. OpenVZ is no longer developed; it was even removed from Debian Wheezy.&lt;br /&gt;&lt;br /&gt;Let me try to address all these misconceptions, one by one.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;1. &quot;The OpenVZ kernel is old&quot;.&lt;/b&gt; Current OpenVZ kernels are based on kernels from Red Hat Enterprise Linux 6 (RHEL6 for short). This is the latest and greatest version of the enterprise Linux distribution from Red Hat, a company that is almost always at the top of the list of companies contributing to Linux kernel development (see &lt;a href=&quot;http://lwn.net/Articles/507986/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;1&lt;/a&gt;, &lt;a href=&quot;http://lwn.net/Articles/451243/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;2&lt;/a&gt;, &lt;a href=&quot;http://lwn.net/Articles/373405/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;3&lt;/a&gt;, &lt;a href=&quot;http://lwn.net/Articles/222773/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;4&lt;/a&gt; for a few random examples). While no kernel is ideal and bug-free, the RHEL6 one is a good real-world approximation of those qualities.&lt;br /&gt;&lt;br /&gt;What the people at Red Hat do for their enterprise Linux is take an upstream kernel and basically fork it, ironing out the bugs and cherry-picking security fixes, driver updates, and sometimes new features from upstream. They do so for about half a year or more before a release, so the released kernel already seems &quot;old and obsolete&quot; -- if one only looks at the kernel version number. 
Well, don&apos;t judge a book by its cover, and don&apos;t judge a kernel by its number. Of course it&apos;s neither old nor obsolete -- it&apos;s just more stable and secure. And then, after a release, it is very well maintained, with modern hardware support, regular releases, and prompt security fixes. This makes it a great base for the OpenVZ kernel. In a sense, we are standing on the shoulders of a red-hatted giant (and since this is open source, &lt;a href=&quot;http://openvz.livejournal.com/23621.html&quot; target=&quot;_blank&quot;&gt;they are standing just a little bit on our shoulders&lt;/a&gt;, too).&lt;br /&gt;&lt;br /&gt;RHEL7 is being worked on right now, and it will be based on some 3.x kernel (possibly 3.10). We will port the OpenVZ kernel to RHEL7 once it becomes available. In the meantime, the RHEL6-based OpenVZ kernel is the latest and greatest, so please don&apos;t be fooled by the fact that uname shows 2.6.32.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;2. OpenVZ vs LXC.&lt;/b&gt; The OpenVZ kernel was historically developed separately, i.e. aside from the upstream Linux kernel. This mistake was recognized in 2005, and since then we have kept working on merging OpenVZ bits and pieces into the upstream kernel. It has taken way longer than expected; we are still in the middle of the process, with some great stuff (like net namespaces and &lt;a href=&quot;http://criu.org/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;CRIU&lt;/a&gt;, more than 2000 changesets in total) merged, while some other features are still on our TODO list. In the future (another eight years? who knows...) OpenVZ kernel functionality will probably be fully upstream, so OpenVZ will just be a set of tools. We are happy to see that Parallels is not the only company interested in containers for Linux, so it might happen a bit earlier. 
For now, though, we still rely on our organic, non-GMO, home-grown kernel (although it is &lt;a href=&quot;http://wiki.openvz.org/Vzctl_for_upstream_kernel&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;already optional&lt;/a&gt;).&lt;br /&gt;&lt;br /&gt;Now, what is LXC? In fact, it is just another user-space tool (not unlike vzctl) that works on top of a recent upstream kernel (again, not unlike vzctl). As we work on merging our stuff upstream, the LXC tools will start using the new features and therefore benefit from this work. So far, at least half of the kernel functionality used by LXC was developed by our engineers, and while we don&apos;t work on the LXC tools, it would not be an overstatement to say that Parallels is the biggest LXC contributor.&lt;br /&gt;&lt;br /&gt;So, both OpenVZ and LXC are actively developed and have their future. We might even merge our tools at some point; the idea was briefly discussed during the last containers mini-conf at Linux Plumbers. LXC is not a successor to OpenVZ, though; they are two different projects, although not entirely separate ones (since the OpenVZ team contributes to the kernel a lot, and both tools use the same kernel functionality). OpenVZ is essentially LXC++, because it adds some more stuff that is not (yet) available in the upstream kernel (such as stronger isolation and better resource accounting, plus some auxiliary features like ploop).&lt;br /&gt;&lt;br /&gt;&lt;b&gt;3. OpenVZ no longer developed, removed from Debian&lt;/b&gt;. The Debian kernel team decided to drop the OpenVZ (as well as a few other) kernel flavors from Debian 7, a.k.a. Wheezy. This is completely understandable: kernel maintenance takes time and other resources, and they probably don&apos;t have enough. That doesn&apos;t mean, though, that OpenVZ is not being developed. 
It feels strange to even have to argue this, but please check our &lt;a href=&quot;http://openvz.org/News/updates&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;software updates page&lt;/a&gt; (or the &lt;a href=&quot;https://lists.openvz.org/pipermail/announce/&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;announce@ mailing list archives&lt;/a&gt;). We have made about 80 software releases this year so far. That amounts to about 2 releases every week. Most of those are new kernels. So no, it is in no way abandoned.&lt;br /&gt;&lt;br /&gt;As for Debian Wheezy, we provide our own repository with the OpenVZ kernel and tools, &lt;a href=&quot;http://openvz.livejournal.com/45345.html&quot; target=&quot;_blank&quot;&gt;as was announced just yesterday&lt;/a&gt;.</description>
  <comments>https://openvz.livejournal.com/45647.html?view=comments#comments</comments>
  <category>containers</category>
  <category>kernel</category>
  <category>lxc</category>
  <category>openvz</category>
  <category>ubuntu</category>
  <category>rhel6</category>
  <category>debian</category>
  <lj:security>public</lj:security>
  <lj:poster>k001</lj:poster>
  <lj:posterid>990679</lj:posterid>
  <lj:reply-count>7</lj:reply-count>
  </item>
</channel>
</rss>
