Article ID: 118784, created on Nov 21, 2013, last review on Jun 17, 2016

Applies to:
  • Virtuozzo
  • Virtuozzo containers for Linux
  • Virtuozzo hypervisor
  • Virtual Automation


Symptoms

A backup process for a container hangs at the "Preparing for backup operation" stage:

[root@vz ~]# vzabackup -F localhost -e 101
Starting backup operation for node ''...
* Operation with the Container test is started
* Backing up environment test locally
* Checking parameters
* Creating backup ed58e6f5-1b5e-dd4c-bfd3-a46ed2880b2a/20131117074642
* Adjusting backup type (full)
* Backup storage: receiving backup file
* Preparing for backup operation
...hangs here...

All processes inside the container are stuck in D-state (uninterruptible sleep), which leaves the container completely inaccessible:

[root@vz ~]# vzps -E 101 auxfwww
0         273955  0.0  0.0  5988  332 ?        Ds   Oct12   3:00  \_ /sbin/syslogd
0         274038  0.0  0.0 45340 1292 ?        Dl   Oct12   2:44  \_ /usr/sbin/monit -c /etc/monit/monitrc -s /var/lib/monit/monit.state
0         274515  0.0  0.0 27024  196 ?        D    Oct12   1:46  \_ /usr/sbin/vsftpd
0         274518  0.0  0.0 19340   12 ?        Ds   Oct12   0:00  \_ /usr/sbin/xinetd -pidfile /var/run/ -stayalive -inetd_compat -inetd_ipv6
114       274523  0.0  0.0 24704   92 ?        Ds   Oct12   3:25  \_ /usr/sbin/nrpe -c /etc/nagios/nrpe.cfg -d
0         274529  0.0  0.0 20912  228 ?        Ds   Oct12   0:14  \_ /usr/sbin/cron
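
As a quick way to check whether other containers are affected as well, all D-state processes on the node can be listed and mapped back to their containers. This is a generic sketch, not part of the original procedure: the awk filter simply matches the STAT column of ps, and vzpid (shipped with Virtuozzo) resolves a PID to the ID of the container it belongs to:

[root@vz ~]# ps auxwww | awk '$8 ~ /^D/'
[root@vz ~]# vzpid <PID>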

The vzlpl process on the hardware node, which is responsible for creating the backup, is in D-state as well:

[root@vz ~]# ps auxfwww | grep vzlpl
root      119734  0.1  0.2 364428 50708 ?        Dl   11:00   0:05  \_ /opt/pva/agent/bin/vzlpl /var/opt/pva/agent/tmp.oqwbvJ

Its kernel stack points to interaction with the NFS module:

[root@vz ~]# cat /proc/119734/stack
[<ffffffffa0436bb4>] rpc_wait_bit_killable+0x24/0x40 [sunrpc]
[<ffffffffa043714d>] __rpc_execute+0x13d/0x400 [sunrpc]
[<ffffffffa0437471>] rpc_execute+0x61/0xa0 [sunrpc]
[<ffffffffa042d5a0>] rpc_run_task+0xa0/0xe0 [sunrpc]
[<ffffffffa042d6e2>] rpc_call_sync+0x42/0x70 [sunrpc]
[<ffffffffa04f211d>] nfs3_rpc_wrapper.clone.0+0x3d/0xd0 [nfs]
[<ffffffffa04f2f07>] nfs3_proc_getattr+0x47/0x90 [nfs]
[<ffffffffa04df7f3>] __nfs_revalidate_inode+0xe3/0x220 [nfs]
[<ffffffffa04dfad6>] nfs_revalidate_inode+0x36/0x60 [nfs]
[<ffffffffa04dfbb6>] nfs_getattr+0x66/0x120 [nfs]
[<ffffffff8119c8e8>] vfs_getattr+0x38/0x70
[<ffffffff8119d8e0>] vfs_fstatat+0x60/0x80
[<ffffffff8119d91e>] vfs_lstat+0x1e/0x20
[<ffffffff8119dab4>] sys_newlstat+0x24/0x50
[<ffffffff8100b182>] system_call_fastpath+0x16/0x1b
[<ffffffffffffffff>] 0xffffffffffffffff


Cause

A hung NFS client on the hardware node or inside one of the containers causes the backup process to hang until the NFS client is stopped or the unavailable NFS server is brought back to a working state.

This behavior is recognized as PVA product bug PVA-33866.


Resolution

Identify the hung NFS client and resolve the situation.

Check the NFS mounts on the hardware node; the listing includes the mounts from all containers as well:

[root@vz ~]# cat /proc/mounts | grep nfs
/vz/root/106/mnt/nfsshare nfs rw,noatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=,mountvers=3,mountport=34443,mountproto=udp,local_lock=none,addr= 0 0

This example shows that there is an NFS mount inside container 106. Any attempt to access it hangs:

[root@vz ~]# ls /vz/root/106/mnt/nfsshare
...hangs here...
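
To confirm that the problem is on the NFS server side, the server can be probed directly from the hardware node. The sketch below assumes the server address is known (it is not shown in the mount output above); if these probes time out as well, the NFS server or the network path to it is unavailable:

[root@vz ~]# showmount -e <NFS_server_address>
[root@vz ~]# rpcinfo -t <NFS_server_address> nfs 3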

So, it is necessary either to get rid of the hung NFS mount (e.g. by stopping this container) or to restore the availability of the NFS server.
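
For example, for container 106 from the output above, either of the following may help release the hung mount (the forced lazy unmount is a generic NFS recovery step, not specific to PVA):

[root@vz ~]# vzctl stop 106 --fast
[root@vz ~]# umount -f -l /vz/root/106/mnt/nfsshare

Once the mount is released or the NFS server is reachable again, the D-state processes should recover and the backup can be restarted.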

The permanent fix for PVA-33866 is included in Parallels Virtual Automation 6.1 Update 1 (6.0-2695).

Search Words

pvaagent freeze

backup stuck

Shared webhosting server was unreachable during backup

backups leave sites offline

services are down

parallels high load

Preparing for backup operation

container hung up when creating a new backup

backup network

