Article ID: 116768, created on Aug 14, 2013, last review on Nov 25, 2014

  • Applies to:
  • Virtuozzo 6.0
  • Virtuozzo containers for Linux 4.7

Symptoms

The Parallels Cloud Server node hangs (or hangs and then crashes). In /var/log/messages, process stack traces similar to the following can be seen:

Jul  6 08:39:48 cs11 kernel: [278400.971458] INFO: task parallels-c2v-a:9416 blocked for more than 120 seconds.
Jul  6 08:39:48 cs11 kernel: [278400.971613] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul  6 08:39:48 cs11 kernel: [278400.971753] parallels-c2v D ffff88031c16ebe0     0  9416      1    0 0x00000080
Jul  6 08:39:48 cs11 kernel: [278400.971760]  ffff88031d3b1cc8 0000000000000086 0000000000000000 dead000000100100
Jul  6 08:39:48 cs11 kernel: [278400.971768]  dead000000200200 ffff880316ed99d8 00000000000000a6 0000000000000010
Jul  6 08:39:48 cs11 kernel: [278400.971773]  0000000000000000 0000000110918ff0 ffff88031c16f1a8 000000000001ea80
Jul  6 08:39:48 cs11 kernel: [278400.971778] Call Trace:
Jul  6 08:39:48 cs11 kernel: [278400.971791]  [<ffffffff81517bee>] __mutex_lock_slowpath+0x13e/0x180
Jul  6 08:39:48 cs11 kernel: [278400.971799]  [<ffffffff8109d210>] ? autoremove_wake_function+0x0/0x40
Jul  6 08:39:48 cs11 kernel: [278400.971804]  [<ffffffff81517a8b>] mutex_lock+0x2b/0x50
Jul  6 08:39:48 cs11 kernel: [278400.971809]  [<ffffffff814660e5>] rtnl_lock+0x15/0x20
Jul  6 08:39:48 cs11 kernel: [278400.971814]  [<ffffffff81459b2d>] dev_ioctl+0x11d/0x6f0
Jul  6 08:39:48 cs11 kernel: [278400.971821]  [<ffffffff8118c640>] ? ub_slab_ptr+0x20/0x90
Jul  6 08:39:48 cs11 kernel: [278400.971827]  [<ffffffff810b15fe>] ?  
...............
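
To check whether a node is affected, the hung task messages can be searched for in the system log, for example (a minimal check assuming the default /var/log/messages location):

[ root@pcs ]# grep "blocked for more than 120 seconds" /var/log/messages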

Cause

The issue has been recognized as a product bug with internal ID PSBM-21008.

Resolution

The fix is included in the CU-2.6.32-042stab078.28 kernel update.

To resolve this issue, update the PCS node:

[ root@pcs ]# yum update

Then reboot the node into the newly installed kernel.
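
For example, after the update completes, the node can be rebooted and the running kernel verified (a sketch using standard commands; the reported version should be 2.6.32-042stab078.28 or newer):

[ root@pcs ]# reboot
[ root@pcs ]# uname -r    # run after the reboot to confirm the new kernel is in use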

Search Words

hardware node load average increase too high

node hang

