Article ID: 123228, created on Oct 23, 2014, last review on Jun 17, 2016

  • Applies to:
  • Virtuozzo 6.0

Symptoms

Virtual machines are jumping to PAUSED status and back to RUNNING in cycles:

# grep PAUSED /var/log/parallels.log | head
06-14 04:02:05.534 F /disp:4155:266757/ Vm state was changed from VMS_PAUSED to VMS_RUNNING for vm {afa153d6-82b9-4501-822f-0e11dfa838a0} (name='MyVM'), to powerState 0(vpsNormal)
06-14 04:02:05.753 F /disp:4155:132997/ Vm state was changed from VMS_PAUSED to VMS_RUNNING for vm {a15e8bac-a0e8-4e05-ac42-1dfa46a1f760} (name='cloud.VM'), to powerState 0(vpsNormal)
06-14 04:02:05.762 F /disp:4155:318573/ Vm state was changed from VMS_RUNNING to VMS_PAUSED for vm {770880f3-8cf9-4af5-8d66-55ce69c23215} (name='VM'), to powerState 2(vpsPausedByVmFrozen)
06-14 04:02:05.899 F /disp:4155:14827/ Vm state was changed from VMS_PAUSED to VMS_RUNNING for vm {1e169800-c1ad-4b0b-848b-030e01c72bfe} (name='MyVM'), to powerState 0(vpsNormal)
06-14 04:02:05.926 F /disp:4155:17698/ Vm state was changed from VMS_RUNNING to VMS_PAUSED for vm {61d27f62-d3de-414a-b091-e3b562363b31} (name='test.VM'), to powerState 2(vpsPausedByVmFrozen)
06-14 04:02:05.942 F /disp:4155:20950/ Vm state was changed from VMS_PAUSED to VMS_RUNNING for vm {291af834-6568-4a75-9ce9-0969fea05e6f} (name='VM'), to powerState 0(vpsNormal)
06-14 04:02:07.196 F /disp:4155:266757/ Vm state was changed from VMS_RUNNING to VMS_PAUSED for vm {afa153d6-82b9-4501-822f-0e11dfa838a0} (name='MyVM2'), to powerState 2(vpsPausedByVmFrozen)
06-14 04:02:07.242 F /disp:4155:132997/ Vm state was changed from VMS_RUNNING to VMS_PAUSED for vm {a15e8bac-a0e8-4e05-ac42-1dfa46a1f760} (name='cloud.VM2'), to powerState 2(vpsPausedByVmFrozen)
06-14 04:02:07.251 F /disp:4155:318573/ Vm state was changed from VMS_PAUSED to VMS_RUNNING for vm {770880f3-8cf9-4af5-8d66-55ce69c23215} (name='VM'), to powerState 0(vpsNormal)
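
To confirm the cycling and see which VMs are affected the most, the freeze events can be counted per VM. This is a minimal sketch that assumes the log format shown above:

# grep vpsPausedByVmFrozen /var/log/parallels.log | grep -o "name='[^']*'" | sort | uniq -c | sort -rn

If many different VMs on the same host show up, shared storage is a more likely culprit than any single virtual disk.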

Cause

A virtual machine is automatically PAUSED if its disk I/O stays blocked for longer than 8 seconds. Running into this problem therefore indicates issues with the storage where the VM's virtual disk is placed.
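
Whether host disk I/O is actually stalling can be checked with extended I/O statistics while the pauses occur. A sketch assuming the sysstat package is installed (device names will differ per host):

# iostat -xm 5

Healthy devices show await values of a few milliseconds; values climbing towards seconds on the device backing the VM's virtual disk are consistent with I/O being blocked long enough to hit the 8-second threshold.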

Possible reasons (first checks for each are sketched after this list):

  • Local disk on the host is malfunctioning, e.g.:

    # grep udevd /var/log/messages
    Jun 14 05:58:48 chk2 udevd[479]: worker [4281] unexpectedly returned with status 0x0100
    Jun 14 05:58:48 chk2 udevd[479]: worker [4281] failed while handling '/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host1/target1:2:4/1:2:4:0/block/sdf/sdf2'
    Jun 14 06:17:01 chk2 udevd[986]: worker [6669] unexpectedly returned with status 0x0100
    Jun 14 06:17:01 chk2 udevd[986]: worker [6669] failed while handling '/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host0/target0:2:4/0:2:4:0/block/sde'
    Jun 14 06:24:30 chk2 udevd[986]: worker [6763] unexpectedly returned with status 0x0100
    Jun 14 06:24:30 chk2 udevd[986]: worker [6763] failed while handling '/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host0/target0:2:4/0:2:4:0/block/sde/sde1'
    Jun 14 06:46:45 chk2 udevd[986]: worker [7067] unexpectedly returned with status 0x0100
    Jun 14 06:46:45 chk2 udevd[986]: worker [7067] failed while handling '/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host0/target0:2:4/0:2:4:0/block/sde'
    Jun 14 07:50:15 chk2 udevd[986]: worker [2569] unexpectedly returned with status 0x0100
    Jun 14 07:50:15 chk2 udevd[986]: worker [2569] failed while handling '/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host0/target0:2:4/0:2:4:0/block/sde/sde1'
    
  • VM is stored on a disk attached over iSCSI, and iSCSI is malfunctioning (the storage itself is returning errors, connectivity is bad, multipath is not working correctly, etc.)

  • Pstorage is malfunctioning, leading to high I/O latency
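
First checks for each of the reasons above, as a sketch (/dev/sde and CLUSTER_NAME are placeholders to be replaced with the actual device and Pstorage cluster name):

  • Local disk health, using smartmontools:

    # smartctl -a /dev/sde

  • iSCSI session and multipath state:

    # iscsiadm -m session -P 3
    # multipath -ll

  • Pstorage cluster status and I/O counters:

    # pstorage -c CLUSTER_NAME stat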

Resolution

Depending on the host configuration, different countermeasures should be taken.
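
As a first step, it helps to find out where the affected VM's virtual disk actually resides, since that determines which check above applies. A sketch, with MyVM as a placeholder VM name:

# prlctl list -i MyVM | grep hdd

The image path in the output shows whether the disk lives on a local filesystem, on an iSCSI-backed device, or on a Pstorage mount point.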

Related articles:

112532 Chunk server failed suddenly, causing high load on the server

122650 [HOW TO] Verify if the SSD disk is healthy

Search Words

RUNNING PAUSED

VMS getting paused

FROZEN

automatically pause VMs

pcs

paused

running paused

entering disabled state

becup

RDP session hangs

VM hung stopping

