Article ID: 119300, created on Dec 24, 2013, last review on May 7, 2014

  • Applies to:
  • Virtuozzo 6.0

Symptoms

In a high availability cluster on Parallels Cloud Storage, if a hardware node crashes, its virtual machines and containers are moved to another node. How can I track the movement of these resources after the crash?

Resolution

  1. Right after the crash, the movement can be tracked in real time from a shell session on one of the working nodes using the shaman -c ClusterName top command.

  2. After the resources have been relocated, check the cluster state with the shaman -c ClusterName stat command and review /var/log/shaman.log on the master shaman node:

    # shaman -c pcs stat
     Cluster 'pcs'
     Nodes: 5
     Resources: 8
    
    
       NODE_IP      STATUS          RESOURCES
       10.1.2.11    Active          3 CT, 0 VM
    *M 10.1.2.12    Active          2 CT, 1 VM
       10.1.2.13    Active          1 CT, 0 VM
       10.1.2.14    Inactive        0 CT, 0 VM
       10.1.2.15    Active          1 CT, 0 VM
    
    
       CT ID        PWRR        STATUS      OWNER_IP    PRIORITY
       100500        on         Active      10.1.2.13    0
       101           on         Active      10.1.2.11    1
       1010          on         Active      10.1.2.15    0
       102           on         Active      10.1.2.11    1
       103           on         Active      10.1.2.11    1
       12345         on         Active      10.1.2.12    1
       50            on         Active      10.1.2.12    1
    
    
       VM NAME      PWRR        STATUS      OWNER_IP        PRIORITY
       MyVM         on          Active      10.1.2.12       0
    

    In this example, 10.1.2.12 is the master node (marked with *M).

    For example, if containers 50 and 100500 and the virtual machine MyVM were running on node 10.1.2.11 when it crashed, messages similar to the following will appear in /var/log/shaman.log on 10.1.2.12:

    16-11-13 12:18:48.678 ha: handle resource ct-100500
    16-11-13 12:18:48.678 ha: Will bring [ct-100500] resource UP locally
    16-11-13 12:18:48.678 shaman: execute 'pstorage -c pcs revoke -p -R private/100500'
    16-11-13 12:18:48.986 shaman: 'pstorage -c pcs revoke -p -R private/100500' completed
    16-11-13 12:18:53.833 shaman: Working on cluster pcs (action 15)
    16-11-13 12:18:54.289 shaman: Getting parameters for resource [ct-100500]
    16-11-13 12:18:54.290 ha: file .shaman/md.04394ad6c34a4c13/resources/ct-100500 does not exists, try seek it in pool
    16-11-13 12:18:55.023 shaman: Working on cluster pcs (action 15)
    16-11-13 12:18:55.502 shaman: Getting parameters for resource [ct-100500]
    16-11-13 12:18:55.504 ha: file .shaman/md.04394ad6c34a4c13/resources/ct-100500 does not exists, try seek it in pool
    16-11-13 12:18:57.599 shaman: Executed script /usr/share/shaman/relocate
    16-11-13 12:18:57.599 ha: Resource ct-100500 was relocated
    16-11-13 12:18:57.654 ha: resource ct-100500 started
    16-11-13 12:18:57.696 shaman: Executed script /usr/share/shaman/notify
    16-11-13 12:19:42.445 shaman: Working on cluster pcs (action 15)
    16-11-13 12:19:42.630 shaman: Getting parameters for resource [ct-100500]
    16-11-13 12:19:44.786 shaman: Working on cluster pcs (action 9)
    16-11-13 12:20:43.463 shaman: Working on cluster pcs (action 6)
    16-11-13 12:20:43.655 shaman: Move resource vm-MyVM from node md.24c75eb3a1b44644
    16-11-13 12:20:43.701 shaman: Set last node parameter for resource vm-MyVM on node md.04394ad6c34a4c13
    16-11-13 12:20:43.701 shaman: Setting parameters for resource [vm-MyVM]
    16-11-13 12:20:43.734 shaman: Successfully set parameters for resource [.shaman/md.04394ad6c34a4c13/resources/vm-MyVM]
    16-11-13 12:20:43.734 shaman: Resource vm-MyVM was moved from .shaman/md.24c75eb3a1b44644/resources/vm-MyVM to .shaman/md.04394ad6c34a4c13/resources/vm-MyVM
    ..
    16-11-13 12:20:55.100 shaman: Working on cluster pcs (action 6)
    16-11-13 12:20:55.821 shaman: Move resource ct-50 from node md.24c75eb3a1b44644
    16-11-13 12:20:55.878 shaman: Set last node parameter for resource ct-50 on node md.04394ad6c34a4c13
    16-11-13 12:20:55.878 shaman: Setting parameters for resource [ct-50]
    16-11-13 12:20:55.922 shaman: Successfully set parameters for resource [.shaman/md.04394ad6c34a4c13/resources/ct-50]
    16-11-13 12:20:55.922 shaman: Resource ct-50 was moved from .shaman/md.24c75eb3a1b44644/resources/ct-50 to .shaman/md.04394ad6c34a4c13/resources/ct-50
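
    Once the failover has completed, the individual relocation records can be pulled out of the log with a simple filter. A minimal sketch, with a two-line excerpt from the log above inlined so it runs standalone; on the master node you would grep /var/log/shaman.log itself:

    ```shell
    # Sample records copied from the log excerpt above; on a real node
    # you would instead run:
    #   grep -E 'was (moved|relocated)' /var/log/shaman.log
    excerpt='16-11-13 12:18:57.599 ha: Resource ct-100500 was relocated
    16-11-13 12:20:55.922 shaman: Resource ct-50 was moved from .shaman/md.24c75eb3a1b44644/resources/ct-50 to .shaman/md.04394ad6c34a4c13/resources/ct-50'

    # Keep only the lines that mark a completed relocation.
    moved=$(printf '%s\n' "$excerpt" | grep -E 'was (moved|relocated)')
    printf '%s\n' "$moved"
    count=$(printf '%s\n' "$moved" | wc -l)
    ```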
    

    Note the following record:

    Resource vm-MyVM was moved from .shaman/md.24c75eb3a1b44644/resources/vm-MyVM to .shaman/md.04394ad6c34a4c13/resources/vm-MyVM
    

    24c75eb3a1b44644 and 04394ad6c34a4c13 are the node IDs, which can be checked with the shaman -c ClusterName top command:

    # shaman -c pcs top
    

    and then press v to get verbose information:

        NODE_IP     STATUS       NODE_ID            RESOURCES
        10.1.2.11   Active      24c75eb3a1b44644   3 CT, 0 VM
     *M 10.1.2.12   Active      04394ad6c34a4c13   2 CT, 1 VM
        10.1.2.13   Active      c9191ffa6d1e41ff   1 CT, 0 VM
        10.1.2.14   Inactive    c33e93b822ac4b43   0 CT, 0 VM
        10.1.2.15   Active      d380e1143fc84c6d   1 CT, 0 VM
    
    
    
    CT ID       PWRR    STATUS  OWNER_IP    OWNER_ID            PRIORITY
    100500      on      Active  10.1.2.13   c9191ffa6d1e41ff     0
    101         on      Active  10.1.2.11   24c75eb3a1b44644     1
    1010        on      Active  10.1.2.15   d380e1143fc84c6d     0
    102         on      Active  10.1.2.11   24c75eb3a1b44644     1
    103         on      Active  10.1.2.11   24c75eb3a1b44644     1
    12345       on      Active  10.1.2.12   04394ad6c34a4c13     1
    50          on      Active  10.1.2.12   04394ad6c34a4c13     1
    
    
    VM NAME     PWRR    STATUS  OWNER_IP     OWNER_ID           PRIORITY
    MyVM        on      Active  10.1.2.12    04394ad6c34a4c13    0
    

    So, according to the log records, the resources were moved from md.24c75eb3a1b44644 (10.1.2.11) to md.04394ad6c34a4c13 (10.1.2.12).

    The log messages may differ slightly from the example above, but you can use the NODE_ID values and virtual machine names to track the resource movement.
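
    The source and destination node IDs can also be cut straight out of a "was moved" record instead of being read by eye. A minimal sketch, with the record from the log above inlined for a self-contained example; any standard sed will do:

    ```shell
    # The "was moved" record from the log above, inlined so the
    # commands run standalone.
    rec='16-11-13 12:20:43.734 shaman: Resource vm-MyVM was moved from .shaman/md.24c75eb3a1b44644/resources/vm-MyVM to .shaman/md.04394ad6c34a4c13/resources/vm-MyVM'

    # Capture the hex node ID after "from .shaman/md." and "to .shaman/md.".
    src=$(printf '%s\n' "$rec" | sed -n 's/.*from \.shaman\/md\.\([0-9a-f]*\)\/.*/\1/p')
    dst=$(printf '%s\n' "$rec" | sed -n 's/.*to \.shaman\/md\.\([0-9a-f]*\)\/.*/\1/p')
    echo "moved from node $src to node $dst"
    ```

    The IDs printed here can then be matched against the NODE_ID column of the verbose shaman top output to find the corresponding node IPs.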

  3. Another approach to check resource relocation:

    Each server has its own ID:

    # prlsrvctl info | grep ID
    ID: {0f124fd0-7499-46e2-a0ce-b0b8c9fd11e7}
    

    If you know the ID of the crashed server (for example, the server with ID db35178c-94db-46e3-ad80-c4e35cdf17f8 crashed) and a virtual machine has been relocated from it, then db35178c-94db-46e3-ad80-c4e35cdf17f8 will appear in the <LastServerUuid> tag of the virtual machine configuration:

    # grep LastServer /var/parallels/*/config.pvs | grep db35178c-94db-46e3-ad80-c4e35cdf17f8
    /var/parallels/MyVm.pvm/config.pvs:      <LastServerUuid>{db35178c-94db-46e3-ad80-c4e35cdf17f8}</LastServerUuid>
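
    To script this check, the UUID can be extracted from the tag rather than compared by eye. A minimal sketch, with the config line from the example above inlined so it runs standalone; on a real node you would feed it the config.pvs files under /var/parallels:

    ```shell
    # The <LastServerUuid> line from the example above, inlined so the
    # command runs standalone.
    line='<LastServerUuid>{db35178c-94db-46e3-ad80-c4e35cdf17f8}</LastServerUuid>'

    # Strip the tag and braces, leaving the bare server UUID.
    uuid=$(printf '%s\n' "$line" | sed -n 's/.*<LastServerUuid>{\([^}]*\)}<\/LastServerUuid>.*/\1/p')
    echo "$uuid"
    ```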
    

Search Words

high availability relocation

resource relocation

