Article ID: 119128, created on Dec 12, 2013, last review on Jun 17, 2016

  • Applies to:
  • Virtuozzo

Answer

This article assumes that all virtual environments in the cluster must remain online during the cluster recreation procedure. If it is possible to shut them all down, please follow this article instead.

It is necessary to obtain a trial license for the period during which the two clusters co-exist.

The example below uses a 3-node cluster (the minimal configuration), and it is assumed that any two of the three nodes have enough resources to host all VEs of the whole cluster. If possible, it is advisable to add a spare node to the cluster to host the VEs temporarily.

For clusters consisting of more than 3 nodes, the same instructions apply to the first 3 servers; the 4th node is then handled exactly like the 3rd one, and so on for each subsequent node. Servers with the Client role should be processed first, then Chunk Servers, and finally Metadata Servers.

The name of the existing cluster in the example is "pcs1", and the new cluster name is "pcs2". The nodes have the following IPs:

  • Node 1 -- 192.168.10.11
  • Node 2 -- 192.168.10.12
  • Node 3 -- 192.168.10.13

Preliminary actions

  1. On all nodes, create a file inside each virtual environment's home directory that records the hostname of the server it currently resides on:

    # for veid in $(vzlist -Hao veid | sed 's/ //g' | grep -v "^1$") ; do echo $(hostname) > /vz/private/$veid/source; done
    
    # for vmhome in $(awk -F"[<>]" '/VmHome.*pvm/{gsub("/config.pvs","");print$3}' /etc/parallels/vmdirectorylist.xml); do echo $(hostname) > "$vmhome/source"; done
    

    This file is needed later to migrate the VEs back to their original hosts. If the final location of the VEs does not matter, this step can be skipped.

    Note: the hostnames of the nodes must resolve to their IP addresses. Otherwise, add the IP-to-hostname mappings to /etc/hosts.
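
    For example, assuming the hypothetical hostnames node1, node2 and node3 for the three servers, the mappings could be added on every node like this (a sketch; substitute the real hostnames):

    # echo "192.168.10.11 node1" >> /etc/hosts
    # echo "192.168.10.12 node2" >> /etc/hosts
    # echo "192.168.10.13 node3" >> /etc/hosts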

  2. Before starting, verify on a few test containers and VMs that online migration between the servers works.
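
    For example, a single test CT and VM can be migrated from Node 1 to Node 2 with the same commands used in the procedure below (a sketch; the CT ID 101 and the VM name testvm are hypothetical):

    # pmigrate c 101 c 192.168.10.12 --online
    # pmigrate v testvm v 192.168.10.12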

  3. Decrease the cluster replication factor to 1 so that the cluster can be downsized to a single node:

    # pstorage -c pcs1 set-attr -R -p / replicas=1
    

Node 1

  1. Migrate all virtual environments in online mode to other servers:

    In this example, half of the CTs/VMs are migrated to 192.168.10.12 and the other half to 192.168.10.13. This is done in a for loop by switching the $n (or $m) variable between 2 and 3; the migration is performed iteratively, one VE at a time.

    Note: for these loops to run unattended, it is advisable to distribute SSH keys between all servers in the cluster to enable passwordless root access (a minimal sketch is shown at the end of this step).

    # n=2; for veid in $(vzlist -Hao veid | sed 's/ //g' | grep -v "^1$"); do pmigrate c $veid c 192.168.10.1$n --online ; [[ $n -eq 2 ]] && n=3 || n=2; done
    
    # m=2; for vm in $(awk -F"[<>]" '/VmHome.*pvm/{gsub(".pvm/config.pvs","");print$3}' /etc/parallels/vmdirectorylist.xml); do vm=$(basename "$vm") ; pmigrate v "$vm" v 192.168.10.1$m ; [[ $m -eq 2 ]] && m=3 || m=2; done
    

    Make sure there are no VEs left on the server:

    # prlctl list -a 
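
    A minimal sketch of distributing the SSH keys mentioned in the note above (run on each node; ssh-copy-id is provided by the standard openssh-clients package):

    # ssh-keygen -t rsa
    # for ip in 192.168.10.11 192.168.10.12 192.168.10.13; do ssh-copy-id root@$ip; done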
    
  2. Stop vz and parallels-server services:

    # service vz stop
    # service parallels-server stop
    
  3. Stop pstorage-fs service:

    # service pstorage-fs stop 
    
  4. Remove the local CS and MDS from the cluster and stop pstorage-csd and pstorage-mdsd services:

    # mdsid=$(cat /pstorage/pcs1-mds/id)
    # pstorage -c pcs1 rm-mds $mdsid
    # csid=$(cat /pstorage/pcs1-cs/control/id)
    # pstorage -c pcs1 rm-cs $csid -W
    ...wait until this operation is completed...
    # service pstorage-csd stop
    # service pstorage-mdsd stop
    

    Note: if a particular Cloud Storage role is not present on the node, the corresponding commands can simply be skipped.
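
    The removal progress can be verified from another terminal; once it is complete, the local MDS and CS no longer appear in the cluster monitor output:

    # pstorage -c pcs1 top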

  5. Install all available updates (this is a good opportunity to do so):

    # yum -y update
    

    Reboot to the newest kernel, if available:

    # vzreboot
    

    If vzreboot is not allowed by the license, perform a plain reboot.

  6. Clean up the remnants of the old cluster:

    # rm -rf /pstorage/pcs1-mds*
    # rm -rf /pstorage/pcs1-cs*
    # rm -rf /etc/pstorage/clusters/*
    # rm -rf /root/.pstorage/*
    

    If the log files do not need to be kept, delete them as well to free up disk space:

    # rm -rf /var/log/pstorage/*
    

    Remove the pstorage mount entry from /etc/fstab:

    # sed -i 's~pstorage://.*$~~' /etc/fstab
    

    Remove the pstorage mount point:

    # rm -rf /pstorage/pcs1
    
  7. Create the new cluster:

    # pstorage -c pcs2 make-mds -I -a 192.168.10.11 -r /pstorage/pcs2-mds -p
    # service pstorage-mdsd start
    

    Configure cluster discovery for the new cluster (the available methods are described in man pstorage-discovery).
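
    For example, one of the available methods is a static bootstrap list. A minimal sketch, assuming this method and the default MDS port 2510, is to record the address of the first MDS in the cluster's bootstrap file on every node that will join the cluster:

    # mkdir -p /etc/pstorage/clusters/pcs2
    # echo "192.168.10.11:2510" > /etc/pstorage/clusters/pcs2/bs.list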

  8. Create a CS (chunk server):

    # pstorage -c pcs2 make-cs -r /pstorage/pcs2-cs
    # service pstorage-csd start
    
  9. Load the temporary Cloud Storage license:

    # pstorage -c pcs2 load-license -p <activation_code>
    
  10. Add a new entry to /etc/fstab and mount the new storage:

    # mkdir /pstorage/pcs2
    # echo "pstorage://pcs2 /pstorage/pcs2 fuse.pstorage rw,nosuid,nodev 0 0" >> /etc/fstab
    # service pstorage-fs start
    

    Recreate the symlinks for the default CT and VM directories:

    # mkdir /pstorage/pcs2/vmprivate /pstorage/pcs2/private
    # rm -rf /var/parallels /vz/private /vz/vmprivate
    # ln -s /pstorage/pcs2/private /vz/private
    # ln -s /pstorage/pcs2/vmprivate /var/parallels
    # ln -s /pstorage/pcs2/vmprivate /vz/vmprivate
    
  11. Verify that the cluster is healthy:

    # pstorage -c pcs2 top
    
  12. Start vz and parallels-server services:

    # service vz start
    # service parallels-server start
    

Node 2

  1. Move all VEs to the new storage on Node 1:

    # for veid in $(vzlist -Hao veid | sed 's/ //g' | grep -v "^1$"); do pmigrate c $veid c 192.168.10.11 --online --nonsharedfs; done
    
    # for vm in $(awk -F"[<>]" '/VmHome.*pvm/{gsub(".pvm/config.pvs","");print$3}' /etc/parallels/vmdirectorylist.xml); do vm=$(basename "$vm"); pmigrate v "$vm" v 192.168.10.11; done
    

    Make sure there are no VEs left on the server:

    # prlctl list -a 
    
  2. Repeat steps 2-6 of Node 1.

  3. Authenticate the node in the new cluster:

    # pstorage -c pcs2 auth-node
    
  4. Create MDS and CS:

    # pstorage -c pcs2 make-mds -a 192.168.10.12 -r /pstorage/pcs2-mds
    # service pstorage-mdsd start
    # pstorage -c pcs2 make-cs -r /pstorage/pcs2-cs
    # service pstorage-csd start
    
  5. Add a new entry to /etc/fstab and mount the new storage:

    # mkdir /pstorage/pcs2
    # echo "pstorage://pcs2 /pstorage/pcs2 fuse.pstorage rw,nosuid,nodev 0 0" >> /etc/fstab
    # service pstorage-fs start
    

    Recreate the symlinks for the default CT and VM directories:

    # rm -rf /var/parallels /vz/private /vz/vmprivate
    # ln -s /pstorage/pcs2/private /vz/private
    # ln -s /pstorage/pcs2/vmprivate /var/parallels
    # ln -s /pstorage/pcs2/vmprivate /vz/vmprivate
    
  6. Verify that the cluster is healthy:

    # pstorage -c pcs2 top
    
  7. Start vz and parallels-server services:

    # service vz start
    # service parallels-server start
    

Node 3

  1. Move all VEs to Node 2:

    # for veid in $(vzlist -Hao veid | sed 's/ //g' | grep -v "^1$"); do pmigrate c $veid c 192.168.10.12 --online --nonsharedfs; done
    
    # for vm in $(awk -F"[<>]" '/VmHome.*pvm/{gsub(".pvm/config.pvs","");print$3}' /etc/parallels/vmdirectorylist.xml); do vm=$(basename "$vm"); pmigrate v "$vm" v 192.168.10.12; done
    

    Make sure there are no VEs left on the server:

    # prlctl list -a 
    
  2. Stop vz and parallels-server services:

    # service vz stop
    # service parallels-server stop
    
  3. Stop pstorage-fs service:

    # service pstorage-fs stop 
    
  4. Stop pstorage-csd and pstorage-mdsd services:

    # service pstorage-csd stop
    # service pstorage-mdsd stop
    
  5. Install all available updates:

    # yum -y update
    

    Reboot to the newest kernel, if available:

    # vzreboot
    

    If vzreboot is not allowed by the license, perform a plain reboot.

  6. Clean up the remnants of the old cluster:

    # rm -rf /pstorage/pcs1-mds*
    # rm -rf /pstorage/pcs1-cs*
    # rm -rf /etc/pstorage/clusters/*
    # rm -rf /root/.pstorage/*
    

    If the log files do not need to be kept, delete them as well to free up disk space:

    # rm -rf /var/log/pstorage/*
    

    Remove the pstorage mount entry from /etc/fstab:

    # sed -i 's~pstorage://.*$~~' /etc/fstab
    

    Remove the pstorage mount point:

    # rm -rf /pstorage/pcs1
    
  7. Repeat steps 3-7 of Node 2 (make sure to change the IP to 192.168.10.13).
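
    For reference, the only command that changes is the MDS creation from step 4 of Node 2, which on Node 3 becomes:

    # pstorage -c pcs2 make-mds -a 192.168.10.13 -r /pstorage/pcs2-mds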

Final actions

  1. Move all VEs back to their source servers. Run the following commands on Node 1 and Node 2:

    # for veid in $(vzlist -Hao veid | sed 's/ //g' | grep -v "^1$"); do source=$(cat /vz/private/$veid/source); [[ $(hostname) == $source ]] || pmigrate c $veid c $source --online; done
    
    # for vm in $(awk -F"[<>]" '/VmHome.*pvm/{gsub(".pvm/config.pvs","");print$3}' /etc/parallels/vmdirectorylist.xml); do source=$(cat "$vm.pvm/source"); vm=$(basename $vm); [[ $(hostname) == $source ]] || pmigrate v "$vm" v $source; done
    
  2. Set the required replication level:

    # pstorage -c pcs2 set-attr -R -p / replicas=3
    
  3. Verify that the cluster is healthy:

    # pstorage -c pcs2 top
    

The cluster recreation procedure is complete at this point. If any issues arise at any step of the process and the reason is not clear, contact Parallels Technical Support for assistance.

Search Words

Change PCS Chunk MD and Client IP

How to Remove Node from Virtuozzo 6.0 cluster

Pstorage-mount network not connected

restore cloud storage

from scratch

rebuild

pstorage replicating

recreate

