All containers and virtual machines running on a Virtuozzo Storage (pstorage) cluster are unavailable.
It is impossible to start any VE:
[root@vz ~]# prlctl start VM
Login failed: Operation timeout. The operation could not be completed due to a timeout.
pstorage -c $clustername top shows license errors:
12-12-14 17:04:18.248 MDS WRN: File operation requested by the client at 172.16.55.87:49274 is failed due to licensed capacity limit [+6075/60]
At the same time, containers cannot be stopped, failing with the following error:
[root@vz ~]# vzctl --verbose stop 101
Stopping the Container ...
Stop the Container...
Forcibly stop the Container...
Set up iolimit: 0
Set up iopslimit: 0
Kill all Container processes...
Unable to stop Container: operation timed out
The Virtuozzo Storage license has expired:

[root@vz ~]# pstorage -c pcs1 view-license | grep status
status="EXPIRED"
When the storage license expires, all write operations across the cluster are suspended.
Under certain circumstances the license does not get updated automatically; the issue is known under internal ID PSBM-35308.
Update the license manually:
# pstorage -c $clustername update-license
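After the manual update, confirm that the status went back to ACTIVE. In a live cluster the input would come from `pstorage -c $clustername view-license`; in the minimal sketch below a sample line stands in for that output so the parsing logic can be shown on its own.

```shell
# Sample line standing in for `pstorage -c $clustername view-license` output;
# the status="..." format is taken from the expired-license example above.
sample='status="ACTIVE"'

# Extract the value between the quotes.
status=$(printf '%s\n' "$sample" | sed -n 's/.*status="\([^"]*\)".*/\1/p')

if [ "$status" = "ACTIVE" ]; then
    echo "license is active"
else
    echo "license status: $status"
fi
```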
To prevent the issue from reoccurring, make sure the storage packages are fully up to date:
# yum update pstorage-*
To capture and determine the cause of an update failure in the future, place the attached script on one server in the cluster and configure a cron task that sends emails with the script output:

MAILTO=admin@localhost
0 4,16 * * * /root/check-license-collect-logs.sh

Note: replace "admin@localhost" with the actual email address. The server must be configured to send emails; the task runs at 04:00 and 16:00 every day.
The script assumes that passwordless SSH access to all MDS servers is configured. If a custom SSH port is used on the servers, edit the sshport variable in the script.
The script sends a notification when the license turns GRACED and is not updated automatically. It also collects fresh logs from the master MDS server and puts them into the /vz/tmp directory. Provide the collected information to Odin Technical Support for further investigation.
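The attached script itself is not reproduced in this article; a minimal sketch of the alerting decision it makes might look like the following, assuming it parses the license status out of the view-license output and notifies only on the GRACED and EXPIRED states:

```shell
# Hypothetical decision logic (not the actual attached script):
# return success (0) when the given license status warrants an alert.
need_alert() {
    case "$1" in
        GRACED|EXPIRED) return 0 ;;
        *) return 1 ;;
    esac
}

# Example usage with status strings as they appear in view-license output:
need_alert "GRACED" && echo "alert: license needs attention"
need_alert "ACTIVE" || echo "ACTIVE: no alert needed"
```

In the real script the status would be fed in from `pstorage -c $clustername view-license` over SSH, and the alert branch would trigger both the email and the MDS log collection.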