I have a PVA installation that connects with three hardware nodes:
- 2 new hardware nodes running PVC 4.6.0 (one of these runs the PVA container)
- 1 older hardware node running PVC 4.0.0
All are running CentOS 5.
I can view all the hardware nodes in PVA without problems, and I can see the containers listed in the 'Virtual Environment' tab (including status, resource usage, etc.). However, for the two new nodes I cannot click a container to see its details; the request just times out. The containers on the older HN can be opened without problems.
When I try to open the VEs by clicking multiple times, PVA becomes unresponsive and I have to restart the pvamnd service to regain access.
I can't make much of the log file. The only odd messages I see are these:
T=11:29:11:156; L=(wrn); PID=13961; TID=2b73e2426bf0; P=VZLFunctionalityCore [insert] EnvCache::insert called for 933a55af-6743-e04f-a16a-97710494b257, returning -1.
T=11:29:11:162; L=(udf); PID=13961; TID=2b73e2426bf0; P=VZLServerGroup [doHandleError] Can't start monitoring of client 028b6c77-9e3f-0945-8b2d-799b8af93c8d: can't update env config/status list, envcache unavailable.
T=11:45:41:869; L=(err); PID=28396; TID=2b0ff0394bf0; P=VZLCore [throwErrorImpl] Throwing exception from /usr/src/redhat/BUILD/pva-mn-core-4.6/vzl/plugins/VZLPolicySlave/VZLPolicySlLocal.cpp(48): policy id:f1166f7f-99a9-b344-a20f-1359a4aac15a.
T=11:45:41:870; L=(err); PID=28396; TID=2b0ff0394bf0; P=VZLCore [processException] Exception is caught. Error code 10401, reason: policy id:f1166f7f-99a9-b344-a20f-1359a4aac15a.
T=11:45:41:911; L=(err); PID=28396; TID=2b0ff0394bf0; P=VZLPolicyManager [handleStartup] [policym] can't generate effective policy on CT creation for eid="f1166f7f-99a9-b344-a20f-1359a4aac15a":policy id:f1166f7f-99a9-b344-a20f-1359a4aac15a
Any clue as to what's going on or where to look? DNS on the hardware nodes and the PVA server is fine, and pinging (from the console) works.
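One other check I did, in case it helps: verifying that the agent port itself is reachable from the MN, not just ICMP. This is only a sketch; 4433 is, as far as I know, the default PVA agent port, so adjust if your setup differs:
# telnet cl1 4433
If the TCP connection itself hangs, that would explain the timeouts independently of DNS.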
Feb 11, 2011, 09:05 AM
Please post the output of # rpm -qi pva-agent-vz from one of the 4.6 HW nodes.
Feb 14, 2011, 02:28 AM
Thanks for your reply. The problem has changed slightly.
Our hardware nodes have two addresses: 213.x.x.x on eth5 and 10.x.x.x on eth4, the latter being on our backup network. When I add the nodes to PVA (using the hostname or the 213.x.x.x address), PVA somehow registers them with their 10.x.x.x addresses (they are listed in the infrastructure under 'Default IP address') and seems unable to connect to them: all VEs show as offline even though they are running, but there is no error message.
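To see which address the agent is actually listening on, I checked the interfaces and the listening sockets on the HN (sketch only; ip/netstat are standard on CentOS 5, and 4433 is again the assumed agent port):
# ip addr show eth4
# ip addr show eth5
# netstat -tlnp | grep 4433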
When I click on a server, its status reads:
Status Not Licensed
This is the output from the pva-agent-vz rpm:
[root@cl1 ~]# rpm -qi pva-agent-vz
Name        : pva-agent-vz                    Relocations: (not relocatable)
Version     : 4.6                             Vendor: SWsoft
Release     : 646.1                           Build Date: Sat 26 Jun 2010 12:12:47 AM CEST
Install Date: Tue 25 Jan 2011 04:06:45 PM CET Build Host: head-build-x64.vt.sw.ru
Group       : System Environment              Source RPM: pva-agent-vz-4.6-646.1.src.rpm
Size        : 17058410                        License: SWsoft
Feb 14, 2011, 04:57 AM
You need to uninstall PVA and then reinstall it following this KB article (see the section 'How to (Re)Install PVA 4.6.2 Agent on a PVC 4.6 HW Node'): http://kb.parallels.com/en/9445
Feb 14, 2011, 05:51 AM
OK, I've uninstalled PVA by running yum remove pva*
But now, when I try to install it again using the KB instructions, I get this error:
# Installing packages
> Performing PVA 4.6 preinstall configuration
[ERROR] Failed to execute hint /var/opt/pva/setup/downloads/linux/x86_64/11/4.6-1509.3/hints/pva-pp-preInstall.sh:
[ERROR 1] Container with CTID 1 already exists, please remove it or change CTID and launch PVA installation again
What should I do?
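For reference, checking whether CTID 1 really exists and removing it would look roughly like this with the standard Virtuozzo tools (only if CT 1 is disposable; vzctl destroy permanently deletes the container's private data):
# vzlist -a 1
# vzctl stop 1
# vzctl destroy 1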
Feb 14, 2011, 06:00 AM
The VE containing my PVA MN is gone as well...
Feb 14, 2011, 06:33 AM
You shouldn't use yum to uninstall PVA. You didn't use it to install PVA, right?
In order to uninstall PVA, run the following command: # pva-setup --uninstall
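After the uninstall finishes, you can verify that no PVA packages were left behind:
# rpm -qa | grep -i pva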
Feb 14, 2011, 07:51 AM
I guess you're right :) I've uninstalled PVA the proper way and reinstalled the agents plus the MN using the instructions in the KB article (moving /vz/pva after installation to prevent cluster conflicts). The installation was successful.
After registering the HNs, the MN still doesn't seem to connect to them properly. I've attached screenshots so you can see what I see. The MN container only has a public IP address, no 10.* IP, and I'd like to keep it that way.
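In case it matters, I also restarted the PVA services on both sides after re-registering (pvamnd on the MN, as mentioned earlier; pvaagentd is, I believe, the agent service name on the HNs):
# service pvamnd restart
# service pvaagentd restart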
Feb 14, 2011, 08:10 AM
Agent log shows this:
T=13:59:14:092; L=(err); PID=45878; TID=2afab4da12b0; P=VZLAuthEngineLocal [handleError] Couldn't synchronize roles with master 1: Internal error: only 'sessionm' requests are allowed for non-licensed slaves.
I'm confused... do we need an extra license for PVA? This has worked before.
I think there's a problem with our license, since everything worked while we were still on the trial license. I've opened a ticket with our reseller about this.
Feb 14, 2011, 08:30 AM
Looks like a license issue. What is the output of the command # vzlicview | grep max_vzcc
max_vzcc_users should be non-zero; otherwise the node will be offline.
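If the key on the node really lacks PVA rights, loading an updated key and re-checking would look roughly like this (vzlicload is the standard Virtuozzo license utility; the XXXX string below is just a placeholder for your product key):
# vzlicload -p XXXXXX-XXXXXX-XXXXXX-XXXXXX-XXXXXX
# vzlicview | grep max_vzcc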
Feb 14, 2011, 08:38 AM
I had the same idea... however, the Virtuozzo specialist at our reseller is not in the office today, so they haven't been able to help me yet.
The output is: