
Thread: Dependency problem when installing PVA 4.6

  1. #1
    Kilo Poster
    Join Date
    Nov 2009
    Posts
    61

    Default Dependency problem when installing PVA 4.6

    When trying to install PVA on a clean CentOS 5.5 installation with Virtuozzo 4.6 I get the following dependency error:

    pva-pp-httpd is needed by pva-pp-engine-4.6-210.43.x86_64

    When I look through the downloaded packages there is a pva-pp-httpd-third, but not a pva-pp-httpd. I don't understand why this isn't working. Any ideas?
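    In case it matters, checking what the -third package actually provides should be possible with a plain rpm query against the downloaded file (the wildcard in the filename is only a stand-in for whatever version got downloaded):

    # rpm -qp --provides pva-pp-httpd-third-*.rpm | grep pva-pp-httpd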



  2. #2
    Kilo Poster
    Join Date
    Nov 2009
    Posts
    61

    Default

    Never mind, the problem was fixed by using the larger downloads instead of the automatic installer. Deployment still puzzles me, though.



  3. #3

    Default

    You need to use http://kb.parallels.com/en/9445 in order to install PVA for PVCL 4.6



  4. #4
    Kilo Poster
    Join Date
    Nov 2009
    Posts
    61

    Default

    Okay thanks, the agent install succeeded. However, I'm running into a new issue.

    I have two hardware nodes, cl1 and cl2, which are clustered using RHCS. On cl1 I have a container called 'vz' in which I run the PVA Management Server. I have successfully added cl2 to the infrastructure using the web interface, but when I try to add cl1 I get a message stating 'node is already in this serverGroup'.

    Output on the vz container:

    [root@vz ~]# vzagroup list
    Connecting to local host...
    Listing group members...
    TITLE STATUS ROLE
    cl2 online slave
    vz online master

    Operation 'list' completed successfully


    [root@vz ~]# vzagroup addSlave root:myrootpass@cl1
    Connecting to a remote host...
    Connecting to local host...
    Adding slave...
    Address: 10.0.0.71
    Operation failed, node 10.0.0.71 is already in this serverGroup.
    Can't add slave to master node: Operation failed, node 10.0.0.71 is already in this serverGroup.

    I should be able to add hardware node cl1 even though my PVA Management Server is running in a container on that node, right? Adding --force to the addSlave command doesn't work and removeSlave 10.0.0.71 fails as well (Host with this address is not registered).



  5. #5

    Default

    What documentation did you follow to configure PVA in RHCS?



  6. #6
    Kilo Poster
    Join Date
    Nov 2009
    Posts
    61

    Default

    The document was:

    Parallels Virtuozzo Containers 4.6 for Linux
    Deploying Clusters in Parallels-Based Systems

    It's a data-sharing cluster based on a single GFS filesystem on an iSCSI SAN. The cluster side of things seems to be working fine, btw; automatic failover works very well (meaning the containers don't even go down while relocating the vz-cluster service).
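    (For reference, a relocation like that can also be triggered by hand with the standard rgmanager tools; the service name below is just the one mentioned above, and the node names are from my setup:)

    [root@cl1 ~]# clustat                          # show which node currently owns each cluster service
    [root@cl1 ~]# clusvcadm -r vz-cluster -m cl2   # relocate the vz-cluster service to node cl2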



  7. #7

    Default

    This document doesn't describe how to configure PVA in RHCS. The only document I've found is http://download.swsoft.com/pvc/46/li...nuxUpgrade.pdf

    Do you have an A-A or an A-P cluster? If it is an Active-Passive cluster, then you don't have to register the second slave in the PVA MN.



  8. #8
    Kilo Poster
    Join Date
    Nov 2009
    Posts
    61

    Default

    Hmm, that's a totally new document for me. I'll read it and see where I went a different way. I already know that I didn't use the --clustermode option while installing PVC. I hope that's not a big problem (of course I disabled the vz service after installation).
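    (For completeness, disabling the vz service so it only starts under cluster control amounts to the standard chkconfig dance; the init script name may differ between PVC versions:)

    [root@cl1 ~]# service vz stop      # stop Virtuozzo outside of cluster control
    [root@cl1 ~]# chkconfig vz off     # don't start it at boot; the cluster service starts it instead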



  9. #9
    Kilo Poster
    Join Date
    Nov 2009
    Posts
    61

    Default

    Ok, so I amended the configuration so that pvaagentd and pvapp are started from the cluster service. This is how the services in cluster.conf look now:

    <resources>
        <ip address="10.0.0.81/24" monitor_link="0"/>
        <script file="/etc/init.d/vz-cluster" name="vz-cluster"/>
        <ip address="10.0.0.82/24" monitor_link="1"/>
        <script file="/etc/init.d/pvaagentd" name="pvascr"/>
        <script file="/etc/init.d/pvapp" name="ppscr"/>
    </resources>
    <service autostart="1" domain="virtuozzo-servers" exclusive="1" max_restarts="0" name="vz-1" recovery="restart" restart_expire_time="0">
        <ip ref="10.0.0.81/24"/>
        <script ref="vz-cluster"/>
        <script ref="pvascr"/>
        <script ref="ppscr"/>
    </service>
    <service autostart="1" domain="virtuozzo-servers" exclusive="1" max_restarts="0" name="vz-2" recovery="restart" restart_expire_time="0">
        <ip ref="10.0.0.82/24"/>
        <script ref="vz-cluster"/>
        <script ref="pvascr"/>
        <script ref="ppscr"/>
    </service>

    I can see the PVA agent running on the hardware nodes, but now I can't connect to them from the master server.

    [root@vz ~]# vzagroup addSlave root:password@10.0.0.72
    Connecting to a remote host...
    Connecting to local host...
    Adding slave...
    Address: 10.0.0.72
    Connection accepted by 10.0.0.72
    init client node
    Failed to register the physical server. Make sure that it has PVA Agent for Virtuozzo or for Parallels Server installed and is accessible via the network.
    Can't add slave to master node: Failed to register the physical server. Make sure that it has PVA Agent for Virtuozzo or for Parallels Server installed and is accessible via the network. [code -4](Connection to vzagent was closed [Connection closed])

    I also tried connecting to the IP address from the cluster resource (since I'm still not sure what that IP is for).

    That gives:

    [root@vz ~]# vzagroup addSlave root:password@10.0.0.82
    Connecting to a remote host...
    Connecting to local host...
    Adding slave...
    Address: 10.0.0.82
    Connection accepted by 10.0.0.82
    init client node
    Failed to add new slave node: step 'init client node' failed with error 'Connection to vzagent was closed')
    Can't add slave to master node: Failed to init client node [code -4](Connection to vzagent was closed)

    Just to be clear, it's an active-active configuration: I have two servers, cl1 and cl2, and two vz services, vz-1 and vz-2.
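    (Before retrying addSlave it's probably worth confirming, with nothing more than standard tools, that the agent on the slave really is up and listening; this assumes the init script supports 'status', and the process name to grep for may differ:)

    [root@cl1 ~]# service pvaagentd status      # check that the agent init script reports it as running
    [root@cl1 ~]# netstat -tlnp | grep -i pva   # list the TCP ports the PVA processes are listening on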



  10. #10

    Default

    OK, here is what's going on when PVA is installed on an A-A cluster (I'll call the two PVA instances PVA-cl1 and PVA-cl2):
    PVA-cl1 installs to the cl1 host, writes binary data to cl1:/opt/pva/..., and data files to /vz/pva/...
    PVA-cl2 installs to the cl2 host, writes binary data to cl2:/opt/pva/..., and data files to /vz/pva/...

    Since /vz is a shared partition, PVA-cl2 overwrites the data files of PVA-cl1, and this leads to the observed effect.

    You need to install PVA-cl2 while the /vz partition is not yet shared, so that both PVA instances get their own data sets. After that you have to modify /opt/pva/agent/bin/pva.conf on both the cl1 and cl2 hosts and specify the correct paths for base_folder and etc_folder.
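    One way this could look on each node is sketched below (the directory is only an example; the key point is that base_folder and etc_folder end up pointing at node-local storage rather than the shared /vz):

    # on cl1 (and the same on cl2), done while /vz is still node-local:
    mkdir -p /opt/pva/agent/data                 # example node-local location for this node's agent data
    cp -a /vz/pva/agent/. /opt/pva/agent/data/   # copy the agent's data set off the soon-to-be-shared partition
    # then edit /opt/pva/agent/bin/pva.conf so that base_folder and etc_folder
    # reference the node-local directory instead of /vz/pva/...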



  11. #11
    Kilo Poster
    Join Date
    Nov 2009
    Posts
    61

    Default

    Hmm, I notice this in the PVA agent log when connecting:

    T=15:51:01:197; L=(inf); PID=57688; TID=2b6eb3be12b0; P=VZLControl [close] Close transport -1 with error: System errors : Cannot authenticate the user due to a system error: RDBMS error (-2147483648) : DBMS Error #HY000:1:11:database disk image is malformed (11) INSERT INTO Session (id, sid, status, creationTime, lastAccessTime, expiration, sessionId, ip, logoffTime, userName) VALUES ('1016d435-0bf6-3e49-9aa8-a3e2e181e25e', 'AQUAAAAAIAEjgxRq+85CCbl6crw0jg3CAAAAAA==', 1, '2011-01-25 14:50:58', '2011-01-25 14:50:58', -1, 'vzl.40500.65537.6a148323-cefb-0942-b97a-72bc348e0dc2..irmpbvuaaaaaajn35jtwctqgcj2c5666', 167772251, NULL, 'root')

    It also seems like both agents are trying to store data in /vz/pva/agent, which is of course on the shared volume. Could that explain the error above?
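    A quick way to confirm that on both nodes is to compare what each agent's config points at with what is actually sitting on the shared volume (pva.conf path as given in the post above):

    [root@cl1 ~]# grep -E 'base_folder|etc_folder' /opt/pva/agent/bin/pva.conf   # where this node's agent thinks its data lives
    [root@cl1 ~]# ls /vz/pva/agent                                               # the shared location both agents are writing into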



  12. #12
    Kilo Poster
    Join Date
    Nov 2009
    Posts
    61

    Default

    Quote Originally Posted by Pavel Ivlev View Post
    OK, here is what's going on when PVA is installed on an A-A cluster (I'll call the two PVA instances PVA-cl1 and PVA-cl2):
    PVA-cl1 installs to the cl1 host, writes binary data to cl1:/opt/pva/..., and data files to /vz/pva/...
    PVA-cl2 installs to the cl2 host, writes binary data to cl2:/opt/pva/..., and data files to /vz/pva/...

    Since /vz is a shared partition, PVA-cl2 overwrites the data files of PVA-cl1, and this leads to the observed effect.

    You need to install PVA-cl2 while the /vz partition is not yet shared, so that both PVA instances get their own data sets. After that you have to modify /opt/pva/agent/bin/pva.conf on both the cl1 and cl2 hosts and specify the correct paths for base_folder and etc_folder.

    Cool, as you can see above, I just discovered the /vz/pva problem myself. I'm going to redo the PVA installation using your instructions; I'm sure that will work out for the better.

    Too bad the installation isn't a bit more straightforward :) but we'll get there. Thanks for your kind support, Pavel.



  13. #13
    Kilo Poster
    Join Date
    Nov 2009
    Posts
    61

    Default

    It works! I now have both hardware nodes in the PVA Manager and have successfully moved a container from one HN to another. This did cause the container to be rebooted, but I guess that's acceptable.
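    (If the restart ever becomes a problem, live migration from the command line might be worth trying: vzmigrate has an --online mode that is meant to move a running container without restarting it. The container ID below is just an example:)

    [root@cl1 ~]# vzmigrate --online cl2 101   # live-migrate container 101 from cl1 to cl2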

    Again thanks for your help!


