FusionCloud 6.3.1.1 Troubleshooting Guide 02

Infrastructure Faults


Unavailable CPS Service

Symptom

After a Cloud Provisioning Service (CPS) command is executed on the command-line interface (CLI), either nothing or the error Connection refused! is returned. You can run the cpssafe command to enter the secure operation mode, enter 1, enter the username and password as prompted, and run the following command to query the host status:

cps host-list

Nothing or the following information is displayed:

Connection refused!
Possible Causes
  • The network malfunctions.
  • In authentication mode, if the DNS configuration is incorrect, the CPS command is unavailable.
  • The socket port that provides the CPS service on the host where the control node resides malfunctions.
Procedure
  1. Use PuTTY to log in to any host in the AZ through the IP address of the External OM plane.

    The user account is fsp and the default password is Huawei@CLOUD8.
    NOTE:
    • The system supports login authentication using a password or a private-public key pair. If a private-public key pair is used for login authentication, see Using PuTTY to Log In to a Node in Key Pair Authentication Mode.
    • For details about the IP address of the External OM plane, see the LLD generated by FCD sheet of the xxx_export_all.xlsm file exported from FusionCloud Deploy during software installation, and search for the IP addresses corresponding to VMs and nodes. The parameter names in the different scenarios are as follows:
      • Cascading layer in the Region Type I scenario: Cascading-ExternalOM-Reverse-Proxy; cascaded layer: Cascaded-ExternalOM-Reverse-Proxy.
      • Region Type II and Type III scenarios: ExternalOM-Reverse-Proxy.

  2. Run the following command to switch to the root user, and enter the root password as prompted:

    su - root

    The default password of the root user is Huawei@CLOUD8!.

  3. Run the TMOUT=0 command to disable user logout upon system timeout.
  4. Import environment variables. For details, see Importing Environment Variables.
  5. Run the following command to check whether the IP address of the CPS service can be pinged:

    ping Management IP address of the host where the CPS service resides

    In this example, the management IP address of the host where the CPS service resides is 172.28.8.130.

    • If yes, go to 7.
    • If no, go to 6.

  6. Contact the O&M personnel to restore the network. Then run the CPS command again and check whether command output is correct.

    • If yes, no further action is required.
    • If no, go to 7.

  7. Run the commands below to log in to a controller host using SSH.

    During system installation, the first three hosts installed are typically controller hosts, and their IP addresses can be obtained from the administrator. In this section, IP addresses 172.28.0.2, 172.28.0.3, and 172.28.0.4 are used as examples. If the administrator has not recorded the IP addresses, contact technical support.

    su - fsp

    ssh fsp@IP address

    Enter the private key password as prompted. The default password is Huawei@CLOUD8!. If newly generated public and private key files have replaced the old ones, enter the password of the new private key, or press Enter and then enter the password of the fsp user.

    Run the su - root command and enter the password of the root user to switch to the root user.

  8. Run the following commands to forcibly disable the authentication mode:

    sed -i 's/"auth_mode": "True"/"auth_mode": "False"/g' /etc/huawei/fusionsphere/cps.cps-client/cfg/cps.cps-client.cfg

    sed -i 's/"auth_mode": "True"/"auth_mode": "False"/g' /etc/huawei/fusionsphere/cps.cps-server/cfg/cps.cps-server.cfg

    sed -i 's/auth_mode = True/auth_mode = False/g' /usr/local/bin/cps-client/cps_client/cps_client.ini

    sed -i 's/auth_mode = True/auth_mode = False/g' /usr/local/bin/cps-server/cps_server/cps-server.ini
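The four edits above just flip an auth_mode flag in place, and step 16 later flips it back. As an illustrative round-trip of the same substitutions, run here against a throwaway temp file rather than the real cps configuration files:

```shell
# Round-trip the auth_mode flag on a temporary file; the real procedure
# targets the cps.cps-client/cps.cps-server configuration files listed above.
cfg=$(mktemp)
printf '%s\n' '"auth_mode": "True"' > "$cfg"

# Step 8: force authentication off.
sed -i 's/"auth_mode": "True"/"auth_mode": "False"/g' "$cfg"
after_off=$(cat "$cfg")

# Step 16: turn authentication back on once the DNS fix is committed.
sed -i 's/"auth_mode": "False"/"auth_mode": "True"/g' "$cfg"
after_on=$(cat "$cfg")

rm -f "$cfg"
echo "$after_off / $after_on"
```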

  9. Run the following command to stop the CPS service process:

    kill -9 `ps -eo pid,cmd ww | grep ' /usr/local/bin/cps-server/cps_server/cpsserver.py'| grep -v grep| awk '{print $1}'`
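The pipeline above finds the PID by matching the process command line and then force-kills it. A self-contained sketch of the same pattern, with a harmless sleep standing in for cpsserver.py:

```shell
# Start a placeholder process with a distinctive command line (the duration
# is computed so this script's own text never matches the grep below).
dur=$((280 + 7))
sleep "$dur" &
demo_pid=$!

# Same pattern as step 9: list pid+command, match on the command line,
# drop the grep process itself, keep the first column (the PID).
found_pid=$(ps -eo pid,args | grep "sleep $dur" | grep -v grep | awk '{print $1}')

kill -9 "$found_pid"
wait "$demo_pid" 2>/dev/null || true
echo "stopped $found_pid"
```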

  10. Repeat 7 to 9 on the other controller hosts.
  11. On any controller host you have logged in to, run the cps host-list command to display the host list. The CPS service has automatically restarted if a complete list of hosts is displayed.
  12. Run the following command to query the current DNS configuration:

    cps template-params-show --service dns dns-server

    Information similar to the following is displayed:

    +----------+------------------------------------------+ 
    | Property | Value                                    | 
    +----------+------------------------------------------+ 
    | address  | /az1.dc1.domainname.com/192.168.211.10,/ | 
    |          | identity.az1.dc1.domainname.com/192.168. | 
    |          | 211.10,/image.az1.dc1.domainname.com/192 | 
    |          | .168.211.10                              | 
    | network  | []                                       | 
    | server   |                                          | 
    +----------+------------------------------------------+

  13. Check the IP addresses in the command output. If any address is incorrect, run the following command to correct the DNS configuration:

    cps template-params-update --service dns dns-server --parameter address=/address/IP

    Multiple pieces of DNS information are separated by commas (,). For details about the DNS address, see (Optional) Modifying System Settings in the .

  14. Run the following command to commit the configuration:

    cps commit

  15. Run the following command to check the configuration written to the configuration file:

    cat /etc/dnsmasq.conf | grep address=
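For illustration, here is the same check run against a hypothetical dnsmasq fragment written to a temp file; on a controller host the file is /etc/dnsmasq.conf, and the address= entries mirror the DNS configuration shown earlier:

```shell
# Write a hypothetical dnsmasq fragment to a temp file for the demo.
conf=$(mktemp)
cat > "$conf" <<'EOF'
no-resolv
address=/az1.dc1.domainname.com/192.168.211.10
address=/identity.az1.dc1.domainname.com/192.168.211.10
address=/image.az1.dc1.domainname.com/192.168.211.10
EOF

# Step 15 check: only the address= lines should come back.
matches=$(grep 'address=' "$conf")
rm -f "$conf"
printf '%s\n' "$matches"
```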

  16. After the configuration is confirmed, run the following commands to enable the authentication mode:

    sed -i 's/"auth_mode": "False"/"auth_mode": "True"/g' /etc/huawei/fusionsphere/cps.cps-client/cfg/cps.cps-client.cfg

    sed -i 's/"auth_mode": "False"/"auth_mode": "True"/g' /etc/huawei/fusionsphere/cps.cps-server/cfg/cps.cps-server.cfg

    sed -i 's/auth_mode = False/auth_mode = True/g' /usr/local/bin/cps-client/cps_client/cps_client.ini

    sed -i 's/auth_mode = False/auth_mode = True/g' /usr/local/bin/cps-server/cps_server/cps-server.ini

  17. Run the following command to stop the CPS service process:

    kill -9 `ps -eo pid,cmd ww | grep ' /usr/local/bin/cps-server/cps_server/cpsserver.py'| grep -v grep| awk '{print $1}'`

  18. Repeat 16 to 17 on the other controller hosts.
  19. After the authentication mode is enabled, run the CPS command again and check whether the command is successfully executed.

    • If yes, no further action is required.
    • If no, go to 20.

  20. Run the following commands to log in to the host where the CPS service resides:

    ssh fsp@Management IP address

    su - root

  21. Import environment variables. For details, see Importing Environment Variables.
  22. Run the following command to check the listening port status of the CPS service:

    netstat -anp | grep 8000 | grep 130

    Information similar to the following is displayed:

    tcp        0      0 172.28.8.130:8000       0.0.0.0:*               LISTEN      -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58434        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59575        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59759        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58449        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58437        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59765        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58439        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58451        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58450        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59748        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58243        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58441        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58245        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58443        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59749        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58444        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59750        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58250        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59755        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59758        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59751        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59760        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58442        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.3:49066        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58436        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59763        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59754        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59752        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59762        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.3:47725        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58446        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58438        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59572        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59753        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59764        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59578        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58445        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58440        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59757        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58435        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58452        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58448        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59561        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59761        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:59756        TIME_WAIT   -                    
    tcp        0      0 172.28.8.130:8000       172.28.0.2:58248        TIME_WAIT   - 

    Check whether a large number of rows containing SYN_RECV or CLOSE_WAIT are displayed in the second column from the right.

    • If yes, go to 23.
    • If no, go to 26.
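Rather than scanning the rows by eye, the state column can be counted. A sketch over sample output; on a live host, pipe netstat -anp | grep 8000 into the same awk filter:

```shell
# Sample lines standing in for real `netstat -anp` output on port 8000.
sample='tcp 0 0 172.28.8.130:8000 172.28.0.2:58434 TIME_WAIT -
tcp 0 0 172.28.8.130:8000 172.28.0.2:58435 CLOSE_WAIT -
tcp 0 0 172.28.8.130:8000 172.28.0.2:58436 SYN_RECV -
tcp 0 0 172.28.8.130:8000 172.28.0.2:58437 TIME_WAIT -'

# Count connections stuck in SYN_RECV or CLOSE_WAIT (field 6 is the state).
stuck=$(printf '%s\n' "$sample" | awk '$6 == "SYN_RECV" || $6 == "CLOSE_WAIT"' | wc -l)
echo "$stuck stuck connections"
```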

  23. Run the following command to obtain the process ID of the CPS service:

    ps -ef | grep cpsserver.py | grep -v grep | awk -F ' ' '{print $2}'

    Check whether information similar to the following is displayed, indicating that the process ID is successfully obtained:

    25567
    • If yes, go to 24.
    • If no, go to 26.

  24. Run the following commands to stop all obtained service processes:

    kill -9 process ID

    echo $?

    Process ID specifies the process ID obtained in 23.

    Check whether the command output is 0.

    0
    • If yes, go to 25.
    • If no, go to 26.

  25. Wait for 1 minute, run the cpssafe command to enter the secure operation mode, enter 1, enter the password as prompted, and run the following command to query host information:

    cps host-list

    Check whether information similar to the following is displayed:

    +--------------------------------------+-----------+----------------------+--------+------------+------+ 
    | id                                   | boardtype | roles                | status | manageip   | omip | 
    +--------------------------------------+-----------+----------------------+--------+------------+------+ 
    | D826749C-FD53-118E-8567-000000821800 | BC21THSA  | auth,                | normal | 172.28.6.1 |      | 
    |                                      |           | blockstorage-driver, |        |            |      | 
    |                                      |           | compute,             |        |            |      | 
    |                                      |           | controller,          |        |            |      | 
    |                                      |           | database,            |        |            |      | 
    |                                      |           | image,               |        |            |      | 
    |                                      |           | measure,             |        |            |      | 
    |                                      |           | rabbitmq,            |        |            |      | 
    |                                      |           | router,              |        |            |      | 
    |                                      |           | sys-server           |        |            |      | 
    | 57CB4932-E26B-1167-8567-000000821800 | BC21THSA  | auth,                | normal | 172.28.0.2 |      | 
    |                                      |           | blockstorage-driver, |        |            |      | 
    |                                      |           | compute,             |        |            |      | 
    |                                      |           | controller,          |        |            |      | 
    |                                      |           | database,            |        |            |      | 
    |                                      |           | image,               |        |            |      | 
    |                                      |           | measure,             |        |            |      | 
    |                                      |           | mongodb,             |        |            |      | 
    |                                      |           | router,              |        |            |      | 
    |                                      |           | sys-server           |        |            |      | 
    +--------------------------------------+-----------+----------------------+--------+------------+------+
    • If yes, no further action is required.
    • If no, go to 26.

  26. Contact technical support for assistance.

Active and Standby Services Became Both Standby

Symptom

After a user runs the cps template-instance-list --service service_name template_name command to query the status of a service that is supposed to work in active/standby mode, the command output shows both service instances in the standby state, and the instances remain in the standby state for more than 5 minutes.

Possible Causes

Data between ZooKeeper servers is inconsistent. In this case, make the system arbitrate the active and standby services again.

Procedure
  1. Use PuTTY to log in to the first host in the AZ.

    The user account is fsp and the default password is Huawei@CLOUD8.

    NOTE:
    • The system supports login authentication using a password or a private-public key pair. If a private-public key pair is used for login authentication, see Using PuTTY to Log In to a Node in Key Pair Authentication Mode.
    • For details about the IP address of the External OM plane, see the LLD generated by FCD sheet of the xxx_export_all.xlsm file exported from FusionCloud Deploy during software installation, and search for the IP addresses corresponding to VMs and nodes. The parameter names in the different scenarios are as follows:
      • Cascading layer in the Region Type I scenario: Cascading-ExternalOM-Reverse-Proxy; cascaded layer: Cascaded-ExternalOM-Reverse-Proxy.
      • Region Type II and Type III scenarios: ExternalOM-Reverse-Proxy.

  2. Run the following command to switch to the root user, and enter the root password as prompted:

    su - root

    The default password of the root user is Huawei@CLOUD8!.

  3. Run the TMOUT=0 command to disable user logout upon system timeout.
  4. Import environment variables. For details, see Importing Environment Variables.
  5. Run the following command to query the IP address segment of ZooKeeper servers:

    ip addr show | grep zk-s | awk -F '/' '{print $1}' | awk -F ' ' '{print $2}' | awk -F '.' '{print $1"."$2"."$3}'

    Information similar to the following is displayed:

    172.28.8

    According to the obtained IP address segment, the ZooKeeper IP addresses are 172.28.8.121, 172.28.8.122, and 172.28.8.123.
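To see what the pipeline above does, here it is applied to a single sample inet line (the interface name and addresses are illustrative):

```shell
# A sample `ip addr show` line for a ZooKeeper-plane interface.
line='    inet 172.28.8.121/24 brd 172.28.8.255 scope global zk-s'

# Same pipeline as step 5: drop the prefix length, keep the address,
# then keep the first three octets (the segment shared by all three servers).
segment=$(printf '%s\n' "$line" \
  | awk -F '/' '{print $1}' \
  | awk -F ' ' '{print $2}' \
  | awk -F '.' '{print $1"."$2"."$3}')
echo "$segment"
```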

  6. Run the following commands to log in to a ZooKeeper server. server_ip is the first of the three IP addresses.

    export JAVA_HOME=/usr/lib/jre

    sh /usr/local/bin/zookeeper/zookeeper/bin/zkCli.sh -server server_ip:9880

  7. Run the following command to configure the ZooKeeper access control list (ACL) data:

    addauth digest zookeeper:cps200@HW

  8. Run the following command to query the arbitrated node queue for the hosts where the two standby services are deployed:

    ls /cps/runtime/srvdeploy/service_name.template_name/haarbitration

    In this command, service_name indicates the service name, and template_name indicates the component name, for example, haproxy.haproxy.

    Information similar to the following is displayed:

    [10000000004, 10000000003]

  9. Run the following command to log out of the ZooKeeper server:

    quit

  10. Use the other two IP addresses to log in to the ZooKeeper servers, respectively, and query the arbitrated node queues. For details, see 6 to 9.
  11. Check whether the arbitrated node queues on the three servers are consistent.

    • If yes, go to 12.
    • If no, contact technical support for assistance.

  12. Log in to the three controller hosts one by one and run the following commands on each controller host:

    export JAVA_HOME=/usr/lib/jre

    sh /usr/local/bin/zookeeper/zookeeper/bin/zkServer.sh status /usr/local/bin/zookeeper/zookeeper/conf/zoo_Clusters.cfg

    Information similar to the following is displayed:

    JMX disabled by user request 
    Using config: /usr/local/bin/zookeeper/zookeeper/conf/zoo_Clusters.cfg 
    Mode: leader

    Perform the next step if the Mode value is leader in the command output.
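The Mode check here and in step 14 can be scripted by extracting the Mode: line. A sketch over sample output; on a controller, substitute the real zkServer.sh status output:

```shell
# Sample zkServer.sh status output; capture the real output on a host.
out='JMX disabled by user request
Using config: /usr/local/bin/zookeeper/zookeeper/conf/zoo_Clusters.cfg
Mode: leader'

# Pull out the value after "Mode: " (leader or follower).
mode=$(printf '%s\n' "$out" | awk -F ': ' '/^Mode:/ {print $2}')
echo "$mode"
```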

  13. Run the following command to stop the ZooKeeper service:

    kill -9 `ps -ef | grep -v grep | grep /usr/local/bin/zookeeper | awk -F ' ' '{print $2}'`

  14. After the ZooKeeper service is stopped, wait for 5 seconds and then run the following commands:

    export JAVA_HOME=/usr/lib/jre

    sh /usr/local/bin/zookeeper/zookeeper/bin/zkServer.sh status /usr/local/bin/zookeeper/zookeeper/conf/zoo_Clusters.cfg

    Information similar to the following is displayed:

    JMX disabled by user request 
    Using config: /usr/local/bin/zookeeper/zookeeper/conf/zoo_Clusters.cfg 
    Mode: follower     

    Check whether the above command output is displayed.

    • If yes, go to 15.
    • If no, wait for 2 minutes and run the commands again. If the fault persists, contact technical support for assistance.

  15. Run the following command to query the service status:

    cps template-instance-list --service service name template name

    In this command, service name indicates the service name, and template name indicates the component name.

    For example, run the following command to query the HAProxy service status:

    cps template-instance-list --service haproxy haproxy

    The fault has been rectified if information similar to the following is displayed:

    +------------+---------------------------------+---------+------------+ 
    | instanceid | componenttype                   | status  | runsonhost | 
    +------------+---------------------------------+---------+------------+ 
    | 0          | haproxy-2015.1.521-1.noarch.rpm | standby | 106control | 
    | 1          | haproxy-2015.1.521-1.noarch.rpm | active  | 107control | 
    +------------+---------------------------------+---------+------------+     

    Run the command every 10 seconds to check whether the fault is rectified.

    • If yes, no further action is required.
    • If no and the fault persists for more than 5 minutes, contact technical support for assistance.

Remote Storage Is Faulty

Symptom

A remote disk attached to services, including the Image, image-cache, and MongoDB services, is faulty, making those services unavailable.

Possible Causes

The remote storage is faulty, resulting in failures to write data to or read data from the LUNs used by the services.

Procedure
  1. Resolve the fault of the remote storage according to the troubleshooting manual of the remote storage in use.
  2. Delete all VMs that boot from the local images on each host.

    These VMs, including the management VMs created on FusionSphere OpenStack, can be deleted on the FusionSphere OpenStack web client.

  3. Query the storage rules applied to each host in the system.

    1. On the FusionSphere OpenStack web client, choose Configuration > Disk, query host groups, locate the host group that uses the faulty remote storage, and take note of the host IDs in the group.
    2. Use PuTTY to log in to the FusionSphere OpenStack controller node through the IP address of the External OM plane.
      The user account is fsp and the default password is Huawei@CLOUD8.
      NOTE:
      • The system supports login authentication using a password or a private-public key pair. If a private-public key pair is used for login authentication, see Using PuTTY to Log In to a Node in Key Pair Authentication Mode.
      • For details about the IP address of the External OM plane, see the LLD generated by FCD sheet of the xxx_export_all.xlsm file exported from FusionCloud Deploy during software installation, and search for the IP addresses corresponding to VMs and nodes. The parameter names in the different scenarios are as follows:
        • Cascading layer in the Region Type I scenario: Cascading-ExternalOM-Reverse-Proxy; cascaded layer: Cascaded-ExternalOM-Reverse-Proxy.
        • Region Type II and Type III scenarios: ExternalOM-Reverse-Proxy.
    3. Run the following command to switch to the root user, and enter the root password as prompted:

      su - root

      The default password of the root user is Huawei@CLOUD8!.

    4. Run the TMOUT=0 command to disable user logout upon system timeout.
    5. Import environment variables. For details, see Importing Environment Variables.
    6. Run the following command to query storage rules configured in the system:

      cps hostcfg-list --type storage

      Information similar to the following is displayed:

      +---------+-------------------+--------------------------------+ 
      | type    | name              | hosts                          | 
      +---------+-------------------+--------------------------------+ 
      | storage | default           | default:all                    | 
      |         |                   |                                | 
      | storage | control_group0    | hostid:first-node, second-node | 
      |         |                   |                                | 
      | storage | compute_group0    | hostid:fourth-node             | 
      |         |                   |                                | 
      | storage | control_group1    | hostid:third-node              | 
      +---------+-------------------+--------------------------------+     

      In the command output, the hosts column displays the storage rules matching conditions. Currently, the matching priority is as follows: MAC address > host ID > board type > role > default: all.

      Locate the target storage rule based on the host ID.

    7. Run the following command to query the disk partition sizes occupied by each service in the storage rule and take note of the sizes:

      cps hostcfg-show --type storage ${hostcfg_name}

      In this command:

      • ${hostcfg_name} specifies the name of the target storage rule.
      • The partition names for the Image, image-cache, and MongoDB services are image, image-cache, and ceilometer-data, respectively. Take note of the sizes of these three partitions.

  4. Run the following commands to delete the remote storage used in the storage rule:

    cps hostcfg-item-delete --item logical-volume --lvname ${lv_name} --type storage ${hostcfg_name}

    cps commit

    In this command:

    • The ${lv_name} values of the Image, image-cache, and MongoDB services are image, image-cache, and ceilometer-data, respectively.
    • ${hostcfg_name} specifies the name of the target storage rule.

  5. Manually restart all hosts using this storage rule.

    The hosts must be restarted in sequence. You can restart the second host only after the first host is successfully restarted and is running properly.

  6. On the FusionSphere OpenStack web client, choose Configuration > Disk and reconfigure the remote storage.
  7. On the host you have logged in to, create a storage rule.

    If all hosts have been restarted, use PuTTY to log in to a host again.

    cps hostcfg-add --type storage ${new_hostcfg_name}

    cps commit

    The new storage rule must be named starting with control_.

    NOTE:

    Two hosts are considered to have the same configuration only when they have the same model, hardware configuration, hardware specifications, and slots. If the hosts involved have different configurations, create one storage rule for each type of host.

  8. Run the following commands to configure the new storage rule to be the same as the original one:

    cps hostcfg-item-update --item logical-volume --lvname image --size ${size} --type storage ${new_hostcfg_name}

    cps commit

    cps hostcfg-item-update --item logical-volume --lvname image-cache --size ${size} --type storage ${new_hostcfg_name}

    cps commit

    cps hostcfg-item-update --item logical-volume --lvname ceilometer-data --size ${size} --type storage ${new_hostcfg_name}

    cps commit

    ${size} specifies the partition size in the original storage rule.

  9. Run the following commands to change the backend storage type of the partition to the remote storage:

    cps hostcfg-item-update --item logical-volume --lvname image --backendtype remote --type storage ${new_hostcfg_name}

    cps commit

    cps hostcfg-item-update --item logical-volume --lvname image-cache --backendtype remote --type storage ${new_hostcfg_name}

    cps commit

    cps hostcfg-item-update --item logical-volume --lvname ceilometer-data --backendtype remote --type storage ${new_hostcfg_name}

    cps commit

  10. Run the following commands to delete hosts from the original storage rule and add them to the new storage rule:

    cps hostcfg-host-delete --type storage --host hostid=${hostid} --type storage ${hostcfg_name}

    cps hostcfg-host-add --type storage --host hostid=${hostid} --type storage ${new_hostcfg_name}

    Multiple host IDs are separated by commas (,).

    NOTE:
    • The hosts with different configurations must be added to different storage rules.
    • After hosts are deleted from the original rule, choose Configuration > Disk on the FusionSphere OpenStack web client and manually delete the associated host groups.

  11. Wait for a while and run the following command to query the partition configuration:

    df -h

    The new partitions are formatted when they are created, so the partitioning duration depends on the partition capacity: a larger partition takes longer to format. The operation is complete only when all the new partitions are displayed. The new partitions appear as extend_vg-image, extend_vg-image--cache, and extend_vg-ceilometer--data.
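As a quick way to confirm that all three partitions are present, the df output can be grepped for the expected device-mapper names (the sample output, sizes, and mount points below are illustrative). Note that device-mapper doubles each hyphen inside an LV name, which is why image-cache appears as extend_vg-image--cache:

```shell
# Sample `df -h` lines standing in for real output after repartitioning.
sample='/dev/mapper/extend_vg-image            50G  1.2G   49G   3% /opt/HUAWEI/image
/dev/mapper/extend_vg-image--cache     20G  500M   19G   3% /opt/HUAWEI/image-cache
/dev/mapper/extend_vg-ceilometer--data 30G  800M   29G   3% /opt/HUAWEI/ceilometer-data'

# Check for each expected device-mapper name ('-' inside an LV name is
# escaped as '--' by device-mapper).
missing=0
for lv in extend_vg-image extend_vg-image--cache extend_vg-ceilometer--data; do
  printf '%s\n' "$sample" | grep -q "$lv" || missing=$((missing + 1))
done
echo "$missing partitions missing"
```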

  12. On the FusionSphere OpenStack web client, choose Configuration > Disk and check whether the new host groups are properly configured.

    • If yes, no further action is required.
    • If no, contact technical support for assistance.

Images Fail to Be Uploaded or Downloaded When a Swift Partition Is Connected to a Remote Storage Device

Symptom

When a Swift partition is connected to a remote storage device, images fail to be uploaded or downloaded. After a user logs in to the node where the swift-store component resides and runs the ls /opt/HUAWEI/swift command, the message "Input/output error" is displayed.

Possible Causes

The network or remote storage device is abnormal.

Procedure
  1. Rectify the fault of the remote storage device. For details, see the product documentation of the target remote storage device.
  2. Use PuTTY to log in to the first FusionSphere OpenStack node.

    Ensure that the reverse proxy IP address and username fsp are used to establish the connection. The default password of user fsp is Huawei@CLOUD8.

    NOTE:
    • The system supports login authentication using a password or a private-public key pair. If a private-public key pair is used for login authentication, see Using PuTTY to Log In to a Node in Key Pair Authentication Mode.
    • For details about the IP address of the External OM plane, see the LLD generated by FCD sheet of the xxx_export_all.xlsm file exported from FusionCloud Deploy during software installation, and search for the IP addresses corresponding to VMs and nodes. The parameter names in the different scenarios are as follows:
      • Cascading layer in the Region Type I scenario: Cascading-ExternalOM-Reverse-Proxy; cascaded layer: Cascaded-ExternalOM-Reverse-Proxy.
      • Region Type II and Type III scenarios: ExternalOM-Reverse-Proxy.

  3. Run the following command to switch to user root:

    su - root

    The default password of user root is Huawei@CLOUD8!.

  4. Import environment variables.

    For details, see Importing Environment Variables.

  5. Run the following command to stop the swift-store service:

    cps host-template-instance-operate --action stop --service swift swift-store

  6. Run the following command to query the nodes where the Swift service is located:

    cps template-instance-list --service swift swift-store

  7. Log in to the nodes queried in 6 in sequence and run the following commands to rectify the fault:

    cd /home/fsp

    umount /opt/HUAWEI/swift

    mount /dev/mapper/extend_vg-swift /opt/HUAWEI/swift

  8. Run the following command to start the swift-store service:

    cps host-template-instance-operate --action start --service swift swift-store

Updated: 2019-06-10

Document ID: EDOC1100063248
