FusionCloud 6.3.1.1 Troubleshooting Guide 02

File Storage Faults

The Active/Standby States of GaussDB Nodes Are Abnormal

Symptom

A user fails to log in to the OceanStor DJ GUI, and OceanStor DJ services are abnormal.

Possible Causes
  • Services on all nodes were stopped, or GaussDB nodes were powered off unexpectedly, for more than 10 minutes. As a result, services on the active GaussDB node cannot be restarted properly.
  • The system time was changed by more than 10 minutes. As a result, services on the active GaussDB node cannot be restarted properly.
Procedure
  1. Use PuTTY to log in to the SFS-DJ01, SFS-DJ02, or SFS-DJ03 node using the management plane IP address of the node.

    The default account and password are djmanager and CloudService@123!, respectively.

    To obtain the management plane IP address of the SFS_DJ01, SFS_DJ02, or SFS_DJ03 node, search for SFS_DJ01, SFS_DJ02, or SFS_DJ03 respectively in the LLD file xxx_export_all.xlsm generated by FCD.

  2. Run the following command and enter the password of user root (Cloud12#$) to switch to user root:

    su - root

  3. Run the following command to disable user logout upon timeout:

    TMOUT=0

  4. Run the show_service --service omm-ha command, and determine the two nodes where omm-ha is running according to the command output.

    The command output is as follows:

    [root@localhost ~]# show_service --service omm-ha  
    +-------------+---------+---------+------------+  
    | instanceid  | service | status  | runsonhost |  
    +-------------+---------+---------+------------+  
    | DJ03_omm-ha | omm-ha  | active  | DJ03       |  
    | DJ01_omm-ha | omm-ha  | standby | DJ01       |  
    +-------------+---------+---------+------------+     
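    If the service list is long, the runsonhost column can be extracted with awk instead of being read by eye. The sketch below parses a copy of the sample output above; the host names are taken from that example and may differ in your environment:

    ```shell
    # Sample show_service output rows, copied from the example above
    output='| DJ03_omm-ha | omm-ha  | active  | DJ03       |
    | DJ01_omm-ha | omm-ha  | standby | DJ01       |'

    # Split on "|"; field 5 is the runsonhost column. Strip padding spaces.
    hosts=$(printf '%s\n' "$output" | awk -F'|' '{gsub(/ /, "", $5); print $5}')
    echo "$hosts"
    ```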

  5. Log in to the two nodes where omm-ha is running. Run the bash /usr/local/bin/ha/ha/config_script/sync_monitor.sh get_status command to check the last online time of GaussDB.

    The command output is as follows:

    [root@localhost ~]# bash /usr/local/bin/ha/ha/config_script/sync_monitor.sh get_status  
    DB last online role : Standby  
    DB last online time : 2018-03-21 19:14:25.      

  6. Compare the last online time of GaussDB on each node obtained in 5 with the current time.

    • If either time difference is greater than 10 minutes, go to 7.
    • If neither time difference is greater than 10 minutes, go to 9.
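    The comparison in this step can be scripted with GNU date (available on the DJ nodes' Linux OS). A minimal sketch, using the timestamp from the sample output above and a made-up current time; in practice substitute $(date '+%F %T') for the current time:

    ```shell
    last_online="2018-03-21 19:14:25"   # from sync_monitor.sh get_status
    now="2018-03-21 19:30:00"           # illustration only; normally: now=$(date '+%F %T')

    # GNU date converts the timestamps to epoch seconds
    diff_min=$(( ( $(date -d "$now" +%s) - $(date -d "$last_online" +%s) ) / 60 ))
    echo "$diff_min"

    if [ "$diff_min" -gt 10 ]; then
        echo "greater than 10 minutes: go to 7"
    else
        echo "within 10 minutes: go to 9"
    fi
    ```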

  7. On the two nodes where omm-ha is running, run the bash /usr/local/bin/ha/ha/config_script/sync_monitor.sh get_status command. Determine the active GaussDB node according to the command output.

    • If the roles of the two nodes are Primary and Standby, the node with the Primary role is the active GaussDB node.
    • If the roles of both nodes are Primary, calculate the time difference between the last online time of GaussDB on each node and the current time. The node with a shorter time difference is the active GaussDB node.

  8. Run the bash /usr/local/bin/ha/ha/config_script/sync_monitor.sh reset_status command on the active GaussDB node.
  9. Wait 2 minutes, and then log in to the administrator GUI to check whether OceanStor DJ services are normal.

    • If yes, no further action is required.
    • If no, contact technical support for assistance.

Uninstalling OceanStor DJ Failed

Symptom

Uninstalling OceanStor DJ fails.

Possible Causes

A residual process from a previous OceanStor DJ uninstallation exists, so OceanStor DJ cannot be uninstalled again.

Procedure
  1. Use PuTTY to log in to the SFS-DJ01, SFS-DJ02, or SFS-DJ03 node using the management plane IP address of the node.

    The default account and password are djmanager and CloudService@123!, respectively.

    To obtain the management plane IP address of the SFS_DJ01, SFS_DJ02, or SFS_DJ03 node, search for SFS_DJ01, SFS_DJ02, or SFS_DJ03 respectively in the Tool-generated IP Parameters sheet of xxx_export_all_EN.xlsm.

  2. Run the following command and enter the password of user root (Cloud12#$) to switch to user root:

    su root

  3. Run the following command to disable user logout upon timeout:

    TMOUT=0

  4. Run the docker ps -a command to view the status of the Docker containers, and check whether the container corresponding to the component for which uninstalling OceanStor DJ failed is in the Exited state.

    [root@DJ182 inst]# docker ps -a 
    CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS                        PORTS               NAMES
    a5589b3df054        dashboard:1.2.10.2           "bash /etc/dashboard/"   33 hours ago        Exited (137) 20 seconds ago                       dashboard
    1e5e7e08f6c0        oms-controller:1.2.10.2      "/bin/bash /usr/bin/i"   33 hours ago        Up 33 hours                                       oms-controller
    672d9e966363        hermes:1.2.10.2              "/bin/bash -c 'sh /et"   33 hours ago        Up 33 hours                                       hermes
    3b2e1646cbdf        heat:1.2.10.2                "bash -c 'sh /install"   33 hours ago        Up 33 hours                                       heat-engine
    1e44b9b55269        heat:1.2.10.2                "bash -c 'sh /install"   33 hours ago        Up 33 hours                                       heat-api
    29027f6ae2cc        filemeter-service:1.2.10.1   "/bin/bash /usr/bin/S"   33 hours ago        Up 33 hours                                       filemeter-service
    f2bb90699e6d        filemeter-api:1.2.10.1       "/bin/bash /usr/bin/S"   33 hours ago        Up 33 hours                                       filemeter-api
    662653dcdda7        authkeepmgt:1.2.10.2         "/bin/bash -c /usr/bi"   33 hours ago        Up 31 hours                                       authkeepmgt
    a3bf03de8c2f        oms-agent:1.2.10.2           "/bin/bash /usr/bin/i"   33 hours ago        Up 33 hours                                       oms-agent
    ac75776db2cd        manila-scheduler:1.2.10.0    "/bin/bash /usr/bin/S"   33 hours ago        Up 33 hours                                       manila-scheduler
    fd2b42f8d015        manila-api:1.2.10.0          "/bin/bash /usr/bin/S"   33 hours ago        Up 33 hours                                       manila-api_tenant
    0c43cf729dd9        manila-api:1.2.10.0          "/bin/bash /usr/bin/S"   33 hours ago        Up 33 hours                                       manila-api_admin
    fe0beebc452b        oms-api:1.2.10.2             "/bin/bash /usr/bin/i"   33 hours ago        Up 33 hours                                       oms-api
    30cda53ce979        certms:1.2.10.2              "/bin/bash -c /usr/bi"   33 hours ago        Up 31 hours                                       certms
    9bf80fcbca10        rabbitmq:1.2.10.2            "bash /usr/local/lib/"   33 hours ago        Up 33 hours                                       rabbitmq
    681eb9754aa0        keystone:1.2.10.2            "/bin/bash keystone_r"   33 hours ago        Up 33 hours                                       keystone
    1e1805ce94c4        fms:1.2.10.2                 "bash /opt/huawei/dj/"   33 hours ago        Up 33 hours                                       fms
    f37ac70e39e9        cms:1.2.10.2                 "bash /etc/cms/cms-se"   33 hours ago        Up 33 hours                                       cms
    27aa50fe68bf        zookeeper:1.2.10.2           "/bin/bash -c 'bash /"   33 hours ago        Up 33 hours                                       zookeeper
    1b2ca1fff7c8        gaussdb:1.2.10.2             "bash /home/start_gau"   33 hours ago        Up 33 hours                                       gaussdb

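    To list only containers in the Exited state instead of scanning the whole table, Docker also supports a status filter: docker ps -a --filter status=exited. As an offline illustration, the grep below picks the exited row out of a trimmed copy of the sample output above (container names from that example):

    ```shell
    # Trimmed rows from the sample "docker ps -a" output above
    sample='a5589b3df054  dashboard:1.2.10.2       Exited (137) 20 seconds ago  dashboard
    1e5e7e08f6c0  oms-controller:1.2.10.2  Up 33 hours                  oms-controller'

    # Keep only rows whose STATUS column shows Exited
    printf '%s\n' "$sample" | grep Exited
    ```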
  5. Run the ps -ef | grep dashboardControl command to check whether an uncleared uninstallation process for the dashboard exists.

    • If the message dashboardControl -S STOP is displayed, note the process ID. As shown in Figure 14-3, the process ID is 10636. Go to 6.
      Figure 14-3 Output of the dashboardControl -S STOP process
    • If the message is not displayed, contact technical support for assistance.

  6. Run the kill -9 <process ID> command to forcibly stop the dashboardControl -S STOP process.
  7. Run the ps -ef | grep dashboardControl command again to check whether the dashboardControl -S STOP process still exists.

    • If yes, contact technical support for assistance.
    • If no, go to 8.
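    Extracting the PID can also be scripted: in ps -ef output, the second whitespace-separated field is the process ID. The sketch below uses a simulated row with the PID shown in Figure 14-3 (10636):

    ```shell
    # Simulated "ps -ef" row for the residual process (PID taken from the example)
    ps_line='root     10636     1  0 09:30 ?        00:00:00 dashboardControl -S STOP'

    # Field 2 of ps -ef output is the PID
    pid=$(printf '%s\n' "$ps_line" | awk '{print $2}')
    echo "$pid"
    # kill -9 "$pid"    # forcible stop, as described in the procedure
    ```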

  8. Uninstall OceanStor DJ again by referring to Uninstalling OceanStor DJ in the STaaS Solution V1R3C00RC1 SFS Software Installation Guide.

The ManageOne Operation Plane Displays "You are not allowed to perform any operation on a deleted resource"

Symptom

After GaussDB data is restored, the message "You are not allowed to perform any operation on a deleted resource." is displayed when you perform operations on a file system on the ManageOne operation plane.

Possible Causes

The file system was permanently deleted after the backup time. As a result, after the GaussDB database is restored from the backup, the file system record still exists, but no operation can be performed on it.

Procedure
  1. Use PuTTY to log in to the SFS-DJ01, SFS-DJ02, or SFS-DJ03 node using the management plane IP address of the node.

    The default account and password are djmanager and CloudService@123!, respectively.

    To obtain the management plane IP address of the SFS_DJ01, SFS_DJ02, or SFS_DJ03 node, search for SFS_DJ01, SFS_DJ02, or SFS_DJ03 respectively in the Tool-generated IP Parameters sheet of xxx_export_all_EN.xlsm.

  2. Run the following command and enter the password of user root (Cloud12#$) to switch to user root:

    su root

  3. Run the following command to disable user logout upon timeout:

    TMOUT=0

  4. Run the docker exec -it -u root manila-api_tenant bash command to enter the manila container.
  5. Run the vi /home/env.sh command to check whether the environment variables exist in the env.sh file.

    • If environment variables exist, go to 6. Environment variables are as follows:
      #!/bin/bash
      FULL_PATH=`readlink -f ${BASH_SOURCE}`
      CWD=`dirname ${FULL_PATH}`
      IP_ADDR=$(get_info.py --manage_float_ip)
      if [[ ${IP_ADDR} == *:* ]];then
          IP_ADDR="["${IP_ADDR}"]"
      fi
      export OS_PASSWORD=CloudService@123!
      export OS_AUTH_URL=https://${IP_ADDR}:35357/identity/v3
      export OS_USERNAME=manila
      export OS_TENANT_NAME=service
      export OS_PROJECT_DOMAIN_NAME=Default
      export OS_USER_DOMAIN_NAME=Default
      export OS_IDENTITY_API_VERSION=3
      export OS_SERVICE_ENDPOINT=https://${IP_ADDR}:35357/identity-admin/v3
      export OS_SERVICE_TOKEN=$(curl -g -k -i -X POST https://${IP_ADDR}:35357/identity-admin/v3/auth/tokens -H "Content-Type:application/json" -d '{"auth": {"identity": {"methods":[ "password" ],"password": {"user": {"name": "manila","domain": { "name": "Default" },"password": "CloudService@123!" } } }, "scope": {"project": { "name":"service", "domain": {"name":"Default" }}}}}' |grep "X-Subject-Token"|awk -F':' '{print $2}')
      export OS_REGION_NAME="az1.dc1"
      export OS_ENDPOINT_TYPE=internalURL
      export MANILA_ENDPOINT_TYPE=adminURL
      export MANILACLIENT_INSECURE=True
      NOTE:

      password is the password of the manila account; the default is CloudService@123!.

    • If no environment variables exist, copy and paste the preceding environment variable contents to the file, enter :wq! to save the file and exit, and then go to 6.
      NOTE:

      The value of OS_SERVICE_TOKEN is a single line of text, but it is automatically wrapped in the PDF file. After copying it, manually delete the newline characters.
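      If the token value picked up stray newlines when copied from the PDF, they can also be removed with tr before the value is pasted into env.sh. A small sketch with a made-up token fragment:

      ```shell
      # Made-up token fragment containing newlines introduced by PDF line wrapping
      pasted='gAAAAABc
      1234
      abcd'

      # Delete every newline so the token becomes a single line again
      token=$(printf '%s' "$pasted" | tr -d '\n')
      echo "$token"
      ```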

  6. Run the source /home/env.sh command to import the environment variables.
  7. Log in to the ManageOne O&M plane, view the tenant operation logs, and record the IDs of the file systems that were permanently deleted after the backup time.
  8. Run the manila force-delete <share_id> command to delete the file systems.

    Replace <share_id> with the IDs of the file systems recorded in 7.
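    When several file systems were recorded in 7, the command can be repeated in a loop. The IDs below are placeholders, and the echo is kept so the sketch only prints the commands it would run:

    ```shell
    # Placeholder share IDs recorded from the tenant operation logs
    share_ids="f3b6a6f0-0000-1111 f3b6a6f0-0000-2222"

    for id in $share_ids; do
        echo "manila force-delete $id"   # remove the echo to actually delete
    done
    ```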

  9. Run the rm /home/env.sh command to delete the environment variable file.
Updated: 2019-06-10

Document ID: EDOC1100063248