OceanStor 9000 V300R005C00 File System Feature Guide

Troubleshooting

A Snapshot Fails to Be Created

If a snapshot fails to be created, the file system cannot be protected using this snapshot. This problem does not affect the file system.

Symptom

Log in to DeviceManager. Choose Data Protection > Snapshot > Snapshot. Click Create. Select the directory for which you want to create a snapshot. Set parameters such as the snapshot name and click OK. An error message is returned.

Possible Causes

  • Possible cause 1: The operation does not comply with the rules for creating a snapshot. For example, the snapshot name already exists or the snapshot directory is nested with the directory of an existing snapshot.
  • Possible cause 2: The snas_cm process is abnormal.
  • Possible cause 3: The ccdb_server process is abnormal.
  • Possible cause 4: The operating status of the file system is abnormal.

Fault Diagnosis

Figure 5-14  Troubleshooting flowchart

Procedure

  • Possible cause 1: The operation does not comply with rules for creating a snapshot.
    1. Modify the snapshot parameters.

      • If a dialog box is displayed indicating that the snapshot name already exists, change the snapshot name.

      • If a dialog box is displayed indicating that the snapshot directories are nested, change the snapshot directory.

    2. Create a snapshot again and check whether the snapshot is created.

  • Possible cause 2: The snas_cm process is abnormal.
    1. Use PuTTY to log in as the omuser user to the node where the OceanStor 9000 management IP address resides. Run the su command and enter the root user's password to switch to the root user. In the CLI, run the ps -ef |grep snas_cm command to check whether the snas_cm process is abnormal.

    2. Restart the snas_cm process (see the consolidated command sketch below).

      In the CLI, run the kill -9 `ps -ef|grep snas_cm|grep -v grep|awk '{print $2}'` command to restart the snas_cm process.

    3. Create a snapshot again and check whether the snapshot is created.
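
      The commands in steps 1 and 2 can be run as the following sequence. This is a minimal sketch based on the commands above; run it as the root user on the node where the OceanStor 9000 management IP address resides.

      # List the snas_cm process. If no snas_cm process appears in the output, the process is not running, which typically indicates that it is abnormal.
      ps -ef | grep snas_cm | grep -v grep
      # Kill the snas_cm process. According to this procedure, the process is then restarted automatically.
      kill -9 `ps -ef|grep snas_cm|grep -v grep|awk '{print $2}'`
      # List the process again to confirm that a new snas_cm process is running.
      ps -ef | grep snas_cm | grep -v grep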

  • Possible cause 3: The ccdb_server process is abnormal.
    1. Use PuTTY to log in as the omuser user to the node where the OceanStor 9000 management IP address resides. Run the su command and enter the root user's password to switch to the root user. In the CLI, run the MmlBatch 4004 "mon ccdbmap 0" command to query the ID of the primary CCDB node.

      Master Ccdb Id: 1 indicates that the ID of the primary CCDB node is 1.

      *****************CCDB MAP Start***************
      Epoch : 6
      Master Ccdb Id : 1
      RackName : h1
      Node Id : 2
      Role : 1
      State : 1
      RackName : h1
      Node Id : 1
      Role : 0
      State : 1
      RackName : h1
      Node Id : 3
      Role : 1
      State : 1
      *****************CCDB MAP End***************

    2. Run cat /proc/monc_nodemap to query the IP address of the primary CCDB node.

      Based on the primary CCDB node ID queried in 1, the IP address of the primary CCDB node can be obtained. In the following example, the IP address of the primary CCDB node is 192.168.100.14.

      NODE14:~ # cat /proc/monc_nodemap 
      ***************** Node Map *****************
      Node: NodeID(3), BackIp(0x61ca640f), BirthTime(6148640756791479442), DevName(192.168.100.15), ClusterID(snassnap11431590979863), NodeType(1), RegStatus(1), FaultTime(0), DelTime(0)
      Node: NodeID(2), BackIp(0x61ca6410), BirthTime(6148640610762061891), DevName(192.168.100.16), ClusterID(snassnap11431590979863), NodeType(1), RegStatus(1), FaultTime(0), DelTime(0)
      Node: NodeID(1), BackIp(0x61ca640e), BirthTime(6148640606467511425), DevName(192.168.100.14), ClusterID(snassnap11431590979863), NodeType(1), RegStatus(1), FaultTime(0), DelTime(0)
      ************** Local Node Info *************
      Node RegStat: 1, NodeId: 1, BirthTime: 6148640606467511425, ClusterId: snassnap11431590979863, DevName: 192.168.100.14, NodeType: 1, FaultTime: 0, DelTime: 0
      NIDStat: 1(1:normal), HbStop: 0(1:stop), Ntf: 0(0:done), NIDFlg: 0(0:normal 1:map_fault 2:detect_fault)

    3. Use PuTTY to log in to the primary CCDB node as the omuser user. Run the su command and enter the root user's password to switch to the root user. In the CLI, run the cat /proc/ccdb_statemap command to check whether the CCDB process status is normal.

      If result:0,status:2 is returned, the CCDB process status is normal.

      NODE15:/home/omuser # cat /proc/ccdb_statemap 
      result:0,status:2

    4. Run ps -ef |grep ccdb_server to view the CCDB process.
    5. Restart the ccdb_server process (see the consolidated command sketch below).

      1. Run the following command to stop the CCDB process:

        /opt/huawei/deploy/bin/daemon -s /opt/huawei/snas_cluster/bin/ccdb_server

      2. Run the following command to start the CCDB process:

        /opt/huawei/deploy/bin/daemon /opt/huawei/snas_cluster/bin/ccdb_server

    6. Create a snapshot again and check whether the snapshot is created.
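
      The command-line operations in steps 1 through 5 can be summarized as the following sketch. The node names, IDs, and IP addresses in the example output above are illustrative; use the values returned in your own environment.

      # On the node where the OceanStor 9000 management IP address resides (as the root user):
      MmlBatch 4004 "mon ccdbmap 0"    # note the value of "Master Ccdb Id"
      cat /proc/monc_nodemap           # find the DevName (IP address) of the node with that ID

      # On the primary CCDB node (as the root user):
      cat /proc/ccdb_statemap          # "result:0,status:2" indicates that the CCDB process status is normal
      ps -ef | grep ccdb_server | grep -v grep   # view the ccdb_server process

      # If the CCDB process status is not normal, stop and then start ccdb_server:
      /opt/huawei/deploy/bin/daemon -s /opt/huawei/snas_cluster/bin/ccdb_server
      /opt/huawei/deploy/bin/daemon /opt/huawei/snas_cluster/bin/ccdb_server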

  • Possible cause 4: The operating status of the file system is abnormal.
    1. Rectify the fault. For details, see The Running Status of a File System Fails to Be Queried or an Incorrect Running State Is Displayed.
    2. Create a snapshot again and check whether the snapshot is created.

      • If yes, no further action is required.
      • If no, contact technical support.

A Snapshot Fails to Be Queried

If a snapshot fails to be queried, the snapshot information cannot be viewed. This problem does not affect the file system.

Symptom

Log in to DeviceManager. Choose Data Protection > Snapshot > Snapshot. Click Refresh. An error message dialog box is displayed.

Possible Causes

  • Possible cause 1: The snas_cm process is abnormal.
  • Possible cause 2: The ccdb_server process is abnormal.
  • Possible cause 3: The operating status of the file system is abnormal.

Fault Diagnosis

Figure 5-15  Troubleshooting flowchart

Procedure

  • Possible cause 1: The snas_cm process is abnormal.
    1. Use PuTTY to log in to the node where the OceanStor 9000 management IP address resides as the omuser user. Run the su command and enter the root user's password to switch to the root user. In the CLI, run the ps -ef |grep snas_cm command to check whether the snas_cm process is abnormal.

    2. Restart the snas_cm process.

      In the CLI, run the kill -9 `ps -ef|grep snas_cm|grep -v grep|awk '{print $2}'` command to restart the snas_cm process.

    3. Query the snapshot again and check whether the snapshot information can be viewed.

  • Possible cause 2: The ccdb_server process is abnormal.
    1. Use PuTTY to log in as the omuser user to the node where the OceanStor 9000 management IP address resides. Run the su command and enter the root user's password to switch to the root user. In the CLI, run the MmlBatch 4004 "mon ccdbmap 0" command to query the ID of the primary CCDB node.

      Master Ccdb Id: 1 indicates that the ID of the primary CCDB node is 1.

      *****************CCDB MAP Start***************
      Epoch : 6
      Master Ccdb Id : 1
      RackName : h1
      Node Id : 2
      Role : 1
      State : 1
      RackName : h1
      Node Id : 1
      Role : 0
      State : 1
      RackName : h1
      Node Id : 3
      Role : 1
      State : 1
      *****************CCDB MAP End***************

    2. Run cat /proc/monc_nodemap to query the IP address of the primary CCDB node.

      Based on the primary CCDB node ID queried in 1, the IP address of the primary CCDB node can be obtained. In the following example, the IP address of the primary CCDB node is 192.168.100.14.

      NODE14:~ # cat /proc/monc_nodemap 
      ***************** Node Map *****************
      Node: NodeID(3), BackIp(0x61ca640f), BirthTime(6148640756791479442), DevName(192.168.100.15), ClusterID(snassnap11431590979863), NodeType(1), RegStatus(1), FaultTime(0), DelTime(0)
      Node: NodeID(2), BackIp(0x61ca6410), BirthTime(6148640610762061891), DevName(192.168.100.16), ClusterID(snassnap11431590979863), NodeType(1), RegStatus(1), FaultTime(0), DelTime(0)
      Node: NodeID(1), BackIp(0x61ca640e), BirthTime(6148640606467511425), DevName(192.168.100.14), ClusterID(snassnap11431590979863), NodeType(1), RegStatus(1), FaultTime(0), DelTime(0)
      ************** Local Node Info *************
      Node RegStat: 1, NodeId: 1, BirthTime: 6148640606467511425, ClusterId: snassnap11431590979863, DevName: 192.168.100.14, NodeType: 1, FaultTime: 0, DelTime: 0
      NIDStat: 1(1:normal), HbStop: 0(1:stop), Ntf: 0(0:done), NIDFlg: 0(0:normal 1:map_fault 2:detect_fault)

    3. Use PuTTY to log in to the primary CCDB node as the omuser user. Run the su command and enter the root user's password to switch to the root user. In the CLI, run the cat /proc/ccdb_statemap command to check whether the CCDB process status is normal.

      If result:0,status:2 is returned, the CCDB process status is normal.

      NODE15:/home/omuser # cat /proc/ccdb_statemap 
      result:0,status:2

    4. Run ps -ef |grep ccdb_server to view the CCDB process.
    5. Restart the ccdb_server process.

      1. Run the following command to stop the CCDB process:

        /opt/huawei/deploy/bin/daemon -s /opt/huawei/snas_cluster/bin/ccdb_server

      2. Run the following command to start the CCDB process:

        /opt/huawei/deploy/bin/daemon /opt/huawei/snas_cluster/bin/ccdb_server

    6. Query the snapshot again and check whether the snapshot information can be viewed.

  • Possible cause 3: The operating status of the file system is abnormal.
    1. Rectify the fault by referring to The Running Status of a File System Fails to Be Queried or an Incorrect Running State Is Displayed.
    2. Query the snapshot again and check whether the snapshot information can be viewed.

      • If yes, no further action is required.
      • If no, contact technical support.

A Snapshot Fails to Be Deleted

If a snapshot fails to be deleted, the existing snapshot cannot be deleted. This problem does not affect the file system.

Symptom

Log in to DeviceManager. Choose Data Protection > Snapshot > Snapshot. Select the snapshot that you want to delete and click Delete. Confirm the involved risks and click OK. An error message dialog box is displayed, such as Snapshot ID is not exist or The communication is abnormal or the system is busy, indicating that the operation failed.

Possible Causes

  • Possible cause 1: The snas_cm process is abnormal.
  • Possible cause 2: The ccdb_server process is abnormal.
  • Possible cause 3: The operating status of the file system is abnormal.

Fault Diagnosis

Figure 5-16  Troubleshooting flowchart

Procedure

  • Possible cause 1: The snas_cm process is abnormal.
    1. Use PuTTY to log in to the node where the OceanStor 9000 management IP address resides as the omuser user. Run the su command and enter the root user's password to switch to the root user. In the CLI, run the ps -ef |grep snas_cm command to check whether the snas_cm process is abnormal.

    2. Restart the snas_cm process.

      In the CLI, run the kill -9 `ps -ef|grep snas_cm|grep -v grep|awk '{print $2}'` command to restart the snas_cm process.

    3. Delete the snapshot again and check whether the snapshot is deleted.

  • Possible cause 2: The ccdb_server process is abnormal.
    1. Use PuTTY to log in as the omuser user to the node where the OceanStor 9000 management IP address resides. Run the su command and enter the root user's password to switch to the root user. In the CLI, run the MmlBatch 4004 "mon ccdbmap 0" command to query the ID of the primary CCDB node.

      Master Ccdb Id: 1 indicates that the ID of the primary CCDB node is 1.

      *****************CCDB MAP Start***************
      Epoch : 6
      Master Ccdb Id : 1
      RackName : h1
      Node Id : 2
      Role : 1
      State : 1
      RackName : h1
      Node Id : 1
      Role : 0
      State : 1
      RackName : h1
      Node Id : 3
      Role : 1
      State : 1
      *****************CCDB MAP End***************

    2. Run cat /proc/monc_nodemap to query the IP address of the primary CCDB node.

      Based on the primary CCDB node ID queried in 1, the IP address of the primary CCDB node can be obtained. In the following example, the IP address of the primary CCDB node is 192.168.100.14.

      NODE14:~ # cat /proc/monc_nodemap 
      ***************** Node Map *****************
      Node: NodeID(3), BackIp(0x61ca640f), BirthTime(6148640756791479442), DevName(192.168.100.15), ClusterID(snassnap11431590979863), NodeType(1), RegStatus(1), FaultTime(0), DelTime(0)
      Node: NodeID(2), BackIp(0x61ca6410), BirthTime(6148640610762061891), DevName(192.168.100.16), ClusterID(snassnap11431590979863), NodeType(1), RegStatus(1), FaultTime(0), DelTime(0)
      Node: NodeID(1), BackIp(0x61ca640e), BirthTime(6148640606467511425), DevName(192.168.100.14), ClusterID(snassnap11431590979863), NodeType(1), RegStatus(1), FaultTime(0), DelTime(0)
      ************** Local Node Info *************
      Node RegStat: 1, NodeId: 1, BirthTime: 6148640606467511425, ClusterId: snassnap11431590979863, DevName: 192.168.100.14, NodeType: 1, FaultTime: 0, DelTime: 0
      NIDStat: 1(1:normal), HbStop: 0(1:stop), Ntf: 0(0:done), NIDFlg: 0(0:normal 1:map_fault 2:detect_fault)

    3. Use PuTTY to log in to the primary CCDB node as the omuser user. Run the su command and enter the root user's password to switch to the root user. In the CLI, run the cat /proc/ccdb_statemap command to check whether the CCDB process status is normal.

      If result:0,status:2 is returned, the CCDB process status is normal.

      NODE15:/home/omuser # cat /proc/ccdb_statemap 
      result:0,status:2

    4. Run ps -ef |grep ccdb_server to view the CCDB process.
    5. Restart the ccdb_server process.

      1. Run the following command to stop the CCDB process:

        /opt/huawei/deploy/bin/daemon -s /opt/huawei/snas_cluster/bin/ccdb_server

      2. Run the following command to start the CCDB process:

        /opt/huawei/deploy/bin/daemon /opt/huawei/snas_cluster/bin/ccdb_server

    6. Delete the snapshot again and check whether the snapshot is deleted.

  • Possible cause 3: The operating status of the file system is abnormal.
    1. Rectify the fault by referring to The Running Status of a File System Fails to Be Queried or an Incorrect Running State Is Displayed.
    2. Delete the snapshot again and check whether the snapshot is deleted.

      • If yes, no further action is required.
      • If no, contact technical support.

A Periodic Snapshot Policy Fails to Be Created

If a periodic snapshot policy fails to be created, the file system cannot be protected using the snapshot.

Symptom

Log in to DeviceManager. When you create a periodic snapshot policy, the Execution Result dialog box is displayed, indicating that the creation failed. You can view the details in the Cause And Suggestion column, as shown in Figure 5-17.

Figure 5-17  Symptom when creating a periodic snapshot policy failed

Possible Causes

  • Possible cause 1: A cause is displayed in the Cause And Suggestion column, for example, the snapshot is nested, the policy name already exists, or the directory does not exist.
  • Possible cause 2: An internal error occurs.

Fault Diagnosis

Figure 5-18  Troubleshooting flowchart

Procedure

  1. On DeviceManager, check whether the displayed cause is an internal error.

    • If yes, contact Huawei technical support.
    • If no, go to 2.

  2. Perform operations based on the displayed cause and suggestion. Create a periodic snapshot policy again. Check whether a periodic snapshot policy is created.

    • If yes, no further action is required.
    • If no, contact Huawei technical support.

A Periodic Snapshot Policy Fails to Be Deleted

If a periodic snapshot policy fails to be deleted, the scheduled snapshot creation cannot be stopped.

Symptom

Log in to DeviceManager. When you delete a periodic snapshot policy, the Execution Result dialog box is displayed, indicating that the deletion failed. You can view the details in the Cause And Suggestion column, as shown in Figure 5-19.

Figure 5-19  Symptom when deleting a periodic snapshot policy failed

Possible Causes

  • Possible cause 1: A cause is displayed in the Cause And Suggestion column, for example, the communication is abnormal or the system is busy.
  • Possible cause 2: An internal error occurs.

Fault Diagnosis

Figure 5-20  Troubleshooting flowchart

Procedure

  1. On DeviceManager, check whether the displayed cause is an internal error.

    • If yes, contact Huawei technical support.
    • If no, go to 2.

  2. Perform operations based on the displayed cause and suggestion. Delete the periodic snapshot policy again and check whether the policy is deleted.

    NOTE:

    If the communication is abnormal or the snas_cm process is abnormal, restart the snas_cm process.

    Use PuTTY to log in to the node where the DeviceManager management IP address resides as the omuser user. Run the su command and enter the root user's password to switch to the root user. In the CLI, run the kill -9 `ps -ef|grep snas_cm|grep -v grep|awk '{print $2}'` command. The snas_cm process is restarted.

    • If yes, no further action is required.
    • If no, contact Huawei technical support.

A Periodic Snapshot Policy Fails to Be Queried

If a periodic snapshot policy fails to be queried, the policy information cannot be viewed.

Symptom

Log in to DeviceManager. When you query the general or specific information about a periodic snapshot policy, an error message is returned, such as The communication is abnormal or the system is busy or Inner error, as shown in Figure 5-21.

Figure 5-21  Symptom when querying a periodic snapshot policy failed

Possible Causes

  • Possible cause 1: The system is rectifying faults.
  • Possible cause 2: The snas_cm process is abnormal.
  • Possible cause 3: The ccdb_server process is abnormal.

Fault Diagnosis

Figure 5-22  Troubleshooting flowchart

Procedure

  • Possible cause 1: The system is rectifying faults.
    1. Check whether the MDS MAP and CA MAP statuses in /proc/monc_mdsmap and /proc/monc_camap are normal (see the command sketch below).

      • Use PuTTY to log in to the node where the OceanStor 9000 management IP address resides as the omuser user. Run the su command and enter the root user's password to switch to the root user. In the CLI, run the cat /proc/monc_mdsmap command. Figure 5-23 shows the query result. If State Normal appears in the result, the MDS MAP status is normal. Otherwise, the system is rectifying faults.
        Figure 5-23  Query result of /proc/monc_mdsmap status

      • Use PuTTY to log in to the node where the OceanStor 9000 management IP address resides as the omuser user. Run the su command and enter the root user's password to switch to the root user. In the CLI, run the cat /proc/monc_camap command. Figure 5-24 shows the query result. If State Normal appears in the result, the CA MAP status is normal. Otherwise, the system is rectifying faults.
        Figure 5-24  Query result of /proc/monc_camap status

    2. Check whether MDS MAP or CA MAP indicates that the system is rectifying faults.

      • If yes, wait until MDS MAP and CA MAP return to normal and then perform 3.
      • If no, go to Possible cause 2.

    3. Query a periodic snapshot policy again. Check whether the policy information can be viewed.
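
      A minimal sketch of the checks in step 1, run as the root user on the node where the OceanStor 9000 management IP address resides. It assumes, as described above, that the string State Normal appears in the output when the corresponding map is normal; adjust the grep pattern if the output format in your environment differs.

      # Check whether "State Normal" appears in the MDS MAP output (if not, the system may still be rectifying faults).
      cat /proc/monc_mdsmap | grep "State Normal"
      # Check the CA MAP in the same way.
      cat /proc/monc_camap | grep "State Normal"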

  • Possible cause 2: The snas_cm process is abnormal.
    1. Use PuTTY to log in to the node where the OceanStor 9000 management IP address resides as the omuser user. Run the su command and enter the root user's password to switch to the root user. In the CLI, run the ps -ef |grep snas_cm command to check whether the snas_cm process is abnormal.

      • If yes, run the kill -9 `ps -ef|grep snas_cm|grep -v grep|awk '{print $2}'` command to restart the snas_cm process.
      • If no, go to Possible cause 3.

    2. Query a periodic snapshot policy again. Check whether the policy information can be viewed.

  • Possible cause 3: The ccdb_server process is abnormal.
    1. Use PuTTY to log in as the omuser user to the node where the OceanStor 9000 management IP address resides. Run the su command and enter the root user's password to switch to the root user. In the CLI, run the MmlBatch 4004 "mon ccdbmap 0" command to query the ID of the primary CCDB node.

      Master Ccdb Id: 1 indicates that the ID of the primary CCDB node is 1.

      *****************CCDB MAP Start***************
      Epoch : 6
      Master Ccdb Id : 1
      RackName : h1
      Node Id : 2
      Role : 1
      State : 1
      RackName : h1
      Node Id : 1
      Role : 0
      State : 1
      RackName : h1
      Node Id : 3
      Role : 1
      State : 1
      *****************CCDB MAP End***************

    2. Run cat /proc/monc_nodemap to query the IP address of the primary CCDB node.

      Based on the primary CCDB node ID queried in 1, the IP address of the primary CCDB node can be obtained. In the following example, the IP address of the primary CCDB node is 192.168.100.14.

      NODE14:~ # cat /proc/monc_nodemap 
      ***************** Node Map *****************
      Node: NodeID(3), BackIp(0x61ca640f), BirthTime(6148640756791479442), DevName(192.168.100.15), ClusterID(snassnap11431590979863), NodeType(1), RegStatus(1), FaultTime(0), DelTime(0)
      Node: NodeID(2), BackIp(0x61ca6410), BirthTime(6148640610762061891), DevName(192.168.100.16), ClusterID(snassnap11431590979863), NodeType(1), RegStatus(1), FaultTime(0), DelTime(0)
      Node: NodeID(1), BackIp(0x61ca640e), BirthTime(6148640606467511425), DevName(192.168.100.14), ClusterID(snassnap11431590979863), NodeType(1), RegStatus(1), FaultTime(0), DelTime(0)
      ************** Local Node Info *************
      Node RegStat: 1, NodeId: 1, BirthTime: 6148640606467511425, ClusterId: snassnap11431590979863, DevName: 192.168.100.14, NodeType: 1, FaultTime: 0, DelTime: 0
      NIDStat: 1(1:normal), HbStop: 0(1:stop), Ntf: 0(0:done), NIDFlg: 0(0:normal 1:map_fault 2:detect_fault)

    3. Use PuTTY to log in to the primary CCDB node as the omuser user. Run the su command and enter the root user's password to switch to the root user. In the CLI, run the cat /proc/ccdb_statemap command to check whether the CCDB process status is normal.

      If result:0,status:2 is returned, the CCDB process status is normal.

      NODE15:/home/omuser # cat /proc/ccdb_statemap 
      result:0,status:2

      • If yes, go to 6.
      • If no, go to 4.

    4. Run ps -ef |grep ccdb_server to view the CCDB process.
    5. Restart the ccdb_server process.

      1. Run the following command to stop the CCDB process:

        /opt/huawei/deploy/bin/daemon -s /opt/huawei/snas_cluster/bin/ccdb_server

      2. Run the following command to start the CCDB process:

        /opt/huawei/deploy/bin/daemon /opt/huawei/snas_cluster/bin/ccdb_server

    6. Query a periodic snapshot policy again. Check whether the policy information can be viewed.

      • If yes, no further action is required.
      • If no, contact technical support.
