FusionStorage 8.0.0 Block Storage Parts Replacement 04

Replacing a Metadata Disk

NOTE:

This chapter applies to TaiShan 2280 V2 12-slot nodes, TaiShan 2280 V2 25-slot nodes, TaiShan 5280 V2 36-slot nodes, 2288H V5 12-slot nodes, and 5288 V5 36-slot nodes.

Impact on the System

If the system has heavy service loads, the replacement will increase the service response time. Perform the replacement during off-peak hours.

Prerequisites

  • A spare metadata disk is ready.
  • The metadata disk to be replaced has been located.
  • The storage pool to which the metadata disk to be replaced belongs is normal, and no data reconstruction task is running.
NOTE:

For details about the slot numbers of metadata disks, see Slot Numbers.

Precautions

  • Remove metadata disks in sequence. Remove one metadata disk completely before removing another.
  • Insert metadata disks in sequence. Insert one metadata disk completely before inserting another.
  • Wait at least 30 seconds between removal and insertion actions.

Tools and Materials

  • ESD gloves
  • ESD wrist straps
  • ESD bags
  • Labels

Procedure

  1. Check the metadata disk type.

    1. Log in to DeviceManager, and choose Cluster > Control Cluster.
    2. In the control cluster area, check the value of the metadata disk storage location to determine the metadata disk type.

  2. Log in to the primary management node as user dsware, and run the sh /opt/dsware/client/bin/dswareTool.sh --op setServerStorageMode -ip Management IP address of the faulty node -mode 1 command to switch the faulty node to maintenance mode. To run this command, enter the name and password of CLI super administrator account admin as prompted.
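
    For example, assuming the faulty node's management IP address is 192.168.10.11 (an illustrative value), the command would be:

    # Illustrative value; replace with the value from your environment.
    sh /opt/dsware/client/bin/dswareTool.sh --op setServerStorageMode -ip 192.168.10.11 -mode 1
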
  3. If the faulty device does not need to be powered off, replace the faulty metadata disk directly.
  4. If the faulty metadata disk is an NVMe SSD, log in to the primary management node as user dsware and run the following command to logically power it off. To run this command, enter the name and password of CLI super administrator account admin as prompted:

    sh /opt/dsware/client/bin/dswareTool.sh --op poweroffNvmeDisk -ip Management IP address of the faulty node -slotNo Slot number of the faulty SSD -esn ESN of the faulty SSD
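
    For example, assuming the faulty node's management IP address is 192.168.10.11, the faulty SSD is in slot 2, and its ESN is 21023598 (all illustrative values), the command would be:

    # Illustrative values; replace with the values from your environment.
    sh /opt/dsware/client/bin/dswareTool.sh --op poweroffNvmeDisk -ip 192.168.10.11 -slotNo 2 -esn 21023598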

  5. If the faulty device needs to be powered off, perform the following operations:

    • FusionSphere scenario: Replace the faulty metadata disk by referring to Parts Replacement in the FusionSphere solution documentation.
    • ServerSAN scenario: Perform service protection measures based on the services running on the faulty device, power it off, and replace the faulty metadata disk.

  6. Remove the disk module.

    Record the slot in which each disk module resides. Each disk module must be reinstalled into the same slot after the replacement. Otherwise, services may be affected.

    1. Press the button that secures the disk module ejector lever, as shown in step 1 in Figure 13-1.

      The ejector lever pops out automatically.

      Figure 13-1 Removing a disk module
    2. Hold the ejector lever and pull the disk module out approximately 3 cm, as shown in step 2 in Figure 13-1.
    3. Wait at least 30 seconds until the disk stops spinning, and slowly pull out the disk module, as shown in step 3 in Figure 13-1.

  7. Place the removed metadata disk in an ESD bag.
  8. Take the spare metadata disk out of its ESD bag.
  9. Install the disk module.

    Install the disk module into the same slot from which the faulty disk module was removed. Otherwise, services may be affected.

    1. Raise the ejector lever and push the disk module along the guide rails into the slot until it stops, as shown in step 1 in Figure 13-2.
      Figure 13-2 Installing a disk module
    2. Lower the ejector lever until it fastens to the beam, completely inserting the disk module into the slot, as shown in step 2 in Figure 13-2.

  10. Perform post-processing on the node where the metadata disk is replaced.

    • If the control cluster and replication cluster share the metadata disk, perform the following operations:
      1. Restore the ZK.
        1. Log in to the primary management node as user dsware.
        2. Run the following command to perform post-processing on the node where the metadata disk is replaced. To run this command, enter the name and password of CLI super administrator account admin as prompted:

          sh /opt/dsware/client/bin/dswareTool.sh --op restoreControlNode -ip Management IP address of the node where the metadata disk is replaced -zkDiskSlot Slot number of the metadata disk -replaceZkDisk true

          In the preceding command, zkDiskSlot is an optional parameter. If the slot number of the metadata disk is changed, you need to set this parameter to the new slot number of the metadata disk.
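
          For example, assuming the node's management IP address is 192.168.10.11 and the replaced metadata disk remains in slot 0 (illustrative values), the command would be:

          # Illustrative values; replace with the values from your environment.
          sh /opt/dsware/client/bin/dswareTool.sh --op restoreControlNode -ip 192.168.10.11 -zkDiskSlot 0 -replaceZkDisk true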

      2. Restore the CCDB of the control cluster.
        1. Query the control cluster ID and process ID of the ccdb_server.

          sh /opt/dsware/client/bin/dswareTool.sh --op queryNodeProcessInfo

        2. Change the drive letter in the template.
          1. Open the template.

            vi /opt/dsware/client/conf/service/eds/serviceMediaConfig.xml

          2. Manually change the value of deviceName to the drive letter of the new metadata disk and the value of manageIp to the management IP address of the node where the new metadata disk resides.
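
            The following fragment is a minimal illustrative sketch. It assumes the template exposes deviceName and manageIp as simple elements; the actual structure and enclosing element names of serviceMediaConfig.xml may differ, and the values shown are placeholders:

            <!-- Illustrative sketch only: element nesting and values are assumptions, not taken from an actual template. -->
            <media>
              <deviceName>nvme0n1</deviceName>
              <manageIp>192.168.10.11</manageIp>
            </media>
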
        3. Run the disk replacement command.

          sh /opt/dsware/client/bin/dswareTool.sh --op ReplaceMediaDisk -controlClusterId Control cluster ID -processType ccdb_server -processId Process ID of the CCDB on the node where the metadata disk is replaced -nodeIps Management IP address of the node where the metadata disk is replaced
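
          For example, assuming a control cluster ID of 0, a ccdb_server process ID of 123, and a node management IP address of 192.168.10.11 (all illustrative values obtained from the preceding query), the command would be:

          # Illustrative values; replace with the values from your environment.
          sh /opt/dsware/client/bin/dswareTool.sh --op ReplaceMediaDisk -controlClusterId 0 -processType ccdb_server -processId 123 -nodeIps 192.168.10.11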

      3. Restore the CCDB of the replication cluster.
        1. Query the replication cluster ID.

          sh /opt/dsware/client/bin/dswareTool.sh --op drCmd -subOp queryControlCluster

        2. Query the process ID of the ccdb_server.

          sh /opt/dsware/client/bin/dswareTool.sh --op drCmd -subOp queryProcessInfo

        3. Change the drive letter in the template.
          1. Open the template.

            vi /opt/dsware/client/conf/service/dr/serviceMediaConfig.xml

          2. Manually change the value of deviceName to the drive letter of the new metadata disk and the value of manageIp to the management IP address of the node where the new metadata disk resides.
        4. Run the disk replacement command.

          sh /opt/dsware/client/bin/dswareTool.sh --op drCmd -subOp ReplaceMediaDisk -controlClusterId Replication cluster ID -processId Process ID of the CCDB on the node where the metadata disk is replaced -processType ccdb_server -nodeIps Management IP address of the node where the metadata disk is replaced
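
          For example, assuming a replication cluster ID of 1, a ccdb_server process ID of 456, and a node management IP address of 192.168.10.11 (all illustrative values obtained from the preceding queries), the command would be:

          # Illustrative values; replace with the values from your environment.
          sh /opt/dsware/client/bin/dswareTool.sh --op drCmd -subOp ReplaceMediaDisk -controlClusterId 1 -processId 456 -processType ccdb_server -nodeIps 192.168.10.11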

    • If the control cluster exclusively uses the metadata disk, perform the following operations:
      1. Restore the ZK.
        1. Log in to the primary management node as user dsware.
        2. Run the following command to perform post-processing on the node where the metadata disk is replaced. To run this command, enter the name and password of CLI super administrator account admin as prompted:

          sh /opt/dsware/client/bin/dswareTool.sh --op restoreControlNode -ip Management IP address of the node where the metadata disk is replaced -zkDiskSlot Slot number of the metadata disk -replaceZkDisk true

          In the preceding command, zkDiskSlot is an optional parameter. If the slot number of the metadata disk is changed, you need to set this parameter to the new slot number of the metadata disk.

      2. Restore the CCDB of the control cluster.
        1. Query the control cluster ID and process ID of the ccdb_server.

          sh /opt/dsware/client/bin/dswareTool.sh --op queryNodeProcessInfo

        2. Change the drive letter in the template.
          1. Open the template.

            vi /opt/dsware/client/conf/service/eds/serviceMediaConfig.xml

          2. Manually change the value of deviceName to the drive letter of the new metadata disk and the value of manageIp to the management IP address of the node where the new metadata disk resides.
        3. Run the disk replacement command.

          sh /opt/dsware/client/bin/dswareTool.sh --op ReplaceMediaDisk -controlClusterId Control cluster ID -processType ccdb_server -processId Process ID of the CCDB on the node where the metadata disk is replaced -nodeIps Management IP address of the node where the metadata disk is replaced

    • If the replication cluster exclusively uses the metadata disk, perform the following operations:
      1. Restore the CCDB of the replication cluster.
        1. Query the replication cluster ID.

          sh /opt/dsware/client/bin/dswareTool.sh --op drCmd -subOp queryControlCluster

        2. Query the process ID of the ccdb_server.

          sh /opt/dsware/client/bin/dswareTool.sh --op drCmd -subOp queryProcessInfo

        3. Change the drive letter in the template.
          1. Open the template.

            vi /opt/dsware/client/conf/service/dr/serviceMediaConfig.xml

          2. Manually change the value of deviceName to the drive letter of the new metadata disk and the value of manageIp to the management IP address of the node where the new metadata disk resides.
        4. Run the disk replacement command.

          sh /opt/dsware/client/bin/dswareTool.sh --op drCmd -subOp ReplaceMediaDisk -controlClusterId Replication cluster ID -processId Process ID of the CCDB on the node where the metadata disk is replaced -processType ccdb_server -nodeIps Management IP address of the node where the metadata disk is replaced

  11. Log in to the primary management node as user dsware, and run the sh /opt/dsware/client/bin/dswareTool.sh --op setServerStorageMode -ip Management IP address of the faulty node -mode 0 command to switch the node back to normal mode. To run this command, enter the name and password of CLI super administrator account admin as prompted.
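
    For example, using the same illustrative management IP address as in step 2, the command would be:

    # Illustrative value; replace with the value from your environment.
    sh /opt/dsware/client/bin/dswareTool.sh --op setServerStorageMode -ip 192.168.10.11 -mode 0
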
  12. Check the firmware version.

    1. Use the KVM to log in to the storage node as user root.
    2. Run the following command to check the firmware version (nvme0 is used as an example):

      hioadm updatefw -d nvme0

      [root@node0101 ~]# hioadm updatefw -d nvme0
      slot  version   activation
      1     3.10       
      2     3.10      current
      3     3.10 

      The version shown in the row marked current is the currently active firmware version. If it is earlier than 3.10, contact Huawei technical support.

  13. Check the system status.

    On SmartKit, choose Home > Storage > Routine Maintenance > More > Inspection and check the system status.
    • If all inspection items pass, the inspection is successful.
    • If any inspection item fails, the inspection fails. Rectify the faults by taking the recommended actions in the inspection report, and then perform the inspection again. If the inspection still fails, contact Huawei technical support.

    For details, see the FusionStorage Block Storage Administrator Guide.

Follow-Up Procedure

Label the replaced metadata disk to facilitate subsequent operations.

Updated: 2019-09-19

Document ID: EDOC1100081420