
OceanStor 9000 V5 7.0 Parts Replacement 05

Replacing a Storage Node

This section describes how to replace a P12X storage node. The procedure for replacing a P25X is the same.

Impact on the System

You must power off a storage node before replacing it. Exercise caution when performing this operation.

Prerequisites

  • The replacement storage node is ready.
  • The storage node that you want to replace has been located.
  • The cable connection positions of the storage node that you want to replace have been labeled on the cables.
  • You have obtained the OceanStor 9000 V5 Version Mapping and OceanStor 9000 V5 Node Firmware Upgrade Guide of the corresponding version.
    NOTE:
    You can log in to http://support.huawei.com/enterprise/, choose Support > Cloud Storage, and click the product model to go to the documentation page of the product. Click the Downloads tab, click the version that you need, and download the documents from the displayed download page.

Precautions

  • Remove and insert device components with even force. Excessive force may damage the appearance or connectors of a component.
  • Ensure that both power modules of the storage node have been disconnected.

Tools and Materials

  • Flat-head screwdrivers
  • Phillips screwdrivers
  • Electrostatic discharge (ESD) gloves
  • ESD wrist straps
  • ESD bags
  • Labels

Procedure

  1. Wear an ESD wrist strap or ESD gloves.
  2. Use the KVM to log in to the operating system of the node, enter the user name root and its password (default: Root@storage), and run the poweroff command to power off the node (the session is sketched below).

    When the KVM shows no output, the power-off is complete.
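
    A minimal sketch of the session (the login prompt is illustrative; the password is the default given above):

    localhost login: root
    Password: Root@storage
    # poweroff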

  3. Remove all power cables and external signal cables.
  4. Remove the storage node from the enclosure.
    1. Loosen the captive screws on the storage node panel using a screwdriver, as shown in step 1 in Figure 3-252.

      Figure 3-252  Pulling out a storage node

    2. Pull out the storage node along the guide rails away from the cabinet, as shown in step 2 in Figure 3-252.
    3. Lever the latches on both sides upwards and remove the storage node in the direction of the arrow by holding the bottom of the storage node, as shown in Figure 3-253.

      Figure 3-253  Removing the storage node from the holding rails

  5. Place the removed storage node on an ESD table.
  6. Take out the replacement storage node and place it on an ESD table.
  7. Remove all system disk modules, service disk modules, DIMMs, PCIe cards, and power modules from the faulty storage node and install them on the replacement storage node.
    1. Remove filler panels of system disk modules, service disk modules, DIMMs, PCIe cards, and power modules from the replacement storage node and put them into ESD bags.
    2. Use labels to record the slots of all system disk modules and service disk modules removed from the faulty storage node.
    3. Remove all system disk modules, service disk modules, PCIe cards, and power modules from the faulty storage node and install them on the replacement storage node.

      Insert the system and service disk modules into the same slots in the replacement storage node as the ones they occupied in the faulty storage node. Otherwise, system services may be affected.

    4. If users have installed extra DIMMs beyond the standard configuration (two for the P12X and C36X, three for the P25X and P36X), install them in the replacement storage node in the same slots that they occupied in the faulty storage node.
    5. Insert the filler panels of system disk modules, service disk modules, DIMMs, PCIe cards, and power modules that were removed from the replacement storage node into the faulty storage node in sequence.
  8. Put the storage node into the cabinet.
    1. Pull out the inner rails as far as they will go, as shown in Figure 3-254.

      Figure 3-254  Pulling out an inner rail

    2. Align the screws on the storage node with the notches on the inner guide rails, push the storage node inwards until you hear a click, and ensure that the latches eject and completely block the screws to affix the storage node to the inner guide rails, as shown in Figure 3-255.

      Figure 3-255  Installing a storage node

    3. Press the release buttons on both sides and push the storage node into the cabinet along the holding rails, as shown in steps 1 and 2 in Figure 3-256.

      Figure 3-256  Pushing the storage node into the cabinet along the holding rails

    4. Optional: Tighten the captive screws on the mounting ears to secure the storage node, as shown in Figure 3-257.

      Figure 3-257  Securing the storage node

  9. Connect the storage node to the power sockets and the KVM according to the cable connection labels. Then power on the storage node.
  10. Log in to the storage node using the KVM.
  11. Clear NVDIMM data.
    1. When the screen shown in Figure 3-258 is displayed during the system startup, press Delete.

      Figure 3-258  BIOS startup page

      NOTE:
      If a dialog box that informs you of entering the password is displayed, enter the BIOS password (default: Admin@9000).

    2. Choose Security > Clear NvDimm Flash and press Enter, as shown in Figure 3-259.

      Figure 3-259  Choosing Clear NvDimm Flash

    3. In the Clear NvDimm Flash dialog box, select Yes and press Enter, as shown in Figure 3-260.

      Figure 3-260  Starting to clear NVDIMM data

    4. If the Clear NvDimm flash success! message is displayed, data in the NVDIMM is cleared, as shown in Figure 3-261.

      Figure 3-261  NVDIMM data cleared

    5. Press Enter and then F10. In the dialog box that is displayed, select Yes and press Enter to save the configuration and restart the storage node, as shown in Figure 3-262.

      Figure 3-262  Saving the configuration and restarting the storage node

  12. Import the original RAID information.
    1. Configure the UEFI mode.

      When the screen shown in Figure 3-263 is displayed during the system startup, press F11.

      Figure 3-263  BIOS startup page
      NOTE:
      If a dialog box that informs you of entering the password is displayed, enter the BIOS password (default: Admin@9000).

      The BIOS interface of the RAID controller card is displayed, as shown in Figure 3-264. Select Setup Utility and press Enter.

      Figure 3-264  BIOS interface of the RAID controller card

      On the Boot page, set Boot Type to UEFI Boot, as shown in Figure 3-265.

      Figure 3-265  Configuring the UEFI mode

      Press F10 to save the configuration. In the confirmation dialog box that is displayed, select Yes and press Enter. The system restarts.

    2. Log in to the management interface.

      When the screen shown in Figure 3-266 is displayed during the system startup, press F11.

      Figure 3-266  BIOS startup page
      NOTE:
      If a dialog box that informs you of entering the password is displayed, enter the BIOS password (default: Admin@9000).

      The BIOS interface of the RAID controller card is displayed. Select Device Manager and press Enter, as shown in Figure 3-267.

      Figure 3-267  BIOS interface of the RAID controller card

      The screen shown in Figure 3-268 is displayed. Select the LSI SAS3008IR controller to be operated and press Enter.

      Figure 3-268  Device Manager page

      The screen shown in Figure 3-269 is displayed. Press Enter.

      Figure 3-269  LSI SAS3008IR

      The main interface shown in Figure 3-270 is displayed.

      Figure 3-270  Main interface of LSI SAS3008IR

    3. Import external configurations.

      On the main interface of the RAID controller card, select Controller Management and press Enter. Select Manage Foreign Configuration and press Enter. The external configuration management page is displayed, as shown in Figure 3-271.

      Figure 3-271  Managing external configurations

      Select Select Foreign Configuration and press Enter. In the displayed list, select the external configuration to be imported and press Enter. Select View Foreign Configuration and press Enter. The page for viewing external configurations is displayed, as shown in Figure 3-272.

      Figure 3-272  Viewing external configurations

      Select Import Foreign Configuration and press Enter. The operation confirmation page is displayed, as shown in Figure 3-273.

      Figure 3-273  Confirming the operation

      Select Confirm and press Enter. Use the arrow keys to select Yes and press Enter.

      When Operation completed successfully is displayed, press Esc. The configuration is complete.

  13. Check RAID card configurations.

    To configure an SR130 (LSI3008) RAID controller card for a storage node, perform the following steps (an example command session follows these steps):

    1. Use the KVM to log in to the operating system of a storage node as user root.
    2. Run the cd /opt/driver/lsisas-mpt3sas-driver; ./lsiutil_operateioc_ncqswitch.sh query command to check whether NCQ is Enabled.

      • If yes, go to 13.d.
      • If no, go to 13.c.

    3. Run the ./lsiutil_operateioc_ncqswitch.sh open command to enable NCQ and check whether the message open ioc ncq switch success, it will active after host reset. is displayed.

      • If yes, go to 13.d.
      • If no, preserve the site, record the output, and contact technical service engineers.

    4. Run the cd /opt/driver/lsisas-mpt3sas-driver; ./lsiutil_operateioc_writecache.sh query command to check whether cache is Disabled.

      • If yes, go to 14.
      • If no, go to 13.e.

    5. Run the ./lsiutil_operateioc_writecache.sh close command to disable the cache and check whether the message close ioc write cache success, it will active after host reset. is displayed.

      • If yes, go to 14.
      • If no, preserve the site, record the output, and contact technical service engineers.
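
    The full check-and-fix session might look like the following (the commands and success messages are the ones documented above; the exact wording of the query output is an assumption):

      cd /opt/driver/lsisas-mpt3sas-driver
      ./lsiutil_operateioc_ncqswitch.sh query      # confirm that NCQ is Enabled
      ./lsiutil_operateioc_ncqswitch.sh open       # only if the query shows NCQ disabled
      # expected output: open ioc ncq switch success, it will active after host reset.
      ./lsiutil_operateioc_writecache.sh query     # confirm that the cache is Disabled
      ./lsiutil_operateioc_writecache.sh close     # only if the query shows the cache enabled
      # expected output: close ioc write cache success, it will active after host reset.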

  14. Configure an IP address for the Mgmt port of the storage node. For details, see Appendix D Basic System Operations > Configuring an IP Address for the Mgmt Port on a P12X/P25X/P36X/C36X in OceanStor 9000 V5 Software Installation Guide.
  15. Run the following commands to check the storage node status (a scripted variant is sketched after the sample output).

    cd /opt/product/deploy/script

    ./envcheck.py

    The normal self-check result is as follows:
    [2014-11-05 16:45:02,491][INFO    ][module:envcheck, line:0620] 
    ********************************************************************************
    Check Item: SNAS Environment Check Summary
    Check Pass:True
    Check Cmd :
    Check Info:
    kernel version: OK
    nvdimm driver: OK
    TOE driver: OK
    HardRaid status: OK
    Firewall status: OK
    Command Output:
    ********************************************************************************
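
    A scripted variant of this check (a sketch only; it keys off the Check Pass:True line in the sample output above because the script's exit code is not documented here):

    cd /opt/product/deploy/script
    ./envcheck.py | tee /tmp/envcheck.log
    grep -q "Check Pass:True" /tmp/envcheck.log && echo "Self-check passed" || echo "Self-check FAILED; review /tmp/envcheck.log"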
    

  16. Check whether the network ports on the storage node are named SLOT1-0, SLOT1-1, SLOT2-0, and SLOT2-1.
    1. Run the ifconfig -a command on the KVM to check whether the network ports on the storage node are named SLOT1-0, SLOT1-1, SLOT2-0, and SLOT2-1 (see the example after this step).

      • If yes, go to 17.
      • If no, go to 16.b.

    2. Run the reboot command to restart the storage node so that the new PCIe cards can be identified.
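
      A quick way to list only the port names (a sketch that assumes standard Linux tools are available on the node):

      ifconfig -a | grep -oE "SLOT[0-9]-[0-9]" | sort -u
      # Expected output: SLOT1-0, SLOT1-1, SLOT2-0, SLOT2-1 (one per line)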
  17. Connect the other signal cables to the storage node according to the cable connection labels.
  18. On DeviceManager, check the status of the storage node.
    1. Start the browser on the maintenance terminal and log in to DeviceManager.
    2. In the Alarms area on the home page, check whether a new alarm is reported.

      • If yes, clear the alarm according to the alarm help.
      • If no, go to 18.c.

    3. On the right of the home page, click System to go to the System page.
    4. Click the Device View tab to go to the Device View page.
    5. In Nodes, check the status of the node that is newly added to the cluster.

      • Health Status: Normal
      • Running Status: Online

  19. Ensure that the node version is the same as the version recorded in the OceanStor 9000 V5 Version Mapping. For details about how to query and upgrade the node version, see the PANGEA Firmware Upgrade Guide in the OceanStor 9000 V5 Node Firmware Upgrade Guide.
  20. Inspect the system using SmartKit. Choose Home > Storage > Routine Maintenance > More > Inspection to check the system status. For details, see the OceanStor 9000 V5 Routine Maintenance.

    • If all inspection items pass, the inspection is successful.
    • If any inspection item fails, the inspection fails. Rectify the faults by taking the recommended actions in the inspection reports, and then perform the inspection again. If the inspection still fails, contact Huawei technical support.

  21. Attach the nameplate of the node to the chassis.

Follow-up Procedure

After replacing the storage node, label the replaced storage node to facilitate subsequent operations.
