OceanStor 9000 V300R006C10 Parts Replacement 06

Replacing a Storage Node

This section describes how to replace a P12E node. The procedure for replacing a P25E node is the same.

Impact on the System

You must power off a storage node before replacing it. Exercise caution when performing this operation.

Prerequisites

  • The replacement storage node is ready.
  • The storage node that you want to replace has been located.
  • The cable connection positions of the storage node that you want to replace have been labeled on the cables.
  • You have obtained the OceanStor 9000 Version Mapping and OceanStor 9000 Node Firmware Upgrade Guide of the corresponding version.
    NOTE:
    You can log in to http://support.huawei.com/enterprise/, choose Support > Cloud Storage, and click the product model to go to the documentation page of the product. Then click the Downloads tab, click the version that you need, and download the documents from the download page that is displayed.

Precautions

  • Remove and insert a device component with even force. Excessive force may damage the component's connectors or appearance.
  • Ensure that both power modules of the storage node have been disconnected.

Tools and Materials

  • Flat-head screwdrivers
  • Phillips screwdrivers
  • Electrostatic discharge (ESD) gloves
  • ESD wrist straps
  • ESD bags
  • Labels

Procedure

  1. Wear an ESD wrist strap or ESD gloves.
  2. Use the KVM to log in to the operating system of the node, enter the user name root and its password (default: Root@storage), and run the poweroff command to power off the node.

    Wait until the KVM produces no more output, which indicates that the power-off is complete.
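
    The console interaction is, in essence, as follows (a minimal sketch, assuming the default root credentials above and a standard Linux shell on the node):

    # On the KVM console, logged in as root:
    poweroff
    # The console stops producing output once the node has powered off.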

  3. Remove all power cables and external signal cables.
  4. Remove the storage node from the enclosure.
    1. Slide the storage node out of the cabinet, as shown in step 1 in Figure 2-204.
    2. Press the baffle plate and slide the storage node out of the enclosure, as shown in step 2 in Figure 2-204.

    Figure 2-204  Removing a storage node

  5. Place the removed storage node on an ESD table.
  6. Take out the replacement storage node and place it on an ESD table.
  7. Remove all system disk modules, service disk modules, NVDIMMs, and PCIe cards from the faulty storage node and install them in the replacement storage node.
    1. Remove all system disk modules, service disk module filler panels, NVDIMMs, and PCIe card filler panels from the replacement storage node and put them into ESD bags.
    2. Use labels to record the slots of all system disk modules and service disk modules removed from the faulty storage node.
    3. Remove all system disk modules, service disk modules, NVDIMMs, and PCIe cards from the faulty storage node and install them in the replacement storage node.

      • Insert the system disk modules and service disk modules into the same slots in the replacement storage node as they occupied in the faulty storage node. Otherwise, system services may be affected.
      • Any DIMMs beyond the standard configuration (two for the P12E and C36E, three for the P25E and P36E) that users have additionally installed must also be installed in the replacement storage node in their original slot positions.

    4. Insert the system disk modules, service disk module filler panels, NVDIMMs, and PCIe card filler panels removed from the replacement storage node into the faulty storage node in sequence.
  8. Put the storage node into the cabinet.
    1. Align the securing holes in the inner rails with the positioning pins on the storage node, and secure the inner rails to the storage node, as shown in step 1 in Figure 2-205.
    2. Align the inner rails with the holding rails, and push the storage node into the holding rails, as shown in step 2 in Figure 2-205.
    3. Ensure that the storage node is installed horizontally and secured with screws.

    Figure 2-205  Installing a storage node

  9. Connect the storage node to the power sockets and the KVM according to the cable connection labels. Then power on the storage node.
  10. Log in to the storage node using the KVM.
  11. Clear NVDIMM data.
    1. Press Delete repeatedly when the screen shown in the following figure is displayed during server startup.

      Figure 2-206  BIOS startup screen

    2. Enter a BIOS password as prompted. The screen for setting the BIOS is displayed.

      NOTE:

      The default BIOS password is Huawei12#$ for the French or American keyboard and is Huawei12£$ for the English keyboard.

      To ensure system security, you are advised to change the default BIOS password after the first login.

    3. Choose Security > Clear NvDimm Flash.

      Figure 2-207  Choosing Clear NvDimm Flash

    4. In the Clear NvDimm Flash? dialog box, select Yes and press Enter.

      Figure 2-208  Starting to clear NVDIMM data

    5. Wait 20 seconds. Check whether the displayed dialog box states "Clear NvDimm flash success. Please reboot system!"

      • If yes, go to 11.f.
      • If no, contact technical support.
      Figure 2-209  Clearing NVDIMM data

    6. Press Enter and then F10. In the dialog box that is displayed, select Yes to save the configuration and restart the storage node.

      Figure 2-210  Saving the configuration and restarting the storage node

  12. Activate the RAID group (system disks) of the storage node.
    1. When the following information is displayed during startup of the storage node, press Ctrl+C to go to the RAID basic input/output system (BIOS) configuration interface.

      Press Ctrl-C to start LSI Corp Configuration Utility...

    2. Press Enter and choose RAID Properties > View Existing Volume > Manage Volume > Activate Volume, as shown in Figure 2-211.

      Figure 2-211  Going to the Activate Volume page

    3. Press Y to activate the RAID group, as shown in Figure 2-212.

      Figure 2-212  Activating the RAID group

    4. Choose SAS Topology > LSI. Press Alt+B to set Device Info to Boot, as shown in Figure 2-213.

      Figure 2-213  Setting Boot options

    5. Press Esc and select Save changes then exit this menu. Press Esc twice and then select Exit the Configuration Utility and Reboot.
  13. Configure an IP address for the Mgmt management network port of the storage node.
    1. Run the reboot command on the KVM to restart the storage node.
    2. Press Delete repeatedly when the screen shown in the following figure is displayed during server startup.

      Figure 2-214  BIOS startup screen

    3. Enter a BIOS password as prompted. The screen for setting the BIOS is displayed.

      NOTE:

      The default BIOS password is Huawei12#$ for the French or American keyboard and is Huawei12£$ for the English keyboard.

      To ensure system security, you are advised to change the default BIOS password after the first login.

    4. Choose Advanced > IPMI iBMC Configuration and press Enter.

      The IPMI iBMC Configuration screen is displayed, as shown in Figure 2-215.

      Figure 2-215  IPMI iBMC Configuration screen

    5. Select iBMC Configuration and press Enter.

      The iBMC Configuration screen is displayed, showing information about the IP address of the Mgmt management network port, as shown in Figure 2-216.

      Figure 2-216  iBMC Configuration screen

    6. Select IP Address under IPV4 Configuration and press Enter. On the configuration screen, set the IPv4 address of the Mgmt management network port.
    7. Set the other parameters under IPV4 Configuration and IPV6 Configuration for the Mgmt management network port in the same way.
    8. Press F10 and select Yes to save the settings and exit.

      After you change the IP address of the Mgmt port and press F10 to save and exit, the new IP address takes effect and the KVM disconnects from the storage node. Use the new IP address to log in again.
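
      To confirm that the new address is reachable before logging in again, you can ping it from the maintenance terminal (a sketch; 192.168.10.10 is a hypothetical address, substitute the Mgmt IP address you configured):

      # Run on the maintenance terminal; replace the address with the configured Mgmt IP.
      ping 192.168.10.10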

  14. Run the following commands to check the storage node status.

    cd /opt/product/deploy/script

    ./envcheck.py

    The normal self-check result is as follows:
    [2014-11-05 16:45:02,491][INFO    ][module:envcheck, line:0620] 
    ********************************************************************************
    Check Item: SNAS Environment Check Summary
    Check Pass:True
    Check Cmd :
    Check Info:
    kernel version: OK
    nvdimm driver: OK
    TOE driver: OK
    HardRaid status: OK
    Firewall status: OK
    Command Output:
    ********************************************************************************
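
    If the self-check output is long, you can filter it for the key flags (a sketch; it assumes the script prints its summary to standard output, as shown above):

    ./envcheck.py | grep -E 'Check Pass|OK'

    Every check item should report OK and the summary should report Check Pass:True; otherwise, resolve the problem before proceeding.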
    

  15. Check whether the network ports on the storage node are named SLOT4-0, SLOT4-1, SLOT5-0, and SLOT5-1.
    1. Run the ifconfig -a command on the KVM to check whether the network ports on the storage node are named SLOT4-0, SLOT4-1, SLOT5-0, and SLOT5-1. (A quick filter is sketched after this step.)

      • If yes, go to 16.
      • If no, go to 15.b.

    2. Run the reboot command to restart the storage node so that the new PCIe cards can be identified. Then perform 15.a again.
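
      For the quick check mentioned in 15.a, you can filter the interface list for the expected names (a sketch; it assumes grep is available in the node OS):

      ifconfig -a | grep -E 'SLOT[45]-[01]'

      All four port names should appear in the output.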
  16. Connect the other signal cables to the storage node according to the cable connection labels.
  17. On DeviceManager, check the status of the storage node.
    1. Start the browser on the maintenance terminal and log in to DeviceManager.
    2. In the Alarms area on the home page, check whether a new alarm is reported.

      • If yes, clear the alarm according to the alarm help.
      • If no, go to 17.c.

    3. On the right of the home page, click System to go to the System page.
    4. Click the Device View tab to go to the Device View page.
    5. In Nodes, check the status of the node that is newly added to the cluster.

      • Health Status: Normal
      • Running Status: Online

  18. Ensure that the node version is the same as the version recorded in the OceanStor 9000 Version Mapping. For details about how to query and upgrade the node version, see the PANGEA Firmware Upgrade Guide in the OceanStor 9000 Node Firmware Upgrade Guide.
  19. Inspect the system using SmartKit. Choose Home > Storage > Routine Maintenance > More > Inspection to check the system status. For details, see the OceanStor 9000 Routine Maintenance.

    • If all inspection items pass, the inspection is successful.
    • If any inspection items fail, the inspection has failed. Rectify the faults by taking the recommended actions in the inspection reports and then perform the inspection again. If the inspection still fails, contact Huawei technical support.

Follow-up Procedure

After replacing the storage node, label the replaced storage node to facilitate subsequent operations.

Updated: 2019-02-25

Document ID: EDOC1000162172