Administrator Guide 15

OceanStor 5300 V3, 5500 V3, 5600 V3, 5800 V3, and 6800 V3 Storage System V300R003

Routine maintenance activities are the most common operations performed on the storage device. They include powering the storage device on or off, managing users, modifying basic parameters of the storage device, and managing hardware components. This document is intended for system administrators who carry out routine maintenance activities, monitor the storage device, and rectify common device faults.

Reclaiming Space (AIX)

This section describes how to reclaim the storage space used by an AIX host, using either full reclamation or partial reclamation. A condensed example command sequence is provided after each procedure.

Full Reclamation

  1. Delete a disk device.
    1. Run upadm show vlun and lsdev -Cc disk to show all LUNs and disks on the host.
    2. Run rmdev -dl hdiskX to delete the aggregation device composed of the disks to be reclaimed. hdiskX represents the aggregation device.
    3. Run upadm show path and compare the output with the result obtained in 1.a to check whether the aggregation device has been deleted.
    4. Run lsdev -Cc disk and lsdev -Cc disk | wc -l and compare the output with the result obtained in 1.a to check whether the device path file has been deleted.
  2. Reclaim the World Wide Name (WWN).
    1. Run show mapping_view general to obtain the host group ID in the mapping view to be reclaimed. Specify the mapping view using the mapping_view_id parameter.
    2. Obtain the information about hosts and initiators in the to-be-reclaimed host group.

      1. Run show host_group host to show the hosts that have been added to the to-be-reclaimed host group. Specify the to-be-reclaimed host group using the host_group_id parameter.
      2. Run show initiator to view the WWNs of the host HBAs that have been added to the to-be-reclaimed host group. Specify the host in the to-be-reclaimed host group using the host_id parameter.

    3. Run remove host initiator initiator_type=FC to remove the WWN. Specify the to-be-reclaimed WWN using the wwn parameter.
    4. Run show initiator isfree=yes initiator_type=FC to check whether the WWN has been successfully removed.

      If the removed WWN is displayed in the command output, it is now a free initiator and the removal was successful.

        admin:/>show initiator isfree=yes initiator_type=FC

          WWN             Running Status  Free  Alias  Host ID  Multipath Type
          --------------  --------------  ----  -----  -------  --------------
          100000000000*   Online          Yes   --     --       Default

    5. In DeviceManager, view the port information about the host.
  3. Run upadm show path to check whether only the paths of the to-be-reclaimed disks are Failed. If other paths are Failed, find out the cause and solve the problem.

    NOTE:
    Wait at least 15 minutes and confirm that no errors exist on disks of other hosts. Then proceed to the next step.

  4. Delete a mapping view.
    1. Run show mapping_view general to obtain the IDs of the LUN group and host group in the mapping view to be reclaimed. Specify the mapping view using the mapping_view_id parameter.
    2. Run remove mapping_view lun_group to remove the LUN group from the to-be-reclaimed mapping view. Specify the to-be-reclaimed mapping view and LUN group using the mapping_view_id and lun_group_id parameters.
    3. Run remove mapping_view port_group to remove the port group from the to-be-reclaimed mapping view. Specify the to-be-reclaimed mapping view and port group using the mapping_view_id and port_group_id parameters.
    4. Run remove mapping_view host_group to remove the host group from the to-be-reclaimed mapping view. Specify the to-be-reclaimed mapping view and host group using the mapping_view_id and host_group_id parameters.
    5. Run delete mapping_view to delete the mapping view. Specify the to-be-reclaimed mapping view using the mapping_view_id parameter.
    6. Run show mapping_view general to check whether the mapping view has been deleted.
      The deleted mapping view should not exist in the command output.
    7. In DeviceManager, view all mapping views. The deleted mapping view should not exist.
  5. Delete a LUN group.
    1. Run remove lun_group lun to remove all LUNs in the LUN group. Specify the to-be-reclaimed LUN group and the to-be-removed LUN using the lun_group_id and lun_id_list parameters.
    2. Run delete lun_group to delete a LUN group. Specify the to-be-reclaimed LUN group using the lun_group_id parameter.
  6. Delete a port group.
    1. Run remove port_group port to remove all ports in the port group. Specify the to-be-reclaimed port group and to-be-removed ports using the port_group_id and port_id_list parameters.
    2. Run delete port_group to delete a port group. Specify the to-be-reclaimed port group using the port_group_id parameter.
  7. Delete a host group.
    1. Run remove host_group host to remove all hosts in the host group. Specify the to-be-reclaimed host group and to-be-removed hosts using the host_group_id and host_id_list parameters.
    2. Run delete host_group to delete a host group. Specify the to-be-reclaimed host group using the host_group_id parameter.
    3. Run remove host initiator initiator_type=FC to remove all initiators of the to-be-reclaimed host. Specify the to-be-removed initiator using the wwn parameter.
    4. Run delete host to delete a to-be-reclaimed host. Specify the to-be-reclaimed host using the host_id parameter.
  8. Uninstall UltraPath.
    1. Run lslpp -L | grep -i UltraPath to view the version of the installed UltraPath.
    2. Run installp -u program_name to uninstall UltraPath, where program_name is the name of UltraPath shown in 8.a.
    3. Run lslpp -L | grep -i UltraPath. If the command output does not contain the UltraPath shown in 8.a, the uninstallation is successful.
  9. Run shutdown -Fr to restart the host.
  10. On the switch, delete zone configurations.
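The following is a minimal, consolidated sketch of the Full Reclamation command sequence described above. All IDs, the hdisk name, the WWN, and the UltraPath package name are placeholders for illustration only; lines beginning with # are annotations rather than commands, and the exact parameter syntax of the storage CLI commands should be verified against the CLI help on your system.

  # Step 1: on the AIX host, identify and remove the aggregation device (hdisk3 is a placeholder).
  upadm show vlun
  lsdev -Cc disk
  rmdev -dl hdisk3
  upadm show path
  lsdev -Cc disk | wc -l

  # Step 2: on the storage CLI, free the initiator WWN of the host
  # (mapping view 1, host group 0, host 1, and the WWN are placeholders).
  admin:/>show mapping_view general mapping_view_id=1
  admin:/>show host_group host host_group_id=0
  admin:/>show initiator host_id=1
  admin:/>remove host initiator initiator_type=FC wwn=100000000000****
  admin:/>show initiator isfree=yes initiator_type=FC

  # Step 3: on the host, confirm that only the paths of the reclaimed disks are Failed.
  upadm show path

  # Steps 4 to 7: delete the mapping view and its LUN group, port group, and host group.
  admin:/>remove mapping_view lun_group mapping_view_id=1 lun_group_id=1
  admin:/>remove mapping_view port_group mapping_view_id=1 port_group_id=1
  admin:/>remove mapping_view host_group mapping_view_id=1 host_group_id=0
  admin:/>delete mapping_view mapping_view_id=1
  admin:/>show mapping_view general
  admin:/>remove lun_group lun lun_group_id=1 lun_id_list=1
  admin:/>delete lun_group lun_group_id=1
  admin:/>remove port_group port port_group_id=1 port_id_list=1
  admin:/>delete port_group port_group_id=1
  admin:/>remove host_group host host_group_id=0 host_id_list=1
  admin:/>delete host_group host_group_id=0
  admin:/>remove host initiator initiator_type=FC wwn=100000000000****
  admin:/>delete host host_id=1

  # Steps 8 and 9: on the AIX host, uninstall UltraPath and restart.
  lslpp -L | grep -i UltraPath
  installp -u <UltraPath_package_name>
  lslpp -L | grep -i UltraPath
  shutdown -Fr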

Partial Reclamation

  1. Delete a disk device.
    1. Run upadm show vlun and lsdev -Cc disk to show all LUNs and disks on the host.
    2. Run rmdev -dl hdiskX to delete the aggregation device composed of the disks to be reclaimed. hdiskX represents the aggregation device.
    3. Run upadm show path and compare the output with the result obtained in 1.a to check whether the aggregation device has been deleted.
    4. Run lsdev -Cc disk and lsdev -Cc disk | wc -l and compare the output with the result obtained in 1.a to check whether the device path file has been deleted.
  2. Remove the to-be-reclaimed LUN from the owning LUN group.
    1. Run show mapping_view general to obtain the details about the to-be-reclaimed mapping view. Specify the mapping view using the mapping_view_id parameter.
    2. Run remove lun_group lun to remove the to-be-reclaimed LUN from the LUN group. Specify the LUN group and to-be-reclaimed LUN using the lun_group_id and lun_id_list parameters.
    3. Run show lun_group lun to check whether the to-be-reclaimed LUN has been removed from the LUN group. Specify the LUN group where the to-be-reclaimed LUN resided using the lun_group_id parameter.

      The removed LUN should not appear in the command output.

      admin:/>show lun_group lun lun_group_id=LGID

        ID  Name  Pool ID  Capacity  Health Status  Running Status  Type   WWN
        --  ----  -------  --------  -------------  --------------  -----  ------------------------------
        1   LUN1  0        1.000TB   Normal         Online          Thick  60022a11000******************
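Similarly, the following is a minimal sketch of the Partial Reclamation command sequence described above. The hdisk name, mapping view ID, LUN group ID, and LUN ID are placeholders for illustration only; lines beginning with # are annotations rather than commands, and the exact parameter syntax should be verified against the CLI help on your system.

  # Step 1: on the AIX host, remove only the disk device backed by the to-be-reclaimed LUN
  # (hdisk5 is a placeholder).
  upadm show vlun
  lsdev -Cc disk
  rmdev -dl hdisk5
  upadm show path
  lsdev -Cc disk | wc -l

  # Step 2: on the storage CLI, remove only that LUN from its LUN group
  # (mapping view 1, LUN group 1, and LUN 2 are placeholders).
  admin:/>show mapping_view general mapping_view_id=1
  admin:/>remove lun_group lun lun_group_id=1 lun_id_list=2
  admin:/>show lun_group lun lun_group_id=1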
