OceanStor UltraPath for vSphere 21.3.0 User Guide 03

This document describes the functions, features, installation, configuration, upgrade, uninstallation, maintenance, troubleshooting, and FAQs of OceanStor UltraPath for vSphere (UltraPath for vSphere), the multipathing software developed by Huawei Technologies Co., Ltd. (Huawei for short). It is intended to help users become fully familiar with UltraPath for vSphere and its use.
Principles and Functions

UltraPath provides powerful functions and features, ensuring secure, stable, and fast service operation. This section introduces the basic principles and functions of UltraPath.

Integrating UltraPath with Operating Systems

UltraPath is filter driver software that runs in the host kernel. It manages and processes disk creation/deletion and I/O delivery for the operating system.

  • Figure 1-3 shows the layer where the UltraPath driver resides in Windows, Linux, and Solaris.
    Figure 1-3 Layers where UltraPath resides in different operating systems

  • On the AIX and VMware ESXi platforms, UltraPath is implemented based on the multipathing framework of the operating system.
    • UltraPath for AIX is a kernel driver developed based on the MPIO of AIX operating systems.

      MPIO was introduced in AIX 5.2 TL04 and AIX 5.3 and is included in later versions. With MPIO, a storage system can connect to a host over multiple paths while being presented as a single device on the host. MPIO employs Path Control Modules (PCMs) to implement multipath management functions such as path addition and deletion, I/O path selection, path detection, and failover.

    • UltraPath for vSphere is a multipathing plug-in that fits into the Pluggable Storage Architecture (PSA) of VMware vSphere/ESXi platforms.

UltraPath Functions

  • Masking of Redundant LUNs

    In a redundant storage network, an application server without multipathing software detects a LUN on each path. A LUN mapped through multiple paths is therefore mistaken for two or more different LUNs, because each path reports the LUN to the application server separately.

    Take the dual-link direct-connection network on the left side of Figure 1-4 as an example. The storage system maps one LUN to the application server. Because two paths exist between the application server and the storage system and no multipathing software is installed, the application server detects two LUNs, LUN0 and LUN1, meaning a redundant LUN exists. The two detected LUNs are actually the same LUN on the storage system. Because of this identification error, different applications on the application server may write different data to the same location on the LUN, corrupting the data. To resolve this problem, the application server must identify the single real, available LUN.

    Figure 1-4 Masking the redundant LUN

    Because UltraPath can acquire the configuration information of the storage system, it knows exactly which LUN has been mapped to the application server. As shown on the right side of Figure 1-4, UltraPath installed on the application server masks redundant LUNs at the operating system driver layer and provides the application server with a single available LUN, the virtual LUN. The application server only needs to deliver read and write operations to UltraPath, which masks the redundant LUNs and writes data to the correct LUN without damaging other data.
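The masking idea can be illustrated with a minimal Python sketch (not UltraPath source code; the WWN values and function name are purely illustrative): devices reported on each path are grouped by LUN WWN, so the host sees one virtual LUN per real LUN.

```python
# Illustrative sketch: collapse per-path device reports into virtual LUNs,
# keyed by the LUN's WWN (its globally unique identifier).
from collections import defaultdict

def build_virtual_luns(detected_devices):
    """Group per-path devices that share a WWN into one virtual LUN.

    detected_devices: list of (wwn, path_id) tuples, one entry per
    device instance the OS reported on each physical path.
    Returns {wwn: [path_id, ...]}: one virtual LUN per real LUN.
    """
    virtual_luns = defaultdict(list)
    for wwn, path_id in detected_devices:
        virtual_luns[wwn].append(path_id)
    return dict(virtual_luns)

# Two paths both report the same LUN (same WWN): without masking the
# host would see two disks; with masking it sees one LUN with two paths.
devices = [("wwn-6001", "path0"), ("wwn-6001", "path1")]
print(build_virtual_luns(devices))  # {'wwn-6001': ['path0', 'path1']}
```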

  • Optimum Path Selection

    To ensure service continuity and stability, a storage system is generally equipped with two or more controllers for redundancy. Each LUN in a storage system has an owning controller, and no other controller can operate on the LUN, which prevents data corruption caused by controller conflicts. If an application server accesses a LUN through a non-owning controller, the access request is redirected to the owning controller. Therefore, I/O is fastest when the application server accesses the target LUN directly through its owning controller.

    In a multipath environment, the owning controller of a LUN on the storage array is called the prior controller of that LUN on the application server. I/O is therefore fastest when an application server running UltraPath accesses the LUN through the prior (owning) controller. The path to the prior controller is the optimum path.

    Because UltraPath can acquire owning-controller information, it automatically selects one or more optimum paths for data flows to achieve the highest I/O speed.

    As shown in Figure 1-5, the owning controller (prior controller) is controller A, and UltraPath selects the path to controller A as the optimum path.

    Figure 1-5 Optimum path selection by UltraPath
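The selection rule can be sketched in a few lines of Python (illustrative only, not UltraPath source; path and controller names are made up). Paths to the owning controller are preferred, with the remaining paths kept only as a fallback:

```python
def select_optimum_paths(paths, owning_controller):
    """Return the optimum paths: those leading to the LUN's owning
    (prior) controller. If no such path exists, fall back to the
    remaining paths so I/O can still be redirected by the array.

    paths: list of (path_id, controller_id) tuples.
    """
    optimum = [p for p, c in paths if c == owning_controller]
    return optimum or [p for p, _ in paths]

# Controller A owns the LUN, so the path to A is the optimum path.
paths = [("path0", "A"), ("path1", "B")]
print(select_optimum_paths(paths, "A"))  # ['path0']
```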

  • Failover and Failback
    • Failover

      When a path fails, UltraPath fails over its services to another functional path. Figure 1-6 shows the failover process.

      Figure 1-6 UltraPath failover
      1. An application on the application server sends an I/O request to the virtual LUN presented by UltraPath.
      2. UltraPath designates Path0 to transfer the I/O request.
      3. A fault on Path0 prevents the I/O from reaching the storage system. The I/O is returned to UltraPath.
      4. UltraPath designates Path1 to transfer the I/O request.
      5. Path1 is normal, so the I/O request reaches the storage system. A message indicating that the I/O request succeeded is returned to UltraPath.
      6. UltraPath forwards the message to the application server.

        In step 3, the HBA attempts to reconnect for a period of time after the path becomes faulty. During that period, I/Os remain in the HBA instead of being returned to UltraPath, so I/Os are blocked for a period of time during the failover.
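The six steps above amount to a retry loop over the available paths. A minimal Python sketch (illustrative only; `transmit` stands in for the real path driver and is an assumption of this example):

```python
def send_io(io, paths, transmit):
    """Deliver one I/O, failing over to the next path on error.

    transmit(path, io) models the path driver: it returns True when
    the I/O reaches the storage system and False when the path is
    faulty and the I/O is returned (steps 3-5 in the text).
    """
    for path in paths:
        if transmit(path, io):
            return path  # success is reported back to the application
    raise IOError("all paths failed")

# Path0 is faulty and Path1 is healthy, so the I/O fails over to Path1.
health = {"path0": False, "path1": True}
print(send_io("write-1", ["path0", "path1"], lambda p, io: health[p]))
# -> path1
```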

    • Failback

      UltraPath automatically delivers I/Os over the original path again after that path recovers from the fault. A path can be recovered in either of two ways:

      • For a hot-swappable system (for example, Windows), the SCSI device is deleted when the link between the application server and the storage array goes down. After the link recovers, the SCSI device is re-created, and UltraPath immediately detects the path recovery.
      • For a non-hot-swappable system (for example, AIX or an earlier version of Linux), UltraPath periodically tests paths to detect recovery.
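The second (polling) method can be sketched as a periodic probe loop in Python (illustrative only; the function name, probe interface, and intervals are assumptions, not UltraPath internals):

```python
import time

def poll_for_recovery(test_path, faulty_paths, interval_s=1.0, rounds=3):
    """Periodic path test for non-hot-swappable systems.

    Probes each faulty path every `interval_s` seconds, for at most
    `rounds` rounds, and returns the set of paths that have recovered
    so that I/O can fail back to them. test_path(path) returns True
    when a probe I/O succeeds on that path.
    """
    recovered = set()
    for _ in range(rounds):
        for path in faulty_paths:
            if path not in recovered and test_path(path):
                recovered.add(path)
        if recovered == set(faulty_paths):
            break  # everything is back; stop probing early
        time.sleep(interval_s)
    return recovered

# A path that recovers on the second probe is detected on round 2.
probes = {"path0": iter([False, True, True])}
probe = lambda p: next(probes[p])
print(poll_for_recovery(probe, ["path0"], interval_s=0))  # {'path0'}
```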
  • I/O Load Balancing

    UltraPath provides load balancing within a controller and across controllers, as shown in Figure 1-7.

    Figure 1-7 Two I/O load balance modes
    • For load balancing within a controller, I/Os are distributed in turn among all the paths of that controller.
    • For load balancing across controllers, I/Os are distributed in turn among the paths of all controllers.
      The path selection algorithms provided by UltraPath are as follows:
      • Round robin: As shown in Figure 1-8, when an application server delivers I/Os to the storage system, UltraPath sends the first set of I/Os through Path0, the second set through Path1, and so on. Paths are used in turn so that each path is fully utilized.
        Figure 1-8 Round robin algorithm

      • Minimum queue depth: As shown in Figure 1-9, UltraPath counts the I/Os queued on each path and delivers new I/Os to the path with the fewest queued I/Os. The path with the shortest I/O queue has priority for new I/Os.
        Figure 1-9 Minimum queue depth algorithm

      • Minimum task: Building on the minimum queue depth algorithm, UltraPath uses the block size to calculate the overall load on each path and delivers new I/Os to the path with the least queued data. The path with the minimum I/O load has priority for new I/Os.

        Test results show that the minimum queue depth algorithm is superior to the other algorithms in both performance and reliability. You are advised to use the minimum queue depth algorithm.
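The three policies can be sketched together in Python (a minimal illustration, not UltraPath source; the class name and the per-path counters are assumptions made for the example):

```python
from itertools import count

class PathSelector:
    """Sketch of the three path-selection policies described above.

    Each path tracks the number of I/Os queued on it and the total
    bytes queued (I/O count weighted by block size).
    """
    def __init__(self, paths):
        self.paths = list(paths)
        self.queued_ios = {p: 0 for p in paths}
        self.queued_bytes = {p: 0 for p in paths}
        self._rr = count()  # monotonically increasing round-robin index

    def round_robin(self):
        # Use the paths in turn so each one is fully utilized.
        return self.paths[next(self._rr) % len(self.paths)]

    def min_queue_depth(self):
        # The path with the fewest queued I/Os sends the next I/O.
        return min(self.paths, key=lambda p: self.queued_ios[p])

    def min_task(self):
        # The path with the least queued data sends the next I/O.
        return min(self.paths, key=lambda p: self.queued_bytes[p])

sel = PathSelector(["path0", "path1"])
print(sel.round_robin(), sel.round_robin())   # path0 path1
sel.queued_ios = {"path0": 4, "path1": 1}     # few I/Os on path1...
sel.queued_bytes = {"path0": 4096, "path1": 65536}  # ...but large ones
print(sel.min_queue_depth())  # path1 (fewest queued I/Os)
print(sel.min_task())         # path0 (least queued data)
```

The usage lines show why the two queue-based policies can disagree: path1 has fewer queued I/Os but more queued bytes, so minimum queue depth picks path1 while minimum task picks path0.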

  • Path Test

    UltraPath tests the following paths:

    • Faulty paths.

      UltraPath tests faulty paths at a high frequency to detect path recovery as soon as possible.

    • Idle, available paths.

      UltraPath tests idle paths to identify faults in advance, preventing unnecessary I/O retries. The test frequency is kept low to minimize the impact on service I/Os.
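The two test frequencies can be sketched as a simple scheduler in Python (illustrative only; the function name, state layout, and interval values are assumptions, not UltraPath defaults):

```python
def due_for_test(path_state, now, fault_interval=1.0, idle_interval=60.0):
    """Decide which paths to probe at time `now`.

    Faulty paths use the short `fault_interval` so recovery is
    detected quickly; idle available paths use the long
    `idle_interval` to limit the impact on service I/Os.
    path_state: {path: {"faulty": bool, "last_test": float}}
    """
    due = []
    for path, state in path_state.items():
        interval = fault_interval if state["faulty"] else idle_interval
        if now - state["last_test"] >= interval:
            due.append(path)
    return sorted(due)

state = {
    "path0": {"faulty": True,  "last_test": 0.0},  # probed every 1 s
    "path1": {"faulty": False, "last_test": 0.0},  # probed every 60 s
}
print(due_for_test(state, now=5.0))   # ['path0']
print(due_for_test(state, now=60.0))  # ['path0', 'path1']
```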

SAN Boot Functions

SAN Boot is a network storage management mode in which all data, including the servers' operating systems, is stored on storage systems. Specifically, operating systems are installed on and booted from SAN storage devices. SAN Boot is therefore also called remote boot or boot from SAN.

SAN Boot is beneficial to system integration and central management. Its advantages are as follows:

  • Server integration: Blade servers are used to integrate a large number of servers within a small space. There is no need to configure local disks.
  • Centralized management: Boot disks of servers are centrally managed on a storage device. All advanced management functions of the storage device can be fully utilized. For example, the volume replication function can be used for backup. Devices of the same model can be quickly deployed using the volume replication function. In addition, the remote mirroring function can be used for disaster recovery.
  • Quick recovery: Once a server that is booted from SAN fails, its boot volume can be quickly mapped to another server, achieving quick recovery.

Boot modes supported by UltraPath:

  • Boot from Local: Install the operating systems on the local disks of an application server and start the application server from local disks.
  • Boot from SAN: Install the operating systems on the SAN storage devices and start the application server from the SAN storage devices.
Updated: 2019-06-29

Document ID: EDOC1100052672
