
Basic Storage Service Configuration Guide for Block

OceanStor V5 Series V500R007

This document is applicable to OceanStor 5110 V5, 5300 V5, 5500 V5, 5600 V5, 5800 V5, 6800 V5, 5300F V5, 5500F V5, 5600F V5, 5800F V5, 6800F V5, 18500 V5, 18800 V5, 18500F V5, and 18800F V5. It describes the basic storage services and explains how to configure and manage them.
Planning LUNs

This section describes how to plan LUN types, cache policies, prefetch policies, and value-added features based on data storage requirements, so that the storage system delivers optimal performance.

Planning the LUN Type

Storage systems support two types of LUNs: common LUNs (including thin LUNs and thick LUNs) and Protocol Endpoint (PE) LUNs.

  • Common LUN:
    • Thin LUN: A thin LUN is assigned an initial capacity when created, and additional storage resources are allocated to it dynamically when its allocated capacity becomes insufficient.
    • Thick LUN: When a thick LUN is created, the system uses full provisioning to allocate a fixed capacity of storage resources to the thick LUN.
      NOTE:

      When a host initially reads data from and writes data to the storage system, thick LUNs deliver better performance, whereas thin LUNs provide higher space utilization.

  • PE LUN: PE LUNs are dedicated to VMware Virtual Volume (VVol) LUNs in VMware ESXi software-defined storage.
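The difference between the two provisioning models can be sketched with a small Python model. This is illustrative only; the StoragePool, ThickLUN, and ThinLUN classes are hypothetical and not part of the storage system's software:

```python
class StoragePool:
    """A simplified capacity pool (hypothetical model, units in GB)."""
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocated_gb = 0

    def allocate(self, size_gb):
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise RuntimeError("pool capacity exhausted")
        self.allocated_gb += size_gb


class ThickLUN:
    """Thick LUN: the full configured capacity is reserved at creation."""
    def __init__(self, pool, size_gb):
        pool.allocate(size_gb)        # entire capacity reserved up front
        self.size_gb = size_gb


class ThinLUN:
    """Thin LUN: a small initial capacity; more space is drawn on demand."""
    def __init__(self, pool, size_gb, initial_gb=1):
        self.pool = pool
        self.size_gb = size_gb        # advertised (virtual) capacity
        self.backed_gb = initial_gb   # physically allocated so far
        pool.allocate(initial_gb)

    def write(self, offset_gb, length_gb):
        end = offset_gb + length_gb
        if end > self.size_gb:
            raise ValueError("write beyond LUN capacity")
        if end > self.backed_gb:      # grow physical backing on demand
            self.pool.allocate(end - self.backed_gb)
            self.backed_gb = end


pool = StoragePool(capacity_gb=100)
thick = ThickLUN(pool, size_gb=40)     # 40 GB reserved immediately
thin = ThinLUN(pool, size_gb=40)       # only 1 GB reserved so far
thin.write(offset_gb=0, length_gb=10)  # physical backing grows to 10 GB
```

The thick LUN consumes its full capacity from the pool at creation, whereas the thin LUN consumes only what has actually been written, which is why thin LUNs yield higher space utilization.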

Planning Cache Policies (Applicable to Common LUNs)

Cache policies are classified into read policies and write policies. Improper read and write policy settings degrade read and write performance and can compromise the reliability of the storage system.

Table 3-12 describes the two cache policies and their optional policies.

Table 3-12 Cache policies

  • Read policy
    Description: the residence policy of data in the cache after the application server delivers read I/O requests.
    Optional policies:
    • Resident: applies to randomly accessed services, ensuring that data can be cached as long as possible to improve the read hit ratio.
    • Default: applies to regular services, striking a balance between the hit ratio and disk access performance.
    • Recycle: applies to sequentially accessed services, releasing cache resources for other services as soon as possible.
  • Write policy
    Description: the residence policy of data in the cache after the application server delivers write I/O requests.
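The three read policies can be thought of as eviction priorities within the cache. The following minimal Python sketch illustrates the idea; the ReadCache class and priority constants are illustrative assumptions, not the storage system's actual cache implementation:

```python
from collections import OrderedDict

# Hypothetical priority classes mapping the three read policies:
EVICT_FIRST = 0   # Recycle: release cache resources as soon as possible
NORMAL = 1        # Default: ordinary LRU behavior
PINNED = 2        # Resident: keep data cached as long as possible


class ReadCache:
    """A toy read cache: LRU within each priority class (illustrative)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # block -> priority, in LRU order

    def access(self, block, priority=NORMAL):
        if block in self.entries:
            self.entries.move_to_end(block)  # refresh LRU position
            return True                      # cache hit
        if len(self.entries) >= self.capacity:
            self._evict()
        self.entries[block] = priority
        return False                         # cache miss

    def _evict(self):
        # Evict the least recently used entry of the lowest priority class.
        for wanted in (EVICT_FIRST, NORMAL, PINNED):
            for block, prio in self.entries.items():
                if prio == wanted:
                    del self.entries[block]
                    return


cache = ReadCache(capacity=2)
cache.access("A", PINNED)       # Resident data
cache.access("B", EVICT_FIRST)  # Recycle data
cache.access("C", NORMAL)       # cache full: "B" is evicted first
```

Under this model, Recycle data makes room for other services first, while Resident data survives eviction longest, matching the intent of the three optional read policies.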

Planning Prefetch Policies (Applicable to Common LUNs)

Applications read data in different sizes and patterns. Choosing an appropriate LUN prefetch policy can improve read performance.

The storage system supports four prefetch policies: intelligent prefetch, constant prefetch, variable prefetch, and non-prefetch. Table 3-13 describes the principles and application scenarios of the four prefetch policies.

Table 3-13 Principles and application scenarios of prefetch policies

  • Intelligent prefetch
    Principle: The storage system analyzes whether the requested data is sequential. If it is, the data following the currently requested data is prefetched from disks to the cache to improve the cache hit ratio. The intelligent prefetch length ranges from the start address of the currently requested data to the end address of the chunk (CK).
    Application scenario: suitable for single-stream read applications, or for read applications whose access pattern cannot be determined to be sequential or random, for example, file read/write.
  • Constant prefetch
    Principle: After receiving a data read request, the storage system prefetches data to the cache based on a preset prefetch length, regardless of the read length specified in the I/O request.
    Application scenario: suitable for sequential read applications with a fixed request size, for example, requests initiated by multiple users for playing streaming media on demand at the same bit rate.
  • Variable prefetch
    Principle: After receiving a data read request, the storage system prefetches data to the cache based on a multiple of the read length specified in the I/O request.
    Application scenario: suitable for sequential read applications with variable request sizes, or for multi-user concurrent read applications whose prefetch data amount cannot be determined, for example, requests initiated by multiple users for playing multimedia on demand at different bit rates.
  • Non-prefetch
    Principle: The host reads data directly from disks based on the read length specified in the I/O request, without a prefetch process.
    Application scenario: suitable for small-block random read applications, for example, databases.
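The four policies differ mainly in how the prefetch length is computed. The following sketch captures that difference; the function name, parameters, and formulas are simplified assumptions, not the storage system's internal algorithm:

```python
def prefetch_length(policy, io_length, *, constant=0, multiplier=0,
                    is_sequential=False, chunk_remaining=0):
    """Return how many blocks to prefetch beyond the requested io_length
    (hypothetical model of the four prefetch policies)."""
    if policy == "intelligent":
        # Prefetch up to the end of the current chunk (CK), but only when
        # the request stream has been detected as sequential.
        return chunk_remaining if is_sequential else 0
    if policy == "constant":
        # Fixed preset length, regardless of the requested read length.
        return constant
    if policy == "variable":
        # A multiple of the read length specified in the I/O request.
        return io_length * multiplier
    if policy == "none":
        # Non-prefetch: read exactly what was requested.
        return 0
    raise ValueError(f"unknown policy: {policy}")


# Example: a 4x variable prefetch of an 8-block read fetches 32 extra blocks.
extra = prefetch_length("variable", io_length=8, multiplier=4)
```

For small-block random reads, every policy except non-prefetch would fetch data that is unlikely to be reused, which is why non-prefetch suits databases.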

NOTE:

For random read services, you are advised to use the non-prefetch policy, because cache prefetch may degrade system performance in such scenarios.

Planning Value-Added Features

For details about the supported value-added features, see "Software Specifications" in the product description specific to your product model and version.

For details about how to plan value-added features, see the corresponding feature guides.

Updated: 2019-08-30

Document ID: EDOC1000181506