
SmartMulti-Tenant Feature Guide for Block

OceanStor V5 Series V500R007

This document is applicable to OceanStor 5110 V5, 5110F V5, 5300 V5, 5300F V5, 5500 V5, 5500F V5, 5600 V5, 5600F V5, 5800 V5, 5800F V5, 6800 V5, 6800F V5, 18500 V5, 18500F V5, 18800 V5, and 18800F V5. This document describes the implementation principles and application scenarios of the SmartMulti-Tenant feature. Also, it explains how to configure and manage SmartMulti-Tenant.
Planning LUNs


Select appropriate LUN types, read/write policies, and prefetch policies for LUNs based on the data storage requirements to achieve optimal performance of the storage system.

Planning the LUN Type

Storage systems support two types of LUNs: common LUNs (including thin LUNs and thick LUNs) and Protocol Endpoint (PE) LUNs.

  • Common LUN:
    • Thin LUN: A thin LUN is configured with an initial capacity when created; additional storage resources are allocated dynamically when its available capacity becomes insufficient.
    • Thick LUN: When a thick LUN is created, the system allocates the full, fixed capacity of storage resources to the thick LUN up front.
NOTE:

When a host initially reads data from and writes data to a storage system, thick LUNs deliver better performance and thin LUNs boast higher space utilization.

  • PE LUN: PE LUNs are used only as VMware Virtual Volume (VVol) LUNs in VMware ESXi 6.0 software-defined storage.
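To make the provisioning difference concrete, the following Python sketch models a thick LUN reserving its full capacity at creation versus a thin LUN drawing from the pool only as data is written. The class names and simple GB accounting are illustrative assumptions, not the storage system's actual allocation logic.

```python
# Illustrative sketch only: contrasts thick vs. thin LUN provisioning.
# Names and GB-granularity accounting are assumptions for clarity.

class StoragePool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocated_gb = 0

    def allocate(self, size_gb):
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise RuntimeError("storage pool exhausted")
        self.allocated_gb += size_gb

class ThickLUN:
    """The full, fixed capacity is reserved from the pool at creation."""
    def __init__(self, pool, size_gb):
        pool.allocate(size_gb)
        self.size_gb = size_gb

class ThinLUN:
    """Capacity is drawn from the pool only as data is actually written."""
    def __init__(self, pool, size_gb):
        self.pool = pool
        self.size_gb = size_gb
        self.used_gb = 0

    def write(self, amount_gb):
        if self.used_gb + amount_gb > self.size_gb:
            raise RuntimeError("write exceeds LUN capacity")
        self.pool.allocate(amount_gb)  # allocate from the pool on demand
        self.used_gb += amount_gb
```

With a 100 GB pool, creating a 40 GB ThickLUN consumes 40 GB of the pool immediately, while a 50 GB ThinLUN consumes nothing until write() is called, which is why thin LUNs yield higher space utilization.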

Planning a Cache Policy (Applicable to Common LUNs)

Cache policies are divided into read policies and write policies. Improper settings of the read and write policies will degrade read and write performance and reduce the reliability of the storage system.

Table 2-6 describes the two cache policies and their optional policies.

Table 2-6 Description and optional policies of the cache policies

Read policy
  Description: The cache read policy refers to the residence policy of data in the cache after the application server delivers a read I/O request.
  Optional policies available on the storage system:
    • Resident: Applies to randomly accessed services, ensuring that data is cached as long as possible to improve the read hit ratio.
    • Default: Applies to regular services, striking a balance between the hit ratio and disk access performance.
    • Recycle: Applies to sequentially accessed services, releasing cache resources for other services as soon as possible.

Write policy
  Description: The cache write policy refers to the residence policy of data in the cache after the application server delivers a write I/O request.
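One way to picture the read policies is as an eviction bias in the cache: data from Resident LUNs is evicted last, data from Recycle LUNs first. The sketch below is a toy model built on that assumption; it is not the storage system's actual cache algorithm, and the priority values are invented for illustration.

```python
# Toy model: a read cache where the per-LUN read policy biases eviction.
# Priority values are an assumption used for illustration only.
EVICTION_ORDER = {"Recycle": 0, "Default": 1, "Resident": 2}  # lower evicts first

class ReadCache:
    def __init__(self, capacity_blocks):
        self.capacity_blocks = capacity_blocks
        self.blocks = {}  # block id -> read policy of the LUN it came from

    def insert(self, block_id, policy):
        if block_id in self.blocks:
            return  # already cached (a read hit)
        if len(self.blocks) >= self.capacity_blocks:
            # Evict the cached block whose policy releases cache soonest.
            victim = min(self.blocks, key=lambda b: EVICTION_ORDER[self.blocks[b]])
            del self.blocks[victim]
        self.blocks[block_id] = policy
```

In a two-block cache holding one Resident block and one Recycle block, a new insertion evicts the Recycle block first, which matches the intent of the policies in Table 2-6: random-access data lingers to raise the hit ratio, sequential data makes way quickly.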

Planning Prefetch Policies (Applicable to Common LUNs)

Applications have different size requirements for data reads. Selecting an appropriate prefetch policy for a LUN can improve read performance.

The storage system supports four prefetch policies: intelligent prefetch, constant prefetch, variable prefetch, and non-prefetch. Table 2-7 describes the principles and application scenarios of the four prefetch policies.

Table 2-7 Principles and application scenarios of prefetch policies

Intelligent prefetch
  Principle: Intelligent prefetch analyzes whether the requested data is sequential. If it is, the data following the currently requested data is prefetched from disks to the cache to improve the cache hit ratio. The intelligent prefetch length ranges from the start address of the currently requested data to the end address of the CK.
  Application scenario: Single-stream read applications, or read applications that cannot be determined as sequential or random, for example, file read/write.

Constant prefetch
  Principle: After receiving a data read request, the storage system prefetches data to the cache based on the preset prefetch length, regardless of the read length specified in the I/O request.
  Application scenario: Sequential read applications with a fixed request size, for example, requests initiated by multiple users for playing streaming media on demand at the same bit rate.

Variable prefetch
  Principle: After receiving a data read request, the storage system prefetches data to the cache based on a multiple of the read length specified in the I/O request.
  Application scenario: Sequential read applications with a variable request size, or multi-user concurrent read applications whose prefetch data amount cannot be determined, for example, requests initiated by multiple users for playing multimedia on demand at different bit rates.

Non-prefetch
  Principle: The host reads data directly from disks based on the read length specified in the I/O request, without a prefetch process.
  Application scenario: Small-block random read applications, for example, databases.

NOTE:

Cache prefetch may degrade system performance when random read services are running on the storage system. In this case, you are advised to use the non-prefetch policy.
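The prefetch-length rules in Table 2-7 can be summarized as a small function. The parameter names below are illustrative assumptions, not the storage system's actual configuration items; intelligent prefetch is excluded because its length is determined at run time from the detected access pattern rather than from a fixed rule.

```python
def prefetch_length_kb(policy, io_read_len_kb, constant_len_kb=0, multiplier=0):
    """Return how much data (KB) is prefetched for one read request.

    Illustrative sketch of the rules in Table 2-7; parameter names are
    assumptions, not real storage system settings.
    """
    if policy == "constant":
        # Fixed prefetch length, regardless of the I/O request's read length.
        return constant_len_kb
    if policy == "variable":
        # Prefetch a multiple of the read length specified in the I/O request.
        return io_read_len_kb * multiplier
    if policy == "non-prefetch":
        # Read only what was requested; no prefetch.
        return 0
    raise ValueError("intelligent prefetch length is decided dynamically")
```

For example, with a 64 KB read request, constant prefetch with a 512 KB preset always fetches 512 KB, variable prefetch with a multiplier of 4 fetches 256 KB, and non-prefetch fetches nothing beyond the request.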

Updated: 2019-07-11

Document ID: EDOC1000181478
