FusionCloud 6.3.1.1 Solution Description 04


Related Concepts

Device Type

Definition

Device types of EVS disks are divided based on whether advanced SCSI commands are supported. The device type can be Virtual Block Device (VBD) or Small Computer System Interface (SCSI).
  • VBD: EVS disks of this type support only basic SCSI read and write commands. They are typically used in common scenarios, for example, office automation (OA), testing, and Linux clusters such as RHCS.
  • SCSI: EVS disks of this type support transparent SCSI command transmission and allow the ECS operating system to directly access the underlying storage media. SCSI EVS disks support advanced SCSI commands (such as SCSI-3 persistent reservations) in addition to basic SCSI read and write commands. They can be used in cluster scenarios where data security is ensured by using the SCSI lock mechanism, such as a Windows MSCS cluster.
    NOTE:

    For details about the BMS OSs supported by SCSI EVS disks and the required BMS software, see Usage Requirements on SCSI EVS Disks.

Usage Requirements on SCSI EVS Disks

Currently, only SCSI EVS disks can be attached to BMSs. VBD EVS disks attached to BMSs will be used as SCSI EVS disks by default.

The BMS OS is preinstalled with the driver required for using SCSI EVS disks, and you do not need to install the driver.
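The following is a minimal sketch (not part of this document's procedures) showing how the SCSI-3 persistent reservation support described above could be probed from within the OS. It assumes that the sg3_utils tool sg_persist is available and that /dev/sdb is an attached SCSI EVS disk; both are assumptions for illustration only.

```python
# Minimal sketch: probe whether a disk accepts SCSI-3 persistent reservation commands.
# Assumptions: sg3_utils (sg_persist) is installed and /dev/sdb is the attached disk.
import subprocess

def accepts_persistent_reserve_in(device: str) -> bool:
    """Return True if the device answers a PERSISTENT RESERVE IN (read keys) command."""
    result = subprocess.run(
        ["sg_persist", "--in", "--read-keys", device],
        capture_output=True,
        text=True,
    )
    # A zero exit status means the device accepted the command.
    return result.returncode == 0

if __name__ == "__main__":
    print(accepts_persistent_reserve_in("/dev/sdb"))
```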

Disk Type

Definition

The disk type can be selected during disk creation. A disk type represents the backend storage devices used by a group of disks. Disk types of EVS disks are divided based on the backend storage type to meet different service performance requirements.

Based on performance differences of backend storage used by disks, typical disk types and their application scenarios are as follows:

  • Common performance: EVS disks of this type are suitable for scenarios that require large capacity, medium read and write speed, and relatively few transactions, such as development and test environments.
  • Medium performance: EVS disks of this type are suitable for scenarios that require common performance but rich enterprise-class features. They can be used in common databases, application VMs, and middleware VMs.
  • High performance: EVS disks of this type are suitable for scenarios that require high performance, fast read and write speed, and large throughput, such as data warehouses.
  • Ultra-high performance: EVS disks of this type are suitable for data-intensive scenarios that require very high I/O performance, such as NoSQL and relational databases.
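As an illustration of how a deployment script might encode the tier selection above, the following sketch maps workload labels to the disk performance tiers listed in this section. The helper, the workload labels, and the tier identifiers are hypothetical and are not product names.

```python
# Hypothetical mapping of workload profiles to the disk performance tiers described above.
RECOMMENDED_DISK_TYPE = {
    "dev_test":        "common",      # large capacity, medium speed, relatively few transactions
    "common_database": "medium",      # common performance plus enterprise-class features
    "data_warehouse":  "high",        # fast read/write speed, large throughput
    "nosql_or_rdbms":  "ultra-high",  # data-intensive, very high I/O performance
}

def pick_disk_type(workload: str) -> str:
    # Fall back to the common tier for workloads that are not listed.
    return RECOMMENDED_DISK_TYPE.get(workload, "common")

print(pick_disk_type("data_warehouse"))  # -> high
```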

Changing the Disk Type

When the read and write performance of the storage device where an upper-layer service resides no longer suits the service, you can change the disk type. Changing the disk type moves the disk to a different type of storage device and therefore changes its read and write performance, meeting the changing performance requirements of services on the instance. Examples are as follows:

  • When your service requires a higher read and write performance, you can migrate your service from disks created on low-speed storage media to disks created on high-speed storage media to improve the read and write performance.
  • If the performance requirements of a service decrease, you can migrate the service to disks created on low-performance storage media. This releases high-performance storage resources for other services.

You can change the disk type of an in-use disk (a disk that has been attached to an instance). You can also detach a disk from the instance, and then change the disk type of the disk.

If you change the disk type of an in-use EVS disk, the service data on the source EVS disk is migrated to the destination EVS disk without interrupting host services. After the migration, the destination EVS disk replaces the source EVS disk and continues to run the service, with no adverse impact on customer experience. However, while the disk type of an in-use EVS disk is being changed, the performance of the instance is affected to some extent.

Figure 8-18 shows the implementation principle of changing the disk type. In the figure, two disks are attached to an instance: one serves as a log disk and the other as a data disk. The original disk type of both disks is SLA_SAS. Because the service has a higher performance requirement on the data disk, the disk type of the data disk is changed from SLA_SAS to SLA_SSD, and service data is seamlessly migrated to a disk of the target type. The backend storage device performs the data migration. After the migration, the system automatically attaches the destination disk to the instance without service interruption, and the source disk is deleted to release storage resources for other services.
Figure 8-18 Implementation principle of changing the disk type
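The following is a minimal Python simulation of the flow shown in Figure 8-18, purely to illustrate the sequence of steps (migrate data to a disk of the target type, re-attach, release the source disk). The class names and labels follow the example above; this is not a product API.

```python
# Hypothetical simulation of changing the disk type of an in-use data disk
# from SLA_SAS to SLA_SSD, following the steps described above.
from dataclasses import dataclass, field

@dataclass
class Disk:
    name: str
    disk_type: str
    data: dict = field(default_factory=dict)

@dataclass
class Instance:
    disks: dict = field(default_factory=dict)   # role -> attached Disk

def change_disk_type(instance: Instance, role: str, target_type: str) -> None:
    source = instance.disks[role]
    # 1. The backend storage migrates service data to a disk of the target type.
    destination = Disk(source.name, target_type, dict(source.data))
    # 2. The destination disk is attached in place of the source disk,
    #    so the service keeps running without interruption.
    instance.disks[role] = destination
    # 3. The source disk is deleted to release its storage resources
    #    (simulated here by clearing its data).
    source.data.clear()

vm = Instance({"log": Disk("log_disk", "SLA_SAS"), "data": Disk("data_disk", "SLA_SAS")})
change_disk_type(vm, "data", "SLA_SSD")
print(vm.disks["data"].disk_type)  # -> SLA_SSD
```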

Shared Disk

In a traditional cluster architecture, multiple computing nodes need to access the same data so that the HA cluster can continue providing services when one or more computing nodes are faulty; that is, a faulty component does not cause service interruption. Therefore, important data files need to be stored on shared block storage, which is centrally managed by a cluster file system so that all frontend computing nodes see the same data.

The shared disk is designed for the core service HA architecture of enterprise customers. The shared disk is suitable for scenarios that require shared block storage access in the share-everything architecture. The scenarios include the HA Oracle RAC database architecture for government, enterprise, and finance customers and the HA server cluster architecture.

Definition

EVS disks can be classified into non-shared EVS disks and shared EVS disks based on whether an EVS disk can be attached to multiple instances. A non-shared EVS disk can be attached to only one instance. A shared EVS disk can be attached to multiple instances.

Currently, shared EVS disks can be used as data disks only and cannot be used as system disks.

You can use the EVS console to create VBD shared EVS disks or SCSI shared EVS disks. However, only SCSI EVS disks can be attached to BMSs. Therefore, you can attach only SCSI shared EVS disks to BMSs.

You can use the BMS console to create VBD shared EVS disks (default EVS disks) together with BMSs, and attach the VBD shared EVS disks to BMSs as data disks. VBD EVS disks attached to BMSs will be used as SCSI EVS disks by default.

SCSI Reservation

SCSI shared EVS disks support SCSI reservation. If SCSI reservation is required for your applications, create SCSI shared EVS disks.

SCSI reservation is a basic mechanism for multiple hosts to share a disk. In a shared storage environment, multiple service hosts may access a disk simultaneously. If multiple hosts write to the disk at the same time, the disk cannot determine which host's data should be written first. SCSI reservation is introduced to prevent this problem, which could otherwise cause data damage.

Figure 8-19 shows how SCSI reservation is implemented. When a SCSI shared disk is attached to multiple BMSs, if one of the BMSs sends a SCSI reservation command to the SCSI shared disk, the SCSI shared disk is locked for the other BMSs. In this case, the other BMSs cannot write data into the SCSI shared disk.

Figure 8-19 SCSI reservation implementation mechanism
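The following is a minimal Python simulation of the locking behavior shown in Figure 8-19 (it models the effect of a reservation, not the actual SCSI command set): once one BMS holds the reservation, write attempts from other BMSs are rejected.

```python
# Simulation of SCSI reservation semantics on a shared disk:
# the reservation holder may write; other initiators are locked out.
class SharedScsiDisk:
    def __init__(self):
        self.reservation_holder = None   # BMS currently holding the reservation
        self.blocks = {}

    def reserve(self, bms: str) -> bool:
        if self.reservation_holder in (None, bms):
            self.reservation_holder = bms
            return True
        return False                     # reservation conflict

    def release(self, bms: str) -> None:
        if self.reservation_holder == bms:
            self.reservation_holder = None

    def write(self, bms: str, block: int, value: bytes) -> bool:
        if self.reservation_holder not in (None, bms):
            return False                 # disk is locked for this BMS
        self.blocks[block] = value
        return True

disk = SharedScsiDisk()
disk.reserve("BMS-1")
print(disk.write("BMS-1", 0, b"ok"))       # True: the holder can write
print(disk.write("BMS-2", 0, b"denied"))   # False: other BMSs cannot write
```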

Usage Instructions

A shared EVS disk is essentially a disk that can be attached to multiple instances, similar to a physical disk attached to multiple physical servers: each server can read data from and write data to any space on the disk. If the data read and write rules between these servers (such as the read and write sequence and meaning) are not defined, read and write interference between servers or other unpredictable errors may occur.

Shared EVS disks are block storage devices that support random read and write access and allow shared access. Shared EVS disks do not provide a cluster file system; you need to deploy a cluster file system to manage shared EVS disks.

If a shared EVS disk is attached to multiple instances but is managed using a common file system, disk space allocation conflict will occur and data files will be inconsistent. The details are as follows:

  • Disk space allocation conflict

    Suppose that a shared EVS disk is attached to multiple instances. When a process on instance A writes files into the shared EVS disk, it checks the file system and available disk space. After files are written into the shared EVS disk, instance A will change its own space allocation records, but will not change the space allocation records on the other instances. Therefore, when instance B attempts to write files to the shared EVS disk, it may allocate disk space addresses that have been allocated by instance A, resulting in disk space allocation conflict.

  • Inconsistent data files

    Suppose instance A reads data and records it in its cache. When another process on instance A accesses the same data, the process reads it directly from the cache. If instance B then changes the data on the disk, instance A is unaware of the change and continues to read the stale data from its cache. As a result, service data becomes inconsistent between instance A and instance B.

Therefore, the proper way to use shared EVS disks is to manage the block devices centrally with a cluster solution, such as Oracle RAC, a Windows WSFC cluster, a Linux RHCS cluster, a Veritas VCS cluster, or a CFS cluster application. In typical Oracle RAC service scenarios, you are advised to use ASM to manage storage volumes and the file system in a unified manner.
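The following small simulation illustrates the disk space allocation conflict described above, assuming each instance keeps its own non-shared allocation records, as a common (non-cluster) file system would. The class is hypothetical and only models the metadata, not an actual file system.

```python
# Simulation: two instances manage the same shared disk with independent
# file system metadata, so they hand out the same block addresses.
class CommonFsView:
    """Per-instance view of free space; changes are not visible to other instances."""
    def __init__(self, total_blocks: int):
        self.free = list(range(total_blocks))

    def allocate(self, n: int) -> list:
        blocks, self.free = self.free[:n], self.free[n:]
        return blocks

instance_a = CommonFsView(total_blocks=100)
instance_b = CommonFsView(total_blocks=100)   # B cannot see A's allocation records

blocks_a = instance_a.allocate(10)
blocks_b = instance_b.allocate(10)
print(sorted(set(blocks_a) & set(blocks_b)))  # blocks 0..9 allocated twice -> conflict
```

A cluster file system avoids this by keeping a single, coordinated set of allocation records that all instances share.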

EVS Disk Snapshot

Definition

An EVS disk snapshot is an important data recovery method that records the status of EVS disk data at a specific point in time. A snapshot created for an EVS disk at a certain point in time is independent of the life cycle of that EVS disk. The snapshot can be used to roll back the EVS disk and restore its data to the time when the snapshot was taken.

A snapshot is different from a backup. A backup is a complete copy of EVS disk data at a certain point in time, while a snapshot is not. Therefore, a snapshot occupies less space and can be created faster than a backup. However, if the disk is physically damaged, data cannot be restored using snapshot rollback; in that case, a backup can be used.

Currently, snapshots have to be created manually.

You can create an EVS disk from a snapshot. The created EVS disk contains the data of the snapshot and is a precise copy of the source EVS disk. An EVS disk created from a snapshot does not need to be partitioned or formatted, and no file system needs to be created on it. Once the EVS disk is attached to an instance, the instance can read data from and write data to it. Therefore, snapshots are an important way of sharing and migrating data.

Snapshots are region-specific. You can create EVS disks from snapshots only in the AZ where the EVS disks need to be created.

Application Scenarios

Snapshots are a convenient and efficient means of data protection. You are advised to use snapshots in the following scenarios:

  • Routine data backup and restoration

    Snapshots are used to periodically back up important service data on system disks and data disks to prevent data loss caused by misoperations, attacks, or viruses.

    When data loss or data inconsistency occurs on an EVS disk due to misoperations, viruses, or hacker attacks, you can use a snapshot to restore the EVS disk to a previous normal state. In addition, you are advised to create disk snapshots before major changes (such as application software upgrades or service data migration). If the operation fails, you can roll back to the snapshots to restore service data, as shown in Figure 8-20.

    Figure 8-20 Using snapshots for routine data backup and restoration

  • Multi-service quick deployment

    You can use a snapshot to create multiple disks containing the same initial data, and these disks can be used as data resources for various services, such as data mining, report query, and development and test. This method protects the initial data and creates disks rapidly, meeting the diversified service data requirements. Figure 8-21 shows the procedure for using a snapshot to deploy multiple services.

    Figure 8-21 Using a snapshot to deploy multiple services

Recommendation Policies

You can choose an appropriate snapshot policy and retention policy based on your service type. Recommended policies are as follows:

  • Core services: For core services that have stringent Recovery Point Objective (RPO) requirements, it is recommended that data be backed up every few hours and snapshots be retained for one day.
  • Production services: For production services, it is recommended that data be backed up every week and snapshots be retained for one month.
  • Archiving services: For archiving services, it is recommended that data be backed up every month and snapshots be retained for one year.
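The recommended intervals and retention periods above can be written down as a simple policy table. The field names, units, and the 4-hour value standing in for "every several hours" below are assumptions for illustration; this is not a product configuration format.

```python
# Illustrative encoding of the recommended snapshot policies above.
SNAPSHOT_POLICIES = {
    "core":       {"interval_hours": 4,       "retention_days": 1},    # "every several hours" (example: 4)
    "production": {"interval_hours": 7 * 24,  "retention_days": 30},   # weekly, retained for one month
    "archiving":  {"interval_hours": 30 * 24, "retention_days": 365},  # monthly, retained for one year
}

for service, policy in SNAPSHOT_POLICIES.items():
    print(service, policy)
```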

Implementation Principles

The snapshot implementation principle varies with the type of backend storage where the disk resides. Snapshot implementation principles for different backend storage types are described as follows:

  • OceanStor V3 or OceanStor V5 series as backend storage

    A snapshot is a copy of source disk data, which is generated at a specific time. A snapshot consists of a source disk, Copy-on-Write (COW) data space, and snapshot data. Snapshots are implemented using the mapping table and COW technology. Figure 8-22 shows the snapshot implementation principle.

    Figure 8-22 Snapshot implementation principle (OceanStor V3 or OceanStor V5 series as backend storage)
    • Before creating a snapshot: When no snapshot is created for a disk, the procedure for writing data into the disk is the same as the procedure for writing data into other disks. Data changes will be directly written into disk data blocks, overwriting the original data, and the original data will not be retained.
    • After creating a snapshot: After a snapshot is created, a data copy that is identical to the source disk is generated. In this step, the backend storage system dynamically allocates COW data space in the storage pool where the source disk resides, and automatically generates a snapshot. The pointer of the snapshot points to the storage location of source disk data.
    • Writing data into the source disk: When an instance sends a request to write data into the source disk, the backend storage system will not write the new data immediately. Instead, the backend storage system employs the COW mechanism to copy the original data from the source disk to the COW data space, modifies the mapping in the mapping table, and writes the new data to the source disk. As shown in Figure 8-22, when data A of the source disk needs to be changed, data A will be copied to the COW data space, and then the snapshot pointer will be changed to point to the storage location of data A in the COW data space. Finally, data A' will be written into the source disk.
  • Dorado V3 series as backend storage

    The core technology in snapshot implementation is Redirect-on-Write (ROW). Figure 8-23 shows the snapshot implementation principle.

    Figure 8-23 Snapshot implementation principle (Dorado V3 series as backend storage)
    • Before creating a snapshot: When no snapshot is created for a disk, the procedure for writing data into the disk is the same as the procedure for writing data into other disks. Data changes will be directly written into disk data blocks, overwriting the original data, and the original data will not be retained.
    • After creating a snapshot: After a snapshot is created, a data copy that is identical to the source disk is generated. In this step, the backend storage system copies the pointer of the source disk to the snapshot, and the pointer of the snapshot points to the storage location of source disk data.
    • Writing data into the source disk: When an instance sends a request to write data into the source disk after a snapshot is created, the storage system uses the ROW technology to save the new data to a new location and changes the pointer of the source disk to point to the storage location of the new data. The pointer of the snapshot still points to the storage location of the original data. The source disk data at the time when the snapshot was created is saved. As shown in Figure 8-23, when data A of the source disk needs to be changed, data A' (new data) will be written into a new location, and the pointer of the source disk will be changed to point to the storage location of data A'. The pointer of the snapshot still points to the storage location of data A (original data). A short code sketch contrasting the COW and ROW write paths is provided after this list.
  • FusionStorage as backend storage

    Snapshots are implemented based on the Distributed Hash Table (DHT) mechanism. Figure 8-24 shows the snapshot implementation principle.

    Figure 8-24 Snapshot implementation principle (FusionStorage as backend storage)
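To make the difference between the two write paths concrete, the following is a minimal Python simulation (not storage firmware): COW copies the original block into the COW data space before overwriting the source block in place, while ROW writes the new data to a new location and redirects the source disk's pointer.

```python
# Minimal simulation of the COW and ROW snapshot write paths described above.

def cow_write(source: dict, cow_space: dict, addr: str, new_data: str) -> None:
    """Copy-on-Write (OceanStor V3/V5): preserve the original data in the COW
    space, point the snapshot at the copy, then overwrite the source block."""
    if addr not in cow_space:            # first change to this block since the snapshot
        cow_space[addr] = source[addr]   # original data A is copied to the COW data space
    source[addr] = new_data              # A' is written into the source disk

def row_write(volume: dict, source_ptrs: dict, snap_ptrs: dict,
              addr: str, new_data: str) -> None:
    """Redirect-on-Write (Dorado V3): write A' to a new location and redirect the
    source disk's pointer; the snapshot pointer keeps referencing the original data."""
    new_location = addr + "@new"
    volume[new_location] = new_data      # A' is written to a new location
    source_ptrs[addr] = new_location     # the source disk now points to A'
    # snap_ptrs[addr] is left unchanged and still points to the original data A

# COW example
source, cow_space = {"blk0": "A"}, {}
cow_write(source, cow_space, "blk0", "A'")
print(source, cow_space)                 # {'blk0': "A'"} {'blk0': 'A'}

# ROW example
volume = {"blk0": "A"}
source_ptrs, snap_ptrs = {"blk0": "blk0"}, {"blk0": "blk0"}
row_write(volume, source_ptrs, snap_ptrs, "blk0", "A'")
print(volume[source_ptrs["blk0"]], volume[snap_ptrs["blk0"]])  # A' A
```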

Rolling Back a Disk from a Snapshot

Snapshot rollback is a mechanism for quickly restoring data on the source disk by using a snapshot of the source disk taken at a certain point in time. If the data on the source disk is accidentally deleted, damaged, or infected by viruses and the source disk is not physically damaged, you can use the snapshot rollback function to quickly restore the data on the source disk to the point in time when the snapshot was taken, reducing the amount of data lost. Figure 8-25 shows the snapshot rollback process.

Figure 8-25 Snapshot rollback

EVS Disk Quota

A quota is a resource management and control technology that limits the maximum amount of resources (including resource capacity and quantity) that can be used by a single VDC, preventing users in some VDCs from overusing resources and affecting other VDCs. When creating a level-1 VDC, the operation administrator can set the total quota (capacity and quantity) of EVS disks in the VDC and the EVS disk quota of the current-level VDC. When creating a lower-level VDC, the VDC administrator can set the total quota of EVS disks in the lower-level VDC and the EVS disk quota of the current-level VDC. Figure 8-26 shows the quota of EVS disks in VDCs of different levels.

Figure 8-26 EVS disk quota

There are three levels of VDCs in the figure.

  • Users in the VDC of each level can use EVS disk resources in the quota of the current-level VDC.
  • The maximum total quota of the level-2 VDC is the total quota of the level-1 VDC minus the quota of the current-level VDC corresponding to the level-1 VDC.
  • The maximum total quota of the level-3 VDC is the total quota of the level-2 VDC minus the quota of the current-level VDC corresponding to the level-2 VDC.
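A worked example of the rules above, with made-up capacities (the numbers are assumptions for illustration only):

```python
# Worked example of the quota rules above; capacities (in TB) are assumptions.
level1_total = 100                              # total EVS quota of the level-1 VDC
level1_own   = 20                               # quota kept for the level-1 VDC itself

max_level2_total = level1_total - level1_own    # 80: maximum total quota of the level-2 VDC

level2_total = 80
level2_own   = 30
max_level3_total = level2_total - level2_own    # 50: maximum total quota of the level-3 VDC

print(max_level2_total, max_level3_total)       # 80 50
```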

Mapping Between Mount Points and Device Names

A block storage device is a storage device that transfers data in blocks (fixed-size sequences of bytes). These devices support random access and generally use buffered I/O; examples include hard disks, CD-ROMs, and flash drives. A block storage device can be attached to a computer locally or accessed remotely as if it were attached locally. The instance supports the following block storage devices:

  • Local disk: is the hard disk that is attached to the physical machine (host machine) where the instance is located and is a temporary block storage device.
  • EVS disk: is a cloud disk that is attached to an instance and is a persistent block storage device.

A mount point is the entry directory of a disk file system in Linux. It is similar to the drive letters (such as C:, D:, and E:) used to access different partitions in Windows. Each mount point corresponds to a device name. You can attach a disk to an instance by specifying the device name for its mount point.

Block Storage Device Mapping

The instance uses a device name (for example, /dev/sdb) to describe a block storage device and uses the block storage device mapping to specify the block storage device to be attached to the instance. Figure 8-27 shows an example of mapping local disks and EVS disks to instances. In the figure, one local disk is attached to the Linux instance and mapped to /dev/sda as the system disk, and two EVS disks are mapped to /dev/sdb and /dev/sdc, respectively, as data disks.

Figure 8-27 Example of mapping between EVS disks as well as local disks and instances
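The mapping in Figure 8-27 can be restated as a simple table from device name to disk role. This is only a restatement of the example above, not an API.

```python
# The block storage device mapping from the example above, expressed as a table.
# Device names follow the Linux naming convention used by the instance (/dev/sdX).
block_device_mapping = {
    "/dev/sda": {"device": "local disk", "role": "system disk"},
    "/dev/sdb": {"device": "EVS disk",   "role": "data disk"},
    "/dev/sdc": {"device": "EVS disk",   "role": "data disk"},
}

for name, info in block_device_mapping.items():
    print(f"{name}: {info['device']} ({info['role']})")
```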