FusionCloud 6.3.1.1 Solution Description 04

Region Deployment Principles

FusionCloud involves multiple DCs that may belong to different Regions. Figure 3-2 and Table 3-2 list the principles for Global deployment or Region deployment.

Figure 3-2 Principles for Global or Region deployment
Table 3-2 Principles for Global or Region deployment

Global

Description: Global is a top-level logical concept in the FusionCloud solution. Only one Global can be deployed in one FusionCloud solution.

Planning principle: ManageOne is deployed in the Global to serve as the unified management platform for multiple Regions. Identity and Access Management (IAM) serves as the global unified authentication service.

Region

Description: A Region is a Layer 0 (geographic) concept. A Region can be thought of as a circle whose radius is the access latency.

  • Access latency: Users in a Region receive services within a latency shorter than a specific value, for example, 100 ms.
  • Coverage area: Service quality cannot be guaranteed beyond the radius (latency). In this case, another Region is required for service provisioning.
  • Geographic redundancy: Regions are geographically diverse and allow geographical redundancy at different levels.

Planning principle: Region planning in a project must consider physical locations and network solutions.

  • If the latency between two physical DCs exceeds 2 ms, they must belong to different Regions (a minimal grouping sketch follows the note below).
  • Management, storage, and service traffic between devices in a Region is relatively heavy and requires large bandwidth. It is therefore recommended that devices in a Region not be spread across different physical DCs.
  • Management-plane devices within a Region are interconnected. If a project has strict security requirements, services with high security levels can be deployed in an independent Region.
  • The cloud server disaster recovery (CSDR) service is a cross-Region DR service. When using CSDR, you need to plan an active Region and a DR Region.
NOTE:

Based on the network solution, Region types are classified into software SDN and hardware SDN. Only one Region type can be deployed on a cloud.
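
The 2 ms rule above can be checked mechanically during planning. The following Python sketch is purely illustrative and not FusionCloud code; the DC names, latency values, and the plan_regions helper are assumptions used to show how measured inter-DC latency maps to Region membership.

# Minimal sketch, assuming hypothetical DC names and measured latencies; this is
# not FusionCloud code, only an illustration of the 2 ms planning rule above.
MAX_INTRA_REGION_LATENCY_MS = 2.0  # DCs farther apart than this go to different Regions

def plan_regions(dcs, latency_ms):
    """Group DCs so that every pair inside a Region stays within 2 ms.

    dcs        -- list of DC names
    latency_ms -- dict mapping frozenset({dc_a, dc_b}) to measured latency in ms
    """
    regions = []  # each Region is a list of DC names
    for dc in dcs:
        for region in regions:
            # A DC may join a Region only if it is within 2 ms of every member.
            if all(latency_ms[frozenset((dc, member))] <= MAX_INTRA_REGION_LATENCY_MS
                   for member in region):
                region.append(dc)
                break
        else:
            regions.append([dc])  # latency too high everywhere: start a new Region
    return regions

# Example: DC-A and DC-B are close together; DC-C is remote and gets its own Region.
latencies = {
    frozenset(("DC-A", "DC-B")): 1.2,
    frozenset(("DC-A", "DC-C")): 8.5,
    frozenset(("DC-B", "DC-C")): 7.9,
}
print(plan_regions(["DC-A", "DC-C", "DC-B"], latencies))
# [['DC-A', 'DC-B'], ['DC-C']]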

AZ

Description: An availability zone (AZ) is a logical zone of physical resources (computing, storage, and network resources).

A Region can contain multiple AZs. An AZ belongs to one Region and cannot span Regions. Multiple AZs within a Region are interconnected using high-speed optical fibers to meet the requirements of building cross-AZ high-availability systems. Each AZ can contain one or more host groups.

Planning principle:

  • Resource pool type: Different types of computing resource pools, for example, BMS resource pools, VM resource pools, and converged resource pools, must be divided into different AZs.
  • Reliability: Physical resources in an AZ share common points of failure, such as the power supply, disk array, and switches. If users want to implement cross-AZ reliability for service applications (for example, by deploying the VMs running a service application in two AZs), they must plan multiple AZs.
  • The Cloud Server High Availability (CSHA) service is a cross-AZ DR service. When using CSHA, you need to plan an active AZ and a DR AZ.
NOTE:

Compute, storage, and network resources in an AZ are interconnected. Users can bind VMs to disks and networks in the same AZ without restriction; binding across AZs is not supported (see the sketch below).
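
The same-AZ binding rule in the note above can be illustrated with a minimal sketch. The resource names and AZ labels below are assumptions for the example, not FusionCloud objects or APIs.

# Minimal sketch of the same-AZ binding rule; resource names and AZ labels are
# illustrative assumptions, not FusionCloud objects or APIs.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    az: str  # e.g. "az1" (active AZ) or "az2" (DR AZ planned for CSHA)

def can_bind(vm: Resource, *others: Resource) -> bool:
    """Return True only if every disk/network lives in the VM's AZ."""
    return all(r.az == vm.az for r in others)

vm   = Resource("app-vm-01", "az1")
disk = Resource("data-disk-01", "az1")
net  = Resource("service-net", "az2")  # deliberately placed in another AZ

print(can_bind(vm, disk))       # True  -> binding within one AZ is allowed
print(can_bind(vm, disk, net))  # False -> binding across AZs is not supported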

Resource pools

Description: The resource pool architecture consists of the physical DC layer, the unified resource pool layer, and the service layer.

  • Physical DC layer: The cloud platform includes DCs distributed across multiple physical areas.

    A single physical DC is similar in form to a traditional DC, including its physical facilities and infrastructure. A flattened Layer 2 network connects the IT devices in the DC at high speed.

  • Unified resource pool layer: Provides unified computing, storage, and network resource pools. Each type of resource pool has its own scope of application.

    The division of resource pools is independent of the locations of the underlying physical devices. FusionSphere virtualizes physically dispersed computing, storage, and network devices into a unified logical resource pool for on-demand scheduling of upper-layer services.

  • Service layer: Provides an application computing environment, including the deployment of various enterprise and carrier services, as well as VDCs divided based on service requirements.

Planning principle:
  • General-purpose computing pool
    • Applications must be divided into independent resource pools (such as general-purpose and SAP HANA pools) based on the ECS type. SAP HANA must be deployed in an independent resource pool and cannot share a resource pool with other ECS types.
    • SAP HANA running on BMSs and SAP HANA running on KVM virtualization must be divided into two different resource pools.
  • Bare metal server pool
    • A bare metal server pool cannot share a resource pool with other types of computing resource pools.
    • The number of servers in a bare metal server pool cannot exceed 512 (see the validation sketch after this list).
    • BMSs support FusionStorage Block (distributed storage) and FC SAN as storage.
  • GPU computing resource pool
    • It is recommended that the GPU computing resource pool be an independent resource pool.
    • GPU passthrough specifications are 1:1, 1:2, 1:4, and 1:8. It is recommended that servers with different GPU specifications be divided into different host groups.
  • Storage resource pools
    • The block storage resource pool AZ corresponding to the EVS service contains only one storage type. The storage types are FC SAN (enterprise-class block storage), ServerSAN (distributed block storage), AFA (all-flash storage), and Others (heterogeneous storage). One backend storage device contains multiple storage pools from the same storage device, and a storage pool cannot be added to multiple backend storage devices. It is recommended that a disk type contain only one type of backend storage so that the backend storage provides consistent performance.
    • OBS resource pools apply only to the backup and archiving scenario and must be independent. Each Region supports one OBS resource pool.
    • The file storage resource pool corresponding to the SFS service supports only the OceanStor 9000.
  • Network resource pool
    • Network architectures include software SDN (Region Type I), hardware SDN (Region Type II), and VLAN without SDN (Region Type III). One Region (cascading FusionSphere OpenStack) supports only one network architecture. Regions with different network architectures can be centrally managed by ManageOne.
    • SDN deployment (Region Type I and Region Type II) is recommended for scenarios where services change frequently and require fast rollout. Non-SDN deployment (Region Type III) is recommended for small-scale, cost-sensitive scenarios where services neither change frequently nor need to be rolled out quickly.
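
Two of the pool-division rules above (at most 512 servers in a bare metal server pool, and exactly one storage type per EVS block storage AZ) can be expressed as simple checks. The sketch below is an assumed, illustrative validator, not a FusionCloud tool; the pool dictionary layout and field names are inventions for the example.

# Illustrative validator for two of the planning rules above; the pool layout
# and field names are assumptions made for this sketch only.
MAX_BMS_POOL_SERVERS = 512
ALLOWED_BLOCK_STORAGE_TYPES = {"FC SAN", "ServerSAN", "AFA", "Others"}

def validate_pool(pool):
    """Return a list of rule violations (empty when the pool looks valid)."""
    issues = []
    if pool["kind"] == "BMS" and pool["servers"] > MAX_BMS_POOL_SERVERS:
        issues.append("a bare metal server pool cannot exceed 512 servers")
    if pool["kind"] == "EVS":
        if len(pool["storage_types"]) != 1:
            issues.append("an EVS block storage AZ must contain exactly one storage type")
        if not set(pool["storage_types"]) <= ALLOWED_BLOCK_STORAGE_TYPES:
            issues.append("unknown block storage type")
    return issues

print(validate_pool({"kind": "BMS", "servers": 600, "storage_types": []}))
# ['a bare metal server pool cannot exceed 512 servers']
print(validate_pool({"kind": "EVS", "servers": 0, "storage_types": ["FC SAN", "AFA"]}))
# ['an EVS block storage AZ must contain exactly one storage type']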

Host group

Description: A host group, a logical group in FusionSphere OpenStack, consists of a group of physical hosts and related metadata.

Planning principle: A computing server cluster is a host group that consists of computing servers that have the same hardware configuration (CPUs and memory) and are connected to the same shared storage or distributed storage. Host groups are logically divided by administrators in the system, for example, a BMS cluster or a KVM-based server cluster. A computing server cluster supports a maximum of 128 computing servers, as illustrated in the sketch below.
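
A minimal sketch of the host group constraints above, assuming a hypothetical host record layout (this is not a FusionSphere OpenStack data structure or API): at most 128 computing servers per cluster, all with the same CPU and memory configuration.

# Minimal sketch of the host group constraints; the host record layout is an
# assumption for this example and not a FusionSphere OpenStack data structure.
MAX_HOSTS_PER_GROUP = 128

def host_group_is_valid(hosts):
    """hosts: list of dicts such as {"name": "host-01", "cpu": "2 x 24-core", "mem_gb": 512}."""
    if not hosts or len(hosts) > MAX_HOSTS_PER_GROUP:
        return False
    reference = (hosts[0]["cpu"], hosts[0]["mem_gb"])
    # Every host must match the first host's CPU/memory configuration.
    return all((h["cpu"], h["mem_gb"]) == reference for h in hosts)

uniform = [{"name": f"host-{i:03d}", "cpu": "2 x 24-core", "mem_gb": 512} for i in range(1, 33)]
print(host_group_is_valid(uniform))  # True
print(host_group_is_valid(uniform + [{"name": "host-033", "cpu": "2 x 20-core", "mem_gb": 384}]))  # False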
