
FusionStorage V100R006C20 Block Storage Service Software Installation Guide 07

Configuring FusionStorage Block

Scenarios

Configure FusionStorage Block on the FusionStorage Block Self-Maintenance Platform. The configuration information is as follows:
  • Configuring system parameters: Configure the number of storage network planes, network types, and storage network IP addresses or network port names.
  • Creating a control cluster: Select three, five, or seven servers from the storage nodes to create the control cluster.
  • Creating storage pools: Select servers and add them to storage pools based on the planned number of storage pools. That is, start the Object Storage Device (OSD) processes on these servers.
  • Creating block storage clients: Create block storage clients for all the servers using the storage pool resources. That is, start the Virtual Block System (VBS) processes on these servers.

Prerequisites

Conditions

You have logged in to the FusionStorage Block Self-Maintenance Platform.

Procedure

    Configure FusionStorage Block.

    1. Choose Resource Pool on the FusionStorage Block Self-Maintenance Platform.

      A dialog box is displayed.

    2. Click Configuration Navigation.

      The Configure System Parameter page on the Configuration Navigation tab is displayed, as shown in Figure 7-44.

      Figure 7-44  FusionStorage Block configuration navigation

    3. Configure the following parameters:

      NOTE:

      System parameters can be modified before you create the control cluster. If you need to modify system parameters after creating the control cluster, delete the control cluster and navigate to the Configure System Parameter page to modify these parameters.

      The button for deleting the control cluster is provided on the Create Control Cluster page.

      • Performance Monitoring Interval: Click Recommended Value and enter the planned system scale to query the recommended monitoring interval.
      • Storage Network Type: Select the storage network type.
      • Storage Network: Enter the IP address segment to which the storage plane IP addresses belong, or enter the network port names for storage plane IP addresses. If the FusionStorage Block system is planned to be expanded in the future, reserve storage network IP addresses for the to-be-added nodes in advance.

        • IP Address Range/Network Segment: Enter an address range (for example, 192.168.70.100-192.168.70.200) or a network segment (for example, 192.168.70.0/24).
        • Network Port Name: Enter the network port name of the storage network in the FSA node operating system, for example, storage-0,fsb_data0.
        Regardless of which storage network type is configured, each IP address must map to exactly one node in the system. Otherwise, storage nodes cannot be identified. For example:
        • If a network port is configured with multiple IP addresses, configure only IP Address Range/Network Segment.
        • If both IP Address Range/Network Segment and Network Port Name are specified, they must not match different IP addresses on the same storage node. For example, suppose IP Address Range/Network Segment is 192.168.70.0/24 and Network Port Name is storage-0,fsb_data0. If the IP address of port storage-0 on a server is 192.168.60.10 and the IP address of another port is 192.168.70.10, then IP Address Range/Network Segment matches 192.168.70.10 while Network Port Name matches 192.168.60.10. Two different IP addresses are matched, so the system cannot identify which IP address to use.
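The matching rule above can be sketched as a small check. This is a hypothetical illustration only: the `match_storage_ip` helper and the port-to-IP mapping are assumptions for the example, not part of any FusionStorage API. A valid configuration yields exactly one matched IP per node.

```python
import ipaddress

def match_storage_ip(node_ports, segment=None, port_names=None):
    """Return the set of storage-plane IPs matched on one node.

    node_ports: dict mapping network port name -> IP address
    (hypothetical representation of one FSA node's ports).
    """
    matched = set()
    if segment:
        net = ipaddress.ip_network(segment)
        matched |= {ip for ip in node_ports.values()
                    if ipaddress.ip_address(ip) in net}
    if port_names:
        matched |= {ip for name, ip in node_ports.items()
                    if name in port_names}
    return matched

# The conflicting example from the text: port storage-0 is on
# 192.168.60.10 while a different port sits in 192.168.70.0/24,
# so the two criteria match two different IPs on the same node.
ports = {"storage-0": "192.168.60.10", "eth1": "192.168.70.10"}
ips = match_storage_ip(ports, segment="192.168.70.0/24",
                       port_names={"storage-0"})
print(len(ips))  # 2 -> ambiguous; the node cannot be identified
```

With a consistent configuration (storage-0 itself inside 192.168.70.0/24), both criteria converge on a single IP and the node is identified unambiguously.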

    4. Click Next.

      The Create Control Cluster page is displayed.

    5. Set Cluster Name for the control cluster.
    6. Configure the metadata disk based on the data plan.
    7. Click Add.

      The Add Server page is displayed.

    8. (Optional) If you have selected an independent disk, set the slot number of the metadata disk.

      NOTE:

      You are advised to set the disk in the last slot (excluding the slots housing the RAID disks) as the metadata disk. The metadata disk slot numbers can differ across servers. However, you are advised to use the same slot number on all servers to facilitate management and monitoring.

      Click the icon on the left of each server to view and select a disk.

    9. Select the servers in which the metadata disks are to be deployed (servers in the control cluster) and click Add.
    10. After setting the control cluster parameters, click Create to create the control cluster.

      NOTE:

      If the control cluster fails to be created, go back to the control cluster creation page and re-create it.

    11. After the control cluster is created, click Next.

      The Create Storage Pool page is displayed.

      When SSD cards or NVMe SSD devices are used as the main storage for FusionStorage Block, the system splits the SSD cards or NVMe SSD devices into 600 GB units by default. If the number of units after the split on any server exceeds 36, storage pools fail to be created. In this case, manually specify a new split size for the SSD cards or NVMe SSD devices. For details, see Changing the Split Size for SSD Cards and NVMe SSD Devices.
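The 36-unit limit can be checked with a quick calculation. This is an illustrative sketch only (the function name and the device capacities are assumptions for the example); the actual split behavior is governed by the guide referenced above.

```python
import math

def split_units_per_server(device_capacities_gb, split_size_gb=600):
    """Number of units after splitting each SSD card/NVMe device
    into chunks of split_size_gb (600 GB by default)."""
    return sum(math.ceil(c / split_size_gb) for c in device_capacities_gb)

# Hypothetical example: 12 NVMe devices of 3,200 GB each on one server.
units = split_units_per_server([3200] * 12)     # 12 x ceil(3200/600) = 72
resized = split_units_per_server([3200] * 12, 1600)  # 12 x 2 = 24
print(units, resized)  # 72 24 -> default split exceeds 36; 1,600 GB fits
```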

    12. Perform either of the following operations based on the type of the storage pool to be created.

      • To create a storage pool whose redundancy policy is redundancy, perform the steps in Creating a storage pool whose redundancy policy is redundancy.
      • To create a storage pool whose redundancy policy is EC, perform the steps in Creating a storage pool whose redundancy policy is EC.

    Creating a storage pool whose redundancy policy is redundancy

    1. Configure the following storage parameters:

      NOTE:

      These storage parameters are configured only for the first storage pool. If multiple storage pools have been planned in the system, reserve sufficient resources for the other resource pools when you configure these parameters.

      • Storage Pool Name: Set the storage pool name, for example, FusionStorage_0.
      • Set Storage Pool Redundancy Policy to Redundancy.
      • Redundancy:
        • The three-copy mode must be selected to ensure data reliability if SATA or NL-SAS disks are used. If other storage media are used, you can select either the two- or three-copy mode.
        • If server-level security is configured, the system must have at least three servers in two-copy mode or in three-copy mode.
        • If rack-level security is configured, the system must have at least three cabinets in two-copy mode and four cabinets in three-copy mode. The three-copy mode is recommended to ensure high data reliability.
      • Security Level: If the system (or the system after planned capacity expansion) contains more than 64 servers, rack-level security is recommended. If rack-level security is used, the system must contain at least 12 servers.
        NOTE:
        • Server-level security: Copies of one piece of user data are distributed across different servers. Each server stores at most one copy, providing high reliability for users.
        • Rack-level security: Copies of one piece of user data are distributed across different racks. Each rack stores at most one copy, providing the highest reliability for users.
      • Main Storage Type:

        Set the parameter based on the type of the storage media added to the storage pool.

        NOTE:

        If the main storage type is SSD Card/NVMe SSD Disk, the Enable DIF option is available. You are advised to keep the default settings for this option.

        The Enable DIF option checks whether the DIF function is configured on disks. If the main storage type is SSD Card/NVMe SSD Disk and the DIF function is not configured, an alarm will be reported. For details about enabling the DIF function, see FusionStorage Hardware DIF Configuration Guide. If the configuration was already completed in the Installation Drivers and Tools section of this scenario, you can ignore this step.

      • Cache Type: Set the cache type of the storage pool. If no cache is available in the system, set this parameter to none. That is, storage pools do not use caches.
        NOTE:

        If SATA or SAS disks are used as the main storage in the storage pool, you cannot set the cache type to none.

        If the cache type is SSD Card/NVMe SSD Disk, you are advised to retain the default settings. That is, select Enable DIF so that the system can check whether DIF verification is enabled for disks. If DIF verification is disabled, an alarm will be reported.

      • Maximum Number of Main Storage: The maximum number of disks (excluding operating system disks and cache disks or cards) used to store user data on a storage server. If the system is already configured with the maximum number of disks, you are advised to select Default. Otherwise, you can select a value to reserve cache resources for future capacity expansion.
      • Cache/Main Storage Ratio: Ratio of the cache capacity to the main storage capacity. This parameter is displayed only when the cache type is SSD Card/NVMe SSD or SSD Disk.

        The rules for setting this parameter are as follows:

        • If Maximum Number of Main Storage is set to Default and Cache/Main Storage Ratio is not set, the parameter value is calculated by default using the following formula:

          Total cache capacity on each server/(Maximum number of main storage devices on each server x Capacity of a main storage device)

          • If the number of main storage devices on each server is not greater than 12, the value of Maximum number of main storage devices on each server is 12.
          • If the number of main storage devices on each server is greater than 12 but equal to or smaller than 16, the value of Maximum number of main storage devices on each server is 16.
          • If the number of main storage devices on each server is greater than 16 but equal to or smaller than 24, the value of Maximum number of main storage devices on each server is 24.
          • If the number of main storage devices on each server is greater than 24 but equal to or smaller than 36, the value of Maximum number of main storage devices on each server is 36.
        • If Maximum Number of Main Storage is set to Default and a value is entered for Cache/Main Storage Ratio, the entered value must meet the following requirement: 1% ≤ Cache/Main Storage Ratio ≤ Minimum ratio between the cache and main storage of the storage pool.

          Minimum ratio between the cache and main storage of the storage pool = Total cache capacity on each server/(Actual number of main storage devices on a server x Capacity of a main storage device).

          NOTE:

          The parameter value used in the virtualization scenarios ranges from 1% to 10% (2% is recommended) and is accurate to three decimal places, for example, 2.125%.

        • If Maximum Number of Main Storage is a numerical value, you do not need to set this parameter. The system calculates the value based on the following formula: Total cache capacity of each server/(Maximum number of main storage devices selected on the Portal page x Capacity of a single main storage).
        NOTE:
        The cache of one main storage device can be distributed on one cache device only. If the Cache/Main Storage Ratio value is too large, the cache allocated to a main storage device may be insufficient, and the storage pool will fail to be created.
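The default-ratio rules above can be sketched as a worked calculation. This is an illustrative sketch under the tiering rules stated in the text (the function names and the example capacities of 1,600 GB cache and 4,000 GB disks are assumptions, not product defaults):

```python
def default_max_main(actual_count):
    """Round the per-server main storage count up to the next tier
    (12, 16, 24, 36) used by the default-ratio formula above."""
    for tier in (12, 16, 24, 36):
        if actual_count <= tier:
            return tier
    raise ValueError("more than 36 main storage devices per server")

def default_cache_ratio(cache_gb, actual_main_count, main_gb):
    """Default Cache/Main Storage Ratio when Maximum Number of Main
    Storage is Default and no ratio is entered:
    total cache / (tiered max main count x capacity per device)."""
    return cache_gb / (default_max_main(actual_main_count) * main_gb)

# Example: one 1,600 GB SSD cache card and 10 x 4,000 GB main disks
# per server. The count 10 rounds up to the 12 tier.
ratio = default_cache_ratio(1600, 10, 4000)  # 1600 / (12 x 4000)
print(f"{ratio:.3%}")  # 3.333%
```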

    2. In Servers and disks, click Add.

      The Select Server page is displayed.

    3. Select the servers to be added to the storage pool.

      One server can be added to only one storage pool.

      NOTE:

      The storage resources on the servers added to the storage pool are only for exclusive use by FusionStorage Block, and the original data stores will be deleted.

    4. Configure the main storage and cache that each server provides for storage pools.

      NOTE:

      A storage pool must contain a minimum of 12 OSD processes. To be more specific, the storage pool must have at least 12 disks or a total of 7.2 TB of SSD cards used as the main storage. When SSD cards are used to provide main storage resources, one OSD process must be configured for each 600 GB of space.

      The selected slot numbers must be consecutive, regardless of whether they are selected in batches or manually.

      The main storage and cache on servers can be configured in batches or one by one.

      • Batch Select: Select the main storage and cache for selected servers in batches. This mode applies to the scenario where the storage media quantity and slot numbers on the servers are the same.

        If the storage media are SAS disks, SATA disks, or SSDs, set the start and end slot numbers. If the storage media are SSD cards, set the disk quantity (that is, the number of SSD cards), because SSD cards do not have slot numbers.

      • Manual Select: Select the main storage and cache for servers one by one. This mode applies to the scenario where the storage media quantity and slot numbers on the servers are different.

        Click the icon to the left of each server and select the main storage and cache from the storage media that are displayed.
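The 12-OSD minimum in the note above can be verified arithmetically. This is an illustrative sketch (the `osd_count` helper is an assumption for the example); it reflects the stated rule of one OSD per main storage disk and one OSD per 600 GB of SSD-card space:

```python
def osd_count(disk_count=0, ssd_card_gb=0, gb_per_osd=600):
    """OSD processes contributed by one server's main storage:
    one per disk, plus one per 600 GB of SSD-card capacity."""
    return disk_count + ssd_card_gb // gb_per_osd

# A pool needs at least 12 OSDs: either 12 main storage disks,
# or 7,200 GB (7.2 TB) of SSD cards at 600 GB per OSD.
print(osd_count(disk_count=12))     # 12
print(osd_count(ssd_card_gb=7200))  # 12
```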

    5. Click Add.

      Servers in the storage pool are displayed in the server list.

    6. Click Create.

      A dialog box is displayed.

    7. Click Yes to create a storage pool.

      NOTE:
      • Large block write-through
        • Large block write-through is disabled for storage pools by default. If SATA or SAS disks are used as the main storage and SSD devices are used as the cache, data is first written to the cache and then to the main storage. When the cache bandwidth on a storage node is less than 1 GB/s, you are advised to enable large block write-through. After the function is enabled, large data blocks (≥ 256 KB) are written directly to the main storage.
        • If the system is upgraded from an earlier version, the status of the large block write-through function remains the same as before the upgrade.
        • To enable large block write-through for storage pools, run the sh /opt/dsware/client/bin/dswareTool.sh --op poolParametersOperation -opType modify -parameter p_write_through_switch:open -poolId Storage pool ID command on the active FSM node as user dsware.
      • Thin provisioning ratio
        • In most cases, the upper-layer management system (such as FusionSphere OpenStack) controls the system thin provisioning ratio. When no upper-layer software controls it, you can specify a thin provisioning ratio for the FusionStorage Block storage pools.
        • The thin provisioning ratio parameter of a storage pool is 0 by default, indicating that there are no restrictions on storage pool space allocation. However, if the storage pool space is over-allocated and the in-use capacity reaches the threshold, data write operations fail.
        • To set the thin provisioning ratio for a storage pool, run sh /opt/dsware/client/bin/dswareTool.sh --op poolParametersOperation -opType modify -parameter p_capacity_thin_rate:100 -poolId Storage pool ID as user dsware. The value of the thin provisioning ratio parameter ranges from 0 to 10000. You are advised to set the parameter to 100. In this case, the thin provisioning ratio is 100%, and the storage pool space will not be over-allocated.
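The effect of the thin provisioning ratio can be sketched as follows. This is an interpretation of the parameter as described above (a percentage cap on allocatable capacity, with 0 meaning unrestricted); the helper is hypothetical, not a FusionStorage tool:

```python
def max_allocatable_gb(physical_gb, thin_rate):
    """Upper bound on allocatable pool capacity given the
    p_capacity_thin_rate value in percent (0 = unrestricted)."""
    if thin_rate == 0:
        return float("inf")
    return physical_gb * thin_rate / 100

print(max_allocatable_gb(10000, 100))  # 10000.0 -> no over-allocation
print(max_allocatable_gb(10000, 300))  # 30000.0 -> 3x over-allocation
```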

    8. After the storage pool is created, click Next.

      The Create Block Storage Client page is displayed.

    9. Click Create.

      The Create Block Storage Client page is displayed.

      NOTE:

      Only the server that has the block storage client created can use the data stores provided by the FusionStorage Block storage pool.

    10. Select the servers for which the block storage clients are to be created and click Create to create the block storage clients.

      Select all servers if FusionStorage Block is deployed in converged mode, and select all computing nodes if FusionStorage Block is deployed in separated mode.

      After the block storage clients are successfully created, the FusionStorage Block configuration is complete.

    11. After the storage pool is created, proceed to the subsequent configuration steps.

    Creating a storage pool whose redundancy policy is EC

    1. Configure the following storage parameters:

      NOTE:

      These storage parameters are configured only for the first storage pool. If multiple storage pools have been planned in the system, reserve sufficient resources for the other resource pools when you configure these parameters.

      • Storage Pool Name: Set the storage pool name, for example, FusionStorage_0.
      • Set Storage Pool Redundancy Policy to EC.
      • EC Ratio: The proportion mode can be N+M or N+M:B. For details, see FusionStorage Product Description and choose Block Storage Service > High Data Reliability.
      • EC Stripe Size: If FusionStorage Block is used for storing a large amount of sequential data, such as media data, you are advised to set the stripe depth to a value greater than or equal to 64 KB. If FusionStorage Block is used for storing a large amount of random data, such as event processing data, you are advised to set the stripe depth to 16 KB.
      • Security Level: If the system (or the system after planned capacity expansion) contains more than 64 servers, rack-level security is recommended. If rack-level security is used, the system must contain at least 12 servers.
        NOTE:
        • Server-level security: Copies of one piece of user data are distributed across different servers. Each server stores at most one copy, providing high reliability for users.
        • Rack-level security: Copies of one piece of user data are distributed across different racks. Each rack stores at most one copy, providing the highest reliability for users.
      • Main Storage Type:

        Set the parameter based on the type of the storage media added to the storage pool.

        NOTE:

        If the main storage type is SSD Card/NVMe SSD Disk, the Enable DIF option is available. You are advised to keep the default settings for this option.

        The Enable DIF option checks whether the DIF function is configured on disks. If the main storage type is SSD Card/NVMe SSD Disk and the DIF function is not configured, an alarm will be reported. For details about enabling the DIF function, see FusionStorage Hardware DIF Configuration Guide. If the configuration was already completed in the Installation Drivers and Tools section of this scenario, you can ignore this step.

      • Cache Type: Set the cache type of the storage pool. If no cache is available in the system, set this parameter to none. That is, storage pools do not use caches.
        NOTE:

        If SATA or SAS disks are used as the main storage in the storage pool, you cannot set the cache type to none.

        If the cache type is SSD Card/NVMe SSD Disk, you are advised to retain the default settings. That is, select Enable DIF so that the system can check whether DIF verification is enabled for disks. If DIF verification is disabled, an alarm will be reported.

      • Maximum Number of Main Storage: The maximum number of disks (excluding operating system disks and cache disks or cards) used to store user data on a storage server. If the system is already configured with the maximum number of disks, you are advised to select Default. Otherwise, you can select a value to reserve cache resources for future capacity expansion.
      • Cache/Main Storage Ratio: Ratio of the cache capacity to the main storage capacity. This parameter is displayed only when the cache type is SSD Card/NVMe SSD or SSD Disk.

        The rules for setting this parameter are as follows:

        • If Maximum Number of Main Storage is set to Default and Cache/Main Storage Ratio is not set, the parameter value is calculated by default using the following formula:

          Total cache capacity on each server/(Maximum number of main storage devices on each server x Capacity of a main storage device)

          • If the number of main storage devices on each server is not greater than 12, the value of Maximum number of main storage devices on each server is 12.
          • If the number of main storage devices on each server is greater than 12 but equal to or smaller than 16, the value of Maximum number of main storage devices on each server is 16.
          • If the number of main storage devices on each server is greater than 16 but equal to or smaller than 24, the value of Maximum number of main storage devices on each server is 24.
          • If the number of main storage devices on each server is greater than 24 but equal to or smaller than 36, the value of Maximum number of main storage devices on each server is 36.
        • If Maximum Number of Main Storage is set to Default and a value is entered for Cache/Main Storage Ratio, the entered value must meet the following requirement: 1% ≤ Cache/Main Storage Ratio ≤ Minimum ratio between the cache and main storage of the storage pool.

          Minimum ratio between the cache and main storage of the storage pool = Total cache capacity on each server/(Actual number of main storage devices on a server x Capacity of a main storage device).

          NOTE:

          The parameter value used in the virtualization scenarios ranges from 1% to 10% (2% is recommended) and is accurate to three decimal places, for example, 2.125%.

        • If Maximum Number of Main Storage is a numerical value, you do not need to set this parameter. The system calculates the value based on the following formula: Total cache capacity of each server/(Maximum number of main storage devices selected on the Portal page x Capacity of a single main storage).
        NOTE:
        The cache of one main storage device can be distributed on one cache device only. If the Cache/Main Storage Ratio value is too large, the cache allocated to a main storage device may be insufficient, and the storage pool will fail to be created.
      • EC Cache: After this option is selected, cache acceleration is enabled and the read and write speeds are higher.
        NOTE:
        If not all main storage and cache media of the storage pool are SSD devices and the selected EC ratio is N+3, the system cannot reach the N+3 reliability when cache acceleration is enabled. To guarantee the reliability of the N+3 EC ratio, cache acceleration must be disabled.
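The capacity cost of an N+M EC ratio can be sketched as follows. This is a general erasure-coding property (N data fragments out of N+M total), stated here for comparison only; the example ratios 4+2 and 12+3 are illustrative, and product-supported ratios are listed in FusionStorage Product Description:

```python
def ec_usable_fraction(n, m):
    """Usable fraction of raw capacity under an N+M erasure-coding
    ratio: N data fragments out of N+M total fragments."""
    return n / (n + m)

print(round(ec_usable_fraction(4, 2), 3))   # 0.667
print(round(ec_usable_fraction(12, 3), 3))  # 0.8
# Copy modes for comparison: two-copy uses 1/2 of raw capacity,
# three-copy uses 1/3.
```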

    2. In Servers and disks, click Add.

      The Select Server page is displayed.

    3. Select the servers to be added to the storage pool.

      One server can be added to only one storage pool.

      NOTE:

      The storage resources on the servers added to the storage pool are only for exclusive use by FusionStorage Block, and the original data stores will be deleted.

    4. Configure the main storage and cache that each server provides for storage pools.

      NOTE:

      A storage pool must contain a minimum of 12 OSD processes. To be more specific, the storage pool must have at least 12 disks or a total of 7.2 TB of SSD cards used as the main storage. When SSD cards are used to provide main storage resources, one OSD process must be configured for each 600 GB of space.

      The selected slot numbers must be consecutive, regardless of whether they are selected in batches or manually.

      The main storage and cache on servers can be configured in batches or one by one.

      • Batch Select: Select the main storage and cache for selected servers in batches. This mode applies to the scenario where the storage media quantity and slot numbers on the servers are the same.

        If the storage media are SAS disks, SATA disks, or SSDs, set the start and end slot numbers. If the storage media are SSD cards, set the disk quantity (that is, the number of SSD cards), because SSD cards do not have slot numbers.

      • Manual Select: Select the main storage and cache for servers one by one. This mode applies to the scenario where the storage media quantity and slot numbers on the servers are different.

        Click the icon to the left of each server and select the main storage and cache from the storage media that are displayed.

    5. Click Add.

      Servers in the storage pool are displayed in the server list.

    6. Click Create.

      A dialog box is displayed.

    7. Click Yes to create a storage pool.

      NOTE:
      • Large block write-through
        • Large block write-through is disabled for storage pools by default. If SATA or SAS disks are used as the main storage and SSD devices are used as the cache, data is first written to the cache and then to the main storage. When the cache bandwidth on a storage node is less than 1 GB/s, you are advised to enable large block write-through. After the function is enabled, large data blocks (≥ 256 KB) are written directly to the main storage.
        • If the system is upgraded from an earlier version, the status of the large block write-through function remains the same as before the upgrade.
        • To enable large block write-through for storage pools, run the sh /opt/dsware/client/bin/dswareTool.sh --op poolParametersOperation -opType modify -parameter p_write_through_switch:open -poolId Storage pool ID command on the active FSM node as user dsware.
      • Thin provisioning ratio
        • In most cases, the upper-layer management system (such as FusionSphere OpenStack) controls the system thin provisioning ratio. When no upper-layer software controls it, you can specify a thin provisioning ratio for the FusionStorage Block storage pools.
        • The thin provisioning ratio parameter of a storage pool is 0 by default, indicating that there are no restrictions on storage pool space allocation. However, if the storage pool space is over-allocated and the in-use capacity reaches the threshold, data write operations fail.
        • To set the thin provisioning ratio for a storage pool, run sh /opt/dsware/client/bin/dswareTool.sh --op poolParametersOperation -opType modify -parameter p_capacity_thin_rate:100 -poolId Storage pool ID as user dsware. The value of the thin provisioning ratio parameter ranges from 0 to 10000. You are advised to set the parameter to 100. In this case, the thin provisioning ratio is 100%, and the storage pool space will not be over-allocated.

    8. After the storage pool is created, click Next.

      The Create Block Storage Client page is displayed.

    9. Click Create.

      The Create Block Storage Client page is displayed.

      NOTE:

      Only the server that has the block storage client created can use the data stores provided by the FusionStorage Block storage pool.

    10. Select the servers for which the block storage clients are to be created and click Create to create the block storage clients.

      Select all servers if FusionStorage Block is deployed in converged mode, and select all computing nodes if FusionStorage Block is deployed in separated mode.

      After the block storage clients are successfully created, the FusionStorage Block configuration is complete.

    (Optional) Replace the certificate.

    1. To improve system operation and maintenance security, replace the certificate and key file after the FusionStorage Block installation is complete. For details, see Operation and Maintenance > Block Storage Service Security Maintenance > Certificate Management in FusionStorage Product Documentation.

    (Optional) Configure time synchronization and a time zone.

    To configure an external NTP clock source for FSM and FSA nodes, perform the following operations:

    You are advised to configure an external NTP clock source. If no clock source is configured and the time on FSM and FSA nodes becomes inconsistent, the performance monitoring function will be unavailable.

    1. On the FusionStorage Block Self-Maintenance Platform, choose System > System Configuration > Time Management.

      The Time Management page is displayed, as shown in Figure 7-45.

      Figure 7-45  Configuring an NTP clock source and a time zone

    2. Select FusionStorage Manager and set the following parameters:

      • Time synchronization information
        • NTP Server: Set domain names or IP addresses of three time servers.
        • Time Synchronization Interval: Enter a time interval in seconds. If time synchronization is enabled, the system synchronizes time with the NTP clock source after five to ten intervals. Therefore, set the interval to a small value, for example, 64 seconds.

        FusionStorage Block supports only an external clock source as the NTP server. Therefore, the FSM or FSA node cannot be used as the NTP server.

        After time synchronization information is configured, click Save.

      • Time zone information
        • Region
        • City

        After the time zone is configured, click Save.

      NOTE:
      If the system time cannot be synchronized with the clock source time ten intervals after time synchronization is configured, click Force Time Synchronization to forcibly synchronize time.
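The five-to-ten-interval convergence behavior above translates into a rough synchronization window. This is a back-of-the-envelope sketch (the helper is hypothetical), showing why a small interval such as 64 seconds is recommended:

```python
def sync_window_seconds(interval_s, min_cycles=5, max_cycles=10):
    """Approximate min/max time until the system syncs with the
    NTP clock source, given five-to-ten-interval convergence."""
    return interval_s * min_cycles, interval_s * max_cycles

print(sync_window_seconds(64))    # (320, 640) -> within minutes
print(sync_window_seconds(1024))  # (5120, 10240) -> up to ~3 hours
```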

    3. Select FusionStorage Agent and configure time synchronization and a time zone for FSA nodes.

      For detailed parameter settings, see the parameter descriptions in the previous step.

      NOTE:
      • In converged deployment mode, if the NTP clock source has been configured for computing nodes or computing storage nodes, you do not need to configure it again. In separated deployment mode, if the NTP clock source has been configured for computing nodes, you also need to configure it for storage nodes.
      • If two methods are used to configure the NTP clock source for FSA nodes and a synchronization conflict occurs, run the sh /opt/dsware/client/bin/dswareTool.sh --op stopNTPServiceByNodes -nodes ip1,ip2,ip3... command on the active FSM node to stop the time synchronization monitoring between the FSA nodes where the conflict occurs.

    Configure VBS flow control parameters.

    If the I/O throughput on a computing node is too high, the read/write performance from this node to FusionStorage Block will deteriorate significantly. Therefore, Virtual Block System (VBS) flow control parameters need to be configured based on the storage bandwidth or number of CPUs on this computing node, ensuring that the read/write performance from this node to FusionStorage Block is normal.

    1. Configure VBS flow control parameters. For details, see Configuring VBS Flow Control Parameters.

    Configure the maximum bandwidth used in the network flow control.

    1. If the 10GE or 25GE network is used as the storage network and the number of OSD processes on a storage node is greater than 12, configure the maximum bandwidth used in the network flow control. For details, see Configuring the Maximum Bandwidth Used in the Network Flow Control.

    (Optional) Change the network congestion control algorithm of the storage pool.

    Change the network congestion control algorithm of the storage pool to avoid sequential read performance deterioration for large data blocks in the storage pool when either of the following conditions is met:
    • All of the following conditions are met in the storage pool:
      • The storage network uses the TCP/IP networking.
      • The switches used in the storage network do not support explicit congestion notification (ECN).
      • The Linux kernel version of the storage node OS is later than 3.10.0, for example, the Linux kernel version of CentOS 7, Red Hat Enterprise Linux 7, SUSE Linux Enterprise Server 12, or EulerOS 2.0 is later than 3.10.0.
    • Taishan nodes are deployed in the storage pool, and the storage network uses the 25GE networking.
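The kernel-version condition above can be checked programmatically. This is an illustrative sketch (the helper and the sample release strings are assumptions); in practice the release string comes from uname -r on the storage node:

```python
def kernel_newer_than(release, baseline=(3, 10, 0)):
    """True if a kernel release string such as '5.4.0-42-generic'
    is later than the 3.10.0 baseline named in the conditions."""
    parts = release.split("-")[0].split(".")
    version = tuple(int(p) for p in parts[:3])
    return version > baseline

print(kernel_newer_than("5.4.0-42-generic"))  # True
print(kernel_newer_than("2.6.32-754.el6"))    # False
```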

    1. Change the network congestion control algorithm of the storage pool. For details, see Changing the Network Congestion Control Algorithm.

    Configure other storage pools.

    1. If multiple storage pools need to be created, continue the creation by choosing Block Storage Service > Configuration > Block Storage Service Administrator Guide > Storage Pool Management > Resource Management > Creating a Storage Pool in FusionStorage Product Documentation.
Updated: 2019-06-29

Document ID: EDOC1100016637
