FusionStorage V100R006C20 Block Storage Service Software Installation Guide 07

Configuring FusionStorage Block

Scenarios

After FSA is installed on each node, configure FusionStorage Block on the FusionStorage Block Self-Maintenance Platform. The configuration involves the following tasks:
  • Configuring system parameters: Configure the number of storage network planes, network types, and storage network IP addresses or network port names.
  • Creating a control cluster: Select three, five, or seven servers from the storage nodes to create the control cluster.
  • Creating storage pools: Select servers and add them to storage pools based on the planned number of storage pools. That is, start the Object Storage Device (OSD) processes on these servers.
  • Creating block storage clients: Create block storage clients for all the servers using the storage pool resources. That is, start the Virtual Block System (VBS) processes on these servers.

Prerequisites

Conditions

  • The FSA nodes have been installed.
  • You have logged in to the FusionStorage Block Self-Maintenance Platform.

Procedure

    (Optional) Use commands to change the I/O suspension timeout duration.

    You need to change the I/O suspension timeout duration if you want to enable the ASM Mirror function of the Oracle database to ensure service continuity even in the event of a system fault.

    1. Use PuTTY to log in to the FusionStorage Manager (FSM) node.

      Ensure that the floating IP address and username dsware are used to establish the connection.

      The default password of user dsware is IaaS@OS-CLOUD9!.

      If the public and private keys are used to authenticate the login, perform the operations based on Using PuTTY to Log In to a Node in Key Pair Authentication Mode.

    2. Run the following command to switch to the specified directory:

      cd /opt/dsware/client/bin

    3. Run the following command to change the I/O suspension timeout duration to 90 seconds:

      NOTE:
      • Since the system has been hardened, you need to enter the username and password for login authentication after running the dswareTool command of FusionStorage Block. The default username is cmdadmin, and its default password is IaaS@PORTAL-CLOUD9!.

      • The system supports authentication using environment variables so that you do not need to repeatedly enter the username and password for authentication each time you run the dswareTool command. For details, see Authentication Using Environment Variables.

      sh dswareTool.sh --op modifySysPara -para g_dsware_io_hang_timeout:90
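      For reference, a minimal sketch of the full sequence on the FSM node is as follows (based on the preceding steps; the tool prompts for the cmdadmin username and password unless authentication using environment variables is configured):

      # Run as user dsware on the FSM node.
      cd /opt/dsware/client/bin
      # Change the I/O suspension timeout to 90 seconds; authenticate when prompted.
      sh dswareTool.sh --op modifySysPara -para g_dsware_io_hang_timeout:90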

    Configure FusionStorage Block.

    1. Choose Resource Pool on the FusionStorage Block Self-Maintenance Platform.

      A dialog box is displayed.

    2. Click Configuration Navigation.

      The Configure System Parameter page on the Configuration Navigation tab is displayed, as shown in Figure 8-43.

      Figure 8-43  FusionStorage Block configuration navigation

    3. Configure the following parameters:

      NOTE:

      System parameters can be modified before you create the control cluster. If you need to modify system parameters after creating the control cluster, delete the control cluster and navigate to the Configure System Parameter page to modify these parameters.

      The button for deleting the control cluster is provided on the Create Control Cluster page.

      • Performance Monitoring Interval: Click Recommended Value and enter the planned system scale to query the recommended monitoring interval.
      • Storage Network Type: Select the storage network type.
      • Storage Network: Enter the IP address segment to which the storage plane IP addresses belong, or enter the network port names for storage plane IP addresses. If the FusionStorage Block system is planned to be expanded in the future, reserve storage network IP addresses for the to-be-added nodes in advance.

        • IP Address Range/Network Segment: Enter an IP address range, for example, 192.168.70.100-192.168.70.200, or a network segment, for example, 192.168.70.0/24.
        • Network Port Name: Enter the network port name of the storage network in the FSA node operating system, for example, storage-0,fsb_data0.
        No matter which storage network type is configured, each IP address must match only one node in the system. Otherwise, storage nodes cannot be identified. For example:
        • If a network port is configured with multiple IP addresses, configure only IP Address Range/Network Segment.
        • If both IP Address Range/Network Segment and Network Port Name are configured, they must not match different IP addresses on the same storage node. For example, suppose IP Address Range/Network Segment is 192.168.70.0/24 and Network Port Name is storage-0,fsb_data0. If the IP address of port storage-0 on a server is 192.168.60.10 and the IP address of another port is 192.168.70.10, then IP Address Range/Network Segment matches 192.168.70.10 while Network Port Name matches 192.168.60.10. Two different IP addresses are matched, so the system cannot determine which one to use. (A quick way to verify the addresses on each node is shown after this list.)
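        If you are unsure which IP addresses a storage node exposes, the following sketch uses standard Linux commands (not part of the FusionStorage tooling) to list the IPv4 addresses on an FSA node so that you can confirm exactly one of them falls in the planned storage segment; the port name storage-0 is the example name used above:

        ip -4 addr show | grep "inet "             # list all IPv4 addresses configured on the node
        ip -4 addr show storage-0 | grep "inet "   # addresses bound to a specific storage port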

    4. Click Next.

      The Create Control Cluster page is displayed.

    5. Set Cluster Name for the control cluster.
    6. Set the metadata disk.
    7. Click Add.

      The Add Server page is displayed.

    8. (Optional) Set the slot number of the metadata disk if an independent disk is used.

      NOTE:

      You are advised to use the disk in the last slot (excluding the slots housing the RAID disks) as the metadata disk. The metadata disk slot numbers can differ across servers. However, you are advised to use the same slot number on all servers to facilitate management and monitoring.

      Click the expand icon on the left of each server, and then view and select a disk.

    9. Select the servers in which the metadata disks are to be deployed (servers in the control cluster) and click Add.
    10. After setting the control cluster parameters, click Create to create the control cluster.

      NOTE:

      If you fail to create the control cluster, return to the control cluster creation page to re-create the control cluster.

    11. After the control cluster is created, click Next.

      The Create Storage Pool page is displayed.

      When SSD cards or NVMe SSD devices are used as the main storage for FusionStorage Block, the system splits the SSD cards or NVMe SSD devices into 600 GB units by default. If the number of units after the split for each server exceeds 36, the storage pool fails to be created. In this case, manually specify a new split size for the SSD cards or NVMe SSD devices (a worked check follows). For details, see Changing the Split Size for SSD Cards and NVMe SSD Devices.
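      As a hedged worked check (the hardware figures below are assumptions, not values from this guide): with four 6.4 TB NVMe SSD devices per server and the default 600 GB split size, each device yields 11 units (6400/600 rounded up), or 44 units per server, which exceeds 36, so a larger split size must be set before the storage pool is created.

      echo $(( (6400 + 599) / 600 * 4 ))    # ceiling of 6400/600 units per device, times 4 devices; prints 44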

    12. Perform either of the following operations based on the type of the storage pool that needs to be created.

      • If you want to create a storage pool and set its redundancy policy to redundancy, perform 16.
      • If you want to create a storage pool and set its redundancy policy to EC, perform 28.

    Creating a storage pool whose redundancy policy is redundancy

    1. Configure the following storage parameters:

      NOTE:

      These storage parameters are configured only for the first storage pool. If multiple storage pools have been planned in the system, reserve sufficient resources for the other resource pools when you configure these parameters.

      • Storage Pool Name: Set the storage pool name, for example, FusionStorage_0.
      • Set Storage Pool Redundancy Policy to Redundancy.
      • Redundancy:
        • The three-copy mode must be selected to ensure data reliability if SATA or NL-SAS disks are used. If other storage media are used, you can select either the two- or three-copy mode.
        • If server-level security is configured, the system must have at least three servers in two-copy mode or in three-copy mode.
        • If rack-level security is configured, the system must have at least three cabinets in two-copy mode and four cabinets in three-copy mode. The three-copy mode is recommended to ensure high data reliability.
      • Security Level: If the system (or the system after planned capacity expansion) contains more than 64 servers, rack-level security is recommended. If rack-level security is used, the system must contain at least 12 servers.
        NOTE:
        • Server-level security: Copies of one piece of user data are distributed on different servers, and each server stores at most one copy, providing high reliability.
        • Rack-level security: Copies of one piece of user data are distributed on different racks, and each rack stores at most one copy, providing the highest reliability.
      • Main Storage Type:

        Set the parameter based on the type of the storage media added to the storage pool.

        NOTE:

        If the main storage type is SSD Card/NVMe SSD Disk, the Enable DIF option is available. You are advised to keep the default setting for this option.

        The Enable DIF option determines whether the system checks that the DIF function is configured on disks. If the main storage type is SSD Card/NVMe SSD Disk and the DIF function is not configured, an alarm is reported. For details about enabling the DIF function, see FusionStorage Hardware DIF Configuration Guide. If the configuration has already been completed in the Installation Drivers and Tools section of this scenario, ignore this check.

      • Cache Type: Set the cache type of the storage pool. If no cache is available in the system, set this parameter to none. That is, storage pools do not use caches.
        NOTE:

        If SATA or SAS disks are used as the main storage in the storage pool, you cannot set the cache type to none.

        If the cache type is SSD Card/NVMe SSD Disk, you are advised to retain the default settings. That is, select Enable DIF so that the system can check whether DIF verification is enabled for disks. If DIF verification is disabled, an alarm will be reported.

      • Maximum Number of Main Storage: The maximum number of disks (excluding operating system disks and cache disks or cards) used to store user data on a storage server. When the system is configured with the maximum number of disks, you are advised to select Default. Alternatively, you can select a value to reserve cache resources for future capacity expansion.
      • Cache/Main Storage Ratio: Ratio of the cache capacity to the main storage capacity (a worked example follows this parameter list). This parameter is displayed only when the cache type is SSD Card/NVMe SSD or SSD Disk.

        The rules for setting this parameter are as follows:

        • If Maximum Number of Main Storage is set to Default and Cache/Main Storage Ratio is not set, the parameter value is calculated by default based on the following formula:

          Total cache capacity on each server/(Maximum number of main storage devices on each server x Capacity of a main storage device)

          • If the number of main storage devices on each server is not greater than 12, the value of Maximum number of main storage devices on each server is 12.
          • If the number of main storage devices on each server is greater than 12 but equal to or smaller than 16, the value of Maximum number of main storage devices on each server is 16.
          • If the number of main storage devices on each server is greater than 16 but equal to or smaller than 24, the value of Maximum number of main storage devices on each server is 24.
          • If the number of main storage devices on each server is greater than 24 but equal to or smaller than 36, the value of Maximum number of main storage devices on each server is 36.
        • If Maximum Number of Main Storage is set to Default and a value is entered for Cache/Main Storage Ratio, the entered value must meet the following requirement: 1% ≤ Cache/Main Storage Ratio ≤ Minimum ratio between the cache and main storage of the storage pool.

          Minimum ratio between the cache and main storage of the storage pool = Total cache capacity on each server/(Actual number of main storage devices on a server x Capacity of a main storage device).

          NOTE:

          The parameter value used in physical deployment scenarios ranges from 2% to 20% (10% is recommended) and is accurate to three decimal places, for example, 10.125%.

        • If Maximum Number of Main Storage is a numerical value, you do not need to set this parameter. The system calculates the value based on the following formula: Total cache capacity of each server/(Maximum number of main storage devices selected on the Portal page x Capacity of a single main storage).
        NOTE:
        The cache of one main storage device can be distributed on one cache device only. If the Cache/Main Storage Ratio value is too large, the cache allocated to a main storage device may be insufficient, and the storage pool will fail to be created.
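      As a worked example of the default formula above (the figures are assumptions for illustration only): a server with one 1.6 TB SSD cache card and 12 main storage disks of 4 TB each gives 1.6 TB / (12 x 4 TB) ≈ 3.333%, which would be the default Cache/Main Storage Ratio.

      echo "scale=3; 1.6 * 100 / (12 * 4)" | bc    # percentage value of the ratio; prints 3.333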

    2. In Servers and disks, click Add.

      The Select Server page is displayed.

    3. Select the servers to be added to the storage pool.

      One server can be added to only one storage pool.

      Select only storage nodes here because only the separated deployment mode is supported in physical deployment scenarios.

      NOTE:

      The storage resources on the servers added to the storage pool are only for exclusive use by FusionStorage Block, and the original data stores will be deleted.

    4. Configure the main storage and cache that each server provides for storage pools.

      NOTE:

      A storage pool must contain a minimum of 12 OSD processes. To be more specific, the storage pool must have at least 12 disks or a total of 7.2 TB of SSD cards used as the main storage. When SSD cards are used to provide main storage resources, one OSD process must be configured for each 600 GB of space.

      The selected slot numbers must be consecutive, regardless of whether they are selected in batches or manually.

      The main storage and cache on servers can be configured in batches or one by one.

      • Batch Select: Select the main storage and cache for selected servers in batches. This mode applies to the scenario where the storage media quantity and slot numbers on the servers are the same.

        If the storage media are SAS disks, SATA disks, or SSDs, set the start and end slot numbers. If the storage media are SSD cards, set the disk quantity (that is, the SSD card quantity), because SSD cards do not have slot numbers.

      • Manual Select: Select the main storage and cache for servers one by one. This mode applies to the scenario where the storage media quantity and slot numbers on the servers are different.

        Click the expand icon to the left of each server and select the main storage and cache from the storage media that are displayed.

    5. Click Add.

      Servers in the storage pool are displayed in the server list.

    6. Click Create.

      A dialog box is displayed.

    7. Click Yes to create a storage pool.

      NOTE:
      • Large block write-through
        • Large block write-through is disabled for storage pools by default. If SATA or SAS disks are used as the main storage and SSD devices are used as the cache, data is first written to the cache and then to the main storage. When the cache bandwidth on a storage node is less than 1 GB/s, you are advised to enable large block write-through. After the function is enabled, large data blocks (≥ 256 KB) are written directly to the main storage.
        • If the system is upgraded from an earlier version, the status of the large block write-through function remains the same as before the upgrade.
        • To enable large block write-through for storage pools, run the sh /opt/dsware/client/bin/dswareTool.sh --op poolParametersOperation -opType modify -parameter p_write_through_switch:open -poolId Storage pool ID command on the active FSM node as user dsware.
      • Thin provisioning ratio
        • In most cases, the upper-layer management system, such as FusionSphere OpenStack, controls the system thin provisioning ratio. When there is no upper-layer software to control it, you can specify a thin provisioning ratio for the FusionStorage Block storage pools.
        • The thin provisioning ratio parameter of a storage pool is 0 by default, indicating that there are no restrictions on storage pool space allocation. However, if the storage pool space is over-allocated and the in-use capacity reaches the threshold, data write operations fail.
        • To set the thin provisioning ratio for a storage pool, run sh /opt/dsware/client/bin/dswareTool.sh --op poolParametersOperation -opType modify -parameter p_capacity_thin_rate:100 -poolId Storage pool ID as user dsware. The value of the thin provisioning ratio parameter ranges from 0 to 10000. You are advised to set the parameter to 100, which corresponds to a thin provisioning ratio of 100%, so that the storage pool space will not be over-allocated. (A combined example of both commands follows this note.)
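      The two dswareTool commands in this note can be issued as follows (a sketch; the storage pool ID 0 is an assumption for illustration, substitute the actual pool ID):

      # Run on the active FSM node as user dsware; pool ID 0 is assumed.
      sh /opt/dsware/client/bin/dswareTool.sh --op poolParametersOperation -opType modify -parameter p_write_through_switch:open -poolId 0
      sh /opt/dsware/client/bin/dswareTool.sh --op poolParametersOperation -opType modify -parameter p_capacity_thin_rate:100 -poolId 0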

    8. Create a block storage client based on site requirements.

      • If multiple VBS processes need to be installed on each compute node, change the number of VBS processes by referring to Configuring the Number of VBS Processes and then perform the following operations in sequence.
      • If you do not need to install multiple VBS processes on each compute node, perform the following operations in sequence to create a block storage client.
      NOTE:

      By default, one VBS process is deployed on each compute node of the system. In database scenarios, IB or RoCE networking places high requirements on compute node performance. One VBS process may not meet service requirements. You are advised to deploy multiple VBS processes based on service requirements and resources.

    9. Click Next and then click Create.

      NOTE:

      Only the server that has the block storage client created can use the data stores provided by the FusionStorage Block storage pool.

    10. Select the computing nodes for which block storage clients are to be created and click Create to create the block storage clients.

      Block storage clients are created only on computing nodes because only the separated deployment mode is supported in physical deployment scenarios. Storage nodes do not require block storage clients.

      After the block storage clients are successfully created, the FusionStorage Block configuration is complete.

    11. You can view the number of installed VBS processes by choosing Resource Pool > Summary on the web client, as shown in Figure 8-44.

      Figure 8-44  Viewing the number of installed VBS processes

    12. After the storage pool is created, perform 39.

    Creating a storage pool whose redundancy policy is EC

    1. Configure the following storage parameters:

      NOTE:

      These storage parameters are configured only for the first storage pool. If multiple storage pools have been planned in the system, reserve sufficient resources for the other resource pools when you configure these parameters.

      • Storage Pool Name: Set the storage pool name, for example, FusionStorage_0.
      • Set Storage Pool Redundancy Policy to EC.
      • EC Ratio: The proportion mode can be N+M or N+M:B (a capacity example follows this parameter list). For details, see FusionStorage Product Description and choose Block Storage Service > High Data Reliability.
      • EC Stripe Size: If FusionStorage Block is used for storing a large amount of sequential data, such as media data, you are advised to set the stripe depth to a value greater than or equal to 64 KB. If FusionStorage Block is used for storing a large amount of random data, such as event processing data, you are advised to set the stripe depth to 16 KB.
      • Security Level: If the system (or the system after planned capacity expansion) contains more than 64 servers, rack-level security is recommended. If rack-level security is used, the system must contain at least 12 servers.
        NOTE:
        • Server-level security: Copies of one piece of user data are distributed on different servers, and each server stores at most one copy, providing high reliability.
        • Rack-level security: Copies of one piece of user data are distributed on different racks, and each rack stores at most one copy, providing the highest reliability.
      • Main Storage Type:

        Set the parameter based on the type of the storage media added to the storage pool.

        NOTE:

        If the main storage type is SSD Card/NVMe SSD Disk, the Enable DIF option is available. You are advised to keep the default setting for this option.

        The Enable DIF option determines whether the system checks that the DIF function is configured on disks. If the main storage type is SSD Card/NVMe SSD Disk and the DIF function is not configured, an alarm is reported. For details about enabling the DIF function, see FusionStorage Hardware DIF Configuration Guide. If the configuration has already been completed in the Installation Drivers and Tools section of this scenario, ignore this check.

      • Cache Type: Set the cache type of the storage pool. If no cache is available in the system, set this parameter to none. That is, storage pools do not use caches.
        NOTE:

        If SATA or SAS disks are used as the main storage in the storage pool, you cannot set the cache type to none.

        If the cache type is SSD Card/NVMe SSD Disk, you are advised to retain the default settings. That is, select Enable DIF so that the system can check whether DIF verification is enabled for disks. If DIF verification is disabled, an alarm will be reported.

      • Maximum Number of Main Storage: The maximum number of disks (excluding operating system disks and cache disks or cards) used to store user data on a storage server. When the system is configured with the maximum number of disks, you are advised to select Default. Alternatively, you can select a value to reserve cache resources for future capacity expansion.
      • Cache/Main Storage Ratio: Ratio of the cache capacity to the main storage capacity. This parameter is displayed only when the cache type is SSD Card/NVMe SSD or SSD Disk.

        The rules for setting this parameter are as follows:

        • If Maximum Number of Main Storage is set to Default and Cache/Main Storage Ratio is not set, the parameter value is calculated by default based on the following formula:

          Total cache capacity on each server/(Maximum number of main storage devices on each server x Capacity of a main storage device)

          • If the number of main storage devices on each server is not greater than 12, the value of Maximum number of main storage devices on each server is 12.
          • If the number of main storage devices on each server is greater than 12 but equal to or smaller than 16, the value of Maximum number of main storage devices on each server is 16.
          • If the number of main storage devices on each server is greater than 16 but equal to or smaller than 24, the value of Maximum number of main storage devices on each server is 24.
          • If the number of main storage devices on each server is greater than 24 but equal to or smaller than 36, the value of Maximum number of main storage devices on each server is 36.
        • If Maximum Number of Main Storage is set to Default and a value is entered for Cache/Main Storage Ratio, the entered value must meet the following requirement: 1% ≤ Cache/Main Storage Ratio ≤ Minimum ratio between the cache and main storage of the storage pool.

          Minimum ratio between the cache and main storage of the storage pool = Total cache capacity on each server/(Actual number of main storage devices on a server x Capacity of a main storage device).

          NOTE:

          The parameter value used in physical deployment scenarios ranges from 2% to 20% (10% is recommended) and is accurate to three decimal places, for example, 10.125%.

        • If Maximum Number of Main Storage is a numerical value, you do not need to set this parameter. The system calculates the value based on the following formula: Total cache capacity of each server/(Maximum number of main storage devices selected on the Portal page x Capacity of a single main storage).
        NOTE:
        The cache of one main storage device can be distributed on one cache device only. If the Cache/Main Storage Ratio value is too large, the cache allocated to a main storage device may be insufficient, and the storage pool will fail to be created.
      • EC Cache: After this option is selected, cache acceleration is enabled and the read and write speeds are higher.
        NOTE:
        If not all main storage and cache media of the storage pool are SSD devices and the selected EC ratio is N+3, the system cannot reach the N+3 reliability when cache acceleration is enabled. To guarantee the reliability of the N+3 EC ratio, cache acceleration must be disabled.
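      As a hedged illustration of how the EC ratio affects capacity (the 4+2 ratio is an example, not a recommendation): with an N+M ratio, roughly N/(N+M) of the raw main storage capacity is usable, so 4+2 keeps about two thirds of the raw capacity, compared with about one third under three-copy redundancy.

      echo "scale=3; 4 * 100 / (4 + 2)" | bc    # usable capacity share (%) for a 4+2 EC ratio; prints 66.666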

    2. In Servers and disks, click Add.

      The Select Server page is displayed.

    3. Select the servers to be added to the storage pool.

      One server can be added to only one storage pool.

      Select only storage nodes here because only the separated deployment mode is supported in physical deployment scenarios.

      NOTE:

      The storage resources on the servers added to the storage pool are only for exclusive use by FusionStorage Block, and the original data stores will be deleted.

    4. Configure the main storage and cache that each server provides for storage pools.

      NOTE:

      A storage pool must contain a minimum of 12 OSD processes. To be more specific, the storage pool must have at least 12 disks or a total of 7.2 TB of SSD cards used as the main storage. When SSD cards are used to provide main storage resources, one OSD process must be configured for each 600 GB of space.

      The selected slot numbers must be consecutive, regardless of whether they are selected in batches or manually.

      The main storage and cache on servers can be configured in batches or one by one.

      • Batch Select: Select the main storage and cache for selected servers in batches. This mode applies to the scenario where the storage media quantity and slot numbers on the servers are the same.

        If the storage media are SAS disks, SATA disks, or SSDs, set the start and end slot numbers. If the storage media are SSD cards, set the disk quantity (that is, the SSD card quantity), because SSD cards do not have slot numbers.

      • Manual Select: Select the main storage and cache for servers one by one. This mode applies to the scenario where the storage media quantity and slot numbers on the servers are different.

        Click the expand icon to the left of each server and select the main storage and cache from the storage media that are displayed.

    5. Click Add.

      Servers in the storage pool are displayed in the server list.

    6. Click Create.

      A dialog box is displayed.

    7. Click Yes to create a storage pool.

      NOTE:
      • Large block write-through
        • Large block write-through is disabled for storage pools by default. If SATA or SAS disks are used as the main storage and SSD devices are used as the cache, data is first written to the cache and then to the main storage. When the cache bandwidth on a storage node is less than 1 GB/s, you are advised to enable large block write-through. After the function is enabled, large data blocks (≥ 256 KB) are written directly to the main storage.
        • If the system is upgraded from an earlier version, the status of the large block write-through function remains the same as before the upgrade.
        • To enable large block write-through for storage pools, run the sh /opt/dsware/client/bin/dswareTool.sh --op poolParametersOperation -opType modify -parameter p_write_through_switch:open -poolId Storage pool ID command on the active FSM node as user dsware.
      • Thin provisioning ratio
        • In most cases, the upper-layer management system, such as FusionSphere OpenStack, controls the system thin provisioning ratio. When there is no upper-layer software to control it, you can specify a thin provisioning ratio for the FusionStorage Block storage pools.
        • The thin provisioning ratio parameter of a storage pool is 0 by default, indicating that there are no restrictions on storage pool space allocation. However, if the storage pool space is over-allocated and the in-use capacity reaches the threshold, data write operations fail.
        • To set the thin provisioning ratio for a storage pool, run sh /opt/dsware/client/bin/dswareTool.sh --op poolParametersOperation -opType modify -parameter p_capacity_thin_rate:100 -poolId Storage pool ID as user dsware. The value of the thin provisioning ratio parameter ranges from 0 to 10000. You are advised to set the parameter to 100, which corresponds to a thin provisioning ratio of 100%, so that the storage pool space will not be over-allocated.

    8. Create a block storage client based on site requirements.

      • If multiple VBS processes need to be installed on each compute node, change the number of VBS processes by referring to Configuring the Number of VBS Processes and then perform the following operations in sequence.
      • If you do not need to install multiple VBS processes on each compute node, perform the following operations in sequence to create a block storage client.
      NOTE:

      By default, one VBS process is deployed on each compute node of the system. In database scenarios, IB or RoCE networking places high requirements on compute node performance. One VBS process may not meet service requirements. You are advised to deploy multiple VBS processes based on service requirements and resources.

    9. Click Next and then click Create.

      NOTE:

      Only the server that has the block storage client created can use the data stores provided by the FusionStorage Block storage pool.

    10. You can view the number of installed VBS processes by choosing Resource Pool > Summary on the web client, as shown in Figure 8-45.

      Figure 8-45  Viewing the number of installed VBS processes

    11. Select the computing nodes for which block storage clients are to be created and click Create to create the block storage clients.

      Block storage clients are created only on computing nodes because only the separated deployment mode is supported in physical deployment scenarios. Storage nodes do not require block storage clients.

      After the block storage clients are successfully created, the FusionStorage Block configuration is complete.

    Group server CPUs.

    Perform this step on servers where both FSM and FSA nodes are deployed. You can also perform it on servers where services and FSA nodes are deployed to configure resource isolation.

    Deploy the FSM and FSA nodes on the same server only when the system contains fewer than 64 servers. In this case, allocate four physical cores and 16 GB of memory for the FSM node and reserve other CPU and memory resources for the FSA node to use.

    If you need to configure resource isolation for services and the FSA node, reserve CPU and memory resources for the FSA node based on the services deployed. For details about the requirement of the reserved FSA resources, see System Requirements. Insufficient reserved resources may adversely affect the performance of the FusionStorage Block system.

    1. Use PuTTY to log in to the server as user dsware.

      If the public and private keys are used to authenticate the login, perform the operations based on Using PuTTY to Log In to a Node in Key Pair Authentication Mode.

    2. Run the su - root command and enter the password of user root to switch to user root.

      In scenarios where hardware and software are integrated, the default password of user root is Huawei@123 at server delivery.

    3. Run the following command to query the CPU information of the server:

      sh /opt/dsware/agent/script/dsware_cpu_group.sh query_cpu_info

      Information similar to the following is displayed:

      ----------1: system detail cpu info--------
      processor=[24 0] in core_id=0 and physical_id=0
      processor=[25 1] in core_id=1 and physical_id=0
      ...

      In the command output, physical_id identifies the physical CPU, core_id identifies the physical core on the physical CPU, and processor specifies the logical CPU.

    4. Run the free -g command to query the memory information of the server.
    5. Run the following command to group CPUs for FusionStorage Block and reserve CPU and memory resources for the FSA node:

      sh /opt/dsware/agent/script/dsware_cpu_group.sh add_cpu_group phy_cpu_id1@phy_cpu_id2@phy_cpu_idn N M

      In this command, phy_cpu_id1, phy_cpu_id2, and phy_cpu_idn indicate the physical CPU IDs, N indicates the number of CPU cores to be added to the CPU group, and M indicates the memory size (in GB) reserved for the FSA node.

      • For the resource isolation on servers that have both FSM and FSA nodes deployed:

        Because the FSM and FSA nodes can be deployed on the same server only when the system contains fewer than 64 servers, allocate 4 physical cores and 16 GB of memory for the FSM node and reserve the remaining CPU and memory resources for the FSA node.

        For example, if a server has 4 CPUs (16 cores) and 96 GB of memory, run the following command to reserve 12 cores and 80 GB of memory for the FSA node:

        sh /opt/dsware/agent/script/dsware_cpu_group.sh add_cpu_group 0@1@2@3 3 80

      • For the resource isolation on servers that have services and the FSA node deployed:

        Subtract the CPU and memory resources required by services from the total resources and reserve the remaining resources for the FSA node.

        For example, if a server has 4 CPUs (16 cores) and 96 GB of memory and services require 4 cores and 16 GB of memory, run the following command to reserve 12 cores and 80 GB of memory for the FSA node:

        sh /opt/dsware/agent/script/dsware_cpu_group.sh add_cpu_group 0@1@2@3 3 80

    6. Restart the server to make the CPU grouping take effect.
    7. Check whether the CPU grouping takes effect.

      1. Run either of the following commands to check whether the CPU grouping configuration information exists in the directory:

        ls /sys/fs/cgroup/cpuset/fusionstorage/dsware or ls /cgroup/cpuset/dsware

      2. Run the cat cpuset.cpus command to view details about logical CPUs of the grouped CPUs.

      3. Run the ps -ef|grep -v grep|grep dsware command to query IDs of processes related to DSware and record them.

      4. Run the cat tasks command to check whether all IDs of processes related to DSware recorded in 45.c exist. If yes, the CPU grouping is successful. (The verification commands are consolidated in the sketch below.)
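      The verification commands above can be run as one sequence (a sketch assuming the /sys/fs/cgroup path from step a; use /cgroup/cpuset/dsware instead if that is the directory present on your system):

        cd /sys/fs/cgroup/cpuset/fusionstorage/dsware    # or: cd /cgroup/cpuset/dsware
        cat cpuset.cpus                                  # logical CPUs assigned to the DSware group
        ps -ef | grep -v grep | grep dsware              # note the IDs of DSware-related processes
        cat tasks                                        # all recorded process IDs should appear here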

      NOTE:
      You can run the sh /opt/dsware/agent/script/dsware_cpu_group.sh del_cpu_group command to cancel CPU grouping. The cancellation takes effect after the server restarts.

    8. Group CPUs on all other servers. For details, see 39 to 45.

    (Optional) Replace the certificate.

    1. To improve system operation and maintenance security, replace the certificate and key file after the FusionStorage Block installation is complete. For details, see Operation and Maintenance > Block Storage Service Security Maintenance > Certificate Management in FusionStorage Product Documentation.

    (Optional) Configure time synchronization and a time zone.

    To configure an external NTP clock source for FSM and FSA nodes, perform the following operations:

    Consider configuring an external NTP clock source. If no clock source is configured, the performance monitoring function will be unavailable in the event of time inconsistency between FSM and FSA nodes.

    1. On the FusionStorage Block Self-Maintenance Platform, choose System > System Configuration > Time Management.

      The Time Management page is displayed, as shown in Figure 8-46.

      Figure 8-46  Configuring an NTP clock source and a time zone

    2. Select FusionStorage Manager and set the following parameters:

      • Time synchronization information
        • NTP Server: Set domain names or IP addresses of three time servers.
        • Time Synchronization Interval: Enter a time interval in the unit of seconds. If time synchronization is enabled, the system synchronizes time with the NTP clock source after five to ten intervals. Therefore, set the interval to a small value, for example, 64 seconds.

        FusionStorage Block supports only an external clock source as the NTP server. Therefore, the FSM or FSA node cannot be used as the NTP server.

        After time synchronization information is configured, click Save.

      • Time zone information
        • Region
        • City

        After the time zone is configured, click Save.

      NOTE:
      If the system time cannot be synchronized with the clock source time ten intervals after time synchronization is configured, click Force Time Synchronization to forcibly synchronize time.

    3. Select FusionStorage Agent and configure time synchronization and a time zone for FSA nodes.

      For detailed parameter settings, see 49.

    Configure VBS flow control parameters.

    If the I/O throughput on a computing node is too high, the read/write performance from this node to FusionStorage Block will deteriorate significantly. Therefore, Virtual Block System (VBS) flow control parameters need to be configured based on the storage bandwidth or number of CPUs on this computing node, ensuring that the read/write performance from this node to FusionStorage Block is normal.

    1. Configure VBS flow control parameters. For details, see Configuring VBS Flow Control Parameters.

    Configure the maximum bandwidth used in the network flow control.

    1. If the 10GE or 25GE network is used as the storage network and the number of OSD processes on a storage node is greater than 12, configure the maximum bandwidth used in the network flow control. For details, see Configuring the Maximum Bandwidth Used in the Network Flow Control.

    (Optional) Connect FusionStorage Block to the native OpenStack system.

    1. If FusionStorage Block is used in the native OpenStack system, connect FusionStorage Block to the native OpenStack system after installing and configuring FusionStorage Block. For details, see Connecting FusionStorage Block to the Native OpenStack System.

    (Optional) Change the network congestion control algorithm of the storage pool.

    1. Change the network congestion control algorithm of the storage pool. For details, see Changing the Network Congestion Control Algorithm.

      Change the network congestion control algorithm of the storage pool to avoid sequential read performance deterioration of large data blocks in the storage pool when all of the following conditions are met:
      • The storage network uses Ethernet networking.
      • The switches used in the storage network do not support explicit congestion notification (ECN).
      • The Linux kernel version of the storage node OS is later than 3.10.0. For example, the Linux kernel versions of CentOS 7, Red Hat Enterprise Linux 7, SUSE Linux Enterprise Server 12, and Euler 2.0 are later than 3.10.0. (A quick check is shown below.)
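      To check the kernel version condition, you can run a standard Linux command on the storage node (not specific to FusionStorage):

      uname -r    # the reported kernel version should be later than 3.10.0 in this scenario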

    Configure other storage pools.

    1. If multiple storage pools need to be created, continue the creation by choosing Block Storage Service > Configuration > Block Storage Service Administrator Guide > Storage Pool Management > Resource Management > Creating a Storage Pool in FusionStorage Product Documentation.