FusionInsight HD 6.5.0 Administrator Guide 02

Configuring Cluster Static Resources


You can adjust the resource base on FusionInsight Manager and customize resource configuration groups if you need to control the service resources used on each node in a cluster, or the CPU and I/O quotas available on each node during different time segments.

Impact on the System

  • After a static service pool is configured, the configuration status of affected services is displayed as Configurations Expired. You need to restart these services, and they are unavailable during the restart.
  • After a static service pool is configured, the resources used by each service and role instance cannot exceed the configured upper limit.


Modify the Resource Adjustment Base

  1. On FusionInsight Manager, choose Cluster > Configure Static Service Pool.
  2. Click Configuration in the upper right corner. The page for configuring resource pools is displayed.
  3. Change the values of CPU (%) and Memory (%) in the System Resource Adjustment Base area.

    Modifying the system resource adjustment base changes the maximum physical CPU and memory usage on nodes by services. If multiple services are deployed on the same node, the maximum physical resource usage of all services cannot exceed the adjusted CPU or memory usage.

  4. Click Next.

    To modify parameters again, click Previous.
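As a rough illustration of the note above (this is not FusionInsight code; the node size and percentages are made-up values), the adjustment base acts as a node-level cap that all services deployed on the node must together fit under:

```python
def node_caps(total_cpu_cores, total_memory_mb, cpu_base_pct, mem_base_pct):
    """Return the CPU-core and memory caps implied by the adjustment base.

    All services deployed on the node must together stay within these caps.
    """
    return (total_cpu_cores * cpu_base_pct / 100,
            total_memory_mb * mem_base_pct / 100)

# Example: a 16-core, 32768 MB node with an 80% CPU / 70% memory base
cpu_cap, mem_cap = node_caps(16, 32768, 80, 70)
# All services combined may use at most cpu_cap cores and mem_cap MB.
```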

Modify the Default Resource Configuration Group

  1. Click default. In the Configure weight table, set CPU LIMIT(%), CPU SHARE(%), I/O(%), and Memory(%) for each service.

    • The sum of CPU LIMIT(%) and CPU SHARE(%) across all services can exceed 100%.
    • The sum of I/O(%) across all services can exceed 100% but cannot be 0.
    • The sum of Memory(%) across all services can be greater than, smaller than, or equal to 100%.
    • Memory(%) does not take effect dynamically and can be modified only in the default configuration group.
    • CPU LIMIT(%) configures the ratio of the number of CPU cores a service can use to the number of cores that can be allocated on the related nodes.
    • CPU SHARE(%) configures the ratio of the time a service uses a CPU core relative to other services, that is, the proportion of time each service gets when multiple services compete for the same core.
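The behavior of CPU SHARE(%) can be sketched as follows. This is an illustrative model only, assuming relative-weight semantics (as with Linux cgroup CPU shares), not FusionInsight code; the service names and percentages are hypothetical:

```python
def cpu_under_contention(shares_pct):
    """Split one contended core among services in proportion to CPU SHARE(%).

    CPU SHARE is a relative weight: it only matters when services compete
    for the same core, which is why the percentages need not sum to 100.
    """
    total = sum(shares_pct.values())
    return {svc: pct / total for svc, pct in shares_pct.items()}

# Two services competing for one core with shares 60% and 30%:
# the first gets 2/3 of the core time, the second 1/3.
split = cpu_under_contention({"HBase": 60, "Yarn": 30})
```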

  2. Click Generate detailed configurations based on weight configurations. FusionInsight Manager generates the actual values of the parameters in the default weight configuration table based on the cluster hardware resources and allocation information.

    To modify parameters again, click Configuration on the right of Step 1: Configure weight.

  3. You can change the value of a parameter in the parameter configuration column in the parameter table as required.

    Manually changing a parameter value does not refresh the service resource usage. In added configuration groups, parameters that take effect dynamically are displayed with their configuration group number, for example, HBase: RegionServer: dynamic-config1.RES_CPUSET_PERCENTAGE; the function of the parameter does not change. Table 5-1 lists the parameters. Memory-related parameters can be configured by instance group, and differentiated values can also be configured for individual instances.

    1. Memory related parameters include MAX_PROCESS_MEMORY, FTPSERVER_HEAPSIZE, FLUME_HEAPSIZE, HBASE_HEAPSIZE, HADOOP_HEAPSIZE, dfs.datanode.max.locked.memory, SOLR_HEAPSIZE, and yarn.nodemanager.resource.memory-mb in Table 5-1.
    2. The default value of an instance group parameter is the minimum value configured for all instances in the instance group.
    3. To differentiate instances in an instance group, click Details on the right of the instance group to modify the configuration and click OK.
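The default-value rule for instance groups can be sketched as follows (the instance names and heap values are hypothetical):

```python
# Hypothetical per-instance heap settings (MB) inside one instance group
instance_values = {"RegionServer_1": 4096, "RegionServer_2": 2048, "RegionServer_3": 4096}

# The instance group's default is the minimum value configured across its instances
group_default = min(instance_values.values())
```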
    Table 5-1 Parameters of the static service pool

    • dynamic-configX.RES_CPUSET_PERCENTAGE: Configures the service CPU usage.
    • dynamic-configX.RES_CPU_SHARE: Configures the service CPU share usage.
    • dynamic-configX.RES_BLKIO_WEIGHT: Configures the service I/O usage.


    • MAX_PROCESS_MEMORY: Specifies the maximum memory that each Elk node can use. The default value is null. After this parameter is set on FusionInsight Manager, the database memory is set to this value, overwriting any value set in the background by the GUC tool. On FusionInsight Manager, set the value for each CN and DN on a host to MAX_PROCESS_MEMORY/(Number of CNs + Number of DNs).

    • Set this parameter based on the formula; otherwise, the database may fail to restart if the configured value is too small.
    • Value range: (total physical memory of the node x Memory x Memory(%)) to 1073741823.

      The value of Memory is specified in the System Resource Adjustment Base area.

      The value of Memory(%) is specified in the Configure weight area.

      For example, if the physical memory of a node in the cluster is 32088 MB, Memory is 70%, and Memory(%) is 80%, the minimum value of this parameter is 17969 (32088 x 0.7 x 0.8).


    • FTPSERVER_HEAPSIZE: Configures the maximum JVM memory for the FTP server.
    • FLUME_HEAPSIZE: Configures the maximum JVM memory for Flume.
    • HBASE_HEAPSIZE: Configures the maximum JVM memory for RegionServer.
    • HADOOP_HEAPSIZE: Configures the maximum JVM memory of a DataNode.
    • dfs.datanode.max.locked.memory: Configures the size, in bytes, of the block replica cache that a DataNode keeps in memory.
    • SOLR_HEAPSIZE: Configures the maximum JVM memory for SolrServerAdmin.
    • yarn.nodemanager.resource.memory-mb: Configures the memory that can be used by NodeManager on the current node.
    • Configures the maximum JVM memory for Elasticsearch.
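The MAX_PROCESS_MEMORY bounds and per-host split described above can be checked with a short calculation. This is a sketch, not FusionInsight code; the CN/DN counts below are example values, and the percentages are passed as fractions:

```python
def max_process_memory_min(total_mem_mb, memory_base, memory_weight):
    """Minimum allowed MAX_PROCESS_MEMORY: total physical memory of the node
    x Memory (adjustment base, as a fraction) x Memory(%) (weight, as a fraction)."""
    return int(total_mem_mb * memory_base * memory_weight)

def per_host_value(max_process_memory, num_cn, num_dn):
    """Value to set for each CN and DN on a host:
    MAX_PROCESS_MEMORY / (number of CNs + number of DNs)."""
    return max_process_memory // (num_cn + num_dn)

# The example from the table: a 32088 MB node, Memory = 70%, Memory(%) = 80%
minimum = max_process_memory_min(32088, 0.7, 0.8)   # 17969, as in the table's example
```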

  4. Click OK.

    In the displayed dialog box, click OK.

Add a Customized Resource Configuration Group

  1. Determine whether to automatically adjust resource configurations at different time segments.

    • If yes, go to 2.
    • If no, use the default configurations, and no further action is required.

  2. Click Configuration, change the system resource adjustment base values, and click Next.
  3. Click Add to add a resource configuration group.
  4. In Step 1: Scheduling Time, click Configuration.

    The page for configuring the time policy is displayed.

    Modify the following parameters based on service requirements and click OK.

    • Repeat: If this parameter is selected, the customized resource configuration is applied repeatedly based on the scheduling period. If this parameter is not selected, set the date and time when the configuration of the group of resources can be applied.
    • Repeat Policy: The available values are Daily, Weekly, and Monthly. This parameter is valid only when Repeat is selected.
    • Between: specifies the period, from start time to end time, during which the resource configuration applies. The time range must be unique; if it overlaps with that of an existing resource configuration group, it cannot be saved.
    • The default group of resource configuration takes effect in all undefined time segments.
    • The newly added resource group is a parameter set that takes effect dynamically in a specified time range.
    • The newly added resource group can be deleted. A maximum of four resource configuration groups that take effect dynamically can be added.
    • Select a repeat policy. If the end time is earlier than the start time, the resource configuration ends on the next day by default. For example, if the validity period is 22:00 to 06:00, the customized resource configuration takes effect from 22:00 on the current day to 06:00 on the next day.
    • If the repeat policy types of multiple configuration groups are different, their time ranges can overlap. The policy types, by priority from low to high, are daily, weekly, and monthly. For example, suppose two resource configuration groups use the monthly and daily policies, respectively, and their time ranges in a day overlap as 04:00 to 07:00 and 06:00 to 08:00. In the overlapping period, the configuration of the group that uses the monthly policy prevails.
    • If the repeat policy types of multiple resource configuration groups are the same, the same time range can be set on different days. For example, two weekly scheduling groups can both use 04:00 to 07:00, one on Monday and one on Wednesday.
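The precedence rule for overlapping time ranges can be sketched as follows. This is a simplified model (it ignores dates and ranges that wrap past midnight), and the group names are hypothetical:

```python
# Priority of repeat policies, from the text: monthly > weekly > daily
PRIORITY = {"daily": 0, "weekly": 1, "monthly": 2}

def active_group(groups, hour):
    """Pick the configuration group that applies at a given hour.

    groups: list of (name, policy, start_hour, end_hour). Overlaps between
    different policy types are resolved by taking the higher-priority policy;
    undefined time segments fall back to the default group.
    """
    matches = [g for g in groups if g[2] <= hour < g[3]]
    if not matches:
        return "default"
    return max(matches, key=lambda g: PRIORITY[g[1]])[0]

# The example from the text: a monthly group (04:00-07:00) and a daily group
# (06:00-08:00) overlap between 06:00 and 07:00; the monthly group prevails.
groups = [("g-monthly", "monthly", 4, 7), ("g-daily", "daily", 6, 8)]
```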

  5. Modify the resource configuration of each service in Step 2: Weight Configuration.
  6. Click OK.

    In the displayed dialog box, click OK.

Updated: 2019-05-17

Document ID: EDOC1100074522
