Imbalanced Disk Usage

Publication Date: 2019-04-12

Issue Description

Usage of each data disk in a cluster is imbalanced.

Handling Process

1. Identify the tables with data skew.

1. Switch to user omm: su - omm

2. Load the environment variables: source ${BIGDATA_HOME}/mppdb/.mppdbgs_profile

3. Query the cluster status with cm_ctl query -Cvd and locate a CN node.

4. Go to the data directory of any CN node: cd /srv/BigData/mppdb/data1/coordinator.

Check the port number: cat postgresql.conf | grep port.

5. Query the currently available databases as follows:

(1) Connect to the CN database through the port obtained in Step 4: gsql -p 25308 -d postgres.

(2) Enter \l to list all databases.

6. After obtaining the databases in the cluster, query tables with data skew in each database.

The command is gsql -d testdb -p 25308 -f /opt/check_redis.sql > /home/omm/testdb.txt.

In the preceding command:

testdb: Indicates the name of the database to be queried. You are advised to name the output file after the database being queried so that the output files are easy to distinguish.

25308: Indicates the port number obtained in Step 4.

/opt/check_redis.sql: Indicates the location of the check script. For details about the script, see the attachment.

/home/omm/testdb.txt: Indicates the path storing the output file.
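Taken together, the steps above can be sketched as a small script. This is a sketch, not part of the article: the database list, the port 25308, and the paths are examples that must be replaced with the values found in the preceding steps.

```shell
# Sketch: run the skew-check script against every database found in
# step 5. The database names, port, and script/output paths are
# examples from this article; adjust them to your environment.
for db in postgres testdb; do
    gsql -d "$db" -p 25308 -f /opt/check_redis.sql > "/home/omm/${db}.txt"
done
```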

Output Description

Root Cause

1. No distribution column is set when the table is created. As a result, the system uses the default distribution column to distribute data, resulting in data skew.

2. The distribution column chosen when the table is created is not discrete enough (it contains many duplicate values). As a result, data skew occurs.
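One way to confirm that a particular table is skewed (a sketch, not part of the original procedure) is to count its rows per DN through the hidden xc_node_id system column, assuming your GaussDB version exposes it; the table name and port are examples:

```shell
# Sketch: count rows per DN for a suspect table. An evenly distributed
# table produces similar counts for every xc_node_id value; a skewed
# table shows one node far ahead of the others.
gsql -d testdb -p 25308 -c \
  "SELECT xc_node_id, count(*) FROM table_name GROUP BY xc_node_id ORDER BY 2 DESC;"
```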


Solution

1. Back up the table with data skew.

2. Delete the table with data skew (you can rename the table for backup).

3. Rebuild the table with data skew (redefine distribution columns).

4. Import data.

Perform the following operations to import data:

1. Run netstat -nap | grep 8888 to check whether port 8888 is occupied. If the port is free, go to the next step; otherwise, use another port.

2. Start the gds service (change the IP address and directory based on the site requirements).

/opt/huawei/Bigdata/mppdb/core/bin/gds -d /opt/gds_test/ -p <ip>:8888 -H <ip>/24 -l /opt/gds_test/log/gds_test_log.log -D

/opt/huawei/Bigdata/mppdb/core/bin/gds: Indicates the path of the gds file. You can run the find / -name gds command in the root directory as user root to search for the path where the gds file is stored.

/opt/gds_test/: Indicates the data file directory.

-p <ip>:8888: Indicates the IP address and port number on which the gds service listens.

-H <ip>/24: Indicates the hosts that are allowed to connect to the gds (24 is the subnet mask).

3. Create the data and log directories of the gds and grant users permissions on them. The data directory is the directory /opt/gds_test/ used in Step 2; data files are stored in this directory (for both data import and export).

mkdir -p /opt/gds_test/log

chmod -R 777 /opt/gds_test/

4. Log in to the database (gsql -p 25308 -d postgres) and create a foreign table for data export.

create foreign table foreign_tablename_out (id int) SERVER gsmpp_server OPTIONS (location 'gsfs://<ip>:8888/', format 'CSV', encoding 'utf8', delimiter E'\x06', quote E'\x07', null '') WRITE ONLY;

Location: Specifies the data source location of the external table (The file name must be specified when the data is imported.)

Format: Specifies the format of the data source file for the external table.

Encoding: Specifies the encoding format name of a data file.

Delimiter: Specifies the delimiter used to separate data fields in the file.

Quote: Specifies the quote character for a CSV file.

Null: Specifies the string that represents a null value in the data file.

5. Export the data (check whether the export is successful and verify the exported file name, which is usually <foreign table name>.dat.0).

INSERT INTO foreign_tablename_out SELECT * FROM table_name;

foreign_tablename_out: Indicates the foreign table name. Its structure must be consistent with the structure of the table to be adjusted.

table_name: Indicates the table to be adjusted.

6. Delete or rename the old table.

Delete: DROP TABLE table_name;

Rename: ALTER TABLE table_name RENAME TO table_name_old;

7. Rebuild the table based on the old table structure (respecify the distribution column).

When redefining the distribution column, select a column with few duplicate values.

CREATE TABLE table_name_new (id int) DISTRIBUTE BY HASH(id);

8. Create a foreign table for data import:

create foreign table foreign_tablename_into (id int) SERVER gsmpp_server OPTIONS (location 'gsfs://<ip>:8888/foreign_tablename_out.dat.0', format 'CSV', mode 'normal', encoding 'utf8', delimiter E'\x06', quote E'\x07', null '', fill_missing_fields 'on') LOG INTO err_tablename PER NODE REJECT LIMIT 'unlimited';

Fill_missing_fields: Specifies how to handle a row in the data source file whose last field is missing during loading:

If this parameter is true/on, the value of the last field is set to NULL and no error message is reported.

If this parameter is false/off, an error message such as missing data for column "tt" is reported.

PER NODE REJECT LIMIT 'value': Specifies the number of data format errors allowed in the data import. The value unlimited indicates that the number of data format errors is not limited.

LOG INTO err_tablename: Specifies that the error information is imported to the err_tablename.
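After the import in step 9, rows rejected for data format errors can be inspected in the error table named by LOG INTO. A sketch (the database, port, and table name follow the examples in this article):

```shell
# Sketch: inspect rows that were rejected for data format errors
# during the import, as recorded by LOG INTO err_tablename.
gsql -d testdb -p 25308 -c "SELECT * FROM err_tablename;"
```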

9. Import data.

INSERT INTO table_name_new SELECT * FROM foreign_tablename_into;



Currently, the impact on services is small, and only performance is partially affected. However, as data increases, more and more data will be skewed. In the worst case, if a DN node reaches the disk usage alarm threshold, the entire cluster becomes unavailable for writes (only data queries are allowed; data cannot be written).


1. Services need to be stopped temporarily. You are advised to stop services before you begin; otherwise, a small amount of data may be lost.

2. If the number of tables to be adjusted is large, ensure that the gds data file directory has sufficient space.

3. You are advised to use invisible characters as delimiters in foreign tables to avoid errors that can occur when the data contains the same characters as the delimiter.

Risks and Safeguards:

1. You are advised to re-export the data after rectifying any fault.

2. If an error is reported during data import, clear the table and import the data again. gds commits data in phases, and some data may already have been committed before the error was reported. Therefore, importing again without clearing the table would duplicate that data.

3. Do not delete the original table or re-enable services until the data has been completely exported and imported.


1. During data import or export, if you do not know whether the gds is still running, open a new window, log in to the database, and run select pg_table_size('table_name'); to check whether the table size is increasing. (This function can also be used to check the data volume after the import or export is complete.)
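The check above can be repeated automatically. A sketch, assuming the connection parameters used elsewhere in this article; stop it with Ctrl+C once the size stops growing:

```shell
# Sketch: poll the target table's size every 30 seconds to confirm
# that the import is still making progress.
while true; do
    gsql -d testdb -p 25308 -c "SELECT pg_table_size('table_name_new');"
    sleep 30
done
```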

2. The gds supports multiple threads (multiple tables can be imported or exported at the same time). However, you are advised to operate on one table at a time to ensure data security.

3. Copy the gds file (the file only) to a server outside the cluster and start it on the 10GE network to reduce the load on the cluster and improve gds efficiency.

Suggestion on performance improvement: design the table structure with dimension tables in mind. For example, a small constant (dimension) table can be created as a replication table, that is, each DN holds a complete copy of it, which speeds up queries and analysis.
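As a sketch of that suggestion (the table name and columns are invented examples, and the connection parameters follow this article), a small dimension table can be declared as a replication table when it is created:

```shell
# Sketch: create a small dimension table as a replication table so that
# every DN holds a full copy and joins against it avoid data shuffling.
gsql -d testdb -p 25308 -c \
  "CREATE TABLE dim_example (id int, name varchar(64)) DISTRIBUTE BY REPLICATION;"
```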


After the exception is handled and the data is verified to be correct, delete the original table. Otherwise, disk usage may remain imbalanced.