HUAWEI CLOUD Stack 6.5.0 Alarm and Event Reference 04

ALM-3007 Failed to Synchronize Files

Description

The SWR service stores its files in the file system and synchronizes them between instances. This alarm is generated when files fail to be synchronized between instances.

Attribute

Alarm ID: 3007
Alarm Severity: Major
Alarm Type: Processing error alarm

Parameters

alarmName: Indicates the alarm name.
objectInstance: Indicates the abnormal service address and instance.
additionalInformation: Indicates the returned error message.
probableCause: Indicates the possible cause.

Impact on the System

If files fail to be synchronized between instances, a required file may not be found.

System Actions

None

Possible Causes

  • The required file fails to be downloaded due to a network fault.
  • Data fails to be sent due to a network fault.
  • The Kubernetes service fails to obtain the SWR instance (a quick check for this cause is sketched after this list).
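
For the third cause, one quick way to see whether Kubernetes can obtain the SWR instances is to check the SWR service and its endpoints in the fst-manage namespace. This is a minimal sketch, run as the root user on the manage_lb1_ip node; it assumes kubectl is configured on that node, and the exact SWR service name may differ in your environment.

    # List SWR-related services and their endpoints in the fst-manage namespace.
    # An empty ENDPOINTS column means Kubernetes cannot obtain any SWR instance.
    kubectl get svc -n fst-manage | grep swr
    kubectl get endpoints -n fst-manage | grep swr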

Procedure

  1. Use PuTTY to log in to the manage_lb1_ip node.

    The default username is paas, and the default password is QAZ2wsx@123!.

  2. Run the following command and enter the password of the root user to switch to the root user:

    su - root

    Default password: QAZ2wsx@123!

  3. Check whether SWR instances are working properly.

    Run the kubectl get pod -o wide -n fst-manage | grep swr-api-server command to obtain all SWR instances. SWR is deployed in high availability (HA) mode, so ensure that there are at least two SWR instances and that each instance is in the Running state. (A scripted version of this check is sketched after this procedure.)

    swr-api-server-4042020123-g3cx4            1/1       Running   0          9d        172.16.6.6       manage-cluster5-3063fc5a-t4c0v
    swr-api-server-4042020123-nk8x3            1/1       Running   0          9d        172.16.3.6       manage-cluster4-3063fc5a-n41xn
    swr-api-server-4042020123-z53ws            1/1       Running   0          9d        172.16.10.5      manage-cluster4-3063fc5a-bzdfd
    • If yes, go to 4.
    • If no, go to 5.

  4. Check whether the network connection between SWR instances is normal.

    1. Run the kubectl exec -it swr-api-server-4042020123-g3cx4 bash -n fst-manage command to go to the SWR instance container. swr-api-server-4042020123-g3cx4 indicates the instance name, which can be obtained from the information displayed in 3.
    2. Run the ping 172.16.3.6 and ping 172.16.10.5 commands to check whether the network connection between the swr-api-server-4042020123-g3cx4 instance and the other SWR instances is normal. The IP addresses 172.16.3.6 and 172.16.10.5 can be obtained from the output in 3. (A scripted version of this check is sketched after this procedure.)
     [paas@swr-api-server-4042020123-g3cx4 dockyard]$  ping 172.16.3.6
    PING 172.16.3.6 (172.16.3.6) 56(84) bytes of data.
    64 bytes from 172.16.3.6: icmp_seq=1 ttl=64 time=0.950 ms
    64 bytes from 172.16.3.6: icmp_seq=2 ttl=64 time=0.223 ms
    64 bytes from 172.16.3.6: icmp_seq=3 ttl=64 time=0.224 ms
    ^C
    --- 172.16.3.6 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2001ms
    rtt min/avg/max/mdev = 0.223/0.465/0.950/0.343 ms
    • If the network connection is normal, go to 7.
    • If the network connection is abnormal, go to 6.

  5. If any SWR instance is abnormal, follow the instructions in section ALM-13 Pod Is Abnormal to rectify the instance fault.
  6. If the network connection is abnormal, contact the network administrator to rectify the network fault.
  7. For other issues, contact technical support.
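
The following is a minimal sketch of the instance check in 3, run as the root user on the manage_lb1_ip node. It assumes kubectl is configured on that node; the swr-api-server pod names and the fst-manage namespace are taken from the output shown above.

    # Count swr-api-server instances in the Running state. SWR is deployed in
    # HA mode, so at least two Running instances are expected.
    running=$(kubectl get pod -o wide -n fst-manage | grep swr-api-server | grep -c Running)
    echo "Running swr-api-server instances: ${running}"
    if [ "${running}" -lt 2 ]; then
        echo "SWR instances are abnormal. See ALM-13 Pod Is Abnormal."
    fi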
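
The connectivity check in 4 can be sketched in the same way. The pod name and peer IP addresses below are the examples from 3 and must be replaced with the values from your own output.

    # Ping every peer SWR instance from inside one SWR instance container.
    # Replace the pod name and peer IP addresses with the values obtained in 3.
    POD=swr-api-server-4042020123-g3cx4
    for peer in 172.16.3.6 172.16.10.5; do
        kubectl exec "${POD}" -n fst-manage -- ping -c 3 "${peer}"
    done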

Alarm Clearing

This alarm is automatically cleared when files are successfully synchronized between instances.

Related Information

None
