
HUAWEI CLOUD Stack 6.5.0 Alarm and Event Reference 04

ALM-5 Failed to Delete Pod

Description

This alarm is reported when a pod or a cron job fails to be deleted.

Pod: In Kubernetes, a pod is the smallest unit of creation, scheduling, and deployment. A pod is a group of relatively tightly coupled containers that are always co-located and run in a shared application context. Containers within a pod share a network namespace, IP address, port space, and volumes.

Attribute

Alarm ID: 5

Alarm Severity: Minor

Alarm Type: Environmental alarm

Parameters

kind: Resource type.

namespace: Name of the project to which the resource belongs.

name: Resource name.

uid: Unique ID of the resource.

OriginalEventTime: Time when the event was generated.

EventSource: Name of the component that reports the event.

EventMessage: Supplementary information about the event.

Impact on the System

Components or applications become abnormal.

System Actions

The system keeps deleting the pod or cron job.

Possible Causes

The deletion lifecycle configured for the application is incorrect. As a result, the deletion fails.

Procedure

  1. Obtain the name of the pod that fails to be deleted.

    1. Use a browser to log in to the FusionStage OM zone console.
      1. Log in to ManageOne Maintenance Portal.
        • Login address: https://<homepage address of ManageOne Maintenance Portal>:31943, for example, https://oc.type.com:31943.
        • The default username is admin, and the default password is Huawei12#$.
      2. On the O&M Maps page, click the FusionStage link under Quick Links to go to the FusionStage OM zone console.
    2. Choose Application Operations > Application Operations from the main menu.
    3. In the navigation pane on the left, choose Alarm Center > Alarm List and query the alarm by setting query criteria.
    4. Click the alarm to expand its details. Record the values of name and namespace in Location Info; these are the podname and namespace used in the following steps.

  2. Use PuTTY to log in to the manage_lb1_ip node.

    The default username is paas, and the default password is QAZ2wsx@123!.

  3. Run the following command and enter the password of the root user to switch to the root user:

    su - root

    Default password: QAZ2wsx@123!

  4. Run the following command to obtain the IP address of the node on which the pod runs:

    kubectl get pod podname -n namespace -o yaml | grep -i hostip:

    In the preceding command, podname is the pod name obtained in step 1, and namespace is the namespace obtained in step 1.

    Log in to the node as the paas user.

  5. Search for the error information based on the pod name and correct the application configuration.

    1. Run the following commands to view the kubelet log:

      cd /var/paas/sys/log/kubernetes/

      vi kubelet.log

      Press the / key, enter the pod name, and press Enter to search. If Failed or Error appears in a matched line, or a log line starts with E, that line contains the error information.

    2. Correct the lifecycle script based on the error information. If information similar to the following is displayed, the deletion lifecycle failed to execute. In this case, correct the application configuration, delete and redeploy the application, and then check whether the alarm is cleared.
      hostname:~ # cd /var/paas/sys/log/kubernetes/
      hostname:/var/paas/sys/log/kubernetes # vi kubelet.log 
      I0111 22:04:40.683498   70092 plugin.go:687] Enter process lifecycle handletype=Delete
      I0111 22:04:40.683506   70092 plugin.go:702] Enter process lifecycle commands=[]string{"/bin/bash", "-c", "rm /tmp/test-file"}
      E0111 22:04:40.689192   70092 plugin.go:808] run process process1 failed: exit status 1.
      E0111 22:04:40.689273   70092 plugin.go:705] RunCommand failed for pod testns/nginx-vm-788324154-2qlrx in process process1, err is: exit status 1
      E0111 22:04:40.689366   70092 kubelet_for_process.go:177] error killing pod: failed to "Uninstall" for "process1" with ExecuteCommandFailed: "error info: exit status 1"
      • If yes, no further action is required.
      • If no, go to 6.

  6. Contact technical support for assistance.
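The hostIP lookup in step 4 can be sketched offline against a saved copy of the pod YAML. The file path and field values below are hypothetical examples of the fields kubectl returns; only the hostIP line matters for this procedure.

```shell
# Sketch of the hostIP lookup in step 4, run against a saved copy of the
# pod YAML instead of a live cluster. The file content is a made-up example.
cat > /tmp/pod.yaml <<'EOF'
status:
  hostIP: 192.168.0.12
  podIP: 10.0.1.7
  phase: Running
EOF

# Same filter as the documented command: case-insensitive match on "hostip:".
# Prints the matching line: "  hostIP: 192.168.0.12"
grep -i 'hostip:' /tmp/pod.yaml
```

The printed hostIP value is the address of the node to log in to as the paas user.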
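The vi search in step 5 can also be done non-interactively with grep. The sketch below recreates a small kubelet.log excerpt (taken from the example output above; the path /tmp/kubelet.log is used only for illustration) and filters it for the affected pod's error lines.

```shell
# Non-interactive version of the log search in step 5: keep only lines for
# the affected pod that start with E or mention Failed/Error.
# The log excerpt is recreated here for illustration.
cat > /tmp/kubelet.log <<'EOF'
I0111 22:04:40.683498   70092 plugin.go:687] Enter process lifecycle handletype=Delete
E0111 22:04:40.689192   70092 plugin.go:808] run process process1 failed: exit status 1.
E0111 22:04:40.689273   70092 plugin.go:705] RunCommand failed for pod testns/nginx-vm-788324154-2qlrx in process process1, err is: exit status 1
EOF

grep 'nginx-vm-788324154-2qlrx' /tmp/kubelet.log | grep -E '^E|Failed|Error'
```

Any line this prints identifies the process and exit status that made the deletion lifecycle fail.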

Alarm Clearing

This alarm will be automatically cleared after the fault is rectified.

Related Information

None

Updated: 2019-08-30

Document ID: EDOC1100062365
