HUAWEI CLOUD Stack 6.5.0 Alarm and Event Reference 04

ALM-11 Failed to Start Pod

Description

This alarm is reported when a pod fails to start.

Pod: In Kubernetes, a pod is the smallest unit of creation, scheduling, and deployment. A pod is a group of relatively tightly coupled containers that are always co-located and run in a shared application context. Containers within a pod share a network namespace, IP address, port space, and volumes.
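
For illustration, a minimal pod manifest might look as follows. All names, the namespace, and the image here are placeholders, not values taken from this alarm:

  apiVersion: v1
  kind: Pod
  metadata:
    name: nginx-example        # illustrative pod name
    namespace: testns          # project (namespace) the pod belongs to
  spec:
    containers:                # every container listed here shares the pod's
    - name: container1         # network namespace, IP address, port space,
      image: nginx:latest      # and mounted volumes
      command: ["nginx", "-g", "daemon off;"]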

Attribute

Alarm ID: 11
Alarm Severity: Minor
Alarm Type: Environmental alarm

Parameters

kind: Resource type.
namespace: Name of the project to which the resource belongs.
name: Resource name.
uid: Unique ID of the resource.
OriginalEventTime: Event generation time.
EventSource: Name of the component that reports an event.
EventMessage: Supplementary information about an event.

Impact on the System

The application running in the pod is abnormal, and functions that depend on the application may be unavailable.

System Actions

The pod keeps restarting.
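
On a node, this typically appears as a pod stuck in a restart loop with a status such as CrashLoopBackOff or RunContainerError. For example (the pod name is taken from the examples later in this topic; the output is illustrative):

  hostname:~ # kubectl get pod nginx-run-1869532261-29px2 -n testns
  NAME                         READY   STATUS             RESTARTS   AGE
  nginx-run-1869532261-29px2   0/1     CrashLoopBackOff   5          10m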

Possible Causes

The pod startup script contains an error.

Procedure

  1. Obtain the name of the pod that fails to be started.

    1. Use a browser to log in to the FusionStage OM zone console.
      1. Log in to ManageOne Maintenance Portal.
        • Login address: https://<Homepage address of ManageOne Maintenance Portal>:31943, for example, https://oc.type.com:31943.
        • The default username is admin, and the default password is Huawei12#$.
      2. On the O&M Maps page, click the FusionStage link under Quick Links to go to the FusionStage OM zone console.
    2. Choose Application Operations > Application Operations from the main menu.
    3. In the navigation pane on the left, choose Alarm Center > Alarm List and query the alarm by setting query criteria.
    4. Click the alarm to expand its details. Record the values of name and namespace in Location Info; these are the podname and namespace used in the following steps.

  2. Use PuTTY to log in to the manage_lb1_ip node.

    The default username is paas, and the default password is QAZ2wsx@123!.

  3. Run the following command and enter the password of the root user to switch to the root user:

    su - root

    Default password: QAZ2wsx@123!

  4. Check whether a containerized application is running in the pod.

    Run the following command to obtain the pod template:

    kubectl get pod podname -n namespace -oyaml

    In the preceding command, podname is the instance name obtained in 1, and namespace is the namespace obtained in 1.

    If the pod template contains the following information, a containerized application is running in the pod. Otherwise, a process application is running in the pod.
    hostname:~/ # kubectl get pod nginx-run-1869532261-29px2 -n testns -oyaml
    ...
    spec:
      containers:
      - image: */nginx:latest
    ...
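
    As a quick check, you can also print only the container images from the template instead of reading the full YAML. The jsonpath output option is a generic kubectl feature, and the pod name below is from the example above:

    hostname:~ # kubectl get pod nginx-run-1869532261-29px2 -n testns -o jsonpath='{.spec.containers[*].image}'
    */nginx:latest

    A non-empty image list, such as */nginx:latest, indicates a containerized application.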
    • If yes, go to 5.
    • If no, go to 6.

  5. Search for error information about container startup based on the pod name and correct the container startup configuration.

    1. Run the following command to obtain the IP address of the node on which the pod runs:

      kubectl get pod podname -n namespace -oyaml | grep -i hostip:

      In the preceding command, podname is the instance name obtained in 1, and namespace is the namespace obtained in 1.

      Log in to the node using SSH.
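
      For example (the IP address shown is illustrative; log in with the paas user as in 2):

      hostname:~ # kubectl get pod nginx-run-1869532261-29px2 -n testns -oyaml | grep -i hostip:
        hostIP: 192.168.1.10
      hostname:~ # ssh paas@192.168.1.10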

    2. Run the following commands to view the kubelet log:

      cd /var/paas/sys/log/kubernetes/

      vi kubelet.log

      Press the / key, enter the name of the pod, and press Enter to search. If log entries similar to the following are displayed, the container start command failed to execute because the container startup script does not exist:

      hostname:~ # cd /var/paas/sys/log/kubernetes/
      hostname:/var/paas/sys/log/kubernetes # vi kubelet.log 
      E0113 14:18:46.648051   70092 docker_manager.go:2466] container start failed: RunContainerError: runContainer: Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"exec: \\\\\\\"bash /tmp/test.sh\\\\\\\": stat bash /tmp/test.sh: no such file or directory\\\"\\n\""}
      E0113 14:18:46.648141   70092 pod_workers.go:226] Error syncing pod nginx-run-1869532261-29px2-b01b9e9c-f829-11e7-aa58-286ed488d1d4, skipping: failed to "StartContainer" for "container1" with RunContainerError: "runContainer: Error response from daemon: {\"message\":\"invalid header field value \\\"oci runtime error: container_linux.go:247: starting container process caused \\\\\\\"exec: \\\\\\\\\\\\\\\"bash /tmp/test.sh\\\\\\\\\\\\\\\": stat bash /tmp/test.sh: no such file or directory\\\\\\\"\\\\n\\\"\"}"
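
      If the log file is large, you can filter it with grep instead of searching in vi. This uses only standard tooling, and the pod name is from the example above:

      hostname:/var/paas/sys/log/kubernetes # grep "nginx-run-1869532261-29px2" kubelet.log | grep -i "container start failed"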
    3. Correct the container startup command based on the error information, delete and redeploy the application, and then check whether the alarm is cleared.
      • If yes, no further action is required.
      • If no, go to 7.

  6. Search for error information about process startup based on the pod name and correct the process startup configuration.

    1. Run the following command to obtain the IP address of the node on which the pod runs:

      kubectl get pod podname -n namespace -oyaml | grep -i hostip:

      In the preceding command, podname is the instance name obtained in 1, and namespace is the namespace obtained in 1.

      Log in to the node using SSH.

    2. Run the following commands to view the kubelet log:

      cd /var/paas/sys/log/kubernetes/

      vi kubelet.log

      Press the / key, enter the name of the pod, and press Enter to search. If log entries similar to the following are displayed, the process start command failed to execute and returned exit status 2:

      hostname:~ # cd /var/paas/sys/log/kubernetes/
      hostname:/var/paas/sys/log/kubernetes # vi kubelet.log 
      I0113 15:19:31.770305   70092 plugin.go:605] handle lifecycle for process process1
      I0113 15:19:31.770312   70092 plugin.go:687] Enter process lifecycle handletype=Start
      I0113 15:19:31.770322   70092 plugin.go:702] Enter process lifecycle commands=[]string{"/bin/bash", "-c", "bash /tmp/start.sh"}
      E0113 15:19:31.775082   70092 plugin.go:808] run process process1 failed: exit status 2.
      E0113 15:19:31.775179   70092 plugin.go:705] RunCommand failed for pod testns/nginx-test-vm-1287476116-t54s4 in process process1, err is: exit status 2
      E0113 15:19:31.776240   70092 kubelet_for_process.go:221] Sync pod nginx-test-vm-1287476116-t54s4 error: failed to "StartProcess" for "process1" with ExecuteCommandFailed: "error info: exit status 2"
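
      The failure may also be recorded in the pod's events, which can be a faster first check. The following is a standard kubectl command (the pod name is from the example above); inspect the Events section at the end of its output:

      hostname:~ # kubectl describe pod nginx-test-vm-1287476116-t54s4 -n testns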
    3. Correct the process startup command based on the error information, delete and redeploy the application, and then check whether the alarm is cleared.
      • If yes, no further action is required.
      • If no, go to 7.

  7. Contact technical support for assistance.

Alarm Clearing

This alarm will be automatically cleared after the fault is rectified.

Related Information

None
