Update pod-lifecycle.md

Updated the table limits so that values do not get truncated. (pull/24569/head)

parent ad35d64797
commit e6c7f7eadf

@@ -77,13 +77,13 @@ have a given `phase` value.
Here are the possible values for `phase`:
Value | Description
:-----------|:-----------
`Pending` | The Pod has been accepted by the Kubernetes cluster, but one or more of the containers has not been set up and made ready to run. This includes time a Pod spends waiting to be scheduled as well as the time spent downloading container images over the network.
`Running` | The Pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.
`Succeeded` | All containers in the Pod have terminated in success, and will not be restarted.
`Failed` | All containers in the Pod have terminated, and at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system.
`Unknown` | For some reason the state of the Pod could not be obtained. This phase typically occurs due to an error in communicating with the node where the Pod should be running.
If a node dies or is disconnected from the rest of the cluster, Kubernetes
applies a policy for setting the `phase` of all Pods on the lost node to Failed.