parent 5bab86b2b3
commit 654a589167
@ -204,7 +204,7 @@ If `nodefs` filesystem has met eviction thresholds, `kubelet` frees up disk spac
If the `kubelet` is unable to reclaim sufficient resource on the node, `kubelet` begins evicting Pods.

The `kubelet` ranks Pods for eviction first by whether or not their usage of the starved resource exceeds requests,
then by [Priority](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/), and then by the consumption of the starved compute resource relative to the Pods' scheduling requests.
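
Since the ranking hinges on how a Pod's actual usage compares to its requests, inspecting both for a given Pod gives a rough sense of its eviction risk. A minimal sketch, assuming a Pod named `my-app` in the default namespace and a metrics pipeline (for example `metrics-server`) backing `kubectl top`:

```shell
# QoS class; BestEffort Pods have no requests, so any usage exceeds them
kubectl get pod my-app -o jsonpath='{.status.qosClass}{"\n"}'

# Declared requests versus current usage of the (potentially starved) resource
kubectl get pod my-app -o jsonpath='{.spec.containers[*].resources.requests}{"\n"}'
kubectl top pod my-app
```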

As a result, `kubelet` ranks and evicts Pods in the following order:
@ -216,7 +216,7 @@ spec:
- Verify that the scheduling of the second pod fails with the warning below:

```shell
Warning FailedScheduling 18s (x4 over 21s) default-scheduler persistentvolumeclaim "slzc" is being deleted
```

- Wait until the pod status of both pods is `Terminated` or `Completed` (either delete the pods or wait until they finish). Afterwards, check that the PVC is removed.
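
A sketch of those checks, assuming both pods run in the default namespace and the PVC is named `slzc` as in the warning above:

```shell
# Watch until both pods report Completed (or delete them instead)
kubectl get pods --watch

# Once the pods are gone, the PVC should no longer be listed
kubectl get pvc slzc
```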
@ -55,7 +55,7 @@ rather a
globally reachable via a single, static IP address.
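
One way to observe that single address is to query the federation control plane for the Ingress; a minimal sketch, assuming a kubeconfig context named `federation-cluster` pointing at the federation API server (the context name is illustrative):

```shell
# The ADDRESS column shows the one global IP shared by every cluster shard
kubectl --context=federation-cluster get ingress
```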

Clients inside your federated Kubernetes clusters (Pods) will be
automatically routed to the cluster-local shard of the Federated Service
backing the Ingress in their cluster if it exists and is healthy, or the closest healthy shard in a
different cluster if it does not. Note that this involves a network
trip to the HTTP(s) load balancer, which resides outside your local
@ -108,7 +108,7 @@ will not use the command line you intended it to use.
The first thing to do is to delete your pod and try creating it again with the `--validate` option.
For example, run `kubectl create --validate -f mypod.yaml`.
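
A sketch of that delete-and-recreate flow, assuming the manifest is `mypod.yaml` and the Pod it defines is named `mypod` (adjust both to your setup):

```shell
# Remove the misbehaving pod, then recreate it with client-side validation
kubectl delete pod mypod
kubectl create --validate -f mypod.yaml
```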

If you misspelled `command` as `commnd`, it will give an error like this:

```shell
I0805 10:43:25.129850 46757 schema.go:126] unknown field: commnd
```
@ -94,7 +94,7 @@ However, you can use [ConfigMap](/docs/tasks/configure-pod-container/configure-p
following the steps:

* **Step 1:** Change the config files in `config/`.
* **Step 2:** Create the ConfigMap `node-problem-detector-config` with `kubectl create configmap
node-problem-detector-config --from-file=config/`.
* **Step 3:** Change the `node-problem-detector.yaml` to use the ConfigMap:
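
As a quick sanity check on Step 2 before (or after) the Step 3 edit, you can confirm the ConfigMap actually holds your config files; a hedged sketch, assuming node-problem-detector runs in the `kube-system` namespace (adjust if yours differs):

```shell
# Each file under config/ should appear as a key in the ConfigMap's data
kubectl get configmap node-problem-detector-config -n kube-system -o yaml
```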
@ -233,7 +233,7 @@ You can install kubectl as part of the Google Cloud SDK.
```shell
sudo mv ./kubectl /usr/local/bin/kubectl
```
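
A quick way to confirm the binary is on your `PATH`; the `--client` flag keeps it from contacting a cluster:

```shell
kubectl version --client
```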
{{% /tab %}}
{{% tab name="Windows" %}}
1. Download the latest release {{< param "fullversion" >}} from [this link](https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).
Or if you have `curl` installed, use this command:
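
A sketch of that download, reusing the release URL from the link above:

```shell
curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
```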