From 757f101117a7a2190a45c5c67a77d7ac3f863e9d Mon Sep 17 00:00:00 2001
From: Jie Luo
Date: Wed, 21 Dec 2016 16:37:06 +0800
Subject: [PATCH 1/2] fix some typos

Signed-off-by: Jie Luo
---
 docs/admin/accessing-the-api.md              | 2 +-
 docs/admin/node.md                           | 2 +-
 docs/admin/out-of-resource.md                | 2 +-
 docs/admin/resourcequota/index.md            | 2 +-
 docs/admin/resourcequota/walkthrough.md      | 2 +-
 docs/getting-started-guides/windows/index.md | 2 +-
 6 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/docs/admin/accessing-the-api.md b/docs/admin/accessing-the-api.md
index 0e491ccf0d..c8f239969f 100644
--- a/docs/admin/accessing-the-api.md
+++ b/docs/admin/accessing-the-api.md
@@ -148,7 +148,7 @@ By default the Kubernetes APIserver serves HTTP on 2 ports:
   - default IP is first non-localhost network interface, change with `--bind-address` flag.
   - request handled by authentication and authorization modules.
   - request handled by admission control module(s).
-  - authentication and authoriation modules run.
+  - authentication and authorization modules run.

 When the cluster is created by `kube-up.sh`, on Google Compute Engine (GCE),
 and on several other cloud providers, the API server serves on port 443. On
diff --git a/docs/admin/node.md b/docs/admin/node.md
index 3c3e16178d..a18aaf5ca7 100644
--- a/docs/admin/node.md
+++ b/docs/admin/node.md
@@ -186,7 +186,7 @@ Modifications include setting labels on the node and marking it unschedulable.
 Labels on nodes can be used in conjunction with node selectors on pods to control scheduling,
 e.g. to constrain a pod to only be eligible to run on a subset of the nodes.

-Marking a node as unscheduleable will prevent new pods from being scheduled to that
+Marking a node as unschedulable will prevent new pods from being scheduled to that
 node, but will not affect any existing pods on the node. This is useful as a
 preparatory step before a node reboot, etc. For example, to mark a node
 unschedulable, run this command:
diff --git a/docs/admin/out-of-resource.md b/docs/admin/out-of-resource.md
index a663703d9c..0fa6f3942c 100644
--- a/docs/admin/out-of-resource.md
+++ b/docs/admin/out-of-resource.md
@@ -349,7 +349,7 @@ in favor of the simpler configuation supported around eviction.
 The `kubelet` currently polls `cAdvisor` to collect memory usage stats at a regular interval. If memory usage
 increases within that window rapidly, the `kubelet` may not observe `MemoryPressure` fast enough, and the `OOMKiller`
 will still be invoked. We intend to integrate with the `memcg` notification API in a future release to reduce this
-latency, and instead have the kernel tell us when a threshold has been crossed immmediately.
+latency, and instead have the kernel tell us when a threshold has been crossed immediately.

 If you are not trying to achieve extreme utilization, but a sensible measure of overcommit, a viable workaround for
 this issue is to set eviction thresholds at approximately 75% capacity. This increases the ability of this feature
diff --git a/docs/admin/resourcequota/index.md b/docs/admin/resourcequota/index.md
index c967975dec..88f5d55afd 100644
--- a/docs/admin/resourcequota/index.md
+++ b/docs/admin/resourcequota/index.md
@@ -125,7 +125,7 @@ The quota can be configured to quota either value.
 If the quota has a value specified for `requests.cpu` or `requests.memory`,
 then it requires that every incoming container makes an explicit request for those resources.
 If the quota has a value specified for `limits.cpu` or `limits.memory`,
-then it requires that every incoming container specifies an explict limit for those resources.
+then it requires that every incoming container specifies an explicit limit for those resources.

 ## Viewing and Setting Quotas

diff --git a/docs/admin/resourcequota/walkthrough.md b/docs/admin/resourcequota/walkthrough.md
index d5ef21ff6c..1120e7550d 100644
--- a/docs/admin/resourcequota/walkthrough.md
+++ b/docs/admin/resourcequota/walkthrough.md
@@ -232,7 +232,7 @@ services.loadbalancers 0 2
 services.nodeports 0 0
 ```

-As you can see, the pod that was created is consuming explict amounts of compute resources, and the usage is being
+As you can see, the pod that was created is consuming explicit amounts of compute resources, and the usage is being
 tracked by Kubernetes properly.

 ## Step 5: Advanced quota scopes
diff --git a/docs/getting-started-guides/windows/index.md b/docs/getting-started-guides/windows/index.md
index 511d125dcd..35a8b28f7a 100644
--- a/docs/getting-started-guides/windows/index.md
+++ b/docs/getting-started-guides/windows/index.md
@@ -134,7 +134,7 @@ Run the following in a PowerShell window with administrative privileges. Be awar
 `.\proxy.exe --v=3 --proxy-mode=userspace --hostname-override=<node-ip> --master=<api-server-address> --bind-address=<node-ip>`

 ## Scheduling Pods on Windows
-Because your cluster has both Linux and Windows nodes, you must explictly set the nodeSelector constraint to be able to schedule Pods to Windows nodes. You must set nodeSelector with the label beta.kubernetes.io/os to the value windows; see the following example:
+Because your cluster has both Linux and Windows nodes, you must explicitly set the nodeSelector constraint to be able to schedule Pods to Windows nodes. You must set nodeSelector with the label beta.kubernetes.io/os to the value windows; see the following example:

 ```
 {

From 9f9e44d1741666bc65f1faaafff34a797f1787b8 Mon Sep 17 00:00:00 2001
From: Anthony Yeh
Date: Wed, 21 Dec 2016 11:43:09 -0800
Subject: [PATCH 2/2] Remove accidentally nested {% raw %} tags.

These tags cannot be nested, causing a Liquid syntax error. The
nesting was introduced accidentally by concurrent PRs.
---
 docs/user-guide/kubectl/kubectl_get.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/user-guide/kubectl/kubectl_get.md b/docs/user-guide/kubectl/kubectl_get.md
index 439f2b1ed0..7e973d7bc9 100644
--- a/docs/user-guide/kubectl/kubectl_get.md
+++ b/docs/user-guide/kubectl/kubectl_get.md
@@ -69,7 +69,7 @@ kubectl get [(-o|--output=)json|yaml|wide|custom-columns=...|custom-columns-file
   kubectl get -f pod.yaml -o json

   # Return only the phase value of the specified pod.
-  kubectl get -o template pod/web-pod-13je7 --template={% raw %}{{.status.phase}}{% endraw %}
+  kubectl get -o template pod/web-pod-13je7 --template={{.status.phase}}

   # List all replication controllers and services together in ps output format.
   kubectl get rc,services
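
For background on the second patch: Liquid `raw` tags do not nest because everything after an opening `{% raw %}` is passed through literally until the first `{% endraw %}`. An inner `{% raw %}` is therefore emitted as plain text, the inner `{% endraw %}` closes the outer block, and the trailing `{% endraw %}` is left unmatched, which is the Liquid syntax error the commit message describes. A minimal sketch of the failure mode (illustrative only; it assumes the example in kubectl_get.md was already inside an outer raw region added by one of the concurrent PRs):

```
{% raw %}
kubectl get -o template pod/web-pod-13je7 --template={% raw %}{{.status.phase}}{% endraw %}
{% endraw %}
```

Here Liquid closes the outer block at the `{% endraw %}` on the kubectl line, prints the inner `{% raw %}` verbatim, and then hits the final `{% endraw %}` with no opening tag, failing with an error along the lines of `Liquid syntax error: Unknown tag 'endraw'`. Removing the inner pair, as the patch does, leaves a single well-formed `{% raw %} ... {% endraw %}` region around the example.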