Merge branch 'master' into patch-56

commit 5d8dbf6b04
@@ -145,6 +145,7 @@ toc:
 - title: Install Network Policy Provider
   section:
   - docs/tasks/administer-cluster/calico-network-policy.md
+  - docs/tasks/administer-cluster/cilium-network-policy.md
   - docs/tasks/administer-cluster/romana-network-policy.md
   - docs/tasks/administer-cluster/weave-network-policy.md
 - docs/tasks/administer-cluster/change-pv-reclaim-policy.md
@@ -7,11 +7,11 @@
 For more information about contributing, see:
 
 * [Contributing to the Kubernetes Documentation](http://kubernetes.io/editdocs/)
-* [Creating a Documentation Pull Request](http://kubernetes.io/docs/contribute/create-pull-request/)
+* [Creating a Documentation Pull Request](http://kubernetes.io/docs/home/contribute/create-pull-request/)
 * [Writing a New Topic](http://kubernetes.io/docs/contribute/write-new-topic/)
-* [Staging Your Documentation Changes](http://kubernetes.io/docs/contribute/stage-documentation-changes/)
-* [Using Page Templates](http://kubernetes.io/docs/contribute/page-templates/)
-* [Documentation Style Guide](http://kubernetes.io/docs/contribute/style-guide/)
+* [Staging Your Documentation Changes](http://kubernetes.io/docs/home/contribute/stage-documentation-changes/)
+* [Using Page Templates](http://kubernetes.io/docs/home/contribute/page-templates/)
+* [Documentation Style Guide](http://kubernetes.io/docs/home/contribute/style-guide/)
 
 ## Thank you!
 
@@ -5,7 +5,7 @@ metadata:
 spec:
   hostNetwork: true
   containers:
-  - image: gcr.io/google_containers/etcd:2.0.9
+  - image: gcr.io/google_containers/etcd:3.0.17
     name: etcd-container
    command:
     - /usr/local/bin/etcd
@@ -1,3 +1,7 @@
+---
+title: kubefed
+notitle: true
+---
 ## kubefed
 
 kubefed controls a Kubernetes Cluster Federation

@@ -1,3 +1,7 @@
+---
+title: kubefed init
+notitle: true
+---
 ## kubefed init
 
 Initialize a federation control plane

@@ -1,3 +1,7 @@
+---
+title: kubefed join
+notitle: true
+---
 ## kubefed join
 
 Join a cluster to a federation

@@ -1,3 +1,7 @@
+---
+title: kubefed options
+notitle: true
+---
 ## kubefed options
 
 Print the list of flags inherited by all commands

@@ -1,3 +1,7 @@
+---
+title: kubefed unjoin
+notitle: true
+---
 ## kubefed unjoin
 
 Unjoin a cluster from a federation

@@ -1,3 +1,7 @@
+---
+title: kubefed version
+notitle: true
+---
 ## kubefed version
 
 Print the client and server version information
@@ -15,7 +15,7 @@ incomplete features are referred to in order to better describe service accounts
 
 ## User accounts vs service accounts
 
-Kubernetes distinguished between the concept of a user account and a service accounts
+Kubernetes distinguishes between the concept of a user account and a service account
 for a number of reasons:
 
 - User accounts are for humans. Service accounts are for processes, which
@@ -60,9 +60,9 @@ It acts synchronously to modify pods as they are created or updated. When this p
 TokenController runs as part of controller-manager. It acts asynchronously. It:
 
 - observes serviceAccount creation and creates a corresponding Secret to allow API access.
-- observes serviceAccount deletion and deletes all corresponding ServiceAccountToken Secrets
-- observes secret addition, and ensures the referenced ServiceAccount exists, and adds a token to the secret if needed
-- observes secret deletion and removes a reference from the corresponding ServiceAccount if needed
+- observes serviceAccount deletion and deletes all corresponding ServiceAccountToken Secrets.
+- observes secret addition, and ensures the referenced ServiceAccount exists, and adds a token to the secret if needed.
+- observes secret deletion and removes a reference from the corresponding ServiceAccount if needed.
 
 You must pass a service account private key file to the token controller in the controller-manager by using
 the `--service-account-private-key-file` option. The private key will be used to sign generated service account tokens.
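
The hunk above mentions wiring a signing key into the controller-manager. A minimal sketch under assumed paths (not the doc's own example):

```shell
# Generate a service account signing key pair (paths are illustrative).
openssl genrsa -out /tmp/sa.key 2048
openssl rsa -in /tmp/sa.key -pubout -out /tmp/sa.pub

# The controller-manager signs generated tokens with the private key
# (all other required flags omitted here):
kube-controller-manager --service-account-private-key-file=/tmp/sa.key

# The apiserver verifies those tokens with the matching public key:
kube-apiserver --service-account-key-file=/tmp/sa.pub
```
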
@@ -6,7 +6,7 @@ Static compilation of html from markdown including processing for grouping code
 
 > bdocs-tab:kubectl Deployment Config to run 3 nginx instances (max rollback set to 10 revisions).
 
-bdocs-tab:tab will be stripped during rendering and utilized to with CSS to show or hide the prefered tab. kubectl indicates the desired tab, since blockquotes have no specific syntax highlighting.
+bdocs-tab:tab will be stripped during rendering and utilized to with CSS to show or hide the preferred tab. kubectl indicates the desired tab, since blockquotes have no specific syntax highlighting.
 
 ```bdocs-tab:kubectl_yaml
 apiVersion: extensions/v1beta1
@@ -17957,10 +17957,10 @@ Appears In <a href="#ingress-v1beta1">Ingress</a> </aside>
 <span class="hljs-attr"> name:</span> service-example
 <span class="hljs-attr">spec:</span>
 <span class="hljs-attr"> ports:</span>
-<span class="hljs-comment"># Accept traffic sent to port 80</span>
-<span class="hljs-attr"> - name:</span> http
-<span class="hljs-attr"> port:</span> <span class="hljs-number">80</span>
-<span class="hljs-attr"> targetPort:</span> <span class="hljs-number">80</span>
+<span class="hljs-comment"># Accept traffic sent to port 80</span>
+<span class="hljs-attr"> - name:</span> http
+<span class="hljs-attr"> port:</span> <span class="hljs-number">80</span>
+<span class="hljs-attr"> targetPort:</span> <span class="hljs-number">80</span>
 <span class="hljs-attr"> selector:</span>
 <span class="hljs-comment"># Loadbalance traffic across Pods matching</span>
 <span class="hljs-comment"># this label selector</span>

@@ -17981,10 +17981,10 @@ Appears In <a href="#ingress-v1beta1">Ingress</a> </aside>
 <span class="hljs-attr"> name:</span> service-example
 <span class="hljs-attr">spec:</span>
 <span class="hljs-attr"> ports:</span>
-<span class="hljs-comment"># Accept traffic sent to port 80</span>
-<span class="hljs-attr"> - name:</span> http
-<span class="hljs-attr"> port:</span> <span class="hljs-number">80</span>
-<span class="hljs-attr"> targetPort:</span> <span class="hljs-number">80</span>
+<span class="hljs-comment"># Accept traffic sent to port 80</span>
+<span class="hljs-attr"> - name:</span> http
+<span class="hljs-attr"> port:</span> <span class="hljs-number">80</span>
+<span class="hljs-attr"> targetPort:</span> <span class="hljs-number">80</span>
 <span class="hljs-attr"> selector:</span>
 <span class="hljs-comment"># Loadbalance traffic across Pods matching</span>
 <span class="hljs-comment"># this label selector</span>

@@ -18156,11 +18156,11 @@ metadata:
   name: service-example
 spec:
   ports:
-  - name: http
-    port: 80
-    targetPort: 80
+  - name: http
+    port: 80
+    targetPort: 80
   selector:
-    app: nginx
+    app: nginx
   type: LoadBalancer
 '</span> | kubectl create <span class="hljs-_">-f</span> -
 </code></pre>

@@ -18176,11 +18176,11 @@ metadata:
   name: service-example
 spec:
   ports:
-  - name: http
-    port: 80
-    targetPort: 80
+  - name: http
+    port: 80
+    targetPort: 80
   selector:
-    app: nginx
+    app: nginx
   type: LoadBalancer
 '</span> <span class="hljs-symbol">http:</span>/<span class="hljs-regexp">/127.0.0.1:8001/api</span><span class="hljs-regexp">/v1/namespaces</span><span class="hljs-regexp">/default/services</span>
 </code></pre>

@@ -17849,10 +17849,10 @@ Appears In <a href="#ingress-v1beta1-extensions">Ingress</a> </aside>
 <span class="hljs-attr"> name:</span> service-example
 <span class="hljs-attr">spec:</span>
 <span class="hljs-attr"> ports:</span>
-<span class="hljs-comment"># Accept traffic sent to port 80</span>
-<span class="hljs-attr"> - name:</span> http
-<span class="hljs-attr"> port:</span> <span class="hljs-number">80</span>
-<span class="hljs-attr"> targetPort:</span> <span class="hljs-number">80</span>
+<span class="hljs-comment"># Accept traffic sent to port 80</span>
+<span class="hljs-attr"> - name:</span> http
+<span class="hljs-attr"> port:</span> <span class="hljs-number">80</span>
+<span class="hljs-attr"> targetPort:</span> <span class="hljs-number">80</span>
 <span class="hljs-attr"> selector:</span>
 <span class="hljs-comment"># Loadbalance traffic across Pods matching</span>
 <span class="hljs-comment"># this label selector</span>

@@ -17873,10 +17873,10 @@ Appears In <a href="#ingress-v1beta1-extensions">Ingress</a> </aside>
 <span class="hljs-attr"> name:</span> service-example
 <span class="hljs-attr">spec:</span>
 <span class="hljs-attr"> ports:</span>
-<span class="hljs-comment"># Accept traffic sent to port 80</span>
-<span class="hljs-attr"> - name:</span> http
-<span class="hljs-attr"> port:</span> <span class="hljs-number">80</span>
-<span class="hljs-attr"> targetPort:</span> <span class="hljs-number">80</span>
+<span class="hljs-comment"># Accept traffic sent to port 80</span>
+<span class="hljs-attr"> - name:</span> http
+<span class="hljs-attr"> port:</span> <span class="hljs-number">80</span>
+<span class="hljs-attr"> targetPort:</span> <span class="hljs-number">80</span>
 <span class="hljs-attr"> selector:</span>
 <span class="hljs-comment"># Loadbalance traffic across Pods matching</span>
 <span class="hljs-comment"># this label selector</span>

@@ -18048,11 +18048,11 @@ metadata:
   name: service-example
 spec:
   ports:
-  - name: http
-    port: 80
-    targetPort: 80
+  - name: http
+    port: 80
+    targetPort: 80
   selector:
-    app: nginx
+    app: nginx
   type: LoadBalancer
 '</span> | kubectl create <span class="hljs-_">-f</span> -
 </code></pre>

@@ -18068,11 +18068,11 @@ metadata:
   name: service-example
 spec:
   ports:
-  - name: http
-    port: 80
-    targetPort: 80
+  - name: http
+    port: 80
+    targetPort: 80
   selector:
-    app: nginx
+    app: nginx
   type: LoadBalancer
 '</span> <span class="hljs-symbol">http:</span>/<span class="hljs-regexp">/127.0.0.1:8001/api</span><span class="hljs-regexp">/v1/namespaces</span><span class="hljs-regexp">/default/services</span>
 </code></pre>

@@ -17914,10 +17914,10 @@ metadata:
 </span> name: service-example
 spec:
   ports:
-  # Accept traffic sent <span class="hljs-keyword">to</span><span class="hljs-built_in"> port </span>80
-  - name: http
-    port: 80
-    targetPort: 80
+  # Accept traffic sent <span class="hljs-keyword">to</span><span class="hljs-built_in"> port </span>80
+  - name: http
+    port: 80
+    targetPort: 80
   selector:
   # Loadbalance traffic across Pods matching
   # this label selector

@@ -17938,10 +17938,10 @@ metadata:
 </span> name: service-example
 spec:
   ports:
-  # Accept traffic sent <span class="hljs-keyword">to</span><span class="hljs-built_in"> port </span>80
-  - name: http
-    port: 80
-    targetPort: 80
+  # Accept traffic sent <span class="hljs-keyword">to</span><span class="hljs-built_in"> port </span>80
+  - name: http
+    port: 80
+    targetPort: 80
   selector:
   # Loadbalance traffic across Pods matching
   # this label selector

@@ -18129,11 +18129,11 @@ $ echo 'kind: Service
 name</span>: service-example
 <span class="hljs-attribute">spec:
   ports:
-  - name</span>: http
-    port: 80
-    targetPort: 80
+  - name</span>: http
+    port: 80
+    targetPort: 80
   selector:
-    app: nginx
+    app: nginx
   type: LoadBalancer
 ' | kubectl create -f -
 </code></pre>

@@ -18149,11 +18149,11 @@ metadata:
   name: service-example
 spec:
   ports:
-  - name: http
-    port: 80
-    targetPort: 80
+  - name: http
+    port: 80
+    targetPort: 80
   selector:
-    app: nginx
+    app: nginx
   type: LoadBalancer
 '</span> <span class="hljs-symbol">http:</span>/<span class="hljs-regexp">/127.0.0.1:8001/api</span><span class="hljs-regexp">/v1/namespaces</span><span class="hljs-regexp">/default/services</span>
 </code></pre>
@@ -65,9 +65,9 @@ or service through the apiserver's proxy functionality.
 ### apiserver -> kubelet
 
 The connections from the apiserver to the kubelet are used for:
-* fetching logs for pods.
-* attaching (through kubectl) to running pods.
-* the kubelet's port-forwarding functionality.
+* Fetching logs for pods.
+* Attaching (through kubectl) to running pods.
+* Providing the kubelet's port-forwarding functionality.
 
 These connections terminate at the kubelet's HTTPS endpoint. By default,
 the apiserver does not verify the kubelet's serving certificate,
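
For reference, the three connection types listed above map onto everyday kubectl commands; a hedged sketch with a hypothetical pod name:

```shell
kubectl logs my-pod                  # fetching logs for pods
kubectl attach -it my-pod            # attaching to a running pod
kubectl port-forward my-pod 8080:80  # the kubelet's port-forwarding functionality
```
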
@@ -37,7 +37,7 @@ review the "normal" way that networking works with Docker. By default, Docker
 uses host-private networking. It creates a virtual bridge, called `docker0` by
 default, and allocates a subnet from one of the private address blocks defined
 in [RFC1918](https://tools.ietf.org/html/rfc1918) for that bridge. For each
-container that Docker creates, it allocates a virtual ethernet device (called
+container that Docker creates, it allocates a virtual Ethernet device (called
 `veth`) which is attached to the bridge. The veth is mapped to appear as `eth0`
 in the container, using Linux namespaces. The in-container `eth0` interface is
 given an IP address from the bridge's address range.
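
A hedged way to observe the bridge and veth pairing described above (the container name is illustrative):

```shell
ip addr show docker0                      # the bridge and its RFC1918 subnet
docker run -d --name web nginx
docker inspect --format '{{ .NetworkSettings.IPAddress }}' web   # in-container eth0 address
ip link | grep veth                       # host-side veth endpoints, one per container
```
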
@@ -79,7 +79,7 @@ Mi, Ki. For example, the following represent roughly the same value:
 
 Here's an example.
 The following Pod has two Containers. Each Container has a request of 0.25 cpu
-and 64MiB (2<sup>26</sup> bytes) of memory Each Container has a limit of 0.5
+and 64MiB (2<sup>26</sup> bytes) of memory. Each Container has a limit of 0.5
 cpu and 128MiB of memory. You can say the Pod has a request of 0.5 cpu and 128
 MiB of memory, and a limit of 1 cpu and 256MiB of memory.
 
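
A sketch of the Pod that passage describes; the requests and limits come from the text, while the names and images are illustrative:

```shell
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 250m       # 0.25 cpu
        memory: 64Mi
      limits:
        cpu: 500m       # 0.5 cpu
        memory: 128Mi
  - name: sidecar
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 250m
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
EOF
```
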
@@ -26,7 +26,7 @@ What constitutes a compatible change and how to change the API are detailed by t
 
 Complete API details are documented using [Swagger v1.2](http://swagger.io/) and [OpenAPI](https://www.openapis.org/). The Kubernetes apiserver (aka "master") exposes an API that can be used to retrieve the Swagger v1.2 Kubernetes API spec located at `/swaggerapi`. You can also enable a UI to browse the API documentation at `/swagger-ui` by passing the `--enable-swagger-ui=true` flag to apiserver.
 
-Starting with kubernetes 1.4, OpenAPI spec is also available at [`/swagger.json`](https://git.k8s.io/kubernetes/api/openapi-spec/swagger.json). While we are transitioning from Swagger v1.2 to OpenAPI (aka Swagger v2.0), some of the tools such as kubectl and swagger-ui are still using v1.2 spec. OpenAPI spec is in Beta as of Kubernetes 1.5.
+Starting with Kubernetes 1.4, OpenAPI spec is also available at [`/swagger.json`](https://git.k8s.io/kubernetes/api/openapi-spec/swagger.json). While we are transitioning from Swagger v1.2 to OpenAPI (aka Swagger v2.0), some of the tools such as kubectl and swagger-ui are still using v1.2 spec. OpenAPI spec is in Beta as of Kubernetes 1.5.
 
 Kubernetes implements an alternative Protobuf based serialization format for the API that is primarily intended for intra-cluster communication, documented in the [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/protobuf.md) and the IDL files for each schema are located in the Go packages that define the API objects.
 
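
A hedged sketch of fetching both specs, assuming `kubectl proxy` is serving the API locally:

```shell
kubectl proxy --port=8001 &
curl http://localhost:8001/swaggerapi     # Swagger v1.2 spec
curl http://localhost:8001/swagger.json   # OpenAPI (Swagger v2.0) spec
```
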
@@ -101,9 +101,9 @@ spec:
   name: busybox
   clusterIP: None
   ports:
-  - name: foo # Actually, no port is needed.
-    port: 1234
-    targetPort: 1234
+  - name: foo # Actually, no port is needed.
+    port: 1234
+    targetPort: 1234
 ---
 apiVersion: v1
 kind: Pod
@@ -52,9 +52,9 @@ spec:
   selector:
     app: MyApp
   ports:
-  - protocol: TCP
-    port: 80
-    targetPort: 9376
+  - protocol: TCP
+    port: 80
+    targetPort: 9376
 ```
 
 This specification will create a new `Service` object named "my-service" which
@@ -97,9 +97,9 @@ metadata:
   name: my-service
 spec:
   ports:
-  - protocol: TCP
-    port: 80
-    targetPort: 9376
+  - protocol: TCP
+    port: 80
+    targetPort: 9376
 ```
 
 Because this service has no selector, the corresponding `Endpoints` object will not be
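
Because the Service above has no selector, the matching `Endpoints` object must be created by hand; a minimal sketch with an illustrative backend address:

```shell
kubectl create -f - <<EOF
kind: Endpoints
apiVersion: v1
metadata:
  name: my-service
subsets:
- addresses:
  - ip: 1.2.3.4        # illustrative backend IP
  ports:
  - port: 9376
EOF
```
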
@@ -216,17 +216,17 @@ apiVersion: v1
 metadata:
   name: my-service
 spec:
-  selector:
-    app: MyApp
-  ports:
-  - name: http
-    protocol: TCP
-    port: 80
-    targetPort: 9376
-  - name: https
-    protocol: TCP
-    port: 443
-    targetPort: 9377
+  selector:
+    app: MyApp
+  ports:
+  - name: http
+    protocol: TCP
+    port: 80
+    targetPort: 9376
+  - name: https
+    protocol: TCP
+    port: 443
+    targetPort: 9377
 ```
 
 ## Choosing your own IP address

@@ -404,17 +404,17 @@ spec:
   selector:
     app: MyApp
   ports:
-  - protocol: TCP
-    port: 80
-    targetPort: 9376
-    nodePort: 30061
+  - protocol: TCP
+    port: 80
+    targetPort: 9376
+    nodePort: 30061
   clusterIP: 10.0.171.239
   loadBalancerIP: 78.11.24.19
   type: LoadBalancer
 status:
   loadBalancer:
     ingress:
-    - ip: 146.148.47.155
+    - ip: 146.148.47.155
 ```
 
 Traffic from the external load balancer will be directed at the backend `Pods`,
@@ -531,12 +531,12 @@ spec:
   selector:
     app: MyApp
   ports:
-  - name: http
-    protocol: TCP
-    port: 80
-    targetPort: 9376
+  - name: http
+    protocol: TCP
+    port: 80
+    targetPort: 9376
   externalIPs:
-  - 80.11.12.10
+  - 80.11.12.10
 ```
 
 ## Shortcomings
 
@@ -581,18 +581,27 @@ __Important: You must create VMDK using one of the following method before using
 
 #### Creating a VMDK volume
 
-* Create using vmkfstools.
+Choose one of the following methods to create a VMDK.
 
-First ssh into ESX and then use following command to create vmdk,
+{% capture vmkfstools %}
+First ssh into ESX, then use the following command to create a VMDK:
 
 ```shell
 vmkfstools -c 2G /vmfs/volumes/DatastoreName/volumes/myDisk.vmdk
 ```
+{% endcapture %}
 
-* Create using vmware-vdiskmanager.
+{% capture vdiskmanager %}
+Use the following command to create a VMDK:
+
 ```shell
 vmware-vdiskmanager -c -t 0 -s 40GB -a lsilogic myDisk.vmdk
 ```
+{% endcapture %}
+
+{% assign tab_names = 'Create using vmkfstools,Create using vmware-vdiskmanager' | split: ',' | compact %}
+{% assign tab_contents = site.emptyArray | push: vmkfstools | push: vdiskmanager %}
+{% include tabs.md %}
 
 #### vSphere VMDK Example configuration
 
@@ -98,9 +98,20 @@ when the pod is created, so it is ignored by the scheduler). Therefore:
 - DaemonSet controller can make pods even when the scheduler has not been started, which can help cluster
   bootstrap.
 
-Daemon pods do respect [taints and tolerations](/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature), but they are
-created with `NoExecute` tolerations for the `node.alpha.kubernetes.io/notReady` and `node.alpha.kubernetes.io/unreachable`
-taints with no `tolerationSeconds`. This ensures that when the `TaintBasedEvictions` alpha feature is enabled,
+Daemon pods do respect [taints and tolerations](/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature),
+but they are created with `NoExecute` tolerations for the following taints with no `tolerationSeconds`:
+
+- `node.alpha.kubernetes.io/notReady`
+- `node.alpha.kubernetes.io/unreachable`
+- `node.alpha.kubernetes.io/memoryPressure`
+- `node.alpha.kubernetes.io/diskPressure`
+
+When the support to critical pods is enabled and the pods in a DaemonSet are
+labelled as critical, the Daemon pods are created with an additional
+`NoExecute` toleration for the `node.alpha.kubernetes.io/outOfDisk` taint with
+no `tolerationSeconds`.
+
+This ensures that when the `TaintBasedEvictions` alpha feature is enabled,
 they will not be evicted when there are node problems such as a network partition. (When the
 `TaintBasedEvictions` feature is not enabled, they are also not evicted in these scenarios, but
 due to hard-coded behavior of the NodeController rather than due to tolerations).
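
A hedged way to confirm those automatic tolerations on a live daemon pod (the pod name is hypothetical):

```shell
kubectl get pod my-daemonset-pod -o jsonpath='{.spec.tolerations}'
```
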
@@ -157,9 +157,9 @@ metadata:
   name: myservice
 spec:
   ports:
-  - protocol: TCP
-    port: 80
-    targetPort: 9376
+  - protocol: TCP
+    port: 80
+    targetPort: 9376
 ---
 kind: Service
 apiVersion: v1

@@ -167,9 +167,9 @@ metadata:
   name: mydb
 spec:
   ports:
-  - protocol: TCP
-    port: 80
-    targetPort: 9377
+  - protocol: TCP
+    port: 80
+    targetPort: 9377
 ```
 
 This Pod can be started and debugged with the following commands:
@@ -12,7 +12,11 @@ This page assumes you have a working Juju deployed cluster.
 {% capture steps %}
 ## Connecting Datadog
 
-Datadog is a SaaS offering which includes support for a range of integrations, including Kubernetes and ETCD. While the solution is SAAS/Commercial, they include a Free tier which is supported with the following method. To deploy a full Kubernetes stack with Datadog out of the box, do: `juju deploy canonical-kubernetes-datadog`
+Datadog is a SaaS offering which includes support for a range of integrations, including Kubernetes and ETCD. While the solution is SAAS/Commercial, they include a Free tier which is supported with the following method. To deploy a full Kubernetes stack with Datadog out of the box, do:
+
+```
+juju deploy canonical-kubernetes-datadog
+```
 
 ### Installation of Datadog
 

@@ -132,4 +136,4 @@ juju configure nrpe-external-master nagios_master=255.255.255.255
 Once configured, connect nrpe-external-master as outlined above.
 {% endcapture %}
 
-{% include templates/task.md %}
\ No newline at end of file
+{% include templates/task.md %}
@@ -6,7 +6,7 @@ Static compilation of html from markdown including processing for grouping code
 
 > bdocs-tab:kubectl Deployment Config to run 3 nginx instances (max rollback set to 10 revisions).
 
-bdocs-tab:tab will be stripped during rendering and utilized to with CSS to show or hide the prefered tab. kubectl indicates the desired tab, since blockquotes have no specific syntax highlighting.
+bdocs-tab:tab will be stripped during rendering and utilized to with CSS to show or hide the preferred tab. kubectl indicates the desired tab, since blockquotes have no specific syntax highlighting.
 
 ```bdocs-tab:kubectl_yaml
 apiVersion: extensions/v1beta1

@@ -6,7 +6,7 @@ Static compilation of html from markdown including processing for grouping code
 
 > bdocs-tab:kubectl Deployment Config to run 3 nginx instances (max rollback set to 10 revisions).
 
-bdocs-tab:tab will be stripped during rendering and utilized to with CSS to show or hide the prefered tab. kubectl indicates the desired tab, since blockquotes have no specific syntax highlighting.
+bdocs-tab:tab will be stripped during rendering and utilized to with CSS to show or hide the preferred tab. kubectl indicates the desired tab, since blockquotes have no specific syntax highlighting.
 
 ```bdocs-tab:kubectl_yaml
 apiVersion: extensions/v1beta1
@@ -80,7 +80,7 @@ cloud providers is difficult.
 
 ### (1/4) Installing kubeadm on your hosts
 
-See [Installing kubeadm](/docs/setup/independent/install-kubeadm/)
+See [Installing kubeadm](/docs/setup/independent/install-kubeadm/).
 
 **Note:** If you already have kubeadm installed, you should do a `apt-get update &&
 apt-get upgrade` or `yum update` to get the latest version of kubeadm.
@@ -211,7 +211,7 @@ Please select one of the tabs to see installation instructions for the respectiv
 
 {% capture calico %}
 
-The official Calico guide is [here](http://docs.projectcalico.org/latest/getting-started/kubernetes/installation/hosted/kubeadm/)
+The official Calico guide is [here](http://docs.projectcalico.org/latest/getting-started/kubernetes/installation/hosted/kubeadm/).
 
 **Note:**
 - In order for Network Policy to work correctly, you need to pass `--pod-network-cidr=192.168.0.0/16` to `kubeadm init`
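
For instance, a sketch of the init invocation that note calls for:

```shell
kubeadm init --pod-network-cidr=192.168.0.0/16
```
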
@@ -224,7 +224,7 @@ kubectl apply -f http://docs.projectcalico.org/v2.4/getting-started/kubernetes/i
 
 {% capture canal %}
 
-The official Canal set-up guide is [here](https://github.com/projectcalico/canal/tree/master/k8s-install)
+The official Canal set-up guide is [here](https://github.com/projectcalico/canal/tree/master/k8s-install).
 
 **Note:**
 - For Canal to work correctly, `--pod-network-cidr=10.244.0.0/16` has to be passed to `kubeadm init`.

@@ -251,7 +251,7 @@ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documen
 
 {% capture romana %}
 
-The official Romana set-up guide is [here](https://github.com/romana/romana/tree/master/containerize#using-kubeadm)
+The official Romana set-up guide is [here](https://github.com/romana/romana/tree/master/containerize#using-kubeadm).
 
 **Note:** Romana works on `amd64` only.
 

@@ -262,7 +262,7 @@ kubectl apply -f https://raw.githubusercontent.com/romana/romana/master/containe
 
 {% capture weave_net %}
 
-The official Weave Net set-up guide is [here](https://www.weave.works/docs/net/latest/kube-addon/)
+The official Weave Net set-up guide is [here](https://www.weave.works/docs/net/latest/kube-addon/).
 
 **Note:** Weave Net works on `amd64`, `arm` and `arm64` without any extra action required.
 
@@ -538,9 +538,7 @@ You may have trouble in the configuration if you see Pod statuses like `RunConta
    second network interface, not the first one). By default, it doesn't do this
    and kubelet ends-up using first non-loopback network interface, which is
    usually NATed. Workaround: Modify `/etc/hosts`, take a look at this
-   [`Vagrantfile`][ubuntu-vagrantfile] for how this can be achieved.
-
-   [ubuntu-vagrantfile]: https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11)
+   `Vagrantfile`[ubuntu-vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11) for how this can be achieved.
 
 1. The following error indicates a possible certificate mismatch.
 
@@ -559,9 +557,8 @@ Another workaround is to overwrite the default `kubeconfig` for the "admin" user
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    ```
 
-1. If you are using CentOS and encounter difficulty while setting up the master node:
-
-   Verify that your Docker cgroup driver matches the kubelet config:
+1. If you are using CentOS and encounter difficulty while setting up the master node,
+   verify that your Docker cgroup driver matches the kubelet config:
 
    ```
    docker info |grep -i cgroup
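
A hedged continuation of that check, assuming the kubeadm-era kubelet drop-in path:

```shell
docker info | grep -i cgroup
grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# If the two disagree, align the kubelet flag (e.g. --cgroup-driver=cgroupfs),
# then reload and restart:
systemctl daemon-reload && systemctl restart kubelet
```
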
@@ -200,7 +200,7 @@ You have several options for connecting to nodes, pods and services from outside
   or it may expose it to the internet. Think about whether the service being exposed is secure.
   Does it do its own authentication?
 - Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
-  place a unique label on the pod it and create a new service which selects this label.
+  place a unique label on the pod and create a new service which selects this label.
 - In most cases, it should not be necessary for application developer to directly access
   nodes via their nodeIPs.
 - Access services, nodes, or pods using the Proxy Verb.
@@ -138,7 +138,7 @@ current-context: federal-context
 
 `current-context` is the nickname or 'key' for the cluster,user,namespace tuple that kubectl
 will use by default when loading config from this file. You can override any of the values in kubectl
-from the commandline, by passing `--context=CONTEXT`, `--cluster=CLUSTER`, `--user=USER`, and/or `--namespace=NAMESPACE` respectively.
+from the command line, by passing `--context=CONTEXT`, `--cluster=CLUSTER`, `--user=USER`, and/or `--namespace=NAMESPACE` respectively.
 You can change the `current-context` with [`kubectl config use-context`](/docs/user-guide/kubectl/{{page.version}}/#-em-use-context-em-).
 
 #### miscellaneous
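
A sketch of both override styles, using the `federal-context` nickname from the example file:

```shell
kubectl config use-context federal-context   # change the default context
kubectl --context=federal-context get pods   # or override it per invocation
```
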
@@ -315,7 +315,7 @@ So, tying this all together, a quick start to create your own kubeconfig file:
 - Make sure your api-server provides at least one set of credentials (for example, `green-user`) when launched. You will of course have to look at api-server documentation in order to determine the current state-of-the-art in terms of providing authentication details.
 
 ## Related discussion
-[http://issue.k8s.io/1755](http://issue.k8s.io/1755)
+[https://github.com/kubernetes/kubernetes/issues/1755](https://github.com/kubernetes/kubernetes/issues/1755)
 {% endcapture %}
 
 {% include templates/task.md %}
@@ -26,8 +26,8 @@ metadata:
   name: myapp
 spec:
   ports:
-  - port: 8765
-    targetPort: 9376
+  - port: 8765
+    targetPort: 9376
   selector:
     app: example
   type: LoadBalancer

@@ -44,8 +44,8 @@ metadata:
   name: myapp
 spec:
   ports:
-  - port: 8765
-    targetPort: 9376
+  - port: 8765
+    targetPort: 9376
   selector:
     app: example
   type: LoadBalancer

@@ -7,9 +7,9 @@ spec:
     app: hello
     tier: frontend
   ports:
-  - protocol: "TCP"
-    port: 80
-    targetPort: 80
+  - protocol: "TCP"
+    port: 80
+    targetPort: 80
   type: LoadBalancer
 ---
 apiVersion: apps/v1beta1

@@ -26,9 +26,9 @@ spec:
         track: stable
     spec:
       containers:
-      - name: nginx
-        image: "gcr.io/google-samples/hello-frontend:1.0"
-        lifecycle:
-          preStop:
-            exec:
-              command: ["/usr/sbin/nginx","-s","quit"]
+      - name: nginx
+        image: "gcr.io/google-samples/hello-frontend:1.0"
+        lifecycle:
+          preStop:
+            exec:
+              command: ["/usr/sbin/nginx","-s","quit"]

@@ -7,6 +7,6 @@ spec:
     app: hello
     tier: backend
   ports:
-  - protocol: TCP
-    port: 80
-    targetPort: http
+  - protocol: TCP
+    port: 80
+    targetPort: http
@@ -46,7 +46,7 @@ for all items returned.
 
 As an alternative, it is possible to use the absolute path to the image
 field within the Pod. This ensures the correct field is retrieved
-in the even the field name is repeated,
+even when the field name is repeated,
 e.g. many fields are called `name` within a given item:
 
 ```sh
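
A hedged illustration of such an absolute path (the field layout is the standard Pod schema; the doc's own command may differ):

```shell
kubectl get pods -o jsonpath='{.items[*].spec.containers[*].image}'
```
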
@@ -43,7 +43,7 @@ the corresponding `PersistentVolume` is not be deleted. Instead, it is moved to
 This list also includes the name of the claims that are bound to each volume
 for easier identification of dynamically provisioned volumes.
 
-1. Chose one of your PersistentVolumes and change its reclaim policy:
+1. Choose one of your PersistentVolumes and change its reclaim policy:
 
        kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
 
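
A hedged follow-up to confirm the patch took effect:

```shell
kubectl get pv <your-pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
# Expected output: Retain
```
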
@@ -0,0 +1,78 @@
+---
+assignees:
+- danwent
+title: Use Cilium for NetworkPolicy
+---
+
+{% capture overview %}
+This page shows how to use Cilium for NetworkPolicy.
+
+For background on Cilium, read the [Introduction to Cilium](http://cilium.readthedocs.io/en/latest/intro/).
+{% endcapture %}
+
+{% capture prerequisites %}
+
+{% include task-tutorial-prereqs.md %}
+
+{% endcapture %}
+
+{% capture steps %}
+## Deploying Cilium on Minikube for Basic Testing
+
+To get familiar with Cilium easily you can follow the
+[Cilium Kubernetes Getting Started Guide](http://www.cilium.io/try)
+to perform a basic DaemonSet installation of Cilium in minikube.
+
+Installation in a minikube setup uses a simple ''all-in-one'' YAML
+file that includes DaemonSet configurations for Cilium and a key-value store
+(consul) as well as appropriate RBAC settings:
+
+```shell
+$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/master/examples/minikube/cilium-ds.yaml
+clusterrole "cilium" created
+serviceaccount "cilium" created
+clusterrolebinding "cilium" created
+daemonset "cilium-consul" created
+daemonset "cilium" created
+```
+
+The remainder of the Getting Started Guide explains how to enforce both L3/L4 (i.e., IP address + port) security
+policies, as well as L7 (e.g., HTTP) security policies using an example application.
+
+## Deploying Cilium for Production Use
+
+For detailed instructions around deploying Cilium for production, see:
+[Cilium Administrator Guide](http://cilium.readthedocs.io/en/latest/admin/) This
+documentation includes detailed requirements, instructions and example production DaemonSet files.
+
+{% endcapture %}
+
+{% capture discussion %}
+## Understanding Cilium components
+
+Deploying a cluster with Cilium adds Pods to the `kube-system` namespace. To see this list of Pods run:
+
+```shell
+kubectl get pods --namespace=kube-system
+```
+
+You'll see a list of Pods similar to this:
+
+```console
+NAME      DESIRED   CURRENT   READY     NODE-SELECTOR   AGE
+cilium    1         1         1         <none>          2m
+...
+```
+
+There are two main components to be aware of:
+
+- One `cilium` Pod runs on each node in your cluster and enforces network policy on the traffic to/from Pods on that node using Linux BPF.
+- For production deployments, Cilium should leverage the key-value store cluster (e.g., etcd) used by Kubernetes, which typically runs on the Kubernetes master nodes. The [Cilium Administrator Guide](http://cilium.readthedocs.io/en/latest/admin/) includes an example DaemonSet which can be customized to point to this key-value store cluster. The simple ''all-in-one'' DaemonSet for minikube requires no such configuration because it automatically deploys a `cilium-consul` Pod to provide a key-value store.
+
+{% endcapture %}
+
+{% capture whatsnext %}
+Once your cluster is running, you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy with Cilium. Have fun, and if you have questions, contact us using the [Cilium Slack Channel](https://cilium.herokuapp.com/).
+{% endcapture %}
+
+{% include templates/task.md %}
@@ -12,6 +12,7 @@ This document helps you get started using the Kubernetes [NetworkPolicy API](/do
 You'll need to have a Kubernetes cluster in place, with network policy support. There are a number of network providers that support NetworkPolicy, including:
 
 * [Calico](/docs/tasks/configure-pod-container/calico-network-policy/)
+* [Cilium](/docs/tasks/configure-pod-container/cilium-network-policy/)
 * [Romana](/docs/tasks/configure-pod-container/romana-network-policy/)
 * [Weave Net](/docs/tasks/configure-pod-container/weave-network-policy/)
 
@@ -137,7 +137,7 @@ program to retrieve the contents of your secret.
 4. Verify the secret is correctly decrypted when retrieved via the API:
 
    ```
-   kubectl describe secret generic -n default
+   kubectl describe secret secret1 -n default
    ```
 
    should match `mykey: mydata`
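
For context, a sketch of creating the secret being verified (key and value come from the surrounding text):

```shell
kubectl create secret generic secret1 -n default --from-literal=mykey=mydata
```
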
@@ -211,7 +211,6 @@ The output shows that the Container starts and fails repeatedly:
 ```
 ... Normal   Created    Created container with id 66a3a20aa7980e61be4922780bf9d24d1a1d8b7395c09861225b0eba1b1f8511
 ... Warning  BackOff    Back-off restarting failed container
-
 ```
 
 View detailed information about your cluster's Nodes:
@@ -332,7 +332,7 @@ need to set the `level` section. This sets the
 [Multi-Category Security (MCS)](https://selinuxproject.org/page/NB_MLS)
 label given to all Containers in the Pod as well as the Volumes.
 
-**Warning:** After you specify an MCS label for a Pod, all Pods with the same label will able to access the Volume. So if you need inter-Pod protection, you must ensure each Pod is assigned a unique MCS label.
+**Warning:** After you specify an MCS label for a Pod, all Pods with the same label can access the Volume. If you need inter-Pod protection, you must assign a unique MCS label to each Pod.
 {: .warning}
 
 {% endcapture %}
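
A minimal sketch of assigning such an MCS level (the level value and pod details are illustrative):

```shell
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: selinux-demo
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"   # illustrative MCS label
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF
```
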
@@ -227,8 +227,8 @@ due to caching by intermediate DNS servers.
 
 1. Notice that there is a normal ('A') record for each service shard that has at least one healthy backend endpoint. For example, in us-central1-a, 104.197.247.191 is the external IP address of the service shard in that zone, and in asia-east1-a the address is 130.211.56.221.
-2. For zones where there are currently no healthy backend endpoints, a CNAME ('Canonical Name') record is used to alias (automatically redirect) those queries to the next closest healthy zone. In the example, the service shard in us-central1-f currently has no healthy backend endpoints (i.e. Pods), so a CNAME record has been created to automatically redirect queries to other shards in that region (us-central1 in this case).
-3. Similarly, if no healthy shards exist in the enclosing region, the search progresses further afield. In the europe-west1-d availability zone, there are no healthy backends, so queries are redirected to the broader europe-west1 region (which also has no healthy backends), and onward to the global set of healthy addresses (' nginx.mynamespace.myfederation.svc.example.com.')
+2. Similarly, there are regional 'A' records which include all healthy shards in that region. For example, 'us-central1'. These regional records are useful for clients which do not have a particular zone preference, and as a building block for the automated locality and failover mechanism described below.
+3. For zones where there are currently no healthy backend endpoints, a CNAME ('Canonical Name') record is used to alias (automatically redirect) those queries to the next closest healthy zone. In the example, the service shard in us-central1-f currently has no healthy backend endpoints (i.e. Pods), so a CNAME record has been created to automatically redirect queries to other shards in that region (us-central1 in this case).
+4. Similarly, if no healthy shards exist in the enclosing region, the search progresses further afield. In the europe-west1-d availability zone, there are no healthy backends, so queries are redirected to the broader europe-west1 region (which also has no healthy backends), and onward to the global set of healthy addresses (' nginx.mynamespace.myfederation.svc.example.com.').
 
 The above set of DNS records is automatically kept in sync with the
 current state of health of all service shards globally by the
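
A hedged way to watch that resolution chain from a client, using the example's names (the zone-qualified form is inferred from the text):

```shell
dig +short nginx.mynamespace.myfederation.svc.us-central1-f.example.com
# Expect a CNAME toward the regional record when the zone has no healthy
# endpoints, e.g. nginx.mynamespace.myfederation.svc.us-central1.example.com.
```
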
@@ -355,7 +355,7 @@ how to bring up a cluster federation correctly (or have your cluster administrat
 #### I can create a federated service successfully against the cluster federation API, but no matching services are created in my underlying clusters
 Check that:
 
-1. Your clusters are correctly registered in the Cluster Federation API (`kubectl describe clusters`)
+1. Your clusters are correctly registered in the Cluster Federation API (`kubectl describe clusters`).
 2. Your clusters are all 'Active'. This means that the cluster Federation system was able to connect and authenticate against the clusters' endpoints. If not, consult the logs of the federation-controller-manager pod to ascertain what the failure might be. (`kubectl --namespace=federation logs $(kubectl get pods --namespace=federation -l module=federation-controller-manager -o name`)
 3. That the login credentials provided to the Cluster Federation API for the clusters have the correct authorization and quota to create services in the relevant namespace in the clusters. Again you should see associated error messages providing more detail in the above log file if this is not the case.
 4. Whether any other error is preventing the service creation operation from succeeding (look for `service-controller` errors in the output of `kubectl logs federation-controller-manager --namespace federation`).

@@ -365,7 +365,7 @@ Check that:
 
 1. Your federation name, DNS provider, DNS domain name are configured correctly. Consult the [federation admin guide](/docs/admin/federation/) or [tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation) to learn
    how to configure your Cluster Federation system's DNS provider (or have your cluster administrator do this for you).
-2. Confirm that the Cluster Federation's service-controller is successfully connecting to and authenticating against your selected DNS provider (look for `service-controller` errors or successes in the output of `kubectl logs federation-controller-manager --namespace federation`)
+2. Confirm that the Cluster Federation's service-controller is successfully connecting to and authenticating against your selected DNS provider (look for `service-controller` errors or successes in the output of `kubectl logs federation-controller-manager --namespace federation`).
 3. Confirm that the Cluster Federation's service-controller is successfully creating DNS records in your DNS provider (or outputting errors in its logs explaining in more detail what's failing).
 
 #### Matching DNS records are created in my DNS provider, but clients are unable to resolve against those names

@@ -367,7 +367,7 @@ kubefed init fellowship \
 ```
 
 For more information see
-[Setting up CoreDNS as DNS provider for Cluster Federation](/docs/tutorials/federation/set-up-coredns-provider-federation/)
+[Setting up CoreDNS as DNS provider for Cluster Federation](/docs/tutorials/federation/set-up-coredns-provider-federation/).
 
 ## Adding a cluster to a federation
 

@@ -464,7 +464,7 @@ commands.
 
 In all other cases, you must update `kube-dns` configuration manually
 as described in the
-[Updating KubeDNS section of the admin guide](/docs/admin/federation/)
+[Updating KubeDNS section of the admin guide](/docs/admin/federation/).
 
 ## Removing a cluster from a federation
 
@@ -16,7 +16,7 @@ This page shows how to perform a rollback on a DaemonSet.
 * The DaemonSet rollout history and DaemonSet rollback features are only
   supported in `kubectl` in Kubernetes version 1.7 or later.
 * Make sure you know how to [perform a rolling update on a
-  DaemonSet](/docs/tasks/manage-daemon/update-daemon-set/)
+  DaemonSet](/docs/tasks/manage-daemon/update-daemon-set/).
 
 {% endcapture %}
 
@@ -99,7 +99,7 @@ kubectl rollout status ds/<daemonset-name>
 When the rollback is complete, the output is similar to this:
 
 ```shell
-daemon set "<daemonset-name>" successfully rolled out
+daemonset "<daemonset-name>" successfully rolled out
 ```
 
 {% endcapture %}
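
For reference, a hedged sketch of the rollback that produces this status (the revision number is illustrative):

```shell
kubectl rollout undo ds/<daemonset-name> --to-revision=1
kubectl rollout status ds/<daemonset-name>
```
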
@@ -147,7 +147,7 @@ have revision 1 and 2 in the system, and roll back from revision 2 to revision
 ## Troubleshooting
 
 * See [troubleshooting DaemonSet rolling
-  update](/docs/tasks/manage-daemon/update-daemon-set/#troubleshooting)
+  update](/docs/tasks/manage-daemon/update-daemon-set/#troubleshooting).
 
 {% endcapture %}
 
@@ -4,7 +4,7 @@ metadata:
   name: mysql
 spec:
   ports:
-  - port: 3306
+  - port: 3306
   selector:
     app: mysql
   clusterIP: None
@@ -34,19 +34,19 @@ We have multiple ways to install Kompose. Our preferred method is downloading th
 
 ### GitHub release
 
-Kompose is released via GitHub on a three-week cycle, you can see all current releases on the [GitHub release page](https://github.com/kubernetes-incubator/kompose/releases).
+Kompose is released via GitHub on a three-week cycle, you can see all current releases on the [GitHub release page](https://github.com/kubernetes/kompose/releases).
 
-The current release we use is `0.5.0`.
+The current release we use is `1.0.0`.
 
 ```sh
 # Linux
-curl -L https://github.com/kubernetes-incubator/kompose/releases/download/v0.5.0/kompose-linux-amd64 -o kompose
+curl -L https://github.com/kubernetes/kompose/releases/download/v1.0.0/kompose-linux-amd64 -o kompose
 
 # macOS
-curl -L https://github.com/kubernetes-incubator/kompose/releases/download/v0.5.0/kompose-darwin-amd64 -o kompose
+curl -L https://github.com/kubernetes/kompose/releases/download/v1.0.0/kompose-darwin-amd64 -o kompose
 
 # Windows
-curl -L https://github.com/kubernetes-incubator/kompose/releases/download/v0.5.0/kompose-windows-amd64.exe -o kompose.exe
+curl -L https://github.com/kubernetes/kompose/releases/download/v1.0.0/kompose-windows-amd64.exe -o kompose.exe
 ```
 
 Make the binary executable and move it to your PATH (e.g. `/usr/local/bin`)
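
A sketch of that last step:

```shell
chmod +x kompose
sudo mv ./kompose /usr/local/bin/kompose
```
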
@@ -127,7 +127,7 @@ frontend-service.yaml mongodb-deployment.yaml redis-slave
 redis-master-deployment.yaml
 ```
 
-When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be over ridden by subsequent file.
+When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be overridden by subsequent file.
 
 Using `--bundle, --dab` to specify a DAB file as below:
 

@@ -300,7 +300,7 @@ file "redis-rc.yaml" created
 file "web-rc.yaml" created
 ```
 
-The `*-rc.yaml` files contain the Replication Controller objects. If you want to specify replicas (default is 1), use `--replicas` flag: `$ kompose convert --rc --replicas 3`
+The `*-rc.yaml` files contain the Replication Controller objects. If you want to specify replicas (default is 1), use `--replicas` flag: `$ kompose convert --rc --replicas 3`.
 
 ```console
 $ kompose convert --ds

@@ -310,7 +310,7 @@ file "redis-daemonset.yaml" created
 file "web-daemonset.yaml" created
 ```
 
-The `*-daemonset.yaml` files contain the Daemon Set objects
+The `*-daemonset.yaml` files contain the Daemon Set objects.
 
 If you want to generate a Chart to be used with [Helm](https://github.com/kubernetes/helm) simply do:
 
@@ -27,7 +27,10 @@ title: Using kubectl to Create a Deployment
 <div class="col-md-8">
     <h3>Kubernetes Deployments</h3>
     <p>
-    Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it. To do so, you create a Kubernetes <b>Deployment</b>. The Deployment is responsible for creating and updating instances of your application. Once you've created a Deployment, the Kubernetes master schedules the application instances that the Deployment creates onto individual Nodes in the cluster.
+    Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it.
+    To do so, you create a Kubernetes <b>Deployment</b> configuration. The Deployment instructs Kubernetes
+    how to create and update instances of your application. Once you've created a Deployment, the Kubernetes
+    master schedules mentioned application instances onto individual Nodes in the cluster.
     </p>
 
     <p>Once the application instances are created, a Kubernetes Deployment Controller continuously monitors those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces it. <b>This provides a self-healing mechanism to address machine failure or maintenance.</b></p>
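
A hedged command-line sketch of creating such a Deployment (name and image are illustrative; the tutorial's interactive module may use different ones):

```shell
kubectl run nginx-demo --image=nginx:1.10 --port=80   # creates a Deployment
kubectl get deployments
```
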
@@ -7,6 +7,6 @@ metadata:
 spec:
   clusterIP: None
   ports:
-  - port: 9042
+  - port: 9042
   selector:
     app: cassandra
@@ -74,7 +74,7 @@ KUBE_CONFIG_2=value-2
 ## Step Three: Create a pod that sets the command line using ConfigMap
 
 Use the [`command-pod.yaml`](command-pod.yaml) file to create a Pod with a container
-whose command is injected with the keys of a ConfigMap
+whose command is injected with the keys of a ConfigMap:
 
 ```shell
 $ kubectl create -f docs/user-guide/configmap/command-pod.yaml

@@ -89,7 +89,7 @@ value-1 value-2
 
 ## Step Four: Create a pod that consumes a configMap in a volume
 
-Pods can also consume ConfigMaps in volumes. Use the [`volume-pod.yaml`](volume-pod.yaml) file to create a Pod that consume the ConfigMap in a volume.
+Pods can also consume ConfigMaps in volumes. Use the [`volume-pod.yaml`](volume-pod.yaml) file to create a Pod that consumes the ConfigMap in a volume.
 
 ```shell
 $ kubectl create -f docs/user-guide/configmap/volume-pod.yaml

@@ -4,7 +4,7 @@ metadata:
   name: redis
 spec:
   ports:
-  - port: 6379
-    targetPort: 6379
+  - port: 6379
+    targetPort: 6379
   selector:
     app: redis
@@ -16,7 +16,9 @@ kubectl [command] [TYPE] [NAME] [flags]
 ```
 
 where `command`, `TYPE`, `NAME`, and `flags` are:
+
 * `command`: Specifies the operation that you want to perform on one or more resources, for example `create`, `get`, `describe`, `delete`.
+
 * `TYPE`: Specifies the [resource type](#resource-types). Resource types are case-sensitive and you can specify the singular, plural, or abbreviated forms. For example, the following commands produce the same output:
 
     $ kubectl get pod pod1
@@ -4,8 +4,8 @@ metadata:
   name: myapp
 spec:
   ports:
-  - port: 8765
-    targetPort: 9376
+  - port: 8765
+    targetPort: 9376
   selector:
     app: example
   type: LoadBalancer

@@ -4,7 +4,7 @@ metadata:
   name: myapp
 spec:
   ports:
-  - port: 8765
-    targetPort: 9376
+  - port: 8765
+    targetPort: 9376
   selector:
     app: example
@@ -4,12 +4,6 @@ docs/getting-started-guides/docker.md
 docs/getting-started-guides/docker-multinode.md
 docs/user-guide/configmap/README.md
 docs/user-guide/downward-api/README.md
-docs/admin/kubefed_unjoin.md
-docs/admin/kubefed_init.md
-docs/admin/kubefed.md
-docs/admin/kubefed_join.md
-docs/admin/kubefed_options.md
-docs/admin/kubefed_version.md
 docs/api-reference/extensions/v1beta1/definitions.md
 docs/api-reference/extensions/v1beta1/operations.md
 docs/api-reference/v1/definitions.md
@@ -136,7 +136,7 @@ done <_data/overrides.yml
 )
 
 
-BINARIES="federation-apiserver.md federation-controller-manager.md kube-apiserver.md kube-controller-manager.md kube-proxy.md kube-scheduler.md kubelet.md"
+BINARIES="federation-apiserver.md federation-controller-manager.md kube-apiserver.md kube-controller-manager.md kube-proxy.md kube-scheduler.md kubelet.md kubefed.md kubefed_init.md kubefed_join.md kubefed_options.md kubefed_unjoin.md kubefed_version.md"
 
 (
 cd docs/admin

@@ -149,6 +149,7 @@ BINARIES="federation-apiserver.md federation-controller-manager.md kube-apiserve
 ---' "$bin"
 done
 )
+
 mv -- "${APIREFDESDIR}" "${APIREFSRCDIR}"
 mv -- "${KUBECTLDESDIR}" "${KUBECTLSRCDIR}"
 rm -rf -- "${TMPDIR}" "${K8SREPO}"