From c4a0363943f61f4dc6ee0cdd52a4b72b36056dd6 Mon Sep 17 00:00:00 2001
From: Stewart-YU
Date: Sun, 12 Nov 2017 21:15:59 +0800
Subject: [PATCH 01/47] Update troubleshooting-kubeadm.md

Update troubleshooting docs about apiclient blocking while running
`kubeadm init`.
---
 .../independent/troubleshooting-kubeadm.md | 41 ++++++++++++++++---
 1 file changed, 35 insertions(+), 6 deletions(-)

diff --git a/docs/setup/independent/troubleshooting-kubeadm.md b/docs/setup/independent/troubleshooting-kubeadm.md
index 884514b320..0a939b5a5f 100644
--- a/docs/setup/independent/troubleshooting-kubeadm.md
+++ b/docs/setup/independent/troubleshooting-kubeadm.md
@@ -34,13 +34,42 @@ If you see the following warnings while running `kubeadm init`
 
 Then you may be missing ebtables and ethtool on your Linux machine. You can install them with the following commands:
 
-```
-# For ubuntu/debian users, try
-apt install ebtables ethtool
+- For ubuntu/debian users, try `apt install ebtables ethtool`.
+- For CentOS/Fedora users, try `yum install ebtables ethtool`.
+
+#### kubeadm blocks waiting for `control plane` during installation
+
+If you see the following blocks while running `kubeadm init`
 
-# For CentOS/Fedora users, try
-yum install ebtables ethtool
 ```
+[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
+[init] Using Kubernetes version: v1.8.0
+[init] Using Authorization modes: [Node RBAC]
+[preflight] Skipping pre-flight checks
+[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
+[certificates] Using the existing CA certificate and key.
+[certificates] Using the existing API Server certificate and key.
+[certificates] Using the existing API Server kubelet client certificate and key.
+[certificates] Using the existing service account token signing key.
+[certificates] Using the existing front-proxy CA certificate and key.
+[certificates] Using the existing front-proxy client certificate and key.
+[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
+[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
+[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
+[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
+[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
+[apiclient] Created API client, waiting for the control plane to become ready
+```
+
+Then there may can not connect to net or parameter `Cgroup Driver` diff bettween docker and kubelet in your linux machine.
+
+- Ensure your machine can connect to net.
+- If you find some log in `var/log/message`, look like:
+```
+error: failed to run Kubelet: failed to create kubelet:
+misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
+```
+then modify startup parameter `KUBELET_CGROUP_ARGS=--cgroup-driver=` in `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`(Ubuntu).
 
 #### Pods in `RunContainerError`, `CrashLoopBackOff` or `Error` state
 
@@ -141,4 +170,4 @@ The `kubectl describe pod` or `kubectl logs` commands can help you diagnose erro
 
 kubectl -n ${NAMESPACE} describe pod ${POD_NAME}
 kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME}
-```
\ No newline at end of file
+```

From d7278dbd9d04d42d713720d04eca8d47ceeada97 Mon Sep 17 00:00:00 2001
From: Qiming
Date: Tue, 14 Nov 2017 14:34:18 +0800
Subject: [PATCH 02/47] Update troubleshooting-kubeadm.md

---
 .../independent/troubleshooting-kubeadm.md | 31 ++++++-------------
 1 file changed, 10 insertions(+), 21 deletions(-)

diff --git a/docs/setup/independent/troubleshooting-kubeadm.md b/docs/setup/independent/troubleshooting-kubeadm.md
index 0a939b5a5f..8f0ca79f54 100644
--- a/docs/setup/independent/troubleshooting-kubeadm.md
+++ b/docs/setup/independent/troubleshooting-kubeadm.md
@@ -39,37 +39,26 @@ Then you may be missing ebtables and ethtool on your Linux machine. You can inst
 
 #### kubeadm blocks waiting for `control plane` during installation
 
-If you see the following blocks while running `kubeadm init`
+If you notice that `kubeadm init` hangs after printing out the following line
 
 ```
-[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
-[init] Using Kubernetes version: v1.8.0
-[init] Using Authorization modes: [Node RBAC]
-[preflight] Skipping pre-flight checks
-[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
-[certificates] Using the existing CA certificate and key.
-[certificates] Using the existing API Server certificate and key.
-[certificates] Using the existing API Server kubelet client certificate and key.
-[certificates] Using the existing service account token signing key.
-[certificates] Using the existing front-proxy CA certificate and key.
-[certificates] Using the existing front-proxy client certificate and key.
-[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
-[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
-[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
-[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
-[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
 [apiclient] Created API client, waiting for the control plane to become ready
 ```
 
-Then there may can not connect to net or parameter `Cgroup Driver` diff bettween docker and kubelet in your linux machine.
+You may want to first check if your node has network connection problem.
+Another reason that `kubeadm init` hands is that the default CGroup driver configuration
+for the kubelet differs from that used by Docker.
+
+Check the system log file (e.g. `var/log/message`) or examine the output from `journalctl -u kubelet`.
+If you see something like the following
 
-- Ensure your machine can connect to net.
-- If you find some log in `var/log/message`, look like:
 ```
 error: failed to run Kubelet: failed to create kubelet:
 misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
 ```
-then modify startup parameter `KUBELET_CGROUP_ARGS=--cgroup-driver=` in `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`(Ubuntu).
+
+you will need to fix the cgroup driver problem by following instructions
+[here](/docs/setup/independent/install-kubeadm/#installing-docker).
 
 #### Pods in `RunContainerError`, `CrashLoopBackOff` or `Error` state
 

From 8d20b9e72ab1779a6cda7965cef0c8024c34c7dd Mon Sep 17 00:00:00 2001
From: Qiming
Date: Wed, 15 Nov 2017 14:06:13 +0800
Subject: [PATCH 03/47] Update troubleshooting-kubeadm.md

---
 docs/setup/independent/troubleshooting-kubeadm.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/setup/independent/troubleshooting-kubeadm.md b/docs/setup/independent/troubleshooting-kubeadm.md
index 8f0ca79f54..d9af4adf1c 100644
--- a/docs/setup/independent/troubleshooting-kubeadm.md
+++ b/docs/setup/independent/troubleshooting-kubeadm.md
@@ -46,7 +46,7 @@
 ```
 
 You may want to first check if your node has network connection problem.
-Another reason that `kubeadm init` hands is that the default CGroup driver configuration
+Another reason that `kubeadm init` hangs could be that the default cgroup driver configuration
 for the kubelet differs from that used by Docker.
 
 Check the system log file (e.g. `var/log/message`) or examine the output from `journalctl -u kubelet`.
From 8d6ea37aa9b368c166db0dc1846b1666742f001b Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Mon, 4 Dec 2017 14:30:52 +0800 Subject: [PATCH 04/47] Fix example validation --- docs/admin/namespaces/OWNERS | 4 - docs/admin/namespaces/namespace-dev.json | 10 - .../administer-cluster}/namespace-prod.json | 0 .../namespaces-walkthrough.md | 2 +- docs/tasks/administer-cluster/namespaces.md | 2 +- .../event-exporter-deploy.yaml | 3 +- .../podpreset-allow-db-merged.yaml | 2 +- .../podpreset-allow-db.yaml | 2 +- .../podpreset-conflict-pod.yaml | 2 +- .../podpreset-replicaset-merged.yaml | 1 + .../persistent-volumes/claims/claim-01.yaml | 10 - .../persistent-volumes/claims/claim-02.yaml | 10 - .../persistent-volumes/claims/claim-03.json | 17 - .../simpletest/namespace.json | 10 - .../persistent-volumes/simpletest/pod.yaml | 20 - .../simpletest/service.json | 19 - .../persistent-volumes/volumes/gce.yaml | 13 - .../persistent-volumes/volumes/local-01.yaml | 13 - .../persistent-volumes/volumes/local-02.yaml | 14 - .../persistent-volumes/volumes/nfs.yaml | 12 - docs/user-guide/pods/OWNERS | 4 - docs/user-guide/pods/pod-config.json | 18 - docs/user-guide/pods/pod-config.yaml | 12 - docs/user-guide/pods/pod-sample.json | 32 -- docs/user-guide/pods/pod-sample.yaml | 16 - docs/user-guide/pods/pod-spec-common.json | 44 --- docs/user-guide/pods/pod-spec-common.yaml | 31 -- docs/user-guide/replication-controller/OWNERS | 7 - .../replication-controller/replication.yaml | 19 - test/examples_test.go | 364 +++++++++++++----- 30 files changed, 283 insertions(+), 430 deletions(-) delete mode 100644 docs/admin/namespaces/OWNERS delete mode 100644 docs/admin/namespaces/namespace-dev.json rename docs/{admin/namespaces => tasks/administer-cluster}/namespace-prod.json (100%) delete mode 100644 docs/user-guide/persistent-volumes/claims/claim-01.yaml delete mode 100644 docs/user-guide/persistent-volumes/claims/claim-02.yaml delete mode 100644 docs/user-guide/persistent-volumes/claims/claim-03.json 
delete mode 100644 docs/user-guide/persistent-volumes/simpletest/namespace.json delete mode 100644 docs/user-guide/persistent-volumes/simpletest/pod.yaml delete mode 100644 docs/user-guide/persistent-volumes/simpletest/service.json delete mode 100644 docs/user-guide/persistent-volumes/volumes/gce.yaml delete mode 100644 docs/user-guide/persistent-volumes/volumes/local-01.yaml delete mode 100644 docs/user-guide/persistent-volumes/volumes/local-02.yaml delete mode 100644 docs/user-guide/persistent-volumes/volumes/nfs.yaml delete mode 100644 docs/user-guide/pods/OWNERS delete mode 100644 docs/user-guide/pods/pod-config.json delete mode 100644 docs/user-guide/pods/pod-config.yaml delete mode 100644 docs/user-guide/pods/pod-sample.json delete mode 100644 docs/user-guide/pods/pod-sample.yaml delete mode 100644 docs/user-guide/pods/pod-spec-common.json delete mode 100644 docs/user-guide/pods/pod-spec-common.yaml delete mode 100644 docs/user-guide/replication-controller/OWNERS delete mode 100644 docs/user-guide/replication-controller/replication.yaml diff --git a/docs/admin/namespaces/OWNERS b/docs/admin/namespaces/OWNERS deleted file mode 100644 index cca389a741..0000000000 --- a/docs/admin/namespaces/OWNERS +++ /dev/null @@ -1,4 +0,0 @@ -approvers: -- derekwaynecarr -- janetkuo - diff --git a/docs/admin/namespaces/namespace-dev.json b/docs/admin/namespaces/namespace-dev.json deleted file mode 100644 index b2b43b0b73..0000000000 --- a/docs/admin/namespaces/namespace-dev.json +++ /dev/null @@ -1,10 +0,0 @@ -{ - "kind": "Namespace", - "apiVersion": "v1", - "metadata": { - "name": "development", - "labels": { - "name": "development" - } - } -} diff --git a/docs/admin/namespaces/namespace-prod.json b/docs/tasks/administer-cluster/namespace-prod.json similarity index 100% rename from docs/admin/namespaces/namespace-prod.json rename to docs/tasks/administer-cluster/namespace-prod.json diff --git a/docs/tasks/administer-cluster/namespaces-walkthrough.md 
b/docs/tasks/administer-cluster/namespaces-walkthrough.md index 7415b490ac..0e8567c2ab 100644 --- a/docs/tasks/administer-cluster/namespaces-walkthrough.md +++ b/docs/tasks/administer-cluster/namespaces-walkthrough.md @@ -66,7 +66,7 @@ $ kubectl create -f docs/admin/namespaces/namespace-dev.json And then let's create the production namespace using kubectl. ```shell -$ kubectl create -f docs/admin/namespaces/namespace-prod.json +$ kubectl create -f docs/tasks/administer-cluster/namespace-prod.json ``` To be sure things are right, let's list all of the namespaces in our cluster. diff --git a/docs/tasks/administer-cluster/namespaces.md b/docs/tasks/administer-cluster/namespaces.md index 727580ab33..9c85472cba 100644 --- a/docs/tasks/administer-cluster/namespaces.md +++ b/docs/tasks/administer-cluster/namespaces.md @@ -152,7 +152,7 @@ $ kubectl create -f docs/admin/namespaces/namespace-dev.json And then let's create the production namespace using kubectl. ```shell -$ kubectl create -f docs/admin/namespaces/namespace-prod.json +$ kubectl create -f docs/tasks/administer-cluster/namespace-prod.json ``` To be sure things are right, list all of the namespaces in our cluster. 
diff --git a/docs/tasks/debug-application-cluster/event-exporter-deploy.yaml b/docs/tasks/debug-application-cluster/event-exporter-deploy.yaml index 7dc547d28b..fc8c5e8eeb 100644 --- a/docs/tasks/debug-application-cluster/event-exporter-deploy.yaml +++ b/docs/tasks/debug-application-cluster/event-exporter-deploy.yaml @@ -10,7 +10,6 @@ apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: event-exporter-rb - namespace: default labels: app: event-exporter roleRef: @@ -42,4 +41,4 @@ spec: image: gcr.io/google-containers/event-exporter:v0.1.0 command: - '/event-exporter' - terminationGracePeriodSeconds: 30 \ No newline at end of file + terminationGracePeriodSeconds: 30 diff --git a/docs/tasks/inject-data-application/podpreset-allow-db-merged.yaml b/docs/tasks/inject-data-application/podpreset-allow-db-merged.yaml index 0cf950c803..4f5af10abd 100644 --- a/docs/tasks/inject-data-application/podpreset-allow-db-merged.yaml +++ b/docs/tasks/inject-data-application/podpreset-allow-db-merged.yaml @@ -28,7 +28,7 @@ spec: value: $(REPLACE_ME) envFrom: - configMapRef: - name: etcd-env-config + name: etcd-env-config volumes: - name: cache-volume emptyDir: {} diff --git a/docs/tasks/inject-data-application/podpreset-allow-db.yaml b/docs/tasks/inject-data-application/podpreset-allow-db.yaml index 97203fdb9c..a5504789fe 100644 --- a/docs/tasks/inject-data-application/podpreset-allow-db.yaml +++ b/docs/tasks/inject-data-application/podpreset-allow-db.yaml @@ -8,7 +8,7 @@ spec: role: frontend env: - name: DB_PORT - value: 6379 + value: "6379" - name: duplicate_key value: FROM_ENV - name: expansion diff --git a/docs/tasks/inject-data-application/podpreset-conflict-pod.yaml b/docs/tasks/inject-data-application/podpreset-conflict-pod.yaml index cf201b2f06..6949f7e162 100644 --- a/docs/tasks/inject-data-application/podpreset-conflict-pod.yaml +++ b/docs/tasks/inject-data-application/podpreset-conflict-pod.yaml @@ -13,7 +13,7 @@ spec: - mountPath: /cache name: 
cache-volume ports: + - containerPort: 80 volumes: - name: cache-volume emptyDir: {} - - containerPort: 80 diff --git a/docs/tasks/inject-data-application/podpreset-replicaset-merged.yaml b/docs/tasks/inject-data-application/podpreset-replicaset-merged.yaml index 97f16fde1f..6202d35ac4 100644 --- a/docs/tasks/inject-data-application/podpreset-replicaset-merged.yaml +++ b/docs/tasks/inject-data-application/podpreset-replicaset-merged.yaml @@ -1,6 +1,7 @@ apiVersion: v1 kind: Pod metadata: + name: frontend labels: app: guestbook tier: frontend diff --git a/docs/user-guide/persistent-volumes/claims/claim-01.yaml b/docs/user-guide/persistent-volumes/claims/claim-01.yaml deleted file mode 100644 index 7bf0e87dcc..0000000000 --- a/docs/user-guide/persistent-volumes/claims/claim-01.yaml +++ /dev/null @@ -1,10 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: myclaim-1 -spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 3Gi diff --git a/docs/user-guide/persistent-volumes/claims/claim-02.yaml b/docs/user-guide/persistent-volumes/claims/claim-02.yaml deleted file mode 100644 index 89e3263897..0000000000 --- a/docs/user-guide/persistent-volumes/claims/claim-02.yaml +++ /dev/null @@ -1,10 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: myclaim-2 -spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 8Gi diff --git a/docs/user-guide/persistent-volumes/claims/claim-03.json b/docs/user-guide/persistent-volumes/claims/claim-03.json deleted file mode 100644 index 8b8a9e95fb..0000000000 --- a/docs/user-guide/persistent-volumes/claims/claim-03.json +++ /dev/null @@ -1,17 +0,0 @@ -{ - "kind": "PersistentVolumeClaim", - "apiVersion": "v1", - "metadata": { - "name": "myclaim-3" - }, "spec": { - "accessModes": [ - "ReadWriteOnce", - "ReadOnlyMany" - ], - "resources": { - "requests": { - "storage": "10G" - } - } - } -} diff --git a/docs/user-guide/persistent-volumes/simpletest/namespace.json 
b/docs/user-guide/persistent-volumes/simpletest/namespace.json deleted file mode 100644 index 79e2c3521e..0000000000 --- a/docs/user-guide/persistent-volumes/simpletest/namespace.json +++ /dev/null @@ -1,10 +0,0 @@ -{ - "kind": "Namespace", - "apiVersion":"v1", - "metadata": { - "name": "myns", - "labels": { - "name": "development" - } - } -} diff --git a/docs/user-guide/persistent-volumes/simpletest/pod.yaml b/docs/user-guide/persistent-volumes/simpletest/pod.yaml deleted file mode 100644 index ccd0045934..0000000000 --- a/docs/user-guide/persistent-volumes/simpletest/pod.yaml +++ /dev/null @@ -1,20 +0,0 @@ -kind: Pod -apiVersion: v1 -metadata: - name: mypod - labels: - name: frontendhttp -spec: - containers: - - name: myfrontend - image: nginx - ports: - - containerPort: 80 - name: "http-server" - volumeMounts: - - mountPath: "/usr/share/nginx/html" - name: mypd - volumes: - - name: mypd - persistentVolumeClaim: - claimName: myclaim-1 diff --git a/docs/user-guide/persistent-volumes/simpletest/service.json b/docs/user-guide/persistent-volumes/simpletest/service.json deleted file mode 100644 index 0e06aa9f03..0000000000 --- a/docs/user-guide/persistent-volumes/simpletest/service.json +++ /dev/null @@ -1,19 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "frontendservice" - }, - "spec": { - "ports": [ - { - "protocol": "TCP", - "port": 3000, - "targetPort": "http-server" - } - ], - "selector": { - "name": "frontendhttp" - } - } -} diff --git a/docs/user-guide/persistent-volumes/volumes/gce.yaml b/docs/user-guide/persistent-volumes/volumes/gce.yaml deleted file mode 100644 index 87cb7fa3dc..0000000000 --- a/docs/user-guide/persistent-volumes/volumes/gce.yaml +++ /dev/null @@ -1,13 +0,0 @@ -kind: PersistentVolume -apiVersion: v1 -metadata: - name: pv0003 -spec: - capacity: - storage: 10Gi - accessModes: - - ReadWriteOnce - - ReadOnlyMany - gcePersistentDisk: - pdName: "abc123" - fsType: "ext4" diff --git 
a/docs/user-guide/persistent-volumes/volumes/local-01.yaml b/docs/user-guide/persistent-volumes/volumes/local-01.yaml deleted file mode 100644 index a465c65149..0000000000 --- a/docs/user-guide/persistent-volumes/volumes/local-01.yaml +++ /dev/null @@ -1,13 +0,0 @@ -kind: PersistentVolume -apiVersion: v1 -metadata: - name: pv0001 - labels: - type: local -spec: - capacity: - storage: 10Gi - accessModes: - - ReadWriteOnce - hostPath: - path: "/tmp/data01" diff --git a/docs/user-guide/persistent-volumes/volumes/local-02.yaml b/docs/user-guide/persistent-volumes/volumes/local-02.yaml deleted file mode 100644 index e72d2e7d17..0000000000 --- a/docs/user-guide/persistent-volumes/volumes/local-02.yaml +++ /dev/null @@ -1,14 +0,0 @@ -kind: PersistentVolume -apiVersion: v1 -metadata: - name: pv0002 - labels: - type: local -spec: - capacity: - storage: 8Gi - accessModes: - - ReadWriteOnce - hostPath: - path: "/somepath/data02" - persistentVolumeReclaimPolicy: Recycle diff --git a/docs/user-guide/persistent-volumes/volumes/nfs.yaml b/docs/user-guide/persistent-volumes/volumes/nfs.yaml deleted file mode 100644 index eae2e7abac..0000000000 --- a/docs/user-guide/persistent-volumes/volumes/nfs.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: pv0003 -spec: - capacity: - storage: 5Gi - accessModes: - - ReadWriteOnce - nfs: - path: /somepath - server: 172.17.0.2 diff --git a/docs/user-guide/pods/OWNERS b/docs/user-guide/pods/OWNERS deleted file mode 100644 index 0613239ef5..0000000000 --- a/docs/user-guide/pods/OWNERS +++ /dev/null @@ -1,4 +0,0 @@ -approvers: -- caesarxuchao -- mikedanese - diff --git a/docs/user-guide/pods/pod-config.json b/docs/user-guide/pods/pod-config.json deleted file mode 100644 index 265db0368d..0000000000 --- a/docs/user-guide/pods/pod-config.json +++ /dev/null @@ -1,18 +0,0 @@ -{ - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "", - "labels": { - "name": "" - }, - "generateName": "", - 
"namespace": "", - "annotations": [] - }, - "spec": { - - // See 'The spec schema' for details. - - } -} diff --git a/docs/user-guide/pods/pod-config.yaml b/docs/user-guide/pods/pod-config.yaml deleted file mode 100644 index 9eb3691a74..0000000000 --- a/docs/user-guide/pods/pod-config.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - name: "" - labels: - name: "" - namespace: "" - annotations: [] - generateName: "" -spec: - ? "// See 'The spec schema' for details." - : ~ diff --git a/docs/user-guide/pods/pod-sample.json b/docs/user-guide/pods/pod-sample.json deleted file mode 100644 index 7cab452b64..0000000000 --- a/docs/user-guide/pods/pod-sample.json +++ /dev/null @@ -1,32 +0,0 @@ -{ - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "redis-django", - "labels": { - "app": "webapp" - } - }, - "spec": { - "containers": [ - { - "name": "key-value-store", - "image": "redis", - "ports": [ - { - "containerPort": 6379 - } - ] - }, - { - "name": "frontend", - "image": "django", - "ports": [ - { - "containerPort": 8000 - } - ] - } - ] - } -} diff --git a/docs/user-guide/pods/pod-sample.yaml b/docs/user-guide/pods/pod-sample.yaml deleted file mode 100644 index 84635ff24f..0000000000 --- a/docs/user-guide/pods/pod-sample.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - name: redis-django - labels: - app: web -spec: - containers: - - name: key-value-store - image: redis - ports: - - containerPort: 6379 - - name: frontend - image: django - ports: - - containerPort: 8000 diff --git a/docs/user-guide/pods/pod-spec-common.json b/docs/user-guide/pods/pod-spec-common.json deleted file mode 100644 index 8621593d07..0000000000 --- a/docs/user-guide/pods/pod-spec-common.json +++ /dev/null @@ -1,44 +0,0 @@ -"spec": { - "containers": [ - { - "name": "", - "image": "", - "command": [ - "" - ], - "args": [ - "" - ], - "env": [ - { - "name": "", - "value": "" - } - ], - "imagePullPolicy": "", - "ports": [ - { - 
"containerPort": 0, - "name": "", - "protocol": "" - } - ], - "resources": { - "cpu": "", - "memory": "" - } - } - ], - "restartPolicy": "", - "volumes": [ - { - "name": "", - "emptyDir": { - "medium": "" - }, - "secret": { - "secretName": "" - } - } - ] -} diff --git a/docs/user-guide/pods/pod-spec-common.yaml b/docs/user-guide/pods/pod-spec-common.yaml deleted file mode 100644 index 96e4c1c6e8..0000000000 --- a/docs/user-guide/pods/pod-spec-common.yaml +++ /dev/null @@ -1,31 +0,0 @@ -spec: - containers: - - - args: - - "" - command: - - "" - env: - - - name: "" - value: "" - image: "" - imagePullPolicy: "" - name: "" - ports: - - - containerPort: 0 - name: "" - protocol: "" - resources: - cpu: "" - memory: "" - restartPolicy: "" - volumes: - - - emptyDir: - medium: "" - name: "" - secret: - secretName: "" - diff --git a/docs/user-guide/replication-controller/OWNERS b/docs/user-guide/replication-controller/OWNERS deleted file mode 100644 index dc07c2ccce..0000000000 --- a/docs/user-guide/replication-controller/OWNERS +++ /dev/null @@ -1,7 +0,0 @@ -approvers: -- bprashanth -- bprashanth -- caesarxuchao -- janetkuo -- mikedanese - diff --git a/docs/user-guide/replication-controller/replication.yaml b/docs/user-guide/replication-controller/replication.yaml deleted file mode 100644 index 6eff0b9b57..0000000000 --- a/docs/user-guide/replication-controller/replication.yaml +++ /dev/null @@ -1,19 +0,0 @@ -apiVersion: v1 -kind: ReplicationController -metadata: - name: nginx -spec: - replicas: 3 - selector: - app: nginx - template: - metadata: - name: nginx - labels: - app: nginx - spec: - containers: - - name: nginx - image: nginx - ports: - - containerPort: 80 diff --git a/test/examples_test.go b/test/examples_test.go index 1999d1f158..342f68d624 100644 --- a/test/examples_test.go +++ b/test/examples_test.go @@ -33,6 +33,8 @@ import ( "k8s.io/apimachinery/pkg/util/validation/field" "k8s.io/apimachinery/pkg/util/yaml" "k8s.io/kubernetes/pkg/api/testapi" + 
"k8s.io/kubernetes/pkg/apis/admissionregistration" + ar_validation "k8s.io/kubernetes/pkg/apis/admissionregistration/validation" "k8s.io/kubernetes/pkg/apis/apps" apps_validation "k8s.io/kubernetes/pkg/apis/apps/validation" "k8s.io/kubernetes/pkg/apis/autoscaling" @@ -44,9 +46,13 @@ import ( "k8s.io/kubernetes/pkg/apis/extensions" ext_validation "k8s.io/kubernetes/pkg/apis/extensions/validation" "k8s.io/kubernetes/pkg/apis/policy" - policyvalidation "k8s.io/kubernetes/pkg/apis/policy/validation" + policy_validation "k8s.io/kubernetes/pkg/apis/policy/validation" + "k8s.io/kubernetes/pkg/apis/rbac" + rbac_validation "k8s.io/kubernetes/pkg/apis/rbac/validation" + "k8s.io/kubernetes/pkg/apis/settings" + settings_validation "k8s.io/kubernetes/pkg/apis/settings/validation" "k8s.io/kubernetes/pkg/apis/storage" - storagevalidation "k8s.io/kubernetes/pkg/apis/storage/validation" + storage_validation "k8s.io/kubernetes/pkg/apis/storage/validation" "k8s.io/kubernetes/pkg/capabilities" "k8s.io/kubernetes/pkg/registry/batch/job" schedulerapilatest "k8s.io/kubernetes/plugin/pkg/scheduler/api/latest" @@ -54,24 +60,33 @@ import ( func validateObject(obj runtime.Object) (errors field.ErrorList) { switch t := obj.(type) { - case *api.ReplicationController: + case *admissionregistration.InitializerConfiguration: + // cluster scope resource + errors = ar_validation.ValidateInitializerConfiguration(t) + case *api.ConfigMap: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = validation.ValidateReplicationController(t) - case *api.ReplicationControllerList: - for i := range t.Items { - errors = append(errors, validateObject(&t.Items[i])...) - } - case *api.Service: + errors = validation.ValidateConfigMap(t) + case *api.Endpoints: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = validation.ValidateService(t) - case *api.ServiceList: - for i := range t.Items { - errors = append(errors, validateObject(&t.Items[i])...) 
+ errors = validation.ValidateEndpoints(t) + case *api.LimitRange: + if t.Namespace == "" { + t.Namespace = api.NamespaceDefault } + errors = validation.ValidateLimitRange(t) + case *api.Namespace: + errors = validation.ValidateNamespace(t) + case *api.PersistentVolume: + errors = validation.ValidatePersistentVolume(t) + case *api.PersistentVolumeClaim: + if t.Namespace == "" { + t.Namespace = api.NamespaceDefault + } + errors = validation.ValidatePersistentVolumeClaim(t) case *api.Pod: if t.Namespace == "" { t.Namespace = api.NamespaceDefault @@ -81,55 +96,54 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { for i := range t.Items { errors = append(errors, validateObject(&t.Items[i])...) } - case *api.PersistentVolume: - errors = validation.ValidatePersistentVolume(t) - case *api.PersistentVolumeClaim: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = validation.ValidatePersistentVolumeClaim(t) case *api.PodTemplate: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } errors = validation.ValidatePodTemplate(t) - case *api.Endpoints: + case *api.ReplicationController: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = validation.ValidateEndpoints(t) - case *api.Namespace: - errors = validation.ValidateNamespace(t) - case *api.Secret: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault + errors = validation.ValidateReplicationController(t) + case *api.ReplicationControllerList: + for i := range t.Items { + errors = append(errors, validateObject(&t.Items[i])...) 
} - errors = validation.ValidateSecret(t) - case *api.LimitRange: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = validation.ValidateLimitRange(t) case *api.ResourceQuota: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } errors = validation.ValidateResourceQuota(t) + case *api.Secret: + if t.Namespace == "" { + t.Namespace = api.NamespaceDefault + } + errors = validation.ValidateSecret(t) + case *api.Service: + if t.Namespace == "" { + t.Namespace = api.NamespaceDefault + } + errors = validation.ValidateService(t) + case *api.ServiceAccount: + if t.Namespace == "" { + t.Namespace = api.NamespaceDefault + } + errors = validation.ValidateServiceAccount(t) + case *api.ServiceList: + for i := range t.Items { + errors = append(errors, validateObject(&t.Items[i])...) + } + case *apps.StatefulSet: + if t.Namespace == "" { + t.Namespace = api.NamespaceDefault + } + errors = apps_validation.ValidateStatefulSet(t) case *autoscaling.HorizontalPodAutoscaler: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } errors = autoscaling_validation.ValidateHorizontalPodAutoscaler(t) - case *extensions.Deployment: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = ext_validation.ValidateDeployment(t) - case *extensions.ReplicaSet: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = ext_validation.ValidateReplicaSet(t) case *batch.Job: if t.Namespace == "" { t.Namespace = api.NamespaceDefault @@ -137,42 +151,53 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { // Job needs generateSelector called before validation, and job.Validate does this. 
// See: https://github.com/kubernetes/kubernetes/issues/20951#issuecomment-187787040 t.ObjectMeta.UID = types.UID("fakeuid") - errors = job.Strategy.Validate(nil, t) - case *extensions.Ingress: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault + if strings.Index(t.ObjectMeta.Name, "$") > -1 { + t.ObjectMeta.Name = "skip-for-good" } - errors = ext_validation.ValidateIngress(t) + errors = job.Strategy.Validate(nil, t) case *extensions.DaemonSet: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } errors = ext_validation.ValidateDaemonSet(t) + case *extensions.Deployment: + if t.Namespace == "" { + t.Namespace = api.NamespaceDefault + } + errors = ext_validation.ValidateDeployment(t) + case *extensions.Ingress: + if t.Namespace == "" { + t.Namespace = api.NamespaceDefault + } + errors = ext_validation.ValidateIngress(t) case *extensions.PodSecurityPolicy: errors = ext_validation.ValidatePodSecurityPolicy(t) + case *extensions.ReplicaSet: + if t.Namespace == "" { + t.Namespace = api.NamespaceDefault + } + errors = ext_validation.ValidateReplicaSet(t) case *batch.CronJob: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } errors = batch_validation.ValidateCronJob(t) - case *api.ConfigMap: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = validation.ValidateConfigMap(t) - case *apps.StatefulSet: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = apps_validation.ValidateStatefulSet(t) case *policy.PodDisruptionBudget: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = policyvalidation.ValidatePodDisruptionBudget(t) + errors = policy_validation.ValidatePodDisruptionBudget(t) + case *rbac.ClusterRoleBinding: + // clusterrolebinding does not accept namespace + errors = rbac_validation.ValidateClusterRoleBinding(t) + case *settings.PodPreset: + if t.Namespace == "" { + t.Namespace = api.NamespaceDefault + } + errors = settings_validation.ValidatePodPreset(t) case
*storage.StorageClass: // storageclass does not accept namespace - errors = storagevalidation.ValidateStorageClass(t) + errors = storage_validation.ValidateStorageClass(t) default: errors = field.ErrorList{} errors = append(errors, field.InternalError(field.NewPath(""), fmt.Errorf("no validation defined for %#v", obj))) @@ -258,10 +283,6 @@ func TestExampleObjectSchemas(t *testing.T) { "pod2": {&api.Pod{}}, "pod3": {&api.Pod{}}, }, - "../docs/admin/namespaces": { - "namespace-dev": {&api.Namespace{}}, - "namespace-prod": {&api.Namespace{}}, - }, "../docs/admin/resourcequota": { "best-effort": {&api.ResourceQuota{}}, "compute-resources": {&api.ResourceQuota{}}, @@ -310,6 +331,118 @@ func TestExampleObjectSchemas(t *testing.T) { "nginx-deployment": {&extensions.Deployment{}}, "replication": {&api.ReplicationController{}}, }, + "../docs/tasks/access-application-cluster": { + "frontend": {&api.Service{}, &extensions.Deployment{}}, + "hello-service": {&api.Service{}}, + "hello": {&extensions.Deployment{}}, + "redis-master": {&api.Pod{}}, + "two-container-pod": {&api.Pod{}}, + }, + "../docs/tasks/administer-cluster": { + "cloud-controller-manager-daemonset-example": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &extensions.DaemonSet{}}, + "cpu-constraints": {&api.LimitRange{}}, + "cpu-constraints-pod": {&api.Pod{}}, + "cpu-constraints-pod-2": {&api.Pod{}}, + "cpu-constraints-pod-3": {&api.Pod{}}, + "cpu-constraints-pod-4": {&api.Pod{}}, + "cpu-defaults": {&api.LimitRange{}}, + "cpu-defaults-pod": {&api.Pod{}}, + "cpu-defaults-pod-2": {&api.Pod{}}, + "cpu-defaults-pod-3": {&api.Pod{}}, + "dns-horizontal-autoscaler": {&extensions.Deployment{}}, + "memory-constraints": {&api.LimitRange{}}, + "memory-constraints-pod": {&api.Pod{}}, + "memory-constraints-pod-2": {&api.Pod{}}, + "memory-constraints-pod-3": {&api.Pod{}}, + "memory-constraints-pod-4": {&api.Pod{}}, + "memory-defaults": {&api.LimitRange{}}, + "memory-defaults-pod": {&api.Pod{}}, + "memory-defaults-pod-2": 
{&api.Pod{}}, + "memory-defaults-pod-3": {&api.Pod{}}, + "my-scheduler": {&extensions.Deployment{}}, + "namespace-dev": {&api.Namespace{}}, + "namespace-prod": {&api.Namespace{}}, + "persistent-volume-label-initializer-config": {&admissionregistration.InitializerConfiguration{}}, + "pod1": {&api.Pod{}}, + "pod2": {&api.Pod{}}, + "pod3": {&api.Pod{}}, + "quota-mem-cpu": {&api.ResourceQuota{}}, + "quota-mem-cpu-pod": {&api.Pod{}}, + "quota-mem-cpu-pod-2": {&api.Pod{}}, + "quota-objects": {&api.ResourceQuota{}}, + "quota-objects-pvc": {&api.PersistentVolumeClaim{}}, + "quota-objects-pvc-2": {&api.PersistentVolumeClaim{}}, + "quota-pod": {&api.ResourceQuota{}}, + "quota-pod-deployment": {&extensions.Deployment{}}, + "quota-pvc-2": {&api.PersistentVolumeClaim{}}, + }, + "../docs/tasks/configure-pod-container": { + "cpu-request-limit": {&api.Pod{}}, + "cpu-request-limit-2": {&api.Pod{}}, + "exec-liveness": {&api.Pod{}}, + "http-liveness": {&api.Pod{}}, + "init-containers": {&api.Pod{}}, + "lifecycle-events": {&api.Pod{}}, + "mem-limit-range": {&api.LimitRange{}}, + "memory-request-limit": {&api.Pod{}}, + "memory-request-limit-2": {&api.Pod{}}, + "memory-request-limit-3": {&api.Pod{}}, + "oir-pod": {&api.Pod{}}, + "oir-pod-2": {&api.Pod{}}, + "pod": {&api.Pod{}}, + "pod-redis": {&api.Pod{}}, + "private-reg-pod": {&api.Pod{}}, + "projected-volume": {&api.Pod{}}, + "qos-pod": {&api.Pod{}}, + "qos-pod-2": {&api.Pod{}}, + "qos-pod-3": {&api.Pod{}}, + "qos-pod-4": {&api.Pod{}}, + "rq-compute-resources": {&api.ResourceQuota{}}, + "security-context": {&api.Pod{}}, + "security-context-2": {&api.Pod{}}, + "security-context-3": {&api.Pod{}}, + "security-context-4": {&api.Pod{}}, + "task-pv-claim": {&api.PersistentVolumeClaim{}}, + "task-pv-pod": {&api.Pod{}}, + "task-pv-volume": {&api.PersistentVolume{}}, + "tcp-liveness-readiness": {&api.Pod{}}, + }, + "../docs/tasks/debug-application-cluster": { + "counter-pod": {&api.Pod{}}, + "event-exporter-deploy": {&api.ServiceAccount{}, 
&rbac.ClusterRoleBinding{}, &extensions.Deployment{}}, + "fluentd-gcp-configmap": {&api.ConfigMap{}}, + "fluentd-gcp-ds": {&extensions.DaemonSet{}}, + "nginx-dep": {&extensions.Deployment{}}, + "shell-demo": {&api.Pod{}}, + "termination": {&api.Pod{}}, + }, + // TODO: decide whether federation examples should be added + "../docs/tasks/inject-data-application": { + "commands": {&api.Pod{}}, + "dapi-envars-container": {&api.Pod{}}, + "dapi-envars-pod": {&api.Pod{}}, + "dapi-volume": {&api.Pod{}}, + "dapi-volume-resources": {&api.Pod{}}, + "envars": {&api.Pod{}}, + "podpreset-allow-db": {&settings.PodPreset{}}, + "podpreset-allow-db-merged": {&api.Pod{}}, + "podpreset-configmap": {&api.ConfigMap{}}, + "podpreset-conflict-pod": {&api.Pod{}}, + "podpreset-conflict-preset": {&settings.PodPreset{}}, + "podpreset-merged": {&api.Pod{}}, + "podpreset-multi-merged": {&api.Pod{}}, + "podpreset-pod": {&api.Pod{}}, + "podpreset-preset": {&settings.PodPreset{}}, + "podpreset-proxy": {&settings.PodPreset{}}, + "podpreset-replicaset-merged": {&api.Pod{}}, + "podpreset-replicaset": {&extensions.ReplicaSet{}}, + "secret": {&api.Secret{}}, + "secret-envars-pod": {&api.Pod{}}, + "secret-pod": {&api.Pod{}}, + }, + "../docs/tasks/job": { + "job": {&batch.Job{}}, + }, "../docs/tasks/job/coarse-parallel-processing-work-queue": { "job": {&batch.Job{}}, }, @@ -318,21 +451,53 @@ func TestExampleObjectSchemas(t *testing.T) { "redis-pod": {&api.Pod{}}, "redis-service": {&api.Service{}}, }, - "../docs/tutorials/stateful-application": { - "gce-volume": {&api.PersistentVolume{}}, + "../docs/tasks/run-application": { + "deployment": {&extensions.Deployment{}}, + "deployment-patch-demo": {&extensions.Deployment{}}, + "deployment-scale": {&extensions.Deployment{}}, + "deployment-update": {&extensions.Deployment{}}, + "mysql-configmap": {&api.ConfigMap{}}, "mysql-deployment": {&api.Service{}, &api.PersistentVolumeClaim{}, &extensions.Deployment{}}, "mysql-services": {&api.Service{}, &api.Service{}}, - 
"mysql-configmap": {&api.ConfigMap{}}, "mysql-statefulset": {&apps.StatefulSet{}}, - "cassandra-service": {&api.Service{}}, - "cassandra-statefulset": {&apps.StatefulSet{}, &storage.StorageClass{}}, + }, + "../docs/tutorials/clusters": { + "hello-apparmor-pod": {&api.Pod{}}, + "my-scheduler": {&extensions.Deployment{}}, + }, + "../docs/tutorials/object-management-kubectl": { + "simple_deployment": {&extensions.Deployment{}}, + "update_deployment": {&extensions.Deployment{}}, + }, + "../docs/tutorials/stateful-application": { "web": {&api.Service{}, &apps.StatefulSet{}}, "webp": {&api.Service{}, &apps.StatefulSet{}}, "zookeeper": {&api.Service{}, &api.Service{}, &policy.PodDisruptionBudget{}, &apps.StatefulSet{}}, }, + "../docs/tutorials/stateful-application/cassandra": { + "cassandra-service": {&api.Service{}}, + "cassandra-statefulset": {&apps.StatefulSet{}, &storage.StorageClass{}}, + }, + "../docs/tutorials/stateful-application/mysql-wordpress-persistent-volume": { + "local-volumes": {&api.PersistentVolume{}, &api.PersistentVolume{}}, + "mysql-deployment": {&api.Service{}, &api.PersistentVolumeClaim{}, &extensions.Deployment{}}, + "wordpress-deployment": {&api.Service{}, &api.PersistentVolumeClaim{}, &extensions.Deployment{}}, + }, + "../docs/tutorials/stateless-application": { + "deployment": {&extensions.Deployment{}}, + "deployment-scale": {&extensions.Deployment{}}, + "deployment-update": {&extensions.Deployment{}}, + }, + "../docs/tutorials/stateless-application/guestbook": { + "frontend-deployment": {&extensions.Deployment{}}, + "frontend-service": {&api.Service{}}, + "redis-master-deployment": {&extensions.Deployment{}}, + "redis-master-service": {&api.Service{}}, + "redis-slave-deployment": {&extensions.Deployment{}}, + "redis-slave-service": {&api.Service{}}, + }, "../docs/user-guide": { "bad-nginx-deployment": {&extensions.Deployment{}}, - "counter-pod": {&api.Pod{}}, "curlpod": {&extensions.Deployment{}}, "deployment": {&extensions.Deployment{}}, 
"ingress": {&extensions.Ingress{}}, @@ -352,46 +517,69 @@ func TestExampleObjectSchemas(t *testing.T) { "redis-resource-deployment": {&extensions.Deployment{}}, "redis-secret-deployment": {&extensions.Deployment{}}, "run-my-nginx": {&extensions.Deployment{}}, - "cronjob": {&batch.CronJob{}}, + }, + "../docs/user-guide/configmap": { + "command-pod": {&api.Pod{}}, + "configmap": {&api.ConfigMap{}}, + "env-pod": {&api.Pod{}}, + "mount-file-pod": {&api.Pod{}}, + "volume-pod": {&api.Pod{}}, + }, + "../docs/user-guide/configmap/redis": { + "redis-pod": {&api.Pod{}}, }, "../docs/user-guide/downward-api": { "dapi-pod": {&api.Pod{}}, "dapi-container-resources": {&api.Pod{}}, }, - "../docs/user-guide/downward-api/volume/": { + "../docs/user-guide/downward-api/volume": { "dapi-volume": {&api.Pod{}}, "dapi-volume-resources": {&api.Pod{}}, }, + "../docs/user-guide/environment-guide": { + "backend-rc": {&api.ReplicationController{}}, + "backend-srv": {&api.Service{}}, + "show-rc": {&api.ReplicationController{}}, + "show-srv": {&api.Service{}}, + }, + "../docs/user-guide/horizontal-pod-autoscaling": { + "hpa-php-apache": {&autoscaling.HorizontalPodAutoscaler{}}, + }, + "../docs/user-guide/jobs/work-queue-1": { + "job": {&batch.Job{}}, + }, + "../docs/user-guide/jobs/work-queue-2": { + "job": {&batch.Job{}}, + "redis-pod": {&api.Pod{}}, + "redis-service": {&api.Service{}}, + }, "../docs/user-guide/liveness": { "exec-liveness": {&api.Pod{}}, "http-liveness": {&api.Pod{}}, "http-liveness-named-port": {&api.Pod{}}, }, + "../docs/user-guide/nginx": { + "nginx-deployment": {&extensions.Deployment{}}, + "nginx-svc": {&api.Service{}}, + }, "../docs/user-guide/node-selection": { "pod": {&api.Pod{}}, "pod-with-node-affinity": {&api.Pod{}}, "pod-with-pod-affinity": {&api.Pod{}}, }, - "../docs/user-guide/persistent-volumes/volumes": { - "local-01": {&api.PersistentVolume{}}, - "local-02": {&api.PersistentVolume{}}, - "gce": {&api.PersistentVolume{}}, - "nfs": {&api.PersistentVolume{}}, - }, 
- "../docs/user-guide/persistent-volumes/claims": { - "claim-01": {&api.PersistentVolumeClaim{}}, - "claim-02": {&api.PersistentVolumeClaim{}}, - "claim-03": {&api.PersistentVolumeClaim{}}, - }, - "../docs/user-guide/persistent-volumes/simpletest": { - "namespace": {&api.Namespace{}}, - "pod": {&api.Pod{}}, - "service": {&api.Service{}}, + "../docs/user-guide/replicasets": { + "frontend": {&extensions.ReplicaSet{}}, + "hpa-rs": {&autoscaling.HorizontalPodAutoscaler{}}, + "redis-slave": {&extensions.ReplicaSet{}}, }, "../docs/user-guide/secrets": { - "secret-pod": {&api.Pod{}}, "secret": {&api.Secret{}}, "secret-env-pod": {&api.Pod{}}, + "secret-pod": {&api.Pod{}}, + }, + "../docs/user-guide/services": { + "load-balancer-sample": {&api.Service{}}, + "service-sample": {&api.Service{}}, }, "../docs/user-guide/update-demo": { "kitten-rc": {&api.ReplicationController{}}, From 1befe950a069f42096620696f32e6d37a2181fc8 Mon Sep 17 00:00:00 2001 From: zhangxiaoyu-zidif Date: Tue, 5 Dec 2017 15:29:35 -0600 Subject: [PATCH 05/47] Create glossary for persistent volume --- _data/glossary/persistent-volume.yaml | 14 ++++++++++++++ 1 file changed, 14 insertions(+) create mode 100644 _data/glossary/persistent-volume.yaml diff --git a/_data/glossary/persistent-volume.yaml b/_data/glossary/persistent-volume.yaml new file mode 100644 index 0000000000..f6ed124ad4 --- /dev/null +++ b/_data/glossary/persistent-volume.yaml @@ -0,0 +1,14 @@ +id: persistent-volume +name: Persistent Volume +full-link: /docs/concepts/storage/persistent-volumes/ +related: +- statefulset +- deployment +- pod +tags: +- core-object +- storage +short-description: > + A Persistent Volume provides Pods with persistent storage through two resource types: PersistentVolume (PV) and PersistentVolumeClaim (PVC). +long-description: | + The Persistent Volume subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed.
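The Persistent Volume glossary entry introduced in [PATCH 05/47] above can be illustrated with a minimal manifest pair showing the two resource types it names. This is an editor's sketch, not part of the patch series; the names, capacity, and `hostPath` backing are hypothetical:

```yaml
# Hypothetical illustration of the PV/PVC split described in the glossary
# entry: an administrator provisions a PersistentVolume, and a user requests
# storage through a PersistentVolumeClaim without knowing how it is backed.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv            # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/example-pv     # illustrative backing only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc           # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

A Pod would then reference the claim by name under `spec.volumes.persistentVolumeClaim.claimName`, which is the abstraction between provisioning and consumption that the long-description refers to.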
From 4c017787a42e55748f51aadea3caee3bae2fd6aa Mon Sep 17 00:00:00 2001 From: stewart-yu Date: Wed, 6 Dec 2017 09:36:54 +0800 Subject: [PATCH 06/47] Update troubleshooting-kubeadm.md --- .../independent/troubleshooting-kubeadm.md | 31 ++++++++++--------- 1 file changed, 16 insertions(+), 15 deletions(-) diff --git a/docs/setup/independent/troubleshooting-kubeadm.md b/docs/setup/independent/troubleshooting-kubeadm.md index d9af4adf1c..d62650115f 100644 --- a/docs/setup/independent/troubleshooting-kubeadm.md +++ b/docs/setup/independent/troubleshooting-kubeadm.md @@ -34,31 +34,32 @@ If you see the following warnings while running `kubeadm init` Then you may be missing ebtables and ethtool on your Linux machine. You can install them with the following commands: -- For ubuntu/debian users, try `apt install ebtables ethtool`. -- For CentOS/Fedora users, try `yum install ebtables ethtool`. +- For ubuntu/debian users, run `apt install ebtables ethtool`. +- For CentOS/Fedora users, run `yum install ebtables ethtool`. -#### kubeadm blocks waiting for `control plane` during installation +#### kubeadm blocks waiting for control plane during installation -If you notice that `kubeadm init` hangs after printing out the following line +If you notice that `kubeadm init` hangs after printing out the following line: ``` [apiclient] Created API client, waiting for the control plane to become ready ``` -You may want to first check if your node has network connection problem. -Another reason that `kubeadm init` hangs could be that the default cgroup driver configuration -for the kubelet differs from that used by Docker. +This may be caused by a number of problems. The most common are: -Check the system log file (e.g. `var/log/message`) or examine the output from `journalctl -u kubelet`. -If you see something like the following +- network connection problems. Check that your machine has full network connectivity before continuing. 
+- the default cgroup driver configuration for the kubelet differs from that used by Docker. + Check the system log file (e.g. `/var/log/messages`) or examine the output from `journalctl -u kubelet`. If you see something like the following: -``` -error: failed to run Kubelet: failed to create kubelet: -misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs" -``` + ```shell + error: failed to run Kubelet: failed to create kubelet: + misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs" + ``` + + you will need to fix the cgroup driver problem by following the instructions + [here](/docs/setup/independent/install-kubeadm/#installing-docker). +- control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`. -you will need to fix the cgroup driver problem by following intstructions -[here](/docs/setup/indenpendent/install-kubeadm/#installing-docker). #### Pods in `RunContainerError`, `CrashLoopBackOff` or `Error` state From ea93d5b1d8e713245d1c399654b895970b8203bc Mon Sep 17 00:00:00 2001 From: Renaud Gaubert Date: Wed, 20 Dec 2017 15:34:20 +0100 Subject: [PATCH 07/47] Update device plugin document to show NVIDIA device plugin --- docs/concepts/cluster-administration/device-plugins.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/docs/concepts/cluster-administration/device-plugins.md b/docs/concepts/cluster-administration/device-plugins.md index 310c2793df..ff42acef1d 100644 --- a/docs/concepts/cluster-administration/device-plugins.md +++ b/docs/concepts/cluster-administration/device-plugins.md @@ -107,8 +107,10 @@ in the plugin's ## Examples -For an example device plugin implementation, see -[nvidia GPU device plugin for COS base OS](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu).
+For examples of device plugin implementations, see: +* The official [NVIDIA GPU device plugin](https://github.com/NVIDIA/k8s-device-plugin) + * It requires [nvidia-docker 2.0](https://github.com/NVIDIA/nvidia-docker), which allows you to run GPU-enabled Docker containers +* The [NVIDIA GPU device plugin for COS base OS](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu). {% endcapture %} From d1c597095fe99bce12a2fbcf69476a1e4fd4f080 Mon Sep 17 00:00:00 2001 From: Madhuri Kumari Date: Fri, 22 Dec 2017 12:09:07 +0530 Subject: [PATCH 08/47] Fix some spelling mistakes --- docs/admin/multiple-zones.md | 2 +- docs/concepts/architecture/cloud-controller.md | 2 +- docs/concepts/cluster-administration/controller-metrics.md | 4 ++-- docs/concepts/cluster-administration/device-plugins.md | 2 +- .../configuration/manage-compute-resources-container.md | 2 +- 5 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/admin/multiple-zones.md b/docs/admin/multiple-zones.md index 3f590016c0..1900e7e1a4 100644 --- a/docs/admin/multiple-zones.md +++ b/docs/admin/multiple-zones.md @@ -43,7 +43,7 @@ placement, and so if the zones in your cluster are heterogeneous (e.g. different numbers of nodes, different types of nodes, or different pod resource requirements), this might prevent perfectly even spreading of your pods across zones. If desired, you can use -homogenous zones (same number and types of nodes) to reduce the +homogeneous zones (same number and types of nodes) to reduce the probability of unequal spreading. When persistent volumes are created, the `PersistentVolumeLabel` diff --git a/docs/concepts/architecture/cloud-controller.md b/docs/concepts/architecture/cloud-controller.md index 12d5e13c53..7d482e9d20 100644 --- a/docs/concepts/architecture/cloud-controller.md +++ b/docs/concepts/architecture/cloud-controller.md @@ -97,7 +97,7 @@ The Node controller contains the cloud-dependent functionality of the kubelet.
P In this new model, the kubelet initializes a node without cloud-specific information. However, it adds a taint to the newly created node that makes the node unschedulable until the CCM initializes the node with cloud-specific information, and then removes this taint. -### 3. Kubernets API server +### 3. Kubernetes API server The PersistentVolumeLabels controller moves the cloud-dependent functionality of the Kubernetes API server to the CCM as described in the preceding sections. diff --git a/docs/concepts/cluster-administration/controller-metrics.md b/docs/concepts/cluster-administration/controller-metrics.md index ceca94f7de..2f1c71e34a 100644 --- a/docs/concepts/cluster-administration/controller-metrics.md +++ b/docs/concepts/cluster-administration/controller-metrics.md @@ -13,10 +13,10 @@ the controller manager. Controller manager metrics provide important insight into the performance and health of the controller manager. These metrics include common Go language runtime metrics such as go_routine count and controller specific metrics such as -etcd request latencies or Cloudprovider (AWS, GCE, Openstack) API latencies that can be used +etcd request latencies or Cloudprovider (AWS, GCE, OpenStack) API latencies that can be used to gauge the health of a cluster. -Starting from Kubernetes 1.7, detailed Cloudprovider metrics are available for storage operations for GCE, AWS, Vsphere and Openstack. +Starting from Kubernetes 1.7, detailed Cloudprovider metrics are available for storage operations for GCE, AWS, vSphere and OpenStack. These metrics can be used to monitor health of persistent volume operations.
For example, for GCE these metrics are called: diff --git a/docs/concepts/cluster-administration/device-plugins.md b/docs/concepts/cluster-administration/device-plugins.md index 635c8232b2..2078adcf0f 100644 --- a/docs/concepts/cluster-administration/device-plugins.md +++ b/docs/concepts/cluster-administration/device-plugins.md @@ -65,7 +65,7 @@ The general workflow of a device plugin includes the following steps: ```gRPC service DevicePlugin { // ListAndWatch returns a stream of List of Devices - // Whenever a Device state change or a Device disapears, ListAndWatch + // Whenever a Device state changes or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} diff --git a/docs/concepts/configuration/manage-compute-resources-container.md b/docs/concepts/configuration/manage-compute-resources-container.md index 8508f6642a..ee49bdf94c 100644 --- a/docs/concepts/configuration/manage-compute-resources-container.md +++ b/docs/concepts/configuration/manage-compute-resources-container.md @@ -312,7 +312,7 @@ Kubernetes version 1.8 introduces a new resource, _ephemeral-storage_ for managi This partition is “ephemeral” and applications cannot expect any performance SLAs (Disk IOPS for example) from this partition. Local ephemeral storage management only applies for the root partition; the optional partition for image layer and writable layer is out of scope. -**Note:** If an optional runntime partition is used, root partition will not hold any image layer or writable layers. +**Note:** If an optional runtime partition is used, root partition will not hold any image layer or writable layers. {: .note} ### Requests and limits setting for local ephemeral storage From 3223fc9fdca2ecccd1c4796eed2a005e7082a101 Mon Sep 17 00:00:00 2001 From: Ben Meier Date: Fri, 22 Dec 2017 17:47:39 +0200 Subject: [PATCH 09/47] Fixed missing backticks and broken markdown link.
--- docs/setup/independent/create-cluster-kubeadm.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/setup/independent/create-cluster-kubeadm.md b/docs/setup/independent/create-cluster-kubeadm.md index 8670c5565c..02bd8294a3 100644 --- a/docs/setup/independent/create-cluster-kubeadm.md +++ b/docs/setup/independent/create-cluster-kubeadm.md @@ -227,7 +227,7 @@ internal helper service, will not start up before a network is installed. kubead supports Container Network Interface (CNI) based networks (and does not support kubenet).** Several projects provide Kubernetes pod networks using CNI, some of which also -support [Network Policy](/docs/concepts/services-networking/networkpolicies/). See the [add-onspage] (/docs/concepts/cluster-administration/addons/) for a complete list of available network add-ons. IPv6 support was added in [CNI v0.6.0](https://github.com/containernetworking/cni/releases/tag/v0.6.0). [CNI bridge](https://github.com/containernetworking/plugins/blob/master/plugins/main/bridge/README.md) and [local-ipam](https://github.com/containernetworking/plugins/blob/master/plugins/ipam/host-local/README.md) are the only supported IPv6 network plugins in 1.9. +support [Network Policy](/docs/concepts/services-networking/networkpolicies/). See the [add-ons page](/docs/concepts/cluster-administration/addons/) for a complete list of available network add-ons. IPv6 support was added in [CNI v0.6.0](https://github.com/containernetworking/cni/releases/tag/v0.6.0). [CNI bridge](https://github.com/containernetworking/plugins/blob/master/plugins/main/bridge/README.md) and [local-ipam](https://github.com/containernetworking/plugins/blob/master/plugins/ipam/host-local/README.md) are the only supported IPv6 network plugins in 1.9. **Note:** kubeadm sets up a more secure cluster by default and enforces use of [RBAC](#TODO). @@ -386,7 +386,7 @@ The nodes are where your workloads (containers and pods, etc) run. 
To add new no kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash> ``` -**Note:** To specify an IPv6 tuple for <host>:<port>, IPv6 address must be enclosed in square brackets, for example: `[fd00::101]:2073`. +**Note:** To specify an IPv6 tuple for `<host>:<port>`, IPv6 address must be enclosed in square brackets, for example: `[fd00::101]:2073`. {: .note} The output should look something like: From 8f196b7d6a5b02047171561649aaa47899491d2a Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Fri, 22 Dec 2017 09:55:16 -0800 Subject: [PATCH 10/47] Convert registry to k8s.gcr.io --- _includes/v1.3/v1-definitions.html | 2 +- _includes/v1.4/v1-definitions.html | 2 +- _includes/v1.5/v1-definitions.html | 2 +- cn/docs/admin/cluster-large.md | 2 +- cn/docs/admin/node-conformance.md | 6 ++-- cn/docs/concepts/architecture/nodes.md | 2 +- .../manage-compute-resources-container.md | 2 +- .../configuration/pod-with-node-affinity.yaml | 2 +- .../configuration/pod-with-pod-affinity.yaml | 2 +- cn/docs/concepts/configuration/secret.md | 2 +- .../concepts/workloads/pods/pod-lifecycle.md | 2 +- .../redis-master.yaml | 2 +- .../administer-cluster/cpu-memory-limit.md | 2 +- .../dns-horizontal-autoscaler.yaml | 2 +- cn/docs/tasks/administer-cluster/pod1.yaml | 2 +- cn/docs/tasks/administer-cluster/pod2.yaml | 2 +- cn/docs/tasks/administer-cluster/pod3.yaml | 2 +- .../exec-liveness.yaml | 2 +- .../http-liveness.yaml | 2 +- .../tcp-liveness-readiness.yaml | 2 +- .../dapi-envars-container.yaml | 2 +- .../dapi-envars-pod.yaml | 2 +- .../dapi-volume-resources.yaml | 2 +- .../inject-data-application/dapi-volume.yaml | 2 +- cn/docs/tasks/manage-gpus/scheduling-gpus.md | 6 ++-- cn/docs/tutorials/services/source-ip.md | 2 +- .../basic-stateful-set.md | 36 +++++++++---------- .../tutorials/stateful-application/web.yaml | 2 +- .../tutorials/stateful-application/webp.yaml | 2 +- docs/admin/cluster-large.md | 2 +- docs/admin/federation/index.md | 10 +++--- docs/admin/high-availability/etcd.yaml | 2 +-
.../high-availability/kube-apiserver.yaml | 2 +- .../kube-controller-manager.yaml | 2 +- .../high-availability/kube-scheduler.yaml | 2 +- docs/admin/high-availability/podmaster.yaml | 4 +-- docs/admin/limitrange/invalid-pod.yaml | 2 +- docs/admin/limitrange/valid-pod.yaml | 2 +- docs/admin/multiple-schedulers/pod1.yaml | 2 +- docs/admin/multiple-schedulers/pod2.yaml | 2 +- docs/admin/multiple-schedulers/pod3.yaml | 2 +- docs/admin/node-conformance.md | 6 ++-- docs/api-reference/v1.9/index.html | 2 +- docs/concepts/architecture/nodes.md | 2 +- .../two-files-counter-pod-agent-sidecar.yaml | 2 +- .../manage-compute-resources-container.md | 2 +- .../configuration/pod-with-node-affinity.yaml | 2 +- .../configuration/pod-with-pod-affinity.yaml | 2 +- docs/concepts/configuration/secret.md | 2 +- docs/concepts/storage/persistent-volumes.md | 2 +- docs/concepts/storage/volumes.md | 14 ++++---- .../workloads/controllers/statefulset.md | 2 +- docs/concepts/workloads/pods/pod-lifecycle.md | 2 +- docs/getting-started-guides/fluentd-gcp.yaml | 2 +- docs/getting-started-guides/minikube.md | 2 +- docs/reference/generated/kubefed_init.md | 4 +-- docs/reference/generated/kubelet.md | 2 +- .../generated/kubernetes-api/v1.9/index.html | 2 +- .../kubeadm_alpha_phase_addon_all.md | 2 +- .../kubeadm_alpha_phase_addon_kube-dns.md | 2 +- .../kubeadm_alpha_phase_addon_kube-proxy.md | 2 +- .../setup-tools/kubeadm/kubeadm-init.md | 24 ++++++------- docs/resources-reference/v1.5/index.html | 2 +- docs/resources-reference/v1.6/index.html | 2 +- docs/resources-reference/v1.7/index.html | 2 +- .../redis-master.yaml | 2 +- ...-controller-manager-daemonset-example.yaml | 4 +-- .../dns-horizontal-autoscaler.yaml | 2 +- docs/tasks/administer-cluster/pod1.yaml | 2 +- docs/tasks/administer-cluster/pod2.yaml | 2 +- docs/tasks/administer-cluster/pod3.yaml | 2 +- .../administer-cluster/pvc-protection.md | 2 +- .../configure-liveness-readiness-probes.md | 12 +++---- .../configure-pod-configmap.md | 12 
+++---- .../exec-liveness.yaml | 2 +- .../http-liveness.yaml | 2 +- .../tcp-liveness-readiness.yaml | 2 +- .../debug-service.md | 4 +-- .../monitor-node-health.md | 4 +-- .../dapi-envars-container.yaml | 2 +- .../dapi-envars-pod.yaml | 2 +- .../dapi-volume-resources.yaml | 2 +- .../inject-data-application/dapi-volume.yaml | 2 +- docs/tasks/manage-gpus/scheduling-gpus.md | 6 ++-- .../horizontal-pod-autoscale-walkthrough.md | 2 +- docs/tools/kompose/user-guide.md | 2 +- docs/tutorials/services/source-ip.md | 2 +- .../basic-stateful-set.md | 22 ++++++------ docs/tutorials/stateful-application/web.yaml | 2 +- docs/tutorials/stateful-application/webp.yaml | 2 +- .../stateful-application/zookeeper.yaml | 2 +- .../guestbook/redis-master-deployment.yaml | 2 +- docs/user-guide/configmap/command-pod.yaml | 2 +- docs/user-guide/configmap/env-pod.yaml | 2 +- docs/user-guide/configmap/mount-file-pod.yaml | 2 +- docs/user-guide/configmap/volume-pod.yaml | 2 +- .../dapi-container-resources.yaml | 2 +- docs/user-guide/downward-api/dapi-pod.yaml | 2 +- .../volume/dapi-volume-resources.yaml | 2 +- .../downward-api/volume/dapi-volume.yaml | 2 +- docs/user-guide/liveness/exec-liveness.yaml | 2 +- .../liveness/http-liveness-named-port.yaml | 2 +- docs/user-guide/liveness/http-liveness.yaml | 2 +- .../pod-with-node-affinity.yaml | 2 +- .../node-selection/pod-with-pod-affinity.yaml | 2 +- docs/user-guide/secrets/secret-env-pod.yaml | 2 +- docs/user-guide/update-demo/index.md.orig | 2 +- docs/user-guide/update-demo/kitten-rc.yaml | 2 +- docs/user-guide/update-demo/nautilus-rc.yaml | 2 +- 109 files changed, 180 insertions(+), 180 deletions(-) diff --git a/_includes/v1.3/v1-definitions.html b/_includes/v1.3/v1-definitions.html index 4cd88cc6ed..42ed010f3c 100755 --- a/_includes/v1.3/v1-definitions.html +++ b/_includes/v1.3/v1-definitions.html @@ -6415,7 +6415,7 @@ The resulting set of endpoints can be viewed as:

names

-

Names by which this image is known. e.g. ["gcr.io/google_containers/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"]

+

Names by which this image is known. e.g. ["k8s.gcr.io/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"]

true

string array

diff --git a/_includes/v1.4/v1-definitions.html b/_includes/v1.4/v1-definitions.html index 254075b5cb..8e61a5b040 100755 --- a/_includes/v1.4/v1-definitions.html +++ b/_includes/v1.4/v1-definitions.html @@ -6671,7 +6671,7 @@ The resulting set of endpoints can be viewed as:

names

-

Names by which this image is known. e.g. ["gcr.io/google_containers/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"]

+

Names by which this image is known. e.g. ["k8s.gcr.io/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"]

true

string array

diff --git a/_includes/v1.5/v1-definitions.html b/_includes/v1.5/v1-definitions.html index ed1b302484..5dbb6c7094 100755 --- a/_includes/v1.5/v1-definitions.html +++ b/_includes/v1.5/v1-definitions.html @@ -6850,7 +6850,7 @@ The resulting set of endpoints can be viewed as:

names

-

Names by which this image is known. e.g. ["gcr.io/google_containers/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"]

+

Names by which this image is known. e.g. ["k8s.gcr.io/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"]

true

string array

diff --git a/cn/docs/admin/cluster-large.md b/cn/docs/admin/cluster-large.md
index 9f6b33c94f..5670f7e0ae 100644
--- a/cn/docs/admin/cluster-large.md
+++ b/cn/docs/admin/cluster-large.md
@@ -85,7 +85,7 @@ AWS使用的规格为:
 ```yaml
 containers:
   - name: fluentd-cloud-logging
-    image: gcr.io/google_containers/fluentd-gcp:1.16
+    image: k8s.gcr.io/fluentd-gcp:1.16
     resources:
       limits:
         cpu: 100m
diff --git a/cn/docs/admin/node-conformance.md b/cn/docs/admin/node-conformance.md
index 6be4ba50a0..91af9fde63 100644
--- a/cn/docs/admin/node-conformance.md
+++ b/cn/docs/admin/node-conformance.md
@@ -40,7 +40,7 @@ title: 节点设置校验
 # $LOG_DIR 是测试结果输出的路径。
 sudo docker run -it --rm --privileged --net=host \
   -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
-  gcr.io/google_containers/node-test:0.2
+  k8s.gcr.io/node-test:0.2
 ```

 ## 针对其他硬件体系结构运行节点合规性测试
@@ -61,7 +61,7 @@ Kubernetes 也为其他硬件体系结构的系统提供了节点合规性测试
 sudo docker run -it --rm --privileged --net=host \
   -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
   -e FOCUS=MirrorPod \ # 只运行MirrorPod测试
-  gcr.io/google_containers/node-test:0.2
+  k8s.gcr.io/node-test:0.2
 ```

 为跳过指定的测试,用正则表达式来描述将要跳过的测试,并重载 `SKIP` 环境变量。
@@ -70,7 +70,7 @@
 sudo docker run -it --rm --privileged --net=host \
   -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
   -e SKIP=MirrorPod \ # 运行除MirrorPod外的所有测试
-  gcr.io/google_containers/node-test:0.2
+  k8s.gcr.io/node-test:0.2
 ```

 节点合规性测试是[节点端到端测试](https://github.com/kubernetes/community/blob/{{page.githubbranch}}/contributors/devel/e2e-node-tests.md)的一个容器化的版本。
diff --git a/cn/docs/concepts/architecture/nodes.md b/cn/docs/concepts/architecture/nodes.md
index fa9bb53098..da999dd7a6 100644
--- a/cn/docs/concepts/architecture/nodes.md
+++ b/cn/docs/concepts/architecture/nodes.md
@@ -216,7 +216,7 @@ metadata:
 spec:
   containers:
   - name: sleep-forever
-    image: gcr.io/google_containers/pause:0.8.0
+    image: k8s.gcr.io/pause:0.8.0
     resources:
       requests:
         cpu: 100m
diff --git a/cn/docs/concepts/configuration/manage-compute-resources-container.md b/cn/docs/concepts/configuration/manage-compute-resources-container.md
index 6b06fc5064..341706ef10 100644
--- a/cn/docs/concepts/configuration/manage-compute-resources-container.md
+++ b/cn/docs/concepts/configuration/manage-compute-resources-container.md
@@ -199,7 +199,7 @@ Conditions:
 Events:
   FirstSeen                         LastSeen                         Count  From                              SubobjectPath                       Reason     Message
   Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700  1      {scheduler }                                                          scheduled  Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f
-  Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700  1      {kubelet kubernetes-node-tf0f}    implicitly required container POD   pulled     Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
+  Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700  1      {kubelet kubernetes-node-tf0f}    implicitly required container POD   pulled     Pod container image "k8s.gcr.io/pause:0.8.0" already present on machine
   Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700  1      {kubelet kubernetes-node-tf0f}    implicitly required container POD   created    Created with docker id 6a41280f516d
   Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700  1      {kubelet kubernetes-node-tf0f}    implicitly required container POD   started    Started with docker id 6a41280f516d
   Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700  1      {kubelet kubernetes-node-tf0f}    spec.containers{simmemleak}         created    Created with docker id 87348f12526a
diff --git a/cn/docs/concepts/configuration/pod-with-node-affinity.yaml b/cn/docs/concepts/configuration/pod-with-node-affinity.yaml
index 7c38e19997..253d2b21ea 100644
--- a/cn/docs/concepts/configuration/pod-with-node-affinity.yaml
+++ b/cn/docs/concepts/configuration/pod-with-node-affinity.yaml
@@ -23,4 +23,4 @@ spec:
             - another-node-label-value
   containers:
   - name: with-node-affinity
-    image: gcr.io/google_containers/pause:2.0
\ No newline at end of file
+    image: k8s.gcr.io/pause:2.0
\ No newline at end of file
diff --git a/cn/docs/concepts/configuration/pod-with-pod-affinity.yaml b/cn/docs/concepts/configuration/pod-with-pod-affinity.yaml
index 3728537d5a..1897af901f 100644
--- a/cn/docs/concepts/configuration/pod-with-pod-affinity.yaml
+++ b/cn/docs/concepts/configuration/pod-with-pod-affinity.yaml
@@ -26,4 +26,4 @@ spec:
         topologyKey: kubernetes.io/hostname
   containers:
   - name: with-pod-affinity
-    image: gcr.io/google_containers/pause:2.0
+    image: k8s.gcr.io/pause:2.0
diff --git a/cn/docs/concepts/configuration/secret.md b/cn/docs/concepts/configuration/secret.md
index 9b61a725eb..c8c9319485 100644
--- a/cn/docs/concepts/configuration/secret.md
+++ b/cn/docs/concepts/configuration/secret.md
@@ -518,7 +518,7 @@ spec:
       secretName: dotfile-secret
   containers:
   - name: dotfile-test-container
-    image: gcr.io/google_containers/busybox
+    image: k8s.gcr.io/busybox
     command:
     - ls
     - "-l"
diff --git a/cn/docs/concepts/workloads/pods/pod-lifecycle.md b/cn/docs/concepts/workloads/pods/pod-lifecycle.md
index 8420f318bc..176e57b829 100644
--- a/cn/docs/concepts/workloads/pods/pod-lifecycle.md
+++ b/cn/docs/concepts/workloads/pods/pod-lifecycle.md
@@ -108,7 +108,7 @@ spec:
   containers:
   - args:
     - /server
-    image: gcr.io/google_containers/liveness
+    image: k8s.gcr.io/liveness
     livenessProbe:
       httpGet:
         # when "host" is not defined, "PodIP" will be used
diff --git a/cn/docs/tasks/access-application-cluster/redis-master.yaml b/cn/docs/tasks/access-application-cluster/redis-master.yaml
index 57305a7a35..589de648f5 100644
--- a/cn/docs/tasks/access-application-cluster/redis-master.yaml
+++ b/cn/docs/tasks/access-application-cluster/redis-master.yaml
@@ -9,7 +9,7 @@ metadata:
 spec:
   containers:
   - name: master
-    image: gcr.io/google_containers/redis:v1
+    image: k8s.gcr.io/redis:v1
     env:
     - name: MASTER
       value: "true"
diff --git a/cn/docs/tasks/administer-cluster/cpu-memory-limit.md b/cn/docs/tasks/administer-cluster/cpu-memory-limit.md
index b437c8a2f0..341e3070f6 100644
--- a/cn/docs/tasks/administer-cluster/cpu-memory-limit.md
+++ b/cn/docs/tasks/administer-cluster/cpu-memory-limit.md
@@ -178,7 +178,7 @@ $ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resou
   uid: 3b1bfd7a-f53c-11e5-b066-64510658e388
 spec:
   containers:
-  - image: gcr.io/google_containers/serve_hostname
+  - image: k8s.gcr.io/serve_hostname
     imagePullPolicy: Always
     name: kubernetes-serve-hostname
     resources:
diff --git a/cn/docs/tasks/administer-cluster/dns-horizontal-autoscaler.yaml b/cn/docs/tasks/administer-cluster/dns-horizontal-autoscaler.yaml
index f29dd2e275..b427829b5f 100644
--- a/cn/docs/tasks/administer-cluster/dns-horizontal-autoscaler.yaml
+++ b/cn/docs/tasks/administer-cluster/dns-horizontal-autoscaler.yaml
@@ -13,7 +13,7 @@ spec:
     spec:
       containers:
       - name: autoscaler
-        image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.0.0
+        image: k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.0.0
         resources:
           requests:
             cpu: "20m"
diff --git a/cn/docs/tasks/administer-cluster/pod1.yaml b/cn/docs/tasks/administer-cluster/pod1.yaml
index 733aa97d99..560b6aa0fb 100644
--- a/cn/docs/tasks/administer-cluster/pod1.yaml
+++ b/cn/docs/tasks/administer-cluster/pod1.yaml
@@ -7,4 +7,4 @@ metadata:
 spec:
   containers:
   - name: pod-with-no-annotation-container
-    image: gcr.io/google_containers/pause:2.0
\ No newline at end of file
+    image: k8s.gcr.io/pause:2.0
\ No newline at end of file
diff --git a/cn/docs/tasks/administer-cluster/pod2.yaml b/cn/docs/tasks/administer-cluster/pod2.yaml
index e1e280ff09..2f065efe65 100644
--- a/cn/docs/tasks/administer-cluster/pod2.yaml
+++ b/cn/docs/tasks/administer-cluster/pod2.yaml
@@ -8,4 +8,4 @@ spec:
   schedulerName: default-scheduler
   containers:
   - name: pod-with-default-annotation-container
-    image: gcr.io/google_containers/pause:2.0
+    image: k8s.gcr.io/pause:2.0
diff --git a/cn/docs/tasks/administer-cluster/pod3.yaml b/cn/docs/tasks/administer-cluster/pod3.yaml
index 63be0e0aa3..a1b8db3200 100644
--- a/cn/docs/tasks/administer-cluster/pod3.yaml
+++ b/cn/docs/tasks/administer-cluster/pod3.yaml
@@ -8,4 +8,4 @@ spec:
   schedulerName: my-scheduler
   containers:
   - name: pod-with-second-annotation-container
-    image: gcr.io/google_containers/pause:2.0
+    image: k8s.gcr.io/pause:2.0
diff --git a/cn/docs/tasks/configure-pod-container/exec-liveness.yaml b/cn/docs/tasks/configure-pod-container/exec-liveness.yaml
index 7b04a5eb8d..1ecb6cc25f 100644
--- a/cn/docs/tasks/configure-pod-container/exec-liveness.yaml
+++ b/cn/docs/tasks/configure-pod-container/exec-liveness.yaml
@@ -15,7 +15,7 @@ spec:
     - -c
     - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
-    image: gcr.io/google_containers/busybox
+    image: k8s.gcr.io/busybox
     livenessProbe:
       exec:
diff --git a/cn/docs/tasks/configure-pod-container/http-liveness.yaml b/cn/docs/tasks/configure-pod-container/http-liveness.yaml
index 6381ab3d1a..9d15abcd02 100644
--- a/cn/docs/tasks/configure-pod-container/http-liveness.yaml
+++ b/cn/docs/tasks/configure-pod-container/http-liveness.yaml
@@ -9,7 +9,7 @@ spec:
   - name: liveness
     args:
     - /server
-    image: gcr.io/google_containers/liveness
+    image: k8s.gcr.io/liveness
     livenessProbe:
       httpGet:
         path: /healthz
diff --git a/cn/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml b/cn/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml
index 08065019c5..08fb77ff0f 100644
--- a/cn/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml
+++ b/cn/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml
@@ -7,7 +7,7 @@ metadata:
 spec:
   containers:
   - name: goproxy
-    image: gcr.io/google_containers/goproxy:0.1
+    image: k8s.gcr.io/goproxy:0.1
     ports:
     - containerPort: 8080
     readinessProbe:
diff --git a/cn/docs/tasks/inject-data-application/dapi-envars-container.yaml b/cn/docs/tasks/inject-data-application/dapi-envars-container.yaml
index 8b3b3a39d3..55bd4dd263 100644
--- a/cn/docs/tasks/inject-data-application/dapi-envars-container.yaml
+++ b/cn/docs/tasks/inject-data-application/dapi-envars-container.yaml
@@ -5,7 +5,7 @@ metadata:
 spec:
   containers:
   - name: test-container
-    image: gcr.io/google_containers/busybox:1.24
+    image: k8s.gcr.io/busybox:1.24
     command: [ "sh", "-c"]
     args:
     - while true; do
diff --git a/cn/docs/tasks/inject-data-application/dapi-envars-pod.yaml b/cn/docs/tasks/inject-data-application/dapi-envars-pod.yaml
index 00762373b3..071fa82bb3 100644
--- a/cn/docs/tasks/inject-data-application/dapi-envars-pod.yaml
+++ b/cn/docs/tasks/inject-data-application/dapi-envars-pod.yaml
@@ -5,7 +5,7 @@ metadata:
 spec:
   containers:
   - name: test-container
-    image: gcr.io/google_containers/busybox
+    image: k8s.gcr.io/busybox
     command: [ "sh", "-c"]
     args:
     - while true; do
diff --git a/cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml b/cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml
index 65770f283f..55af44ac1b 100644
--- a/cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml
+++ b/cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml
@@ -5,7 +5,7 @@ metadata:
 spec:
   containers:
   - name: client-container
-    image: gcr.io/google_containers/busybox:1.24
+    image: k8s.gcr.io/busybox:1.24
     command: ["sh", "-c"]
     args:
     - while true; do
diff --git a/cn/docs/tasks/inject-data-application/dapi-volume.yaml b/cn/docs/tasks/inject-data-application/dapi-volume.yaml
index 7126cefae5..864c99d11e 100644
--- a/cn/docs/tasks/inject-data-application/dapi-volume.yaml
+++ b/cn/docs/tasks/inject-data-application/dapi-volume.yaml
@@ -12,7 +12,7 @@ metadata:
 spec:
   containers:
   - name: client-container
-    image: gcr.io/google_containers/busybox
+    image: k8s.gcr.io/busybox
     command: ["sh", "-c"]
     args:
     - while true; do
diff --git a/cn/docs/tasks/manage-gpus/scheduling-gpus.md b/cn/docs/tasks/manage-gpus/scheduling-gpus.md
index 208d01caf1..6aded77023 100644
--- a/cn/docs/tasks/manage-gpus/scheduling-gpus.md
+++ b/cn/docs/tasks/manage-gpus/scheduling-gpus.md
@@ -41,13 +41,13 @@ spec:
   containers:
   - name: gpu-container-1
-    image: gcr.io/google_containers/pause:2.0
+    image: k8s.gcr.io/pause:2.0
     resources:
       limits:
         alpha.kubernetes.io/nvidia-gpu: 2 # requesting 2 GPUs
   - name: gpu-container-2
-    image: gcr.io/google_containers/pause:2.0
+    image: k8s.gcr.io/pause:2.0
     resources:
       limits:
         alpha.kubernetes.io/nvidia-gpu: 3 # requesting 3 GPUs
@@ -141,7 +141,7 @@ metadata:
 spec:
   containers:
   - name: gpu-container-1
-    image: gcr.io/google_containers/pause:2.0
+    image: k8s.gcr.io/pause:2.0
     resources:
       limits:
         alpha.kubernetes.io/nvidia-gpu: 1
diff --git a/cn/docs/tutorials/services/source-ip.md b/cn/docs/tutorials/services/source-ip.md
index 18d0c4f902..92191c5cc4 100644
--- a/cn/docs/tutorials/services/source-ip.md
+++ b/cn/docs/tutorials/services/source-ip.md
@@ -33,7 +33,7 @@ Kubernetes 集群中运行的应用通过抽象的 Service 查找彼此,相互
 你必须拥有一个正常工作的 Kubernetes 1.5 集群,用来运行本文中的示例。该示例使用一个简单的 nginx webserver 回送它接收到的请求的 HTTP 头中的源 IP 地址。你可以像下面这样创建它:

 ```console
-$ kubectl run source-ip-app --image=gcr.io/google_containers/echoserver:1.4
+$ kubectl run source-ip-app --image=k8s.gcr.io/echoserver:1.4
 deployment "source-ip-app" created
 ```
diff --git a/cn/docs/tutorials/stateful-application/basic-stateful-set.md b/cn/docs/tutorials/stateful-application/basic-stateful-set.md
index 049c86de11..5f2b5f9edf 100644
--- a/cn/docs/tutorials/stateful-application/basic-stateful-set.md
+++ b/cn/docs/tutorials/stateful-application/basic-stateful-set.md
@@ -434,7 +434,7 @@ Kubernetes 1.7 版本的 StatefulSet 控制器支持自动更新。更新策略
 Patch `web` StatefulSet 的容器镜像。

 ```shell
-kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"gcr.io/google_containers/nginx-slim:0.7"}]'
+kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"k8s.gcr.io/nginx-slim:0.7"}]'
 "web" patched
 ```
@@ -470,9 +470,9 @@ web-0     1/1       Running   0          3s
 ```shell{% raw %}
 kubectl get pod -l app=nginx -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
-web-0 gcr.io/google_containers/nginx-slim:0.7
-web-1 gcr.io/google_containers/nginx-slim:0.8
-web-2 gcr.io/google_containers/nginx-slim:0.8
+web-0 k8s.gcr.io/nginx-slim:0.7
+web-1 k8s.gcr.io/nginx-slim:0.8
+web-2 k8s.gcr.io/nginx-slim:0.8
 {% endraw %}```

 `web-0` has had its image updated, but `web-0` and `web-1` still have the original
@@ -513,9 +513,9 @@ web-2     1/1       Running   0          36s
 ```shell{% raw %}
 kubectl get pod -l app=nginx -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
-web-0 gcr.io/google_containers/nginx-slim:0.7
-web-1 gcr.io/google_containers/nginx-slim:0.7
-web-2 gcr.io/google_containers/nginx-slim:0.7
+web-0 k8s.gcr.io/nginx-slim:0.7
+web-1 k8s.gcr.io/nginx-slim:0.7
+web-2 k8s.gcr.io/nginx-slim:0.7
 {% endraw %}
 ```
@@ -539,7 +539,7 @@ statefulset "web" patched
 在一个终端窗口中 patch `web` StatefulSet 来再次的改变容器镜像。

 ```shell
-kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"gcr.io/google_containers/nginx-slim:0.8"}]'
+kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"k8s.gcr.io/nginx-slim:0.8"}]'
 statefulset "web" patched
 ```
@@ -589,9 +589,9 @@ StatefulSet 里的 Pod 采用和序号相反的顺序更新。在更新下一个
 ```shell{% raw %}
 for p in 0 1 2; do kubectl get po web-$p --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
-gcr.io/google_containers/nginx-slim:0.8
-gcr.io/google_containers/nginx-slim:0.8
-gcr.io/google_containers/nginx-slim:0.8
+k8s.gcr.io/nginx-slim:0.8
+k8s.gcr.io/nginx-slim:0.8
+k8s.gcr.io/nginx-slim:0.8
 {% endraw %}
 ```
@@ -617,7 +617,7 @@ statefulset "web" patched
 再次 Patch StatefulSet 来改变容器镜像。

 ```shell
-kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"gcr.io/google_containers/nginx-slim:0.7"}]'
+kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"k8s.gcr.io/nginx-slim:0.7"}]'
 statefulset "web" patched
 ```
@@ -646,7 +646,7 @@ web-2     1/1       Running   0          18s
 ```shell{% raw %}
 get po web-2 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'
-gcr.io/google_containers/nginx-slim:0.8
+k8s.gcr.io/nginx-slim:0.8
 {% endraw %}
 ```
@@ -683,7 +683,7 @@ web-2     1/1       Running   0          18s
 ```shell{% raw %}
 kubectl get po web-2 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'
-gcr.io/google_containers/nginx-slim:0.7
+k8s.gcr.io/nginx-slim:0.7
 {% endraw %}
 ```
@@ -721,7 +721,7 @@ web-1     1/1       Running   0          18s
 ```shell{% raw %}
 get po web-1 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'
-gcr.io/google_containers/nginx-slim:0.8
+k8s.gcr.io/nginx-slim:0.8
 {% endraw %}
 ```
@@ -767,9 +767,9 @@ web-0     1/1       Running   0          3s
 ```shell{% raw %}
 for p in 0 1 2; do kubectl get po web-$p --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
-gcr.io/google_containers/nginx-slim:0.7
-gcr.io/google_containers/nginx-slim:0.7
-gcr.io/google_containers/nginx-slim:0.7
+k8s.gcr.io/nginx-slim:0.7
+k8s.gcr.io/nginx-slim:0.7
+k8s.gcr.io/nginx-slim:0.7
 {% endraw %}
 ```
diff --git a/cn/docs/tutorials/stateful-application/web.yaml b/cn/docs/tutorials/stateful-application/web.yaml
index 6c2770082b..e56d43b76b 100644
--- a/cn/docs/tutorials/stateful-application/web.yaml
+++ b/cn/docs/tutorials/stateful-application/web.yaml
@@ -26,7 +26,7 @@ spec:
     spec:
       containers:
      - name: nginx
-        image: gcr.io/google_containers/nginx-slim:0.8
+        image: k8s.gcr.io/nginx-slim:0.8
         ports:
         - containerPort: 80
           name: web
diff --git a/cn/docs/tutorials/stateful-application/webp.yaml b/cn/docs/tutorials/stateful-application/webp.yaml
index 74a71c90ac..948a1c01d9 100644
--- a/cn/docs/tutorials/stateful-application/webp.yaml
+++ b/cn/docs/tutorials/stateful-application/webp.yaml
@@ -27,7 +27,7 @@ spec:
     spec:
       containers:
      - name: nginx
-        image: gcr.io/google_containers/nginx-slim:0.8
+        image: k8s.gcr.io/nginx-slim:0.8
         ports:
         - containerPort: 80
           name: web
diff --git a/docs/admin/cluster-large.md b/docs/admin/cluster-large.md
index b443d42132..21c50531aa 100644
--- a/docs/admin/cluster-large.md
+++ b/docs/admin/cluster-large.md
@@ -86,7 +86,7 @@ For example:
 ```yaml
 containers:
   - name: fluentd-cloud-logging
-    image: gcr.io/google_containers/fluentd-gcp:1.16
+    image: k8s.gcr.io/fluentd-gcp:1.16
     resources:
       limits:
         cpu: 100m
diff --git a/docs/admin/federation/index.md b/docs/admin/federation/index.md
index 9127a1aa36..6936a43b20 100644
--- a/docs/admin/federation/index.md
+++ b/docs/admin/federation/index.md
@@ -87,9 +87,9 @@ images or you can build them yourself from HEAD.
 ### Using official release images

 As part of every Kubernetes release, official release images are pushed to
-`gcr.io/google_containers`. To use the images in this repository, you can
+`k8s.gcr.io`. To use the images in this repository, you can
 set the container image fields in the following configs to point to the
-images in this repository. `gcr.io/google_containers/hyperkube` image
+images in this repository. `k8s.gcr.io/hyperkube` image
 includes the federation-apiserver and federation-controller-manager
 binaries, so you can point the corresponding configs for those components
 to the hyperkube image.
@@ -315,8 +315,8 @@ official release images or you can build from HEAD.

 #### Using official release images

-As part of every release, images are pushed to `gcr.io/google_containers`. To use
-these images, set env var `FEDERATION_PUSH_REPO_BASE=gcr.io/google_containers`
+As part of every release, images are pushed to `k8s.gcr.io`. To use
+these images, set env var `FEDERATION_PUSH_REPO_BASE=k8s.gcr.io`
 This will always use the latest image.
 To use the hyperkube image which includes federation-apiserver and
 federation-controller-manager from a specific release, set the
@@ -345,7 +345,7 @@ Once you have the images, you can run these as pods on your existing kubernetes
 The command to run these pods on an existing GCE cluster will look like:

 ```shell
-$ KUBERNETES_PROVIDER=gce FEDERATION_DNS_PROVIDER=google-clouddns FEDERATION_NAME=myfederation DNS_ZONE_NAME=myfederation.example FEDERATION_PUSH_REPO_BASE=gcr.io/google_containers ./federation/cluster/federation-up.sh
+$ KUBERNETES_PROVIDER=gce FEDERATION_DNS_PROVIDER=google-clouddns FEDERATION_NAME=myfederation DNS_ZONE_NAME=myfederation.example FEDERATION_PUSH_REPO_BASE=k8s.gcr.io ./federation/cluster/federation-up.sh
 ```

 `KUBERNETES_PROVIDER` is the cloud provider.
diff --git a/docs/admin/high-availability/etcd.yaml b/docs/admin/high-availability/etcd.yaml
index 8bcf52b159..364791da6f 100644
--- a/docs/admin/high-availability/etcd.yaml
+++ b/docs/admin/high-availability/etcd.yaml
@@ -5,7 +5,7 @@ metadata:
 spec:
   hostNetwork: true
   containers:
-  - image: gcr.io/google_containers/etcd:3.0.17
+  - image: k8s.gcr.io/etcd:3.0.17
     name: etcd-container
     command:
     - /usr/local/bin/etcd
diff --git a/docs/admin/high-availability/kube-apiserver.yaml b/docs/admin/high-availability/kube-apiserver.yaml
index 33d5cff5cd..057764fc52 100644
--- a/docs/admin/high-availability/kube-apiserver.yaml
+++ b/docs/admin/high-availability/kube-apiserver.yaml
@@ -6,7 +6,7 @@ spec:
   hostNetwork: true
   containers:
   - name: kube-apiserver
-    image: gcr.io/google_containers/kube-apiserver:9680e782e08a1a1c94c656190011bd02
+    image: k8s.gcr.io/kube-apiserver:9680e782e08a1a1c94c656190011bd02
     command:
     - /bin/sh
     - -c
diff --git a/docs/admin/high-availability/kube-controller-manager.yaml b/docs/admin/high-availability/kube-controller-manager.yaml
index 0ecbebb276..ba481fbfc3 100644
--- a/docs/admin/high-availability/kube-controller-manager.yaml
+++ b/docs/admin/high-availability/kube-controller-manager.yaml
@@ -10,7 +10,7 @@ spec:
     - /usr/local/bin/kube-controller-manager --master=127.0.0.1:8080 --cluster-name=e2e-test-bburns --cluster-cidr=10.245.0.0/16 --allocate-node-cidrs=true --cloud-provider=gce --service-account-private-key-file=/srv/kubernetes/server.key --v=2 --leader-elect=true 1>>/var/log/kube-controller-manager.log 2>&1
-    image: gcr.io/google_containers/kube-controller-manager:fda24638d51a48baa13c35337fcd4793
+    image: k8s.gcr.io/kube-controller-manager:fda24638d51a48baa13c35337fcd4793
     livenessProbe:
       httpGet:
         path: /healthz
diff --git a/docs/admin/high-availability/kube-scheduler.yaml b/docs/admin/high-availability/kube-scheduler.yaml
index 40c863da48..b4ef0e466e 100644
--- a/docs/admin/high-availability/kube-scheduler.yaml
+++ b/docs/admin/high-availability/kube-scheduler.yaml
@@ -6,7 +6,7 @@ spec:
   hostNetwork: true
   containers:
   - name: kube-scheduler
-    image: gcr.io/google_containers/kube-scheduler:34d0b8f8b31e27937327961528739bc9
+    image: k8s.gcr.io/kube-scheduler:34d0b8f8b31e27937327961528739bc9
     command:
     - /bin/sh
     - -c
diff --git a/docs/admin/high-availability/podmaster.yaml b/docs/admin/high-availability/podmaster.yaml
index d634225b93..cd20e15b38 100644
--- a/docs/admin/high-availability/podmaster.yaml
+++ b/docs/admin/high-availability/podmaster.yaml
@@ -6,7 +6,7 @@ spec:
   hostNetwork: true
   containers:
   - name: scheduler-elector
-    image: gcr.io/google_containers/podmaster:1.1
+    image: k8s.gcr.io/podmaster:1.1
     command:
     - /podmaster
     - --etcd-servers=http://127.0.0.1:4001
@@ -20,7 +20,7 @@ spec:
     - mountPath: /manifests
       name: manifests
   - name: controller-manager-elector
-    image: gcr.io/google_containers/podmaster:1.1
+    image: k8s.gcr.io/podmaster:1.1
     command:
     - /podmaster
     - --etcd-servers=http://127.0.0.1:4001
diff --git a/docs/admin/limitrange/invalid-pod.yaml b/docs/admin/limitrange/invalid-pod.yaml
index b63f25deba..ecb45dd95f 100644
--- a/docs/admin/limitrange/invalid-pod.yaml
+++ b/docs/admin/limitrange/invalid-pod.yaml
@@ -5,7 +5,7 @@ metadata:
 spec:
   containers:
   - name: kubernetes-serve-hostname
-    image: gcr.io/google_containers/serve_hostname
+    image: k8s.gcr.io/serve_hostname
     resources:
       limits:
         cpu: "3"
diff --git a/docs/admin/limitrange/valid-pod.yaml b/docs/admin/limitrange/valid-pod.yaml
index c1ec54183b..d83e91267a 100644
--- a/docs/admin/limitrange/valid-pod.yaml
+++ b/docs/admin/limitrange/valid-pod.yaml
@@ -7,7 +7,7 @@ metadata:
 spec:
   containers:
   - name: kubernetes-serve-hostname
-    image: gcr.io/google_containers/serve_hostname
+    image: k8s.gcr.io/serve_hostname
     resources:
       limits:
         cpu: "1"
diff --git a/docs/admin/multiple-schedulers/pod1.yaml b/docs/admin/multiple-schedulers/pod1.yaml
index 6cf8fec25a..60cdab226d 100644
--- a/docs/admin/multiple-schedulers/pod1.yaml
+++ b/docs/admin/multiple-schedulers/pod1.yaml
@@ -7,4 +7,4 @@ metadata:
 spec:
   containers:
   - name: pod-with-no-annotation-container
-    image: gcr.io/google_containers/pause:2.0
+    image: k8s.gcr.io/pause:2.0
diff --git a/docs/admin/multiple-schedulers/pod2.yaml b/docs/admin/multiple-schedulers/pod2.yaml
index e1e280ff09..2f065efe65 100644
--- a/docs/admin/multiple-schedulers/pod2.yaml
+++ b/docs/admin/multiple-schedulers/pod2.yaml
@@ -8,4 +8,4 @@ spec:
   schedulerName: default-scheduler
   containers:
   - name: pod-with-default-annotation-container
-    image: gcr.io/google_containers/pause:2.0
+    image: k8s.gcr.io/pause:2.0
diff --git a/docs/admin/multiple-schedulers/pod3.yaml b/docs/admin/multiple-schedulers/pod3.yaml
index 63be0e0aa3..a1b8db3200 100644
--- a/docs/admin/multiple-schedulers/pod3.yaml
+++ b/docs/admin/multiple-schedulers/pod3.yaml
@@ -8,4 +8,4 @@ spec:
   schedulerName: my-scheduler
   containers:
   - name: pod-with-second-annotation-container
-    image: gcr.io/google_containers/pause:2.0
+    image: k8s.gcr.io/pause:2.0
diff --git a/docs/admin/node-conformance.md b/docs/admin/node-conformance.md
index 5c3997fe53..5b6a1297fa 100644
--- a/docs/admin/node-conformance.md
+++ b/docs/admin/node-conformance.md
@@ -48,7 +48,7 @@ other Kubelet flags you may care:
 # $LOG_DIR is the test output path.
 sudo docker run -it --rm --privileged --net=host \
   -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
-  gcr.io/google_containers/node-test:0.2
+  k8s.gcr.io/node-test:0.2
 ```

 ## Running Node Conformance Test for Other Architectures
@@ -71,7 +71,7 @@ regular expression of tests you want to run.
 sudo docker run -it --rm --privileged --net=host \
   -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
   -e FOCUS=MirrorPod \ # Only run MirrorPod test
-  gcr.io/google_containers/node-test:0.2
+  k8s.gcr.io/node-test:0.2
 ```

 To skip specific tests, overwrite the environment variable `SKIP` with the
@@ -81,7 +81,7 @@ regular expression of tests you want to skip.
 sudo docker run -it --rm --privileged --net=host \
   -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
   -e SKIP=MirrorPod \ # Run all conformance tests but skip MirrorPod test
-  gcr.io/google_containers/node-test:0.2
+  k8s.gcr.io/node-test:0.2
 ```

 Node conformance test is a containerized version of [node e2e test](https://github.com/kubernetes/community/blob/{{page.githubbranch}}/contributors/devel/e2e-node-tests.md).
diff --git a/docs/api-reference/v1.9/index.html b/docs/api-reference/v1.9/index.html
index 18cc2917db..c255107dc7 100644
--- a/docs/api-reference/v1.9/index.html
+++ b/docs/api-reference/v1.9/index.html
@@ -56950,7 +56950,7 @@ Appears In:
 names
 string array
-Names by which this image is known. e.g. ["gcr.io/google_containers/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"]
+Names by which this image is known. e.g. ["k8s.gcr.io/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"]
 sizeBytes
 integer
diff --git a/docs/concepts/architecture/nodes.md b/docs/concepts/architecture/nodes.md
index 4cc4424111..27b9e01e30 100644
--- a/docs/concepts/architecture/nodes.md
+++ b/docs/concepts/architecture/nodes.md
@@ -265,7 +265,7 @@ metadata:
 spec:
   containers:
   - name: sleep-forever
-    image: gcr.io/google_containers/pause:0.8.0
+    image: k8s.gcr.io/pause:0.8.0
     resources:
       requests:
         cpu: 100m
diff --git a/docs/concepts/cluster-administration/two-files-counter-pod-agent-sidecar.yaml b/docs/concepts/cluster-administration/two-files-counter-pod-agent-sidecar.yaml
index 9737f13493..b37b616e6f 100644
--- a/docs/concepts/cluster-administration/two-files-counter-pod-agent-sidecar.yaml
+++ b/docs/concepts/cluster-administration/two-files-counter-pod-agent-sidecar.yaml
@@ -22,7 +22,7 @@ spec:
     - name: varlog
       mountPath: /var/log
   - name: count-agent
-    image: gcr.io/google_containers/fluentd-gcp:1.30
+    image: k8s.gcr.io/fluentd-gcp:1.30
     env:
     - name: FLUENTD_ARGS
       value: -c /etc/fluentd-config/fluentd.conf
diff --git a/docs/concepts/configuration/manage-compute-resources-container.md b/docs/concepts/configuration/manage-compute-resources-container.md
index 6e3aceecbf..3fbf5e6ceb 100644
--- a/docs/concepts/configuration/manage-compute-resources-container.md
+++ b/docs/concepts/configuration/manage-compute-resources-container.md
@@ -285,7 +285,7 @@ Conditions:
 Events:
   FirstSeen                         LastSeen                         Count  From                              SubobjectPath                       Reason     Message
   Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700  1      {scheduler }                                                          scheduled  Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f
-  Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700  1      {kubelet kubernetes-node-tf0f}    implicitly required container POD   pulled     Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
+  Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700  1      {kubelet kubernetes-node-tf0f}    implicitly required container POD   pulled     Pod container image "k8s.gcr.io/pause:0.8.0" already present on machine
   Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700  1      {kubelet kubernetes-node-tf0f}    implicitly required container POD   created    Created with docker id 6a41280f516d
   Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700  1      {kubelet kubernetes-node-tf0f}    implicitly required container POD   started    Started with docker id 6a41280f516d
   Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700  1      {kubelet kubernetes-node-tf0f}    spec.containers{simmemleak}         created    Created with docker id 87348f12526a
diff --git a/docs/concepts/configuration/pod-with-node-affinity.yaml b/docs/concepts/configuration/pod-with-node-affinity.yaml
index 7c38e19997..253d2b21ea 100644
--- a/docs/concepts/configuration/pod-with-node-affinity.yaml
+++ b/docs/concepts/configuration/pod-with-node-affinity.yaml
@@ -23,4 +23,4 @@ spec:
             - another-node-label-value
   containers:
   - name: with-node-affinity
-    image: gcr.io/google_containers/pause:2.0
\ No newline at end of file
+    image: k8s.gcr.io/pause:2.0
\ No newline at end of file
diff --git a/docs/concepts/configuration/pod-with-pod-affinity.yaml b/docs/concepts/configuration/pod-with-pod-affinity.yaml
index 3728537d5a..1897af901f 100644
--- a/docs/concepts/configuration/pod-with-pod-affinity.yaml
+++ b/docs/concepts/configuration/pod-with-pod-affinity.yaml
@@ -26,4 +26,4 @@ spec:
         topologyKey: kubernetes.io/hostname
   containers:
   - name: with-pod-affinity
-    image: gcr.io/google_containers/pause:2.0
+    image: k8s.gcr.io/pause:2.0
diff --git a/docs/concepts/configuration/secret.md b/docs/concepts/configuration/secret.md
index ee8f85d4b5..54db877ce6 100644
--- a/docs/concepts/configuration/secret.md
+++ b/docs/concepts/configuration/secret.md
@@ -618,7 +618,7 @@ spec:
       secretName: dotfile-secret
   containers:
   - name: dotfile-test-container
-    image: gcr.io/google_containers/busybox
+    image: k8s.gcr.io/busybox
     command:
     - ls
     - "-l"
diff --git a/docs/concepts/storage/persistent-volumes.md b/docs/concepts/storage/persistent-volumes.md
index 059202db28..50c819fc1e 100644
--- a/docs/concepts/storage/persistent-volumes.md
+++ b/docs/concepts/storage/persistent-volumes.md
@@ -125,7 +125,7 @@ spec:
       path: /any/path/it/will/be/replaced
   containers:
   - name: pv-recycler
-    image: "gcr.io/google_containers/busybox"
+    image: "k8s.gcr.io/busybox"
     command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"]
     volumeMounts:
     - name: vol
diff --git a/docs/concepts/storage/volumes.md b/docs/concepts/storage/volumes.md
index 247c593d75..2c5ccf492d 100644
--- a/docs/concepts/storage/volumes.md
+++ b/docs/concepts/storage/volumes.md
@@ -131,7 +131,7 @@ metadata:
   name: test-ebs
 spec:
   containers:
-  - image: gcr.io/google_containers/test-webserver
+  - image: k8s.gcr.io/test-webserver
     name: test-container
     volumeMounts:
     - mountPath: /test-ebs
@@ -246,7 +246,7 @@ metadata:
   name: test-pd
 spec:
   containers:
-  - image: gcr.io/google_containers/test-webserver
+  - image: k8s.gcr.io/test-webserver
     name: test-container
     volumeMounts:
     - mountPath: /cache
@@ -326,7 +326,7 @@ metadata:
   name: test-pd
 spec:
   containers:
-  - image: gcr.io/google_containers/test-webserver
+  - image: k8s.gcr.io/test-webserver
     name: test-container
     volumeMounts:
     - mountPath: /test-pd
@@ -432,7 +432,7 @@ metadata:
   name: test-pd
 spec:
   containers:
-  - image: gcr.io/google_containers/test-webserver
+  - image: k8s.gcr.io/test-webserver
     name: test-container
     volumeMounts:
     - mountPath: /test-pd
@@ -665,7 +665,7 @@ metadata:
   name: test-portworx-volume-pod
 spec:
   containers:
-  - image: gcr.io/google_containers/test-webserver
+  - image: k8s.gcr.io/test-webserver
     name: test-container
     volumeMounts:
     - mountPath: /mnt
@@ -736,7 +736,7 @@ metadata:
   name: pod-0
 spec:
   containers:
-  - image: gcr.io/google_containers/test-webserver
+  - image: k8s.gcr.io/test-webserver
     name: pod-0
     volumeMounts:
     - mountPath: /test-pd
@@ -866,7 +866,7 @@ metadata:
   name: test-vmdk
 spec:
   containers:
-  - image: gcr.io/google_containers/test-webserver
+  - image: k8s.gcr.io/test-webserver
     name: test-container
     volumeMounts:
     - mountPath: /test-vmdk
diff --git a/docs/concepts/workloads/controllers/statefulset.md b/docs/concepts/workloads/controllers/statefulset.md
index 0fd76d4c8d..8fa4c6f8c7 100644
--- a/docs/concepts/workloads/controllers/statefulset.md
+++ b/docs/concepts/workloads/controllers/statefulset.md
@@ -87,7 +87,7 @@ spec:
       terminationGracePeriodSeconds: 10
       containers:
       - name: nginx
-        image: gcr.io/google_containers/nginx-slim:0.8
+        image: k8s.gcr.io/nginx-slim:0.8
         ports:
         - containerPort: 80
           name: web
diff --git a/docs/concepts/workloads/pods/pod-lifecycle.md b/docs/concepts/workloads/pods/pod-lifecycle.md
index 3bf5f80a53..8e22ec2556 100644
--- a/docs/concepts/workloads/pods/pod-lifecycle.md
+++ b/docs/concepts/workloads/pods/pod-lifecycle.md
@@ -198,7 +198,7 @@ spec:
   containers:
   - args:
     - /server
-    image: gcr.io/google_containers/liveness
+    image: k8s.gcr.io/liveness
     livenessProbe:
       httpGet:
         # when "host" is not defined, "PodIP" will be used
diff --git a/docs/getting-started-guides/fluentd-gcp.yaml b/docs/getting-started-guides/fluentd-gcp.yaml
index a81427bdfc..d212752cca 100644
--- a/docs/getting-started-guides/fluentd-gcp.yaml
+++ b/docs/getting-started-guides/fluentd-gcp.yaml
@@ -14,7 +14,7 @@ spec:
   dnsPolicy: Default
   containers:
   - name: fluentd-cloud-logging
-    image: gcr.io/google_containers/fluentd-gcp:2.0.2
+    image: k8s.gcr.io/fluentd-gcp:2.0.2
     # If fluentd consumes its own logs, the following situation may happen:
     # fluentd fails to send a chunk to the server => writes it to the log =>
     # tries to send this message to the server => fails to send a chunk and so on.
diff --git a/docs/getting-started-guides/minikube.md b/docs/getting-started-guides/minikube.md
index 470a452c11..57fbf6cadb 100644
--- a/docs/getting-started-guides/minikube.md
+++ b/docs/getting-started-guides/minikube.md
@@ -46,7 +46,7 @@ Running pre-create checks...
 Creating machine...
 Starting local Kubernetes cluster...

-$ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
+$ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080
 deployment "hello-minikube" created
 $ kubectl expose deployment hello-minikube --type=NodePort
 service "hello-minikube" exposed
diff --git a/docs/reference/generated/kubefed_init.md b/docs/reference/generated/kubefed_init.md
index b8920cb603..d58736792e 100644
--- a/docs/reference/generated/kubefed_init.md
+++ b/docs/reference/generated/kubefed_init.md
@@ -40,13 +40,13 @@ kubefed init FEDERATION_NAME --host-cluster-context=HOST_CONTEXT
       --dns-provider-config string           Config file path on local file system for configuring DNS provider.
       --dns-zone-name string                 DNS suffix for this federation. Federated Service DNS names are published with this suffix.
       --dry-run                              dry run without sending commands to server.
-      --etcd-image string                    Image to use for etcd server. (default "gcr.io/google_containers/etcd:3.1.10")
+      --etcd-image string                    Image to use for etcd server. (default "k8s.gcr.io/etcd:3.1.10")
       --etcd-persistent-storage              Use persistent volume for etcd. Defaults to 'true'. (default true)
       --etcd-pv-capacity string              Size of persistent volume claim to be used for etcd. (default "10Gi")
       --etcd-pv-storage-class string         The storage class of the persistent volume claim used for etcd. Must be provided if a default storage class is not enabled for the host cluster.
       --federation-system-namespace string   Namespace in the host cluster where the federation system components are installed (default "federation-system")
       --host-cluster-context string          Host cluster context
-      --image string                         Image to use for federation API server and controller manager binaries. (default "gcr.io/google_containers/hyperkube-amd64:v0.0.0-master_$Format:%h$")
+      --image string                         Image to use for federation API server and controller manager binaries. (default "k8s.gcr.io/hyperkube-amd64:v0.0.0-master_$Format:%h$")
       --image-pull-policy string             PullPolicy describes a policy for if/when to pull a container image. The default pull policy is IfNotPresent which will not pull an image if it already exists. (default "IfNotPresent")
       --image-pull-secrets string            Provide secrets that can access the private registry.
       --kubeconfig string                    Path to the kubeconfig file to use for CLI requests.
diff --git a/docs/reference/generated/kubelet.md b/docs/reference/generated/kubelet.md
index 0f3cdf79f1..c97c0bc95c 100644
--- a/docs/reference/generated/kubelet.md
+++ b/docs/reference/generated/kubelet.md
@@ -167,7 +167,7 @@ VolumeScheduling=true|false (ALPHA - default=false)
       --node-status-update-frequency duration   Specifies how often kubelet posts node status to master. Note: be cautious when changing the constant, it must work with nodeMonitorGracePeriod in nodecontroller. (default 10s)
       --oom-score-adj int32                     The oom-score-adj value for kubelet process. Values must be within the range [-1000, 1000] (default -999)
       --pod-cidr string                         The CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the master.
-      --pod-infra-container-image string        The image whose network/ipc namespaces containers in each pod will use. (default "gcr.io/google_containers/pause-amd64:3.0")
+      --pod-infra-container-image string        The image whose network/ipc namespaces containers in each pod will use. (default "k8s.gcr.io/pause-amd64:3.0")
       --pod-manifest-path string                Path to the directory containing pod manifest files to run, or the path to a single pod manifest file. Files starting with dots will be ignored.
       --pods-per-core int32                     Number of Pods per core that can run on this Kubelet. The total number of Pods on this Kubelet cannot exceed max-pods, so max-pods will be used if this calculation results in a larger number of Pods allowed on the Kubelet. A value of 0 disables this limit.
--port int32 The port for the Kubelet to serve on. (default 10250) diff --git a/docs/reference/generated/kubernetes-api/v1.9/index.html b/docs/reference/generated/kubernetes-api/v1.9/index.html index 0f81fe78e8..9097c7be2a 100644 --- a/docs/reference/generated/kubernetes-api/v1.9/index.html +++ b/docs/reference/generated/kubernetes-api/v1.9/index.html @@ -59719,7 +59719,7 @@ Appears In: names
string array -Names by which this image is known. e.g. ["gcr.io/google_containers/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"] +Names by which this image is known. e.g. ["k8s.gcr.io/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"] sizeBytes
integer diff --git a/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_phase_addon_all.md b/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_phase_addon_all.md index edd1b76971..082f3e83cc 100644 --- a/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_phase_addon_all.md +++ b/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_phase_addon_all.md @@ -35,7 +35,7 @@ HighAvailability=true|false (ALPHA - default=false) SelfHosting=true|false (BETA - default=false) StoreCertsInSecrets=true|false (ALPHA - default=false) SupportIPVSProxyMode=true|false (ALPHA - default=false) - --image-repository string Choose a container registry to pull control plane images from (default "gcr.io/google_containers") + --image-repository string Choose a container registry to pull control plane images from (default "k8s.gcr.io") --kubeconfig string The KubeConfig file to use when talking to the cluster (default "/etc/kubernetes/admin.conf") --kubernetes-version string Choose a specific Kubernetes version for the control plane (default "stable-1.8") --pod-network-cidr string The range of IP addresses used for the Pod network diff --git a/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_phase_addon_kube-dns.md b/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_phase_addon_kube-dns.md index 414f2a1b7c..32043b6e91 100644 --- a/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_phase_addon_kube-dns.md +++ b/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_phase_addon_kube-dns.md @@ -24,7 +24,7 @@ HighAvailability=true|false (ALPHA - default=false) SelfHosting=true|false (BETA - default=false) StoreCertsInSecrets=true|false (ALPHA - default=false) SupportIPVSProxyMode=true|false (ALPHA - default=false) - --image-repository string Choose a container registry to pull control plane images from (default "gcr.io/google_containers") + --image-repository string Choose a container registry to pull control plane images from (default 
"k8s.gcr.io") --kubeconfig string The KubeConfig file to use when talking to the cluster (default "/etc/kubernetes/admin.conf") --kubernetes-version string Choose a specific Kubernetes version for the control plane (default "stable-1.8") --service-cidr string The range of IP address used for service VIPs (default "10.96.0.0/12") diff --git a/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_phase_addon_kube-proxy.md b/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_phase_addon_kube-proxy.md index d1bca6ebff..662f20c924 100644 --- a/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_phase_addon_kube-proxy.md +++ b/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_phase_addon_kube-proxy.md @@ -18,7 +18,7 @@ kubeadm alpha phase addon kube-proxy --apiserver-advertise-address string The IP address or DNS name the API server is accessible on --apiserver-bind-port int32 The port the API server is accessible on (default 6443) --config string Path to a kubeadm config file. WARNING: Usage of a configuration file is experimental! 
- --image-repository string Choose a container registry to pull control plane images from (default "gcr.io/google_containers") + --image-repository string Choose a container registry to pull control plane images from (default "k8s.gcr.io") --kubeconfig string The KubeConfig file to use when talking to the cluster (default "/etc/kubernetes/admin.conf") --kubernetes-version string Choose a specific Kubernetes version for the control plane (default "stable-1.8") --pod-network-cidr string The range of IP addresses used for the Pod network diff --git a/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/docs/reference/setup-tools/kubeadm/kubeadm-init.md index 96419b5ee2..1071eb028b 100755 --- a/docs/reference/setup-tools/kubeadm/kubeadm-init.md +++ b/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -156,7 +156,7 @@ More information on custom arguments can be found here: ### Using custom images {#custom-images} -By default, kubeadm pulls images from `gcr.io/google_containers`, unless +By default, kubeadm pulls images from `k8s.gcr.io`, unless the requested Kubernetes version is a CI version. In this case, `gcr.io/kubernetes-ci-images` is used. @@ -164,9 +164,9 @@ You can override this behavior by using [kubeadm with a configuration file](#con Allowed customization are: * To provide an alternative `imageRepository` to be used instead of - `gcr.io/google_containers`. + `k8s.gcr.io`. * To provide a `unifiedControlPlaneImage` to be used instead of different images for control plane components. -* To provide a specific `etcd.image` to be used instead of the image available at`gcr.io/google_containers`. +* To provide a specific `etcd.image` to be used instead of the image available at `k8s.gcr.io`.
### Using custom certificates {#custom-certificates} @@ -370,15 +370,15 @@ For running kubeadm without an internet connection you have to pre-pull the requ | Image Name | v1.8 release branch version | v1.9 release branch version | |----------------------------------------------------------|-----------------------------|-----------------------------| -| gcr.io/google_containers/kube-apiserver-${ARCH} | v1.8.x | v1.9.x | -| gcr.io/google_containers/kube-controller-manager-${ARCH} | v1.8.x | v1.9.x | -| gcr.io/google_containers/kube-scheduler-${ARCH} | v1.8.x | v1.9.x | -| gcr.io/google_containers/kube-proxy-${ARCH} | v1.8.x | v1.9.x | -| gcr.io/google_containers/etcd-${ARCH} | 3.0.17 | 3.1.10 | -| gcr.io/google_containers/pause-${ARCH} | 3.0 | 3.0 | -| gcr.io/google_containers/k8s-dns-sidecar-${ARCH} | 1.14.5 | 1.14.7 | -| gcr.io/google_containers/k8s-dns-kube-dns-${ARCH} | 1.14.5 | 1.14.7 | -| gcr.io/google_containers/k8s-dns-dnsmasq-nanny-${ARCH} | 1.14.5 | 1.14.7 | +| k8s.gcr.io/kube-apiserver-${ARCH} | v1.8.x | v1.9.x | +| k8s.gcr.io/kube-controller-manager-${ARCH} | v1.8.x | v1.9.x | +| k8s.gcr.io/kube-scheduler-${ARCH} | v1.8.x | v1.9.x | +| k8s.gcr.io/kube-proxy-${ARCH} | v1.8.x | v1.9.x | +| k8s.gcr.io/etcd-${ARCH} | 3.0.17 | 3.1.10 | +| k8s.gcr.io/pause-${ARCH} | 3.0 | 3.0 | +| k8s.gcr.io/k8s-dns-sidecar-${ARCH} | 1.14.5 | 1.14.7 | +| k8s.gcr.io/k8s-dns-kube-dns-${ARCH} | 1.14.5 | 1.14.7 | +| k8s.gcr.io/k8s-dns-dnsmasq-nanny-${ARCH} | 1.14.5 | 1.14.7 | Here `v1.8.x` means the "latest patch release of the v1.8 branch". diff --git a/docs/resources-reference/v1.5/index.html b/docs/resources-reference/v1.5/index.html index 36a50de98e..7a19f7f266 100644 --- a/docs/resources-reference/v1.5/index.html +++ b/docs/resources-reference/v1.5/index.html @@ -5643,7 +5643,7 @@ Appears In NodeStatus names
string array -Names by which this image is known. e.g. ["gcr.io/google_containers/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"] +Names by which this image is known. e.g. ["k8s.gcr.io/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"] sizeBytes
integer diff --git a/docs/resources-reference/v1.6/index.html b/docs/resources-reference/v1.6/index.html index fa2c13c0c9..69c9b2c997 100644 --- a/docs/resources-reference/v1.6/index.html +++ b/docs/resources-reference/v1.6/index.html @@ -6169,7 +6169,7 @@ Appears In NodeStatus names
string array -Names by which this image is known. e.g. ["gcr.io/google_containers/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"] +Names by which this image is known. e.g. ["k8s.gcr.io/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"] sizeBytes
integer diff --git a/docs/resources-reference/v1.7/index.html b/docs/resources-reference/v1.7/index.html index 9b6e230640..2d0892a4ec 100644 --- a/docs/resources-reference/v1.7/index.html +++ b/docs/resources-reference/v1.7/index.html @@ -7181,7 +7181,7 @@ Appears In: names
string array -Names by which this image is known. e.g. ["gcr.io/google_containers/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"] +Names by which this image is known. e.g. ["k8s.gcr.io/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"] sizeBytes
integer diff --git a/docs/tasks/access-application-cluster/redis-master.yaml b/docs/tasks/access-application-cluster/redis-master.yaml index 57305a7a35..589de648f5 100644 --- a/docs/tasks/access-application-cluster/redis-master.yaml +++ b/docs/tasks/access-application-cluster/redis-master.yaml @@ -9,7 +9,7 @@ metadata: spec: containers: - name: master - image: gcr.io/google_containers/redis:v1 + image: k8s.gcr.io/redis:v1 env: - name: MASTER value: "true" diff --git a/docs/tasks/administer-cluster/cloud-controller-manager-daemonset-example.yaml b/docs/tasks/administer-cluster/cloud-controller-manager-daemonset-example.yaml index dd92c686ad..b517e2dc48 100644 --- a/docs/tasks/administer-cluster/cloud-controller-manager-daemonset-example.yaml +++ b/docs/tasks/administer-cluster/cloud-controller-manager-daemonset-example.yaml @@ -42,9 +42,9 @@ spec: serviceAccountName: cloud-controller-manager containers: - name: cloud-controller-manager - # for in-tree providers we use gcr.io/google_containers/cloud-controller-manager + # for in-tree providers we use k8s.gcr.io/cloud-controller-manager # this can be replaced with any other image for out-of-tree providers - image: gcr.io/google_containers/cloud-controller-manager:v1.8.0 + image: k8s.gcr.io/cloud-controller-manager:v1.8.0 command: - /usr/local/bin/cloud-controller-manager - --cloud-provider= # Add your own cloud provider here! 
diff --git a/docs/tasks/administer-cluster/dns-horizontal-autoscaler.yaml b/docs/tasks/administer-cluster/dns-horizontal-autoscaler.yaml index f29dd2e275..b427829b5f 100644 --- a/docs/tasks/administer-cluster/dns-horizontal-autoscaler.yaml +++ b/docs/tasks/administer-cluster/dns-horizontal-autoscaler.yaml @@ -13,7 +13,7 @@ spec: spec: containers: - name: autoscaler - image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.0.0 + image: k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.0.0 resources: requests: cpu: "20m" diff --git a/docs/tasks/administer-cluster/pod1.yaml b/docs/tasks/administer-cluster/pod1.yaml index 733aa97d99..560b6aa0fb 100644 --- a/docs/tasks/administer-cluster/pod1.yaml +++ b/docs/tasks/administer-cluster/pod1.yaml @@ -7,4 +7,4 @@ metadata: spec: containers: - name: pod-with-no-annotation-container - image: gcr.io/google_containers/pause:2.0 \ No newline at end of file + image: k8s.gcr.io/pause:2.0 \ No newline at end of file diff --git a/docs/tasks/administer-cluster/pod2.yaml b/docs/tasks/administer-cluster/pod2.yaml index e1e280ff09..2f065efe65 100644 --- a/docs/tasks/administer-cluster/pod2.yaml +++ b/docs/tasks/administer-cluster/pod2.yaml @@ -8,4 +8,4 @@ spec: schedulerName: default-scheduler containers: - name: pod-with-default-annotation-container - image: gcr.io/google_containers/pause:2.0 + image: k8s.gcr.io/pause:2.0 diff --git a/docs/tasks/administer-cluster/pod3.yaml b/docs/tasks/administer-cluster/pod3.yaml index 63be0e0aa3..a1b8db3200 100644 --- a/docs/tasks/administer-cluster/pod3.yaml +++ b/docs/tasks/administer-cluster/pod3.yaml @@ -8,4 +8,4 @@ spec: schedulerName: my-scheduler containers: - name: pod-with-second-annotation-container - image: gcr.io/google_containers/pause:2.0 + image: k8s.gcr.io/pause:2.0 diff --git a/docs/tasks/administer-cluster/pvc-protection.md b/docs/tasks/administer-cluster/pvc-protection.md index 0c1ee061c0..4cc755840d 100644 --- a/docs/tasks/administer-cluster/pvc-protection.md +++ 
b/docs/tasks/administer-cluster/pvc-protection.md @@ -94,7 +94,7 @@ metadata: spec: containers: - name: test-pod - image: gcr.io/google_containers/busybox:1.24 + image: k8s.gcr.io/busybox:1.24 command: - "/bin/sh" args: diff --git a/docs/tasks/configure-pod-container/configure-liveness-readiness-probes.md b/docs/tasks/configure-pod-container/configure-liveness-readiness-probes.md index 86ad07ecd9..253590b16a 100644 --- a/docs/tasks/configure-pod-container/configure-liveness-readiness-probes.md +++ b/docs/tasks/configure-pod-container/configure-liveness-readiness-probes.md @@ -34,7 +34,7 @@ broken states, and cannot recover except by being restarted. Kubernetes provides liveness probes to detect and remedy such situations. In this exercise, you create a Pod that runs a Container based on the -`gcr.io/google_containers/busybox` image. Here is the configuration file for the Pod: +`k8s.gcr.io/busybox` image. Here is the configuration file for the Pod: {% include code.html language="yaml" file="exec-liveness.yaml" ghlink="/docs/tasks/configure-pod-container/exec-liveness.yaml" %} @@ -75,8 +75,8 @@ The output indicates that no liveness probes have failed yet: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0 -23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "gcr.io/google_containers/busybox" -23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "gcr.io/google_containers/busybox" +23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox" +23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox" 23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; 
Security:[seccomp=unconfined] 23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e ``` @@ -94,8 +94,8 @@ probes have failed, and the containers have been killed and recreated. FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0 -36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "gcr.io/google_containers/busybox" -36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "gcr.io/google_containers/busybox" +36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox" +36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox" 36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined] 36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e 2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory @@ -117,7 +117,7 @@ liveness-exec 1/1 Running 1 1m ## Define a liveness HTTP request Another kind of liveness probe uses an HTTP GET request. Here is the configuration -file for a Pod that runs a container based on the `gcr.io/google_containers/liveness` +file for a Pod that runs a container based on the `k8s.gcr.io/liveness` image. 
{% include code.html language="yaml" file="http-liveness.yaml" ghlink="/docs/tasks/configure-pod-container/http-liveness.yaml" %} diff --git a/docs/tasks/configure-pod-container/configure-pod-configmap.md b/docs/tasks/configure-pod-container/configure-pod-configmap.md index 89310d1241..91fdbb0932 100644 --- a/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -38,7 +38,7 @@ This page provides a series of usage examples demonstrating how to configure Pod spec: containers: - name: test-container - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox command: [ "/bin/sh", "-c", "env" ] env: # Define the environment variable @@ -88,7 +88,7 @@ This page provides a series of usage examples demonstrating how to configure Pod spec: containers: - name: test-container - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: SPECIAL_LEVEL_KEY @@ -134,7 +134,7 @@ This page provides a series of usage examples demonstrating how to configure Pod spec: containers: - name: test-container - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox command: [ "/bin/sh", "-c", "env" ] envFrom: - configMapRef: @@ -161,7 +161,7 @@ metadata: spec: containers: - name: test-container - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ] env: - name: SPECIAL_LEVEL_KEY @@ -214,7 +214,7 @@ metadata: spec: containers: - name: test-container - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox command: [ "/bin/sh", "-c", "ls /etc/config/" ] volumeMounts: - name: config-volume @@ -248,7 +248,7 @@ metadata: spec: containers: - name: test-container - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox command: [ "/bin/sh","-c","cat /etc/config/keys" ] volumeMounts: - name: config-volume diff --git 
a/docs/tasks/configure-pod-container/exec-liveness.yaml b/docs/tasks/configure-pod-container/exec-liveness.yaml index 6978ca0069..07bf75f85c 100644 --- a/docs/tasks/configure-pod-container/exec-liveness.yaml +++ b/docs/tasks/configure-pod-container/exec-liveness.yaml @@ -7,7 +7,7 @@ metadata: spec: containers: - name: liveness - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox args: - /bin/sh - -c diff --git a/docs/tasks/configure-pod-container/http-liveness.yaml b/docs/tasks/configure-pod-container/http-liveness.yaml index 1d05e49163..23d37b480a 100644 --- a/docs/tasks/configure-pod-container/http-liveness.yaml +++ b/docs/tasks/configure-pod-container/http-liveness.yaml @@ -7,7 +7,7 @@ metadata: spec: containers: - name: liveness - image: gcr.io/google_containers/liveness + image: k8s.gcr.io/liveness args: - /server livenessProbe: diff --git a/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml b/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml index 08065019c5..08fb77ff0f 100644 --- a/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml +++ b/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml @@ -7,7 +7,7 @@ metadata: spec: containers: - name: goproxy - image: gcr.io/google_containers/goproxy:0.1 + image: k8s.gcr.io/goproxy:0.1 ports: - containerPort: 8080 readinessProbe: diff --git a/docs/tasks/debug-application-cluster/debug-service.md b/docs/tasks/debug-application-cluster/debug-service.md index a2cd6b6e9c..761e8f3725 100644 --- a/docs/tasks/debug-application-cluster/debug-service.md +++ b/docs/tasks/debug-application-cluster/debug-service.md @@ -66,7 +66,7 @@ probably debugging your own `Service` you can substitute your own details, or yo can follow along and get a second data point. 
```shell -$ kubectl run hostnames --image=gcr.io/google_containers/serve_hostname \ +$ kubectl run hostnames --image=k8s.gcr.io/serve_hostname \ --labels=app=hostnames \ --port=9376 \ --replicas=3 @@ -93,7 +93,7 @@ spec: spec: containers: - name: hostnames - image: gcr.io/google_containers/serve_hostname + image: k8s.gcr.io/serve_hostname ports: - containerPort: 9376 protocol: TCP diff --git a/docs/tasks/debug-application-cluster/monitor-node-health.md b/docs/tasks/debug-application-cluster/monitor-node-health.md index 0faad52a33..0578d82c1f 100644 --- a/docs/tasks/debug-application-cluster/monitor-node-health.md +++ b/docs/tasks/debug-application-cluster/monitor-node-health.md @@ -77,7 +77,7 @@ spec: hostNetwork: true containers: - name: node-problem-detector - image: gcr.io/google_containers/node-problem-detector:v0.1 + image: k8s.gcr.io/node-problem-detector:v0.1 securityContext: privileged: true resources: @@ -149,7 +149,7 @@ spec: hostNetwork: true containers: - name: node-problem-detector - image: gcr.io/google_containers/node-problem-detector:v0.1 + image: k8s.gcr.io/node-problem-detector:v0.1 securityContext: privileged: true resources: diff --git a/docs/tasks/inject-data-application/dapi-envars-container.yaml b/docs/tasks/inject-data-application/dapi-envars-container.yaml index 8b3b3a39d3..55bd4dd263 100644 --- a/docs/tasks/inject-data-application/dapi-envars-container.yaml +++ b/docs/tasks/inject-data-application/dapi-envars-container.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: test-container - image: gcr.io/google_containers/busybox:1.24 + image: k8s.gcr.io/busybox:1.24 command: [ "sh", "-c"] args: - while true; do diff --git a/docs/tasks/inject-data-application/dapi-envars-pod.yaml b/docs/tasks/inject-data-application/dapi-envars-pod.yaml index 00762373b3..071fa82bb3 100644 --- a/docs/tasks/inject-data-application/dapi-envars-pod.yaml +++ b/docs/tasks/inject-data-application/dapi-envars-pod.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - 
name: test-container - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox command: [ "sh", "-c"] args: - while true; do diff --git a/docs/tasks/inject-data-application/dapi-volume-resources.yaml b/docs/tasks/inject-data-application/dapi-volume-resources.yaml index 65770f283f..55af44ac1b 100644 --- a/docs/tasks/inject-data-application/dapi-volume-resources.yaml +++ b/docs/tasks/inject-data-application/dapi-volume-resources.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: client-container - image: gcr.io/google_containers/busybox:1.24 + image: k8s.gcr.io/busybox:1.24 command: ["sh", "-c"] args: - while true; do diff --git a/docs/tasks/inject-data-application/dapi-volume.yaml b/docs/tasks/inject-data-application/dapi-volume.yaml index 7126cefae5..864c99d11e 100644 --- a/docs/tasks/inject-data-application/dapi-volume.yaml +++ b/docs/tasks/inject-data-application/dapi-volume.yaml @@ -12,7 +12,7 @@ metadata: spec: containers: - name: client-container - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox command: ["sh", "-c"] args: - while true; do diff --git a/docs/tasks/manage-gpus/scheduling-gpus.md b/docs/tasks/manage-gpus/scheduling-gpus.md index 7e396d51fc..4ad2126b40 100644 --- a/docs/tasks/manage-gpus/scheduling-gpus.md +++ b/docs/tasks/manage-gpus/scheduling-gpus.md @@ -36,13 +36,13 @@ spec: containers: - name: gpu-container-1 - image: gcr.io/google_containers/pause:2.0 + image: k8s.gcr.io/pause:2.0 resources: limits: alpha.kubernetes.io/nvidia-gpu: 2 # requesting 2 GPUs - name: gpu-container-2 - image: gcr.io/google_containers/pause:2.0 + image: k8s.gcr.io/pause:2.0 resources: limits: alpha.kubernetes.io/nvidia-gpu: 3 # requesting 3 GPUs @@ -126,7 +126,7 @@ metadata: spec: containers: - name: gpu-container-1 - image: gcr.io/google_containers/pause:2.0 + image: k8s.gcr.io/pause:2.0 resources: limits: alpha.kubernetes.io/nvidia-gpu: 1 diff --git a/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md 
b/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 6fcf333481..d555c8050c 100644 --- a/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -35,7 +35,7 @@ It defines an [index.php](/docs/user-guide/horizontal-pod-autoscaling/image/inde First, we will start a deployment running the image and expose it as a service: ```shell -$ kubectl run php-apache --image=gcr.io/google_containers/hpa-example --requests=cpu=200m --expose --port=80 +$ kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80 service "php-apache" created deployment "php-apache" created ``` diff --git a/docs/tools/kompose/user-guide.md b/docs/tools/kompose/user-guide.md index d8811b618d..5e79c975cf 100644 --- a/docs/tools/kompose/user-guide.md +++ b/docs/tools/kompose/user-guide.md @@ -28,7 +28,7 @@ version: "2" services: redis-master: - image: gcr.io/google_containers/redis:e2e + image: k8s.gcr.io/redis:e2e ports: - "6379" diff --git a/docs/tutorials/services/source-ip.md b/docs/tutorials/services/source-ip.md index c9d872465e..7ee2b364f2 100644 --- a/docs/tutorials/services/source-ip.md +++ b/docs/tutorials/services/source-ip.md @@ -33,7 +33,7 @@ document. The examples use a small nginx webserver that echoes back the source IP of requests it receives through an HTTP header. 
You can create it as follows: ```console -$ kubectl run source-ip-app --image=gcr.io/google_containers/echoserver:1.4 +$ kubectl run source-ip-app --image=k8s.gcr.io/echoserver:1.4 deployment "source-ip-app" created ``` diff --git a/docs/tutorials/stateful-application/basic-stateful-set.md b/docs/tutorials/stateful-application/basic-stateful-set.md index 076829f2bd..2ef3ea627d 100644 --- a/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/docs/tutorials/stateful-application/basic-stateful-set.md @@ -465,7 +465,7 @@ In one terminal window, patch the `web` StatefulSet to change the container image again. ```shell -kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"gcr.io/google_containers/nginx-slim:0.8"}]' +kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"k8s.gcr.io/nginx-slim:0.8"}]' statefulset "web" patched ``` @@ -522,9 +522,9 @@ Get the Pods to view their container images. ```shell{% raw %} for p in 0 1 2; do kubectl get po web-$p --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done -gcr.io/google_containers/nginx-slim:0.8 -gcr.io/google_containers/nginx-slim:0.8 -gcr.io/google_containers/nginx-slim:0.8 +k8s.gcr.io/nginx-slim:0.8 +k8s.gcr.io/nginx-slim:0.8 +k8s.gcr.io/nginx-slim:0.8 {% endraw %} ``` @@ -549,7 +549,7 @@ statefulset "web" patched Patch the StatefulSet again to change the container's image. ```shell -kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"gcr.io/google_containers/nginx-slim:0.7"}]' +kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"k8s.gcr.io/nginx-slim:0.7"}]' statefulset "web" patched ``` @@ -575,7 +575,7 @@ Get the Pod's container. 
```shell{% raw %} kubectl get po web-2 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}' -gcr.io/google_containers/nginx-slim:0.8 +k8s.gcr.io/nginx-slim:0.8 {% endraw %} ``` @@ -610,7 +610,7 @@ Get the Pod's container. ```shell{% raw %} kubectl get po web-2 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}' -gcr.io/google_containers/nginx-slim:0.7 +k8s.gcr.io/nginx-slim:0.7 {% endraw %} ``` @@ -646,7 +646,7 @@ Get the `web-1` Pods container. ```shell{% raw %} kubectl get po web-1 --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}' -gcr.io/google_containers/nginx-slim:0.8 +k8s.gcr.io/nginx-slim:0.8 {% endraw %} ``` `web-1` was restored to its original configuration because the Pod's ordinal @@ -695,9 +695,9 @@ Get the Pod's containers. ```shell{% raw %} for p in 0 1 2; do kubectl get po web-$p --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done -gcr.io/google_containers/nginx-slim:0.7 -gcr.io/google_containers/nginx-slim:0.7 -gcr.io/google_containers/nginx-slim:0.7 +k8s.gcr.io/nginx-slim:0.7 +k8s.gcr.io/nginx-slim:0.7 +k8s.gcr.io/nginx-slim:0.7 {% endraw %} ``` diff --git a/docs/tutorials/stateful-application/web.yaml b/docs/tutorials/stateful-application/web.yaml index 2e9f541460..37c1fabf9c 100644 --- a/docs/tutorials/stateful-application/web.yaml +++ b/docs/tutorials/stateful-application/web.yaml @@ -29,7 +29,7 @@ spec: spec: containers: - name: nginx - image: gcr.io/google_containers/nginx-slim:0.8 + image: k8s.gcr.io/nginx-slim:0.8 ports: - containerPort: 80 name: web diff --git a/docs/tutorials/stateful-application/webp.yaml b/docs/tutorials/stateful-application/webp.yaml index 108e670a62..4eab2dc206 100644 --- a/docs/tutorials/stateful-application/webp.yaml +++ b/docs/tutorials/stateful-application/webp.yaml @@ -30,7 +30,7 @@ spec: spec: containers: - name: nginx - image: gcr.io/google_containers/nginx-slim:0.8 + image: k8s.gcr.io/nginx-slim:0.8 ports: - containerPort: 80 name: 
web diff --git a/docs/tutorials/stateful-application/zookeeper.yaml b/docs/tutorials/stateful-application/zookeeper.yaml index e3929613d2..4afa806a54 100644 --- a/docs/tutorials/stateful-application/zookeeper.yaml +++ b/docs/tutorials/stateful-application/zookeeper.yaml @@ -68,7 +68,7 @@ spec: containers: - name: kubernetes-zookeeper imagePullPolicy: Always - image: "gcr.io/google_containers/kubernetes-zookeeper:1.0-3.4.10" + image: "k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10" resources: requests: memory: "1Gi" diff --git a/docs/tutorials/stateless-application/guestbook/redis-master-deployment.yaml b/docs/tutorials/stateless-application/guestbook/redis-master-deployment.yaml index 685bce7bd7..07d94594eb 100644 --- a/docs/tutorials/stateless-application/guestbook/redis-master-deployment.yaml +++ b/docs/tutorials/stateless-application/guestbook/redis-master-deployment.yaml @@ -18,7 +18,7 @@ spec: spec: containers: - name: master - image: gcr.io/google_containers/redis:e2e # or just image: redis + image: k8s.gcr.io/redis:e2e # or just image: redis resources: requests: cpu: 100m diff --git a/docs/user-guide/configmap/command-pod.yaml b/docs/user-guide/configmap/command-pod.yaml index 444b4beb66..a9d7411ba6 100644 --- a/docs/user-guide/configmap/command-pod.yaml +++ b/docs/user-guide/configmap/command-pod.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: test-container - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox command: [ "/bin/sh", "-c", "echo $(KUBE_CONFIG_1) $(KUBE_CONFIG_2)" ] env: - name: KUBE_CONFIG_1 diff --git a/docs/user-guide/configmap/env-pod.yaml b/docs/user-guide/configmap/env-pod.yaml index fe0036e0b2..1fd9ca6702 100644 --- a/docs/user-guide/configmap/env-pod.yaml +++ b/docs/user-guide/configmap/env-pod.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: test-container - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: KUBE_CONFIG_1 diff --git 
a/docs/user-guide/configmap/mount-file-pod.yaml b/docs/user-guide/configmap/mount-file-pod.yaml index 7efd9b4003..f3ddf3aedb 100644 --- a/docs/user-guide/configmap/mount-file-pod.yaml +++ b/docs/user-guide/configmap/mount-file-pod.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: test-container - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox command: [ "/bin/sh", "-c", "cat /etc/special-key" ] volumeMounts: - name: config-volume diff --git a/docs/user-guide/configmap/volume-pod.yaml b/docs/user-guide/configmap/volume-pod.yaml index c34332e976..0871f69739 100644 --- a/docs/user-guide/configmap/volume-pod.yaml +++ b/docs/user-guide/configmap/volume-pod.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: test-container - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox command: [ "/bin/sh", "-c", "cat /etc/config/path/to/special-key" ] volumeMounts: - name: config-volume diff --git a/docs/user-guide/downward-api/dapi-container-resources.yaml b/docs/user-guide/downward-api/dapi-container-resources.yaml index 2a3abb0145..c6c6fc29fc 100644 --- a/docs/user-guide/downward-api/dapi-container-resources.yaml +++ b/docs/user-guide/downward-api/dapi-container-resources.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: test-container - image: gcr.io/google_containers/busybox:1.24 + image: k8s.gcr.io/busybox:1.24 command: [ "/bin/sh", "-c", "env" ] resources: requests: diff --git a/docs/user-guide/downward-api/dapi-pod.yaml b/docs/user-guide/downward-api/dapi-pod.yaml index 5de0260bfc..06b044bbb6 100644 --- a/docs/user-guide/downward-api/dapi-pod.yaml +++ b/docs/user-guide/downward-api/dapi-pod.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: test-container - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_NODE_NAME diff --git a/docs/user-guide/downward-api/volume/dapi-volume-resources.yaml 
b/docs/user-guide/downward-api/volume/dapi-volume-resources.yaml index f28bb99e3f..27b8a01c48 100644 --- a/docs/user-guide/downward-api/volume/dapi-volume-resources.yaml +++ b/docs/user-guide/downward-api/volume/dapi-volume-resources.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: client-container - image: gcr.io/google_containers/busybox:1.24 + image: k8s.gcr.io/busybox:1.24 command: ["sh", "-c", "while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done"] resources: requests: diff --git a/docs/user-guide/downward-api/volume/dapi-volume.yaml b/docs/user-guide/downward-api/volume/dapi-volume.yaml index be926498d1..4db64c6586 100644 --- a/docs/user-guide/downward-api/volume/dapi-volume.yaml +++ b/docs/user-guide/downward-api/volume/dapi-volume.yaml @@ -12,7 +12,7 @@ metadata: spec: containers: - name: client-container - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox command: ["sh", "-c", "while true; do if [[ -e /etc/labels ]]; then cat /etc/labels; fi; if [[ -e /etc/annotations ]]; then cat /etc/annotations; fi; sleep 5; done"] volumeMounts: - name: podinfo diff --git a/docs/user-guide/liveness/exec-liveness.yaml b/docs/user-guide/liveness/exec-liveness.yaml index 7b2af47c03..44d7c1cbc6 100644 --- a/docs/user-guide/liveness/exec-liveness.yaml +++ b/docs/user-guide/liveness/exec-liveness.yaml @@ -10,7 +10,7 @@ spec: - /bin/sh - -c - echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600 - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox livenessProbe: exec: command: diff --git a/docs/user-guide/liveness/http-liveness-named-port.yaml b/docs/user-guide/liveness/http-liveness-named-port.yaml index 782dfbe339..0449774d47 100644 --- a/docs/user-guide/liveness/http-liveness-named-port.yaml +++ 
b/docs/user-guide/liveness/http-liveness-named-port.yaml @@ -8,7 +8,7 @@ spec: containers: - args: - /server - image: gcr.io/google_containers/liveness + image: k8s.gcr.io/liveness ports: - name: liveness-port containerPort: 8080 diff --git a/docs/user-guide/liveness/http-liveness.yaml b/docs/user-guide/liveness/http-liveness.yaml index 0f574509e9..57eef7d21b 100644 --- a/docs/user-guide/liveness/http-liveness.yaml +++ b/docs/user-guide/liveness/http-liveness.yaml @@ -8,7 +8,7 @@ spec: containers: - args: - /server - image: gcr.io/google_containers/liveness + image: k8s.gcr.io/liveness livenessProbe: httpGet: path: /healthz diff --git a/docs/user-guide/node-selection/pod-with-node-affinity.yaml b/docs/user-guide/node-selection/pod-with-node-affinity.yaml index 7c38e19997..253d2b21ea 100644 --- a/docs/user-guide/node-selection/pod-with-node-affinity.yaml +++ b/docs/user-guide/node-selection/pod-with-node-affinity.yaml @@ -23,4 +23,4 @@ spec: - another-node-label-value containers: - name: with-node-affinity - image: gcr.io/google_containers/pause:2.0 \ No newline at end of file + image: k8s.gcr.io/pause:2.0 \ No newline at end of file diff --git a/docs/user-guide/node-selection/pod-with-pod-affinity.yaml b/docs/user-guide/node-selection/pod-with-pod-affinity.yaml index 3728537d5a..1897af901f 100644 --- a/docs/user-guide/node-selection/pod-with-pod-affinity.yaml +++ b/docs/user-guide/node-selection/pod-with-pod-affinity.yaml @@ -26,4 +26,4 @@ spec: topologyKey: kubernetes.io/hostname containers: - name: with-pod-affinity - image: gcr.io/google_containers/pause:2.0 + image: k8s.gcr.io/pause:2.0 diff --git a/docs/user-guide/secrets/secret-env-pod.yaml b/docs/user-guide/secrets/secret-env-pod.yaml index a5d9c0ff75..d93d4095e1 100644 --- a/docs/user-guide/secrets/secret-env-pod.yaml +++ b/docs/user-guide/secrets/secret-env-pod.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: test-container - image: gcr.io/google_containers/busybox + image: k8s.gcr.io/busybox 
command: [ "/bin/sh", "-c", "env" ] env: - name: MY_SECRET_DATA diff --git a/docs/user-guide/update-demo/index.md.orig b/docs/user-guide/update-demo/index.md.orig index a45ca72d97..50679b8b69 100644 --- a/docs/user-guide/update-demo/index.md.orig +++ b/docs/user-guide/update-demo/index.md.orig @@ -62,7 +62,7 @@ $ kubectl rolling-update update-demo-nautilus --update-period=10s -f docs/user-g The rolling-update command in kubectl will do 2 things: -1. Create a new [replication controller](/docs/user-guide/replication-controller/) with a pod template that uses the new image (`gcr.io/google_containers/update-demo:kitten`) +1. Create a new [replication controller](/docs/user-guide/replication-controller/) with a pod template that uses the new image (`k8s.gcr.io/update-demo:kitten`) 2. Scale the old and new replication controllers until the new controller replaces the old. This will kill the current pods one at a time, spinning up new ones to replace them. Watch the [demo website](http://localhost:8001/static/index.html), it will update one pod every 10 seconds until all of the pods have the new image. 
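The one-pod-at-a-time replacement that `rolling-update` performs can be sketched as a toy model (this simulation is only an illustration of the scaling loop, not part of kubectl):

```python
def rolling_update(replicas):
    """Simulate rolling-update: scale the old replication controller down
    and the new one up, one pod at a time, until the new controller has
    fully replaced the old one."""
    old, new = replicas, 0
    steps = []
    while old > 0:
        old -= 1  # kill one pod running the old image
        new += 1  # spin up a replacement running the new image
        steps.append((old, new))
    return steps

# With 2 replicas the update finishes in 2 steps, ending with all-new pods.
print(rolling_update(2))  # [(1, 1), (0, 2)]
```

Note that the total pod count stays constant at every step, which is why the demo website keeps serving throughout the update.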
diff --git a/docs/user-guide/update-demo/kitten-rc.yaml b/docs/user-guide/update-demo/kitten-rc.yaml index 91f1aa06c3..48b15cc190 100644 --- a/docs/user-guide/update-demo/kitten-rc.yaml +++ b/docs/user-guide/update-demo/kitten-rc.yaml @@ -13,7 +13,7 @@ spec: version: kitten spec: containers: - - image: gcr.io/google_containers/update-demo:kitten + - image: k8s.gcr.io/update-demo:kitten name: update-demo ports: - containerPort: 80 diff --git a/docs/user-guide/update-demo/nautilus-rc.yaml b/docs/user-guide/update-demo/nautilus-rc.yaml index cae1a22c16..7518dea901 100644 --- a/docs/user-guide/update-demo/nautilus-rc.yaml +++ b/docs/user-guide/update-demo/nautilus-rc.yaml @@ -14,7 +14,7 @@ spec: version: nautilus spec: containers: - - image: gcr.io/google_containers/update-demo:nautilus + - image: k8s.gcr.io/update-demo:nautilus name: update-demo ports: - containerPort: 80 From 74dea73a16443320643b75854d291ea5b60f6188 Mon Sep 17 00:00:00 2001 From: dungeonmaster18 Date: Sat, 23 Dec 2017 15:25:28 +0530 Subject: [PATCH 11/47] Fixed deprecated redirect --- docs/concepts/services-networking/ingress.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/services-networking/ingress.md b/docs/concepts/services-networking/ingress.md index 256410d4cc..d33e2d8e71 100644 --- a/docs/concepts/services-networking/ingress.md +++ b/docs/concepts/services-networking/ingress.md @@ -224,7 +224,7 @@ Note that there is a gap between TLS features supported by various Ingress contr ### Loadbalancing -An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced load balancing concepts (e.g.: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://git.k8s.io/contrib/service-loadbalancer). 
With time, we plan to distill load balancing patterns that are applicable cross platform into the Ingress resource. +An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced load balancing concepts (e.g.: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/ingress-nginx/blob/master/docs/catalog.md). With time, we plan to distill load balancing patterns that are applicable cross platform into the Ingress resource. It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as [readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) which allow you to achieve the same end result. Please review the controller specific docs to see how they handle health checks ([nginx](https://git.k8s.io/ingress-nginx/README.md), [GCE](https://git.k8s.io/ingress-gce/README.md#health-checks)). 
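As a sketch of the readiness-probe approach mentioned above, a backend Pod can expose a health endpoint that the controller checks before routing traffic to it. The image, path, and port below are illustrative assumptions, not taken from any real deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ingress-backend
  labels:
    app: backend
spec:
  containers:
  - name: app
    image: nginx            # illustrative backend image
    ports:
    - containerPort: 80
    readinessProbe:         # the Pod only receives traffic once this succeeds
      httpGet:
        path: /healthz      # hypothetical health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```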
From d002c522be3fe84cdedcb199ec74e76e37cf888c Mon Sep 17 00:00:00 2001 From: dungeonmaster18 Date: Sat, 23 Dec 2017 15:52:08 +0530 Subject: [PATCH 12/47] fixed broken link --- docs/setup/independent/create-cluster-kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/setup/independent/create-cluster-kubeadm.md b/docs/setup/independent/create-cluster-kubeadm.md index 8670c5565c..fa95859371 100644 --- a/docs/setup/independent/create-cluster-kubeadm.md +++ b/docs/setup/independent/create-cluster-kubeadm.md @@ -120,7 +120,7 @@ kubeadm init **Notes:** -- Please refer to the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/) if you want to +- Please refer to the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm/) if you want to read more about the flags `kubeadm init` provides. You can also specify a [configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file) instead of using flags. - You need to choose a Pod Network Plugin in the next step. Depending on what From 01f6b1277494e25ba153b23b56e8b8e64abaa8d7 Mon Sep 17 00:00:00 2001 From: spoganshev Date: Sat, 23 Dec 2017 14:44:03 +0200 Subject: [PATCH 13/47] Typo fix help -> helps --- docs/setup/independent/create-cluster-kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/setup/independent/create-cluster-kubeadm.md b/docs/setup/independent/create-cluster-kubeadm.md index 8670c5565c..6af3cc8db7 100644 --- a/docs/setup/independent/create-cluster-kubeadm.md +++ b/docs/setup/independent/create-cluster-kubeadm.md @@ -9,7 +9,7 @@ title: Using kubeadm to Create a Cluster {% capture overview %} -**kubeadm** is a toolkit that help you bootstrap a best-practice Kubernetes +**kubeadm** is a toolkit that helps you bootstrap a best-practice Kubernetes cluster in an easy, reasonably secure and extensible way. It also supports managing [Bootstrap Tokens](#TODO) for you and upgrading/downgrading clusters. 
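For reference, a Bootstrap Token of the kind kubeadm manages is backed by a Secret in the `kube-system` namespace. A minimal sketch follows; the token-id and token-secret values here are placeholders, not real credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # the name must be "bootstrap-token-<token-id>"
  name: bootstrap-token-abcdef
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: abcdef                        # placeholder
  token-secret: 0123456789abcdef          # placeholder
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
```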
From d067660586616926b4112b34999aea97045666ec Mon Sep 17 00:00:00 2001 From: Qiming Date: Sun, 24 Dec 2017 14:55:23 +0800 Subject: [PATCH 14/47] Update persistent-volume.yaml --- _data/glossary/persistent-volume.yaml | 1 + 1 file changed, 1 insertion(+) diff --git a/_data/glossary/persistent-volume.yaml b/_data/glossary/persistent-volume.yaml index f6ed124ad4..005f3dc5d8 100644 --- a/_data/glossary/persistent-volume.yaml +++ b/_data/glossary/persistent-volume.yaml @@ -4,6 +4,7 @@ full-link: /docs/concepts/storage/persistent-volumes/ related: - statefulset - deployment +- persistent-volume-claim - pod tags: - core-object From 2cbe8c3e5862a4c44cb9db3d7e75faa00f064410 Mon Sep 17 00:00:00 2001 From: Qiming Date: Sun, 24 Dec 2017 14:58:57 +0800 Subject: [PATCH 15/47] Update persistent-volume.yaml --- _data/glossary/persistent-volume.yaml | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/_data/glossary/persistent-volume.yaml b/_data/glossary/persistent-volume.yaml index 005f3dc5d8..1f0fe7b528 100644 --- a/_data/glossary/persistent-volume.yaml +++ b/_data/glossary/persistent-volume.yaml @@ -10,6 +10,8 @@ tags: - core-object - storage short-description: > - Persistent Volume is a kind of interface through two resource types, i.e. pv and pvc, and provides pod persistent storage service. + An API object that represents a piece of storage in the cluster. Available as a general, pluggable resource that persists beyond the lifecycle of any individual {% glossary_tooltip term_id="pod" %}. long-description: | - Persistent Volume subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. + PersistentVolumes (PVs) provide an API that abstracts details of how storage is provided from how it is consumed. + PVs are used directly in scenarios where storage can be created ahead of time (static provisioning).
+ For scenarios that require on-demand storage (dynamic provisioning), PersistentVolumeClaims (PVCs) are used instead. From d9406efa588e201112e7c3899f39c282c71a08e8 Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Sun, 24 Dec 2017 16:19:29 +0800 Subject: [PATCH 16/47] Fix sample yaml used for serviceAccountName Closes: #2333 --- docs/admin/authentication.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/admin/authentication.md b/docs/admin/authentication.md index 2dc4e9fd58..8d3bd1885f 100644 --- a/docs/admin/authentication.md +++ b/docs/admin/authentication.md @@ -196,10 +196,10 @@ spec: metadata: # ... spec: + serviceAccountName: bob-the-bot containers: - name: nginx image: nginx:1.7.9 - serviceAccountName: bob-the-bot ``` Service account bearer tokens are perfectly valid to use outside the cluster and From d5bd93ebcb0301fa4bb78aaf3dc175d3e8b76621 Mon Sep 17 00:00:00 2001 From: Yang Li Date: Tue, 5 Dec 2017 17:58:05 +0800 Subject: [PATCH 17/47] remove unnecessary indents --- docs/concepts/storage/persistent-volumes.md | 36 ++++++++++----------- 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/docs/concepts/storage/persistent-volumes.md b/docs/concepts/storage/persistent-volumes.md index 50c819fc1e..bc7a57c913 100644 --- a/docs/concepts/storage/persistent-volumes.md +++ b/docs/concepts/storage/persistent-volumes.md @@ -218,24 +218,24 @@ resizing to take place. Also, file system resizing is only supported for followi Each PV contains a spec and status, which is the specification and status of the volume. 
```yaml - apiVersion: v1 - kind: PersistentVolume - metadata: - name: pv0003 - spec: - capacity: - storage: 5Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Recycle - storageClassName: slow - mountOptions: - - hard - - nfsvers=4.1 - nfs: - path: /tmp - server: 172.17.0.2 +apiVersion: v1 +kind: PersistentVolume +metadata: + name: pv0003 +spec: + capacity: + storage: 5Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Recycle + storageClassName: slow + mountOptions: + - hard + - nfsvers=4.1 + nfs: + path: /tmp + server: 172.17.0.2 ``` ### Capacity From 0bd269fe2e8c97fc62caa8dfe25d8bc11e1b4f37 Mon Sep 17 00:00:00 2001 From: fabriziopandini Date: Tue, 26 Dec 2017 10:54:07 +0100 Subject: [PATCH 18/47] small-fix kubeadm doc --- _redirects | 1 + .../setup-tools/kubeadm/kubeadm-alpha.md | 2 + .../setup-tools/kubeadm/kubeadm-init.md | 25 +++++++++++ .../setup-tools/kubeadm/kubeadm-join.md | 7 ++++ .../independent/create-cluster-kubeadm.md | 42 ++++++++++--------- 5 files changed, 57 insertions(+), 20 deletions(-) diff --git a/_redirects b/_redirects index f10c8d57d0..d145789420 100644 --- a/_redirects +++ b/_redirects @@ -443,3 +443,4 @@ https://kubernetes-io-v1-7.netlify.com/* https://v1-7.docs.kubernetes.io/:spl /docs/admin/kubefed_unjoin/ /docs/reference/generated/kubefed_unjoin/ 301 /docs/admin/kubefed_version/ /docs/reference/generated/kubefed_version/ 301 +/docs/reference/generated/kubeadm/ /docs/reference/setup-tools/kubeadm/kubeadm/ 301 diff --git a/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md b/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md index 6e16e6670b..9c8573eb8a 100755 --- a/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md +++ b/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md @@ -224,6 +224,8 @@ Alternatively, you can use [kubeadm config](kubeadm-config.md). 
You can install all the available addons with the `all` subcommand, or install them selectively. +Please note that if kubeadm is invoked with `--feature-gates=CoreDNS`, CoreDNS is installed instead of `kube-dns`. + {% capture addon-all %} {% include_relative generated/kubeadm_alpha_phase_addon_all.md %} {% endcapture %} diff --git a/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/docs/reference/setup-tools/kubeadm/kubeadm-init.md index 96419b5ee2..08c0a0ccc0 100755 --- a/docs/reference/setup-tools/kubeadm/kubeadm-init.md +++ b/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -31,6 +31,13 @@ following steps: API server, each with its own identity, as well as an additional kubeconfig file for administration named `admin.conf`. +1. If kubeadm is invoked with `--feature-gates=DynamicKubeletConfig` enabled, + it writes the kubelet init configuration into the `/var/lib/kubelet/config/init/kubelet` file. + See [Set Kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file.md) + and [Reconfigure a Node's Kubelet in a Live Cluster](/docs/tasks/administer-cluster/reconfigure-kubelet.md) + for more information about Dynamic Kubelet Configuration. + This functionality is now by default disabled as it is behind a feature gate, but is expected to be a default in future versions. + 1. Generates static Pod manifests for the API server, controller manager and scheduler. In case an external etcd is not provided, an additional static Pod manifest are generated for etcd. @@ -40,6 +47,12 @@ following steps: Once control plane Pods are up and running, the `kubeadm init` sequence can continue. +1. If kubeadm is invoked with `--feature-gates=DynamicKubeletConfig` enabled, + it completes the kubelet dynamic configuration by creating a ConfigMap and some RBAC rules that enable + kubelets to access to it, and updates the node by pointing `Node.spec.configSource` to the + newly-created ConfigMap. 
+ This functionality is currently disabled by default because it is behind a feature gate, but it is expected to become the default in future versions. + 1. Apply labels and taints to the master node so that no additional workloads will run there. @@ -120,6 +133,18 @@ controllerManagerExtraArgs: : : schedulerExtraArgs: : : +apiServerExtraVolumes: +- name: + hostPath: + mountPath: +controllerManagerExtraVolumes: +- name: + hostPath: + mountPath: +schedulerExtraVolumes: +- name: + hostPath: + mountPath: apiServerCertSANs: - - diff --git a/docs/reference/setup-tools/kubeadm/kubeadm-join.md b/docs/reference/setup-tools/kubeadm/kubeadm-join.md index 50ff3a253c..c1c8e8354e 100755 --- a/docs/reference/setup-tools/kubeadm/kubeadm-join.md +++ b/docs/reference/setup-tools/kubeadm/kubeadm-join.md @@ -21,6 +21,13 @@ This action consists of the following steps: authenticity of that data. The root CA can also be discovered directly via a file or URL. +1. If kubeadm is invoked with `--feature-gates=DynamicKubeletConfig` enabled, + it first retrieves the kubelet init configuration from the master and writes it to + the disk. When kubelet starts up, kubeadm updates the `Node.spec.configSource` property of the node. + See [Set Kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file.md) + and [Reconfigure a Node's Kubelet in a Live Cluster](/docs/tasks/administer-cluster/reconfigure-kubelet.md) + for more information about Dynamic Kubelet Configuration. + 1. Once the cluster information is known, kubelet can start the TLS bootstrapping process.
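To illustrate the `Node.spec.configSource` mechanism these steps describe, the Node object ends up pointing at the kubelet ConfigMap roughly as below. The field names for this alpha feature changed between releases, and the node and ConfigMap names are hypothetical, so treat this as a sketch only:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: my-node              # hypothetical node name
spec:
  configSource:
    configMapRef:            # alpha-era field name; later versions renamed it
      name: kubelet-config   # hypothetical ConfigMap created by kubeadm
      namespace: kube-system
```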
diff --git a/docs/setup/independent/create-cluster-kubeadm.md b/docs/setup/independent/create-cluster-kubeadm.md index 8670c5565c..12b1b975e3 100644 --- a/docs/setup/independent/create-cluster-kubeadm.md +++ b/docs/setup/independent/create-cluster-kubeadm.md @@ -11,10 +11,10 @@ title: Using kubeadm to Create a Cluster **kubeadm** is a toolkit that help you bootstrap a best-practice Kubernetes cluster in an easy, reasonably secure and extensible way. It also supports -managing [Bootstrap Tokens](#TODO) for you and upgrading/downgrading clusters. +managing [Bootstrap Tokens](/docs/admin/bootstrap-tokens/) for you and upgrading/downgrading clusters. kubeadm aims to set up a minimum viable cluster that pass the -[Kubernetes Conformance tests](#TODO), but installing other addons than +[Kubernetes Conformance tests](http://blog.kubernetes.io/2017/10/software-conformance-certification.html), but installing other addons than really necessary for a functional cluster is out of scope. It by design does not install a networking solution for you, which means you @@ -26,27 +26,29 @@ matter, can be a Linux laptop, virtual machine, physical/cloud server or Raspberry Pi. This makes kubeadm well suited to integrate with provisioning systems of different kinds (e.g. Terraform, Ansible, etc.). -kubeadm is designed to be a good way for new users to start trying -Kubernetes out, possibly for the first time, an way for existing users to -test their application on and stich together a cluster easily and to be -a building block in a larger ecosystem and/or installer tool with a larger +kubeadm is designed to be a simple way for new users to start trying +Kubernetes out, possibly for the first time, a way for existing users to +test their application on and stitch together a cluster easily, and also to be +a building block in other ecosystems and/or installer tools with a larger scope. You can install _kubeadm_ very easily on operating systems that support installing deb or rpm packages.
The responsible SIG for kubeadm, -[SIG Cluster Lifecycle](#TODO), provides these packages pre-built for you, +[SIG Cluster Lifecycle](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle), provides these packages pre-built for you, but you may also on other OSes. ### kubeadm Maturity -| Area | Maturity Level | -|-----------------|--------------- | -| Command line UX | beta | -| Implementation | beta | -| Config file API | alpha | -| Self-hosting | alpha | -| `kubeadm alpha` | alpha | +| Area | Maturity Level | +|---------------------------|--------------- | +| Command line UX | beta | +| Implementation | beta | +| Config file API | alpha | +| Self-hosting | alpha | +| kubeadm alpha subcommands | alpha | +| CoreDNS | alpha | +| DynamicKubeletConfig | alpha | kubeadm's overall feature state is **Beta** and will soon be graduated to @@ -64,12 +66,12 @@ period a patch release may be issued from the release branch if a severe bug or security issue is found. Here are the latest Kubernetes releases and the support timeframe; which also applies to `kubeadm`. 
-| Kubernetes version | Release date | End-of-life-month | -|--------------------|--------------|-------------------| -| v1.6.x | TODO | December 2017 | -| v1.7.x | TODO | March 2018 | -| v1.8.x | TODO | June 2018 | -| v1.9.x | TODO        | September 2018   | +| Kubernetes version | Release month | End-of-life-month | +|--------------------|----------------|-------------------| +| v1.6.x | March 2017 | December 2017 | +| v1.7.x | June 2017 | March 2018 | +| v1.8.x | September 2017 | June 2018 | +| v1.9.x | December 2017 | September 2018   | {% endcapture %} From d4428df4886b06da9ba34d5ab6596fd683918bf7 Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Wed, 27 Dec 2017 10:57:46 +0800 Subject: [PATCH 19/47] Remove references to source code Closes: #3712 --- .../tasks/administer-federation/replicaset.md | 34 ++++++++++++++++--- 1 file changed, 30 insertions(+), 4 deletions(-) diff --git a/docs/tasks/administer-federation/replicaset.md b/docs/tasks/administer-federation/replicaset.md index ef243d5840..c7cca9f6d4 100644 --- a/docs/tasks/administer-federation/replicaset.md +++ b/docs/tasks/administer-federation/replicaset.md @@ -60,11 +60,37 @@ By default, replicas are spread equally in all the underlying clusters. For exam if you have 3 registered clusters and you create a federated ReplicaSet with `spec.replicas = 9`, then each ReplicaSet in the 3 clusters will have `spec.replicas=3`. -To modify the number of replicas in each cluster, you can specify -[FederatedReplicaSetPreference](https://github.com/kubernetes/federation/blob/{{page.githubbranch}}/apis/federation/types.go) -as an annotation with key `federation.kubernetes.io/replica-set-preferences` -on the federated ReplicaSet. +To modify the number of replicas in each cluster, you can add an annotation with +key `federation.kubernetes.io/replica-set-preferences` to the federated ReplicaSet. 
+The value of the annotation is a serialized JSON object that contains the fields shown in +the following example: +``` +{ + "rebalance": true, + "clusters": { + "foo": { + "minReplicas": 10, + "maxReplicas": 50, + "weight": 100 + }, + "bar": { + "minReplicas": 10, + "maxReplicas": 100, + "weight": 200 + } + } +} +``` + +The `rebalance` boolean field specifies whether replicas already scheduled and running +may be moved in order to match current state to the specified preferences. +The `clusters` object field contains a map where users can specify the constraints +for replica placement across the clusters (`foo` and `bar` in the example). +For each cluster, you can specify the minimum number of replicas that should be +assigned to it (default is zero), the maximum number of replicas the cluster can +accept (default is unbounded) and a number expressing the relative weight of +preferences to place additional replicas in that cluster. ## Updating a Federated ReplicaSet From f598486960a80ffba3e4447bda6b20f23c9d173b Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Wed, 27 Dec 2017 20:39:08 +0800 Subject: [PATCH 20/47] Fix CI gate --- test/examples_test.go | 75 ++++++++++++++++++++++++++----------------- 1 file changed, 45 insertions(+), 30 deletions(-) diff --git a/test/examples_test.go b/test/examples_test.go index 54422a7306..5b6cb8d67a 100644 --- a/test/examples_test.go +++ b/test/examples_test.go @@ -318,7 +318,7 @@ func TestExampleObjectSchemas(t *testing.T) { }, "../docs/concepts/services-networking": { "curlpod": {&extensions.Deployment{}}, - "custom-dns": {&api.Pod{}}, + "custom-dns": {&api.Pod{}}, "hostaliases-pod": {&api.Pod{}}, "ingress": {&extensions.Ingress{}}, "nginx-secure-app": {&api.Service{}, &extensions.Deployment{}}, @@ -343,6 +343,7 @@ func TestExampleObjectSchemas(t *testing.T) { "two-container-pod": {&api.Pod{}}, }, "../docs/tasks/administer-cluster": { + "busybox": {&api.Pod{}}, "cloud-controller-manager-daemonset-example": {&api.ServiceAccount{},
&rbac.ClusterRoleBinding{}, &extensions.DaemonSet{}}, "cpu-constraints": {&api.LimitRange{}}, "cpu-constraints-pod": {&api.Pod{}}, @@ -381,35 +382,37 @@ func TestExampleObjectSchemas(t *testing.T) { "quota-pvc-2": {&api.PersistentVolumeClaim{}}, }, "../docs/tasks/configure-pod-container": { - "cpu-request-limit": {&api.Pod{}}, - "cpu-request-limit-2": {&api.Pod{}}, - "exec-liveness": {&api.Pod{}}, - "http-liveness": {&api.Pod{}}, - "init-containers": {&api.Pod{}}, - "lifecycle-events": {&api.Pod{}}, - "mem-limit-range": {&api.LimitRange{}}, - "memory-request-limit": {&api.Pod{}}, - "memory-request-limit-2": {&api.Pod{}}, - "memory-request-limit-3": {&api.Pod{}}, - "oir-pod": {&api.Pod{}}, - "oir-pod-2": {&api.Pod{}}, - "pod": {&api.Pod{}}, - "pod-redis": {&api.Pod{}}, - "private-reg-pod": {&api.Pod{}}, - "projected-volume": {&api.Pod{}}, - "qos-pod": {&api.Pod{}}, - "qos-pod-2": {&api.Pod{}}, - "qos-pod-3": {&api.Pod{}}, - "qos-pod-4": {&api.Pod{}}, - "rq-compute-resources": {&api.ResourceQuota{}}, - "security-context": {&api.Pod{}}, - "security-context-2": {&api.Pod{}}, - "security-context-3": {&api.Pod{}}, - "security-context-4": {&api.Pod{}}, - "task-pv-claim": {&api.PersistentVolumeClaim{}}, - "task-pv-pod": {&api.Pod{}}, - "task-pv-volume": {&api.PersistentVolume{}}, - "tcp-liveness-readiness": {&api.Pod{}}, + "cpu-request-limit": {&api.Pod{}}, + "cpu-request-limit-2": {&api.Pod{}}, + "exec-liveness": {&api.Pod{}}, + "extended-resource-pod": {&api.Pod{}}, + "extended-resource-pod-2": {&api.Pod{}}, + "http-liveness": {&api.Pod{}}, + "init-containers": {&api.Pod{}}, + "lifecycle-events": {&api.Pod{}}, + "mem-limit-range": {&api.LimitRange{}}, + "memory-request-limit": {&api.Pod{}}, + "memory-request-limit-2": {&api.Pod{}}, + "memory-request-limit-3": {&api.Pod{}}, + "oir-pod": {&api.Pod{}}, + "oir-pod-2": {&api.Pod{}}, + "pod": {&api.Pod{}}, + "pod-redis": {&api.Pod{}}, + "private-reg-pod": {&api.Pod{}}, + "projected-volume": {&api.Pod{}}, + "qos-pod": 
{&api.Pod{}}, + "qos-pod-2": {&api.Pod{}}, + "qos-pod-3": {&api.Pod{}}, + "qos-pod-4": {&api.Pod{}}, + "rq-compute-resources": {&api.ResourceQuota{}}, + "security-context": {&api.Pod{}}, + "security-context-2": {&api.Pod{}}, + "security-context-3": {&api.Pod{}}, + "security-context-4": {&api.Pod{}}, + "task-pv-claim": {&api.PersistentVolumeClaim{}}, + "task-pv-pod": {&api.Pod{}}, + "task-pv-volume": {&api.PersistentVolume{}}, + "tcp-liveness-readiness": {&api.Pod{}}, }, "../docs/tasks/debug-application-cluster": { "counter-pod": {&api.Pod{}}, @@ -460,6 +463,7 @@ func TestExampleObjectSchemas(t *testing.T) { "deployment-patch-demo": {&extensions.Deployment{}}, "deployment-scale": {&extensions.Deployment{}}, "deployment-update": {&extensions.Deployment{}}, + "hpa-php-apache": {&autoscaling.HorizontalPodAutoscaler{}}, "mysql-configmap": {&api.ConfigMap{}}, "mysql-deployment": {&api.Service{}, &api.PersistentVolumeClaim{}, &extensions.Deployment{}}, "mysql-services": {&api.Service{}, &api.Service{}}, @@ -602,6 +606,11 @@ func TestExampleObjectSchemas(t *testing.T) { }, } + filesIgnore := map[string]map[string]bool{ + "../docs/tasks/debug-application-cluster": { + "audit-policy": true, + }, + } capabilities.SetForTests(capabilities.Capabilities{ AllowPrivileged: true, }) @@ -612,6 +621,12 @@ func TestExampleObjectSchemas(t *testing.T) { err := walkConfigFiles(path, func(name, path string, docs [][]byte) { expectedTypes, found := expected[name] if !found { + p := filepath.Dir(path) + if files, ok := filesIgnore[p]; ok { + if files[name] { + return + } + } t.Errorf("%s: %s does not have a test case defined", path, name) return } From 4e235945c89b0fc765dcb87dc87a78ce3f0bd9d7 Mon Sep 17 00:00:00 2001 From: Ivan Fraixedes Date: Wed, 27 Dec 2017 19:10:48 +0100 Subject: [PATCH 21/47] docs/tasks: Add hyperkit to hypervisor list Add HyperKit to the list of hypervisors supported on OS X and mark xhyve as deprecated.
--- docs/tasks/tools/install-minikube.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/docs/tasks/tools/install-minikube.md b/docs/tasks/tools/install-minikube.md index 200e6d0e88..495a89c36f 100644 --- a/docs/tasks/tools/install-minikube.md +++ b/docs/tasks/tools/install-minikube.md @@ -23,7 +23,8 @@ If you do not already have a hypervisor installed, install one now. * For OS X, install [xhyve driver](https://git.k8s.io/minikube/docs/drivers.md#xhyve-driver), [VirtualBox](https://www.virtualbox.org/wiki/Downloads), or -[VMware Fusion](https://www.vmware.com/products/fusion). +[VMware Fusion](https://www.vmware.com/products/fusion), or +[HyperKit](https://github.com/moby/hyperkit). * For Linux, install [VirtualBox](https://www.virtualbox.org/wiki/Downloads) or From e2b53d4803cc8ffd4a26a32c895444b6739182e4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E8=92=B2=E4=BF=8A=E5=A5=8710138035?= <10138035@zte.com.cn> Date: Thu, 28 Dec 2017 11:11:13 +0800 Subject: [PATCH 22/47] nit in togeter --- docs/home/contribute/write-new-topic.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/home/contribute/write-new-topic.md b/docs/home/contribute/write-new-topic.md index 95f57d2f3d..ccde10eb9f 100644 --- a/docs/home/contribute/write-new-topic.md +++ b/docs/home/contribute/write-new-topic.md @@ -27,7 +27,7 @@ is the best fit for your content: Tutorial - A tutorial page shows how to accomplish a goal that ties together several Kubernetes features. A tutorial might provide several sequences of steps that readers can actually do as they read the page. Or it might provide explanations of related pieces of code. For example, a tutorial could provide a walkthrough of a code sample. A tutorial can include brief explanations of the Kubernetes features that are being tied togeter, but should link to related concept topics for deep explanations of individual features. 
+ A tutorial page shows how to accomplish a goal that ties together several Kubernetes features. A tutorial might provide several sequences of steps that readers can actually do as they read the page. Or it might provide explanations of related pieces of code. For example, a tutorial could provide a walkthrough of a code sample. A tutorial can include brief explanations of the Kubernetes features that are being tied together, but should link to related concept topics for deep explanations of individual features. From 9b99280a0d1034abfa1145f14770dd2303b88bfc Mon Sep 17 00:00:00 2001 From: Rohit Agarwal Date: Thu, 28 Dec 2017 11:42:27 -0800 Subject: [PATCH 23/47] Rewrote doc on GPU support in Kubernetes. (#6736) --- docs/tasks/manage-gpus/scheduling-gpus.md | 289 +++++++++++++--------- 1 file changed, 173 insertions(+), 116 deletions(-) diff --git a/docs/tasks/manage-gpus/scheduling-gpus.md b/docs/tasks/manage-gpus/scheduling-gpus.md index 4ad2126b40..5b33e93579 100644 --- a/docs/tasks/manage-gpus/scheduling-gpus.md +++ b/docs/tasks/manage-gpus/scheduling-gpus.md @@ -4,154 +4,211 @@ approvers: title: Schedule GPUs --- -{% capture overview %} +Kubernetes includes **experimental** support for managing NVIDIA GPUs spread +across nodes. The support for NVIDIA GPUs was added in v1.6 and has gone through +multiple backwards incompatible iterations. This page describes how users can +consume GPUs across different Kubernetes versions and the current limitations. -Kubernetes includes **experimental** support for managing NVIDIA GPUs spread across nodes. -This page describes how users can consume GPUs and the current limitations. +## v1.6 and v1.7 +To enable GPU support in 1.6 and 1.7, a special **alpha** feature gate +`Accelerators` has to be set to true across the system: +`--feature-gates="Accelerators=true"`. It also requires using the Docker +Engine as the container runtime. -{% endcapture %} +Further, the Kubernetes nodes have to be pre-installed with NVIDIA drivers. 
+Kubelet will not detect NVIDIA GPUs otherwise. -{% capture prerequisites %} - -1. Kubernetes nodes have to be pre-installed with Nvidia drivers. Kubelet will not detect Nvidia GPUs otherwise. Try to re-install Nvidia drivers if kubelet fails to expose Nvidia GPUs as part of Node Capacity. After installing the driver, run `nvidia-docker-plugin` to confirm that all drivers have been loaded. -2. A special **alpha** feature gate `Accelerators` has to be set to true across the system: `--feature-gates="Accelerators=true"`. -3. Nodes must be using `docker engine` as the container runtime. - -The nodes will automatically discover and expose all Nvidia GPUs as a schedulable resource. - -{% endcapture %} - -{% capture steps %} - -## API - -Nvidia GPUs can be consumed via container level resource requirements using the resource name `alpha.kubernetes.io/nvidia-gpu`. - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: gpu-pod -spec: - containers: - - - name: gpu-container-1 - image: k8s.gcr.io/pause:2.0 - resources: - limits: - alpha.kubernetes.io/nvidia-gpu: 2 # requesting 2 GPUs - - - name: gpu-container-2 - image: k8s.gcr.io/pause:2.0 - resources: - limits: - alpha.kubernetes.io/nvidia-gpu: 3 # requesting 3 GPUs -``` +When you start Kubernetes components after all the above conditions are true, +Kubernetes will expose `alpha.kubernetes.io/nvidia-gpu` as a schedulable +resource. +You can consume these GPUs from your containers by requesting +`alpha.kubernetes.io/nvidia-gpu` just like you request `cpu` or `memory`. +However, there are some limitations in how you specify the resource requirements +when using GPUs: - GPUs are only supposed to be specified in the `limits` section, which means: - * You can specify GPU `limits` without specifying `requests` because Kubernetes - will use the limit as the request value by default. - * You can specify GPU in both `limits` and `requests` but these two values must equal. 
+ * You can specify GPU `limits` without specifying `requests` because + Kubernetes will use the limit as the request value by default. + * You can specify GPU in both `limits` and `requests` but these two values + must be equal. * You cannot specify GPU `requests` without specifying `limits`. -- Containers (and pods) do not share GPUs. -- Each container can request one or more GPUs. -- It is not possible to request a portion of a GPU. -- Nodes are expected to be homogenous, i.e. run the same GPU hardware. +- Containers (and pods) do not share GPUs. There's no overcommitting of GPUs. +- Each container can request one or more GPUs. It is not possible to request a + fraction of a GPU. -If your nodes are running different versions of GPUs, then use Node Labels and Node Selectors to schedule pods to appropriate GPUs. -Following is an illustration of this workflow: +When using `alpha.kubernetes.io/nvidia-gpu` as the resource, you also have to +mount host directories containing NVIDIA libraries (libcuda.so, libnvidia.so +etc.) to the container. -As part of your Node bootstrapping, identify the GPU hardware type on your nodes and expose it as a node label. - -```shell -NVIDIA_GPU_NAME=$(nvidia-smi --query-gpu=gpu_name --format=csv,noheader --id=0 | sed -e 's/ /-/g') -source /etc/default/kubelet -KUBELET_OPTS="$KUBELET_OPTS --node-labels='alpha.kubernetes.io/nvidia-gpu-name=$NVIDIA_GPU_NAME'" -echo "KUBELET_OPTS=$KUBELET_OPTS" > /etc/default/kubelet -``` - -Specify the GPU types a pod can use via [Node Affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) rules. 
+Here's an example: ```yaml -kind: pod apiVersion: v1 +kind: Pod metadata: - annotations: - scheduler.alpha.kubernetes.io/affinity: > - { - "nodeAffinity": { - "requiredDuringSchedulingIgnoredDuringExecution": { - "nodeSelectorTerms": [ - { - "matchExpressions": [ - { - "key": "alpha.kubernetes.io/nvidia-gpu-name", - "operator": "In", - "values": ["Tesla K80", "Tesla P100"] - } - ] - } - ] - } - } - } + name: cuda-vector-add spec: + restartPolicy: OnFailure containers: - - - name: gpu-container-1 + - name: cuda-vector-add + # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile + image: "k8s.gcr.io/cuda-vector-add:v0.1" resources: limits: - alpha.kubernetes.io/nvidia-gpu: 2 + alpha.kubernetes.io/nvidia-gpu: 1 # requesting 1 GPU + volumeMounts: + - name: "nvidia-libraries" + mountPath: "/usr/local/nvidia/lib64" + volumes: + - name: "nvidia-libraries" + hostPath: + path: "/usr/lib/nvidia-375" ``` -This will ensure that the pod will be scheduled to a node that has a `Tesla K80` or a `Tesla P100` Nvidia GPU. +The `Accelerators` feature gate and `alpha.kubernetes.io/nvidia-gpu` resource +works on 1.8 and 1.9 as well. It will be deprecated in 1.10 and removed in +1.11. -### Warning +## v1.8 onwards -The API presented here **will change** in an upcoming release to better support GPUs, and hardware accelerators in general, in Kubernetes. +**From 1.8 onwards, the recommended way to consume GPUs is to use [device +plugins](/docs/concepts/cluster-administration/device-plugins).** -## Access to CUDA libraries +To enable GPU support through device plugins, a special **alpha** feature gate +`DevicePlugins` has to be set to true across the system: +`--feature-gates="DevicePlugins=true"`. -As of now, CUDA libraries are expected to be pre-installed on the nodes. +Then you have to install NVIDIA drivers on the nodes and run an NVIDIA GPU device +plugin ([see below](#deploying-nvidia-gpu-device-plugin)). 
-To mitigate this, you can copy the libraries to a more permissive folder in ``/var/lib/`` or change the permissions directly. (Future releases will automatically perform this operation) +When the above conditions are true, Kubernetes will expose `nvidia.com/gpu` as +a schedulable resource. -Pods can access the libraries using `hostPath` volumes. +You can consume these GPUs from your containers by requesting +`nvidia.com/gpu` just like you request `cpu` or `memory`. +However, there are some limitations in how you specify the resource requirements +when using GPUs: +- GPUs are only supposed to be specified in the `limits` section, which means: + * You can specify GPU `limits` without specifying `requests` because + Kubernetes will use the limit as the request value by default. + * You can specify GPU in both `limits` and `requests` but these two values + must be equal. + * You cannot specify GPU `requests` without specifying `limits`. +- Containers (and pods) do not share GPUs. There's no overcommitting of GPUs. +- Each container can request one or more GPUs. It is not possible to request a + fraction of a GPU. + +Unlike with `alpha.kubernetes.io/nvidia-gpu`, when using `nvidia.com/gpu` as +the resource, you don't have to mount any special directories in your pod +specs. The device plugin is expected to inject them automatically in the +container. 
+ +Here's an example: ```yaml -kind: Pod apiVersion: v1 +kind: Pod metadata: - name: gpu-pod + name: cuda-vector-add spec: + restartPolicy: OnFailure containers: - - name: gpu-container-1 - image: k8s.gcr.io/pause:2.0 - resources: - limits: - alpha.kubernetes.io/nvidia-gpu: 1 - volumeMounts: - - mountPath: /usr/local/nvidia/bin - name: bin - - mountPath: /usr/lib/nvidia - name: lib - volumes: - - hostPath: - path: /usr/lib/nvidia-375/bin - name: bin - - hostPath: - path: /usr/lib/nvidia-375 - name: lib + - name: cuda-vector-add + # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile + image: "k8s.gcr.io/cuda-vector-add:v0.1" + resources: + limits: + nvidia.com/gpu: 1 # requesting 1 GPU ``` -## Future +### Deploying NVIDIA GPU device plugin -- Support for hardware accelerators is in its early stages in Kubernetes. -- GPUs and other accelerators will soon be a native compute resource across the system. +There are currently two device plugin implementations for NVIDIA GPUs: + +#### Official NVIDIA GPU device plugin + +The [official NVIDIA GPU device plugin](https://github.com/NVIDIA/k8s-device-plugin) +has the following requirements: +- Kubernetes nodes have to be pre-installed with NVIDIA drivers. +- Kubernetes nodes have to be pre-installed with [nvidia-docker 2.0](https://github.com/NVIDIA/nvidia-docker) +- nvidia-container-runtime must be configured as the [default runtime](https://github.com/NVIDIA/nvidia-docker/wiki/Advanced-topics#default-runtime) + for docker instead of runc. 
+- NVIDIA drivers ~= 361.93
+
+To deploy the NVIDIA device plugin once your cluster is running and the above
+requirements are satisfied:
+
+```
+# For Kubernetes v1.8
+kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.8/nvidia-device-plugin.yml
+
+# For Kubernetes v1.9
+kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.9/nvidia-device-plugin.yml
+```
+
+Report issues with this device plugin to [NVIDIA/k8s-device-plugin](https://github.com/NVIDIA/k8s-device-plugin).
+
+#### NVIDIA GPU device plugin used by GKE/GCE
+
+The [NVIDIA GPU device plugin used by GKE/GCE](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu)
+doesn't require using nvidia-docker and should work with any container runtime
+that is compatible with the Kubernetes Container Runtime Interface (CRI). It's
+tested on [Container-Optimized OS](https://cloud.google.com/container-optimized-os/)
+and has experimental code for Ubuntu from 1.9 onwards.
+
+On your 1.9 cluster, you can use the following commands to install the NVIDIA drivers and device plugin:
+
+```
+# Install NVIDIA drivers on Container-Optimized OS:
+kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/k8s-1.9/daemonset.yaml
+
+# Install NVIDIA drivers on Ubuntu (experimental):
+kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/k8s-1.9/nvidia-driver-installer/ubuntu/daemonset.yaml
+
+# Install the device plugin:
+kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.9/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
+```
+
+Report issues with this device plugin and installation method to [GoogleCloudPlatform/container-engine-accelerators](https://github.com/GoogleCloudPlatform/container-engine-accelerators).
+
+## Clusters containing different types of NVIDIA GPUs
+
+If different nodes in your cluster have different types of NVIDIA GPUs, then you
+can use [Node Labels and Node Selectors](/docs/tasks/configure-pod-container/assign-pods-nodes/)
+to schedule pods to appropriate nodes.
+
+For example:
+
+```shell
+# Label your nodes with the accelerator type they have.
+kubectl label nodes <node-with-k80> accelerator=nvidia-tesla-k80
+kubectl label nodes <node-with-p100> accelerator=nvidia-tesla-p100
+```
+
+Specify the GPU type in the pod spec:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: cuda-vector-add
+spec:
+  restartPolicy: OnFailure
+  containers:
+    - name: cuda-vector-add
+      # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile
+      image: "k8s.gcr.io/cuda-vector-add:v0.1"
+      resources:
+        limits:
+          nvidia.com/gpu: 1
+  nodeSelector:
+    accelerator: nvidia-tesla-p100 # or nvidia-tesla-k80 etc.
+```
+
+This will ensure that the pod will be scheduled to a node that has the GPU type
+you specified.
+
+## Future
+- Support for hardware accelerators in Kubernetes is still in alpha.
 - Better APIs will be introduced to provision and consume accelerators in a scalable manner.
 - Kubernetes will automatically ensure that applications consuming GPUs get the best possible performance.
-- Key usability problems like access to CUDA libraries will be addressed.
- -{% endcapture %} - -{% include templates/task.md %} From d1d65e340d27a7d824d0c31ccf54c48bcfbee6f1 Mon Sep 17 00:00:00 2001 From: alainvanhoof Date: Thu, 28 Dec 2017 22:04:21 +0100 Subject: [PATCH 24/47] Update service.md (#6746) --- docs/concepts/services-networking/service.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/services-networking/service.md b/docs/concepts/services-networking/service.md index ddbca9c9a0..93084f2109 100644 --- a/docs/concepts/services-networking/service.md +++ b/docs/concepts/services-networking/service.md @@ -199,7 +199,7 @@ consistent with the expectation. When access the `service`, traffic will be redirect to one of the backend `pod`. Similar to iptables, Ipvs is based on netfilter hook function, but use hash -table as the underlying data structure and work in the kernal state. +table as the underlying data structure and work in the kernel space. That means ipvs redirects traffic can be much faster, and have much better performance when sync proxy rules. Furthermore, ipvs provides more options for load balancing algorithm, such as: From 0ea526106f10bec79ef1dab2cfb86415f6b566b4 Mon Sep 17 00:00:00 2001 From: craigbox Date: Thu, 28 Dec 2017 21:07:01 +0000 Subject: [PATCH 25/47] Bring the "trademark usage" policy link inline (#6753) Make the 'trademark usage page' a hyperlink, thus removing the display of the raw URL (and condensing the footer from three lines to two.) I assume there is some legalese reason why this has to be on Every Page, but other Collaborative Projects don't have it at all. --- _includes/footer.html | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_includes/footer.html b/_includes/footer.html index c01c5d5e1b..4d15180920 100644 --- a/_includes/footer.html +++ b/_includes/footer.html @@ -28,7 +28,7 @@ © {{ 'now' | date: "%Y" }} The Kubernetes Authors | Documentation Distributed under CC BY 4.0
- Copyright © {{ 'now' | date: "%Y" }} The Linux Foundation®. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page: https://www.linuxfoundation.org/trademark-usage + Copyright © {{ 'now' | date: "%Y" }} The Linux Foundation®. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page
From d8cc4525fdabfce1bb56be743814594446a359b7 Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Thu, 28 Dec 2017 13:08:26 -0800 Subject: [PATCH 26/47] Fix typo in Kubernetes API reference docs. (#6734) --- .../generated/kubernetes-api/v1.9/index.html | 632 +++++++++--------- 1 file changed, 317 insertions(+), 315 deletions(-) diff --git a/docs/reference/generated/kubernetes-api/v1.9/index.html b/docs/reference/generated/kubernetes-api/v1.9/index.html index 9097c7be2a..ee5ea737b1 100644 --- a/docs/reference/generated/kubernetes-api/v1.9/index.html +++ b/docs/reference/generated/kubernetes-api/v1.9/index.html @@ -520,6 +520,10 @@ Appears In: +202
CronJob +Accepted + + 200
CronJob OK @@ -527,10 +531,6 @@ Appears In: 201
CronJob Created - -202
CronJob -Accepted -

Patch

@@ -712,13 +712,13 @@ Appears In: -200
CronJob -OK - - 201
CronJob Created + +200
CronJob +OK +

Delete

@@ -2235,10 +2235,6 @@ spec: -202
DaemonSet -Accepted - - 200
DaemonSet OK @@ -2246,6 +2242,10 @@ spec: 201
DaemonSet Created + +202
DaemonSet +Accepted +

Patch

@@ -3785,7 +3785,7 @@ Appears In: maxSurge -The maximum number of pods that can be scheduled above the desired number of pods. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up. Defaults to 25%. Example: when this is set to 30%, the new RC can be scaled up immediately when the rolling update starts, such that the total number of old and new pods do not exceed 130% of desired pods. Once old pods have been killed, new RC can be scaled up further, ensuring that total number of pods running at any time during the update is atmost 130% of desired pods. +The maximum number of pods that can be scheduled above the desired number of pods. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up. Defaults to 25%. Example: when this is set to 30%, the new RC can be scaled up immediately when the rolling update starts, such that the total number of old and new pods do not exceed 130% of desired pods. Once old pods have been killed, new RC can be scaled up further, ensuring that total number of pods running at any time during the update is at most 130% of desired pods. maxUnavailable @@ -3975,6 +3975,10 @@ spec: +200
Deployment +OK + + 201
Deployment Created @@ -3982,10 +3986,6 @@ spec: 202
Deployment Accepted - -200
Deployment -OK -

Patch

@@ -4342,13 +4342,13 @@ spec: -201
Deployment -Created - - 200
Deployment OK + +201
Deployment +Created +

Delete

@@ -6372,10 +6372,6 @@ spec: -202
Job -Accepted - - 200
Job OK @@ -6383,6 +6379,10 @@ spec: 201
Job Created + +202
Job +Accepted +

Patch

@@ -8016,7 +8016,8 @@ $ kubectl proxy name: pod-example spec: containers: - - image: ubuntu:trusty + - name: ubuntu + image: ubuntu:trusty command: ["echo"] args: ["Hello World"] @@ -8030,7 +8031,8 @@ $ kubectl proxy name: pod-example spec: containers: - - image: ubuntu:trusty + - name: ubuntu + image: ubuntu:trusty command: ["echo"] args: ["Hello World"] @@ -8387,10 +8389,6 @@ Appears In: -202
Pod -Accepted - - 200
Pod OK @@ -8398,6 +8396,10 @@ Appears In: 201
Pod Created + +202
Pod +Accepted +

Patch

@@ -8579,13 +8581,13 @@ Appears In: -201
Pod -Created - - 200
Pod OK + +201
Pod +Created +

Delete

@@ -9641,13 +9643,13 @@ Appears In: -200
Pod -OK - - 201
Pod Created + +200
Pod +OK +

Proxy Operations

@@ -11701,13 +11703,13 @@ Appears In: -201
ReplicaSet -Created - - 200
ReplicaSet OK + +201
ReplicaSet +Created +

Delete

@@ -12763,13 +12765,13 @@ Appears In: -201
ReplicaSet -Created - - 200
ReplicaSet OK + +201
ReplicaSet +Created +
@@ -14320,13 +14322,13 @@ Appears In: -201
ReplicationController -Created - - 200
ReplicationController OK + +201
ReplicationController +Created +
@@ -14609,10 +14611,6 @@ Appears In: -200
StatefulSet -OK - - 201
StatefulSet Created @@ -14620,6 +14618,10 @@ Appears In: 202
StatefulSet Accepted + +200
StatefulSet +OK +

Patch

@@ -16058,6 +16060,10 @@ Appears In: +202
Endpoints +Accepted + + 200
Endpoints OK @@ -16065,10 +16071,6 @@ Appears In: 201
Endpoints Created - -202
Endpoints -Accepted -

Patch

@@ -16250,13 +16252,13 @@ Appears In: -201
Endpoints -Created - - 200
Endpoints OK + +201
Endpoints +Created +

Delete

@@ -18528,13 +18530,13 @@ Appears In: -200
Ingress -OK - - 201
Ingress Created + +200
Ingress +OK +
@@ -19208,13 +19210,13 @@ service "deployment-example" replaced -201
Service -Created - - 200
Service OK + +201
Service +Created +

Delete

@@ -21789,10 +21791,6 @@ Appears In: -202
ConfigMap -Accepted - - 200
ConfigMap OK @@ -21800,6 +21798,10 @@ Appears In: 201
ConfigMap Created + +202
ConfigMap +Accepted +

Patch

@@ -22957,10 +22959,6 @@ Appears In: -201
Secret -Created - - 202
Secret Accepted @@ -22968,6 +22966,10 @@ Appears In: 200
Secret OK + +201
Secret +Created +

Patch

@@ -23149,13 +23151,13 @@ Appears In: -200
Secret -OK - - 201
Secret Created + +200
Secret +OK +

Delete

@@ -24397,13 +24399,13 @@ Appears In: -201
PersistentVolumeClaim -Created - - 200
PersistentVolumeClaim OK + +201
PersistentVolumeClaim +Created +

Delete

@@ -25641,6 +25643,10 @@ Appears In: +200
StorageClass +OK + + 201
StorageClass Created @@ -25648,10 +25654,6 @@ Appears In: 202
StorageClass Accepted - -200
StorageClass -OK -

Patch

@@ -26465,7 +26467,7 @@ Appears In: flexVolume
FlexVolumeSource -FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. This is an alpha feature and may change in future. +FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker
FlockerVolumeSource @@ -26759,10 +26761,6 @@ Appears In: -200
VolumeAttachment -OK - - 201
VolumeAttachment Created @@ -26770,6 +26768,10 @@ Appears In: 202
VolumeAttachment Accepted + +200
VolumeAttachment +OK +

Patch

@@ -27883,13 +27885,13 @@ Appears In: -200
ControllerRevision -OK - - 201
ControllerRevision Created + +200
ControllerRevision +OK +

Delete

@@ -28904,10 +28906,6 @@ Appears In: -202
CustomResourceDefinition -Accepted - - 200
CustomResourceDefinition OK @@ -28915,6 +28913,10 @@ Appears In: 201
CustomResourceDefinition Created + +202
CustomResourceDefinition +Accepted +

Patch

@@ -29740,13 +29742,13 @@ Appears In: -201
CustomResourceDefinition -Created - - 200
CustomResourceDefinition OK + +201
CustomResourceDefinition +Created +
@@ -29968,6 +29970,10 @@ Appears In: +202
Event +Accepted + + 200
Event OK @@ -29975,10 +29981,6 @@ Appears In: 201
Event Created - -202
Event -Accepted -

Patch

@@ -30160,13 +30162,13 @@ Appears In: -200
Event -OK - - 201
Event Created + +200
Event +OK +

Delete

@@ -31342,13 +31344,13 @@ Appears In: -201
LimitRange -Created - - 200
LimitRange OK + +201
LimitRange +Created +

Delete

@@ -32389,10 +32391,6 @@ Appears In: -200
HorizontalPodAutoscaler -OK - - 201
HorizontalPodAutoscaler Created @@ -32400,6 +32398,10 @@ Appears In: 202
HorizontalPodAutoscaler Accepted + +200
HorizontalPodAutoscaler +OK +

Patch

@@ -32581,13 +32583,13 @@ Appears In: -201
HorizontalPodAutoscaler -Created - - 200
HorizontalPodAutoscaler OK + +201
HorizontalPodAutoscaler +Created +

Delete

@@ -33643,13 +33645,13 @@ Appears In: -200
HorizontalPodAutoscaler -OK - - 201
HorizontalPodAutoscaler Created + +200
HorizontalPodAutoscaler +OK +
@@ -33801,10 +33803,6 @@ Appears In: -202
InitializerConfiguration -Accepted - - 200
InitializerConfiguration OK @@ -33812,6 +33810,10 @@ Appears In: 201
InitializerConfiguration Created + +202
InitializerConfiguration +Accepted +

Patch

@@ -34701,10 +34703,6 @@ Appears In: -202
MutatingWebhookConfiguration -Accepted - - 200
MutatingWebhookConfiguration OK @@ -34712,6 +34710,10 @@ Appears In: 201
MutatingWebhookConfiguration Created + +202
MutatingWebhookConfiguration +Accepted +

Patch

@@ -35601,10 +35603,6 @@ Appears In: -202
ValidatingWebhookConfiguration -Accepted - - 200
ValidatingWebhookConfiguration OK @@ -35612,6 +35610,10 @@ Appears In: 201
ValidatingWebhookConfiguration Created + +202
ValidatingWebhookConfiguration +Accepted +

Patch

@@ -36557,10 +36559,6 @@ Appears In: -200
PodTemplate -OK - - 201
PodTemplate Created @@ -36568,6 +36566,10 @@ Appears In: 202
PodTemplate Accepted + +200
PodTemplate +OK +

Patch

@@ -37793,10 +37795,6 @@ Appears In: -200
PodDisruptionBudget -OK - - 201
PodDisruptionBudget Created @@ -37804,6 +37802,10 @@ Appears In: 202
PodDisruptionBudget Accepted + +200
PodDisruptionBudget +OK +

Patch

@@ -41413,6 +41415,10 @@ Appears In: +202
PodSecurityPolicy +Accepted + + 200
PodSecurityPolicy OK @@ -41420,10 +41426,6 @@ Appears In: 201
PodSecurityPolicy Created - -202
PodSecurityPolicy -Accepted -

Patch

@@ -43363,10 +43365,6 @@ Appears In: -200
Binding -OK - - 201
Binding Created @@ -43374,6 +43372,10 @@ Appears In: 202
Binding Accepted + +200
Binding +OK +
@@ -43597,10 +43599,6 @@ Appears In: -201
CertificateSigningRequest -Created - - 202
CertificateSigningRequest Accepted @@ -43608,6 +43606,10 @@ Appears In: 200
CertificateSigningRequest OK + +201
CertificateSigningRequest +Created +

Patch

@@ -44598,6 +44600,10 @@ Appears In: +200
ClusterRole +OK + + 201
ClusterRole Created @@ -44605,10 +44611,6 @@ Appears In: 202
ClusterRole Accepted - -200
ClusterRole -OK -

Patch

@@ -45497,6 +45499,10 @@ Appears In: +200
ClusterRoleBinding +OK + + 201
ClusterRoleBinding Created @@ -45504,10 +45510,6 @@ Appears In: 202
ClusterRoleBinding Accepted - -200
ClusterRoleBinding -OK -

Patch

@@ -45681,13 +45683,13 @@ Appears In: -200
ClusterRoleBinding -OK - - 201
ClusterRoleBinding Created + +200
ClusterRoleBinding +OK +

Delete

@@ -46621,6 +46623,10 @@ Appears In: +200
LocalSubjectAccessReview +OK + + 201
LocalSubjectAccessReview Created @@ -46628,10 +46634,6 @@ Appears In: 202
LocalSubjectAccessReview Accepted - -200
LocalSubjectAccessReview -OK -
@@ -48000,6 +48002,10 @@ Appears In: +200
Node +OK + + 201
Node Created @@ -48007,10 +48013,6 @@ Appears In: 202
Node Accepted - -200
Node -OK -

Patch

@@ -50357,7 +50359,7 @@ Appears In: flexVolume
FlexVolumeSource -FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. This is an alpha feature and may change in future. +FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker
FlockerVolumeSource @@ -50744,13 +50746,13 @@ Appears In: -201
PersistentVolume -Created - - 200
PersistentVolume OK + +201
PersistentVolume +Created +

Delete

@@ -51786,6 +51788,10 @@ Appears In: +200
ResourceQuota +OK + + 201
ResourceQuota Created @@ -51793,10 +51799,6 @@ Appears In: 202
ResourceQuota Accepted - -200
ResourceQuota -OK -

Patch

@@ -53216,6 +53218,10 @@ Appears In: +202
Role +Accepted + + 200
Role OK @@ -53223,10 +53229,6 @@ Appears In: 201
Role Created - -202
Role -Accepted -

Patch

@@ -53408,13 +53410,13 @@ Appears In: -200
Role -OK - - 201
Role Created + +200
Role +OK +

Delete

@@ -55511,6 +55513,10 @@ Appears In: +200
SelfSubjectAccessReview +OK + + 201
SelfSubjectAccessReview Created @@ -55518,10 +55524,6 @@ Appears In: 202
SelfSubjectAccessReview Accepted - -200
SelfSubjectAccessReview -OK -
@@ -55668,6 +55670,10 @@ Appears In: +200
SelfSubjectRulesReview +OK + + 201
SelfSubjectRulesReview Created @@ -55675,10 +55681,6 @@ Appears In: 202
SelfSubjectRulesReview Accepted - -200
SelfSubjectRulesReview -OK -
@@ -56045,13 +56047,13 @@ Appears In: -200
ServiceAccount -OK - - 201
ServiceAccount Created + +200
ServiceAccount +OK +

Delete

@@ -57050,10 +57052,6 @@ Appears In: -202
SubjectAccessReview -Accepted - - 200
SubjectAccessReview OK @@ -57061,6 +57059,10 @@ Appears In: 201
SubjectAccessReview Created + +202
SubjectAccessReview +Accepted +
@@ -57451,6 +57453,10 @@ Appears In: +201
NetworkPolicy +Created + + 202
NetworkPolicy Accepted @@ -57458,10 +57464,6 @@ Appears In: 200
NetworkPolicy OK - -201
NetworkPolicy -Created -

Patch

@@ -57643,13 +57645,13 @@ Appears In: -200
NetworkPolicy -OK - - 201
NetworkPolicy Created + +200
NetworkPolicy +OK +

Delete

@@ -61247,7 +61249,7 @@ Appears In: -

FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. This is an alpha feature and may change in future.

+

FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin.