Merge pull request #26389 from kbhawkey/fixup-simply-usage
clean up use of word: simply
commit c24f62c16a
@ -45,7 +45,7 @@ kubectl apply -f https://k8s.io/examples/application/nginx/
|
|||
|
||||
`kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`.
|
||||
|
||||
It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, then you can then simply deploy all of the components of your stack en masse.
|
||||
It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, you can deploy all of the components of your stack together.
|
||||
|
||||
A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into github:
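For example (illustrative URL and path; substitute a manifest your repository actually hosts), you can apply a manifest straight from a raw file URL:

```shell
kubectl apply -f https://raw.githubusercontent.com/example-org/example-repo/main/nginx/nginx-deployment.yaml
```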
|
||||
|
||||
|
@ -265,7 +265,7 @@ For a more concrete example, check the [tutorial of deploying Ghost](https://git
|
|||
## Updating labels
|
||||
|
||||
Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`.
|
||||
For example, if you want to label all your nginx pods as frontend tier, simply run:
|
||||
For example, if you want to label all your nginx pods as frontend tier, run:
|
||||
|
||||
```shell
|
||||
kubectl label pods -l app=nginx tier=fe
|
||||
|
@ -411,7 +411,7 @@ and
|
|||
|
||||
## Disruptive updates
|
||||
|
||||
In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can simply modify your original configuration file:
|
||||
In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can modify your original configuration file:
|
||||
|
||||
```shell
|
||||
kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
|
||||
|
@ -448,7 +448,7 @@ kubectl scale deployment my-nginx --current-replicas=1 --replicas=3
|
|||
deployment.apps/my-nginx scaled
|
||||
```
|
||||
|
||||
To update to version 1.16.1, simply change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1`, with the kubectl commands we learned above.
|
||||
To update to version 1.16.1, change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1` using the previous kubectl commands.
|
||||
|
||||
```shell
|
||||
kubectl edit deployment/my-nginx
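# Illustrative alternative (not part of the original hunk): set the new image
# directly instead of opening an editor.
kubectl set image deployment/my-nginx nginx=nginx:1.16.1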
|
||||
|
|
|
@ -225,7 +225,7 @@ The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync
|
|||
However, the kubelet uses its local cache for getting the current value of the ConfigMap.
|
||||
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
|
||||
the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
|
||||
A ConfigMap can be either propagated by watch (default), ttl-based, or simply redirecting
|
||||
A ConfigMap can be either propagated by watch (default), ttl-based, or by redirecting
|
||||
all requests directly to the API server.
|
||||
As a result, the total delay from the moment when the ConfigMap is updated to the moment
|
||||
when new keys are projected to the Pod can be as long as the kubelet sync period + cache
|
||||
|
|
|
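As a sketch of where that knob lives (the surrounding values are illustrative; `Watch` is the documented default), a kubelet configuration file might set the detection strategy like this:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# One of Get, Cache, or Watch
configMapAndSecretChangeDetectionStrategy: Watch
syncFrequency: 1m
```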
@ -669,7 +669,7 @@ The kubelet checks whether the mounted secret is fresh on every periodic sync.
|
|||
However, the kubelet uses its local cache for getting the current value of the Secret.
|
||||
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
|
||||
the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
|
||||
A Secret can be either propagated by watch (default), ttl-based, or simply redirecting
|
||||
A Secret can be either propagated by watch (default), ttl-based, or by redirecting
|
||||
all requests directly to the API server.
|
||||
As a result, the total delay from the moment when the Secret is updated to the moment
|
||||
when new keys are projected to the Pod can be as long as the kubelet sync period + cache
|
||||
|
|
|
@ -31,7 +31,7 @@ Once a custom resource is installed, users can create and access its objects usi
|
|||
|
||||
## Custom controllers
|
||||
|
||||
On their own, custom resources simply let you store and retrieve structured data.
|
||||
On their own, custom resources let you store and retrieve structured data.
|
||||
When you combine a custom resource with a *custom controller*, custom resources
|
||||
provide a true _declarative API_.
|
||||
|
||||
|
@ -120,7 +120,7 @@ Kubernetes provides two ways to add custom resources to your cluster:
|
|||
|
||||
Kubernetes provides these two options to meet the needs of different users, so that neither ease of use nor flexibility is compromised.
|
||||
|
||||
Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, it simply appears that the Kubernetes API is extended.
|
||||
Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, the Kubernetes API appears extended.
|
||||
|
||||
CRDs allow users to create new types of resources without adding another API server. You do not need to understand API Aggregation to use CRDs.
|
||||
|
||||
|
|
|
@ -24,7 +24,7 @@ Network plugins in Kubernetes come in a few flavors:
|
|||
The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it finds, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as CRI manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins:
|
||||
|
||||
* `cni-bin-dir`: Kubelet probes this directory for plugins on startup
|
||||
* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is simply "cni".
|
||||
* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is `cni`.
|
||||
|
||||
## Network Plugin Requirements
|
||||
|
||||
|
|
|
@ -26,7 +26,7 @@ Fortunately, there is a cloud provider that offers message queuing as a managed
|
|||
|
||||
A cluster operator can setup Service Catalog and use it to communicate with the cloud provider's service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster.
|
||||
The application developer therefore does not need to be concerned with the implementation details or management of the message queue.
|
||||
The application can simply use it as a service.
|
||||
The application can access the message queue as a service.
|
||||
|
||||
## Architecture
|
||||
|
||||
|
|
|
@ -98,7 +98,7 @@ For both equality-based and set-based conditions there is no logical _OR_ (`||`)
|
|||
### _Equality-based_ requirement
|
||||
|
||||
_Equality-_ or _inequality-based_ requirements allow filtering by label keys and values. Matching objects must satisfy all of the specified label constraints, though they may have additional labels as well.
|
||||
Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ (and are simply synonyms), while the latter represents _inequality_. For example:
|
||||
Three kinds of operators are admitted: `=`, `==`, `!=`. The first two represent _equality_ (and are synonyms), while the latter represents _inequality_. For example:
|
||||
|
||||
```
|
||||
environment = production
|
||||
|
|
|
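As a usage sketch (the label values are hypothetical), equality-based requirements can be passed to `kubectl` with the `-l` flag:

```shell
# Select pods in the production environment that are not in the frontend tier
kubectl get pods -l environment=production,tier!=frontend
```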
@ -197,7 +197,7 @@ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n
|
|||
### Create a policy and a pod
|
||||
|
||||
Define the example PodSecurityPolicy object in a file. This is a policy that
|
||||
simply prevents the creation of privileged pods.
|
||||
prevents the creation of privileged pods.
|
||||
The name of a PodSecurityPolicy object must be a valid
|
||||
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
|
||||
|
||||
|
|
|
@ -261,7 +261,7 @@ for performance and security reasons, there are some constraints on topologyKey:
|
|||
and `preferredDuringSchedulingIgnoredDuringExecution`.
|
||||
2. For pod anti-affinity, empty `topologyKey` is also not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
|
||||
and `preferredDuringSchedulingIgnoredDuringExecution`.
|
||||
3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or simply disable it.
|
||||
3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or disable it.
|
||||
4. Except for the above cases, the `topologyKey` can be any legal label-key.
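The snippet below is a minimal sketch of such an anti-affinity rule (the `app: web` label is assumed for illustration); it keeps two matching Pods off the same node by using the `kubernetes.io/hostname` topology key:

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: web
      topologyKey: kubernetes.io/hostname
```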
|
||||
|
||||
In addition to `labelSelector` and `topologyKey`, you can optionally specify a list `namespaces`
|
||||
|
|
|
@ -107,7 +107,7 @@ value being calculated based on the cluster size. There is also a hardcoded
|
|||
minimum value of 50 nodes.
|
||||
|
||||
{{< note >}}In clusters with less than 50 feasible nodes, the scheduler still
|
||||
checks all the nodes, simply because there are not enough feasible nodes to stop
|
||||
checks all the nodes because there are not enough feasible nodes to stop
|
||||
the scheduler's search early.
|
||||
|
||||
In a small cluster, if you set a low value for `percentageOfNodesToScore`, your
|
||||
|
|
|
@ -25,9 +25,9 @@ assigned a DNS name. By default, a client Pod's DNS search list will
|
|||
include the Pod's own namespace and the cluster's default domain. This is best
|
||||
illustrated by example:
|
||||
|
||||
Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
|
||||
in namespace `bar` can look up this service by simply doing a DNS query for
|
||||
`foo`. A Pod running in namespace `quux` can look up this service by doing a
|
||||
Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
|
||||
in namespace `bar` can look up this service by querying a DNS service for
|
||||
`foo`. A Pod running in namespace `quux` can look up this service by doing a
|
||||
DNS query for `foo.bar`.
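As a quick check (the `dnsutils` Pod name is hypothetical; any Pod with `nslookup` available works), you can verify this resolution from inside a Pod:

```shell
# From a Pod in namespace "quux", the namespace-qualified name resolves
kubectl exec -i -t dnsutils -- nslookup foo.bar
```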
|
||||
|
||||
The following sections detail the supported record types and layout that is
|
||||
|
|
|
@ -430,7 +430,7 @@ Services by their DNS name.
|
|||
For example, if you have a Service called `my-service` in a Kubernetes
|
||||
namespace `my-ns`, the control plane and the DNS Service acting together
|
||||
create a DNS record for `my-service.my-ns`. Pods in the `my-ns` namespace
|
||||
should be able to find it by simply doing a name lookup for `my-service`
|
||||
should be able to find the service by doing a name lookup for `my-service`
|
||||
(`my-service.my-ns` would also work).
|
||||
|
||||
Pods in other namespaces must qualify the name as `my-service.my-ns`. These names
|
||||
|
@ -1163,7 +1163,7 @@ rule kicks in, and redirects the packets to the proxy's own port.
|
|||
The "Service proxy" chooses a backend, and starts proxying traffic from the client to the backend.
|
||||
|
||||
This means that Service owners can choose any port they want without risk of
|
||||
collision. Clients can simply connect to an IP and port, without being aware
|
||||
collision. Clients can connect to an IP and port, without being aware
|
||||
of which Pods they are actually accessing.
|
||||
|
||||
#### iptables
|
||||
|
|
|
@ -487,7 +487,7 @@ The following volume types support mount options:
|
|||
* VsphereVolume
|
||||
* iSCSI
|
||||
|
||||
Mount options are not validated, so mount will simply fail if one is invalid.
|
||||
Mount options are not validated. If a mount option is invalid, the mount fails.
|
||||
|
||||
In the past, the annotation `volume.beta.kubernetes.io/mount-options` was used instead
|
||||
of the `mountOptions` attribute. This annotation is still working; however,
|
||||
|
|
|
@ -149,7 +149,7 @@ mount options specified in the `mountOptions` field of the class.
|
|||
|
||||
If the volume plugin does not support mount options but mount options are
|
||||
specified, provisioning will fail. Mount options are not validated on either
|
||||
the class or PV, so mount of the PV will simply fail if one is invalid.
|
||||
the class or PV. If a mount option is invalid, the PV mount fails.
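A minimal sketch of a StorageClass that passes mount options through to dynamically provisioned PVs (the provisioner and the NFS options are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs
provisioner: example.com/external-nfs
mountOptions:
  - hard
  - nfsvers=4.1
```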
|
||||
|
||||
### Volume Binding Mode
|
||||
|
||||
|
|
|
@ -24,7 +24,7 @@ The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature add
|
|||
|
||||
A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a "new" empty Volume, the back end device creates an exact duplicate of the specified Volume.
|
||||
|
||||
The implementation of cloning, from the perspective of the Kubernetes API, simply adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use).
|
||||
The implementation of cloning, from the perspective of the Kubernetes API, adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use).
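A sketch of what that looks like in practice (names, storage class, and size are placeholders); the new PVC references the source PVC in `dataSource`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-of-existing-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-storageclass
  resources:
    requests:
      storage: 5Gi
  dataSource:
    kind: PersistentVolumeClaim
    name: existing-pvc
```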
|
||||
|
||||
Users need to be aware of the following when using this feature:
|
||||
|
||||
|
|
|
@ -47,7 +47,7 @@ In this example:
|
|||
* A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field.
|
||||
* The Deployment creates three replicated Pods, indicated by the `.spec.replicas` field.
|
||||
* The `.spec.selector` field defines how the Deployment finds which Pods to manage.
|
||||
In this case, you simply select a label that is defined in the Pod template (`app: nginx`).
|
||||
In this case, you select a label that is defined in the Pod template (`app: nginx`).
|
||||
However, more sophisticated selection rules are possible,
|
||||
as long as the Pod template itself satisfies the rule.
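The relevant excerpt of such a manifest looks roughly like this (a sketch showing only the selector and the matching Pod template labels):

```yaml
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
```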
|
||||
|
||||
|
@ -171,13 +171,15 @@ Follow the steps given below to update your Deployment:
|
|||
```shell
|
||||
kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
|
||||
```
|
||||
or simply use the following command:
|
||||
|
||||
|
||||
or use the following command:
|
||||
|
||||
```shell
|
||||
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
The output is similar to:
|
||||
|
||||
```
|
||||
deployment.apps/nginx-deployment image updated
|
||||
```
|
||||
|
@ -188,7 +190,8 @@ Follow the steps given below to update your Deployment:
|
|||
kubectl edit deployment.v1.apps/nginx-deployment
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
The output is similar to:
|
||||
|
||||
```
|
||||
deployment.apps/nginx-deployment edited
|
||||
```
|
||||
|
@ -200,10 +203,13 @@ Follow the steps given below to update your Deployment:
|
|||
```
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
```
|
||||
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
|
||||
```
|
||||
|
||||
or
|
||||
|
||||
```
|
||||
deployment "nginx-deployment" successfully rolled out
|
||||
```
|
||||
|
@ -212,10 +218,11 @@ Get more details on your updated Deployment:
|
|||
|
||||
* After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`.
|
||||
The output is similar to this:
|
||||
```
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 3/3 3 3 36s
|
||||
```
|
||||
|
||||
```ini
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 3/3 3 3 36s
|
||||
```
|
||||
|
||||
* Run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it
|
||||
up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.
|
||||
|
|
|
@ -180,16 +180,16 @@ delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). Kubectl wi
|
|||
for it to delete each pod before deleting the ReplicationController itself. If this kubectl
|
||||
command is interrupted, it can be restarted.
|
||||
|
||||
When using the REST API or go client library, you need to do the steps explicitly (scale replicas to
|
||||
When using the REST API or Go client library, you need to do the steps explicitly (scale replicas to
|
||||
0, wait for pod deletions, then delete the ReplicationController).
|
||||
|
||||
### Deleting just a ReplicationController
|
||||
### Deleting only a ReplicationController
|
||||
|
||||
You can delete a ReplicationController without affecting any of its pods.
|
||||
|
||||
Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete).
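For example (the ReplicationController name is hypothetical):

```shell
# Delete the ReplicationController but leave its pods running
kubectl delete rc my-nginx-rc --cascade=false
```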
|
||||
|
||||
When using the REST API or go client library, simply delete the ReplicationController object.
|
||||
When using the REST API or Go client library, you can delete the ReplicationController object.
|
||||
|
||||
Once the original is deleted, you can create a new ReplicationController to replace it. As long
|
||||
as the old and new `.spec.selector` are the same, then the new one will adopt the old pods.
|
||||
|
@ -240,7 +240,7 @@ Pods created by a ReplicationController are intended to be fungible and semantic
|
|||
|
||||
## Responsibilities of the ReplicationController
|
||||
|
||||
The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
|
||||
The ReplicationController ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
|
||||
|
||||
The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](https://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://issue.k8s.io/170)).
|
||||
|
||||
|
|
|
@ -52,7 +52,7 @@ Members can:
|
|||
|
||||
{{< note >}}
|
||||
Using `/lgtm` triggers automation. If you want to provide non-binding
|
||||
approval, simply commenting "LGTM" works too!
|
||||
approval, commenting "LGTM" works too!
|
||||
{{< /note >}}
|
||||
|
||||
- Use the `/hold` comment to block merging for a pull request
|
||||
|
|
|
@ -110,7 +110,7 @@ This admission controller allows all pods into the cluster. It is deprecated bec
|
|||
This admission controller modifies every new Pod to force the image pull policy to Always. This is useful in a
|
||||
multitenant cluster so that users can be assured that their private images can only be used by those
|
||||
who have the credentials to pull them. Without this admission controller, once an image has been pulled to a
|
||||
node, any pod from any user can use it simply by knowing the image's name (assuming the Pod is
|
||||
node, any pod from any user can use it by knowing the image's name (assuming the Pod is
|
||||
scheduled onto the right node), without any authorization check against the image. When this admission controller
|
||||
is enabled, images are always pulled prior to starting containers, which means valid credentials are
|
||||
required.
|
||||
|
|
|
@ -206,7 +206,7 @@ spec:
|
|||
|
||||
Service account bearer tokens are perfectly valid to use outside the cluster and
|
||||
can be used to create identities for long standing jobs that wish to talk to the
|
||||
Kubernetes API. To manually create a service account, simply use the `kubectl
|
||||
Kubernetes API. To manually create a service account, use the `kubectl
|
||||
create serviceaccount (NAME)` command. This creates a service account in the
|
||||
current namespace and an associated secret.
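For instance (the account name is illustrative):

```shell
kubectl create serviceaccount build-robot
```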
|
||||
|
||||
|
@ -420,12 +420,12 @@ users:
|
|||
refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq
|
||||
name: oidc
|
||||
```
|
||||
Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret` storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.
|
||||
|
||||
Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret`, storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.
|
||||
|
||||
##### Option 2 - Use the `--token` Option
|
||||
|
||||
The `kubectl` command lets you pass in a token using the `--token` option. Simply copy and paste the `id_token` into this option:
|
||||
The `kubectl` command lets you pass in a token using the `--token` option. Copy and paste the `id_token` into this option:
|
||||
|
||||
```bash
|
||||
kubectl --token=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0.f2As579n9VNoaKzoF-dOQGmXkFKf1FMyNV0-va_B63jn-_n9LGSCca_6IVMP8pO-Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl-TK_yF5akjSTHFZD-0gRzlevBDiH8Q79NAr-ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat-K1PaUk5-ujMBG7yYnr95xD-63n8CO8teGUAAEMx6zRjzfhnhbzX-ajwZLGwGUBT4WqjMs70-6a7_8gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1_I2ulrOVsYx01_yD35-rw get nodes
|
||||
|
|
|
@ -16,10 +16,10 @@ min-kubernetes-server-version: 1.16
|
|||
|
||||
## Introduction
|
||||
|
||||
Server Side Apply helps users and controllers manage their resources via
|
||||
declarative configurations. It allows them to create and/or modify their
|
||||
Server Side Apply helps users and controllers manage their resources through
|
||||
declarative configurations. Clients can create and modify their
|
||||
[objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/)
|
||||
declaratively, simply by sending their fully specified intent.
|
||||
declaratively by sending their fully specified intent.
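In CLI terms, a hedged example of sending that intent with Server Side Apply (the file name is a placeholder):

```shell
kubectl apply --server-side -f my-manifest.yaml
```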
|
||||
|
||||
A fully specified intent is a partial object that only includes the fields and
|
||||
values for which the user has an opinion. That intent either creates a new
|
||||
|
|
|
@ -434,7 +434,7 @@ Now remove the node:
|
|||
kubectl delete node <node name>
|
||||
```
|
||||
|
||||
If you wish to start over simply run `kubeadm init` or `kubeadm join` with the
|
||||
If you wish to start over, run `kubeadm init` or `kubeadm join` with the
|
||||
appropriate arguments.
|
||||
|
||||
### Clean up the control plane
|
||||
|
|
|
@ -547,7 +547,7 @@ Your main source of help for troubleshooting your Kubernetes cluster should star
|
|||
|
||||
1. After launching `start.ps1`, flanneld is stuck in "Waiting for the Network to be created"
|
||||
|
||||
There are numerous reports of this [issue which are being investigated](https://github.com/coreos/flannel/issues/1066); most likely it is a timing issue for when the management IP of the flannel network is set. A workaround is to simply relaunch start.ps1 or relaunch it manually as follows:
|
||||
There are numerous reports of this [issue](https://github.com/coreos/flannel/issues/1066); most likely it is a timing issue for when the management IP of the flannel network is set. A workaround is to relaunch start.ps1 or relaunch it manually as follows:
|
||||
|
||||
```powershell
|
||||
PS C:> [Environment]::SetEnvironmentVariable("NODE_NAME", "<Windows_Worker_Hostname>")
|
||||
|
|
|
@ -23,7 +23,7 @@ Windows applications constitute a large portion of the services and applications
|
|||
## Before you begin
|
||||
|
||||
* Create a Kubernetes cluster that includes a [master and a worker node running Windows Server](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes)
|
||||
* It is important to note that creating and deploying services and workloads on Kubernetes behaves in much the same way for Linux and Windows containers. [Kubectl commands](/docs/reference/kubectl/overview/) to interface with the cluster are identical. The example in the section below is provided simply to jumpstart your experience with Windows containers.
|
||||
* It is important to note that creating and deploying services and workloads on Kubernetes behaves in much the same way for Linux and Windows containers. [Kubectl commands](/docs/reference/kubectl/overview/) to interface with the cluster are identical. The example in the section below is provided to jumpstart your experience with Windows containers.
|
||||
|
||||
## Getting Started: Deploying a Windows container
|
||||
|
||||
|
|
|
@ -280,7 +280,7 @@ at `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-l
|
|||
|
||||
#### Manually constructing apiserver proxy URLs
|
||||
|
||||
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
|
||||
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
|
||||
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy`
|
||||
|
||||
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.
|
||||
|
|
|
@ -215,7 +215,7 @@ for i in ret.items:
|
|||
|
||||
#### Java client
|
||||
|
||||
* To install the [Java Client](https://github.com/kubernetes-client/java), simply execute :
|
||||
To install the [Java Client](https://github.com/kubernetes-client/java), run:
|
||||
|
||||
```shell
|
||||
# Clone java library
|
||||
|
|
|
@ -83,7 +83,7 @@ See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/ac
|
|||
|
||||
#### Manually constructing apiserver proxy URLs
|
||||
|
||||
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
|
||||
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
|
||||
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`[https:]service_name[:port_name]`*`/proxy`
|
||||
|
||||
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.
|
||||
|
|
|
@ -32,7 +32,7 @@ for example, it might provision storage that is too expensive. If this is the ca
|
|||
you can either change the default StorageClass or disable it completely to avoid
|
||||
dynamic provisioning of storage.
|
||||
|
||||
Simply deleting the default StorageClass may not work, as it may be re-created
|
||||
Deleting the default StorageClass may not work, as it may be re-created
|
||||
automatically by the addon manager running in your cluster. Please consult the docs for your installation
|
||||
for details about addon manager and how to disable individual addons.
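One way to stop it being the default (a sketch; assumes the default class is named `standard`) is to clear its default-class annotation:

```shell
kubectl patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
```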
|
||||
|
||||
|
|
|
@ -23,16 +23,10 @@ authenticated by the apiserver as a particular User Account (currently this is
|
|||
usually `admin`, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver.
|
||||
When they do, they are authenticated as a particular Service Account (for example, `default`).
|
||||
|
||||
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
## Use the Default Service Account to access the API server.
|
||||
|
@ -129,7 +123,7 @@ then you will see that a token has automatically been created and is referenced
|
|||
|
||||
You may use authorization plugins to [set permissions on service accounts](/docs/reference/access-authn-authz/rbac/#service-account-permissions).
|
||||
|
||||
To use a non-default service account, simply set the `spec.serviceAccountName`
|
||||
To use a non-default service account, set the `spec.serviceAccountName`
|
||||
field of a pod to the name of the service account you wish to use.
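A minimal sketch (the Pod, container, and service account names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  containers:
  - name: app
    image: nginx
```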
|
||||
|
||||
The service account has to exist at the time the pod is created, or it will be rejected.
|
||||
|
|
|
@ -12,16 +12,10 @@ What's Kompose? It's a conversion tool for all things compose (namely Docker Com
|
|||
|
||||
More information can be found on the Kompose website at [http://kompose.io](http://kompose.io).
|
||||
|
||||
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
## Install Kompose
|
||||
|
@ -49,7 +43,6 @@ sudo mv ./kompose /usr/local/bin/kompose
|
|||
|
||||
Alternatively, you can download the [tarball](https://github.com/kubernetes/kompose/releases).
|
||||
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="Build from source" %}}
|
||||
|
||||
|
@ -87,8 +80,8 @@ On macOS you can install latest release via [Homebrew](https://brew.sh):
|
|||
|
||||
```bash
|
||||
brew install kompose
|
||||
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
|
@ -97,95 +90,117 @@ brew install kompose
|
|||
In just a few steps, we'll take you from Docker Compose to Kubernetes. All
|
||||
you need is an existing `docker-compose.yml` file.
|
||||
|
||||
1. Go to the directory containing your `docker-compose.yml` file. If you don't
|
||||
have one, test using this one.
|
||||
1. Go to the directory containing your `docker-compose.yml` file. If you don't have one, test using this one.
|
||||
|
||||
```yaml
|
||||
version: "2"
|
||||
```yaml
|
||||
version: "2"
|
||||
|
||||
services:
|
||||
services:
|
||||
|
||||
redis-master:
|
||||
image: k8s.gcr.io/redis:e2e
|
||||
ports:
|
||||
- "6379"
|
||||
redis-master:
|
||||
image: k8s.gcr.io/redis:e2e
|
||||
ports:
|
||||
- "6379"
|
||||
|
||||
redis-slave:
|
||||
image: gcr.io/google_samples/gb-redisslave:v3
|
||||
ports:
|
||||
- "6379"
|
||||
environment:
|
||||
- GET_HOSTS_FROM=dns
|
||||
redis-slave:
|
||||
image: gcr.io/google_samples/gb-redisslave:v3
|
||||
ports:
|
||||
- "6379"
|
||||
environment:
|
||||
- GET_HOSTS_FROM=dns
|
||||
|
||||
frontend:
|
||||
image: gcr.io/google-samples/gb-frontend:v4
|
||||
ports:
|
||||
- "80:80"
|
||||
environment:
|
||||
- GET_HOSTS_FROM=dns
|
||||
labels:
|
||||
kompose.service.type: LoadBalancer
|
||||
```
|
||||
frontend:
|
||||
image: gcr.io/google-samples/gb-frontend:v4
|
||||
ports:
|
||||
- "80:80"
|
||||
environment:
|
||||
- GET_HOSTS_FROM=dns
|
||||
labels:
|
||||
kompose.service.type: LoadBalancer
|
||||
```
|
||||
|
||||
2. To convert the `docker-compose.yml` file to files that you can use with
|
||||
`kubectl`, run `kompose convert` and then `kubectl apply -f <output file>`.
|
||||
2. To convert the `docker-compose.yml` file to files that you can use with
|
||||
`kubectl`, run `kompose convert` and then `kubectl apply -f <output file>`.
|
||||
|
||||
```bash
|
||||
$ kompose convert
|
||||
INFO Kubernetes file "frontend-service.yaml" created
|
||||
INFO Kubernetes file "redis-master-service.yaml" created
|
||||
INFO Kubernetes file "redis-slave-service.yaml" created
|
||||
INFO Kubernetes file "frontend-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-master-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-slave-deployment.yaml" created
|
||||
```
|
||||
```bash
|
||||
kompose convert
|
||||
```
|
||||
|
||||
```bash
|
||||
$ kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
|
||||
service/frontend created
|
||||
service/redis-master created
|
||||
service/redis-slave created
|
||||
deployment.apps/frontend created
|
||||
deployment.apps/redis-master created
|
||||
deployment.apps/redis-slave created
|
||||
```
|
||||
The output is similar to:
|
||||
|
||||
Your deployments are running in Kubernetes.
|
||||
```none
|
||||
INFO Kubernetes file "frontend-service.yaml" created
|
||||
INFO Kubernetes file "frontend-service.yaml" created
|
||||
INFO Kubernetes file "frontend-service.yaml" created
|
||||
INFO Kubernetes file "redis-master-service.yaml" created
|
||||
INFO Kubernetes file "redis-master-service.yaml" created
|
||||
INFO Kubernetes file "redis-master-service.yaml" created
|
||||
INFO Kubernetes file "redis-slave-service.yaml" created
|
||||
INFO Kubernetes file "redis-slave-service.yaml" created
|
||||
INFO Kubernetes file "redis-slave-service.yaml" created
|
||||
INFO Kubernetes file "frontend-deployment.yaml" created
|
||||
INFO Kubernetes file "frontend-deployment.yaml" created
|
||||
INFO Kubernetes file "frontend-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-master-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-master-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-master-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-slave-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-slave-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-slave-deployment.yaml" created
|
||||
```
|
||||
|
||||
3. Access your application.
|
||||
```bash
|
||||
kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
|
||||
```
|
||||
|
||||
If you're already using `minikube` for your development process:
|
||||
The output is similar to:
|
||||
|
||||
```bash
|
||||
$ minikube service frontend
|
||||
```
|
||||
```none
|
||||
|
||||
service/frontend created
|
||||
service/redis-master created
|
||||
service/redis-slave created
|
||||
deployment.apps/frontend created
|
||||
deployment.apps/redis-master created
|
||||
deployment.apps/redis-slave created
|
||||
```
|
||||
|
||||
Otherwise, let's look up what IP your service is using!
|
||||
Your deployments are running in Kubernetes.
|
||||
|
||||
```sh
|
||||
$ kubectl describe svc frontend
|
||||
Name: frontend
|
||||
Namespace: default
|
||||
Labels: service=frontend
|
||||
Selector: service=frontend
|
||||
Type: LoadBalancer
|
||||
IP: 10.0.0.183
|
||||
LoadBalancer Ingress: 192.0.2.89
|
||||
Port: 80 80/TCP
|
||||
NodePort: 80 31144/TCP
|
||||
Endpoints: 172.17.0.4:80
|
||||
Session Affinity: None
|
||||
No events.
|
||||
3. Access your application.
|
||||
|
||||
```
|
||||
If you're already using `minikube` for your development process:
|
||||
|
||||
If you're using a cloud provider, your IP will be listed next to `LoadBalancer Ingress`.
|
||||
```bash
|
||||
minikube service frontend
|
||||
```
|
||||
|
||||
```sh
|
||||
$ curl http://192.0.2.89
|
||||
```
|
||||
Otherwise, let's look up what IP your service is using!
|
||||
|
||||
```sh
|
||||
kubectl describe svc frontend
|
||||
```
|
||||
|
||||
```none
|
||||
Name: frontend
|
||||
Namespace: default
|
||||
Labels: service=frontend
|
||||
Selector: service=frontend
|
||||
Type: LoadBalancer
|
||||
IP: 10.0.0.183
|
||||
LoadBalancer Ingress: 192.0.2.89
|
||||
Port: 80 80/TCP
|
||||
NodePort: 80 31144/TCP
|
||||
Endpoints: 172.17.0.4:80
|
||||
Session Affinity: None
|
||||
No events.
|
||||
```
|
||||
|
||||
If you're using a cloud provider, your IP will be listed next to `LoadBalancer Ingress`.
|
||||
|
||||
```sh
|
||||
curl http://192.0.2.89
|
||||
```
|
||||
|
||||
<!-- discussion -->
|
||||
|
||||
|
@ -205,15 +220,17 @@ you need is an existing `docker-compose.yml` file.
|
|||
Kompose has support for two providers: OpenShift and Kubernetes.
|
||||
You can choose a targeted provider using global option `--provider`. If no provider is specified, Kubernetes is set by default.
|
||||
|
||||
|
||||
## `kompose convert`
|
||||
|
||||
Kompose supports conversion of V1, V2, and V3 Docker Compose files into Kubernetes and OpenShift objects.
|
||||
|
||||
### Kubernetes
|
||||
### Kubernetes `kompose convert` example
|
||||
|
||||
```sh
|
||||
$ kompose --file docker-voting.yml convert
|
||||
```shell
|
||||
kompose --file docker-voting.yml convert
|
||||
```
|
||||
|
||||
```none
|
||||
WARN Unsupported key networks - ignoring
|
||||
WARN Unsupported key build - ignoring
|
||||
INFO Kubernetes file "worker-svc.yaml" created
|
||||
|
@ -226,16 +243,24 @@ INFO Kubernetes file "result-deployment.yaml" created
|
|||
INFO Kubernetes file "vote-deployment.yaml" created
|
||||
INFO Kubernetes file "worker-deployment.yaml" created
|
||||
INFO Kubernetes file "db-deployment.yaml" created
|
||||
```
|
||||
|
||||
$ ls
|
||||
```shell
|
||||
ls
|
||||
```
|
||||
|
||||
```none
|
||||
db-deployment.yaml docker-compose.yml docker-gitlab.yml redis-deployment.yaml result-deployment.yaml vote-deployment.yaml worker-deployment.yaml
|
||||
db-svc.yaml docker-voting.yml redis-svc.yaml result-svc.yaml vote-svc.yaml worker-svc.yaml
|
||||
```
|
||||
|
||||
You can also provide multiple docker-compose files at the same time:
|
||||
|
||||
```sh
|
||||
$ kompose -f docker-compose.yml -f docker-guestbook.yml convert
|
||||
```shell
|
||||
kompose -f docker-compose.yml -f docker-guestbook.yml convert
|
||||
```
|
||||
|
||||
```none
|
||||
INFO Kubernetes file "frontend-service.yaml" created
|
||||
INFO Kubernetes file "mlbparks-service.yaml" created
|
||||
INFO Kubernetes file "mongodb-service.yaml" created
|
||||
|
@ -247,8 +272,13 @@ INFO Kubernetes file "mongodb-deployment.yaml" created
|
|||
INFO Kubernetes file "mongodb-claim0-persistentvolumeclaim.yaml" created
|
||||
INFO Kubernetes file "redis-master-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-slave-deployment.yaml" created
|
||||
```
|
||||
|
||||
$ ls
|
||||
```shell
|
||||
ls
|
||||
```
|
||||
|
||||
```none
|
||||
mlbparks-deployment.yaml mongodb-service.yaml redis-slave-service.json mlbparks-service.yaml
|
||||
frontend-deployment.yaml mongodb-claim0-persistentvolumeclaim.yaml redis-master-service.yaml
|
||||
frontend-service.yaml mongodb-deployment.yaml redis-slave-deployment.yaml
|
||||
|
@ -257,10 +287,13 @@ redis-master-deployment.yaml
|
|||
|
||||
When multiple docker-compose files are provided, the configuration is merged. Any configuration that is common will be overridden by the subsequent file.
|
||||
|
||||
### OpenShift
|
||||
### OpenShift `kompose convert` example
|
||||
|
||||
```sh
|
||||
$ kompose --provider openshift --file docker-voting.yml convert
|
||||
kompose --provider openshift --file docker-voting.yml convert
|
||||
```
|
||||
|
||||
```none
|
||||
WARN [worker] Service cannot be created because of missing port.
|
||||
INFO OpenShift file "vote-service.yaml" created
|
||||
INFO OpenShift file "db-service.yaml" created
|
||||
|
@ -281,7 +314,10 @@ INFO OpenShift file "result-imagestream.yaml" created
|
|||
It also supports creating a BuildConfig for the build directive in a service. By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch for the build. You can specify a different source repo and branch using the ``--build-repo`` and ``--build-branch`` options respectively.
|
||||
|
||||
```sh
|
||||
$ kompose --provider openshift --file buildconfig/docker-compose.yml convert
|
||||
kompose --provider openshift --file buildconfig/docker-compose.yml convert
|
||||
```
|
||||
|
||||
```none
|
||||
WARN [foo] Service cannot be created because of missing port.
|
||||
INFO OpenShift Buildconfig using git@github.com:rtnpro/kompose.git::master as source.
|
||||
INFO OpenShift file "foo-deploymentconfig.yaml" created
|
||||
|
@ -297,23 +333,31 @@ If you are manually pushing the OpenShift artifacts using ``oc create -f``, you
|
|||
|
||||
Kompose supports a straightforward way to deploy your "composed" application to Kubernetes or OpenShift via `kompose up`.
|
||||
|
||||
### Kubernetes `kompose up` example
|
||||
|
||||
### Kubernetes
|
||||
```sh
|
||||
$ kompose --file ./examples/docker-guestbook.yml up
|
||||
```shell
|
||||
kompose --file ./examples/docker-guestbook.yml up
|
||||
```
|
||||
|
||||
```none
|
||||
We are going to create Kubernetes deployments and services for your Dockerized application.
|
||||
If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead.
|
||||
|
||||
INFO Successfully created service: redis-master
INFO Successfully created service: redis-slave
INFO Successfully created service: frontend
INFO Successfully created deployment: redis-master
INFO Successfully created deployment: redis-slave
INFO Successfully created deployment: frontend
|
||||
|
||||
Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods' for details.
|
||||
```
|
||||
|
||||
$ kubectl get deployment,svc,pods
|
||||
```shell
|
||||
kubectl get deployment,svc,pods
|
||||
```
|
||||
|
||||
```none
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
deployment.extensions/frontend 1 1 1 1 4m
|
||||
deployment.extensions/redis-master 1 1 1 1 4m
|
||||
|
@ -331,14 +375,19 @@ pod/redis-master-1432129712-63jn8 1/1 Running 0 4m
|
|||
pod/redis-slave-2504961300-nve7b 1/1 Running 0 4m
|
||||
```
|
||||
|
||||
**Note**:
|
||||
{{< note >}}
|
||||
|
||||
- You must have a running Kubernetes cluster with a pre-configured kubectl context.
|
||||
- Only deployments and services are generated and deployed to Kubernetes. If you need different kind of resources, use the `kompose convert` and `kubectl apply -f` commands instead.
|
||||
{{< /note >}}
|
||||
|
||||
### OpenShift
|
||||
```sh
|
||||
$ kompose --file ./examples/docker-guestbook.yml --provider openshift up
|
||||
### OpenShift `kompose up` example
|
||||
|
||||
```shell
|
||||
kompose --file ./examples/docker-guestbook.yml --provider openshift up
|
||||
```
|
||||
|
||||
```none
|
||||
We are going to create OpenShift DeploymentConfigs and Services for your Dockerized application.
|
||||
If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead.
|
||||
|
||||
|
@ -353,8 +402,13 @@ INFO Successfully created deployment: redis-master
|
|||
INFO Successfully created ImageStream: redis-master
|
||||
|
||||
Your application has been deployed to OpenShift. You can run 'oc get dc,svc,is' for details.
|
||||
```
|
||||
|
||||
$ oc get dc,svc,is
|
||||
```shell
|
||||
oc get dc,svc,is
|
||||
```
|
||||
|
||||
```none
|
||||
NAME REVISION DESIRED CURRENT TRIGGERED BY
|
||||
dc/frontend 0 1 0 config,image(frontend:v4)
|
||||
dc/redis-master 0 1 0 config,image(redis-master:e2e)
|
||||
|
@ -369,16 +423,16 @@ is/redis-master 172.30.12.200:5000/fff/redis-master
|
|||
is/redis-slave 172.30.12.200:5000/fff/redis-slave v1
|
||||
```
|
||||
|
||||
**Note**:
|
||||
|
||||
- You must have a running OpenShift cluster with a pre-configured `oc` context (`oc login`)
|
||||
{{< note >}}
|
||||
You must have a running OpenShift cluster with a pre-configured `oc` context (`oc login`).
|
||||
{{< /note >}}
|
||||
|
||||
## `kompose down`
|
||||
|
||||
Once you have deployed "composed" application to Kubernetes, `$ kompose down` will help you to take the application out by deleting its deployments and services. If you need to remove other resources, use the 'kubectl' command.
|
||||
Once you have deployed "composed" application to Kubernetes, `kompose down` will help you to take the application out by deleting its deployments and services. If you need to remove other resources, use the 'kubectl' command.
|
||||
|
||||
```sh
|
||||
$ kompose --file docker-guestbook.yml down
|
||||
```shell
|
||||
kompose --file docker-guestbook.yml down
|
||||
INFO Successfully deleted service: redis-master
|
||||
INFO Successfully deleted deployment: redis-master
|
||||
INFO Successfully deleted service: redis-slave
|
||||
|
@ -387,16 +441,16 @@ INFO Successfully deleted service: frontend
|
|||
INFO Successfully deleted deployment: frontend
|
||||
```
|
||||
|
||||
**Note**:
|
||||
|
||||
- You must have a running Kubernetes cluster with a pre-configured kubectl context.
|
||||
{{< note >}}
|
||||
You must have a running Kubernetes cluster with a pre-configured `kubectl` context.
|
||||
{{< /note >}}
|
||||
|
||||
## Build and Push Docker Images
|
||||
|
||||
Kompose supports both building and pushing Docker images. When using the `build` key within your Docker Compose file, your image will:
|
||||
|
||||
- Automatically be built with Docker using the `image` key specified within your file
|
||||
- Be pushed to the correct Docker repository using local credentials (located at `.docker/config`)
|
||||
- Automatically be built with Docker using the `image` key specified within your file
|
||||
- Be pushed to the correct Docker repository using local credentials (located at `.docker/config`)
|
||||
|
||||
Using an [example Docker Compose file](https://raw.githubusercontent.com/kubernetes/kompose/master/examples/buildconfig/docker-compose.yml):
|
||||
|
||||
|
@ -412,7 +466,7 @@ services:
|
|||
Using `kompose up` with a `build` key:
|
||||
|
||||
```none
|
||||
$ kompose up
|
||||
kompose up
|
||||
INFO Build key detected. Attempting to build and push image 'docker.io/foo/bar'
|
||||
INFO Building image 'docker.io/foo/bar' from directory 'build'
|
||||
INFO Image 'docker.io/foo/bar' from directory 'build' built successfully
|
||||
|
@ -432,10 +486,10 @@ In order to disable the functionality, or choose to use BuildConfig generation (
|
|||
|
||||
```sh
|
||||
# Disable building/pushing Docker images
|
||||
$ kompose up --build none
|
||||
kompose up --build none
|
||||
|
||||
# Generate Build Config artifacts for OpenShift
|
||||
$ kompose up --provider openshift --build build-config
|
||||
kompose up --provider openshift --build build-config
|
||||
```
|
||||
|
||||
## Alternative Conversions
|
||||
|
@ -443,45 +497,54 @@ $ kompose up --provider openshift --build build-config
|
|||
The default `kompose` transformation will generate Kubernetes [Deployments](/docs/concepts/workloads/controllers/deployment/) and [Services](/docs/concepts/services-networking/service/) in YAML format. You also have the option to generate JSON with `-j`. Alternatively, you can generate [Replication Controller](/docs/concepts/workloads/controllers/replicationcontroller/) objects, [Daemon Sets](/docs/concepts/workloads/controllers/daemonset/), or [Helm](https://github.com/helm/helm) charts.
|
||||
|
||||
```sh
|
||||
$ kompose convert -j
|
||||
kompose convert -j
|
||||
INFO Kubernetes file "redis-svc.json" created
|
||||
INFO Kubernetes file "web-svc.json" created
|
||||
INFO Kubernetes file "redis-deployment.json" created
|
||||
INFO Kubernetes file "web-deployment.json" created
|
||||
```
|
||||
|
||||
The `*-deployment.json` files contain the Deployment objects.
|
||||
|
||||
```sh
|
||||
$ kompose convert --replication-controller
|
||||
kompose convert --replication-controller
|
||||
INFO Kubernetes file "redis-svc.yaml" created
|
||||
INFO Kubernetes file "web-svc.yaml" created
|
||||
INFO Kubernetes file "redis-replicationcontroller.yaml" created
|
||||
INFO Kubernetes file "web-replicationcontroller.yaml" created
|
||||
```
|
||||
|
||||
The `*-replicationcontroller.yaml` files contain the Replication Controller objects. If you want to specify replicas (default is 1), use `--replicas` flag: `$ kompose convert --replication-controller --replicas 3`
|
||||
The `*-replicationcontroller.yaml` files contain the Replication Controller objects. If you want to specify replicas (default is 1), use `--replicas` flag: `kompose convert --replication-controller --replicas 3`
|
||||
|
||||
```sh
|
||||
$ kompose convert --daemon-set
|
||||
```shell
|
||||
kompose convert --daemon-set
|
||||
INFO Kubernetes file "redis-svc.yaml" created
|
||||
INFO Kubernetes file "web-svc.yaml" created
|
||||
INFO Kubernetes file "redis-daemonset.yaml" created
|
||||
INFO Kubernetes file "web-daemonset.yaml" created
|
||||
```
|
||||
|
||||
The `*-daemonset.yaml` files contain the Daemon Set objects
|
||||
The `*-daemonset.yaml` files contain the DaemonSet objects
|
||||
|
||||
If you want to generate a Chart to be used with [Helm](https://github.com/kubernetes/helm) simply do:
|
||||
If you want to generate a Chart to be used with [Helm](https://github.com/kubernetes/helm) run:
|
||||
|
||||
```sh
|
||||
$ kompose convert -c
|
||||
```shell
|
||||
kompose convert -c
|
||||
```
|
||||
|
||||
```none
|
||||
INFO Kubernetes file "web-svc.yaml" created
|
||||
INFO Kubernetes file "redis-svc.yaml" created
|
||||
INFO Kubernetes file "web-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-deployment.yaml" created
|
||||
chart created in "./docker-compose/"
|
||||
```
|
||||
|
||||
$ tree docker-compose/
|
||||
```shell
|
||||
tree docker-compose/
|
||||
```
|
||||
|
||||
```none
|
||||
docker-compose
|
||||
├── Chart.yaml
|
||||
├── README.md
|
||||
|
@ -562,7 +625,7 @@ If you want to create normal pods without controllers you can use `restart` cons
|
|||
| `no` | Pod | `Never` |
|
||||
|
||||
{{< note >}}
|
||||
The controller object could be `deployment` or `replicationcontroller`, etc.
|
||||
The controller object could be `deployment` or `replicationcontroller`.
|
||||
{{< /note >}}
|
||||
|
||||
For example, the `pival` service below will become a pod. This container calculates the value of `pi`.
|
||||
|
@ -577,7 +640,7 @@ services:
|
|||
restart: "on-failure"
|
||||
```
|
||||
|
||||
### Warning about Deployment Config's
|
||||
### Warning about Deployment Configurations
|
||||
|
||||
If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to "Recreate" instead of "RollingUpdate" (default). This is done to prevent multiple instances of a service from accessing a volume at the same time.
|
||||
|
||||
|
@ -590,5 +653,3 @@ Please note that changing service name might break some `docker-compose` files.
|
|||
Kompose supports Docker Compose versions: 1, 2 and 3. We have limited support for versions 2.1 and 3.2 due to their experimental nature.
|
||||
|
||||
A full list of compatibility between all three versions is available in our [conversion document](https://github.com/kubernetes/kompose/blob/master/docs/conversion.md), including a list of all incompatible Docker Compose keys.
|
||||
|
||||
|
||||
|
|
|
@ -111,7 +111,7 @@ kubectl get pods -l app=hostnames \
|
|||
10.244.0.7
|
||||
```
|
||||
|
||||
The example container used for this walk-through simply serves its own hostname
|
||||
The example container used for this walk-through serves its own hostname
|
||||
via HTTP on port 9376, but if you are debugging your own app, you'll want to
|
||||
use whatever port number your Pods are listening on.
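As a quick sanity check from another Pod or a node (using one of the Pod IPs listed above; yours will differ):

```shell
# Should return the pod's hostname
wget -qO- 10.244.0.7:9376
```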
|
||||
|
||||
|
|
|
@ -12,20 +12,15 @@ content_type: task
|
|||
This guide demonstrates how to install and write extensions for [kubectl](/docs/reference/kubectl/kubectl/). By thinking of core `kubectl` commands as essential building blocks for interacting with a Kubernetes cluster, a cluster administrator can think
|
||||
of plugins as a means of utilizing these building blocks to create more complex behavior. Plugins extend `kubectl` with new sub-commands, allowing for new and custom features not included in the main distribution of `kubectl`.
|
||||
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
You need to have a working `kubectl` binary installed.
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
## Installing kubectl plugins
|
||||
|
||||
A plugin is nothing more than a standalone executable file, whose name begins with `kubectl-`. To install a plugin, simply move its executable file to anywhere on your `PATH`.
|
||||
A plugin is a standalone executable file, whose name begins with `kubectl-`. To install a plugin, move its executable file to anywhere on your `PATH`.
|
||||
|
||||
You can also discover and install kubectl plugins available in the open source
|
||||
using [Krew](https://krew.dev/). Krew is a plugin manager maintained by
|
||||
|
@ -60,9 +55,9 @@ You can write a plugin in any programming language or script that allows you to
|
|||
|
||||
There is no plugin installation or pre-loading required. Plugin executables receive
|
||||
the inherited environment from the `kubectl` binary.
|
||||
A plugin determines which command path it wishes to implement based on its name. For
|
||||
example, a plugin wanting to provide a new command `kubectl foo`, would simply be named
|
||||
`kubectl-foo`, and live somewhere in your `PATH`.
|
||||
A plugin determines which command path it wishes to implement based on its name.
|
||||
For example, a plugin named `kubectl-foo` provides a command `kubectl foo`. You must
|
||||
install the plugin executable somewhere in your `PATH`.
|
||||
|
||||
### Example plugin
|
||||
|
||||
|
@ -88,32 +83,34 @@ echo "I am a plugin named kubectl-foo"

### Using a plugin

To use the above plugin, simply make it executable:
To use a plugin, make the plugin executable:

```
```shell
sudo chmod +x ./kubectl-foo
```

and place it anywhere in your `PATH`:

```
```shell
sudo mv ./kubectl-foo /usr/local/bin
```

You may now invoke your plugin as a `kubectl` command:

```
```shell
kubectl foo
```

```
I am a plugin named kubectl-foo
```

All args and flags are passed as-is to the executable:

```
```shell
kubectl foo version
```

```
1.0.0
```

@ -124,6 +121,7 @@ All environment variables are also passed as-is to the executable:

export KUBECONFIG=~/.kube/config
kubectl foo config
```

```
/home/<user>/.kube/config
```

@ -131,6 +129,7 @@ kubectl foo config

```shell
KUBECONFIG=/etc/kube/config kubectl foo config
```

```
/etc/kube/config
```

@ -376,16 +375,11 @@ set up a build environment (if it needs compiling), and deploy the plugin.

If you also make compiled packages available, or use Krew, that will make
installs easier.

## {{% heading "whatsnext" %}}

* Check the Sample CLI Plugin repository for a
  [detailed example](https://github.com/kubernetes/sample-cli-plugin) of a
  plugin written in Go.
  In case of any questions, feel free to reach out to the
  [SIG CLI team](https://github.com/kubernetes/community/tree/master/sig-cli).
* Read about [Krew](https://krew.dev/), a package manager for kubectl plugins.

@ -12,7 +12,7 @@ based on a common template. You can use this approach to process batches of work

parallel.

For this example there are only three items: _apple_, _banana_, and _cherry_.
The sample Jobs process each item simply by printing a string then pausing.
The sample Jobs process each item by printing a string then pausing.

See [using Jobs in real workloads](#using-jobs-in-real-workloads) to learn about how
this pattern fits more realistic use cases.
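
The template expansion itself sits outside this hunk; a sketch of how the per-item Jobs might be generated from a common template (the file name and `$ITEM` placeholder are assumptions for illustration):

```shell
# Render one Job manifest per item from a template containing a $ITEM placeholder.
mkdir -p ./jobs
for item in apple banana cherry; do
  sed "s/\$ITEM/${item}/" job-tmpl.yaml > "./jobs/job-${item}.yaml"
done

# Create all of the generated Jobs at once.
kubectl create -f ./jobs
```
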
@ -66,7 +66,7 @@ Use caution when deleting a PVC, as it may lead to data loss.

### Complete deletion of a StatefulSet

To simply delete everything in a StatefulSet, including the associated pods, you can run a series of commands similar to the following:
To delete everything in a StatefulSet, including the associated pods, you can run a series of commands similar to the following:

```shell
grace=$(kubectl get pods <stateful-set-pod> --template '{{.spec.terminationGracePeriodSeconds}}')

@ -171,7 +171,7 @@ properties.

The script in the `init-mysql` container also applies either `primary.cnf` or
`replica.cnf` from the ConfigMap by copying the contents into `conf.d`.
Because the example topology consists of a single primary MySQL server and any number of
replicas, the script simply assigns ordinal `0` to be the primary server, and everyone
replicas, the script assigns ordinal `0` to be the primary server, and everyone
else to be replicas.
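
A sketch of that selection logic (assuming the Pod hostname ends in its ordinal and these mount paths; the actual script lives in the tutorial's StatefulSet manifest and may differ in detail):

```shell
# Extract the ordinal from a hostname such as mysql-0, mysql-1, ...
[[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}

# Ordinal 0 becomes the primary; every other ordinal becomes a replica.
if [[ $ordinal -eq 0 ]]; then
  cp /mnt/config-map/primary.cnf /mnt/conf.d/
else
  cp /mnt/config-map/replica.cnf /mnt/conf.d/
fi
```
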
Combined with the StatefulSet controller's
[deployment order guarantee](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees/),

@ -168,8 +168,7 @@ k8s-apparmor-example-deny-write (enforce)

*This example assumes you have already set up a cluster with AppArmor support.*

First, we need to load the profile we want to use onto our nodes. The profile we'll use simply
denies all file writes:
First, we need to load the profile we want to use onto our nodes. This profile denies all file writes:

```shell
#include <tunables/global>

@ -934,10 +934,10 @@ web-2 0/1 Terminating 0 3m

When the `web` StatefulSet was recreated, it first relaunched `web-0`.
Since `web-1` was already Running and Ready, when `web-0` transitioned to
Running and Ready, it simply adopted this Pod. Since you recreated the StatefulSet
with `replicas` equal to 2, once `web-0` had been recreated, and once
`web-1` had been determined to already be Running and Ready, `web-2` was
terminated.
Running and Ready, it adopted this Pod. Since you recreated the StatefulSet
with `replicas` equal to 2, once `web-0` had been recreated, and once
`web-1` had been determined to already be Running and Ready, `web-2` was
terminated.

Let's take another look at the contents of the `index.html` file served by the
Pods' webservers:

@ -945,6 +945,7 @@ Pods' webservers:

```shell
for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done
```

```
web-0
web-1

@ -970,15 +971,18 @@ In another terminal, delete the StatefulSet again. This time, omit the

```shell
kubectl delete statefulset web
```

```
statefulset.apps "web" deleted
```

Examine the output of the `kubectl get` command running in the first terminal,
and wait for all of the Pods to transition to Terminating.

```shell
kubectl get pods -w -l app=nginx
```

```
NAME      READY   STATUS    RESTARTS   AGE
web-0     1/1     Running   0          11m

@ -1006,10 +1010,10 @@ the cascade does not delete the headless Service associated with the StatefulSet

You must delete the `nginx` Service manually.
{{< /note >}}

```shell
kubectl delete service nginx
```

```
service "nginx" deleted
```

@ -1019,6 +1023,7 @@ Recreate the StatefulSet and headless Service one more time:

```shell
kubectl apply -f web.yaml
```

```
service/nginx created
statefulset.apps/web created

@ -1030,6 +1035,7 @@ the contents of their `index.html` files:

```shell
for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done
```

```
web-0
web-1

@ -1044,13 +1050,17 @@ Finally, delete the `nginx` Service...

```shell
kubectl delete service nginx
```

```
service "nginx" deleted
```

...and the `web` StatefulSet:

```shell
kubectl delete statefulset web
```

```
statefulset "web" deleted
```

@ -153,7 +153,7 @@ The `mongo` Services you applied is only accessible within the Kubernetes cluste

If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the Kubernetes cluster. However, as a Kubernetes user, you can use `kubectl port-forward` to access the service even though it uses a `ClusterIP`.
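
A sketch of that approach (assuming the frontend Service is named `frontend` and exposes port 80):

```shell
# Forward a local port to the frontend Service inside the cluster.
kubectl port-forward svc/frontend 8080:80

# The guestbook is then reachable from your machine at http://localhost:8080
```
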
{{< note >}}
Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, simply uncomment `type: LoadBalancer`.
Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, uncomment `type: LoadBalancer`.
{{< /note >}}

{{< codenew file="application/guestbook/frontend-service.yaml" >}}