Replace {{< codenew ... >}} with {{% codenew ... %}} in all English docs (#42180)

* Replaced {{< codenew ... >}} with {{% codenew ... %}} in all files

* Reverted changes in non-english localizations
pull/42205/head
Andrey Goran 2023-07-25 16:54:06 +04:00 committed by GitHub
parent d6008391d9
commit eb522c126f
109 changed files with 261 additions and 267 deletions

View File

@ -470,7 +470,7 @@ traffic, you can configure rules to block any health check requests
that originate from outside your cluster.
{{< /caution >}}
{{< codenew file="priority-and-fairness/health-for-strangers.yaml" >}}
{{% codenew file="priority-and-fairness/health-for-strangers.yaml" %}}
## Diagnostics

View File

@ -39,7 +39,7 @@ Kubernetes captures logs from each container in a running Pod.
This example uses a manifest for a `Pod` with a container
that writes text to the standard output stream, once per second.
{{< codenew file="debug/counter-pod.yaml" >}}
{{% codenew file="debug/counter-pod.yaml" %}}
To run this pod, use the following command:
@ -255,7 +255,7 @@ For example, a pod runs a single container, and the container
writes to two different log files using two different formats. Here's a
manifest for the Pod:
{{< codenew file="admin/logging/two-files-counter-pod.yaml" >}}
{{% codenew file="admin/logging/two-files-counter-pod.yaml" %}}
It is not recommended to write log entries with different formats to the same log
stream, even if you managed to redirect both components to the `stdout` stream of
@ -265,7 +265,7 @@ the logs to its own `stdout` stream.
Here's a manifest for a pod that has two sidecar containers:
{{< codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" >}}
{{% codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" %}}
Now when you run this pod, you can access each log stream separately by
running the following commands:
@ -332,7 +332,7 @@ Here are two example manifests that you can use to implement a sidecar container
The first manifest contains a [`ConfigMap`](/docs/tasks/configure-pod-container/configure-pod-configmap/)
to configure fluentd.
{{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}}
{{% codenew file="admin/logging/fluentd-sidecar-config.yaml" %}}
{{< note >}}
In the sample configurations, you can replace fluentd with any logging agent, reading
@ -342,7 +342,7 @@ from any source inside an application container.
The second manifest describes a pod that has a sidecar container running fluentd.
The pod mounts a volume where fluentd can pick up its configuration data.
{{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}}
{{% codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" %}}
### Exposing logs directly from the application

View File

@ -22,7 +22,7 @@ Many applications require multiple resources to be created, such as a Deployment
Management of multiple resources can be simplified by grouping them together in the same file
(separated by `---` in YAML). For example:
{{< codenew file="application/nginx-app.yaml" >}}
{{% codenew file="application/nginx-app.yaml" %}}
Multiple resources can be created the same way as a single resource:

View File

@ -111,7 +111,7 @@ technique also lets you access a ConfigMap in a different namespace.
Here's an example Pod that uses values from `game-demo` to configure a Pod:
{{< codenew file="configmap/configure-pod.yaml" >}}
{{% codenew file="configmap/configure-pod.yaml" %}}
A ConfigMap doesn't differentiate between single line property values and
multi-line file-like values.

View File

@ -77,7 +77,7 @@ request.
Here's an example `.yaml` file that shows the required fields and object spec for a Kubernetes Deployment:
{{< codenew file="application/deployment.yaml" >}}
{{% codenew file="application/deployment.yaml" %}}
One way to create a Deployment using a `.yaml` file like the one above is to use the
[`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply) command

View File

@ -54,12 +54,12 @@ A `LimitRange` does **not** check the consistency of the default values it appli
For example, you define a `LimitRange` with this manifest:
{{< codenew file="concepts/policy/limit-range/problematic-limit-range.yaml" >}}
{{% codenew file="concepts/policy/limit-range/problematic-limit-range.yaml" %}}
along with a Pod that declares a CPU resource request of `700m`, but not a limit:
{{< codenew file="concepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml" >}}
{{% codenew file="concepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml" %}}
then that Pod will not be scheduled, failing with an error similar to:
@ -69,7 +69,7 @@ Pod "example-conflict-with-limitrange-cpu" is invalid: spec.containers[0].resour
If you set both `request` and `limit`, then that new Pod will be scheduled successfully even with the same `LimitRange` in place:
{{< codenew file="concepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml" >}}
{{% codenew file="concepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml" %}}
## Example resource constraints

View File

@ -687,7 +687,7 @@ plugins:
Then, create a resource quota object in the `kube-system` namespace:
{{< codenew file="policy/priority-class-resourcequota.yaml" >}}
{{% codenew file="policy/priority-class-resourcequota.yaml" %}}
```shell
kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system

View File

@ -122,7 +122,7 @@ your Pod spec.
For example, consider the following Pod spec:
{{< codenew file="pods/pod-with-node-affinity.yaml" >}}
{{% codenew file="pods/pod-with-node-affinity.yaml" %}}
In this example, the following rules apply:
@ -172,7 +172,7 @@ scheduling decision for the Pod.
For example, consider the following Pod spec:
{{< codenew file="pods/pod-with-affinity-anti-affinity.yaml" >}}
{{% codenew file="pods/pod-with-affinity-anti-affinity.yaml" %}}
If there are two possible nodes that match the
`preferredDuringSchedulingIgnoredDuringExecution` rule, one with the
@ -288,7 +288,7 @@ spec.
Consider the following Pod spec:
{{< codenew file="pods/pod-with-pod-affinity.yaml" >}}
{{% codenew file="pods/pod-with-pod-affinity.yaml" %}}
This example defines one Pod affinity rule and one Pod anti-affinity rule. The
Pod affinity rule uses the "hard"

View File

@ -31,7 +31,7 @@ each schedulingGate can be removed in arbitrary order, but addition of a new sch
To mark a Pod not-ready for scheduling, you can create it with one or more scheduling gates like this:
{{< codenew file="pods/pod-with-scheduling-gates.yaml" >}}
{{% codenew file="pods/pod-with-scheduling-gates.yaml" %}}
After the Pod's creation, you can check its state using:
@ -61,7 +61,7 @@ The output is:
To inform the scheduler that this Pod is ready for scheduling, you can remove its `schedulingGates` entirely
by re-applying a modified manifest:
{{< codenew file="pods/pod-without-scheduling-gates.yaml" >}}
{{% codenew file="pods/pod-without-scheduling-gates.yaml" %}}
You can check if the `schedulingGates` field is cleared by running:

View File

@ -64,7 +64,7 @@ tolerations:
Here's an example of a pod that uses tolerations:
{{< codenew file="pods/pod-with-toleration.yaml" >}}
{{% codenew file="pods/pod-with-toleration.yaml" %}}
The default value for `operator` is `Equal`.

View File

@ -284,7 +284,7 @@ graph BT
If you want an incoming Pod to be evenly spread with existing Pods across zones, you
can use a manifest similar to:
{{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}}
{{% codenew file="pods/topology-spread-constraints/one-constraint.yaml" %}}
From that manifest, `topologyKey: zone` implies the even distribution will only be applied
to nodes that are labelled `zone: <any value>` (nodes that don't have a `zone` label
@ -377,7 +377,7 @@ graph BT
You can combine two topology spread constraints to control the spread of Pods both
by node and by zone:
{{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}}
{{% codenew file="pods/topology-spread-constraints/two-constraints.yaml" %}}
In this case, to match the first constraint, the incoming Pod can only be placed onto
nodes in zone `B`; while in terms of the second constraint, the incoming Pod can only be
@ -466,7 +466,7 @@ and you know that zone `C` must be excluded. In this case, you can compose a man
as below, so that Pod `mypod` will be placed into zone `B` instead of zone `C`.
Similarly, Kubernetes also respects `spec.nodeSelector`.
{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
{{% codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" %}}
## Implicit conventions

View File

@ -300,7 +300,7 @@ Below are the properties a user can specify in the `dnsConfig` field:
The following is an example Pod with custom DNS settings:
{{< codenew file="service/networking/custom-dns.yaml" >}}
{{% codenew file="service/networking/custom-dns.yaml" %}}
When the Pod above is created, the container `test` gets the following contents
in its `/etc/resolv.conf` file:

View File

@ -136,7 +136,7 @@ These examples demonstrate the behavior of various dual-stack Service configurat
[headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors
will behave in this same way.)
{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
{{% codenew file="service/networking/dual-stack-default-svc.yaml" %}}
1. This Service specification explicitly defines `PreferDualStack` in `.spec.ipFamilyPolicy`. When
you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6
@ -152,14 +152,14 @@ These examples demonstrate the behavior of various dual-stack Service configurat
* On a cluster with dual-stack enabled, specifying `RequireDualStack` in `.spec.ipFamilyPolicy`
behaves the same as `PreferDualStack`.
{{< codenew file="service/networking/dual-stack-preferred-svc.yaml" >}}
{{% codenew file="service/networking/dual-stack-preferred-svc.yaml" %}}
1. This Service specification explicitly defines `IPv6` and `IPv4` in `.spec.ipFamilies` as well
as defining `PreferDualStack` in `.spec.ipFamilyPolicy`. When Kubernetes assigns an IPv6 and
IPv4 address in `.spec.clusterIPs`, `.spec.clusterIP` is set to the IPv6 address because that is
the first element in the `.spec.clusterIPs` array, overriding the default.
{{< codenew file="service/networking/dual-stack-preferred-ipfamilies-svc.yaml" >}}
{{% codenew file="service/networking/dual-stack-preferred-ipfamilies-svc.yaml" %}}
#### Dual-stack defaults on existing Services
@ -172,7 +172,7 @@ dual-stack.)
`.spec.ipFamilies` to the address family of the existing Service. The existing Service cluster IP
will be stored in `.spec.clusterIPs`.
{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
{{% codenew file="service/networking/dual-stack-default-svc.yaml" %}}
You can validate this behavior by using kubectl to inspect an existing service.
@ -212,7 +212,7 @@ dual-stack.)
`--service-cluster-ip-range` flag to the kube-apiserver) even though `.spec.clusterIP` is set to
`None`.
{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
{{% codenew file="service/networking/dual-stack-default-svc.yaml" %}}
You can validate this behavior by using kubectl to inspect an existing headless service with selectors.

View File

@ -73,7 +73,7 @@ Make sure you review your Ingress controller's documentation to understand the c
A minimal Ingress resource example:
{{< codenew file="service/networking/minimal-ingress.yaml" >}}
{{% codenew file="service/networking/minimal-ingress.yaml" %}}
An Ingress needs `apiVersion`, `kind`, `metadata` and `spec` fields.
The name of an Ingress object must be a valid
@ -140,7 +140,7 @@ setting with Service, and will fail validation if both are specified. A common
usage for a `Resource` backend is to ingress data to an object storage backend
with static assets.
{{< codenew file="service/networking/ingress-resource-backend.yaml" >}}
{{% codenew file="service/networking/ingress-resource-backend.yaml" %}}
After creating the Ingress above, you can view it with the following command:
@ -229,7 +229,7 @@ equal to the suffix of the wildcard rule.
| `*.foo.com` | `baz.bar.foo.com` | No match, wildcard only covers a single DNS label |
| `*.foo.com` | `foo.com` | No match, wildcard only covers a single DNS label |
{{< codenew file="service/networking/ingress-wildcard-host.yaml" >}}
{{% codenew file="service/networking/ingress-wildcard-host.yaml" %}}
## Ingress class
@ -238,7 +238,7 @@ configuration. Each Ingress should specify a class, a reference to an
IngressClass resource that contains additional configuration including the name
of the controller that should implement the class.
{{< codenew file="service/networking/external-lb.yaml" >}}
{{% codenew file="service/networking/external-lb.yaml" %}}
The `.spec.parameters` field of an IngressClass lets you reference another
resource that provides configuration related to that IngressClass.
@ -369,7 +369,7 @@ configured with a [flag](https://kubernetes.github.io/ingress-nginx/#what-is-the
`--watch-ingress-without-class`. It is [recommended](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do), though, to specify the
default `IngressClass`:
{{< codenew file="service/networking/default-ingressclass.yaml" >}}
{{% codenew file="service/networking/default-ingressclass.yaml" %}}
## Types of Ingress
@ -379,7 +379,7 @@ There are existing Kubernetes concepts that allow you to expose a single Service
(see [alternatives](#alternatives)). You can also do this with an Ingress by specifying a
*default backend* with no rules.
{{< codenew file="service/networking/test-ingress.yaml" >}}
{{% codenew file="service/networking/test-ingress.yaml" %}}
If you create it using `kubectl apply -f` you should be able to view the state
of the Ingress you added:
@ -411,7 +411,7 @@ down to a minimum. For example, a setup like:
It would require an Ingress such as:
{{< codenew file="service/networking/simple-fanout-example.yaml" >}}
{{% codenew file="service/networking/simple-fanout-example.yaml" %}}
When you create the Ingress with `kubectl apply -f`:
@ -456,7 +456,7 @@ Name-based virtual hosts support routing HTTP traffic to multiple host names at
The following Ingress tells the backing load balancer to route requests based on
the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).
{{< codenew file="service/networking/name-virtual-host-ingress.yaml" >}}
{{% codenew file="service/networking/name-virtual-host-ingress.yaml" %}}
If you create an Ingress resource without any hosts defined in the rules, then any
web traffic to the IP address of your Ingress controller can be matched without a name based
@ -467,7 +467,7 @@ requested for `first.bar.com` to `service1`, `second.bar.com` to `service2`,
and any traffic whose request host header doesn't match `first.bar.com`
and `second.bar.com` to `service3`.
{{< codenew file="service/networking/name-virtual-host-ingress-no-third-host.yaml" >}}
{{% codenew file="service/networking/name-virtual-host-ingress-no-third-host.yaml" %}}
### TLS
@ -505,7 +505,7 @@ certificates would have to be issued for all the possible sub-domains. Therefore
section.
{{< /note >}}
{{< codenew file="service/networking/tls-example-ingress.yaml" >}}
{{% codenew file="service/networking/tls-example-ingress.yaml" %}}
{{< note >}}
There is a gap between TLS features supported by various Ingress

View File

@ -83,7 +83,7 @@ reference for a full definition of the resource.
An example NetworkPolicy might look like this:
{{< codenew file="service/networking/networkpolicy.yaml" >}}
{{% codenew file="service/networking/networkpolicy.yaml" %}}
{{< note >}}
POSTing this to the API server for your cluster will have no effect unless your chosen networking
@ -212,7 +212,7 @@ in that namespace.
You can create a "default" ingress isolation policy for a namespace by creating a NetworkPolicy
that selects all pods but does not allow any ingress traffic to those pods.
{{< codenew file="service/networking/network-policy-default-deny-ingress.yaml" >}}
{{% codenew file="service/networking/network-policy-default-deny-ingress.yaml" %}}
This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated
for ingress. This policy does not affect isolation for egress from any pod.
@ -222,7 +222,7 @@ for ingress. This policy does not affect isolation for egress from any pod.
If you want to allow all incoming connections to all pods in a namespace, you can create a policy
that explicitly allows that.
{{< codenew file="service/networking/network-policy-allow-all-ingress.yaml" >}}
{{% codenew file="service/networking/network-policy-allow-all-ingress.yaml" %}}
With this policy in place, no additional policy or policies can cause any incoming connection to
those pods to be denied. This policy has no effect on isolation for egress from any pod.
@ -232,7 +232,7 @@ those pods to be denied. This policy has no effect on isolation for egress from
You can create a "default" egress isolation policy for a namespace by creating a NetworkPolicy
that selects all pods but does not allow any egress traffic from those pods.
{{< codenew file="service/networking/network-policy-default-deny-egress.yaml" >}}
{{% codenew file="service/networking/network-policy-default-deny-egress.yaml" %}}
This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed
egress traffic. This policy does not change the ingress isolation behavior of any pod.
@ -242,7 +242,7 @@ egress traffic. This policy does not change the ingress isolation behavior of an
If you want to allow all connections from all pods in a namespace, you can create a policy that
explicitly allows all outgoing connections from pods in that namespace.
{{< codenew file="service/networking/network-policy-allow-all-egress.yaml" >}}
{{% codenew file="service/networking/network-policy-allow-all-egress.yaml" %}}
With this policy in place, no additional policy or policies can cause any outgoing connection from
those pods to be denied. This policy has no effect on isolation for ingress to any pod.
@ -252,7 +252,7 @@ those pods to be denied. This policy has no effect on isolation for ingress to
You can create a "default" policy for a namespace which prevents all ingress AND egress traffic by
creating the following NetworkPolicy in that namespace.
{{< codenew file="service/networking/network-policy-default-deny-all.yaml" >}}
{{% codenew file="service/networking/network-policy-default-deny-all.yaml" %}}
This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed
ingress or egress traffic.
@ -280,7 +280,7 @@ When writing a NetworkPolicy, you can target a range of ports instead of a singl
You can do this by using the `endPort` field, as shown in the following example:
{{< codenew file="service/networking/networkpolicy-multiport-egress.yaml" >}}
{{% codenew file="service/networking/networkpolicy-multiport-egress.yaml" %}}
The above rule allows any Pod with the label `role=db` in the namespace `default` to communicate
with any IP within the range `10.0.0.0/24` over TCP, provided that the target

View File

@ -30,11 +30,11 @@ see the [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all
### Example configuration with a secret, a downwardAPI, and a configMap {#example-configuration-secret-downwardapi-configmap}
{{< codenew file="pods/storage/projected-secret-downwardapi-configmap.yaml" >}}
{{% codenew file="pods/storage/projected-secret-downwardapi-configmap.yaml" %}}
### Example configuration: secrets with a non-default permission mode set {#example-configuration-secrets-nondefault-permission-mode}
{{< codenew file="pods/storage/projected-secrets-nondefault-permission-mode.yaml" >}}
{{% codenew file="pods/storage/projected-secrets-nondefault-permission-mode.yaml" %}}
Each projected volume source is listed in the spec under `sources`. The
parameters are nearly the same with two exceptions:
@ -49,7 +49,7 @@ parameters are nearly the same with two exceptions:
You can inject the token for the current [service account](/docs/reference/access-authn-authz/authentication/#service-account-tokens)
into a Pod at a specified path. For example:
{{< codenew file="pods/storage/projected-service-account-token.yaml" >}}
{{% codenew file="pods/storage/projected-service-account-token.yaml" %}}
The example Pod has a projected volume containing the injected service account
token. Containers in this Pod can use that token to access the Kubernetes API

View File

@ -41,7 +41,7 @@ length of a Job name is no more than 63 characters.
This example CronJob manifest prints the current time and a hello message every minute:
{{< codenew file="application/job/cronjob.yaml" >}}
{{% codenew file="application/job/cronjob.yaml" %}}
([Running Automated Tasks with a CronJob](/docs/tasks/job/automated-tasks-with-cron-jobs/)
takes you through this example in more detail).

View File

@ -38,7 +38,7 @@ different flags and/or different memory and cpu requests for different hardware
You can describe a DaemonSet in a YAML file. For example, the `daemonset.yaml` file below
describes a DaemonSet that runs the fluentd-elasticsearch Docker image:
{{< codenew file="controllers/daemonset.yaml" >}}
{{% codenew file="controllers/daemonset.yaml" %}}
Create a DaemonSet based on the YAML file:

View File

@ -46,7 +46,7 @@ for a container.
The following is an example of a Deployment. It creates a ReplicaSet to bring up three `nginx` Pods:
{{< codenew file="controllers/nginx-deployment.yaml" >}}
{{% codenew file="controllers/nginx-deployment.yaml" %}}
In this example:

View File

@ -39,7 +39,7 @@ see [CronJob](/docs/concepts/workloads/controllers/cron-jobs/).
Here is an example Job config. It computes π to 2000 places and prints it out.
It takes around 10s to complete.
{{< codenew file="controllers/job.yaml" >}}
{{% codenew file="controllers/job.yaml" %}}
You can run the example with this command:
@ -402,7 +402,7 @@ container exit codes and the Pod conditions.
Here is a manifest for a Job that defines a `podFailurePolicy`:
{{< codenew file="/controllers/job-pod-failure-policy-example.yaml" >}}
{{% codenew file="/controllers/job-pod-failure-policy-example.yaml" %}}
In the example above, the first rule of the Pod failure policy specifies that
the Job should be marked failed if the `main` container fails with the 42 exit

View File

@ -56,7 +56,7 @@ use a Deployment instead, and define your application in the spec section.
## Example
{{< codenew file="controllers/frontend.yaml" >}}
{{% codenew file="controllers/frontend.yaml" %}}
Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes cluster will
create the defined ReplicaSet and the Pods that it manages.
@ -166,7 +166,7 @@ to owning Pods specified by its template-- it can acquire other Pods in the mann
Take the previous frontend ReplicaSet example, and the Pods specified in the following manifest:
{{< codenew file="pods/pod-rs.yaml" >}}
{{% codenew file="pods/pod-rs.yaml" %}}
As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend
ReplicaSet, they will immediately be acquired by it.
@ -381,7 +381,7 @@ A ReplicaSet can also be a target for
a ReplicaSet can be auto-scaled by an HPA. Here is an example HPA targeting
the ReplicaSet we created in the previous example.
{{< codenew file="controllers/hpa-rs.yaml" >}}
{{% codenew file="controllers/hpa-rs.yaml" %}}
Saving this manifest into `hpa-rs.yaml` and submitting it to a Kubernetes cluster should
create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage

View File

@ -44,7 +44,7 @@ service, such as web servers.
This example ReplicationController config runs three copies of the nginx web server.
{{< codenew file="controllers/replication.yaml" >}}
{{% codenew file="controllers/replication.yaml" %}}
Run the example by downloading the example file and then running this command:

View File

@ -46,7 +46,7 @@ A Pod is similar to a set of containers with shared namespaces and shared filesy
The following is an example of a Pod which consists of a container running the image `nginx:1.14.2`.
{{< codenew file="pods/simple-pod.yaml" >}}
{{% codenew file="pods/simple-pod.yaml" %}}
To create the Pod shown above, run the following command:
```shell

View File

@ -273,29 +273,29 @@ Renders to:
### Source code files
You can use the `{{</* codenew */>}}` shortcode to embed the contents of a file in a code block to allow users to download or copy its content to their clipboard. This shortcode is used when the contents of the sample file are generic and reusable, and you want the users to try it out themselves.
You can use the `{{%/* codenew */%}}` shortcode to embed the contents of a file in a code block to allow users to download or copy its content to their clipboard. This shortcode is used when the contents of the sample file are generic and reusable, and you want the users to try it out themselves.
This shortcode takes in two named parameters: `language` and `file`. The mandatory parameter `file` is used to specify the path to the file being displayed. The optional parameter `language` is used to specify the programming language of the file. If the `language` parameter is not provided, the shortcode will attempt to guess the language based on the file extension.
For example:
```none
{{</* codenew language="yaml" file="application/deployment-scale.yaml" */>}}
{{%/* codenew language="yaml" file="application/deployment-scale.yaml" */%}}
```
The output is:
{{< codenew language="yaml" file="application/deployment-scale.yaml" >}}
{{% codenew language="yaml" file="application/deployment-scale.yaml" %}}
When adding a new sample file, such as a YAML file, create the file in one of the `<LANG>/examples/` subdirectories where `<LANG>` is the language for the page. In the markdown of your page, use the `codenew` shortcode:
```none
{{</* codenew file="<RELATIVE-PATH>/example-yaml>" */>}}
{{%/* codenew file="<RELATIVE-PATH>/example-yaml>" */%}}
```
where `<RELATIVE-PATH>` is the path to the sample file to include, relative to the `examples` directory. The following shortcode references a YAML file located at `/content/en/examples/configmap/configmaps.yaml`.
```none
{{</* codenew file="configmap/configmaps.yaml" */>}}
{{%/* codenew file="configmap/configmaps.yaml" */%}}
```
## Third party content marker

View File

@ -124,22 +124,16 @@ one of the `<LANG>/examples/` subdirectories where `<LANG>` is the language for
the topic. In your topic file, use the `codenew` shortcode:
```none
{{</* codenew file="<RELPATH>/my-example-yaml>" */>}}
{{%/* codenew file="<RELPATH>/my-example-yaml>" */%}}
```
where `<RELPATH>` is the path to the file to include, relative to the
`examples` directory. The following Hugo shortcode references a YAML
file located at `/content/en/examples/pods/storage/gce-volume.yaml`.
```none
{{</* codenew file="pods/storage/gce-volume.yaml" */>}}
{{%/* codenew file="pods/storage/gce-volume.yaml" */%}}
```
{{< note >}}
To show raw Hugo shortcodes as in the above example and prevent Hugo
from interpreting them, use C-style comments directly after the `<` and before
the `>` characters. View the code for this page for an example.
{{< /note >}}
## Showing how to create an API object from a configuration file
If you need to demonstrate how to create an API object based on a

View File

@ -78,7 +78,7 @@ To allow creating a CertificateSigningRequest and retrieving any CertificateSign
For example:
{{< codenew file="access/certificate-signing-request/clusterrole-create.yaml" >}}
{{% codenew file="access/certificate-signing-request/clusterrole-create.yaml" %}}
To allow approving a CertificateSigningRequest:
@ -88,7 +88,7 @@ To allow approving a CertificateSigningRequest:
For example:
{{< codenew file="access/certificate-signing-request/clusterrole-approve.yaml" >}}
{{% codenew file="access/certificate-signing-request/clusterrole-approve.yaml" %}}
To allow signing a CertificateSigningRequest:
@ -96,7 +96,7 @@ To allow signing a CertificateSigningRequest:
* Verbs: `update`, group: `certificates.k8s.io`, resource: `certificatesigningrequests/status`
* Verbs: `sign`, group: `certificates.k8s.io`, resource: `signers`, resourceName: `<signerNameDomain>/<signerNamePath>` or `<signerNameDomain>/*`
{{< codenew file="access/certificate-signing-request/clusterrole-sign.yaml" >}}
{{% codenew file="access/certificate-signing-request/clusterrole-sign.yaml" %}}
## Signers

View File

@ -1240,7 +1240,7 @@ guidance for restricting this access in existing clusters.
If you want new clusters to retain this level of access in the aggregated roles,
you can create the following ClusterRole:
{{< codenew file="access/endpoints-aggregated.yaml" >}}
{{% codenew file="access/endpoints-aggregated.yaml" %}}
## Upgrading from ABAC

View File

@ -265,7 +265,7 @@ updates that Secret with that generated token data.
Here is a sample manifest for such a Secret:
{{< codenew file="secret/serviceaccount/mysecretname.yaml" >}}
{{% codenew file="secret/serviceaccount/mysecretname.yaml" %}}
To create a Secret based on this example, run:

View File

@ -417,7 +417,7 @@ resource to be evaluated.
Here is an example illustrating a few different uses for match conditions:
{{< codenew file="access/validating-admission-policy-match-conditions.yaml" >}}
{{% codenew file="access/validating-admission-policy-match-conditions.yaml" %}}
Match conditions have access to the same CEL variables as validation expressions.
@ -435,7 +435,7 @@ the request is determined as follows:
For example, here is an admission policy with an audit annotation:
{{< codenew file="access/validating-admission-policy-audit-annotation.yaml" >}}
{{% codenew file="access/validating-admission-policy-audit-annotation.yaml" %}}
When an API request is validated with this admission policy, the resulting audit event will look like:
@ -472,7 +472,7 @@ message expression must evaluate to a string.
For example, to better inform the user of the reason for denial when the policy refers to a parameter,
we can have the following validation:
{{< codenew file="access/deployment-replicas-policy.yaml" >}}
{{% codenew file="access/deployment-replicas-policy.yaml" %}}
After creating a params object that limits the replicas to 3 and setting up the binding,
when we try to create a deployment with 5 replicas, we will receive the following message.

View File

@ -332,7 +332,7 @@ resource and its accompanying controller.
Say a user has defined a Deployment with `replicas` set to the desired value:
{{< codenew file="application/ssa/nginx-deployment.yaml" >}}
{{% codenew file="application/ssa/nginx-deployment.yaml" %}}
And the user has created the deployment using Server-Side Apply like so:
@ -396,7 +396,7 @@ process than it sometimes does.
At this point the user may remove the `replicas` field from their configuration.
{{< codenew file="application/ssa/nginx-deployment-no-replicas.yaml" >}}
{{% codenew file="application/ssa/nginx-deployment-no-replicas.yaml" %}}
Note that whenever the HPA controller sets the `replicas` field to a new value,
the temporary field manager will no longer own any fields and will be

View File

@ -23,7 +23,7 @@ In this exercise, you create a Pod that runs two Containers. The two containers
share a Volume that they can use to communicate. Here is the configuration file
for the Pod:
{{< codenew file="pods/two-container-pod.yaml" >}}
{{% codenew file="pods/two-container-pod.yaml" %}}
In the configuration file, you can see that the Pod has a Volume named
`shared-data`.

View File

@ -36,7 +36,7 @@ require a supported environment. If your environment does not support this, you
The backend is a simple hello greeter microservice. Here is the configuration
file for the backend Deployment:
{{< codenew file="service/access/backend-deployment.yaml" >}}
{{% codenew file="service/access/backend-deployment.yaml" %}}
Create the backend Deployment:
@ -97,7 +97,7 @@ the Pods that it routes traffic to.
First, explore the Service configuration file:
{{< codenew file="service/access/backend-service.yaml" >}}
{{% codenew file="service/access/backend-service.yaml" %}}
In the configuration file, you can see that the Service, named `hello`, routes
traffic to Pods that have the labels `app: hello` and `tier: backend`.
@ -125,7 +125,7 @@ configuration file.
The Pods in the frontend Deployment run an nginx image that is configured
to proxy requests to the `hello` backend Service. Here is the nginx configuration file:
{{< codenew file="service/access/frontend-nginx.conf" >}}
{{% codenew file="service/access/frontend-nginx.conf" %}}
Similar to the backend, the frontend has a Deployment and a Service. An important
difference to notice between the backend and frontend services is that the
@ -133,9 +133,9 @@ configuration for the frontend Service has `type: LoadBalancer`, which means tha
the Service uses a load balancer provisioned by your cloud provider and will be
accessible from outside the cluster.
{{< codenew file="service/access/frontend-service.yaml" >}}
{{% codenew file="service/access/frontend-service.yaml" %}}
{{< codenew file="service/access/frontend-deployment.yaml" >}}
{{% codenew file="service/access/frontend-deployment.yaml" %}}
Create the frontend Deployment and Service:

View File

@ -126,7 +126,7 @@ The following manifest defines an Ingress that sends traffic to your Service via
1. Create `example-ingress.yaml` from the following file:
{{< codenew file="service/networking/example-ingress.yaml" >}}
{{% codenew file="service/networking/example-ingress.yaml" %}}
1. Create the Ingress object by running the following command:

View File

@ -26,7 +26,7 @@ provides load balancing for an application that has two running instances.
Here is the configuration file for the application Deployment:
{{< codenew file="service/access/hello-application.yaml" >}}
{{% codenew file="service/access/hello-application.yaml" %}}
1. Run a Hello World application in your cluster:
Create the application Deployment using the file above:

View File

@ -87,7 +87,7 @@ remote file exists
To limit the access to the `nginx` service so that only Pods with the label `access: true` can query it, create a NetworkPolicy object as follows:
{{< codenew file="service/networking/nginx-policy.yaml" >}}
{{% codenew file="service/networking/nginx-policy.yaml" %}}
The name of a NetworkPolicy object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).

View File

@ -24,7 +24,7 @@ kube-dns.
### Create a simple Pod to use as a test environment
{{< codenew file="admin/dns/dnsutils.yaml" >}}
{{% codenew file="admin/dns/dnsutils.yaml" %}}
{{< note >}}
This example creates a pod in the `default` namespace. DNS name resolution for

View File

@ -86,7 +86,7 @@ container based on the `cluster-proportional-autoscaler-amd64` image.
Create a file named `dns-horizontal-autoscaler.yaml` with this content:
{{< codenew file="admin/dns/dns-horizontal-autoscaler.yaml" >}}
{{% codenew file="admin/dns/dns-horizontal-autoscaler.yaml" %}}
In the file, replace `<SCALE_TARGET>` with your scale target.

View File

@ -47,7 +47,7 @@ kubectl create namespace constraints-cpu-example
Here's a manifest for an example {{< glossary_tooltip text="LimitRange" term_id="limitrange" >}}:
{{< codenew file="admin/resource/cpu-constraints.yaml" >}}
{{% codenew file="admin/resource/cpu-constraints.yaml" %}}
Create the LimitRange:
@ -98,7 +98,7 @@ Here's a manifest for a Pod that has one container. The container manifest
specifies a CPU request of 500 millicpu and a CPU limit of 800 millicpu. These satisfy the
minimum and maximum CPU constraints imposed by the LimitRange for this namespace.
{{< codenew file="admin/resource/cpu-constraints-pod.yaml" >}}
{{% codenew file="admin/resource/cpu-constraints-pod.yaml" %}}
Create the Pod:
@ -140,7 +140,7 @@ kubectl delete pod constraints-cpu-demo --namespace=constraints-cpu-example
Here's a manifest for a Pod that has one container. The container specifies a
CPU request of 500 millicpu and a CPU limit of 1.5 CPU.
{{< codenew file="admin/resource/cpu-constraints-pod-2.yaml" >}}
{{% codenew file="admin/resource/cpu-constraints-pod-2.yaml" %}}
Attempt to create the Pod:
@ -161,7 +161,7 @@ pods "constraints-cpu-demo-2" is forbidden: maximum cpu usage per Container is 8
Here's a manifest for a Pod that has one container. The container specifies a
CPU request of 100 millicpu and a CPU limit of 800 millicpu.
{{< codenew file="admin/resource/cpu-constraints-pod-3.yaml" >}}
{{% codenew file="admin/resource/cpu-constraints-pod-3.yaml" %}}
Attempt to create the Pod:
@ -183,7 +183,7 @@ pods "constraints-cpu-demo-3" is forbidden: minimum cpu usage per Container is 2
Here's a manifest for a Pod that has one container. The container does not
specify a CPU request, nor does it specify a CPU limit.
{{< codenew file="admin/resource/cpu-constraints-pod-4.yaml" >}}
{{% codenew file="admin/resource/cpu-constraints-pod-4.yaml" %}}
Create the Pod:

View File

@ -49,7 +49,7 @@ kubectl create namespace default-cpu-example
Here's a manifest for an example {{< glossary_tooltip text="LimitRange" term_id="limitrange" >}}.
The manifest specifies a default CPU request and a default CPU limit.
{{< codenew file="admin/resource/cpu-defaults.yaml" >}}
{{% codenew file="admin/resource/cpu-defaults.yaml" %}}
Create the LimitRange in the default-cpu-example namespace:
@ -65,7 +65,7 @@ CPU limit of 1.
Here's a manifest for a Pod that has one container. The container
does not specify a CPU request and limit.
{{< codenew file="admin/resource/cpu-defaults-pod.yaml" >}}
{{% codenew file="admin/resource/cpu-defaults-pod.yaml" %}}
Create the Pod.
@ -100,7 +100,7 @@ containers:
Here's a manifest for a Pod that has one container. The container
specifies a CPU limit, but not a request:
{{< codenew file="admin/resource/cpu-defaults-pod-2.yaml" >}}
{{% codenew file="admin/resource/cpu-defaults-pod-2.yaml" %}}
Create the Pod:
@ -132,7 +132,7 @@ resources:
Here's an example manifest for a Pod that has one container. The container
specifies a CPU request, but not a limit:
{{< codenew file="admin/resource/cpu-defaults-pod-3.yaml" >}}
{{% codenew file="admin/resource/cpu-defaults-pod-3.yaml" %}}
Create the Pod:

View File

@ -43,7 +43,7 @@ kubectl create namespace constraints-mem-example
Here's an example manifest for a LimitRange:
{{< codenew file="admin/resource/memory-constraints.yaml" >}}
{{% codenew file="admin/resource/memory-constraints.yaml" %}}
Create the LimitRange:
@ -89,7 +89,7 @@ Here's a manifest for a Pod that has one container. Within the Pod spec, the sol
container specifies a memory request of 600 MiB and a memory limit of 800 MiB. These satisfy the
minimum and maximum memory constraints imposed by the LimitRange.
{{< codenew file="admin/resource/memory-constraints-pod.yaml" >}}
{{% codenew file="admin/resource/memory-constraints-pod.yaml" %}}
Create the Pod:
@ -132,7 +132,7 @@ kubectl delete pod constraints-mem-demo --namespace=constraints-mem-example
Here's a manifest for a Pod that has one container. The container specifies a
memory request of 800 MiB and a memory limit of 1.5 GiB.
{{< codenew file="admin/resource/memory-constraints-pod-2.yaml" >}}
{{% codenew file="admin/resource/memory-constraints-pod-2.yaml" %}}
Attempt to create the Pod:
@ -153,7 +153,7 @@ pods "constraints-mem-demo-2" is forbidden: maximum memory usage per Container i
Here's a manifest for a Pod that has one container. That container specifies a
memory request of 100 MiB and a memory limit of 800 MiB.
{{< codenew file="admin/resource/memory-constraints-pod-3.yaml" >}}
{{% codenew file="admin/resource/memory-constraints-pod-3.yaml" %}}
Attempt to create the Pod:
@ -174,7 +174,7 @@ pods "constraints-mem-demo-3" is forbidden: minimum memory usage per Container i
Here's a manifest for a Pod that has one container. The container does not
specify a memory request, and it does not specify a memory limit.
{{< codenew file="admin/resource/memory-constraints-pod-4.yaml" >}}
{{% codenew file="admin/resource/memory-constraints-pod-4.yaml" %}}
Create the Pod:

View File

@ -53,7 +53,7 @@ Here's a manifest for an example {{< glossary_tooltip text="LimitRange" term_id=
The manifest specifies a default memory
request and a default memory limit.
{{< codenew file="admin/resource/memory-defaults.yaml" >}}
{{% codenew file="admin/resource/memory-defaults.yaml" %}}
Create the LimitRange in the default-mem-example namespace:
@ -70,7 +70,7 @@ applies default values: a memory request of 256MiB and a memory limit of 512MiB.
Here's an example manifest for a Pod that has one container. The container
does not specify a memory request and limit.
{{< codenew file="admin/resource/memory-defaults-pod.yaml" >}}
{{% codenew file="admin/resource/memory-defaults-pod.yaml" %}}
Create the Pod.
@ -110,7 +110,7 @@ kubectl delete pod default-mem-demo --namespace=default-mem-example
Here's a manifest for a Pod that has one container. The container
specifies a memory limit, but not a request:
{{< codenew file="admin/resource/memory-defaults-pod-2.yaml" >}}
{{% codenew file="admin/resource/memory-defaults-pod-2.yaml" %}}
Create the Pod:
@ -141,7 +141,7 @@ resources:
Here's a manifest for a Pod that has one container. The container
specifies a memory request, but not a limit:
{{< codenew file="admin/resource/memory-defaults-pod-3.yaml" >}}
{{% codenew file="admin/resource/memory-defaults-pod-3.yaml" %}}
Create the Pod:

View File

@ -42,7 +42,7 @@ kubectl create namespace quota-mem-cpu-example
Here is a manifest for an example ResourceQuota:
{{< codenew file="admin/resource/quota-mem-cpu.yaml" >}}
{{% codenew file="admin/resource/quota-mem-cpu.yaml" %}}
Create the ResourceQuota:
@ -71,7 +71,7 @@ to learn what Kubernetes means by “1 CPU”.
Here is a manifest for an example Pod:
{{< codenew file="admin/resource/quota-mem-cpu-pod.yaml" >}}
{{% codenew file="admin/resource/quota-mem-cpu-pod.yaml" %}}
Create the Pod:
@ -121,7 +121,7 @@ kubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example -o json
Here is a manifest for a second Pod:
{{< codenew file="admin/resource/quota-mem-cpu-pod-2.yaml" >}}
{{% codenew file="admin/resource/quota-mem-cpu-pod-2.yaml" %}}
In the manifest, you can see that the Pod has a memory request of 700 MiB.
Notice that the sum of the used memory request and this new memory

View File

@ -39,7 +39,7 @@ kubectl create namespace quota-pod-example
Here is an example manifest for a ResourceQuota:
{{< codenew file="admin/resource/quota-pod.yaml" >}}
{{% codenew file="admin/resource/quota-pod.yaml" %}}
Create the ResourceQuota:
@ -69,7 +69,7 @@ status:
Here is an example manifest for a {{< glossary_tooltip term_id="deployment" >}}:
{{< codenew file="admin/resource/quota-pod-deployment.yaml" >}}
{{% codenew file="admin/resource/quota-pod-deployment.yaml" %}}
In that manifest, `replicas: 3` tells Kubernetes to attempt to create three new Pods, all
running the same application.

View File

@ -73,7 +73,7 @@ Let's create two new namespaces to hold our work.
Use the file [`namespace-dev.yaml`](/examples/admin/namespace-dev.yaml) which describes a `development` namespace:
{{< codenew language="yaml" file="admin/namespace-dev.yaml" >}}
{{% codenew language="yaml" file="admin/namespace-dev.yaml" %}}
Create the `development` namespace using kubectl.
@ -83,7 +83,7 @@ kubectl create -f https://k8s.io/examples/admin/namespace-dev.yaml
Save the following contents into file [`namespace-prod.yaml`](/examples/admin/namespace-prod.yaml) which describes a `production` namespace:
{{< codenew language="yaml" file="admin/namespace-prod.yaml" >}}
{{% codenew language="yaml" file="admin/namespace-prod.yaml" %}}
And then let's create the `production` namespace using kubectl.
@ -226,7 +226,7 @@ At this point, all requests we make to the Kubernetes cluster from the command l
Let's create some contents.
{{< codenew file="admin/snowflake-deployment.yaml" >}}
{{% codenew file="admin/snowflake-deployment.yaml" %}}
Apply the manifest to create a Deployment

View File

@ -40,7 +40,7 @@ kubectl create namespace quota-object-example
Here is the configuration file for a ResourceQuota object:
{{< codenew file="admin/resource/quota-objects.yaml" >}}
{{% codenew file="admin/resource/quota-objects.yaml" %}}
Create the ResourceQuota:
@ -74,7 +74,7 @@ status:
Here is the configuration file for a PersistentVolumeClaim object:
{{< codenew file="admin/resource/quota-objects-pvc.yaml" >}}
{{% codenew file="admin/resource/quota-objects-pvc.yaml" %}}
Create the PersistentVolumeClaim:
@ -99,7 +99,7 @@ pvc-quota-demo Pending
Here is the configuration file for a second PersistentVolumeClaim:
{{< codenew file="admin/resource/quota-objects-pvc-2.yaml" >}}
{{% codenew file="admin/resource/quota-objects-pvc-2.yaml" %}}
Attempt to create the second PersistentVolumeClaim:

View File

@ -92,7 +92,7 @@ projects in repositories maintained by cloud vendors or by SIGs.
For providers already in Kubernetes core, you can run the in-tree cloud controller
manager as a DaemonSet in your cluster. Use the following as a guideline:
{{< codenew file="admin/cloud/ccm-example.yaml" >}}
{{% codenew file="admin/cloud/ccm-example.yaml" %}}
## Limitations

View File

@ -71,7 +71,7 @@ in the Container resource manifest. To specify a CPU limit, include `resources:l
In this exercise, you create a Pod that has one container. The container has a request
of 0.5 CPU and a limit of 1 CPU. Here is the configuration file for the Pod:
{{< codenew file="pods/resource/cpu-request-limit.yaml" >}}
{{% codenew file="pods/resource/cpu-request-limit.yaml" %}}
The `args` section of the configuration file provides arguments for the container when it starts.
The `-cpus "2"` argument tells the Container to attempt to use 2 CPUs.
@ -163,7 +163,7 @@ the capacity of any Node in your cluster. Here is the configuration file for a P
that has one Container. The Container requests 100 CPU, which is likely to exceed the
capacity of any Node in your cluster.
{{< codenew file="pods/resource/cpu-request-limit-2.yaml" >}}
{{% codenew file="pods/resource/cpu-request-limit-2.yaml" %}}
Create the Pod:

View File

@ -69,7 +69,7 @@ In this exercise, you create a Pod that has one Container. The Container has a m
request of 100 MiB and a memory limit of 200 MiB. Here's the configuration file
for the Pod:
{{< codenew file="pods/resource/memory-request-limit.yaml" >}}
{{% codenew file="pods/resource/memory-request-limit.yaml" %}}
The `args` section in the configuration file provides arguments for the Container when it starts.
The `"--vm-bytes", "150M"` arguments tell the Container to attempt to allocate 150 MiB of memory.
@ -139,7 +139,7 @@ In this exercise, you create a Pod that attempts to allocate more memory than it
Here is the configuration file for a Pod that has one Container with a
memory request of 50 MiB and a memory limit of 100 MiB:
{{< codenew file="pods/resource/memory-request-limit-2.yaml" >}}
{{% codenew file="pods/resource/memory-request-limit-2.yaml" %}}
In the `args` section of the configuration file, you can see that the Container
will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limit.
@ -248,7 +248,7 @@ capacity of any Node in your cluster. Here is the configuration file for a Pod t
Container with a request for 1000 GiB of memory, which likely exceeds the capacity
of any Node in your cluster.
{{< codenew file="pods/resource/memory-request-limit-3.yaml" >}}
{{% codenew file="pods/resource/memory-request-limit-3.yaml" %}}
Create the Pod:

View File

@ -64,7 +64,7 @@ Kubernetes cluster.
This manifest describes a Pod that has a `requiredDuringSchedulingIgnoredDuringExecution` node affinity, `disktype: ssd`.
This means that the pod will get scheduled only on a node that has a `disktype=ssd` label.
{{< codenew file="pods/pod-nginx-required-affinity.yaml" >}}
{{% codenew file="pods/pod-nginx-required-affinity.yaml" %}}
1. Apply the manifest to create a Pod that is scheduled onto your
chosen node:
@ -91,7 +91,7 @@ This means that the pod will get scheduled only on a node that has a `disktype=s
This manifest describes a Pod that has a `preferredDuringSchedulingIgnoredDuringExecution` node affinity, `disktype: ssd`.
This means that the pod will prefer a node that has a `disktype=ssd` label.
{{< codenew file="pods/pod-nginx-preferred-affinity.yaml" >}}
{{% codenew file="pods/pod-nginx-preferred-affinity.yaml" %}}
1. Apply the manifest to create a Pod that is scheduled onto your
chosen node:

View File

@ -66,7 +66,7 @@ This pod configuration file describes a pod that has a node selector,
`disktype: ssd`. This means that the pod will get scheduled on a node that has
a `disktype=ssd` label.
{{< codenew file="pods/pod-nginx.yaml" >}}
{{% codenew file="pods/pod-nginx.yaml" %}}
1. Use the configuration file to create a pod that will get scheduled on your
chosen node:
@ -91,7 +91,7 @@ a `disktype=ssd` label.
You can also schedule a pod to one specific node by setting `nodeName`.
{{< codenew file="pods/pod-nginx-specific-node.yaml" >}}
{{% codenew file="pods/pod-nginx-specific-node.yaml" %}}
Use the configuration file to create a pod that will get scheduled on `foo-node` only.

View File

@ -30,7 +30,7 @@ for the postStart and preStop events.
Here is the configuration file for the Pod:
{{< codenew file="pods/lifecycle-events.yaml" >}}
{{% codenew file="pods/lifecycle-events.yaml" %}}
In the configuration file, you can see that the postStart command writes a `message`
file to the Container's `/usr/share` directory. The preStop command shuts down

View File

@ -57,7 +57,7 @@ liveness probes to detect and remedy such situations.
In this exercise, you create a Pod that runs a container based on the
`registry.k8s.io/busybox` image. Here is the configuration file for the Pod:
{{< codenew file="pods/probe/exec-liveness.yaml" >}}
{{% codenew file="pods/probe/exec-liveness.yaml" %}}
In the configuration file, you can see that the Pod has a single `Container`.
The `periodSeconds` field specifies that the kubelet should perform a liveness
@ -142,7 +142,7 @@ liveness-exec 1/1 Running 1 1m
Another kind of liveness probe uses an HTTP GET request. Here is the configuration
file for a Pod that runs a container based on the `registry.k8s.io/liveness` image.
{{< codenew file="pods/probe/http-liveness.yaml" >}}
{{% codenew file="pods/probe/http-liveness.yaml" %}}
In the configuration file, you can see that the Pod has a single container.
The `periodSeconds` field specifies that the kubelet should perform a liveness
@ -203,7 +203,7 @@ kubelet will attempt to open a socket to your container on the specified port.
If it can establish a connection, the container is considered healthy; if it
can't, it is considered a failure.
{{< codenew file="pods/probe/tcp-liveness-readiness.yaml" >}}
{{% codenew file="pods/probe/tcp-liveness-readiness.yaml" %}}
As you can see, configuration for a TCP check is quite similar to an HTTP check.
This example uses both readiness and liveness probes. The kubelet will send the
@ -241,7 +241,7 @@ Similarly you can configure readiness and startup probes.
Here is an example manifest:
{{< codenew file="pods/probe/grpc-liveness.yaml" >}}
{{% codenew file="pods/probe/grpc-liveness.yaml" %}}
To use a gRPC probe, `port` must be configured. If you want to distinguish probes of different types
and probes for different features, you can use the `service` field.

View File

@ -89,7 +89,7 @@ to set up
Here is the configuration file for the hostPath PersistentVolume:
{{< codenew file="pods/storage/pv-volume.yaml" >}}
{{% codenew file="pods/storage/pv-volume.yaml" %}}
The configuration file specifies that the volume is at `/mnt/data` on the
cluster's Node. The configuration also specifies a size of 10 gibibytes and
@ -127,7 +127,7 @@ access for at most one Node at a time.
Here is the configuration file for the PersistentVolumeClaim:
{{< codenew file="pods/storage/pv-claim.yaml" >}}
{{% codenew file="pods/storage/pv-claim.yaml" %}}
Create the PersistentVolumeClaim:
@ -173,7 +173,7 @@ The next step is to create a Pod that uses your PersistentVolumeClaim as a volum
Here is the configuration file for the Pod:
{{< codenew file="pods/storage/pv-pod.yaml" >}}
{{% codenew file="pods/storage/pv-pod.yaml" %}}
Notice that the Pod's configuration file specifies a PersistentVolumeClaim, but
it does not specify a PersistentVolume. From the Pod's point of view, the claim
@ -244,7 +244,7 @@ You can now close the shell to your Node.
## Mounting the same persistentVolume in two places
{{< codenew file="pods/storage/pv-duplicate.yaml" >}}
{{% codenew file="pods/storage/pv-duplicate.yaml" %}}
You can perform 2 volume mounts on your nginx container:

View File

@ -547,7 +547,7 @@ section, and learn how to use these objects with Pods.
2. Assign the `special.how` value defined in the ConfigMap to the `SPECIAL_LEVEL_KEY`
environment variable in the Pod specification.
{{< codenew file="pods/pod-single-configmap-env-variable.yaml" >}}
{{% codenew file="pods/pod-single-configmap-env-variable.yaml" %}}
Create the Pod:
@ -562,7 +562,7 @@ section, and learn how to use these objects with Pods.
As with the previous example, create the ConfigMaps first.
Here is the manifest you will use:
{{< codenew file="configmap/configmaps.yaml" >}}
{{% codenew file="configmap/configmaps.yaml" %}}
* Create the ConfigMap:
@ -572,7 +572,7 @@ Here is the manifest you will use:
* Define the environment variables in the Pod specification.
{{< codenew file="pods/pod-multiple-configmap-env-variable.yaml" >}}
{{% codenew file="pods/pod-multiple-configmap-env-variable.yaml" %}}
Create the Pod:
@ -591,7 +591,7 @@ Here is the manifest you will use:
* Create a ConfigMap containing multiple key-value pairs.
{{< codenew file="configmap/configmap-multikeys.yaml" >}}
{{% codenew file="configmap/configmap-multikeys.yaml" %}}
Create the ConfigMap:
@ -602,7 +602,7 @@ Here is the manifest you will use:
* Use `envFrom` to define all of the ConfigMap's data as container environment variables. The
key from the ConfigMap becomes the environment variable name in the Pod.
{{< codenew file="pods/pod-configmap-envFrom.yaml" >}}
{{% codenew file="pods/pod-configmap-envFrom.yaml" %}}
Create the Pod:
@ -624,7 +624,7 @@ using the `$(VAR_NAME)` Kubernetes substitution syntax.
For example, the following Pod manifest:
{{< codenew file="pods/pod-configmap-env-var-valueFrom.yaml" >}}
{{% codenew file="pods/pod-configmap-env-var-valueFrom.yaml" %}}
Create that Pod by running:
@ -651,7 +651,7 @@ the ConfigMap. The file contents become the key's value.
The examples in this section refer to a ConfigMap named `special-config`:
{{< codenew file="configmap/configmap-multikeys.yaml" >}}
{{% codenew file="configmap/configmap-multikeys.yaml" %}}
Create the ConfigMap:
@ -666,7 +666,7 @@ This adds the ConfigMap data to the directory specified as `volumeMounts.mountPa
case, `/etc/config`). The `command` section lists directory files with names that match the
keys in the ConfigMap.
{{< codenew file="pods/pod-configmap-volume.yaml" >}}
{{% codenew file="pods/pod-configmap-volume.yaml" %}}
Create the Pod:
@ -700,7 +700,7 @@ kubectl delete pod dapi-test-pod --now
Use the `path` field to specify the desired file path for specific ConfigMap items.
In this case, the `SPECIAL_LEVEL` item will be mounted in the `config-volume` volume at `/etc/config/keys`.
{{< codenew file="pods/pod-configmap-volume-specific-key.yaml" >}}
{{% codenew file="pods/pod-configmap-volume-specific-key.yaml" %}}
Create the Pod:

View File

@ -23,7 +23,7 @@ container starts.
Here is the configuration file for the Pod:
{{< codenew file="pods/init-containers.yaml" >}}
{{% codenew file="pods/init-containers.yaml" %}}
In the configuration file, you can see that the Pod has a Volume that the init
container and the application container share.

View File

@ -29,7 +29,7 @@ In this exercise, you create username and password {{< glossary_tooltip text="Se
Here is the configuration file for the Pod:
{{< codenew file="pods/storage/projected.yaml" >}}
{{% codenew file="pods/storage/projected.yaml" %}}
1. Create the Secrets:

View File

@ -29,7 +29,7 @@ The Windows security context options that you specify for a Pod apply to all Con
Here is a configuration file for a Windows Pod that has the `runAsUserName` field set:
{{< codenew file="windows/run-as-username-pod.yaml" >}}
{{% codenew file="windows/run-as-username-pod.yaml" %}}
Create the Pod:
@ -69,7 +69,7 @@ The Windows security context options that you specify for a Container apply only
Here is the configuration file for a Pod that has one Container, and the `runAsUserName` field is set at the Pod level and the Container level:
{{< codenew file="windows/run-as-username-container.yaml" >}}
{{% codenew file="windows/run-as-username-container.yaml" %}}
Create the Pod:

View File

@ -403,7 +403,7 @@ You can configure this behavior for the `spec` of a Pod using a
To provide a Pod with a token with an audience of `vault` and a validity duration
of two hours, you could define a Pod manifest that is similar to:
{{< codenew file="pods/pod-projected-svc-token.yaml" >}}
{{% codenew file="pods/pod-projected-svc-token.yaml" %}}
Create the Pod:

View File

@ -28,7 +28,7 @@ Volume of type
that lasts for the life of the Pod, even if the Container terminates and
restarts. Here is the configuration file for the Pod:
{{< codenew file="pods/storage/redis.yaml" >}}
{{% codenew file="pods/storage/redis.yaml" %}}
1. Create the Pod:

View File

@ -37,7 +37,7 @@ descriptive resource name.
Here is the configuration file for a Pod that has one Container:
{{< codenew file="pods/resource/extended-resource-pod.yaml" >}}
{{% codenew file="pods/resource/extended-resource-pod.yaml" %}}
In the configuration file, you can see that the Container requests 3 dongles.
@ -73,7 +73,7 @@ Requests:
Here is the configuration file for a Pod that has one Container. The Container requests
two dongles.
{{< codenew file="pods/resource/extended-resource-pod-2.yaml" >}}
{{% codenew file="pods/resource/extended-resource-pod-2.yaml" %}}
Kubernetes will not be able to satisfy the request for two dongles, because the first Pod
used three of the four available dongles.

View File

@ -185,7 +185,7 @@ You have successfully set your Docker credentials as a Secret called `regcred` i
Here is a manifest for an example Pod that needs access to your Docker credentials in `regcred`:
{{< codenew file="pods/private-reg-pod.yaml" >}}
{{% codenew file="pods/private-reg-pod.yaml" %}}
Download the above file onto your computer:

View File

@ -56,7 +56,7 @@ cannot define resources so these restrictions do not apply.
Here is a manifest for a Pod that has one Container. The Container has a memory limit and a
memory request, both equal to 200 MiB. The Container has a CPU limit and a CPU request, both equal to 700 milliCPU:
{{< codenew file="pods/qos/qos-pod.yaml" >}}
{{% codenew file="pods/qos/qos-pod.yaml" %}}
Create the Pod:
@ -116,7 +116,7 @@ A Pod is given a QoS class of `Burstable` if:
Here is a manifest for a Pod that has one Container. The Container has a memory limit of 200 MiB
and a memory request of 100 MiB.
{{< codenew file="pods/qos/qos-pod-2.yaml" >}}
{{% codenew file="pods/qos/qos-pod-2.yaml" %}}
Create the Pod:
@ -165,7 +165,7 @@ have any memory or CPU limits or requests.
Here is a manifest for a Pod that has one Container. The Container has no memory or CPU
limits or requests:
{{< codenew file="pods/qos/qos-pod-3.yaml" >}}
{{% codenew file="pods/qos/qos-pod-3.yaml" %}}
Create the Pod:
@ -205,7 +205,7 @@ kubectl delete pod qos-demo-3 --namespace=qos-example
Here is a manifest for a Pod that has two Containers. One container specifies a memory
request of 200 MiB. The other Container does not specify any requests or limits.
{{< codenew file="pods/qos/qos-pod-4.yaml" >}}
{{% codenew file="pods/qos/qos-pod-4.yaml" %}}
Notice that this Pod meets the criteria for QoS class `Burstable`. That is, it does not meet the
criteria for QoS class `Guaranteed`, and one of its Containers has a memory request.

View File

@ -107,7 +107,7 @@ class pod by specifying requests and/or limits for a pod's containers.
Consider the following manifest for a Pod that has one Container.
{{< codenew file="pods/qos/qos-pod-5.yaml" >}}
{{% codenew file="pods/qos/qos-pod-5.yaml" %}}
Create the pod in the `qos-example` namespace:

View File

@ -58,7 +58,7 @@ in the Pod specification. The `securityContext` field is a
The security settings that you specify for a Pod apply to all Containers in the Pod.
Here is a configuration file for a Pod that has a `securityContext` and an `emptyDir` volume:
{{< codenew file="pods/security/security-context.yaml" >}}
{{% codenew file="pods/security/security-context.yaml" %}}
In the configuration file, the `runAsUser` field specifies that for any Containers in
the Pod, all processes run with user ID 1000. The `runAsGroup` field specifies the primary group ID of 3000 for
@ -221,7 +221,7 @@ there is overlap. Container settings do not affect the Pod's Volumes.
Here is the configuration file for a Pod that has one Container. Both the Pod
and the Container have a `securityContext` field:
{{< codenew file="pods/security/security-context-2.yaml" >}}
{{% codenew file="pods/security/security-context-2.yaml" %}}
Create the Pod:
@ -274,7 +274,7 @@ of the root user. To add or remove Linux capabilities for a Container, include t
First, see what happens when you don't include a `capabilities` field.
Here is a configuration file that does not add or remove any Container capabilities:
{{< codenew file="pods/security/security-context-3.yaml" >}}
{{% codenew file="pods/security/security-context-3.yaml" %}}
Create the Pod:
@ -336,7 +336,7 @@ that it has additional capabilities set.
Here is the configuration file for a Pod that runs one Container. The configuration
adds the `CAP_NET_ADMIN` and `CAP_SYS_TIME` capabilities:
{{< codenew file="pods/security/security-context-4.yaml" >}}
{{% codenew file="pods/security/security-context-4.yaml" %}}
Create the Pod:

View File

@ -29,7 +29,7 @@ include debugging utilities like a shell.
Process namespace sharing is enabled using the `shareProcessNamespace` field of
`.spec` for a Pod. For example:
{{< codenew file="pods/share-process-namespace.yaml" >}}
{{% codenew file="pods/share-process-namespace.yaml" %}}
1. Create the pod `nginx` on your cluster:

View File

@ -62,7 +62,7 @@ created without user namespaces.**
A user namespace for a stateless pod is enabled by setting the `hostUsers` field of
`.spec` to `false`. For example:
{{< codenew file="pods/user-namespaces-stateless.yaml" >}}
{{% codenew file="pods/user-namespaces-stateless.yaml" %}}
1. Create the pod on your cluster:

View File

@ -25,7 +25,7 @@ This page explains how to debug Pods running (or crashing) on a Node.
For this example we'll use a Deployment to create two pods, similar to the earlier example.
{{< codenew file="application/nginx-with-request.yaml" >}}
{{% codenew file="application/nginx-with-request.yaml" %}}
Create the deployment by running the following command:

View File

@ -27,7 +27,7 @@ the general
In this exercise, you create a Pod that runs one container.
The manifest for that Pod specifies a command that runs when the container starts:
{{< codenew file="debug/termination.yaml" >}}
{{% codenew file="debug/termination.yaml" %}}
1. Create a Pod based on the YAML configuration file:

View File

@ -29,7 +29,7 @@ running container.
In this exercise, you create a Pod that has one container. The container
runs the nginx image. Here is the configuration file for the Pod:
{{< codenew file="application/shell-demo.yaml" >}}
{{% codenew file="application/shell-demo.yaml" %}}
Create the Pod:

View File

@ -80,7 +80,7 @@ A policy with no (0) rules is treated as illegal.
Below is an example audit policy file:
{{< codenew file="audit/audit-policy.yaml" >}}
{{% codenew file="audit/audit-policy.yaml" %}}
You can use a minimal audit policy file to log all requests at the `Metadata` level:
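A minimal policy of that kind, sketched here rather than copied from the referenced file, consists of a single rule:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata   # log request metadata for all requests, but no request or response bodies
```

Real policies usually add further rules, for example `level: RequestResponse` for selected resources and `omitStages` to skip the `RequestReceived` stage.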

View File

@ -42,7 +42,7 @@ to detect customized node problems. For example:
1. Create a Node Problem Detector configuration similar to `node-problem-detector.yaml`:
{{< codenew file="debug/node-problem-detector.yaml" >}}
{{% codenew file="debug/node-problem-detector.yaml" %}}
{{< note >}}
You should verify that the system log directory is right for your operating system distribution.
@ -80,7 +80,7 @@ to overwrite the configuration:
1. Change the `node-problem-detector.yaml` to use the `ConfigMap`:
{{< codenew file="debug/node-problem-detector-configmap.yaml" >}}
{{% codenew file="debug/node-problem-detector-configmap.yaml" %}}
1. Recreate the Node Problem Detector with the new configuration file:

View File

@ -69,7 +69,7 @@ for this example. A [Deployment](/docs/concepts/workloads/controllers/deployment
thereby making the scheduler resilient to failures. Here is the deployment
config. Save it as `my-scheduler.yaml`:
{{< codenew file="admin/sched/my-scheduler.yaml" >}}
{{% codenew file="admin/sched/my-scheduler.yaml" %}}
In the above manifest, you use a [KubeSchedulerConfiguration](/docs/reference/scheduling/config/)
to customize the behavior of your scheduler implementation. This configuration has been passed to
@ -139,7 +139,7 @@ Add your scheduler name to the resourceNames of the rule applied for `endpoints`
kubectl edit clusterrole system:kube-scheduler
```
{{< codenew file="admin/sched/clusterrole.yaml" >}}
{{% codenew file="admin/sched/clusterrole.yaml" %}}
## Specify schedulers for pods
@ -150,7 +150,7 @@ scheduler in that pod spec. Let's look at three examples.
- Pod spec without any scheduler name
{{< codenew file="admin/sched/pod1.yaml" >}}
{{% codenew file="admin/sched/pod1.yaml" %}}
When no scheduler name is supplied, the pod is automatically scheduled using the
default-scheduler.
@ -163,7 +163,7 @@ scheduler in that pod spec. Let's look at three examples.
- Pod spec with `default-scheduler`
{{< codenew file="admin/sched/pod2.yaml" >}}
{{% codenew file="admin/sched/pod2.yaml" %}}
A scheduler is specified by supplying the scheduler name as a value to `spec.schedulerName`. In this case, we supply the name of the
default scheduler which is `default-scheduler`.
@ -176,7 +176,7 @@ scheduler in that pod spec. Let's look at three examples.
- Pod spec with `my-scheduler`
{{< codenew file="admin/sched/pod3.yaml" >}}
{{% codenew file="admin/sched/pod3.yaml" %}}
In this case, we specify that this pod should be scheduled using the scheduler that we
deployed - `my-scheduler`. Note that the value of `spec.schedulerName` should match the name supplied for the scheduler

View File

@ -23,7 +23,7 @@ plane hosts. If you do not already have a cluster, you can create one by using
The following steps require an egress configuration, for example:
{{< codenew file="admin/konnectivity/egress-selector-configuration.yaml" >}}
{{% codenew file="admin/konnectivity/egress-selector-configuration.yaml" %}}
You need to configure the API Server to use the Konnectivity service
and direct the network traffic to the cluster nodes:
@ -74,12 +74,12 @@ that the Kubernetes components are deployed as a {{< glossary_tooltip text="stat
term_id="static-pod" >}} in your cluster. If not, you can deploy the Konnectivity
server as a DaemonSet.
{{< codenew file="admin/konnectivity/konnectivity-server.yaml" >}}
{{% codenew file="admin/konnectivity/konnectivity-server.yaml" %}}
Then deploy the Konnectivity agents in your cluster:
{{< codenew file="admin/konnectivity/konnectivity-agent.yaml" >}}
{{% codenew file="admin/konnectivity/konnectivity-agent.yaml" %}}
Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:
{{< codenew file="admin/konnectivity/konnectivity-rbac.yaml" >}}
{{% codenew file="admin/konnectivity/konnectivity-rbac.yaml" %}}

View File

@ -42,7 +42,7 @@ The `command` field corresponds to `entrypoint` in some container runtimes.
In this exercise, you create a Pod that runs one container. The configuration
file for the Pod defines a command and two arguments:
{{< codenew file="pods/commands.yaml" >}}
{{% codenew file="pods/commands.yaml" %}}
1. Create a Pod based on the YAML configuration file:

View File

@ -42,7 +42,7 @@ file for the Pod defines an environment variable with name `DEMO_GREETING` and
value `"Hello from the environment"`. Here is the configuration manifest for the
Pod:
{{< codenew file="pods/inject/envars.yaml" >}}
{{% codenew file="pods/inject/envars.yaml" %}}
1. Create a Pod based on that manifest:

View File

@ -26,7 +26,7 @@ In this exercise, you create a Pod that runs one container. The configuration
file for the Pod defines a dependent environment variable and demonstrates how such variables are commonly used. Here is the configuration manifest for the
Pod:
{{< codenew file="pods/inject/dependent-envars.yaml" >}}
{{% codenew file="pods/inject/dependent-envars.yaml" %}}
1. Create a Pod based on that manifest:

View File

@ -38,7 +38,7 @@ Use a local tool trusted by your OS to decrease the security risks of external t
Here is a configuration file you can use to create a Secret that holds your
username and password:
{{< codenew file="pods/inject/secret.yaml" >}}
{{% codenew file="pods/inject/secret.yaml" %}}
1. Create the Secret
@ -97,7 +97,7 @@ through each step explicitly to demonstrate what is happening.
Here is a configuration file you can use to create a Pod:
{{< codenew file="pods/inject/secret-pod.yaml" >}}
{{% codenew file="pods/inject/secret-pod.yaml" %}}
1. Create the Pod:
@ -252,7 +252,7 @@ secrets change.
- Assign the `backend-username` value defined in the Secret to the `SECRET_USERNAME` environment variable in the Pod specification.
{{< codenew file="pods/inject/pod-single-secret-env-variable.yaml" >}}
{{% codenew file="pods/inject/pod-single-secret-env-variable.yaml" %}}
- Create the Pod:
@ -282,7 +282,7 @@ secrets change.
- Define the environment variables in the Pod specification.
{{< codenew file="pods/inject/pod-multiple-secret-env-variable.yaml" >}}
{{% codenew file="pods/inject/pod-multiple-secret-env-variable.yaml" %}}
- Create the Pod:
@ -315,7 +315,7 @@ This functionality is available in Kubernetes v1.6 and later.
- Use `envFrom` to define all of the Secret's data as container environment variables. The key from the Secret becomes the environment variable name in the Pod.
{{< codenew file="pods/inject/pod-secret-envFrom.yaml" >}}
{{% codenew file="pods/inject/pod-secret-envFrom.yaml" %}}
- Create the Pod:

View File

@ -32,7 +32,7 @@ In this part of exercise, you create a Pod that has one container, and you
project Pod-level fields into the running container as files.
Here is the manifest for the Pod:
{{< codenew file="pods/inject/dapi-volume.yaml" >}}
{{% codenew file="pods/inject/dapi-volume.yaml" %}}
In the manifest, you can see that the Pod has a `downwardAPI` Volume,
and the container mounts the volume at `/etc/podinfo`.
@ -155,7 +155,7 @@ definition, but taken from the specific
rather than from the Pod overall. Here is a manifest for a Pod that again has
just one container:
{{< codenew file="pods/inject/dapi-volume-resources.yaml" >}}
{{% codenew file="pods/inject/dapi-volume-resources.yaml" %}}
In the manifest, you can see that the Pod has a
[`downwardAPI` volume](/docs/concepts/storage/volumes/#downwardapi),

View File

@ -34,7 +34,7 @@ Read more about accessing Services [here](/docs/tutorials/services/connect-appli
In this part of exercise, you create a Pod that has one container, and you
project Pod-level fields into the running container as environment variables.
{{< codenew file="pods/inject/dapi-envars-pod.yaml" >}}
{{% codenew file="pods/inject/dapi-envars-pod.yaml" %}}
In that manifest, you can see five environment variables. The `env`
field is an array of
@ -119,7 +119,7 @@ rather than from the Pod overall.
Here is a manifest for another Pod that again has just one container:
{{< codenew file="pods/inject/dapi-envars-container.yaml" >}}
{{% codenew file="pods/inject/dapi-envars-container.yaml" %}}
In this manifest, you can see four environment variables. The `env`
field is an array of

View File

@ -22,7 +22,7 @@ This page shows how to run automated tasks using Kubernetes {{< glossary_tooltip
Cron jobs require a config file.
Here is a manifest for a CronJob that runs a simple demonstration task every minute:
{{< codenew file="application/job/cronjob.yaml" >}}
{{% codenew file="application/job/cronjob.yaml" %}}
Run the example CronJob by using this command:

View File

@ -186,7 +186,7 @@ We will use the `amqp-consume` utility to read the message
from the queue and run our actual program. Here is a very simple
example program:
{{< codenew language="python" file="application/job/rabbitmq/worker.py" >}}
{{% codenew language="python" file="application/job/rabbitmq/worker.py" %}}
Give the script execution permission:
@ -230,7 +230,7 @@ Here is a job definition. You'll need to make a copy of the Job and edit the
image to match the name you used, and call it `./job.yaml`.
{{< codenew file="application/job/rabbitmq/job.yaml" >}}
{{% codenew file="application/job/rabbitmq/job.yaml" %}}
In this example, each pod works on one item from the queue and then exits.
So, the completion count of the Job corresponds to the number of work items

View File

@ -119,7 +119,7 @@ called rediswq.py ([Download](/examples/application/job/redis/rediswq.py)).
The "worker" program in each Pod of the Job uses the work queue
client library to get work. Here it is:
{{< codenew language="python" file="application/job/redis/worker.py" >}}
{{% codenew language="python" file="application/job/redis/worker.py" %}}
You could also download [`worker.py`](/examples/application/job/redis/worker.py),
[`rediswq.py`](/examples/application/job/redis/rediswq.py), and
@ -158,7 +158,7 @@ gcloud docker -- push gcr.io/<project>/job-wq-2
Here is the job definition:
{{< codenew file="application/job/redis/job.yaml" >}}
{{% codenew file="application/job/redis/job.yaml" %}}
Be sure to edit the job template to
change `gcr.io/myproject` to your own path.

View File

@ -77,7 +77,7 @@ the start of the clip.
Here is a sample Job manifest that uses `Indexed` completion mode:
{{< codenew language="yaml" file="application/job/indexed-job.yaml" >}}
{{% codenew language="yaml" file="application/job/indexed-job.yaml" %}}
In the example above, you use the builtin `JOB_COMPLETION_INDEX` environment
variable set by the Job controller for all containers. An [init container](/docs/concepts/workloads/pods/init-containers/)
@ -92,7 +92,7 @@ Alternatively, you can directly [use the downward API to pass the annotation
value as a volume file](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#store-pod-fields),
as shown in the following example:
{{< codenew language="yaml" file="application/job/indexed-job-vol.yaml" >}}
{{% codenew language="yaml" file="application/job/indexed-job-vol.yaml" %}}
## Running the Job

View File

@ -43,7 +43,7 @@ pip install --user jinja2
First, download the following template of a Job to a file called `job-tmpl.yaml`.
Here's what you'll download:
{{< codenew file="application/job/job-tmpl.yaml" >}}
{{% codenew file="application/job/job-tmpl.yaml" %}}
```shell
# Use curl to download job-tmpl.yaml

View File

@ -39,7 +39,7 @@ software bug.
First, create a Job based on the config:
{{< codenew file="/controllers/job-pod-failure-policy-failjob.yaml" >}}
{{% codenew file="/controllers/job-pod-failure-policy-failjob.yaml" %}}
by running:
@ -85,7 +85,7 @@ node while the Pod is running on it (within 90s since the Pod is scheduled).
1. Create a Job based on the config:
{{< codenew file="/controllers/job-pod-failure-policy-ignore.yaml" >}}
{{% codenew file="/controllers/job-pod-failure-policy-ignore.yaml" %}}
by running:
@ -145,7 +145,7 @@ deleted pods, in the `Pending` phase, to a terminal phase
1. First, create a Job based on the config:
{{< codenew file="/controllers/job-pod-failure-policy-config-issue.yaml" >}}
{{% codenew file="/controllers/job-pod-failure-policy-config-issue.yaml" %}}
by running:

View File

@ -33,7 +33,7 @@ Let's create a {{<glossary_tooltip term_id="daemonset" text="DaemonSet">}} which
Next, use a `nodeSelector` to ensure that the DaemonSet only runs Pods on nodes
with the `ssd` label set to `"true"`.
{{<codenew file="controllers/daemonset-label-selector.yaml">}}
{{% codenew file="controllers/daemonset-label-selector.yaml" %}}
### Step 3: Create the DaemonSet

View File

@ -46,7 +46,7 @@ You may want to set
This YAML file specifies a DaemonSet with an update strategy of 'RollingUpdate':
{{< codenew file="controllers/fluentd-daemonset.yaml" >}}
{{% codenew file="controllers/fluentd-daemonset.yaml" %}}
After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:
@ -92,7 +92,7 @@ manifest accordingly.
Any updates to a `RollingUpdate` DaemonSet `.spec.template` will trigger a rolling
update. Let's update the DaemonSet by applying a new YAML file. This can be done with several different `kubectl` commands.
{{< codenew file="controllers/fluentd-daemonset-update.yaml" >}}
{{% codenew file="controllers/fluentd-daemonset-update.yaml" %}}
#### Declarative commands

View File

@ -71,7 +71,7 @@ Add the `-R` flag to recursively process directories.
Here's an example of an object configuration file:
{{< codenew file="application/simple_deployment.yaml" >}}
{{% codenew file="application/simple_deployment.yaml" %}}
Run `kubectl diff` to print the object that will be created:
@ -163,7 +163,7 @@ Add the `-R` flag to recursively process directories.
Here's an example configuration file:
{{< codenew file="application/simple_deployment.yaml" >}}
{{% codenew file="application/simple_deployment.yaml" %}}
Create the object using `kubectl apply`:
@ -281,7 +281,7 @@ spec:
Update the `simple_deployment.yaml` configuration file to change the image from
`nginx:1.14.2` to `nginx:1.16.1`, and delete the `minReadySeconds` field:
{{< codenew file="application/update_deployment.yaml" >}}
{{% codenew file="application/update_deployment.yaml" %}}
Apply the changes made to the configuration file:
@ -513,7 +513,7 @@ to calculate which fields should be deleted or set:
Here's an example. Suppose this is the configuration file for a Deployment object:
{{< codenew file="application/update_deployment.yaml" >}}
{{% codenew file="application/update_deployment.yaml" %}}
Also, suppose this is the live configuration for the same Deployment object:
@ -809,7 +809,7 @@ not specified when the object is created.
Here's a configuration file for a Deployment. The file does not specify `strategy`:
{{< codenew file="application/simple_deployment.yaml" >}}
{{% codenew file="application/simple_deployment.yaml" %}}
Create the object using `kubectl apply`:

View File

@ -27,7 +27,7 @@ in this task demonstrate a strategic merge patch and a JSON merge patch.
Here's the configuration file for a Deployment that has two replicas. Each replica
is a Pod that has one container:
{{< codenew file="application/deployment-patch.yaml" >}}
{{% codenew file="application/deployment-patch.yaml" %}}
Create the Deployment:
@ -288,7 +288,7 @@ patch-demo-1307768864-c86dc 1/1 Running 0 1m
Here's the configuration file for a Deployment that uses the `RollingUpdate` strategy:
{{< codenew file="application/deployment-retainkeys.yaml" >}}
{{% codenew file="application/deployment-retainkeys.yaml" %}}
Create the deployment:
@ -439,7 +439,7 @@ examples which supports these subresources.
Here's a manifest for a Deployment that has two replicas:
{{< codenew file="application/deployment.yaml" >}}
{{% codenew file="application/deployment.yaml" %}}
Create the Deployment:

View File

@ -69,7 +69,7 @@ For example: to resolve `foo.local`, `bar.local` to `127.0.0.1` and `foo.remote`
`bar.remote` to `10.1.2.3`, you can configure HostAliases for a Pod under
`.spec.hostAliases`:
{{< codenew file="service/networking/hostaliases-pod.yaml" >}}
{{% codenew file="service/networking/hostaliases-pod.yaml" %}}
You can start a Pod with that configuration by running:

View File

@ -106,7 +106,7 @@ fe00::2 ip6-allrouters
Create the following Service that does not explicitly define `.spec.ipFamilyPolicy`. Kubernetes will assign a cluster IP for the Service from the first configured `service-cluster-ip-range` and set the `.spec.ipFamilyPolicy` to `SingleStack`.
{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
{{% codenew file="service/networking/dual-stack-default-svc.yaml" %}}
Use `kubectl` to view the YAML for the Service.
@ -143,7 +143,7 @@ status:
Create the following Service that explicitly defines `IPv6` as the first array element in `.spec.ipFamilies`. Kubernetes will assign a cluster IP for the Service from the IPv6 range configured in `service-cluster-ip-range` and set the `.spec.ipFamilyPolicy` to `SingleStack`.
{{< codenew file="service/networking/dual-stack-ipfamilies-ipv6.yaml" >}}
{{% codenew file="service/networking/dual-stack-ipfamilies-ipv6.yaml" %}}
Use `kubectl` to view the YAML for the Service.
@ -181,7 +181,7 @@ status:
Create the following Service that explicitly defines `PreferDualStack` in `.spec.ipFamilyPolicy`. Kubernetes will assign both IPv4 and IPv6 addresses (as this cluster has dual-stack enabled) and select the `.spec.ClusterIP` from the list of `.spec.ClusterIPs` based on the address family of the first element in the `.spec.ipFamilies` array.
{{< codenew file="service/networking/dual-stack-preferred-svc.yaml" >}}
{{% codenew file="service/networking/dual-stack-preferred-svc.yaml" %}}
{{< note >}}
The `kubectl get svc` command will only show the primary IP in the `CLUSTER-IP` field.
@ -222,7 +222,7 @@ Events: <none>
If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with `PreferDualStack` in `.spec.ipFamilyPolicy`, `IPv6` as the first element of the `.spec.ipFamilies` array and the `type` field set to `LoadBalancer`.
{{< codenew file="service/networking/dual-stack-prefer-ipv6-lb-svc.yaml" >}}
{{% codenew file="service/networking/dual-stack-prefer-ipv6-lb-svc.yaml" %}}
Check the Service:

View File

@ -165,11 +165,11 @@ You can find examples of pod disruption budgets defined below. They match pods w
Example PDB Using minAvailable:
{{< codenew file="policy/zookeeper-pod-disruption-budget-minavailable.yaml" >}}
{{% codenew file="policy/zookeeper-pod-disruption-budget-minavailable.yaml" %}}
Example PDB Using maxUnavailable:
{{< codenew file="policy/zookeeper-pod-disruption-budget-maxunavailable.yaml" >}}
{{% codenew file="policy/zookeeper-pod-disruption-budget-maxunavailable.yaml" %}}
For example, if the above `zk-pdb` object selects the pods of a StatefulSet of size 3, both
specifications have the exact same meaning. The use of `maxUnavailable` is recommended as it

View File

@ -58,7 +58,7 @@ To demonstrate a HorizontalPodAutoscaler, you will first start a Deployment that
`hpa-example` image, and expose it as a {{< glossary_tooltip term_id="service">}}
using the following manifest:
{{< codenew file="application/php-apache.yaml" >}}
{{% codenew file="application/php-apache.yaml" %}}
To do so, run the following command:
@ -498,7 +498,7 @@ between `1` and `1500m`, or `1` and `1.5` when written in decimal notation.
Instead of using the `kubectl autoscale` command to create a HorizontalPodAutoscaler imperatively, we
can use the following manifest to create it declaratively:
{{< codenew file="application/hpa/php-apache.yaml" >}}
{{% codenew file="application/hpa/php-apache.yaml" %}}
Then, create the autoscaler by executing the following command:

View File

@ -56,7 +56,7 @@ and a StatefulSet.
Create the ConfigMap from the following YAML configuration file:
{{< codenew file="application/mysql/mysql-configmap.yaml" >}}
{{% codenew file="application/mysql/mysql-configmap.yaml" %}}
```shell
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-configmap.yaml
@ -76,7 +76,7 @@ based on information provided by the StatefulSet controller.
Create the Services from the following YAML configuration file:
{{< codenew file="application/mysql/mysql-services.yaml" >}}
{{% codenew file="application/mysql/mysql-services.yaml" %}}
```shell
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-services.yaml
@ -103,7 +103,7 @@ writes.
Finally, create the StatefulSet from the following YAML configuration file:
{{< codenew file="application/mysql/mysql-statefulset.yaml" >}}
{{% codenew file="application/mysql/mysql-statefulset.yaml" %}}
```shell
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-statefulset.yaml

View File

@ -39,8 +39,8 @@ Note: The password is defined in the config yaml, and this is insecure. See
[Kubernetes Secrets](/docs/concepts/configuration/secret/)
for a secure solution.
{{< codenew file="application/mysql/mysql-deployment.yaml" >}}
{{< codenew file="application/mysql/mysql-pv.yaml" >}}
{{% codenew file="application/mysql/mysql-deployment.yaml" %}}
{{% codenew file="application/mysql/mysql-pv.yaml" %}}
1. Deploy the PV and PVC of the YAML file:

View File

@ -27,7 +27,7 @@ You can run an application by creating a Kubernetes Deployment object, and you
can describe a Deployment in a YAML file. For example, this YAML file describes
a Deployment that runs the nginx:1.14.2 Docker image:
{{< codenew file="application/deployment.yaml" >}}
{{% codenew file="application/deployment.yaml" %}}
1. Create a Deployment based on the YAML file:
@ -100,7 +100,7 @@ a Deployment that runs the nginx:1.14.2 Docker image:
You can update the deployment by applying a new YAML file. This YAML file
specifies that the deployment should be updated to use nginx 1.16.1.
{{< codenew file="application/deployment-update.yaml" >}}
{{% codenew file="application/deployment-update.yaml" %}}
1. Apply the new YAML file:
@ -120,7 +120,7 @@ You can increase the number of Pods in your Deployment by applying a new YAML
file. This YAML file sets `replicas` to 4, which specifies that the Deployment
should have four Pods:
{{< codenew file="application/deployment-scale.yaml" >}}
{{% codenew file="application/deployment-scale.yaml" %}}
1. Apply the new YAML file:

View File

@ -236,7 +236,7 @@ This produces a certificate authority key file (`ca-key.pem`) and certificate (`
### Issue a certificate
{{< codenew file="tls/server-signing-config.json" >}}
{{% codenew file="tls/server-signing-config.json" %}}
Use a `server-signing-config.json` signing configuration and the certificate authority key file
and certificate to sign the certificate request:

View File

@ -213,7 +213,7 @@ the `>` characters. The following example illustrates this (view the Markdown
source for this page).
```none
{{</* codenew file="pods/storage/gce-volume.yaml" */>}}
{{</* alert color="warning" >}}This is a warning.{{< /alert */>}}
```
## Links

View File

@ -68,7 +68,7 @@ Examine the contents of the Redis pod manifest and note the following:
This has the net effect of exposing the data in `data.redis-config` from the `example-redis-config`
ConfigMap above as `/redis-master/redis.conf` inside the Pod.
{{< codenew file="pods/config/redis-pod.yaml" >}}
{{% codenew file="pods/config/redis-pod.yaml" %}}
Examine the created objects:
@ -139,7 +139,7 @@ Which should also yield its default value of `noeviction`:
Now let's add some configuration values to the `example-redis-config` ConfigMap:
{{< codenew file="pods/config/example-redis-config.yaml" >}}
{{% codenew file="pods/config/example-redis-config.yaml" %}}
Apply the updated ConfigMap:

View File

@ -209,7 +209,7 @@ done
Next, we'll run a simple "Hello AppArmor" pod with the deny-write profile:
{{< codenew file="pods/security/hello-apparmor.yaml" >}}
{{% codenew file="pods/security/hello-apparmor.yaml" %}}
```shell
kubectl create -f ./hello-apparmor.yaml
