commit e0ae1338cd (parent dfeed456d3)
@@ -131,14 +131,14 @@ classes:
namespace). These are important to isolate from other traffic because failures
in leader election cause their controllers to fail and restart, which in turn
causes more expensive traffic as the new controllers sync their informers.

* The `workload-high` priority level is for other requests from built-in
controllers.

* The `workload-low` priority level is for requests from any other service
account, which will typically include all requests from controllers running in
Pods.

* The `global-default` priority level handles all other traffic, e.g.
interactive `kubectl` commands run by nonprivileged users.
@@ -150,7 +150,7 @@ are built in and may not be overwritten:
special `exempt` FlowSchema classifies all requests from the `system:masters`
group into this priority level. You may define other FlowSchemas that direct
other requests to this priority level, if appropriate.

* The special `catch-all` priority level is used in combination with the special
`catch-all` FlowSchema to make sure that every request gets some kind of
classification. Typically you should not rely on this catch-all configuration,
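As an illustration, a FlowSchema that routes requests from one additional group into the `exempt` priority level might look like the sketch below; the group name is a placeholder and the field layout assumes the `v1alpha1` API referenced on this page:

```shell
# Sketch only: direct a hypothetical "system:cluster-admins" group to the
# built-in `exempt` priority level.
kubectl apply -f - <<EOF
apiVersion: flowcontrol.apiserver.k8s.io/v1alpha1
kind: FlowSchema
metadata:
  name: extra-exempt
spec:
  priorityLevelConfiguration:
    name: exempt
  matchingPrecedence: 1000
  rules:
  - subjects:
    - kind: Group
      group:
        name: "system:cluster-admins"
    resourceRules:
    - verbs: ["*"]
      apiGroups: ["*"]
      resources: ["*"]
      clusterScope: true
      namespaces: ["*"]
    nonResourceRules:
    - verbs: ["*"]
      nonResourceURLs: ["*"]
EOF
```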
@@ -164,7 +164,7 @@ are built in and may not be overwritten:

## Resources
The flow control API involves two kinds of resources.
[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1alpha1-flowcontrol-apiserver-k8s-io)
define the available isolation classes, the share of the available concurrency
budget that each can handle, and allow for fine-tuning queuing behavior.
[FlowSchemas](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#flowschema-v1alpha1-flowcontrol-apiserver-k8s-io)
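Assuming the API Priority and Fairness feature (and with it this API group) is enabled on your API server, both kinds of objects can be listed with `kubectl`:

```shell
# List the priority levels and flow schemas currently defined in the cluster.
kubectl get prioritylevelconfigurations
kubectl get flowschemas
```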
@@ -204,7 +204,7 @@ to balance progress between request flows.

The queuing configuration allows tuning the fair queuing algorithm for a
priority level. Details of the algorithm can be read in the [enhancement
-proposal](#what-s-next), but in short:
+proposal](#whats-next), but in short:

* Increasing `queues` reduces the rate of collisions between different flows, at
the cost of increased memory usage. A value of 1 here effectively disables the
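As an illustration, here is a hedged sketch of a PriorityLevelConfiguration that sets these queuing knobs; the field names assume the `v1alpha1` API and the numbers are arbitrary:

```shell
kubectl apply -f - <<EOF
apiVersion: flowcontrol.apiserver.k8s.io/v1alpha1
kind: PriorityLevelConfiguration
metadata:
  name: example-queuing
spec:
  type: Limited
  limited:
    assuredConcurrencyShares: 10   # share of the concurrency budget
    limitResponse:
      type: Queue
      queuing:
        queues: 64                 # more queues: fewer collisions, more memory
        handSize: 6                # queues considered when placing a flow
        queueLengthLimit: 50       # requests allowed to wait per queue
EOF
```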
@@ -291,7 +291,7 @@ enabled has two extra headers: `X-Kubernetes-PF-FlowSchema-UID` and
`X-Kubernetes-PF-PriorityLevel-UID`, noting the flow schema that matched the request
and the priority level to which it was assigned, respectively. The API objects'
names are not included in these headers in case the requesting user does not
have permission to view them, so when debugging you can use a command like

```shell
kubectl get flowschemas -o custom-columns="uid:{metadata.uid},name:{metadata.name}"
```
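A similar query maps the `X-Kubernetes-PF-PriorityLevel-UID` header back to a priority level name:

```shell
kubectl get prioritylevelconfigurations -o custom-columns="uid:{metadata.uid},name:{metadata.name}"
```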
@@ -363,7 +363,7 @@ poorly-behaved workloads that may be harming system health.
* `apiserver_flowcontrol_request_execution_seconds` gives a histogram of how
long requests took to actually execute, grouped by the FlowSchema that matched the
request and the PriorityLevel to which it was assigned.
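If you can read the API server's `/metrics` endpoint, this histogram (and the other flow-control metrics) can be inspected directly; for example:

```shell
# Requires permission to read the /metrics endpoint.
kubectl get --raw /metrics | grep apiserver_flowcontrol_request_execution_seconds
```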

{{% /capture %}}
@@ -374,4 +374,4 @@ the [enhancement proposal](https://github.com/kubernetes/enhancements/blob/maste
You can make suggestions and feature requests via [SIG API
Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery).

{{% /capture %}}
@@ -106,7 +106,7 @@ as well as keeping the existing service in good shape.
## Writing your own Operator {#writing-operator}

If there isn't an Operator in the ecosystem that implements the behavior you
-want, you can code your own. In [What's next](#what-s-next) you'll find a few
+want, you can code your own. In [What's next](#whats-next) you'll find a few
links to libraries and tools you can use to write your own cloud native
Operator.
@@ -129,4 +129,4 @@ that can act as a [client for the Kubernetes API](/docs/reference/using-api/clie
* Read [CoreOS' original article](https://coreos.com/blog/introducing-operators.html) that introduced the Operator pattern
* Read an [article](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) from Google Cloud about best practices for building Operators

{{% /capture %}}
@@ -157,13 +157,13 @@ the three things:
1. **wait** (with a timeout) \
If a Permit plugin returns "wait", then the Pod is kept in an internal "waiting"
Pods list, and the binding cycle of this Pod starts but directly blocks until it
-gets [approved](#frameworkhandle). If a timeout occurs, **wait** becomes **deny**
+gets approved. If a timeout occurs, **wait** becomes **deny**
and the Pod is returned to the scheduling queue, triggering [Unreserve](#unreserve)
plugins.

{{< note >}}
While any plugin can access the list of "waiting" Pods and approve them
-(see [`FrameworkHandle`](#frameworkhandle)), we expect only the permit
+(see [`FrameworkHandle`](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20180409-scheduling-framework.md#frameworkhandle)), we expect only the permit
plugins to approve binding of reserved Pods that are in "waiting" state. Once a Pod
is approved, it is sent to the [PreBind](#pre-bind) phase.
{{< /note >}}
@@ -239,4 +239,4 @@ If you are using Kubernetes v1.18 or later, you can configure a set of plugins a
a scheduler profile and then define multiple profiles to fit various kinds of workload.
Learn more at [multiple profiles](/docs/reference/scheduling/profiles/#multiple-profiles).
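A hedged sketch of what such a configuration might look like; the `v1alpha2` config API matches Kubernetes v1.18, so adjust the group version and file path for your cluster:

```shell
# Write a scheduler configuration with two profiles and point kube-scheduler
# at it with --config=/etc/kubernetes/scheduler-config.yaml.
cat <<EOF > /etc/kubernetes/scheduler-config.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
- schedulerName: no-scoring-scheduler
  plugins:
    preScore:
      disabled:
      - name: '*'
    score:
      disabled:
      - name: '*'
EOF
```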

{{% /capture %}}
@@ -171,7 +171,7 @@ following pod-specific DNS policies. These policies are specified in the
- "`None`": It allows a Pod to ignore DNS settings from the Kubernetes
environment. All DNS settings are supposed to be provided using the
`dnsConfig` field in the Pod Spec.
-See [Pod's DNS config](#pod-s-dns-config) subsection below.
+See [Pod's DNS config](#pod-dns-config) subsection below.

{{< note >}}
"Default" is not the default DNS policy. If `dnsPolicy` is not
@@ -201,7 +201,7 @@ spec:
  dnsPolicy: ClusterFirstWithHostNet
```

-### Pod's DNS Config
+### Pod's DNS Config {#pod-dns-config}

Pod's DNS Config allows users more control on the DNS settings for a Pod.
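For example, a minimal sketch of a Pod that sets `dnsPolicy: "None"` and supplies its own `dnsConfig`; the nameserver and search domain are placeholders:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  containers:
  - name: test
    image: nginx
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 1.2.3.4
    searches:
    - ns1.svc.cluster-domain.example
    options:
    - name: ndots
      value: "2"
EOF
```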
@@ -269,6 +269,4 @@ The availability of Pod DNS Config and DNS Policy "`None`"" is shown as below.
For guidance on administering DNS configurations, check
[Configure DNS Service](/docs/tasks/administer-cluster/dns-custom-nameservers/)

{{% /capture %}}
@@ -736,9 +736,9 @@ and need persistent storage, it is recommended that you use the following patter
`persistentVolumeClaim.storageClassName` field.
This will cause the PVC to match the right storage
class if the cluster has StorageClasses enabled by the admin.
- If the user does not provide a storage class name, leave the
`persistentVolumeClaim.storageClassName` field as nil. This will cause a
PV to be automatically provisioned for the user with the default StorageClass
in the cluster. Many cluster environments have a default StorageClass installed,
or administrators can create their own default StorageClass.
- In your tooling, watch for PVCs that are not getting bound after some time
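A minimal sketch of a claim that follows the pattern above, leaving `storageClassName` unset so that a default StorageClass, if the cluster has one, can provision the volume:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  # storageClassName is intentionally omitted so the cluster's default
  # StorageClass (if any) is used to provision a PV for this claim.
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```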
@@ -759,4 +759,4 @@ and need persistent storage, it is recommended that you use the following patter
* [PersistentVolumeSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumespec-v1-core)
* [PersistentVolumeClaim](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaim-v1-core)
* [PersistentVolumeClaimSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaimspec-v1-core)
{{% /capture %}}
@@ -241,7 +241,7 @@ myapp-pod 1/1 Running 0 9m
```

This simple example should provide some inspiration for you to create your own
-init containers. [What's next](#what-s-next) contains a link to a more detailed example.
+init containers. [What's next](#whats-next) contains a link to a more detailed example.

## Detailed behavior
@@ -325,4 +325,4 @@ reasons:
* Read about [creating a Pod that has an init container](/docs/tasks/configure-pod-container/configure-pod-initialization/#creating-a-pod-that-has-an-init-container)
* Learn how to [debug init containers](/docs/tasks/debug-application-cluster/debug-init-containers/)

{{% /capture %}}
|
@ -73,8 +73,7 @@ true:
|
|||
[Prow](https://github.com/kubernetes/test-infra/blob/master/prow/README.md) is
|
||||
the Kubernetes-based CI/CD system that runs jobs against pull requests (PRs). Prow
|
||||
enables chatbot-style commands to handle GitHub actions across the Kubernetes
|
||||
organization, like [adding and removing
|
||||
labels](#add-and-remove-labels), closing issues, and assigning an approver. Enter Prow commands as GitHub comments using the `/<command-name>` format.
|
||||
organization, like [adding and removing labels](#adding-and-removing-issue-labels), closing issues, and assigning an approver. Enter Prow commands as GitHub comments using the `/<command-name>` format.
|
||||
|
||||
The most common prow commands reviewers and approvers use are:
|
||||
|
||||
|
|
|
@@ -28,7 +28,7 @@ For information how to create a cluster with kubeadm once you have performed thi
* 2 GB or more of RAM per machine (any less will leave little room for your apps)
* 2 CPUs or more
* Full network connectivity between all machines in the cluster (public or private network is fine)
-* Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-the-mac-address-and-product-uuid-are-unique-for-every-node) for more details.
+* Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-mac-address) for more details.
* Certain ports are open on your machines. See [here](#check-required-ports) for more details.
* Swap disabled. You **MUST** disable swap in order for the kubelet to work properly.
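One common way to satisfy the swap requirement, as a sketch; the persistence step varies by distribution:

```shell
# Turn swap off immediately...
sudo swapoff -a
# ...and keep it off across reboots by commenting out swap entries in /etc/fstab.
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```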
@@ -36,7 +36,7 @@ For information how to create a cluster with kubeadm once you have performed thi

{{% capture steps %}}

-## Verify the MAC address and product_uuid are unique for every node
+## Verify the MAC address and product_uuid are unique for every node {#verify-mac-address}

* You can get the MAC address of the network interfaces using the command `ip link` or `ifconfig -a`
* The product_uuid can be checked by using the command `sudo cat /sys/class/dmi/id/product_uuid`
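Run together, the two checks described above look like this:

```shell
# List network interfaces and their MAC addresses.
ip link
# Print this machine's product_uuid (requires root).
sudo cat /sys/class/dmi/id/product_uuid
```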
@@ -305,4 +305,4 @@ If you are running into difficulties with kubeadm, please consult our [troublesh

* [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)

{{% /capture %}}
@@ -21,7 +21,7 @@ Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [in
* openSUSE Leap 15
* continuous integration tests

-To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](../kops).
+To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).

{{% /capture %}}
@@ -119,4 +119,4 @@ When running the reset playbook, be sure not to accidentally target your product

Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md).

{{% /capture %}}
@@ -277,7 +277,7 @@ This shows the proxy-verb URL for accessing each service.
For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
at `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/` if suitable credentials are passed. Logging can also be reached through a kubectl proxy, for example at:
`http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/`.
-(See [above](#accessing-the-cluster-api) for how to pass credentials or use kubectl proxy.)
+(See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/access-cluster-api/) for how to pass credentials or use kubectl proxy.)
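For example, with a local proxy running, the same logging endpoint can be fetched with `curl`; the port and URL mirror the example above:

```shell
kubectl proxy --port=8080 &
curl http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/
```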

#### Manually constructing apiserver proxy URLs
@@ -376,4 +376,4 @@ There are several different proxies you may encounter when using Kubernetes:
Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
will typically ensure that the latter types are set up correctly.

{{% /capture %}}
@@ -362,7 +362,7 @@ Structural schemas are a requirement for `apiextensions.k8s.io/v1`, and disables
* [Webhook Conversion](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning/#webhook-conversion)
* [Pruning](#preserving-unknown-fields)

-### Pruning versus preserving unknown fields
+### Pruning versus preserving unknown fields {#preserving-unknown-fields}

{{< feature-state state="stable" for_k8s_version="v1.16" >}}
@@ -1431,4 +1431,4 @@ crontabs/my-new-cron-object 3s
* Serve [multiple versions](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning/) of a
CustomResourceDefinition.

{{% /capture %}}
@@ -70,10 +70,10 @@ and is overridden by command-line flags. Unspecified values in the new configura
will receive default values appropriate to the configuration version
(e.g. `kubelet.config.k8s.io/v1beta1`), unless overridden by flags.

The status of the Node's kubelet configuration is reported via
`Node.Status.Config`. Once you have updated a Node to use the new
ConfigMap, you can observe this status to confirm that the Node is using the
intended configuration.

This document describes editing Nodes using `kubectl edit`.
There are other ways to modify a Node's spec, including `kubectl patch`, for
@@ -136,7 +136,7 @@ adapt the steps if you prefer to extract the `kubeletconfig` subobject manually.

1. Choose a Node to reconfigure. In this example, the name of this Node is
referred to as `NODE_NAME`.
2. Start the kubectl proxy in the background using the following command:

```bash
kubectl proxy --port=8001 &
```
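The step that typically follows is to download the Node's current kubelet configuration through that proxy, using the kubelet's `configz` endpoint; a sketch, assuming `jq` is installed:

```shell
NODE_NAME="the-name-of-the-node-you-are-reconfiguring"
curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" \
  | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"' \
  > kubelet_configz_${NODE_NAME}
```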
@@ -236,8 +236,8 @@ Retrieve the Node using the `kubectl get node ${NODE_NAME} -o yaml` command and

The `lastKnownGood` configuration might not be present if it is set to its default value,
the local config deployed with the node. The status will update `lastKnownGood` to
match a valid `assigned` config after the kubelet becomes comfortable with the config.
The details of how the kubelet determines which config should become the `lastKnownGood` are
not guaranteed by the API, but the current implementation uses a 10-minute grace period.

You can use the following command (using `jq`) to filter down
@@ -287,7 +287,7 @@ by eye).

If an error occurs, the kubelet reports it in the `Node.Status.Config.Error`
structure. Possible errors are listed in
-[Understanding Node.Status.Config.Error messages](#understanding-node-status-config-error-messages).
+[Understanding Node.Status.Config.Error messages](#understanding-node-config-status-errors).
You can search for the identical text in the kubelet log for additional details
and context about the error.
@@ -355,7 +355,7 @@ metadata and checkpoints. The structure of the kubelet's checkpointing directory
| - ...
```

-## Understanding Node.Status.Config.Error messages
+## Understanding Node.Status.Config.Error messages {#understanding-node-config-status-errors}

The following table describes error messages that can occur
when using Dynamic Kubelet Config. You can search for the identical text
@@ -379,4 +379,4 @@ internal failure, see Kubelet log for details | The kubelet encountered some int
- For more information on configuring the kubelet via a configuration file, see
[Set kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file).
- See the reference documentation for [`NodeConfigSource`](https://kubernetes.io/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodeconfigsource-v1-core)
{{% /capture %}}
@@ -173,7 +173,7 @@ For example: in Centos, you can do this using the tuned toolset.
Memory pressure at the node level leads to System OOMs which affect the entire
node and all pods running on it. Nodes can go offline temporarily until memory
has been reclaimed. To avoid (or reduce the probability of) system OOMs kubelet
-provides [`Out of Resource`](./out-of-resource.md) management. Evictions are
+provides [`Out of Resource`](/docs/tasks/administer-cluster/out-of-resource/) management. Evictions are
supported for `memory` and `ephemeral-storage` only. By reserving some memory via
`--eviction-hard` flag, the `kubelet` attempts to `evict` pods whenever memory
availability on the node drops below the reserved value. Hypothetically, if
@@ -190,7 +190,7 @@ The scheduler treats `Allocatable` as the available `capacity` for pods.
`kubelet` enforces `Allocatable` across pods by default. Enforcement is performed
by evicting pods whenever the overall usage across all pods exceeds
`Allocatable`. More details on eviction policy can be found
-[here](./out-of-resource.md#eviction-policy). This enforcement is controlled by
+[here](/docs/tasks/administer-cluster/out-of-resource/#eviction-policy). This enforcement is controlled by
specifying the `pods` value for the kubelet flag `--enforce-node-allocatable`.
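As a hedged sketch, these knobs might be combined on the kubelet command line like this; the values are illustrative only:

```shell
# Illustrative kubelet flags: reserve resources for Kubernetes and system
# daemons, enforce Allocatable on pods, and set hard eviction thresholds.
kubelet \
  --kube-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi \
  --system-reserved=cpu=500m,memory=1Gi,ephemeral-storage=1Gi \
  --eviction-hard='memory.available<500Mi,nodefs.available<10%' \
  --enforce-node-allocatable=pods
```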
@@ -251,4 +251,4 @@ If `kube-reserved` and/or `system-reserved` is not enforced and system daemons
exceed their reservation, `kubelet` evicts pods whenever the overall node memory
usage is higher than `31.5Gi` or `storage` is greater than `90Gi`.

{{% /capture %}}
@@ -9,7 +9,7 @@ toc_hide: true
{{% capture overview %}}

{{< note >}}
-Be sure to also [create an entry in the table of contents](/docs/home/contribute/write-new-topic/#creating-an-entry-in-the-table-of-contents) for your new document.
+Be sure to also [create an entry in the table of contents](/docs/contribute/style/write-new-topic/#placing-your-topic-in-the-table-of-contents) for your new document.
{{< /note >}}

This page shows how to ...
@@ -29,7 +29,7 @@ This page shows how to ...
## Doing ...

1. Do this.
-1. Do this next. Possibly read this [related explanation](...).
+1. Do this next. Possibly read this [related explanation](#).

{{% /capture %}}
@@ -49,6 +49,4 @@ Here's an interesting thing to know about the steps you just did.
* Learn more about [Writing a New Topic](/docs/home/contribute/write-new-topic/).
* See [Using Page Templates - Task template](/docs/home/contribute/page-templates/#task_template) for how to use this template.

{{% /capture %}}
@@ -11,9 +11,9 @@ card:
---

{{% capture overview %}}
-This tutorial builds upon the [PHP Guestbook with Redis](../guestbook) tutorial. Lightweight log, metric, and network data open source shippers, or *Beats*, from Elastic are deployed in the same Kubernetes cluster as the guestbook. The Beats collect, parse, and index the data into Elasticsearch so that you can view and analyze the resulting operational information in Kibana. This example consists of the following components:
+This tutorial builds upon the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial. Lightweight log, metric, and network data open source shippers, or *Beats*, from Elastic are deployed in the same Kubernetes cluster as the guestbook. The Beats collect, parse, and index the data into Elasticsearch so that you can view and analyze the resulting operational information in Kibana. This example consists of the following components:

-* A running instance of the [PHP Guestbook with Redis tutorial](../guestbook)
+* A running instance of the [PHP Guestbook with Redis tutorial](/docs/tutorials/stateless-application/guestbook)
* Elasticsearch and Kibana
* Filebeat
* Metricbeat
@@ -36,16 +36,16 @@ This tutorial builds upon the [PHP Guestbook with Redis](../guestbook) tutorial.

Additionally you need:

-* A running deployment of the [PHP Guestbook with Redis](../guestbook) tutorial.
+* A running deployment of the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial.

* A running Elasticsearch and Kibana deployment. You can use [Elasticsearch Service in Elastic Cloud](https://cloud.elastic.co), run the [download files](https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-elastic-stack.html) on your workstation or servers, or the [Elastic Helm Charts](https://github.com/elastic/helm-charts).

{{% /capture %}}

{{% capture lessoncontent %}}

## Start up the PHP Guestbook with Redis
-This tutorial builds on the [PHP Guestbook with Redis](../guestbook) tutorial. If you have the guestbook application running, then you can monitor that. If you do not have it running then follow the instructions to deploy the guestbook and do not perform the **Cleanup** steps. Come back to this page when you have the guestbook running.
+This tutorial builds on the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial. If you have the guestbook application running, then you can monitor that. If you do not have it running then follow the instructions to deploy the guestbook and do not perform the **Cleanup** steps. Come back to this page when you have the guestbook running.

## Add a Cluster role binding
Create a [cluster level role binding](/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) so that you can deploy kube-state-metrics and the Beats at the cluster level (in kube-system).
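One way to create such a binding; substitute a user that has cluster-admin rights with your provider:

```shell
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=<your-cloud-account-email>
```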
@@ -403,4 +403,4 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels
* Read more about [logging architecture](/docs/concepts/cluster-administration/logging/)
* Read more about [application introspection and debugging](/docs/tasks/debug-application-cluster/)
* Read more about [troubleshoot applications](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
{{% /capture %}}