Removing the redundant #selector in URLs and cleaning up line-wrapping, without changing the text.

Signed-off-by: Ritikaa96 <ritika@india.nec.com>
pull/41406/head
Ritikaa96 2023-06-01 13:24:48 +05:30
parent 9d6d500d3b
commit e67417a283
4 changed files with 25 additions and 17 deletions

View File

@@ -118,7 +118,7 @@ break the kubelet behavior and remove containers that should exist.
 To configure options for unused container and image garbage collection, tune the
 kubelet using a [configuration file](/docs/tasks/administer-cluster/kubelet-config-file/)
 and change the parameters related to garbage collection using the
-[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/)
 resource type.
 
 ### Container image lifecycle
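For context, the garbage-collection parameters this hunk links to live in the kubelet configuration file. A minimal sketch, assuming defaults-like threshold values for illustration only:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Image garbage collection runs when disk usage exceeds the high threshold
# and frees space until usage drops below the low threshold.
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
# Minimum age an unused image must reach before it may be removed.
imageMinimumGCAge: 2m
```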

View File

@@ -506,7 +506,7 @@ in a cluster,
 |`custom-class-c` | 1000 |
 |`regular/unset` | 0 |
 
-Within the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+Within the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/)
 the settings for `shutdownGracePeriodByPodPriority` could look like:
 
 |Pod priority class value|Shutdown period|
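For context, a `shutdownGracePeriodByPodPriority` setting matching the priority class values in this hunk's table might look like the following sketch (the grace periods are illustrative, not recommendations):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriodByPodPriority:
  - priority: 1000                  # custom-class-c
    shutdownGracePeriodSeconds: 120
  - priority: 0                     # regular/unset
    shutdownGracePeriodSeconds: 60
```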
@@ -625,7 +625,7 @@ onwards, swap memory support can be enabled on a per-node basis.
 
 To enable swap on a node, the `NodeSwap` feature gate must be enabled on
 the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn`
-[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/)
 must be set to false.
 
 {{< warning >}}
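For context, enabling swap as the text above describes combines the feature gate with the `failSwapOn` setting; roughly:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeSwap: true
# The kubelet refuses to start on a node with swap enabled unless this is false.
failSwapOn: false
```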

View File

@@ -81,15 +81,16 @@ See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl
 
 ![Node level logging](/images/docs/user-guide/logging/logging-node-level.png)
 
-A container runtime handles and redirects any output generated to a containerized application's `stdout` and `stderr` streams.
-Different container runtimes implement this in different ways; however, the integration with the kubelet is standardized
-as the _CRI logging format_.
+A container runtime handles and redirects any output generated to a containerized
+application's `stdout` and `stderr` streams.
+Different container runtimes implement this in different ways; however, the integration
+with the kubelet is standardized as the _CRI logging format_.
 
-By default, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node,
-all corresponding containers are also evicted, along with their logs.
+By default, if a container restarts, the kubelet keeps one terminated container with its logs.
+If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.
 
-The kubelet makes logs available to clients via a special feature of the Kubernetes API. The usual way to access this is
-by running `kubectl logs`.
+The kubelet makes logs available to clients via a special feature of the Kubernetes API.
+The usual way to access this is by running `kubectl logs`.
 
 ### Log rotation
@@ -101,7 +102,7 @@ If you configure rotation, the kubelet is responsible for rotating container log
 The kubelet sends this information to the container runtime (using CRI),
 and the runtime writes the container logs to the given location.
 
-You can configure two kubelet [configuration settings](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration),
+You can configure two kubelet [configuration settings](/docs/reference/config-api/kubelet-config.v1beta1/),
 `containerLogMaxSize` and `containerLogMaxFiles`,
 using the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
 These settings let you configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
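For context, the two rotation settings named here sit in the kubelet configuration file; a minimal sketch, using the documented defaults as example values:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi   # rotate a log file once it reaches this size
containerLogMaxFiles: 5     # keep at most this many files per container
```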
@@ -201,7 +202,8 @@ as your responsibility.
 
 ## Cluster-level logging architectures
 
-While Kubernetes does not provide a native solution for cluster-level logging, there are several common approaches you can consider. Here are some options:
+While Kubernetes does not provide a native solution for cluster-level logging, there are
+several common approaches you can consider. Here are some options:
 
 * Use a node-level logging agent that runs on every node.
 * Include a dedicated sidecar container for logging in an application pod.
@@ -211,14 +213,18 @@ While Kubernetes does not provide a native solution for cluster-level logging, t
 
 ![Using a node level logging agent](/images/docs/user-guide/logging/logging-with-node-agent.png)
 
-You can implement cluster-level logging by including a _node-level logging agent_ on each node. The logging agent is a dedicated tool that exposes logs or pushes logs to a backend. Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
+You can implement cluster-level logging by including a _node-level logging agent_ on each node.
+The logging agent is a dedicated tool that exposes logs or pushes logs to a backend.
+Commonly, the logging agent is a container that has access to a directory with log files from all of the
+application containers on that node.
 
 Because the logging agent must run on every node, it is recommended to run the agent
 as a `DaemonSet`.
 
 Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
 
-Containers write to stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
+Containers write to stdout and stderr, but with no agreed format. A node-level agent collects
+these logs and forwards them for aggregation.
 
 ### Using a sidecar container with the logging agent {#sidecar-container-with-logging-agent}
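For context, a minimal sketch of running such a node-level agent as a `DaemonSet`, as the hunk above recommends; the name and image are placeholders, and a real agent would also need configuration for its backend:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent            # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
      - name: log-agent
        image: example.com/log-agent:1.0   # placeholder image
        volumeMounts:
        - name: varlog
          mountPath: /var/log              # node's log directory
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```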

View File

@@ -53,13 +53,13 @@ setting up a cluster to use an external CA.
 
 You can use the `check-expiration` subcommand to check when certificates expire:
 
-```
+```shell
 kubeadm certs check-expiration
 ```
 
 The output is similar to this:
 
-```
+```console
 CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
 admin.conf                 Dec 30, 2020 23:36 UTC   364d                                    no
 apiserver                  Dec 30, 2020 23:36 UTC   364d            ca                      no
@@ -268,7 +268,7 @@ serverTLSBootstrap: true
 If you have already created the cluster you must adapt it by doing the following:
 - Find and edit the `kubelet-config-{{< skew currentVersion >}}` ConfigMap in the `kube-system` namespace.
   In that ConfigMap, the `kubelet` key has a
-  [KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+  [KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/)
   document as its value. Edit the KubeletConfiguration document to set `serverTLSBootstrap: true`.
 - On each node, add the `serverTLSBootstrap: true` field in `/var/lib/kubelet/config.yaml`
   and restart the kubelet with `systemctl restart kubelet`
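For context, after the per-node edit described above, `/var/lib/kubelet/config.yaml` would contain roughly:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# ...existing settings...
serverTLSBootstrap: true
```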
@@ -284,6 +284,8 @@ These CSRs can be viewed using:
 
 ```shell
 kubectl get csr
+```
+```console
 NAME        AGE     SIGNERNAME                      REQUESTOR                      CONDITION
 csr-9wvgt   112s    kubernetes.io/kubelet-serving   system:node:worker-1           Pending
 csr-lz97v   1m58s   kubernetes.io/kubelet-serving   system:node:control-plane-1    Pending
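For context, `kubelet-serving` CSRs like these are not approved automatically; given the output above, each pending CSR can be approved by name:

```shell
kubectl certificate approve csr-9wvgt
```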