Merge pull request #41406 from Ritikaa96/remove-redundant-selector-from-url

Remove the redundant `#selector` fragment from URLs and apply line-wrapping cleanup
Kubernetes Prow Robot 2023-06-02 01:54:53 -07:00 committed by GitHub
commit d703ea5c4c
4 changed files with 25 additions and 17 deletions


@@ -118,7 +118,7 @@ break the kubelet behavior and remove containers that should exist.
To configure options for unused container and image garbage collection, tune the
kubelet using a [configuration file](/docs/tasks/administer-cluster/kubelet-config-file/)
and change the parameters related to garbage collection using the
-[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/)
resource type.
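
As an illustration only (the values shown are the documented defaults, used here as assumptions rather than recommendations), a kubelet configuration file tuning garbage collection might look like:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Never garbage collect an unused image younger than this age.
imageMinimumGCAge: 2m0s
# Start deleting unused images once disk usage crosses this percentage...
imageGCHighThresholdPercent: 85
# ...and stop once usage falls back below this percentage.
imageGCLowThresholdPercent: 80
```
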
### Container image lifecycle


@@ -506,7 +506,7 @@ in a cluster,
|`custom-class-c` | 1000 |
|`regular/unset` | 0 |
-Within the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+Within the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/)
the settings for `shutdownGracePeriodByPodPriority` could look like:
|Pod priority class value|Shutdown period|
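
For example, reusing the priority class values shown above (the grace period values themselves are assumptions for illustration), the corresponding `KubeletConfiguration` fragment might be:

```yaml
shutdownGracePeriodByPodPriority:
  # Pods in custom-class-c (priority value 1000) get a longer shutdown period.
  - priority: 1000
    shutdownGracePeriodSeconds: 120
  # Pods with a regular/unset priority (value 0) get the shortest period.
  - priority: 0
    shutdownGracePeriodSeconds: 60
```
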
@@ -625,7 +625,7 @@ onwards, swap memory support can be enabled on a per-node basis.
To enable swap on a node, the `NodeSwap` feature gate must be enabled on
the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn`
-[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/)
must be set to false.
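
A minimal sketch of those two settings in a kubelet configuration file (assuming you set the feature gate through the file rather than a command line flag):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeSwap: true   # enable the NodeSwap feature gate on this kubelet
failSwapOn: false  # equivalent to the --fail-swap-on=false command line flag
```
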
{{< warning >}}


@@ -81,15 +81,16 @@ See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl
![Node level logging](/images/docs/user-guide/logging/logging-node-level.png)
-A container runtime handles and redirects any output generated to a containerized application's `stdout` and `stderr` streams.
-Different container runtimes implement this in different ways; however, the integration with the kubelet is standardized
-as the _CRI logging format_.
+A container runtime handles and redirects any output generated to a containerized
+application's `stdout` and `stderr` streams.
+Different container runtimes implement this in different ways; however, the integration
+with the kubelet is standardized as the _CRI logging format_.
-By default, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node,
-all corresponding containers are also evicted, along with their logs.
+By default, if a container restarts, the kubelet keeps one terminated container with its logs.
+If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.
-The kubelet makes logs available to clients via a special feature of the Kubernetes API. The usual way to access this is
-by running `kubectl logs`.
+The kubelet makes logs available to clients via a special feature of the Kubernetes API.
+The usual way to access this is by running `kubectl logs`.
### Log rotation
@@ -101,7 +102,7 @@ If you configure rotation, the kubelet is responsible for rotating container log
The kubelet sends this information to the container runtime (using CRI),
and the runtime writes the container logs to the given location.
-You can configure two kubelet [configuration settings](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration),
+You can configure two kubelet [configuration settings](/docs/reference/config-api/kubelet-config.v1beta1/),
`containerLogMaxSize` and `containerLogMaxFiles`,
using the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
These settings let you configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
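
For example (the values shown are the documented defaults, quoted here as an assumption):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi  # rotate a container log file once it reaches this size
containerLogMaxFiles: 5    # keep at most this many log files per container
```
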
@@ -201,7 +202,8 @@ as your responsibility.
## Cluster-level logging architectures
-While Kubernetes does not provide a native solution for cluster-level logging, there are several common approaches you can consider. Here are some options:
+While Kubernetes does not provide a native solution for cluster-level logging, there are
+several common approaches you can consider. Here are some options:
* Use a node-level logging agent that runs on every node.
* Include a dedicated sidecar container for logging in an application pod.
@@ -211,14 +213,18 @@ While Kubernetes does not provide a native solution for cluster-level logging, t
![Using a node level logging agent](/images/docs/user-guide/logging/logging-with-node-agent.png)
-You can implement cluster-level logging by including a _node-level logging agent_ on each node. The logging agent is a dedicated tool that exposes logs or pushes logs to a backend. Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
+You can implement cluster-level logging by including a _node-level logging agent_ on each node.
+The logging agent is a dedicated tool that exposes logs or pushes logs to a backend.
+Commonly, the logging agent is a container that has access to a directory with log files from all of the
+application containers on that node.
Because the logging agent must run on every node, it is recommended to run the agent
as a `DaemonSet`.
Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
-Containers write to stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
+Containers write to stdout and stderr, but with no agreed format. A node-level agent collects
+these logs and forwards them for aggregation.
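
A minimal sketch of this pattern (the agent name and image are hypothetical placeholders, not a specific logging product):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent          # hypothetical agent name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
      - name: log-agent
        image: registry.example/log-agent:1.0  # hypothetical image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log   # node directory that holds container log files
```
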
### Using a sidecar container with the logging agent {#sidecar-container-with-logging-agent}


@@ -53,13 +53,13 @@ setting up a cluster to use an external CA.
You can use the `check-expiration` subcommand to check when certificates expire:
-```
+```shell
kubeadm certs check-expiration
```
The output is similar to this:
-```
+```console
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Dec 30, 2020 23:36 UTC 364d no
apiserver Dec 30, 2020 23:36 UTC 364d ca no
@@ -268,7 +268,7 @@ serverTLSBootstrap: true
If you have already created the cluster you must adapt it by doing the following:
- Find and edit the `kubelet-config-{{< skew currentVersion >}}` ConfigMap in the `kube-system` namespace.
In that ConfigMap, the `kubelet` key has a
-[KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+[KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/)
document as its value. Edit the KubeletConfiguration document to set `serverTLSBootstrap: true`.
- On each node, add the `serverTLSBootstrap: true` field in `/var/lib/kubelet/config.yaml`
and restart the kubelet with `systemctl restart kubelet`
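
In either case, the resulting `KubeletConfiguration` document (with unrelated fields omitted) would contain:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serverTLSBootstrap: true
```
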
@@ -284,6 +284,8 @@ These CSRs can be viewed using:
```shell
kubectl get csr
```
+```console
NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-9wvgt 112s kubernetes.io/kubelet-serving system:node:worker-1 Pending
csr-lz97v 1m58s kubernetes.io/kubelet-serving system:node:control-plane-1 Pending