Merge pull request #28234 from jihoon-seo/210603_Nit_Fix_hrefs

Nit: Fix hrefs of some links
pull/28259/head
Kubernetes Prow Robot 2021-06-03 18:59:26 -07:00 committed by GitHub
commit 13349883d1
9 changed files with 12 additions and 12 deletions

@@ -253,7 +253,7 @@ message AllocatableResourcesResponse {
`ContainerDevices` expose the topology information declaring to which NUMA cells the device is affine.
The NUMA cells are identified using an opaque integer ID, whose value is consistent with what device
-plugins report [when they register themselves to the kubelet](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#device-plugin-integration-with-the-topology-manager).
+plugins report [when they register themselves to the kubelet](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#device-plugin-integration-with-the-topology-manager).
The gRPC service is served over a unix socket at `/var/lib/kubelet/pod-resources/kubelet.sock`.
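As a rough illustration of the endpoint this hunk documents, the sketch below dials the pod-resources socket and calls `GetAllocatableResources`. It assumes the v1 `PodResourcesLister` client generated under `k8s.io/kubelet/pkg/apis/podresources/v1`; treat the import path and the availability of the call as assumptions to verify against your kubelet version.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	// Dial the kubelet's pod-resources socket (path taken from the text above).
	conn, err := grpc.Dial(
		"unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// GetAllocatableResources returns the AllocatableResourcesResponse shown
	// in the hunk above; availability depends on the kubelet version and flags.
	client := podresourcesv1.NewPodResourcesListerClient(conn)
	resp, err := client.GetAllocatableResources(ctx, &podresourcesv1.AllocatableResourcesRequest{})
	if err != nil {
		log.Fatal(err)
	}

	// Each ContainerDevices entry carries the NUMA affinity reported by the
	// device plugin; Topology holds the opaque NUMA cell IDs.
	for _, dev := range resp.GetDevices() {
		fmt.Printf("resource=%s devices=%v topology=%v\n",
			dev.GetResourceName(), dev.GetDeviceIds(), dev.GetTopology())
	}
}
```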

@@ -86,7 +86,7 @@ rolling out node software updates can cause voluntary disruptions. Also, some im
of cluster (node) autoscaling may cause voluntary disruptions to defragment and compact nodes.
Your cluster administrator or hosting provider should have documented what level of voluntary
disruptions, if any, to expect. Certain configuration options, such as
-[using PriorityClasses](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/)
+[using PriorityClasses](/docs/concepts/configuration/pod-priority-preemption/)
in your pod spec can also cause voluntary (and involuntary) disruptions.
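To make the PriorityClass remark concrete, here is a minimal, hypothetical sketch using the Go API types: a PriorityClass whose preemption policy allows it to displace lower-priority pods, which is how priority settings can trigger the disruptions described above. The class name and values are illustrative, not part of this PR.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical PriorityClass: pods that reference it through
	// spec.priorityClassName may preempt (and thus disrupt) lower-priority
	// pods unless preemptionPolicy is set to Never.
	preemption := corev1.PreemptLowerPriority
	pc := schedulingv1.PriorityClass{
		ObjectMeta:       metav1.ObjectMeta{Name: "high-priority"},
		Value:            100000,
		GlobalDefault:    false,
		PreemptionPolicy: &preemption,
		Description:      "Workloads that may preempt lower-priority pods.",
	}
	fmt.Printf("%s value=%d preemptionPolicy=%s\n", pc.Name, pc.Value, *pc.PreemptionPolicy)
}
```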

@@ -27,7 +27,7 @@ each Pod in the scheduling queue according to constraints and available
resources. The scheduler then ranks each valid Node and binds the Pod to a
suitable Node. Multiple different schedulers may be used within a cluster;
kube-scheduler is the reference implementation.
-See [scheduling](https://kubernetes.io/docs/concepts/scheduling-eviction/)
+See [scheduling](/docs/concepts/scheduling-eviction/)
for more information about scheduling and the kube-scheduler component.
```
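The glossary text notes that multiple schedulers can run in one cluster. As an illustrative sketch (the scheduler name is a placeholder), a Pod opts into a non-default scheduler via `spec.schedulerName`; pods that leave the field empty are handled by kube-scheduler.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// "my-custom-scheduler" is hypothetical; pods that omit schedulerName
	// are placed by the default kube-scheduler.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "custom-scheduled-pod"},
		Spec: corev1.PodSpec{
			SchedulerName: "my-custom-scheduler",
			Containers: []corev1.Container{
				{Name: "app", Image: "registry.k8s.io/pause:3.9"},
			},
		},
	}
	fmt.Println(pod.Name, "will be placed by", pod.Spec.SchedulerName)
}
```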

@@ -250,7 +250,7 @@ only has one pending pods queue.
## {{% heading "whatsnext" %}}
-* Read the [kube-scheduler reference](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/)
+* Read the [kube-scheduler reference](/docs/reference/command-line-tools-reference/kube-scheduler/)
* Learn about [scheduling](/docs/concepts/scheduling-eviction/kube-scheduler/)
* Read the [kube-scheduler configuration (v1beta1)](/docs/reference/config-api/kube-scheduler-config.v1beta1/) reference

@@ -17,7 +17,7 @@ Generate keys and certificate signing requests
Generates keys and certificate signing requests (CSRs) for all the certificates required to run the control plane. This command also generates partial kubeconfig files with private key data in the "users > user > client-key-data" field, and for each kubeconfig file an accompanying ".csr" file is created.
-This command is designed for use in [Kubeadm External CA Mode](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#external-ca-mode). It generates CSRs which you can then submit to your external certificate authority for signing.
+This command is designed for use in [Kubeadm External CA Mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#external-ca-mode). It generates CSRs which you can then submit to your external certificate authority for signing.
The PEM encoded signed certificates should then be saved alongside the key files, using ".crt" as the file extension, or in the case of kubeconfig files, the PEM encoded signed certificate should be base64 encoded and added to the kubeconfig file in the "users > user > client-certificate-data" field.
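The paragraph above says the externally signed certificate must be base64 encoded before it is placed in the kubeconfig's "users > user > client-certificate-data" field. A minimal sketch of that encoding step, with a hypothetical file name:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"os"
)

func main() {
	// "admin.conf.crt" is a hypothetical name for a certificate returned by
	// the external CA for one of the generated ".csr" files.
	pem, err := os.ReadFile("admin.conf.crt")
	if err != nil {
		panic(err)
	}
	// This base64 string is what goes into the kubeconfig's
	// "users > user > client-certificate-data" field.
	fmt.Println(base64.StdEncoding.EncodeToString(pem))
}
```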

@@ -74,7 +74,7 @@ The **policy/v1beta1** API version of PodDisruptionBudget will no longer be serv
PodSecurityPolicy in the **policy/v1beta1** API version will no longer be served in v1.25, and the PodSecurityPolicy admission controller will be removed.
PodSecurityPolicy replacements are still under discussion, but current use can be migrated to
-[3rd-party admission webhooks](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) now.
+[3rd-party admission webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/) now.
#### RuntimeClass {#runtimeclass-v125}

@@ -66,8 +66,8 @@ When creating a cluster, you can (using custom tooling):
* start and configure an additional etcd instance
* configure the {{< glossary_tooltip term_id="kube-apiserver" text="API server" >}} to use it for storing events
-See [Operating etcd clusters for Kubernetes](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/) and
-[Set up a High Availability etcd cluster with kubeadm](docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/)
+See [Operating etcd clusters for Kubernetes](/docs/tasks/administer-cluster/configure-upgrade-etcd/) and
+[Set up a High Availability etcd cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/)
for details on configuring and managing etcd for a large cluster.
## Addon resources

@@ -49,7 +49,7 @@ access cluster resources. You can use role-based access control
security mechanisms to make sure that users and workloads can get access to the
resources they need, while keeping workloads, and the cluster itself, secure.
You can set limits on the resources that users and workloads can access
-by managing [policies](https://kubernetes.io/docs/concepts/policy/) and
+by managing [policies](/docs/concepts/policy/) and
[container resources](/docs/concepts/configuration/manage-resources-containers/).
Before building a Kubernetes production environment on your own, consider
@@ -286,8 +286,8 @@ and the
deployment methods.
- Configure user management by determining your
[Authentication](/docs/reference/access-authn-authz/authentication/) and
-[Authorization](docs/reference/access-authn-authz/authorization/) methods.
+[Authorization](/docs/reference/access-authn-authz/authorization/) methods.
- Prepare for application workloads by setting up
-[resource limits](docs/tasks/administer-cluster/manage-resources/),
+[resource limits](/docs/tasks/administer-cluster/manage-resources/),
[DNS autoscaling](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/)
and [service accounts](/docs/reference/access-authn-authz/service-accounts-admin/).
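The guidance above about limiting the resources users and workloads can access is commonly implemented with quota objects. A minimal, hypothetical sketch using the Go API types; the namespace and amounts are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Cap the aggregate CPU, memory, and pod count that workloads in the
	// (placeholder) "team-a" namespace can request.
	quota := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "team-a-quota", Namespace: "team-a"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceRequestsCPU:    resource.MustParse("10"),
				corev1.ResourceRequestsMemory: resource.MustParse("32Gi"),
				corev1.ResourcePods:           resource.MustParse("50"),
			},
		},
	}
	fmt.Printf("%s/%s hard limits: %v\n", quota.Namespace, quota.Name, quota.Spec.Hard)
}
```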

@@ -239,7 +239,7 @@ The field `serverTLSBootstrap: true` will enable the bootstrap of kubelet servin
certificates by requesting them from the `certificates.k8s.io` API. One known limitation
is that the CSRs (Certificate Signing Requests) for these certificates cannot be automatically
approved by the default signer in the kube-controller-manager -
-[`kubernetes.io/kubelet-serving`](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers).
+[`kubernetes.io/kubelet-serving`](/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers).
This will require action from the user or a third party controller.
These CSRs can be viewed using: