Drop outdated information from the page

pull/26564/head
Qiming Teng 2021-02-16 11:31:31 +08:00
parent 9cac41b0a8
commit 3b916b3366
1 changed file with 0 additions and 54 deletions


@@ -325,57 +325,3 @@ stale data. Note that in practice, the restore takes a bit of time. During the
restoration, critical components will lose leader lock and restart themselves.
{{< /note >}}

## Upgrading and rolling back etcd clusters

As of Kubernetes v1.13.0, etcd2 is no longer supported as a storage backend for
new or existing Kubernetes clusters. The timeline for Kubernetes support for
etcd2 and etcd3 is as follows:
- Kubernetes v1.0: etcd2 only
- Kubernetes v1.5.1: etcd3 support added, new clusters still default to etcd2
- Kubernetes v1.6.0: new clusters created with `kube-up.sh` default to etcd3,
and `kube-apiserver` defaults to etcd3
- Kubernetes v1.9.0: deprecation of etcd2 storage backend announced
- Kubernetes v1.13.0: etcd2 storage backend removed, `kube-apiserver` will
refuse to start with `--storage-backend=etcd2`, with the
message `etcd2 is no longer a supported storage backend`

Before upgrading a v1.12.x kube-apiserver using `--storage-backend=etcd2` to
v1.13.x, etcd v2 data must be migrated to the v3 storage backend and
kube-apiserver invocations must be changed to use `--storage-backend=etcd3`.

The process for migrating from etcd2 to etcd3 is highly dependent on how the
etcd cluster was deployed and configured, as well as how the Kubernetes
cluster was deployed and configured. We recommend that you consult your cluster
provider's documentation to see if there is a predefined solution.
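
If no provider-specific procedure exists, one generic approach is to stop the
kube-apiserver, convert the etcd data directory with `etcdctl`, and then start
the kube-apiserver again with the etcd3 storage backend. The sketch below is
illustrative only: the data directory, endpoint, and service names are
assumptions that will differ per deployment, and the `etcdctl migrate`
subcommand is only available in etcd v3.3 and earlier.

```shell
# Illustrative sketch only; paths, endpoints, and service names are assumptions.

# 1. Stop the kube-apiserver (and the etcd member) so that no writes happen
#    while the data directory is being converted.
sudo systemctl stop kube-apiserver
sudo systemctl stop etcd

# 2. Convert the etcd v2 keyspace to the v3 storage format in place.
#    (The `migrate` subcommand is available in etcdctl v3.0 through v3.3.)
ETCDCTL_API=3 etcdctl migrate --data-dir /var/lib/etcd

# 3. Start etcd again, then restart the kube-apiserver with the etcd3 storage
#    backend, keeping all of its other existing flags unchanged.
sudo systemctl start etcd
kube-apiserver --storage-backend=etcd3 \
  --etcd-servers=https://127.0.0.1:2379
```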

If your cluster was created via `kube-up.sh` and is still using etcd2 as its
storage backend, please consult the [Kubernetes v1.12 etcd cluster upgrade
docs](https://v1-12.docs.kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#upgrading-and-rolling-back-etcd-clusters).

## Known issue: etcd client balancer with secure endpoints

The etcd v3 client, released in etcd v3.3.13 or earlier, has a [critical
bug](https://github.com/kubernetes/kubernetes/issues/72102) that affects the
kube-apiserver and HA deployments: the etcd client balancer failover does not
work properly against secure endpoints. As a result, etcd servers may fail or
disconnect briefly from the kube-apiserver.

The fix was made in etcd v3.4 (and backported to v3.3.14 and later): the new
client now creates its own credential bundle to correctly set the authority
target in the dial function.

Because the fix requires a gRPC dependency upgrade (to v1.23.0), downstream
Kubernetes [did not backport etcd
upgrades](https://github.com/kubernetes/kubernetes/issues/72102#issuecomment-526645978),
which means the [etcd fix in
kube-apiserver](https://github.com/etcd-io/etcd/pull/10911/commits/db61ee106ca9363ba3f188ecf27d1a8843da33ab)
is only available from Kubernetes 1.16 onwards.

To urgently fix this bug for Kubernetes 1.15 or earlier, build a custom
kube-apiserver. You can make local changes to
[`vendor/google.golang.org/grpc/credentials/credentials.go`](https://github.com/kubernetes/kubernetes/blob/7b85be021cd2943167cd3d6b7020f44735d9d90b/vendor/google.golang.org/grpc/credentials/credentials.go#L135)
by applying the change from
[etcd@db61ee106](https://github.com/etcd-io/etcd/pull/10911/commits/db61ee106ca9363ba3f188ecf27d1a8843da33ab).
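
As a rough sketch only (assuming a checkout of the matching Kubernetes release
branch and a working Go build environment; the branch name below is an
assumption), the patched binary can be rebuilt like this:

```shell
# Sketch only: branch name and patch application method are assumptions.
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
git checkout release-1.15

# Manually apply the change from etcd@db61ee106 to
# vendor/google.golang.org/grpc/credentials/credentials.go, then rebuild
# only the kube-apiserver binary.
make WHAT=cmd/kube-apiserver
```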