Merge pull request #33632 from fenggw-fnst/tasks-8
[zh] Sync for tasks-8: ip-masq-agent.md and nodelocaldns.md
commit b026c3a896
ip-masq-agent.md

@@ -9,9 +9,9 @@ content_type: task
<!-- overview -->

<!--
-This page shows how to configure and enable the ip-masq-agent.
+This page shows how to configure and enable the `ip-masq-agent`.
-->
-此页面展示如何配置和启用 ip-masq-agent。
+此页面展示如何配置和启用 `ip-masq-agent`。

## {{% heading "prerequisites" %}}
@@ -24,9 +24,9 @@ This page shows how to configure and enable the ip-masq-agent.
## IP Masquerade Agent 用户指南

<!--
-The ip-masq-agent configures iptables rules to hide a pod's IP address behind the cluster node's IP address. This is typically done when sending traffic to destinations outside the cluster's pod [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range.
+The `ip-masq-agent` configures iptables rules to hide a pod's IP address behind the cluster node's IP address. This is typically done when sending traffic to destinations outside the cluster's pod [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range.
-->
-ip-masq-agent 配置 iptables 规则以隐藏位于集群节点 IP 地址后面的 Pod 的 IP 地址。
+`ip-masq-agent` 配置 iptables 规则以隐藏位于集群节点 IP 地址后面的 Pod 的 IP 地址。
这通常在将流量发送到集群的 Pod
[CIDR](https://zh.wikipedia.org/wiki/%E6%97%A0%E7%B1%BB%E5%88%AB%E5%9F%9F%E9%97%B4%E8%B7%AF%E7%94%B1)
范围之外的目的地时使用。
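Editor's note: for readers unfamiliar with masquerading, the behaviour described in this hunk corresponds conceptually to a NAT POSTROUTING rule like the sketch below. This is only an illustration of the mechanism, not the exact rule the agent installs; the agent's actual default chain is shown later in this diff.

```shell
# Conceptual illustration only: SNAT/masquerade packets leaving the node whose
# destination is outside the non-masquerade (pod CIDR) ranges.
# The real agent manages its own IP-MASQ-AGENT chain; do not add this by hand.
iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -m addrtype ! --dst-type LOCAL -j MASQUERADE
```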
@@ -96,23 +96,26 @@ The agent configuration file must be written in YAML or JSON syntax, and may con
代理配置文件必须使用 YAML 或 JSON 语法编写,并且可能包含三个可选值:

<!--
-* **nonMasqueradeCIDRs:** A list of strings in [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation that specify the non-masquerade ranges.
+* `nonMasqueradeCIDRs`: A list of strings in
+  [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation that specify the non-masquerade ranges.
-->
-* **nonMasqueradeCIDRs:**
+* `nonMasqueradeCIDRs`:
  [CIDR](https://zh.wikipedia.org/wiki/%E6%97%A0%E7%B1%BB%E5%88%AB%E5%9F%9F%E9%97%B4%E8%B7%AF%E7%94%B1)
  表示法中的字符串列表,用于指定不需伪装的地址范围。

<!--
-* **masqLinkLocal:** A Boolean (true / false) which indicates whether to masquerade traffic to the link local prefix 169.254.0.0/16. False by default.
+* `masqLinkLocal`: A Boolean (true/false) which indicates whether to masquerade traffic to the
+  link local prefix `169.254.0.0/16`. False by default.
-->
-* **masqLinkLocal:** 布尔值 (true / false),表示是否将流量伪装到
-  本地链路前缀 169.254.0.0/16。默认为 false。
+* `masqLinkLocal`:布尔值 (true/false),表示是否为本地链路前缀 169.254.0.0/16 的流量提供伪装。
+  默认为 false。

<!--
-* **resyncInterval:** An interval at which the agent attempts to reload config from disk. e.g. '30s' where 's' is seconds, 'ms' is milliseconds etc...
+* `resyncInterval`: A time interval at which the agent attempts to reload config from disk.
+  For example: '30s', where 's' means seconds, 'ms' means milliseconds.
-->
-* **resyncInterval:** 代理尝试从磁盘重新加载配置的时间间隔。
-  例如 '30s',其中 's' 是秒,'ms' 是毫秒等...
+* `resyncInterval`:代理从磁盘重新加载配置的重试时间间隔。
+  例如 '30s',其中 's' 是秒,'ms' 是毫秒。

<!--
Traffic to 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16) ranges will NOT be masqueraded. Any other traffic (assumed to be internet) will be masqueraded. An example of a local destination from a pod could be its Node's IP address as well as another node's address or one of the IP addresses in Cluster's IP range. Any other traffic will be masqueraded by default. The below entries show the default set of rules that are applied by the ip-masq-agent:
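Editor's note: for reference, a complete agent configuration combining the three optional fields described in this hunk could look like the sketch below. It is illustrative only; the CIDR list and interval are example values, not content from this PR.

```yaml
# Example ip-masq-agent configuration (illustrative values).
# Traffic to the nonMasqueradeCIDRs ranges is NOT masqueraded.
nonMasqueradeCIDRs:
  - 10.0.0.0/8
  - 172.16.0.0/12
  - 192.168.0.0/16
# Keep the default: do not masquerade traffic to the link-local range.
masqLinkLocal: false
# Re-read this file from disk every 60 seconds.
resyncInterval: 60s
```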
@@ -122,8 +125,11 @@ Traffic to 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16) ranges will NOT be masq
Pod 访问本地目的地的例子,可以是其节点的 IP 地址、另一节点的地址或集群的 IP 地址范围内的一个 IP 地址。
默认情况下,任何其他流量都将伪装。以下条目展示了 ip-masq-agent 的默认使用的规则:

-```
+```shell
iptables -t nat -L IP-MASQ-AGENT
+```
+
+```none
RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
RETURN all -- anywhere 172.16.0.0/12 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
@@ -133,13 +139,17 @@ MASQUERADE all -- anywhere anywhere /* ip-masq-agent:
```

<!--
-By default, in GCE/Google Kubernetes Engine starting with Kubernetes version 1.7.0, if network policy is enabled or you are using a cluster CIDR not in the 10.0.0.0/8 range, the ip-masq-agent will run in your cluster. If you are running in another environment, you can add the ip-masq-agent [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) to your cluster:
+By default, in GCE/Google Kubernetes Engine, if network policy is enabled or
+you are using a cluster CIDR not in the 10.0.0.0/8 range, the `ip-masq-agent`
+will run in your cluster. If you are running in another environment,
+you can add the `ip-masq-agent` [DaemonSet](/docs/concepts/workloads/controllers/daemonset/)
+to your cluster.
-->
-默认情况下,从 Kubernetes 1.7.0 版本开始的 GCE/Google Kubernetes Engine 中,
-如果启用了网络策略,或者你使用的集群 CIDR 不在 10.0.0.0/8 范围内,
-则 ip-masq-agent 将在你的集群中运行。
-如果你在其他环境中运行,则可以将 ip-masq-agent
-[DaemonSet](/zh/docs/concepts/workloads/controllers/daemonset/) 添加到你的集群:
+默认情况下,在 GCE/Google Kubernetes Engine 中,如果启用了网络策略,
+或者你使用的集群 CIDR 不在 10.0.0.0/8 范围内,
+则 `ip-masq-agent` 将在你的集群中运行。
+如果你在其他环境中运行,可以将 `ip-masq-agent`
+[DaemonSet](/zh/docs/concepts/workloads/controllers/daemonset/) 添加到你的集群中。

<!-- steps -->
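Editor's note: for clusters where the agent has to be added manually, the deployment step the new text refers to is an ordinary DaemonSet apply. A minimal sketch, assuming the upstream ip-masq-agent manifest has been saved locally as `ip-masq-agent.yaml` (the filename is a placeholder, not part of this PR):

```shell
# Apply the ip-masq-agent DaemonSet manifest into the kube-system namespace.
# "ip-masq-agent.yaml" is assumed to be the manifest from the ip-masq-agent project.
kubectl apply -f ip-masq-agent.yaml --namespace=kube-system
```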
@@ -172,7 +182,7 @@ More information can be found in the ip-masq-agent documentation [here](https://
<!--
In most cases, the default set of rules should be sufficient; however, if this is not the case for your cluster, you can create and apply a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to customize the IP ranges that are affected. For example, to allow only 10.0.0.0/8 to be considered by the ip-masq-agent, you can create the following [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) in a file called "config".
-->
-在大多数情况下,默认的规则集应该足够;但是,如果你的群集不是这种情况,则可以创建并应用
+在大多数情况下,默认的规则集应该足够;但是,如果你的集群不是这种情况,则可以创建并应用
[ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)
来自定义受影响的 IP 范围。
例如,要允许 ip-masq-agent 仅作用于 10.0.0.0/8,你可以在一个名为 “config” 的文件中创建以下
@@ -180,12 +190,12 @@ In most cases, the default set of rules should be sufficient; however, if this i
{{< note >}}
<!--
-It is important that the file is called config since, by default, that will be used as the key for lookup by the ip-masq-agent:
+It is important that the file is called config since, by default, that will be used as the key for lookup by the `ip-masq-agent`:
-->
重要的是,该文件之所以被称为 config,因为默认情况下,该文件将被用作
-ip-masq-agent 查找的主键:
+`ip-masq-agent` 查找的主键:

-```
+```yaml
nonMasqueradeCIDRs:
  - 10.0.0.0/8
resyncInterval: 60s
@@ -195,22 +205,25 @@ resyncInterval: 60s
<!--
Run the following command to add the config map to your cluster:
-->
-运行以下命令将配置映射添加到你的集群:
+运行以下命令将 ConfigMap 添加到你的集群:

-```
+```shell
kubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system
```

<!--
-This will update a file located at */etc/config/ip-masq-agent* which is periodically checked every *resyncInterval* and applied to the cluster node.
+This will update a file located at `/etc/config/ip-masq-agent` which is periodically checked every `resyncInterval` and applied to the cluster node.
After the resync interval has expired, you should see the iptables rules reflect your changes:
-->
-这将更新位于 */etc/config/ip-masq-agent* 的一个文件,该文件以 *resyncInterval*
+这将更新位于 `/etc/config/ip-masq-agent` 的一个文件,该文件以 `resyncInterval`
为周期定期检查并应用于集群节点。
重新同步间隔到期后,你应该看到你的更改在 iptables 规则中体现:

-```
+```shell
iptables -t nat -L IP-MASQ-AGENT
+```
+
+```none
Chain IP-MASQ-AGENT (1 references)
target prot opt source destination
RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
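Editor's note: a quick way to confirm that the ConfigMap created above landed as expected, before waiting out the resync interval, is to read it back from the API server. This is a generic verification step, not text from this PR:

```shell
# Inspect the ConfigMap that was just created; the data key should be "config".
kubectl get configmap ip-masq-agent --namespace=kube-system -o yaml
```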
@@ -219,13 +232,13 @@ MASQUERADE all -- anywhere anywhere /* ip-masq-agent:
```

<!--
-By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which sets up the appropriate iptables rules. To have the ip-masq-agent ignore link local, you can set *masqLinkLocal* to true in the config map.
+By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which sets up the appropriate iptables rules. To have the ip-masq-agent ignore link local, you can set `masqLinkLocal` to true in the ConfigMap.
-->
默认情况下,本地链路范围 (169.254.0.0/16) 也由 ip-masq agent 处理,
该代理设置适当的 iptables 规则。要使 ip-masq-agent 忽略本地链路,
-可以在配置映射中将 *masqLinkLocal* 设置为 true。
+可以在 ConfigMap 中将 `masqLinkLocal` 设置为 true。

-```
+```yaml
nonMasqueradeCIDRs:
  - 10.0.0.0/8
resyncInterval: 60s
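Editor's note: the hunk above is cut off before the example finishes. For context, a config that opts link-local traffic into masquerading, as the new sentence describes, would presumably add one more field. This is a sketch inferred from the field description, not a line taken from the PR:

```yaml
nonMasqueradeCIDRs:
  - 10.0.0.0/8
resyncInterval: 60s
# Assumed addition: also masquerade traffic to the link-local range 169.254.0.0/16.
masqLinkLocal: true
```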
nodelocaldns.md

@@ -11,7 +11,9 @@ content_type: task
-->

<!-- overview -->

{{< feature-state for_k8s_version="v1.18" state="stable" >}}

<!--
This page provides an overview of NodeLocal DNSCache feature in Kubernetes.
-->
@@ -29,10 +31,17 @@ This page provides an overview of NodeLocal DNSCache feature in Kubernetes.
## 引言

<!--
-NodeLocal DNSCache improves Cluster DNS performance by running a dns caching agent on cluster nodes as a DaemonSet. In today's architecture, Pods in ClusterFirst DNS mode reach out to a kube-dns serviceIP for DNS queries. This is translated to a kube-dns/CoreDNS endpoint via iptables rules added by kube-proxy. With this new architecture, Pods will reach out to the dns caching agent running on the same node, thereby avoiding iptables DNAT rules and connection tracking. The local caching agent will query kube-dns service for cache misses of cluster hostnames(cluster.local suffix by default).
+NodeLocal DNSCache improves Cluster DNS performance by running a DNS caching agent
+on cluster nodes as a DaemonSet. In today's architecture, Pods in 'ClusterFirst' DNS mode
+reach out to a kube-dns `serviceIP` for DNS queries. This is translated to a
+kube-dns/CoreDNS endpoint via iptables rules added by kube-proxy.
+With this new architecture, Pods will reach out to the DNS caching agent
+running on the same node, thereby avoiding iptables DNAT rules and connection tracking.
+The local caching agent will query kube-dns service for cache misses of cluster
+hostnames ("`cluster.local`" suffix by default).
-->
NodeLocal DNSCache 通过在集群节点上作为 DaemonSet 运行 DNS 缓存代理来提高集群 DNS 性能。
-在当今的体系结构中,运行在 ClusterFirst DNS 模式下的 Pod 可以连接到 kube-dns `serviceIP` 进行 DNS 查询。
+在当今的体系结构中,运行在 'ClusterFirst' DNS 模式下的 Pod 可以连接到 kube-dns `serviceIP` 进行 DNS 查询。
通过 kube-proxy 添加的 iptables 规则将其转换为 kube-dns/CoreDNS 端点。
借助这种新架构,Pods 将可以访问在同一节点上运行的 DNS 缓存代理,从而避免 iptables DNAT 规则和连接跟踪。
本地缓存代理将查询 kube-dns 服务以获取集群主机名的缓存缺失(默认为 "`cluster.local`" 后缀)。
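Editor's note: to see the 'ClusterFirst' behaviour described above in an existing cluster, you can look at the resolver configuration that the kubelet injects into a Pod; the Pod name below is a placeholder, not part of this PR:

```shell
# In ClusterFirst mode, the Pod's resolv.conf points at the kube-dns service IP
# (or at the node-local address once kubelet is reconfigured for NodeLocal DNSCache).
kubectl exec -it <pod-name> -- cat /etc/resolv.conf
```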
@@ -43,22 +52,29 @@ NodeLocal DNSCache 通过在集群节点上作为 DaemonSet 运行 DNS 缓存代
## 动机

<!--
-* With the current DNS architecture, it is possible that Pods with the highest DNS QPS have to reach out to a different node, if there is no local kube-dns/CoreDNS instance.
-Having a local cache will help improve the latency in such scenarios.
+* With the current DNS architecture, it is possible that Pods with the highest DNS QPS
+  have to reach out to a different node, if there is no local kube-dns/CoreDNS instance.
+  Having a local cache will help improve the latency in such scenarios.
-->
* 使用当前的 DNS 体系结构,如果没有本地 kube-dns/CoreDNS 实例,则具有最高 DNS QPS
  的 Pod 可能必须延伸到另一个节点。
  在这种场景下,拥有本地缓存将有助于改善延迟。

<!--
-* Skipping iptables DNAT and connection tracking will help reduce [conntrack races](https://github.com/kubernetes/kubernetes/issues/56903) and avoid UDP DNS entries filling up conntrack table.
+* Skipping iptables DNAT and connection tracking will help reduce
+  [conntrack races](https://github.com/kubernetes/kubernetes/issues/56903)
+  and avoid UDP DNS entries filling up conntrack table.
-->
* 跳过 iptables DNAT 和连接跟踪将有助于减少
  [conntrack 竞争](https://github.com/kubernetes/kubernetes/issues/56903)
  并避免 UDP DNS 条目填满 conntrack 表。

<!--
-* Connections from local caching agent to kube-dns servie can be upgraded to TCP. TCP conntrack entries will be removed on connection close in contrast with UDP entries that have to timeout ([default](https://www.kernel.org/doc/Documentation/networking/nf_conntrack-sysctl.txt) `nf_conntrack_udp_timeout` is 30 seconds)
+* Connections from local caching agent to kube-dns service can be upgraded to TCP.
+  TCP conntrack entries will be removed on connection close in contrast with
+  UDP entries that have to timeout
+  ([default](https://www.kernel.org/doc/Documentation/networking/nf_conntrack-sysctl.txt)
+  `nf_conntrack_udp_timeout` is 30 seconds)
-->
* 从本地缓存代理到 kube-dns 服务的连接可以升级为 TCP。
  TCP conntrack 条目将在连接关闭时被删除,相反 UDP 条目必须超时
@@ -66,14 +82,16 @@ Having a local cache will help improve the latency in such scenarios.
  `nf_conntrack_udp_timeout` 是 30 秒)。

<!--
-* Upgrading DNS queries from UDP to TCP would reduce tail latency attributed to dropped UDP packets and DNS timeouts usually up to 30s (3 retries + 10s timeout). Since the nodelocal cache listens for UDP DNS queries, applications don't need to be changed.
+* Upgrading DNS queries from UDP to TCP would reduce tail latency attributed to
+  dropped UDP packets and DNS timeouts usually up to 30s (3 retries + 10s timeout).
+  Since the nodelocal cache listens for UDP DNS queries, applications don't need to be changed.
-->
* 将 DNS 查询从 UDP 升级到 TCP 将减少由于被丢弃的 UDP 包和 DNS 超时而带来的尾部等待时间;
  这类延时通常长达 30 秒(3 次重试 + 10 秒超时)。
  由于 nodelocal 缓存监听 UDP DNS 查询,应用不需要变更。

<!--
-* Metrics & visibility into dns requests at a node level.
+* Metrics & visibility into DNS requests at a node level.
-->
* 在节点级别对 DNS 请求的度量和可见性。
@@ -101,8 +119,14 @@ This is the path followed by DNS Queries after NodeLocal DNSCache is enabled:
## Configuration
-->
## 配置

<!--
-{{< note >}} The local listen IP address for NodeLocal DNSCache can be any address that can be guaranteed to not collide with any existing IP in your cluster. It's recommended to use an address with a local scope, per example, from the link-local range 169.254.0.0/16 for IPv4 or from the Unique Local Address range in IPv6 fd00::/8.
+{{< note >}}
+The local listen IP address for NodeLocal DNSCache can be any address that
+can be guaranteed to not collide with any existing IP in your cluster.
+It's recommended to use an address with a local scope, per example,
+from the 'link-local' range '169.254.0.0/16' for IPv4 or from the
+'Unique Local Address' range in IPv6 'fd00::/8'.
+{{< /note >}}
-->
{{< note >}}
@@ -117,32 +141,40 @@ This feature can be enabled using the following steps:
可以使用以下步骤启动此功能:

<!--
-* Prepare a manifest similar to the sample [`nodelocaldns.yaml`](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml) and save it as `nodelocaldns.yaml.`
+* Prepare a manifest similar to the sample
+  [`nodelocaldns.yaml`](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml)
+  and save it as `nodelocaldns.yaml.`
-->
* 根据示例 [`nodelocaldns.yaml`](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml)
  准备一个清单,把它保存为 `nodelocaldns.yaml`。

<!--
-* If using IPv6, the CoreDNS configuration file need to enclose all the IPv6 addresses into square brackets if used in IP:Port format.
-If you are using the sample manifest from the previous point, this will require to modify [the configuration line L70](https://github.com/kubernetes/kubernetes/blob/b2ecd1b3a3192fbbe2b9e348e095326f51dc43dd/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml#L70) like this `health [__PILLAR__LOCAL__DNS__]:8080`
+* If using IPv6, the CoreDNS configuration file need to enclose all the IPv6 addresses
+  into square brackets if used in 'IP:Port' format.
+  If you are using the sample manifest from the previous point, this will require to modify
+  [the configuration line L70](https://github.com/kubernetes/kubernetes/blob/b2ecd1b3a3192fbbe2b9e348e095326f51dc43dd/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml#L70)
+  like this: "`health [__PILLAR__LOCAL__DNS__]:8080`"
-->
-* 如果使用 IPv6,在使用 IP:Port 格式的时候需要把 CoreDNS 配置文件里的所有 IPv6 地址用方括号包起来。
+* 如果使用 IPv6,在使用 'IP:Port' 格式的时候需要把 CoreDNS 配置文件里的所有 IPv6 地址用方括号包起来。
  如果你使用上述的示例清单,需要把
  [配置行 L70](https://github.com/kubernetes/kubernetes/blob/b2ecd1b3a3192fbbe2b9e348e095326f51dc43dd/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml#L70)
-  修改为 `health [__PILLAR__LOCAL__DNS__]:8080`。
+  修改为: "`health [__PILLAR__LOCAL__DNS__]:8080`"。

<!--
* Substitute the variables in the manifest with the right values:

-  * kubedns=`kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}`
-  * domain=`<cluster-domain>`
-  * localdns=`<node-local-address>`
+  ```shell
+  kubedns=`kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}`
+  domain=<cluster-domain>
+  localdns=<node-local-address>
+  ```

-  `<cluster-domain>` is "cluster.local" by default. `<node-local-address>` is the local listen IP address chosen for NodeLocal DNSCache.
+  `<cluster-domain>` is "`cluster.local`" by default. `<node-local-address>` is the
+  local listen IP address chosen for NodeLocal DNSCache.
-->
* 把清单里的变量更改为正确的值:

-  ```
+  ```shell
  kubedns=`kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}`
  domain=<cluster-domain>
  localdns=<node-local-address>
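Editor's note: as a worked illustration of the substitution step above, the three variables might end up with values like the following. These are typical defaults chosen only for illustration and are not taken from the PR:

```shell
# Illustrative values only; check your own cluster.
kubedns=10.96.0.10          # clusterIP of the kube-dns Service
domain=cluster.local        # cluster DNS suffix
localdns=169.254.20.10      # link-local address chosen for NodeLocal DNSCache
```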
@@ -152,15 +184,17 @@ If you are using the sample manifest from the previous point, this will require
  NodeLocal DNSCache 选择的本地侦听 IP 地址。

<!--
* If kube-proxy is running in IPTABLES mode:

  ``` bash
  sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml
  ```

-`__PILLAR__CLUSTER__DNS__` and `__PILLAR__UPSTREAM__SERVERS__` will be populated by the node-local-dns pods.
-In this mode, node-local-dns pods listen on both the kube-dns service IP as well as `<node-local-address>`, so pods can lookup DNS records using either IP address.
+  `__PILLAR__CLUSTER__DNS__` and `__PILLAR__UPSTREAM__SERVERS__` will be populated by
+  the `node-local-dns` pods.
+  In this mode, the `node-local-dns` pods listen on both the kube-dns service IP
+  as well as `<node-local-address>`, so pods can lookup DNS records using either IP address.
-->
* 如果 kube-proxy 运行在 IPTABLES 模式:

  ``` bash
@@ -170,44 +204,57 @@ If you are using the sample manifest from the previous point, this will require
  node-local-dns Pods 会设置 `__PILLAR__CLUSTER__DNS__` 和 `__PILLAR__UPSTREAM__SERVERS__`。
  在此模式下, node-local-dns Pods 会同时侦听 kube-dns 服务的 IP 地址和
  `<node-local-address>` 的地址,以便 Pods 可以使用其中任何一个 IP 地址来查询 DNS 记录。

<!--
* If kube-proxy is running in IPVS mode:

  ``` bash
  sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/,__PILLAR__DNS__SERVER__//g; s/__PILLAR__CLUSTER__DNS__/$kubedns/g" nodelocaldns.yaml
  ```

-In this mode, node-local-dns pods listen only on `<node-local-address>`. The node-local-dns interface cannot bind the kube-dns cluster IP since the interface used for IPVS loadbalancing already uses this address.
-`__PILLAR__UPSTREAM__SERVERS__` will be populated by the node-local-dns pods.
+  In this mode, the `node-local-dns` pods listen only on `<node-local-address>`.
+  The `node-local-dns` interface cannot bind the kube-dns cluster IP since the
+  interface used for IPVS loadbalancing already uses this address.
+  `__PILLAR__UPSTREAM__SERVERS__` will be populated by the node-local-dns pods.
-->
* 如果 kube-proxy 运行在 IPVS 模式:

  ``` bash
-  sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__//g; s/__PILLAR__CLUSTER__DNS__/$kubedns/g" nodelocaldns.yaml
+  sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/,__PILLAR__DNS__SERVER__//g; s/__PILLAR__CLUSTER__DNS__/$kubedns/g" nodelocaldns.yaml
  ```

  在此模式下,node-local-dns Pods 只会侦听 `<node-local-address>` 的地址。
  node-local-dns 接口不能绑定 kube-dns 的集群 IP 地址,因为 IPVS 负载均衡
  使用的接口已经占用了该地址。
  node-local-dns Pods 会设置 `__PILLAR__UPSTREAM__SERVERS__`。

<!--
* Run `kubectl create -f nodelocaldns.yaml`

-* If using kube-proxy in IPVS mode, `--cluster-dns` flag to kubelet needs to be modified to use `<node-local-address>` that NodeLocal DNSCache is listening on.
-Otherwise, there is no need to modify the value of the `--cluster-dns` flag, since NodeLocal DNSCache listens on both the kube-dns service IP as well as `<node-local-address>`.
+* If using kube-proxy in IPVS mode, `--cluster-dns` flag to kubelet needs to be modified
+  to use `<node-local-address>` that NodeLocal DNSCache is listening on.
+  Otherwise, there is no need to modify the value of the `--cluster-dns` flag,
+  since NodeLocal DNSCache listens on both the kube-dns service IP as well as
+  `<node-local-address>`.
-->
* 运行 `kubectl create -f nodelocaldns.yaml`

* 如果 kube-proxy 运行在 IPVS 模式,需要修改 kubelet 的 `--cluster-dns` 参数
  NodeLocal DNSCache 正在侦听的 `<node-local-address>` 地址。
  否则,不需要修改 `--cluster-dns` 参数,因为 NodeLocal DNSCache 会同时侦听
  kube-dns 服务的 IP 地址和 `<node-local-address>` 的地址。

<!--
-Once enabled, node-local-dns Pods will run in the kube-system namespace on each of the cluster nodes. This Pod runs [CoreDNS](https://github.com/coredns/coredns) in cache mode, so all CoreDNS metrics exposed by the different plugins will be available on a per-node basis.
+Once enabled, the `node-local-dns` Pods will run in the `kube-system` namespace
+on each of the cluster nodes. This Pod runs [CoreDNS](https://github.com/coredns/coredns)
+in cache mode, so all CoreDNS metrics exposed by the different plugins will
+be available on a per-node basis.

-You can disable this feature by removing the DaemonSet, using `kubectl delete -f <manifest>` . You should also revert any changes you made to the kubelet configuration.
+You can disable this feature by removing the DaemonSet, using `kubectl delete -f <manifest>`.
+You should also revert any changes you made to the kubelet configuration.
-->
-启用后,node-local-dns Pods 将在每个集群节点上的 kube-system 名字空间中运行。
-此 Pod 在缓存模式下运行 [CoreDNS](https://github.com/coredns/coredns) ,
+启用后,`node-local-dns` Pods 将在每个集群节点上的 `kube-system` 名字空间中运行。
+此 Pod 在缓存模式下运行 [CoreDNS](https://github.com/coredns/coredns),
因此每个节点都可以使用不同插件公开的所有 CoreDNS 指标。

如果要禁用该功能,你可以使用 `kubectl delete -f <manifest>` 来删除 DaemonSet。
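Editor's note: for the IPVS case in the step above, the kubelet change can also be expressed in a kubelet configuration file rather than a command-line flag. A minimal sketch, assuming a KubeletConfiguration file is in use and that 169.254.20.10 is the chosen node-local address (an assumed example value):

```yaml
# Fragment of a KubeletConfiguration; clusterDNS corresponds to the --cluster-dns flag.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
  - 169.254.20.10
```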
@@ -240,7 +287,9 @@ In those cases, the `kube-dns` ConfigMap can be updated.
## 设置内存限制

<!--
-node-local-dns pods use memory for storing cache entries and processing queries. Since they do not watch Kubernetes objects, the cluster size or the number of Services/Endpoints do not directly affect memory usage. Memory usage is influenced by the DNS query pattern.
+The `node-local-dns` Pods use memory for storing cache entries and processing queries.
+Since they do not watch Kubernetes objects, the cluster size or the number of Services/Endpoints
+do not directly affect memory usage. Memory usage is influenced by the DNS query pattern.
From [CoreDNS docs](https://github.com/coredns/deployment/blob/master/kubernetes/Scaling_CoreDNS.md),
> The default cache size is 10000 entries, which uses about 30 MB when completely filled.
-->
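Editor's note: where the memory-limit discussion above needs to be acted on, the limit is set on the caching container in the DaemonSet like any other container resource limit. A sketch only; the container name and the 170Mi figure are assumptions, not values from this PR:

```yaml
# Fragment of the node-local-dns DaemonSet Pod template (illustrative).
containers:
  - name: node-cache            # assumed container name
    resources:
      requests:
        memory: "30Mi"
      limits:
        memory: "170Mi"         # tune based on your DNS query pattern
```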
@@ -267,13 +316,13 @@ using the max_concurrent option in the forward plugin.
你可以在 forward 插件中使用 `max_concurrent` 选项设置并发查询数量上限。

<!--
-If a node-local-dns pod attempts to use more memory than is available (because of total system
+If a `node-local-dns` Pod attempts to use more memory than is available (because of total system
resources, or because of a configured
[resource limit](/docs/concepts/configuration/manage-resources-containers/)), the operating system
may shut down that pod's container.
If this happens, the container that is terminated (“OOMKilled”) does not clean up the custom
packet filtering rules that it previously added during startup.
-The node-local-dns container should get restarted (since managed as part of a DaemonSet), but this
+The `node-local-dns` container should get restarted (since managed as part of a DaemonSet), but this
will lead to a brief DNS downtime each time that the container fails: the packet filtering rules direct
DNS queries to a local Pod that is unhealthy.
-->
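Editor's note: to observe the failure mode described above in a running cluster, you can check the Pods of the DaemonSet for restarts and OOMKilled terminations. The label selector below is an assumption about how the manifest labels its Pods, and the Pod name is a placeholder:

```shell
# List the caching Pods and their restart counts (label is assumed).
kubectl get pods -n kube-system -l k8s-app=node-local-dns

# Inspect a specific Pod for an OOMKilled last-state.
kubectl describe pod <node-local-dns-pod> -n kube-system
```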