Update glossary and move existing info to new page

- Update glossary term for secrets
- Improve clarity of privileged container warning note
- Create a new page for Secrets good practices and bring existing content as-is to the page
- Add weights to pages
- Add link for good practices for secrets and remove moved content
pull/34291/head
Shannon Kularathna 2022-08-17 18:58:43 +00:00
parent 2fff85121a
commit 1c625d0659
4 changed files with 108 additions and 79 deletions

@@ -1280,61 +1280,15 @@ Secrets that a Pod requests are potentially visible within its containers.
Therefore, one Pod does not have access to the Secrets of another Pod.
{{< warning >}}
-Any privileged containers on a node are liable to have access to all Secrets used
-on that node.
+Any containers that run with `privileged: true` on a node can access all
+Secrets used on that node.
{{< /warning >}}
### Security recommendations for developers
- Applications still need to protect the value of confidential information after reading it
from an environment variable or volume. For example, your application must avoid logging
the secret data in the clear or transmitting it to an untrusted party.
- If you are defining multiple containers in a Pod, and only one of those
containers needs access to a Secret, define the volume mount or environment
variable configuration so that the other containers do not have access to that
Secret.
- If you configure a Secret through a {{< glossary_tooltip text="manifest" term_id="manifest" >}},
with the secret data encoded as base64, sharing this file or checking it in to a
source repository means the secret is available to everyone who can read the manifest.
Base64 encoding is _not_ an encryption method; it provides no additional confidentiality
over plain text.
- When deploying applications that interact with the Secret API, you should
limit access using
[authorization policies](/docs/reference/access-authn-authz/authorization/) such as
[RBAC](/docs/reference/access-authn-authz/rbac/).
- In the Kubernetes API, `watch` and `list` requests for Secrets within a namespace
are extremely powerful capabilities. Avoid granting this access where feasible, since
listing Secrets allows the clients to inspect the values of every Secret in that
namespace.
### Security recommendations for cluster administrators
{{< caution >}}
A user who can create a Pod that uses a Secret can also see the value of that Secret. Even
if cluster policies do not allow a user to read the Secret directly, the same user could
have access to run a Pod that then exposes the Secret.
{{< /caution >}}
- Reserve the ability to `watch` or `list` all secrets in a cluster (using the Kubernetes
API), so that only the most privileged, system-level components can perform this action.
- When deploying applications that interact with the Secret API, you should
limit access using
[authorization policies](/docs/reference/access-authn-authz/authorization/) such as
[RBAC](/docs/reference/access-authn-authz/rbac/).
- In the API server, objects (including Secrets) are persisted into
{{< glossary_tooltip term_id="etcd" >}}; therefore:
- only allow cluster administrators to access etcd (this includes read-only access);
- enable [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
for Secret objects, so that the data of these Secrets are not stored in the clear
in {{< glossary_tooltip term_id="etcd" >}};
- consider wiping / shredding the durable storage used by etcd once it is
no longer in use;
- if there are multiple etcd instances, make sure that etcd is
using SSL/TLS for communication between etcd peers.
## {{% heading "whatsnext" %}}
- For guidelines to manage and improve the security of your Secrets, refer to
[Good practices for Kubernetes Secrets](/docs/concepts/security/secrets-good-practices).
- Learn how to [manage Secrets using `kubectl`](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- Learn how to [manage Secrets using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)

@@ -1,7 +1,7 @@
---
title: Multi-tenancy
content_type: concept
-weight: 70
+weight: 80
---
<!-- overview -->
@@ -27,7 +27,7 @@ The first step to determining how to share your cluster is understanding your us
evaluate the patterns and tools available. In general, multi-tenancy in Kubernetes clusters falls
into two broad categories, though many variations and hybrids are also possible.
### Multiple teams
A common form of multi-tenancy is to share a cluster between multiple teams within an
organization, each of whom may operate one or more workloads. These workloads frequently need to
@@ -39,7 +39,7 @@ automation tools. There is often some level of trust between members of differen
Kubernetes policies such as RBAC, quotas, and network policies are essential to safely and fairly
share clusters.
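
To give one concrete example of such a policy, here is a minimal sketch of a default-deny
NetworkPolicy that could be applied per tenant namespace; the namespace name `team-a` is
hypothetical:

```yaml
# A sketch, not a prescribed configuration: deny all ingress traffic to Pods
# in a tenant namespace unless another NetworkPolicy explicitly allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a   # hypothetical tenant namespace
spec:
  podSelector: {}     # an empty selector matches every Pod in the namespace
  policyTypes:
    - Ingress         # no ingress rules are listed, so all ingress is denied
```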
### Multiple customers
The other major form of multi-tenancy frequently involves a Software-as-a-Service (SaaS) vendor
running multiple instances of a workload for customers. This business model is so strongly
@@ -83,7 +83,7 @@ services.
There are several ways to design and build multi-tenant solutions with Kubernetes. Each of these
methods comes with its own set of tradeoffs that impact the isolation level, implementation
effort, operational complexity, and cost of service.
A Kubernetes cluster consists of a control plane which runs Kubernetes software, and a data plane
consisting of worker nodes where tenant workloads are executed as pods. Tenant isolation can be
@@ -95,7 +95,7 @@ implies strong isolation, and “soft” multi-tenancy, which implies weaker iso
often from security and resource sharing perspectives (e.g. guarding against attacks such as data
exfiltration or DoS). Since data planes typically have much larger attack surfaces, "hard"
multi-tenancy often requires extra attention to isolating the data-plane, though control plane
isolation also remains critical.
However, the terms "hard" and "soft" can often be confusing, as there is no single definition that
will apply to all users. Rather, "hardness" or "softness" is better understood as a broad
@@ -118,7 +118,7 @@ your needs or capabilities change.
## Control plane isolation
Control plane isolation ensures that different tenants cannot access or affect each others'
Kubernetes API resources.
### Namespaces
@@ -161,7 +161,7 @@ are less useful for multi-tenant clusters.
In a multi-team environment, RBAC must be used to restrict tenants' access to the appropriate
namespaces, and ensure that cluster-wide resources can only be accessed or modified by privileged
users such as cluster administrators.
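
As a sketch of what this looks like in practice (the group and namespace names are
hypothetical), a namespaced RoleBinding can grant a tenant team the built-in `edit`
ClusterRole within its own namespace only:

```yaml
# A sketch: scope a tenant group's permissions to its own namespace by binding
# the built-in "edit" ClusterRole with a namespaced RoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-editors
  namespace: team-a              # the tenant's namespace
subjects:
  - kind: Group
    name: team-a                 # hypothetical group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                     # built-in role; grants no cluster-wide permissions
  apiGroup: rbac.authorization.k8s.io
```

Because the binding is namespaced, the same ClusterRole can be reused for every tenant
without granting any of them cluster-wide access.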
If a policy ends up granting a user more permissions than they need, this is likely a signal that
the namespace containing the affected resources should be refactored into finer-grained
@@ -169,7 +169,7 @@ namespaces. Namespace management tools may simplify the management of these fine
namespaces by applying common RBAC policies to different namespaces, while still allowing
fine-grained policies where necessary.
### Quotas
Kubernetes workloads consume node resources, like CPU and memory. In a multi-tenant environment,
you can use [Resource Quotas](/docs/concepts/policy/resource-quotas/) to manage resource usage of
@@ -188,7 +188,7 @@ than built-in quotas.
Quotas prevent a single tenant from consuming more than their allocated share of resources,
hence minimizing the “noisy neighbor” issue, where one tenant negatively impacts the performance
of other tenants' workloads.
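
For illustration, a minimal ResourceQuota sketch; the namespace and the specific
values are hypothetical and should be sized for your tenants:

```yaml
# A sketch: cap the total compute a tenant namespace can request and consume.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: team-a        # hypothetical tenant namespace
spec:
  hard:
    requests.cpu: "8"      # sum of CPU requests across all Pods in the namespace
    requests.memory: 16Gi
    limits.cpu: "16"       # sum of CPU limits
    limits.memory: 32Gi
```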
When you apply a quota to a namespace, Kubernetes requires you to also specify resource requests and
limits for each container. Limits are the upper bound for the amount of resources that a container
@@ -242,11 +242,11 @@ However, they can be significantly more complex to manage and may not be appropr
Kubernetes offers several types of volumes that can be used as persistent storage for workloads.
For security and data-isolation, [dynamic volume provisioning](/docs/concepts/storage/dynamic-provisioning/)
is recommended and volume types that use node resources should be avoided.
[StorageClasses](/docs/concepts/storage/storage-classes/) allow you to describe custom "classes"
of storage offered by your cluster, based on quality-of-service levels, backup policies, or custom
policies determined by the cluster administrators.
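
For example, a sketch of a StorageClass for one quality-of-service tier; the provisioner
and parameters are illustrative and depend on the storage driver available in your cluster:

```yaml
# A sketch: a "gold" tier of dynamically provisioned storage.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold
provisioner: ebs.csi.aws.com       # illustrative; use your cluster's CSI driver
parameters:
  type: gp3                        # driver-specific performance parameter
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```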
Pods can request storage using a [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/).
A PersistentVolumeClaim is a namespaced resource, which enables isolating portions of the storage
@@ -291,7 +291,7 @@ sandboxing implementations are available:
userspace kernel, written in Go, with limited access to the underlying host.
* [Kata Containers](https://katacontainers.io/) is an OCI compliant runtime that allows you to run
containers in a VM. The hardware virtualization available in Kata offers an added layer of
security for containers running untrusted code.
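
As a sketch of how a sandboxed runtime such as these is typically exposed to workloads,
assuming the nodes' container runtime has already been configured with a matching handler
(here gVisor's `runsc`; the Pod name and image are hypothetical):

```yaml
# A sketch: register a sandboxed runtime and opt a Pod into it.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc                  # handler name configured in the container runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload      # hypothetical name
spec:
  runtimeClassName: gvisor      # run this Pod's containers inside the sandbox
  containers:
    - name: app
      image: registry.example/app:1.0   # hypothetical image
```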
### Node Isolation
@@ -308,7 +308,7 @@ services. A skilled attacker could use the permissions assigned to the kubelet o
running on the node to move laterally within the cluster and gain access to tenant workloads
running on other nodes. If this is a major concern, consider implementing compensating controls
such as seccomp, AppArmor or SELinux or explore using sandboxed containers or creating separate
clusters for each tenant.
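
For reference, node isolation is commonly implemented with node labels and taints; a
sketch, assuming the nodes have been labeled and tainted for the tenant out of band
(all names are hypothetical):

```yaml
# A sketch: pin a tenant's Pods to dedicated nodes. Assumes nodes were prepared
# with, for example:
#   kubectl label node <node> tenant=team-a
#   kubectl taint node <node> tenant=team-a:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: team-a-app               # hypothetical name
  namespace: team-a
spec:
  nodeSelector:
    tenant: team-a               # schedule only onto the tenant's nodes
  tolerations:
    - key: tenant
      operator: Equal
      value: team-a
      effect: NoSchedule         # tolerate the taint that keeps other tenants off
  containers:
    - name: app
      image: registry.example/app:1.0
```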
Node isolation is a little easier to reason about from a billing standpoint than sandboxing
containers since you can charge back per node rather than per pod. It also has fewer compatibility
@@ -332,7 +332,7 @@ feature that allows you to assign a priority to certain pods running within the
When an application calls the Kubernetes API, the API server evaluates the priority assigned to the pod.
Calls from pods with higher priority are fulfilled before those with a lower priority.
When contention is high, lower priority calls can be queued until the server is less busy or you
can reject the requests.
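
As an illustration of the constructs involved (the names are hypothetical, and the
`flowcontrol.apiserver.k8s.io` API version varies across Kubernetes releases), a
FlowSchema can route one tenant's requests to a low-priority level:

```yaml
# A sketch: send API requests from one tenant's service accounts to the
# "workload-low" priority level so they queue behind higher-priority traffic.
apiVersion: flowcontrol.apiserver.k8s.io/v1beta2
kind: FlowSchema
metadata:
  name: tenant-a-low             # hypothetical name
spec:
  priorityLevelConfiguration:
    name: workload-low           # one of the default-suggested priority levels
  matchingPrecedence: 1000       # lower values are evaluated first
  distinguisherMethod:
    type: ByUser
  rules:
    - subjects:
        - kind: ServiceAccount
          serviceAccount:
            name: "*"            # every service account...
            namespace: team-a    # ...in the tenant's namespace
      resourceRules:
        - verbs: ["*"]
          apiGroups: ["*"]
          resources: ["*"]
          clusterScope: true
          namespaces: ["*"]
```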
Using API priority and fairness will not be very common in SaaS environments unless you are
allowing customers to run applications that interface with the Kubernetes API, for example,
@@ -346,7 +346,7 @@ service that comes with fewer performance guarantees and features and a for-fee
specific performance guarantees. Fortunately, there are several Kubernetes constructs that can
help you accomplish this within a shared cluster, including network QoS, storage classes, and pod
priority and preemption. The idea with each of these is to provide tenants with the quality of
service that they paid for. Let's start by looking at networking QoS.
Typically, all pods on a node share a network interface. Without network QoS, some pods may
consume an unfair share of the available bandwidth at the expense of other pods.
@@ -356,7 +356,7 @@ for networking that allows you to use Kubernetes resources constructs, i.e. requ
apply rate limits to pods by using Linux tc queues.
Be aware that the plugin is considered experimental as per the
[Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#support-traffic-shaping)
documentation and should be thoroughly tested before use in production environments.
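
If you do use it, the plugin is driven by Pod annotations; a sketch (the Pod name and
image are hypothetical):

```yaml
# A sketch: per-Pod bandwidth limits, honored only when the CNI bandwidth
# plugin (traffic shaping support) is enabled on the node.
apiVersion: v1
kind: Pod
metadata:
  name: rate-limited-app                     # hypothetical name
  annotations:
    kubernetes.io/ingress-bandwidth: 10M     # cap inbound traffic
    kubernetes.io/egress-bandwidth: 10M      # cap outbound traffic
spec:
  containers:
    - name: app
      image: registry.example/app:1.0
```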
For storage QoS, you will likely want to create different storage classes or profiles with
different performance characteristics. Each storage profile can be associated with a different
@@ -396,14 +396,14 @@ that supports multiple tenants.
[Operators](/docs/concepts/extend-kubernetes/operator/) are Kubernetes controllers that manage
applications. Operators can simplify the management of multiple instances of an application, like
a database service, which makes them a common building block in the multi-consumer (SaaS)
multi-tenancy use case.
Operators used in a multi-tenant environment should follow a stricter set of guidelines.
Specifically, the Operator should:
* Support creating resources within different tenant namespaces, rather than just in the namespace
in which the Operator is deployed.
* Ensure that the Pods are configured with resource requests and limits, to ensure scheduling and fairness.
* Support configuration of Pods for data-plane isolation techniques such as node isolation and
sandboxed containers.
@@ -416,7 +416,7 @@ There are two primary ways to share a Kubernetes cluster for multi-tenancy: usin
plane per tenant).
In both cases, data plane isolation, and management of additional considerations such as API
Priority and Fairness, is also recommended.
Namespace isolation is well-supported by Kubernetes, has a negligible resource cost, and provides
mechanisms to allow tenants to interact appropriately, such as by allowing service-to-service
@@ -492,14 +492,14 @@ referred to as a _virtual control plane_.
A virtual control plane typically consists of the Kubernetes API server, the controller manager,
and the etcd data store. It interacts with the super cluster via a metadata synchronization
controller which coordinates changes across tenant control planes and the control plane of the
super-cluster.
By using per-tenant dedicated control planes, most of the isolation problems due to sharing one
API server among all tenants are solved. Examples include noisy neighbors in the control plane,
uncontrollable blast radius of policy misconfigurations, and conflicts between cluster scope
objects such as webhooks and CRDs. Hence, the virtual control plane model is particularly
suitable for cases where each tenant requires access to a Kubernetes API server and expects the
full cluster manageability.
The improved isolation comes at the cost of running and maintaining an individual virtual control
plane per tenant. In addition, per-tenant control planes do not solve isolation problems in the
@@ -507,10 +507,9 @@ data plane, such as node-level noisy neighbors or security threats. These must s
separately.
The Kubernetes [Cluster API - Nested (CAPN)](https://github.com/kubernetes-sigs/cluster-api-provider-nested/tree/main/virtualcluster)
project provides an implementation of virtual control planes.
#### Other implementations
* [Kamaji](https://github.com/clastix/kamaji)
* [vcluster](https://github.com/loft-sh/vcluster)

@@ -0,0 +1,67 @@
---
title: Good practices for Kubernetes Secrets
description: >
Principles and practices for good Secret management for cluster administrators and application developers.
content_type: concept
weight: 70
---
<!-- overview -->
{{<glossary_definition prepend="In Kubernetes, a Secret is an object that"
term_id="secret" length="all">}}
The following good practices are intended for both cluster administrators and
application developers. Use these guidelines to improve the security of your
sensitive information in Secret objects, as well as to more effectively manage
your Secrets.
<!-- body -->
### Security recommendations for developers
- Applications still need to protect the value of confidential information after reading it
from an environment variable or volume. For example, your application must avoid logging
the secret data in the clear or transmitting it to an untrusted party.
- If you are defining multiple containers in a Pod, and only one of those
containers needs access to a Secret, define the volume mount or environment
variable configuration so that the other containers do not have access to that
Secret (see the sketch after this list).
- If you configure a Secret through a {{< glossary_tooltip text="manifest" term_id="manifest" >}},
with the secret data encoded as base64, sharing this file or checking it in to a
source repository means the secret is available to everyone who can read the manifest.
Base64 encoding is _not_ an encryption method; it provides no additional confidentiality
over plain text.
- When deploying applications that interact with the Secret API, you should
limit access using
[authorization policies](/docs/reference/access-authn-authz/authorization/) such as
[RBAC](/docs/reference/access-authn-authz/rbac/).
- In the Kubernetes API, `watch` and `list` requests for Secrets within a namespace
are extremely powerful capabilities. Avoid granting this access where feasible, since
listing Secrets allows the clients to inspect the values of every Secret in that
namespace.
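
To illustrate the multi-container recommendation above, here is a sketch of a Pod in
which only one of two containers can read the Secret; all names are hypothetical:

```yaml
# A sketch: the Secret volume is mounted only into the "app" container,
# so the "sidecar" container cannot read it.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
    - name: credentials
      secret:
        secretName: app-credentials       # hypothetical Secret
  containers:
    - name: app
      image: registry.example/app:1.0
      volumeMounts:
        - name: credentials
          mountPath: /etc/credentials
          readOnly: true
    - name: sidecar
      image: registry.example/sidecar:1.0
      # no volumeMounts entry for "credentials": the Secret stays invisible here
```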
### Security recommendations for cluster administrators
{{< caution >}}
A user who can create a Pod that uses a Secret can also see the value of that Secret. Even
if cluster policies do not allow a user to read the Secret directly, the same user could
have access to run a Pod that then exposes the Secret.
{{< /caution >}}
- Reserve the ability to `watch` or `list` all secrets in a cluster (using the Kubernetes
API), so that only the most privileged, system-level components can perform this action.
- When deploying applications that interact with the Secret API, you should
limit access using
[authorization policies](/docs/reference/access-authn-authz/authorization/) such as
[RBAC](/docs/reference/access-authn-authz/rbac/).
- In the API server, objects (including Secrets) are persisted into
{{< glossary_tooltip term_id="etcd" >}}; therefore:
- only allow cluster administrators to access etcd (this includes read-only access);
- enable [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
for Secret objects, so that the data of these Secrets are not stored in the clear
in {{< glossary_tooltip term_id="etcd" >}} (see the configuration sketch after this list);
- consider wiping / shredding the durable storage used by etcd once it is
no longer in use;
- if there are multiple etcd instances, make sure that etcd is
using SSL/TLS for communication between etcd peers.
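
As referenced in the list above, a sketch of an EncryptionConfiguration that encrypts
Secret data before it reaches etcd; the key is a placeholder that you must generate and
protect yourself:

```yaml
# A sketch: encrypt Secrets at rest. Pass this file to the API server with
# --encryption-provider-config. Generate a real key, for example:
#   head -c 32 /dev/urandom | base64
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      - identity: {}   # fallback so data written before encryption stays readable
```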

@@ -6,13 +6,22 @@ full_link: /docs/concepts/configuration/secret/
short_description: >
Stores sensitive information, such as passwords, OAuth tokens, and ssh keys.
aka:
tags:
- core-object
- security
---
-Stores sensitive information, such as passwords, OAuth tokens, and ssh keys.
+Stores sensitive information, such as passwords, OAuth tokens, and SSH keys.
<!--more-->
-Allows for more control over how sensitive information is used and reduces the risk of accidental exposure. Secret values are encoded as base64 strings and stored unencrypted by default, but can be configured to be [encrypted at rest](/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted). A {{< glossary_tooltip text="Pod" term_id="pod" >}} references the secret as a file in a volume mount or by the kubelet pulling images for a pod. Secrets are great for confidential data and [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/) for non-confidential data.
+Secrets give you more control over how sensitive information is used and reduce
+the risk of accidental exposure. Secret values are encoded as base64 strings and
+are stored unencrypted by default, but can be configured to be
+[encrypted at rest](/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted).
+A {{< glossary_tooltip text="Pod" term_id="pod" >}} can reference the Secret in
+a variety of ways, such as in a volume mount or as an environment variable.
+Secrets are designed for confidential data and
+[ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/) are
+designed for non-confidential data.
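
For illustration, a sketch of both reference styles; the Secret, key, and image names
are hypothetical:

```yaml
# A sketch: consume one Secret key as an environment variable and the whole
# Secret as files in a volume.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  volumes:
    - name: creds
      secret:
        secretName: db-credentials     # hypothetical Secret
  containers:
    - name: app
      image: registry.example/app:1.0
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password            # one key exposed as an env var
      volumeMounts:
        - name: creds
          mountPath: /etc/creds        # all keys exposed as files
          readOnly: true
```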