Improved openshift, docker, ansible references (#12201)
* Fixed Openshift -> OpenShift
* Fixed ansible -> Ansible
* Fixed docker -> Docker

parent 1bc0d7c385
commit aa0da8f34a
@@ -27,7 +27,7 @@ E2E issues and LGTM process
* Question/concern to work out is securing Jenkins. Short term conclusion: Will look at pushing Jenkins logs into GCS bucket. Lavalamp will follow up with Jeff Grafton.
-* Longer term solution may be a merge queue, where e2e runs for each merge (as opposed to multiple merges). This exists in Openshift today.
+* Longer term solution may be a merge queue, where e2e runs for each merge (as opposed to multiple merges). This exists in OpenShift today.
Cluster Upgrades for Kubernetes as final v1 feature
@@ -18,7 +18,7 @@ We see kompose as a terrific way to expose Kubernetes principles to Docker users
Over the summer, Kompose has found a new gear with help from Tomas Kral and Suraj Deshmukh from Red Hat, and Janet Kuo from Google. Together with our own lead kompose developer Nguyen An-Tu they are making kompose even more exciting. We proposed Kompose to the Kubernetes Incubator within the SIG-apps and we received approval from the general Kubernetes community; you can now find kompose in the [Kubernetes Incubator](https://github.com/kubernetes-incubator/kompose).
-Kompose now supports Docker-compose v2 format, persistent volume claims have been added recently, as well as multiple container per pods. It can also be used to target Openshift deployments, by specifying a different provider than the default Kubernetes. Kompose is also now available in Fedora packages and we look forward to see it in CentOS distributions in the coming weeks.
+Kompose now supports Docker-compose v2 format, persistent volume claims have been added recently, as well as multiple container per pods. It can also be used to target OpenShift deployments, by specifying a different provider than the default Kubernetes. Kompose is also now available in Fedora packages and we look forward to see it in CentOS distributions in the coming weeks.
kompose is a single Golang binary that you build or install from the [release on GitHub](https://github.com/kubernetes-incubator/kompose). Let’s skip the build instructions and dive straight into an example.
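
A minimal sketch of that example, assuming a `docker-compose.yaml` in the working directory (flag spellings can vary between kompose releases):

```shell
# Generate Kubernetes manifests from the Compose file (default provider)
kompose convert -f docker-compose.yaml

# Generate OpenShift artifacts instead by selecting a different provider
kompose convert -f docker-compose.yaml --provider openshift
```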
@@ -109,7 +109,7 @@ For a detailed list of changes in the containerd 1.1 release, please see the rel
To setup a Kubernetes cluster using containerd as the container runtime:
* For a production quality cluster on GCE brought up with kube-up.sh, see [here](https://github.com/containerd/cri/blob/v1.0.0/docs/kube-up.md).
-* For a multi-node cluster installer and bring up steps using ansible and kubeadm, see [here](https://github.com/containerd/cri/blob/v1.0.0/contrib/ansible/README.md).
+* For a multi-node cluster installer and bring up steps using Ansible and kubeadm, see [here](https://github.com/containerd/cri/blob/v1.0.0/contrib/ansible/README.md).
* For creating a cluster from scratch on Google Cloud, see [Kubernetes the Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way).
* For a custom installation from release tarball, see [here](https://github.com/containerd/cri/blob/v1.0.0/docs/installation.md).
* To install using LinuxKit on a local VM, see [here](https://github.com/linuxkit/linuxkit/tree/master/projects/kubernetes).
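
Complementing the installer links above, a minimal sketch of pointing kubeadm at containerd (the socket path shown is containerd's conventional default and is an assumption about your install):

```shell
# Initialize a control-plane node, telling kubeadm to use containerd's CRI socket
kubeadm init --cri-socket /run/containerd/containerd.sock
```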
@@ -29,7 +29,7 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a network solution supporting multiple networking in Kubernetes.
* [Multus](https://github.com/Intel-Corp/multus-cni) is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
-* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and Openshift.
+* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
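
Many of the add-ons above are installed by applying a single manifest to the cluster. As an illustrative sketch only (the manifest URL is an assumption and may have moved), Flannel is typically deployed like this:

```shell
# Deploy the Flannel overlay network into the cluster (URL is illustrative)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```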
@@ -125,7 +125,7 @@ Details on how the AOS system works can be accessed here: http://www.apstra.com/
[Big Cloud Fabric](https://www.bigswitch.com/container-network-automation) is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments. Using unified physical & virtual SDN, Big Cloud Fabric tackles inherent container networking problems such as load balancing, visibility, troubleshooting, security policies & container traffic monitoring.
-With the help of the Big Cloud Fabric's virtual pod multi-tenant architecture, container orchestration systems such as Kubernetes, RedHat Openshift, Mesosphere DC/OS & Docker Swarm will be natively integrated along side with VM orchestration systems such as VMware, OpenStack & Nutanix. Customers will be able to securely inter-connect any number of these clusters and enable inter-tenant communication between them if needed.
+With the help of the Big Cloud Fabric's virtual pod multi-tenant architecture, container orchestration systems such as Kubernetes, RedHat OpenShift, Mesosphere DC/OS & Docker Swarm will be natively integrated along side with VM orchestration systems such as VMware, OpenStack & Nutanix. Customers will be able to securely inter-connect any number of these clusters and enable inter-tenant communication between them if needed.
BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](http://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on-premises deployments (which includes Kubernetes, DC/OS & VMware running on multiple DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/).
@@ -262,7 +262,7 @@ Multus supports all [reference plugins](https://github.com/containernetworking/p
[VMware NSX-T](https://docs.vmware.com/en/VMware-NSX-T/index.html) is a network virtualization and security platform. NSX-T can provide network virtualization for a multi-cloud and multi-hypervisor environment and is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks. In addition to vSphere hypervisors, these environments include other hypervisors such as KVM, containers, and bare metal.
-[NSX-T Container Plug-in (NCP)](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) provides integration between NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and Openshift.
+[NSX-T Container Plug-in (NCP)](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) provides integration between NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
### Nuage Networks VCS (Virtualized Cloud Services)
@@ -127,7 +127,7 @@ juju config kubernetes-worker allow-privileged=true
### Private registry
-With the registry action, you can easily create a private docker registry that
+With the registry action, you can easily create a private Docker registry that
uses TLS authentication. However, note that a registry deployed with that action
is not HA; it uses storage tied to the kubernetes node where the pod is running.
Consequently, if the registry pod is migrated from one node to another, you will
@@ -288,7 +288,7 @@ Various configuration files are written into the home directory *CLC_CLUSTER_HOME*
to access the cluster from machines other than where you created the cluster from.
* ```config/```: Ansible variable files containing parameters describing the master and minion hosts
-* ```hosts/```: hosts files listing access information for the ansible playbooks
+* ```hosts/```: hosts files listing access information for the Ansible playbooks
* ```kube/```: ```kubectl``` configuration files, and the basic-authentication password for admin access to the Kubernetes API
* ```pki/```: public key infrastructure files enabling TLS communication in the cluster
* ```ssh/```: SSH keys for root access to the hosts
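
For example, to reach the cluster from another machine you would copy *CLC_CLUSTER_HOME* across and point `kubectl` at the generated admin configuration; the exact kubeconfig filename under `kube/` is an assumption here:

```shell
# Point kubectl at the generated admin kubeconfig (filename is illustrative)
export KUBECONFIG=${CLC_CLUSTER_HOME}/kube/config
kubectl get nodes
```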
@@ -143,7 +143,7 @@ Open the file in an editor and replace the following values for `ClusterConfiguration`
You must also modify the `ClusterStatus` to add a mapping for the current host under apiEndpoints.
-Add an annotation for the cri-socket to the current node, for example to use docker:
+Add an annotation for the cri-socket to the current node, for example to use Docker:
```shell
kubectl annotate node <nodename> kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
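# If the node runs containerd rather than Docker, the socket path differs; the
# path below is containerd's conventional default and is shown only as a sketch.
kubectl annotate node <nodename> kubeadm.alpha.kubernetes.io/cri-socket=/run/containerd/containerd.sock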
@@ -297,7 +297,7 @@ INFO OpenShift file "foo-buildconfig.yaml" created
```
{{< note >}}
-If you are manually pushing the Openshift artifacts using ``oc create -f``, you need to ensure that you push the imagestream artifact before the buildconfig artifact, to workaround this Openshift issue: https://github.com/openshift/origin/issues/4518 .
+If you are manually pushing the OpenShift artifacts using ``oc create -f``, you need to ensure that you push the imagestream artifact before the buildconfig artifact, to workaround this OpenShift issue: https://github.com/openshift/origin/issues/4518 .
{{< /note >}}
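
In practice that ordering looks like the sketch below; only `foo-buildconfig.yaml` appears in the output above, so the imagestream filename is illustrative:

```shell
# Create the imagestream first, then the buildconfig that references it
oc create -f foo-imagestream.yaml
oc create -f foo-buildconfig.yaml
```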
## `kompose up`
@@ -88,7 +88,7 @@ Just create `node-problem-detector.yaml`, and put it under the addon pods direct
## Overwrite the Configuration
The [default configuration](https://github.com/kubernetes/node-problem-detector/tree/v0.1/config)
-is embedded when building the docker image of node problem detector.
+is embedded when building the Docker image of node problem detector.
However, you can use [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to overwrite it
following the steps:
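
The detailed steps fall outside this diff hunk; as a rough sketch only (resource and path names are assumptions), the override usually begins by packaging an edited configuration directory into a ConfigMap:

```shell
# Bundle a locally edited configuration directory into a ConfigMap that the
# node-problem-detector DaemonSet can mount (names and paths are illustrative)
kubectl create configmap node-problem-detector-config --from-file=config/
```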