ZH-trans: merge release1.16-temporary to master (#18217)

* zh-trans:/docs/docs/concepts/workloads/pods/ephemeral-containers.md (#16948)

* update zh-trans of define-environment-variable-container.md (#16999)

Signed-off-by: Yixiang2019 <wang.yixiang@zte.com.cn>

* update chinese docs (#16985)

* Fix ordered list (#16988)

Signed-off-by: PingWang <wang.ping5@zte.com.cn>

update

Signed-off-by: PingWang <wang.ping5@zte.com.cn>

* zh-trans:/docs/reference/setup-tools/kubeadm/kubeadm-upgrade-phase.md (#16951)

Signed-off-by: PingWang <wang.ping5@zte.com.cn>

update

Signed-off-by: PingWang <wang.ping5@zte.com.cn>

update

Signed-off-by: PingWang <wang.ping5@zte.com.cn>

* Remove redundant symbol and fix some ordered list (#17000)

Signed-off-by: PingWang <wang.ping5@zte.com.cn>

update

Signed-off-by: PingWang <wang.ping5@zte.com.cn>

* update-zh-translation/docs/reference/setup-tools/kubeadm/kubeadm-init.md (#16997)

* update zh translation kubeadm-reset.md kubeadm-upgrade.md (#16992)

* Create kubeadm_join_phase_control-plane-join_all.md (#16987)

* Update web-ui-dashboard.md (#16976)

* update format problem (#16956)

Signed-off-by: PingWang <wang.ping5@zte.com.cn>

update

Signed-off-by: PingWang <wang.ping5@zte.com.cn>

* zh-trans:/docs/docs/concepts/storage/volume-pvc-datasource.md (#17021)

* update zh translation /docs/reference/access-authn-authz/webhook.md (#16860)

* fix conflict update zh translation (#16863)

* zh-trans:/docs/concepts/workloads/pods/disruptions.md (#16983)

* zh-trans:/docs/concepts/workloads/pods/disruptions.md

* Update content/zh/docs/concepts/workloads/pods/disruptions.md

Co-Authored-By: Qiming <tengqim@cn.ibm.com>

* update zh translation content/zh/docs/reference/command-line-tools-reference/kube-scheduler.md (#17006)

* Update RC's link (#16935)

Signed-off-by: PingWang <wang.ping5@zte.com.cn>

Update

Signed-off-by: PingWang <wang.ping5@zte.com.cn>

update the style

Signed-off-by: PingWang <wang.ping5@zte.com.cn>

* Update the links for /zh/docs/setup (#16938)

Signed-off-by: PingWang <wang.ping5@zte.com.cn>

* zh-trans:/docs/docs/concepts/services-networking/dual-stack.md (#17024)

* update zh translation /reference/setup-tools/kubeadm/generated/kubeadm.md (#17036)

* update zh translation /reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config_download.md (#17037)

* update zh translation -/reference/setup-tools/kubeadm/generated/kubeadm_alpha.md (#17038)

* update zh translation /docs/contribute/participating.md (#17040)

* Fix cri-o's links to match English docs (#16936)

Signed-off-by: PingWang <wang.ping5@zte.com.cn>

* update zh translation content/zh/docs/reference/kubectl/jsonpath.md (#16862)

* update zh translation /docs/reference/setup-tools/kubeadm/generated/kubeadm_token_generate.md (#17049)

* update Unkown ->  Unknown (#17062)

* zh-translation:high-availability.md (#16960)

Signed-off-by: yuxiaobo <yuxiaobogo@163.com>

* update Runnning -> Running (#17061)

* zh-trans:/docs/docs/concepts/storage/volume-snapshots.md (#17054)

* update zh translation /docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md (#17060)

* update zh translation /docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md (#17052)

* update zh translation /docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md (#17055)

* update zh translation update-zh-translation-/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_list.md (#17048)

* update zh-translation:ha-topology.md (#17099)

Signed-off-by: yuxiaobo <yuxiaobogo@163.com>

* zh-trans /reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md (#17093)

* update zh translation /reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_all.md (#17083)

* Add Chinese translation for scheduler-perf-tuning (#17087)

Signed-off-by: GeorgeSen <wang.sen2@zte.com.cn>

Add Chinese translation for scheduler-perf-tuning

Signed-off-by: GeorgeSen <wang.sen2@zte.com.cn>

Add Chinese translation for scheduler-perf-tuning

Signed-off-by: GeorgeSen <wang.sen2@zte.com.cn>

Add Chinese translation for scheduler-perf-tuning

Signed-off-by: GeorgeSen <wang.sen2@zte.com.cn>

* zh trans content/zh/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md (#17121)

* update zh trans content/zh/docs/tasks/access-application-cluster/service-access-application-cluster.md (#17122)

* zh-translation:content/zh/docs/tasks/administer-cluster/namespaces-walkthrough.md (#17105)

Signed-off-by: yuxiaobo <yuxiaobogo@163.com>

* update-zh-translation-/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_migrate.md (#17091)

* update zh trans /docs/setup/learning-environment/minikube.md (#17143)

* update zh trans /zh/docs/reference/_index.md (#17146)

* update zh trans /docs/tasks/access-application-cluster/port-forward-access-application-cluster.md (#17134)

* update zh translation 20191020-update-zh-translation-/docs/contribute/localization.md (#17046)

* update zh trans /docs/reference/using-api/client-libraries.md (#17144)

* update zh trans /docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md (#17145)

* update zh /docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node.md (#17131)

* update zh translation /reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-client.md (#17081)

* update zh translation /docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig.md (#17078)

* add zh translation /docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_update-status.md (#17076)

* update zh trans content/zh/docs/contribute/generate-ref-docs/kubectl.md (#17165)

* update zh translation /reference/command-line-tools-reference/kube-proxy.md (#17107)

* zh-trans:/docs/docs/concepts/workloads/pods/pod-topology-spread-const… (#16955)

* zh-trans:/docs/docs/concepts/workloads/pods/pod-topology-spread-constraints.md

* Update pod-topology-spread-constraints.md

* zh-trans:/docs/concepts/configuration/scheduling-framework.md (#17088)

* update zh translation /docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubelet.md (#17079)

* update zh translation /docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md (#17077)

* update zh translation /docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join.md (#17075)

* update zh translation /docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md (#17050)

* update zh trans /docs/concepts/overview/working-with-objects/namespaces.md (#17205)

* update zh trans /docs/concepts/overview/what-is-kubernetes.md (#17203)

* pr_release-1.16_crictl (#17201)

* update zh docs/tasks/administer-cluster/dns-debugging-resolution.md (#17186)

* update zh trans content/zh/docs/concepts/overview/working-with-objects/field-selectors.md (#17191)

* translate configure_upgrade_etcd (#17160)

* translate kubeadm_upgrade_apply (#17159)

* update zh /docs/tasks/job/coarse-parallel-processing-work-queue.md (#17155)

* translate docs/setup/release/version-skew-policy.md to Chinese (#17142)

* update zh trans content/zh/docs/tasks/access-application-cluster/create-external-load-balancer.md (#17124)

* zh-trans replace the wrong translation (#17103)

zh-trans replace the wrong translation

* Merged 1.14~1.16 changes (#17117)

* zh-trans:/docs/concepts/configuration/assign-pod-node.md (#17129)

* update zh trans /docs/contribute/generate-ref-docs/kubernetes-api.md (#17161)

* pr_release-1.16_basic-ss (#17169)

* Add zh-trans of assign-cpu-resource.md (#17063)

Signed-off-by: heqg <he.qingguo@zte.com.cn>

Add zh-trans of assign-cpu-resource.md

Signed-off-by: heqg <he.qingguo@zte.com.cn>

Add zh-trans of assign-cpu-resource.md

Signed-off-by: heqg <he.qingguo@zte.com.cn>

Add zh-trans of assign-cpu-resource.md

Signed-off-by: heqg <he.qingguo@zte.com.cn>

Add zh-trans of assign-cpu-resource.md

Signed-off-by: heqg <he.qingguo@zte.com.cn>

* update zh trans content/zh/docs/concepts/overview/working-with-objects/common-labels.md (#17193)

* update zh translation /docs/contribute/intermediate.md (#17041)

* zh-trans:docs/setup/production-environment/turnkey/tencent.md (#17207)

* pr_release-1.16_mysql-wordpress-pv (#17202)

* pr_release-1.16_reconfig-kubelet (#17200)

* zh-trans:/docs/docs/concepts/workloads/controllers/jobs-run-completio… (#17020)

* zh-trans:/docs/docs/concepts/workloads/controllers/jobs-run-completion.md

* Update jobs-run-completion.md

* Update jobs-run-completion.md

* update zh trans content/zh/docs/concepts/extend-kubernetes/extend-cluster.md (#17212)

* update zh trans content/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md (#17213)

* add zh trans /docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md (#17220)

kubeadm_join_phase_control-plane-prepare_download-certs.md

* add zh trans /reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs.md and /reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare.md (#17219)

* update zh trans content/zh/docs/concepts/containers/images.md (#17216)

* ZH-trans: add _index.md (#17214)

* ZH-trans: add _index.md

* add _index.md file

* add _index.md files

* pr_release-1.16_crd-versions (#17198)

* update zh translation /docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md (#17092)

* zh-trans /reference/setup-tools/kubeadm/generated/kubeadm_config_images.md (#17094)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_kube-proxy.md   and   content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_sa.md (#17222)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md  content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md (#17221)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md (#17223)

* 013 /docs/concepts/services networking/ingress.md (#17185)

* x

* update zh trans content/zh/docs/concepts/services-networking/ingress.md

* zh-trans:/reference/setup-tools/kubeadm/generated/kubeadm_completion.md and kubeadm_config.md (#17097)

* add zh trans reference/glossary/pod-lifecycle (#17226)

* update zh-translation document (#17096)

Signed-off-by: yuxiaobo <yuxiaobogo@163.com>

* update zh-translation:setup-ha-etcd-with-kubeadm.md (#17098)

Signed-off-by: yuxiaobo <yuxiaobogo@163.com>

* add zh trans /docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_selfhosting.md and /docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config_enable-dynamic.md (#17227)

* add zh trans /docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubeconfig_user.md and /docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md (#17228)

* zh-translation:content/zh/docs/tasks/extend-kubectl/kubectl-plugins.md (#17089)

Signed-off-by: yuxiaobo <yuxiaobogo@163.com>

* update zh translation /docs/reference/setup-tools/kubeadm/generated/kubeadm_join.md (#17080)

* update zh translation /reference/setup-tools/kubeadm/generated/kubeadm_config_view.md (#17085)

* zh-translation:troubleshooting-kubeadm.md (#17069)

Signed-off-by: yuxiaobo <yuxiaobogo@163.com>

* zh-trans: docs/concepts/scheduling/kube-scheduler.md (#17067)

* Add Chinese translation for kube-scheduler

Signed-off-by: GeorgeSen <wang.sen2@zte.com.cn>

Add Chinese translation for kube-scheduler

Signed-off-by: GeorgeSen <wang.sen2@zte.com.cn>

Add Chinese translation for kube-scheduler

Signed-off-by: GeorgeSen <wang.sen2@zte.com.cn>

Add Chinese translation for kube-scheduler

Signed-off-by: GeorgeSen <wang.sen2@zte.com.cn>

Add Chinese translation for kube-scheduler

Signed-off-by: GeorgeSen <wang.sen2@zte.com.cn>

Add Chinese translation for kube-scheduler

Signed-off-by: GeorgeSen <wang.sen2@zte.com.cn>

Add Chinese translation for kube-scheduler

Signed-off-by: GeorgeSen <wang.sen2@zte.com.cn>

Add Chinese translation for kube-scheduler

Signed-off-by: GeorgeSen <wang.sen2@zte.com.cn>

* Update kube-scheduler.md

* Add Chinese translation for kube-scheduler

* Update kube-scheduler.md

* Update kube-scheduler.md

* Update kube-scheduler.md

* translate pods.md  and init-containers.md for branch release-1.16 (#17208)

* update zh translation /docs/contribute/start.md (#17039)

* zh-translation:2017-10-00-Five-Days-Of-Kubernetes-18.md (#17229)

Signed-off-by: yuxiaobo <yuxiaobogo@163.com>

* translate kubeadm-certs.md (#17090)

* update the illegal comments such as (<!--, <--) (#17266)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase.md and content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md (#17268)

* update zh /docs/concepts/architecture/cloud-controller.md (#17263)

* update zh /docs/concepts/cluster-administration/logging.md (#17247)

* modify the show of zh translation /concepts/overview/what-is-kubernetes.md /reference/setup-tools/kubeadm/generated/kubeadm_init.md  /reference/setup-tools/kubeadm/kubeadm-init.md (#17246)

* zh-translation:kubeadm_init_phase_control-plane_all.md (#17243)

Signed-off-by: yuxiaobo <yuxiaobogo@163.com>

* zh-translation:kubeadm_join_phase_control-plane-join_etcd.md (#17236)

Signed-off-by: yuxiaobo <yuxiaobogo@163.com>

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md and content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md (#17230)

* add content/zh/docs/reference/glossary/container-runtime.md file fix-up to pass the ci and trans content/zh/docs/reference/glossary/container-runtime.md、content/zh/docs/concepts/overview/components.md (#17211)

* zh-translation:mirror-pod.md (#17231)

Signed-off-by: yuxiaobo <yuxiaobogo@163.com>

* zh-translation:kubeadm_init_phase_upload-config.md (#17238)

Signed-off-by: yuxiaobo <yuxiaobogo@163.com>

* update zh /docs/concepts/workloads/pods/pod-overview.md (#17239)

* update the format of zh translation content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade.md   content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md  content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md  content/zh/docs/reference/setup-tools/kubeadm/kubeadm-config.md  content/zh/docs/reference/setup-tools/kubeadm/kubeadm-reset.md  content/zh/docs/reference/setup-tools/kubeadm/kubeadm-token.md  content/zh/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md (#17248)

* Create advanced.md (#17256)

* Create advanced.md

* trans the advanced.md and fix the build bugs

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane.md and content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_scheduler.md (#17269)

* zh-translation:kubelet-integration.md (#17272)

Signed-off-by: yuxiaobo <yuxiaobogo@163.com>

* pr_release-1.16_out-of-resource (#17199)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-ca.md and content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_controller-manager.md (#17267)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-peer.md and content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md (#17271)

* pr_release-1.16_ext-admission-ctl (#17196)

* update zh translation /docs/contribute/advanced.md (#17042)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-etcd-client.md and content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_ca.md (#17275)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print.md update zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_config.md、content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print_init-defaults.md (#17282)

* add zh trans  content/zh/docs/reference/setup-tools/kubeadm/generated… (#17287)

* add zh trans  content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd.md content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md update zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane.md

* Update kubeadm_init_phase_etcd.md

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/… (#17285)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-server.md content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet.md update zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver.md

* Update kubeadm_alpha_kubelet.md

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_all.md and content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon.md fix-up bad comment kubeadm_alpha_certs_renew.md、kubeadm_alpha_certs_renew_apiserver-etcd-client.md、kubeadm_init_phase_addon_all.md (#17276)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/… (#17281)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd_local.md and content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print_join-defaults.md

* Update kubeadm_init_phase_etcd_local.md

* zh trans update-daemon-set.md (#16872)

* zh trans update-daemon-set.md

* Update update-daemon-set.md

* managing-tls-in-a-cluster.md (#16874)

* update-api-object-kubectl-patch.md (#16875)

* improve the zh trans /kubeadm/generated/kubeadm_init_phase_.* 1 (#17295)

* improve the zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_.* (#17294)

* modify the zh translation content/zh/docs/reference/setup-tools/kubea… (#17293)

* modify the zh translation content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_.*

* Update kubeadm_alpha.md

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md and content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md (#17280)

* add zh /docs/reference/glossary/cgroup.md (#17291)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config.md、content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-kubelet-client.md (#17288)

* improve zh trans of command in /kubeadm/generated/kubeadm_init_phase_.* files (#17297)

* improve zh command translation /kubeadm/generated/kubeadm_init_.* files (#17298)

* update zh trans in /kubeadm/generated/kubeadm_.* files (#17306)

* add zh trans /reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-healthcheck-client.md、/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-peer.md、/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubeconfig.md (#17308)

* Improve previously translated documents (#17327)

Signed-off-by: yuxiaobo <yuxiaobogo@163.com>

* zh-trans: docs/setup/production-environment/turnkey/aws.md (#17320)

* Update zh.toml

* update zh trans /generated/kubeadm_join_phase_.* files (#17301)

* Update zh.toml

* add zh /docs/reference/glossary/pod-disruption-budget.md (#17344)

* pr_release-1.16_config-aggregation-layer (#17197)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_controller-manager.conf.md、content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_scheduler.conf.md (#17390)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md、content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase.md、content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_control-plane.md (#17381)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-kubelet-client.md、content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md、content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md (#17380)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md、content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_bootstrap-token.md、content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver.md (#17383)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_certificate-key.md、content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_preflight.md、content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_update-cluster-status.md (#17385)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_admin.conf.md、content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase.md、content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md (#17387)

* add zh trans content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-kubelet-client.md、content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_front-proxy-client.md、content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_selfhosting_pivot.md (#17384)

* add zh /docs/reference/glossary/limitrange.md (#17324)

* add zh trans content/zh/docs/setup/best-practices/cluster-large.md (#17321)

* add zh trans content/zh/docs/setup/best-practices/cluster-large.md

* Update cluster-large.md

* add zh trans /docs/reference/setup-tools/kubeadm/kubeadm-alpha.md、/do… (#17403)

* add zh trans /docs/reference/setup-tools/kubeadm/kubeadm-alpha.md、/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_preflight.md、/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-server.md

* Update kubeadm_init_phase_certs_etcd-server.md

* add zh /docs/reference/glossary/cluster-operations.md (#17300)

* add zh /docs/reference/glossary/applications.md (#17419)

* update zh trans kubelet (#17379)

* update zh trans kubelet

* update the file according to feedback from reviewer tengqm

* update 1000-1757 lines

* update the advice zh trans

* add zh /docs/reference/glossary/static-pod.md (#17418)

* add zh /docs/reference/glossary/preemption.md (#17423)

* add zh /docs/reference/glossary/pod-priority.md (#17421)

* add zh /docs/reference/glossary/control-plane.md (#17425)

* add zh /docs/reference/glossary/cluster-infrastructure.md (#17424)

* pr_release-1.16_api-overview (#17444)

* pr_release-1.16_daemonset (#17435)

* pr_release-1.16_gc (#17445)

* pr_release-1.16_qos-class (#17442)

* fix QoS Class to QoS 类 (#17464)

* pr_release-1.16-abac (#17427)

* zh-trans:docs/setup/production-environment/turnkey/alibaba-cloud.md (#17345)

* pr_release-1.16_endpoint-slice (#17468)

* pr_release-1.16_taint (#17469)

* pr_release-1.16_operator-pattern (#17467)

* pr_release-1.16_ss (#17433)

* pr_release-1.16_admission-controller (#17440)

* pr_release-1.16_containerd (#17441)

* pr_release-1.16_app-container (#17466)

* add zh-trans content/zh/docs/setup/_index.md、content/zh/docs/setup/release/_index.md (#17503)

* pr-release-1.16_enabling-endpoint-slices (#17504)

* update zh trans content/zh/docs/reference/setup-tools/kubeadm/kubeadm… (#17495)

* update zh trans content/zh/docs/reference/setup-tools/kubeadm/kubeadm-upgrade-phase.md、content/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md

* add content/zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md

* pr-release-1.16_logging (#17491)

* pr-release-1.16_cri (#17486)

* add zh-trans:docs/setup/production-environment/turnkey/azure.md (#17482)

* pattern translate into 模式 (#17485)

* fix zk affinity description in zh trans (#17393)

* pr-release-1.16_ephemeral-container (#17487)

* pr-release-1.16_data-plane (#17489)

* pr-release-1.16_cncf (#17490)

* zh-trans add content\zh\docs\tools\install-minikube.md (#16920)

* zh-trans add content\zh\docs\tools\install-minikube.md

* zh-trans update content\zh\docs\tools\install-minikube.md

* zh-trans update content\zh\docs\tools\install-minikube.md

* update \docs\tasks\tools\install-minikube.md

* update docs\concepts\workloads\controllers\deployment.md

* update deployment.md

* pr-release-1.16_extensions (#17492)

* pr-release-1.16_toleration (#17493)

* Revert "update zh trans content/zh/docs/reference/setup-tools/kubeadm/kubeadm… (#17495)" (#17521)

This reverts commit 1134c14e0a.

* modify extensions in content/zh/docs/reference/glossary/extensions (#17524)

* modify toleration in content/zh/docs/reference/glossary/toleration.md (#17523)

* modify toleration in content/zh/docs/reference/glossary/toleration.md

* Update toleration.md

* improve zh-trans in content/zh/docs/setup/_index.md (#17526)

* add zh-trans /zh/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md (#17527)

* Broken Link (#17546)

Issue available at https://kubernetes.io/zh/docs/concepts/containers/runtime-class/
and introduced by original English documentation (see #17543)

* Update trans kubeadm_upgrade_plan.md (#17395)

* Update kubeadm_upgrade_plan.md

* Update kubeadm_upgrade_plan.md

* Update kubeadm_upgrade_plan.md

* Update kubeadm_upgrade_plan.md

* Update kubeadm_upgrade_plan.md

* Update kubeadm_upgrade_plan.md

* update zh-trans of define-command-argument-container.md (#17022)

Signed-off-by: Yixiang2019 <wang.yixiang@zte.com.cn>

update zh-trans of define-command-argument-container.md

Signed-off-by: Yixiang2019 <wang.yixiang@zte.com.cn>

Add back the Original English

Signed-off-by: Yixiang2019 <wang.yixiang@zte.com.cn>

update title of define-command-argument-container.md

Signed-off-by: Yixiang2019 <wang.yixiang@zte.com.cn>

update table title and reference

Signed-off-by: Yixiang2019 <wang.yixiang@zte.com.cn>

update reference of define-command-argument-container.md

Signed-off-by: Yixiang2019 <wang.yixiang@zte.com.cn>

* ZH-trans: fix multiple jump links and update files (#17603)

* ZH-trans: fix multiple jump links and update files

* Update _index.html

* update zh trans content/zh/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md (#17528)

* update zh trans /doc/concepts/architecture/nodes.md (#17617)

* update zh trans content/zh/docs/concepts/architecture/nodes.md

* fix-up content/zh/docs/concepts/architecture/nodes.md

* add zh-trans /docs/setup/release/notes.md (#17519)

* add zh-trans /docs/setup/release/notes.md

update-750

* fix-up line 1575 and update line 2483

* update to line 2980

* update to the last line 3160

* add zh-trans:docs/setup/production-environment/turnkey/icp.md (#17568)

* zh-trans: /docs/setup/production-environment/container-runtimes.md (#17646)

* zh-trs:container-runtimes.md

Signed-off-by: yuxiaobo <yuxiaobogo@163.com>

* Update container-runtimes.md

* Translate /docs/concepts/workloads/controllers/ttlafterfinished.md into Chinese (#17791)

* Translate /docs/concepts/cluster-administration/cloud-providers.md into Chinese

* Translate /docs/concepts/workloads/controllers/ttlafterfinished.md into Chinese

* Sorry, wrong commit, roll back...

* Translate /docs/concepts/workloads/controllers/ttlafterfinished.md

* Translate /docs/concepts/workloads/controllers/ttlafterfinished.md into Chinese

* Translate /docs/concepts/workloads/controllers/ttlafterfinished.md into Chinese

* Create trans kubeadm_upgrade_diff.md (#17392)

* Update kubeadm_upgrade_diff.md

* Update kubeadm_upgrade_diff.md

* Update kubeadm_upgrade_diff.md

* Create _index.md (#17831)

* zh-trans zh-trans-/docs/reference/command-line-tools-reference/feature-gates.md (#17658)

* Update volume-snapshots.md (#17829)

* Update volume-snapshots.md

* Update volume-snapshots.md

* Update volume-snapshots.md

* Create ovirt.md (#17849)

* Create ovirt.md

* Update ovirt.md

* Translate /docs/concepts/cluster-administration/cloud-providers.md into Chinese (#17761)

* Translate /docs/concepts/cluster-administration/cloud-providers.md into Chinese

* Translate /docs/concepts/workloads/controllers/ttlafterfinished.md into Chinese

* Sorry, wrong commit, roll back...

* Update translation after tengqm's review.

* Update cloud-providers.md

* Update dual-stack.md (#17836)

* Update dual-stack.md

* Update dual-stack.md

* Update dual-stack.md

* Update dual-stack.md

* Create validate-dual-stack.md (#17833)

* Create validate-dual-stack.md

* Update validate-dual-stack.md

* translation content/zh/docs/reference/setup-tools/kubeadm/ kubeadm-join-phase、kubeadm-reset-phase (#17881)

* zh-trans content/zh/docs/contribute/generate-ref-docs/contribute-upstream.md (#17882)

* Chinese translation /docs/tasks/administer-cluster/highly-available-master.md (#17884)

* Chinese translation /docs/tasks/administer-cluster/highly-available-master.md

* Apply suggestions from code review

Co-Authored-By: Qiming <tengqim@cn.ibm.com>

* Fix format issue (#17921)

* Create topology-manager.md (#17901)

* Create topology-manager.md

* Update topology-manager.md

* add zh-trans:docs/setup/production-environment/windows/user-guide-windows-containers.md (#17876)

* Update pod-overhead.md (#17931)

* Update scheduler-perf-tuning.md (#17934)

* Create dcos.md (#17932)

* Create dcos.md

* Update dcos.md

* Update object-management.md (#17937)

* zh-translation content/zh/docs/setup/production-environment/tools/kops.md (#17991)

* Create imperative-config.md (#17956)

* Create imperative-config.md

* Update imperative-config.md

* Create self-hosting.md (#17950)

* Create resource-bin-packing.md (#17935)

* Create nodelocaldns.md (#17938)

* Update config.toml(release-1.16) for 1.17 (#18025)

* Update config.toml(release-1.16) for 1.17

* Update config.toml

* Remove ru language

* Update the conflict and merge the two commits

* add nginx-deployment.yaml file

Co-authored-by: ZhongliangXiong <xiong.zhongliang@zte.com.cn>
Co-authored-by: Yixiang Wang <wang.yixiang@zte.com.cn>
Co-authored-by: li mengyang <hwdef97@gmail.com>
Co-authored-by: PingWang <wang.ping5@zte.com.cn>
Co-authored-by: chentanjun <tanjunchen20@gmail.com>
Co-authored-by: Sophy417 <53026875+Sophy417@users.noreply.github.com>
Co-authored-by: zhangx501 <zhang0000xun@gmail.com>
Co-authored-by: Qiming <tengqim@cn.ibm.com>
Co-authored-by: yuxiaobo96 <41496192+yuxiaobo96@users.noreply.github.com>
Co-authored-by: senwang <wang.sen2@zte.com.cn>
Co-authored-by: lichuqiang <lichuqiang@huawei.com>
Co-authored-by: lpf7551321 <liupengfei20@huawei.com>
Co-authored-by: Hongcai Ren <renhongcai@huawei.com>
Co-authored-by: jiajie <jiaj12@chinaunicom.cn>
Co-authored-by: heqg <56527988+heqg@users.noreply.github.com>
Co-authored-by: IreneByron <zhangbingqing7@huawei.com>
Co-authored-by: Wang Bing <wangbing.adam@gmail.com>
Co-authored-by: Damini Satya <daminisatya@gmail.com>
Co-authored-by: liufangwai <liufangwai@huawei.com>
Co-authored-by: wangcong <congfairy2536@gmail.com>
Co-authored-by: XuefeiWang2 <wangxuefei2@huawei.com>
Co-authored-by: Kubernetes Prow Robot <k8s-ci-robot@users.noreply.github.com>
Co-authored-by: Ziqiu Zhu <zzqshu@126.com>
Co-authored-by: ten2ton <50288981+ten2ton@users.noreply.github.com>
Co-authored-by: Oleg Butuzov <butuzov@users.noreply.github.com>
Co-authored-by: jiazxjason <52809535+jiazxjason@users.noreply.github.com>
Co-authored-by: Coffey Gao <coffiney@qq.com>
Co-authored-by: Bingshen Wang <bingshen.wbs@alibaba-inc.com>
pull/18293/head
LiuDui 2019-12-25 08:51:29 +08:00 committed by Kubernetes Prow Robot
parent 9135bdbe6d
commit b8b2cc7c74
933 changed files with 113346 additions and 8088 deletions


@@ -1,61 +1,82 @@
---
title: "生产级别的容器编排系统"
abstract: "自动化的容器部署、扩展和管理"
cid: "home"
cid: home
---
{{< deprecationwarning >}}
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}})
是一个开源系统,用于容器化应用的自动部署、扩缩和管理。
Kubernetes 将构成应用的容器按逻辑单位进行分组以便于管理和发现。
Kubernetes 基于 [谷歌公司在运行生产负载上的 15 年经验](http://queue.acm.org/detail.cfm?id=2898444)
打造,并融合了来自社区的最佳建议与实践。
<!-- ### [Kubernetes]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) is an open-source system for automating deployment, scaling, and management of containerized applications. -->
### [Kubernetes]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) 是用于自动部署,扩展和管理容器化应用程序的开源系统。
<!-- It groups containers that make up an application into logical units for easy management and discovery.
Kubernetes builds upon [15 years of experience of running production workloads at Google](http://queue.acm.org/detail.cfm?id=2898444),
combined with best-of-breed ideas and practices from the community. -->
它将组成应用程序的容器组合成逻辑单元以便于管理和服务发现。Kubernetes 源自[Google 15 年生产环境的运维经验](http://queue.acm.org/detail.cfm?id=2898444),同时凝聚了社区的最佳创意和实践。
{{% /blocks/feature %}}
{{% blocks/feature image="scalable" %}}
#### 星际规模
基于一定的规则原理,谷歌公司每周能够运行数十亿的容器;通过遵循相同的设计,
Kubernetes 能够在保持现有运维团队规模的前提下完成集群的弹性伸缩。
<!-- #### Planet Scale -->
#### 星际尺度
<!-- Designed on the same principles that allows Google to run billions of containers a week,
Kubernetes can scale without increasing your ops team. -->
Google 每周运行数十亿个容器Kubernetes 基于与之相同的原则来设计,能够在不扩张运维团队的情况下进行规模扩展。
{{% /blocks/feature %}}
{{% blocks/feature image="blocks" %}}
#### 永不过时
无论是在本地测试还是支撑全球化企业业务Kubernetes 的灵活性使得其能够与您一起成长。
无论您的需求多么复杂Kubernetes 都能助您始终如一地、轻而易举地交付应用。
<!-- #### Never Outgrow -->
#### 永不过时
<!-- Whether testing locally or running a global enterprise, Kubernetes flexibility grows with you to deliver your applications
consistently and easily no matter how complex your need is. -->
无论是本地测试还是跨国公司Kubernetes 的灵活性都能让你在应对复杂系统时得心应手。
{{% /blocks/feature %}}
{{% blocks/feature image="suitcase" %}}
#### 随处运行
作为一个开源系统Kubernetes 使您能够自由地获得内部云、混合云或公有云基础设施所提供的便利,
并根据需要轻松地迁移工作负载。
<!-- #### Run Anywhere -->
#### 随处运行
<!-- Kubernetes is open source giving you the freedom to take advantage of on-premises, hybrid,
or public cloud infrastructure, letting you effortlessly move workloads to where it matters to you. -->
Kubernetes 是开源系统,可以自由地部署在企业内部、私有云、混合云或公有云,让您轻松地做出合适的选择。
{{% /blocks/feature %}}
{{< /blocks/section >}}
{{< blocks/section id="video" background-image="kub_video_banner_homepage" >}}
<div class="light-text">
<h2>将超过 150 个微服务迁移到 Kubernetes 时遇到的挑战</h2>
<p>Sarah Wells, Technical Director for Operations and Reliability, Financial Times</p>
<button id="desktopShowVideoButton" onclick="kub.showVideo()">观看视频</button>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2019" button id="desktopKCButton">参加 2019 年 11 月 18 日的 San Diego KubeCon</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2020/" button id="desktopKCButton">参加 2020 年 3 月 30 日的 Amsterdam KubeCon</a>
<!-- <h2>The Challenges of Migrating 150+ Microservices to Kubernetes</h2> -->
<h2>将 150+ 微服务迁移到 Kubernetes 上的挑战</h2>
<!-- <p>By Sarah Wells, Technical Director for Operations and Reliability, Financial Times</p> -->
<p>Sarah Wells, 运营和可靠性技术总监, 金融时报</p>
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button>
<br>
<br>
<br>
<!-- <a href="https://www.lfasiallc.com/events/kubecon-cloudnativecon-china-2018/" button id="desktopKCButton">Attend KubeCon in Shanghai on Nov. 13-15, 2018</a> -->
<a href="https://www.lfasiallc.com/events/kubecon-cloudnativecon-china-2018/" button id="desktopKCButton">参加11月13日到15日的上海 KubeCon</a>
<br>
<br>
<br>
<br>
<!-- <a href="https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2018/" button id="desktopKCButton">Attend KubeCon in Seattle on Dec. 11-13, 2018</a> -->
<a href="https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2018/" button id="desktopKCButton">参加12月11日到13日的西雅图 KubeCon</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>


@@ -0,0 +1,27 @@
---
title: " Kubernetes 采集视频 "
date: 2015-03-23
slug: kubernetes-gathering-videos
url: /blog/2015/03/Kubernetes-Gathering-Videos
---
<!--
---
title: " Kubernetes Gathering Videos "
date: 2015-03-23
slug: kubernetes-gathering-videos
url: /blog/2015/03/Kubernetes-Gathering-Videos
---
-->
<!--
If you missed the Kubernetes Gathering in SF last month, fear not! Here are the videos from the evening presentations organized into a playlist on YouTube
[![Kubernetes Gathering](https://img.youtube.com/vi/q8lGZCKktYo/0.jpg)](https://www.youtube.com/playlist?list=PL69nYSiGNLP2FBVvSLHpJE8_6hRHW8Kxe)
-->
如果你错过了上个月在旧金山举行的 Kubernetes 聚会,不用担心!以下是当晚各场演讲的视频,已在 YouTube 上整理成播放列表。
[![Kubernetes Gathering](https://img.youtube.com/vi/q8lGZCKktYo/0.jpg)](https://www.youtube.com/playlist?list=PL69nYSiGNLP2FBVvSLHpJE8_6hRHW8Kxe)


@@ -0,0 +1,186 @@
---
title: " Kubernetes 社区每周聚会笔记 - 2015年3月27日 "
date: 2015-03-28
slug: weekly-kubernetes-community-hangout
url: /blog/2015/03/Weekly-Kubernetes-Community-Hangout
---
<!--
---
title: " Weekly Kubernetes Community Hangout Notes - March 27 2015 "
date: 2015-03-28
slug: weekly-kubernetes-community-hangout
url: /blog/2015/03/Weekly-Kubernetes-Community-Hangout
---
-->
<!--
Every week the Kubernetes contributing community meet virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
-->
每个星期Kubernetes 贡献者社区都会通过 Google Hangouts 线上聚会。我们希望任何感兴趣的人都能了解这个论坛讨论的内容。
<!--
Agenda:
-->
日程安排:
<!--
\- Andy - demo remote execution and port forwarding
\- Quinton - Cluster federation - Postponed
\- Clayton - UI code sharing and collaboration around Kubernetes
-->
\- Andy - 演示远程执行和端口转发
\- Quinton - 联邦集群 - 延迟
\- Clayton - 围绕 Kubernetes 的 UI 代码共享和协作
<!--
Notes from meeting:
-->
会议笔记:
<!--
1\. Andy from RedHat:
-->
1\. 来自 RedHat 的 Andy
<!--
* Demo remote execution
-->
* 演示远程执行
<!--
* kubectl exec -p $POD -- $CMD
* Makes a connection to the master as proxy, figures out which node the pod is on, proxies connection to kubelet, which does the interesting bit. via nsenter.
* Multiplexed streaming over HTTP using SPDY
* Also interactive mode:
* Assumes first container. Can use -c $CONTAINER to pick a particular one.
* If have gdb pre-installed in container, then can interactively attach it to running process
* backtrace, symbol tbles, print, etc. Most things you can do with gdb.
* Can also with careful flag crafting run rsync over this or set up sshd inside container.
* Some feedback via chat:
-->
* kubectl exec -p $POD -- $CMD
* 以 master 作为代理建立连接,找出 Pod 所在的节点,再把连接代理到 kubelet由 kubelet 通过 nsenter 完成具体操作(命令用法示意参见下文的示例)。
* 使用 SPDY 通过 HTTP 进行多路复用的流式传输
* 还有交互模式:
* 默认作用于第一个容器,可以用 -c $CONTAINER 指定某个特定容器。
* 如果容器中预先安装了 gdb则可以交互地将其附加到正在运行的进程上
* backtrace、符号表、print 等,大多数 gdb 能做的事情都可以。
* 也可以通过精心构造的参数在其上运行 rsync或在容器内架设 sshd。
* 一些来自聊天的反馈:
<!--
* Andy also demoed port forwarding
* nsenter vs. docker exec
-->
* Andy 还演示了端口转发
* nsenter 与 docker exec
<!--
* want to inject a binary under control of the host, similar to pre-start hooks
* socat, nsenter, whatever the pre-start hook needs
-->
* 想要在主机的控制下注入二进制文件,类似于预启动钩子
* socat、nsenter以及预启动钩子需要的任何工具
<!--
* would be nice to blog post on this
* version of nginx in wheezy is too old to support needed master-proxy functionality
-->
* 如果能在博客上发表这方面的文章就太好了
* wheezy 中的 nginx 版本太旧,无法支持所需的主代理功能
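<!--
A minimal sketch of the commands discussed above, written with today's kubectl syntax (pod and container names are placeholders; the `-p $POD` form quoted in the notes is the 2015-era syntax, current kubectl takes the pod name as a positional argument):
-->
上面讨论的命令在今天的 kubectl 中大致如下(最小示意Pod 名和容器名均为占位符;笔记中引用的 `-p $POD` 写法是 2015 年时期的旧语法,现在的 kubectl 直接以位置参数传入 Pod 名):

```shell
# Remote execution through the apiserver/kubelet proxy path described above.
kubectl exec my-pod -- ps aux

# Pick a specific container with -c and get an interactive shell.
kubectl exec -it my-pod -c sidecar -- /bin/sh

# Port forwarding, as also demoed: forward local port 8080 to port 80 in the pod.
kubectl port-forward my-pod 8080:80
```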
<!--
2\. Clayton: where are we wrt a community organization for e.g. kubernetes UI components?
* google-containers-ui IRC channel, mailing list.
* Tim: google-containers prefix is historical, should just do "kubernetes-ui"
* also want to put design resources in, and bower expects its own repo.
* General agreement
-->
2\. Clayton我们在为 kubernetes UI 组件等建立社区组织方面进展如何?
* google-containers-ui IRC 频道,邮件列表。
* Tim: google-containers 前缀是历史遗留,应该直接叫 "kubernetes-ui"
* 也希望把设计资源放进去,而且 bower 需要有自己的仓库。
* 大家普遍同意
<!--
3\. Brian Grant:
* Testing v1beta3, getting that ready to go in.
* Paul working on changes to commandline stuff.
* Early to mid next week, try to enable v1beta3 by default?
* For any other changes, file issue and CC thockin.
-->
3\. Brian Grant:
* 正在测试 v1beta3准备就绪后合入。
* Paul 正致力于命令行方面的修改。
* 下周初至中旬,尝试默认启用 v1beta3
* 对于任何其他更改,请提交 issue 并抄送 thockin。
<!--
4\. General consensus that 30 minutes is better than 60
-->
4\. 大家普遍认为 30 分钟比 60 分钟好
<!--
* Shouldn't artificially try to extend just to fill time.
-->
* 不应该为了填满时间而人为地延长。


@@ -0,0 +1,62 @@
---
title: 欢迎来到 Kubernetes 博客!
date: 2015-03-20
slug: welcome-to-kubernetes-blog
url: /blog/2015/03/Welcome-To-Kubernetes-Blog
---
<!--
---
title: Welcome to the Kubernetes Blog!
date: 2015-03-20
slug: welcome-to-kubernetes-blog
url: /blog/2015/03/Welcome-To-Kubernetes-Blog
---
-->
<!--
Welcome to the new Kubernetes Blog. Follow this blog to learn about the Kubernetes Open Source project. We plan to post release notes, how-to articles, events, and maybe even some off topic fun here from time to time.
-->
欢迎来到新的 Kubernetes 博客。关注此博客,了解 Kubernetes 开源项目。我们计划不定期发布版本说明、操作指南、活动信息,或许偶尔还有一些轻松的题外话。
<!--
If you are using Kubernetes or contributing to the project and would like to do a guest post, [please let me know](mailto:kitm@google.com).
-->
如果您正在使用 Kubernetes 或为该项目做出贡献并想要发帖子,[请告诉我](mailto:kitm@google.com)。
<!--
To start things off, here's a roundup of recent Kubernetes posts from other sites:
-->
首先,以下是近期其他网站上发布的 Kubernetes 相关文章汇总:
<!--
- [Scaling MySQL in the cloud with Vitess and Kubernetes](http://googlecloudplatform.blogspot.com/2015/03/scaling-MySQL-in-the-cloud-with-Vitess-and-Kubernetes.html)
- [Container Clusters on VMs](http://googlecloudplatform.blogspot.com/2015/02/container-clusters-on-vms.html)
- [Everything you wanted to know about Kubernetes but were afraid to ask](http://googlecloudplatform.blogspot.com/2015/01/everything-you-wanted-to-know-about-Kubernetes-but-were-afraid-to-ask.html)
- [What makes a container cluster?](http://googlecloudplatform.blogspot.com/2015/01/what-makes-a-container-cluster.html)
- [Integrating OpenStack and Kubernetes with Murano](https://www.mirantis.com/blog/integrating-openstack-and-kubernetes-with-murano/)
- [An introduction to containers, Kubernetes, and the trajectory of modern cloud computing](http://googlecloudplatform.blogspot.com/2015/01/in-coming-weeks-we-will-be-publishing.html)
- [What is Kubernetes and how to use it?](http://www.centurylinklabs.com/what-is-kubernetes-and-how-to-use-it/)
- [OpenShift V3, Docker and Kubernetes Strategy](https://blog.openshift.com/v3-docker-kubernetes-interview/)
- [An Introduction to Kubernetes](https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes)
-->
- [使用 Vitess 和 Kubernetes 在云中扩展 MySQL](http://googlecloudplatform.blogspot.com/2015/03/scaling-MySQL-in-the-cloud-with-Vitess-and-Kubernetes.html)
- [虚拟机上的容器群集](http://googlecloudplatform.blogspot.com/2015/02/container-clusters-on-vms.html)
- [想知道的关于 kubernetes 的一切,却又不敢问](http://googlecloudplatform.blogspot.com/2015/01/everything-you-wanted-to-know-about-Kubernetes-but-were-afraid-to-ask.html)
- [什么构成容器集群?](http://googlecloudplatform.blogspot.com/2015/01/what-makes-a-container-cluster.html)
- [将 OpenStack 和 Kubernetes 与 Murano 集成](https://www.mirantis.com/blog/integrating-openstack-and-kubernetes-with-murano/)
- [容器介绍Kubernetes 以及现代云计算的发展轨迹](http://googlecloudplatform.blogspot.com/2015/01/in-coming-weeks-we-will-be-publishing.html)
- [什么是 Kubernetes 以及如何使用它?](http://www.centurylinklabs.com/what-is-kubernetes-and-how-to-use-it/)
- [OpenShift V3Docker 和 Kubernetes 策略](https://blog.openshift.com/v3-docker-kubernetes-interview/)
- [Kubernetes 简介](https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes)
<!--
Happy cloud computing!
-->
快乐的云计算!
<!--
- Kit Merker - Product Manager, Google Cloud Platform
-->
- Kit Merker - Google 云平台产品经理


@@ -0,0 +1,176 @@
---
title: " Kubernetes Release: 0.15.0 "
date: 2015-04-16
slug: kubernetes-release-0150
url: /blog/2015/04/Kubernetes-Release-0150
---
<!--
Release Notes:
-->
Release 说明:
<!--
* Enables v1beta3 API and sets it to the default API version ([#6098][1])
* Added multi-port Services ([#6182][2])
* New Getting Started Guides
* Multi-node local startup guide ([#6505][3])
* Mesos on Google Cloud Platform ([#5442][4])
* Ansible Setup instructions ([#6237][5])
* Added a controller framework ([#5270][6], [#5473][7])
* The Kubelet now listens on a secure HTTPS port ([#6380][8])
* Made kubectl errors more user-friendly ([#6338][9])
* The apiserver now supports client cert authentication ([#6190][10])
* The apiserver now limits the number of concurrent requests it processes ([#6207][11])
* Added rate limiting to pod deleting ([#6355][12])
* Implement Balanced Resource Allocation algorithm as a PriorityFunction in scheduler package ([#6150][13])
* Enabled log collection from master ([#6396][14])
* Added an api endpoint to pull logs from Pods ([#6497][15])
* Added latency metrics to scheduler ([#6368][16])
* Added latency metrics to REST client ([#6409][17])
-->
* 启用 v1beta3 API 并将其设置为默认 API 版本 ([#6098][1])
* 增加了多端口服务([#6182][2])
* 新入门指南
* 多节点本地启动指南 ([#6505][3])
* Google 云平台上的 Mesos ([#5442][4])
* Ansible 安装说明 ([#6237][5])
* 添加了一个控制器框架 ([#5270][6], [#5473][7])
* Kubelet 现在监听一个安全的 HTTPS 端口 ([#6380][8])
* 使 kubectl 错误更加友好 ([#6338][9])
* apiserver 现在支持客户端 cert 身份验证 ([#6190][10])
* apiserver 现在限制了它处理的并发请求的数量 ([#6207][11])
* 为 Pod 删除操作添加了速率限制 ([#6355][12])
* 将平衡资源分配算法作为优先级函数实现在调度程序包中 ([#6150][13])
* 从主服务器启用日志收集功能 ([#6396][14])
* 添加了一个 API 端点用于从 Pod 中提取日志 ([#6497][15])
* 为调度程序添加了延迟指标 ([#6368][16])
* 为 REST 客户端添加了延迟指标 ([#6409][17])
<!--
* etcd now runs in a pod on the master ([#6221][18])
* nginx now runs in a container on the master ([#6334][19])
* Began creating Docker images for master components ([#6326][20])
* Updated GCE provider to work with gcloud 0.9.54 ([#6270][21])
* Updated AWS provider to fix Region vs Zone semantics ([#6011][22])
* Record event when image GC fails ([#6091][23])
* Add a QPS limiter to the kubernetes client ([#6203][24])
* Decrease the time it takes to run make release ([#6196][25])
* New volume support
* Added iscsi volume plugin ([#5506][26])
* Added glusterfs volume plugin ([#6174][27])
* AWS EBS volume support ([#5138][28])
* Updated to heapster version to v0.10.0 ([#6331][29])
* Updated to etcd 2.0.9 ([#6544][30])
* Updated to Kibana to v1.2 ([#6426][31])
* Bug Fixes
* Kube-proxy now updates iptables rules if a service's public IPs change ([#6123][32])
* Retry kube-addons creation if the initial creation fails ([#6200][33])
* Make kube-proxy more resiliant to running out of file descriptors ([#6727][34])
-->
* etcd 现在在 master 上的一个 pod 中运行 ([#6221][18])
* nginx 现在在 master 上的容器中运行 ([#6334][19])
* 开始为 master 组件构建 Docker 镜像 ([#6326][20])
* 更新了 GCE 驱动以适配 gcloud 0.9.54 ([#6270][21])
* 更新了 AWS 驱动,修复 Region 与 Zone 的语义问题 ([#6011][22])
* 记录镜像 GC 失败时的事件 ([#6091][23])
* 为 kubernetes 客户端添加 QPS 限制器 ([#6203][24])
* 减少运行 make release 所需的时间 ([#6196][25])
* 新卷的支持
* 添加 iscsi 卷插件 ([#5506][26])
* 添加 glusterfs 卷插件 ([#6174][27])
* AWS EBS 卷支持 ([#5138][28])
* 更新到 heapster 版本到 v0.10.0 ([#6331][29])
* 更新到 etcd 2.0.9 ([#6544][30])
* 更新到 Kibana 到 v1.2 ([#6426][31])
* 漏洞修复
* 如果服务的公共 IP 发生变化Kube-proxy 现在会更新 iptables 规则 ([#6123][32])
* 如果初始创建失败,则重试 kube-addons 创建 ([#6200][33])
* 使 kube-proxy 在文件描述符耗尽时更有弹性 ([#6727][34])
<!--
To download, please visit https://github.com/GoogleCloudPlatform/kubernetes/releases/tag/v0.15.0
-->
要下载,请访问 https://github.com/GoogleCloudPlatform/kubernetes/releases/tag/v0.15.0
<!--
[1]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6098 "Enabling v1beta3 api version by default in master"
[2]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6182 "Implement multi-port Services"
[3]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6505 "Docker multi-node"
[4]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5442 "Getting started guide for Mesos on Google Cloud Platform"
[5]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6237 "example ansible setup repo"
[6]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5270 "Controller framework"
[7]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5473 "Add DeltaFIFO (a controller framework piece)"
[8]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6380 "Configure the kubelet to use HTTPS (take 2)"
[9]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6338 "Return a typed error for config validation, and make errors simple"
[10]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6190 "Add client cert authentication"
[11]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6207 "Add a limit to the number of in-flight requests that a server processes."
[12]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6355 "Added rate limiting to pod deleting"
[13]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6150 "Implement Balanced Resource Allocation (BRA) algorithm as a PriorityFunction in scheduler package."
[14]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6396 "Enable log collection from master."
[15]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6497 "Pod log subresource"
[16]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6368 "Add basic latency metrics to scheduler."
[17]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6409 "Add latency metrics to REST client"
[18]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6221 "Run etcd 2.0.5 in a pod"
[19]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6334 "Add an nginx docker image for use on the master."
[20]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6326 "Create Docker images for master components "
[21]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6270 "Updates for gcloud 0.9.54"
-->
[1]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6098 "在 master 中默认启用 v1beta3 api 版本"
[2]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6182 "实现多端口服务"
[3]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6505 "Docker 多节点"
[4]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5442 "谷歌云平台上 Mesos 入门指南"
[5]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6237 "示例 ansible 设置仓库"
[6]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5270 "控制器框架"
[7]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5473 "添加 DeltaFIFO控制器框架块"
[8]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6380 "将 kubelet 配置为使用 HTTPS第 2 次尝试)"
[9]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6338 "返回用于配置验证的类型化错误,并简化错误"
[10]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6190 "添加客户端证书认证"
[11]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6207 "为服务器同时处理的请求数量添加限制。"
[12]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6355 "为 Pod 删除操作添加速率限制"
[13]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6150 "将均衡资源分配算法作为优先级函数实现在调度程序包中。"
[14]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6396 "启用主服务器收集日志。"
[15]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6497 "Pod 日志子资源"
[16]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6368 "将基本延迟指标添加到调度程序。"
[17]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6409 "向 REST 客户端添加延迟指标"
[18]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6221 "在 pod 中运行 etcd 2.0.5"
[19]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6334 "添加一个供 master 使用的 nginx docker 镜像。"
[20]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6326 "为主组件创建 Docker 镜像"
[21]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6270 "gcloud 0.9.54 的更新"
<!--
[22]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6011 "Fix AWS region vs zone"
[23]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6091 "Record event when image GC fails."
[24]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6203 "Add a QPS limiter to the kubernetes client."
[25]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6196 "Parallelize architectures in both the building and packaging phases of `make release`"
[26]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5506 "add iscsi volume plugin"
[27]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6174 "implement glusterfs volume plugin"
[28]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5138 "AWS EBS volume support"
[29]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6331 "Update heapster version to v0.10.0"
[30]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6544 "Build etcd image (version 2.0.9), and upgrade kubernetes cluster to the new version"
[31]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6426 "Update Kibana to v1.2 which paramaterizes location of Elasticsearch"
[32]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6123 "Fix bug in kube-proxy of not updating iptables rules if a service's public IPs change"
[33]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6200 "Retry kube-addons creation if kube-addons creation fails."
[34]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6727 "pkg/proxy: panic if run out of fd"
-->
[22]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6011 "修复 AWS Region 与 Zone 的问题"
[23]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6091 "记录镜像 GC 失败时的事件。"
[24]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6203 "向 kubernetes 客户端添加 QPS 限制器。"
[25]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6196 "在 `make release` 的构建和打包阶段并行化架构"
[26]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5506 "添加 iscsi 卷插件"
[27]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6174 "实现 glusterfs 卷插件"
[28]: https://github.com/GoogleCloudPlatform/kubernetes/pull/5138 "AWS EBS 卷支持"
[29]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6331 "将 heapster 版本更新到 v0.10.0"
[30]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6544 "构建 etcd 镜像(版本 2.0.9),并将 kubernetes 集群升级到新版本"
[31]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6426 "更新 Kibana 到 v1.2,它对 Elasticsearch 的位置进行了参数化"
[32]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6123 "修复了 kube-proxy 中的一个错误,如果一个服务的公共 ip 发生变化,它不会更新 iptables 规则"
[33]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6200 "如果 kube-addons 创建失败,请重试 kube-addons 创建。"
[34]: https://github.com/GoogleCloudPlatform/kubernetes/pull/6727 "pkg/proxy: fd 用完后引起恐慌"


@@ -0,0 +1,287 @@
---
title: " Kubernetes 社区每周聚会笔记- 2015年4月17日 "
date: 2015-04-17
slug: weekly-kubernetes-community-hangout_17
url: /blog/2015/04/Weekly-Kubernetes-Community-Hangout_17
---
<!--
---
title: " Weekly Kubernetes Community Hangout Notes - April 17 2015 "
date: 2015-04-17
slug: weekly-kubernetes-community-hangout_17
url: /blog/2015/04/Weekly-Kubernetes-Community-Hangout_17
---
-->
<!--
Every week the Kubernetes contributing community meet virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
-->
每个星期Kubernetes 贡献者社区都会通过 Google Hangouts 线上聚会。我们希望任何感兴趣的人都能了解这个论坛讨论的内容。
<!--
Agenda
* Mesos Integration
* High Availability (HA)
* Adding performance and profiling details to e2e to track regressions
* Versioned clients
-->
议程
* Mesos 集成
* 高可用性HA
* 向 e2e 添加性能和分析详细信息以跟踪回归
* 客户端版本化
<!--
Notes
-->
笔记
<!--
* Mesos integration
* Mesos integration proposal:
* No blockers to integration.
* Documentation needs to be updated.
-->
* Mesos 集成
* Mesos 集成提案:
* 没有阻塞集成的因素。
* 文档需要更新。
<!--
* HA
* Proposal should land today.
* Etcd cluster.
* Load-balance apiserver.
* Cold standby for controller manager and other master components.
-->
* HA
* 提案今天应该会提交。
* Etcd 集群。
* apiserver 负载均衡。
* 控制器管理器和其他主组件的冷备用。
<!--
* Adding performance and profiling details to e2e to track regression
* Want red light for performance regression
* Need a public DB to post the data
* See
* Justin working on multi-platform e2e dashboard
-->
* 向 e2e 添加性能和分析详细信息以跟踪回归
* 希望性能出现回归时亮红灯
* 需要公共数据库才能发布数据
* 查看
* Justin 致力于多平台 e2e 仪表盘
<!--
* Versioned clients
*
*
* Client library currently uses internal API objects.
* Nobody reported that frequent changes to types.go have been painful, but we are worried about it.
* Structured types are useful in the client. Versioned structs would be ok.
* If start with json/yaml (kubectl), shouldnt convert to structured types. Use swagger.
-->
* 客户端版本化
*
*
* 客户端库当前使用内部 API 对象。
* 尽管没有人反映频繁修改 `types.go` 有多痛苦,但我们很为此担心。
* 结构化类型在客户端中很有用。版本化的结构就可以了。
* 如果从 json/yaml (kubectl) 开始,则不应转换为结构化类型。使用 swagger。
<!--
* Security context
*
* Administrators can restrict who can run privileged containers or require specific unix uids
* Kubelet will be able to get pull credentials from apiserver
* Policy proposal coming in the next week or so
-->
* Security context
*
* 管理员可以限制谁能运行特权容器,或要求使用特定的 unix uid
* kubelet 将能够从 apiserver 获取镜像拉取凭据pull credentials
* 策略policy提案将在下周左右提出
<!--
* Discussing upstreaming of users, etc. into Kubernetes, at least as optional
* 1.0 Roadmap
* Focus is performance, stability, cluster upgrades
* TJ has been making some edits to [roadmap.md][4] but hasnt sent out a PR yet
* Kubernetes UI
* Dependencies broken out into third-party
* @lavalamp is reviewer
-->
* 讨论将用户user等概念上游化到 Kubernetes至少作为可选项
* 1.0 路线图
* 重点是性能、稳定性和集群升级
* TJ 一直在对 [roadmap.md][4] 做一些修改,但尚未提交 PR
* Kubernetes UI
* 依赖关系分解为第三方
* @lavalamp 是评审人reviewer
[1]: http://kubernetes.io/images/nav_logo.svg
[2]: http://kubernetes.io/docs/
[3]: https://kubernetes.io/blog/
[4]: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/roadmap.md

View File

@ -0,0 +1,143 @@
---
title: " Kubernetes 社区每周聚会笔记- 2015年4月24日 "
date: 2015-04-30
slug: weekly-kubernetes-community-hangout_29
url: /blog/2015/04/Weekly-Kubernetes-Community-Hangout_29
---
<!--
---
title: " Weekly Kubernetes Community Hangout Notes - April 24 2015 "
date: 2015-04-30
slug: weekly-kubernetes-community-hangout_29
url: /blog/2015/04/Weekly-Kubernetes-Community-Hangout_29
---
-->
<!--
Every week the Kubernetes contributing community meet virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
-->
每周Kubernetes 贡献者社区都会通过 Google Hangouts 进行线上聚会。我们希望任何感兴趣的人都能了解这个论坛上讨论的内容。
<!--
Agenda:
* Flocker and Kubernetes integration demo
-->
日程安排:
* Flocker 和 Kubernetes 集成演示
<!--
Notes:
* flocker and kubernetes integration demo
* * Flocker Q/A
* Does the file still exists on node1 after migration?
* Brendan: Any plan this to make it a volume? So we don't need powerstrip?
* Luke: Need to figure out interest to decide if we want to make it a first-class persistent disk provider in kube.
* Brendan: Removing need for powerstrip would make it simple to use. Totally go for it.
* Tim: Should take no more than 45 minutes to add it to kubernetes:)
-->
笔记:
* flocker 和 kubernetes 集成演示
* Flocker 问答Q/A
* 迁移后文件是否仍然存在于 node1 上?
* Brendan: 有没有计划把它做成一个卷volume这样我们就不需要 powerstrip 了?
* Luke: 需要先了解大家的兴趣,再决定是否要把它做成 kube 中一等的持久磁盘提供者。
* Brendan: 去掉对 powerstrip 的依赖会让它更易用。完全支持这么做。
* Tim: 把它加到 kubernetes 中应该用不了 45 分钟 :)
<!--
* Derek: Contrast this with persistent volumes and claims?
* Luke: Not much difference, except for the novel ZFS based backend. Makes workloads really portable.
* Tim: very different than network-based volumes. Its interesting that it is the only offering that allows upgrading media.
* Brendan: claims, how does it look for replicated claims? eg Cassandra wants to have replicated data underneath. It would be efficient to scale up and down. Create storage on the fly based on load dynamically. Its step beyond taking snapshots - programmatically creating replicas with preallocation.
* Tim: helps with auto-provisioning.
-->
* Derek: 与持久卷和申领claims相比有何不同
* Luke: 除了基于 ZFS 的新后端之外,差别不大。这使工作负载真正可移植。
* Tim: 与基于网络的卷非常不同。有趣的是,它是唯一支持升级存储介质的方案。
* Brendan: 关于申领claims带副本的申领会是什么样例如 Cassandra 希望底层数据有多副本。这样可以高效地扩容和缩容,并根据负载动态地按需创建存储。这比做快照更进一步:以编程方式预分配并创建副本。
* Tim: 这有助于自动制备auto-provisioning。
<!--
* Brian: Does flocker requires any other component?
* Kai: Flocker control service co-located with the master. (dia on blog post). Powerstrip + Powerstrip Flocker. Very interested in mpersisting state in etcd. It keeps metadata about each volume.
* Brendan: In future, flocker can be a plugin and we'll take care of persistence. Post v1.0.
* Brian: Interested in adding generic plugin for services like flocker.
* Luke: Zfs can become really valuable when scaling to lot of containers on a single node.
-->
* Brian: flocker 是否需要其他组件?
* Kai: Flocker 控制服务与 master 部署在一起(博客文章中有示意图)。使用 Powerstrip + Powerstrip Flocker。对把状态持久化到 etcd 中非常感兴趣。它保存每个卷的元数据。
* Brendan: 将来 flocker 可以作为一个插件,由我们来负责持久化。这会在 v1.0 之后进行。
* Brian: 有兴趣为 flocker 这类服务添加通用插件。
* Luke: 当单个节点上扩展到大量容器时ZFS 会变得非常有价值。
<!--
* Alex: Can flocker service can be run as a pod?
* Kai: Yes, only requirement is the flocker control service should be able to talk to zfs agent. zfs agent needs to be installed on the host and zfs binaries need to be accessible.
* Brendan: In theory, all zfs bits can be put it into a container with devices.
* Luke: Yes, still working through cross-container mounting issue.
* Tim: pmorie is working through it to make kubelet work in a container. Possible re-use.
* Kai: Cinder support is coming. Few days away.
* Bob: What's the process of pushing kube to GKE? Need more visibility for confidence.
-->
* Alex: flocker 服务可以作为 pod 运行吗?
* Kai: 是的,唯一的要求是 flocker 控制服务应该能够与 zfs 代理对话。需要在主机上安装 zfs 代理,并且需要访问 zfs 二进制文件。
* Brendan: 从理论上讲,所有 zfs 位都可以与设备一起放入容器中。
* Luke: 是的,仍然在解决跨容器挂载的问题。
* Tim: pmorie 正在解决这个问题,以便让 kubelet 在容器中运行。或许可以复用他的成果。
* Kai: Cinder 支持即将推出,还有几天就绪。
* Bob: 把 kube 推送到 GKE 的流程是怎样的?需要更高的可见性才能建立信心。

View File

@ -0,0 +1,112 @@
---
title: " OpenStack 上的 Kubernetes "
date: 2015-05-19
slug: kubernetes-on-openstack
url: /blog/2015/05/Kubernetes-On-Openstack
---
<!--
---
title: " Kubernetes on OpenStack "
date: 2015-05-19
slug: kubernetes-on-openstack
url: /blog/2015/05/Kubernetes-On-Openstack
---
-->
[![](https://3.bp.blogspot.com/-EOrCHChZJZE/VVZzq43g6CI/AAAAAAAAF-E/JUilRHk369E/s400/Untitled%2Bdrawing.jpg)](https://3.bp.blogspot.com/-EOrCHChZJZE/VVZzq43g6CI/AAAAAAAAF-E/JUilRHk369E/s1600/Untitled%2Bdrawing.jpg)
<!--
Today, the [OpenStack foundation](https://www.openstack.org/foundation/) made it even easier for you deploy and manage clusters of Docker containers on OpenStack clouds by including Kubernetes in its [Community App Catalog](http://apps.openstack.org/). &nbsp;At a keynote today at the OpenStack Summit in Vancouver, Mark Collier, COO of the OpenStack Foundation, and Craig Peters, &nbsp;[Mirantis](https://www.mirantis.com/) product line manager, demonstrated the Community App Catalog workflow by launching a Kubernetes cluster in a matter of seconds by leveraging the compute, storage, networking and identity systems already present in an OpenStack cloud.
-->
今天,[OpenStack 基金会](https://www.openstack.org/foundation/)通过在其[社区应用程序目录](http://apps.openstack.org/)中包含 Kubernetes使您更容易在 OpenStack 云上部署和管理 Docker 容器集群。
今天在温哥华 OpenStack 峰会的主题演讲中OpenStack 基金会首席运营官 Mark Collier 和 [Mirantis](https://www.mirantis.com/) 产品线经理 Craig Peters 利用 OpenStack 云中已有的计算、存储、网络和身份identity系统在几秒钟内启动了一个 Kubernetes 集群,演示了社区应用程序目录的工作流。
<!--
The entries in the catalog include not just the ability to [start a Kubernetes cluster](http://apps.openstack.org/#tab=murano-apps&asset=Kubernetes%20Cluster), but also a range of applications deployed in Docker containers managed by Kubernetes. These applications include:
-->
目录中的条目不仅包括[启动 Kubernetes 集群](http://apps.openstack.org/#tab=murano-apps&asset=Kubernetes%20Cluster)的功能,还包括部署在 Kubernetes 管理的 Docker 容器中的一系列应用程序。这些应用包括:
<!--
-
Apache web server
-
Nginx web server
-
Crate - The Distributed Database for Docker
-
GlassFish - Java EE 7 Application Server
-
Tomcat - An open-source web server and servlet container
-
InfluxDB - An open-source, distributed, time series database
-
Grafana - Metrics dashboard for InfluxDB
-
Jenkins - An extensible open source continuous integration server
-
MariaDB database
-
MySql database
-
Redis - Key-value cache and store
-
PostgreSQL database
-
MongoDB NoSQL database
-
Zend Server - The Complete PHP Application Platform
-->
- Apache web 服务器
- Nginx web 服务器
- Crate - Docker 的分布式数据库
- GlassFish - Java EE 7 应用服务器
- Tomcat - 一个开源的 web 服务器和 servlet 容器
- InfluxDB - 一个开源的、分布式的、时间序列数据库
- Grafana - InfluxDB 的度量仪表板
- Jenkins - 一个可扩展的开放源码持续集成服务器
- MariaDB 数据库
- MySql 数据库
- Redis - 键-值缓存和存储
- PostgreSQL 数据库
- MongoDB NoSQL 数据库
- Zend 服务器 - 完整的 PHP 应用程序平台
<!--
This list will grow, and is curated [here](https://github.com/openstack/murano-apps/tree/master/Docker/Kubernetes). You can examine (and contribute to) the YAML file that tells Murano how to install and start the Kubernetes cluster [here](https://github.com/openstack/murano-apps/blob/master/Docker/Kubernetes/KubernetesCluster/package/Classes/KubernetesCluster.yaml).
-->
此列表将不断增长,并在[此处](https://github.com/openstack/murano-apps/tree/master/Docker/Kubernetes)维护。您可以在[此处](https://github.com/openstack/murano-apps/blob/master/Docker/Kubernetes/KubernetesCluster/package/Classes/KubernetesCluster.yaml)查看(并参与贡献)告诉 Murano 如何安装和启动 Kubernetes 集群的 YAML 文件。
<!--
[The Kubernetes open source project](https://github.com/GoogleCloudPlatform/kubernetes) has continued to see fantastic community adoption and increasing momentum, with over 11,000 commits and 7,648 stars on GitHub. With supporters ranging from Red Hat and Intel to CoreOS and Box.net, it has come to represent a range of customer interests ranging from enterprise IT to cutting edge startups. We encourage you to give it a try, give us your feedback, and get involved in our growing community.
-->
[Kubernetes 开源项目](https://github.com/GoogleCloudPlatform/kubernetes)继续受到社区的热烈欢迎,发展势头越来越好,在 GitHub 上已有超过 11000 次提交和 7648 颗星。其支持者从 Red Hat、Intel 到 CoreOS 和 Box.net代表了从企业 IT 到前沿初创公司等各类客户的需求。我们鼓励您试用一下,给我们反馈,并参与到我们不断成长的社区中来。
<!--
- Martin Buhr, Product Manager, Kubernetes Open Source Project
-->
- Martin Buhr, Kubernetes 开源项目产品经理

View File

@ -0,0 +1,121 @@
---
title: " Kubernetes 社区每周聚会笔记- 2015年5月1日 "
date: 2015-05-11
slug: weekly-kubernetes-community-hangout
url: /blog/2015/05/Weekly-Kubernetes-Community-Hangout
---
<!--
---
title: " Weekly Kubernetes Community Hangout Notes - May 1 2015 "
date: 2015-05-11
slug: weekly-kubernetes-community-hangout
url: /blog/2015/05/Weekly-Kubernetes-Community-Hangout
---
-->
<!--
Every week the Kubernetes contributing community meet virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.
-->
每周Kubernetes 贡献者社区都会通过 Google Hangouts 进行线上聚会。我们希望任何感兴趣的人都能了解这个论坛上讨论的内容。
<!--
* Simple rolling update - Brendan
* Rolling update = nice example of why RCs and Pods are good.
* ...pause… (Brendan needs demo recovery tips from Kelsey)
* Rolling update has recovery: Cancel update and restart, update continues from where it stopped.
* New controller gets name of old controller, so appearance is pure update.
* Can also name versions in update (won't do rename at the end).
-->
* 简单的滚动更新 - Brendan
* 滚动更新 = 说明 RC 和 Pod 为什么好用的一个很好的例子。
* ...pause… (Brendan 需要 Kelsey 的演示恢复技巧)
* 滚动更新具有恢复功能:取消更新并重新启动,更新会从停止的地方继续。
* 新控制器会取得旧控制器的名称,因此从外部看这就是一次单纯的更新。
* 也可以在更新中为版本命名(结束时不会再做重命名)。
<!--
* Rocket demo - CoreOS folks
* 2 major differences between rocket & docker: Rocket is daemonless & pod-centric.
* Rocket has AppContainer format as native, but also supports docker image format.
* Can run AppContainer and docker containers in same pod.
* Changes are close to merged.
-->
* Rocket 演示 - CoreOS 的伙计们
* Rocket 和 docker 之间有两个主要区别Rocket 无守护进程daemonless且以 pod 为中心。
* Rocket 具有原生的 AppContainer 格式,但也支持 docker 镜像格式。
* 可以在同一个 pod 中运行 AppContainer 和 docker 容器。
* 变更接近于合并。
<!--
* demo service accounts and secrets being added to pods - Jordan
* Problem: It's hard to get a token to talk to the API.
* New API object: "ServiceAccount"
* ServiceAccount is namespaced, controller makes sure that at least 1 default service account exists in a namespace.
* Typed secret "ServiceAccountToken", controller makes sure there is at least 1 default token.
* DEMO
* * Can create new service account with ServiceAccountToken. Controller will create token for it.
* Can create a pod with service account, pods will have service account secret mounted at /var/run/secrets/kubernetes.io/…
-->
* 演示 service accounts 和 secrets 被添加到 pod - Jordan
* 问题:很难获得用于与 API 通信的令牌。
* 新的 API 对象:"ServiceAccount"
* ServiceAccount 是按命名空间划分的,控制器会确保每个命名空间中至少存在一个默认 service account。
* 类型化的 secret "ServiceAccountToken",控制器会确保至少有一个默认令牌。
* 演示
* 可以创建带有 ServiceAccountToken 的新 service account控制器将为它创建令牌。
* 可以创建一个带有 service account 的 podpod 会把 service account 的 secret 挂载到 /var/run/secrets/kubernetes.io/… 下
<!--
* Kubelet running in a container - Paul
* Kubelet successfully ran pod w/ mounted secret.
-->
* Kubelet 在容器中运行 - Paul
* Kubelet 成功地运行了带有 secret 的 pod。

View File

@ -0,0 +1,25 @@
---
title: "幻灯片Kubernetes 集群管理,爱丁堡大学演讲"
date: 2015-06-26
slug: slides-cluster-management-with
url: /blog/2015/06/Slides-Cluster-Management-With
---
<!--
---
title: " Slides: Cluster Management with Kubernetes, talk given at the University of Edinburgh "
date: 2015-06-26
slug: slides-cluster-management-with
url: /blog/2015/06/Slides-Cluster-Management-With
---
-->
<!--
On Friday 5 June 2015 I gave a talk called [Cluster Management with Kubernetes](https://docs.google.com/presentation/d/1H4ywDb4vAJeg8KEjpYfhNqFSig0Q8e_X5I36kM9S6q0/pub?start=false&loop=false&delayms=3000) to a general audience at the University of Edinburgh. The talk includes an example of a music store system with a Kibana front end UI and an Elasticsearch based back end which helps to make concrete concepts like pods, replication controllers and services.
[Cluster Management with Kubernetes](https://docs.google.com/presentation/d/1H4ywDb4vAJeg8KEjpYfhNqFSig0Q8e_X5I36kM9S6q0/pub?start=false&loop=false&delayms=3000).
-->
2015年6月5日星期五我在爱丁堡大学面向普通听众做了一个题为[使用 Kubernetes 进行集群管理](https://docs.google.com/presentation/d/1H4ywDb4vAJeg8KEjpYfhNqFSig0Q8e_X5I36kM9S6q0/pub?start=false&loop=false&delayms=3000)的演讲。演讲中包含一个音乐商店系统的例子,它使用 Kibana 作为前端 UI、基于 Elasticsearch 作为后端,有助于把 pods、复制控制器和服务这些概念讲得更具体。
[Kubernetes 集群管理](https://docs.google.com/presentation/d/1H4ywDb4vAJeg8KEjpYfhNqFSig0Q8e_X5I36kM9S6q0/pub?start=false&loop=false&delayms=3000)。

View File

@ -0,0 +1,24 @@
---
title: "宣布首个Kubernetes企业培训课程 "
date: 2015-07-08
slug: announcing-first-kubernetes-enterprise
url: /blog/2015/07/Announcing-First-Kubernetes-Enterprise
---
<!-- ---
title: " Announcing the First Kubernetes Enterprise Training Course "
date: 2015-07-08
slug: announcing-first-kubernetes-enterprise
url: /blog/2015/07/Announcing-First-Kubernetes-Enterprise
--- -->
<!-- At Google we rely on Linux application containers to run our core infrastructure. Everything from Search to Gmail runs in containers. &nbsp;In fact, we like containers so much that even our Google Compute Engine VMs run in containers! &nbsp;Because containers are critical to our business, we have been working with the community on many of the basic container technologies (from cgroups to Dockers LibContainer) and even decided to build the next generation of Googles container scheduling technology, Kubernetes, in the open. -->
在谷歌,我们依赖 Linux 应用程序容器来运行我们的核心基础架构。从搜索到 Gmail所有服务都运行在容器中。事实上我们非常喜欢容器甚至我们的 Google Compute Engine 虚拟机也运行在容器中!由于容器对我们的业务至关重要,我们一直与社区合作开发许多基础的容器技术(从 cgroups 到 Docker 的 LibContainer并决定以开源方式构建谷歌下一代容器调度技术Kubernetes。
<!-- One year into the Kubernetes project, and on the eve of our planned V1 release at OSCON, we are pleased to announce the first-ever formal Kubernetes enterprise-focused training session organized by a key Kubernetes contributor, Mesosphere. The inaugural session will be taught by Zed Shaw and Michael Hausenblas from Mesosphere, and will take place on July 20 at OSCON in Portland. [Pre-registration](https://mesosphere.com/training/kubernetes/) is free for early registrants, but space is limited so act soon! -->
在 Kubernetes 项目启动一周年之际,也是我们计划在 OSCON 上发布 V1 版本的前夕我们很高兴地宣布Kubernetes 的主要贡献者 Mesosphere 组织了有史以来第一次正式的、面向企业的 Kubernetes 培训课程。首期课程将于 7 月 20 日在波特兰的 OSCON 举办,由来自 Mesosphere 的 Zed Shaw 和 Michael Hausenblas 授课。[预注册](https://mesosphere.com/training/kubernetes/)对早期报名者免费,但名额有限,请尽快行动!
<!-- This one-day course will cover the basics of building and deploying containerized applications using Kubernetes. It will walk attendees through the end-to-end process of creating a Kubernetes application architecture, building and configuring Docker images, and deploying them on a Kubernetes cluster. Users will also learn the fundamentals of deploying Kubernetes applications and services on our Google Container Engine and Mesospheres Datacenter Operating System. -->
这个为期一天的课程将涵盖使用 Kubernetes 构建和部署容器化应用程序的基础知识。它将引导参会者完成从创建 Kubernetes 应用程序体系结构、构建和配置 Docker 镜像,到将它们部署到 Kubernetes 集群上的端到端流程。学员还将学习在 Google Container Engine 和 Mesosphere 的数据中心操作系统上部署 Kubernetes 应用程序和服务的基础知识。
<!-- The upcoming Kubernetes bootcamp will be a great way to learn how to apply Kubernetes to solve long-standing deployment and application management problems. &nbsp;This is just the first of what we hope are many, and from a broad set of contributors. -->
即将推出的 Kubernetes bootcamp 将是学习如何用 Kubernetes 解决长期存在的部署和应用程序管理问题的好途径。我们希望这只是众多此类课程中的第一个,未来还会有来自更广泛贡献者的更多课程。

View File

@ -0,0 +1,164 @@
---
title: " 使用 Puppet 管理 Kubernetes PodsServices 和 Replication Controllers "
date: 2015-12-17
slug: managing-kubernetes-pods-services-and-replication-controllers-with-puppet
url: /blog/2015/12/Managing-Kubernetes-Pods-Services-And-Replication-Controllers-With-Puppet
---
<!--
---
title: " Managing Kubernetes Pods, Services and Replication Controllers with Puppet "
date: 2015-12-17
slug: managing-kubernetes-pods-services-and-replication-controllers-with-puppet
url: /blog/2015/12/Managing-Kubernetes-Pods-Services-And-Replication-Controllers-With-Puppet
---
-->
<!--
_Todays guest post is written by Gareth Rushgrove, Senior Software Engineer at Puppet Labs, a leader in IT automation. Gareth tells us about a new Puppet module that helps manage resources in Kubernetes.&nbsp;_
People familiar with [Puppet](https://github.com/puppetlabs/puppet)&nbsp;might have used it for managing files, packages and users on host computers. But Puppet is first and foremost a configuration management tool, and config management is a much broader discipline than just managing host-level resources. A good definition of configuration management is that it aims to solve four related problems: identification, control, status accounting and verification and audit. These problems exist in the operation of any complex system, and with the new [Puppet Kubernetes module](https://forge.puppetlabs.com/garethr/kubernetes)&nbsp;were starting to look at how we can solve those problems for Kubernetes.
-->
_今天的嘉宾帖子由 IT 自动化领域的领导者 Puppet Labs 的高级软件工程师 Gareth Rushgrove 撰写。Gareth 向我们介绍了一个新的 Puppet 模块,它可以帮助管理 Kubernetes 中的资源。_
熟悉 [Puppet](https://github.com/puppetlabs/puppet) 的人可能用它来管理主机上的文件、软件包和用户。但 Puppet 首先是一个配置管理工具,而配置管理是一个比管理主机级资源宽泛得多的学科。配置管理的一个很好的定义是,它旨在解决四个相关的问题:标识、控制、状态记录以及验证与审计。这些问题存在于任何复杂系统的运维中,借助新的 [Puppet Kubernetes module](https://forge.puppetlabs.com/garethr/kubernetes),我们开始研究如何为 Kubernetes 解决这些问题。
<!--
### The Puppet Kubernetes Module
The Puppet Kubernetes module currently assumes you already have a Kubernetes cluster [up and running](http://kubernetes.io/gettingstarted/).&nbsp;Its focus is on managing the resources in Kubernetes, like Pods, Replication Controllers and Services, not (yet) on managing the underlying kubelet or etcd services. Heres a quick snippet of code describing a Pod in Puppets DSL.
-->
### Puppet Kubernetes 模块
Puppet Kubernetes 模块目前假设您已经有一个[启动并运行](http://kubernetes.io/gettingstarted/)的 Kubernetes 集群。它的重点是管理 Kubernetes 中的资源,如 Pods、Replication Controllers 和 Services而暂时不是管理底层的 kubelet 或 etcd 服务。下面是用 Puppet 的 DSL 描述一个 Pod 的简短代码片段。
<!--
```
kubernetes_pod { 'sample-pod':
ensure => present,
metadata => {
namespace => 'default',
},
spec => {
containers => [{
name => 'container-name',
image => 'nginx',
}]
},
```
}
-->
```
kubernetes_pod { 'sample-pod':
ensure => present,
metadata => {
namespace => 'default',
},
spec => {
containers => [{
name => 'container-name',
image => 'nginx',
}]
},
}
```
<!--
If youre familiar with the YAML file format, youll probably recognise the structure immediately. The interface is intentionally identical to aid conversion between different formats — in fact, the code powering this is autogenerated from the Kubernetes API Swagger definitions. Running the above code, assuming we save it as pod.pp, is as simple as:
```
puppet apply pod.pp
```
-->
如果您熟悉 YAML 文件格式,您可能会立即认出这个结构。该接口有意与之保持一致,以便在不同格式之间转换,事实上,支撑它的代码就是从 Kubernetes API 的 Swagger 定义自动生成的。假设我们将上面的代码保存为 pod.pp运行它就像下面这样简单
```
puppet apply pod.pp
```
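应用之后,可以用 kubectl 检查 Pod 是否真的创建出来了(这里假设本机的 kubectl 已指向同一个集群,下面的输出仅为示意):
```
$ kubectl get pod sample-pod
NAME         READY     STATUS    RESTARTS   AGE
sample-pod   1/1       Running   0          10s
```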
<!--
Authentication uses the standard kubectl configuration file. You can find complete [installation instructions in the module's README](https://github.com/garethr/garethr-kubernetes/blob/master/README.md).
Kubernetes has several resources, from Pods and Services to Replication Controllers and Service Accounts. You can see an example of the module managing these resources in the [Kubernetes guestbook sample in Puppet](https://puppetlabs.com/blog/kubernetes-guestbook-example-puppet)&nbsp;post. This demonstrates converting the canonical hello-world example to use Puppet code. -->
身份验证使用标准的 kubectl 配置文件。您可以在模块的 [README](https://github.com/garethr/garethr-kubernetes/blob/master/README.md) 中找到完整的安装说明。
Kubernetes 有多种资源,从 Pods、Services 到 Replication Controllers 和 Service Accounts。您可以在 [Puppet 中的 Kubernetes 留言簿示例](https://puppetlabs.com/blog/kubernetes-guestbook-example-puppet)一文中看到用该模块管理这些资源的例子。该示例演示了如何把经典的 hello-world 示例改写为 Puppet 代码。
<!--
One of the main advantages of using Puppet for this, however, is that you can create your own higher-level and more business-specific interfaces to Kubernetes-managed applications. For instance, for the guestbook, you could create something like the following:
```
guestbook { 'myguestbook':
redis_slave_replicas => 2,
frontend_replicas => 3,
redis_master_image => 'redis',
redis_slave_image => 'gcr.io/google_samples/gb-redisslave:v1',
frontend_image => 'gcr.io/google_samples/gb-frontend:v3',
}
```
-->
然而,使用 Puppet 的一个主要优点是,您可以为 Kubernetes 管理的应用程序创建自己的更高层、更贴近业务的接口。例如,对于留言簿,可以创建如下内容:
```
guestbook { 'myguestbook':
redis_slave_replicas => 2,
frontend_replicas => 3,
redis_master_image => 'redis',
redis_slave_image => 'gcr.io/google_samples/gb-redisslave:v1',
frontend_image => 'gcr.io/google_samples/gb-frontend:v3',
}
```
<!--
You can read more about using Puppets defined types, and see lots more code examples, in the Puppet blog post, [Building Your Own Abstractions for Kubernetes in Puppet](https://puppetlabs.com/blog/building-your-own-abstractions-kubernetes-puppet).
### Conclusions
The advantages of using Puppet rather than just the standard YAML files and kubectl are:
-->
您可以在 Puppet 博客文章[在 Puppet 中为 Kubernetes 构建自己的抽象](https://puppetlabs.com/blog/building-your-own-abstractions-kubernetes-puppet)中阅读更多关于使用 Puppet 定义类型defined types的信息并看到更多代码示例。
### 结论
使用 Puppet 而不仅仅是使用标准的 YAML 文件和 kubectl 的优点是:
<!--
- The ability to create your own abstractions to cut down on repetition and craft higher-level user interfaces, like the guestbook example above.&nbsp;
- Use of Puppets development tools for validating code and for writing unit tests.&nbsp;
- Integration with other tools such as Puppet Server, for ensuring that your model in code matches the state of your cluster, and with PuppetDB for storing reports and tracking changes.
- The ability to run the same code repeatedly against the Kubernetes API, to detect any changes or remediate configuration drift.&nbsp;
-->
- 能够创建自己的抽象,以减少重复和设计更高级别的用户界面,如上面的留言簿示例。
- 使用 Puppet 的开发工具验证代码和编写单元测试。
- 与 Puppet Server 等其他工具配合,以确保代码中的模型与集群的状态匹配,并与 PuppetDB 配合工作,以存储报告和跟踪更改。
- 能够针对 Kubernetes API 重复运行相同的代码以检测任何变更或修复配置漂移configuration drift
<!--
Its also worth noting that most large organisations will have very heterogenous environments, running a wide range of software and operating systems. Having a single toolchain that unifies those discrete systems can make adopting new technology like Kubernetes much easier.
-->
值得注意的是,大多数大型组织都将拥有非常异构的环境,运行各种各样的软件和操作系统。拥有统一这些离散系统的单一工具链可以使采用 Kubernetes 等新技术变得更加容易。
<!--
Its safe to say that Kubernetes provides an excellent set of primitives on which to build cloud-native systems. And with Puppet, you can address some of the operational and configuration management issues that come with running any complex system in production. [Let us know](mailto:gareth@puppetlabs.com)&nbsp;what you think if you try the module out, and what else youd like to see supported in the future.
&nbsp;-&nbsp;Gareth Rushgrove, Senior Software Engineer, Puppet Labs
-->
可以肯定地说Kubernetes 提供了一组出色的原语,可以在其上构建云原生系统。借助 Puppet您可以解决在生产环境中运行任何复杂系统都会遇到的一些运维和配置管理问题。如果您试用了该模块[请告诉我们](mailto:gareth@puppetlabs.com)您的想法,以及您希望将来看到哪些方面的支持。
- Gareth RushgrovePuppet Labs 高级软件工程师

View File

@ -0,0 +1,303 @@
<!--
---
title: " Simple leader election with Kubernetes and Docker "
date: 2016-01-11
slug: simple-leader-election-with-kubernetes
url: /blog/2016/01/Simple-Leader-Election-With-Kubernetes
---
#### Overview
-->
title: "Kubernetes 和 Docker 简单的 leader election"
date: 2016-01-11
slug: simple-leader-election-with-kubernetes
url: /blog/2016/01/Simple-Leader-Election-With-Kubernetes
<!--
Kubernetes simplifies the deployment and operational management of services running on clusters. However, it also simplifies the development of these services. In this post we'll see how you can use Kubernetes to easily perform leader election in your distributed application. Distributed applications usually replicate the tasks of a service for reliability and scalability, but often it is necessary to designate one of the replicas as the leader who is responsible for coordination among all of the replicas.
-->
Kubernetes 简化了集群上运行的服务的部署和运维管理。不过,它同样也简化了这些服务的开发。在本文中,我们将看到如何使用 Kubernetes 在分布式应用程序中轻松地执行 leader election。分布式应用程序通常出于可靠性和可伸缩性的考虑复制服务的任务但常常还需要指定其中一个副本作为领导者leader负责协调所有副本。
<!--
Typically in leader election, a set of candidates for becoming leader is identified. These candidates all race to declare themselves the leader. One of the candidates wins and becomes the leader. Once the election is won, the leader continually "heartbeats" to renew their position as the leader, and the other candidates periodically make new attempts to become the leader. This ensures that a new leader is identified quickly, if the current leader fails for some reason.
-->
通常在 leader election 中,会先确定一组候选者来竞争成为领导者。这些候选者都争相宣布自己为领导者,其中一位获胜并成为领导者。一旦赢得选举,领导者就会不断发送“心跳”来维持其领导者地位,其他候选者也会定期发起新的尝试来成为领导者。这样可以确保在当前领导者因某种原因失败时,能够快速选出新的领导者。
<!--
Implementing leader election usually requires either deploying software such as ZooKeeper, etcd or Consul and using it for consensus, or alternately, implementing a consensus algorithm on your own. We will see below that Kubernetes makes the process of using leader election in your application significantly easier.
#### Implementing leader election in Kubernetes
-->
实现 leader election 通常需要部署 ZooKeeper、etcd 或 Consul 等软件并用它来达成共识或者自己实现一个共识算法。我们将在下面看到Kubernetes 让在应用程序中使用 leader election 的过程变得简单得多。
#### 在 Kubernetes 中实现 leader election
<!--
The first requirement in leader election is the specification of the set of candidates for becoming the leader. Kubernetes already uses _Endpoints_ to represent a replicated set of pods that comprise a service, so we will re-use this same object. (aside: You might have thought that we would use _ReplicationControllers_, but they are tied to a specific binary, and generally you want to have a single leader even if you are in the process of performing a rolling update)
To perform leader election, we use two properties of all Kubernetes API objects:
-->
Leader election 的首要条件是指定一组成为领导者的候选者。Kubernetes 已经使用 _Endpoints_ 来表示组成一个服务的一组有副本的 pod因此我们将复用这个对象。题外话您可能以为我们会使用 _ReplicationControllers_,但它们与特定的二进制文件绑定,而且即使在执行滚动更新的过程中,通常您也希望只有一个领导者)
要执行 leader election我们利用所有 Kubernetes API 对象都具备的两个属性:
<!--
* ResourceVersions - Every API object has a unique ResourceVersion, and you can use these versions to perform compare-and-swap on Kubernetes objects
* Annotations - Every API object can be annotated with arbitrary key/value pairs to be used by clients.
Given these primitives, the code to use master election is relatively straightforward, and you can find it [here][1]. Let's run it ourselves.
-->
* ResourceVersions - 每个 API 对象都有一个唯一的 ResourceVersion您可以利用这些版本号对 Kubernetes 对象执行比较并交换compare-and-swap操作
* Annotations - 每个 API 对象都可以被客户端用任意的键/值对进行注解。
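下面用 kubectl 简单演示一下这种“比较并交换”的用法(仅作示意:`my-leader` 这个注解键是随意取的假设名称,`example` 这个 endpoints 对象需要事先存在,实际的 resourceVersion 以查询结果为准):
```
# 先取得对象当前的 resourceVersion
$ kubectl get endpoints example -o jsonpath='{.metadata.resourceVersion}'
# 仅当 resourceVersion 仍为该值时,下面的注解更新才会成功,否则会因冲突而失败
$ kubectl annotate endpoints example my-leader=pod-a --resource-version=<上一步得到的值> --overwrite
```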
有了这些原语,使用 master election 的代码就相对简单了,您可以在[这里][1]找到它。让我们自己来运行一下。
<!--
```
$ kubectl run leader-elector --image=gcr.io/google_containers/leader-elector:0.4 --replicas=3 -- --election=example
```
This creates a leader election set with 3 replicas:
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
leader-elector-inmr1 1/1 Running 0 13s
leader-elector-qkq00 1/1 Running 0 13s
leader-elector-sgwcq 1/1 Running 0 13s
```
-->
```
$ kubectl run leader-elector --image=gcr.io/google_containers/leader-elector:0.4 --replicas=3 -- --election=example
```
这将创建一个包含3个副本的 leader election 集合:
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
leader-elector-inmr1 1/1 Running 0 13s
leader-elector-qkq00 1/1 Running 0 13s
leader-elector-sgwcq 1/1 Running 0 13s
```
<!--
To see which pod was chosen as the leader, you can access the logs of one of the pods, substituting one of your own pod's names in place of
```
${pod_name}, (e.g. leader-elector-inmr1 from the above)
$ kubectl logs -f ${name}
leader is (leader-pod-name)
```
… Alternately, you can inspect the endpoints object directly:
-->
要查看哪个 pod 被选为领导者,您可以查看其中一个 pod 的日志,把下面的 ${pod_name} 替换为您自己的某个 pod 的名称:
```
${pod_name}, (e.g. leader-elector-inmr1 from the above)
$ kubectl logs -f ${name}
leader is (leader-pod-name)
```
…或者,可以直接检查 endpoints 对象:
<!--
_'example' is the name of the candidate set from the above kubectl run … command_
```
$ kubectl get endpoints example -o yaml
```
Now to validate that leader election actually works, in a different terminal, run:
```
$ kubectl delete pods (leader-pod-name)
```
-->
_'example' 是上面 kubectl run … 命令中候选集的名称_
```
$ kubectl get endpoints example -o yaml
```
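领导者的信息一般会以注解annotation的形式记录在这个 endpoints 对象的 metadata 中(具体的注解键取决于所用 leader-elector 实现的版本,这里不作假定)。如果只想查看注解部分,可以用 jsonpath
```
$ kubectl get endpoints example -o jsonpath='{.metadata.annotations}'
```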
现在,要验证 leader election 是否实际有效,请在另一个终端运行:
```
$ kubectl delete pods (leader-pod-name)
```
<!--
This will delete the existing leader. Because the set of pods is being managed by a replication controller, a new pod replaces the one that was deleted, ensuring that the size of the replicated set is still three. Via leader election one of these three pods is selected as the new leader, and you should see the leader failover to a different pod. Because pods in Kubernetes have a _grace period_ before termination, this may take 30-40 seconds.
The leader-election container provides a simple webserver that can serve on any address (e.g. http://localhost:4040). You can test this out by deleting the existing leader election group and creating a new one where you additionally pass in a --http=(host):(port) specification to the leader-elector image. This causes each member of the set to serve information about the leader via a webhook.
-->
这将删除现有的领导者。由于这组 pod 由 replication controller 管理,新的 pod 会替换被删除的 pod确保副本集合的大小仍为 3。通过 leader election这三个 pod 中的一个被选为新的领导者,您应该会看到领导者故障转移到了另一个 pod。由于 Kubernetes 中的 pod 在终止前有一个 _宽限期grace period_,这可能需要 30-40 秒。
Leader-election 容器提供了一个简单的 web 服务器,可以在任意地址上提供服务(例如 http://localhost:4040。您可以通过删除现有的 leader election 组并创建一个新组来测试这一点,创建新组时额外向 leader-elector 镜像传递 --http=(host):(port) 参数。这会使集合中的每个成员通过 webhook 提供有关领导者的信息。
<!--
```
# delete the old leader elector group
$ kubectl delete rc leader-elector
# create the new group, note the --http=localhost:4040 flag
$ kubectl run leader-elector --image=gcr.io/google_containers/leader-elector:0.4 --replicas=3 -- --election=example --http=0.0.0.0:4040
# create a proxy to your Kubernetes api server
$ kubectl proxy
```
-->
```
# delete the old leader elector group
$ kubectl delete rc leader-elector
# create the new group, note the --http=localhost:4040 flag
$ kubectl run leader-elector --image=gcr.io/google_containers/leader-elector:0.4 --replicas=3 -- --election=example --http=0.0.0.0:4040
# create a proxy to your Kubernetes api server
$ kubectl proxy
```
<!--
You can then access:
http://localhost:8001/api/v1/proxy/namespaces/default/pods/(leader-pod-name):4040/
And you will see:
```
{"name":"(name-of-leader-here)"}
```
#### Leader election with sidecars
-->
然后您可以访问:
http://localhost:8001/api/v1/proxy/namespaces/default/pods/(leader-pod-name):4040/
你会看到:
```
{"name":"(name-of-leader-here)"}
```
#### 使用 sidecar 的 leader election
<!--
Ok, that's great, you can do leader election and find out the leader over HTTP, but how can you use it from your own application? This is where the notion of sidecars come in. In Kubernetes, Pods are made up of one or more containers. Often times, this means that you add sidecar containers to your main application to make up a Pod. (for a much more detailed treatment of this subject see my earlier blog post).
The leader-election container can serve as a sidecar that you can use from your own application. Any container in the Pod that's interested in who the current master is can simply access http://localhost:4040 and they'll get back a simple JSON object that contains the name of the current master. Since all containers in a Pod share the same network namespace, there's no service discovery required!
-->
好吧,这很棒,您可以通过 HTTP 进行 leader election 并查出领导者是谁,但如何在自己的应用程序中使用它呢?这就是 sidecar 概念的用武之地。在 Kubernetes 中Pod 由一个或多个容器组成。通常,这意味着您把 sidecar 容器加到主应用程序旁边,共同组成一个 Pod关于这个主题更详细的讨论请参阅我之前的博客文章
Leader-election 容器可以作为一个 sidecar供您自己的应用程序使用。Pod 中任何想知道当前 master 是谁的容器,只需访问 http://localhost:4040就会得到一个包含当前 master 名称的简单 JSON 对象。由于 Pod 中的所有容器共享同一个网络命名空间,因此不需要任何服务发现!
<!--
For example, here is a simple Node.js application that connects to the leader election sidecar and prints out whether or not it is currently the master. The leader election sidecar sets its identifier to `hostname` by default.
```
var http = require('http');
// This will hold info about the current master
var master = {};
// The web handler for our nodejs application
var handleRequest = function(request, response) {
response.writeHead(200);
response.end("Master is " + master.name);
};
// A callback that is used for our outgoing client requests to the sidecar
var cb = function(response) {
var data = '';
response.on('data', function(piece) { data = data + piece; });
response.on('end', function() { master = JSON.parse(data); });
};
// Make an async request to the sidecar at http://localhost:4040
var updateMaster = function() {
var req = http.get({host: 'localhost', path: '/', port: 4040}, cb);
req.on('error', function(e) { console.log('problem with request: ' + e.message); });
req.end();
};
/ / Set up regular updates
updateMaster();
setInterval(updateMaster, 5000);
// set up the web server
var www = http.createServer(handleRequest);
www.listen(8080);
```
Of course, you can use this sidecar from any language that you choose that supports HTTP and JSON.
-->
例如,这里有一个简单的 Node.js 应用程序,它连接到 leader election sidecar 并打印出它当前是否是 master。默认情况下leader election sidecar 将其标识符设置为 `hostname`
```
var http = require('http');
// This will hold info about the current master
var master = {};
// The web handler for our nodejs application
var handleRequest = function(request, response) {
response.writeHead(200);
response.end("Master is " + master.name);
};
// A callback that is used for our outgoing client requests to the sidecar
var cb = function(response) {
var data = '';
response.on('data', function(piece) { data = data + piece; });
response.on('end', function() { master = JSON.parse(data); });
};
// Make an async request to the sidecar at http://localhost:4040
var updateMaster = function() {
var req = http.get({host: 'localhost', path: '/', port: 4040}, cb);
req.on('error', function(e) { console.log('problem with request: ' + e.message); });
req.end();
};
// Set up regular updates
updateMaster();
setInterval(updateMaster, 5000);
// set up the web server
var www = http.createServer(handleRequest);
www.listen(8080);
```
当然,您可以从任何支持 HTTP 和 JSON 的语言中使用这个 sidecar。
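例如,一个只有 shell 和 curl 的容器也可以在同一个 Pod 里直接查询这个 sidecar假设它像上文那样监听在 4040 端口,返回值仅为示意):
```
$ curl -s http://localhost:4040
{"name":"(name-of-leader-here)"}
```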
<!--
#### Conclusion
Hopefully I've shown you how easy it is to build leader election for your distributed application using Kubernetes. In future installments we'll show you how Kubernetes is making building distributed systems even easier. In the meantime, head over to [Google Container Engine][2] or [kubernetes.io][3] to get started with Kubernetes.
[1]: https://github.com/kubernetes/contrib/pull/353
[2]: https://cloud.google.com/container-engine/
[3]: http://kubernetes.io/
-->
#### 结论
希望我已经向您展示了使用 Kubernetes 为您的分布式应用程序构建 leader election 是多么容易。在以后的部分中,我们将向您展示 Kubernetes 如何使构建分布式系统变得更加容易。同时,转到[Google Container Engine][2]或[kubernetes.io][3]开始使用Kubernetes。
[1]: https://github.com/kubernetes/contrib/pull/353
[2]: https://cloud.google.com/container-engine/
[3]: http://kubernetes.io/

View File

@ -0,0 +1,79 @@
---
title: " 为什么 Kubernetes 不用 libnetwork "
date: 2016-01-14
slug: why-kubernetes-doesnt-use-libnetwork
url: /blog/2016/01/Why-Kubernetes-Doesnt-Use-Libnetwork
---
<!-- ---
title: " Why Kubernetes doesnt use libnetwork "
date: 2016-01-14
slug: why-kubernetes-doesnt-use-libnetwork
url: /blog/2016/01/Why-Kubernetes-Doesnt-Use-Libnetwork
--- -->
<!-- Kubernetes has had a very basic form of network plugins since before version 1.0 was released — around the same time as Docker's [libnetwork](https://github.com/docker/libnetwork) and Container Network Model ([CNM](https://github.com/docker/libnetwork/blob/master/docs/design.md)) was introduced. Unlike libnetwork, the Kubernetes plugin system still retains its "alpha" designation. Now that Docker's network plugin support is released and supported, an obvious question we get is why Kubernetes has not adopted it yet. After all, vendors will almost certainly be writing plugins for Docker — we would all be better off using the same drivers, right? -->
早在 1.0 版本发布之前Kubernetes 就已经有了一种非常基础的网络插件形式,大约与 Docker 的 [libnetwork](https://github.com/docker/libnetwork) 和容器网络模型([CNM](https://github.com/docker/libnetwork/blob/master/docs/design.md))引入的时间相当。与 libnetwork 不同Kubernetes 的插件系统仍然保留着 "alpha" 标记。既然 Docker 的网络插件支持已经发布并得到支持,我们经常被问到的一个明显的问题是,为什么 Kubernetes 还没有采用它。毕竟,供应商几乎肯定会为 Docker 编写插件,大家都用同样的驱动程序不是更好吗?
<!-- Before going further, it's important to remember that Kubernetes is a system that supports multiple container runtimes, of which Docker is just one. Configuring networking is a facet of each runtime, so when people ask "will Kubernetes support CNM?" what they really mean is "will kubernetes support CNM drivers with the Docker runtime?" It would be great if we could achieve common network support across runtimes, but thats not an explicit goal. -->
在进一步说明之前,重要的是记住 Kubernetes 是一个支持多种容器运行时的系统, Docker 只是其中之一。配置网络只是每一个运行时的一个方面,所以当人们问起“ Kubernetes 会支持CNM吗他们真正的意思是“ Kubernetes 会支持 Docker 运行时的 CNM 驱动吗?”如果我们能够跨运行时实现通用的网络支持会很棒,但这不是一个明确的目标。
<!-- Indeed, Kubernetes has not adopted CNM/libnetwork for the Docker runtime. In fact, weve been investigating the alternative Container Network Interface ([CNI](https://github.com/appc/cni/blob/master/SPEC.md)) model put forth by CoreOS and part of the App Container ([appc](https://github.com/appc)) specification. Why? There are a number of reasons, both technical and non-technical. -->
实际上, Kubernetes 还没有为 Docker 运行时采用 CNM/libnetwork 。事实上,我们一直在研究 CoreOS 提出的替代 Container Network Interface ([CNI](https://github.com/appc/cni/blob/master/SPEC.md)) 模型以及 App Container ([appc](https://github.com/appc)) 规范的一部分。为什么我们要这么做?有很多技术和非技术的原因。
<!-- First and foremost, there are some fundamental assumptions in the design of Docker's network drivers that cause problems for us. -->
首先Docker 的网络驱动程序设计中存在一些基本假设,这些假设会给我们带来问题。
<!-- Docker has a concept of "local" and "global" drivers. Local drivers (such as "bridge") are machine-centric and dont do any cross-node coordination. Global drivers (such as "overlay") rely on [libkv](https://github.com/docker/libkv) (a key-value store abstraction) to coordinate across machines. This key-value store is a another plugin interface, and is very low-level (keys and values, no semantic meaning). To run something like Docker's overlay driver in a Kubernetes cluster, we would either need cluster admins to run a whole different instance of [consul](https://github.com/hashicorp/consul), [etcd](https://github.com/coreos/etcd) or [zookeeper](https://zookeeper.apache.org/) (see [multi-host networking](https://docs.docker.com/engine/userguide/networking/get-started-overlay/)), or else we would have to provide our own libkv implementation that was backed by Kubernetes. -->
Docker 有“本地local”和“全局global”驱动程序的概念。本地驱动程序例如 "bridge")以单机为中心,不做任何跨节点协调。全局驱动程序(例如 "overlay")依赖 [libkv](https://github.com/docker/libkv)(一个键值存储抽象库)来跨机器协调。这个键值存储是另一个插件接口,而且非常底层(只有键和值,没有语义含义)。要在 Kubernetes 集群中运行类似 Docker 的 overlay 这样的驱动程序,我们要么需要集群管理员另外运行一整套 [consul](https://github.com/hashicorp/consul)、[etcd](https://github.com/coreos/etcd) 或 [zookeeper](https://zookeeper.apache.org/) 实例(参见 [multi-host networking](https://docs.docker.com/engine/userguide/networking/get-started-overlay/)),要么就必须提供以 Kubernetes 为后端的我们自己的 libkv 实现。
<!-- The latter sounds attractive, and we tried to implement it, but the libkv interface is very low-level, and the schema is defined internally to Docker. We would have to either directly expose our underlying key-value store or else offer key-value semantics (on top of our structured API which is itself implemented on a key-value system). Neither of those are very attractive for performance, scalability and security reasons. The net result is that the whole system would significantly be more complicated, when the goal of using Docker networking is to simplify things. -->
后者听起来很有吸引力,我们也尝试实现过,但 libkv 接口非常底层其模式schema是在 Docker 内部定义的。我们要么必须直接暴露底层的键值存储,要么必须提供键值语义(构建在我们自己的结构化 API 之上,而这个 API 本身又是实现在一个键值系统之上的)。出于性能、可伸缩性和安全性的考虑,这两种方案都不太有吸引力。最终结果是,整个系统会明显变得更复杂,而使用 Docker 网络的目标本来是简化事情。
<!-- For users that are willing and able to run the requisite infrastructure to satisfy Docker global drivers and to configure Docker themselves, Docker networking should "just work." Kubernetes will not get in the way of such a setup, and no matter what direction the project goes, that option should be available. For default installations, though, the practical conclusion is that this is an undue burden on users and we therefore cannot use Docker's global drivers (including "overlay"), which eliminates a lot of the value of using Docker's plugins at all. -->
对于愿意并且能够运行必需的基础架构以满足 Docker 全局驱动程序并自己配置 Docker 的用户, Docker 网络应该“正常工作。” Kubernetes 不会妨碍这样的设置,无论项目的方向如何,该选项都应该可用。但是对于默认安装,实际的结论是这对用户来说是一个不应有的负担,因此我们不能使用 Docker 的全局驱动程序(包括 "overlay" ),这消除了使用 Docker 插件的很多价值。
<!-- Docker's networking model makes a lot of assumptions that arent valid for Kubernetes. In docker versions 1.8 and 1.9, it includes a fundamentally flawed implementation of "discovery" that results in corrupted `/etc/hosts` files in containers ([docker #17190](https://github.com/docker/docker/issues/17190)) — and this cannot be easily turned off. In version 1.10 Docker is planning to [bundle a new DNS server](https://github.com/docker/docker/issues/17195), and its unclear whether this will be able to be turned off. Container-level naming is not the right abstraction for Kubernetes — we already have our own concepts of service naming, discovery, and binding, and we already have our own DNS schema and server (based on the well-established [SkyDNS](https://github.com/skynetservices/skydns)). The bundled solutions are not sufficient for our needs but are not disableable. -->
Docker 的网络模型做出了许多对 Kubernetes 无效的假设。在 docker 1.8 和 1.9 版本中,它包含一个从根本上有缺陷的“发现”实现,导致容器中的 `/etc/hosts` 文件损坏 ([docker #17190](https://github.com/docker/docker/issues/17190)) - 并且这不容易被关闭。在 1.10 版本中Docker 计划 [捆绑一个新的DNS服务器](https://github.com/docker/docker/issues/17195),目前还不清楚是否可以关闭它。容器级命名不是 Kubernetes 的正确抽象 - 我们已经有了自己的服务命名,发现和绑定概念,并且我们已经有了自己的 DNS 模式和服务器(基于完善的 [SkyDNS](https://github.com/skynetservices/skydns) )。捆绑的解决方案不足以满足我们的需求,但不能禁用。
<!-- Orthogonal to the local/global split, Docker has both in-process and out-of-process ("remote") plugins. We investigated whether we could bypass libnetwork (and thereby skip the issues above) and drive Docker remote plugins directly. Unfortunately, this would mean that we could not use any of the Docker in-process plugins, "bridge" and "overlay" in particular, which again eliminates much of the utility of libnetwork. -->
与本地/全局拆分正交, Docker 具有进程内和进程外( "remote" )插件。我们调查了是否可以绕过 libnetwork (从而跳过上面的问题)并直接驱动 Docker remote 插件。不幸的是,这意味着我们无法使用任何 Docker 进程中的插件,特别是 "bridge" 和 "overlay",这再次消除了 libnetwork 的大部分功能。
<!-- On the other hand, CNI is more philosophically aligned with Kubernetes. It's far simpler than CNM, doesn't require daemons, and is at least plausibly cross-platform (CoreOSs [rkt](https://coreos.com/rkt/docs/) container runtime supports it). Being cross-platform means that there is a chance to enable network configurations which will work the same across runtimes (e.g. Docker, Rocket, Hyper). It follows the UNIX philosophy of doing one thing well. -->
另一方面CNI 在理念上与 Kubernetes 更加一致。它远比 CNM 简单,不需要守护进程,而且至少在道理上是跨平台的CoreOS 的 [rkt](https://coreos.com/rkt/docs/) 容器运行时支持它)。跨平台意味着有机会让同样的网络配置在不同运行时(例如 Docker、Rocket、Hyper之间以相同方式工作。它遵循 UNIX“做好一件事”的哲学。
<!-- Additionally, it's trivial to wrap a CNI plugin and produce a more customized CNI plugin — it can be done with a simple shell script. CNM is much more complex in this regard. This makes CNI an attractive option for rapid development and iteration. Early prototypes have proven that it's possible to eject almost 100% of the currently hard-coded network logic in kubelet into a plugin. -->
此外,封装一个 CNI 插件、生成一个定制化程度更高的 CNI 插件非常简单用一个简单的 shell 脚本就能完成见下面的示意。CNM 在这方面要复杂得多。这使得 CNI 成为快速开发和迭代的一个有吸引力的选择。早期的原型已经证明,可以把 kubelet 中目前几乎 100% 的硬编码网络逻辑剥离到插件中。
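下面是一个假设性的极简示意并非任何官方插件展示这种“封装”大致的样子脚本把 stdin 上的网络配置原样转交给真正的 bridge 插件CNI 运行时通过 CNI_COMMAND、CNI_NETNS 等环境变量传入的其余参数会被子进程自动继承(插件路径 /opt/cni/bin 也只是常见约定):
```
#!/bin/sh
# 假设性的 CNI 封装插件示意:先记一条日志,再把调用委托给真正的 bridge 插件
set -e
conf=$(cat)                     # CNI 通过 stdin 传入网络配置JSON
echo "$(date) cmd=${CNI_COMMAND} container=${CNI_CONTAINERID}" >> /var/log/cni-wrapper.log
# 环境变量CNI_COMMAND、CNI_NETNS、CNI_IFNAME 等)原样传给子进程;
# bridge 插件写到 stdout 的结果JSON会直接返回给调用方
printf '%s' "$conf" | /opt/cni/bin/bridge
```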
<!-- We investigated [writing a "bridge" CNM driver](https://groups.google.com/forum/#!topic/kubernetes-sig-network/5MWRPxsURUw) for Docker that ran CNI drivers. This turned out to be very complicated. First, the CNM and CNI models are very different, so none of the "methods" lined up. We still have the global vs. local and key-value issues discussed above. Assuming this driver would declare itself local, we have to get info about logical networks from Kubernetes. -->
我们研究过为 Docker [编写一个 "bridge" CNM 驱动程序](https://groups.google.com/forum/#!topic/kubernetes-sig-network/5MWRPxsURUw),由它来运行 CNI 驱动程序。结果证明这非常复杂。首先CNM 和 CNI 的模型差异很大,没有任何“方法”能一一对应。我们仍然面临上面讨论的全局与本地、以及键值存储的问题。假设这个驱动程序声明自己是本地的,我们还必须从 Kubernetes 获取有关逻辑网络的信息。
<!-- Unfortunately, Docker drivers are hard to map to other control planes like Kubernetes. Specifically, drivers are not told the name of the network to which a container is being attached — just an ID that Docker allocates internally. This makes it hard for a driver to map back to any concept of network that exists in another system. -->
不幸的是, Docker 驱动程序很难映射到像 Kubernetes 这样的其他控制平面。具体来说,驱动程序不会被告知连接容器的网络名称 - 只是 Docker 内部分配的 ID 。这使得驱动程序很难映射回另一个系统中存在的任何网络概念。
<!-- This and other issues have been brought up to Docker developers by network vendors, and are usually closed as "working as intended" ([libnetwork #139](https://github.com/docker/libnetwork/issues/139), [libnetwork #486](https://github.com/docker/libnetwork/issues/486), [libnetwork #514](https://github.com/docker/libnetwork/pull/514), [libnetwork #865](https://github.com/docker/libnetwork/issues/865), [docker #18864](https://github.com/docker/docker/issues/18864)), even though they make non-Docker third-party systems more difficult to integrate with. Throughout this investigation Docker has made it clear that theyre not very open to ideas that deviate from their current course or that delegate control. This is very worrisome to us, since Kubernetes complements Docker and adds so much functionality, but exists outside of Docker itself. -->
这个问题和其他问题已由网络供应商提出给 Docker 开发人员,并且通常关闭为“按预期工作”,([libnetwork #139](https://github.com/docker/libnetwork/issues/139), [libnetwork #486](https://github.com/docker/libnetwork/issues/486), [libnetwork #514](https://github.com/docker/libnetwork/pull/514), [libnetwork #865](https://github.com/docker/libnetwork/issues/865), [docker #18864](https://github.com/docker/docker/issues/18864)),即使它们使非 Docker 第三方系统更难以与之集成。在整个调查过程中, Docker 明确表示他们对偏离当前路线或委托控制的想法不太欢迎。这对我们来说非常令人担忧,因为 Kubernetes 补充了 Docker 并增加了很多功能,但它存在于 Docker 之外。
<!-- For all of these reasons we have chosen to invest in CNI as the Kubernetes plugin model. There will be some unfortunate side-effects of this. Most of them are relatively minor (for example, `docker inspect` will not show an IP address), but some are significant. In particular, containers started by `docker run` might not be able to communicate with containers started by Kubernetes, and network integrators will have to provide CNI drivers if they want to fully integrate with Kubernetes. On the other hand, Kubernetes will get simpler and more flexible, and a lot of the ugliness of early bootstrapping (such as configuring Docker to use our bridge) will go away. -->
出于上述所有原因,我们选择把精力投入到 CNI 上,作为 Kubernetes 的插件模型。这会带来一些令人遗憾的副作用。大多数副作用相对较小(例如 `docker inspect` 不会显示 IP 地址),但有些是重要的。特别是,由 `docker run` 启动的容器可能无法与 Kubernetes 启动的容器通信,而且如果网络集成商想与 Kubernetes 完全集成,就必须提供 CNI 驱动程序。另一方面Kubernetes 会变得更简单、更灵活,早期引导过程中的许多丑陋之处(例如配置 Docker 使用我们的网桥)也将消失。
<!-- As we proceed down this path, well certainly keep our eyes and ears open for better ways to integrate and simplify. If you have thoughts on how we can do that, we really would like to hear them — find us on [slack](http://slack.k8s.io/) or on our [network SIG mailing-list](https://groups.google.com/forum/#!forum/kubernetes-sig-network). -->
当我们沿着这条道路前进时,我们肯定会保持眼睛和耳朵的开放,以便更好地整合和简化。如果您对我们如何做到这一点有所想法,我们真的希望听到它们 - 在 [slack](http://slack.k8s.io/) 或者 [network SIG mailing-list](https://groups.google.com/forum/#!forum/kubernetes-sig-network) 找到我们。
Tim HockinGoogle 软件工程师

View File

@ -0,0 +1,84 @@
---
title: " KubeCon EU 2016伦敦 Kubernetes 社区 "
date: 2016-02-24
slug: kubecon-eu-2016-kubernetes-community-in
url: /blog/2016/02/Kubecon-Eu-2016-Kubernetes-Community-In
---
<!--
---
title: " KubeCon EU 2016: Kubernetes Community in London "
date: 2016-02-24
slug: kubecon-eu-2016-kubernetes-community-in
url: /blog/2016/02/Kubecon-Eu-2016-Kubernetes-Community-In
---
-->
<!--
KubeCon EU 2016 is the inaugural [European Kubernetes](http://kubernetes.io/) community conference that follows on the American launch in November 2015. KubeCon is fully dedicated to education and community engagement for[Kubernetes](http://kubernetes.io/) enthusiasts, production users and the surrounding ecosystem.
-->
KubeCon EU 2016 是首届[欧洲 Kubernetes](http://kubernetes.io/) 社区会议,紧随 2015 年 11 月召开的北美会议。KubeCon 致力于为 [Kubernetes](http://kubernetes.io/) 爱好者、产品用户和周围的生态系统提供教育和社区参与。
<!--
Come join us in London and hang out with hundreds from the Kubernetes community and experience a wide variety of deep technical expert talks and use cases.
-->
快来伦敦加入我们,与 Kubernetes 社区的数百名成员相聚,体验各种深入的技术专家讲座和用例。
<!--
Dont miss these great speaker sessions at the conference:
-->
不要错过这些优质的演讲:
<!--
* “Kubernetes Hardware Hacks: Exploring the Kubernetes API Through Knobs, Faders, and Sliders” by Ian Lewis and Brian Dorsey, Developer Advocate, Google -* [http://sched.co/6Bl3](http://sched.co/6Bl3)
* “rktnetes: what's new with container runtimes and Kubernetes” by Jonathan Boulle, Developer and Team Lead at CoreOS -* [http://sched.co/6BY7](http://sched.co/6BY7)
* “Kubernetes Documentation: Contributing, fixing issues, collecting bounties” by John Mulhausen, Lead Technical Writer, Google -* [http://sched.co/6BUP](http://sched.co/6BUP)&nbsp;
* “[What is OpenStack's role in a Kubernetes world?](https://kubeconeurope2016.sched.org/event/6BYC/what-is-openstacks-role-in-a-kubernetes-world?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)” By Thierry Carrez, Director of Engineering, OpenStack Foundation -* http://sched.co/6BYC
* “A Practical Guide to Container Scheduling” by Mandy Waite, Developer Advocate, Google -* [http://sched.co/6BZa](http://sched.co/6BZa)
* “[Kubernetes in Production in The New York Times newsroom](https://kubeconeurope2016.sched.org/event/67f2/kubernetes-in-production-in-the-new-york-times-newsroom?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)” Eric Lewis, Web Developer, New York Times -* [http://sched.co/67f2](http://sched.co/67f2)
* “[Creating an Advanced Load Balancing Solution for Kubernetes with NGINX](https://kubeconeurope2016.sched.org/event/6Bc9/creating-an-advanced-load-balancing-solution-for-kubernetes-with-nginx?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)” by Andrew Hutchings, Technical Product Manager, NGINX -* http://sched.co/6Bc9
* And many more http://kubeconeurope2016.sched.org/
-->
* “Kubernetes 硬件黑客:通过旋钮、推杆和滑块探索 Kubernetes API”,演讲者:Ian Lewis 和 Brian Dorsey,谷歌开发者布道师 - [http://sched.co/6Bl3](http://sched.co/6Bl3)
* “rktnetes:容器运行时和 Kubernetes 的新进展”,演讲者:Jonathan Boulle,CoreOS 开发者及团队负责人 - [http://sched.co/6BY7](http://sched.co/6BY7)
* “Kubernetes 文档:贡献、修复问题、领取奖励”,演讲者:John Mulhausen,谷歌首席技术作家 - [http://sched.co/6BUP](http://sched.co/6BUP)
* “[OpenStack 在 Kubernetes 的世界中扮演什么角色?](https://kubeconeurope2016.sched.org/event/6BYC/what-is-openstacks-role-in-a-kubernetes-world?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)”,演讲者:Thierry Carrez,OpenStack 基金会工程总监 - http://sched.co/6BYC
* “容器调度实用指南”,演讲者:Mandy Waite,谷歌开发者布道师 - [http://sched.co/6BZa](http://sched.co/6BZa)
* “[Kubernetes 在《纽约时报》编辑部的生产实践](https://kubeconeurope2016.sched.org/event/67f2/kubernetes-in-production-in-the-new-york-times-newsroom?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)”,演讲者:Eric Lewis,《纽约时报》网站开发人员 - [http://sched.co/67f2](http://sched.co/67f2)
* “[使用 NGINX 为 Kubernetes 创建高级负载均衡解决方案](https://kubeconeurope2016.sched.org/event/6Bc9/creating-an-advanced-load-balancing-solution-for-kubernetes-with-nginx?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)”,演讲者:Andrew Hutchings,NGINX 技术产品经理 - http://sched.co/6Bc9
* 更多讲座请见 http://kubeconeurope2016.sched.org/
<!--
Get your KubeCon EU [tickets here](https://ti.to/kubecon/kubecon-eu-2016).
-->
[在这里](https://ti.to/kubecon/kubecon-eu-2016)获取您的 KubeCon EU 门票。
<!--
Venue Location: CodeNode * 10 South Pl, London, United Kingdom
Accommodations: [hotels](https://skillsmatter.com/contact-us#hotels)
Website: [kubecon.io](https://www.kubecon.io/)
Twitter: [@KubeConio](https://twitter.com/kubeconio) #KubeCon
Google is a proud Diamond sponsor of KubeCon EU 2016. Come to London next month, March 10th & 11th, and visit booth #13 to learn all about Kubernetes, Google Container Engine (GKE) and Google Cloud Platform!
-->
会场地址:CodeNode,英国伦敦南广场 10 号(10 South Pl, London, United Kingdom)
酒店住宿:[酒店](https://skillsmatter.com/contact-us#hotels)
网站:[kubecon.io](https://www.kubecon.io/)
推特:[@KubeConio](https://twitter.com/kubeconio) #KubeCon
谷歌是 KubeCon EU 2016 的钻石赞助商。下个月 3 月 10 日至 11 日来伦敦,参观 13 号展位,了解 Kubernetes、Google Container Engine(GKE)和 Google Cloud Platform 的所有信息!
<!--
_KubeCon is organized by KubeAcademy, LLC, a community-driven group of developers focused on the education of developers and the promotion of Kubernetes._
-* Sarah Novotny, Kubernetes Community Manager, Google
-->
_KubeCon 由 KubeAcademy, LLC 组织,这是一个由社区驱动的开发者团体,专注于开发者教育和 Kubernetes 的推广。_
-- Sarah Novotny,谷歌 Kubernetes 社区经理


@ -0,0 +1,117 @@
---
title: " Kubernetes 社区会议记录 - 20160204 "
date: 2016-02-09
slug: kubernetes-community-meeting-notes
url: /blog/2016/02/Kubernetes-Community-Meeting-Notes
---
<!--
---
title: " Kubernetes community meeting notes - 20160204 "
date: 2016-02-09
slug: kubernetes-community-meeting-notes
url: /blog/2016/02/Kubernetes-Community-Meeting-Notes
---
-->
<!--
#### February 4th - rkt demo (congratulations on the 1.0, CoreOS!), eBay puts k8s on Openstack and considers Openstack on k8s, SIGs, and flaky test surge makes progress.
-->
#### 2月4日 - rkt 演示(祝贺 CoreOS 发布 1.0!)、eBay 在 OpenStack 上运行 k8s 并考虑在 k8s 上运行 OpenStack、SIG 报告,以及不稳定测试(flaky test)治理取得进展。
<!--
The Kubernetes contributing community meets most Thursdays at 10:00PT to discuss the project's status via a videoconference. Here are the notes from the latest meeting.
-->
Kubernetes 贡献者社区在大多数周四的 10:00(太平洋时间)举行视频会议,讨论项目状态。以下是最近一次会议的记录。
<!--
* Note taker: Rob Hirschfeld
* Demo (20 min): CoreOS rkt + Kubernetes [Shaya Potter]
* expect to see integrations w/ rkt & k8s in the coming months ("rkt-netes"). not integrated into the v1.2 release.
* Shaya gave a demo (8 minutes into meeting for video reference)
* CLI of rkt shown spinning up containers
* [note: audio is garbled at points]
* Discussion about integration w/ k8s & rkt
* rkt community sync next week: https://groups.google.com/forum/#!topic/rkt-dev/FlwZVIEJGbY
* Dawn Chen:
* The remaining issues of integrating rkt with kubernetes: 1) cadivsor 2) DNS 3) bugs related to logging
* But need more work on e2e test suites
-->
* 记录员:Rob Hirschfeld
* 演示(20 分钟):CoreOS rkt + Kubernetes [Shaya Potter]
* 预计在未来几个月内会看到 rkt 与 k8s 的集成(“rkt-netes”),但未集成到 v1.2 版本中。
* Shaya 做了一个演示(参见会议视频第 8 分钟处)
* 演示了使用 rkt 的 CLI 启动容器
* [注意:音频部分片段不清晰]
* 关于 k8s 与 rkt 集成的讨论
* 下周 rkt 社区同步会议:https://groups.google.com/forum/#!topic/rkt-dev/FlwZVIEJGbY
* Dawn Chen:
* 将 rkt 与 kubernetes 集成的剩余问题:1) cadvisor 2) DNS 3) 与日志记录相关的错误
* 但还需要在 e2e 测试套件上做更多的工作
<!--
* Use Case (10 min): eBay k8s on OpenStack and OpenStack on k8s [Ashwin Raveendran]
* eBay is currently running Kubernetes on OpenStack
* Goal for eBay is to manage the OpenStack control plane w/ k8s. Goal would be to achieve upgrades
* OpenStack Kolla creates containers for the control plane. Uses Ansible+Docker for management of the containers.
* Working on k8s control plan management - Saltstack is proving to be a management challenge at the scale they want to operate. Looking for automated management of the k8s control plane.
-->
* 用例(10 分钟):eBay 在 OpenStack 上运行 k8s,以及在 k8s 上运行 OpenStack [Ashwin Raveendran]
* eBay 目前正在 OpenStack 上运行 Kubernetes
* eBay 的目标是用 k8s 管理 OpenStack 控制平面,以便实现升级。
* OpenStack Kolla 为控制平面创建容器,使用 Ansible+Docker 来管理这些容器。
* 正在研究 k8s 控制平面的管理 - 在他们想要运营的规模下,Saltstack 被证明是一个管理上的挑战。正在寻找 k8s 控制平面的自动化管理方案。
<!--
* SIG Report
* Testing update [Jeff, Joe, and Erick]
* Working to make the workflow about contributing to K8s easier to understanding
* [pull/19714][1] has flow chart of the bot flow to help users understand
* Need a consistent way to run tests w/ hacking config scripts (you have to fake a Jenkins process right now)
* Want to create necessary infrastructure to make test setup less flaky
* want to decouple test start (single or full) from Jenkins
* goal is to get to point where you have 1 script to run that can be pointed to any cluster
* demo included Google internal views - working to try get that external.
* want to be able to collect test run results
* Bob Wise calls for testing infrastructure to be a blocker on v1.3
* Long discussion about testing practices…
* consensus that we want to have tests work over multiple platforms.
* would be helpful to have a comprehensive state dump for test reports
* "phone-home" to collect stack traces - should be available
-->
* SIG 报告
* 测试更新 [Jeff、Joe 和 Erick]
* 努力让向 K8s 贡献代码的工作流程更容易理解
* [pull/19714][1] 中有 bot 流程图,帮助用户理解
* 需要一种一致的方法来运行测试,而不必修改配置脚本(目前你必须伪造一个 Jenkins 进程)
* 希望建设必要的基础设施,使测试环境搭建不那么脆弱
* 希望将测试启动(单个或全部)与 Jenkins 解耦
* 目标是最终只需运行一个脚本,并且该脚本可以指向任何集群
* 演示包含了 Google 内部视图 - 正在努力将其对外开放。
* 希望能够收集测试运行结果
* Bob Wise 呼吁将测试基础设施作为 v1.3 的阻塞项(blocker)
* 关于测试实践的长时间讨论…
* 达成共识:我们希望测试能在多个平台上运行。
* 为测试报告提供一份全面的状态转储会很有帮助
* 应该提供“phone-home”机制来收集堆栈跟踪
<!--
* 1.2 Release Watch
* CoC [Sarah]
* GSoC [Sarah]
-->
* 1.2发布观察
* CoC [Sarah]
* GSoC [Sarah]
<!--
To get involved in the Kubernetes community consider joining our [Slack channel][2], taking a look at the [Kubernetes project][3] on GitHub, or join the [Kubernetes-dev Google group][4]. If you're really excited, you can do all of the above and join us for the next community conversation -- February 11th, 2016. Please add yourself or a topic you want to know about to the [agenda][5] and get a calendar invitation by joining [this group][6].
-->
要参与 Kubernetes 社区,请考虑加入我们的[Slack 频道][2],查看 GitHub上的 [Kubernetes 项目][3],或加入[Kubernetes-dev Google 小组][4]。如果你真的很兴奋,你可以完成上述所有工作并加入我们的下一次社区对话-2016年2月11日。请将您自己或您想要了解的主题添加到[议程][5]并通过加入[此组][6]来获取日历邀请。
"https://youtu.be/IScpP8Cj0hw?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ"
[1]: https://github.com/kubernetes/kubernetes/pull/19714
[2]: http://slack.k8s.io/
[3]: https://github.com/kubernetes/
[4]: https://groups.google.com/forum/#!forum/kubernetes-dev
[5]: https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY/edit#
[6]: https://groups.google.com/forum/#!forum/kubernetes-community-video-chat


@ -1,182 +0,0 @@
<!-- ---
title: " SIG-Networking: Kubernetes Network Policy APIs Coming in 1.3 "
date: 2016-04-18
slug: kubernetes-network-policy-apis
url: /blog/2016/04/Kubernetes-Network-Policy-APIs
--- -->
---
title: "SIG-Networking: Kubernetes Network Policy APIs Coming in 1.3 "
date: 2016-04-18
slug: kubernetes-network-policy-apis
url: /blog/2016/04/Kubernetes-Network-Policy-APIs
---
<!-- _Editors note: This week were featuring [Kubernetes Special Interest Groups](https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)); Todays post is by the Network-SIG team describing network policy APIs coming in 1.3 - policies for security, isolation and multi-tenancy._ -->
编者按:这一周,我们的封面主题是 [Kubernetes 特别兴趣小组](https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs));今天的文章由网络兴趣小组撰写,来谈谈 1.3 版本中即将出现的网络策略 API - 针对安全,隔离和多租户的策略。
<!-- The [Kubernetes network SIG](https://kubernetes.slack.com/messages/sig-network/) has been meeting regularly since late last year to work on bringing network policy to Kubernetes and were starting to see the results of this effort. -->
自去年下半年起,[Kubernetes 网络特别兴趣小组](https://kubernetes.slack.com/messages/sig-network/)经常定期开会,讨论如何将网络策略带入到 Kubernetes 之中,现在,我们也将慢慢看到这些工作的成果。
<!-- One problem many users have is that the open access network policy of Kubernetes is not suitable for applications that need more precise control over the traffic that accesses a pod or service. Today, this could be a multi-tier application where traffic is only allowed from a tiers neighbor. But as new Cloud Native applications are built by composing microservices, the ability to control traffic as it flows among these services becomes even more critical. -->
很多用户经常会碰到的一个问题是,Kubernetes 的开放访问网络策略并不适合那些需要对访问 pod 或服务(service)的流量进行更精确控制的应用。今天,这可能是一个多层应用,只允许来自相邻层的流量访问。然而,随着通过组合微服务来构建云原生应用的潮流发展,控制流量在这些服务之间如何流动的能力将变得越发重要。
<!-- In most IaaS environments (both public and private) this kind of control is provided by allowing VMs to join a security group where traffic to members of the group is defined by a network policy or Access Control List (ACL) and enforced by a network packet filter. -->
在大多数的(公共的或私有的) IaaS 环境中,这种网络控制通常是将 VM 和“安全组”结合,其中安全组中成员的通信都是通过一个网络策略或者访问控制表( Access Control List, ACL )来定义,以及借助于网络包过滤器来实现。
<!-- The Network SIG started the effort by identifying [specific use case scenarios](https://docs.google.com/document/d/1blfqiH4L_fpn33ZrnQ11v7LcYP0lmpiJ_RaapAPBbNU/edit?pref=2&pli=1#) that require basic network isolation for enhanced security. Getting the API right for these simple and common use cases is important because they are also the basis for the more sophisticated network policies necessary for multi-tenancy within Kubernetes. -->
“网络特别兴趣小组”刚开始的工作是确定 [特定的使用场景](https://docs.google.com/document/d/1blfqiH4L_fpn33ZrnQ11v7LcYP0lmpiJ_RaapAPBbNU/edit?pref=2&pli=1#) ,这些用例需要基本的网络隔离来提升安全性。
让这些 API 恰如其分地满足简单、共通的用例尤其重要,因为它们也将为 Kubernetes 内实现多租户所需的更复杂的网络策略奠定基础。
<!-- From these scenarios several possible approaches were considered and a minimal [policy specification](https://docs.google.com/document/d/1qAm-_oSap-f1d6a-xRTj6xaH1sYQBfK36VyjB5XOZug/edit) was defined. The basic idea is that if isolation were enabled on a per namespace basis, then specific pods would be selected where specific traffic types would be allowed. -->
根据这些应用场景,我们考虑了几种不同的方法,然后定义了一个最简[策略规范](https://docs.google.com/document/d/1qAm-_oSap-f1d6a-xRTj6xaH1sYQBfK36VyjB5XOZug/edit)。
基本的想法是,如果以命名空间为单位开启了隔离,那么就可以选择特定的 pods,并允许特定类型的流量访问它们。
<!-- The simplest way to quickly support this experimental API is in the form of a ThirdPartyResource extension to the API Server, which is possible today in Kubernetes 1.2. -->
快速支持这个实验性 API 的办法是往 API 服务器上加入一个 `ThirdPartyResource` 扩展,这在 Kubernetes 1.2 就能办到。
<!-- If youre not familiar with how this works, the Kubernetes API can be extended by defining ThirdPartyResources that create a new API endpoint at a specified URL. -->
如果你还不是很熟悉这其中的细节, Kubernetes API 是可以通过定义 `ThirdPartyResources` 扩展在特定的 URL 上创建一个新的 API 端点。
#### third-party-res-def.yaml
```
kind: ThirdPartyResource
apiVersion: extensions/v1beta1
metadata:
  name: network-policy.net.alpha.kubernetes.io
description: "Network policy specification"
versions:
  - name: v1alpha1
```
```
$ kubectl create -f third-party-res-def.yaml
```
<!-- This will create an API endpoint (one for each namespace): -->
这条命令会创建一个 API 端点(每个命名空间各一个):
```
/net.alpha.kubernetes.io/v1alpha1/namespace/default/networkpolicys/
```
<!-- Third party network controllers can now listen on these endpoints and react as necessary when resources are created, modified or deleted. _Note: With the upcoming release of Kubernetes 1.3 - when the Network Policy API is released in beta form - there will be no need to create a ThirdPartyResource API endpoint as shown above._ -->
第三方网络控制器可以监听这些端点,根据资源的创建,修改或者删除作出必要的响应。
_注意在接下来的 Kubernetes 1.3 发布中, Network Policy API 会以 beta API 的形式出现,这也就不需要像上面那样,创建一个 `ThirdPartyResource` API 端点了。_
<!-- Network isolation is off by default so that all pods can communicate as they normally do. However, its important to know that once network isolation is enabled, all traffic to all pods, in all namespaces is blocked, which means that enabling isolation is going to change the behavior of your pods -->
网络隔离默认是关闭的,因而,所有的 pods 之间可以自由地通信。
然而,很重要的一点是,一旦开通了网络隔离,所有命名空间下的所有 pods 之间的通信都会被阻断,换句话说,开通隔离会改变 pods 的行为。
<!-- Network isolation is enabled by defining the _network-isolation_ annotation on namespaces as shown below: -->
网络隔离可以通过在命名空间上定义 _network-isolation_ 注解来开启或关闭,如下所示:
```
net.alpha.kubernetes.io/network-isolation: [on | off]
```
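下面是一个示意性的片段(其中命名空间名称 `tenant-a` 为假设),展示如何把该注解加到命名空间对象上以开启隔离:

```yaml
# 仅为示意:在命名空间上开启网络隔离(命名空间名称为假设)
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
  annotations:
    net.alpha.kubernetes.io/network-isolation: "on"
```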
<!-- Once network isolation is enabled, explicit network policies **must be applied** to enable pod communication. -->
一旦开启了网络隔离,就**必须应用**显式的网络策略,才能允许 pod 间的通信。
<!-- A policy specification can be applied to a namespace to define the details of the policy as shown below: -->
一个策略规范可以被用到一个命名空间中,来定义策略的细节(如下所示):
```
POST /apis/net.alpha.kubernetes.io/v1alpha1/namespaces/tenant-a/networkpolicys/
{
  "kind": "NetworkPolicy",
  "metadata": {
    "name": "pol1"
  },
  "spec": {
    "allowIncoming": {
      "from": [
        {
          "pods": {
            "segment": "frontend"
          }
        }
      ],
      "toPorts": [
        {
          "port": 80,
          "protocol": "TCP"
        }
      ]
    },
    "podSelector": {
      "segment": "backend"
    }
  }
}
```
<!-- In this example, the **tenant-a** namespace would get policy **pol1** applied as indicated. Specifically, pods with the **segment** label **backend** would allow TCP traffic on port 80 from pods with the **segment** label **frontend** to be received. -->
在这个例子中,**tenant-a** 命名空间将会应用 **pol1** 策略。
具体而言,带有 **segment** 标签为 **backend** 的 pods,将允许接收来自 **segment** 标签为 **frontend** 的 pods 发往 80 端口的 TCP 流量。
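作为对照,下面给出一个示意性的 YAML 片段,展示与上述 **pol1** 大致等价的策略在后来随 1.3 以 beta 形式发布的 NetworkPolicy API(此处假设为 `extensions/v1beta1`)中的写法,具体字段请以正式文档为准:

```yaml
# 仅为示意:与 pol1 大致等价的 beta 版 NetworkPolicy 写法(字段以正式文档为准)
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: pol1
  namespace: tenant-a
spec:
  podSelector:
    matchLabels:
      segment: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              segment: frontend
      ports:
        - protocol: TCP
          port: 80
```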
<!-- Today, [Romana](http://romana.io/), [OpenShift](https://www.openshift.com/), [OpenContrail](http://www.opencontrail.org/) and [Calico](http://projectcalico.org/) support network policies applied to namespaces and pods. Cisco and VMware are working on implementations as well. Both Romana and Calico demonstrated these capabilities with Kubernetes 1.2 recently at KubeCon. You can watch their presentations here: [Romana](https://www.youtube.com/watch?v=f-dLKtK6qCs) ([slides](http://www.slideshare.net/RomanaProject/kubecon-london-2016-ronana-cloud-native-sdn)), [Calico](https://www.youtube.com/watch?v=p1zfh4N4SX0) ([slides](http://www.slideshare.net/kubecon/kubecon-eu-2016-secure-cloudnative-networking-with-project-calico)).&nbsp; -->
今天,[Romana](http://romana.io/), [OpenShift](https://www.openshift.com/), [OpenContrail](http://www.opencontrail.org/) 以及 [Calico](http://projectcalico.org/) 都已经支持在命名空间和pods中使用网络策略。
而 Cisco 和 VMware 也在努力实现支持之中。
Romana 和 Calico 已经在最近的 KubeCon 中展示了如何在 Kubernetes 1.2 下使用这些功能。
你可以在这里看到他们的演讲:
[Romana](https://www.youtube.com/watch?v=f-dLKtK6qCs) ([幻灯片](http://www.slideshare.net/RomanaProject/kubecon-london-2016-ronana-cloud-native-sdn)),
[Calico](https://www.youtube.com/watch?v=p1zfh4N4SX0) ([幻灯片](http://www.slideshare.net/kubecon/kubecon-eu-2016-secure-cloudnative-networking-with-project-calico)).
<!-- **How does it work?** -->
**这是如何工作的?**
<!-- Each solution has their their own specific implementation details. Today, they rely on some kind of on-host enforcement mechanism, but future implementations could also be built that apply policy on a hypervisor, or even directly by the network itself.&nbsp; -->
每套解决方案都有自己具体的实现细节。今天,它们都依赖某种主机上(on-host)的执行机制,但未来的实现也可以把策略应用在 hypervisor 上,甚至直接由网络本身来实现。
<!-- External policy control software (specifics vary across implementations) will watch the new API endpoint for pods being created and/or new policies being applied. When an event occurs that requires policy configuration, the listener will recognize the change and a controller will respond by configuring the interface and applying the policy. &nbsp;The diagram below shows an API listener and policy controller responding to updates by applying a network policy locally via a host agent. The network interface on the pods is configured by a CNI plugin on the host (not shown). -->
外部策略控制软件(不同实现各有差异)会监视新的 API 端点,关注 pods 的创建以及新策略的应用。
当发生需要配置策略的事件时,监听器会识别到这一变化,控制器随之配置接口并应用该策略。
下面的图例展示了 API 监听器和策略控制器如何响应更新,通过主机代理在本地应用网络策略。
这些 pods 的网络接口是通过主机上的 CNI 插件来配置的(图中未展示)。
![controller.jpg](https://lh5.googleusercontent.com/zMEpLMYmask-B-rYWnbMyGb0M7YusPQFPS6EfpNOSLbkf-cM49V7rTDBpA6k9-Zdh2soMul39rz9rHFJfL-jnEn_mHbpg0E1WlM-wjU-qvQu9KDTQqQ9uBmdaeWynDDNhcT3UjX5)
<!-- If youve been holding back on developing applications with Kubernetes because of network isolation and/or security concerns, these new network policies go a long way to providing the control you need. No need to wait until Kubernetes 1.3 since network policy is available now as an experimental API enabled as a ThirdPartyResource. -->
如果你因为网络隔离或安全方面的顾虑而一直犹豫要不要基于 Kubernetes 开发应用程序,这些新的网络策略将在很大程度上提供你所需要的控制能力。不必等到 Kubernetes 1.3,现在就可以以 `ThirdPartyResource` 的形式使用这个实验性 API。
<!-- If youre interested in Kubernetes and networking, there are several ways to participate - join us at:
- Our [Networking slack channel](https://kubernetes.slack.com/messages/sig-network/)
- Our [Kubernetes Networking Special Interest Group](https://groups.google.com/forum/#!forum/kubernetes-sig-network) email list -->
如果你对 Kubernetes 和网络感兴趣,可以通过下面的方式参与、加入其中:
- 我们的[网络 slack channel](https://kubernetes.slack.com/messages/sig-network/)
- 我们的[Kubernetes 特别网络兴趣小组](https://groups.google.com/forum/#!forum/kubernetes-sig-network) 邮件列表
<!-- The Networking “Special Interest Group,” which meets bi-weekly at 3pm (15h00) Pacific Time at [SIG-Networking hangout](https://zoom.us/j/5806599998). -->
网络“特别兴趣小组”每两周的下午 3 点(太平洋时间)开会,会议地址是 [SIG-Networking hangout](https://zoom.us/j/5806599998)。
_--Chris Marino, Co-Founder, Pani Networks_


@ -1,129 +0,0 @@
<!-- ---
title: " How to deploy secure, auditable, and reproducible Kubernetes clusters on AWS "
date: 2016-04-15
slug: kubernetes-on-aws_15
url: /blog/2016/04/Kubernetes-On-Aws_15
--- -->
---
title: " 如何在AWS上部署安全可审计可复现的k8s集群 "
date: 2016-04-15
slug: kubernetes-on-aws_15
url: /blog/2016/04/Kubernetes-On-Aws_15
---
<!-- _Todays guest post is written by Colin Hom, infrastructure engineer at [CoreOS](https://coreos.com/), the company delivering Googles Infrastructure for Everyone Else (#GIFEE) and running the world's containers securely on CoreOS Linux, Tectonic and Quay._
_Join us at [CoreOS Fest Berlin](https://coreos.com/fest/), the Open Source Distributed Systems Conference, and learn more about CoreOS and Kubernetes._ -->
_今天的客座文章由 Colin Hom 撰写,他是 [CoreOS](https://coreos.com/) 的基础架构工程师。CoreOS 致力于推广“谷歌的基础架构服务于每个人”(Google's Infrastructure for Everyone Else,#GIFEE),让全世界的容器都能在 CoreOS Linux、Tectonic 和 Quay 上安全运行。_
_欢迎参加[柏林 CoreOS 盛宴](https://coreos.com/fest/),这是一个关于开源分布式系统的会议,在那里可以了解更多关于 CoreOS 和 Kubernetes 的信息。_
<!-- At CoreOS, we're all about deploying Kubernetes in production at scale. Today we are excited to share a tool that makes deploying Kubernetes on Amazon Web Services (AWS) a breeze. Kube-aws is a tool for deploying auditable and reproducible Kubernetes clusters to AWS, currently used by CoreOS to spin up production clusters. -->
在CoreOS, 我们一直都是在生产环境中大规模部署Kubernetes。今天我们非常兴奋地想分享一款工具它能让你的Kubernetes生产环境大规模部署更加的轻松。Kube-aws这个工具可以用来在AWS上部署可审计可复现的k8s集群而CoreOS本身就在生产环境中使用它。
<!-- Today you might be putting the Kubernetes components together in a more manual way. With this helpful tool, Kubernetes is delivered in a streamlined package to save time, minimize interdependencies and quickly create production-ready deployments. -->
也许今天你更多的可能是用手工的方式来拼接Kubernetes组件。但有了这个工具之后Kubernetes可以流水化地打包、交付节省时间减少了相互间的依赖更加快捷地实现生产环境的部署。
<!-- A simple templating system is leveraged to generate cluster configuration as a set of declarative configuration templates that can be version controlled, audited and re-deployed. Since the entirety of the provisioning is by [AWS CloudFormation](https://aws.amazon.com/cloudformation/) and cloud-init, theres no need for external configuration management tools on your end. Batteries included! -->
借助于一个简单的模板系统,来生成集群配置,这么做是因为一套声明式的配置模板可以版本控制,审计以及重复部署。而且,由于整个创建过程只用到了[AWS CloudFormation](https://aws.amazon.com/cloudformation/) 和 cloud-init你也就不需要额外用到其它的配置管理工具。开箱即用
<!-- To skip the talk and go straight to the project, check out [the latest release of kube-aws](https://github.com/coreos/coreos-kubernetes/releases), which supports Kubernetes 1.2.x. To get your cluster running, [check out the documentation](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html). -->
如果想跳过介绍直接了解这个项目,可以查看 [kube-aws 的最新发布版本](https://github.com/coreos/coreos-kubernetes/releases),它支持 Kubernetes 1.2.x。要部署集群,请参考[文档](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html)。
<!-- **Why kube-aws? Security, auditability and reproducibility** -->
**为什么是 kube-aws?安全、可审计、可复现**
<!-- Kube-aws is designed with three central goals in mind. -->
Kube-aws设计初衷有三个目标。
<!-- **Secure** : TLS assets are encrypted via the [AWS Key Management Service (KMS)](https://aws.amazon.com/kms/) before being embedded in the CloudFormation JSON. By managing [IAM policy](http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) for the KMS key independently, an operator can decouple operational access to the CloudFormation stack from access to the TLS secrets. -->
**安全** : TLS 资源在嵌入到CloudFormation JSON之前通过[AWS 秘钥管理服务](https://aws.amazon.com/kms/)加密。通过单独管理KMS密钥的[IAM 策略](http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html)可以将CloudFormation栈的访问与TLS秘钥的访问分离开。
<!-- **Auditable** : kube-aws is built around the concept of cluster assets. These configuration and credential assets represent the complete description of the cluster. Since KMS is used to encrypt TLS assets, you can feel free to check your unencrypted stack JSON into version control as well! -->
**可审计** : kube-aws是围绕集群资产的概念来创建。这些配置和账户资产是对集群的完全描述。由于KMS被用来加密TLS资产因而可以无所顾忌地将未加密的CloudFormation栈 JSON签入到版本控制服务中。
<!-- **Reproducible** : The _--export_ option packs your parameterized cluster definition into a single JSON file which defines a CloudFormation stack. This file can be version controlled and submitted directly to the CloudFormation API via existing deployment tooling, if desired. -->
**可重复** : _--export_ 选项将参数化的集群定义打包成一整个JSON文件对应一个CloudFormation栈。这个文件可以版本控制然后如果需要的话通过现有的部署工具直接提交给CloudFormation API。
<!-- **How to get started with kube-aws** -->
**如何开始用kube-aws**
<!-- On top of this foundation, kube-aws implements features that make Kubernetes deployments on AWS easier to manage and more flexible. Here are some examples. -->
在此基础之上kube-aws也实现了一些功能使得在AWS上部署Kubernetes集群更加容易灵活。下面是一些例子。
<!-- **Route53 Integration** : Kube-aws can manage your cluster DNS records as part of the provisioning process. -->
**Route53集成** : Kube-aws 可以管理你的集群DNS记录作为配置过程的一部分。
cluster.yaml
```
externalDNSName: my-cluster.kubernetes.coreos.com
createRecordSet: true
hostedZone: kubernetes.coreos.com
recordSetTTL: 300
```
<!-- **Existing VPC Support** : Deploy your cluster to an existing VPC. -->
**现有VPC支持** : 将集群部署到现有的VPC上。
cluster.yaml
```
vpcId: vpc-xxxxx
routeTableId: rtb-xxxxx
```
<!-- **Validation** : Kube-aws supports validation of cloud-init and CloudFormation definitions, along with any external resources that the cluster stack will integrate with. For example, heres a cloud-config with a misspelled parameter: -->
**验证** : kube-aws 支持验证 cloud-init 和 CloudFormation 定义,以及集群栈将要集成的任何外部资源。例如,下面是一个带有拼写错误参数的 cloud-config:
userdata/cloud-config-worker
```
#cloud-config
coreos:
  flannel:
    interrface: $private_ipv4
    etcd_endpoints: {{ .ETCDEndpoints }}
```
$ kube-aws validate
> Validating UserData...
Error: cloud-config validation errors:
UserDataWorker: line 4: warning: unrecognized key "interrface"
<!-- To get started, check out the [kube-aws documentation](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html). -->
要开始使用,请查看 [kube-aws 文档](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html)。
<!-- **Future Work** -->
**未来的工作**
<!-- As always, the goal with kube-aws is to make deployments that are production ready. While we use kube-aws in production on AWS today, this project is pre-1.0 and there are a number of areas in which kube-aws needs to evolve. -->
一如既往kube-aws的目标是让生产环境部署更加的简单。尽管我们现在在AWS下使用kube-aws进行生产环境部署但是这个项目还是pre-1.0所以还有很多的地方kube-aws需要考虑、扩展。
<!-- **Fault tolerance** : At CoreOS we believe Kubernetes on AWS is a potent platform for fault-tolerant and self-healing deployments. In the upcoming weeks, kube-aws will be rising to a new challenge: surviving the [Chaos Monkey](https://github.com/Netflix/SimianArmy/wiki/Chaos-Monkey) control plane and all! -->
**容错** : 在 CoreOS,我们相信 AWS 上的 Kubernetes 是一个适合容错、自愈部署的强大平台。在接下来的几周里,kube-aws 将迎接一个新的挑战:在[混乱猴子(Chaos Monkey)](https://github.com/Netflix/SimianArmy/wiki/Chaos-Monkey)的考验中存活下来,包括控制平面在内的所有组件!
<!-- **Zero-downtime updates** : Updating CoreOS nodes and Kubernetes components can be done without downtime and without interdependency with the correct instance replacement strategy. -->
**零停机更新** : 配合正确的实例替换策略(instance replacement strategy),更新 CoreOS 节点和 Kubernetes 组件可以做到零停机,并且彼此之间没有相互依赖。
<!-- A [github issue](https://github.com/coreos/coreos-kubernetes/issues/340) tracks the work towards this goal. We look forward to seeing you get involved with the project by filing issues or contributing directly. -->
有一个[github issue](https://github.com/coreos/coreos-kubernetes/issues/340)来追踪这些工作进展。我们期待你的参与提交issue或是直接贡献。
<!-- _Learn more about Kubernetes and meet the community at [CoreOS Fest Berlin](https://coreos.com/fest/) - May 9-10, 2016_ -->
_想要更多地了解 Kubernetes 并与社区见面,请参加 2016 年 5 月 9-10 日的[柏林 CoreOS 盛宴](https://coreos.com/fest/)。_
<!-- _ Colin Hom, infrastructure engineer, CoreOS_ -->
_ Colin Hom, 基础架构工程师, CoreOS_


@ -0,0 +1,72 @@
---
title: " Citrix + Kubernetes = 全垒打 "
date: 2016-07-14
slug: citrix-netscaler-and-kubernetes
url: /blog/2016/07/Citrix-Netscaler-And-Kubernetes
---
<!--
---
title: " Citrix + Kubernetes = A Home Run "
date: 2016-07-14
slug: citrix-netscaler-and-kubernetes
url: /blog/2016/07/Citrix-Netscaler-And-Kubernetes
---
-->
<!--
_Editors note: todays guest post is by Mikko Disini, a Director of Product Management at Citrix Systems, sharing their collaboration experience on a Kubernetes integration.&nbsp;_
-->
_编者按:今天的客座文章来自 Citrix Systems 的产品管理总监 Mikko Disini,他分享了他们在 Kubernetes 集成上的合作经验。_
<!--
Technical collaboration is like sports. If you work together as a team, you can go down the homestretch and pull through for a win. Thats our experience with the Google Cloud Platform team.
-->
技术合作就像体育运动。如果你能像一个团队一样合作,你就能在最后关头取得胜利。这就是我们对谷歌云平台团队的经验。
<!--
Recently, we approached Google Cloud Platform (GCP) to collaborate on behalf of Citrix customers and the broader enterprise market looking to migrate workloads.&nbsp;This migration required including the [NetScaler Docker load balancer](https://www.citrix.com/blogs/2016/06/20/the-best-docker-load-balancer-at-dockercon-in-seattle-this-week/), CPX, into Kubernetes nodes and resolving any issues with getting traffic into the CPX proxies. &nbsp;
-->
最近,我们与 Google 云平台(GCP)取得联系,代表 Citrix 客户以及更广泛的、希望迁移工作负载的企业市场寻求协作。此次迁移需要将 [NetScaler Docker 负载均衡器](https://www.citrix.com/blogs/2016/06/20/the-best-docker-load-balancer-at-dockercon-in-seattle-this-week/) CPX 纳入 Kubernetes 节点,并解决将流量引入 CPX 代理的相关问题。
<!--
**Why NetScaler and Kubernetes?**
-->
**为什么是 NetScaler 和 Kubernetes?**
<!--
1. Citrix customers want the same Layer 4 to Layer 7 capabilities from NetScaler that they have on-prem as they move to the cloud as they begin deploying their container and microservices architecture with Kubernetes&nbsp;
2. Kubernetes provides a proven infrastructure for running containers and VMs with automated workload delivery
3. NetScaler CPX provides Layer 4 to Layer 7 services and highly efficient telemetry data to a logging and analytics platform, [NetScaler Management and Analytics System](https://www.citrix.com/blogs/2016/05/24/introducing-the-next-generation-netscaler-management-and-analytics-system/)
-->
1. Citrix 的客户希望在迁移到云端、开始使用 Kubernetes 部署容器和微服务架构时,能够继续获得他们在本地环境中由 NetScaler 提供的第 4 层到第 7 层能力;
2. Kubernetes 提供了一套经过验证的基础设施,可用来运行容器和虚拟机,并自动交付工作负载;
3. NetScaler CPX 提供第 4 层到第 7 层的服务,并向日志和分析平台 [NetScaler 管理和分析系统](https://www.citrix.com/blogs/2016/05/24/introducing-the-next-generation-netscaler-management-and-analytics-system/) 提供高效的遥测数据。
<!--
I wish all our experiences working together with a technical partner were as good as working with GCP. We had a list of issues to enable our use cases and were able to collaborate swiftly on a solution. To resolve these, GCP team offered in depth technical assistance, working with Citrix such that NetScaler CPX can spin up and take over as a client-side proxy running on each host.&nbsp;
-->
我希望我们所有与技术合作伙伴一起工作的经验都能像与 GCP 一起工作一样好。我们有一个列表包含支持我们的用例所需要解决的问题。我们能够快速协作形成解决方案。为了解决这些问题GCP 团队提供了深入的技术支持,与 Citrix 合作,从而使得 NetScaler CPX 能够在每台主机上作为客户端代理启动运行。
<!--
Next, NetScaler CPX needed to be inserted in the data path of GCP ingress load balancer so that NetScaler CPX can spread traffic to front end web servers. The NetScaler team made modifications so that NetScaler CPX listens to API server events and configures itself to create a VIP, IP table rules and server rules to take ingress traffic and load balance across front end applications. Google Cloud Platform team provided feedback and assistance to verify modifications made to overcome the technical hurdles. Done!
-->
接下来,需要在 GCP 入口负载均衡器的数据路径中插入 NetScaler CPX使 NetScaler CPX 能够将流量分散到前端 web 服务器。NetScaler 团队进行了修改,以便 NetScaler CPX 监听 API 服务器事件,并配置自己来创建 VIP、IP 表规则和服务器规则,以便跨前端应用程序接收流量和负载均衡。谷歌云平台团队提供反馈和帮助,验证为克服技术障碍所做的修改。完成了!
<!--
NetScaler CPX use case is supported in [Kubernetes 1.3](https://kubernetes.io/blog/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads). Citrix customers and the broader enterprise market will have the opportunity to leverage NetScaler with Kubernetes, thereby lowering the friction to move workloads to the cloud.&nbsp;
-->
NetScaler CPX 用例在 [Kubernetes 1.3](https://kubernetes.io/blog/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads) 中得到支持。Citrix 的客户和更广泛的企业市场将有机会将 NetScaler 与 Kubernetes 结合使用,从而降低将工作负载迁移到云端的阻力。&nbsp;
<!--
You can learn more about&nbsp;NetScaler CPX [here](https://www.citrix.com/networking/microservices.html).
-->
您可以在[此处](https://www.citrix.com/networking/microservices.html)了解有关 NetScaler CPX 的更多信息。
<!--
_&nbsp;-- Mikko Disini, Director of Product Management - NetScaler, Citrix Systems_
-->
_&nbsp;-- Mikko Disini,Citrix Systems NetScaler 产品管理总监_


@ -0,0 +1,139 @@
---
title: " Dashboard - Kubernetes 的全功能 Web 界面 "
date: 2016-07-15
slug: dashboard-web-interface-for-kubernetes
url: /blog/2016/07/Dashboard-Web-Interface-For-Kubernetes
---
<!--
---
title: " Dashboard - Full Featured Web Interface for Kubernetes "
date: 2016-07-15
slug: dashboard-web-interface-for-kubernetes
url: /blog/2016/07/Dashboard-Web-Interface-For-Kubernetes
---
-->
<!--
_Editors note: this post is part of a [series of in-depth articles](https://kubernetes.io/blog/2016/07/five-days-of-kubernetes-1.3) on what's new in Kubernetes 1.3_
[Kubernetes Dashboard](http://github.com/kubernetes/dashboard) is a project that aims to bring a general purpose monitoring and operational web interface to the Kubernetes world.&nbsp;Three months ago we [released](https://kubernetes.io/blog/2016/04/building-awesome-user-interfaces-for-kubernetes) the first production ready version, and since then the dashboard has made massive improvements. In a single UI, youre able to perform majority of possible interactions with your Kubernetes clusters without ever leaving your browser. This blog post breaks down new features introduced in the latest release and outlines the roadmap for the future.&nbsp;
-->
_编者按这篇文章是[一系列深入的文章](https://kubernetes.io/blog/2016/07/five-days-of-kubernetes-1.3) 中关于Kubernetes 1.3的新内容的一部分_
[Kubernetes Dashboard](http://github.com/kubernetes/dashboard)是一个旨在为 Kubernetes 世界带来通用监控和操作 Web 界面的项目。三个月前,我们[发布](https://kubernetes.io/blog/2016/04/building-awesome-user-interfaces-for-kubernetes)第一个面向生产的版本,从那时起 dashboard 已经做了大量的改进。在一个 UI 中,您可以在不离开浏览器的情况下,与 Kubernetes 集群执行大多数可能的交互。这篇博客文章分解了最新版本中引入的新功能,并概述了未来的路线图。
<!--
**Full-Featured Dashboard**
Thanks to a large number of contributions from the community and project members, we were able to deliver many new features for [Kubernetes 1.3 release](https://kubernetes.io/blog/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads). We have been carefully listening to all the great feedback we have received from our users (see the [summary infographics](http://static.lwy.io/img/kubernetes_dashboard_infographic.png)) and addressed the highest priority requests and pain points.
-->
**全功能的 Dashboard**
由于社区和项目成员的大量贡献,我们能够为[Kubernetes 1.3发行版](https://kubernetes.io/blog/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads)提供许多新功能。我们一直在认真听取用户的反馈(参见[摘要信息图表](http://static.lwy.io/img/kubernetes_dashboard_infographic.png)),并解决了最高优先级的请求和难点。
<!--
The Dashboard UI now handles all workload resources. This means that no matter what workload type you run, it is visible in the web interface and you can do operational changes on it. For example, you can modify your stateful MySQL installation with [Pet Sets](/docs/user-guide/petset/), do a rolling update of your web server with Deployments or install cluster monitoring with DaemonSets.&nbsp;
[![](https://lh3.googleusercontent.com/p9bMGxPx4jE6_Z2KB-MktmyuAxyFst-bEk29M_Bn0Bj5ul7uzinH6u5WjHsMmqhGvBwlABZt06dwQ5qkBZiLq_EM1oddCmpwChvXDNXZypaS5l8uzkKuZj3PBUmzTQT4dgDxSXgz) ](https://lh3.googleusercontent.com/p9bMGxPx4jE6_Z2KB-MktmyuAxyFst-bEk29M_Bn0Bj5ul7uzinH6u5WjHsMmqhGvBwlABZt06dwQ5qkBZiLq_EM1oddCmpwChvXDNXZypaS5l8uzkKuZj3PBUmzTQT4dgDxSXgz)
-->
Dashboard UI 现在可以处理所有工作负载资源。这意味着无论您运行什么类型的工作负载,它都在 Web 界面中可见,并且您可以对其进行运维操作。例如,可以使用 [Pet Sets](/docs/user-guide/petset/) 修改有状态的 MySQL 安装,使用 Deployment 对 Web 服务器进行滚动更新,或使用 DaemonSet 安装集群监控。
[![](https://lh3.googleusercontent.com/p9bMGxPx4jE6_Z2KB-MktmyuAxyFst-bEk29M_Bn0Bj5ul7uzinH6u5WjHsMmqhGvBwlABZt06dwQ5qkBZiLq_EM1oddCmpwChvXDNXZypaS5l8uzkKuZj3PBUmzTQT4dgDxSXgz) ](https://lh3.googleusercontent.com/p9bMGxPx4jE6_Z2KB-MktmyuAxyFst-bEk29M_Bn0Bj5ul7uzinH6u5WjHsMmqhGvBwlABZt06dwQ5qkBZiLq_EM1oddCmpwChvXDNXZypaS5l8uzkKuZj3PBUmzTQT4dgDxSXgz)
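下面是一个示意性的 Deployment 清单(名称、镜像和副本数均为假设),这类对象既可以在 Dashboard 中查看,也可以直接在界面上编辑以触发滚动更新:

```yaml
# 仅为示意:一个可在 Dashboard 中查看并滚动更新的 Deployment
# API 版本以集群实际支持为准,此处假设为 apps/v1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server            # 假设的名称
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
        - name: web
          image: nginx:1.21   # 假设的镜像
          ports:
            - containerPort: 80
```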
<!--
In addition to viewing resources, you can create, edit, update, and delete them. This feature enables many use cases. For example, you can kill a failed Pod, do a rolling update on a Deployment, or just organize your resources. You can also export and import YAML configuration files of your cloud apps and store them in a version control system.
![](https://lh6.googleusercontent.com/zz-qjNcGgvWXrK1LIipUdIdPyeWJ1EyPVJxRnSvI6pMcLBkxDxpQt-ObsIiZsS_X0RjVBWtXYO5TCvhsymb__CGXFzKuPUnUrB4HKnAMsxtYdWLwMmHEb8c9P9Chzlo5ePHRKf5O)
-->
除了查看资源外,还可以创建、编辑、更新和删除资源。这个特性支持许多用例。例如,您可以删除一个失败的 Pod,对 Deployment 进行滚动更新,或者只是整理资源。您还可以导出和导入云应用程序的 YAML 配置文件,并将它们存储在版本控制系统中。
![](https://lh6.googleusercontent.com/zz-qjNcGgvWXrK1LIipUdIdPyeWJ1EyPVJxRnSvI6pMcLBkxDxpQt-ObsIiZsS_X0RjVBWtXYO5TCvhsymb__CGXFzKuPUnUrB4HKnAMsxtYdWLwMmHEb8c9P9Chzlo5ePHRKf5O)
<!--
The release includes a beta view of cluster nodes for administration and operational use cases. The UI lists all nodes in the cluster to allow for overview analysis and quick screening for problematic nodes. The details view shows all information about the node and links to pods running on it.
![](https://lh6.googleusercontent.com/3CSTUy-8Tz-yAL9tCqxNUqMcWJYKK0dwk7kidE9zy-L-sXFiD4A4Y2LKEqbJKgI6Fl6xbzYxsziI8dULVXPJbu6eU0ci7hNtqi3tTuhdbVD6CG3EXw151fvt2MQuqumHRbab6g-_)
-->
这个版本包括一个用于管理和运维场景的集群节点 beta 视图。UI 会列出集群中的所有节点,以便进行总体分析和快速筛选有问题的节点。详情视图则显示该节点的所有信息,以及指向运行在其上的 pod 的链接。
![](https://lh6.googleusercontent.com/3CSTUy-8Tz-yAL9tCqxNUqMcWJYKK0dwk7kidE9zy-L-sXFiD4A4Y2LKEqbJKgI6Fl6xbzYxsziI8dULVXPJbu6eU0ci7hNtqi3tTuhdbVD6CG3EXw151fvt2MQuqumHRbab6g-_)
<!--
There are also many smaller scope new features that the we shipped with the release, namely: support for namespaced resources, internationalization, performance improvements, and many bug fixes (find out more in the [release notes](https://github.com/kubernetes/dashboard/releases/tag/v1.1.0)). All these improvements result in a better and simpler user experience of the product.
-->
我们随发行版提供的还有许多小范围的新功能,即:支持命名空间资源、国际化、性能改进和许多错误修复(请参阅[发行说明](https://github.com/kubernetes/dashboard/releases/tag/v1.1.0)中的更多内容)。所有这些改进都会带来更好、更简单的产品用户体验。
<!--
**Future Work**
The team has ambitious plans for the future spanning across multiple use cases. We are also open to all feature requests, which you can post on our [issue tracker](https://github.com/kubernetes/dashboard/issues).
-->
**未来工作**
该团队对覆盖多个使用场景的未来有着雄心勃勃的计划。我们也对所有功能请求持开放态度,您可以在我们的[问题跟踪器](https://github.com/kubernetes/dashboard/issues)上提交这些请求。
<!--
Here is a list of our focus areas for the following months:
- [Handle more Kubernetes resources](https://github.com/kubernetes/dashboard/issues/961) - To show all resources that a cluster user may potentially interact with. Once done, Dashboard can act as a complete replacement for CLI.&nbsp;
- [Monitoring and troubleshooting](https://github.com/kubernetes/dashboard/issues/962) - To add resource usage statistics/graphs to the objects shown in Dashboard. This focus area will allow for actionable debugging and troubleshooting of cloud applications.
- [Security, auth and logging in](https://github.com/kubernetes/dashboard/issues/964) - Make Dashboard accessible from networks external to a Cluster and work with custom authentication systems.
-->
以下是我们接下来几个月的重点领域:
- [处理更多 Kubernetes 资源](https://github.com/kubernetes/dashboard/issues/961) - 显示集群用户可能与之交互的所有资源。一旦完成,Dashboard 就可以完全替代 CLI。
- [监控和故障排查](https://github.com/kubernetes/dashboard/issues/962) - 将资源使用统计信息/图表添加到 Dashboard 中显示的对象上。这个重点领域将支持对云应用程序进行可操作的调试和故障排查。
- [安全、认证与登录](https://github.com/kubernetes/dashboard/issues/964) - 使 Dashboard 可以从集群外部的网络访问,并支持自定义身份验证系统。
<!--
**Connect With Us**
We would love to talk with you and hear your feedback!
- Email us at the [SIG-UI mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-ui)
- Chat with us on the Kubernetes Slack&nbsp;[#SIG-UI channel](https://kubernetes.slack.com/messages/sig-ui/)
- Join our meetings: 4PM CEST. See the [SIG-UI calendar](https://calendar.google.com/calendar/embed?src=google.com_52lm43hc2kur57dgkibltqc6kc%40group.calendar.google.com&ctz=Europe/Warsaw) for details.
_-- Piotr Bryk, Software Engineer, Google_
-->
**联系我们**
我们很乐意与您交流并听取您的反馈!
- 通过 [SIG-UI 邮件列表](https://groups.google.com/forum/#!forum/kubernetes-sig-ui) 向我们发送电子邮件
- 在 Kubernetes Slack 的 [#SIG-UI 频道](https://kubernetes.slack.com/messages/sig-ui/) 上与我们聊天
- 参加我们的会议:欧洲中部夏令时间(CEST)下午 4 点。详情请参阅 [SIG-UI 日历](https://calendar.google.com/calendar/embed?src=google.com_52lm43hc2kur57dgkibltqc6kc%40group.calendar.google.com&ctz=Europe/Warsaw)。
_-- Piotr Bryk,谷歌软件工程师_


@ -0,0 +1,89 @@
---
title: " Kubernetes 生日快乐。哦,这是你要去的地方! "
date: 2016-07-21
slug: oh-the-places-you-will-go
url: /blog/2016/07/Oh-The-Places-You-Will-Go
---
<!--
---
title: " Happy Birthday Kubernetes. Oh, the places youll go! "
date: 2016-07-21
slug: oh-the-places-you-will-go
url: /blog/2016/07/Oh-The-Places-You-Will-Go
---
-->
<!--
_Editors note, Todays guest post is from an independent Kubernetes contributor, Justin Santa Barbara, sharing his reflection on growth of the project from inception to its future._
**Dear K8s,**
_Its hard to believe youre only one - youve grown up so fast. On the occasion of your first birthday, I thought I would write a little note about why I was so excited when you were born, why I feel fortunate to be part of the group that is raising you, and why Im eager to watch you continue to grow up!_
-->
_编者按今天的嘉宾帖子来自一位独立的 kubernetes 撰稿人 Justin Santa Barbara分享了他对项目从一开始到未来发展的思考。_
**亲爱的 K8s,**
_很难相信你是唯一的一个 - 成长这么快的。在你一岁生日的时候我想我可以写一个小纸条告诉你为什么我在你出生的时候那么兴奋为什么我觉得很幸运能成为抚养你长大的一员为什么我渴望看到你继续成长_
<!--
_--Justin_
You started with an excellent foundation - good declarative functionality, built around a solid API with a well defined schema and the machinery so that we could evolve going forwards. And sure enough, over your first year you grew so fast: autoscaling, HTTP load-balancing support (Ingress), support for persistent workloads including clustered databases (PetSets). Youve made friends with more clouds (welcome Azure & OpenStack to the family), and even started to span zones and clusters (Federation). And these are just some of the most visible changes - theres so much happening inside that brain of yours!
I think its wonderful youve remained so open in all that you do - you seem to write down everything on GitHub - for better or worse. I think weve all learned a lot about that on the way, like the perils of having engineers make scaling statements that are then weighed against claims made without quite the same framework of precision and rigor. But Im proud that you chose not to lower your standards, but rose to the challenge and just ran faster instead - it might not be the most realistic approach, but it is the only way to move mountains!
-->
_--Justin_
你的起点是一个优秀的基础:良好的声明式功能,围绕一个具有良好定义的模式和配套机制的坚实 API 构建,使我们能够不断向前演进。果然,在你的第一年里,你成长得如此之快:自动扩缩容、HTTP 负载均衡支持(Ingress)、对包括集群化数据库(PetSets)在内的持久化工作负载的支持。你已经和更多的云交上了朋友(欢迎 Azure 和 OpenStack 加入这个家庭),甚至开始跨越可用区和集群(Federation)。而这些只是一些最显眼的变化,你的大脑里还发生着太多事情!
我觉得你在所做的一切中始终保持开放,真是太好了:你好像把所有的东西都写在 GitHub 上,不管是好是坏。我想我们在这一路上都学到了很多,比如让工程师给出扩展性承诺之后,再拿它们与那些缺乏同样精确和严谨框架的说法相比较,是件多么危险的事。但我很自豪,你没有选择降低自己的标准,而是迎接挑战、跑得更快,这也许不是最现实的做法,但这是移山的唯一方式!
<!--
And yet, somehow, youve managed to avoid a lot of the common dead-ends that other open source software has fallen into, particularly as those projects got bigger and the developers end up working on it more than they use it directly. How did you do that? Theres a probably-apocryphal story of an employee at IBM that makes a huge mistake, and is summoned to meet with the big boss, expecting to be fired, only to be told “We just spent several million dollars training you. Why would we want to fire you?”. Despite all the investment google is pouring into you (along with Redhat and others), I sometimes wonder if the mistakes we are avoiding could be worth even more. There is a very open development process, yet theres also an “oracle” that will sometimes course-correct by telling us what happens two years down the road if we make a particular design decision. This is a parent you should probably listen to!
-->
然而,不知何故,你已经设法避开了许多其他开源软件陷入的常见死胡同,尤其是当那些项目越做越大、开发人员投入在项目上的精力超过了他们直接使用它的时候。你是怎么做到的?有一个很可能是杜撰的故事,说 IBM 的一名员工犯了一个巨大的错误,被叫去见大老板,以为自己要被解雇,结果却被告知:“我们刚刚花了几百万美元培训你,为什么要解雇你?”。尽管谷歌(以及 Red Hat 和其他公司)对你投入了大量资源,我有时会想,我们因此避免的那些错误也许价值更高。这里有一个非常开放的开发过程,但也有一位“先知”,他有时会通过告诉我们“如果做出某个设计决策,两年后会发生什么”来纠正方向。这是一位你应该好好听取意见的家长!
<!--
And so although youre only a year old, you really have an [old soul](http://queue.acm.org/detail.cfm?id=2898444). Im just one of the [many people raising you](https://kubernetes.io/blog/2016/07/happy-k8sbday-1), but its a wonderful learning experience for me to be able to work with the people that have built these incredible systems and have all this domain knowledge. Yet because we started from scratch (rather than taking the existing Borg code) were at the same level and can still have genuine discussions about how to raise you. Well, at least as close to the same level as we could ever be, but its to their credit that they are all far too nice ever to mention it!
If I would pick just two of the wise decisions those brilliant people made:
-->
所以,尽管你只有一岁,你确实有一个[老灵魂](http://queue.acm.org/detail.cfm?id=2898444)。我只是[抚养你的许多人](https://kubernetes.io/blog/2016/07/happy-k8sbday-1)中的一员,但能够与构建了这些令人难以置信的系统、拥有所有这些领域知识的人一起工作,对我来说是一次极好的学习经历。而且因为我们是从零开始(而不是沿用现有的 Borg 代码),我们处在同一起跑线上,仍然可以就如何抚养你进行真正的讨论。好吧,至少是尽可能地接近同一水平,不过值得称赞的是,他们人都太好了,从来没提过这件事!
如果只让我挑出那些聪明人做出的两个明智决定:
<!--
- Labels & selectors give us declarative “pointers”, so we can say “why” we want things, rather than listing the things directly. Its the secret to how you can scale to [great heights](https://kubernetes.io/blog/2016/07/thousand-instances-of-cassandra-using-kubernetes-pet-set); not by naming each step, but saying “a thousand more steps just like that first one”.
- Controllers are state-synchronizers: we specify the goals, and your controllers will indefatigably work to bring the system to that state. They work through that strongly-typed API foundation, and are used throughout the code, so Kubernetes is more of a set of a hundred small programs than one big one. Its not enough to scale to thousands of nodes technically; the project also has to scale to thousands of developers and features; and controllers help us get there.
-->
- 标签和选择器为我们提供了声明式的“指针”,因此我们可以说明“为什么”想要这些东西,而不是直接罗列它们。这是你能够扩展到[伟大高度](https://kubernetes.io/blog/2016/07/thousand-instances-of-cassandra-using-kubernetes-pet-set)的秘诀:不是给每一步命名,而是说“再来一千步,就像第一步那样”。
- 控制器是状态同步器:我们指定目标,你的控制器就会不知疲倦地工作,把系统带到那个状态。它们建立在强类型的 API 基础之上,并贯穿于整个代码库,因此 Kubernetes 更像是一百个小程序组成的集合,而不是一个庞大的单体。仅仅在技术上扩展到数千个节点是不够的;这个项目还必须扩展到数千名开发人员和特性;控制器帮助我们达到了这个目标。
<!--
And so on we will go! Well be replacing those controllers and building on more, and the API-foundation lets us build anything we can express in that way - with most things just a label or annotation away! But your thoughts will not be defined by language: with third party resources you can express anything you choose. Now we can build Kubernetes without building in Kubernetes, creating things that feel as much a part of Kubernetes as anything else. Many of the recent additions, like ingress, DNS integration, autoscaling and network policies were done or could be done in this way. Eventually it will be hard to imagine you before these things, but tomorrows standard functionality can start today, with no obstacles or gatekeeper, maybe even for an audience of one.
So Im looking forward to seeing more and more growth happen further and further from the core of Kubernetes. We had to work our way through those phases; starting with things that needed to happen in the kernel of Kubernetes - like replacing replication controllers with deployments. Now were starting to build things that dont require core changes. But were still still talking about infrastructure separately from applications. Its what comes next that gets really interesting: when we start building applications that rely on the Kubernetes APIs. Weve always had the Cassandra example that uses the Kubernetes API to self-assemble, but we havent really even started to explore this more widely yet. In the same way that the S3 APIs changed how we build things that remember, I think the k8s APIs are going to change how we build things that think.
-->
我们会继续这样走下去!我们将替换这些控制器,并在其上构建更多,而 API 基础让我们能够构建任何可以用这种方式表达的东西,大多数东西只需要一个标签或注解就够了!但你的思想不会被语言所限定:借助第三方资源(third party resources),你可以表达任何你想要的东西。现在我们可以在不修改 Kubernetes 本身的情况下构建 Kubernetes 的扩展,创造出与其他任何部分一样让人感觉是 Kubernetes 一部分的东西。最近添加的许多功能,如 Ingress、DNS 集成、自动扩缩容和网络策略,都是或者可以用这种方式完成的。最终,人们将很难想象这些东西出现之前你的样子,但明天的标准功能今天就可以开始构建,没有任何障碍或守门人,甚至可能只是为一个用户而构建。
所以我期待看到越来越多的成长发生在离 Kubernetes 核心越来越远的地方。我们必须一步步走过这些阶段:从那些必须发生在 Kubernetes 内核中的事情开始,比如用 Deployment 替换 ReplicationController。现在我们开始构建不需要改动核心的东西。但我们仍然是把基础设施和应用程序分开讨论的。接下来的事情才真正有趣:当我们开始构建依赖 Kubernetes API 的应用程序时。我们一直有那个使用 Kubernetes API 进行自组装的 Cassandra 示例,但我们甚至还没有真正开始更广泛地探索这个方向。正如 S3 API 改变了我们构建“记忆”的方式一样,我认为 k8s API 将改变我们构建“思考”的方式。
<!--
So Im looking forward to your second birthday: I can try to predict what youll look like then, but I know youll surpass even the most audacious things I can imagine. Oh, the places youll go!
_-- Justin Santa Barbara, Independent Kubernetes Contributor_
-->
所以我很期待你的二岁生日:我可以试着预测你那时的样子,但我知道你会超越我所能想象的最大胆的东西。哦,这是你要去的地方!
_-- Justin Santa Barbara, 独立的 Kubernetes 贡献者_


@ -0,0 +1,72 @@
---
title: " Kubernetes 1.8 的五天 "
date: 2017-10-24
slug: five-days-of-kubernetes-18
url: /blog/2017/10/Five-Days-Of-Kubernetes-18
---
<!--
---
title: " Five Days of Kubernetes 1.8 "
date: 2017-10-24
slug: five-days-of-kubernetes-18
url: /blog/2017/10/Five-Days-Of-Kubernetes-18
---
-->
<!--
Kubernetes 1.8 is live, made possible by hundreds of contributors pushing thousands of commits in this latest releases.
-->
Kubernetes 1.8 已经推出,数百名贡献者在这个最新版本中推出了成千上万的提交。
<!--
The community has tallied more than 66,000 commits in the main repo and continues rapid growth outside of the main repo, which signals growing maturity and stability for the project. The community has logged more than 120,000 commits across all repos and 17,839 commits across all repos for v1.7.0 to v1.8.0 alone.
-->
社区在主仓库中已累计超过 66,000 次提交,并且在主仓库之外持续快速增长,这标志着该项目日益成熟和稳定。社区在所有仓库中累计记录了超过 120,000 次提交,仅从 v1.7.0 到 v1.8.0 就有 17,839 次提交。
<!--
With the help of our growing community of 1,400 plus contributors, we issued more than 3,000 PRs and pushed more than 5,000 commits to deliver Kubernetes 1.8 with significant security and workload support updates. This all points to increased stability, a result of our project-wide focus on maturing [process](https://github.com/kubernetes/sig-release), formalizing [architecture](https://github.com/kubernetes/community/tree/master/sig-architecture), and strengthening Kubernetes [governance model](https://github.com/kubernetes/community/tree/master/community/elections/2017).
-->
在拥有 1400 多名贡献者,并且不断发展壮大的社区的帮助下,我们合并了 3000 多个 PR并发布了 5000 多个提交,最后的 Kubernetes 1.8 在安全和工作负载方面添加了很多的更新。
这一切都表明稳定性的提高,这是我们整个项目关注成熟[流程](https://github.com/kubernetes/sig-release)、形式化[架构](https://github.com/kubernetes/community/tree/master/sig-architecture)和加强 Kubernetes 的[治理模型](https://github.com/kubernetes/community/tree/master/community/elections/2017)的结果。
<!--
While many improvements have been contributed, we highlight key features in this series of in-depth&nbsp;posts listed below. [Follow along](https://twitter.com/kubernetesio) and see whats new and improved with storage, security and more.
-->
虽然有很多改进,但我们在下面列出的这一系列深度文章中突出了一些关键特性。[跟随](https://twitter.com/kubernetesio)并了解存储,安全等方面的新功能和改进功能。
<!--
**Day 1:** [5 Days of Kubernetes 1.8](https://kubernetes.io/blog/2017/10/five-days-of-kubernetes-18)
**Day 2:** [kubeadm v1.8 Introduces Easy Upgrades for Kubernetes Clusters](https://kubernetes.io/blog/2017/10/kubeadm-v18-released)
**Day 3:** [Kubernetes v1.8 Retrospective: It Takes a Village to Raise a Kubernetes](https://kubernetes.io/blog/2017/10/it-takes-village-to-raise-kubernetes)
**Day 4:** [Using RBAC, Generally Available in Kubernetes v1.8](https://kubernetes.io/blog/2017/10/using-rbac-generally-available-18)
**Day 5:** [Enforcing Network Policies in Kubernetes](https://kubernetes.io/blog/2017/10/enforcing-network-policies-in-kubernetes)
-->
**第一天:** [Kubernetes 1.8 的五天](https://kubernetes.io/blog/2017/10/five-days-of-kubernetes-18)
**第二天:** [kubeadm v1.8 为 Kubernetes 集群引入了简单的升级方式](https://kubernetes.io/blog/2017/10/kubeadm-v18-released)
**第三天:** [Kubernetes v1.8 回顾:养育一个 Kubernetes 需要全村之力](https://kubernetes.io/blog/2017/10/it-takes-village-to-raise-kubernetes)
**第四天:** [使用 RBAC(在 Kubernetes v1.8 中正式可用)](https://kubernetes.io/blog/2017/10/using-rbac-generally-available-18)(参见下文的示例清单)
**第五天:** [在 Kubernetes 中实施网络策略](https://kubernetes.io/blog/2017/10/enforcing-network-policies-in-kubernetes)
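下面给出一个示意性的 Role 与 RoleBinding 清单(命名空间、用户名等均为假设),用来说明第四天所述、在 v1.8 中正式可用的 RBAC 授权对象的大致形式:

```yaml
# 仅为示意:一个只读 Role,以及把它授予某个用户的 RoleBinding(名称均为假设)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: demo
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: demo
  name: read-pods
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```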
<!--
**Connect**
-->
**保持联系**
<!--
- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
- Join the community portal for advocates on [K8sPort](http://k8sport.org/)
- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates&nbsp;
- Connect with the community on [Slack](http://slack.k8s.io/)
- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
-->
- 在 [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes) 上发布问题(或回答问题)
- 加入面向布道师的社区门户 [K8sPort](http://k8sport.org/)
- 在 Twitter 上关注 [@Kubernetesio](https://twitter.com/kubernetesio) 以获取最新更新
- 在 [Slack](http://slack.k8s.io/) 上与社区交流
- 参与 [GitHub](https://github.com/kubernetes/kubernetes) 上的 Kubernetes 项目


@ -0,0 +1,32 @@
---
title: " Kubernetes 中自动缩放 "
date: 2017-11-17
slug: autoscaling-in-kubernetes
url: /blog/2017/11/Autoscaling-In-Kubernetes
---
<!--
---
title: " Autoscaling in Kubernetes "
date: 2017-11-17
slug: autoscaling-in-kubernetes
url: /blog/2017/11/Autoscaling-In-Kubernetes
---
-->
<!--
Kubernetes allows developers to automatically adjust cluster sizes and the number of pod replicas based on current traffic and load. These adjustments reduce the amount of unused nodes, saving money and resources. In this talk, Marcin Wielgus of Google walks you through the current state of pod and node autoscaling in Kubernetes: .how it works, and how to use it, including best practices for deployments in production applications.
-->
Kubernetes 允许开发人员根据当前的流量和负载自动调整集群大小和 pod 副本的数量。这些调整减少了未使用节点的数量,节省了资金和资源。
在这次演讲中,谷歌的 Marcin Wielgus 将带您了解 Kubernetes 中 pod 和节点自动扩缩容的现状:它是如何工作的,以及如何使用它,包括在生产应用程序中部署的最佳实践。
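作为补充,下面给出一个示意性的 HorizontalPodAutoscaler 清单(目标 Deployment 名称与阈值均为假设),用来说明“根据负载自动调整 pod 副本数量”大致是如何声明的:

```yaml
# 仅为示意:按 CPU 利用率自动调整某个 Deployment 的副本数(名称与数值均为假设)
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-server-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-server
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```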
<!--
Enjoyed this talk? Join us for more exciting sessions on scaling and automating your Kubernetes clusters at KubeCon in Austin on December 6-8. [Register Now](https://www.eventbrite.com/e/kubecon-cloudnativecon-north-america-registration-37824050754?_ga=2.9666039.317115486.1510003873-1623727562.1496428006)
-->
喜欢这个演讲吗?欢迎于 12 月 6 日至 8 日到 Austin 参加 KubeCon,了解更多关于扩展和自动化 Kubernetes 集群的精彩议题。[现在注册](https://www.eventbrite.com/e/kubecon-cloudnativecon-north-america-registration-37824050754?_ga=2.9666039.317115486.1510003873-1623727562.1496428006)。
<!--
Be sure to check out [Automating and Testing Production Ready Kubernetes Clusters in the Public Cloud](http://sched.co/CU64) by Ron Lipke, Senior Developer, Platform as a Service, Gannet/USA Today Network.
-->
另外,一定不要错过 Gannet/USA Today Network 平台即服务高级开发人员 Ron Lipke 的演讲:[在公共云中自动化和测试生产就绪的 Kubernetes 集群](http://sched.co/CU64)。


@ -1,81 +0,0 @@
---
title: "Principles of Container-based Application Design"
date: 2018-03-15
slug: principles-of-container-app-design
url: /blog/2018/03/Principles-Of-Container-App-Design
---
<!-- It's possible nowadays to put almost any application in a container and run it. Creating cloud-native applications, however—containerized applications that are automated and orchestrated effectively by a cloud-native platform such as Kubernetes—requires additional effort. Cloud-native applications anticipate failure; they run and scale reliably even when their infrastructure experiences outages. To offer such capabilities, cloud-native platforms like Kubernetes impose a set of contracts and constraints on applications. These contracts ensure that applications they run conform to certain constraints and allow the platform to automate application management. -->
现如今,几乎所有的应用程序都可以放进容器中运行。但要创建云原生应用,即由 Kubernetes 这样的云原生平台有效地自动化运行和编排的容器化应用,则需要额外的工作。
云原生应用需要考虑故障;即使是在底层架构发生故障时也需要可靠地运行。
为了提供这样的功能,像 Kubernetes 这样的云原生平台需要向运行的应用程序强加一些契约和约束。
这些契约确保应用可以在符合某些约束的条件下运行,从而使得平台可以自动化应用管理。
<!-- I've outlined [seven principles][1]for containerized applications to follow in order to be fully cloud-native. -->
我已经为容器化应用如何之为云原生应用概括出了[七项原则][1]。
| ----- |
| ![][2] |
| 容器设计原则 |
<!-- These seven principles cover both build time and runtime concerns. -->
这里所述的七项原则涉及到构建时和运行时,两类关注点。
<!-- #### Build time -->
#### 构建时
<!-- * **Single Concern:** Each container addresses a single concern and does it well.
* **Self-Containment:** A container relies only on the presence of the Linux kernel. Additional libraries are added when the container is built.
* **Image Immutability:** Containerized applications are meant to be immutable, and once built are not expected to change between different environments. -->
* **单一关注点:** 每个容器只解决一个关注点,并且把它完成得很好。
* **自包含:** 一个容器只依赖 Linux 内核,其他库在构建容器镜像时加入。
* **镜像不变性:** 容器化的应用意味着不变性,镜像一旦构建完成,就不应随环境的不同而发生变化。
<!-- #### Runtime -->
#### 运行时
<!-- * **High Observability:** Every container must implement all necessary APIs to help the platform observe and manage the application in the best way possible.
* **Lifecycle Conformance:** A container must have a way to read events coming from the platform and conform by reacting to those events.
* **Process Disposability:** Containerized applications must be as ephemeral as possible and ready to be replaced by another container instance at any point in time.
* **Runtime Confinement:** Every container must declare its resource requirements and restrict resource use to the requirements indicated. -->
* **高可观测性:** 每个容器必须实现所有必要的 API 来帮助平台以最好的方式来观测、管理应用。
* **生命周期一致性:** 一个容器必须要能从平台中获取事件信息,并作出相应的反应。
* **进程易处理性:** 容器化应用的寿命一定要尽可能的短暂,这样,可以随时被另一个容器所替换。
* **运行时限制:** 每个容器都必须要声明自己的资源需求,并将资源使用限制在所需要的范围之内。
<!-- The build time principles ensure that containers have the right granularity, consistency, and structure in place. The runtime principles dictate what functionalities must be implemented in order for containerized applications to possess cloud-native function. Adhering to these principles helps ensure that your applications are suitable for automation in Kubernetes. -->
构建时原则确保容器具有合适的粒度、一致性和结构;运行时原则明确了容器化应用必须实现哪些功能才能具备云原生能力。遵循这些原则有助于确保您的应用程序适合在 Kubernetes 上进行自动化管理。
<!-- The white paper is freely available for download: -->
白皮书可以免费下载:
<!-- To read more about designing cloud-native applications for Kubernetes, check out my [Kubernetes Patterns][3] book. -->
想要了解更多关于如何面向 Kubernetes 设计云原生应用,可以看看我的 [Kubernetes 模式][3] 一书。
<!-- — [Bilgin Ibryam][4], Principal Architect, Red Hat -->
— [Bilgin Ibryam][4], 首席架构师, Red Hat
Twitter:
Blog: [http://www.ofbizian.com][5]
Linkedin:
<!-- Bilgin Ibryam (@bibryam) is a principal architect at Red Hat, open source committer at ASF, blogger, author, and speaker. He is the author of Camel Design Patterns and Kubernetes Patterns books. In his day-to-day job, Bilgin enjoys mentoring, training and leading teams to be successful with distributed systems, microservices, containers, and cloud-native applications in general. -->
Bilgin Ibryam (@bibryam) 是 Red Hat 的一名首席架构师, ASF 的开源贡献者,博主,作者以及演讲者。
他是 Camel 设计模式、 Kubernetes 模式的作者。在他的日常生活中,他非常享受指导、培训以及帮助各个团队更加成功地使用分布式系统、微服务、容器,以及云原生应用。
[1]: https://www.redhat.com/en/resources/cloud-native-container-design-whitepaper
[2]: https://lh5.googleusercontent.com/1XqojkVC0CET1yKCJqZ3-0VWxJ3W8Q74zPLlqnn6eHSJsjHOiBTB7EGUX5o_BOKumgfkxVdgBeLyoyMfMIXwVm9p2QXkq_RRy2mDJG1qEExJDculYL5PciYcWfPAKxF2-DGIdiLw
[3]: http://leanpub.com/k8spatterns/
[4]: http://twitter.com/bibryam
[5]: http://www.ofbizian.com/


@ -0,0 +1,728 @@
---
title: 在 Kubernetes 上开发
date: 2018-05-01
slug: developing-on-kubernetes
---
<!--
---
title: Developing on Kubernetes
date: 2018-05-01
slug: developing-on-kubernetes
---
-->
<!--**Authors**:-->
**作者** [Michael Hausenblas](https://twitter.com/mhausenblas) (Red Hat), [Ilya Dmitrichenko](https://twitter.com/errordeveloper) (Weaveworks)
<!--
How do you develop a Kubernetes app? That is, how do you write and test an app that is supposed to run on Kubernetes? This article focuses on the challenges, tools and methods you might want to be aware of to successfully write Kubernetes apps alone or in a team setting.
-->
您将如何开发一个 Kubernetes 应用?也就是说,您如何编写并测试一个要在 Kubernetes 上运行的应用程序?本文将重点介绍在独自开发或者团队协作中,为了成功编写 Kubernetes 应用程序,您可能希望了解的挑战、工具和方法。
<!--
Were assuming you are a developer, you have a favorite programming language, editor/IDE, and a testing framework available. The overarching goal is to introduce minimal changes to your current workflow when developing the app for Kubernetes. For example, if youre a Node.js developer and are used to a hot-reload setup—that is, on save in your editor the running app gets automagically updated—then dealing with containers and container images, with container registries, Kubernetes deployments, triggers, and more can not only be overwhelming but really take all the fun out if it.
-->
我们假定您是一位开发人员,有自己钟爱的编程语言、编辑器/IDE 以及可用的测试框架。针对 Kubernetes 开发应用时,首要目标是尽量少地改变您当前的工作流程。举个例子,如果您是 Node.js 开发人员,习惯于热重载的环境 - 也就是说您在编辑器里一做保存,正在运行的应用就会自动更新 - 那么跟容器、容器镜像、镜像仓库打交道,又或是跟 Kubernetes 部署、触发器以及更多头疼的东西打交道,不仅会让人难以招架,也会让开发过程完全失去乐趣。
<!--
In the following, well first discuss the overall development setup, then review tools of the trade, and last but not least do a hands-on walkthrough of three exemplary tools that allow for iterative, local app development against Kubernetes.
-->
在下文中,我们将首先讨论 Kubernetes 总体开发环境,然后回顾常用工具,最后进行三个示例性工具的实践演练。这些工具允许针对 Kubernetes 进行本地应用程序的开发和迭代。
<!--
## Where to run your cluster?
-->
## 您的集群运行在哪里?
<!--
As a developer you want to think about where the Kubernetes cluster youre developing against runs as well as where the development environment sits. Conceptually there are four development modes:
-->
作为开发人员,您既需要考虑所针对开发的 Kubernetes 集群运行在哪里,也需要思考开发环境如何配置。概念上,有四种开发模式:
![Dev Modes](/images/blog/2018-05-01-developing-on-kubernetes/dok-devmodes_preview.png)
<!--
A number of tools support pure offline development including Minikube, Docker for Mac/Windows, Minishift, and the ones we discuss in detail below. Sometimes, for example, in a microservices setup where certain microservices already run in the cluster, a proxied setup (forwarding traffic into and from the cluster) is preferable and Telepresence is an example tool in this category. The live mode essentially means youre building and/or deploying against a remote cluster and, finally, the pure online mode means both your development environment and the cluster are remote, as this is the case with, for example, [Eclipse Che](https://www.eclipse.org/che/docs/kubernetes-single-user.html) or [Cloud 9](https://github.com/errordeveloper/k9c). Lets now have a closer look at the basics of offline development: running Kubernetes locally.
-->
许多工具支持纯 offline 开发,包括 Minikube、Docker Mac 版/Windows 版、Minishift 以及下文中我们将详细讨论的几种。有时,比如说在一个微服务系统中,已经有若干微服务在集群中运行,此时 proxied 模式(把流量转发进出集群)就非常合适Telepresence 就是此类工具的一个实例。live 模式本质上是您基于一个远程集群进行构建和部署。最后,纯 online 模式意味着您的开发环境和运行集群都是远程的,典型的例子是 [Eclipse Che](https://www.eclipse.org/che/docs/kubernetes-single-user.html) 或者 [Cloud 9](https://github.com/errordeveloper/k9c)。现在让我们仔细看看离线开发的基础:在本地运行 Kubernetes。
<!--
[Minikube](/docs/getting-started-guides/minikube/) is a popular choice for those who prefer to run Kubernetes in a local VM. More recently Docker for [Mac](https://docs.docker.com/docker-for-mac/kubernetes/) and [Windows](https://docs.docker.com/docker-for-windows/kubernetes/) started shipping Kubernetes as an experimental package (in the “edge” channel). Some reasons why you may want to prefer using Minikube over the Docker desktop option are:
-->
[Minikube](/docs/getting-started-guides/minikube/) 在更喜欢在本地 VM 上运行 Kubernetes 的开发人员中非常受欢迎。不久前Docker 的 [Mac](https://docs.docker.com/docker-for-mac/kubernetes/) 版和 [Windows](https://docs.docker.com/docker-for-windows/kubernetes/) 版都开始试验性地自带 Kubernetes需要使用 “edge” 渠道的安装包)。以下原因也许会促使您选择 Minikube 而不是 Docker 桌面版:
<!--
* You already have Minikube installed and running
* You prefer to wait until Docker ships a stable package
* Youre a Linux desktop user
* You are a Windows user who doesnt have Windows 10 Pro with Hyper-V
-->
* 您已经安装了 Minikube 并且它运行良好
* 您想等到 Docker 出稳定版本
* 您是 Linux 桌面用户
* 您是 Windows 用户,但是没有配有 Hyper-V 的 Windows 10 Pro
<!--
Running a local cluster allows folks to work offline and that you dont have to pay for using cloud resources. Cloud provider costs are often rather affordable and free tiers exists, however some folks prefer to avoid having to approve those costs with their manager as well as potentially incur unexpected costs, for example, when leaving cluster running over the weekend.
-->
运行一个本地集群,开发人员可以离线工作,也无需为使用云资源付费。云服务的费用通常不算高,而且也有免费套餐,但是一些开发人员不希望为了使用云服务而必须得到经理的批准,也不愿意承担意想不到的费用,比如集群在周末忘了关而一直运转。
<!--
Some developers prefer to use a remote Kubernetes cluster, and this is usually to allow for larger compute and storage capacity and also enable collaborative workflows more easily. This means its easier for you to pull in a colleague to help with debugging or share access to an app in the team. Additionally, for some developers it can be critical to mirror production environment as closely as possible, especially when it comes down to external cloud services, say, proprietary databases, object stores, message queues, external load balancer, or mail delivery systems.
-->
有些开发人员却更喜欢远程的 Kubernetes 集群,这样他们通常可以获得更大的计算能力和存储容量,也更容易实现协作工作流程:您可以更容易地拉上一个同事来帮您调试,或者在团队内共享对某个应用的访问。再者,对某些开发人员来说,尽可能地让开发环境贴近生产环境至关重要,尤其是在依赖外部云服务的时候,例如专有数据库、对象存储、消息队列、外部负载均衡器或者邮件投递系统。
<!--
In summary, there are good reasons for you to develop against a local cluster as well as a remote one. It very much depends on in which phase you are: from early prototyping and/or developing alone to integrating a set of more stable microservices.
-->
总之,无论您选择本地或者远程集群,理由都足够多。这很大程度上取决于您所处的阶段:从早期的原型设计/单人开发到后期面对一批稳定微服务的集成。
<!--
Now that you have a basic idea of the options around the runtime environment, lets move on to how to iteratively develop and deploy your app.
-->
既然您已经了解到运行环境的基本选项,那么我们就接着讨论如何迭代式的开发并部署您的应用。
<!--
## The tools of the trade
-->
## 常用工具
<!--
We are now going to review tooling allowing you to develop apps on Kubernetes with the focus on having minimal impact on your existing workflow. We strive to provide an unbiased description including implications of using each of the tools in general terms.
-->
我们现在来回顾一些工具,它们既能让您在 Kubernetes 上开发应用程序,又尽可能少地改变您现有的工作流程。我们力求提供一份不偏不倚的描述,并从总体上说明使用每种工具意味着什么。
<!--
Note that this is a tricky area since even for established technologies such as, for example, JSON vs YAML vs XML or REST vs gRPC vs SOAP a lot depends on your background, your preferences and organizational settings. Its even harder to compare tooling in the Kubernetes ecosystem as things evolve very rapidly and new tools are announced almost on a weekly basis; during the preparation of this post alone, for example, [Gitkube](https://gitkube.sh/) and [Watchpod](https://github.com/MinikubeAddon/watchpod) came out. To cover these new tools as well as related, existing tooling such as [Weave Flux](https://github.com/weaveworks/flux) and OpenShifts [S2I](https://docs.openshift.com/container-platform/3.9/creating_images/s2i.html) we are planning a follow-up blog post to the one youre reading.
-->
请注意,这是一个棘手的领域,因为即使在已经成熟定型的技术之间做选择,比如说在 JSON、YAML、XML、REST、gRPC 或者 SOAP 之间做选择,也很大程度上取决于您的背景、喜好以及公司环境。在 Kubernetes 生态系统内比较各种工具就更加困难,因为技术发展太快,几乎每周都有新工具面世;举个例子,仅在准备这篇博客的期间,[Gitkube](https://gitkube.sh/) 和 [Watchpod](https://github.com/MinikubeAddon/watchpod) 就相继面世。为了进一步覆盖这些新工具以及一些相关的已有工具,例如 [Weave Flux](https://github.com/weaveworks/flux) 和 OpenShift 的 [S2I](https://docs.openshift.com/container-platform/3.9/creating_images/s2i.html),我们计划再写一篇跟进的博客。
### Draft
<!--
[Draft](https://github.com/Azure/draft) aims to help you get started deploying any app to Kubernetes. It is capable of applying heuristics as to what programming language your app is written in and generates a Dockerfile along with a Helm chart. It then runs the build for you and deploys resulting image to the target cluster via the Helm chart. It also allows user to setup port forwarding to localhost very easily.
-->
[Draft](https://github.com/Azure/draft) 旨在帮助您把任何应用程序快速部署到 Kubernetes。它能够根据启发式规则判断您的应用是用哪种编程语言编写的并生成一份 Dockerfile 和 Helm chart。然后它替您执行构建并依照 Helm chart 把构建出的镜像部署到目标集群。它也可以让您很容易地设置到 localhost 的端口转发。
<!--
Implications:
-->
这意味着:
<!--
* User can customise the chart and Dockerfile templates however they like, or even create a [custom pack](https://github.com/Azure/draft/blob/master/docs/reference/dep-003.md) (with Dockerfile, the chart and more) for future use
-->
* 用户可以随意自定义 Helm chart 和 Dockerfile 模板,甚至可以创建一个 [custom pack](https://github.com/Azure/draft/blob/master/docs/reference/dep-003.md)(包含 Dockerfile、Helm chart 以及其他内容)以备后用
<!--
* Its not very simple to guess how just any app is supposed to be built, in some cases user may need to tweak Dockerfile and the Helm chart that Draft generates
-->
* 要猜出任意一个应用应该如何构建并不容易,在某些情况下,用户也许需要修改 Draft 生成的 Dockerfile 和 Helm chart
<!--
* With [Draft version 0.12.0](https://github.com/Azure/draft/releases/tag/v0.12.0) or older, every time user wants to test a change, they need to wait for Draft to copy the code to the cluster, then run the build, push the image and release updated chart; this can timely, but it results in an image being for every single change made by the user (whether it was committed to git or not)
-->
* 如果使用 [Draft version 0.12.0](https://github.com/Azure/draft/releases/tag/v0.12.0)<sup>1</sup> 或者更老的版本,每一次用户想要测试一个改动,都需要等 Draft 把代码拷贝到集群、运行构建、推送镜像并发布更新后的 chart这可能需要一些时间而且用户的每一次改动无论是否提交到 git都会构建出一个镜像
<!--
* As of Draft version 0.12.0, builds are executed locally
* User doesnt have an option to choose something other than Helm for deployment
* It can watch local changes and trigger deployments, but this feature is not enabled by default
-->
* 从 Draft 0.12.0 版本起,构建是在本地进行的
* 用户不能选择 Helm 以外的工具进行部署
* 它可以监控本地的改动并且触发部署,但是这个功能默认是关闭的
<!--
* It allows developer to use either local or remote Kubernetes cluster
* Deploying to production is up to the user, Draft authors recommend their other project Brigade
* Can be used instead of Skaffold, and along the side of Squash
-->
* 它允许开发人员使用本地或者远程的 Kubernetes 集群
* 如何部署到生产环境取决于用户, Draft 的作者推荐了他们的另一个项目 - Brigade
* 可以代替 Skaffold 并且可以和 Squash 一起使用
<!--
More info:
-->
更多信息:
* [Draft: Kubernetes container development made easy](https://kubernetes.io/blog/2017/05/draft-kubernetes-container-development)
* [Getting Started Guide](https://github.com/Azure/draft/blob/master/docs/getting-started.md)
【1】此处疑为 0.11.0,因为 0.12.0 已经支持本地构建,见下一条
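为了更直观地说明上述工作流程,下面给出一个极简的命令示例草图(假设您已经安装 Draft 并执行过 `draft init`,具体子命令和行为以您所用版本的 `draft --help` 输出为准):
```
# 在应用源码目录中运行Draft 会推断语言并生成 Dockerfile 与 Helm chart
$ draft create

# 构建镜像并依照生成的 Helm chart 部署到 kubectl 当前上下文指向的集群
$ draft up

# 将容器端口转发到本地,方便在 localhost 上访问应用
$ draft connect
```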
### Skaffold
<!--
[Skaffold](https://github.com/GoogleCloudPlatform/skaffold) is a tool that aims to provide portability for CI integrations with different build system, image registry and deployment tools. It is different from Draft, yet somewhat comparable. It has a basic capability for generating manifests, but its not a prominent feature. Skaffold is extendible and lets user pick tools for use in each of the steps in building and deploying their app.
-->
[Skaffold](https://github.com/GoogleCloudPlatform/skaffold) 旨在让 CI 集成具有可移植性,允许用户采用不同的构建系统、镜像仓库和部署工具。它不同于 Draft但两者有一定的可比性。它具备生成资源清单的基本能力但那并不是它的重点功能。Skaffold 易于扩展,允许用户在构建和部署应用的每一步选取相应的工具。
<!--
Implications:
-->
这意味着:
<!--
* Modular by design
* Works independently of CI vendor, user doesnt need Docker or Kubernetes plugin
* Works without CI as such, i.e. from the developers laptop
* It can watch local changes and trigger deployments
-->
* 模块化设计
* 不依赖于特定的 CI 厂商,用户不需要 Docker 或者 Kubernetes 插件
* 没有 CI 也可以工作,也就是说,可以在开发人员的电脑上工作
* 它可以监控本地的改动并且触发部署
<!--
* It allows developer to use either local or remote Kubernetes cluster
* It can be used to deploy to production, user can configure how exactly they prefer to do it and provide different kind of pipeline for each target environment
* Can be used instead of Draft, and along the side with most other tools
-->
* 它允许开发人员使用本地或者远程的 Kubernetes 集群
* 它可以用于部署到生产环境,用户可以精确配置自己偏好的部署方式,并为每个目标环境提供不同类型的流水线
* 可以代替 Draft并且和其他工具一起使用
<!--
More info:
-->
更多信息:
* [Introducing Skaffold: Easy and repeatable Kubernetes development](https://cloudplatform.googleblog.com/2018/03/introducing-Skaffold-Easy-and-repeatable-Kubernetes-development.html)
* [Getting Started Guide](https://github.com/GoogleCloudPlatform/skaffold#getting-started-with-local-tooling)
### Squash
<!--
[Squash](https://github.com/solo-io/squash) consists of a debug server that is fully integrated with Kubernetes, and a IDE plugin. It allows you to insert breakpoints and do all the fun stuff you are used to doing when debugging an application using an IDE. It bridges IDE debugging experience with your Kubernetes cluster by allowing you to attach the debugger to a pod running in your Kubernetes cluster.
-->
[Squash](https://github.com/solo-io/squash) 包含一个与 Kubernetes 全面集成的调试服务器,以及一个 IDE 插件。它允许您插入断点并完成所有您习惯于在 IDE 中调试应用时所做的调试操作。它允许您将调试器挂接到 Kubernetes 集群中运行的 pod 上,从而把 IDE 的调试体验与您的 Kubernetes 集群衔接起来。
<!--
Implications:
-->
这意味着:
<!--
* Can be used independently of other tools you chose
* Requires a privileged DaemonSet
* Integrates with popular IDEs
* Supports Go, Python, Node.js, Java and gdb
-->
* 不依赖您选择的其它工具
* 需要一个特权 DaemonSet
* 可以和流行的 IDE 集成
* 支持 Go、Python、Node.js、Java 和 gdb
<!--
* User must ensure application binaries inside the container image are compiled with debug symbols
* Can be used in combination with any other tools described here
* It can be used with either local or remote Kubernetes cluster
-->
* 用户必须确保容器镜像中的应用程序二进制文件在编译时带有调试符号
* 可与此处描述的任何其他工具结合使用
* 它可以与本地或远程 Kubernetes 集群一起使用
<!--
More info:
-->
更多信息:
* [Squash: A Debugger for Kubernetes Apps](https://www.youtube.com/watch?v=5TrV3qzXlgI)
* [Getting Started Guide](https://github.com/solo-io/squash/blob/master/docs/getting-started.md)
### Telepresence
<!--
[Telepresence](https://www.telepresence.io/) connects containers running on developers workstation with a remote Kubernetes cluster using a two-way proxy and emulates in-cluster environment as well as provides access to config maps and secrets. It aims to improve iteration time for container app development by eliminating the need for deploying app to the cluster and leverages local container to abstract network and filesystem interface in order to make it appear as if the app was running in the cluster.
-->
[Telepresence](https://www.telepresence.io/) 使用双向代理将开发人员工作站上运行的容器与远程 Kubernetes 集群连接起来,模拟集群内环境,并提供对 ConfigMap 和 Secret 的访问。它免去了把应用部署到集群这一步,并利用本地容器抽象出网络和文件系统接口,使应用看起来就像在集群中运行一样,从而缩短容器应用开发的迭代周期。
<!--
Implications:
-->
这意味着:
<!--
* It can be used independently of other tools you chose
* Using together with Squash is possible, although Squash would have to be used for pods in the cluster, while conventional/local debugger would need to be used for debugging local container thats connected to the cluster via Telepresence
* Telepresence imposes some network latency
-->
* 它不依赖于其它您选取的工具
* 可以同 Squash 一起使用,但是 Squash 必须用于调试集群中的 pods而传统/本地调试器需要用于调试通过 Telepresence 连接到集群的本地容器
* Telepresence 会产生一些网络延迟
<!--
* It provides connectivity via a side-car process - sshuttle, which is based on SSH
* More intrusive dependency injection mode with LD_PRELOAD/DYLD_INSERT_LIBRARIES is also available
* It is most commonly used with a remote Kubernetes cluster, but can be used with a local one also
-->
* 它通过一个辅助进程提供连接 - sshuttle一个基于 SSH 的工具)
* 还提供了使用 LD_PRELOAD/DYLD_INSERT_LIBRARIES 的更具侵入性的依赖注入模式
* 它最常用于远程 Kubernetes 集群,但也可以与本地集群一起使用
<!--
More info:
-->
更多信息:
* [Telepresence: fast, realistic local development for Kubernetes microservices](https://www.telepresence.io/)
* [Getting Started Guide](https://www.telepresence.io/tutorials/docker)
* [How It Works](https://www.telepresence.io/discussion/how-it-works)
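作为补充,下面是一个假设性的使用草图,基于传统 Telepresence1.x CLI其中的 Deployment 名称 `stock-con`、命名空间 `dok` 和镜像标签 `stock-con:dev` 均沿用本文后面演练中的示例,实际参数请以官方文档为准:
```
# 用本地 Docker 容器替换swap集群中的 stock-con Deployment
# 进出该服务的流量会被双向代理到本地容器,本地进程如同运行在集群内部
$ telepresence --namespace dok --swap-deployment stock-con \
    --docker-run --rm -it stock-con:dev
```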
### Ksync
<!--
[Ksync](https://github.com/vapor-ware/ksync) synchronizes application code (and configuration) between your local machine and the container running in Kubernetes, akin to what [oc rsync](https://docs.openshift.com/container-platform/3.9/dev_guide/copy_files_to_container.html) does in OpenShift. It aims to improve iteration time for app development by eliminating build and deployment steps.
-->
[Ksync](https://github.com/vapor-ware/ksync) 在本地计算机和运行在 Kubernetes 中的容器之间同步应用程序代码(和配置),类似于 [oc rsync](https://docs.openshift.com/container-platform/3.9/dev_guide/copy_files_to_container.html) 在 OpenShift 中的角色。它旨在通过消除构建和部署步骤来缩短应用程序开发的迭代时间。
<!--
Implications:
-->
这意味着:
<!--
* It bypasses container image build and revision control
* Compiled language users have to run builds inside the pod (TBC)
* Two-way sync remote files are copied to local directory
* Container is restarted each time remote filesystem is updated
* No security features development only
-->
* 它绕过了容器镜像构建和版本控制
* 使用编译型语言的用户必须在 pod 内运行构建待确认TBC
* 双向同步 - 远程文件会复制到本地目录
* 每次更新远程文件系统时都会重启容器
* 无安全功能 - 仅限开发
<!--
* Utilizes [Syncthing](https://github.com/syncthing/syncthing), a Go library for peer-to-peer sync
* Requires a privileged DaemonSet running in the cluster
* Node has to use Docker with overlayfs2 no other CRI implementations are supported at the time of writing
-->
* 使用 [Syncthing](https://github.com/syncthing/syncthing),一个用于点对点同步的 Go 语言库
* 需要一个在集群中运行的特权 DaemonSet
* Node 必须使用带有 overlayfs2 的 Docker - 在写作本文时,尚不支持其他 CRI 实现
<!--
More info:
-->
更多信息:
* [Getting Started Guide](https://github.com/vapor-ware/ksync#getting-started)
* [How It Works](https://github.com/vapor-ware/ksync/blob/master/docs/architecture.md)
* [Katacoda scenario to try out ksync in your browser](https://www.katacoda.com/vaporio/scenarios/ksync)
* [Syncthing Specification](https://docs.syncthing.net/specs/)
<!--
## Hands-on walkthroughs
-->
## 实践演练
<!--
The app we will be using for the hands-on walkthroughs of the tools in the following is a simple [stock market simulator](https://github.com/kubernauts/dok-example-us), consisting of two microservices:
-->
我们接下来用于练习使用工具的应用是一个简单的[股市模拟器](https://github.com/kubernauts/dok-example-us),包含两个微服务:
<!--
* The `stock-gen` microservice is written in Go and generates stock data randomly and exposes it via HTTP endpoint `/stockdata`.
* A second microservice, `stock-con` is a Node.js app that consumes the stream of stock data from `stock-gen` and provides an aggregation in form of a moving average via the HTTP endpoint `/average/$SYMBOL` as well as a health-check endpoint at `/healthz`.
-->
* `stock-gen`(股市数据生成器)微服务是用 Go 编写的,随机生成股票数据并通过 HTTP 端点 `/stockdata` 公开
* 第二个微服务,`stock-con`(股市数据消费者)是一个 Node.js 应用程序,它消费来自 `stock-gen` 的股票数据流,并通过 HTTP 端点 `/average/$SYMBOL` 以移动平均线的形式提供聚合结果,同时还在 `/healthz` 提供一个健康检查端点(示例请求见下文)
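下面的示例请求展示了这两个端点的用法(假设 `stock-con` 已经按后文演练的方式通过端口转发暴露在本地 9898 端口;股票代码 `GOOG` 仅作演示):
```
# 健康检查端点
$ curl localhost:9898/healthz

# 查询某一股票代码的移动平均值(具体响应格式取决于应用实现)
$ curl localhost:9898/average/GOOG
```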
<!--
Overall, the default setup of the app looks as follows:
-->
总体上,此应用的默认配置如下图所示:
![Default Setup](/images/blog/2018-05-01-developing-on-kubernetes/dok-architecture_preview.png)
<!--
In the following well do a hands-on walkthrough for a representative selection of tools discussed above: ksync, Minikube with local build, as well as Skaffold. For each of the tools we do the following:
-->
在下文中我们将选取以上讨论的代表性工具进行实践演练ksync具有本地构建的 Minikube 以及 Skaffold。对于每个工具我们执行以下操作
<!--
* Set up the respective tool incl. preparations for the deployment and local consumption of the `stock-con` microservice.
* Perform a code update, that is, change the source code of the `/healthz` endpoint in the `stock-con` microservice and observe the updates.
-->
* 设置相应的工具,包括为部署做准备以及在本地访问 `stock-con` 微服务
* 执行代码更新,即更改 `stock-con` 微服务中 `/healthz` 端点的源代码,并观察更新的效果
<!--
Note that for the target Kubernetes cluster weve been using Minikube locally, but you can also a remote cluster for ksync and Skaffold if you want to follow along.
-->
请注意,作为目标 Kubernetes 集群,我们一直在本地使用 Minikube但如果您想跟着练习在 ksync 和 Skaffold 的例子中也可以使用远程集群。
<!--
### Walkthrough: ksync
-->
### 实践演练ksync
<!--
As a preparation, install [ksync](https://vapor-ware.github.io/ksync/#installation) and then carry out the following steps to prepare the development setup:
-->
作为准备,安装 [ksync](https://vapor-ware.github.io/ksync/#installation),然后执行以下步骤配置开发环境:
```
$ mkdir -p $(pwd)/ksync
$ kubectl create namespace dok
$ ksync init -n dok
```
<!--
With the basic setup completed we're ready to tell ksyncs local client to watch a certain Kubernetes namespace and then we create a spec to define what we want to sync (the directory `$(pwd)/ksync` locally with `/app` in the container). Note that target pod is specified via the selector parameter:
-->
完成基本设置后,我们可以告诉 ksync 的本地客户端监视 Kubernetes 的某个命名空间,然后创建一个 spec 来定义我们想要同步的目录(本地的 `$(pwd)/ksync` 与容器中的 `/app`)。请注意,目标 pod 是通过 selector 参数指定的:
```
$ ksync watch -n dok
$ ksync create -n dok --selector=app=stock-con $(pwd)/ksync /app
$ ksync get -n dok
```
<!--
Now we deploy the stock generator and the stock consumer microservice:
-->
现在我们部署股价数据生成器和股价数据消费者微服务:
```
$ kubectl -n=dok apply \
-f https://raw.githubusercontent.com/kubernauts/dok-example-us/master/stock-gen/app.yaml
$ kubectl -n=dok apply \
-f https://raw.githubusercontent.com/kubernauts/dok-example-us/master/stock-con/app.yaml
```
<!--
Once both deployments are created and the pods are running, we forward the `stock-con` service for local consumption (in a separate terminal session):
-->
一旦两个部署建好并且 pod 开始运行,我们转发 `stock-con` 服务以供本地读取(另开一个终端窗口):
```
$ kubectl get -n dok po --selector=app=stock-con \
-o=custom-columns=:metadata.name --no-headers | \
xargs -IPOD kubectl -n dok port-forward POD 9898:9898
```
<!--
With that we should be able to consume the `stock-con` service from our local machine; we do this by regularly checking the response of the `healthz` endpoint like so (in a separate terminal session):
-->
这样,通过定期查询 `healthz` 端点,我们就应该能够从本地机器上读取 `stock-con` 服务,查询命令如下(在一个单独的终端窗口):
```
$ watch curl localhost:9898/healthz
```
<!--
Now change the code in the `ksync/stock-con`directory, for example update the [`/healthz` endpoint code in `service.js`](https://github.com/kubernauts/dok-example-us/blob/2334ee8fb11f8813370122bd46285cf45bdd4c48/stock-con/service.js#L52) by adding a field to the JSON response and observe how the pod gets updated and the response of the `curl localhost:9898/healthz` command changes. Overall you should have something like the following in the end:
-->
现在,改动 `ksync/stock-con` 目录中的代码,例如更新 [`service.js` 中定义的 `/healthz` 端点代码](https://github.com/kubernauts/dok-example-us/blob/2334ee8fb11f8813370122bd46285cf45bdd4c48/stock-con/service.js#L52),在其 JSON 形式的响应中新添一个字段,并观察 pod 如何更新以及 `curl localhost:9898/healthz` 命令的输出发生何种变化。总的来说,您最后应该看到类似下面的内容:
![Preview](/images/blog/2018-05-01-developing-on-kubernetes/dok-ksync_preview.png)
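作为示意,假设您在 `/healthz` 的 JSON 响应中新增了一个名为 `version` 的字段(字段名和下面的输出内容均为假设,实际输出取决于您的改动),同步生效后可以看到类似的变化:
```
# 输出仅为示意healthz 响应中应多出您新添加的字段
$ curl localhost:9898/healthz
{"healthy":true,"version":"dev"}
```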
<!--
### Walkthrough: Minikube with local build
-->
### 实践演练:带本地构建的 Minikube
<!--
For the following you will need to have Minikube up and running and we will leverage the Minikube-internal Docker daemon for building images, locally. As a preparation, do the following
-->
对于以下内容,您需要启动并运行 Minikube我们将利用 Minikube 自带的 Docker daemon 在本地构建镜像。作为准备,请执行以下操作
```
$ git clone https://github.com/kubernauts/dok-example-us.git && cd dok-example-us
$ eval $(minikube docker-env)
$ kubectl create namespace dok
```
<!--
Now we deploy the stock generator and the stock consumer microservice:
-->
现在我们部署股价数据生成器和股价数据消费者微服务:
```
$ kubectl -n=dok apply -f stock-gen/app.yaml
$ kubectl -n=dok apply -f stock-con/app.yaml
```
<!--
Once both deployments are created and the pods are running, we forward the `stock-con` service for local consumption (in a separate terminal session):
-->
一旦两个部署建好并且 pod 开始运行,我们转发 `stock-con` 服务以供本地读取(另开一个终端窗口):
```
$ kubectl get -n dok po --selector=app=stock-con \
-o=custom-columns=:metadata.name --no-headers | \
xargs -IPOD kubectl -n dok port-forward POD 9898:9898 &
$ watch curl localhost:9898/healthz
```
<!--
Now change the code in the `stock-con`directory, for example, update the [`/healthz` endpoint code in `service.js`](https://github.com/kubernauts/dok-example-us/blob/2334ee8fb11f8813370122bd46285cf45bdd4c48/stock-con/service.js#L52) by adding a field to the JSON response. Once youre done with your code update, the last step is to build a new container image and kick off a new deployment like shown below:
-->
现在,更改 `stock-con` 目录中的代码,例如修改 [`service.js` 中定义的 `/healthz` 端点代码](https://github.com/kubernauts/dok-example-us/blob/2334ee8fb11f8813370122bd46285cf45bdd4c48/stock-con/service.js#L52),在其 JSON 形式的响应中添加一个字段。在您更新完代码后,最后一步是构建新的容器镜像并启动新部署,如下所示:
```
$ docker build -t stock-con:dev -f Dockerfile .
$ kubectl -n dok set image deployment/stock-con *=stock-con:dev
```
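如果想确认新镜像已经生效,可以(可选地)用下面的命令观察滚动更新的进度并查看 Deployment 当前使用的镜像(这里是标准的 kubectl 用法,仅供参考):
```
# 等待 stock-con Deployment 完成滚动更新
$ kubectl -n dok rollout status deployment/stock-con

# 查看当前 Pod 模板中使用的镜像,应为 stock-con:dev
$ kubectl -n dok get deployment stock-con \
    -o=jsonpath='{.spec.template.spec.containers[0].image}'
```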
<!--
Overall you should have something like the following in the end:
-->
总的来说,您最后应该看到类似的内容:
![Local Preview](/images/blog/2018-05-01-developing-on-kubernetes/dok-minikube-localdev_preview.png)
<!--
### Walkthrough: Skaffold
-->
### 实践演练Skaffold
<!--
To perform this walkthrough you first need to install [Skaffold](https://github.com/GoogleContainerTools/skaffold#installation). Once that is done, you can do the following steps to prepare the development setup:
-->
要进行此演练,首先需要安装 [Skaffold](https://github.com/GoogleContainerTools/skaffold#installation)。完成后,您可以执行以下步骤来配置开发环境:
```
$ git clone https://github.com/kubernauts/dok-example-us.git && cd dok-example-us
$ kubectl create namespace dok
```
<!--
Now we deploy the stock generator (but not the stock consumer microservice, that is done via Skaffold):
-->
现在我们部署股价数据生成器(但是暂不部署股价数据消费者,此服务将使用 Skaffold 完成):
```
$ kubectl -n=dok apply -f stock-gen/app.yaml
```
<!--
Note that initially we experienced an authentication error when doing `skaffold dev` and needed to apply a fix as described in [Issue 322](https://github.com/GoogleContainerTools/skaffold/issues/322). Essentially it means changing the content of `~/.docker/config.json` to:
-->
请注意,最初我们在执行 `skaffold dev` 时遇到了身份验证错误,需要应用 [Issue 322](https://github.com/GoogleContainerTools/skaffold/issues/322) 中所述的修复。本质上,这意味着将 `~/.docker/config.json` 的内容改为:
```
{
"auths": {}
}
```
<!--
Next, we had to patch `stock-con/app.yaml` slightly to make it work with Skaffold:
-->
接下来,我们需要略微改动 `stock-con/app.yaml`,这样 Skaffold 才能正常使用此文件:
<!--
Add a `namespace` field to both the `stock-con` deployment and the service with the value of `dok`.
Change the `image` field of the container spec to `quay.io/mhausenblas/stock-con` since Skaffold manages the container image tag on the fly.
-->
`stock-con` 部署和服务中添加一个 `namespace` 字段,其值为 `dok`
将容器规范的 `image` 字段更改为 `quay.io/mhausenblas/stock-con`,因为 Skaffold 可以即时管理容器镜像标签。
<!--
The resulting `app.yaml` file stock-con looks as follows:
-->
最终的 stock-con 的 `app.yaml` 文件看起来如下:
```
apiVersion: apps/v1beta1
kind: Deployment
metadata:
labels:
app: stock-con
name: stock-con
namespace: dok
spec:
replicas: 1
template:
metadata:
labels:
app: stock-con
spec:
containers:
- name: stock-con
image: quay.io/mhausenblas/stock-con
env:
- name: DOK_STOCKGEN_HOSTNAME
value: stock-gen
- name: DOK_STOCKGEN_PORT
value: "9999"
ports:
- containerPort: 9898
protocol: TCP
livenessProbe:
initialDelaySeconds: 2
periodSeconds: 5
httpGet:
path: /healthz
port: 9898
readinessProbe:
initialDelaySeconds: 2
periodSeconds: 5
httpGet:
path: /healthz
port: 9898
---
apiVersion: v1
kind: Service
metadata:
labels:
app: stock-con
name: stock-con
namespace: dok
spec:
type: ClusterIP
ports:
- name: http
port: 80
protocol: TCP
targetPort: 9898
selector:
app: stock-con
```
<!--
The final step before we can start development is to configure Skaffold. So, create a file `skaffold.yaml` in the `stock-con/` directory with the following content:
-->
开始开发之前的最后一步是配置 Skaffold。为此`stock-con/` 目录中创建文件 `skaffold.yaml`,内容如下:
```
apiVersion: skaffold/v1alpha2
kind: Config
build:
artifacts:
- imageName: quay.io/mhausenblas/stock-con
workspace: .
docker: {}
local: {}
deploy:
kubectl:
manifests:
- app.yaml
```
<!--
Now were ready to kick off the development. For that execute the following in the `stock-con/` directory:
-->
现在我们准备好开始开发了。为此,在 `stock-con/` 目录中执行以下命令:
```
$ skaffold dev
```
<!--
Above command triggers a build of the `stock-con` image and then a deployment. Once the pod of the `stock-con` deployment is running, we again forward the `stock-con` service for local consumption (in a separate terminal session) and check the response of the `healthz` endpoint:
-->
上面的命令会触发 `stock-con` 镜像的构建和部署。一旦 `stock-con` 部署的 pod 开始运行,我们再次转发 `stock-con` 服务以供本地访问(在单独的终端窗口中),并检查 `healthz` 端点的响应:
```bash
$ kubectl get -n dok po --selector=app=stock-con \
-o=custom-columns=:metadata.name --no-headers | \
xargs -IPOD kubectl -n dok port-forward POD 9898:9898 &
$ watch curl localhost:9898/healthz
```
<!--
If you now change the code in the `stock-con`directory, for example, by updating the [`/healthz` endpoint code in `service.js`](https://github.com/kubernauts/dok-example-us/blob/2334ee8fb11f8813370122bd46285cf45bdd4c48/stock-con/service.js#L52) by adding a field to the JSON response, you should see Skaffold noticing the change and create a new image as well as deploy it. The resulting screen would look something like this:
-->
现在,如果您修改一下 `stock-con` 目录中的代码,例如更新 [`service.js` 中定义的 `/healthz` 端点代码](https://github.com/kubernauts/dok-example-us/blob/2334ee8fb11f8813370122bd46285cf45bdd4c48/stock-con/service.js#L52),在其 JSON 形式的响应中添加一个字段,您应该会看到 Skaffold 检测到代码改动,构建新镜像并将其部署。您的屏幕看起来应该类似这样:
![Skaffold Preview](/images/blog/2018-05-01-developing-on-kubernetes/dok-skaffold_preview.png)
<!--
By now you should have a feeling how different tools enable you to develop apps on Kubernetes and if youre interested to learn more about tools and or methods, check out the following resources:
-->
至此,您应该对不同的工具如何帮您在 Kubernetes 上开发应用程序有了一定的概念,如果您有兴趣了解有关工具和/或方法的更多信息,请查看以下资源:
* Blog post by Shahidh K Muhammed on [Draft vs Gitkube vs Helm vs Ksonnet vs Metaparticle vs Skaffold](https://blog.hasura.io/draft-vs-gitkube-vs-helm-vs-ksonnet-vs-metaparticle-vs-skaffold-f5aa9561f948) (03/2018)
* Blog post by Gergely Nemeth on [Using Kubernetes for Local Development](https://nemethgergely.com/using-kubernetes-for-local-development/index.html), with a focus on Skaffold (03/2018)
* Blog post by Richard Li on [Locally developing Kubernetes services (without waiting for a deploy)](https://hackernoon.com/locally-developing-kubernetes-services-without-waiting-for-a-deploy-f63995de7b99), with a focus on Telepresence
* Blog post by Abhishek Tiwari on [Local Development Environment for Kubernetes using Minikube](https://abhishek-tiwari.com/local-development-environment-for-kubernetes-using-minikube/) (09/2017)
* Blog post by Aymen El Amri on [Using Kubernetes for Local DevelopmentMinikube](https://medium.com/devopslinks/using-kubernetes-minikube-for-local-development-c37c6e56e3db) (08/2017)
* Blog post by Alexis Richardson on [GitOps - Operations by Pull Request](https://www.weave.works/blog/gitops-operations-by-pull-request) (08/2017)
* Slide deck [GitOps: Drive operations through git](https://docs.google.com/presentation/d/1d3PigRVt_m5rO89Ob2XZ16bW8lRSkHHH5k816-oMzZo/), with a focus on Gitkube by Tirumarai Selvan (03/2018)
* Slide deck [Developing apps on Kubernetes](https://speakerdeck.com/mhausenblas/developing-apps-on-kubernetes), a talk Michael Hausenblas gave at a CNCF Paris meetup (04/2018)
* YouTube videos:
* [TGI Kubernetes 029: Developing Apps with Ksync](https://www.youtube.com/watch?v=QW85Y0Ug3KY )
* [TGI Kubernetes 030: Exploring Skaffold](https://www.youtube.com/watch?v=McwwWhCXMxc)
* [TGI Kubernetes 031: Connecting with Telepresence](https://www.youtube.com/watch?v=zezeBAJ_3w8)
* [TGI Kubernetes 033: Developing with Draft](https://www.youtube.com/watch?v=8B1D7cTMPgA)
* Raw responses to the [Kubernetes Application Survey](https://docs.google.com/spreadsheets/d/12ilRCly2eHKPuicv1P_BD6z__PXAqpiaR-tDYe2eudE/edit) 2018 by SIG Apps
<!--
With that we wrap up this post on how to go about developing apps on Kubernetes, we hope you learned something and if you have feedback and/or want to point out a tool that you found useful, please let us know via Twitter: [Ilya](https://twitter.com/errordeveloper) and [Michael](https://twitter.com/mhausenblas).
-->
有了这些,我们这篇关于如何在 Kubernetes 上开发应用程序的博客就可以收尾了,希望您有所收获,如果您有反馈和/或想要指出您认为有用的工具,请通过 Twitter 告诉我们:[Ilya](https://twitter.com/errordeveloper) 和 [Michael](https://twitter.com/mhausenblas)


@ -0,0 +1,67 @@
---
title: '向 Discuss Kubernetes 问好'
layout: blog
date: 2018-05-30
---
<!--
---
title: ' Kubernetes 1 11say-hello-to-discuss-kubernetes '
cn-approvers:
- congfairy
layout: blog
date: 2018-05-30
---
-->
<!--
Author: Jorge Castro (Heptio)
-->
作者: Jorge Castro (Heptio)
<!--
Communication is key when it comes to engaging a community of over 35,000 people in a global and remote environment. Keeping track of everything in the Kubernetes community can be an overwhelming task. On one hand we have our official resources, like Stack Overflow, GitHub, and the mailing lists, and on the other we have more ephemeral resources like Slack, where you can hop in, chat with someone, and then go on your merry way.
-->
要让一个超过 35,000 人的全球性远程社区参与进来,沟通是关键。跟踪 Kubernetes 社区中发生的所有事情可能是一项艰巨的任务。一方面,我们有官方资源,如 Stack Overflow、GitHub 和邮件列表;另一方面,我们有更多即时性的资源,如 Slack您可以进去与某人聊上几句然后各忙各的。
<!--
Slack is great for casual and timely conversations and keeping up with other community members, but communication can't be easily referenced in the future. Plus it can be hard to raise your hand in a room filled with 35,000 participants and find a voice. Mailing lists are useful when trying to reach a specific group of people with a particular ask and want to keep track of responses on the thread, but can be daunting with a large amount of people. Stack Overflow and GitHub are ideal for collaborating on projects or questions that involve code and need to be searchable in the future, but certain topics like "What's your favorite CI/CD tool" or "Kubectl tips and tricks" are offtopic there.
While our current assortment of communication channels are valuable in their own rights, we found that there was still a gap between email and real time chat. Across the rest of the web, many other open source projects like Docker, Mozilla, Swift, Ghost, and Chef have had success building communities on top of Discourse, an open source discussion platform. So what if we could use this tool to bring our discussions together under a modern roof, with an open API, and perhaps not let so much of our information fade into the ether? There's only one way to find out: Welcome to discuss.kubernetes.io
-->
Slack 非常适合随意和及时的对话,也便于与其他社区成员保持联系,但其中的交流在日后很难被引用。此外,在一个有 35,000 名参与者的房间里举手发言并被听到是很难的。邮件列表适合带着特定问题联系特定人群并跟踪讨论串中的回应,但面对大量的人也会让人望而却步。Stack Overflow 和 GitHub 非常适合就涉及代码、且将来需要被检索的项目或问题进行协作,但某些主题,比如“你最喜欢的 CI/CD 工具是什么”或“[Kubectl 提示和技巧](http://discuss.kubernetes.io/t/kubectl-tips-and-tricks/192)”,在那里就属于离题内容了。
虽然我们目前的各种沟通渠道各有各的价值,但我们发现,在电子邮件和实时聊天之间仍然存在空白。在互联网的其他角落,许多其他开源项目,如 Docker、Mozilla、Swift、Ghost 和 Chef已经成功地在 [Discourse](http://www.discourse.org/features)(一个开源讨论平台)之上构建了社区。那么,如果我们能用这个工具,把我们的讨论汇集到一个具有开放 API 的现代平台之下,或许还能避免让这么多信息消失在虚空之中,结果会怎样?只有一种方法可以找到答案:欢迎来到 [discuss.kubernetes.io](http://discuss.kubernetes.io)
<!--
Right off the bat we have categories that users can browse. Checking and posting in these categories allow users to participate in things they might be interested in without having to commit to subscribing to a list. Granular notification controls allow the users to subscribe to just the category or tag they want, and allow for responding to topics via email.
Ecosystem partners and developers now have a place where they can [announce projects](https://discuss.kubernetes.io/c/announcements) that they're working on to users without wondering if it would be offtopic on an official list. We can make this place be not just about core Kubernetes, but about the hundreds of wonderful tools our community is building.
This new community forum gives people a place to go where they can discuss Kubernetes, and a sounding board for developers to make announcements of things happening around Kubernetes, all while being searchable and easily accessible to a wider audience.
Hop in and take a look. We're just getting started, so you might want to begin by [introducing yourself](https://discuss.kubernetes.io/t/introduce-yourself-here/56) and then browsing around. Apps are also available for [Android](https://play.google.com/store/apps/details?id=com.discourse&hl=en_US&rdid=com.discourse&pli=1)and [iOS](https://itunes.apple.com/us/app/discourse-app/id1173672076?mt=8).
-->
一上来,我们就提供了可供用户浏览的类别。在这些类别中浏览和发帖,用户就可以参与自己感兴趣的内容,而无需订阅整个列表。精细的通知控制允许用户只订阅自己想要的类别或标签,并允许通过电子邮件回复主题。
生态系统合作伙伴和开发人员现在有了一个可以向用户[宣布自己正在开发的项目](http://discuss.kubernetes.io/c/announcements)的地方,而不必担心这在官方列表中是否离题。我们可以让这个地方不仅仅是关于核心 Kubernetes而是关于我们社区正在建设的数百个精彩工具。
这个新的社区论坛为人们提供了一个可以讨论 Kubernetes 的地方,也是开发人员发布 Kubernetes 相关动态的平台,同时内容可以被搜索,也更容易被更广泛的受众访问。
进来看看吧。我们才刚刚起步,您不妨从[自我介绍](http://discuss.kubernetes.io/t/introduce-yourself-here/56)开始,然后四处浏览。也有 [Android](http://play.google.com/store/apps/details?id=com.discourse&hl=en_US&rdid=com.discourse&pli=1) 和 [iOS](http://itunes.apple.com/us/app/discourse-app/id1173672076?mt=8) 应用可供下载。


@ -0,0 +1,42 @@
---
title: Kubernetes 这四年
approvers:
cn-approvers:
- congfairy
layout: blog
date: 2018-06-06
---
<!--
**Author**: Joe Beda (CTO and Founder, Heptio)
On June 6, 2014 I checked in the [first commit](https://github.com/kubernetes/kubernetes/commit/2c4b3a562ce34cddc3f8218a2c4d11c7310e6d56) of what would become the public repository for Kubernetes. Many would assume that is where the story starts. It is the beginning of history, right? But that really doesnt tell the whole story.
-->
**作者**Joe Beda( Heptio 首席技术官兼创始人)
2014 年 6 月 6 日,我签入了后来成为 Kubernetes 公共代码库的[第一次 commit](https://github.com/kubernetes/kubernetes/commit/2c4b3a562ce34cddc3f8218a2c4d11c7310e6d56)。许多人会认为故事是从那里开始的。那是历史的起点,对吗?但这确实不能说明全部。
![k8s_first_commit](/images/blog/2018-06-06-4-years-of-k8s/k8s-first-commit.png)
<!--
The cast leading up to that commit was large and the success for Kubernetes since then is owed to an ever larger cast.
Kubernetes was built on ideas that had been proven out at Google over the previous ten years with Borg. And Borg, itself, owed its existence to even earlier efforts at Google and beyond.
Concretely, Kubernetes started as some prototypes from Brendan Burns combined with ongoing work from me and Craig McLuckie to better align the internal Google experience with the Google Cloud experience. Brendan, Craig, and I really wanted people to use this, so we made the case to build out this prototype as an open source project that would bring the best ideas from Borg out into the open.
After we got the nod, it was time to actually build the system. We took Brendans prototype (in Java), rewrote it in Go, and built just enough to get the core ideas across. By this time the team had grown to include Ville Aikas, Tim Hockin, Brian Grant, Dawn Chen and Daniel Smith. Once we had something working, someone had to sign up to clean things up to get it ready for public launch. That ended up being me. Not knowing the significance at the time, I created a new repo, moved things over, and checked it in. So while I have the first public commit to the repo, there was work underway well before that.
The version of Kubernetes at that point was really just a shadow of what it was to become. The core concepts were there but it was very raw. For example, Pods were called Tasks. That was changed a day before we went public. All of this led up to the public announcement of Kubernetes on June 10th, 2014 in a keynote from Eric Brewer at the first DockerCon. You can watch that video here:
-->
在那次 commit 之前,已经有一个庞大的群体参与其中,而 Kubernetes 此后的成功更要归功于一个更加庞大的群体。
Kubernetes 建立在过去十年曾经在 Google 的 Borg 集群管理系统中验证过的思路之上。而 Borg 本身也是 Google 和其他公司早期努力的结果。
具体而言Kubernetes 最初是从 Brendan Burns 的一些原型开始,结合我和 Craig McLuckie 正在进行的工作,以更好地将 Google 内部实践与 Google Cloud 的经验相结合。 BrendanCraig 和我真的希望人们使用它,所以我们建议将这个原型构建为一个开源项目,将 Borg 的最佳创意带给大家。
在我们所有人都同意后,就开始着手构建这个系统了。我们采用了 Brendan 的原型Java 语言),用 Go 语言重写,并围绕上述核心思想构建出足以表达这些思想的系统。到这个时候,团队已经成长为包括 Ville Aikas、Tim Hockin、Brian Grant、Dawn Chen 和 Daniel Smith。在系统基本可以工作之后需要有人负责清理整顿为公开发布做好准备。这个角色最终落在了我身上。当时我并不知道这件事情的重要性我创建了一个新的仓库把代码搬了过来然后签入。所以虽然仓库的第一个公开 commit 是我提交的,但在那之前工作早已在进行中。
那时的 Kubernetes 只是它日后模样的一个雏形。核心概念已经有了但非常原始。例如Pods 当时还被称为 Tasks这个名字在我们公开发布的前一天才被改掉。2014 年 6 月 10 日Eric Brewer 在第一届 DockerCon 的主题演讲中正式发布了 Kubernetes。您可以在此处观看该视频
<center><iframe width="560" height="315" src="https://www.youtube.com/embed/YrxnVKZeqK8" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe></center>
<!--
But, however raw, that modest start was enough to pique the interest of a community that started strong and has only gotten stronger. Over the past four years Kubernetes has exceeded the expectations of all of us that were there early on. We owe the Kubernetes community a huge debt. The success the project has seen is based not just on code and technology but also the way that an amazing group of people have come together to create something special. The best expression of this is the [set of Kubernetes values](https://github.com/kubernetes/steering/blob/master/values.md) that Sarah Novotny helped curate.
Here is to another 4 years and beyond! 🎉🎉🎉
-->
但是,无论多么原始,这个不起眼的开端足以激起一个社区的兴趣,这个社区从一开始就很强大,并且只会变得更强大。在过去的四年里Kubernetes 已经超出了我们所有早期参与者的期望。我们欠 Kubernetes 社区一份厚重的人情。该项目所取得的成功不仅基于代码和技术还基于一群出色的人聚在一起创造出的特别的东西。Sarah Novotny 帮助整理的一套 [Kubernetes 价值观](https://github.com/kubernetes/steering/blob/master/values.md)是对这一点最好的诠释。
让我们一起迎接下一个 4 年,以及更远的未来!🎉🎉🎉


@ -0,0 +1,129 @@
---
title: 'Kubernetes 内的动态 Ingress'
layout: blog
date: 2018-06-07
---
<!--
title: Dynamic Ingress in Kubernetes
date: 2018-06-07
Author: Richard Li (Datawire)
-->
作者: Richard Li (Datawire)
<!--
Kubernetes makes it easy to deploy applications that consist of many microservices, but one of the key challenges with this type of architecture is dynamically routing ingress traffic to each of these services. One approach is Ambassador, a Kubernetes-native open source API Gateway built on the Envoy Proxy. Ambassador is designed for dynamic environment where services may come and go frequently.
Ambassador is configured using Kubernetes annotations. Annotations are used to configure specific mappings from a given Kubernetes service to a particular URL. A mapping can include a number of annotations for configuring a route. Examples include rate limiting, protocol, cross-origin request sharing, traffic shadowing, and routing rules.
-->
Kubernetes 可以轻松部署由许多微服务组成的应用程序,但这种架构的关键挑战之一是动态地将流量路由到这些服务中的每一个。
一种方法是使用 [Ambassador](https://www.getambassador.io)
一个基于 [Envoy Proxy](https://www.envoyproxy.io) 构建的 Kubernetes 原生开源 API 网关。
Ambassador 专为动态环境而设计,这类环境中的服务可能被频繁添加或删除。
Ambassador 使用 Kubernetes 注解进行配置。
注解用于配置从给定 Kubernetes 服务到特定 URL 的具体映射关系。
每个映射中可以包括多个注解,用于配置路由。
注解的例子有速率限制、协议、跨源请求共享CORS、流量镜像和路由规则等。
<!--
## A Basic Ambassador Example
Ambassador is typically installed as a Kubernetes deployment, and is also available as a Helm chart. To configure Ambassador, create a Kubernetes service with the Ambassador annotations. Here is an example that configures Ambassador to route requests to /httpbin/ to the public httpbin.org service:
-->
## 一个简单的 Ambassador 示例
Ambassador 通常作为 Kubernetes Deployment 来安装,也可以作为 Helm Chart 使用。
配置 Ambassador 时,请使用 Ambassador 注解创建 Kubernetes 服务。
下面是一个例子,用来配置 Ambassador将针对 /httpbin/ 的请求路由到公共的 httpbin.org 服务:
```
apiVersion: v1
kind: Service
metadata:
name: httpbin
annotations:
getambassador.io/config: |
---
apiVersion: ambassador/v0
kind: Mapping
name: httpbin_mapping
prefix: /httpbin/
service: httpbin.org:80
host_rewrite: httpbin.org
spec:
type: ClusterIP
ports:
- port: 80
```
<!--
A mapping object is created with a prefix of /httpbin/ and a service name of httpbin.org. The host_rewrite annotation specifies that the HTTP host header should be set to httpbin.org.
-->
例子中创建了一个 Mapping 对象,其 prefix 设置为 /httpbin/service 名称为 httpbin.org。
其中的 host_rewrite 注解指定 HTTP 的 host 头部字段应设置为 httpbin.org。
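配置生效后,可以用类似下面的请求来验证路由(这里假设 Ambassador 服务已通过 LoadBalancer 或 NodePort 对外暴露,`$AMBASSADOR_IP` 代表其对外地址,仅为示意):
```
# 请求会被 Ambassador 按前缀 /httpbin/ 匹配并转发到 httpbin.org
# 同时 HTTP 的 host 头被改写为 httpbin.org
$ curl http://$AMBASSADOR_IP/httpbin/get
```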
<!--
## Kubeflow
Kubeflow provides a simple way to easily deploy machine learning infrastructure on Kubernetes. The Kubeflow team needed a proxy that provided a central point of authentication and routing to the wide range of services used in Kubeflow, many of which are ephemeral in nature.
<center><i>Kubeflow architecture, pre-Ambassador</center></i>
-->
## Kubeflow
[Kubeflow](https://github.com/kubeflow/kubeflow) 提供了一种简单的方法,用于在 Kubernetes 上轻松部署机器学习基础设施。
Kubeflow 团队需要一个代理,为 Kubeflow 中所使用的各种服务提供集中化的认证和路由能力Kubeflow 中许多服务本质上都是生命期很短的。
<center><i>Kubeflow 架构(引入 Ambassador 之前)</i></center>
<!--
## Service configuration
With Ambassador, Kubeflow can use a distributed model for configuration. Instead of a central configuration file, Ambassador allows each service to configure its route in Ambassador via Kubernetes annotations. Here is a simplified example configuration:
-->
## 服务配置
有了 AmbassadorKubeflow 可以使用分布式模型进行配置。
Ambassador 不使用集中的配置文件,而是允许每个服务通过 Kubernetes 注解在 Ambassador 中配置其路由。
下面是一个简化的配置示例:
```
---
apiVersion: ambassador/v0
kind: Mapping
name: tfserving-mapping-test-post
prefix: /models/test/
rewrite: /model/test/:predict
method: POST
service: test.kubeflow:8000
```
<!--
In this example, the “test” service uses Ambassador annotations to dynamically configure a route to the service, triggered only when the HTTP method is a POST, and the annotation also specifies a rewrite rule.
-->
示例中,“test” 服务使用 Ambassador 注解为自己动态配置路由。
所配置的路由仅在 HTTP 方法为 POST 时触发;注解中同时还给出了一条重写规则。
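按照这条 Mapping 的定义,一个发往 Ambassador 的 POST 请求会被改写并转发到 `test.kubeflow:8000` 上的 `:predict` 路径。下面是一个示意性的调用(`$AMBASSADOR_IP` 和请求体均为假设,实际格式取决于所部署的模型服务):
```
# 前缀 /models/test/ 命中上面的 Mapping请求被重写为 /model/test/:predict
$ curl -X POST -H "Content-Type: application/json" \
    -d '{"instances": [1.0, 2.0, 5.0]}' \
    http://$AMBASSADOR_IP/models/test/
```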
<!--
With Ambassador, Kubeflow manages routing easily with Kubernetes annotations. Kubeflow configures a single ingress object that directs traffic to Ambassador, then creates services with Ambassador annotations as needed to direct traffic to specific backends. For example, when deploying TensorFlow services, Kubeflow creates and and annotates a K8s service so that the model will be served at https://<ingress host>/models/<model name>/. Kubeflow can also use the Envoy Proxy to do the actual L7 routing. Using Ambassador, Kubeflow takes advantage of additional routing configuration like URL rewriting and method-based routing.
If youre interested in using Ambassador with Kubeflow, the standard Kubeflow install automatically installs and configures Ambassador.
If youre interested in using Ambassador as an API Gateway or Kubernetes ingress solution for your non-Kubeflow services, check out the Getting Started with Ambassador guide.
## Kubeflow and Ambassador
-->
## Kubeflow 和 Ambassador
通过 AmbassadorKubeflow 可以使用 Kubernetes 注解轻松管理路由。
Kubeflow 配置同一个 Ingress 对象,将流量定向到 Ambassador然后根据需要创建具有 Ambassador 注解的服务,以将流量定向到特定后端。
例如,在部署 TensorFlow 服务时Kubeflow 会创建 Kubernetes 服务并为其添加注解,
以便用户能够在 `https://<ingress主机>/models/<模型名称>/` 处访问到模型本身。
Kubeflow 还可以使用 Envoy Proxy 来进行实际的 L7 路由。
通过 AmbassadorKubeflow 能够更充分地利用 URL 重写和基于方法的路由等额外的路由配置能力。
如果您对在 Kubeflow 中使用 Ambassador 感兴趣,标准的 Kubeflow 安装会自动安装和配置 Ambassador。
如果您有兴趣将 Ambassador 用作 API 网关或 Kubernetes 的 Ingress 解决方案,
请参阅 [Ambassador 入门指南](https://www.getambassador.io/user-guide/getting-started)。


@ -0,0 +1,241 @@
<!--
---
layout: blog
title: 'The Machines Can Do the Work, a Story of Kubernetes Testing, CI, and Automating the Contributor Experience'
date: 2018-08-29
---
-->
---
layout: blog
title: '机器可以完成这项工作,一个关于 Kubernetes 测试、CI 和自动化贡献者体验的故事'
date: 2018-08-29
---
<!--
**Author**: Aaron Crickenberger (Google) and Benjamin Elder (Google)
-->
**作者**Aaron Crickenberger谷歌和 Benjamin Elder谷歌
<!--
_“Large projects have a lot of less exciting, yet, hard work. We value time spent automating repetitive work more highly than toil. Where that work cannot be automated, it is our culture to recognize and reward all types of contributions. However, heroism is not sustainable.”_ - [Kubernetes Community Values](https://git.k8s.io/community/values.md#automation-over-process)
-->
_“大型项目有很多并不那么令人兴奋却很辛苦的工作。比起这种辛劳,我们更重视把时间花在自动化重复性工作上。对于无法自动化的工作,我们的文化是认可并奖励所有类型的贡献。然而,英雄主义是不可持续的。”_ - [Kubernetes Community Values](https://git.k8s.io/community/values.md#automation-over-process)
<!--
Like many open source projects, Kubernetes is hosted on GitHub. We felt the barrier to participation would be lowest if the project lived where developers already worked, using tools and processes developers already knew. Thus the project embraced the service fully: it was the basis of our workflow, our issue tracker, our documentation, our blog platform, our team structure, and more.
-->
像许多开源项目一样Kubernetes 托管在 GitHub 上。我们认为,如果项目就位于开发人员已经工作的地方,使用开发人员已经熟悉的工具和流程,那么参与的门槛就会最低。因此,该项目全面拥抱了这项服务:它是我们的工作流程、问题跟踪、文档、博客平台、团队结构等的基础。
<!--
This strategy worked. It worked so well that the project quickly scaled past its contributors capacity as humans. What followed was an incredible journey of automation and innovation. We didnt just need to rebuild our airplane mid-flight without crashing, we needed to convert it into a rocketship and launch into orbit. We needed machines to do the work.
-->
这个策略奏效了。它运作得如此之好,以至于项目的规模迅速超出了贡献者作为人类所能承担的限度。随之而来的是一段令人难以置信的自动化和创新之旅。我们不仅需要在飞行途中重建飞机而不让它坠毁,还需要把它改造成火箭并送入轨道。我们需要机器来完成这项工作。
<!--
## The Work
-->
## 工作
<!--
Initially, we focused on the fact that we needed to support the sheer volume of tests mandated by a complex distributed system such as Kubernetes. Real world failure scenarios had to be exercised via end-to-end (e2e) tests to ensure proper functionality. Unfortunately, e2e tests were susceptible to flakes (random failures) and took anywhere from an hour to a day to complete.
-->
最初,我们关注的是这样一个事实:我们需要支撑像 Kubernetes 这样复杂的分布式系统所要求的海量测试。真实世界中的故障场景必须通过端到端e2e测试来演练以确保功能正确。不幸的是e2e 测试容易出现 flakes随机失败并且需要一个小时到一天的时间才能完成。
<!--
Further experience revealed other areas where machines could do the work for us:
-->
进一步的经验揭示了机器可以为我们工作的其他领域:
<!--
* PR Workflow
* Did the contributor sign our CLA?
* Did the PR pass tests?
* Is the PR mergeable?
* Did the merge commit pass tests?
* Triage
* Who should be reviewing PRs?
* Is there enough information to route an issue to the right people?
* Is an issue still relevant?
* Project Health
* What is happening in the project?
* What should we be paying attention to?
-->
* Pull Request 工作流程
* 贡献者是否签署了我们的 CLA
* Pull Request 通过测试吗?
* Pull Request 可以合并吗?
* 合并提交是否通过了测试?
* 鉴别分类
* 谁应该审查 Pull Request
* 是否有足够的信息将问题发送给合适的人?
* 问题是否依旧存在?
* 项目健康
* 项目中发生了什么?
* 我们应该注意什么?
<!--
As we developed automation to improve our situation, we followed a few guiding principles:
-->
当我们开发自动化来改善我们的情况时,我们遵循了以下几个指导原则:
<!--
* Follow the push/pull control loop patterns that worked well for Kubernetes
* Prefer stateless loosely coupled services that do one thing well
* Prefer empowering the entire community over empowering a few core contributors
* Eat our own dogfood and avoid reinventing wheels
-->
* 遵循在 Kubernetes 中行之有效的推送/拉取控制循环模式
* 首选把一件事做好的无状态、松耦合的服务
* 更倾向于赋能整个社区,而不是只赋予少数核心贡献者权力
* 吃自己的狗粮,避免重新造轮子
<!--
## Enter Prow
-->
## 了解 Prow
<!--
This led us to create [Prow](https://git.k8s.io/test-infra/prow) as the central component for our automation. Prow is sort of like an [If This, Then That](https://ifttt.com/) for GitHub events, with a built-in library of [commands](https://prow.k8s.io/command-help), [plugins](https://prow.k8s.io/plugins), and utilities. We built Prow on top of Kubernetes to free ourselves from worrying about resource management and scheduling, and ensure a more pleasant operational experience.
-->
这促使我们创建了 [Prow](https://git.k8s.io/test-infra/prow),作为我们自动化的核心组件。Prow 有点像面向 GitHub 事件的 [If This, Then That](https://ifttt.com/),内置了一个包含[命令](https://prow.k8s.io/command-help)、[插件](https://prow.k8s.io/plugins)和实用程序的库。我们把 Prow 构建在 Kubernetes 之上,让我们不必操心资源管理和调度,并确保更愉快的运维体验。
<!--
Prow lets us do things like:
-->
Prow 让我们做以下事情:
<!--
* Allow our community to triage issues/PRs by commenting commands such as “/priority critical-urgent”, “/assign mary” or “/close”
* Auto-label PRs based on how much code they change, or which files they touch
* Age out issues/PRs that have remained inactive for too long
* Auto-merge PRs that meet our PR workflow requirements
* Run CI jobs defined as [Knative Builds](https://github.com/knative/build), Kubernetes Pods, or Jenkins jobs
* Enforce org-wide and per-repo GitHub policies like [branch protection](https://github.com/kubernetes/test-infra/tree/master/prow/cmd/branchprotector) and [GitHub labels](https://github.com/kubernetes/test-infra/tree/master/label_sync)
-->
* 允许我们的社区通过评论诸如 “/priority critical-urgent”、“/assign mary” 或 “/close” 之类的命令对 issues/Pull Requests 进行分类
* 根据 Pull Requests 更改的代码量或涉及的文件自动为其打标签
* 将长期处于不活跃状态的 issues/Pull Requests 逐步关闭
* 自动合并符合我们 PR 工作流程要求的 Pull Requests
* 运行以 [Knative Builds](https://github.com/knative/build)、Kubernetes Pod 或 Jenkins 任务形式定义的 CI 作业
* 实施组织范围和按仓库细分的 GitHub 策略,如[分支保护](https://github.com/kubernetes/test-infra/tree/master/prow/cmd/branchprotector)和 [GitHub 标签](https://github.com/kubernetes/test-infra/tree/master/label_sync)
<!--
Prow was initially developed by the engineering productivity team building Google Kubernetes Engine, and is actively contributed to by multiple members of Kubernetes SIG Testing. Prow has been adopted by several other open source projects, including Istio, JetStack, Knative and OpenShift. [Getting started with Prow](https://github.com/kubernetes/test-infra/blob/master/prow/getting_started.md) takes a Kubernetes cluster and `kubectl apply starter.yaml` (running pods on a Kubernetes cluster).
-->
Prow 最初由构建 Google Kubernetes Engine 的工程效率团队开发,目前由 Kubernetes SIG Testing 的多位成员积极贡献。Prow 已被其他几个开源项目采用,包括 Istio、JetStack、Knative 和 OpenShift。[Getting started with Prow](https://github.com/kubernetes/test-infra/blob/master/prow/getting_started.md) 只需要一个 Kubernetes 集群和 `kubectl apply starter.yaml`(在 Kubernetes 集群上运行 Pod。
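作为示意,部署起步配置并确认组件启动的过程大致如下(这里使用标准的 `kubectl apply -f` 语法starter.yaml 的内容、组件名称与所在命名空间以官方 getting started 文档为准):
```
# 将 Prow 的起步配置应用到现有集群
$ kubectl apply -f starter.yaml

# 查看 hook、deck 等 Prow 组件的 Pod 是否正常运行(命名空间取决于 starter.yaml 中的定义)
$ kubectl get pods
```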
<!--
Once we had Prow in place, we began to hit other scaling bottlenecks, and so produced additional tooling to support testing at the scale required by Kubernetes, including:
-->
在部署好 Prow 之后,我们开始遇到其他扩展性瓶颈,因此开发了更多工具,以支持 Kubernetes 所要求的测试规模,包括:
<!--
- [Boskos](https://github.com/kubernetes/test-infra/tree/master/boskos): manages job resources (such as GCP projects) in pools, checking them out for jobs and cleaning them up automatically ([with monitoring](http://velodrome.k8s.io/dashboard/db/boskos-dashboard?orgId=1))
- [ghProxy](https://github.com/kubernetes/test-infra/tree/master/ghproxy): a reverse proxy HTTP cache optimized for use with the GitHub API, to ensure our token usage doesnt hit API limits ([with monitoring](http://velodrome.k8s.io/dashboard/db/github-cache?refresh=1m&orgId=1))
- [Greenhouse](https://github.com/kubernetes/test-infra/tree/master/greenhouse): allows us to use a remote bazel cache to provide faster build and test results for PRs ([with monitoring](http://velodrome.k8s.io/dashboard/db/bazel-cache?orgId=1))
- [Splice](https://github.com/kubernetes/test-infra/tree/master/prow/cmd/splice): allows us to test and merge PRs in a batch, ensuring our merge velocity is not limited to our test velocity
- [Tide](https://github.com/kubernetes/test-infra/tree/master/prow/cmd/tide): allows us to merge PRs selected via GitHub queries rather than ordered in a queue, allowing for significantly higher merge velocity in tandem with splice
-->
- [Boskos](https://github.com/kubernetes/test-infra/tree/master/boskos): 以资源池的方式管理作业资源（例如 GCP 项目），为作业签出资源并自动清理（[带监控](http://velodrome.k8s.io/dashboard/db/boskos-dashboard?orgId=1)）
- [ghProxy](https://github.com/kubernetes/test-infra/tree/master/ghproxy): 针对 GitHub API 优化的反向代理 HTTP 缓存，确保我们的令牌用量不会触及 API 限制（[带监控](http://velodrome.k8s.io/dashboard/db/github-cache?refresh=1m&orgId=1)）
- [Greenhouse](https://github.com/kubernetes/test-infra/tree/master/greenhouse): 让我们使用远程 bazel 缓存，为 Pull Requests 提供更快的构建和测试结果（[带监控](http://velodrome.k8s.io/dashboard/db/bazel-cache?orgId=1)）
- [Splice](https://github.com/kubernetes/test-infra/tree/master/prow/cmd/splice): 让我们以批量方式测试并合并 Pull Requests，确保合并速度不受限于测试速度
- [Tide](https://github.com/kubernetes/test-infra/tree/master/prow/cmd/tide): 让我们合并通过 GitHub 查询选出的 Pull Requests，而不是按队列顺序合并，与 Splice 配合使用可以显著提高合并速度
<!--
## Scaling Project Health
-->
## 关注项目健康状况
<!--
With workflow automation addressed, we turned our attention to project health. We chose to use Google Cloud Storage (GCS) as our source of truth for all test data, allowing us to lean on established infrastructure, and allowed the community to contribute results. We then built a variety of tools to help individuals and the project as a whole make sense of this data, including:
-->
随着工作流自动化的实施,我们将注意力转向了项目健康。我们选择使用 Google Cloud Storage (GCS)作为所有测试数据的真实来源,允许我们依赖已建立的基础设施,并允许社区贡献结果。然后,我们构建了各种工具来帮助个人和整个项目理解这些数据,包括:
<!--
* [Gubernator](https://github.com/kubernetes/test-infra/tree/master/gubernator): display the results and test history for a given PR
* [Kettle](https://github.com/kubernetes/test-infra/tree/master/kettle): transfer data from GCS to a publicly accessible bigquery dataset
* [PR dashboard](https://k8s-gubernator.appspot.com/pr): a workflow-aware dashboard that allows contributors to understand which PRs require attention and why
* [Triage](https://storage.googleapis.com/k8s-gubernator/triage/index.html): identify common failures that happen across all jobs and tests
* [Testgrid](https://k8s-testgrid.appspot.com/): display test results for a given job across all runs, summarize test results across groups of jobs
-->
* [Gubernator](https://github.com/kubernetes/test-infra/tree/master/gubernator): 显示给定 Pull Request 的结果和测试历史
* [Kettle](https://github.com/kubernetes/test-infra/tree/master/kettle): 将数据从 GCS 传输到可公开访问的 bigquery 数据集
* [PR dashboard](https://k8s-gubernator.appspot.com/pr): 一个感知工作流的仪表板，让贡献者了解哪些 Pull Requests 需要关注以及原因
* [Triage](https://storage.googleapis.com/k8s-gubernator/triage/index.html): 识别所有作业和测试中发生的常见故障
* [Testgrid](https://k8s-testgrid.appspot.com/): 显示所有运行中给定作业的测试结果,汇总各组作业的测试结果
<!--
We approached the Cloud Native Computing Foundation (CNCF) to develop DevStats to glean insights from our GitHub events such as:
-->
我们与云原生计算基金会（CNCF）合作开发了 DevStats，以便从我们的 GitHub 事件中获得洞察，例如：
<!--
* [Which prow commands are people most actively using](https://k8s.devstats.cncf.io/d/5/bot-commands-repository-groups?orgId=1)
* [PR reviews by contributor over time](https://k8s.devstats.cncf.io/d/46/pr-reviews-by-contributor?orgId=1&var-period=d7&var-repo_name=All&var-reviewers=All)
* [Time spent in each phase of our PR workflow](https://k8s.devstats.cncf.io/d/44/pr-time-to-approve-and-merge?orgId=1)
-->
* [Which prow commands are people most actively using](https://k8s.devstats.cncf.io/d/5/bot-commands-repository-groups?orgId=1)
* [PR reviews by contributor over time](https://k8s.devstats.cncf.io/d/46/pr-reviews-by-contributor?orgId=1&var-period=d7&var-repo_name=All&var-reviewers=All)
* [Time spent in each phase of our PR workflow](https://k8s.devstats.cncf.io/d/44/pr-time-to-approve-and-merge?orgId=1)
<!--
## Into the Beyond
-->
## 展望未来
<!--
Today, the Kubernetes project spans over 125 repos across five orgs. There are 31 Special Interests Groups and 10 Working Groups coordinating development within the project. In the last year the project has had [participation from over 13,800 unique developers](https://k8s.devstats.cncf.io/d/13/developer-activity-counts-by-repository-group?orgId=1&var-period_name=Last%20year&var-metric=contributions&var-repogroup_name=All) on GitHub.
-->
今天，Kubernetes 项目已经横跨 5 个组织、超过 125 个仓库。项目内有 31 个特别兴趣小组（SIG）和 10 个工作组协调开发工作。在过去的一年里，该项目在 GitHub 上有[来自 13800 多名独立开发人员的参与](https://k8s.devstats.cncf.io/d/13/developer-activity-counts-by-repository-group?orgId=1&var-period_name=Last%20year&var-metric=contributions&var-repogroup_name=All)。
<!--
On any given weekday our Prow instance [runs over 10,000 CI jobs](http://velodrome.k8s.io/dashboard/db/bigquery-metrics?panelId=10&fullscreen&orgId=1&from=now-6M&to=now); from March 2017 to March 2018 it ran 4.3 million jobs. Most of these jobs involve standing up an entire Kubernetes cluster, and exercising it using real world scenarios. They allow us to ensure all supported releases of Kubernetes work across cloud providers, container engines, and networking plugins. They make sure the latest releases of Kubernetes work with various optional features enabled, upgrade safely, meet performance requirements, and work across architectures.
-->
在任何一个工作日，我们的 Prow 实例都会[运行超过 10,000 个 CI 作业](http://velodrome.k8s.io/dashboard/db/bigquery-metrics?panelId=10&fullscreen&orgId=1&from=now-6M&to=now)；从 2017 年 3 月到 2018 年 3 月，它共运行了 430 万个作业。这些作业大多需要搭建一个完整的 Kubernetes 集群，并用真实场景对其进行验证。它们使我们能够确保所有受支持的 Kubernetes 版本都能在各种云提供商、容器引擎和网络插件上正常工作，也确保最新版本的 Kubernetes 在启用各种可选功能时可以正常工作、安全升级、满足性能要求，并支持多种体系结构。
<!--
With todays [announcement from CNCF](https://www.cncf.io/announcement/2018/08/29/cncf-receives-9-million-cloud-credit-grant-from-google) noting that Google Cloud has begun transferring ownership and management of the Kubernetes projects cloud resources to CNCF community contributors, we are excited to embark on another journey. One that allows the project infrastructure to be owned and operated by the community of contributors, following the same open governance model that has worked for the rest of the project. Sound exciting to you? Come talk to us at #sig-testing on kubernetes.slack.com.
-->
今天[来自 CNCF 的公告](https://www.cncf.io/announcement/2018/08/29/cncf-receives-9-million-cloud-credit-grant-from-google)指出，Google Cloud 已开始将 Kubernetes 项目云资源的所有权和管理权移交给 CNCF 社区贡献者，我们很高兴能就此踏上新的旅程：让项目基础设施由贡献者社区拥有和运营，沿用在项目其他部分行之有效的开放治理模型。听起来让你心动吗？请到 kubernetes.slack.com 上的 #sig-testing 频道与我们交流。
<!--
Want to find out more? Come check out these resources:
-->
想了解更多? 快来看看这些资源:
<!--
* [Prow: Testing the way to Kubernetes Next](https://bentheelder.io/posts/prow)
* [Automation and the Kubernetes Contributor Experience](https://www.youtube.com/watch?v=BsIC7gPkH5M)
-->
* [Prow: Testing the way to Kubernetes Next](https://bentheelder.io/posts/prow)
* [Automation and the Kubernetes Contributor Experience](https://www.youtube.com/watch?v=BsIC7gPkH5M)

View File

@ -0,0 +1,306 @@
---
layout: blog
title: 'KubeDirector：在 Kubernetes 上运行复杂有状态应用程序的简单方法'
date: 2018-10-03
---
<!--
layout: blog
title: 'KubeDirector: The easy way to run complex stateful applications on Kubernetes'
date: 2018-10-03
-->
<!--
**Author**: Thomas Phelan (BlueData)
-->
**作者**：Thomas Phelan（BlueData）
<!--
KubeDirector is an open source project designed to make it easy to run complex stateful scale-out application clusters on Kubernetes. KubeDirector is built using the custom resource definition (CRD) framework and leverages the native Kubernetes API extensions and design philosophy. This enables transparent integration with Kubernetes user/resource management as well as existing clients and tools.
-->
KubeDirector 是一个开源项目，旨在让在 Kubernetes 上运行复杂的有状态、可横向扩展的应用程序集群变得容易。KubeDirector 基于自定义资源定义（CRD）
框架构建，利用了原生的 Kubernetes API 扩展机制和设计哲学。这使其能够与 Kubernetes 的用户/资源管理以及现有的客户端和工具透明集成。
<!--
We recently [introduced the KubeDirector project](https://medium.com/@thomas_phelan/operation-stateful-introducing-bluek8s-and-kubernetes-director-aa204952f619/), as part of a broader open source Kubernetes initiative we call BlueK8s. Im happy to announce that the pre-alpha
code for [KubeDirector](https://github.com/bluek8s/kubedirector/) is now available. And in this blog post, Ill show how it works.
-->
我们最近[介绍了 KubeDirector 项目](https://medium.com/@thomas_phelan/operation-stateful-introducing-bluek8s-and-kubernetes-director-aa204952f619/),作为我们称为 BlueK8s 的更广泛的 Kubernetes 开源项目的一部分。我很高兴地宣布 [KubeDirector](https://github.com/bluek8s/kubedirector/) 的
pre-alpha 代码现在已经可用。在这篇博客文章中,我将展示它是如何工作的。
<!--
KubeDirector provides the following capabilities:
-->
KubeDirector 提供以下功能:
<!--
* The ability to run non-cloud native stateful applications on Kubernetes without modifying the code. In other words, its not necessary to decompose these existing applications to fit a microservices design pattern.
* Native support for preserving application-specific configuration and state.
* An application-agnostic deployment pattern, minimizing the time to onboard new stateful applications to Kubernetes.
-->
* 无需修改代码即可在 Kubernetes 上运行非云原生有状态应用程序。换句话说,不需要分解这些现有的应用程序来适应微服务设计模式。
* 原生支持保留应用程序特有的配置和状态。
* 与应用程序无关的部署模式,最大限度地减少将新的有状态应用程序装载到 Kubernetes 的时间。
<!--
KubeDirector enables data scientists familiar with data-intensive distributed applications such as Hadoop, Spark, Cassandra, TensorFlow, Caffe2, etc. to run these applications on Kubernetes -- with a minimal learning curve and no need to write GO code. The applications controlled by KubeDirector are defined by some basic metadata and an associated package of configuration artifacts. The application metadata is referred to as a KubeDirectorApp resource.
-->
KubeDirector 使熟悉数据密集型分布式应用程序(如 Hadoop、Spark、Cassandra、TensorFlow、Caffe2 等)的数据科学家能够在 Kubernetes 上运行这些应用程序 -- 只需极少的学习曲线,无需编写 GO 代码。由 KubeDirector 控制的应用程序由一些基本元数据和相关的配置工件包定义。应用程序元数据称为 KubeDirectorApp 资源。
<!--
To understand the components of KubeDirector, clone the repository on [GitHub](https://github.com/bluek8s/kubedirector/) using a command similar to:
-->
要了解 KubeDirector 的组件,请使用类似于以下的命令在 [GitHub](https://github.com/bluek8s/kubedirector/) 上克隆存储库:
```
git clone http://<userid>@github.com/bluek8s/kubedirector
```
<!--
The KubeDirectorApp definition for the Spark 2.2.1 application is located
in the file `kubedirector/deploy/example_catalog/cr-app-spark221e2.json`.
-->
Spark 2.2.1 应用程序的 KubeDirectorApp 定义位于文件 `kubedirector/deploy/example_catalog/cr-app-spark221e2.json` 中。
```
~> cat kubedirector/deploy/example_catalog/cr-app-spark221e2.json
{
    "apiVersion": "kubedirector.bluedata.io/v1alpha1",
    "kind": "KubeDirectorApp",
    "metadata": {
        "name" : "spark221e2"
    },
    "spec" : {
        "systemctlMounts": true,
        "config": {
            "node_services": [
                {
                    "service_ids": [
                        "ssh",
                        "spark",
                        "spark_master",
                        "spark_worker"
                    ],
```
<!--
The configuration of an application cluster is referred to as a KubeDirectorCluster resource. The
KubeDirectorCluster definition for a sample Spark 2.2.1 cluster is located in the file
`kubedirector/deploy/example_clusters/cr-cluster-spark221.e1.yaml`.
-->
应用程序集群的配置称为 KubeDirectorCluster 资源。示例 Spark 2.2.1 集群的 KubeDirectorCluster 定义位于文件
`kubedirector/deploy/example_clusters/cr-cluster-spark221.e1.yaml` 中。
```
~> cat kubedirector/deploy/example_clusters/cr-cluster-spark221.e1.yaml
apiVersion: "kubedirector.bluedata.io/v1alpha1"
kind: "KubeDirectorCluster"
metadata:
  name: "spark221e2"
spec:
  app: spark221e2
  roles:
  - name: controller
    replicas: 1
    resources:
      requests:
        memory: "4Gi"
        cpu: "2"
      limits:
        memory: "4Gi"
        cpu: "2"
  - name: worker
    replicas: 2
    resources:
      requests:
        memory: "4Gi"
        cpu: "2"
      limits:
        memory: "4Gi"
        cpu: "2"
  - name: jupyter
```
<!--
## Running Spark on Kubernetes with KubeDirector
-->
## 使用 KubeDirector 在 Kubernetes 上运行 Spark
<!--
With KubeDirector, its easy to run Spark clusters on Kubernetes.
-->
使用 KubeDirector可以轻松在 Kubernetes 上运行 Spark 集群。
<!--
First, verify that Kubernetes (version 1.9 or later) is running, using the command `kubectl version`
-->
首先，使用命令 `kubectl version` 验证 Kubernetes（版本 1.9 或更高）是否正在运行：
```
~> kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
```
<!--
Deploy the KubeDirector service and the example KubeDirectorApp resource definitions with the commands:
-->
使用以下命令部署 KubeDirector 服务和示例 KubeDirectorApp 资源定义:
```
cd kubedirector
make deploy
```
<!--
These will start the KubeDirector pod:
-->
这些命令会启动 KubeDirector pod：
```
~> kubectl get pods
NAME READY STATUS RESTARTS AGE
kubedirector-58cf59869-qd9hb 1/1 Running 0 1m
```
<!--
List the installed KubeDirector applications with `kubectl get KubeDirectorApp`
-->
使用 `kubectl get KubeDirectorApp` 列出已安装的 KubeDirector 应用程序：
```
~> kubectl get KubeDirectorApp
NAME AGE
cassandra311 30m
spark211up 30m
spark221e2 30m
```
<!--
Now you can launch a Spark 2.2.1 cluster using the example KubeDirectorCluster file and the
`kubectl create -f deploy/example_clusters/cr-cluster-spark211up.yaml` command.
Verify that the Spark cluster has been started:
-->
现在,您可以使用示例 KubeDirectorCluster 文件和 `kubectl create -f deploy/example_clusters/cr-cluster-spark211up.yaml` 命令
启动 Spark 2.2.1 集群。验证 Spark 集群已经启动:
```
~> kubectl get pods
NAME READY STATUS RESTARTS AGE
kubedirector-58cf59869-djdwl 1/1 Running 0 19m
spark221e2-controller-zbg4d-0 1/1 Running 0 23m
spark221e2-jupyter-2km7q-0 1/1 Running 0 23m
spark221e2-worker-4gzbz-0 1/1 Running 0 23m
spark221e2-worker-4gzbz-1 1/1 Running 0 23m
```
<!--
The running services now include the Spark services:
-->
现在运行的服务包括 Spark 服务:
```
~> kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubedirector ClusterIP 10.98.234.194 <none> 60000/TCP 1d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d
svc-spark221e2-5tg48 ClusterIP None <none> 8888/TCP 21s
svc-spark221e2-controller-tq8d6-0 NodePort 10.104.181.123 <none> 22:30534/TCP,8080:31533/TCP,7077:32506/TCP,8081:32099/TCP 20s
svc-spark221e2-jupyter-6989v-0 NodePort 10.105.227.249 <none> 22:30632/TCP,8888:30355/TCP 20s
svc-spark221e2-worker-d9892-0 NodePort 10.107.131.165 <none> 22:30358/TCP,8081:32144/TCP 20s
svc-spark221e2-worker-d9892-1 NodePort 10.110.88.221 <none> 22:30294/TCP,8081:31436/TCP 20s
```
<!--
Pointing the browser at port 31533 connects to the Spark Master UI:
-->
将浏览器指向端口 31533，即可连接到 Spark 主节点 UI：
![kubedirector](/images/blog/2018-10-03-kubedirector/kubedirector.png)
<!--
Thats all there is to it!
In fact, in the example above we also deployed a Jupyter notebook along with the Spark cluster.
-->
就是这样!
事实上,在上面的例子中,我们还部署了一个 Jupyter notebook 和 Spark 集群。
<!--
To start another application (e.g. Cassandra), just specify another KubeDirectorApp file:
-->
要启动另一个应用程序（例如 Cassandra），只需指定另一个 KubeDirectorApp 文件：
```
kubectl create -f deploy/example_clusters/cr-cluster-cassandra311.yaml
```
<!--
See the running Cassandra cluster:
-->
查看正在运行的 Cassandra 集群:
```
~> kubectl get pods
NAME READY STATUS RESTARTS AGE
cassandra311-seed-v24r6-0 1/1 Running 0 1m
cassandra311-seed-v24r6-1 1/1 Running 0 1m
cassandra311-worker-rqrhl-0 1/1 Running 0 1m
cassandra311-worker-rqrhl-1 1/1 Running 0 1m
kubedirector-58cf59869-djdwl 1/1 Running 0 1d
spark221e2-controller-tq8d6-0 1/1 Running 0 22m
spark221e2-jupyter-6989v-0 1/1 Running 0 22m
spark221e2-worker-d9892-0 1/1 Running 0 22m
spark221e2-worker-d9892-1 1/1 Running 0 22m
```
<!--
Now you have a Spark cluster (with a Jupyter notebook) and a Cassandra cluster running on Kubernetes.
Use `kubectl get service` to see the set of services.
-->
现在,您有一个 Spark 集群(带有 Jupyter notebook )和一个运行在 Kubernetes 上的 Cassandra 集群。
使用 `kubectl get service` 查看服务集。
```
~> kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubedirector ClusterIP 10.98.234.194 <none> 60000/TCP 1d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d
svc-cassandra311-seed-v24r6-0 NodePort 10.96.94.204 <none> 22:31131/TCP,9042:30739/TCP 3m
svc-cassandra311-seed-v24r6-1 NodePort 10.106.144.52 <none> 22:30373/TCP,9042:32662/TCP 3m
svc-cassandra311-vhh29 ClusterIP None <none> 8888/TCP 3m
svc-cassandra311-worker-rqrhl-0 NodePort 10.109.61.194 <none> 22:31832/TCP,9042:31962/TCP 3m
svc-cassandra311-worker-rqrhl-1 NodePort 10.97.147.131 <none> 22:31454/TCP,9042:31170/TCP 3m
svc-spark221e2-5tg48 ClusterIP None <none> 8888/TCP 24m
svc-spark221e2-controller-tq8d6-0 NodePort 10.104.181.123 <none> 22:30534/TCP,8080:31533/TCP,7077:32506/TCP,8081:32099/TCP 24m
svc-spark221e2-jupyter-6989v-0 NodePort 10.105.227.249 <none> 22:30632/TCP,8888:30355/TCP 24m
svc-spark221e2-worker-d9892-0 NodePort 10.107.131.165 <none> 22:30358/TCP,8081:32144/TCP 24m
svc-spark221e2-worker-d9892-1 NodePort 10.110.88.221 <none> 22:30294/TCP,8081:31436/TCP 24m
```
<!--
## Get Involved
-->
## 参与其中
<!--
KubeDirector is a fully open source, Apache v2 licensed, project the first of multiple open source projects within a broader initiative we call BlueK8s.
The pre-alpha code for KubeDirector has just been released and we would love for you to join the growing community of developers, contributors, and adopters.
Follow [@BlueK8s](https://twitter.com/BlueK8s/) on Twitter and get involved through these channels:
-->
KubeDirector 是一个完全开源、以 Apache v2 许可发布的项目，是我们称为 BlueK8s 的更广泛计划中多个开源项目里的第一个。
KubeDirector 的 pre-alpha 代码刚刚发布,我们希望您加入到不断增长的开发人员、贡献者和使用者社区。
在 Twitter 上关注 [@BlueK8s](https://twitter.com/BlueK8s/),并通过以下渠道参与:
<!--
* KubeDirector [chat room on Slack](https://join.slack.com/t/bluek8s/shared_invite/enQtNDUwMzkwODY5OTM4LTRhYmRmZmE4YzY3OGUzMjA1NDg0MDVhNDQ2MGNkYjRhM2RlMDNjMTI1NDQyMjAzZGVlMDFkNThkNGFjZGZjMGY/)
* KubeDirector [GitHub repo](https://github.com/bluek8s/kubedirector/)
-->
* KubeDirector [Slack 聊天室](https://join.slack.com/t/bluek8s/shared_invite/enQtNDUwMzkwODY5OTM4LTRhYmRmZmE4YzY3OGUzMjA1NDg0MDVhNDQ2MGNkYjRhM2RlMDNjMTI1NDQyMjAzZGVlMDFkNThkNGFjZGZjMGY/)
* KubeDirector [GitHub 仓库](https://github.com/bluek8s/kubedirector/)

View File

@ -0,0 +1,268 @@
---
layout: blog
title: 'Kubernetes 中的拓扑感知数据卷供应'
date: 2018-10-11
---
<!--
---
layout: blog
title: 'Topology-Aware Volume Provisioning in Kubernetes'
date: 2018-10-11
---
-->
<!--
**Author**: Michelle Au (Google)
-->
**作者**：Michelle Au（谷歌）
<!--
The multi-zone cluster experience with persistent volumes is improving in Kubernetes 1.12 with the topology-aware dynamic provisioning beta feature. This feature allows Kubernetes to make intelligent decisions when dynamically provisioning volumes by getting scheduler input on the best place to provision a volume for a pod. In multi-zone clusters, this means that volumes will get provisioned in an appropriate zone that can run your pod, allowing you to easily deploy and scale your stateful workloads across failure domains to provide high availability and fault tolerance.
-->
借助拓扑感知动态卷供应这一 beta 功能，使用持久卷的多区域集群体验在 Kubernetes 1.12 中得到了改进。此功能让 Kubernetes 在动态供应卷时能够做出明智的决策：通过调度器的输入确定为 Pod 供应卷的最佳位置。在多区域集群中，这意味着卷会被供应在能够运行你的 Pod 的合适区域中，让你可以轻松地跨故障域部署和扩展有状态工作负载，从而提供高可用性和容错能力。
<!--
## Previous challenges
-->
## 以前的挑战
<!--
Before this feature, running stateful workloads with zonal persistent disks (such as AWS ElasticBlockStore, Azure Disk, GCE PersistentDisk) in multi-zone clusters had many challenges. Dynamic provisioning was handled independently from pod scheduling, which meant that as soon as you created a PersistentVolumeClaim (PVC), a volume would get provisioned. This meant that the provisioner had no knowledge of what pods were using the volume, and any pod constraints it had that could impact scheduling.
-->
在此功能出现之前，在多区域集群中使用分区的持久磁盘（例如 AWS ElasticBlockStore、Azure Disk、GCE PersistentDisk）运行有状态工作负载存在许多挑战。动态供应独立于 Pod 调度进行处理，这意味着只要你创建了一个 PersistentVolumeClaim（PVC），就会有一个卷被供应出来。这也意味着供应程序并不知道哪些 Pod 将使用该卷，也不知道这些 Pod 上可能影响调度的各种约束。
<!--
This resulted in unschedulable pods because volumes were provisioned in zones that:
-->
这会导致 Pod 无法调度，因为卷被供应在了这样的区域中：
<!--
* did not have enough CPU or memory resources to run the pod
* conflicted with node selectors, pod affinity or anti-affinity policies
* could not run the pod due to taints
-->
* 没有足够的 CPU 或内存资源来运行 Pod
* 与节点选择器、Pod 亲和或反亲和策略冲突
* 由于污点（taint）而不能运行 Pod
<!--
Another common issue was that a non-StatefulSet pod using multiple persistent volumes could have each volume provisioned in a different zone, again resulting in an unschedulable pod.
-->
另一个常见问题是，使用多个持久卷的非 StatefulSet Pod 可能会让每个卷被供应在不同的区域中，同样导致 Pod 无法调度。
<!--
Suboptimal workarounds included overprovisioning of nodes, or manual creation of volumes in the correct zones, making it difficult to dynamically deploy and scale stateful workloads.
-->
次优的解决方法包括节点超配,或在正确的区域中手动创建卷,但这会造成难以动态部署和扩展有状态工作负载的问题。
<!--
The topology-aware dynamic provisioning feature addresses all of the above issues.
-->
拓扑感知动态供应功能解决了上述所有问题。
<!--
## Supported Volume Types
-->
## 支持的卷类型
<!--
In 1.12, the following drivers support topology-aware dynamic provisioning:
-->
在 1.12 中,以下驱动程序支持拓扑感知动态供应:
<!--
* AWS EBS
* Azure Disk
* GCE PD (including Regional PD)
* CSI (alpha) - currently only the GCE PD CSI driver has implemented topology support
-->
* AWS EBS
* Azure Disk
* GCE PD（包括 Regional PD）
* CSI（alpha）- 目前只有 GCE PD CSI 驱动实现了拓扑支持
<!--
## Design Principles
-->
## 设计原则
<!--
While the initial set of supported plugins are all zonal-based, we designed this feature to adhere to the Kubernetes principle of portability across environments. Topology specification is generalized and uses a similar label-based specification like in Pod nodeSelectors and nodeAffinity. This mechanism allows you to define your own topology boundaries, such as racks in on-premise clusters, without requiring modifications to the scheduler to understand these custom topologies.
-->
虽然最初支持的插件集都是基于区域的,但我们设计此功能时遵循 Kubernetes 跨环境可移植性的原则。
拓扑规范是通用的,并使用类似于基于标签的规范,如 Pod nodeSelectors 和 nodeAffinity。
该机制允许您定义自己的拓扑边界,例如内部部署集群中的机架,而无需修改调度程序以了解这些自定义拓扑。
<!--
In addition, the topology information is abstracted away from the pod specification, so a pod does not need knowledge of the underlying storage systems topology characteristics. This means that you can use the same pod specification across multiple clusters, environments, and storage systems.
-->
此外,拓扑信息是从 Pod 规范中抽象出来的,因此 Pod 不需要了解底层存储系统的拓扑特征。
这意味着您可以在多个集群、环境和存储系统中使用相同的 Pod 规范。
<!--
## Getting Started
-->
## 入门
<!--
To enable this feature, all you need to do is to create a StorageClass with `volumeBindingMode` set to `WaitForFirstConsumer`:
-->
要启用此功能,您需要做的就是创建一个将 `volumeBindingMode` 设置为 `WaitForFirstConsumer` 的 StorageClass
```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: topology-aware-standard
provisioner: kubernetes.io/gce-pd
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: pd-standard
```
<!--
This new setting instructs the volume provisioner to not create a volume immediately, and instead, wait for a pod using an associated PVC to run through scheduling. Note that previous StorageClass `zone` and `zones` parameters do not need to be specified anymore, as pod policies now drive the decision of which zone to provision a volume in.
-->
这个新设置表明卷配置器不立即创建卷,而是等待使用关联的 PVC 的 Pod 通过调度运行。
请注意,不再需要指定以前的 StorageClass `zone``zones` 参数,因为现在在哪个区域中配置卷由 Pod 策略决定。
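下面是一个仅作示意的假设示例（PVC 名称 `example-pvc` 为虚构），展示了一个引用上述 StorageClass 的独立 PVC；在有使用它的 Pod 完成调度之前，对应的卷不会被供应：
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: topology-aware-standard
  resources:
    requests:
      storage: 10Gi
```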
<!--
Next, create a pod and PVC with this StorageClass. This sequence is the same as before, but with a different StorageClass specified in the PVC. The following is a hypothetical example, demonstrating the capabilities of the new feature by specifying many pod constraints and scheduling policies:
-->
接下来,使用此 StorageClass 创建一个 Pod 和 PVC。
此过程与之前相同,但在 PVC 中指定了不同的 StorageClass。
以下是一个假设示例,通过指定许多 Pod 约束和调度策略来演示新功能特性:
<!--
* multiple PVCs in a pod
* nodeAffinity across a subset of zones
* pod anti-affinity on zones
-->
* 一个 Pod 中使用多个 PVC
* 节点亲和性限定在部分可用区
* 基于可用区的 Pod 反亲和性
```
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: failure-domain.beta.kubernetes.io/zone
                operator: In
                values:
                - us-central1-a
                - us-central1-f
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: failure-domain.beta.kubernetes.io/zone
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
        - name: logs
          mountPath: /logs
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: topology-aware-standard
      resources:
        requests:
          storage: 10Gi
  - metadata:
      name: logs
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: topology-aware-standard
      resources:
        requests:
          storage: 1Gi
```
<!--
Afterwards, you can see that the volumes were provisioned in zones according to the policies set by the pod:
-->
之后,您可以看到根据 Pod 设置的策略在区域中配置卷:
```
$ kubectl get pv -o=jsonpath='{range .items[*]}{.spec.claimRef.name}{"\t"}{.metadata.labels.failure\-domain\.beta\.kubernetes\.io/zone}{"\n"}{end}'
www-web-0 us-central1-f
logs-web-0 us-central1-f
www-web-1 us-central1-a
logs-web-1 us-central1-a
```
<!--
## How can I learn more?
-->
## 我怎样才能了解更多?
<!--
Official documentation on the topology-aware dynamic provisioning feature is available here:https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode
-->
有关拓扑感知动态供应功能的官方文档可在此处获取：https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode
<!--
Documentation for CSI drivers is available at https://kubernetes-csi.github.io/docs/
-->
有关 CSI 驱动程序的文档请访问：https://kubernetes-csi.github.io/docs/
<!--
## Whats next?
-->
## 下一步是什么?
<!--
We are actively working on improving this feature to support:
-->
我们正积极致力于改进此功能以支持:
<!--
* more volume types, including dynamic provisioning for local volumes
* dynamic volume attachable count and capacity limits per node
-->
* 更多卷类型,包括本地卷的动态供应
* 每个节点上动态的可挂接卷数量与容量限制
<!--
## How do I get involved?
-->
## 我如何参与?
<!--
If you have feedback for this feature or are interested in getting involved with the design and development, join the [Kubernetes Storage Special-Interest-Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG). Were rapidly growing and always welcome new contributors.
-->
如果您对此功能有反馈意见或有兴趣参与设计和开发,请加入 [Kubernetes 存储特别兴趣小组](https://github.com/kubernetes/community/tree/master/sig-storage)SIG。我们正在快速成长并始终欢迎新的贡献者。
<!--
Special thanks to all the contributors that helped bring this feature to beta, including Cheng Xing ([verult](https://github.com/verult)), Chuqiang Li ([lichuqiang](https://github.com/lichuqiang)), David Zhu ([davidz627](https://github.com/davidz627)), Deep Debroy ([ddebroy](https://github.com/ddebroy)), Jan Šafránek ([jsafrane](https://github.com/jsafrane)), Jordan Liggitt ([liggitt](https://github.com/liggitt)), Michelle Au ([msau42](https://github.com/msau42)), Pengfei Ni ([feiskyer](https://github.com/feiskyer)), Saad Ali ([saad-ali](https://github.com/saad-ali)), Tim Hockin ([thockin](https://github.com/thockin)), and Yecheng Fu ([cofyc](https://github.com/cofyc)).
-->
特别感谢帮助推出此功能的所有贡献者,包括 Cheng Xing ([verult](https://github.com/verult))、Chuqiang Li ([lichuqiang](https://github.com/lichuqiang))、David Zhu ([davidz627](https://github.com/davidz627))、Deep Debroy ([ddebroy](https://github.com/ddebroy))、Jan Šafránek ([jsafrane](https://github.com/jsafrane))、Jordan Liggitt ([liggitt](https://github.com/liggitt))、Michelle Au ([msau42](https://github.com/msau42))、Pengfei Ni ([feiskyer](https://github.com/feiskyer))、Saad Ali ([saad-ali](https://github.com/saad-ali))、Tim Hockin ([thockin](https://github.com/thockin)),以及 Yecheng Fu ([cofyc](https://github.com/cofyc))。

View File

@ -0,0 +1,73 @@
---
layout: blog
title: '2018 年督导委员会选举结果'
date: 2018-10-15
---
<!--
---
layout: blog
title: '2018 Steering Committee Election Results'
date: 2018-10-15
---
-->
<!-- **Authors**: Jorge Castro (Heptio), Ihor Dvoretskyi (CNCF), Paris Pittman (Google) -->
**作者**: Jorge Castro (Heptio), Ihor Dvoretskyi (CNCF), Paris Pittman (Google)
<!--
## Results
-->
## 结果
<!--
The [Kubernetes Steering Committee Election](https://kubernetes.io/blog/2018/09/06/2018-steering-committee-election-cycle-kicks-off/) is now complete and the following candidates came ahead to secure two year terms that start immediately:
-->
[Kubernetes 督导委员会选举](https://kubernetes.io/blog/2018/09/06/2018-steering-committee-election-cycle-kicks-off/)现已完成,以下候选人获得了立即开始的两年任期:
* Aaron Crickenberger, Google, [@spiffxp](https://github.com/spiffxp)
* Davanum Srinivas, Huawei, [@dims](https://github.com/dims)
* Tim St. Clair, Heptio, [@timothysc](https://github.com/timothysc)
<!--
## Big Thanks!
-->
## 十分感谢!
<!--
* Steering Committee Member Emeritus [Quinton Hoole](https://github.com/quinton-hoole) for his service to the community over the past year. We look forward to
* The candidates that came forward to run for election. May we always have a strong set of people who want to push community forward like yours in every election.
* All 307 voters who cast a ballot.
* And last but not least...Cornell University for hosting [CIVS](https://civs.cs.cornell.edu/)!
-->
* 督导委员会荣誉退休成员 [Quinton Hoole](https://github.com/quinton-hoole),表扬他在过去一年为社区所作的贡献。我们期待着
* 参加竞选的候选人。愿我们永远拥有一群强大的人,他们希望在每一次选举中都能像你们一样推动社区向前发展。
* 共计 307 名选民参与投票。
* 最后同样重要的是，感谢康奈尔大学托管 [CIVS](https://civs.cs.cornell.edu/)！
<!--
## Get Involved with the Steering Committee
-->
## 加入督导委员会
<!--
You can follow along to Steering Committee [backlog items](https://git.k8s.io/steering/backlog.md) and weigh in by filing an issue or creating a PR against their [repo](https://github.com/kubernetes/steering). They meet bi-weekly on [Wednesdays at 8pm UTC](https://github.com/kubernetes/steering) and regularly attend Meet Our Contributors.
-->
你可以关注督导委员会的[任务清单](https://git.k8s.io/steering/backlog.md),并通过向他们的[代码仓库](https://github.com/kubernetes/steering)提交 issue 或 PR 的方式来参与。他们也会在[UTC 时间每周三晚 8 点](https://github.com/kubernetes/steering)举行会议,并定期与我们的贡献者见面。
<!--
Steering Committee Meetings:
-->
督导委员会会议:
* [YouTube 播放列表](https://www.youtube.com/playlist?list=PL69nYSiGNLP1yP1B_nd9-drjoxp0Q14qM)
<!--
Meet Our Contributors Steering AMAs:
-->
“与贡献者见面”活动中的督导委员会 AMA：
<!--
* [Oct 3 2018](https://youtu.be/x6Jm8p0K-IQ)
* [Sept 5 2018](https://youtu.be/UbxWV12Or58)
-->
* [2018 年 10 月 3 日](https://youtu.be/x6Jm8p0K-IQ)
* [2018 年 9 月 5 日](https://youtu.be/UbxWV12Or58)

View File

@ -0,0 +1,118 @@
---
layout: "Blog"
title: "Kubernetes 2018 年北美贡献者峰会"
date: 2018-10-16
---
<!--
---
layout: "Blog"
title: "Kubernetes 2018 North American Contributor Summit"
date: 2018-10-16
---
-->
<!--
**Authors:**
-->
**作者:**
<!--
[Bob Killen][bob] (University of Michigan)
[Sahdev Zala][sahdev] (IBM),
[Ihor Dvoretskyi][ihor] (CNCF)
-->
[Bob Killen][bob](密歇根大学)
[Sahdev Zala][sahdev]（IBM），
[Ihor Dvoretskyi][ihor]（CNCF）
<!--
The 2018 North American Kubernetes Contributor Summit to be hosted right before
[KubeCon + CloudNativeCon][kubecon] Seattle is shaping up to be the largest yet.
-->
2018 年北美 Kubernetes 贡献者峰会将在西雅图 [KubeCon + CloudNativeCon][kubecon] 会议之前举办,这将是迄今为止规模最大的一次盛会。
<!--
It is an event that brings together new and current contributors alike to
connect and share face-to-face; and serves as an opportunity for existing
contributors to help shape the future of community development. For new
community members, it offers a welcoming space to learn, explore and put the
contributor workflow to practice.
-->
这是一个将新老贡献者聚集在一起,面对面交流和分享的活动;并为现有的贡献者提供一个机会,帮助塑造社区发展的未来。它为新的社区成员提供了一个学习、探索和实践贡献工作流程的良好空间。
<!--
Unlike previous Contributor Summits, the event now spans two-days with a more
relaxed hallway track and general Contributor get-together to be hosted from
5-8pm on Sunday December 9th at the [Garage Lounge and Gaming Hall][garage], just
a short walk away from the Convention Center. There, contributors can enjoy
billiards, bowling, trivia and more; accompanied by a variety of food and drink.
-->
与以往的贡献者峰会不同，本次活动为期两天，安排了更轻松的走廊交流环节；12 月 9 日（周日）下午 5 点至 8 点，还会在距离会议中心仅几步之遥的 [Garage Lounge and Gaming Hall][garage] 举行面向所有贡献者的聚会。在那里，贡献者可以玩台球、保龄球、知识问答等，还有各种食品和饮料。
<!--
Things pick up the following day, Monday the 10th with three separate tracks:
-->
接下来的一天,也就是 10 号星期一,有三个独立的会议你可以选择参与:
<!--
### New Contributor Workshop:
A half day workshop aimed at getting new and first time contributors onboarded
and comfortable with working within the Kubernetes Community. Staying for the
duration is required; this is not a workshop you can drop into.
-->
### 新贡献者研讨会:
为期半天的研讨会旨在让新贡献者加入社区,并营造一个良好的 Kubernetes 社区工作环境。
请在开会期间保持在场,该讨论会不允许随意进出。
<!--
### Current Contributor Track:
Reserved for those that are actively engaged with the development of the
project; the Current Contributor Track includes Talks, Workshops, Birds of a
Feather, Unconferences, Steering Committee Sessions, and more! Keep an eye on
the [schedule in GitHub][schedule] as content is frequently being updated.
-->
### 当前贡献者追踪:
保留给那些积极参与项目开发的贡献者目前的贡献者追踪包括讲座、研讨会、聚会、Unconferences 会议、指导委员会会议等等!
请留意 [GitHub 中的时间表][时间表],因为内容经常更新。
<!--
### Docs Sprint:
SIG-Docs will have a curated list of issues and challenges to be tackled closer
to the event date.
-->
### Docs 冲刺:
SIG-Docs 将在活动日期临近的时候列出一个需要处理的问题和挑战列表。
<!--
## To Register:
To register for the Contributor Summit, see the [Registration section of the
Event Details in GitHub][register]. Please note that registrations are being
reviewed. If you select the “Current Contributor Track” and are not an active
contributor, you will be asked to attend the New Contributor Workshop, or asked
to be put on a waitlist. With thousands of contributors and only 300 spots, we
need to make sure the right folks are in the room.
-->
## 注册:
要注册贡献者峰会，请参阅 GitHub 上[活动详情中的注册部分][注册]。请注意，报名信息会经过审核。
如果您选择了“当前贡献者追踪”，而您并不是一个活跃的贡献者，您将被要求改为参加新贡献者研讨会，或者进入候补名单。
贡献者有数千人，而席位只有 300 个，我们需要确保合适的人出现在会场。
<!--
If you have any questions or concerns, please dont hesitate to reach out to
the Contributor Summit Events Team at community@kubernetes.io.
-->
如果您有任何问题或疑虑,请随时通过 community@kubernetes.io 联系贡献者峰会组织团队。
<!--
Look forward to seeing everyone there!
-->
期待在那里看到每个人!
[bob]: https://twitter.com/mrbobbytables
[sahdev]: https://twitter.com/sp_zala
[ihor]: https://twitter.com/idvoretskyi
[kubecon]: https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2018/
[garage]: https://www.garagebilliards.com/
[时间表]: https://git.k8s.io/community/events/2018/12-contributor-summit#agenda
[注册]: https://git.k8s.io/community/events/2018/12-contributor-summit#registration

View File

@ -0,0 +1,89 @@
---
layout: blog
title: 'Kubernetes 文档更新,国际版'
date: 2018-11-08
---
<!--
---
layout: blog
title: 'Kubernetes Docs Updates, International Edition'
date: 2018-11-08
---
-->
<!-- **Author**: Zach Corleissen (Linux Foundation) -->
**作者**：Zach Corleissen（Linux 基金会）
<!-- As a co-chair of SIG Docs, I'm excited to share that Kubernetes docs have a fully mature workflow for localization (l10n). -->
作为文档特别兴趣小组SIG Docs的联合主席我很高兴能与大家分享 Kubernetes 文档在本地化l10n方面所拥有的一个完全成熟的工作流。
<!-- ## Abbreviations galore -->
## 丰富的缩写
<!-- L10n is an abbreviation for _localization_. -->
L10n 是 _localization_ 的缩写。
<!-- I18n is an abbreviation for _internationalization_. -->
I18n 是 _internationalization_ 的缩写。
<!-- I18n is [what you do](https://www.w3.org/International/questions/qa-i18n) to make l10n easier. L10n is a fuller, more comprehensive process than translation (_t9n_). -->
I18n 定义了[做什么](https://www.w3.org/International/questions/qa-i18n) 能让 l10n 更容易。而 L10n 更全面,相比翻译( _t9n_ )具备更完善的流程。
<!-- ## Why localization matters -->
## 为什么本地化很重要
<!-- The goal of SIG Docs is to make Kubernetes easier to use for as many people as possible. -->
SIG Docs 的目标是让 Kubernetes 更容易为尽可能多的人使用。
<!-- One year ago, we looked at whether it was possible to host the output of a Chinese team working independently to translate the Kubernetes docs. After many conversations (including experts on OpenStack l10n), [much transformation](https://kubernetes.io/blog/2018/05/05/hugo-migration/), and [renewed commitment to easier localization](https://github.com/kubernetes/website/pull/10485), we realized that open source documentation is, like open source software, an ongoing exercise at the edges of what's possible. -->
一年前,我们研究了是否有可能由一个独立翻译 Kubernetes 文档的中国团队来主持文档输出。经过多次交谈(包括 OpenStack l10n 的专家),[多次转变](https://kubernetes.io/blog/2018/05/05/hugo-migration/),以及[重新致力于更轻松的本地化](https://github.com/kubernetes/website/pull/10485),我们意识到,开源文档就像开源软件一样,是在可能的边缘不断进行实践。
<!-- Consolidating workflows, language labels, and team-level ownership may seem like simple improvements, but these features make l10n scalable for increasing numbers of l10n teams. While SIG Docs continues to iterate improvements, we've paid off a significant amount of technical debt and streamlined l10n in a single workflow. That's great for the future as well as the present. -->
整合工作流程、语言标签和团队级所有权可能看起来像是十分简单的改进,但是这些功能使 l10n 可以扩展到规模越来越大的 l10n 团队。随着 SIG Docs 不断改进,我们已经在单一工作流程中偿还了大量技术债务并简化了 l10n。这对未来和现在都很有益。
<!-- ## Consolidated workflow -->
## 整合的工作流程
<!-- Localization is now consolidated in the [kubernetes/website](https://github.com/kubernetes/website) repository. We've configured the Kubernetes CI/CD system, [Prow](https://github.com/kubernetes/test-infra/tree/master/prow), to handle automatic language label assignment as well as team-level PR review and approval. -->
现在,本地化已整合到 [kubernetes/website](https://github.com/kubernetes/website) 存储库。我们已经配置了 Kubernetes CI/CD 系统,[Prow](https://github.com/kubernetes/test-infra/tree/master/prow) 来处理自动语言标签分配以及团队级 PR 审查和批准。
<!-- ### Language labels -->
### 语言标签
<!-- Prow automatically applies language labels based on file path. Thanks to SIG Docs contributor [June Yi](https://github.com/kubernetes/test-infra/pull/9835), folks can also manually assign language labels in pull request (PR) comments. For example, when left as a comment on an issue or PR, this command assigns the label `language/ko` (Korean). -->
Prow 根据文件路径自动添加语言标签。感谢 SIG Docs 贡献者 [June Yi](https://github.com/kubernetes/test-infra/pull/9835),他让人们还可以在 pull requestPR注释中手动分配语言标签。例如当为 issue 或 PR 留下下述注释时,将为之分配标签 `language/ko`Korean
```
/language ko
```
<!-- These repo labels let reviewers filter for PRs and issues by language. For example, you can now filter the k/website dashboard for [PRs with Chinese content](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+label%3Alanguage%2Fzh). -->
这些存储库标签允许审阅者按语言过滤 PR 和 issue。例如您现在可以过滤 kubernetes/website 面板中[具有中文内容的 PR](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+label%3Alanguage%2Fzh)。
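例如，下面这个查询（仅作示意）与上述链接中的过滤条件等价，可以直接粘贴到 GitHub 的 issue/PR 搜索框中：
```
is:open is:pr label:language/zh
```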
<!-- ### Team review -->
### 团队审核
<!-- L10n teams can now review and approve their own PRs. For example, review and approval permissions for English are [assigned in an OWNERS file](https://github.com/kubernetes/website/blob/master/content/en/OWNERS) in the top subfolder for English content. -->
L10n 团队现在可以审查和批准他们自己的 PR。例如英语的审核和批准权限在位于用于显示英语内容的顶级子文件夹中的 [OWNERS 文件中指定](https://github.com/kubernetes/website/blob/master/content/en/OWNERS)。
<!-- Adding `OWNERS` files to subdirectories lets localization teams review and approve changes without requiring a rubber stamp approval from reviewers who may lack fluency. -->
`OWNERS` 文件添加到子目录可以让本地化团队审查和批准更改,而无需由可能并不擅长该门语言的审阅者进行批准。
<!-- ## What's next -->
## 下一步是什么
<!-- We're looking forward to the [doc sprint in Shanghai](https://kccncchina2018english.sched.com/event/HVb2/contributor-summit-doc-sprint-additional-registration-required) to serve as a resource for the Chinese l10n team. -->
我们期待着[上海的 doc sprint](https://kccncchina2018english.sched.com/event/HVb2/contributor-summit-doc-sprint-additional-registration-required) 能作为中国 l10n 团队的资源。
<!-- We're excited to continue supporting the Japanese and Korean l10n teams, who are making excellent progress. -->
我们很高兴继续支持正在取得良好进展的日本和韩国 l10n 队伍。
<!-- If you're interested in localizing Kubernetes for your own language or region, check out our [guide to localizing Kubernetes docs](https://kubernetes.io/docs/contribute/localization/) and reach out to a [SIG Docs chair](https://github.com/kubernetes/community/tree/master/sig-docs#leadership) for support. -->
如果您有兴趣将 Kubernetes 本地化为您自己的语言或地区,请查看我们的[本地化 Kubernetes 文档指南](https://kubernetes.io/docs/contribute/localization/),并联系 [SIG Docs 主席团](https://github.com/kubernetes/community/tree/master/sig-docs#leadership)获取支持。
<!-- ### Get involved with SIG Docs -->
### 加入 SIG Docs
<!-- If you're interested in Kubernetes documentation, come to a SIG Docs [weekly meeting](https://github.com/kubernetes/community/tree/master/sig-docs#meetings), or join [#sig-docs in Kubernetes Slack](https://kubernetes.slack.com/messages/C1J0BPD2M/details/). -->
如果您对 Kubernetes 文档感兴趣,请参加 SIG Docs [每周会议](https://github.com/kubernetes/community/tree/master/sig-docs#meetings),或在 [Kubernetes Slack 加入 #sig-docs](https://kubernetes.slack.com/messages/C1J0BPD2M/details/)。

View File

@ -3,8 +3,8 @@ layout: blog
title: '新贡献者工作坊上海站'
date: 2018-12-05
---
<!--
<!--
---
layout: blog
title: 'New Contributor Workshop Shanghai'

View File

@ -1,23 +1,21 @@
---
title: 案例研究
linkTitle: 案例研究
bigheader: Kubernetes 用户案例研究
abstract: 在生产环境中运行 Kubernetes 的用户集合。
layout: basic
class: gridPage
cid: caseStudies
---
<!--
---
title: Case Studies
linkTitle: Case Studies
bigheader: Kubernetes User Case Studies
abstract: A collection of users running Kubernetes in production.
layout: basic
class: gridPage
cid: caseStudies
---
-->
---
title: 案例分析
linkTitle: 案例分析
bigheader: Kubernetes 用户案例分析
abstract: 在生产环境下使用 Kubernetes 的案例集
layout: basic
class: gridPage
cid: caseStudies
---
<!--
---
title: Case Studies
linkTitle: Case Studies
bigheader: Kubernetes User Case Studies
abstract: A collection of users running Kubernetes in production.
layout: basic
class: gridPage
cid: caseStudies
---
-->


View File

@ -0,0 +1,118 @@
---
title: Adform Case Study
linkTitle: Adform
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
logo: adform_featured_logo.png
draft: false
featured: true
weight: 47
quote: >
Kubernetes enabled the self-healing and immutable infrastructure. We can do faster releases, so our developers are really happy. They can ship our features faster than before, and that makes our clients happier.
---
<div class="banner1 desktop" style="background-image: url('/images/CaseStudy_adform_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/adform_logo.png" style="width:15%;margin-bottom:0%" class="header_logo"><br> <div class="subhead">Improving Performance and Morale with Cloud Native
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>AdForm</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Copenhagen, Denmark</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Adtech</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1">
<h2>Challenge</h2>
<a href="https://site.adform.com/">Adform's</a> mission is to provide a secure and transparent full stack of advertising technology to enable digital ads across devices. The company has a large infrastructure: <a href="https://www.openstack.org/">OpenStack</a>-based private clouds running on 1,100 physical servers in 7 data centers around the world, 3 of which were opened in the past year. With the company's growth, the infrastructure team felt that "our private cloud was not really flexible enough," says IT System Engineer Edgaras Apšega. "The biggest pain point is that our developers need to maintain their virtual machines, so rolling out technology and new software takes time. We were really struggling with our releases, and we didn't have self-healing infrastructure."
<br>
<h2>Solution</h2>
The team, which had already been using <a href="https://prometheus.io/">Prometheus</a> for monitoring, embraced <a href="https://kubernetes.io/">Kubernetes</a> and cloud native practices in 2017. "To start our Kubernetes journey, we had to adapt all our software, so we had to choose newer frameworks," says Apšega. "We also adopted the microservices way, so observability is much better because you can inspect the bug or the services separately."
</div>
<div class="col2">
<h2>Impact</h2>
"Kubernetes helps our business a lot because our features are coming to market faster," says Apšega. The release process went from several hours to several minutes. Autoscaling has been at least 6 times faster than the semi-manual VM bootstrapping and application deployment required before. The team estimates that the company has experienced cost savings of 4-5x due to less hardware and fewer man hours needed to set up the hardware and virtual machines, metrics, and logging. Utilization of the hardware resources has been reduced as well, with containers notching 2-3 times more efficiency over virtual machines. "The deployments are very easy because developers just push the code and it automatically appears on Kubernetes," says Apšega. Prometheus has also had a positive impact: "It provides high availability for metrics and alerting. We monitor everything starting from hardware to applications. Having all the metrics in <a href="https://grafana.com/">Grafana</a> dashboards provides great insight on your systems."
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"Kubernetes enabled the self-healing and immutable infrastructure. We can do faster releases, so our developers are really happy. They can ship our features faster than before, and that makes our clients happier."<span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase;line-height:14px"><br><br>— Edgaras Apšega, IT Systems Engineer, Adform</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>Adform made <a href="https://www.wsj.com/articles/fake-ad-operation-used-to-steal-from-publishers-is-uncovered-1511290981">headlines</a> last year when it detected the HyphBot ad fraud network that was costing some businesses hundreds of thousands of dollars a day.</h2>With its mission to provide a secure and transparent full stack of advertising technology to enable an open internet, Adform published a <a href="https://site.adform.com/media/85132/hyphbot_whitepaper_.pdf">white paper</a> revealing what it did—and others could too—to limit customers' exposure to the scam. <br><br>
In that same spirit, Adform is sharing its cloud native journey. "When you see that everyone shares their best practices, it inspires you to contribute back to the project," says IT Systems Engineer Edgaras Apšega.<br><br>
The company has a large infrastructure: <a href="https://www.openstack.org/">OpenStack</a>-based private clouds running on 1,100 physical servers in their own seven data centers around the world, three of which were opened in the past year. With the company's growth, the infrastructure team felt that "our private cloud was not really flexible enough," says Apšega. "The biggest pain point is that our developers need to maintain their virtual machines, so rolling out technology and new software really takes time. We were really struggling with our releases, and we didn't have self-healing infrastructure."
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_adform_banner3.jpg')">
<div class="banner3text">
"The fact that Cloud Native Computing Foundation incubated Kubernetes was a really big point for us because it was vendor neutral. And we can see that a community really gathers around it. Everyone shares their experiences, their knowledge, and the fact that it's open source, you can contribute."<span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase;line-height:14px"><br><br>— Edgaras Apšega, IT Systems Engineer, Adform</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
The team, which had already been using Prometheus for monitoring, embraced Kubernetes, microservices, and cloud native practices. "The fact that Cloud Native Computing Foundation incubated Kubernetes was a really big point for us because it was vendor neutral," says Apšega. "And we can see that a community really gathers around it."<br><br>
A proof of concept project was started, with a Kubernetes cluster running on bare metal in the data center. When developers saw how quickly containers could be spun up compared to the virtual machine process, "they wanted to ship their containers in production right away, and we were still doing proof of concept," says IT Systems Engineer Andrius Cibulskis.
Of course, a lot of work still had to be done. "First of all, we had to learn Kubernetes, see all of the moving parts, how they glue together," says Apšega. "Second of all, the whole CI/CD part had to be redone, and our DevOps team had to invest more man hours to implement it. And third is that developers had to rewrite the code, and they're still doing it."
<br><br>
The first production cluster was launched in the spring of 2018, and is now up to 20 physical machines dedicated for pods throughout three data centers, with plans for separate clusters in the other four data centers. The user-facing Adform application platform, data distribution platform, and back ends are now all running on Kubernetes. "Many APIs for critical applications are being developed for Kubernetes," says Apšega. "Teams are rewriting their applications to .NET core, because it supports containers, and preparing to move to Kubernetes. And new applications, by default, go in containers."
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_adform_banner4.jpg')">
<div class="banner4text">
"Releases are really nice for them, because they just push their code to Git and that's it. They don't have to worry about their virtual machines anymore." <span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase;line-height:14px"><br><br>— Andrius Cibulskis, IT Systems Engineer, Adform</span>
</div>
</div>
<section class="section5">
<div class="fullcol">
This big push has been driven by the real impact that these new practices have had. "Kubernetes helps our business a lot because our features are coming to market faster," says Apšega. "The deployments are very easy because developers just push the code and it automatically appears on Kubernetes." The release process went from several hours to several minutes. Autoscaling is at least six times faster than the semi-manual VM bootstrapping and application deployment required before. <br><br>
The team estimates that the company has experienced cost savings of 4-5x due to less hardware and fewer man hours needed to set up the hardware and virtual machines, metrics, and logging. Utilization of the hardware resources has been reduced as well, with containers notching two to three times more efficiency over virtual machines. <br><br>
Prometheus has also had a positive impact: "It provides high availability for metrics and alerting," says Apšega. "We monitor everything starting from hardware to applications. Having all the metrics in Grafana dashboards provides great insight on our systems."
</div>
<div class="banner5">
<div class="banner5text">
"I think that our company just started our cloud native journey. It seems like a huge road ahead, but we're really happy that we joined it." <span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase;line-height:14px"><br><br>— Edgaras Apšega, IT Systems Engineer, Adform</span>
</div>
</div>
<div class="fullcol">
All of these benefits have trickled down to individual team members, whose working lives have been changed for the better. "They used to have to get up at night to re-start some services, and now Kubernetes handles all of that," says Apšega. Adds Cibulskis: "Releases are really nice for them, because they just push their code to Git and that's it. They don't have to worry about their virtual machines anymore." Even the security teams have been impacted. "Security teams are always not happy," says Apšega, "and now they're happy because they can easily inspect the containers."
The company plans to remain in the data centers for now, "mostly because we want to keep all the data, to not share it in any way," says Cibulskis, "and it's cheaper at our scale." But, Apšega says, the possibility of using a hybrid cloud for computing is intriguing: "One of the projects we're interested in is the <a href="https://github.com/virtual-kubelet/virtual-kubelet">Virtual Kubelet</a> that lets you spin up the working nodes on different clouds to do some computing."
<br><br>
Apšega, Cibulskis and their colleagues are keeping tabs on how the cloud native ecosystem develops, and are excited to contribute where they can. "I think that our company just started our cloud native journey," says Apšega. "It seems like a huge road ahead, but we're really happy that we joined it."
</div>
</section>


View File

@ -0,0 +1,105 @@
---
title: Amadeus Case Study
case_study_styles: true
cid: caseStudies
css: /css/style_amadeus.css
---
<div class="banner1">
<h1> CASE STUDY:<img src="/images/amadeus_logo.png" class="header_logo"><br> <div class="subhead">Another Technical Evolution for a 30-Year-Old Company
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>Amadeus IT Group</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Madrid, Spain</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Travel Technology</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1">
<h2>Challenge</h2>
In the past few years, Amadeus, which provides IT solutions to the travel industry around the world, found itself in need of a new platform for the 5,000 services supported by its service-oriented architecture. The 30-year-old company operates its own data center in Germany, and there were growing demands internally and externally for solutions that needed to be geographically dispersed. And more generally, "we had objectives of being even more highly available," says Eric Mountain, Senior Expert, Distributed Systems at Amadeus. Among the company's goals: to increase automation in managing its infrastructure, optimize the distribution of workloads, use data center resources more efficiently, and adopt new technologies more easily.
</div>
<div class="col2">
<h2>Solution</h2>
Mountain has been overseeing the company's migration to <a href="http://kubernetes.io/">Kubernetes</a>, using <a href="https://www.openshift.org/">OpenShift</a> Container Platform, <a href="https://www.redhat.com/en">Red Hat</a>'s enterprise container platform.
<br><br>
<h2>Impact</h2>
One of the first projects the team deployed in Kubernetes was the Amadeus Airline Cloud Availability solution, which helps manage ever-increasing flight-search volume. "It's now handling in production several thousand transactions per second, and it's deployed in multiple data centers throughout the world," says Mountain. "It's not a migration of an existing workload; it's a whole new workload that we couldn't have done otherwise. [This platform] gives us access to market opportunities that we didn't have before."
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"We want multi-data center capabilities, and we want them for our mainstream system as well. We didn't think that we could achieve them with our existing system. We need new automation, things that Kubernetes and OpenShift bring."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Eric Mountain, Senior Expert, Distributed Systems at Amadeus IT Group</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>In his two decades at Amadeus, Eric Mountain has been the migrations guy. </h2>
Back in the day, he worked on the company's move from Unix to Linux, and now he's overseeing the journey to cloud native. "Technology just keeps changing, and we embrace it," he says. "We are celebrating our 30 years this year, and we continue evolving and innovating to stay cost-efficient and enhance everyone's travel experience, without interrupting workflows for the customers who depend on our technology."<br><br>
That was the challenge that Amadeus—which provides IT solutions to the travel industry around the world, from flight searches to hotel bookings to customer feedback—faced in 2014. The technology team realized it was in need of a new platform for the 5,000 services supported by its service-oriented architecture.<br><br>
The tipping point occurred when they began receiving many requests, internally and externally, for solutions that needed to be geographically outside the company's main data center in Germany. "Some requests were for running our applications on customer premises," Mountain says. "There were also new services we were looking to offer that required response times on the order of a few hundred milliseconds, which we couldn't achieve with transatlantic traffic. Or at least, not without eating into a considerable portion of the time available to our applications for them to process individual queries."<br><br>
More generally, the company was interested in leveling up on high availability, increasing automation in managing infrastructure, optimizing the distribution of workloads and using data center resources more efficiently. "We have thousands and thousands of servers," says Mountain. "These servers are assigned roles, so even if the setup is highly automated, the machine still has a given role. It's wasteful on many levels. For instance, an application doesn't necessarily use the machine very optimally. Virtualization can help a bit, but it's not a silver bullet. If that machine breaks, you still want to repair it because it has that role and you can't simply say, 'Well, I'll bring in another machine and give it that role.' It's not fast. It's not efficient. So we wanted the next level of automation."<br><br>
</div>
</section>
<div class="banner3">
<div class="banner3text">
"We hope that if we build on what others have built, what we do might actually be upstream-able. As Kubernetes and OpenShift progress, we see that we are indeed able to remove some of the additional layers we implemented to compensate for gaps we perceived earlier."
</div>
</div>
<section class="section3">
<div class="fullcol">
While mainly a C++ and Java shop, Amadeus also wanted to be able to adopt new technologies more easily. Some of its developers had started using languages like <a href="https://www.python.org/">Python</a> and databases like <a href="https://www.couchbase.com/">Couchbase</a>, but Mountain wanted still more options, he says, "in order to better adapt our technical solutions to the products we offer, and open up entirely new possibilities to our developers." Working with recent technologies and cool new things would also make it easier to attract new talent.
<br><br>
All of those needs led Mountain and his team on a search for a new platform. "We did a set of studies and proofs of concept over a fairly short period, and we considered many technologies," he says. "In the end, we were left with three choices: build everything on premise, build on top of <a href="http://kubernetes.io/">Kubernetes</a> whatever happens to be missing from our point of view, or go with <a href="https://www.openshift.com/">OpenShift</a> and build whatever remains there."
<br><br>
The team decided against building everything themselves—though they'd done that sort of thing in the past—because "people were already inventing things that looked good," says Mountain.
<br><br>
Ultimately, they went with OpenShift Container Platform, <a href="https://www.redhat.com/en">Red Hat</a>'s Kubernetes-based enterprise offering, instead of building on top of Kubernetes because "there was a lot of synergy between what we wanted and the way Red Hat was anticipating going with OpenShift," says Mountain. "They were clearly developing Kubernetes, and developing certain things ahead of time in OpenShift, which were important to us, such as more security."
<br><br>
The hope was that those particular features would eventually be built into Kubernetes, and, in the case of security, Mountain feels that has happened. "We realize that there's always a certain amount of automation that we will probably have to develop ourselves to compensate for certain gaps," says Mountain. "The less we do that, the better for us. We hope that if we build on what others have built, what we do might actually be upstream-able. As Kubernetes and OpenShift progress, we see that we are indeed able to remove some of the additional layers we implemented to compensate for gaps we perceived earlier."
</div>
</section>
<div class="banner4">
<div class="banner4text">
"Its not a migration of an existing workload; its a whole new workload that we couldnt have done otherwise. [This platform] gives us access to market opportunities that we didnt have before."
</div>
</div>
<section class="section4">
<div class="fullcol">
The first project the team tackled was one that they knew had to run outside the data center in Germany. Because of the project's needs, "We couldn't rely only on the built-in Kubernetes service discovery; we had to layer on top of that an extra service discovery level that allows us to load balance at the operation level within our system," says Mountain. They also built a stream dedicated to monitoring, which at the time wasn't offered in the Kubernetes or OpenShift ecosystem. Now that <a href="https://www.prometheus.io/">Prometheus</a> and other products are available, Mountain says the company will likely re-evaluate their monitoring system: "We obviously always like to leverage what Kubernetes and OpenShift can offer."
<br><br>
The second project ended up going into production first: the Amadeus Airline Cloud Availability solution, which helps manage ever-increasing flight-search volume and was deployed in public cloud. Launched in early 2016, it is "now handling in production several thousand transactions per second, and it's deployed in multiple data centers throughout the world," says Mountain. "It's not a migration of an existing workload; it's a whole new workload that we couldn't have done otherwise. [This platform] gives us access to market opportunities that we didn't have before."
<br><br>
Having been through this kind of technical evolution more than once, Mountain has advice on how to handle the cultural changes. "That's one aspect that we can tackle progressively," he says. "We have to go on supplying our customers with new features on our pre-existing products, and we have to keep existing products working. So we can't simply do absolutely everything from one day to the next. And we mustn't sell it that way."
<br><br>
The first order of business, then, is to pick one or two applications to demonstrate that the technology works. Rather than choosing a high-impact, high-risk project, Mountain's team selected a smaller application that was representative of all the company's other applications in its complexity: "We just made sure we picked something that's complex enough, and we showed that it can be done."
</div>
</section>
<div class="banner5">
<div class="banner5text">
"The bottom line is we want these multi-data center capabilities, and we want them as well for our mainstream system," he says. "And we dont think that we can implement them with our previous system. We need the new automation, homogeneity, and scale that Kubernetes and OpenShift bring."
</div>
</div>
<section class="section5">
<div class="fullcol">
Next comes convincing people. "On the operations side and on the R&D side, there will be people who say quite rightly, 'There is a system, and it works, so why change?'" Mountain says. "The only thing that really convinces people is showing them the value." For Amadeus, people realized that the Airline Cloud Availability product could not have been made available on the public cloud with the company's existing system. The question then became, he says, "Do we go into a full-blown migration? Is that something that is justified?"
<br><br>
"The bottom line is we want these multi-data center capabilities, and we want them as well for our mainstream system," he says. "And we dont think that we can implement them with our previous system. We need the new automation, homogeneity, and scale that Kubernetes and OpenShift bring."
<br><br>
So how do you get everyone on board? "Make sure you have good links between your R&D and your operations," he says. "Also make sure you're going to talk early on to the investors and stakeholders. Figure out what it is that they will be expecting from you, that will convince them or not, that this is the right way for your company."
<br><br>
His other advice is simply to make the technology available for people to try it. "Kubernetes and OpenShift Origin are open source software, so there's no complicated license key for the evaluation period and you're not limited to 30 days," he points out. "Just go and get it running." Along with that, he adds, "You've got to be prepared to rethink how you do things. Of course making your applications as cloud native as possible is how you'll reap the most benefits: 12 factors, CI/CD, which is continuous integration, continuous delivery, but also continuous deployment."
<br><br>
And while they explore that aspect of the technology, Mountain and his team will likely be practicing what he preaches to others taking the cloud native journey. "See what happens when you break it, because it's important to understand the limits of the system," he says. Or rather, he notes, the advantages of it. "Breaking things on Kube is actually one of the nice things about it—it recovers. It's the only real way that you'll see that you might be able to do things."
</div>
</section>

---
title: Ancestry Case Study
case_study_styles: true
cid: caseStudies
css: /css/style_ancestry.css
---
<div class="banner1">
<h1> CASE STUDY:<img src="/images/ancestry_logo.png" width="22%" style="margin-bottom:-12px;margin-left:3px;"><br> <div class="subhead">Digging Into the Past With New Technology</div></h1>
</div>
<div class="details">
Company &nbsp;<b>Ancestry</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Lehi, Utah</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Internet Company, Online Services</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1">
<h2>Challenge</h2>
Ancestry, the global leader in family history and consumer genomics, uses sophisticated engineering and technology to help everyone, everywhere discover the story of what led to them. The company has spent more than 30 years innovating and building products and technologies that, at their core, result in real and emotional human responses. <a href="https://www.ancestry.com">Ancestry</a> currently serves more than 2.6 million paying subscribers, holds 20 billion historical records and 90 million family trees, and has more than four million people in its AncestryDNA network, the largest consumer genomics DNA network in the world. The company's popular website, <a href="https://www.ancestry.com">ancestry.com</a>, had been working with big data long before the term was popularized. The site was built on hundreds of services, technologies and a traditional deployment methodology. "It's worked well for us in the past," says Paul MacKay, software engineer and architect at Ancestry, "but it had become quite cumbersome in its processing and time-consuming. As a primarily online service, we are constantly looking for ways to accelerate to be more agile in delivering our solutions and our&nbsp;products."
<br>
</div>
<div class="col2">
<h2>Solution</h2>
The company is transitioning to cloud native infrastructure, using <a href="https://www.docker.com">Docker</a> containerization, <a href="https://kubernetes.io">Kubernetes</a> orchestration and <a href="https://prometheus.io">Prometheus</a> for cluster monitoring.<br>
<br>
<h2>Impact</h2>
"Every single product, every decision we make at Ancestry, focuses on delighting our customers with intimate, sometimes life-changing discoveries about themselves and their families," says MacKay. "As the company continues to grow, the increased productivity gains from using Kubernetes has helped Ancestry make customer discoveries faster. With the move to Dockerization for example, instead of taking between 20 to 50 minutes to deploy a new piece of code, we can now deploy in under a minute for much of our code. Weve truly experienced significant time savings in addition to the various features and benefits from cloud native and Kubernetes-type technologies."
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"At a certain point, you have to step back if you're going to push a new technology and get key thought leaders with engineers within the organization to become your champions for new technology adoption. At training sessions, the development teams were always the ones that were saying, 'Kubernetes saved our time tremendously; it's an enabler. It really is incredible.'"<br><br><span style="font-size:16px">- PAUL MACKAY, SOFTWARE ENGINEER AND ARCHITECT AT ANCESTRY</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>It started with a Shaky Leaf.</h2>
Since its introduction a decade ago, the Shaky Leaf icon has become one of Ancestry's signature features, which signals to users that there's a helpful hint they can use to find out more about their family tree.<br><br>
So when the company decided to begin moving its infrastructure to cloud native technology, the first service that was launched on <a href="https://kubernetes.io">Kubernetes</a>, the open source platform for managing application containers across clusters of hosts, was this hint system. Think of it as Amazon's recommended products, but instead of recommending products the company recommends records, stories, or familial connections. "It was a very important part of the site," says Ancestry software engineer and architect Paul MacKay, "but also small enough for a pilot project that we knew we could handle in a very appropriate, secure way."<br><br>
And when it went live smoothly in early 2016, "our deployment time for this service literally was cut down from 50 minutes to 2 or 5 minutes," MacKay adds. "The development team was just thrilled because we're focused on supplying a great experience for our customers. And that means features, it means stability, it means all those things that we need for a first-in-class type operation."<br><br>
The stability of that Shaky Leaf was a signal for MacKay and his team that their decision to embrace cloud native technologies was the right one for the company. With a private data center, Ancestry built its website (which launched in 1996) on hundreds of services and technologies and a traditional deployment methodology. "It worked well for us in the past, but the sum of the legacy systems became quite cumbersome in its processing and was time-consuming," says MacKay. "We were looking for other ways to accelerate, to be more agile in delivering our solutions and our products."
</div>
</section>
<div class="banner3">
<div class="banner3text">
"And when it [Kubernetes] went live smoothly in early 2016, 'our deployment time for this service literally was cut down from 50 minutes to 2 or 5 minutes,' MacKay adds. 'The development team was just thrilled because we're focused on supplying a great experience for our customers. And that means features, it means stability, it means all those things that we need for a first-in-class type operation.'"
</div>
</div>
<section class="section3">
<div class="fullcol">
That need led them in 2015 to explore containerization. Ancestry engineers had already been using technology like <a href="https://www.java.com/en/">Java</a> and <a href="https://www.python.org">Python</a> on Linux, so part of the decision was about making the infrastructure more Linux-friendly. They quickly decided that they wanted to go with Docker for containerization, "but it always comes down to the orchestration part of it to make it really work," says MacKay.<br><br>
His team looked at orchestration platforms offered by <a href="https://docs.docker.com/compose/">Docker Compose</a>, <a href="http://mesos.apache.org">Mesos</a> and <a href="https://www.openstack.org/software/">OpenStack</a>, and even started to prototype some homegrown solutions. And then they started hearing rumblings of the imminent release of Kubernetes v1.0. "At the forefront, we were looking at the secret store, so we didn't have to manage that all ourselves, the config maps, the methodology of seamless deployment strategy," he says. "We found that how Kubernetes had done their resources, their types, their labels and just their interface was so much further advanced than the other things we had seen. It was a feature fit."<br><br>
<div class="quote">
Plus, MacKay says, "I just believed in the confidence that comes with the history that Google has with containerization. So we started out right on the leading edge of it. And we haven't looked back since."</div><br>
Which is not to say that adopting a new technology hasn't come with some challenges. "Change is hard," says MacKay. "Not because the technology is hard or that the technology is not good. It's just that people like to do things like they had done [before]. You have the early adopters and you have those who are coming in later. It was a learning experience on both sides."<br><br>
Figuring out the best deployment operations for Ancestry was a big part of the work it took to adopt cloud native infrastructure. "We want to make sure the process is easy and also controlled in the manner that allows us the highest degree of security that we demand and our customers demand," says MacKay. "With Kubernetes and other products, there are some good solutions, but a little bit of glue is needed to bring it into corporate processes and governances. It's like having a set of gloves that are generic, but when you really do want to grab something you have to make it so it's customized to you. That's what we had to do."<br><br>
Their best practices include allowing their developers to deploy into development stage and production, but then controlling the aspects that need governance and auditing, such as secrets. They found that having one namespace per service is useful for achieving that containment of secrets and config maps. And for their needs, having one container per pod makes it easier to manage and to have a smaller unit of deployment.
<br><br>
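A minimal sketch of that layout, assuming a hypothetical "hints" service (the names, image and values below are illustrative, not Ancestry's actual manifests), might look like this:
<pre><code># One namespace per service keeps that service's secrets and config maps contained.
apiVersion: v1
kind: Namespace
metadata:
  name: hints
---
apiVersion: v1
kind: Secret
metadata:
  name: hints-credentials
  namespace: hints
type: Opaque
stringData:
  db-password: example-only        # placeholder value, not a real credential
---
# One container per pod keeps the unit of deployment small and easy to manage.
apiVersion: v1
kind: Pod
metadata:
  name: hints
  namespace: hints
spec:
  containers:
    - name: hints
      image: example.com/hints:1.0   # hypothetical image
      envFrom:
        - secretRef:
            name: hints-credentials
</code></pre>
In practice a Deployment would typically wrap such a pod, but the containment idea is the same.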
</div>
</section>
<div class="banner4">
<div class="banner4text">
"The success of Ancestry's first deployment of the hint system on Kubernetes helped create momentum for greater adoption of the technology."
</div>
</div>
<section class="section4">
<div class="fullcol">
With that process established, the time spent on deployment was cut down to under a minute for some services. "As programmers, we have what's called REPL: read, evaluate, print, and loop, but with Kubernetes, we have CDEL: compile, deploy, execute, and loop," says MacKay. "It's a very quick loop back and a great benefit to understand that when our services are deployed in production, they're the same as what we tested in the pre-production environments. The approach of cloud native for Ancestry provides us a better ability to scale and to accommodate the business needs as workloads occur."<br><br>
The success of Ancestry's first deployment of the hint system on Kubernetes helped create momentum for greater adoption of the technology. "Engineers like to code, they like to do features, they don't like to sit around waiting for things to be deployed and worrying about scaling up and out and down," says MacKay. "After a while the engineers became our champions. At training sessions, the development teams were always the ones saying, 'Kubernetes saved our time tremendously; it's an enabler; it really is incredible.' Over time, we were able to convince our management that this was a transition that the industry is making and that we needed to be a part of it."<br><br>
A year later, Ancestry has transitioned a good number of applications to Kubernetes. "We have many different services that make up the rich environment that [the website] has from both the DNA side and the family history side," says MacKay. "We have front-end stacks, back-end stacks and back-end processing type stacks that are in the cluster."<br><br>
The company continues to weigh which services it will move forward to Kubernetes, which ones will be kept as is, and which will be replaced in the future and thus don't have to be moved over. MacKay estimates that the company is "approaching halfway on those features that are going forward. We don't have to do a lot of convincing anymore. It's more of an issue of timing with getting product management and engineering staff the knowledge and information that they need."
</div>
</section>
<div class="banner5">
<div class="banner5text">
"... 'I believe in Kubernetes. I believe in containerization. I think
if we can get there and establish ourselves in that world, we will be further along and far better off being agile and all the things we talk about,
and it'll&nbsp;go&nbsp;forward.'"
</div>
</div>
<section class="section5">
<div class="fullcol">
Looking ahead, MacKay sees Ancestry maximizing the benefits of Kubernetes in 2017. "We're very close to having everything that should be or could be in a Linux-friendly world in Kubernetes by the end of the year," he says, adding that he's looking forward to features such as federation and horizontal pod autoscaling that are currently in the works. "Kubernetes has been very wonderful for us and we continue to ride the wave."<br><br>
That wave, he points out, has everything to do with the vibrant Kubernetes community, which has grown by leaps and bounds since Ancestry joined it as an early adopter. "This is just a very rough way of judging it, but on Slack in June 2015, there were maybe 500 on there," MacKay says. "The last time I looked there were maybe 8,500 just on the Slack channel. There are so many major companies and different kinds of companies involved now. It's the variety of contributors, the number of contributors, the incredibly competent and friendly community."<br><br>
As much as he and his team at Ancestry have benefited from what he calls "the goodness and the technical abilities of many" in the community, they've also contributed information about best practices, logged bug issues and participated in the open source conversation. And they've been active in attending <a href="https://www.meetup.com/Utah-Kubernetes-Meetup/">meetups</a> to help educate and give back to the local tech community in Utah. Says MacKay: "We're trying to give back as far as our experience goes, rather than just code."
<br><br>When he meets with companies considering adopting cloud native infrastructure, the best advice he has to give from Ancestry's Kubernetes journey is this: "Start small, but with hard problems," he says. And "you need a patron who understands the vision of containerization, to help you tackle the political as well as other technical roadblocks that can occur when change is needed."<br><br>
With the changes that MacKay's team has led over the past year and a half, cloud native will be part of Ancestry's technological genealogy for years to come. MacKay has been such a champion of the technology that he says people have jokingly accused him of having a Kubernetes tattoo.<br><br>
"I really don't," he says with a laugh. "But I'm passionate. I'm not exclusive to any technology; I use whatever I need that's out there that makes us great. If it's something else, I'll use it. But right now I believe in Kubernetes. I believe in containerization. I think if we can get there and establish ourselves in that world, we will be further along and far better off being agile and all the things we talk about, and it'll go forward."<br><br>
He pauses. "So, yeah, I guess you can say I'm an evangelist for Kubernetes," he says. "But I'm not getting a tattoo!"
</div>
</section>

---
title: Ant Financial Case Study
linkTitle: ant-financial
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
featured: false
---
<div class="banner1" style="background-image: url('/images/CaseStudy_antfinancial_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/antfinancial_logo.png" class="header_logo" style="width:20%;margin-bottom:-2.5%"><br> <div class="subhead" style="margin-top:1%">Ant Financials Hypergrowth Strategy Using Kubernetes
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>Ant Financial</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Hangzhou, China</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Financial Services</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%"">
<h2>Challenge</h2>
Officially founded in October 2014, <a href="https://www.antfin.com/index.htm?locale=en_us">Ant Financial</a> originated from <a href="https://global.alipay.com/">Alipay</a>, the world's largest online payment platform that launched in 2004. The company also offers numerous other services leveraging technology innovation. With the volume of transactions Alipay handles for its 900+ million users worldwide (through its local and global partners)—256,000 transactions per second at the peak of Double 11 Singles Day 2017, and total gross merchandise value of $31 billion for Singles Day 2018—not to mention that of its other services, Ant Financial faces “data processing challenge in a whole new way,” says Haojie Hang, who is responsible for Product Management for the Storage and Compute Group. “We see three major problems of operating at that scale: how to provide real-time compute, storage, and processing capability, for instance to make real-time recommendations for fraud detection; how to provide intelligence on top of this data, because there's too much data and then we're not getting enough insight; and how to apply security in the application level, in the middleware level, the system level, even the chip level.” In order to provide reliable and consistent services to its customers, Ant Financial embraced containers in early 2014, and soon needed an orchestration solution for the tens-of-thousands-of-node clusters in its data centers.
<h2>Solution</h2>
After investigating several technologies, the team chose <a href="https://kubernetes.io/">Kubernetes</a> for orchestration, as well as a number of other CNCF projects, including <a href="https://prometheus.io/">Prometheus</a>, <a href="https://opentracing.io/">OpenTracing</a>, <a href="https://coreos.com/etcd/">etcd</a> and <a href="https://coredns.io/">CoreDNS</a>. “In late 2016, we decided that Kubernetes will be the de facto standard,” says Hang. “Looking back, we made the right bet on the right technology. But then we needed to move the production workload from the legacy infrastructure to the latest Kubernetes-enabled platform, and that took some time, because we are very careful in terms of reliability and consistency.” All core financial systems were containerized by November 2017, and the migration to Kubernetes is ongoing.
<br>
<h2>Impact</h2>
“We've seen at least tenfold improvement in terms of the operations with cloud native technology, which means you can have tenfold increase in terms of output,” says Hang. Ant also provides its fully integrated financial cloud platform to business partners around the world, and hopes to power the next generation of digital banking with deep experience in service innovation and technology expertise. Hang says the team hasn't begun to focus on optimizing the Kubernetes platform, either: “Because we're still in the hyper growth stage, we're not in a mode where we do cost saving yet.”
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"In late 2016, we decided that Kubernetes will be the de facto standard. Looking back, we made the right bet on the right technology."
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- HAOJIE HANG, PRODUCT MANAGEMENT, ANT FINANCIAL</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>A spinoff of the multinational conglomerate Alibaba, Ant Financial boasts a $150+ billion valuation and the scale to match. The fintech startup, launched in 2014, is comprised of Alipay, the world's largest online payment platform, and numerous other services leveraging technology innovation.</h2>
And the volume of transactions that Alipay handles for over 900 million users worldwide (through its local and global partners) is staggering: 256,000 per second at the peak of Double 11 Singles Day 2017, and total gross merchandise value of $31 billion for Singles Day 2018. With the mission of “bringing the world equal opportunities,” Ant Financial is dedicated to creating an open, shared credit system and financial services platform through technology innovations.
<br><br>
Combine that with the operations of its other properties—such as the Huabei online credit system, Jiebei lending service, and the 350-million-user <a href="https://en.wikipedia.org/wiki/Ant_Forest">Ant Forest</a> green energy mobile app—and Ant Financial faces “data processing challenge in a whole new way,” says Haojie Hang, who is responsible for Product Management for the Storage and Compute Group. “We see three major problems of operating at that scale: how to provide real-time compute, storage, and processing capability, for instance to make real-time recommendations for fraud detection; how to provide intelligence on top of this data, because there's too much data and we're not getting enough insight; and how to apply security in the application level, in the middleware level, the system level, even the chip level.”
<br><br>
To address those challenges and provide reliable and consistent services to its customers, Ant Financial embraced <a href="https://www.docker.com/">Docker</a> containerization in 2014. But they soon realized that they needed an orchestration solution for some tens-of-thousands-of-node clusters in the company's data centers.
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_antfinancial_banner3.jpg')">
<div class="banner3text">
"On Double 11 this year, we had plenty of nodes on Kubernetes, but compared to the whole scale of our infrastructure, this is still in progress."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- RANGER YU, GLOBAL TECHNOLOGY PARTNERSHIP & DEVELOPMENT, ANT FINANCIAL</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
The team investigated several technologies, including Docker Swarm and Mesos. “We did a lot of POCs, but we're very careful in terms of production systems, because we want to make sure we don't lose any data,” says Hang. “You cannot afford to have a service downtime for one minute; even one second has a very, very big impact. We operate every day under pressure to provide reliable and consistent services to consumers and businesses in China and globally.”
<br><br>
Ultimately, Hang says Ant chose Kubernetes because it checked all the boxes: a strong community, technology that “will be relevant in the next three to five years,” and a good match for the company's engineering talent. “In late 2016, we decided that Kubernetes will be the de facto standard,” says Hang. “Looking back, we made the right bet on the right technology. But then we needed to move the production workload from the legacy infrastructure to the latest Kubernetes-enabled platform. We spent a lot of time learning and then training our people to build applications on Kubernetes well.”
<br><br>
All core financial systems were containerized by November 2017, and the migration to Kubernetes is ongoing. Ant's platform also leverages a number of other CNCF projects, including <a href="https://prometheus.io/">Prometheus</a>, <a href="https://opentracing.io/">OpenTracing</a>, <a href="https://coreos.com/etcd/">etcd</a> and <a href="https://coredns.io/">CoreDNS</a>. “On Double 11 this year, we had plenty of nodes on Kubernetes, but compared to the whole scale of our infrastructure, this is still in progress,” says Ranger Yu, Global Technology Partnership & Development.
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_antfinancial_banner4.jpg')">
<div class="banner4text">
"Were very grateful for CNCF and this amazing technology, which we need as we continue to scale globally. Were definitely embracing the community and open source more in the future." <br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- HAOJIE HANG, PRODUCT MANAGEMENT, ANT FINANCIAL</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
Still, there has already been an impact. “Cloud native technology has benefited us greatly in terms of efficiency,” says Hang. “In general, we want to make sure our infrastructure is nimble and flexible enough for the work that could happen tomorrow. That's the goal. And with cloud native technology, we've seen at least tenfold improvement in operations, which means you can have tenfold increase in terms of output. Let's say you are operating 10 nodes with one person. With cloud native, tomorrow you can have 100 nodes.”
<br><br>
Ant also provides its financial cloud platform to partners around the world, and hopes to power the next generation of digital banking with deep experience in service innovation and technology expertise. Hang says the team hasn't begun to focus on optimizing the Kubernetes platform, either: “Because we're still in the hyper growth stage, we're not in a mode where we do cost-saving yet.”
<br><br>
The CNCF community has also been a valuable asset during Ant Financial's move to cloud native. “If you are applying a new technology, it's very good to have a community to discuss technical problems with other users,” says Hang. “We're very grateful for CNCF and this amazing technology, which we need as we continue to scale globally. We're definitely embracing the community and open sourcing more in the future.”
</div>
<div class="banner5" >
<div class="banner5text">
"In China, we are the North Star in terms of innovation in financial and other related services,” says Hang. “We definitely want to make sure were still leading in the next 5 to 10 years with our investment in technology."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- RANGER YU, GLOBAL TECHNOLOGY PARTNERSHIP & DEVELOPMENT, ANT FINANCIAL</span></div>
</div>
<div class="fullcol">
In fact, the company has already started to open source some of its <a href="https://github.com/alipay">cloud native middleware</a>. “We are going to be very proactive about that,” says Yu. “CNCF provided a platform so everyone can plug in or contribute components. This is very good open source governance.”
<br><br>
Looking ahead, the Ant team will continue to evaluate many other CNCF projects. Building a service mesh community in China, the team has brought together many China-based companies and developers to discuss the potential of that technology. “Service mesh is very attractive for Chinese developers and end users because we have a lot of legacy systems running now, and it's an ideal mid-layer to glue everything together, both new and legacy,” says Hang. “For new technologies, we look very closely at whether they will last.”
<br><br>
At Ant, Kubernetes passed that test with flying colors, and the team hopes other companies will follow suit. “In China, we are the North Star in terms of innovation in financial and other related services,” says Hang. “We definitely want to make sure we're still leading in the next 5 to 10 years with our investment in technology.”
</div>
</section>

---
title: AppDirect Case Study
linkTitle: AppDirect
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
logo: appdirect_featured_logo.png
featured: true
weight: 4
quote: >
We made the right decisions at the right time. Kubernetes and the cloud native technologies are now seen as the de facto ecosystem.
---
<div class="banner1" style="background-image: url('/images/CaseStudy_appdirect_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/appdirect_logo.png" class="header_logo" style="margin-bottom:-2%"><br> <div class="subhead" style="margin-top:1%;font-size:0.5em">AppDirect: How AppDirect Supported the 10x Growth of Its Engineering Staff with Kubernetess
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>AppDirect</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>San Francisco, California
</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Software</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%"">
<h2>Challenge</h2>
<a href="https://www.appdirect.com/">AppDirect</a> provides an end-to-end commerce platform for cloud-based products and services. When Director of Software Development Pierre-Alexandre Lacerte began working there in 2014, the company had a monolith application deployed on a "tomcat infrastructure, and the whole release process was complex for what it should be," he says. "There were a lot of manual steps involved, with one engineer building a feature, then another team picking up the change. So you had bottlenecks in the pipeline to ship a feature to production." At the same time, the engineering team was growing, and the company realized it needed a better infrastructure to both support that growth and increase velocity.
<br><br>
<h2>Solution</h2>
"My idea was: Lets create an environment where teams can deploy their services faster, and they will say, Okay, I dont want to build in the monolith anymore. I want to build a service," says Lacerte. They considered and prototyped several different technologies before deciding to adopt <a href="https://kubernetes.io/">Kubernetes</a> in early 2016. Lacertes team has also integrated <a href="https://prometheus.io/">Prometheus</a> monitoring into the platform; tracing is next. Today, AppDirect has more than 50 microservices in production and 15 Kubernetes clusters deployed on <a href="https://aws.amazon.com/">AWS</a> and on premise around the world.
<br><br>
<h2>Impact</h2>
The Kubernetes platform has helped support the engineering team's 10x growth over the past few years. Coupled with the fact that they were continually adding new features, Lacerte says, "I think our velocity would have slowed down a lot if we didn't have this new infrastructure." Moving to Kubernetes and services has meant that deployments have become much faster due to less dependency on custom-made, brittle shell scripts with SCP commands. Time to deploy a new version has shrunk from 4 hours to a few minutes. Additionally, the company invested a lot of effort to make things self-service for developers. "Onboarding a new service doesn't require <a href="https://www.atlassian.com/software/jira">Jira</a> tickets or meeting with three different teams," says Lacerte. Today, the company sees 1,600 deployments per week, compared to 1-30 before. The company also achieved cost savings by moving its marketplace and billing monoliths to Kubernetes from legacy EC2 hosts as well as by leveraging autoscaling, as traffic is higher during business hours.
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"It was an immense engineering culture shift, but the benefits are undeniable in terms of scale and speed."
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Alexandre Gervais, Staff Software Developer, AppDirect</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>With its end-to-end commerce platform for cloud-based products and services, <a href="https://www.appdirect.com/">AppDirect</a> has been helping organizations such as Comcast and GoDaddy simplify the digital supply chain since 2009. </h2>
<br>
When Director of Software Development Pierre-Alexandre Lacerte started working there in 2014, the company had a monolith application deployed on a "tomcat infrastructure, and the whole release process was complex for what it should be," he says. "There were a lot of manual steps involved, with one engineer building a feature then creating a pull request, and a QA or another engineer validating the feature. Then it gets merged and someone else will take care of the deployment. So we had bottlenecks in the pipeline to ship a feature to production." <br><br>
At the same time, the engineering team of 40 was growing, and the company wanted to add an increasing number of features to its products. As a member of the platform team, Lacerte began hearing from multiple teams that wanted to deploy applications using different frameworks and languages, from <a href="https://nodejs.org/">Node.js</a> to <a href="http://spring.io/projects/spring-boot">Spring Boot Java</a>. He soon realized that in order to both support growth and increase velocity, the company needed a better infrastructure, and a system in which teams are autonomous, can do their own deploys, and be responsible for their services in production.
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_appdirect_banner3.jpg')">
<div class="banner3text">
"We made the right decisions at the right time. Kubernetes and the cloud native technologies are now seen as the de facto ecosystem. We know where to focus our efforts in order to tackle the new wave of challenges we face as we scale out. The community is so active and vibrant, which is a great complement to our awesome internal team."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Alexandre Gervais, Staff Software Developer, AppDirect
</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
From the beginning, Lacerte says, "My idea was: Let's create an environment where teams can deploy their services faster, and they will say, 'Okay, I don't want to build in the monolith anymore. I want to build a service.'" (Lacerte left the company in 2019.)<br><br>
Working with the operations team, Lacerte's group got more control and access to the company's <a href="https://aws.amazon.com/">AWS infrastructure</a>, and started prototyping several orchestration technologies. "Back then, Kubernetes was a little underground, unknown," he says. "But we looked at the community, the number of pull requests, the velocity on GitHub, and we saw it was getting traction. And we found that it was much easier for us to manage than the other technologies."
They spun up the first few services on Kubernetes using <a href="https://www.chef.io/">Chef</a> and <a href="https://www.terraform.io/">Terraform</a> provisioning, and as more services were added, more automation was, too. "We have clusters around the world—in Korea, in Australia, in Germany, and in the U.S.," says Lacerte. "Automation is critical for us." They're now largely using <a href="https://github.com/kubernetes/kops">Kops</a>, and are looking at managed Kubernetes offerings from several cloud providers.<br><br>
Today, though the monolith still exists, there are fewer and fewer commits and features. All teams are deploying on the new infrastructure, and services are the norm. AppDirect now has more than 50 microservices in production and 15 Kubernetes clusters deployed on AWS and on premise around the world.<br><br>
Lacerte's strategy ultimately worked because of the very real impact the Kubernetes platform has had on deployment time. Due to less dependency on custom-made, brittle shell scripts with SCP commands, time to deploy a new version has shrunk from 4 hours to a few minutes. Additionally, the company invested a lot of effort to make things self-service for developers. "Onboarding a new service doesn't require <a href="https://www.atlassian.com/software/jira">Jira</a> tickets or meeting with three different teams," says Lacerte. Today, the company sees 1,600 deployments per week, compared to 1-30 before.
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_appdirect_banner4.jpg');width:100%;">
<div class="banner4text">
"I think our velocity would have slowed down a lot if we didnt have this new infrastructure."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Pierre-Alexandre Lacerte, Director of Software Development, AppDirect</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
Additionally, the Kubernetes platform has helped support the engineering team's 10x growth over the past few years. "Ownership, a core value of AppDirect, reflects in our ability to ship services independently of our monolith code base," says Staff Software Developer Alexandre Gervais, who worked with Lacerte on the initiative. "Small teams now own critical parts of our business domain model, and they operate in their decoupled domain of expertise, with limited knowledge of the entire codebase. This reduces and isolates some of the complexity." Coupled with the fact that they were continually adding new features, Lacerte says, "I think our velocity would have slowed down a lot if we didn't have this new infrastructure."
The company also achieved cost savings by moving its marketplace and billing monoliths to Kubernetes from legacy EC2 hosts as well as by leveraging autoscaling, as traffic is higher during business hours.<br><br>
AppDirect's cloud native stack also includes <a href="https://grpc.io/">gRPC</a> and <a href="https://www.fluentd.org/">Fluentd</a>, and the team is currently working on setting up <a href="https://opencensus.io/">OpenCensus</a>. The platform already has <a href="https://prometheus.io/">Prometheus</a> integrated, so "when teams deploy their service, they have their notifications, alerts and configurations," says Lacerte. "For example, in the test environment, I want to get a message on Slack, and in production, I want a <a href="https://slack.com/">Slack</a> message and I also want to get paged. We have integration with PagerDuty. Teams have more ownership on their services."
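As an illustration of that kind of per-environment routing (a sketch only; the receiver names, channels and keys below are hypothetical, not AppDirect's actual configuration), a Prometheus Alertmanager configuration that keeps test alerts in Slack and also pages for production alerts might look like this:
<pre><code># Alertmanager routing sketch: Slack only in test, Slack plus paging in production.
global:
  slack_api_url: 'https://hooks.slack.com/services/REPLACE_ME'   # placeholder webhook
route:
  receiver: team-slack-test            # default: Slack message only
  routes:
    - match:
        environment: production
      receiver: team-slack-and-pager   # production: Slack message plus a page
receivers:
  - name: team-slack-test
    slack_configs:
      - channel: '#team-alerts-test'
  - name: team-slack-and-pager
    slack_configs:
      - channel: '#team-alerts-prod'
    pagerduty_configs:
      - service_key: 'REPLACE_ME'      # placeholder PagerDuty integration key
</code></pre>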
</div>
<div class="banner5" >
<div class="banner5text">
"We moved from a culture limited to pushing code in a branch to exciting new responsibilities outside of the code base: deployment of features and configurations; monitoring of application and business metrics; and on-call support in case of outages. It was an immense engineering culture shift, but the benefits are undeniable in terms of scale and speed."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Pierre-Alexandre Lacerte, Director of Software Development, AppDirect</span></div>
</div>
<div class="fullcol">
That of course also means more responsibility. "We asked engineers to expand their horizons," says Gervais. "We moved from a culture limited to pushing code in a branch to exciting new responsibilities outside of the code base: deployment of features and configurations; monitoring of application and business metrics; and on-call support in case of outages. It was an immense engineering culture shift, but the benefits are undeniable in terms of scale and speed." <br><br>
As the engineering ranks continue to grow, the platform team has a new challenge of making sure that the Kubernetes platform is accessible and easily utilized by everyone. "How can we make sure that when we add more people to our team, they are efficient, productive, and know how to ramp up on the platform?" Lacerte says. "So we have the evangelists, the documentation, some project examples. We do demos, we have AMA sessions. We're trying different strategies to get everyone's attention."<br><br>
Three and a half years into their Kubernetes journey, Gervais feels AppDirect "made the right decisions at the right time," he says. "Kubernetes and the cloud native technologies are now seen as the de facto ecosystem. We know where to focus our efforts in order to tackle the new wave of challenges we face as we scale out. The community is so active and vibrant, which is a great complement to our awesome internal team. Going forward, our focus will really be geared towards benefiting from the ecosystem by providing added business value in our day-to-day operations."
</div>
</section>

---
title: BlaBlaCar Case Study
case_study_styles: true
cid: caseStudies
css: /css/style_blablacar.css
---
<div class="banner1">
<h1> CASE STUDY:<img src="/images/blablacar_logo.png" class="header_logo"><br /> <div class="subhead">Turning to Containerization to Support Millions of Rideshares</div></h1>
</div>
<div class="details">
Company &nbsp;<b>BlaBlaCar</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Paris, France</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Ridesharing Company</b>
</div>
<hr />
<section class="section1">
<div class="cols">
<div class="col1">
<h2>Challenge</h2>
The world's largest long-distance carpooling community, <a href="https://www.blablacar.com/">BlaBlaCar</a>, connects 40 million members across 22 countries. The company has been experiencing exponential growth since 2012 and needed its infrastructure to keep up. "When you're thinking about doubling the number of servers, you start thinking, 'What should I do to be more efficient?'" says Simon Lallemand, Infrastructure Engineer at BlaBlaCar. "The answer is not to hire more and more people just to deal with the servers and installation." The team knew they had to scale the platform, but wanted to stay on their own bare metal servers.
<br />
<br />
<h2>Solution</h2>
Opting not to shift to cloud virtualization or use a private cloud on their own servers, the BlaBlaCar team became early adopters of containerization, using the CoreOS runtime <a href="https://coreos.com/rkt">rkt</a>, initially deployed using the <a href="https://coreos.com/fleet/docs/latest/launching-containers-fleet.html">fleet</a> cluster manager. Last year, the company switched to <a href="http://kubernetes.io/">Kubernetes</a> orchestration, and now also uses <a href="https://prometheus.io/">Prometheus</a> for monitoring.
</div>
<div class="col2">
<h2>Impact</h2>
"Before using containers, it would take sometimes a day, sometimes two, just to create a new service," says Lallemand. "With all the tooling that we made around the containers, copying a new service now is a matter of minutes. Its really a huge gain. We are better at capacity planning in our data center because we have fewer constraints due to this abstraction between the services and the hardware we run on. For the developers, it also means they can focus only on the features that theyre developing, and not on the infrastructure."
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"When youre switching to this cloud-native model and running everything in containers, you have to make sure that at any moment you can reboot without any downtime and without losing traffic. [With Kubernetes] our infrastructure is much more resilient and we have better availability than before."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br />- Simon Lallemand, Infrastructure Engineer at BlaBlaCar</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>For the 40 million users of <a href="https://www.blablacar.com/">BlaBlaCar</a>, it's easy to find strangers headed in the same direction to share rides and costs. You can even choose how much "bla bla" chatter you want from a long-distance ride mate.</h2>
Behind the scenes, though, the infrastructure was falling woefully behind the rider community's exponential growth. Founded in 2006, the company hit its current stride around 2012. "Our infrastructure was very traditional," says Infrastructure Engineer Simon Lallemand, who began working at the company in 2014. "In the beginning, it was a bit chaotic because we had to [grow] fast. But then comes the time when you have to design things to make it manageable."<br /><br />
By 2015, the company had about 50 bare metal servers. The team was using a <a href="https://www.mysql.com/">MySQL</a> database and <a href="http://php.net/">PHP</a>, but, Lallemand says, "it was a very static way." They also utilized the configuration management system, <a href="https://www.chef.io/chef/">Chef</a>, but had little automation in its process. "When you're thinking about doubling the number of servers, you start thinking, 'What should I do to be more efficient?'" says Lallemand. "The answer is not to hire more and more people just to deal with the servers and installation."<br /><br />
Instead, BlaBlaCar began its cloud-native journey but wasn't sure which route to take. "We could either decide to go into cloud virtualization or even use a private cloud on our own servers," says Lallemand. "But going into the cloud meant we had to make a lot of changes in our application work, and we were just not ready to make the switch from on premise to the cloud." They wanted to keep the great performance they got on bare metal, so they didn't want to go to virtualization on premise.<br /><br />
The solution: containerization. This was early 2015 and containers were still relatively new. "It was a bold move at the time," says Lallemand. "We decided that the next servers that we would buy in the new data center would all be the same model, so we could outsource the maintenance of the servers. And we decided to go with containers and with <a href="https://coreos.com/">CoreOS</a> Container Linux as an abstraction for this hardware. It seemed future-proof to go with containers because we could see what companies were already doing with containers."
</div>
</section>
<div class="banner3">
<div class="banner3text">
"With all the tooling that we made around the containers, copying a new service is a matter of minutes. Its a huge gain. For the developers, it means they can focus only on the features that theyre developing and not on the infrastructure or the hour they would test their code, or the hour that it would get deployed."
</div>
</div>
<section class="section3">
<div class="fullcol">
Next, they needed to choose a runtime for the containers, but "there were very few deployments in production at that time," says Lallemand. They experimented with <a href="https://www.docker.com/">Docker</a> but decided to go with <a href="https://coreos.com/rkt">rkt</a>. Lallemand explains that for BlaBlaCar, it was "much simpler to integrate things that are on rkt." At the time, the project was still pre-v1.0, so "we could speak with the developers of rkt and give them feedback. It was an advantage." Plus, he notes, rkt was very stable, even at this early stage.<br /><br />
Once those decisions were made that summer, the company came up with a plan for implementation. First, they formed a task force to create a workflow that would be tested by three of the 10 members on Lallemand's team. But they took care to run regular workshops with all 10 members to make sure everyone was on board. "When you're focused on your product sometimes you forget if it's really user friendly, whether other people can manage to create containers too," Lallemand says. "So we did a lot of iterations to find a good workflow."<br /><br />
After establishing the workflow, Lallemand says with a smile that "we had this strange idea that we should try the most difficult thing first. Because if it works, it will work for everything." So the first project the team decided to containerize was the database. "Nobody did that at the time, and there were really no existing tools for what we wanted to do, including building container images," he says. So the team created their own tools, such as <a href="https://github.com/blablacar/dgr">dgr</a>, which builds container images so that the whole team has a common framework to build on the same images with the same standards. They also revamped the service-discovery tools <a href="https://github.com/airbnb/nerve">Nerve</a> and <a href="http://airbnb.io/projects/synapse/">Synapse</a>; their versions, <a href="https://github.com/blablacar/go-nerve">Go-Nerve</a> and <a href="https://github.com/blablacar/go-synapse">Go-Synapse</a>, were written in Go and built to be more efficient and include new features. All of these tools were open-sourced.<br /><br />
At the same time, the company was working to migrate its entire platform to containers with a deadline set for Christmas 2015. With all the work being done in parallel, BlaBlaCar was able to get about 80 percent of its production into containers by its deadline with live traffic running on containers during December. (It's now at 100 percent.) "It's a really busy time for traffic," says Lallemand. "We knew that by using those new servers with containers, it would help us handle the traffic."<br /><br />
In the middle of that peak season for carpooling, everything worked well. "The biggest impact that we had was for the deployment of new services," says Lallemand. "Before using containers, we had to first deploy a new server and create configurations with Chef. It would take sometimes a day, sometimes two, just to create a new service. And with all the tooling that we made around the containers, copying a new service is a matter of minutes. So it's really a huge gain. For the developers, it means they can focus only on the features that they're developing and not on the infrastructure or the hour they would test their code, or the hour that it would get deployed."
</div>
</section>
<div class="banner4">
<div class="banner4text">
"We realized that there was a really strong community around it [Kubernetes], which meant we would not have to maintain a lot of tools of our own," says Lallemand. "It was better if we could contribute to some bigger project like Kubernetes."
</div>
</div>
<section class="section4">
<div class="fullcol">
In order to meet their self-imposed deadline, one of the decisions they made was to not do any "orchestration magic" for containers in the first production alignment. Instead, they used the basic <a href="https://coreos.com/fleet/docs/latest/launching-containers-fleet.html">fleet</a> tool from CoreOS to deploy their containers. (They did build a tool called <a href="https://github.com/blablacar/ggn">GGN</a>, which they've open-sourced, to make it more manageable for their system engineers to use.)<br /><br />
Still, the team knew that they'd want more orchestration. "Our tool was doing a pretty good job, but at some point you want to give more autonomy to the developer team," Lallemand says. "We also realized that we don't want to be the single point of contact for developers when they want to launch new services." By the summer of 2016, they found their answer in <a href="http://kubernetes.io/">Kubernetes</a>, which had just begun supporting rkt implementation.<br /><br />
After discussing their needs with their contacts at CoreOS and Google, they were convinced that Kubernetes would work for BlaBlaCar. "We realized that there was a really strong community around it, which meant we would not have to maintain a lot of tools of our own," says Lallemand. "It was better if we could contribute to some bigger project like Kubernetes." They also started using <a href="https://prometheus.io/">Prometheus</a>, as they were looking for "service-oriented monitoring that could be updated nightly." Production on Kubernetes began in December 2016. "We like to do crazy stuff around Christmas," he adds with a laugh.<br /><br />
BlaBlaCar now has about 3,000 pods, with 1200 of them running on Kubernetes. Lallemand leads a "foundations team" of 25 members who take care of the networks, databases and systems for about 100 developers. There have been some challenges getting to this point. "The rkt implementation is still not 100 percent finished," Lallemand points out. "It's really good, but there are some features still missing. We have questions about how we do things with stateful services, like databases. We know how we will be migrating some of the services; some of the others are a bit more complicated to deal with. But the Kubernetes community is making a lot of progress on that part."<br /><br />
The team is particularly happy that they're now able to plan capacity better in the company's data center. "We have fewer constraints since we have this abstraction between the services and the hardware we run on," says Lallemand. "If we lose a server because there's a hardware problem on it, we just move the containers onto another server. It's much more efficient. We do that by just changing a line in the configuration file. And with Kubernetes, it should be automatic, so we would have nothing to do."
</div>
</section>
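Lallemand's "it should be automatic" remark corresponds to standard Kubernetes controller behavior: a Deployment declares a desired replica count, and when a node disappears the scheduler places replacement Pods on the remaining healthy nodes with no configuration edits. A minimal sketch of that idea follows; the name, image, and replica count are illustrative assumptions, not BlaBlaCar's actual configuration.

```yaml
# Minimal Deployment sketch: the controller keeps 3 replicas running,
# so Pods lost to a node failure are rescheduled onto healthy nodes automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service        # hypothetical name, for illustration only
spec:
  replicas: 3                  # desired count; restored automatically after node loss
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
      - name: web
        image: nginx:1.25      # stand-in image, not the workload from the case study
        ports:
        - containerPort: 80
```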
<div class="banner5">
<div class="banner5text">
"If we lose a server because theres a hardware problem on it, we just move the containers onto another server. Its much more efficient. We do that by just changing a line in the configuration file. With Kubernetes, it should be automatic, so we would have nothing to do."
</div>
</div>
<section class="section5">
<div class="fullcol">
And these advances ultimately trickle down to BlaBlaCar's users. "We have improved availability overall on our website," says Lallemand. "When you're switching to this cloud-native model with running everything in containers, you have to make sure that you can at any moment reboot a server or a data container without any downtime, without losing traffic. So now our infrastructure is much more resilient and we have better availability than before."<br /><br />
Within BlaBlaCar's technology department, the cloud-native journey has created some profound changes. Lallemand thinks that the regular meetings during the conception stage and the training sessions during implementation helped. "After that everybody took part in the migration process," he says. "Then we split the organization into different tribes—teams that gather developers, product managers, data analysts, all the different jobs, to work on a specific part of the product. Before, they were organized by function. The idea is to give all these tribes access to the infrastructure directly in a self-service way without having to ask. These people are really autonomous. They have responsibility of that part of the product, and they can make decisions faster." <br /><br />
This DevOps transformation turned out to be a positive one for the company's staffers. "The team was very excited about the DevOps transformation because it was new, and we were working to make things more reliable, more future-proof," says Lallemand. "We like doing things that very few people are doing, other than the internet giants." <br /><br />
With these changes already making an impact, BlaBlaCar is looking to split up more and more of its application into services. "I don't say microservices because they're not so micro," Lallemand says. "If we can split the responsibilities between the development teams, it would be easier to manage and more reliable, because we can easily add and remove services if one fails. You can handle it easily, instead of adding a big monolith that we still have." <br /><br />
When Lallemand speaks to other European companies curious about what BlaBlaCar has done with its infrastructure, he tells them to come along for the ride. "I tell them that it's such a pleasure to deal with the infrastructure that we have today compared to what we had before," he says. "They just need to keep in mind their real motive, whether it's flexibility in development or reliability or so on, and then go step by step towards reaching those objectives. That's what we've done. It's important not to do technology for the sake of technology. Do it for a purpose. Our focus was on helping the developers."
</div>
</section>


@@ -0,0 +1,112 @@
---
title: BlackRock Case Study
case_study_styles: true
cid: caseStudies
css: /css/style_blackrock.css
---
<div class="banner1">
<h1> CASE STUDY: <img src="/images/blackrock_logo.png" class="header_logo"><br>
<div class="subhead">Rolling Out Kubernetes in Production in 100 Days</div>
</h1>
</div>
<div class="details">
Company &nbsp;<b>BlackRock</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>New York, NY</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Financial Services</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1">
<h2>Challenge</h2>
The world's largest asset manager, <a href="https://www.blackrock.com/investing">BlackRock</a> operates a very controlled static deployment scheme, which has allowed for scalability over the years. But in their data science division, there was a need for more dynamic access to resources. "We want to be able to give every investor access to data science, meaning <a href="https://www.python.org">Python</a> notebooks, or even something much more advanced, like a MapReduce engine based on <a href="https://spark.apache.org">Spark</a>," says Michael Francis, a Managing Director in BlackRock's Product Group, which runs the company's investment management platform. "Managing complex Python installations on users' desktops is really hard because everyone ends up with slightly different environments. We have existing environments that do these things, but we needed to make it real, expansive and scalable. Being able to spin that up on demand, tear it down, make that much more dynamic, became a critical thought process for us. It's not so much that we had to solve our main core production problem, it's how do we extend that? How do we evolve?"
</div>
<div class="col2">
<h2>Solution</h2>
Drawing from what they learned during a pilot done last year using <a href="https://www.docker.com">Docker</a> environments, Francis put together a cross-sectional team of 20 to build an investor research web app using <a href="https://kubernetes.io">Kubernetes</a> with the goal of getting it into production within one quarter.
<br><br>
<h2>Impact</h2>
"Our goal was: How do you give people tools rapidly without having to install them on their desktop?" says Francis. And the team hit the goal within 100 days. Francis is pleased with the results and says, "Were going to use this infrastructure for lots of other application workloads as time goes on. Its not just data science; its this style of application that needs the dynamism. But I think were 6-12 months away from making a [large scale] decision. We need to gain experience of running the system in production, we need to understand failure modes and how best to manage operational issues. Whats interesting is that just having this technology there is changing the way our developers are starting to think about their future development."
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"My message to other enterprises like us is you can actually integrate Kubernetes into an existing, well-orchestrated machinery. You dont have to throw out everything you do. And using Kubernetes made a complex problem significantly easier."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Michael Francis, Managing Director, BlackRock</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
One of the management objectives for BlackRock's Product Group employees in 2017 was to "build cool stuff." Led by Managing Director Michael Francis, a cross-sectional group of 20 did just that: They rolled out a full production Kubernetes environment and released a new investor research web app on it. In 100 days.<br><br>
For a company that's the world's largest asset manager, "just equipment procurement can take 100 days sometimes, let alone from inception to delivery," says Karl Wieman, a Senior System Administrator. "It was an aggressive schedule. But it moved the dial."
In fact, the project achieved two goals: It solved a business problem (creating the needed web app) as well as provided real-world, in-production experience with Kubernetes, a cloud-native technology that the company was eager to explore. "It's not so much that we had to solve our main core production problem, it's how do we extend that? How do we evolve?" says Francis. The ultimate success of this project, beyond delivering the app, lies in the fact that "we've managed to integrate a radically new thought process into a controlled infrastructure that we didn't want to change."<br><br>
After all, in its three decades of existence, BlackRock has "a very well-established environment for managing our compute resources," says Francis. "We manage large cluster processes on machines, so we do a lot of orchestration and management for our main production processes in a way that's very cloudish in concept. We're able to manage them in a very controlled, static deployment scheme, and that has given us a huge amount of scalability."<br><br>
Though that works well for the core production, the company has found that some data science workloads require more dynamic access to resources. "It's a very bursty process," says Francis, who is head of data for the company's Aladdin investment management platform division.<br><br>
Aladdin, which connects the people, information and technology needed for money management in real time, is used internally and is also sold as a platform to other asset managers and insurance companies. "We want to be able to give every investor access to data science, meaning <a href="https://www.python.org">Python</a> notebooks, or even something much more advanced, like a MapReduce engine based on <a href="https://spark.apache.org">Spark</a>," says Francis. But "managing complex Python installations on users' desktops is really hard because everyone ends up with slightly different environments. Docker allows us to flatten that environment."
</div>
</section>
<div class="banner3">
<div class="banner3text">
"We manage large cluster processes on machines, so we do a lot of orchestration and management for our main production processes in a way thats very cloudish in concept. Were able to manage them in a very controlled, static deployment scheme, and that has given us a huge amount of scalability."
</div>
</div>
<section class="section3">
<div class="fullcol">
Still, challenges remain. "If you have a shared cluster, you get this storming herd problem where everyone wants to do the same thing at the same time," says Francis. "You could put limits on it, but you'd have to build an infrastructure to define limits for our processes, and the Python notebooks weren't really designed for that. We have existing environments that do these things, but we needed to make it real, expansive, and scalable. Being able to spin that up on demand, tear it down, and make that much more dynamic, became a critical thought process for us."<br><br>
Made up of managers from technology, infrastructure, production operations, development and information security, Francis's team was able to look at the problem holistically and come up with a solution that made sense for BlackRock. "Our initial straw man was that we were going to build everything using <a href="https://www.ansible.com">Ansible</a> and run it all using some completely different distributed environment," says Francis. "That would have been absolutely the wrong thing to do. Had we gone off on our own as the dev team and developed this solution, it would have been a very different product. And it would have been very expensive. We would not have gone down the route of running under our existing orchestration system. Because we don't understand it. These guys [in operations and infrastructure] understand it. Having the multidisciplinary team allowed us to get to the right solutions and that actually meant we didn't build anywhere near the amount we thought we were going to end up building."<br><br>
In search of a solution in which they could manage usage on a user-by-user level, Francis's team gravitated to Red Hat's <a href="https://www.openshift.com">OpenShift</a> Kubernetes offering. The company had already experimented with other cloud-native environments, but the team liked that Kubernetes was open source, and "we felt the winds were blowing in the direction of Kubernetes long term," says Francis. "Typically we make technology choices that we believe are going to be here in 5-10 years' time, in some form. And right now, in this space, Kubernetes feels like the one that's going to be there." Adds Uri Morris, Vice President of Production Operations: "When you see that the non-Google committers to Kubernetes overtook the Google committers, that's an indicator of the momentum."<br><br>
Once that decision was made, the major challenge was figuring out how to make Kubernetes work within BlackRock's existing framework. "It's about understanding how we can operate, manage and support a platform like this, in addition to tacking it onto our existing technology platform," says Project Manager Michael Maskallis. "All the controls we have in place, the change management process, the software development lifecycle, onboarding processes we go through—how can we do all these things?"<br><br>
The first (anticipated) speed bump was working around issues behind BlackRock's corporate firewalls. "One of our challenges is there are no firewalls in most open source software," says Francis. "So almost all install scripts fail in some bizarre way, and pulling down packages doesn't necessarily work." The team ran into these types of problems using <a href="/docs/getting-started-guides/minikube/">Minikube</a> and did a few small pushes back to the open source project.
</div>
</section>
<div class="banner4">
<div class="banner4text">
"Typically we make technology choices that we believe are going to be here in 5-10 years time, in some form. And right now, in this space, Kubernetes feels like the one thats going to be there."
</div>
</div>
<section class="section4">
<div class="fullcol">
There were also questions about service discovery. "You can think of Aladdin as a cloud of services with APIs between them that allows us to build applications rapidly," says Francis. "It's all on a proprietary message bus, which gives us all sorts of advantages but at the same time, how does that play in a third party [platform]?"<br><br>
Another issue they had to navigate was that in BlackRock's existing system, the messaging protocol has different instances in the different development, test and production environments. While Kubernetes enables a more DevOps-style model, it didn't make sense for BlackRock. "I think what we are very proud of is that the ability for us to push into production is still incredibly rapid in this [new] infrastructure, but we have the control points in place, and we didn't have to disrupt everything," says Francis. "A lot of the cost of this development was thinking how best to leverage our internal tools. So it was less costly than we actually thought it was going to be."<br><br>
The project leveraged tools associated with the messaging bus, for example. "The way that the Kubernetes cluster will talk to our internal messaging platform is through a gateway program, and this gateway program already has built-in checks and throttles," says Morris. "We can use them to control and potentially throttle the requests coming in from Kubernetes' very elastic infrastructure to the production infrastructure. We'll continue to go in that direction. It enables us to scale as we need to from the operational perspective."<br><br>
The solution also had to be complementary with BlackRock's centralized operational support team structure. "The core infrastructure components of Kubernetes are hooked into our existing orchestration framework, which means that anyone in our support team has both control and visibility to the cluster using the existing operational tools," Morris explains. "That means that I don't need to hire more people."<br><br>
With those points established, the team created a procedure for the project: "We rolled this out first to a development environment, then moved on to a testing environment and then eventually to two production environments, in that sequential order," says Maskallis. "That drove a lot of our learning curve. We have all these moving parts, the software components on the infrastructure side, the software components with Kubernetes directly, the interconnectivity with the rest of the environment that we operate here at BlackRock, and how we connect all these pieces. If we came across issues, we fixed them, and then moved on to the different environments to replicate that until we eventually ended up in our production environment where this particular cluster is supposed to live."<br><br>
The team had weekly one-hour working sessions with all the members (who are located around the world) participating, and smaller breakout or deep-dive meetings focusing on specific technical details. Possible solutions would be reported back to the group and debated the following week. "I think what made it a successful experiment was people had to work to learn, and they shared their experiences with others," says Vice President and Software Developer Fouad Semaan. Then, Francis says, "We gave our engineers the space to do what they're good at. This hasn't been top-down."
</div>
</section>
<div class="banner5">
<div class="banner5text">
"The core infrastructure components of Kubernetes are hooked into our existing orchestration framework, which means that anyone in our support team has both control and visibility to the cluster using the existing operational tools. That means that I dont need to hire more people."
</div>
</div>
<section class="section5">
<div class="fullcol">
They were led by one key axiom: To stay focused and avoid scope creep. This meant that they wouldn't use features that weren't in the core of Kubernetes and Docker. But if there was a real need, they'd build the features themselves. Luckily, Francis says, "Because of the rapidity of the development, a lot of things we thought we would have to build ourselves have been rolled into the core product. [The package manager <a href="https://helm.sh">Helm</a> is one example]. People have similar problems."<br><br>
By the end of the 100 days, the app was up and running for internal BlackRock users. The initial capacity of 30 users was hit within hours, and quickly increased to 150. "People were immediately all over it," says Francis. In the next phase of this project, they are planning to scale up the cluster to have more capacity.<br><br>
Even more importantly, they now have in-production experience with Kubernetes that they can continue to build on—and a complete framework for rolling out new applications. "We're going to use this infrastructure for lots of other application workloads as time goes on. It's not just data science; it's this style of application that needs the dynamism," says Francis. "Is it the right place to move our core production processes onto? It might be. We're not at a point where we can say yes or no, but we felt that having real production experience with something like Kubernetes at some form and scale would allow us to understand that. I think we're 6-12 months away from making a [large scale] decision. We need to gain experience of running the system in production, we need to understand failure modes and how best to manage operational issues."<br><br>
For other big companies considering a project like this, Francis says commitment and dedication are key: "We got the signoff from [senior management] from day one, with the commitment that we were able to get the right people. If I had to isolate what makes something complex like this succeed, I would say senior hands-on people who can actually drive it make a huge difference." With that in place, he adds, "My message to other enterprises like us is you can actually integrate Kubernetes into an existing, well-orchestrated machinery. You don't have to throw out everything you do. And using Kubernetes made a complex problem significantly easier."
</div>
</section>


@@ -0,0 +1,103 @@
---
title: Bose Case Study
linkTitle: Bose
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
logo: bose_featured_logo.png
featured: false
weight: 2
quote: >
The CNCF Landscape quickly explains what's going on in all the different areas from storage to cloud providers to automation and so forth. This is our shopping cart to build a cloud infrastructure. We can go choose from the different aisles.
---
<div class="banner1" style="background-image: url('/images/CaseStudy_bose_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/bose_logo.png" class="header_logo" style="width:20%;margin-bottom:-1.2%"><br> <div class="subhead" style="margin-top:1%">Bose: Supporting Rapid Development for Millions of IoT Products With Kubernetes
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>Bose Corporation</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Framingham, Massachusetts
</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Consumer Electronics</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%"">
<h2>Challenge</h2>
A household name in high-quality audio equipment, <a href="https://www.bose.com/en_us/index.html">Bose</a> has offered connected products for more than five years, and as that demand grew, the infrastructure had to change to support it. "We needed to provide a mechanism for developers to rapidly prototype and deploy services all the way to production pretty fast,” says Lead Cloud Engineer Josh West. In 2016, the company decided to start building a platform from scratch. The primary goal: "To be one to two steps ahead of the different product groups so that we are never scrambling to catch up with their scale,” says Cloud Architecture Manager Dylan O'Mahony.
<br><br>
<h2>Solution</h2>
From the beginning, the team knew it wanted a microservices architecture. After evaluating and prototyping a couple of orchestration solutions, the team decided to adopt <a href="https://kubernetes.io/">Kubernetes</a> for its scaled IoT Platform-as-a-Service running on AWS. The platform, which also incorporated Prometheus monitoring, launched in production in 2017, serving over 3 million connected products from the get-go. Bose has since adopted a number of other CNCF technologies, including <a href="https://www.fluentd.org/">Fluentd</a>, <a href="https://coredns.io/">CoreDNS</a>, <a href="https://www.jaegertracing.io/">Jaeger</a>, and <a href="https://opentracing.io/">OpenTracing</a>.
<br><br>
<h2>Impact</h2>
With about 100 engineers onboarded, the platform is now enabling 30,000 non-production deployments across dozens of microservices per year. In 2018, there were 1250+ production deployments. Just one production cluster holds 1,800 namespaces and 340 worker nodes. "We had a brand new service taken from concept through coding and deployment all the way to production, including hardening, security testing and so forth, in less than two and a half weeks,” says O'Mahony.
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"At Bose were building an IoT platform that has enabled our physical products. If it werent for Kubernetes and the rest of the CNCF projects being free open source software with such a strong community, we would never have achieved scale, or even gotten to launch&nbsp;on&nbsp;schedule."
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Josh West, Lead Cloud Engineer, Bose</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>A household name in high-quality audio equipment, <a href="https://www.bose.com/en_us/index.html">Bose</a> has offered connected products for more than five years, and as that demand grew, the infrastructure had to change to support it. </h2>
"We needed to provide a mechanism for developers to rapidly prototype and deploy services all the way to production pretty fast,” says Lead Cloud Engineer Josh West. "There were a lot of cloud capabilities we wanted to provide to support our audio equipment and experiences.”<br><br>
In 2016, the company decided to start building an IoT platform from scratch. The primary goal: "To be one to two steps ahead of the different product groups so that we are never scrambling to catch up with their scale,” says Cloud Architecture Manager Dylan O'Mahony. "If they release a new connected product, we want to be already well ahead of being able to handle whatever scale that they're going to throw at us.”<br><br>
From the beginning, the team knew it wanted a microservices architecture and platform as a service. After evaluating and prototyping orchestration solutions, including Mesos and Docker Swarm, the team decided to adopt <a href="https://kubernetes.io/">Kubernetes</a> for its platform running on AWS. Kubernetes was still in 1.5, but already the technology could do much of what the team wanted and needed for the present and the future. For West, that meant having storage and network handled. O'Mahony points to Kubernetes' portability in case Bose decides to go multi-cloud.<br><br>
"Bose is a company that looks out for the long term,” says West. "Going with a quick commercial off-the-shelf solution might've worked for that point in time, but it would not have carried us forward, which is what we needed from Kubernetes and the CNCF.”
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_bose_banner3.jpg')">
<div class="banner3text">
"Everybody on the team thinks in terms of automation, leaning out the processes, getting things done as quickly as possible. When you step back and look at what it means for a 50-plus-year-old speaker company to have that sort of culture, it really is quite incredible, and I think the tools that we use and the foundation that weve built with them is a huge piece of that."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Dylan OMahony, Cloud Architecture Manager, Bose</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
The team spent time working on choosing tooling to make the experience easier for developers. "Our developers interact with tools provided by our Ops team, and the Ops team run all of their tooling on top of Kubernetes,” says O'Mahony. "We try not to make direct Kubernetes access the only way. In fact, ideally, our developers wouldn't even need to know that they're running on Kubernetes.”<br><br>
The platform, which also incorporated <a href="https://prometheus.io/">Prometheus</a> monitoring from the beginning, backdoored its way into production in 2017, serving over 3 million connected products from the get-go. "Even though the speakers and the products that we were designing this platform for were still quite a ways away from being launched, we did have some connected speakers on the market,” says O'Mahony. "We basically started to point certain features of those speakers and the apps that go with those speakers to this platform.”<br><br>
Today, just one of Bose's production clusters holds 1,800 namespaces/discrete services and 340 nodes. With about 100 engineers now onboarded, the platform infrastructure is now enabling 30,000 non-production deployments across dozens of microservices per year. In 2018, there were 1250+ production deployments. It's a staggering improvement over some of Bose's previous deployment processes, which supported far fewer deployments and services.
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_bose_banner4.jpg');width:100%">
<div class="banner4text">
"The CNCF Landscape quickly explains whats going on in all the different areas from storage to cloud providers to automation and so forth. This is our shopping cart to build a cloud infrastructure. We can go choose from the different aisles." <br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Josh West, Lead Cloud Engineer, Bose</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
"We had a brand new service deployed from concept through coding and deployment all the way to production, including hardening, security testing and so forth, in less than two and a half weeks,” says OMahony. "Everybody thinks in terms of automation, leaning out the processes, getting things done as quickly as possible. When you step back and look at what it means for a 50-plus-year-old speaker company to have that sort of culture, it really is quite incredible, and I think the tools that we use and the foundation that weve built is a huge piece of that.”<br><br>
Many of those technologies—such as <a href="https://www.fluentd.org/">Fluentd</a>, <a href="https://coredns.io/">CoreDNS</a>, <a href="https://www.jaegertracing.io/">Jaeger</a>, and <a href="https://opentracing.io/">OpenTracing</a>—come from the <a href="https://landscape.cncf.io/">CNCF Landscape</a>, which West and OMahony have relied upon throughout Boses cloud native journey. "The CNCF Landscape quickly explains whats going on in all the different areas from storage to cloud providers to automation and so forth,” says West. "This is our shopping cart to build a cloud infrastructure. We can go choose from the different aisles.” <br><br>
And, he adds, "If it werent for Kubernetes and the rest of the CNCF projects being free open source software with such a strong community, we would never have achieved scale, or even gotten to launch on schedule.”<br><br>
Another benefit of going cloud native: "We are even attracting much more talent into Bose because were so involved with the <a href="http://careers.bose.com">CNCF Landscape</a>,” says West. (Yes, theyre hiring.) "Its just enabled so many people to do so many great things and really brought Bose into the future of cloud.”
</div>
<div class="banner5" >
<div class="banner5text">
"We have a lot going on to support many more of our business units at Bose in addition to the consumer electronics division, which we currently do. Its only because of the cloud native landscape and the tools and the features that are available that we can provide such a fantastic cloud platform for all the developers and divisions that are trying to enable some pretty amazing experiences."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Dylan OMahony, Cloud Architecture Manager, Bose</span></div>
</div>
<div class="fullcol">
In the coming year, the team wants to work on service mesh and serverless, as well as expansion around the world. "Getting our latency down by going multi-region is going to be a big focus for us,” says O'Mahony. "In order to make sure that our customers in Japan, Australia, and everywhere else are having a good experience, we want to have points of presence closer to them. It's never been done at Bose before.”<br><br>
That won't stop them, because the team is all about lofty goals. "We want to get to billions of connected products!” says West. "We have a lot going on to support many more of our business units at Bose in addition to the consumer electronics division, which we currently do. It's only because of the cloud native landscape and the tools and the features that are available that we can provide such a fantastic cloud platform for all the developers and divisions that are trying to enable some pretty amazing experiences.”<br><br>
In fact, given the scale the platform is already supporting, says O'Mahony, "doing anything other than Kubernetes, I think, would be folly at this point.”
</div>
</section>
</body>
</html>


@@ -0,0 +1,114 @@
---
title: Box Case Study
case_study_styles: true
cid: caseStudies
css: /css/style_box.css
video: https://www.youtube.com/embed/of45hYbkIZs?autoplay=1
quote: >
Kubernetes has the opportunity to be the new cloud platform. The amount of innovation that's going to come from being able to standardize on Kubernetes as a platform is incredibly exciting - more exciting than anything I've seen in the last 10 years of working on the cloud.
---
<div class="banner1">
<h1>CASE STUDY: <img src="/images/box_logo.png" width="10%" style="margin-bottom:-6px"><br>
<div class="subhead">An Early Adopter Envisions
a New Cloud Platform</div>
</h1>
</div>
<div class="details">
Company &nbsp;<b>Box</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Redwood City, California</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Technology</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1">
<h2>Challenge</h2>
Founded in 2005, the enterprise content management company allows its more than 50 million users to manage content in the cloud. <a href="https://www.box.com/home">Box</a> was built primarily with bare metal inside the company's own data centers, with a monolithic PHP code base. As the company was expanding globally, it needed to focus on "how we run our workload across many different cloud infrastructures from bare metal to public cloud," says Sam Ghods, Cofounder and Services Architect of Box. "It's been a huge challenge because different clouds, especially bare metal, have very different interfaces."
<br>
</div>
<div class="col2">
<h2>Solution</h2>
Over the past couple of years, Box has been decomposing its infrastructure into microservices, and became an early adopter of, as well as contributor to, <a href="http://kubernetes.io/">Kubernetes</a> container orchestration. Kubernetes, Ghods says, has allowed Box's developers to "target a universal set of concepts that are portable across all clouds."<br><br>
<h2>Impact</h2>
"Before Kubernetes," Ghods says, "our infrastructure was so antiquated it was taking us more than six months to deploy a new microservice. Today, a new microservice takes less than five days to deploy. And were working on getting it to an hour."
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"We looked at a lot of different options, but Kubernetes really stood out....the fact that on day one it was designed to run on bare metal just as well as Google Cloud meant that we could actually migrate to it inside of our data centers, and then use those same tools and concepts to run across public cloud providers as&nbsp;well."<br><br><span style="font-size:15px;letter-spacing:0.08em">- SAM GHOUDS, CO-FOUNDER AND SERVICES ARCHITECT OF BOX</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>In the summer of 2014, Box was feeling the pain of a decade's worth of hardware and software infrastructure that wasn't keeping up with the company's needs.</h2>
A platform that allows its more than 50 million users (including governments and big businesses like <a href="https://www.ge.com/">General Electric</a>) to manage and share content in the cloud, Box was originally a <a href="http://php.net/">PHP</a> monolith of millions of lines of code built exclusively with bare metal inside of its own data centers. It had already begun to slowly chip away at the monolith, decomposing it into microservices. And "as we've been expanding into regions around the globe, and as the public cloud wars have been heating up, we've been focusing a lot more on figuring out how we run our workload across many different environments and many different cloud infrastructure providers," says Box Cofounder and Services Architect Sam Ghods. "It's been a huge challenge thus far because all these different providers, especially bare metal, have very different interfaces and ways in which you work with them."<br><br>
Box's cloud native journey accelerated that June, when Ghods attended <a href="https://www.docker.com/events/dockercon">DockerCon</a>. The company had come to the realization that it could no longer run its applications only off bare metal, and was researching containerizing with Docker, virtualizing with OpenStack, and supporting public cloud.<br><br>
At that conference, Google announced the release of its Kubernetes container management system, and Ghods was won over. "We looked at a lot of different options, but Kubernetes really stood out, especially because of the incredibly strong team of <a href="https://research.google.com/pubs/pub43438.html">Borg</a> veterans and the vision of having a completely infrastructure-agnostic way of being able to run cloud software," he says, referencing Google's internal container orchestrator Borg. "The fact that on day one it was designed to run on bare metal just as well as <a href="https://cloud.google.com/">Google Cloud</a> meant that we could actually migrate to it inside of our data centers, and then use those same tools and concepts to run across public cloud providers as well."<br><br>
Another plus: Ghods liked that <a href="https://kubernetes.io/">Kubernetes</a> has a universal set of API objects like pod, service, replica set and deployment object, which created a consistent surface to build tooling against. "Even PaaS layers like <a href="https://www.openshift.com/">OpenShift</a> or <a href="http://deis.io/">Deis</a> that build on top of Kubernetes still treat those objects as first-class principles," he says. "We were excited about having these abstractions shared across the entire ecosystem, which would result in a lot more momentum than we saw in other potential solutions."<br><br>
Box deployed Kubernetes in a cluster in a production data center just six months later. Kubernetes was then still pre-beta, on version 0.11. They started small: The very first thing Ghods's team ran on Kubernetes was a Box API checker that confirms Box is up. "That was just to write and deploy some software to get the whole pipeline functioning," he says. Next came some daemons that process jobs, which was "nice and safe because if they experienced any interruptions, we wouldn't fail synchronous incoming requests from customers."
</div>
</section>
<div class="banner3">
<div class="banner3text">
"As weve been expanding into regions around the globe, and as the public cloud wars have been heating up, weve been focusing a lot more on figuring out how we [can have Kubernetes help] run our workload across many different environments and many different cloud infrastructure providers."
</div>
</div>
<section class="section3">
<div class="fullcol">
The first live service, which the team could route to and ask for information, was launched a few months later. At that point, Ghods says, "We were comfortable with the stability of the Kubernetes cluster. We started to port some services over, then we would increase the cluster size and port a few more, and that's ended up at about 100 servers in each data center that are dedicated purely to Kubernetes. And that's going to be expanding a lot over the next 12 months, probably to many hundreds if not thousands."<br><br>
While observing teams who began to use Kubernetes for their microservices, "we immediately saw an uptick in the number of microservices being released," Ghods&nbsp;notes. "There was clearly a pent-up demand for a better way of building software through microservices, and the increase in agility helped our developers be more productive and make better architectural choices."
<br><br><div class="quote">"There was clearly a pent-up demand for a better way of building software through microservices, and the increase in agility helped our developers be more productive and make better architectural choices."</div><br>
Ghods reflects that as early adopters, Box had a different journey from what companies experience now. "We were definitely lock step with waiting for certain things to stabilize or features to get released," he says. "In the early days we were doing a lot of contributions [to components such as kubectl apply] and waiting for Kubernetes to release each of them, and then we'd upgrade, contribute more, and go back and forth several times. The entire project took about 18 months from our first real deployment on Kubernetes to having general availability. If we did that exact same thing today, it would probably be no more than six."<br><br>
In any case, Box didn't have to make too many modifications to Kubernetes for it to work for the company. "The vast majority of the work our team has done to implement Kubernetes at Box has been making it work inside of our existing (and often legacy) infrastructure," says Ghods, "such as upgrading our base operating system from RHEL6 to RHEL7 or integrating it into <a href="https://www.nagios.org/">Nagios</a>, our monitoring infrastructure. But overall Kubernetes has been remarkably flexible with fitting into many of our constraints, and we've been running it very successfully on our bare metal infrastructure."<br><br>
Perhaps the bigger challenge for Box was a cultural one. "Kubernetes, and cloud native in general, represents a pretty big paradigm shift, and it's not very incremental," Ghods says. "We're essentially making this pitch that Kubernetes is going to solve everything because it does things the right way and everything is just suddenly better. But it's important to keep in mind that it's not nearly as proven as many other solutions out there. You can't say how long this or that company took to do it because there just aren't that many yet. Our team had to really fight for resources because our project was a bit of a moonshot."
</div>
</section>
<div class="banner4">
<div class="banner4text">
"The vast majority of the work our team has done to implement Kubernetes at Box has been making it work inside of our existing [and often legacy] infrastructure....overall Kubernetes has been remarkably flexible with fitting into many of our constraints, and weve been running it very successfully on our bare metal infrastructure."
</div>
</div>
<section class="section4">
<div class="fullcol">
Having learned from experience, Ghods offers these two pieces of advice for companies going through similar challenges:
<h2>1. Deliver early and often.</h2> Service discovery was a huge problem for Box, and the team had to decide whether to build an interim solution or wait for Kubernetes to natively satisfy Box's unique requirements. After much debate, "we just started focusing on delivering something that works, and then dealing with potentially migrating to a more native solution later," Ghods says. "The above-all-else target for the team should always be to serve real production use cases on the infrastructure, no matter how trivial. This helps keep the momentum going both for the team itself and for the organizational perception of the project." </br></br>
<h2>2. Keep an open mind about what your company has to abstract away from developers and what it&nbsp;doesn't.</h2> Early on, the team built an abstraction on top of Docker files to help ensure that images had the right security updates.
This turned out to be superfluous work, since container images are considered immutable and you can easily scan them post-build to ensure they do not contain vulnerabilities. Because managing infrastructure through containerization is such a discontinuous leap, it's better to start by interacting directly with the native tools and learning their unique advantages and caveats. An abstraction should be built only after a practical need for it arises.</br></br>
In the end, the impact has been powerful. "Before Kubernetes," Ghods says, "our infrastructure was so antiquated it was taking us more than six months to deploy a new microservice. Now a new microservice takes less than five days to deploy. And we're working on getting it to an hour. Granted, much of that six months was due to how broken our systems were, but bare metal is intrinsically a difficult platform to support unless you have a system like Kubernetes to help manage&nbsp;it."</br></br>
By Ghods's estimate, Box is still several years away from his goal of being a 90-plus percent Kubernetes shop. "We're very far along on having a mission-critical, stable Kubernetes deployment that provides a lot of value," he says. "Right now about five percent of all of our compute runs on Kubernetes, and I think in the next six months we'll likely be between 20 to 50 percent. We're working hard on enabling all stateless service use cases, and shift our focus to stateful services after&nbsp;that."
</div>
</section>
<div class="banner5">
<div class="banner5text">
"Ghods predicts that Kubernetes has the opportunity to be the new cloud platform. '...because its a never-before-seen level of automation and intelligence surrounding infrastructure that is portable and agnostic to every way you can run your infrastructure.'"
</div>
</div>
<section class="section5">
<div class="fullcol">
In fact, that's what he envisions across the industry: Ghods predicts that Kubernetes has the opportunity to be the new cloud platform. Kubernetes provides an API consistent across different cloud platforms including bare metal, and "I don't think people have seen the full potential of what's possible when you can program against one single interface," he says. "The same way <a href="https://aws.amazon.com/">AWS</a> changed infrastructure so that you don't have to think about servers or cabinets or networking equipment anymore, Kubernetes enables you to focus exclusively on the containers that you're running, which is pretty exciting. That's the vision."</br></br>
Ghods points to projects that are already in development or recently released for Kubernetes as a cloud platform: cluster federation, the Dashboard UI, and <a href="https://coreos.com/">CoreOS</a>'s etcd operator. "I honestly believe it's the most exciting thing I've seen in cloud infrastructure," he says, "because it's a never-before-seen level of automation and intelligence surrounding infrastructure that is portable and agnostic to every way you can run your infrastructure."</br></br>
Box, with its early decision to use bare metal, embarked on its Kubernetes journey out of necessity. But Ghods says that even if companies don't have to be agnostic about cloud providers today, Kubernetes may soon become the industry standard, as more and more tooling and extensions are built around the API.</br></br>
"The same way it doesn't make sense to deviate from Linux because it's such a standard," Ghods says, "I think Kubernetes is going down the same path. It is still early days—the documentation still needs work and the user experience for writing and publishing specs to the Kubernetes clusters is still rough. When you're on the cutting edge you can expect to bleed a little. But the bottom line is, this is where the industry is going. Three to five years from now it's really going to be shocking if you run your infrastructure any other way."
</div>
</section>


@@ -0,0 +1,157 @@
---
title: 案例研究Buffer
case_study_styles: true
cid: caseStudies
css: /css/style_buffer.css
---
<!-- <div class="banner1">
<h1>CASE STUDY: <img src="/images/buffer.png" width="18%" style="margin-bottom:-5px;margin-left:10px;"><br>
<div class="subhead">Making Deployments Easy for a Small, Distributed Team</div>
</h1>
</div> -->
<div class="banner1">
<h1>案例研究: <img src="/images/buffer.png" width="18%" style="margin-bottom:-5px;margin-left:10px;"><br>
<div class="subhead">使小型分布式团队轻松部署</div>
</h1>
</div>
<!-- <div class="details">
Company&nbsp;<b>Buffer</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Around the World</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Social Media Technology</b>
</div> -->
<div class="details">
公司&nbsp;<b>Buffer</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;位置 &nbsp;<b>全球</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;行业 &nbsp;<b>社交媒体技术公司</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1">
<!-- <h2>Challenge</h2>
With a small but fully distributed team of 80 working across almost a dozen time zones, Buffer—which offers social media management to agencies and marketers—was looking to solve its "classic monolithic code base problem," says Architect Dan Farrelly. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as&nbsp;necessary." -->
<h2>挑战</h2>
Buffer 拥有一支80人的分布式团队他们遍布全球近十几个时区。这样一个为代理商和营销人员提供社交媒体管理的公司希望解决其“典型的单一庞大编码基数问题”架构师 Dan Farrelly 这样说。“我们希望拥有一种流动性的基础架构,开发人员可以创建一个应用程序,可以根据需要部署并横向扩展它”。
</div>
<div class="col2">
<!-- <h2>Solution</h2>
Embracing containerization, Buffer moved its infrastructure from Amazon Web Services Elastic Beanstalk to Docker on AWS, orchestrated with&nbsp;Kubernetes. -->
<h2>解决方案</h2>
拥抱容器化Buffer 将其基础设施从 AWS 上的 Elastic Beanstalk 迁移到由 Kubernetes 负责编排的 Docker 上。
<br>
<br>
<!-- <h2>Impact</h2>
The new system "leveled up our ability with deployment and rolling out new changes," says Farrelly. "Building something on your computer and knowing that its going to work has shortened things up a lot. Our feedback cycles are a lot faster now&nbsp;too." -->
<h2>影响</h2>
Farrelly 说,新系统“提高了我们将新变化进行部署和上线的能力”。“在自己的计算机上构建一些东西,并且知道它是可用的,这已经让事情简单了很多;而且我们的反馈周期现在也快了很多。”
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
<!-- "Its amazing that we can use the Kubernetes solution off the shelf with our team. And it just keeps getting better. Before we even know that we need something, its there in the next release or its coming in the next few months."<br><br><span style="font-size:16px;letter-spacing:2px;">- DAN FARRELLY, BUFFER ARCHITECT</span> -->
“我们的团队能够直接使用既有的 Kubernetes 解决方案,这太棒了,而且它还在不断改进中。在意识到我们需要某些功能之前,它就出现在下一个版本中,或者在未来几个月内出现。”<br><br><span style="font-size:16px;letter-spacing:2px;">- DAN FARRELLY, BUFFER ARCHITECT</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<!-- <h2>Dan Farrelly uses a carpentry analogy to explain the problem his company, <a href="https://buffer.com">Buffer</a>, began having as its team of developers grew over the past few years.</h2> -->
<h2>Dan Farrelly 用木工类比来解释他的公司<a href="https://buffer.com"> Buffer </a>,随着过去几年的发展,公司开始有这个问题。</h2>
<!-- "If youre building a table by yourself, its fine," the companys architect says. "If you bring in a second person to work on the table, maybe that person can start sanding the legs while youre sanding the top. But when you bring a third or fourth person in, someone should probably work on a different table." Needing to work on more and more different tables led Buffer on a path toward microservices and containerization made possible by Kubernetes. -->
“如果你自己做一张桌子,这很好!”公司的架构师说。“如果你请另一个人一起来做这个桌子,也许这个人可以在你抛光桌面时开始对腿进行抛光。但是,当你把第三个或第四个人带进来时,有人也许应该在另外一张桌子上工作。”需要处理越来越多不同的桌子,使 Buffer 走上了 Kubernetes 实现微服务和容器化的道路。
<br><br>
<!-- Since around 2012, Buffer had already been using <a href="https://aws.amazon.com/elasticbeanstalk/">Elastic Beanstalk</a>, the orchestration service for deploying infrastructure offered by <a href="https://aws.amazon.com">Amazon Web Services</a>. "We were deploying a single monolithic <a href="http://php.net/manual/en/intro-whatis.php">PHP</a> application, and it was the same application across five or six environments," says Farrelly. "We were very much a product-driven company. -->
自2012年左右以来Buffer 开始使用<a href="https://aws.amazon.com/elasticbeanstalk/"> Elastic Beanstalk </a>,这是<a href="https://aws.amazon.com">亚马逊网络服务</a>提供的网络基础设施编排服务。“我们部署了一个单一的<a href="http://php.net/manual/en/intro-whatis.php">PHP</a>应用程序,它是在五六个环境中相同的应用程序”Farrelly 说。“我们在很大程度上是一家产品驱动型的公司。”
<!-- It was all about shipping new features quickly and getting things out the door, and if something was not broken, we didnt spend too much time on it. If things were getting a little bit slow, wed maybe use a faster server or just scale up one instance, and it would be good enough. Wed move on." -->
这一切都是为了尽快给应用推出新功能并进行交付,如果能够正常运行,我们就不会花太多时间在上面。如果产品变得有点慢,我们可能会使用更快的服务器或只是增加一个实例,这些就已经足够了。然后继续前进。
<br><br>
<!-- But things came to a head in 2016. With the growing number of committers on staff, Farrelly and Buffers then-CTO, Sunil Sadasivan, decided it was time to re-architect and rethink their infrastructure. "It was a classic monolithic code base problem," says Farrelly.<br><br>Some of the companys team was already successfully using <a href="https://www.docker.com">Docker</a> in their development environment, but the only application running on Docker in production was a marketing website that didnt see real user traffic. They wanted to go further with Docker, and the next step was looking at options for&nbsp;orchestration. -->
但事情在2016年就到了头。随着应用提交修改的数量不断增加Farrelly 和 Buffer 当时的首席技术官 Sunil Sadasivan 决定,是重新思考和构建其基础架构的时候了。“这是一个典型的单一庞大编码基数问题”Farrelly说。公司的一些团队已经在开发环境中成功使用<a href="https://www.docker.com">Docker</a>,但在生产环境中,运行在 Docker 上的唯一应用程序是一个看不到真实用户流量的营销网站。他们希望在使用 Docker 上更进一步,下一步是寻找业务流程的选项。
</div>
</section>
<div class="banner3">
<div class="banner3text">
<!-- And all the things Kubernetes did well suited Buffers needs. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary," says Farrelly. "We quickly used some scripts to set up a couple of test clusters, we built some small proof-of-concept applications in containers, and we deployed things within an hour. We had very little experience in running containers in production. It was amazing how quickly we could get a handle on it&nbsp;[Kubernetes]." -->
Kubernetes 所做的一切都很好地契合了 Buffer 的需求。Farrelly 说:“我们希望拥有一种流动性的基础架构,开发人员可以创建一个应用,部署它,并在必要时对它进行水平扩展。”“我们很快就用一些脚本搭建了几个测试集群,在容器中构建了一些小型的概念验证应用,并在一个小时内就完成了部署。我们在生产环境中运行容器的经验很少,令人惊奇的是,我们这么快就掌握了 Kubernetes。”
</div>
</div>
<section class="section3">
<div class="fullcol">
<!-- First they considered <a href="https://mesosphere.com">Mesosphere</a>, <a href="https://dcos.io">DC/OS</a> and <a href="https://aws.amazon.com/ecs/">Amazon Elastic Container Service</a> (which their data systems team was already using for some data pipeline jobs). While they were impressed by these offerings, they ultimately went with Kubernetes. -->
首先,他们考察了<a href="https://mesosphere.com">Mesosphere</a>、<a href="https://dcos.io">DC/OS</a> 和 <a href="https://aws.amazon.com/ecs/">Amazon Elastic Container Service</a>(他们的数据系统团队已经将后者用于某些数据管道作业)。虽然这些产品都给他们留下了不错的印象,但他们最终还是选择了 Kubernetes。
<!-- "We run on AWS still, so spinning up, creating services and creating load balancers on demand for us without having to configure them manually was a great way for our team to get into this," says Farrelly. "We didnt need to figure out how to configure this or that, especially coming from a former Elastic Beanstalk environment that gave us an automatically-configured load balancer. I really liked Kubernetes controls of the command line. It just took care of ports. It was a lot more flexible. Kubernetes was designed for doing what it does, so it does it very well." -->
Farrelly 说:“我们的应用仍然运行在 AWS 上,而通过 Kubernetes,我们的团队无需手动配置就能按需创建服务和负载均衡器,这对我们团队来说是非常好的入门方式。”“我们不需要琢磨如何配置这个或那个,特别是相比之前的 Elastic Beanstalk 环境,它为我们提供了自动配置好的负载均衡器。我真的很喜欢 Kubernetes 的命令行控制方式。它会自动处理好端口,而且灵活得多。Kubernetes 就是为它所做的事情而设计的,所以它做得非常好。”
<br><br>
<!-- And all the things Kubernetes did well suited Buffers needs. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary," says Farrelly. "We quickly used some scripts to set up a couple of test clusters, we built some small proof-of-concept applications in containers, and we deployed things within an hour. We had very little experience in running containers in production. It was amazing how quickly we could get a handle on it [Kubernetes]." -->
Kubernetes 所做的一切都很好地契合了 Buffer 的需求。Farrelly 说:“我们希望拥有一种流动性的基础架构,开发人员可以创建一个应用,部署它,并在必要时对它进行水平扩展。”“我们很快就用一些脚本搭建了几个测试集群,在容器中构建了一些小型的概念验证应用,并在一个小时内就完成了部署。我们在生产环境中运行容器的经验很少,令人惊奇的是,我们这么快就掌握了 Kubernetes。”
<br><br>
<!-- Above all, it provided a powerful solution for one of the companys most distinguishing characteristics: their remote team thats spread across a dozen different time zones. "The people with deep knowledge of our infrastructure live in time zones different from our peak traffic time zones, and most of our product engineers live in other places," says Farrelly. "So we really wanted something where anybody could get a grasp of the system early on and utilize it, and not have to worry that the deploy engineer is asleep. Otherwise people would sit around for 12 to 24 hours for something. Its been really cool to see people moving much faster." -->
最重要的是,它为公司最显著的特点之一提供了强大的解决方案:他们的远程团队分布在十几个不同的时区。Farrelly 说:“对我们的基础设施有深入了解的人,所在时区与我们的流量高峰时区并不相同,而我们的大多数产品工程师也住在其他地方。所以我们真的希望有一套任何人都能尽早上手并使用的系统,而不必担心负责部署的工程师正在睡觉,否则大家可能要为一件事情干等 12 到 24 个小时。看到大家的工作节奏快了很多,真的很酷。”
<br><br>
<!-- With a relatively small engineering team—just 25 people, and only a handful working on infrastructure, with the majority front-end developers—Buffer needed "something robust for them to deploy whatever they wanted," says Farrelly. Before, "it was only a couple of people who knew how to set up everything in the old way. With this system, it was easy to review documentation and get something out extremely quickly. It lowers the bar for us to get everything in production. We don't have the big team to build all these tools or manage the infrastructure like other larger companies&nbsp;might." -->
Farrelly 说,Buffer 的工程团队相对较小,只有 25 人,其中只有少数人负责基础设施,大多数是前端开发人员,因此他们需要“一套足够稳健的系统,让他们可以部署任何想部署的东西”。以前,“只有几个人知道怎么用旧的方式把一切配置好。有了这套系统,只要查阅文档,就能非常快地把东西发布出去。它降低了我们把一切投入生产环境的门槛。我们没有像其他大公司那样的大团队来构建所有这些工具或管理基础架构。”
</div>
</section>
<div class="banner4">
<div class="banner4text">
<!-- "In our old way of working, the feedback loop was a lot longer, and it was delicate because if you deployed something, the risk was high to potentially break something else," Farrelly says. "With the kind of deploys that we built around Kubernetes, we were able to detect bugs and fix them, and get them deployed super fast. The second someone is fixing [a bug], its out the&nbsp;door." -->
Farrelly 说:“在我们以往的工作方式中,反馈周期要长得多,而且很脆弱,因为一旦你部署了某些东西,就很有可能破坏其他东西。”“借助我们围绕 Kubernetes 构建的这种部署方式,我们能够发现 Bug 并修复它们,并以极快的速度完成部署。前一秒有人刚修复了一个漏洞,下一秒它就上线运行了。”
</div>
</div>
<section class="section4">
<div class="fullcol">
<!-- To help with this, Buffer developers wrote a deploy bot that wraps the Kubernetes deploy process and can be used by every team. "Before, our data analysts would update, say, a <a href="https://www.python.org">Python</a> analysis script and have to wait for the lead on that team to click the button and deploy it," Farrelly explains. "Now our data analysts can make a change, enter a <a href="https://slack.com">Slack</a> command, /deploy, and it goes out instantly. They dont need to wait on these slow turnaround times. They dont even know where its running; it doesnt matter." -->
为了帮助解决这一问题,Buffer 的开发人员编写了一个部署机器人,把 Kubernetes 的部署过程包装起来,每个团队都可以使用。“以前,我们的数据分析师更新了某个<a href="https://www.python.org"> Python </a>分析脚本后,必须等该团队的负责人点击按钮来部署它,”Farrelly 解释道。“现在,我们的数据分析师可以自己做修改,输入一条<a href="https://slack.com"> Slack </a>命令 /deploy,改动就会立即发布出去。他们不需要再等这些缓慢的周转时间,甚至不知道它运行在哪里;这些都不重要了。”
<br><br>
<!-- One of the first applications the team built from scratch using Kubernetes was a new image resizing service. As a social media management tool that allows marketing teams to collaborate on posts and send updates across multiple social media profiles and networks, Buffer has to be able to resize photographs as needed to meet the varying limitations of size and format posed by different social networks. "We always had these hacked together solutions," says Farrelly. -->
团队使用 Kubernetes 从头构建的最早的应用程序之一,是一个新的图像尺寸调整服务。作为一款社交媒体管理工具,Buffer 让营销团队可以协作发帖,并向多个社交媒体账号和网络发送更新,因此它必须能够按需调整照片尺寸,以满足不同社交网络对大小和格式的各种限制。Farrelly 说:“我们以前用的一直是东拼西凑的解决方案。”
<br><br>
<!-- To create this new service, one of the senior product engineers was assigned to learn Docker and Kubernetes, then build the service, test it, deploy it and monitor it—which he was able to do relatively quickly. "In our old way of working, the feedback loop was a lot longer, and it was delicate because if you deployed something, the risk was high to potentially break something else," Farrelly says. "With the kind of deploys that we built around Kubernetes, we were able to detect bugs and fix them, and get them deployed super fast. The second someone is fixing [a bug], its out the door." -->
为了创建这项新服务,一位高级产品工程师被指派去学习 Docker 和 Kubernetes,然后构建、测试、部署并监控这项服务,而他相当快地完成了这些工作。Farrelly 说:“在我们以往的工作方式中,反馈周期要长得多,而且很脆弱,因为一旦你部署了某些东西,就很有可能破坏其他东西。”“借助我们围绕 Kubernetes 构建的这种部署方式,我们能够发现 Bug 并修复它们,并以极快的速度完成部署。前一秒有人刚修复了一个漏洞,下一秒它就上线运行了。”
<br><br>
<!-- Plus, unlike with their old system, they could scale things horizontally with one command. "As we rolled it out," Farrelly says, "we could anticipate and just click a button. This allowed us to deal with the demand that our users were placing on the system and easily scale it to handle it." -->
此外,与旧系统不同,他们只需一条命令就能进行水平扩展。“在我们推出这项服务时,”Farrelly 说,“我们可以提前预判,然后只需点击一个按钮。这让我们能够应对用户给系统带来的需求,并轻松地通过扩容来处理。”
<br><br>
<!-- Another thing they werent able to do before was a canary deploy. This new capability "made us so much more confident in deploying big changes," says Farrelly. "Before, it took a lot of testing, which is still good, but it was also a lot of fingers crossed. And this is something that gets run 800,000 times a day, the core of our business. If it doesnt work, our business doesnt work. In a Kubernetes world, I can do a canary deploy to test it for 1 percent and I can shut it down very quickly if it isnt working. This has leveled up our ability to deploy and roll out new changes quickly while reducing&nbsp;risk." -->
他们以前做不到的另一件事是金丝雀部署。Farrelly 说,这项新能力“让我们在部署重大变更时更有信心。”“以前,我们需要做大量测试,这固然很好,但很多时候也只能祈祷别出问题。而这项服务每天要运行 80 万次,是我们业务的核心。如果它不能正常工作,我们的业务也就无法运转。在 Kubernetes 的世界里,我可以先做金丝雀部署,用 1% 的流量进行测试,如果有问题,我可以非常快速地把它下线。这让我们能够在降低风险的同时,更快地部署和推出新的变更。”
</div>
</section>
<div class="banner5">
<div class="banner5text">
<!-- "If you want to run containers in production, with nearly the power that Google uses internally, this [Kubernetes] is a great way to do that," Farrelly says. "Were a relatively small team thats actually running Kubernetes, and weve never run anything like it before. So its more approachable than you might think. Thats the one big thing that I tell people who are experimenting with it. Pick a couple of things, roll it out, kick the tires on this for a couple of months and see how much it can handle. You start learning a lot this&nbsp;way." -->
Farrelly 说:“如果你想在生产环境中运行容器,并获得接近谷歌内部所用的那种能力,那么 Kubernetes 是一个很好的途径。”“我们是一个实际在运行 Kubernetes 的相对较小的团队,而且我们之前从未运行过类似的东西。所以它比你想象的更容易上手,这是我想告诉正在尝试它的人的最重要的一点。挑几个应用,把它们跑起来,花几个月时间试试它的深浅,看看它能承受多少。通过这种方式,你会学到很多东西。”
</div>
</div>
<section class="section5">
<div class="fullcol">
<!-- By October 2016, 54 percent of Buffers traffic was going through their Kubernetes cluster. "Theres a lot of our legacy functionality that still runs alright, and those parts might move to Kubernetes or stay in our old setup forever," says Farrelly. But the company made the commitment at that time that going forward, "all new development, all new features, will be running on Kubernetes." -->
到 2016 年 10 月,Buffer 已有 54% 的流量通过其 Kubernetes 集群。Farrelly 说:“我们有很多遗留功能仍然运行良好,这些部分可能会迁移到 Kubernetes,也可能永远留在旧的环境里。”但公司当时承诺,今后“所有新的开发、所有新功能,都将运行在 Kubernetes 上。”
<br><br>
<!-- The plan for 2017 is to move all the legacy applications to a new Kubernetes cluster, and run everything theyve pulled out of their old infrastructure, plus the new services theyre developing in Kubernetes, on another cluster. "I want to bring all the benefits that weve seen on our early services to everyone on the team," says Farrelly. -->
2017 年的计划是把所有遗留应用程序迁移到一个新的 Kubernetes 集群上,并把他们从旧基础架构中迁出的所有内容,连同正在 Kubernetes 中开发的新服务,运行在另一个集群上。Farrelly 说:“我想把我们在早期那些服务上看到的所有好处,带给团队中的每一个人。”
<br><br>
<h2>
<!-- For Buffers engineers, its an exciting process. "Every time were deploying a new service, we need to figure out: OK, whats the architecture? How do these services communicate? Whats the best way to build this service?" Farrelly says. "And then we use the different features that Kubernetes has to glue all the pieces together. Its enabling us to experiment as were learning how to design a service-oriented architecture. Before, we just wouldnt have been able to do it. This is actually giving us a blank white board so we can do whatever we want on it." -->
对于 Buffer 的工程师来说,这是一个令人兴奋的过程。“每次部署新服务时,我们都需要弄清楚:好,体系结构是什么?这些服务之间如何通信?构建这个服务的最佳方式是什么?”Farrelly 说。“然后我们再用 Kubernetes 的各种特性把所有部分粘合起来。在我们学习如何设计面向服务的体系结构时,它让我们可以不断试验。放在以前,我们根本做不到这一点。它实际上给了我们一块空白的白板,我们可以在上面做任何想做的事情。”
</h2>
<!-- Part of that blank slate is the flexibility that Kubernetes offers should the time come when Buffer may want or need to change its cloud. "Its cloud agnostic so maybe one day we could switch to Google or somewhere else," Farrelly says. "Were very deep in Amazon but its nice to know we could move away if we need to." -->
这块“白板”的一部分,就是当 Buffer 将来想要或需要更换云服务商时,Kubernetes 所能提供的灵活性。“它与具体的云平台无关,所以也许有一天我们可以切换到谷歌或其他地方,”Farrelly 说。“我们对亚马逊的依赖已经很深,但很高兴知道,如果需要,我们随时可以搬走。”
<br><br>
<!-- At this point, the team at Buffer cant imagine running their infrastructure any other way—and theyre happy to spread the word. "If you want to run containers in production, with nearly the power that Google uses internally, this [Kubernetes] is a great way to do that," Farrelly says. "Were a relatively small team thats actually running Kubernetes, and weve never run anything like it before. So its more approachable than you might think. Thats the one big thing that I tell people who are experimenting with it. Pick a couple of things, roll it out, kick the tires on this for a couple of months and see how much it can handle. You start learning a lot this&nbsp;way." -->
此时,Buffer 团队已经无法想象用任何其他方式来运行他们的基础架构,而且他们很乐意把这一经验传播出去。Farrelly 说:“如果你想在生产环境中运行容器,并获得接近谷歌内部所用的那种能力,那么 Kubernetes 是一个很好的途径。”“我们是一个实际在运行 Kubernetes 的相对较小的团队,而且我们之前从未运行过类似的东西。所以它比你想象的更容易上手,这是我想告诉正在尝试它的人的最重要的一点。挑几个应用,把它们跑起来,花几个月时间试试它的深浅,看看它能承受多少。通过这种方式,你会学到很多东西。”
<br><br>
</div>
</section>


@@ -0,0 +1,96 @@
---
title: Capital One Case Study
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
---
<div class="banner1 desktop" style="background-image: url('/images/CaseStudy_capitalone_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/capitalone-logo.png" style="margin-bottom:-2%" class="header_logo"><br> <div class="subhead">Supporting Fast Decisioning Applications with Kubernetes
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>Capital One</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>McLean, Virginia</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Retail banking</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1">
<h2>Challenge</h2>
The team set out to build a provisioning platform for <a href="https://www.capitalone.com/">Capital One</a> applications deployed on AWS that use streaming, big-data decisioning, and machine learning. One of these applications handles millions of transactions a day; some deal with critical functions like fraud detection and credit decisioning. The key considerations: resilience and speed—as well as full rehydration of the cluster from base AMIs.
<br>
<h2>Solution</h2>
The decision to run <a href="https://kubernetes.io/">Kubernetes</a> "is very strategic for us," says John Swift, Senior Director Software Engineering. "We use Kubernetes as a substrate or an operating system, if you will. Theres a degree of affinity in our product development."
</div>
<div class="col2">
<h2>Impact</h2>
"Kubernetes is a significant productivity multiplier," says Lead Software Engineer Keith Gasser, adding that to run the platform without Kubernetes would "easily see our costs triple, quadruple what they are now for the amount of pure AWS expense." Time to market has been improved as well: "Now, a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer." Deployments increased by several orders of magnitude. Plus, the rehydration/cluster-rebuild process, which took a significant part of a day to do manually, now takes a couple hours with Kubernetes automation and declarative configuration.
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
<iframe width="560" height="315" src="https://www.youtube.com/embed/UHVW01ksg-s" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe><br><br>
"With the scalability, the management, the coordination, Kubernetes really empowers us and gives us more time back than we had before." <span style="font-size:16px;text-transform:uppercase">— Jamil Jadallah, Scrum Master</span>
</div>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2></h2>
As a top 10 U.S. retail bank, Capital One has applications that handle millions of transactions a day. Big-data decisioning—for fraud detection, credit approvals and beyond—is core to the business. To support the teams that build applications with those functions for the bank, the cloud team led by Senior Director Software Engineering John Swift embraced Kubernetes for its provisioning platform. "Kubernetes and its entire ecosystem are very strategic for us," says Swift. "We use Kubernetes as a substrate or an operating system, if you will. Theres a degree of affinity in our product development."<br><br>
Almost two years ago, the team embarked on this journey by first working with Docker. Then came Kubernetes. "We wanted to put streaming services into Kubernetes as one feature of the workloads for fast decisioning, and to be able to do batch alongside it," says Lead Software Engineer Keith Gasser. "Once the data is streamed and batched, there are so many tool sets in <a href="https://flink.apache.org/">Flink</a> that we use for decisioning. We want to provide the tools in the same ecosystem, in a consistent way, rather than have a large custom snowflake ecosystem where every tool needs its own custom deployment. Kubernetes gives us the ability to bring all of these together, so the richness of the open source and even the license community dealing with big data can be corralled."
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_capitalone_banner3.jpg')">
<div class="banner3text">
"We want to provide the tools in the same ecosystem, in a consistent way, rather than have a large custom snowflake ecosystem where every tool needs its own custom deployment. Kubernetes gives us the ability to bring all of these together, so the richness of the open source and even the license community dealing with big data can be corralled."
</div>
</div>
<section class="section3">
<div class="fullcol">
In this first year, the impact has already been great. "Time to market is really huge for us," says Gasser. "Especially with fraud, you have to be very nimble in the way you respond to threats in the marketplace—being able to add and push new rules, detect new patterns of behavior, detect anomalies in account and transaction flows." With Kubernetes, "a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer. Kubernetes is a manifold productivity multiplier."<br><br>
Teams now have the tools to be autonomous in their deployments, and as a result, deployments have increased by two orders of magnitude. "And that was with just seven dedicated resources, without needing a whole group sitting there watching everything," says Scrum Master Jamil Jadallah. "Thats a huge cost savings. With the scalability, the management, the coordination, Kubernetes really empowers us and gives us more time back than we had before."
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_capitalone_banner4.jpg')">
<div class="banner4text">
With Kubernetes, "a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer. Kubernetes is a manifold productivity multiplier."
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
Kubernetes has also been a great time-saver for Capital One's required periodic "rehydration" of clusters from base AMIs. To minimize the attack vulnerability profile for applications in the cloud, "Our entire clusters get rebuilt from scratch periodically, with new fresh instances and virtual server images that are patched with the latest and greatest security patches," says Gasser. This process used to take the better part of a day, and personnel, to do manually. It's now a quick Kubernetes job.<br><br>
Savings extend to both capital and operating expenses. "It takes very little to get into Kubernetes because its all open source," Gasser points out. "We went the DIY route for building our cluster, and we definitely like the flexibility of being able to embrace the latest from the community immediately without waiting for a downstream company to do it. Theres capex related to those licenses that we dont have to pay for. Moreover, theres capex savings for us from some of the proprietary software that we get to sunset in our particular domain. So that goes onto our ledger in a positive way as well." (Some of those open source technologies include Prometheus, Fluentd, gRPC, Istio, CNI, and Envoy.)
</div>
<div class="banner5">
<div class="banner5text">
"If we had to do all of this without Kubernetes, on underlying cloud services, I could easily see our costs triple, quadruple what they are now for the amount of pure AWS expense. That doesnt account for personnel to deploy and maintain all the additional infrastructure."
</div>
</div>
<div class="fullcol">
And on the opex side, Gasser says, the savings are high. "We run dozens of services, we have scores of pods, many daemon sets, and since were data-driven, we take advantage of EBS-backed volume claims for all of our stateful services. If we had to do all of this without Kubernetes, on underlying cloud services, I could easily see our costs triple, quadruple what they are now for the amount of pure AWS expense. That doesnt account for personnel to deploy and maintain all the additional infrastructure."<br><br>
The team is confident that the benefits will continue to multiply—without a steep learning curve for the engineers being exposed to the new technology. "As we onboard additional tenants in this ecosystem, I think the need for folks to understand Kubernetes may not necessarily go up. In fact, I think it goes down, and thats good," says Gasser. "Because that really demonstrates the scalability of the technology. You start to reap the benefits, and they can concentrate on all the features they need to build for great decisioning in the business— fraud decisions, credit decisions—and not have to worry about, Is my AWS server broken? Is my pod not running?"
</div>
</section>


@@ -0,0 +1,4 @@
---
title: CCP Games
content_url: https://cloud.google.com/customers/ccp-games/
---


@@ -0,0 +1,93 @@
---
title: CERN Case Study
linkTitle: cern
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
logo: cern_featured_logo.png
---
<div class="banner1" style="background-image: url('/images/CaseStudy_cern_banner1.jpg')">
<h1> CASE STUDY: CERN<br> <div class="subhead" style="margin-top:1%">CERN: Processing Petabytes of Data More Efficiently with Kubernetes
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>CERN</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Geneva, Switzerland
</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Particle physics research</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%"">
<h2>Challenge</h2>
At CERN, the European Organization for Nuclear Research, physicists conduct experiments to learn about fundamental science. In its particle accelerators, "we accelerate protons to very high energy, close to the speed of light, and we make the two beams of protons collide," says CERN Software Engineer Ricardo Rocha. "The end result is a lot of data that we have to process." CERN currently stores 330 petabytes of data in its data centers, and an upgrade of its accelerators expected in the next few years will drive that number up by 10x. Additionally, the organization experiences extreme peaks in its workloads during periods prior to big conferences, and needs its infrastructure to scale to those peaks. "We want to have a more hybrid infrastructure, where we have our on premise infrastructure but can make use of public clouds temporarily when these peaks come up," says Rocha. "Weve been looking to new technologies that can help improve our efficiency in our infrastructure so that we can dedicate more of our resources to the actual processing of the data."
<br><br>
<h2>Solution</h2>
CERNs technology team embraced containerization and cloud native practices, choosing Kubernetes for orchestration, Helm for deployment, Prometheus for monitoring, and CoreDNS for DNS resolution inside the clusters. Kubernetes federation has allowed the organization to run some production workloads both on premise and in public clouds.
<br><br>
<h2>Impact</h2>
"Kubernetes gives us the full automation of the application," says Rocha. "It comes with built-in monitoring and logging for all the applications and the workloads that deploy in Kubernetes. This is a massive simplification of our current deployments." The time to deploy a new cluster for a complex distributed storage system has gone from more than 3 hours to less than 15 minutes. Adding new nodes to a cluster used to take more than an hour; now it takes less than 2 minutes. The time it takes to autoscale replicas for system components has decreased from more than an hour to less than 2 minutes. Initially, virtualization gave 20% overhead, but with tuning this was reduced to ~5%. Moving to Kubernetes on bare metal would get this to 0%. Not having to host virtual machines is expected to also get 10% of memory capacity back.
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"Kubernetes is something we can relate to very much because its naturally distributed. What it gives us is a uniform API across heterogeneous resources to define our workloads. This is something we struggled with a lot in the past when we want to expand our resources outside our infrastructure."
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Ricardo Rocha, Software Engineer, CERN</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>With a mission of researching fundamental science, and a stable of extremely large machines, the European Organization for Nuclear Research (CERN) operates at what can only be described as hyperscale. </h2>
Experiments are conducted in particle accelerators, the biggest of which is 27 kilometers in circumference. "We accelerate protons to very high energy, to close to the speed of light, and we make the two beams of protons collide in well-defined places," says CERN Software Engineer Ricardo Rocha. "We build experiments around these places where we do the collisions. The end result is a lot of data that we have to process."<br><br>
And he does mean a lot: CERN currently stores and processes 330 petabytes of data—gathered from 4,300 projects and 3,300 users—using 10,000 hypervisors and 320,000 cores in its data centers. <br><br>
Over the years, the CERN technology department has built a large computing infrastructure, based on OpenStack private clouds, to help the organizations physicists analyze and treat all this data. The organization experiences extreme peaks in its workloads. "Very often, just before conferences, physicists want to do an enormous amount of extra analysis to publish their papers, and we have to scale to these peaks, which means overcommitting resources in some cases," says Rocha. "We want to have a more hybrid infrastructure, where we have our on premise infrastructure but can make use of public clouds temporarily when these peaks come up."<br><br>
Additionally, a few years ago, CERN announced that it would be doing a big upgrade of its accelerators, which will mean a ten-fold increase in the amount of data that can be collected. "So we've been looking to new technologies that can help improve our efficiency in our infrastructure, so that we can dedicate more of our resources to the actual processing of the data," says Rocha.
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_cern_banner3.jpg')">
<div class="banner3text">
"Before, the tendency was always: I need this, I get a couple of developers, and I implement it. Right now its I need this, Im sure other people also need this, so Ill go and ask around. The CNCF is a good source because theres a very large catalog of applications available. Its very hard right now to justify developing a new product in-house. There is really no real reason to keep doing that. Its much easier for us to try it out, and if we see its a good solution, we try to reach out to the community and start working with that community." <br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Ricardo Rocha, Software Engineer, CERN</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
Rochas team started looking at Kubernetes and containerization in the second half of 2015. "Weve been using distributed infrastructures for decades now," says Rocha. "Kubernetes is something we can relate to very much because its naturally distributed. What it gives us is a uniform API across heterogeneous resources to define our workloads. This is something we struggled with a lot in the past when we want to expand our resources outside our infrastructure."<br><br>
The team created a prototype system for users to deploy their own Kubernetes cluster in CERNs infrastructure, and spent six months validating the use cases and making sure that Kubernetes integrated with CERNs internal systems. The main use case is batch workloads, which represent more than 80% of resource usage at CERN. (One single project that does most of the physics data processing and analysis alone consumes 250,000 cores.) "This is something where the investment in simplification of the deployment, logging, and monitoring pays off very quickly," says Rocha. Other use cases include Spark-based data analysis and machine learning to improve physics analysis. "The fact that most of these technologies integrate very well with Kubernetes makes our lives easier," he adds.<br><br>
The system went into production in October 2016, also using Helm for deployment, Prometheus for monitoring, and CoreDNS for DNS resolution within the cluster. "One thing that Kubernetes gives us is the full automation of the application," says Rocha. "So it comes with built-in monitoring and logging for all the applications and the workloads that deploy in Kubernetes. This is a massive simplification of our current deployments." The time to deploy a new cluster for a complex distributed storage system has gone from more than 3 hours to less than 15 minutes.<br><br> Adding new nodes to a cluster used to take more than an hour; now it takes less than 2 minutes. The time it takes to autoscale replicas for system components has decreased from more than an hour to less than 2 minutes.
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_cern_banner4.jpg')">
<div class="banner4text">
"With Kubernetes, theres a well-established technology and a big community that we can contribute to. It allows us to do our physics analysis without having to focus so much on the lower level software. This is just exciting. We are looking forward to keep contributing to the community and collaborating with everyone."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Ricardo Rocha, Software Engineer, CERN</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
Rocha points out that the metric used in the particle accelerators may be events per second, but in reality "its how fast and how much of the data we can process that actually counts." And efficiency has certainly been improved with Kubernetes. Initially, virtualization gave 20% overhead, but with tuning this was reduced to ~5%. Moving to Kubernetes on bare metal would get this to 0%. Not having to host virtual machines is expected to also get 10% of memory capacity back.<br><br>
Kubernetes federation, which CERN has been using for a portion of its production workloads since February 2018, has allowed the organization to adopt a hybrid cloud strategy. And it was remarkably simple to do. "We had a summer intern working on federation," says Rocha. "For many years, Ive been developing distributed computing software, which took like a decade and a lot of effort from a lot of people to stabilize and make sure it works. And for our intern, in a couple of days he was able to demo to me and my team that we had a cluster at CERN and a few clusters outside in public clouds that were federated together and that we could submit workloads to. This was shocking for us. It really shows the power of using this kind of well-established technologies." <br><br>
With such results, adoption of Kubernetes has made rapid gains at CERN, and the team is eager to give back to the community. "If we look back into the 90s and early 2000s, there were not a lot of companies focusing on systems that have to scale to this kind of size, storing petabytes of data, analyzing petabytes of data," says Rocha. "The fact that Kubernetes is supported by such a wide community and different backgrounds, it motivates us to contribute back."
</div>
<div class="banner5" >
<div class="banner5text">
"This means that the physicist can build his or her analysis and publish it in a repository, share it with colleagues, and in 10 years redo the same analysis with new data. If we looked back even 10 years, this was just a dream."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Ricardo Rocha, Software Engineer, CERN</span></div>
</div>
<div class="fullcol">
These new technologies arent just enabling infrastructure improvements. CERN also uses the Kubernetes-based <a href="https://github.com/recast-hep">Reana/Recast</a> platform for reusable analysis, which is "the ability to define physics analysis as a set of workflows that are fully containerized in one single entry point," says Rocha. "This means that the physicist can build his or her analysis and publish it in a repository, share it with colleagues, and in 10 years redo the same analysis with new data. If we looked back even 10 years, this was just a dream."<br><br>
All of these things have changed the culture at CERN considerably. A decade ago, "The tendency was always: I need this, I get a couple of developers, and I implement it," says Rocha. "Right now its I need this, Im sure other people also need this, so Ill go and ask around. The CNCF is a good source because theres a very large catalog of applications available. Its very hard right now to justify developing a new product in-house. There is really no real reason to keep doing that. Its much easier for us to try it out, and if we see its a good solution, we try to reach out to the community and start working with that community."
</div>
</section>


@@ -0,0 +1,99 @@
---
title: China Unicom Case Study
linkTitle: chinaunicom
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
logo: chinaunicom_featured_logo.png
featured: true
weight: 1
quote: >
Kubernetes has improved our experience using cloud infrastructure. There is currently no alternative technology that can replace it.
---
<div class="banner1" style="background-image: url('/images/CaseStudy_chinaunicom_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/chinaunicom_logo.png" class="header_logo" style="width:25%;margin-bottom:-1%"><br> <div class="subhead" style="margin-top:1%;line-height:1.4em">China Unicom: How China Unicom Leveraged Kubernetes to Boost Efficiency<br>and Lower IT&nbsp;Costs
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>China Unicom</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Beijing, China</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Telecom</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%"">
<h2>Challenge</h2>
China Unicom is one of the top three telecom operators in China, and to serve its 300 million users, the company runs several data centers with thousands of servers in each, using <a href="https://www.docker.com/">Docker</a> containerization and <a href="https://www.vmware.com/">VMWare</a> and <a href="https://www.openstack.org/">OpenStack</a> infrastructure since 2016. Unfortunately, "the resource utilization rate was relatively low," says Chengyu Zhang, Group Leader of Platform Technology R&D, "and we didnt have a cloud platform to accommodate our hundreds of applications." Formerly an entirely state-owned company, China Unicom has in recent years taken private investment from BAT (Baidu, Alibaba, Tencent) and JD.com, and is now focusing on internal development using open source technology, rather than commercial products. As such, Zhangs China Unicom Lab team began looking for open source orchestration for its cloud infrastructure.
<br><br>
<h2>Solution</h2>
Because of its rapid growth and mature open source community, Kubernetes was a natural choice for China Unicom. The companys Kubernetes-enabled cloud platform now hosts 50 microservices and all new development going forward. "Kubernetes has improved our experience using cloud infrastructure," says Zhang. "There is currently no alternative technology that can replace it." China Unicom also uses <a href="https://istio.io/">Istio</a> for its microservice framework, <a href="https://www.envoyproxy.io/">Envoy</a>, <a href="https://coredns.io/">CoreDNS</a>, and <a href="https://www.fluentd.org/">Fluentd</a>.
<br><br>
<h2>Impact</h2>
At China Unicom, Kubernetes has improved both operational and development efficiency. Resource utilization has increased by 20-50%, lowering IT infrastructure costs, and deployment time has gone from a couple of hours to 5-10 minutes. "This is mainly because of the self-healing and scalability, so we can increase our efficiency in operation and maintenance," Zhang says. "For example, we currently have only five people maintaining our multiple systems. We could never imagine we can achieve this scalability in such a short time."
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"Kubernetes has improved our experience using cloud infrastructure. There is currently no alternative technology that can replace it."
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Chengyu Zhang, Group Leader of Platform Technology R&D, China Unicom</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>With more than 300 million users, China Unicom is one of the countrys top three telecom operators. </h2>
Behind the scenes, the company runs multiple data centers with thousands of servers in each, using Docker containerization and VMWare and OpenStack infrastructure since 2016. Unfortunately, "the resource utilization rate was relatively low," says Chengyu Zhang, Group Leader of Platform Technology R&D, "and we didnt have a cloud platform to accommodate our hundreds of applications." <br><br>
Zhangs team, which is responsible for new technology, R&D and platforms, set out to find an IT management solution. Formerly an entirely state-owned company, China Unicom has in recent years taken private investment from BAT (Baidu, Alibaba, Tencent) and JD.com, and is now focusing on homegrown development using open source technology, rather than commercial products. For that reason, the team began looking for open source orchestration for its cloud infrastructure.
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_chinaunicom_banner3.jpg');width:100%;padding-left:0;">
<div class="banner3text">
"We could never imagine we can achieve this scalability in such a short time."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Chengyu Zhang, Group Leader of Platform Technology R&D, China Unicom</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
Though China Unicom was already using Mesos for a core telecom operator system, the team felt that Kubernetes was a natural choice for the new cloud platform. "The main reason was that it has a mature community," says Zhang. "It grows very rapidly, and so we can learn a lot from others best practices." China Unicom also uses Istio for its microservice framework, Envoy, CoreDNS, and Fluentd.<br><br>
The companys Kubernetes-enabled cloud platform now hosts 50 microservices and all new development going forward. China Unicom developers can easily leverage the technology through APIs, without doing the development work themselves. The cloud platform provides 20-30 services connected to the companys data center PaaS platform, as well as supports things such as big data analysis for internal users in the branch offices across the 31 provinces in China.<br><br>
"Kubernetes has improved our experience using cloud infrastructure," says Zhang. "There is currently no alternative technology that can replace it."
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_chinaunicom_banner4.jpg');width:100%">
<div class="banner4text">
"This technology is relatively complicated, but as long as developers get used to it, they can enjoy all the benefits." <br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Jie Jia, Member of Platform Technology R&D, China Unicom</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
In fact, Kubernetes has boosted both operational and development efficiency at China Unicom. Resource utilization has increased by 20-50%, lowering IT infrastructure costs, and deployment time has gone from a couple of hours to 5-10 minutes. "This is mainly because of the self-healing and scalability of Kubernetes, so we can increase our efficiency in operation and maintenance," Zhang says. "For example, we currently have only five people maintaining our multiple systems."<br><br>
With the wins China Unicom has experienced with Kubernetes, Zhang and his team are eager to give back to the community. That starts with participating in meetups and conferences, and offering advice to other companies that are considering a similar path. "Especially for those companies who have had traditional cloud computing system, I really recommend them to join the cloud native computing community," says Zhang.
</div>
<div class="banner5" >
<div class="banner5text">
"Companies can use the managed services offered by companies like Rancher, because they have already customized this technology, you can easily leverage this technology."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Jie Jia, Member of Platform Technology R&D, China Unicom</span></div>
</div>
<div class="fullcol">
Platform Technology R&D team member Jie Jia adds that though "this technology is relatively complicated, as long as developers get used to it, they can enjoy all the benefits." And Zhang points out that in his own experience with virtual machine cloud, "Kubernetes and these cloud native technologies are relatively simpler."<br><br>
Plus, "companies can use the managed services offered by companies like <a href="https://rancher.com/">Rancher</a>, because they have already customized this technology," says Jia. "You can easily leverage this technology."<br><br>
Looking ahead, China Unicom plans to develop more applications on Kubernetes, focusing on big data and machine learning. The team is continuing to optimize the cloud platform that it built, and hopes to pass the conformance test to join CNCFs <a href="https://www.cncf.io/announcement/2017/11/13/cloud-native-computing-foundation-launches-certified-kubernetes-program-32-conformant-distributions-platforms/">Certified Kubernetes Conformance Program</a>. Theyre also hoping to someday contribute code back to the community. <br><br>
If that sounds ambitious, its because the results theyve gotten from adopting Kubernetes have been beyond even their greatest expectations. Says Zhang: "We could never imagine we can achieve this scalability in such a short time."
</div>
</section>
</body>
</html>


@@ -0,0 +1,99 @@
---
title: City of Montreal Case Study
linkTitle: city-of-montreal
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
featured: false
---
<div class="banner1" style="background-image: url('/images/CaseStudy_montreal_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/montreal_logo.png" class="header_logo" style="width:20%;margin-bottom:-1.2%"><br> <div class="subhead" style="margin-top:1%">City of Montréal - How the City of Montréal Is Modernizing Its 30-Year-Old, Siloed&nbsp;Architecture&nbsp;with&nbsp;Kubernetes
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>City of Montréal</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Montréal, Québec, Canada</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Government</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%"">
<h2>Challenge</h2>
Like many governments, Montréal has a number of legacy systems, and “we have systems that are older than some developers working here,” says the citys CTO, Jean-Martin Thibault. “We have mainframes, all flavors of Windows, various flavors of Linux, old and new Oracle systems, Sun servers, all kinds of databases. Like all big corporations, some of the most important systems, like Budget and Human Resources, were developed on mainframes in-house over the past 30 years.” There are over 1,000 applications in all, and most of them were running on different ecosystems. In 2015, a new management team decided to break down those silos, and invest in IT in order to move toward a more integrated governance for the city. They needed to figure out how to modernize the architecture.
<h2>Solution</h2>
The first step was containerization. The team started with a small Docker farm with four or five servers, with Rancher for providing access to the Docker containers and their logs and Jenkins to deploy. “We based our effort on the new trends; we understood the benefits of immutability and deployments without downtime and such things,” says Solutions Architect Marc Khouzam. They soon realized they needed orchestration as well, and opted for Kubernetes. Says Enterprise Architect Morgan Martinet: “Kubernetes offered concepts on how you would describe an architecture for any kind of application, and based on those concepts, deploy whats required to run the infrastructure. It was becoming a de facto standard.”
<br>
<h2>Impact</h2>
The time to market has improved drastically, from many months to a few weeks. Deployments went from months to hours. “In the past, you would have to ask for virtual machines, and that alone could take weeks, easily,” says Thibault. “Now you dont even have to ask for anything. You just create your project and it gets deployed.” Kubernetes has also improved the efficiency of how the city uses its compute resources: “Before, the 200 application components we currently run on Kubernetes would have required hundreds of virtual machines, and now, if were talking about a single environment of production, we are able to run them on 8 machines, counting the masters of Kubernetes,” says Martinet. And its all done with a small team of just 5 people operating the Kubernetes clusters.
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"We realized the limitations of having a non-orchestrated Docker environment. Kubernetes came to the rescue, bringing in all these features that make it a lot easier to manage and give a lot more benefits to the users."
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- JEAN-MARTIN THIBAULT, CTO, CITY OF MONTRÉAL</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>The second biggest municipality in Canada, Montréal has a large number of legacy systems keeping the government running. And while they dont quite date back to the citys founding in 1642, “we have systems that are older than some developers working here,” jokes the citys CTO, Jean-Martin Thibault.</h2>
“We have mainframes, all flavors of Windows, various flavors of Linux, old and new Oracle systems, Sun servers, all kinds of databases. Some of the most important systems, like Budget and Human Resources, were developed on mainframes in-house over the past 30 years.”
<br><br>
In recent years, that fact became a big pain point. There are over 1,000 applications in all, running on almost as many different ecosystems. In 2015, a new city management team decided to break down those silos, and invest in IT in order to move toward a more integrated governance. “The organization was siloed, so as a result the architecture was siloed,” says Thibault. “Once we got integrated into one IT team, we decided to redo an overall enterprise architecture.”
<br><br>
The first step to modernize the architecture was containerization. “We based our effort on the new trends; we understood the benefits of immutability and deployments without downtime and such things,” says Solutions Architect Marc Khouzam. The team started with a small Docker farm with four or five servers, with Rancher for providing access to the Docker containers and their logs and Jenkins for deployment.
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_montreal_banner3.jpg')">
<div class="banner3text">
"Getting a project running in Kubernetes is entirely dependent on how long you need to program the actual software. Its no longer dependent on deployment. Deployment is so fast that its negligible."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- MARC KHOUZAM, SOLUTIONS ARCHITECT, CITY OF MONTRÉAL</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
But this Docker farm setup had some limitations, including the lack of self-healing and dynamic scaling based on traffic, and the effort required to optimize server resources and scale to multiple instances of the same container. The team soon realized they needed orchestration as well. “Kubernetes came to the rescue,” says Thibault, “bringing in all these features that make it a lot easier to manage and give a lot more benefits to the users.”
<br><br>
The team had evaluated several orchestration solutions, but Kubernetes stood out because it addressed all of the pain points. (They were also inspired by Yahoo! Japans use case, which the team members felt came close to their vision.) “Kubernetes offered concepts on how you would describe an architecture for any kind of application, and based on those concepts, deploy whats required to run the infrastructure,” says Enterprise Architect Morgan Martinet. “It was becoming a de facto standard. It also promised portability across cloud providers. The choice of Kubernetes now gives us many options such as running clusters in-house or in any IaaS provider, or even using Kubernetes-as-a-service in any of the major cloud providers.”
<br><br>
Another important factor in the decision was vendor neutrality. “As a government entity, it is essential for us to be neutral in our selection of products and providers,” says Thibault. “The independence of the Cloud Native Computing Foundation from any company provides this.”
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_montreal_banner4.jpg')">
<div class="banner4text">
"Kubernetes has been great. Its been stable, and it provides us with elasticity, resilience, and robustness. While re-architecting for Kubernetes, we also benefited from the monitoring and logging aspects, with centralized logging, Prometheus logging, and Grafana dashboards. We have enhanced visibility of whats being deployed." <br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- MORGAN MARTINET, ENTERPRISE ARCHITECT, CITY OF MONTRÉAL</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
The Kubernetes implementation began with the deployment of a small cluster using an internal Ansible playbook, which was soon replaced by the Kismatic distribution. Given the complexity they saw in operating a Kubernetes platform, they decided to provide development groups with an automated CI/CD solution based on Helm. “An integrated CI/CD solution on Kubernetes standardized how the various development teams designed and deployed their solutions, but allowed them to remain independent,” says Khouzam.
<br><br>
During the re-architecting process, the team also added Prometheus for monitoring and alerting, Fluentd for logging, and Grafana for visualization. “We have enhanced visibility of whats being deployed,” says Martinet. Adds Khouzam: “The big benefit is we can track anything, even things that dont run inside the Kubernetes cluster. Its our way to unify our monitoring effort.”
<br><br>
All together, the cloud native solution has had a positive impact on velocity as well as administrative overhead. With standardization, code generation, automatic deployments into Kubernetes, and standardized monitoring through Prometheus, the time to market has improved drastically, from many months to a few weeks. Deployments went from months and weeks of planning down to hours. “In the past, you would have to ask for virtual machines, and that alone could take weeks to properly provision,” says Thibault. Plus, for dedicated systems, experts often had to be brought in to install them with their own recipes, which could take weeks and months.
<br><br>
Now, says Khouzam, “we can deploy pretty much any application thats been Dockerized without any help from anybody. Getting a project running in Kubernetes is entirely dependent on how long you need to program the actual software. Its no longer dependent on deployment. Deployment is so fast that its negligible.”
</div>
<div class="banner5" >
<div class="banner5text">
"Were working with the market when possible, to put pressure on our vendors to support Kubernetes, because its a much easier solution to manage"<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- MORGAN MARTINET, ENTERPRISE ARCHITECT, CITY OF MONTRÉAL</span></div>
</div>
<div class="fullcol">
Kubernetes has also improved the efficiency of how the city uses its compute resources: “Before, the 200 application components we currently run in Kubernetes would have required hundreds of virtual machines, and now, if were talking about a single environment of production, we are able to run them on 8 machines, counting the masters of Kubernetes,” says Martinet. And its all done with a small team of just five people operating the Kubernetes clusters. Adds Martinet: “Its a dramatic improvement no matter what you measure.”
<br><br>
So it should come as no surprise that the teams strategy going forward is to target Kubernetes as much as they can. “If something cant run inside Kubernetes, well wait for it,” says Thibault. That means they havent moved any of the citys Windows systems onto Kubernetes, though its something they would like to do. “Were working with the market when possible, to put pressure on our vendors to support Kubernetes, because its a much easier solution to manage,” says Martinet.
<br><br>
Thibault sees a near future where 60% of the citys workloads are running on a Kubernetes platform—basically any and all of the use cases that they can get to work there. “Its so much more efficient than the way we used to do things,” he says. “Theres no looking back.”
</div>
</section>


@@ -0,0 +1,101 @@
---
title: Crowdfire Case Study
case_study_styles: true
cid: caseStudies
css: /css/style_crowdfire.css
---
<div class="banner1">
<h1> CASE STUDY:<img src="/images/crowdfire_logo.png" class="header_logo"><br> <div class="subhead">How to Keep Iterating a Fast-Growing App With a Cloud-Native Approach</div></h1>
</div>
<div class="details">
Company &nbsp;<b>Crowdfire</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Mumbai, India</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Social Media Software</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1">
<h2>Challenge</h2>
<a href="https://www.crowdfireapp.com/">Crowdfire</a> helps content creators create their content anywhere on the Internet and publish it everywhere else in the right format. Since its launch in 2010, it has grown to 16 million users. The product began as a monolith app running on <a href="https://cloud.google.com/appengine/">Google App Engine</a>, and in 2015, the company began a transformation to microservices running on Amazon Web Services <a href="https://aws.amazon.com/elasticbeanstalk/">Elastic Beanstalk</a>. "It was okay for our use cases initially, but as the number of services, development teams and scale increased, the deploy times, self-healing capabilities and resource utilization started to become problems for us," says Software Engineer Amanpreet Singh, who leads the infrastructure team for Crowdfire.<br>
<h2>Solution</h2>
"We realized that we needed a more cloud-native approach to deal with these issues," says Singh. The team decided to implement a custom setup of Kubernetes based on <a href="https://www.terraform.io/">Terraform</a> and <a href="https://www.ansible.com/">Ansible</a>.
<br>
</div>
<div class="col2">
<h2>Impact</h2>
"Kubernetes has helped us reduce the deployment time from 15 minutes to less than a minute," says Singh. "Due to Kubernetess self-healing nature, the operations team doesnt need to do any manual intervention in case of a node or pod failure." Plus, he says, "Dev-Prod parity has improved since developers can experiment with options in dev/staging clusters, and when its finalized, they just commit the config changes in the respective code repositories. These changes automatically get replicated on the production cluster via CI/CD pipelines."
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"In the 15 months that weve been using Kubernetes, it has been amazing for us. It enabled us to iterate quickly, increase development speed, and continuously deliver new features and bug fixes to our users, while keeping our operational costs and infrastructure management overhead under control."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Amanpreet Singh, Software Engineer at Crowdfire</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>"If you build it, they will come."</h2>
For most content creators, only half of that movie quote may ring true. Sure, platforms like Wordpress, YouTube and Shopify have made it simple for almost anyone to start publishing new content online, but attracting an audience isn't as easy. Crowdfire "helps users publish their content to all possible places where their audience exists," says Amanpreet Singh, a Software Engineer at the company based in Mumbai, India. Crowdfire has gained more than 16 million users—from bloggers and artists to makers and small businesses—since its launch in 2010.<br><br>
With that kind of growth—and a high demand from users for new features and continuous improvements—the Crowdfire team struggled to keep up behind the scenes. In 2015, they moved their monolith Java application to Amazon Web Services <a href="https://aws.amazon.com/elasticbeanstalk/">Elastic Beanstalk</a> and started breaking it down into microservices.<br><br>
It was a good first step, but the team soon realized they needed to go further down the cloud-native path, which would lead them to Kubernetes. "It was okay for our use cases initially, but as the number of services and development teams increased and we scaled further, deploy times, self-healing capabilities and resource utilization started to become problematic," says Singh, who leads the infrastructure team at Crowdfire. "We realized that we needed a more cloud-native approach to deal with these issues."<br><br>
As he looked around for solutions, Singh had a checklist of what Crowdfire needed. "We wanted to keep some things separate so they could be shipped independent of other things; this would help remove blockers and let different teams work at their own pace," he says. "We also make a lot of data-driven decisions, so shipping a feature and its iterations quickly was a must."<br><br>
Kubernetes checked all the boxes and then some. "One of the best things was the built-in service discovery," he says. "When you have a bunch of microservices that need to call each other, having internal DNS readily available and service IPs and ports automatically set as environment variables help a lot." Plus, he adds, "Kubernetes' opinionated approach made it easier to get started."
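<br><br>
The built-in service discovery Singh highlights works two ways: cluster DNS names and environment variables that Kubernetes injects into pods. A minimal sketch of how a microservice might locate a backing service (the `redis` Service name, namespace, and port below are illustrative assumptions, not details from Crowdfire's setup):

```python
import os

# For a Service named "redis", Kubernetes injects REDIS_SERVICE_HOST and
# REDIS_SERVICE_PORT into pods created after the Service exists; the DNS name
# redis.default.svc.cluster.local also resolves inside the cluster.
host = os.environ.get("REDIS_SERVICE_HOST", "redis.default.svc.cluster.local")
port = int(os.environ.get("REDIS_SERVICE_PORT", "6379"))
print(f"connecting to {host}:{port}")
```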
</div>
</section>
<div class="banner3">
<div class="banner3text">
"We realized that we needed a more cloud-native approach to deal with these issues," says Singh. The team decided to implement a custom setup of Kubernetes based on Terraform and Ansible."
</div>
</div>
<section class="section3">
<div class="fullcol">
There was another compelling business reason for the cloud-native approach. "In today's world of ever-changing business requirements, using cloud native technology provides a variety of options to choose from—even the ability to run services in a hybrid cloud environment," says Singh. "Businesses can keep services in a region closest to the users, and thus benefit from high-availability and resiliency."<br><br>
So in February 2016, Singh set up a test Kubernetes cluster using the kube-up scripts provided. "I explored the features and was able to deploy an application pretty easily," he says. "However, it seemed like a black box since I didn't understand the components completely, and had no idea what the kube-up script did under the hood. So when it broke, it was hard to find the issue and fix it." <br><br>
To get a better understanding, Singh dove into the internals of Kubernetes, reading the docs and even some of the code. And he looked to the Kubernetes community for more insight. "I used to stay up a little late every night (a lot of users were active only when it's night here in India) and would try to answer questions on the Kubernetes community Slack from users who were getting started," he says. "I would also follow other conversations closely. I must admit I was able to avoid a lot of issues in our setup because I knew others had faced the same issues."<br><br>
Based on the knowledge he gained, Singh decided to implement a custom setup of Kubernetes based on <a href="https://www.terraform.io/">Terraform</a> and <a href="https://www.ansible.com/">Ansible</a>. "I wrote Terraform to launch Kubernetes master and nodes (Auto Scaling Groups) and an Ansible playbook to install the required components," he says. (The company recently switched to using prebaked <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html">AMIs</a> to make the node bringup faster, and is planning to change its networking layer.) <br><br>
</div>
</section>
<div class="banner4">
<div class="banner4text">
"Kubernetes helped us reduce the deployment time from 15 minutes to less than a minute. Due to Kubernetess self-healing nature, the operations team doesnt need to do any manual intervention in case of a node or pod failure."
</div>
</div>
<section class="section4">
<div class="fullcol">
First, the team migrated a few staging services from Elastic Beanstalk to the new Kubernetes staging cluster, and then set up a production cluster a month later to deploy some services. The results were convincing. "By the end of March 2016, we established that all the new services must be deployed on Kubernetes," says Singh. "Kubernetes helped us reduce the deployment time from 15 minutes to less than a minute. Due to Kubernetes' self-healing nature, the operations team doesn't need to do any manual intervention in case of a node or pod failure." On top of that, he says, "Dev-Prod parity has improved since developers can experiment with options in dev/staging clusters, and when it's finalized, they just commit the config changes in the respective code repositories. These changes automatically get replicated on the production cluster via CI/CD pipelines. This brings more visibility into the changes being made, and keeps an audit trail."<br><br>
Over the next six months, the team worked on migrating all the services from Elastic Beanstalk to Kubernetes, except for the few that were deprecated and would soon be terminated anyway. The services were moved one at a time, and their performance was monitored for two to three days each. Today, "We're completely migrated and we run all new services on Kubernetes," says Singh. <br><br>
The impact has been considerable: With Kubernetes, the company has experienced a 90% cost savings on Elastic Load Balancer, which is now only used for their public, user-facing services. Their EC2 operating expenses have been decreased by as much as 50%.<br><br>
All 30 engineers at Crowdfire were onboarded at once. "I gave an internal talk where I shared the basic components and demoed the usage of kubectl," says Singh. "Everyone was excited and happy about using Kubernetes. Developers have more control and visibility into their applications running in production now. Most of all, they're happy with the low deploy times and self-healing services." <br><br>
And they're much more productive, too. "Where we used to do about 5 deployments per day," says Singh, "now we're doing 30+ production and 50+ staging deployments almost every day."
</div>
</section>
<div class="banner5">
<div class="banner5text">
The impact has been considerable: With Kubernetes, the company has experienced a 90% cost savings on Elastic Load Balancer, which is now only used for their public, user-facing services. Their EC2 operating expenses have been decreased by as much as 50%.
</div>
</div>
<section class="section5">
<div class="fullcol">
Singh notes that almost all of the engineers interact with the staging cluster on a daily basis, and that has created a cultural change at Crowdfire. "Developers are more aware of the cloud infrastructure now," he says. "They've started following cloud best practices like better health checks, structured logs to stdout [standard output], and config via files or environment variables."<br><br>
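As a rough illustration of those practices (the variable names are hypothetical, not taken from Crowdfire's services), a service might read its configuration from the environment and write structured JSON logs to stdout:

```python
import json
import os
import sys

# Configuration comes from environment variables, typically populated from a
# ConfigMap or the pod spec rather than baked into the container image.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://db.default.svc.cluster.local/app")

def log(level, message, **fields):
    # One structured JSON line per event, written to stdout for the cluster's log collector.
    print(json.dumps({"level": level, "msg": message, **fields}), file=sys.stdout, flush=True)

log("info", "service starting", database=DATABASE_URL)
```
<br><br>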
With Crowdfire's commitment to Kubernetes, Singh is looking to expand the company's cloud-native stack. The team already uses <a href="https://prometheus.io/">Prometheus</a> for monitoring, and he says he is evaluating <a href="https://linkerd.io/">Linkerd</a> and <a href="https://envoyproxy.github.io/">Envoy Proxy</a> as a way to "get more metrics about request latencies and failures, and handle them better." Other CNCF projects, including <a href="http://opentracing.io/">OpenTracing</a> and <a href="https://grpc.io/">gRPC</a>, are also on his radar.<br><br>
Singh has found that the cloud-native community is growing in India, too, particularly in Bangalore. "A lot of startups and new companies are starting to run their infrastructure on Kubernetes," he says. <br><br>
And when people ask him about Crowdfire's experience, he has this advice to offer: "Kubernetes is a great piece of technology, but it might not be right for you, especially if you have just one or two services or your app isn't easy to run in a containerized environment," he says. "Assess your situation and the value that Kubernetes provides before going all in. If you do decide to use Kubernetes, make sure you understand the components that run under the hood and what role they play in smoothly running the cluster. Another thing to consider is if your apps are Kubernetes-ready, meaning if they have proper health checks and handle termination signals to shut down gracefully."<br><br>
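A bare-bones sketch of what being "Kubernetes-ready" can mean in practice (the port and endpoint path are assumptions for illustration): expose a health endpoint that liveness/readiness probes can poll, and exit cleanly when Kubernetes sends SIGTERM:

```python
import signal
import sys
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Liveness/readiness probes poll this endpoint.
        if self.path == "/healthz":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

server = HTTPServer(("0.0.0.0", 8080), Handler)

def handle_sigterm(signum, frame):
    # Kubernetes sends SIGTERM before deleting the pod; stop serving and exit cleanly.
    server.server_close()
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)
server.serve_forever()
```
<br><br>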
And if your company fits that profile, go for it. Crowdfire clearly did—and is now reaping the benefits. "In the 15 months that we've been using Kubernetes, it has been amazing for us," says Singh. "It enabled us to iterate quickly, increase development speed and continuously deliver new features and bug fixes to our users, while keeping our operational costs and infrastructure management overhead under control."
</div>
</section>


@@ -0,0 +1,4 @@
---
title: Goldman Sachs
content_url: http://blogs.wsj.com/cio/2016/02/24/big-changes-in-goldmans-software-emerge-from-small-containers/
---


@@ -0,0 +1,125 @@
---
title: GolfNow Case Study
case_study_styles: true
cid: caseStudies
css: /css/style_golfnow.css
---
<div class="banner1">
<h1>CASE STUDY: <img src="/images/golfnow_logo.png" width="20%" style="margin-bottom:-6px"><br>
<div class="subhead">Saving Time and Money with Cloud Native Infrastructure</div>
</h1>
</div>
<div class="details">
Company&nbsp;<b>GolfNow</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location&nbsp;<b>Orlando, Florida</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry&nbsp;<b>Golf Industry Technology and Services Provider</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1">
<h2>Challenge</h2>
A member of the <a href="http://www.nbcunicareers.com/our-businesses/nbc-sports-group">NBC Sports Group</a>, <a href="https://www.golfnow.com/">GolfNow</a> is the golf industry's technology and services leader, managing 10 different products, as well as the largest e-commerce tee time marketplace in the world. As its business began expanding rapidly and globally, GolfNow's monolithic application became problematic. "We kept growing our infrastructure vertically rather than horizontally, and the cost of doing business became problematic," says Sheriff Mohamed, GolfNow's Director, Architecture. "We wanted the ability to more easily expand globally."
<br>
</div>
<div class="col2">
<h2>Solution</h2>
Turning to microservices and containerization, GolfNow began moving its applications and databases from third-party services to its own clusters running on <a href="https://www.docker.com/">Docker</a> and <a href="http://kubernetes.io/">Kubernetes.</a><br><br>
<h2>Impact</h2>
The results were immediate. While maintaining the same capacity—and beyond, during peak periods—GolfNow saw its infrastructure costs for the first application virtually cut in half.
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"With our growth we obviously needed to expand our infrastructure, and we kept growing vertically rather than horizontally. We were basically wasting money and doubling the cost of our infrastructure."<br><br><span style="font-size:15px;letter-spacing:0.08em">- SHERIFF MOHAMED, DIRECTOR, ARCHITECTURE AT GOLFNOW</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>It's not every day that you can say you've slashed an operating expense by half.</h2>
But Sheriff Mohamed and Josh Chandler did just that when they helped lead their company, <a href="https://www.golfnow.com/">GolfNow</a>, on a journey from a monolithic to a containerized, cloud native infrastructure managed by Kubernetes.
<br> <br>
A top-performing business within the NBC Sports Group, GolfNow is a technology and services company with the largest tee time marketplace in the world. GolfNow serves 5 million active golfers across 10 different products. In recent years, the business had grown so fast that the infrastructure supporting their giant monolithic application (written in C#.NET and backed by SQL Server database management system) could not keep up. "With our growth we obviously needed to expand our infrastructure, and we kept growing vertically rather than horizontally," says Sheriff, GolfNow's Director, Architecture. "Our costs were growing exponentially. And on top of that, we had to build a Disaster Recovery (DR) environment, which then meant we'd have to copy exactly what we had in our original data center to another data center that was just the standby. We were basically wasting money and doubling the cost of our infrastructure."
<br> <br>
In moving just the first of GolfNow's important applications—a booking engine for golf courses and B2B marketing platform—from third-party services to their own Kubernetes environment, "our bill went down drastically," says Sheriff.
<br> <br>
The path to those stellar results began in late 2014. In order to support GolfNow's global growth, the team decided that the company needed to have multiple data centers and the ability to quickly and easily re-route traffic as needed. "From there we knew that we needed to go in a direction of breaking things apart, microservices, and containerization," says Sheriff. "At the time we were trying to get away from <a href="https://www.microsoft.com/net">C#.NET</a> and <a href="https://www.microsoft.com/en-cy/sql-server/sql-server-2016">SQL Server</a> since it didn't run very well on Linux, where everything container was running smoothly."
<br> <br>
To that end, the team shifted to working with <a href="https://nodejs.org/">Node.js</a>, the open-source, cross-platform JavaScript runtime environment for developing tools and applications, and <a href="https://www.mongodb.com/">MongoDB</a>, the open-source database program. At the time, <a href="https://www.docker.com/">Docker</a>, the platform for deploying applications in containers, was still new. But once the team began experimenting with it, Sheriff says, "we realized that was the way we wanted to go, especially since thats the way the industry is heading."
</div>
</section>
<div class="banner3">
<div class="banner3text">
"The team migrated the rest of the application into their Kubernetes cluster. And the impact was immediate: On top of cutting monthly costs by a large percentage, says Sheriff, 'Running at the same capacity and during our peak time, we were able to horizontally grow. Since we were using our VMs more efficiently with containers, we didnt have to pay extra money at all.'"
</div>
</div>
<section class="section3">
<div class="fullcol">
GolfNow's dev team ran an "internal, low-key" proof of concept and were won over. "We really liked how easy it was to be able to pass containers around to each other and have them up and running in no time, exactly the way it was running on my machine," says Sheriff. "Because that is always the biggest gripe that Ops has with developers, right? 'It worked on my machine!' But then we started getting to the point of, 'How do we make sure that these things stay up and running?'" <br><br>
That led the team on a quest to find the right orchestration system for the company's needs. Sheriff says the first few options they tried were either too heavy or "didn't feel quite right." In late summer 2015, they discovered the just-released <a href="http://kubernetes.io/">Kubernetes</a>, which Sheriff immediately liked for its ease of use. "We did another proof of concept," he says, "and Kubernetes won because of the fact that the community backing was there, built on top of what Google had already done."
<br><br>
But before they could go with Kubernetes, <a href="http://www.nbc.com/">NBC</a>, GolfNow's parent company, also asked them to comparison shop with another company. Sheriff and his team liked the competing company's platform user interface, but didn't like that its platform would not allow containers to run natively on Docker. With no clear decision in sight, Sheriff's VP at GolfNow, Steve McElwee, set up a three-month trial during which a GolfNow team (consisting of Sheriff and Josh, who's now Lead Architect, Open Platforms) would build out a Kubernetes environment, and a large NBC team would build out one with the other company's platform.
<br><br>
"We spun up the cluster and we tried to get everything to run the way we wanted it to run," Sheriff says. "The biggest thing that we took away from it is that not only did we want our applications to run within Kubernetes and Docker, we also wanted our databases to run there. We literally wanted our entire infrastructure to run within Kubernetes."
<br><br>
At the time there was nothing in the community to help them get Kafka and MongoDB clusters running within a Kubernetes and Docker environment, so Sheriff and Josh figured it out on their own, taking a full month to get it right. "Everything started rolling from there," Sheriff says. "We were able to get all our applications connected, and we finished our side of the proof of concept a month in advance. My VP was like, 'Alright, it's over. Kubernetes wins.'"
<br><br>
The next step, beginning in January 2016, was getting everything working in production. The team focused first on one application that was already written in Node.js and MongoDB. A booking engine for golf courses and B2B marketing platform, the application was already going in the microservice direction but wasn't quite finished yet. At the time, it was running in <a href="https://devcenter.heroku.com/articles/mongohq">Heroku Compose</a> and other third-party services—resulting in a large monthly bill.
</div>
</section>
<div class="banner4">
<div class="banner4text">
"'The time I spent actually moving the applications was under 30 seconds! We can move data centers in just incredible amounts of time. If you havent come from the Kubernetes world you wouldnt believe me.' Sheriff puts it in these terms: 'Before Kubernetes I wasnt sleeping at night, literally. I was woken up all the time, because things were down. After Kubernetes, Ive been sleeping at night.'"
</div>
</div>
<section class="section4">
<div class="fullcol">
"The goal was to take all of that out and put it within this new platform weve created with Kubernetes on <a href="https://cloud.google.com/compute/">Google Compute Engine (GCE)</a>," says Sheriff. "So we ended up building piece by piece, in parallel, what was out in Heroku and Compose, in our Kubernetes cluster. Then, literally, just switched configs in the background. So in Heroku we had the app running hitting a Compose database. Wed take the config, change it and make it hit the database that was running in our cluster."
<br><br>
Using this procedure, they were able to migrate piecemeal, without any downtime. The first migration was done during off hours, but to test the limits, the team migrated the second database in the middle of the day, when lots of users were running the application. "We did it," Sheriff says, "and again it was successful. Nobody noticed."
<br><br>
After three weeks of monitoring to make sure everything was running stable, the team migrated the rest of the application into their Kubernetes cluster. And the impact was immediate: On top of cutting monthly costs by a large percentage, says Sheriff, "Running at the same capacity and during our peak time, we were able to horizontally grow. Since we were using our VMs more efficiently with containers, we didn't have to pay extra money at all."
<br><br>
Not only were they saving money, but they were also saving time. "I had a meeting this morning about migrating some applications from one cluster to another," says Josh. "I spent about 2 hours explaining the process. The time I spent actually moving the applications was under 30 seconds! We can move data centers in just incredible amounts of time. If you haven't come from the Kubernetes world you wouldn't believe me." Sheriff puts it in these terms: "Before Kubernetes I wasn't sleeping at night, literally. I was woken up all the time, because things were down. After Kubernetes, I've been sleeping at night."
<br><br>
A small percentage of the applications on GolfNow have been migrated over to the Kubernetes environment. "Our Core Team is rewriting a lot of the .NET applications into <a href="https://www.microsoft.com/net/core">.NET Core</a> [which is compatible with Linux and Docker] so that we can run them within containers," says Sheriff.
<br><br>
Looking ahead, Sheriff and his team want to spend 2017 continuing to build a whole platform around Kubernetes with <a href="https://github.com/drone/drone">Drone</a>, an open-source continuous delivery platform, to make it more developer-centric. "Now they're able to manage configuration, they're able to manage their deployments and things like that, making all these subteams that are now creating all these microservices, be self sufficient," he says. "So it can pull us away from applications and allow us to just make sure the cluster is running and healthy, and then actually migrate that over to our Ops team."
</div>
</section>
<div class="banner5">
<div class="banner5text">
"Having gone from complete newbies to production-ready in three months, the GolfNow team is eager to encourage other companies to follow their lead. 'This is The Six Million Dollar Man of the cloud right now,' adds Josh. 'Just try it out, watch it happen. I feel like the proof is in the pudding when you look at these kinds of application stacks. Theyre faster, theyre more resilient.'"
</div>
</div>
<section class="section5">
<div class="fullcol">
And long-term, Sheriff has an even bigger goal for getting more people into the Kubernetes fold. "Were actually trying to make this platform generic enough so that any of our sister companies can use it if they wish," he says. "Most definitely I think it can be used as a model. I think the way we migrated into it, the way we built it out, are all ways that I think other companies can learn from, and should not be afraid of."
<br><br>
The GolfNow team is also giving back to the Kubernetes community by open-sourcing a bot framework that Josh built. "We noticed that the dashboard user interface is actually moving a lot faster than when we started," says Sheriff. "However we realized what we needed was something that's more of a bot that really helps us administer Kubernetes as a whole through Slack." Josh explains: "With the Kubernetes-Slack integration, you can essentially hook into a cluster and issue commands and edit configurations. We've tried to simplify the security configuration as much as possible. We hope this will be our major thank you to Kubernetes, for everything you've given us."
<br><br>
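The general idea behind that kind of ChatOps integration (a generic sketch, not GolfNow's open-sourced framework; the verb whitelist and command shape are assumptions) is to map a chat message onto a restricted kubectl invocation:

```python
import shlex
import subprocess

# Only allow read-only kubectl verbs from chat; anything else is rejected.
ALLOWED_VERBS = {"get", "describe", "logs"}

def handle_chat_command(text):
    args = shlex.split(text)
    if not args or args[0] not in ALLOWED_VERBS:
        return "command not allowed"
    result = subprocess.run(["kubectl"] + args, capture_output=True, text=True, timeout=30)
    return result.stdout or result.stderr

print(handle_chat_command("get pods -n staging"))
```
<br><br>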
Having gone from complete newbies to production-ready in three months, the GolfNow team is eager to encourage other companies to follow their lead. The lessons they've learned: "You've got to have buy-in from your boss," says Sheriff. "Another big deal is having two to three people dedicated to this type of endeavor. You can't have people who are half in, half out." And if you don't have buy-in from the get go, proving it out will get you there.
<br><br>
"This is The Six Million Dollar Man of the cloud right now," adds Josh. "Just try it out, watch it happen. I feel like the proof is in the pudding when you look at these kinds of application stacks. Theyre faster, theyre more resilient."
</div>
</section>


@@ -0,0 +1,112 @@
---
title: Haufe Group Case Study
case_study_styles: true
cid: caseStudies
css: /css/style_haufegroup.css
---
<div class="banner1">
<h1> CASE STUDY:<img src="/images/haufegroup_logo.png" class="header_logo"><br> <div class="subhead">Paving the Way for Cloud Native for Midsize Companies</div></h1>
</div>
<div class="details">
Company &nbsp;<b>Haufe Group</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Freiburg, Germany</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Media and Software</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1">
<h2>Challenge</h2>
Founded in 1930 as a traditional publisher, Haufe Group has grown into a media and software company with 95 percent of its sales from digital products. Over the years, the company has gone from having "hardware in the basement" to outsourcing its infrastructure operations and IT. More recently, the development of new products, from Internet portals for tax experts to personnel training software, has created demands for increased speed, reliability and scalability. "We need to be able to move faster," says Solution Architect Martin Danielsson. "Adapting workloads is something that we really want to be able to do."
<br>
<br>
<h2>Solution</h2>
Haufe Group began its cloud-native journey when <a href="https://azure.microsoft.com/">Microsoft Azure</a> became available in Europe; the company needed cloud deployments for its desktop apps with bandwidth-heavy download services. "After that, it has been different projects trying out different things," says Danielsson. Two years ago, Holger Reinhardt joined Haufe Group as CTO and rapidly re-oriented the traditional host provider-based approach toward a cloud and API-first strategy.
</div>
<div class="col2">
A core part of this strategy was a strong mandate to embrace infrastructure-as-code across the entire software deployment lifecycle via Docker. The company is now getting ready to go live with two services in production using <a href="https://kubernetes.io/">Kubernetes</a> orchestration on <a href="https://azure.microsoft.com/">Microsoft Azure</a> and <a href="https://aws.amazon.com/">Amazon Web Services</a>. The team is also working on breaking up one of their core Java Enterprise desktop products into microservices to allow for better evolvability and dynamic scaling in the cloud.
<br>
<br>
<h2>Impact</h2>
With the ability to adapt workloads, Danielsson says, teams "will be able to scale down to around half the capacity at night, saving 30 percent of the hardware cost." Plus, shorter release times have had a major impact. "Before, we had to announce at least a week in advance when we wanted to do a release because there was a huge checklist of things that you had to do," he says. "By going cloud native, we have the infrastructure in place to be able to automate all of these things. Now we can get a new release done in half an hour instead of days."
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"Over the next couple of years, people wont even think that much about it when they want to run containers. Kubernetes is going to be the go-to solution."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Martin Danielsson, Solution Architect, Haufe Group</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>More than 80 years ago, Haufe Group was founded as a traditional publishing company, printing books and commentary on paper.</h2> By the 1990s, though, the company's leaders recognized that the future was digital, and to their credit, were able to transform Haufe Group into a media and software business that now gets 95 percent of its sales from digital products. "Among the German companies doing this, we were one of the early adopters," says Martin Danielsson, Solution Architect for Haufe Group.<br><br>
And now they're leading the way for midsize companies embracing cloud-native technology like Kubernetes. "The really big companies like Ticketmaster and Google get it right, and the startups get it right because they're faster," says Danielsson. "We're in this big lump of companies in the middle with a lot of legacy, a lot of structure, a lot of culture that does not easily fit the cloud technologies. We're just 1,500 people, but we have hundreds of customer-facing applications. So we're doing things that will be relevant for many companies of our size or even smaller."<br><br>
Many of those legacy challenges stemmed from simply following the technology trends of the times. "We used to do full DevOps," he says. In the 1990s and 2000s, "that meant that you had your hardware in the basement. And then 10 years ago, the hype of the moment was to outsource application operations, outsource everything, and strip down your IT department to take away the distraction of all these hardware things. That's not our area of expertise. We didn't want to be an infrastructure provider. And now comes the backlash of that."<br><br>
Haufe Group began feeling the pain as they were developing more new products, from Internet portals for tax experts to personnel training software, that have created demands for increased speed, reliability and scalability. "Right now, we have this break in workflows, where we go from writing concepts to developing, handing it over to production and then handing that over to your host provider," he says. "And then when things go bad we have no clue what went wrong. We definitely want to take back control, and we want to move a lot faster. Adapting workloads is something that we really want to be able to do."<br><br>
Those needs led them to explore cloud-native technology. Their first foray into the cloud was doing deployments in <a href="https://azure.microsoft.com/">Microsoft Azure</a>, once it became available in Europe, for desktop products that had built-in download services. Hosting expenses for such bandwidth-heavy services were too high, so the company turned to the cloud. "After that, it has been different projects trying out different things," says Danielsson.
</div>
</section>
<div class="banner3">
<div class="banner3text">
"We have been doing containers for the last two years, and we really got the hang of how they work," says Danielsson. "But it was always for development and test, never in production, because we didnt fully understand how that would work. And to me, Kubernetes was definitely the technology that solved that."
</div>
</div>
<section class="section3">
<div class="fullcol">
Two years ago, Holger Reinhardt joined Haufe Group as CTO and rapidly re-oriented the traditional host provider-based approach toward a cloud and API-first strategy. A core part of this strategy was a strong mandate to embrace infrastructure-as-code across the entire software deployment lifecycle via Docker.
Some experiments went further than others; German regulations about sensitive data proved to be a road block in moving some workloads to Azure and Amazon Web Services. "Due to our history, Germany is really strict with things like personally identifiable data," Danielsson says.<br><br>
These experiments took on new life with the arrival of the Azure Sovereign Cloud for Germany (an Azure clone run by the German T-Systems provider). With the availability of Azure.de—which conforms to Germany's privacy regulations—teams started to seriously consider deploying production loads in Docker into the cloud. "We have been doing containers for the last two years, and we really got the hang of how they work," says Danielsson. "But it was always for development and test, never in production, because we didn't fully understand how that would work. And to me, Kubernetes was definitely the technology that solved that."<br><br>
In parallel, Danielsson had built an API management system with the aim of supporting CI/CD scenarios, aspects of which were missing in off-the-shelf API management products. With a foundation based on <a href="https://getkong.org/">Mashape's Kong</a> gateway, it is open-sourced as <a href="http://wicked.haufe.io/">wicked.haufe.io</a>. He put wicked.haufe.io to use with his product team.<br><br> Otherwise, Danielsson says his philosophy was "don't try to reinvent the wheel all the time. Go for what's there and 99 percent of the time it will be enough. And if you think you really need something custom or additional, think perhaps once or twice again. One of the things that I find so amazing with this cloud-native framework is that everything ties in."<br><br>
Currently, Haufe Group is working on two projects using Kubernetes in production. One is a new mobile application for researching legislation and tax laws. "We needed a way to take out functionality from a legacy core and put an application on top of that with an API gateway—a lot of moving parts that screams containers," says Danielsson. So the team moved the build pipeline away from "deploying to some old, huge machine that you could deploy anything to" and onto a Kubernetes cluster where there would be automatic CI/CD "with feature branches and all these things that were a bit tedious in the past."
</div>
</section>
<div class="banner4">
<div class="banner4text">
"Before, we had to announce at least a week in advance when we wanted to do a release because there was a huge checklist of things that you had to do," says Danielsson. "By going cloud native, we have the infrastructure in place to be able to automate all of these things. Now we can get a new release done in half an hour instead of days."
</div>
</div>
<section class="section4">
<div class="fullcol">
It was a proof of concept effort, and the proof was in the pudding. "Everyone was really impressed at what we accomplished in a week," says Danielsson. "We did these kinds of integrations just to make sure that we got a handle on how Kubernetes works. If you can create optimism and buzz around something, it's half won. And if the developers and project managers know this is working, you're more or less done." Adds Reinhardt: "You need to create some very visible, quick wins in order to overcome the status quo."<br><br>
The impact on the speed of deployment was clear: "Before, we had to announce at least a week in advance when we wanted to do a release because there was a huge checklist of things that you had to do," says Danielsson. "By going cloud native, we have the infrastructure in place to be able to automate all of these things. Now we can get a new release done in half an hour instead of days." <br><br>
The potential impact on cost was another bonus. "Hosting applications is quite expensive, so moving to the cloud is something that we really want to be able to do," says Danielsson. With the ability to adapt workloads, teams "will be able to scale down to around half the capacity at night, saving 30 percent of the hardware cost." <br><br>
Just as importantly, Danielsson says, there's added flexibility: "When we try to move or rework applications that are really crucial, it's often tricky to validate whether the path we want to take is going to work out well. In order to validate that, we would need to reproduce the environment and really do testing, and that's prohibitively expensive and simply not doable with traditional host providers. Cloud native gives us the ability to do risky changes and validate them in a cost-effective way."<br><br>
As word of the two successful test projects spread throughout the company, interest in Kubernetes has grown. "We want to be able to support our developers in running Kubernetes clusters but we're not there yet, so we allow them to do it as long as they're aware that they are on their own," says Danielsson. "So that's why we are also looking at things like [the managed Kubernetes platform] <a href="https://coreos.com/tectonic/">CoreOS Tectonic</a>, <a href="https://azure.microsoft.com/en-us/services/container-service/">Azure Container Service</a>, <a href="https://aws.amazon.com/ecs/">ECS</a>, etc. These kinds of services will be a lot more relevant to midsize companies that want to leverage cloud native but don't have the IT departments or the structure around that."<br><br>
In the next year and a half, Danielsson says the company will be working on moving one of their legacy desktop products, a web app for researching legislation and tax laws originally built in Java Enterprise, onto cloud-native technology. "We're doing a microservice split out right now so that we can independently deploy the different parts," he says. The main website, which provides free content for customers, is also moving to cloud native.
</div>
</section>
<div class="banner5">
<div class="banner5text">
"the execution of a strategy requires alignment of culture, structure and technology. Only if those three dimensions are aligned can you successfully execute a transformation into microservices and cloud-native architectures. And it is only then that the Cloud will pay the dividends in much faster speeds in product innovation and much lower operational costs."
</div>
</div>
<section class="section5">
<div class="fullcol">
But with these goals, Danielsson believes there are bigger cultural challenges that need to be constantly addressed. The move to new technology, not to mention a shift toward DevOps, means a lot of change for employees. "The roles were rather fixed in the past," he says. "You had developers, you had project leads, you had testers. And now you get into these really, really important things like test automation. Testers aren't actually doing click testing anymore, and they have to write automated testing. And if you really want to go full-blown CI/CD, all these little pieces have to work together so that you get the confidence to do a check in, and know this check in is going to land in production, because if I messed up, some test is going to break. This is a really powerful thing because whatever you do, whenever you merge something into the trunk or to the master, this is going live. And that's where you either get the people or they run away screaming."
Danielsson understands that it may take some people much longer to get used to the new ways.<br><br>
"Culture is nothing that you can force on people," he says. "You have to live it for yourself. You have to evangelize. You have to show the advantages time and time again: This is how you can do it, this is what you get from it." To that end, his team has scheduled daylong workshops for the staff, bringing in outside experts to talk about everything from API to Devops to cloud. <br><br>
For every person who runs away screaming, many others get drawn in. "Get that foot in the door and make them really interested in this stuff," says Danielsson. "Usually it catches on. We have people you never would have expected chanting, 'Docker Docker Docker' now. It's cool to see them realize that there is a world outside of their Python libraries. It's awesome to see them really work with Kubernetes."<br><br>
Ultimately, Reinhardt says, "the execution of a strategy requires alignment of culture, structure and technology. Only if those three dimensions are aligned can you successfully execute a transformation into microservices and cloud-native architectures. And it is only then that the Cloud will pay the dividends in much faster speeds in product innovation and much lower operational costs."
</div>
</section>


@@ -0,0 +1,433 @@
---
title: 华为案例分析
case_study_styles: true
cid: caseStudies
css: /css/style_huawei.css
---
<!--
---
title: Huawei Case Study
case_study_styles: true
cid: caseStudies
css: /css/style_huawei.css
---
-->
<div class="banner1">
<h1> 案例分析:<img src="/images/huawei_logo.png" class="header_logo"><br> <div class="subhead">以用户和供应商身份拥抱云原生</div></h1>
<!--
<h1> CASE STUDY:<img src="/images/huawei_logo.png" class="header_logo"><br> <div class="subhead">Embracing Cloud Native as a User and a Vendor</div></h1>
-->
</div>
<div class="details">
公司 &nbsp;<b>华为</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;地点 &nbsp;<b>中国深圳</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;产业 &nbsp;<b>通信设备</b>
<!--
Company &nbsp;<b>Huawei</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Shenzhen, China</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Telecommunications Equipment</b>
-->
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1">
<h2>挑战</h2>
<!--
<h2>Challenge</h2>
-->
华为是世界上最大的电信设备制造商,拥有超过 18 万名员工。
<!--
A multinational company that's the largest telecommunications equipment manufacturer in the world,
Huawei has more than 180,000 employees.
-->
为了支持华为在全球的快速业务发展,<a href="http://www.huawei.com/">华为</a>内部 IT 部门有 8 个数据中心,
这些数据中心在 100K+ VMs 上运行了 800 多个应用程序,服务于这 18 万用户。
<!--
In order to support its fast business development around the globe,
<a href="http://www.huawei.com/">Huawei</a> has eight data centers for its internal I.T. department,
which have been running 800+ applications in 100K+ VMs to serve these 180,000 users.
-->
随着新应用程序的快速增长,基于 VM 的应用程序的管理和部署的成本和效率都成为业务敏捷性的关键挑战。
<!--
With the rapid increase of new applications, the cost and efficiency of management and
deployment of VM-based apps all became critical challenges for business agility.
-->
该公司首席软件架构师、开源社区总监侯培新表示:
“这是一个超大的分布式系统,因此我们发现,以更一致的方式管理所有任务始终是一个挑战。
我们希望进入一种更敏捷、更得体的实践”。
<!--
"Its very much a distributed system so we found that managing all of the tasks
in a more consistent way is always a challenge," says Peixin Hou,
the companys Chief Software Architect and Community Director for Open Source.
"We wanted to move into a more agile and decent practice."
-->
</div>
<div class="col2">
<h2>解决方案</h2>
<!--
<h2>Solution</h2>
-->
在决定使用容器技术后,华为开始将内部 IT 部门的应用程序迁移到<a href="http://kubernetes.io/"> Kubernetes </a>上运行。
到目前为止,大约 30% 的应用程序已经转移为云原生程序。
<!--
After deciding to use container technology, Huawei began moving the internal I.T. department's applications
to run on <a href="http://kubernetes.io/">Kubernetes</a>.
So far, about 30 percent of these applications have been transferred to cloud native.
-->
<br>
<br>
<h2>影响</h2>
<!--
<h2>Impact</h2>
-->
“到 2016 年底,华为的内部 IT 部门使用基于 Kubernetes 的平台即服务PaaS解决方案管理了 4000 多个节点和数万个容器。
全局部署周期从一周缩短到几分钟,应用程序交付效率提高了 10 倍”。
<!--
"By the end of 2016, Huaweis internal I.T. department managed more than 4,000 nodes with tens of thousands containers
using a Kubernetes-based Platform as a Service (PaaS) solution," says Hou.
"The global deployment cycles decreased from a week to minutes, and the efficiency of application delivery has been improved 10 fold."
-->
对于底线,侯培新表示,“我们还看到运营开支大幅削减,在某些情况下可削减 20% 到 30%,我们认为这对我们的业务非常有帮助”。
<!--
For the bottom line, he says, "We also see significant operating expense spending cut, in some circumstances 20-30 percent,
which we think is very helpful for our business."
-->
鉴于华为在内部取得的成果以及来自外部客户的需求,公司还将这些技术构建到了<a href="http://developer.huawei.com/ict/en/site-paas"> FusionStage™ </a>中,
它被作为一套 PaaS 解决方案提供给其客户。
<!--
Given the results Huawei has had internally and the demand it is seeing externally the company has also built the technologies
into <a href="http://developer.huawei.com/ict/en/site-paas">FusionStage™</a>, the PaaS solution it offers its customers.
-->
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
“如果你是一个供应商,为了说服你的客户,你应该自己使用它。
幸运的是,因为华为有很多员工,我们可以利用这种技术来展示我们所能构建的云的规模。”
<!--
"If youre a vendor, in order to convince your customer, you should use it yourself.
Luckily because Huawei has a lot of employees,
we can demonstrate the scale of cloud we can build using this technology."
-->
<br style="height:25px">
<span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;">
<br>- 侯培新,首席软件架构师、开源社区总监
<!--
<br>- Peixin Hou, chief software architect and community director for open source
-->
</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
华为的 Kubernetes 之旅始于一位开发者。
<!--
Huawei's Kubernetes journey began with one developer.
-->
两年前,这家网络和电信巨头雇佣的一名工程师对<a href="http://kubernetes.io/"> Kubernetes </a>
这一跨主机集群的管理应用程序容器的技术产生了兴趣,并开始为其开源社区作出贡献。
<!--
Over two years ago, one of the engineers employed by the networking and telecommunications giant became interested
in <a href="http://kubernetes.io/">Kubernetes</a>,
the technology for managing application containers across clusters of hosts,
and started contributing to its open source community.
-->
随着技术和社区的发展,他不断地将这门技术告诉他的经理们。<br><br>
<!--
As the technology developed and the community grew, he kept telling his managers about it.<br><br>
-->
与此同时,华为也在为其内部的企业 IT 部门寻找更好的编排系统,该系统应该支持每一个业务的流程处理。
<!--
And as fate would have it, at the same time,
Huawei was looking for a better orchestration system for its internal enterprise I.T. department,
which supports every business flow processing.
-->
华为首席软件架构师、开源社区总监侯培新表示,
“我们在全球拥有逾 18 万名员工,内部流程复杂,所以这个部门可能每周都需要开发一些新的应用程序。
<!--
"We have more than 180,000 employees worldwide, and a complicated internal procedure,
so probably every week this department needs to develop some new applications," says Peixin Hou,
Huawei's Chief Software Architect and Community Director for Open Source.
-->
我们的 IT 部门经常需要启动数万个容器,任务要跨越全球数千个节点。
这是一个超大的分布式的系统,所以我们发现以更一致的方式管理所有的任务总是一个挑战”。<br><br>
<!--
"Very often our I.T. departments need to launch tens of thousands of containers,
with tasks running across thousands of nodes across the world.
It's very much a distributed system, so we found that managing all of the tasks
in a more consistent way is always a challenge."<br><br>
-->
过去,华为曾使用虚拟机来封装应用程序,但是,“每次我们启动虚拟机时”,侯培新说,
“无论是因为它是一项新服务,还是因为它是一项由于节点功能异常而被关闭的服务,都需要花费大量时间”。
<!--
In the past, Huawei had used virtual machines to encapsulate applications,
but "every time when we start a VM," Hou says,
"whether because its a new service or because it was a service that was shut down
because of some abnormal node functioning, it takes a lot of time."
-->
华为转向了容器化,所以是时候尝试 Kubernetes 了。
采纳了这位工程师的建议花费了一年的时间,这个过程“不是一蹴而就的”,侯说,
<!--
Huawei turned to containerization, so the timing was right to try Kubernetes.
It took a year to adopt that engineer's suggestion (the process "is not overnight," says Hou)
-->
但一旦投入使用“Kubernetes 基本上解决了我们的大部分问题。
以前,部署时间大约需要一周,现在只需几分钟。
<!--
but once in use, he says, "Kubernetes basically solved most of our problems.
Before, the time of deployment took about a week, now it only takes minutes.
-->
开发人员非常高兴。使用 Kubernetes 的那个部门也十分高兴”。<br><br>
<!--
The developers are happy. That department is also quite happy."<br><br>
-->
侯培新看到了使用这项技术给公司带来的巨大好处,
“Kubernetes 为基于云的应用程序带来了敏捷性、扩展能力和 DevOps 实践”,他说,
<!--
Hou sees great benefits to the company that come with using this technology:
"Kubernetes brings agility, scale-out capability,
and DevOps practice to the cloud-based applications," he says.
-->
“它为我们提供了自定义调度体系结构的能力,这使得容器任务之间的关联性成为可能,从而提高了效率。
它支持多种容器格式,同时广泛支持各种容器网络解决方案和容器存储方案”。
<!--
"It provides us with the ability to customize the scheduling architecture,
which makes possible the affinity between container tasks that gives greater efficiency.
It supports multiple container formats. It has extensive support for various container
networking solutions and container storage."
-->
</div>
</section>
<div class="banner3">
<div class="banner3text">
“Kubernetes 基本上解决了我们的大部分问题。
以前,部署时间大约需要一周,现在只需几分钟。
开发人员很高兴。使用 Kubernetes 的部门也很高兴。”
<!--
"Kubernetes basically solved most of our problems.
Before, the time of deployment took about a week, now it only takes minutes.
The developers are happy. That department is also quite happy."
-->
</div>
</div>
<section class="section3">
<div class="fullcol">
最重要的是,这对底线有影响。侯培新说,
“我们还看到,在某些情况下,运营开支会大幅削减 20% 到 30%,这对我们的业务非常有帮助”。<br><br>
<!--
And not least of all, there's an impact on the bottom line.
Says Hou: "We also see significant operating expense spending cut in some circumstances 20-30 percent,
which is very helpful for our business."<br><br>
-->
华为对这些初步结果感到满意,并看到客户对云原生技术的需求,因此加大了 Kubernetes 的投入。
2016 年春,公司不仅成为用户,而且成为了供应商。<br><br>
<!--
Pleased with those initial results, and seeing a demand for cloud native technologies from its customers,
Huawei doubled down on Kubernetes.
In the spring of 2016, the company became not only a user but also a vendor.<br><br>
-->
“我们构建了 Kubernetes 技术解决方案”,侯培新说,
指的是华为的<a href="http://developer.huawei.com/ict/en/site-paas"> FusionStage™ </a> PaaS 输出。
<!--
"We built the Kubernetes technologies into our solutions," says Hou, referring to Huaweis
<a href="http://developer.huawei.com/ict/en/site-paas">FusionStage™</a> PaaS offering.
-->
“我们的客户,从非常大的电信运营商到银行,都喜欢云原生的想法。他们喜欢 Kubernetes 的技术。
但是他们需要花费大量的时间来分解他们的应用程序,将它们转换为微服务体系结构。
作为解决方案提供者,我们帮助他们。
<!--
"Our customers, from very big telecommunications operators to banks, love the idea of cloud native.
They like Kubernetes technology. But they need to spend a lot of time to decompose their applications
to turn them into microservice architecture, and as a solution provider, we help them.
-->
我们已经开始与一些中国银行合作我们看到中国移动China Mobile和德国电信Deutsche Telekom等客户对我们很感兴趣”。<br><br>
<!--
We've started to work with some Chinese banks, and we see a lot of interest from our customers
like <a href="http://www.chinamobileltd.com/">China Mobile</a> and
<a href="https://www.telekom.com/en">Deutsche Telekom</a>."<br><br>
-->
“如果你是一个用户,你就仅仅是个用户”,侯培新补充道,“但如果你是一个供应商,为了说服你的客户,你应该自己使用它。
<!--
"If youre just a user, youre just a user," adds Hou.
"But if youre a vendor, in order to even convince your customers, you should use it yourself.
-->
幸运的是,因为华为有很多员工,我们可以利用这种技术来展示我们所能构建的云的规模,向客户提供智慧服务”。
<!--
Luckily because Huawei has a lot of employees, we can demonstrate the scale of cloud we can build using this technology.
We provide customer wisdom."
-->
尽管华为拥有自己的私有云,但其许多客户使用华为的解决方案运行跨云应用程序。
这是一个很大的卖点,大多数公共云提供商现在都支持 Kubernetes。
侯培新说,“这使得跨云转换比其他解决方案更容易”。<br><br>
<!--
While Huawei has its own private cloud, many of its customers run cross-cloud applications using Huawei's solutions.
It's a big selling point that most of the public cloud providers now support Kubernetes.
"This makes the cross-cloud transition much easier than with other solutions," says Hou.<br><br>
-->
</div>
</section>
<div class="banner4">
<div class="banner4text">
“我们的客户,从非常大的电信运营商到银行,都喜欢云原生的想法。他们喜欢 Kubernetes 的技术。
但是他们需要花很多时间来分解他们的应用程序,把它们变成微服务体系结构,作为一个解决方案提供商,我们帮助他们。”
<!--
"Our customers, from very big telecommunications operators to banks, love the idea of cloud native.
They like Kubernetes technology. But they need to spend a lot of time to decompose their applications
to turn them into microservice architecture, and as a solution provider, we help them."
-->
</div>
</div>
<section class="section4">
<div class="fullcol">
在华为内部,一旦他的团队完成内部业务流程部门向 Kubernetes 的转型,侯培新希望说服更多部门转向云原生开发和实践。
<!--
Within Huawei itself, once his team completes the transition of the internal business procedure department to Kubernetes,
Hou is looking to convince more departments to move over to the cloud native development cycle and practice.
-->
“我们有很多软件开发人员,所以我们将为他们提供我们的平台作为服务解决方案,我们自己的产品”,
他说,“我们希望在他们的迭代周期中看到显著的成本削减”。<br><br>
<!--
"We have a lot of software developers,
so we will provide them with our platform as a service solution, our own product," he says.
"We would like to see significant cuts in their iteration cycle."<br><br>
-->
在见证了华为最开始的向 Kubernetes 的转型之后,侯培新为其他考虑该技术的公司提供了建议,
“当你开始设计应用程序的架构时,首先考虑云原生,然后再考虑微服务架构”,他说,“我想你会从中受益”。<br><br>
<!--
Having overseen the initial move to Kubernetes at Huawei, Hou has advice for other companies considering the technology:
"When you start to design the architecture of your application, think about cloud native,
think about microservice architecture from the beginning," he says.
"I think you will benefit from that."<br><br>
-->
但是如果您已经有了遗留应用程序,“首先从这些应用程序中一些对微服务友好的部分开始,
这些部分相对容易分解成更简单的部分,并且相对轻量级”,侯培新说,
<!--
But if you already have legacy applications, "start from some microservice-friendly part of those applications first,
parts that are relatively easy to be decomposed into simpler pieces and are relatively lightweight," Hou says.
-->
“不要从一开始就认为我想在几天内将整个架构或所有东西都迁移到微服务中。
不要把它当作目标。你应该循序渐进地做这件事。
我想说的是,对于遗留应用程序,并不是每个部分都适合微服务架构”。<br><br>
<!--
"Dont think from day one that within how many days I want to move the whole architecture,
or move everything into microservices. Dont put that as a kind of target.
You should do it in a gradual manner. And I would say for legacy applications,
not every piece would be suitable for microservice architecture. No need to force it."<br><br>
-->
毕竟,尽管侯培新对华为的 Kubernetes 充满热情,但他估计,
“未来 10 年,或许 80% 的工作负载可以分布式地在云原生环境中运行,但仍然有 20% 不是,但是没关系。
如果我们能够让 80% 的工作负载真正是云原生的、敏捷的,那么最终会有一个更好的世界”。
<!--
After all, as enthusiastic as Hou is about Kubernetes at Huawei, he estimates that "in the next 10 years,
maybe 80 percent of the workload can be distributed, can be run on the cloud native environments.
There's still 20 percent that's not, but it's fine.
If we can make 80 percent of our workload really be cloud native, to have agility,
it's a much better world at the end of the day."
-->
</div>
</section>
<div class="banner5">
<div class="banner5text">
“未来 10 年,可能 80% 的工作负载可以分布式地在云原生环境中运行,但仍然有 20% 不是,不过没关系。
如果我们能够让 80% 的工作负载真正是云原生的、敏捷的,那么最终会有一个更好的世界。”
<!--
"In the next 10 years, maybe 80 percent of the workload can be distributed,
can be run on the cloud native environments.
There's still 20 percent that's not, but it's fine.
If we can make 80 percent of our workload really be cloud native, to have agility,
it's a much better world at the end of the day."
-->
</div>
</div>
<section class="section5">
<div class="fullcol">
在不久的将来,侯培新期待着围绕着 Kubernetes 开发的新功能,尤其是华为正在开发的那些功能。
<!--
In the nearer future, Hou is looking forward to new features that are being developed around Kubernetes,
not least of all the ones that Huawei is contributing to.
-->
华为的工程师已经在为联邦功能(将多个 Kubernetes 集群放在一个框架中进行无缝管理)、调度、容器网络和存储,以及刚刚发布的一项名为
<a href="http://containerops.org/"> Container Ops </a>的技术工作,这是一个 DevOps 管道引擎。
<!--
Huawei engineers have worked on the federation feature
(which puts multiple Kubernetes clusters in a single framework to be managed seamlessly), scheduling,
container networking and storage, and a just-announced technology called
<a href="http://containerops.org/">Container Ops</a>, which is a DevOps pipeline engine.
-->
“这将把每个 DevOps 作业放到一个容器中”,他解释说,“这种容器机制使用 Kubernetes 运行,也用于测试 Kubernetes。
有了这种机制,我们可以比以前更容易地创建、共享和管理容器化 DevOps 作业”。<br><br>
<!--
"This will put every DevOps job into a container," he explains.
"And then this container mechanism is running using Kubernetes, but is also used to test Kubernetes.
With that mechanism, we can make the containerized DevOps jobs be created,
shared and managed much more easily than before."<br><br>
-->
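下面是一个示意性的 Kubernetes Job 清单,用来说明“把一个 DevOps 作业打包成容器、再交给 Kubernetes 运行”这一思路;它并不是 Container Ops 本身定义作业的方式,其中的名称、镜像和命令都是假设的示例。

```yaml
# 示意草图:一个被容器化的 DevOps 作业,由 Kubernetes 以 Job 的形式运行
# 名称、镜像和命令均为假设示例
apiVersion: batch/v1
kind: Job
metadata:
  name: build-and-test            # 假设的作业名称
spec:
  backoffLimit: 2                 # 失败后最多重试 2 次
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: build-and-test
          image: registry.example.com/devops/build-and-test:latest   # 假设的镜像地址
          command: ["/bin/sh", "-c", "make build && make test"]      # 假设的流水线步骤
```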
尽管如此,侯培新认为这项技术只是实现其全部潜力的一半。
首先,也是最重要的,他想要扩大它能够编排的规模,
这对于华为这样的超大规模公司以及它的一些客户来说非常重要。<br><br>
<!--
Still, Hou sees this technology as only halfway to its full potential.
First and foremost, he'd like to expand the scale it can orchestrate,
which is important for supersized companies like Huawei as well as some of its customers.<br><br>
-->
侯培新自豪地指出,在华为第一位工程师成为 Kubernetes 的贡献者和传道者两年后,华为现在已是这个社区的主要贡献者之一。
他说,“我们发现,你对社区的贡献越大,你得到的回报也就越多”。
<!--
Hou proudly notes that two years after that first Huawei engineer became a contributor to and evangelist for Kubernetes,
Huawei is now a top contributor to the community. "We've learned that the more you contribute to the community,"
he says, "the more you get back."
-->
</div>
</section>

[新增图片ibm_featured_logo SVG 徽标]
---
title: 案例研究IBM
case_study_styles: true
cid: caseStudies
css: /css/style_ibm.css
---
<!-- <div class="banner1" style="background-image: url('/images/CaseStudy_ibm_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/ibm_logo.png" class="header_logo" style="width:10%"><br> <div class="subhead">Building an Image Trust Service on Kubernetes with Notary and TUF</div></h1>
</div> -->
<div class="banner1">
<h1> 案例研究:<img src="/images/ibm_logo.png" width="18%" style="margin-bottom:-5px;margin-left:10px;"><br> <div class="subhead">在 Kubernetes 上使用 Notary 和 TUF 建立镜像信任服务</div></h1>
</div>
<div class="details">
公司 &nbsp;<b>IBM</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;位置 &nbsp;<b>阿蒙克, 纽约</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;行业 &nbsp;<b>云计算</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1">
<h2>挑战</h2>
<!-- <a href="https://www.ibm.com/cloud/">IBM Cloud</a> offers public, private, and hybrid cloud functionality across a diverse set of runtimes from its OpenWhisk-based function as a service (FaaS) offering, managed <a href="https://kubernetes.io">Kubernetes</a> and containers, to <a href="https://www.cloudfoundry.org">Cloud Foundry</a> platform as a service (PaaS). These runtimes are combined with the power of the companys enterprise technologies, such as MQ and DB2, its modern artificial intelligence (AI) Watson, and data analytics services. Users of IBM Cloud can exploit capabilities from more than 170 different cloud native services in its catalog, including capabilities such as IBMs Weather Company API and data services. In the later part of 2017, the IBM Cloud Container Registry team wanted to build out an image trust service. -->
<a href="https://www.ibm.com/cloud/">IBM Cloud</a> 提供公共、私有和混合云功能,包括基于 OpenWhisk 的服务 FaaS、托管于 <a href="https://kubernetes.io">Kubernetes</a> 和容器,以及 <a href="https://www.cloudfoundry.org">Cloud Foundry</a> 服务 PaaS 的各种运行时。这些运行时与公司企业技术(如 MQ 和 DB2、其现代人工智能 AI Watson 和数据分析服务的强大功能相结合。IBM Cloud 用户可以使用其目录中 170 多个不同云原生服务的功能,包括 IBM 的气象公司 API 和数据服务等功能。在 2017 年后期IBM 云容器托管团队希望构建镜像信任服务。<br><br>
<h2>解决方案</h2>
<!-- The work on this new service culminated with its public availability in the IBM Cloud in February 2018. The image trust service, called Portieris, is fully based on the <a href="https://www.cncf.io">Cloud Native Computing Foundation (CNCF)</a> open source project <a href="https://github.com/theupdateframework/notary">Notary</a>, according to Michael Hough, a software developer with the IBM Cloud Container Registry team. Portieris is a Kubernetes admission controller for enforcing content trust. Users can create image security policies for each Kubernetes namespace, or at the cluster level, and enforce different levels of trust for different images. Portieris is a key part of IBMs trust story, since it makes it possible for users to consume the companys Notary offering from within their IKS clusters. The offering is that Notary server runs in IBMs cloud, and then Portieris runs inside the IKS cluster. This enables users to be able to have their IKS cluster verify that the image they're loading containers from contains exactly what they expect it to, and Portieris is what allows an IKS cluster to apply that verification. -->
这项新服务的开发工作最终于 2018 年 2 月在 IBM 云中公开发布。IBM 云容器托管团队的软件开发者 Michael Hough 说,这项名为 Portieris 的镜像信任服务完全基于 <a href="https://www.cncf.io">Cloud Native Computing Foundation (CNCF)</a> 的开源项目 <a href="https://github.com/theupdateframework/notary">Notary</a>。Portieris 是一个用于强制执行内容信任的 Kubernetes 准入控制器。用户可以为每个 Kubernetes 命名空间或在集群级别创建镜像安全策略,并为不同的镜像强制实施不同级别的信任。Portieris 是 IBM 信任体系的关键部分,因为它使用户能够在自己的 IKS 集群中使用公司提供的 Notary 服务。具体来说Notary 服务器运行在 IBM 的云中,而 Portieris 运行在 IKS 集群内。这使用户的 IKS 集群能够验证他们用来启动容器的镜像是否正是他们所期望的内容,而 Portieris 正是让 IKS 集群能够实施这种验证的组件。
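下面是一个示意性的镜像策略清单,用来说明上文提到的“按命名空间创建镜像安全策略、只信任经签名的镜像”的思路;其中的 apiVersion、字段名、命名空间和仓库地址都是假设的示例,并非取自本案例,实际字段请以 Portieris 项目文档为准。

```yaml
# 示意草图:一个只在 production 命名空间生效的镜像策略(字段与取值均为假设示例)
apiVersion: portieris.cloud.ibm.com/v1      # 假设的 API 版本
kind: ImagePolicy
metadata:
  name: allow-signed-images-only            # 假设的策略名称
  namespace: production                     # 策略仅作用于该命名空间
spec:
  repositories:
    - name: "us.icr.io/my-team/*"           # 假设的镜像仓库通配符
      policy:
        trust:
          enabled: true                     # 只允许部署经 Notary 签名的镜像
```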
</div>
<div class="col2">
<h2>影响</h2>
<!-- IBM's intention in offering a managed Kubernetes container service and image registry is to provide a fully secure end-to-end platform for its enterprise customers. "Image signing is one key part of that offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem," Hough says. The company had not been offering image signing before, and Notary is the tool it used to implement that capability. "We had a multi-tenant Docker Registry with private image hosting," Hough says. "The Docker Registry uses hashes to ensure that image content is correct, and data is encrypted both in flight and at rest. But it does not provide any guarantees of who pushed an image. We used Notary to enable users to sign images in their private registry namespaces if they so choose." -->
IBM 提供托管 Kubernetes 容器服务和镜像托管服务的目的,是为其企业客户提供完全安全的端到端平台。Hough 说:“镜像签名是该产品的关键部分之一,我们的容器托管团队将 Notary 视为在当前 Docker 和容器生态系统中实现该功能的实际方式。”该公司以前没有提供镜像签名功能Notary 是它用来实现该功能的工具。“我们有一个多租户 Docker 托管服务,具有私有镜像托管功能,”Hough 说。“Docker 托管使用哈希值来确保镜像内容正确,并且数据在传输和静态存储时都进行了加密。但它无法保证镜像是由谁推送的。我们使用 Notary 来让用户可以按自己的意愿在其私有镜像仓库命名空间中对镜像进行签名。”
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
<!-- "We see CNCF as a safe haven for cloud native open source, providing stability, longevity, and expected maintenance for member projects—no matter the originating vendor or project."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- Michael Hough, a software developer with the IBM Container Registry team</span> -->
“我们将 CNCF 视为云原生开源的安全避难所,为成员项目提供稳定性、长久的生命力和可预期的维护,无论项目最初来自哪个供应商或项目。”<br><br><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;">- Michael Hough, IBM 容器托管团队软件开发人员</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<!-- <h2>Docker had already created the Notary project as an implementation of <a href="https://github.com/theupdateframework/specification" style="text-decoration:underline">The Update Framework (TUF)</a>, and this implementation of TUF provided the capabilities for Docker Content Trust.</h2> "After contribution to CNCF of both TUF and Notary, we perceived that it was becoming the de facto standard for image signing in the container ecosystem", says Michael Hough, a software developer with the IBM Cloud Container Registry team. -->
<h2>Docker 已经创建了 Notary 项目作为 <a href="https://github.com/theupdateframework/specification" style="text-decoration:underline">The Update Framework (TUF)</a> 的实现TUF 的这一实现为 Docker 内容信任提供了底层能力。</h2> IBM 云容器托管团队的软件开发者 Michael Hough 说:“在 TUF 和 Notary 都被贡献给 CNCF 之后,我们发现它正在成为容器生态系统中镜像签名的事实标准。”<br><br>
<!-- The key reason for selecting Notary was that it was already compatible with the existing authentication stack IBMs container registry was using. So was the design of TUF, which does not require the registry team to have to enter the business of key management. Both of these were "attractive design decisions that confirmed our choice of Notary," he says. -->
选择 Notary 的关键原因是它已经与 IBM 的容器托管正在使用的现有身份验证技术兼容。TUF 的设计也是如此,它不要求托管团队必须涉足密钥管理业务。他说,这两项都是“有吸引力的设计决定,证实了我们对 Notary 的选择是正确的。”<br><br>
<!-- The introduction of Notary to implement image signing capability in IBM Cloud encourages increased security across IBM's cloud platform, "where we expect it will include both the signing of official IBM images as well as expected use by security-conscious enterprise customers," Hough says. "When combined with security policy implementations, we expect an increased use of deployment policies in CI/CD pipelines that allow for fine-grained control of service deployment based on image signers." -->
在 IBM Cloud 中引入 Notary 来实现镜像签名能力,有助于提高整个 IBM 云平台的安全性。“我们预计这既包括对 IBM 官方镜像进行签名,也包括有安全意识的企业客户的使用,”Hough 说。“与安全策略的实现结合使用时,我们预计 CI/CD 流水线中会更多地使用部署策略,以便根据镜像签名者对服务部署进行精细控制。”
<!-- The availability of image signing "is a huge benefit to security-conscious customers who require this level of image provenance and security," Hough says. "With our IBM Cloud Kubernetes as-a-service offering and the admission controller we have made available, it allows both IBM services as well as customers of the IBM public cloud to use security policies to control service deployment." -->
Hough 说,镜像签名能力的推出“对于需要这种级别的镜像来源追溯和安全性的客户来说,是一个巨大的好处”。“借助我们的 IBM 云 Kubernetes 托管服务以及我们提供的准入控制器,IBM 自身的服务以及 IBM 公共云的客户都可以使用安全策略来控制服务部署。”
</div>
</section>
<div class="banner3">
<div class="banner3text">
<!-- "Image signing is one key part of our Kubernetes container service offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem"<span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br><br>- Michael Hough, a software developer with the IBM Cloud Container Registry team</span> -->
“镜像签名是我们 Kubernetes 容器服务的关键部分之一,我们的容器托管团队将 Notary 视为在当前 Docker 和容器生态系统中实现该功能的实际方式。”<br><br><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;">- Michael Hough, IBM 容器托管团队软件开发人员</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
<!-- Now that the Notary-implemented service is generally available in IBMs public cloud as a component of its existing IBM Cloud Container Registry, it is deployed as a highly available service across five IBM Cloud regions. This high-availability deployment has three instances across two zones in each of the five regions, load balanced with failover support. "We have also deployed it with end-to-end TLS support through to our back-end IBM Cloudant persistence storage service," Hough says. -->
现在,基于 Notary 实现的服务已作为现有 IBM 云容器托管的一个组件在 IBM 公共云中正式提供,它被部署为横跨五个 IBM 云区域的高可用服务。此高可用部署在五个区域中的每个区域内,跨两个可用区运行三个实例,并通过负载均衡实现故障转移。Hough 说:“我们还部署了端到端的 TLS 支持,一直延伸到后端的 IBM Cloudant 持久化存储服务。”<br><br>
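下面是一个示意性的清单片段,用来说明在 Kubernetes 中如何用拓扑分布约束把 3 个副本分散到不同可用区,从而获得类似上文所述“每个区域内跨可用区运行三个实例”的高可用效果;其中的名称、标签和镜像都是假设的示例,并非 IBM 在本案例中的实际部署方式。

```yaml
# 示意草图:把 3 个副本按可用区分散部署(名称与镜像均为假设示例)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: notary-server                       # 假设的部署名称
spec:
  replicas: 3                               # 每个区域运行 3 个实例
  selector:
    matchLabels:
      app: notary-server
  template:
    metadata:
      labels:
        app: notary-server
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone    # 按可用区分散调度
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: notary-server
      containers:
        - name: notary-server
          image: registry.example.com/notary-server:latest   # 假设的镜像地址
```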
<!-- The IBM team has created and open sourced a Kubernetes admission controller called Portieris, which uses Notary signing information combined with customer-defined security policies to control image deployment into their cluster. "We are hoping to drive adoption of Portieris through its use of our Notary offering," Hough says. -->
IBM 团队创建并开源了名为 Portieris 的 Kubernetes 准入控制器,该控制器使用 Notary 签名信息与客户定义的安全策略相结合,以控制将镜像部署到集群中。“我们希望通过使用我们的 Notary 服务来推动 Portieris 的使用,” Hough 说。<br><br>
<!-- IBM has been a key player in the creation and support of open source foundations, including CNCF. Todd Moore, IBM's vice president of Open Technology, is the current CNCF governing board chair and a number of IBMers are active across many of the CNCF member projects. -->
IBM 一直是开源基金会(包括 CNCF创建和支持方面的重要参与者。IBM 开放技术副总裁 Todd Moore 是现任 CNCF 董事会主席,许多 IBM 员工活跃于众多 CNCF 成员项目中。
</div>
</section>
<div class="banner4">
<div class="banner4text">
<!-- "There are new projects addressing these challenges, including within CNCF. We will definitely be following these advancements with interest. We found the Notary community to be an active and friendly community open to changes, such as our addition of a CouchDB backend for persistent storage." <span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br><br>- Michael Hough, a software developer with the IBM Cloud Container Registry team</span> -->
“有新项目应对这些挑战,包括在 CNCF 内。我们一定会饶有兴趣地关注这些进步。我们发现 Notary 社区是一个积极友好的社区,对变化持开放态度,例如我们为持久存储添加的 CouchDB 后端。”<br><br> <span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;">- Michael Hough, IBM 容器托管团队软件开发人员</span>
</div>
</div>
<section class="section4">
<div class="fullcol">
<!-- The company has used other CNCF projects <a href="https://containerd.io">containerd</a>, <a href="https://www.envoyproxy.io">Envoy</a>, <a href="https://prometheus.io">Prometheus</a>, <a href="https://grpc.io">gRPC</a>, and <a href="https://github.com/containernetworking">CNI</a>, and is looking into <a href="https://github.com/spiffe">SPIFFE</a> and <a href="https://github.com/spiffe/spire">SPIRE</a> as well for potential future use. -->
该公司已经使用的 CNCF 项目有 <a href="https://containerd.io">containerd</a><a href="https://www.envoyproxy.io">Envoy</a><a href="https://prometheus.io">Prometheus</a><a href="https://grpc.io">gRPC</a><a href="https://github.com/containernetworking">CNI</a>,而且正在探索 <a href="https://github.com/spiffe">SPIFFE</a><a href="https://github.com/spiffe/spire">SPIRE</a> 在未来的潜在可用性。<br><br>
<!-- What advice does Hough have for other companies that are looking to deploy Notary or a cloud native infrastructure? -->
对于希望部署 Notary 或云原生基础架构的其他公司Hough 有何建议?<br><br>
<!-- "While this is true for many areas of cloud native infrastructure software, we found that a high-availability, multi-region deployment of Notary requires a solid implementation to handle certificate management and rotation," he says. "There are new projects addressing these challenges, including within CNCF. We will definitely be following these advancements with interest. We found the Notary community to be an active and friendly community open to changes, such as our addition of a CouchDB backend for persistent storage." -->
“虽然这一点对云原生基础架构软件的许多领域都成立,但我们发现,高可用、多区域的 Notary 部署需要扎实的实现来处理证书管理和轮换,”他说。“有新项目应对这些挑战,包括在 CNCF 内。我们一定会饶有兴趣地关注这些进步。我们发现 Notary 社区是一个积极友好的社区,对变化持开放态度,例如我们为持久存储添加的 CouchDB 后端。”
</div>
</section>

---
title: LivePerson
content_url: https://www.openstack.org/videos/video/running-kubernetes-on-openstack-at-liveperson
---

---
title: Monzo
content_url: https://youtu.be/YkOY7DgXKyw
---

---
title: 案例研究NetEase
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
---
<!-- <div class="banner1" style="background-image: url('/images/CaseStudy_netease_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/netease_logo.png" class="header_logo" style="width:22%;margin-bottom:-1%"><br> <div class="subhead" style="margin-top:1%"> How NetEase Leverages Kubernetes to Support Internet Business Worldwide</div></h1>
</div> -->
<div class="banner1" style="background-image: url('/images/CaseStudy_netease_banner1.jpg')">
<h1> 案例研究:<img src="/images/netease_logo.png" class="header_logo" style="width:22%;margin-bottom:-1%"><br> <div class="subhead" style="margin-top:1%"> 网易如何利用 Kubernetes 支持在全球的互联网业务</div></h1>
</div>
<div class="details" style="font-size:1em">
公司 &nbsp;<b>网易</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;位置 &nbsp;<b>杭州,中国</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;行业 &nbsp;<b>互联网科技</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="margin-left:-1.5% !important">
<h2>挑战</h2>
<!-- Its gaming business is one of the largest in the world, but thats not all that <a href="https://netease-na.com/">NetEase</a> provides to Chinese consumers. The company also operates e-commerce, advertising, music streaming, online education, and email platforms; the last of which serves almost a billion users with free email services through sites like <a href="https://www.163.com/">163.com</a>. In 2015, the NetEase Cloud team providing the infrastructure for all of these systems realized that their R&D process was slowing down developers. “Our users needed to prepare all of the infrastructure by themselves,” says Feng Changjian, Architect for NetEase Cloud and Container Service. “We were eager to provide the infrastructure and tools for our users automatically via serverless container service.” -->
其游戏业务是世界上规模最大的游戏业务之一,但这并不是<a href="https://netease-na.com/">网易</a>为中国消费者提供的全部。公司还经营电子商务、广告、音乐流媒体、在线教育和电子邮件平台;其中最后一项服务通过 <a href="https://www.163.com/">163.com</a> 等网站为近 10 亿用户提供免费电子邮件服务。2015 年,为所有这些系统提供基础设施的网易云团队意识到,他们的研发流程正在拖慢开发人员的速度。网易云和容器服务架构师冯长健表示:“我们的用户需要自己准备所有基础设施。”“我们希望通过无服务器容器服务自动为用户提供基础设施和工具。”
<br><br>
<h2>解决方案</h2>
<!-- After considering building its own orchestration solution, NetEase decided to base its private cloud platform on <a href="https://kubernetes.io/">Kubernetes</a>. The fact that the technology came out of Google gave the team confidence that it could keep up with NetEases scale. “After our 2-to-3-month evaluation, we believed it could satisfy our needs,” says Feng. The team started working with Kubernetes in 2015, before it was even 1.0. Today, the NetEase internal cloud platform—which also leverages the CNCF projects <a href="https://prometheus.io/">Prometheus</a>, <a href="https://www.envoyproxy.io/">Envoy</a>, <a href="https://goharbor.io/">Harbor</a>, <a href="https://grpc.io/">gRPC</a>, and <a href="https://helm.sh/">Helm</a>—runs 10,000 nodes in a production cluster and can support up to 30,000 nodes in a cluster. Based on its learnings from its internal platform, the company introduced a Kubernetes-based cloud and microservices-oriented PaaS product, <a href="https://landscape.cncf.io/selected=netease-qingzhou-microservice">NetEase Qingzhou Microservice</a>, to outside customers. -->
在考虑过自行构建编排解决方案后,网易决定将其私有云平台建立在 <a href="https://kubernetes.io/">Kubernetes</a> 的基础上。这项技术出自 Google这一事实让团队有信心它能够跟上网易的规模。“经过 2 到 3 个月的评估,我们相信它能满足我们的需求,”冯长健说。该团队于 2015 年开始使用 Kubernetes那时它甚至还没到 1.0 版本。如今,网易内部云平台(还使用了 CNCF 项目 <a href="https://prometheus.io/">Prometheus</a>、<a href="https://www.envoyproxy.io/">Envoy</a>、<a href="https://goharbor.io/">Harbor</a>、<a href="https://grpc.io/">gRPC</a> 和 <a href="https://helm.sh/">Helm</a>)在生产集群中运行着 10000 个节点,单个集群最多可支持 30000 个节点。基于内部平台的经验,公司向外部客户推出了基于 Kubernetes 的云和微服务型 PaaS 产品<a href="https://landscape.cncf.io/selected=netease-qingzhou-microservice">网易轻舟微服务</a>。
<br><br>
<h2>影响</h2>
<!-- The NetEase team reports that Kubernetes has increased R&D efficiency by more than 100%. Deployment efficiency has improved by 280%. “In the past, if we wanted to do upgrades, we needed to work with other teams, even in other departments,” says Feng. “We needed special staff to prepare everything, so it took about half an hour. Now we can do it in only 5 minutes.” The new platform also allows for mixed deployments using GPU and CPU resources. “Before, if we put all the resources toward the GPU, we wont have spare resources for the CPU. But now we have improvements thanks to the mixed deployments,” he says. Those improvements have also brought an increase in resource utilization. -->
网易团队报告说Kubernetes 将研发效率提高了一倍多,部署效率提高了 280%。“过去,如果我们想要进行升级,需要与其他团队甚至其他部门的团队协作,”冯长健说。“我们需要专人来准备一切,大约要花半个小时。现在我们只需 5 分钟即可完成。”新平台还允许混合使用 GPU 和 CPU 资源进行部署。“以前,如果我们把所有资源都留给 GPU就没有多余的资源留给 CPU 了。但是现在,得益于混合部署,我们有了很大的改进,”他说。这些改进也提高了资源利用率。
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
<!-- "The system can support 30,000 nodes in a single cluster. In production, we have gotten the data of 10,000 nodes in a single cluster. The whole internal system is using this system for development, test, and production."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>— Zeng Yuxing, Architect, NetEase</span> -->
“这个系统可以在单个集群中支持 30000 个节点。在生产环境中,我们已经拿到了单个集群 10000 个节点的数据。整个内部系统都在使用该系统进行开发、测试和生产。”<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>— 曾宇兴,网易架构师</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<!-- <h2>Its gaming business is the <a href="https://newzoo.com/insights/rankings/top-25-companies-game-revenues/">fifth-largest</a> in the world, but thats not all that <a href="https://netease-na.com/">NetEase</a> provides consumers.</h2>The company also operates e-commerce, advertising, music streaming, online education, and email platforms in China; the last of which serves almost a billion users with free email services through popular sites like <a href="https://www.163.com/">163.com</a> and <a href="https://www.126.com/">126.com</a>. With that kind of scale, the NetEase Cloud team providing the infrastructure for all of these systems realized in 2015 that their R&D process was making it hard for developers to keep up with demand. “Our users needed to prepare all of the infrastructure by themselves,” says Feng Changjian, Architect for NetEase Cloud and Container Service. “We were eager to provide the infrastructure and tools for our users automatically via serverless container service.”<br><br> -->
<h2>其游戏业务是世界<a href="https://newzoo.com/insights/rankings/top-25-companies-game-revenues/">第五大</a>游戏业务,但这并不是<a href="https://netease-na.com/">网易</a>为消费者提供的全部业务。</h2>公司还在中国经营电子商务、广告、音乐流媒体、在线教育和电子邮件平台;其中最后一项服务通过 <a href="https://www.163.com/">163.com</a> 和 <a href="https://www.126.com/">126.com</a> 等热门网站为近 10 亿用户提供免费电子邮件服务。有了这样的规模,为所有这些系统提供基础设施的网易云团队在 2015 年就意识到,他们的研发流程使得开发人员难以跟上需求。网易云和容器服务架构师冯长健表示:“我们的用户需要自己准备所有基础设施。”“我们渴望通过无服务器容器服务自动为用户提供基础设施和工具。”<br><br>
<!-- After considering building its own orchestration solution, NetEase decided to base its private cloud platform on <a href="https://kubernetes.io/">Kubernetes</a>. The fact that the technology came out of Google gave the team confidence that it could keep up with NetEases scale. “After our 2-to-3-month evaluation, we believed it could satisfy our needs,” says Feng. -->
在考虑过自行构建编排解决方案后,网易决定将其私有云平台建立在 <a href="https://kubernetes.io/">Kubernetes</a> 的基础上。这项技术出自谷歌,这一事实让团队有信心它能够跟上网易的规模。“经过 2 到 3 个月的评估,我们相信它能满足我们的需求,”冯长健说。
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_netease_banner3.jpg')">
<div class="banner3text">
<!-- "We leveraged the programmability of Kubernetes so that we can build a platform to satisfy the needs of our internal customers for upgrades and deployment." -->
“我们利用 Kubernetes 的可编程性,构建一个平台,以满足内部客户对升级和部署的需求。”<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- 冯长健,网易云和容器服务架构师</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
<!-- The team started adopting Kubernetes in 2015, before it was even 1.0, because it was relatively easy to use and enabled DevOps at the company. “We abandoned some of the concepts of Kubernetes; we only wanted to use the standardized framework,” says Feng. “We leveraged the programmability of Kubernetes so that we can build a platform to satisfy the needs of our internal customers for upgrades and deployment.”<br><br> -->
该团队于 2015 年开始采用 Kubernetes那时它甚至还没到 1.0 版本,因为它相对易于使用,并且使 DevOps 在公司中得以实现。“我们舍弃了 Kubernetes 的一些概念,只想使用其标准化框架,”冯长健说。“我们利用 Kubernetes 的可编程性,构建一个平台,以满足内部客户对升级和部署的需求。”<br><br>
<!-- The team first focused on building the container platform to manage resources better, and then turned their attention to improving its support of microservices by adding internal systems such as monitoring. That has meant integrating the CNCF projects <a href="https://prometheus.io/">Prometheus</a>, <a href="https://www.envoyproxy.io/">Envoy</a>, <a href="https://goharbor.io/">Harbor</a>, <a href="https://grpc.io/">gRPC</a>, and <a href="https://helm.sh/">Helm</a>. “We are trying to provide a simplified and standardized process, so our users and customers can leverage our best practices,” says Feng.<br><br> -->
团队首先专注于构建容器平台以更好地管理资源,然后通过添加内部系统(如监视)来改进对微服务的支持。这意味着整合了 CNCF 项目 <a href="https://prometheus.io/">Prometheus</a><a href="https://www.envoyproxy.io/">Envoy</a><a href="https://goharbor.io/">Harbor</a><a href="https://grpc.io/">gRPC</a><a href="https://helm.sh/">Helm</a>。“我们正在努力提供简化和标准化的流程,以便我们的用户和客户能够利用我们的最佳实践,”冯长健说。<br><br>
<!-- And the team is continuing to make improvements. For example, the e-commerce part of the business needs to leverage mixed deployments, which in the past required using two separate platforms: the infrastructure-as-a-service platform and the Kubernetes platform. More recently, NetEase has created a cross-platform application that enables using both with one-command deployment. -->
团队还在继续改进。例如,业务中的电子商务部分需要利用混合部署,过去这要使用两个独立的平台:基础架构即服务平台和 Kubernetes 平台。最近,网易创建了一个跨平台应用,只需一条命令即可同时在这两个平台上完成部署。
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_netease_banner4.jpg')">
<div class="banner4text">
<!-- "As long as a company has a mature team and enough developers, I think Kubernetes is a very good technology that can help them."<span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br><br>- Li Lanqing, Kubernetes Developer, NetEase</span> -->
“只要公司拥有成熟的团队和足够的开发人员,我认为 Kubernetes 是一项能够为他们提供帮助的非常好的技术。”<span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br><br>- 李兰青, 网易 Kubernetes 开发人员</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
<!-- Today, the NetEase internal cloud platform “can support 30,000 nodes in a single cluster,” says Architect Zeng Yuxing. “In production, we have gotten the data of 10,000 nodes in a single cluster. The whole internal system is using this system for development, test, and production.” <br><br> -->
如今,网易内部云平台“可以在单个集群中支持 30000 个节点”,架构师曾宇兴说。“在生产环境中,我们已经拿到了单个集群 10000 个节点的数据。整个内部系统都在使用该系统进行开发、测试和生产。”<br><br>
<!-- The NetEase team reports that Kubernetes has increased R&D efficiency by more than 100%. Deployment efficiency has improved by 280%. “In the past, if we wanted to do upgrades, we needed to work with other teams, even in other departments,” says Feng. “We needed special staff to prepare everything, so it took about half an hour. Now we can do it in only 5 minutes.” The new platform also allows for mixed deployments using GPU and CPU resources. “Before, if we put all the resources toward the GPU, we wont have spare resources for the CPU. But now we have improvements thanks to the mixed deployments.” Those improvements have also brought an increase in resource utilization. -->
网易团队报告说Kubernetes 将研发效率提高了一倍多,部署效率提高了 280%。“过去,如果我们想要进行升级,需要与其他团队甚至其他部门的团队协作,”冯长健说。“我们需要专人来准备一切,大约要花半个小时。现在我们只需 5 分钟即可完成。”新平台还允许混合使用 GPU 和 CPU 资源进行部署。“以前,如果我们把所有资源都留给 GPU就没有多余的资源留给 CPU 了。但是现在,得益于混合部署,我们有了很大的改进,”他说。这些改进也提高了资源利用率。
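下面是一个示意性的 Pod 资源配置,用来说明在 Kubernetes 中同时声明 CPU 与 GPU 资源、从而让 GPU 与 CPU 工作负载混合调度到同一集群的一般做法;其中的名称和镜像都是假设的示例,并非网易实际使用的配置,且需要集群中已安装相应的 GPU 设备插件。

```yaml
# 示意草图:同时申请 CPU 与 GPU 资源的任务 Pod名称与镜像均为假设示例
apiVersion: v1
kind: Pod
metadata:
  name: training-job                        # 假设的 Pod 名称
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: registry.example.com/trainer:latest   # 假设的镜像地址
      resources:
        requests:
          cpu: "4"
          memory: 16Gi
        limits:
          cpu: "4"
          memory: 16Gi
          nvidia.com/gpu: 1                 # 扩展资源,需集群安装对应的设备插件
```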
</div>
<div class="banner5">
<div class="banner5text">
<!-- "By engaging with this community, we can gain some experience from it and we can also benefit from it. We can see what are the concerns and the challenges faced by the community, so we can get involved."<span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br><br>- Li Lanqing, Kubernetes Developer, NetEase</span> -->
“通过与这个社区接触,我们可以从中获得一些经验,我们也可以从中获益。我们可以看到社区所关心的问题和挑战,以便我们参与其中。”<span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br><br>- 李兰青, 网易 Kubernetes 开发人员</span>
</div>
</div>
<div class="fullcol">
<!-- Based on the results and learnings from using its internal platform, the company introduced a Kubernetes-based cloud and microservices-oriented PaaS product, <a href="https://landscape.cncf.io/selected=netease-qingzhou-microservice">NetEase Qingzhou Microservice</a>, to outside customers. “The idea is that we can find the problems encountered by our game and e-commerce and cloud music providers, so we can integrate their experiences and provide a platform to satisfy the needs of our users,” says Zeng. <br><br> -->
基于使用内部平台的成果和学习,公司向外部客户推出了基于 Kubernetes 的云和微服务型 PaaS 产品,网易轻舟微服务。“我们的想法是,我们可以找到我们的游戏和电子商务以及云音乐提供商遇到的问题,所以我们可以整合他们的体验,并提供一个平台,以满足所有用户的需求,”曾宇兴说。<br><br>
<!-- With or without the use of the NetEase product, the team encourages other companies to try Kubernetes. “As long as a company has a mature team and enough developers, I think Kubernetes is a very good technology that can help them,” says Kubernetes developer Li Lanqing.<br><br> -->
无论是否使用网易产品,该团队鼓励其他公司尝试 Kubernetes。Kubernetes 开发者李兰青表示:“只要公司拥有成熟的团队和足够的开发人员,我认为 Kubernetes 是一个很好的技术,可以帮助他们。”<br><br>
<!-- As an end user as well as a vendor, NetEase has become more involved in the community, learning from other companies and sharing what theyve done. The team has been contributing to the Harbor and Envoy projects, providing feedback as the technologies are being tested at NetEase scale. “We are a team focusing on addressing the challenges of microservices architecture,” says Feng. “By engaging with this community, we can gain some experience from it and we can also benefit from it. We can see what are the concerns and the challenges faced by the community, so we can get involved.” -->
作为最终用户同时也是供应商,网易更深入地参与到社区中,向其他公司学习,也分享自己所做的工作。该团队一直在为 Harbor 和 Envoy 项目做贡献,并在这些技术以网易的规模接受测试时提供反馈。“我们是一个专注于应对微服务架构挑战的团队,”冯长健说。“通过与这个社区接触,我们可以从中获得一些经验,也可以从中获益。我们可以看到社区所关心的问题和面临的挑战,从而参与其中。”
</div>
</section>
