Merge main into dev-1.31 to keep in sync

pull/46495/head
Oluebube Princes Egbuna 2024-05-22 16:14:41 +01:00
commit e941a6ab1d
1481 changed files with 38132 additions and 8064 deletions

View File

@ -10,18 +10,19 @@ aliases:
- sftim
sig-docs-website-owners: # Admins for overall website
- divya-mohan0209
- katcosgrove
- natalisucks
- reylejano
- salaxander
- sftim
- tengqm
- drewhagen # RT 1.30 Docs Lead
- katcosgrove # RT 1.30 Lead
sig-docs-localization-owners: # Admins for localization content
- a-mccarthy
- divya-mohan0209
- natalisucks
- nate-double-u
- reylejano
- salaxander
- sftim
- seokho-son
- tengqm
@ -31,9 +32,21 @@ aliases:
- natalisucks
- nate-double-u
- reylejano
- salaxander
- sftim
- seokho-son
- tengqm
sig-docs-bn-owners: # Admins for Bengali content
- asem-hamid
- Imtiaz1234
- mitul3737
- rajibmitra
sig-docs-bn-reviews: # PR reviews for Bengali content
- asem-hamid
- Imtiaz1234
- mitul3737
- rajibmitra
- sajibAdhi
sig-docs-de-owners: # Admins for German content
- bene2k1
- rlenferink
@ -44,47 +57,43 @@ aliases:
- celestehorgan
- dipesh-rawat
- divya-mohan0209
- katcosgrove
- natalisucks
- nate-double-u
- reylejano
- salaxander
- sftim
- tengqm
sig-docs-en-reviews: # PR reviews for English content
- celestehorgan
- dipesh-rawat
- divya-mohan0209
- katcosgrove
- kbhawkey
- mengjiao-liu
- mickeyboxell
- natalisucks
- nate-double-u
- reylejano
- salaxander
- sftim
- shannonxtreme
- tengqm
- windsonsea
sig-docs-es-owners: # Admins for Spanish content
- 92nqb
- electrocucaracha
- krol3
- raelga
- ramrodo
sig-docs-es-reviews: # PR reviews for Spanish content
- 92nqb
- electrocucaracha
- jossemarGT
- krol3
- raelga
- ramrodo
sig-docs-fr-owners: # Admins for French content
- awkif
- feloy
- perriea
- rekcah78
- remyleone
sig-docs-fr-reviews: # PR reviews for French content
- awkif
- feloy
- perriea
- rekcah78
- remyleone
@ -121,6 +130,7 @@ aliases:
- atoato88
- bells17
- kakts
- Okabe-Junya
- t-inu
sig-docs-ko-owners: # Admins for Korean content
- gochist
@ -140,6 +150,7 @@ aliases:
- divya-mohan0209
- natalisucks
- reylejano
- salaxander
- sftim
- tengqm
sig-docs-zh-owners: # Admins for Chinese content
@ -173,18 +184,14 @@ aliases:
sig-docs-pt-owners: # Admins for Portuguese content
- devlware
- edsoncelio
- femrtnz
- jcjesus
- stormqueen1990
- yagonobre
sig-docs-pt-reviews: # PR reviews for Portuguese content
- devlware
- edsoncelio
- femrtnz
- jcjesus
- mrerlison
- stormqueen1990
- yagonobre
sig-docs-vi-owners: # Admins for Vietnamese content
- huynguyennovem
- truongnh1992

README-bn.md (new file, 210 lines)
View File

@ -0,0 +1,210 @@
# The Kubernetes documentation
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute!
- [Contributing to the docs](#contributing-to-the-docs)
- [Localization ReadMes](#localization-readmemds)
## Using this repository
You can run the website locally using [Hugo (Extended version)](https://gohugo.io/), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.
## Prerequisites
To use this repository, you need the following installed locally:
- [npm](https://www.npmjs.com/)
- [Go](https://go.dev/)
- [Hugo (Extended version)](https://gohugo.io/)
- A container runtime, like [Docker](https://www.docker.com/).
> [!NOTE]
> Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L11) file.
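If you want to double-check which Hugo release the site expects before installing, you can compare the pinned value with your local binary; a minimal sketch, run from the repository root (the `grep` filter is only illustrative):
```bash
# Show the Hugo version pinned for the website build
grep HUGO_VERSION netlify.toml

# Show the locally installed Hugo version; it should match and include "extended"
hugo version
```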
Before you start, install the dependencies. Clone the repository and navigate to the directory:
```bash
git clone https://github.com/kubernetes/website.git
cd website
```
The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme). Even if you plan to run the website in a container, we strongly recommend pulling in the submodules and other development dependencies by running the following:
### Windows
```powershell
# fetch submodule dependencies
git submodule update --init --recursive --depth 1
```
### Linux / other Unix
```bash
# fetch submodule dependencies
make module-init
```
## Running the website using a container
To build the site in a container, run the following:
```bash
# You can set $CONTAINER_ENGINE to the name of any Docker-like container tool
make container-serve
```
If you see errors, it probably means that the Hugo container did not have enough computing resources available. To solve it, increase the amount of allowed CPU and memory usage for Docker on your machine ([MacOS](https://docs.docker.com/desktop/settings/mac/) and [Windows](https://docs.docker.com/desktop/settings/windows/)).
Open up your browser to <http://localhost:1313> to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Running the website locally using Hugo
To install dependencies, build, and test the site locally, run:
- For macOS and Linux
```bash
npm ci
make serve
```
- For Windows (PowerShell)
```powershell
npm ci
hugo.exe server --buildFuture --environment development
```
This will start the local Hugo server on port 1313. Open up your browser to <http://localhost:1313> to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Building the API reference pages
The API reference pages located in `content/en/docs/reference/kubernetes-api` are built from the Swagger specification, also known as the OpenAPI specification, using <https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs>.
To update the reference pages for a new Kubernetes release, follow these steps:
1. Pull in the `api-ref-generator` submodule:
```bash
git submodule update --init --recursive --depth 1
```
2. Update the Swagger specification:
```bash
curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-assets/api/swagger.json
```
3. In `api-ref-assets/config/`, adapt the files `toc.yaml` and `fields.yaml` to reflect the changes of the new release.
4. Next, build the pages:
```bash
make api-reference
```
You can test the results locally by building and serving the site from a container:
```bash
make container-serve
```
In a web browser, go to <http://localhost:1313/docs/reference/kubernetes-api/> to view the API reference.
5. When all changes of the new contract are reflected into the configuration files `toc.yaml` and `fields.yaml`, create a pull request with the newly generated API reference pages.
## Troubleshooting
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version
For technical reasons, Hugo is shipped in two sets of binaries. The current website runs based on the **Hugo Extended** version only. On the [release page](https://github.com/gohugoio/hugo/releases), look for archives with `extended` in the name. To confirm, run `hugo version` and look for the word `extended`.
### Troubleshooting macOS for too many open files
If you run `make serve` on macOS and receive the following error:
```bash
ERROR 2020/08/01 19:09:18 Error: listen tcp 127.0.0.1:1313: socket: too many open files
make: *** [serve] Error 1
```
Try checking the current limit for open files:
`launchctl limit maxfiles`
Then run the following commands (adapted from <https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c>):
```shell
#!/bin/sh
# These are the original gist links, linking to my gists now.
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxfiles.plist
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxproc.plist
curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxfiles.plist
curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxproc.plist
sudo mv limit.maxfiles.plist /Library/LaunchDaemons
sudo mv limit.maxproc.plist /Library/LaunchDaemons
sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
sudo chown root:wheel /Library/LaunchDaemons/limit.maxproc.plist
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
```
This works for Catalina as well as Mojave macOS.
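After restarting (or logging back in), you can confirm that the raised limits took effect; a minimal check:
```bash
# Verify the new limits for open files and processes
launchctl limit maxfiles
launchctl limit maxproc
```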
## Get involved with SIG Docs
Learn more about the SIG Docs Kubernetes community and meetings on the [community page](https://github.com/kubernetes/community/tree/master/sig-docs#meetings).
You can also reach the maintainers of this project at:
- [Slack](https://kubernetes.slack.com/messages/sig-docs)
- [Get an invite for this Slack](https://slack.k8s.io/)
- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
## Contributing to the docs
You can click the **Fork** button in the upper-right area of the screen to create a copy of this repository in your GitHub account. This copy is called a _fork_. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request to let us know about it.
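For contributors who prefer the command line, the sketch below shows one common fork-and-pull-request flow; the `<your-username>` placeholder and the `update-docs` branch name are illustrative only and not part of this repository's tooling:
```bash
# Clone your fork (replace <your-username> with your GitHub account)
git clone https://github.com/<your-username>/website.git
cd website

# Create a branch, edit files, then commit your change
git checkout -b update-docs
git commit -a -m "Describe your documentation change"

# Push the branch to your fork, then open a pull request on GitHub
git push origin update-docs
```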
Once your pull request is created, a Kubernetes reviewer will take responsibility for providing clear, actionable feedback. As the owner of the pull request, **it is your responsibility to modify your pull request to address the feedback that has been provided to you by the Kubernetes reviewer.**
Also, note that you may end up having more than one Kubernetes reviewer provide you feedback, or you may end up getting feedback from a Kubernetes reviewer that is different than the one initially assigned to provide you feedback.
Furthermore, in some cases, one of your reviewers might ask for a technical review from a Kubernetes tech reviewer when needed. Reviewers will do their best to provide feedback in a timely fashion, but response time can vary based on circumstances.
For more information about contributing to the Kubernetes documentation, see:
- [Contribute to Kubernetes docs](https://kubernetes.io/docs/contribute/)
- [Page Content Types](https://kubernetes.io/docs/contribute/style/page-content-types/)
- [Documentation Style Guide](https://kubernetes.io/docs/contribute/style/style-guide/)
- [Localizing Kubernetes Documentation](https://kubernetes.io/docs/contribute/localization/)
- [Introduction to Kubernetes Docs](https://www.youtube.com/watch?v=pprMgmNzDcw)
### New contributor ambassadors
If you need help at any point when contributing, the [New Contributor Ambassadors](https://kubernetes.io/docs/contribute/advanced/#serve-as-a-new-contributor-ambassador) are a good point of contact. These are SIG Docs approvers whose responsibilities include mentoring new contributors and helping them through their first few pull requests. The best place to reach out to the New Contributor Ambassadors is on the [Kubernetes Slack](https://slack.k8s.io/). Current New Contributor Ambassadors for SIG Docs:
| Name | Slack | GitHub |
| -------------------------- | -------------------------- | -------------------------- |
| Arsh Sharma | @arsh | @RinkiyaKeDad |
## Localization READMEs
| Language | Language |
| -------------------------- | -------------------------- |
| [Bengali](README-bn.md) | [Korean](README-ko.md) |
| [Chinese](README-zh.md) | [Polish](README-pl.md) |
| [French](README-fr.md) | [Portuguese](README-pt.md) |
| [German](README-de.md) | [Russian](README-ru.md) |
| [Hindi](README-hi.md) | [Spanish](README-es.md) |
| [Indonesian](README-id.md) | [Ukrainian](README-uk.md) |
| [Italian](README-it.md) | [Vietnamese](README-vi.md) |
| [Japanese](README-ja.md) | |
## Code of conduct
Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).
## Thank you
Kubernetes thrives on community participation, and we appreciate your contributions to our website and our documentation!

View File

@ -2,14 +2,14 @@
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
This repository contains all of the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). Thank you for your interest in contributing!
This repository contains all of the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We look forward to your contributions!
- [Contributing to the docs](#contributing-to-the-docs)
- [List of localized `README.md` files](#localization-readmemds)
# Using this repository
## Using this repository
You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the production website.
You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the production website.
## Prerequisites
@ -17,75 +17,92 @@ Run the website locally using Hugo (Extended version)
- [npm](https://www.npmjs.com/)
- [Go](https://go.dev/)
- [Hugo(Extended version)](https://gohugo.io/)
- [Hugo (Extended version)](https://gohugo.io/)
- A container runtime, like [Docker](https://www.docker.com/)
Before you begin, install the dependencies. Clone the repository and navigate to the directory:
> [!NOTE]
> Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L11) file.
```
Before you start, install the dependencies. Clone the repository and navigate to the directory:
```bash
git clone https://github.com/kubernetes/website.git
cd website
```
The Kubernetes website uses the Docsy Hugo theme. Even if you plan to run the website in a container, we strongly recommend pulling in the submodules and other development dependencies by running the following:
The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme). Even if you plan to run the website in a container, we strongly recommend fetching the submodules and other dependencies by running the following:
```
# pull in the Docsy submodule
### Windows
```powershell
# fetch submodule dependencies
git submodule update --init --recursive --depth 1
```
### Linux / other Unix
```bash
# fetch submodule dependencies
make module-init
```
## Running the website using a container
To build the site in a container, run the following to build the container image and run it:
```
make container-image
```bash
# You can set the $CONTAINER_ENGINE environment variable to use a container runtime other than Docker
make container-serve
```
Open your browser at http://localhost:1313 to view the website. As you make changes to the source files in the repository, Hugo updates the website and refreshes the browser.
If you see errors, it means the Hugo container does not have enough computing resources available. To solve it, increase the amount of CPU and memory available to Docker on your machine ([MacOS](https://docs.docker.com/desktop/settings/mac/) and [Windows](https://docs.docker.com/desktop/settings/windows/)).
Open your browser at <http://localhost:1313> to view the website. As you make changes to the source files in the repository, Hugo updates the website and refreshes the browser.
## Running the website locally using Hugo
Make sure to install the Hugo Extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L10) file.
To install dependencies, build, and test the site locally, run the following commands:
To build and test the site locally, run the following commands:
- For macOS and Linux
```bash
npm ci
make serve
```
- For Windows (PowerShell)
```powershell
npm ci
hugo.exe server --buildFuture --environment development
```
```bash
# install dependencies
npm ci
make serve
```
This will start the local Hugo server on port 1313. Open your browser at http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and refreshes the browser.
This will start the local Hugo server on port 1313. Open your browser at <http://localhost:1313> to view the website. As you make changes to the source files, Hugo updates the website and refreshes the browser.
## Building the API reference pages
The API reference pages located in `content/en/docs/reference/kubernetes-api` are built from the Swagger specification using <https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs>.
The API reference pages located in `content/ja/docs/reference/kubernetes-api` are built from the Swagger Specification (OpenAPI Specification) using <https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs>.
To update the reference pages for a new Kubernetes release, follow these steps:
1. Pull in the `api-ref-generator` submodule:
1. Fetch the `api-ref-generator` submodule:
```bash
git submodule update --init --recursive --depth 1
```
2. Update the Swagger specification:
2. Update the Swagger Specification:
```bash
curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-assets/api/swagger.json
```
3. Apply the changes of the new release to `toc.yaml` and `fields.yaml` in `api-ref-assets/config/`.
3. Adapt `toc.yaml` and `fields.yaml` in `api-ref-assets/config/` to reflect the changes of the new release.
4. Next, build the pages:
4. Next, build the pages:
```bash
make api-reference
```
You can test the results locally by building and serving the site from a container image:
You can test the results locally by building and serving the site from a container image:
```bash
make container-image
@ -94,19 +111,19 @@ make serve
To view the API reference, open <http://localhost:1313/docs/reference/kubernetes-api/> in your browser.
5. When all changes of the new contract are reflected into the configuration files `toc.yaml` and `fields.yaml`, create a pull request with the newly generated API reference pages.
5. When all changes of the new contract are reflected into the configuration files `toc.yaml` and `fields.yaml`, create a pull request with the newly generated API reference pages.
## Troubleshooting
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version
For technical reasons, Hugo is shipped in two sets of binaries. The current website runs based on the **Hugo Extended** version only. On the [release page](https://github.com/gohugoio/hugo/releases), look for archives with "extended" in the name. To confirm, run `hugo version` and look for the word "extended".
For technical reasons, Hugo is shipped in two sets of binaries. The current website runs based on the **Hugo Extended** version only. On the [release page](https://github.com/gohugoio/hugo/releases), look for archives with `extended` in the name. To confirm, run `hugo version` and look for the word `extended`.
### Troubleshooting too many open files on macOS
If you run `make serve` on macOS and see the following error:
```
```bash
ERROR 2020/08/01 19:09:18 Error: listen tcp 127.0.0.1:1313: socket: too many open files
make: *** [serve] Error 1
```
@ -115,9 +132,9 @@ Check the current limit for open files on your OS.
`launchctl limit maxfiles`
Then run the following commands (adapted from https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c):
Then run the following commands (adapted from <https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c>):
```
```shell
#!/bin/sh
# These are the original gist links, linking to my gists now.
@ -140,31 +157,40 @@ sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
## Get involved with SIG Docs
You can learn how to engage with the SIG Docs Kubernetes community by viewing the [community page](https://github.com/kubernetes/community/tree/master/sig-docs#meetings).
You can learn how to engage with the SIG Docs Kubernetes community by checking the [community page](https://github.com/kubernetes/community/tree/master/sig-docs#meetings).
You can reach the maintainers of this project at:
- [Slack](https://kubernetes.slack.com/messages/kubernetes-docs-ja)
- [Slack #kubernetes-docs-ja channel](https://kubernetes.slack.com/messages/kubernetes-docs-ja)
- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
## Contributing to the docs {#contributing-to-the-docs}
Click the **Fork** button in the upper-right area of the GitHub screen to create a copy of this repository tied to your GitHub account; this copy is called a *fork*. Make any changes you want in your fork, and whenever you are ready to send those changes to this repository, create a pull request from your fork.
Click the **Fork** button in the upper-right area of the GitHub screen to create a copy of this repository tied to your GitHub account. This copy is called a *fork*. You can make any changes you want in your fork, and whenever you are ready to send those changes to this repository, create a pull request from your fork.
Once your pull request is created, a reviewer takes responsibility for providing clear, actionable feedback. As the owner of the pull request, **it is your role to modify your pull request to address the feedback that has been provided.**
Once your pull request is created, a reviewer takes responsibility for providing clear, actionable feedback. As the owner of the pull request, **it is your responsibility to modify your pull request to address the feedback that has been provided.**
Also, note that you may end up having more than one reviewer provide you feedback, or that you may receive feedback from a reviewer other than the one initially assigned.
Also, keep in mind that you may end up having more than one reviewer provide you feedback, or that you may receive feedback from a reviewer other than the one initially assigned.
Furthermore, in some cases, one of your reviewers might ask for a technical review from a Kubernetes tech reviewer when needed. Reviewers will do their best to provide feedback in a timely fashion, but response time can vary based on circumstances.
Furthermore, in some cases, one of your reviewers might ask for a technical review from a Kubernetes tech reviewer when needed. Reviewers will do their best to provide feedback in a timely fashion, but response time can vary based on circumstances.
For more information about contributing to the Kubernetes documentation, see:
> [!NOTE]
> Technical reviews are not performed for localizations; technical reviews are performed only for the English documentation.
For more information about contributing to the Kubernetes documentation, check:
> [!NOTE]
> When working on the Japanese localization, [Localize Kubernetes documentation](https://kubernetes.io/ja/docs/contribute/localization/) is the guide to follow.
* [Contributing to the Kubernetes docs](https://kubernetes.io/ja/docs/contribute/)
* [Page content types](https://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](https://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes documentation](https://kubernetes.io/docs/contribute/localization/)
### New Contributor Ambassadors
### New Contributor Ambassadors
> [!NOTE]
> For questions about the Japanese localization, feel free to ask in the [Slack #kubernetes-docs-ja channel](https://kubernetes.slack.com/messages/kubernetes-docs-ja).
If you need help at any point when contributing, the [New Contributor Ambassadors](https://kubernetes.io/docs/contribute/advanced/#serve-as-a-new-contributor-ambassador) are a good point of contact. They are SIG Docs approvers whose responsibilities include mentoring new contributors and helping them through their first few pull requests. The best place to contact the New Contributor Ambassadors is the [Kubernetes Slack](https://slack.k8s.io). Current New Contributor Ambassadors for SIG Docs:
@ -186,7 +212,7 @@ For more information about contributing to the Kubernetes documentation, see
### Code of conduct
Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct-languages/ja.md).
## Thank you!

View File

@ -192,13 +192,14 @@ If you need help at any point when contributing, the [New Contributor Ambassador
| Language | Language |
| -------------------------- | -------------------------- |
| [Chinese](README-zh.md) | [Korean](README-ko.md) |
| [French](README-fr.md) | [Polish](README-pl.md) |
| [German](README-de.md) | [Portuguese](README-pt.md) |
| [Hindi](README-hi.md) | [Russian](README-ru.md) |
| [Indonesian](README-id.md) | [Spanish](README-es.md) |
| [Italian](README-it.md) | [Ukrainian](README-uk.md) |
| [Japanese](README-ja.md) | [Vietnamese](README-vi.md) |
| [Bengali](README-bn.md) | [Korean](README-ko.md) |
| [Chinese](README-zh.md) | [Polish](README-pl.md) |
| [French](README-fr.md) | [Portuguese](README-pt.md) |
| [German](README-de.md) | [Russian](README-ru.md) |
| [Hindi](README-hi.md) | [Spanish](README-es.md) |
| [Indonesian](README-id.md) | [Ukrainian](README-uk.md) |
| [Italian](README-it.md) | [Vietnamese](README-vi.md) |
| [Japanese](README-ja.md) | |
## Code of conduct

View File

@ -17,3 +17,5 @@ tengqm
onlydole
kbhawkey
natalisucks
salaxander
katcosgrove

View File

Image file changed; size unchanged (249 B before and after).

View File

Image file changed; size unchanged (24 KiB before and after).

View File

@ -77,10 +77,10 @@ footer {
font-size: 1rem;
border: 0px;
}
.button:hover {
background-color: darken($blue, 10%);
&:hover {
background-color: darken($blue, 10%);
color: white;
}
}
#cellophane {
@ -104,6 +104,11 @@ main {
text-decoration: none;
font-size: 1rem;
border: 0px;
&:hover {
background-color: darken($blue, 10%);
color: white;
}
}
}
@ -554,7 +559,7 @@ section#cncf {
#desktopKCButton:hover{
background-color: #ffffff;
color: #3371e3;
color: #326ce5;
transition: 150ms;
}

View File

@ -135,7 +135,7 @@ body.td-404 main .error-details {
height: 44px;
background-repeat: no-repeat;
background-size: contain;
background-image: url("/images/favicon.png");
background-image: url("/images/logo-header.png");
}
#hamburger {
@ -497,14 +497,33 @@ body {
border-left-width: calc(max(0.5em, 4px));
border-top-left-radius: calc(max(0.5em, 4px));
border-bottom-left-radius: calc(max(0.5em, 4px));
padding-top: 0.75rem;
}
.alert.callout.caution {
.alert.alert-caution {
border-left-color: #f0ad4e;
}
.alert.callout.note {
.alert.alert-info {
border-left-color: #428bca;
h4, h4.alert-heading {
color: #000;
display: block;
float: left;
font-size: 1rem;
padding: 0;
padding-right: 0.5rem;
margin: 0;
line-height: 1.5;
font-weight: bolder;
}
}
.alert.callout.warning {
.alert.alert-caution {
border-left-color: #f0ad4e;
h4, h4.alert-heading {
font-size: 1em;
font-weight: bold;
}
}
.alert.alert-warning {
border-left-color: #d9534f;
}
.alert.third-party-content {
@ -728,7 +747,7 @@ body.cid-partners {
line-height: 40px;
color: #ffffff;
font-size: 16px;
background-color: #3371e3;
background-color: #326ce5;
text-decoration: none;
}
@ -850,6 +869,44 @@ body.cid-community > #deprecation-warning > .deprecation-warning > * {
background-color: inherit;
}
body.cid-code-of-conduct main {
max-width: calc(min(90vw, 100em));
padding-top: 3rem;
padding-left: 0.5em;
padding-right: 0.5em;
margin-left: auto;
margin-right: auto;
#cncf-code-of-conduct {
margin-top: 4rem;
margin-bottom: 4rem;
padding-left: 4rem;
> h2, h3, h4, h5 {
color: #0662EE;
}
> h2:first-child {
margin-top: 0.25em;
margin-bottom: 1em;
}
}
> hr {
margin-top: 4rem;
margin-bottom: 4rem;
}
> hr:last-of-type ~ * {
text-align: center;
font-size: 1.15rem;
}
> *:last-child {
margin-bottom: 4rem;
}
}
#caseStudies body > #deprecation-warning > .deprecation-warning, body.cid-casestudies > #deprecation-warning > .deprecation-warning {
color: inherit;
background: inherit;
@ -1296,10 +1353,11 @@ div.alert > em.javascript-required {
flex-grow: 1;
overflow-x: hidden;
width: auto;
}
.search-bar:focus-within {
border: 2.5px solid rgba(47, 135, 223, 0.7);
&:focus-within {
outline: 1.5px solid rgba(47, 135, 223, 0.7);
border: 1px solid rgba(47, 135, 223, 0.7);
}
}
.search-bar i.search-icon {
@ -1313,3 +1371,46 @@ div.alert > em.javascript-required {
outline: none;
padding: .5em 0 .5em 0;
}
/* CSS for 'figure' full-screen display */
/* Define styles for full-screen overlay */
.figure-fullscreen-overlay {
position: fixed;
inset: 0;
z-index: 9999;
background-color: rgba(255, 255, 255, 0.95); /* White background with some transparency */
display: flex;
justify-content: center;
align-items: center;
padding: calc(5% + 20px);
box-sizing: border-box;
}
/* CSS class to scale the image when zoomed */
.figure-zoomed {
transform: scale(1.2);
}
/* Define styles for full-screen image */
.figure-fullscreen-img {
max-width: 100%;
max-height: 100%;
object-fit: contain; /* Maintain aspect ratio and fit within the container */
}
/* Define styles for close button */
.figure-close-button {
position: absolute;
top: 1%;
right: 2%;
cursor: pointer;
font-size: calc(5vw + 10px);
color: #333;
}
.code-sample > .copy-code-icon {
cursor: pointer;
text-align: right;
padding: 0.2rem;
}

View File

@ -1,4 +1,4 @@
$blue: #3371e3;
$blue: #326ce5;
$light-grey: #f7f7f7;
$dark-grey: #303030;
$medium-grey: #4c4c4c;

View File

@ -12,7 +12,7 @@ Add styles or override variables from the theme here. */
@import "tablet";
@import "desktop";
$primary: #3371e3;
$primary: #326ce5;
// tooltip
$tooltip-bg: #555;

content/bn/OWNERS (new file, 14 lines)
View File

@ -0,0 +1,14 @@
# See the OWNERS docs at https://go.k8s.io/owners
# This is the localization project for Bengali.
# Teams and members are visible at https://github.com/orgs/kubernetes/teams.
reviewers:
- sig-docs-bn-reviews
approvers:
- sig-docs-bn-owners
labels:
- area/localization
- language/bn

View File

@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 214 210"><defs><style>.cls-1{fill:#316ce6;}.cls-2{fill:#09c1d1;}.cls-3{fill:#c9e9ec;}</style></defs><title>kubernetes_icons</title><rect class="cls-1" x="10.24" y="8.775" width="33.6" height="33.6" rx="4.158"/><rect class="cls-1" x="10.24" y="46.975" width="33.6" height="33.6" rx="4.158"/><rect class="cls-1" x="170.2" y="8.775" width="33.6" height="33.6" rx="4.158"/><rect class="cls-1" x="90.18" y="86.925" width="33.6" height="33.6" rx="4.158"/><rect class="cls-1" x="50.23" y="126.825" width="33.6" height="33.6" rx="4.158"/><rect class="cls-1" x="10.24" y="166.085" width="33.6" height="33.6" rx="4.158"/><rect class="cls-1" x="90.22" y="166.085" width="33.6" height="33.6" rx="4.158"/><rect class="cls-2" x="89.99" y="8.775" width="33.6" height="33.6" rx="4.158"/><rect class="cls-3" x="89.99" y="48.1" width="33.6" height="33.6" rx="4.158"/><rect class="cls-3" x="49.914" y="8.775" width="33.6" height="33.6" rx="4.158"/><rect class="cls-3" x="10.24" y="86.925" width="33.6" height="33.6" rx="4.158"/><rect class="cls-3" x="50.23" y="166.085" width="33.6" height="33.6" rx="4.158"/><rect class="cls-3" x="130.21" y="126.825" width="33.6" height="33.6" rx="4.158"/><rect class="cls-3" x="170.2" y="166.085" width="33.6" height="33.6" rx="4.158"/><rect class="cls-2" x="49.914" y="48.1" width="33.6" height="33.6" rx="4.158"/><rect class="cls-2" x="50.21" y="86.925" width="33.6" height="33.6" rx="4.158"/><rect class="cls-2" x="130.15" y="86.925" width="33.6" height="33.6" rx="4.158"/><rect class="cls-2" x="170.2" y="126.825" width="33.6" height="33.6" rx="4.158"/><rect class="cls-2" x="10.24" y="126.825" width="33.6" height="33.6" rx="4.158"/><rect class="cls-2" x="130.21" y="166.085" width="33.6" height="33.6" rx="4.158"/><rect class="cls-2" x="90.22" y="126.825" width="33.6" height="33.6" rx="4.158"/></svg>

File diff suppressed because one or more lines are too long (new image file, 14 KiB)

Binary file not shown (new image file, 693 KiB)

View File

@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 214 210"><defs><style>.cls-1{fill:#316ce6;}.cls-2{fill:#08c1d1;}.cls-3{fill:#fff;}</style></defs><title>kubernetes_icons</title><rect class="cls-1" x="24.43837" y="69.59095" width="68.66984" height="68.66984" rx="2.37703" transform="translate(-56.27239 71.99817) rotate(-45)"/><rect class="cls-1" x="122.20077" y="69.59095" width="68.66984" height="68.66984" rx="2.37703" transform="translate(-27.63844 141.12663) rotate(-45)"/><rect class="cls-1" x="78.16563" y="128.81421" width="59.80067" height="59.80067" rx="2.07002" transform="translate(-80.57634 122.90059) rotate(-45)"/><rect class="cls-1" x="78.16563" y="19.38512" width="59.80067" height="59.80067" rx="2.07002" transform="translate(-3.19829 90.84955) rotate(-45)"/><rect class="cls-2" x="84.26447" y="80.01949" width="47.6109" height="47.6109" rx="5.89185" transform="translate(-41.76237 106.82659) rotate(-45)"/><rect class="cls-3" x="105.64189" y="55.09395" width="4.80943" height="43.28484"/><polygon class="cls-3" points="100.231 56.897 115.862 56.897 108.047 41.868 100.231 56.897"/><rect class="cls-3" x="114.14747" y="100.79333" width="43.28484" height="4.80943"/><polygon class="cls-3" points="155.629 95.382 155.629 111.013 170.658 103.198 155.629 95.382"/><rect class="cls-3" x="105.64143" y="109.47241" width="4.80943" height="43.28484"/><polygon class="cls-3" points="115.862 150.954 100.231 150.954 108.047 165.984 115.862 150.954"/><rect class="cls-3" x="58.69934" y="100.79348" width="43.28484" height="4.80943"/><polygon class="cls-3" points="60.503 111.013 60.503 95.382 45.474 103.198 60.503 111.013"/><circle class="cls-3" cx="107.94678" cy="103.92587" r="9.01768"/></svg>

File diff suppressed because one or more lines are too long (new image file, 5.2 KiB)

View File

@ -0,0 +1,3 @@
---
headless: true
---

content/bn/_index.html (new file, 65 lines)
View File

@ -0,0 +1,65 @@
---
title: "Production-Grade Container Orchestration"
abstract: "Automated container deployment, scaling, and management"
cid: home
sitemap:
priority: 1.0
---
{{< site-searchbar >}}
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
[Kubernetes]({{< relref "/docs/concepts/overview/" >}}), also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon [15 years of experience of running production workloads at Google](http://queue.acm.org/detail.cfm?id=2898444), combined with best-of-breed ideas and practices from the community.
{{% /blocks/feature %}}
{{% blocks/feature image="scalable" %}}
#### Planet Scale
Designed on the same principles that allow Google to run billions of containers a week, Kubernetes can scale without increasing your operations team.
{{% /blocks/feature %}}
{{% blocks/feature image="blocks" %}}
#### Never Outgrow
Whether testing locally or running a global enterprise, Kubernetes flexibility grows with you to deliver your applications consistently and easily, no matter how complex your need is.
{{% /blocks/feature %}}
{{% blocks/feature image="suitcase" %}}
#### Run K8s Anywhere
Kubernetes is open source, giving you the freedom to take advantage of on-premises, hybrid, or public cloud infrastructure, letting you effortlessly move workloads to where they matter to you.
To download Kubernetes, visit the [download](/bn/releases/download/) section.
{{% /blocks/feature %}}
{{< /blocks/section >}}
{{< blocks/section id="video" background-image="kub_video_banner_homepage" >}}
<div class="light-text">
<h2>The Challenges of Migrating 150+ Microservices to Kubernetes</h2>
<p>By Sarah Wells, Technical Director for Operations and Reliability, Financial Times</p>
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon Europe on March 19-22, 2024</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america-2024/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon North America on November 12-15, 2024</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
<button id="closeButton"></button>
</div>
{{< /blocks/section >}}
{{< blocks/kubernetes-features >}}
{{< blocks/case-studies >}}

content/bn/blog/_index.md (new file, 14 lines)
View File

@ -0,0 +1,14 @@
---
title: Kubernetes Blog
linkTitle: Blog
menu:
main:
title: "Blog"
weight: 20
---
{{< comment >}}
For information about contributing to the blog, see
https://kubernetes.io/docs/contribute/new-content/blogs-case-studies/#write-a-blog-post
{{< /comment >}}

View File

@ -0,0 +1,99 @@
---
layout: blog
title: "Kubernetes 1.29: Single Pod Access Mode for PersistentVolumes Graduates"
date: 2023-12-18
slug: read-write-once-pod-access-mode-ga
author: >
Chris Henzie (Google)
---
With the release of Kubernetes v1.29, the ReadWriteOncePod volume access mode
has graduated to general availability: it is part of Kubernetes' stable API.
In this blog post, I'll take a closer look at this access mode and what it does.
## What is `ReadWriteOncePod`?
`ReadWriteOncePod` is an access mode for
[PersistentVolumes (PVs)](/docs/concepts/storage/persistent-volumes/#persistent-volumes) and
[PersistentVolumeClaims (PVCs)](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
introduced in Kubernetes v1.22. This access mode enables you to restrict volume
access to a single pod in the cluster, ensuring that only one pod can write to
the volume at a time. This can be particularly useful for stateful workloads
that require single-writer access to storage.
For more context on access modes and how `ReadWriteOncePod` works, read
[What are access modes and why are they important?](/blog/2021/09/13/read-write-once-pod-access-mode-alpha/#what-are-access-modes-and-why-are-they-important)
from the 2021 article introducing single pod access mode for PersistentVolumes.
## How can I start using `ReadWriteOncePod`?
The `ReadWriteOncePod` volume access mode is available by default in Kubernetes
v1.27 and later. In Kubernetes v1.29 and later, the Kubernetes API
always recognizes this access mode.
Note that `ReadWriteOncePod` is
[only supported for CSI volumes](/docs/concepts/storage/persistent-volumes/#access-modes),
and before using this feature, you will need to update the following
[CSI sidecars](https://kubernetes-csi.github.io/docs/sidecar-containers.html)
to these versions or greater:
- [csi-provisioner:v3.0.0+](https://github.com/kubernetes-csi/external-provisioner/releases/tag/v3.0.0)
- [csi-attacher:v3.3.0+](https://github.com/kubernetes-csi/external-attacher/releases/tag/v3.3.0)
- [csi-resizer:v1.3.0+](https://github.com/kubernetes-csi/external-resizer/releases/tag/v1.3.0)
To start using `ReadWriteOncePod`, you need to create a PVC with the
`ReadWriteOncePod` access mode:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: single-writer-only
spec:
accessModes:
- ReadWriteOncePod # Allows only a single pod to access single-writer-only.
resources:
requests:
storage: 1Gi
```
If your storage plugin supports
[dynamic provisioning](/docs/concepts/storage/dynamic-provisioning/),
new PersistentVolumes will be created with the `ReadWriteOncePod`
access mode applied.
Read [Migrating existing PersistentVolumes](/blog/2021/09/13/read-write-once-pod-access-mode-alpha/#migrating-existing-persistentvolumes)
for details on migrating existing volumes to use `ReadWriteOncePod`.
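As a quick check, you can apply the claim and inspect the access mode recorded on it; a minimal sketch, assuming the manifest above is saved as `single-writer-only.yaml` and `kubectl` points at a cluster whose CSI driver supports this mode:
```bash
# Create the PersistentVolumeClaim defined above
kubectl apply -f single-writer-only.yaml

# Confirm the access mode recorded on the claim
kubectl get pvc single-writer-only -o jsonpath='{.spec.accessModes}'
```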
## How can I learn more?
Please see the
[alpha](/blog/2021/09/13/read-write-once-pod-access-mode-alpha) and
[beta](/blog/2023/04/20/read-write-once-pod-access-mode-beta) blog posts, and
[KEP-2485](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/2485-read-write-once-pod-pv-access-mode/README.md),
for more details on the `ReadWriteOncePod` access mode and the motivations for the CSI spec changes.
## How do I get involved?
The [Kubernetes #csi Slack channel](https://kubernetes.slack.com/messages/csi) and any of the standard
[SIG Storage communication channels](https://github.com/kubernetes/community/blob/master/sig-storage/README.md#contact)
are great ways to reach out to the SIG Storage and CSI teams.
Special thanks to the following people whose thoughtful reviews and feedback helped shape this feature:
* Abdullah Gharaibeh (ahg-g)
* Aldo Culquicondor (alculquicondor)
* Antonio Ojea (aojea)
* David Eads (deads2k)
* Jan Šafránek (jsafrane)
* Joe Betz (jpbetz)
* Kante Yin (kerthcet)
* Michelle Au (msau42)
* Tim Bannister (sftim)
* Xing Yang (xing-yang)
If you're interested in getting involved with the design and development of CSI
or any part of the Kubernetes storage system, join the
[Kubernetes Storage Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG).
We're rapidly growing and always welcome new contributors.

View File

@ -0,0 +1,13 @@
---
title: কেস স্টাডিজ
linkTitle: কেস স্টাডিজ
bigheader: কুবারনেটিস ব্যবহারকারীদের কেস স্টাডিজ
abstract: কুবারনেটিস ব্যবহারকারীদের একটি সংগ্রহ যারা প্রোডাকশনে এটি ব্যবহার করে.
layout: basic
class: gridPage
body_class: caseStudies
cid: caseStudies
menu:
main:
weight: 60
---

Binary file not shown (new image file, 6.9 KiB)

View File

@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#64c3a5;}.cls-2,.cls-3,.cls-4,.cls-5{fill-rule:evenodd;}.cls-3{fill:#e9d661;}.cls-4{fill:#0582bd;}.cls-5{fill:#808285;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.21396" y="-3.67916" width="223.25536" height="134.51136"/><path class="cls-2" d="M109.66922,82.43741A19.57065,19.57065,0,0,1,90.35516,66.01248a19.57588,19.57588,0,0,1,17.35089-16.32394A19.571,19.571,0,0,0,96.842,72.59509a13.28048,13.28048,0,0,0,12.82726,9.84232"/><path class="cls-3" d="M107.70538,49.68854q.96877-.09621,1.96384-.09714a19.59034,19.59034,0,0,1,5.971.93243c-1.19879,3.42362-2.9112,5.60261-4.58257,5.4263a13.51316,13.51316,0,0,0-1.38838-.07249,13.2911,13.2911,0,0,0-12.94535,16.2528,19.572,19.572,0,0,1,10.98151-22.4419"/><path class="cls-4" d="M118.5549,59.2884c-1.24844-1.1244-.77543-3.85447.96614-7.03615a19.56137,19.56137,0,1,1-29.16566,13.762,19.57091,19.57091,0,0,0,19.31384,16.42316,13.27982,13.27982,0,0,0,8.88568-23.149"/><path class="cls-5" d="M148.72465,56.68991a6.07242,6.07242,0,0,0-9.10641,5.25828v24.793H133.331v-24.793a12.36017,12.36017,0,0,1,18.53664-10.70293l-3.143,5.44465Zm24.605-2.97568a12.35685,12.35685,0,0,1,21.57057,8.234v24.793h-6.28659v-24.793a6.07039,6.07039,0,1,0-12.14078,0v24.793h-6.28662v-24.793a6.07006,6.07006,0,1,0-12.14012,0v24.793h-6.28747v-24.793a12.35715,12.35715,0,0,1,21.571-8.234m-79.275-11.71068a6.07292,6.07292,0,0,0-9.10642,5.25869v3.15422h6.60732l-2.79511,6.27307H84.94817V86.74119H78.66091v-39.479A12.3589,12.3589,0,0,1,97.19756,36.55965L94.625,42.333l-.57036-.32949ZM20.49076,54.8028a13.15543,13.15543,0,0,1,10.7037-5.2114c7.03714,0,12.74106,5.1005,12.74106,11.39221l-.01237,16.34832c0,6.29168-5.704,11.39208-12.74112,11.39208s-12.741-5.1004-12.741-11.39208c0-5.26947,3.663-9.84144,9.772-11.47815l9.43621-2.52868.01242-2.34149c0-2.82005-2.895-5.106-6.46627-5.106a7.12669,7.12669,0,0,0-5.22007,2.0919l-.66586.73592L20.49076,54.8028ZM37.64921,69.84037l-.00073,7.49156c-.00018,2.81872-2.89507,5.10548-6.46645,5.10548s-6.46627-2.28565-6.46627-5.10548c0-2.41256,2.2001-4.61169,5.09086-5.38735l7.84259-2.10421Zm29.58343-32.952v14.2779a13.83819,13.83819,0,0,0-6.46716-1.57488c-7.03708,0-12.74094,5.1005-12.74094,11.39221V77.33193c0,6.29168,5.70386,11.39208,12.74094,11.39208s12.74117-5.1004,12.74117-11.39208V36.88838Zm0,24.09523V77.33193c0,2.81983-2.89487,5.10548-6.46628,5.10548s-6.46627-2.28565-6.46627-5.10548V60.98361c0-2.82005,2.89487-5.106,6.46627-5.106s6.46628,2.28592,6.46628,5.106"/></svg>


View File

@ -0,0 +1,86 @@
---
title: Adform Case Study
linkTitle: Adform
case_study_styles: true
cid: caseStudies
logo: adform_featured_logo.png
draft: false
featured: true
weight: 47
quote: >
Kubernetes enabled the self-healing and immutable infrastructure. We can do faster releases, so our developers are really happy. They can ship our features faster than before, and that makes our clients happier.
new_case_study_styles: true
heading_background: /images/case-studies/adform/banner1.jpg
heading_title_logo: /images/adform_logo.png
subheading: >
Improving Performance and Morale with Cloud Native
case_study_details:
- Company: AdForm
- Location: Copenhagen, Denmark
- Industry: Adtech
---
<h2>Challenge</h2>
<p><a href="https://site.adform.com/">Adform's</a> mission is to provide a secure and transparent full stack of advertising technology to enable digital ads across devices. The company has a large infrastructure: <a href="https://www.openstack.org/">OpenStack</a>-based private clouds running on 1,100 physical servers in 7 data centers around the world, 3 of which were opened in the past year. With the company's growth, the infrastructure team felt that "our private cloud was not really flexible enough," says IT System Engineer Edgaras Apšega. "The biggest pain point is that our developers need to maintain their virtual machines, so rolling out technology and new software takes time. We were really struggling with our releases, and we didn't have self-healing infrastructure."</p>
<h2>Solution</h2>
<p>The team, which had already been using <a href="https://prometheus.io/">Prometheus</a> for monitoring, embraced <a href="https://kubernetes.io/">Kubernetes</a> and cloud native practices in 2017. "To start our Kubernetes journey, we had to adapt all our software, so we had to choose newer frameworks," says Apšega. "We also adopted the microservices way, so observability is much better because you can inspect the bug or the services separately."</p>
<h2>Impact</h2>
<p>"Kubernetes helps our business a lot because our features are coming to market faster," says Apšega. The release process went from several hours to several minutes. Autoscaling has been at least 6 times faster than the semi-manual VM bootstrapping and application deployment required before. The team estimates that the company has experienced cost savings of 4-5x due to less hardware and fewer man hours needed to set up the hardware and virtual machines, metrics, and logging. Utilization of the hardware resources has been reduced as well, with containers notching 2-3 times more efficiency over virtual machines. "The deployments are very easy because developers just push the code and it automatically appears on Kubernetes," says Apšega. Prometheus has also had a positive impact: "It provides high availability for metrics and alerting. We monitor everything starting from hardware to applications. Having all the metrics in <a href="https://grafana.com/">Grafana</a> dashboards provides great insight on your systems."</p>
{{< case-studies/quote author="Edgaras Apšega, IT Systems Engineer, Adform" >}}
"Kubernetes enabled the self-healing and immutable infrastructure. We can do faster releases, so our developers are really happy. They can ship our features faster than before, and that makes our clients happier."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
Adform made <a href="https://www.wsj.com/articles/fake-ad-operation-used-to-steal-from-publishers-is-uncovered-1511290981">headlines</a> last year when it detected the HyphBot ad fraud network that was costing some businesses hundreds of thousands of dollars a day.
{{< /case-studies/lead >}}
<p>With its mission to provide a secure and transparent full stack of advertising technology to enable an open internet, Adform published a <a href="https://site.adform.com/media/85132/hyphbot_whitepaper_.pdf">white paper</a> revealing what it did—and others could too—to limit customers' exposure to the scam.</p>
<p>In that same spirit, Adform is sharing its cloud native journey. "When you see that everyone shares their best practices, it inspires you to contribute back to the project," says IT Systems Engineer Edgaras Apšega.</p>
<p>The company has a large infrastructure: <a href="https://www.openstack.org/">OpenStack</a>-based private clouds running on 1,100 physical servers in their own seven data centers around the world, three of which were opened in the past year. With the company's growth, the infrastructure team felt that "our private cloud was not really flexible enough," says Apšega. "The biggest pain point is that our developers need to maintain their virtual machines, so rolling out technology and new software really takes time. We were really struggling with our releases, and we didn't have self-healing infrastructure."</p>
{{< case-studies/quote
image="/images/case-studies/adform/banner3.jpg"
author="Edgaras Apšega, IT Systems Engineer, Adform"
>}}
"The fact that Cloud Native Computing Foundation incubated Kubernetes was a really big point for us because it was vendor neutral. And we can see that a community really gathers around it. Everyone shares their experiences, their knowledge, and the fact that it's open source, you can contribute."
{{< /case-studies/quote >}}
<p>The team, which had already been using Prometheus for monitoring, embraced Kubernetes, microservices, and cloud native practices. "The fact that Cloud Native Computing Foundation incubated Kubernetes was a really big point for us because it was vendor neutral," says Apšega. "And we can see that a community really gathers around it."</p>
<p>A proof of concept project was started, with a Kubernetes cluster running on bare metal in the data center. When developers saw how quickly containers could be spun up compared to the virtual machine process, "they wanted to ship their containers in production right away, and we were still doing proof of concept," says IT Systems Engineer Andrius Cibulskis.</p>
<p>Of course, a lot of work still had to be done. "First of all, we had to learn Kubernetes, see all of the moving parts, how they glue together," says Apšega. "Second of all, the whole CI/CD part had to be redone, and our DevOps team had to invest more man hours to implement it. And third is that developers had to rewrite the code, and they're still doing it."</p>
<p>The first production cluster was launched in the spring of 2018, and is now up to 20 physical machines dedicated for pods throughout three data centers, with plans for separate clusters in the other four data centers. The user-facing Adform application platform, data distribution platform, and back ends are now all running on Kubernetes. "Many APIs for critical applications are being developed for Kubernetes," says Apšega. "Teams are rewriting their applications to .NET core, because it supports containers, and preparing to move to Kubernetes. And new applications, by default, go in containers."</p>
{{< case-studies/quote
image="/images/case-studies/adform/banner4.jpg"
author="Andrius Cibulskis, IT Systems Engineer, Adform"
>}}
"Releases are really nice for them, because they just push their code to Git and that's it. They don't have to worry about their virtual machines anymore."
{{< /case-studies/quote >}}
<p>This big push has been driven by the real impact that these new practices have had. "Kubernetes helps our business a lot because our features are coming to market faster," says Apšega. "The deployments are very easy because developers just push the code and it automatically appears on Kubernetes." The release process went from several hours to several minutes. Autoscaling is at least six times faster than the semi-manual VM bootstrapping and application deployment required before.</p>
<p>The team estimates that the company has experienced cost savings of 4-5x due to less hardware and fewer man hours needed to set up the hardware and virtual machines, metrics, and logging. Utilization of the hardware resources has been reduced as well, with containers notching two to three times more efficiency over virtual machines.</p>
<p>Prometheus has also had a positive impact: "It provides high availability for metrics and alerting," says Apšega. "We monitor everything starting from hardware to applications. Having all the metrics in Grafana dashboards provides great insight on our systems."</p>
{{< case-studies/quote author="Edgaras Apšega, IT Systems Engineer, Adform" >}}
"I think that our company just started our cloud native journey. It seems like a huge road ahead, but we're really happy that we joined it."
{{< /case-studies/quote >}}
<p>All of these benefits have trickled down to individual team members, whose working lives have been changed for the better. "They used to have to get up at night to re-start some services, and now Kubernetes handles all of that," says Apšega. Adds Cibulskis: "Releases are really nice for them, because they just push their code to Git and that's it. They don't have to worry about their virtual machines anymore." Even the security teams have been impacted. "Security teams are always not happy," says Apšega, "and now they're happy because they can easily inspect the containers."</p>
<p>The company plans to remain in the data centers for now, "mostly because we want to keep all the data, to not share it in any way," says Cibulskis, "and it's cheaper at our scale." But, Apšega says, the possibility of using a hybrid cloud for computing is intriguing: "One of the projects we're interested in is the <a href="https://github.com/virtual-kubelet/virtual-kubelet">Virtual Kubelet</a> that lets you spin up the working nodes on different clouds to do some computing."</p>
<p>Apšega, Cibulskis and their colleagues are keeping tabs on how the cloud native ecosystem develops, and are excited to contribute where they can. "I think that our company just started our cloud native journey," says Apšega. "It seems like a huge road ahead, but we're really happy that we joined it."</p>

View File

@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#040606;}</style></defs><title>kubernetes.io-54664</title><rect class="cls-1" x="-4.55738" y="-4.48481" width="223.25536" height="134.51136"/><path class="cls-2" d="M169.05853,71.307c.23549,6.07483,5.42615,10.38072,14.09569,10.38072,7.0783,0,12.91766-3.06693,12.91766-9.85009,0-4.71846-2.65436-7.49024-8.78885-8.66954l-4.77685-.94382c-3.06756-.59028-5.19066-1.17992-5.19066-3.0079,0-2.00568,2.06471-2.89047,4.65943-2.89047,3.77525,0,5.3081,1.887,5.42678,4.1288H194.951c-.41258-5.89838-5.1304-9.8501-12.73994-9.8501-7.845,0-12.50382,4.30588-12.50382,9.90912,0,6.84157,5.54483,7.96247,10.3217,8.84662l3.95171.70834c2.83082.53062,4.06977,1.35638,4.06977,3.0079,0,1.47444-1.41541,2.94887-4.77748,2.94887-4.89553,0-6.488-2.53631-6.54705-4.71845Zm-27.4259-5.13164a8.58413,8.58413,0,0,1,8.557-8.61113q.02706-.00009.05409,0a8.58415,8.58415,0,0,1,8.61115,8.557q.00009.02706,0,.0541a8.58413,8.58413,0,0,1-8.55641,8.61177q-.02738.00009-.05473,0a8.58413,8.58413,0,0,1-8.61113-8.557q-.00009-.02738,0-.05473M158.79589,81.098h7.49149V50.95811h-7.49149v2.41825a15.58033,15.58033,0,1,0-9.01133,28.37034q.05285.00018.10567,0a15.47693,15.47693,0,0,0,8.90566-2.77179ZM106.59844,66.17532a8.58413,8.58413,0,0,1,8.557-8.61113q.02706-.00009.05409,0a8.58414,8.58414,0,0,1,8.61114,8.557q.00009.02706,0,.05409a8.58413,8.58413,0,0,1-8.55641,8.61177q-.02738.00009-.05473,0a8.58413,8.58413,0,0,1-8.61113-8.557q-.00009-.02738,0-.05473m17.16387-25.47987V53.37636a15.58048,15.58048,0,1,0-9.01195,28.37034q.05251.00018.105,0a15.48233,15.48233,0,0,0,8.90691-2.77179V81.098h7.49023V40.69545ZM96.21772,50.95811H88.72811V81.098h7.49023Zm0-10.26266H88.72811v7.49023h7.49023ZM60.41805,66.17532a8.58414,8.58414,0,0,1,8.557-8.61113q.02673-.00009.05346,0a8.58414,8.58414,0,0,1,8.61114,8.557q.00009.02706,0,.05409a8.58414,8.58414,0,0,1-8.55642,8.61177q-.02736.00009-.05472,0a8.58412,8.58412,0,0,1-8.61113-8.557q-.00009-.02738,0-.05473M77.58067,40.69545V53.37636A15.58033,15.58033,0,1,0,68.56935,81.7467q.05283.00018.10567,0a15.4769,15.4769,0,0,0,8.90565-2.77179V81.098h7.4915V40.69545ZM25.38259,66.176a8.58414,8.58414,0,0,1,8.557-8.61114q.02736-.00009.05472,0a8.58414,8.58414,0,0,1,8.61114,8.557q.00009.027,0,.05409a8.58414,8.58414,0,0,1-8.55642,8.61176q-.02736.00009-.05472,0a8.58413,8.58413,0,0,1-8.61177-8.55641q-.00009-.02768,0-.05535m17.16388,14.9227H50.038V50.95872H42.54647V53.377a15.58048,15.58048,0,1,0-9.01069,28.37035q.0522.00016.10441,0a15.48032,15.48032,0,0,0,8.90628-2.77179Z"/></svg>


View File

@ -0,0 +1,78 @@
---
title: adidas Case Study
linkTitle: adidas
case_study_styles: true
cid: caseStudies
featured: false
new_case_study_styles: true
heading_background: /images/case-studies/adidas/banner1.png
heading_title_text: adidas
use_gradient_overlay: true
subheading: >
Staying True to Its Culture, adidas Got 40% of Its Most Impactful Systems Running on Kubernetes in a Year
case_study_details:
- Company: adidas
- Location: Herzogenaurach, Germany
- Industry: Fashion
---
<h2>Challenge</h2>
<p>In recent years, the adidas team was happy with its software choices from a technology perspective—but accessing all of the tools was a problem. For instance, "just to get a developer VM, you had to send a request form, give the purpose, give the title of the project, who's responsible, give the internal cost center a call so that they can do recharges," says Daniel Eichten, Senior Director of Platform Engineering. "The best case is you got your machine in half an hour. Worst case is half a week or sometimes even a week."</p>
<h2>Solution</h2>
<p>To improve the process, "we started from the developer point of view," and looked for ways to shorten the time it took to get a project up and running and into the adidas infrastructure, says Senior Director of Platform Engineering Fernando Cornago. They found the solution with containerization, agile development, continuous delivery, and a cloud native platform that includes Kubernetes and Prometheus.</p>
<h2>Impact</h2>
<p>Just six months after the project began, 100% of the adidas e-commerce site was running on Kubernetes. Load time for the e-commerce site was reduced by half. Releases went from every 4-6 weeks to 3-4 times a day. With 4,000 pods, 200 nodes, and 80,000 builds per month, adidas is now running 40% of its most critical, impactful systems on its cloud native platform.</p>
{{< case-studies/quote
image="/images/case-studies/adidas/banner2.png"
author="FERNANDO CORNAGO, SENIOR DIRECTOR OF PLATFORM ENGINEERING AT ADIDAS"
>}}
"For me, Kubernetes is a platform made by engineers for engineers. It's relieving the development team from tasks that they don't want to do, but at the same time giving the visibility of what is behind the curtain, so they can also control it."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
In recent years, the adidas team was happy with its software choices from a technology perspective—but accessing all of the tools was a problem.
{{< /case-studies/lead >}}
<p>For engineers at adidas, says Daniel Eichten, Senior Director of Platform Engineering, "it felt like being an artist with your hands tied behind your back, and you're supposed to paint something."</p>
<p>For instance, "just to get a developer VM, you had to send a request form, give the purpose, give the title of the project, who's responsible, give the internal cost center a call so that they can do recharges," says Eichten. "Eventually, after a ton of approvals, then the provisioning of the machine happened within minutes, and then the best case is you got your machine in half an hour. Worst case is half a week or sometimes even a week."</p>
<p>To improve the process, "we started from the developer point of view," and looked for ways to shorten the time it took to get a project up and running and into the adidas infrastructure, says Senior Director of Platform Engineering Fernando Cornago.</p>
{{< case-studies/quote author="DANIEL EICHTEN, SENIOR DIRECTOR OF PLATFORM ENGINEERING AT ADIDAS" >}}
"I call our cloud native platform the field of dreams. We built it, and we never anticipated that people would come and just love it."
{{< /case-studies/quote >}}
<p>"We were engineers before," adds Eichten. "We know what a typical engineer needs, is craving for, what he or she doesn't want to take care of. For us it was pretty clear. We filled the gaps that no one wants to take care of, and we make the stuff that is usually painful as painless as possible." The goals: to improve speed, operability, and observability.</p>
<p>Cornago and Eichten found the solution with containerization, agile development, continuous delivery, and a cloud native platform that includes Kubernetes and Prometheus. "Choosing Kubernetes was pretty clear," says Eichten. "Day zero, deciding, easy. Day one, installing, configuring, easy. Day two, keeping it up and running even with small workloads, if something goes wrong, you don't know how these things work in detail, you're lost. For day two problems, we needed a partner who's helping us."</p>
<p>In early 2017, adidas chose Giant Swarm to consult, install, configure, and run all of its Kubernetes clusters in AWS and on premise. "There is no competitive edge over our competitors like Puma or Nike in running and operating a Kubernetes cluster," says Eichten. "Our competitive edge is that we teach our internal engineers how to build cool e-comm stores that are fast, that are resilient, that are running perfectly."</p>
{{< case-studies/quote
image="/images/case-studies/adidas/banner3.png"
author="DANIEL EICHTEN, SENIOR DIRECTOR OF PLATFORM ENGINEERING AT ADIDAS"
>}}
"There is no competitive edge over our competitors like Puma or Nike in running and operating a Kubernetes cluster. Our competitive edge is that we teach our internal engineers how to build cool e-comm stores that are fast, that are resilient, that are running perfectly."
{{< /case-studies/quote >}}
<p>Adds Cornago: "For me, our Kubernetes platform is made by engineers for engineers. It's relieving the development team from tasks that they don't want to do, but at the same time giving the visibility of what is behind the curtain, so they can also control it."</p>
<p>Case in point: For Cyber Week, the team has to create a lot of custom metrics. In November 2017, "because we used the same Prometheus that we use for monitoring the cluster, we really filled the Prometheus database, and we were not able to reduce the retention period [enough]," says Cornago. So during the freeze period before the peak shopping week, five engineers from the platform team worked with five engineers from the e-comm team to figure out a federated solution that was implemented in two days.</p>
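<p>The case study doesn't publish that configuration, but Prometheus federation generally works by having one server scrape selected series from another server's <code>/federate</code> endpoint. A minimal sketch of such a scrape job, with purely illustrative job names, selectors, and addresses, might look like this:</p>

```yaml
# Hypothetical federation job for a dedicated e-commerce Prometheus:
# it pulls only selected series from the cluster-monitoring Prometheus
# via the /federate endpoint instead of sharing that server's storage.
scrape_configs:
  - job_name: 'federate-ecom'                 # illustrative name
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="ecom-checkout"}'             # illustrative selectors
        - '{__name__=~"ecom_.*"}'
    static_configs:
      - targets:
          - 'cluster-prometheus.monitoring.svc:9090'   # illustrative address
```

<p>Pulling only the series the e-comm team needs keeps the custom business metrics out of the cluster-monitoring server's retention window, which is the kind of separation a federated setup like the one described above provides.</p>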
<p>In addition to being ready for Cyber Week—100% of the adidas e-commerce site was running on Kubernetes then, just six months after the project began—the cloud native stack has had other impressive results. Load time for the e-commerce site was reduced by half. Releases went from every 4-6 weeks to 3-4 times a day. With 4,000 pods, 200 nodes, and 80,000 builds per month, adidas is now running 40% of its most critical, impactful systems on its cloud native platform.</p>
<p>And adoption has spread quickly among adidas's 300-strong engineering corps. "I call our cloud native platform the field of dreams," says Eichten. "We built it, and we never anticipated that people would come and just love it."</p>
<p>For one thing, "everybody who can touch a line of code" has spent one full week onboarding and learning the platform with members of the 35-person platform engineering team, says Cornago. "We try to spend 50% of our time sitting with the teams, because this is the only way to understand how our platform is being used. And this is how the teams will feel safe that there is someone on the other side of the wall, also feeling the pain."</p>
<p>Additionally, Cornago and Eichten took advantage of the fact that as a fashion athletic wear brand, adidas has sports and competition in its DNA. "Top-down mandates don't work at adidas, but gamification works," says Cornago. "So this year we had a DevOps Cup competition. Every team created new technical capabilities and had a hypothesis of how this affected business value. We announced the winner at a big internal tech summit with more than 600 people. It's been really, really useful for the teams."</p>
<p>So if they had any advice for other companies looking to start a cloud native journey, it would be this: "There is no one-size-fits-all for all companies," says Cornago. "Apply your company's culture to everything that you do."</p>

View File

@ -0,0 +1,84 @@
---
title: Amadeus Case Study
case_study_styles: true
cid: caseStudies
new_case_study_styles: true
heading_background: /images/case-studies/amadeus/banner1.jpg
heading_title_logo: /images/amadeus_logo.png
subheading: >
Another Technical Evolution for a 30-Year-Old Company
case_study_details:
- Company: Amadeus IT Group
- Location: Madrid, Spain
- Industry: Travel Technology
---
<h2>Challenge</h2>
<p>In the past few years, Amadeus, which provides IT solutions to the travel industry around the world, found itself in need of a new platform for the 5,000 services supported by its service-oriented architecture. The 30-year-old company operates its own data center in Germany, and there were growing demands internally and externally for solutions that needed to be geographically dispersed. And more generally, "we had objectives of being even more highly available," says Eric Mountain, Senior Expert, Distributed Systems at Amadeus. Among the company's goals: to increase automation in managing its infrastructure, optimize the distribution of workloads, use data center resources more efficiently, and adopt new technologies more easily.</p>
<h2>Solution</h2>
<p>Mountain has been overseeing the company's migration to <a href="https://kubernetes.io/">Kubernetes</a>, using <a href="https://www.openshift.org/">OpenShift</a> Container Platform, <a href="https://www.redhat.com/en">Red Hat</a>'s enterprise container platform.</p>
<h2>Impact</h2>
<p>One of the first projects the team deployed in Kubernetes was the Amadeus Airline Cloud Availability solution, which helps manage ever-increasing flight-search volume. "It's now handling in production several thousand transactions per second, and it's deployed in multiple data centers throughout the world," says Mountain. "It's not a migration of an existing workload; it's a whole new workload that we couldn't have done otherwise. [This platform] gives us access to market opportunities that we didn't have before."</p>
{{< case-studies/quote author="Eric Mountain, Senior Expert, Distributed Systems at Amadeus IT Group" >}}
"We want multi-data center capabilities, and we want them for our mainstream system as well. We didn't think that we could achieve them with our existing system. We need new automation, things that Kubernetes and OpenShift bring."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
In his two decades at Amadeus, Eric Mountain has been the migrations guy.
{{< /case-studies/lead >}}
<p>Back in the day, he worked on the company's move from Unix to Linux, and now he's overseeing the journey to cloud native. "Technology just keeps changing, and we embrace it," he says. "We are celebrating our 30 years this year, and we continue evolving and innovating to stay cost-efficient and enhance everyone's travel experience, without interrupting workflows for the customers who depend on our technology."</p>
<p>That was the challenge that Amadeus—which provides IT solutions to the travel industry around the world, from flight searches to hotel bookings to customer feedback—faced in 2014. The technology team realized it was in need of a new platform for the 5,000 services supported by its service-oriented architecture.</p>
<p>The tipping point occurred when they began receiving many requests, internally and externally, for solutions that needed to be geographically outside the company's main data center in Germany. "Some requests were for running our applications on customer premises," Mountain says. "There were also new services we were looking to offer that required response time to the order of a few hundred milliseconds, which we couldn't achieve with transatlantic traffic. Or at least, not without eating into a considerable portion of the time available to our applications for them to process individual queries."</p>
<p>More generally, the company was interested in leveling up on high availability, increasing automation in managing infrastructure, optimizing the distribution of workloads and using data center resources more efficiently. "We have thousands and thousands of servers," says Mountain. "These servers are assigned roles, so even if the setup is highly automated, the machine still has a given role. It's wasteful on many levels. For instance, an application doesn't necessarily use the machine very optimally. Virtualization can help a bit, but it's not a silver bullet. If that machine breaks, you still want to repair it because it has that role and you can't simply say, 'Well, I'll bring in another machine and give it that role.' It's not fast. It's not efficient. So we wanted the next level of automation."</p>
{{< case-studies/quote image="/images/case-studies/amadeus/banner3.jpg" >}}
"We hope that if we build on what others have built, what we do might actually be upstream-able. As Kubernetes and OpenShift progress, we see that we are indeed able to remove some of the additional layers we implemented to compensate for gaps we perceived earlier."
{{< /case-studies/quote >}}
<p>While mainly a C++ and Java shop, Amadeus also wanted to be able to adopt new technologies more easily. Some of its developers had started using languages like <a href="https://www.python.org/">Python</a> and databases like <a href="https://www.couchbase.com/">Couchbase</a>, but Mountain wanted still more options, he says, "in order to better adapt our technical solutions to the products we offer, and open up entirely new possibilities to our developers." Working with recent technologies and cool new things would also make it easier to attract new talent.</p>
<p>All of those needs led Mountain and his team on a search for a new platform. "We did a set of studies and proofs of concept over a fairly short period, and we considered many technologies," he says. "In the end, we were left with three choices: build everything on premise, build on top of <a href="https://kubernetes.io/">Kubernetes</a> whatever happens to be missing from our point of view, or go with <a href="https://www.openshift.com/">OpenShift</a> and build whatever remains there."</p>
<p>The team decided against building everything themselves—though they'd done that sort of thing in the past—because "people were already inventing things that looked good," says Mountain.</p>
<p>Ultimately, they went with OpenShift Container Platform, <a href="https://www.redhat.com/en">Red Hat</a>'s Kubernetes-based enterprise offering, instead of building on top of Kubernetes because "there was a lot of synergy between what we wanted and the way Red Hat was anticipating going with OpenShift," says Mountain. "They were clearly developing Kubernetes, and developing certain things ahead of time in OpenShift, which were important to us, such as more security."</p>
<p>The hope was that those particular features would eventually be built into Kubernetes, and, in the case of security, Mountain feels that has happened. "We realize that there's always a certain amount of automation that we will probably have to develop ourselves to compensate for certain gaps," says Mountain. "The less we do that, the better for us. We hope that if we build on what others have built, what we do might actually be upstream-able. As Kubernetes and OpenShift progress, we see that we are indeed able to remove some of the additional layers we implemented to compensate for gaps we perceived earlier."</p>
{{< case-studies/quote image="/images/case-studies/amadeus/banner4.jpg" >}}
"It's not a migration of an existing workload; it's a whole new workload that we couldn't have done otherwise. [This platform] gives us access to market opportunities that we didn't have before."
{{< /case-studies/quote >}}
<p>The first project the team tackled was one that they knew had to run outside the data center in Germany. Because of the project's needs, "We couldn't rely only on the built-in Kubernetes service discovery; we had to layer on top of that an extra service discovery level that allows us to load balance at the operation level within our system," says Mountain. They also built a stream dedicated to monitoring, which at the time wasn't offered in the Kubernetes or OpenShift ecosystem. Now that <a href="https://www.prometheus.io/">Prometheus</a> and other products are available, Mountain says the company will likely re-evaluate their monitoring system: "We obviously always like to leverage what Kubernetes and OpenShift can offer."
</p>
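<p>For readers unfamiliar with the built-in layer Mountain refers to: every Kubernetes Service gets a stable cluster DNS name that load-balances across its pods, roughly as in this illustrative manifest (the names are hypothetical, not from Amadeus). The extra discovery level Amadeus built sits on top of this.</p>

```yaml
# Illustrative Service: clients inside the cluster reach the pods behind
# it at the stable DNS name availability.travel.svc.cluster.local, and
# kube-proxy spreads connections across the matching pods.
apiVersion: v1
kind: Service
metadata:
  name: availability      # hypothetical name
  namespace: travel       # hypothetical namespace
spec:
  selector:
    app: availability-search
  ports:
    - name: http
      port: 80
      targetPort: 8080
```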
<p>The second project ended up going into production first: the Amadeus Airline Cloud Availability solution, which helps manage ever-increasing flight-search volume and was deployed in public cloud. Launched in early 2016, it is "now handling in production several thousand transactions per second, and it's deployed in multiple data centers throughout the world," says Mountain. "It's not a migration of an existing workload; it's a whole new workload that we couldn't have done otherwise. [This platform] gives us access to market opportunities that we didn't have before."</p>
<p>Having been through this kind of technical evolution more than once, Mountain has advice on how to handle the cultural changes. "That's one aspect that we can tackle progressively," he says. "We have to go on supplying our customers with new features on our pre-existing products, and we have to keep existing products working. So we can't simply do absolutely everything from one day to the next. And we mustn't sell it that way."</p>
<p>The first order of business, then, is to pick one or two applications to demonstrate that the technology works. Rather than choosing a high-impact, high-risk project, Mountain's team selected a smaller application that was representative of all the company's other applications in its complexity: "We just made sure we picked something that's complex enough, and we showed that it can be done."</p>
{{< case-studies/quote >}}
"The bottom line is we want these multi-data center capabilities, and we want them as well for our mainstream system," he says. "And we don't think that we can implement them with our previous system. We need the new automation, homogeneity, and scale that Kubernetes and OpenShift bring."
{{< /case-studies/quote >}}
<p>Next comes convincing people. "On the operations side and on the R&D side, there will be people who say quite rightly, 'There is a system, and it works, so why change?'" Mountain says. "The only thing that really convinces people is showing them the value." For Amadeus, people realized that the Airline Cloud Availability product could not have been made available on the public cloud with the company's existing system. The question then became, he says, "Do we go into a full-blown migration? Is that something that is justified?"</p>
<p>"The bottom line is we want these multi-data center capabilities, and we want them as well for our mainstream system," he says. "And we don't think that we can implement them with our previous system. We need the new automation, homogeneity, and scale that Kubernetes and OpenShift bring."</p>
<p>So how do you get everyone on board? "Make sure you have good links between your R&D and your operations," he says. "Also make sure you're going to talk early on to the investors and stakeholders. Figure out what it is that they will be expecting from you, that will convince them or not, that this is the right way for your company."</p>
<p>His other advice is simply to make the technology available for people to try it. "Kubernetes and OpenShift Origin are open source software, so there's no complicated license key for the evaluation period and you're not limited to 30 days," he points out. "Just go and get it running." Along with that, he adds, "You've got to be prepared to rethink how you do things. Of course making your applications as cloud native as possible is how you'll reap the most benefits: 12 factors, CI/CD, which is continuous integration, continuous delivery, but also continuous deployment."</p>
<p>And while they explore that aspect of the technology, Mountain and his team will likely be practicing what he preaches to others taking the cloud native journey. "See what happens when you break it, because it's important to understand the limits of the system," he says. Or rather, he notes, the advantages of it. "Breaking things on Kube is actually one of the nice things about it—it recovers. It's the only real way that you'll see that you might be able to do things."</p>

View File

@ -0,0 +1,92 @@
---
title: Ancestry Case Study
case_study_styles: true
cid: caseStudies
new_case_study_styles: true
heading_background: /images/case-studies/ancestry/banner1.jpg
heading_title_logo: /images/ancestry_logo.png
subheading: >
Digging Into the Past With New Technology
case_study_details:
- Company: Ancestry
- Location: Lehi, Utah
- Industry: Internet Company, Online Services
---
<h2>Challenge</h2>
<p>Ancestry, the global leader in family history and consumer genomics, uses sophisticated engineering and technology to help everyone, everywhere discover the story of what led to them. The company has spent more than 30 years innovating and building products and technologies that, at their core, result in real and emotional human responses. <a href="https://www.ancestry.com">Ancestry</a> currently serves more than 2.6 million paying subscribers, holds 20 billion historical records and 90 million family trees, and has more than four million people in its AncestryDNA network, making it the largest consumer genomics DNA network in the world. The company's popular website, <a href="https://www.ancestry.com">ancestry.com</a>, was working with big data long before the term was popularized. The site was built on hundreds of services, technologies and a traditional deployment methodology. "It's worked well for us in the past," says Paul MacKay, software engineer and architect at Ancestry, "but had become quite cumbersome in its processing and is time-consuming. As a primarily online service, we are constantly looking for ways to accelerate to be more agile in delivering our solutions and our&nbsp;products."</p>
<h2>Solution</h2>
<p>The company is transitioning to cloud native infrastructure, using <a href="https://www.docker.com">Docker</a> containerization, <a href="https://kubernetes.io">Kubernetes</a> orchestration and <a href="https://prometheus.io">Prometheus</a> for cluster monitoring.</p>
<h2>Impact</h2>
<p>"Every single product, every decision we make at Ancestry, focuses on delighting our customers with intimate, sometimes life-changing discoveries about themselves and their families," says MacKay. "As the company continues to grow, the increased productivity gains from using Kubernetes has helped Ancestry make customer discoveries faster. With the move to Dockerization for example, instead of taking between 20 to 50 minutes to deploy a new piece of code, we can now deploy in under a minute for much of our code. We've truly experienced significant time savings in addition to the various features and benefits from cloud native and Kubernetes-type technologies."</p>
{{< case-studies/quote author="PAUL MACKAY, SOFTWARE ENGINEER AND ARCHITECT AT ANCESTRY" >}}
"At a certain point, you have to step back if you're going to push a new technology and get key thought leaders with engineers within the organization to become your champions for new technology adoption. At training sessions, the development teams were always the ones that were saying, 'Kubernetes saved our time tremendously; it's an enabler. It really is incredible.'"
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
It started with a Shaky Leaf.
{{< /case-studies/lead >}}
<p>Since its introduction a decade ago, the Shaky Leaf icon has become one of Ancestry's signature features, which signals to users that there's a helpful hint you can use to find out more about your family tree.</p>
<p>So when the company decided to begin moving its infrastructure to cloud native technology, the first service that was launched on <a href="https://kubernetes.io">Kubernetes</a>, the open source platform for managing application containers across clusters of hosts, was this hint system. Think of it as Amazon's recommended products, but instead of recommending products the company recommends records, stories, or familial connections. "It was a very important part of the site," says Ancestry software engineer and architect Paul MacKay, "but also small enough for a pilot project that we knew we could handle in a very appropriate, secure way."</p>
<p>And when it went live smoothly in early 2016, "our deployment time for this service literally was cut down from 50 minutes to 2 or 5 minutes," MacKay adds. "The development team was just thrilled because we're focused on supplying a great experience for our customers. And that means features, it means stability, it means all those things that we need for a first-in-class type operation."</p>
<p>The stability of that Shaky Leaf was a signal for MacKay and his team that their decision to embrace cloud native technologies was the right one for the company. With a private data center, Ancestry built its website (which launched in 1996) on hundreds of services and technologies and a traditional deployment methodology. "It worked well for us in the past, but the sum of the legacy systems became quite cumbersome in its processing and was time-consuming," says MacKay. "We were looking for other ways to accelerate, to be more agile in delivering our solutions and our products."</p>
{{< case-studies/quote image="/images/case-studies/ancestry/banner3.jpg" >}}
"And when it [Kubernetes] went live smoothly in early 2016, 'our deployment time for this service literally was cut down from 50 minutes to 2 or 5 minutes,' MacKay adds. 'The development team was just thrilled because we're focused on supplying a great experience for our customers. And that means features, it means stability, it means all those things that we need for a first-in-class type operation.'"
{{< /case-studies/quote >}}
<p>That need led them in 2015 to explore containerization. Ancestry engineers had already been using technology like <a href="https://www.java.com/en/">Java</a> and <a href="https://www.python.org">Python</a> on Linux, so part of the decision was about making the infrastructure more Linux-friendly. They quickly decided that they wanted to go with Docker for containerization, "but it always comes down to the orchestration part of it to make it really work," says MacKay.</p>
<p>His team looked at orchestration platforms offered by <a href="https://docs.docker.com/compose/">Docker Compose</a>, <a href="http://mesos.apache.org">Mesos</a> and <a href="https://www.openstack.org/software/">OpenStack</a>, and even started to prototype some homegrown solutions. And then they started hearing rumblings of the imminent release of Kubernetes v1.0. "At the forefront, we were looking at the secret store, so we didn't have to manage that all ourselves, the config maps, the methodology of seamless deployment strategy," he says. "We found that how Kubernetes had done their resources, their types, their labels and just their interface was so much further advanced than the other things we had seen. It was a feature fit."</p>
{{< case-studies/lead >}}
Plus, MacKay says, "I just believed in the confidence that comes with the history that Google has with containerization. So we started out right on the leading edge of it. And we haven't looked back since."
{{< /case-studies/lead >}}
<p>Which is not to say that adopting a new technology hasn't come with some challenges. "Change is hard," says MacKay. "Not because the technology is hard or that the technology is not good. It's just that people like to do things like they had done [before]. You have the early adopters and you have those who are coming in later. It was a learning experience on both sides."</p>
<p>Figuring out the best deployment operations for Ancestry was a big part of the work it took to adopt cloud native infrastructure. "We want to make sure the process is easy and also controlled in the manner that allows us the highest degree of security that we demand and our customers demand," says MacKay. "With Kubernetes and other products, there are some good solutions, but a little bit of glue is needed to bring it into corporate processes and governances. It's like having a set of gloves that are generic, but when you really do want to grab something you have to make it so it's customized to you. That's what we had to do."</p>
<p>Their best practices include allowing their developers to deploy into development stage and production, but then controlling the aspects that need governance and auditing, such as secrets. They found that having one namespace per service is useful for achieving that containment of secrets and config maps. And for their needs, having one container per pod makes it easier to manage and to have a smaller unit of deployment.
</p>
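<p>The case study doesn't include manifests, but the practices described above, one namespace per service to contain its Secrets and ConfigMaps, and one container per pod, can be sketched roughly as follows; all names and values are illustrative:</p>

```yaml
# Hypothetical "one namespace per service" layout: the hint service gets
# its own namespace, so its Secret and ConfigMap are visible only to the
# workloads deployed there; each pod runs a single container.
apiVersion: v1
kind: Namespace
metadata:
  name: hints                          # illustrative service namespace
---
apiVersion: v1
kind: Secret
metadata:
  name: hints-db-credentials           # illustrative
  namespace: hints
type: Opaque
stringData:
  password: "replace-me"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hints
  namespace: hints
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hints
  template:
    metadata:
      labels:
        app: hints
    spec:
      containers:                      # exactly one container per pod
        - name: hints
          image: registry.example.com/hints:1.0.0   # illustrative image
          envFrom:
            - secretRef:
                name: hints-db-credentials
```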
{{< case-studies/quote image="/images/case-studies/ancestry/banner4.jpg" >}}
"The success of Ancestry's first deployment of the hint system on Kubernetes helped create momentum for greater adoption of the technology."
{{< /case-studies/quote >}}
<p>With that process established, the time spent on deployment was cut down to under a minute for some services. "As programmers, we have what's called REPL: read, evaluate, print, and loop, but with Kubernetes, we have CDEL: compile, deploy, execute, and loop," says MacKay. "It's a very quick loop back and a great benefit to understand that when our services are deployed in production, they're the same as what we tested in the pre-production environments. The approach of cloud native for Ancestry provides us a better ability to scale and to accommodate the business needs as work loads occur."</p>
<p>The success of Ancestry's first deployment of the hint system on Kubernetes helped create momentum for greater adoption of the technology. "Engineers like to code, they like to do features, they don't like to sit around waiting for things to be deployed and worrying about scaling up and out and down," says MacKay. "After a while the engineers became our champions. At training sessions, the development teams were always the ones saying, 'Kubernetes saved our time tremendously; it's an enabler; it really is incredible.' Over time, we were able to convince our management that this was a transition that the industry is making and that we needed to be a part of it."</p>
<p>A year later, Ancestry has transitioned a good number of applications to Kubernetes. "We have many different services that make up the rich environment that [the website] has from both the DNA side and the family history side," says MacKay. "We have front-end stacks, back-end stacks and back-end processing type stacks that are in the cluster."</p>
<p>The company continues to weigh which services it will move forward to Kubernetes, which ones will be kept as is, and which will be replaced in the future and thus don't have to be moved over. MacKay estimates that the company is "approaching halfway on those features that are going forward. We don't have to do a lot of convincing anymore. It's more of an issue of timing with getting product management and engineering staff the knowledge and information that they need."</p>
{{< case-studies/quote >}}
"... 'I believe in Kubernetes. I believe in containerization. I think if we can get there and establish ourselves in that world, we will be further along and far better off being agile and all the things we talk about, and it'll&nbsp;go&nbsp;forward.'"
{{< /case-studies/quote >}}
<p>Looking ahead, MacKay sees Ancestry maximizing the benefits of Kubernetes in 2017. "We're very close to having everything that should be or could be in a Linux-friendly world in Kubernetes by the end of the year," he says, adding that he's looking forward to features such as federation and horizontal pod autoscaling that are currently in the works. "Kubernetes has been very wonderful for us and we continue to ride the wave."</p>
<p>That wave, he points out, has everything to do with the vibrant Kubernetes community, which has grown by leaps and bounds since Ancestry joined it as an early adopter. "This is just a very rough way of judging it, but on Slack in June 2015, there were maybe 500 on there," MacKay says. "The last time I looked there were maybe 8,500 just on the Slack channel. There are so many major companies and different kinds of companies involved now. It's the variety of contributors, the number of contributors, the incredibly competent and friendly community."</p>
<p>As much as he and his team at Ancestry have benefited from what he calls "the goodness and the technical abilities of many" in the community, they've also contributed information about best practices, logged bug issues and participated in the open source conversation. And they've been active in attending <a href="https://www.meetup.com/Utah-Kubernetes-Meetup/">meetups</a> to help educate and give back to the local tech community in Utah. Says MacKay: "We're trying to give back as far as our experience goes, rather than just code."</p>
<p>When he meets with companies considering adopting cloud native infrastructure, the best advice he has to give from Ancestry's Kubernetes journey is this: "Start small, but with hard problems," he says. And "you need a patron who understands the vision of containerization, to help you tackle the political as well as other technical roadblocks that can occur when change is needed."</p>
<p>With the changes that MacKay's team has led over the past year and a half, cloud native will be part of Ancestry's technological genealogy for years to come. MacKay has been such a champion of the technology that he says people have jokingly accused him of having a Kubernetes tattoo.</p>
<p>"I really don't," he says with a laugh. "But I'm passionate. I'm not exclusive to any technology; I use whatever I need that's out there that makes us great. If it's something else, I'll use it. But right now I believe in Kubernetes. I believe in containerization. I think if we can get there and establish ourselves in that world, we will be further along and far better off being agile and all the things we talk about, and it'll go forward."</p>
<p>He pauses. "So, yeah, I guess you can say I'm an evangelist for Kubernetes," he says. "But I'm not getting a tattoo!"</p>

View File

@ -0,0 +1,82 @@
---
title: Ant Financial Case Study
linkTitle: ant-financial
case_study_styles: true
cid: caseStudies
featured: false
new_case_study_styles: true
heading_background: /images/case-studies/antfinancial/banner1.jpg
heading_title_logo: /images/antfinancial_logo.png
subheading: >
Ant Financial's Hypergrowth Strategy Using Kubernetes
case_study_details:
- Company: Ant Financial
- Location: Hangzhou, China
- Industry: Financial Services
---
<h2>Challenge</h2>
<p>Officially founded in October 2014, <a href="https://www.antfin.com/index.htm?locale=en_us">Ant Financial</a> originated from <a href="https://global.alipay.com/">Alipay</a>, the world's largest online payment platform that launched in 2004. The company also offers numerous other services leveraging technology innovation. With the volume of transactions Alipay handles for its 900+ million users worldwide (through its local and global partners)—256,000 transactions per second at the peak of Double 11 Singles Day 2017, and total gross merchandise value of $31 billion for Singles Day 2018—not to mention that of its other services, Ant Financial faces "data processing challenge in a whole new way," says Haojie Hang, who is responsible for Product Management for the Storage and Compute Group. "We see three major problems of operating at that scale: how to provide real-time compute, storage, and processing capability, for instance to make real-time recommendations for fraud detection; how to provide intelligence on top of this data, because there's too much data and then we're not getting enough insight; and how to apply security in the application level, in the middleware level, the system level, even the chip level." In order to provide reliable and consistent services to its customers, Ant Financial embraced containers in early 2014, and soon needed an orchestration solution for the tens-of-thousands-of-node clusters in its data centers.</p>
<h2>Solution</h2>
<p>After investigating several technologies, the team chose <a href="https://kubernetes.io/">Kubernetes</a> for orchestration, as well as a number of other CNCF projects, including <a href="https://prometheus.io/">Prometheus</a>, <a href="https://opentracing.io/">OpenTracing</a>, <a href="https://coreos.com/etcd/">etcd</a> and <a href="https://coredns.io/">CoreDNS</a>. "In late 2016, we decided that Kubernetes will be the de facto standard," says Hang. "Looking back, we made the right bet on the right technology. But then we needed to move the production workload from the legacy infrastructure to the latest Kubernetes-enabled platform, and that took some time, because we are very careful in terms of reliability and consistency." All core financial systems were containerized by November 2017, and the migration to Kubernetes is ongoing.</p>
<h2>Impact</h2>
<p>"We've seen at least tenfold in improvement in terms of the operations with cloud native technology, which means you can have tenfold increase in terms of output," says Hang. Ant also provides its fully integrated financial cloud platform to business partners around the world, and hopes to power the next generation of digital banking with deep experience in service innovation and technology expertise. Hang says the team hasn't begun to focus on optimizing the Kubernetes platform, either: "Because we're still in the hyper growth stage, we're not in a mode where we do cost saving yet."</p>
{{< case-studies/quote author="HAOJIE HANG, PRODUCT MANAGEMENT, ANT FINANCIAL" >}}
"In late 2016, we decided that Kubernetes will be the de facto standard. Looking back, we made the right bet on the right technology."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
A spinoff of the multinational conglomerate Alibaba, Ant Financial boasts a $150+ billion valuation and the scale to match. The fintech startup, launched in 2014, is comprised of Alipay, the world's largest online payment platform, and numerous other services leveraging technology innovation.
{{< /case-studies/lead >}}
<p>And the volume of transactions that Alipay handles for over 900 million users worldwide (through its local and global partners) is staggering: 256,000 per second at the peak of Double 11 Singles Day 2017, and total gross merchandise value of $31 billion for Singles Day 2018. With the mission of "bringing the world equal opportunities," Ant Financial is dedicated to creating an open, shared credit system and financial services platform through technology innovations.</p>
<p>Combine that with the operations of its other properties—such as the Huabei online credit system, Jiebei lending service, and the 350-million-user <a href="https://en.wikipedia.org/wiki/Ant_Forest">Ant Forest</a> green energy mobile app—and Ant Financial faces "data processing challenge in a whole new way," says Haojie Hang, who is responsible for Product Management for the Storage and Compute Group. "We see three major problems of operating at that scale: how to provide real-time compute, storage, and processing capability, for instance to make real-time recommendations for fraud detection; how to provide intelligence on top of this data, because there's too much data and we're not getting enough insight; and how to apply security in the application level, in the middleware level, the system level, even the chip level."</p>
<p>To address those challenges and provide reliable and consistent services to its customers, Ant Financial embraced <a href="https://www.docker.com/">Docker</a> containerization in 2014. But they soon realized that they needed an orchestration solution for some tens-of-thousands-of-node clusters in the company's data centers.</p>
{{< case-studies/quote
image="/images/case-studies/antfinancial/banner3.jpg"
author="RANGER YU, GLOBAL TECHNOLOGY PARTNERSHIP & DEVELOPMENT, ANT FINANCIAL"
>}}
"On Double 11 this year, we had plenty of nodes on Kubernetes, but compared to the whole scale of our infrastructure, this is still in progress."
{{< /case-studies/quote >}}
<p>The team investigated several technologies, including Docker Swarm and Mesos. "We did a lot of POCs, but we're very careful in terms of production systems, because we want to make sure we don't lose any data," says Hang. "You cannot afford to have a service downtime for one minute; even one second has a very, very big impact. We operate every day under pressure to provide reliable and consistent services to consumers and businesses in China and globally."</p>
<p>Ultimately, Hang says Ant chose Kubernetes because it checked all the boxes: a strong community, technology that "will be relevant in the next three to five years," and a good match for the company's engineering talent. "In late 2016, we decided that Kubernetes will be the de facto standard," says Hang. "Looking back, we made the right bet on the right technology. But then we needed to move the production workload from the legacy infrastructure to the latest Kubernetes-enabled platform. We spent a lot of time learning and then training our people to build applications on Kubernetes well."</p>
<p>All core financial systems were containerized by November 2017, and the migration to Kubernetes is ongoing. Ant's platform also leverages a number of other CNCF projects, including <a href="https://prometheus.io/">Prometheus</a>, <a href="https://opentracing.io/">OpenTracing</a>, <a href="https://coreos.com/etcd/">etcd</a> and <a href="https://coredns.io/">CoreDNS</a>. "On Double 11 this year, we had plenty of nodes on Kubernetes, but compared to the whole scale of our infrastructure, this is still in progress," says Ranger Yu, Global Technology Partnership & Development.</p>
{{< case-studies/quote
image="/images/case-studies/antfinancial/banner4.jpg"
author="HAOJIE HANG, PRODUCT MANAGEMENT, ANT FINANCIAL"
>}}
"We're very grateful for CNCF and this amazing technology, which we need as we continue to scale globally. We're definitely embracing the community and open source more in the future."
{{< /case-studies/quote >}}
<p>Still, there has already been an impact. "Cloud native technology has benefited us greatly in terms of efficiency," says Hang. "In general, we want to make sure our infrastructure is nimble and flexible enough for the work that could happen tomorrow. That's the goal. And with cloud native technology, we've seen at least tenfold improvement in operations, which means you can have tenfold increase in terms of output. Let's say you are operating 10 nodes with one person. With cloud native, tomorrow you can have 100 nodes."</p>
<p>Ant also provides its financial cloud platform to partners around the world, and hopes to power the next generation of digital banking with deep experience in service innovation and technology expertise. Hang says the team hasn't begun to focus on optimizing the Kubernetes platform, either: "Because we're still in the hyper growth stage, we're not in a mode where we do cost-saving yet."</p>
<p>The CNCF community has also been a valuable asset during Ant Financial's move to cloud native. "If you are applying a new technology, it's very good to have a community to discuss technical problems with other users," says Hang. "We're very grateful for CNCF and this amazing technology, which we need as we continue to scale globally. We're definitely embracing the community and open sourcing more in the future."</p>
{{< case-studies/quote
image="/images/case-studies/antfinancial/banner4.jpg"
author="RANGER YU, GLOBAL TECHNOLOGY PARTNERSHIP & DEVELOPMENT, ANT FINANCIAL"
>}}
"In China, we are the North Star in terms of innovation in financial and other related services," says Hang. "We definitely want to make sure we're still leading in the next 5 to 10 years with our investment in technology."
{{< /case-studies/quote >}}
<p>In fact, the company has already started to open source some of its <a href="https://github.com/alipay">cloud native middleware</a>. "We are going to be very proactive about that," says Yu. "CNCF provided a platform so everyone can plug in or contribute components. This is very good open source governance."</p>
<p>Looking ahead, the Ant team will continue to evaluate many other CNCF projects. Building a service mesh community in China, the team has brought together many China-based companies and developers to discuss the potential of that technology. "Service mesh is very attractive for Chinese developers and end users because we have a lot of legacy systems running now, and it's an ideal mid-layer to glue everything together, both new and legacy," says Hang. "For new technologies, we look very closely at whether they will last."</p>
<p>At Ant, Kubernetes passed that test with flying colors, and the team hopes other companies will follow suit. "In China, we are the North Star in terms of innovation in financial and other related services," says Hang. "We definitely want to make sure we're still leading in the next 5 to 10 years with our investment in technology."</p>

View File

@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#57565a;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.37342" y="-3.34411" width="223.25536" height="134.51136"/><path class="cls-2" d="M81.14242,56.12567,77.99668,64.0731h6.29149l-3.14575-7.94743Zm7.0534,17.78238-2.58263-6.49093h-8.9411L74.089,73.90805h-3.775l9.17231-23.114h3.31256l9.17318,23.114Z"/><path class="cls-2" d="M102.9876,62.3189a2.97367,2.97367,0,0,0-2.74772-1.35853A2.91125,2.91125,0,0,0,97.4913,62.3189a3.7299,3.7299,0,0,0-.43087,1.48937v4.23808A3.73648,3.73648,0,0,0,97.4913,69.537a3.40988,3.40988,0,0,0,5.4963,0,7.0611,7.0611,0,0,0,.66165-3.60929,7.3468,7.3468,0,0,0-.66165-3.60884Zm2.94694,8.8744a6.31949,6.31949,0,0,1-5.69466,3.07919,6.89537,6.89537,0,0,1-3.17945-.72845v7.64965H93.68261v-23.379h3.37782v.49738a6.8926,6.8926,0,0,1,3.17945-.72842,6.31935,6.31935,0,0,1,5.69466,3.079,10.006,10.006,0,0,1,1.09252,5.29783,10.16391,10.16391,0,0,1-1.09252,5.23285Z"/><path class="cls-2" d="M118.79367,62.3189a2.97667,2.97667,0,0,0-2.74944-1.35853,2.9108,2.9108,0,0,0-2.74859,1.35853,3.75853,3.75853,0,0,0-.43,1.48937v4.23808a3.76517,3.76517,0,0,0,.43,1.49068,3.41159,3.41159,0,0,0,5.498,0,7.07027,7.07027,0,0,0,.66078-3.60929,7.35645,7.35645,0,0,0-.66078-3.60884Zm2.94608,8.8744a6.32,6.32,0,0,1-5.69552,3.07919,6.88941,6.88941,0,0,1-3.1786-.72845v7.64965h-3.3778v-23.379h3.3778v.49738a6.88664,6.88664,0,0,1,3.1786-.72842,6.31981,6.31981,0,0,1,5.69552,3.079,10.00267,10.00267,0,0,1,1.093,5.29783,10.16449,10.16449,0,0,1-1.093,5.23285Z"/><path class="cls-2" d="M132.96442,54.20565h-3.80825v16.259h3.80825c6.58968,0,6.35717-2.25263,6.35717-8.113,0-5.8944.23251-8.14595-6.35717-8.14595Zm9.60232,13.179c-.828,5.53-4.6687,6.52343-9.60232,6.52343h-7.31875v-23.114h7.31875c4.93362,0,8.7743.96057,9.60232,6.52326a36.3302,36.3302,0,0,1,.26577,5.03426,36.58042,36.58042,0,0,1-.26577,5.033Z"/><polygon class="cls-2" points="146.109 73.908 146.109 57.55 149.52 57.55 149.52 73.908 146.109 73.908 146.109 73.908"/><polygon class="cls-2" points="146.109 54.565 146.109 50.638 149.52 50.638 149.52 54.565 146.109 54.565 146.109 54.565"/><path class="cls-2" d="M161.67111,61.45749a3.97853,3.97853,0,0,0-2.185-.66277c-1.35871,0-3.08048.79467-3.08048,2.352V73.90805h-3.30952V57.71553h3.30952v.49609a7.36373,7.36373,0,0,1,3.08048-.662,8.37517,8.37517,0,0,1,3.676.86135l-1.491,3.04647Z"/><path class="cls-2" d="M173.35993,62.25243a3.00221,3.00221,0,0,0-2.71661-1.39145,3.30382,3.30382,0,0,0-3.609,2.98125h6.72194a4.32873,4.32873,0,0,0-.3963-1.5898ZM166.934,67.11957a5.0516,5.0516,0,0,0,.66295,2.352,3.97682,3.97682,0,0,0,3.345,1.48937,4.29333,4.29333,0,0,0,3.44307-1.55606l2.71705,2.01943a7.84563,7.84563,0,0,1-6.12728,2.78251,7.20688,7.20688,0,0,1-6.22535-2.94833,9.72791,9.72791,0,0,1-1.2589-5.397,9.71813,9.71813,0,0,1,1.2589-5.39724,6.77573,6.77573,0,0,1,5.92674-2.88061,6.353,6.353,0,0,1,5.5313,2.91333c1.19191,1.78819,1.02553,4.53689.99181,6.62261Z"/><path class="cls-2" d="M186.53109,74.17309c-4.70242,0-7.48386-3.54363-7.48386-8.411,0-4.90111,2.78144-8.44475,7.48386-8.44475,2.31728,0,3.84152.62923,6.2258,3.17962l-2.48411,2.18509c-1.78787-1.92-2.51607-2.18509-3.74169-2.18509a3.67433,3.67433,0,0,0-3.14532,1.49064,6.08445,6.08445,0,0,0-.86174,3.77449,6.04264,6.04264,0,0,0,.86174,3.74221,3.59542,3.59542,0,0,0,3.14532,1.48939c1.22562,0,1.95382-.26505,3.74169-2.15137l2.48411,2.18534c-2.38428,2.51638-3.90852,3.14543-6.2258,3.14543Z"/><path class="cls-2" 
d="M194.70206,57.54967V53.72892l3.24471-.55054v4.37129h2.88082v3.04669h-2.88082V69.6035a2.65978,2.65978,0,0,0,.03327.7282.3886.3886,0,0,0,.3963.23233h2.0878v3.344h-2.0878a3.50427,3.50427,0,0,1-3.07832-1.655,4.8599,4.8599,0,0,1-.596-2.64952V57.54967Z"/><polygon class="cls-2" points="16.87 77.995 30.665 71.993 37.34 52.213 30.585 41.486 16.87 77.995 16.87 77.995"/><polygon class="cls-2" points="30.277 38.06 47.2 64.828 57.151 60.51 41.284 38.06 30.277 38.06 30.277 38.06"/><polygon class="cls-2" points="64.356 59.132 14.131 80.907 24.836 88.213 58.498 73.908 64.356 59.132 64.356 59.132"/></svg>

View File

@ -0,0 +1,85 @@
---
title: AppDirect Case Study
linkTitle: AppDirect
case_study_styles: true
cid: caseStudies
logo: appdirect_featured_logo.png
featured: true
weight: 4
quote: >
We made the right decisions at the right time. Kubernetes and the cloud native technologies are now seen as the de facto ecosystem.
new_case_study_styles: true
heading_background: /images/case-studies/appdirect/banner1.jpg
heading_title_logo: /images/appdirect_logo.png
subheading: >
AppDirect: How AppDirect Supported the 10x Growth of Its Engineering Staff with Kubernetes
case_study_details:
- Company: AppDirect
- Location: San Francisco, California
- Industry: Software
---
<h2>Challenge</h2>
<p><a href="https://www.appdirect.com/">AppDirect</a> provides an end-to-end commerce platform for cloud-based products and services. When Director of Software Development Pierre-Alexandre Lacerte began working there in 2014, the company had a monolith application deployed on a "tomcat infrastructure, and the whole release process was complex for what it should be," he says. "There were a lot of manual steps involved, with one engineer building a feature, then another team picking up the change. So you had bottlenecks in the pipeline to ship a feature to production." At the same time, the engineering team was growing, and the company realized it needed a better infrastructure to both support that growth and increase velocity.</p>
<h2>Solution</h2>
<p>"My idea was: Let's create an environment where teams can deploy their services faster, and they will say, 'Okay, I don't want to build in the monolith anymore. I want to build a service,'" says Lacerte. They considered and prototyped several different technologies before deciding to adopt <a href="https://kubernetes.io/">Kubernetes</a> in early 2016. Lacerte's team has also integrated <a href="https://prometheus.io/">Prometheus</a> monitoring into the platform; tracing is next. Today, AppDirect has more than 50 microservices in production and 15 Kubernetes clusters deployed on <a href="https://aws.amazon.com/">AWS</a> and on premise around the world.</p>
<h2>Impact</h2>
<p>The Kubernetes platform has helped support the engineering team's 10x growth over the past few years. Coupled with the fact that they were continually adding new features, Lacerte says, "I think our velocity would have slowed down a lot if we didn't have this new infrastructure." Moving to Kubernetes and services has meant that deployments have become much faster due to less dependency on custom-made, brittle shell scripts with SCP commands. Time to deploy a new version has shrunk from 4 hours to a few minutes. Additionally, the company invested a lot of effort to make things self-service for developers. "Onboarding a new service doesn't require <a href="https://www.atlassian.com/software/jira">Jira</a> tickets or meeting with three different teams," says Lacerte. Today, the company sees 1,600 deployments per week, compared to 1-30 before. The company also achieved cost savings by moving its marketplace and billing monoliths to Kubernetes from legacy EC2 hosts as well as by leveraging autoscaling, as traffic is higher during business hours.</p>
{{< case-studies/quote author="Alexandre Gervais, Staff Software Developer, AppDirect" >}}
"It was an immense engineering culture shift, but the benefits are undeniable in terms of scale and speed."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
With its end-to-end commerce platform for cloud-based products and services, <a href="https://www.appdirect.com/">AppDirect</a> has been helping organizations such as Comcast and GoDaddy simplify the digital supply chain since 2009.
{{< /case-studies/lead >}}
<p>When Director of Software Development Pierre-Alexandre Lacerte started working there in 2014, the company had a monolith application deployed on a "tomcat infrastructure, and the whole release process was complex for what it should be," he says. "There were a lot of manual steps involved, with one engineer building a feature then creating a pull request, and a QA or another engineer validating the feature. Then it gets merged and someone else will take care of the deployment. So we had bottlenecks in the pipeline to ship a feature to production."</p>
<p>At the same time, the engineering team of 40 was growing, and the company wanted to add an increasing number of features to its products. As a member of the platform team, Lacerte began hearing from multiple teams that wanted to deploy applications using different frameworks and languages, from <a href="https://nodejs.org/">Node.js</a> to <a href="http://spring.io/projects/spring-boot">Spring Boot Java</a>. He soon realized that in order to both support growth and increase velocity, the company needed a better infrastructure, and a system in which teams are autonomous, can do their own deploys, and be responsible for their services in production.</p>
{{< case-studies/quote
image="/images/case-studies/appdirect/banner3.jpg"
author="Alexandre Gervais, Staff Software Developer, AppDirect"
>}}
"We made the right decisions at the right time. Kubernetes and the cloud native technologies are now seen as the de facto ecosystem. We know where to focus our efforts in order to tackle the new wave of challenges we face as we scale out. The community is so active and vibrant, which is a great complement to our awesome internal team."
{{< /case-studies/quote >}}
<p>From the beginning, Lacerte says, "My idea was: Let's create an environment where teams can deploy their services faster, and they will say, 'Okay, I don't want to build in the monolith anymore. I want to build a service.'" (Lacerte left the company in 2019.)</p>
<p>Working with the operations team, Lacerte's group got more control and access to the company's <a href="https://aws.amazon.com/">AWS infrastructure</a>, and started prototyping several orchestration technologies. "Back then, Kubernetes was a little underground, unknown," he says. "But we looked at the community, the number of pull requests, the velocity on GitHub, and we saw it was getting traction. And we found that it was much easier for us to manage than the other technologies."</p>
<p>They spun up the first few services on Kubernetes using <a href="https://www.chef.io/">Chef</a> and <a href="https://www.terraform.io/">Terraform</a> provisioning, and as more services were added, more automation was, too. "We have clusters around the world—in Korea, in Australia, in Germany, and in the U.S.," says Lacerte. "Automation is critical for us." They're now largely using <a href="https://github.com/kubernetes/kops">Kops</a>, and are looking at managed Kubernetes offerings from several cloud providers.</p>
<p>Today, though the monolith still exists, there are fewer and fewer commits and features. All teams are deploying on the new infrastructure, and services are the norm. AppDirect now has more than 50 microservices in production and 15 Kubernetes clusters deployed on AWS and on premise around the world.</p>
<p>Lacerte's strategy ultimately worked because of the very real impact the Kubernetes platform has had to deployment time. Due to less dependency on custom-made, brittle shell scripts with SCP commands, time to deploy a new version has shrunk from 4 hours to a few minutes. Additionally, the company invested a lot of effort to make things self-service for developers. "Onboarding a new service doesn't require <a href="https://www.atlassian.com/software/jira">Jira</a> tickets or meeting with three different teams," says Lacerte. Today, the company sees 1,600 deployments per week, compared to 1-30 before.</p>
{{< case-studies/quote
image="/images/case-studies/appdirect/banner4.jpg"
author="Pierre-Alexandre Lacerte, Director of Software Development, AppDirect"
>}}
"I think our velocity would have slowed down a lot if we didn't have this new infrastructure."
{{< /case-studies/quote >}}
<p>Additionally, the Kubernetes platform has helped support the engineering team's 10x growth over the past few years. "Ownership, a core value of AppDirect, reflects in our ability to ship services independently of our monolith code base," says Staff Software Developer Alexandre Gervais, who worked with Lacerte on the initiative. "Small teams now own critical parts of our business domain model, and they operate in their decoupled domain of expertise, with limited knowledge of the entire codebase. This reduces and isolates some of the complexity." Coupled with the fact that they were continually adding new features, Lacerte says, "I think our velocity would have slowed down a lot if we didn't have this new infrastructure."</p>
<p>The company also achieved cost savings by moving its marketplace and billing monoliths to Kubernetes from legacy EC2 hosts as well as by leveraging autoscaling, as traffic is higher during business hours.</p>
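<p>A minimal HorizontalPodAutoscaler sketch shows the kind of autoscaling described here, scaling a service up during business-hours peaks and back down afterward; the workload name, replica bounds, and threshold are assumptions, not AppDirect's real values.</p>

```yaml
# Minimal sketch: scale a Deployment on CPU utilization.
# The target name, replica bounds, and threshold are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: billing-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: billing-api
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```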
<p>AppDirect's cloud native stack also includes <a href="https://grpc.io/">gRPC</a> and <a href="https://www.fluentd.org/">Fluentd</a>, and the team is currently working on setting up <a href="https://opencensus.io/">OpenCensus</a>. The platform already has <a href="https://prometheus.io/">Prometheus</a> integrated, so "when teams deploy their service, they have their notifications, alerts and configurations," says Lacerte. "For example, in the test environment, I want to get a message on <a href="https://slack.com/">Slack</a>, and in production, I want a Slack message and I also want to get paged. We have integration with PagerDuty. Teams have more ownership on their services."</p>
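<p>The per-environment alerting Lacerte describes maps naturally onto an Alertmanager routing tree. The fragment below is only a sketch of that pattern; the receiver names, Slack channel, environment label, and PagerDuty key are assumptions.</p>

```yaml
# Sketch of an Alertmanager config fragment: Slack-only in test,
# Slack plus PagerDuty paging in production. All names are placeholders.
route:
  receiver: slack-test
  routes:
    - match:
        environment: production
      receiver: slack-and-pagerduty
receivers:
  - name: slack-test
    slack_configs:
      - channel: '#service-alerts'
  - name: slack-and-pagerduty
    slack_configs:
      - channel: '#service-alerts'
    pagerduty_configs:
      - service_key: '<pagerduty-integration-key>'
```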
{{< case-studies/quote author="Pierre-Alexandre Lacerte, Director of Software Development, AppDirect" >}}
"We moved from a culture limited to 'pushing code in a branch' to exciting new responsibilities outside of the code base: deployment of features and configurations; monitoring of application and business metrics; and on-call support in case of outages. It was an immense engineering culture shift, but the benefits are undeniable in terms of scale and speed."
{{< /case-studies/quote >}}
<p>That of course also means more responsibility. "We asked engineers to expand their horizons," says Gervais. "We moved from a culture limited to 'pushing code in a branch' to exciting new responsibilities outside of the code base: deployment of features and configurations; monitoring of application and business metrics; and on-call support in case of outages. It was an immense engineering culture shift, but the benefits are undeniable in terms of scale and speed."</p>
<p>As the engineering ranks continue to grow, the platform team has a new challenge: making sure that the Kubernetes platform is accessible and easily utilized by everyone. "How can we make sure that when we add more people to our team, they are efficient, productive, and know how to ramp up on the platform?" Lacerte says. "So we have the evangelists, the documentation, some project examples. We do demos, we have AMA sessions. We're trying different strategies to get everyone's attention."</p>
<p>Three and a half years into their Kubernetes journey, Gervais feels AppDirect "made the right decisions at the right time," he says. "Kubernetes and the cloud native technologies are now seen as the de facto ecosystem. We know where to focus our efforts in order to tackle the new wave of challenges we face as we scale out. The community is so active and vibrant, which is a great complement to our awesome internal team. Going forward, our focus will really be geared towards benefiting from the ecosystem by providing added business value in our day-to-day operations."</p>


View File

@ -0,0 +1,84 @@
---
title: Babylon Case Study
linkTitle: Babylon
case_study_styles: true
cid: caseStudies
logo: babylon_featured_logo.svg
featured: true
weight: 1
quote: >
Kubernetes is a great platform for machine learning because it comes with all the scheduling and scalability that you need.
new_case_study_styles: true
heading_background: /images/case-studies/babylon/banner4.jpg
heading_title_text: Babylon
use_gradient_overlay: true
subheading: >
How Cloud Native Is Enabling Babylon's Medical AI Innovations
case_study_details:
- Company: Babylon
- Location: United Kingdom
- Industry: AI, Healthcare
---
<h2>Challenge</h2>
<p>A large number of Babylon's products leverage machine learning and artificial intelligence, and in 2019, there wasn't enough computing power in-house to run a particular experiment. The company was also growing (from 100 to 1,600 employees in three years) and planning expansion into other countries.</p>
<h2>Solution</h2>
<p>Babylon had migrated its user-facing applications to a Kubernetes platform in 2018, so the infrastructure team turned to Kubeflow, a toolkit for machine learning on Kubernetes. "We tried to create a Kubernetes core server, we deployed Kubeflow, and we orchestrated the whole experiment, which ended up being a really good success," says AI Infrastructure Lead Jérémie Vallée. The team began building a self-service AI training platform on top of Kubernetes.</p>
<h2>Impact</h2>
<p>Instead of waiting hours or days to be able to compute, teams can get access instantaneously. Clinical validations used to take 10 hours; now they are done in under 20 minutes. The portability of the cloud native platform has also enabled Babylon to expand into other countries.</p>
{{< case-studies/quote
image="/images/case-studies/babylon/banner1.jpg"
author="JÉRÉMIE VALLÉE, AI INFRASTRUCTURE LEAD AT BABYLON"
>}}
"Kubernetes is a great platform for machine learning because it comes with all the scheduling and scalability that you need."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
Babylon's mission is to put accessible and affordable healthcare services in the hands of every person on earth.
{{< /case-studies/lead >}}
<p>Since its launch in the U.K. in 2013, the startup has facilitated millions of digital consultations around the world. In the U.K., patients were typically waiting a week or two for a doctor's appointment. Through Babylon's NHS service, GP at Hand—which has more than 75,000 registered patients—39% get an appointment through their phone within 30 minutes, and 89% within 6 hours.</p>
<p>That's just the start. "We try to combine different types of technology with the medical expertise that we have in-house to build products that will help patients manage and understand their health, and also help doctors be more efficient at what they do," says Jérémie Vallée, AI Infrastructure Lead at Babylon. </p>
<p>A large number of these products leverage machine learning and artificial intelligence, and in 2019, researchers hit a pain point. "We have some servers in-house where our researchers were doing a lot of AI experiments and some training of models, and we came to a point where we didn't have enough compute in-house to run a particular experiment," says Vallée. </p>
<p>Babylon had migrated its user-facing applications to a Kubernetes platform in 2018, "and we had a lot of Kubernetes knowledge thanks to the migration," he adds. To optimize some of the models that had been created, the team turned to Kubeflow, a toolkit for machine learning on Kubernetes. "We tried to create a Kubernetes core server, we deployed Kubeflow, and we orchestrated the whole experiment, which ended up being a really good success," he says.</p>
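<p>As a rough sketch of what a training job submitted to such a platform can look like, the manifest below uses Kubeflow's TFJob resource; the namespace, image, replica count, and resource limits are hypothetical, and Babylon's actual experiments may have used different Kubeflow components.</p>

```yaml
# Hypothetical Kubeflow TFJob for a distributed training run.
# Namespace, image, replicas, and resources are placeholders.
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: model-training
  namespace: ai-research
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 4
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: tensorflow   # TFJob expects this container name
              image: registry.example.com/research/train:latest
              resources:
                limits:
                  cpu: "8"
                  memory: 32Gi
```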
<p>Based on that experience, Vallée's team was tasked with building a self-service platform to help Babylon's AI teams become more efficient, and by extension help get products to market faster. The main requirements: (1) the ability to give researchers and engineers access to the compute they needed, regardless of the size of the experiments they may need to run; (2) a way to provide teams with the best tools that they needed to do their work, on demand and in a centralized way; and (3) the training platform had to be close to the data that was being managed, because of the company's expansion into different countries.</p>
{{< case-studies/quote author="CAROLINE HARGROVE, CHIEF TECHNOLOGY OFFICER AT BABYLON" >}}
"Delivering a self-service platform where users are empowered to run their own workload has enabled our data scientist community to do hyper parameter tuning and general algorithm development without any cloud skill and without the help of platform engineers, thus accelerating our innovation."
{{< /case-studies/quote >}}
<p>Kubernetes was an enabler on every count. "Kubernetes is a great platform for machine learning because it comes with all the scheduling and scalability that you need," says Vallée. The need to keep data in every country in which Babylon operates requires a multi-region, multi-cloud strategy, and some countries might not even have a public cloud provider at all. "We wanted to make this platform portable so that we can run training jobs anywhere," he says. "Kubernetes offered a base layer that allows you to deploy the platform outside of the cloud provider, and then deploy whatever tooling you need. That was a very good selling point for us."</p>
<p>Once the team decided to build the Babylon AI Research platform on top of Kubernetes, they referred to the Cloud Native Landscape to build out the stack: Prometheus and Grafana for monitoring; an Istio service mesh to control the network on the training platform and control what access all of the workflows would have; Helm to deploy the stack; and Flux to manage the GitOps part of the pipeline.</p>
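<p>As a generic illustration of the GitOps piece of that stack, the sketch below uses current Flux custom resources to sync a cluster from a Git repository; Babylon's original pipeline predates these APIs, and the repository URL, path, and names are placeholders.</p>

```yaml
# Generic GitOps sketch with Flux CRDs: watch a Git repo and apply a path.
# Repository URL, path, and names are illustrative only.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: ai-platform
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/ai-platform-config
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: ai-platform
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: ai-platform
  path: ./clusters/uk
  prune: true
```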
<p>The cloud native AI platform has had a huge impact at Babylon. The first research projects run on the platform mostly involved machine learning and natural language processing. These experiments required a huge amount of compute—1600 CPU, 3.2 TB RAM—which was much more than Babylon had in-house. Plus, access to compute used to take hours, or sometimes even days, depending on how busy the platform team was. "Now, with Kubernetes and the self-service platform that we provide, it's pretty much instantaneous," says Vallée.</p>
<p>Another important type of work that's done on the platform is clinical validation for new applications such as Babylon's Symptom Checker, which calculates the probability of a disease given the evidence input by the user. "Being in healthcare, we want all of our models to be safe before they're going to hit production," says Vallée. Using Argo for GitOps "enabled us to scale the process massively."</p>
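<p>Clinical validations of this kind map well onto Argo's Workflow resource, which can fan a single submission out into many parallel runs. The manifest below is only a sketch; the workflow and template names, container image, and model versions are invented for illustration.</p>

```yaml
# Hypothetical Argo Workflow fanning out parallel validation runs.
# Names, image, and model versions are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: clinical-validation-
spec:
  entrypoint: validate-all
  parallelism: 10
  templates:
    - name: validate-all
      steps:
        - - name: validate
            template: run-validation
            arguments:
              parameters:
                - name: model-version
                  value: "{{item}}"
            withItems: ["v1.4.0", "v1.4.1", "v1.5.0"]
    - name: run-validation
      inputs:
        parameters:
          - name: model-version
      container:
        image: registry.example.com/research/validate:latest
        args: ["--model", "{{inputs.parameters.model-version}}"]
```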
{{< case-studies/quote
image="/images/case-studies/babylon/banner2.jpg"
author="JEAN MARIE FERDEGUE, DIRECTOR OF PLATFORM OPERATIONS AT BABYLON"
>}}
"Giving a Kubernetes-based platform to our data scientists has meant increased security, increased innovation through empowerment, and a more affordable health service as our cloud engineers are building an experience that is used by hundreds on a daily basis, rather than supporting specific bespoke use cases."
{{< /case-studies/quote >}}
<p>Researchers used to have to wait up to 10 hours to get results on new versions of their models. With Kubernetes, that time is now down to under 20 minutes. Plus, previously they could only run one clinical validation at a time, now they can run many parallel ones if they need to—a huge benefit considering that in the past three years, Babylon has grown from 100 to 1,600 employees.</p>
<p>"Delivering a self-service platform where users are empowered to run their own workload has enabled our data scientist community to do hyper parameter tuning and general algorithm development without any cloud skill and without the help of platform engineers, thus accelerating our innovation," says Chief Technology Officer Caroline Hargrove.</p>
<p>Adds Director of Platform Operations Jean Marie Ferdegue: "Giving a Kubernetes-based platform to our data scientists has meant increased security, increased innovation through empowerment, and a more affordable health service as our cloud engineers are building an experience that is used by hundreds on a daily basis, rather than supporting specific bespoke use cases."</p>
<p>Plus, as Babylon continues to expand, "it will be very easy to onboard new countries," says Vallée. "Fifteen months ago when we deployed this platform, we had one big environment in the U.K., but now we have one in Canada, we have one in Asia, and we have one coming in the U.S. This is one of the things that Kubernetes and the other cloud native projects have enabled for us."</p>
<p>Babylon's road map for cloud native involves onboarding all of the company's AI efforts to the platform. Increasingly, that includes AI services of care. "I think this is going to be an interesting field where AI and healthcare meet," Vallée says. "It's kind of a complex problem and there's a lot of issues around this. So with our platform, we want to say, 'What can we do to make this less painful for our developers and machine learning engineers?'"</p>


View File

@ -0,0 +1,85 @@
---
title: BlaBlaCar Case Study
case_study_styles: true
cid: caseStudies
new_case_study_styles: true
heading_background: /images/case-studies/blablacar/banner1.jpg
heading_title_logo: /images/blablacar_logo.png
subheading: >
Turning to Containerization to Support Millions of Rideshares
case_study_details:
- Company: BlaBlaCar
- Location: Paris, France
- Industry: Ridesharing Company
---
<h2>Challenge</h2>
<p>The world's largest long-distance carpooling community, <a href="https://www.blablacar.com/">BlaBlaCar</a>, connects 40 million members across 22 countries. The company has been experiencing exponential growth since 2012 and needed its infrastructure to keep up. "When you're thinking about doubling the number of servers, you start thinking, 'What should I do to be more efficient?'" says Simon Lallemand, Infrastructure Engineer at BlaBlaCar. "The answer is not to hire more and more people just to deal with the servers and installation." The team knew they had to scale the platform, but wanted to stay on their own bare metal servers.</p>
<h2>Solution</h2>
<p>Opting not to shift to cloud virtualization or use a private cloud on their own servers, the BlaBlaCar team became early adopters of containerization, using the CoreOS runtime <a href="https://coreos.com/rkt">rkt</a>, initially deployed using the <a href="https://coreos.com/fleet/docs/latest/launching-containers-fleet.html">fleet</a> cluster manager. Last year, the company switched to <a href="http://kubernetes.io/">Kubernetes</a> orchestration, and now also uses <a href="https://prometheus.io/">Prometheus</a> for monitoring.</p>
<h2>Impact</h2>
<p>"Before using containers, it would take sometimes a day, sometimes two, just to create a new service," says Lallemand. "With all the tooling that we made around the containers, copying a new service now is a matter of minutes. It's really a huge gain. We are better at capacity planning in our data center because we have fewer constraints due to this abstraction between the services and the hardware we run on. For the developers, it also means they can focus only on the features that they're developing, and not on the infrastructure."</p>
{{< case-studies/quote author="Simon Lallemand, Infrastructure Engineer at BlaBlaCar" >}}
"When you're switching to this cloud-native model and running everything in containers, you have to make sure that at any moment you can reboot without any downtime and without losing traffic. [With Kubernetes] our infrastructure is much more resilient and we have better availability than before."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
For the 40 million users of <a href="https://www.blablacar.com/">BlaBlaCar</a>, it's easy to find strangers headed in the same direction to share rides and costs. You can even choose how much "bla bla" chatter you want from a long-distance ride mate.
{{< /case-studies/lead >}}
<p>Behind the scenes, though, the infrastructure was falling woefully behind the rider community's exponential growth. Founded in 2006, the company hit its current stride around 2012. "Our infrastructure was very traditional," says Infrastructure Engineer Simon Lallemand, who began working at the company in 2014. "In the beginning, it was a bit chaotic because we had to [grow] fast. But then comes the time when you have to design things to make it manageable."</p>
<p>By 2015, the company had about 50 bare metal servers. The team was using a <a href="https://www.mysql.com/">MySQL</a> database and <a href="http://php.net/">PHP</a>, but, Lallemand says, "it was a very static way." They also utilized the configuration management system <a href="https://www.chef.io/chef/">Chef</a>, but had little automation in the process. "When you're thinking about doubling the number of servers, you start thinking, 'What should I do to be more efficient?'" says Lallemand. "The answer is not to hire more and more people just to deal with the servers and installation."</p>
<p>Instead, BlaBlaCar began its cloud-native journey but wasn't sure which route to take. "We could either decide to go into cloud virtualization or even use a private cloud on our own servers," says Lallemand. "But going into the cloud meant we had to make a lot of changes in our application work, and we were just not ready to make the switch from on premise to the cloud." They wanted to keep the great performance they got on bare metal, so they didn't want to go to virtualization on premise.</p>
<p>The solution: containerization. This was early 2015 and containers were still relatively new. "It was a bold move at the time," says Lallemand. "We decided that the next servers that we would buy in the new data center would all be the same model, so we could outsource the maintenance of the servers. And we decided to go with containers and with <a href="https://coreos.com/">CoreOS</a> Container Linux as an abstraction for this hardware. It seemed future-proof to go with containers because we could see what companies were already doing with containers."</p>
{{< case-studies/quote image="/images/case-studies/blablacar/banner3.jpg">}}
"With all the tooling that we made around the containers, copying a new service is a matter of minutes. It's a huge gain. For the developers, it means they can focus only on the features that they're developing and not on the infrastructure or the hour they would test their code, or the hour that it would get deployed."
{{< /case-studies/quote >}}
<p>Next, they needed to choose a runtime for the containers, but "there were very few deployments in production at that time," says Lallemand. They experimented with <a href="https://www.docker.com/">Docker</a> but decided to go with <a href="https://coreos.com/rkt">rkt</a>. Lallemand explains that for BlaBlaCar, it was "much simpler to integrate things that are on rkt." At the time, the project was still pre-v1.0, so "we could speak with the developers of rkt and give them feedback. It was an advantage." Plus, he notes, rkt was very stable, even at this early stage.</p>
<p>Once those decisions were made that summer, the company came up with a plan for implementation. First, they formed a task force to create a workflow that would be tested by three of the 10 members on Lallemand's team. But they took care to run regular workshops with all 10 members to make sure everyone was on board. "When you're focused on your product sometimes you forget if it's really user friendly, whether other people can manage to create containers too," Lallemand says. "So we did a lot of iterations to find a good workflow."</p>
<p>After establishing the workflow, Lallemand says with a smile that "we had this strange idea that we should try the most difficult thing first. Because if it works, it will work for everything." So the first project the team decided to containerize was the database. "Nobody did that at the time, and there were really no existing tools for what we wanted to do, including building container images," he says. So the team created their own tools, such as <a href="https://github.com/blablacar/dgr">dgr</a>, which builds container images so that the whole team has a common framework to build on the same images with the same standards. They also revamped the service-discovery tools <a href="https://github.com/airbnb/nerve">Nerve</a> and <a href="http://airbnb.io/projects/synapse/">Synapse</a>; their versions, <a href="https://github.com/blablacar/go-nerve">Go-Nerve</a> and <a href="https://github.com/blablacar/go-synapse">Go-Synapse</a>, were written in Go and built to be more efficient and include new features. All of these tools were open-sourced.</p>
<p>At the same time, the company was working to migrate its entire platform to containers with a deadline set for Christmas 2015. With all the work being done in parallel, BlaBlaCar was able to get about 80 percent of its production into containers by its deadline with live traffic running on containers during December. (It's now at 100 percent.) "It's a really busy time for traffic," says Lallemand. "We knew that by using those new servers with containers, it would help us handle the traffic."</p>
<p>In the middle of that peak season for carpooling, everything worked well. "The biggest impact that we had was for the deployment of new services," says Lallemand. "Before using containers, we had to first deploy a new server and create configurations with Chef. It would take sometimes a day, sometimes two, just to create a new service. And with all the tooling that we made around the containers, copying a new service is a matter of minutes. So it's really a huge gain. For the developers, it means they can focus only on the features that they're developing and not on the infrastructure or the hour they would test their code, or the hour that it would get deployed."</p>
{{< case-studies/quote image="/images/case-studies/blablacar/banner4.jpg" >}}
"We realized that there was a really strong community around it [Kubernetes], which meant we would not have to maintain a lot of tools of our own," says Lallemand. "It was better if we could contribute to some bigger project like Kubernetes."
{{< /case-studies/quote >}}
<p>In order to meet their self-imposed deadline, one of the decisions they made was to not do any "orchestration magic" for containers in the first production alignment. Instead, they used the basic <a href="https://coreos.com/fleet/docs/latest/launching-containers-fleet.html">fleet</a> tool from CoreOS to deploy their containers. (They did build a tool called <a href="https://github.com/blablacar/ggn">GGN</a>, which they've open-sourced, to make it more manageable for their system engineers to use.)</p>
<p>Still, the team knew that they'd want more orchestration. "Our tool was doing a pretty good job, but at some point you want to give more autonomy to the developer team," Lallemand says. "We also realized that we don't want to be the single point of contact for developers when they want to launch new services." By the summer of 2016, they found their answer in <a href="http://kubernetes.io/">Kubernetes</a>, which had just begun supporting rkt implementation.</p>
<p>After discussing their needs with their contacts at CoreOS and Google, they were convinced that Kubernetes would work for BlaBlaCar. "We realized that there was a really strong community around it, which meant we would not have to maintain a lot of tools of our own," says Lallemand. "It was better if we could contribute to some bigger project like Kubernetes." They also started using <a href="https://prometheus.io/">Prometheus</a>, as they were looking for "service-oriented monitoring that could be updated nightly." Production on Kubernetes began in December 2016. "We like to do crazy stuff around Christmas," he adds with a laugh.</p>
<p>BlaBlaCar now has about 3,000 pods, with 1,200 of them running on Kubernetes. Lallemand leads a "foundations team" of 25 members who take care of the networks, databases and systems for about 100 developers. There have been some challenges getting to this point. "The rkt implementation is still not 100 percent finished," Lallemand points out. "It's really good, but there are some features still missing. We have questions about how we do things with stateful services, like databases. We know how we will be migrating some of the services; some of the others are a bit more complicated to deal with. But the Kubernetes community is making a lot of progress on that part."</p>
<p>The team is particularly happy that they're now able to plan capacity better in the company's data center. "We have fewer constraints since we have this abstraction between the services and the hardware we run on," says Lallemand. "If we lose a server because there's a hardware problem on it, we just move the containers onto another server. It's much more efficient. We do that by just changing a line in the configuration file. And with Kubernetes, it should be automatic, so we would have nothing to do."</p>
{{< case-studies/quote >}}
"If we lose a server because there's a hardware problem on it, we just move the containers onto another server. It's much more efficient. We do that by just changing a line in the configuration file. With Kubernetes, it should be automatic, so we would have nothing to do."
{{< /case-studies/quote >}}
<p>And these advances ultimately trickle down to BlaBlaCar's users. "We have improved availability overall on our website," says Lallemand. "When you're switching to this cloud-native model and running everything in containers, you have to make sure that at any moment you can reboot a server or a data container without any downtime, without losing traffic. So now our infrastructure is much more resilient and we have better availability than before."</p>
<p>Within BlaBlaCar's technology department, the cloud-native journey has created some profound changes. Lallemand thinks that the regular meetings during the conception stage and the training sessions during implementation helped. "After that everybody took part in the migration process," he says. "Then we split the organization into different 'tribes'—teams that gather developers, product managers, data analysts, all the different jobs, to work on a specific part of the product. Before, they were organized by function. The idea is to give all these tribes access to the infrastructure directly in a self-service way without having to ask. These people are really autonomous. They have responsibility of that part of the product, and they can make decisions faster."</p>
<p>This DevOps transformation turned out to be a positive one for the company's staffers. "The team was very excited about the DevOps transformation because it was new, and we were working to make things more reliable, more future-proof," says Lallemand. "We like doing things that very few people are doing, other than the internet giants."</p>
<p>With these changes already making an impact, BlaBlaCar is looking to split up more and more of its application into services. "I don't say microservices because they're not so micro," Lallemand says. "If we can split the responsibilities between the development teams, it would be easier to manage and more reliable, because we can easily add and remove services if one fails. You can handle it easily, instead of adding a big monolith that we still have."</p>
<p>When Lallemand speaks to other European companies curious about what BlaBlaCar has done with its infrastructure, he tells them to come along for the ride. "I tell them that it's such a pleasure to deal with the infrastructure that we have today compared to what we had before," he says. "They just need to keep in mind their real motive, whether it's flexibility in development or reliability or so on, and then go step by step towards reaching those objectives. That's what we've done. It's important not to do technology for the sake of technology. Do it for a purpose. Our focus was on helping the developers."</p>


View File

@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.00001" y="-3.45836" width="223.25536" height="134.51136"/><g id="layer1"><path id="path14" d="M52.57518,56.39988v16.0054h8.94419l-1.2105,3.09349H48.6747V56.39991h3.90048"/><path id="path16" d="M127.89467,65.07506c2.48824-.94148,5.38-3.09349,5.38-6.65769,0-4.43848-3.83322-6.65769-9.28044-6.65769h-7.12845V75.49877h4.23673V65.74759h2.48823l6.59044,9.75118h4.77473l-7.0612-10.42368Zm-4.70747-2.35373h-2.08473V54.58412h2.62273c3.42972,0,5.31272,1.076,5.31272,4.035,0,3.16072-2.28649,4.10223-5.85071,4.10223"/><path id="path18" d="M68.98407,56.39988l.60525,1.47949L62.19187,75.49874h3.63148l1.614-3.90048h8.40619l1.68124,3.90048h4.304L73.3553,56.39988H68.98407Zm-.33625,12.30668,2.89174-7.06121,2.959,7.06121H68.64783"/><path id="path28" d="M41.27726,62.78859a5.40414,5.40414,0,0,0,4.10222-5.24547c0-5.78347-6.99395-5.78347-9.21318-5.78347H28.7016V75.49874h7.12845c4.16947,0,10.89442-1.00874,10.89442-6.85945C46.72447,65.41131,44.77423,63.32659,41.27726,62.78859Zm-5.649-8.13721c3.16072,0,5.31271.33625,5.31271,3.16073,0,3.83322-3.90047,3.63146-5.78345,3.63146h-2.152V54.58412l2.62273.06726Zm.06724,17.75387-2.69-.06726V64.537h2.48823c3.766,0,6.7922.538,6.7922,3.90048,0,3.29522-2.48823,3.96772-6.59046,3.96772"/><path id="path30" d="M98.84287,56.39988h3.96772V75.49874H98.84287Zm12.23941,0-8.06994,9.14595,8.13719,9.95294h4.50571L107.4508,65.34408l7.59921-8.9442h-3.96773"/><path id="path40" d="M80.95449,65.34405c0,6.25421,3.90048,10.49094,9.21318,10.49094a10.18329,10.18329,0,0,0,6.38871-1.95023l-.94149-1.883A9.90036,9.90036,0,0,1,91.17642,73.145c-3.42973,0-5.918-2.48823-5.918-7.66644,0-4.16946,1.54675-6.65769,4.50572-6.65769,1.614,0,2.959.538,3.497,2.152l3.42972-.87425c-.33625-1.345-2.152-3.96771-7.12845-3.96771-4.573,0-8.60794,3.09349-8.60794,9.21318"/><path id="path50" d="M171.67412,56.39988h3.90048V75.49874h-3.90048Zm12.23941,0-8.13721,9.14595,8.13721,9.95294h4.64021L180.282,65.34408l7.5992-8.9442h-3.96771"/><path id="path52" d="M153.71849,65.34405c0,6.25421,3.83322,10.49094,9.07869,10.49094a10.11878,10.11878,0,0,0,6.456-1.95023l-.94148-1.883A9.953,9.953,0,0,1,163.806,73.145c-3.42973,0-5.8507-2.48823-5.8507-7.66644,0-4.16946,1.54674-6.65769,4.573-6.65769a3.35862,3.35862,0,0,1,3.497,2.152l3.36248-.87425c-.33625-1.345-2.08475-3.96771-7.0612-3.96771-4.64021,0-8.60795,3.09349-8.60795,9.21318"/><path id="path54" d="M143.16032,56.13087c-5.17821,0-8.94419,3.09349-8.94419,9.81844s3.56423,9.88568,8.94419,9.88568,8.9442-3.16072,8.9442-9.88568S148.33853,56.13087,143.16032,56.13087Zm0,17.01414c-3.96771,0-4.90922-3.228-4.90922-7.1957,0-3.90048.94148-7.26295,4.90922-7.26295s4.90922,3.36247,4.90922,7.26295c0,3.96772-.94148,7.1957-4.90922,7.1957"/></g></svg>


View File

@ -0,0 +1,83 @@
---
title: BlackRock Case Study
case_study_styles: true
cid: caseStudies
new_case_study_styles: true
heading_background: /images/case-studies/blackrock/banner1.jpg
heading_title_logo: /images/blackrock_logo.png
subheading: >
Rolling Out Kubernetes in Production in 100 Days
case_study_details:
- Company: BlackRock
- Location: New York, NY
- Industry: Financial Services
---
<h2>Challenge</h2>
<p>The world's largest asset manager, <a href="https://www.blackrock.com/investing">BlackRock</a> operates a very controlled static deployment scheme, which has allowed for scalability over the years. But in their data science division, there was a need for more dynamic access to resources. "We want to be able to give every investor access to data science, meaning <a href="https://www.python.org">Python</a> notebooks, or even something much more advanced, like a MapReduce engine based on <a href="https://spark.apache.org">Spark</a>," says Michael Francis, a Managing Director in BlackRock's Product Group, which runs the company's investment management platform. "Managing complex Python installations on users' desktops is really hard because everyone ends up with slightly different environments. We have existing environments that do these things, but we needed to make it real, expansive and scalable. Being able to spin that up on demand, tear it down, make that much more dynamic, became a critical thought process for us. It's not so much that we had to solve our main core production problem, it's how do we extend that? How do we evolve?"</p>
<h2>Solution</h2>
<p>Drawing from what they learned during a pilot done last year using <a href="https://www.docker.com">Docker</a> environments, Francis put together a cross-sectional team of 20 to build an investor research web app using <a href="https://kubernetes.io">Kubernetes</a> with the goal of getting it into production within one quarter.</p>
<h2>Impact</h2>
<p>"Our goal was: How do you give people tools rapidly without having to install them on their desktop?" says Francis. And the team hit the goal within 100 days. Francis is pleased with the results and says, "We're going to use this infrastructure for lots of other application workloads as time goes on. It's not just data science; it's this style of application that needs the dynamism. But I think we're 6-12 months away from making a [large scale] decision. We need to gain experience of running the system in production, we need to understand failure modes and how best to manage operational issues. What's interesting is that just having this technology there is changing the way our developers are starting to think about their future development."</p>
{{< case-studies/quote author="Michael Francis, Managing Director, BlackRock">}}
"My message to other enterprises like us is you can actually integrate Kubernetes into an existing, well-orchestrated machinery. You don't have to throw out everything you do. And using Kubernetes made a complex problem significantly easier."
{{< /case-studies/quote >}}
<p>One of the management objectives for BlackRock's Product Group employees in 2017 was to "build cool stuff." Led by Managing Director Michael Francis, a cross-sectional group of 20 did just that: They rolled out a full production Kubernetes environment and released a new investor research web app on it. In 100 days.</p>
<p>For a company that's the world's largest asset manager, "just equipment procurement can take 100 days sometimes, let alone from inception to delivery," says Karl Wieman, a Senior System Administrator. "It was an aggressive schedule. But it moved the dial." In fact, the project achieved two goals: It solved a business problem (creating the needed web app) as well as provided real-world, in-production experience with Kubernetes, a cloud-native technology that the company was eager to explore. "It's not so much that we had to solve our main core production problem, it's how do we extend that? How do we evolve?" says Francis. The ultimate success of this project, beyond delivering the app, lies in the fact that "we've managed to integrate a radically new thought process into a controlled infrastructure that we didn't want to change."</p>
<p>After all, in its three decades of existence, BlackRock has "a very well-established environment for managing our compute resources," says Francis. "We manage large cluster processes on machines, so we do a lot of orchestration and management for our main production processes in a way that's very cloudish in concept. We're able to manage them in a very controlled, static deployment scheme, and that has given us a huge amount of scalability."</p>
<p>Though that works well for the core production, the company has found that some data science workloads require more dynamic access to resources. "It's a very bursty process," says Francis, who is head of data for the company's Aladdin investment management platform division.</p>
<p>Aladdin, which connects the people, information and technology needed for money management in real time, is used internally and is also sold as a platform to other asset managers and insurance companies. "We want to be able to give every investor access to data science, meaning <a href="https://www.python.org">Python</a> notebooks, or even something much more advanced, like a MapReduce engine based on <a href="https://spark.apache.org">Spark</a>," says Francis. But "managing complex Python installations on users' desktops is really hard because everyone ends up with slightly different environments. Docker allows us to flatten that environment."</p>
{{< case-studies/quote image="/images/case-studies/blackrock/banner3.jpg">}}
"We manage large cluster processes on machines, so we do a lot of orchestration and management for our main production processes in a way that's very cloudish in concept. We're able to manage them in a very controlled, static deployment scheme, and that has given us a huge amount of scalability."
{{< /case-studies/quote >}}
<p>Still, challenges remain. "If you have a shared cluster, you get this storming herd problem where everyone wants to do the same thing at the same time," says Francis. "You could put limits on it, but you'd have to build an infrastructure to define limits for our processes, and the Python notebooks weren't really designed for that. We have existing environments that do these things, but we needed to make it real, expansive, and scalable. Being able to spin that up on demand, tear it down, and make that much more dynamic, became a critical thought process for us."</p>
<p>Made up of managers from technology, infrastructure, production operations, development and information security, Francis's team was able to look at the problem holistically and come up with a solution that made sense for BlackRock. "Our initial straw man was that we were going to build everything using <a href="https://www.ansible.com">Ansible</a> and run it all using some completely different distributed environment," says Francis. "That would have been absolutely the wrong thing to do. Had we gone off on our own as the dev team and developed this solution, it would have been a very different product. And it would have been very expensive. We would not have gone down the route of running under our existing orchestration system. Because we don't understand it. These guys [in operations and infrastructure] understand it. Having the multidisciplinary team allowed us to get to the right solutions and that actually meant we didn't build anywhere near the amount we thought we were going to end up building."</p>
<p>In search of a solution in which they could manage usage on a user-by-user level, Francis's team gravitated to Red Hat's <a href="https://www.openshift.com">OpenShift</a> Kubernetes offering. The company had already experimented with other cloud-native environments, but the team liked that Kubernetes was open source, and "we felt the winds were blowing in the direction of Kubernetes long term," says Francis. "Typically we make technology choices that we believe are going to be here in 5-10 years' time, in some form. And right now, in this space, Kubernetes feels like the one that's going to be there." Adds Uri Morris, Vice President of Production Operations: "When you see that the non-Google committers to Kubernetes overtook the Google committers, that's an indicator of the momentum."</p>
<p>Once that decision was made, the major challenge was figuring out how to make Kubernetes work within BlackRock's existing framework. "It's about understanding how we can operate, manage and support a platform like this, in addition to tacking it onto our existing technology platform," says Project Manager Michael Maskallis. "All the controls we have in place, the change management process, the software development lifecycle, onboarding processes we go through—how can we do all these things?"</p>
<p>The first (anticipated) speed bump was working around issues behind BlackRock's corporate firewalls. "One of our challenges is there are no firewalls in most open source software," says Francis. "So almost all install scripts fail in some bizarre way, and pulling down packages doesn't necessarily work." The team ran into these types of problems using <a href="/docs/getting-started-guides/minikube/">Minikube</a> and did a few small pushes back to the open source project.</p>
{{< case-studies/quote image="/images/case-studies/blackrock/banner4.jpg">}}
"Typically we make technology choices that we believe are going to be here in 5-10 years' time, in some form. And right now, in this space, Kubernetes feels like the one that's going to be there."
{{< /case-studies/quote >}}
<p>There were also questions about service discovery. "You can think of Aladdin as a cloud of services with APIs between them that allows us to build applications rapidly," says Francis. "It's all on a proprietary message bus, which gives us all sorts of advantages but at the same time, how does that play in a third party [platform]?"</p>
<p>Another issue they had to navigate was that in BlackRock's existing system, the messaging protocol has different instances in the different development, test and production environments. While Kubernetes enables a more DevOps-style model, it didn't make sense for BlackRock. "I think what we are very proud of is that the ability for us to push into production is still incredibly rapid in this [new] infrastructure, but we have the control points in place, and we didn't have to disrupt everything," says Francis. "A lot of the cost of this development was thinking how best to leverage our internal tools. So it was less costly than we actually thought it was going to be."</p>
<p>The project leveraged tools associated with the messaging bus, for example. "The way that the Kubernetes cluster will talk to our internal messaging platform is through a gateway program, and this gateway program already has built-in checks and throttles," says Morris. "We can use them to control and potentially throttle the requests coming in from Kubernetes's very elastic infrastructure to the production infrastructure. We'll continue to go in that direction. It enables us to scale as we need to from the operational perspective."</p>
<p>The solution also had to be complementary with BlackRock's centralized operational support team structure. "The core infrastructure components of Kubernetes are hooked into our existing orchestration framework, which means that anyone in our support team has both control and visibility to the cluster using the existing operational tools," Morris explains. "That means that I don't need to hire more people."</p>
<p>With those points established, the team created a procedure for the project: "We rolled this out first to a development environment, then moved on to a testing environment and then eventually to two production environments, in that sequential order," says Maskallis. "That drove a lot of our learning curve. We have all these moving parts, the software components on the infrastructure side, the software components with Kubernetes directly, the interconnectivity with the rest of the environment that we operate here at BlackRock, and how we connect all these pieces. If we came across issues, we fixed them, and then moved on to the different environments to replicate that until we eventually ended up in our production environment where this particular cluster is supposed to live."</p>
<p>The team had weekly one-hour working sessions with all the members (who are located around the world) participating, and smaller breakout or deep-dive meetings focusing on specific technical details. Possible solutions would be reported back to the group and debated the following week. "I think what made it a successful experiment was people had to work to learn, and they shared their experiences with others," says Vice President and Software Developer Fouad Semaan. Then, Francis says, "We gave our engineers the space to do what they're good at. This hasn't been top-down."</p>
{{< case-studies/quote >}}
"The core infrastructure components of Kubernetes are hooked into our existing orchestration framework, which means that anyone in our support team has both control and visibility to the cluster using the existing operational tools. That means that I don't need to hire more people."
{{< /case-studies/quote >}}
<p>They were led by one key axiom: To stay focused and avoid scope creep. This meant that they wouldn't use features that weren't in the core of Kubernetes and Docker. But if there was a real need, they'd build the features themselves. Luckily, Francis says, "Because of the rapidity of the development, a lot of things we thought we would have to build ourselves have been rolled into the core product. [The package manager <a href="https://helm.sh">Helm</a> is one example]. People have similar problems."</p>
<p>By the end of the 100 days, the app was up and running for internal BlackRock users. The initial capacity of 30 users was hit within hours, and quickly increased to 150. "People were immediately all over it," says Francis. In the next phase of this project, they are planning to scale up the cluster to have more capacity.</p>
<p>Even more importantly, they now have in-production experience with Kubernetes that they can continue to build on—and a complete framework for rolling out new applications. "We're going to use this infrastructure for lots of other application workloads as time goes on. It's not just data science; it's this style of application that needs the dynamism," says Francis. "Is it the right place to move our core production processes onto? It might be. We're not at a point where we can say yes or no, but we felt that having real production experience with something like Kubernetes at some form and scale would allow us to understand that. I think we're 6-12 months away from making a [large scale] decision. We need to gain experience of running the system in production, we need to understand failure modes and how best to manage operational issues."</p>
<p>For other big companies considering a project like this, Francis says commitment and dedication are key: "We got the signoff from [senior management] from day one, with the commitment that we were able to get the right people. If I had to isolate what makes something complex like this succeed, I would say senior hands-on people who can actually drive it make a huge difference." With that in place, he adds, "My message to other enterprises like us is you can actually integrate Kubernetes into an existing, well-orchestrated machinery. You don't have to throw out everything you do. And using Kubernetes made a complex problem significantly easier."</p>


View File

@ -0,0 +1,86 @@
---
title: Booking.com Case Study
linkTitle: Booking.com
case_study_styles: true
cid: caseStudies
logo: booking.com_featured_logo.png
featured: true
weight: 3
quote: >
We realized that we needed to learn Kubernetes better in order to fully use the potential of it. At that point, we made the shift to build our own Kubernetes platform.
new_case_study_styles: true
heading_background: /images/case-studies/booking/banner1.jpg
heading_title_text: Booking.com
use_gradient_overlay: true
subheading: >
After Learning the Ropes with a Kubernetes Distribution, Booking.com Built a Platform of Its Own
case_study_details:
- Company: Booking.com
- Location: Netherlands
- Industry: Travel
---
<h2>Challenge</h2>
<p>In 2016, Booking.com migrated to an OpenShift platform, which gave product developers faster access to infrastructure. But because Kubernetes was abstracted away from the developers, the infrastructure team became a "knowledge bottleneck" when challenges arose. Trying to scale that support wasn't sustainable.</p>
<h2>Solution</h2>
<p>After a year operating OpenShift, the platform team decided to build its own vanilla Kubernetes platform—and ask developers to learn some Kubernetes in order to use it. "This is not a magical platform," says Ben Tyler, Principal Developer, B Platform Track. "We're not claiming that you can just use it with your eyes closed. Developers need to do some learning, and we're going to do everything we can to make sure they have access to that knowledge."</p>
<h2>Impact</h2>
<p>Despite the learning curve, there's been a great uptick in adoption of the new Kubernetes platform. Before containers, creating a new service could take a couple of days if the developers understood Puppet, or weeks if they didn't. On the new platform, it can take as few as 10 minutes. About 500 new services were built on the platform in the first 8 months.</p>
{{< case-studies/quote
image="/images/case-studies/booking/banner2.jpg"
author="BEN TYLER, PRINCIPAL DEVELOPER, B PLATFORM TRACK AT BOOKING.COM"
>}}
"As our users learn Kubernetes and become more sophisticated Kubernetes users, they put pressure on us to provide a better, more native Kubernetes experience, which is great. It's a super healthy dynamic."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
Booking.com has a long history with Kubernetes: In 2015, a team at the travel platform prototyped a container platform based on Mesos and Marathon.
{{< /case-studies/lead >}}
<p>Impressed by what the technology offered, but in need of enterprise features at its scale—the site handles more than 1.5 million room-night reservations a day on average—the team decided to adopt an OpenShift platform.</p>
<p>This platform, which was wrapped in a Heroku-style, high-level CLI interface, "was definitely popular with our product developers," says Ben Tyler, Principal Developer, B Platform Track. "We gave them faster access to infrastructure."</p>
<p>But, he adds, "anytime something went slightly off the rails, developers didn't have any of the knowledge required to support themselves."</p>
<p>And after a year of operating this platform, the infrastructure team found that it had become "a knowledge bottleneck," he says. "Most of the developers who used it did not know it was Kubernetes underneath. An application failure and a platform failure both looked like failures of that Heroku-style tool."</p>
<p>Scaling the necessary support did not seem feasible or sustainable, so the platform team needed a new solution. The understanding of Kubernetes that they had gained operating the OpenShift platform gave them confidence to build a vanilla Kubernetes platform of their own and customize it to suit the company's needs.</p>
{{< case-studies/quote author="EDUARD IACOBOAIA, SENIOR SYSTEM ADMINISTRATOR, B PLATFORM TRACK AT BOOKING.COM" >}}
"For entering the landscape, OpenShift was definitely very helpful. It shows you what the technology can do, and it makes it easy for you to use it. After we spent some time on it, we realized that we needed to learn Kubernetes better in order to fully use the potential of it. At that point, we made the shift to build our own Kubernetes platform. We definitely benefit in the long term for taking that step and investing the time in gaining that knowledge."
{{< /case-studies/quote >}}
<p>"For entering the landscape, OpenShift was definitely very helpful," says Eduard Iacoboaia, Senior System Administrator, B Platform Track. "It shows you what the technology can do, and it makes it easy for you to use it. After we spent some time on it, we realized that we needed to learn Kubernetes better in order to fully use the potential of it. At that point, we made the shift to build our own Kubernetes platform. We definitely benefit in the long term for taking that step and investing the time in gaining that knowledge."</p>
<p>Iacoboaia's team had customized a lot of OpenShift tools to make them work at Booking.com, and "those integrations points were kind of fragile," he says. "We spent much more time understanding all the components of Kubernetes, how they work, how they interact with each other." That research led the team to switch from OpenShift's built-in Ansible playbooks to Puppet deployments, which are used for the rest of Booking's infrastructure. The control plane was also moved from inside the cluster onto bare metal, as the company runs tens of thousands of bare-metal servers and a large infrastructure for running applications on bare metal. (Booking runs Kubernetes in multiple clusters in multiple data centers across the various regions where it has compute.) "We decided to keep it as simple as possible and to also use the tools that we know best," says Iacoboaia.</p>
<p>The other big change was that product engineers would have to learn Kubernetes in order to onboard. "This is not a magical platform," says Tyler. "We're not claiming that you can just use it with your eyes closed. Developers need to do some learning, and we're going to do everything we can to make sure they have access to that knowledge." That includes trainings, blog posts, videos, and Udemy courses.</p>
<p>Despite the learning curve, there's been a great uptick in adoption of the new Kubernetes platform. "I think the reason we've been able to strike this bargain successfully is that we're not asking them to learn a proprietary app system," says Tyler. "We're asking them to learn something that's open source, where the knowledge is transferable. They're investing in their own careers by learning Kubernetes."</p>
<p>One clear sign that this strategy has been a success is that in the support channel, when users have questions, other product engineers are jumping in to respond. "I haven't seen that kind of community engagement around a particular platform product internally before," says Tyler. "It helps a lot that it's visibly an ecosystem standard outside of the company, so people feel value in investing in that knowledge and sharing it with others, which is really, really powerful."</p>
{{< case-studies/quote
image="/images/case-studies/booking/banner3.jpg"
author="BEN TYLER, PRINCIPAL DEVELOPER, B PLATFORM TRACK AT BOOKING.COM"
>}}
"We have a tutorial. You follow the tutorial. Your code is running. Then, it's business-logic time. The time to gain access to resources is decreased enormously."
{{< /case-studies/quote >}}
<p>There's other quantifiable evidence too: Before containers, creating a new service could take a couple of days if the developers understood Puppet, or weeks if they didn't. On the new platform, it takes 10 minutes. "We have a tutorial. You follow the tutorial. Your code is running. Then, it's business-logic time," says Tyler. "The time to gain access to resources is decreased enormously." About 500 new services were built in the first 8 months on the platform, with hundreds of releases per day.</p>
<p>The platform offers different "layers of contracts, so to speak," says Tyler. "At the very base, it's just Kubernetes. If you're a pro Kubernetes user, here's a Kubernetes API, just like you get from GKE or AKS. We're trying to be a provider on that same level. But our whole job inside the company is to be a bigger value add than just vanilla infrastructure, so we provide a set of base images for our main stacks, Perl and Java."</p>
<p>And "as our users learn Kubernetes and become more sophisticated Kubernetes users, they put pressure on us to provide a better more native Kubernetes experience, which is great," says Tyler. "It's a super healthy dynamic."</p>
<p>The platform also includes other CNCF technologies, such as Envoy, Helm, and Prometheus. Most of the critical service traffic for Booking.com is routed through Envoy, and Prometheus is used primarily to monitor infrastructure components. Helm is consumed as a packaging standard. The team also developed and open sourced Shipper, an extension for Kubernetes to add more complex rollout strategies and multi-cluster orchestration.</p>
<p>To be sure, there have been internal discussions about the wisdom of building a Kubernetes platform from the ground up. "This is not really our core competency—Kubernetes and travel, they're kind of far apart, right?" says Tyler. "But we've made a couple of bets on CNCF components that have worked out really well for us. Envoy and Kubernetes, in particular, have been really beneficial to our organization. We were able to customize them, either because we could look at the source code or because they had extension points, and we were able to get value out of them very quickly without having to change any paradigms internally."</p>


View File

@ -0,0 +1,80 @@
---
title: Booz Allen Case Study
linkTitle: Booz Allen Hamilton
case_study_styles: true
cid: caseStudies
logo: booz-allen-featured-logo.svg
featured: true
weight: 2
quote: >
Kubernetes is a great solution for us. It allows us to rapidly iterate on our clients' demands.
new_case_study_styles: true
heading_background: /images/case-studies/booz-allen/banner4.jpg
heading_title_text: Booz Allen Hamilton
use_gradient_overlay: true
subheading: >
How Booz Allen Hamilton Is Helping Modernize the Federal Government with Kubernetes
case_study_details:
- Company: Booz Allen Hamilton
- Location: United States
- Industry: Government
---
<h2>Challenge</h2>
<p>In 2017, Booz Allen Hamilton's Strategic Innovation Group worked with the federal government to relaunch the decade-old recreation.gov website, which provides information and real-time booking for more than 100,000 campsites and facilities on federal lands across the country. The infrastructure needed to be agile, reliable, and scalable—as well as repeatable for the other federal agencies that are among Booz Allen Hamilton's customers.</p>
<h2>Solution</h2>
<p>"The only way that we thought we could be successful with this problem across all the different agencies is to create a microservice architecture and containers, so that we could be very dynamic and very agile to any given agency for whatever requirements that they may have," says Booz Allen Hamilton Senior Lead Technologist Martin Folkoff. To meet those requirements, Folkoff's team looked to Kubernetes for orchestration.</p>
<h2>Impact</h2>
<p>With the recreation.gov Kubernetes platform, changes can be implemented in about 30 minutes, compared to the multiple hours or even days legacy government applications require to review the code, get approval, and deploy the fix. Recreation.gov deploys to production on average 10 times a day. With monitoring, security, and logging built in, developers can create and publish new services to production within a week. Additionally, Folkoff says, "supporting the large, existing monoliths in the government is extremely expensive," and migrating into a more modern platform has resulted in perhaps 50% cost savings.</p>
{{< case-studies/quote
image="/images/case-studies/booz-allen/banner2.jpg"
author="JOSH BOYD, CHIEF TECHNOLOGIST AT BOOZ ALLEN HAMILTON"
>}}
"When there's a regulatory change in an agency, or a legislative change in Congress, or an executive order that changes the way you do business, how do I deploy that and get that out to the people who need it rapidly? At the end of the day, that's the problem we're trying to help the government solve with tools like Kubernetes."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
The White House launched an IT modernization effort in 2017, and in addition to improving cybersecurity and shifting to the public cloud and a consolidated IT model, "the federal government is looking to provide a better experience to citizens in every way that we interact with the government through every channel," says Booz Allen Hamilton Senior Lead Technologist Martin Folkoff.
{{< /case-studies/lead >}}
<p>To that end, Folkoff's Strategic Innovation Group worked with the federal government last year to relaunch the decade-old recreation.gov website, which provides information and real-time booking for more than 100,000 campsites and facilities on federal lands across the country.</p>
<p>The infrastructure needed to be agile, reliable, and scalable—as well as repeatable for the other federal agencies that are among Booz Allen Hamilton's customers. "The only way that we thought we could be successful with this problem across all the different agencies is to create a microservice architecture, so that we could be very dynamic and very agile to any given agency for whatever requirements that they may have," says Folkoff.</p>
{{< case-studies/quote author="MARTIN FOLKOFF, SENIOR LEAD TECHNOLOGIST AT BOOZ ALLEN HAMILTON" >}}
"With CNCF, there's a lot of focus on scale, and so there's a lot of comfort knowing that as the project grows, we're going to be comfortable using that tool set."
{{< /case-studies/quote >}}
<p>Booz Allen Hamilton, which has provided consulting services to the federal government for more than a century, introduced microservices, Docker containers, and AWS to its federal agency clients about five years ago. The next logical step was Kubernetes for orchestration. "Knowing that we had to be really agile and really reliable and scalable, we felt that the only technology that we know that can enable those kinds of things are the ones the CNCF provides," Folkoff says. "One of the things that is always important for the government is to make sure that the things that we build really endure. Using technology that is supported across multiple different companies and has strong governance gives people a lot of confidence."</p>
<p>Kubernetes was also aligned with the government's open source and IT modernization initiatives, so there has been an uptick in its usage at federal agencies over the past two years. "Now that Kubernetes is becoming offered as a service by the cloud providers like AWS and Microsoft, we're starting to see even more interest," says Chief Technologist Josh Boyd. Adds Folkoff: "With CNCF, there's a lot of focus on scale, and so there's a lot of comfort knowing that as the project grows, we're going to be comfortable using that tool set."</p>
<p>The greenfield recreation.gov project allowed the team to build a new Kubernetes-enabled site running on AWS, and the migration took only a week, during which the old site didn't take bookings. "For the actual transition, we just swapped a DNS server, and it only took about 35 seconds between the old site being down and our new site being up and available," Folkoff adds.</p>
{{< case-studies/quote
image="/images/case-studies/booz-allen/banner1.png"
author="MARTIN FOLKOFF, SENIOR LEAD TECHNOLOGIST AT BOOZ ALLEN HAMILTON"
>}}
"Kubernetes alone enables a dramatic reduction in cost as resources are prioritized to the day's event"
{{< /case-studies/quote >}}
<p>In addition to its work with the Department of Interior for recreation.gov, Booz Allen Hamilton has brought Kubernetes to various Defense, Intelligence, and civilian agencies. Says Boyd: "When there's a regulatory change in an agency, or a legislative change in Congress, or an executive order that changes the way you do business, how do I deploy that and get that out to the people who need it rapidly? At the end of the day, that's the problem we're trying to help the government solve with tools like Kubernetes."</p>
<p>For recreation.gov, the impact was clear and immediate. With the Kubernetes platform, Folkoff says, "if a new requirement for a permit comes out, we have the ability to design and develop and implement that completely independently of reserving a campsite. It provides a much better experience to users." Today, changes can be implemented in about 30 minutes, compared to the multiple hours or even days legacy government applications require to review the code, get approval, and deploy the fix. Recreation.gov deploys to production on average 10 times a day.</p>
<p>Developer velocity has been improved. "When I want to do monitoring or security or logging, I don't have to do anything to my services or my application to enable that anymore," says Boyd. "I get all of this magic just by being on the Kubernetes platform." With all of those things built in, developers can create and publish new services to production within one week. </p>
<p>Additionally, Folkoff says, "supporting the large, existing monoliths in the government is extremely expensive," and migrating into a more modern platform has resulted in perhaps 50% cost savings. "Kubernetes alone enables a dramatic reduction in cost as resources are prioritized to the day's event," he says. "For example, during a popular campsite release, camping-related services are scaled out while permit services are scaled down."</p>
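<p>The event-driven scaling Folkoff describes maps naturally onto per-service autoscaling in Kubernetes. The sketch below is purely illustrative (the service name and thresholds are invented) and shows the general mechanism rather than recreation.gov's actual configuration: a HorizontalPodAutoscaler lets a camping-related service grow during a popular release while other services keep a small footprint.</p>

```yaml
# Illustrative only: a HorizontalPodAutoscaler that lets a camping-related
# service scale out under load while staying small otherwise. The names and
# numbers are hypothetical, not recreation.gov's real settings.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: campsite-booking
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: campsite-booking
  minReplicas: 2
  maxReplicas: 50          # room to scale out during a popular campsite release
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```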
<p>So far, "Kubernetes is a great solution for us," says Folkoff. "It allows us to rapidly iterate on our clients' demands." Looking ahead, the team sees further adoption of the Kubernetes platform across federal agencies. Says Boyd: "You get the ability for the rapid delivery of business value for your customers. You now have observability into everything that you're doing. You don't have these onesies and twosies unicorn servers anymore. Now everything that you deploy is deployed in the same way, it's all instrumented the same way, and it's all built and deployed the same way through our CI/CD processes."</p>
<p>They also see a push toward re-platforming. "There's still a lot of legacy workloads out there," says Boyd. "We've got the new challenges of greenfield development and integration with legacy systems, but also that brown field of 'Hey, how do I take this legacy monolith and get it onto a platform where now it's instrumented with all the magic of the Kubernetes platform without having to do a whole lot to my application?' I think re-platforming is a pretty big use case for the government right now."</p>
<p>And given the success that they've had with Kubernetes so far, Boyd says, "I think at this point that technology is becoming pretty easy to sell." Adds Folkoff: "People are really excited about being able to deploy, scale, be reliable, and do cheaper maintenance of all of this."</p>

Binary file not shown.

View File

@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.55202" y="-3.16104" width="223.25536" height="134.51136"/><path id="path5" d="M141.73824,63.05113H127.82345l2.25647-3.85486c.93932-1.59742,1.78587-2.16178,3.19675-2.16178,1.40994,0,1.88056,1.12776,1.12871,2.5377h14.197l1.4109-2.5377c1.034-1.88055,1.034-3.94955-1.50463-3.94955h-25.0097c-2.25648,0-4.04329.28218-5.35949,2.44492l-4.41922,7.52127c-1.2215,2.16274-1.12776,3.94956,1.88056,3.94956h14.3854L127.636,71.13676a3.487,3.487,0,0,1-3.47894,1.88056c-1.69307,0-1.59837-1.12776-.84654-2.25552H109.20724l-1.69307,2.82084c-1.034,1.78681-.18748,3.38519,2.25648,3.38519h24.82127a6.07295,6.07295,0,0,0,5.35949-2.91457l4.23078-7.05162C145.12343,65.21387,144.65281,63.05113,141.73824,63.05113Z"/><path id="path7" d="M202.14279,57.7299a2.01608,2.01608,0,1,0,2.06708,2.00395A2.04547,2.04547,0,0,0,202.14279,57.7299Zm0,3.63676a1.62259,1.62259,0,1,1,1.673-1.63377A1.64448,1.64448,0,0,1,202.14279,61.36666Z"/><path id="path9" d="M109.67787,53.08494H88.42834c-2.91458,0-6.67569,0-8.93217,3.94955l-9.4028,15.98283c-1.31619,2.16274-.18748,3.94956,2.25648,3.94956H97.07833c2.53865,0,4.137-.7528,6.11133-3.94956l9.40279-15.98283C113.814,54.96645,111.9334,53.08494,109.67787,53.08494ZM97.73642,59.19627l-7.05065,11.9405c-.94028,1.69307-1.78682,1.88055-3.94955,1.88055s-2.7271-.75184-2.068-1.88055l7.14535-11.94049a3.9896,3.9896,0,0,1,4.0433-2.16179C97.64364,57.03449,98.6767,57.59885,97.73642,59.19627Z"/><path id="path11" d="M72.44455,53.08494H45.17837L33.42631,73.01732h-25.95l.01294,3.94956H59.846c3.47893,0,4.60669-1.69308,5.35949-3.00928l2.81988-4.88887c.7528-1.3162.7528-2.82084-.75184-3.66642a5.94452,5.94452,0,0,0,4.79513-3.19676l2.82085-4.79514C76.30036,55.06019,74.60728,53.08494,72.44455,53.08494ZM58.436,69.16342,57.02512,71.419a2.662,2.662,0,0,1-2.6324,1.59837H47.43485l3.57554-6.01663H57.777C58.99942,67.00069,58.99942,68.12845,58.436,69.16342Zm5.9229-9.96715L62.948,61.45275a2.662,2.662,0,0,1-2.6324,1.59838H53.35774l3.57268-6.01664h6.76943C64.92327,57.03449,64.92327,58.16225,64.35891,59.19627Z"/><path id="path13" d="M203.20455,59.20871a.55269.55269,0,0,0-.25635-.485,1.13731,1.13731,0,0,0-.56532-.10809h-1.01107v2.25457h.34244V59.83046h.40462l.66193,1.0388h.3941l-.7021-1.0388C202.88507,59.81994,203.20455,59.6535,203.20455,59.20871Zm-1.11341.34244h-.37687V58.872h.5988c.29173,0,.54809.03922.54809.331C202.86211,59.60854,202.41636,59.55115,202.09114,59.55115Z"/><path id="polygon1317" d="M168.49306,57.03164h35.25806l-.011-3.94955H156.741L142.73151,76.964h32.71941l2.35022-3.94955H159.09122l3.57267-6.01664h18.70993l2.25649-3.94955H165.01508Z"/></svg>

View File

@ -0,0 +1,87 @@
---
title: Bose Case Study
linkTitle: Bose
case_study_styles: true
cid: caseStudies
logo: bose_featured_logo.png
featured: false
weight: 2
quote: >
The CNCF Landscape quickly explains what's going on in all the different areas from storage to cloud providers to automation and so forth. This is our shopping cart to build a cloud infrastructure. We can go choose from the different aisles.
new_case_study_styles: true
heading_background: /images/case-studies/bose/banner1.jpg
heading_title_logo: /images/bose_logo.png
subheading: >
Bose: Supporting Rapid Development for Millions of IoT Products With Kubernetes
case_study_details:
- Company: Bose Corporation
- Location: Framingham, Massachusetts
- Industry: Consumer Electronics
---
<h2>Challenge</h2>
<p>A household name in high-quality audio equipment, <a href="https://www.bose.com/en_us/index.html">Bose</a> has offered connected products for more than five years, and as that demand grew, the infrastructure had to change to support it. "We needed to provide a mechanism for developers to rapidly prototype and deploy services all the way to production pretty fast," says Lead Cloud Engineer Josh West. In 2016, the company decided to start building a platform from scratch. The primary goal: "To be one to two steps ahead of the different product groups so that we are never scrambling to catch up with their scale," says Cloud Architecture Manager Dylan O'Mahony.</p>
<h2>Solution</h2>
<p>From the beginning, the team knew it wanted a microservices architecture. After evaluating and prototyping a couple of orchestration solutions, the team decided to adopt <a href="https://kubernetes.io/">Kubernetes</a> for its scaled IoT Platform-as-a-Service running on AWS. The platform, which also incorporated Prometheus monitoring, launched in production in 2017, serving over 3 million connected products from the get-go. Bose has since adopted a number of other CNCF technologies, including <a href="https://www.fluentd.org/">Fluentd</a>, <a href="https://coredns.io/">CoreDNS</a>, <a href="https://www.jaegertracing.io/">Jaeger</a>, and <a href="https://opentracing.io/">OpenTracing</a>.</p>
<h2>Impact</h2>
<p>With about 100 engineers onboarded, the platform is now enabling 30,000 non-production deployments across dozens of microservices per year. In 2018, there were 1250+ production deployments. Just one production cluster holds 1,800 namespaces and 340 worker nodes. "We had a brand new service taken from concept through coding and deployment all the way to production, including hardening, security testing and so forth, in less than two and a half weeks," says O'Mahony.</p>
{{< case-studies/quote author="Josh West, Lead Cloud Engineer, Bose" >}}
"At Bose we're building an IoT platform that has enabled our physical products. If it weren't for Kubernetes and the rest of the CNCF projects being free open source software with such a strong community, we would never have achieved scale, or even gotten to launch on schedule."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
A household name in high-quality audio equipment, <a href="https://www.bose.com/en_us/index.html">Bose</a> has offered connected products for more than five years, and as that demand grew, the infrastructure had to change to support it.
{{< /case-studies/lead >}}
<p>"We needed to provide a mechanism for developers to rapidly prototype and deploy services all the way to production pretty fast," says Lead Cloud Engineer Josh West. "There were a lot of cloud capabilities we wanted to provide to support our audio equipment and experiences."</p>
<p>In 2016, the company decided to start building an IoT platform from scratch. The primary goal: "To be one to two steps ahead of the different product groups so that we are never scrambling to catch up with their scale," says Cloud Architecture Manager Dylan O'Mahony. "If they release a new connected product, we want to be already well ahead of being able to handle whatever scale that they're going to throw at us."</p>
<p>From the beginning, the team knew it wanted a microservices architecture and platform as a service. After evaluating and prototyping orchestration solutions, including Mesos and Docker Swarm, the team decided to adopt <a href="https://kubernetes.io/">Kubernetes</a> for its platform running on AWS. Kubernetes was still in 1.5, but already the technology could do much of what the team wanted and needed for the present and the future. For West, that meant having storage and network handled. O'Mahony points to Kubernetes' portability in case Bose decides to go multi-cloud.</p>
<p>"Bose is a company that looks out for the long term," says West. "Going with a quick commercial off-the-shelf solution might've worked for that point in time, but it would not have carried us forward, which is what we needed from Kubernetes and the CNCF."</p>
{{< case-studies/quote
image="/images/case-studies/bose/banner3.jpg"
author="Dylan O'Mahony, Cloud Architecture Manager, Bose"
>}}
"Everybody on the team thinks in terms of automation, leaning out the processes, getting things done as quickly as possible. When you step back and look at what it means for a 50-plus-year-old speaker company to have that sort of culture, it really is quite incredible, and I think the tools that we use and the foundation that we've built with them is a huge piece of that."
{{< /case-studies/quote >}}
<p>The team spent time working on choosing tooling to make the experience easier for developers. "Our developers interact with tools provided by our Ops team, and the Ops team run all of their tooling on top of Kubernetes," says O'Mahony. "We try not to make direct Kubernetes access the only way. In fact, ideally, our developers wouldn't even need to know that they're running on Kubernetes."</p>
<p>The platform, which also incorporated <a href="https://prometheus.io/">Prometheus</a> monitoring from the beginning, backdoored its way into production in 2017, serving over 3 million connected products from the get-go. "Even though the speakers and the products that we were designing this platform for were still quite a ways away from being launched, we did have some connected speakers on the market," says O'Mahony. "We basically started to point certain features of those speakers and the apps that go with those speakers to this platform."</p>
<p>Today, just one of Bose's production clusters holds 1,800 namespaces/discrete services and 340 nodes. With about 100 engineers now onboarded, the platform infrastructure is enabling 30,000 non-production deployments across dozens of microservices per year. In 2018, there were 1250+ production deployments. It's a staggering improvement over some of Bose's previous deployment processes, which supported far fewer deployments and services.</p>
{{< case-studies/quote
image="/images/case-studies/bose/banner4.jpg"
author="Josh West, Lead Cloud Engineer, Bose"
>}}
"The CNCF Landscape quickly explains what's going on in all the different areas from storage to cloud providers to automation and so forth. This is our shopping cart to build a cloud infrastructure. We can go choose from the different aisles."
{{< /case-studies/quote >}}
<p>"We had a brand new service deployed from concept through coding and deployment all the way to production, including hardening, security testing and so forth, in less than two and a half weeks," says O'Mahony. "Everybody thinks in terms of automation, leaning out the processes, getting things done as quickly as possible. When you step back and look at what it means for a 50-plus-year-old speaker company to have that sort of culture, it really is quite incredible, and I think the tools that we use and the foundation that we've built is a huge piece of that."</p>
<p>Many of those technologies—such as <a href="https://www.fluentd.org/">Fluentd</a>, <a href="https://coredns.io/">CoreDNS</a>, <a href="https://www.jaegertracing.io/">Jaeger</a>, and <a href="https://opentracing.io/">OpenTracing</a>—come from the <a href="https://landscape.cncf.io/">CNCF Landscape</a>, which West and O'Mahony have relied upon throughout Bose's cloud native journey. "The CNCF Landscape quickly explains what's going on in all the different areas from storage to cloud providers to automation and so forth," says West. "This is our shopping cart to build a cloud infrastructure. We can go choose from the different aisles."</p>
<p>And, he adds, "If it weren't for Kubernetes and the rest of the CNCF projects being free open source software with such a strong community, we would never have achieved scale, or even gotten to launch on schedule."</p>
<p>Another benefit of going cloud native: "We are even attracting much more talent into Bose because we're so involved with the <a href="http://careers.bose.com">CNCF Landscape</a>," says West. (Yes, they're hiring.) "It's just enabled so many people to do so many great things and really brought Bose into the future of cloud."</p>
{{< case-studies/quote author="Dylan O'Mahony, Cloud Architecture Manager, Bose" >}}
"We have a lot going on to support many more of our business units at Bose in addition to the consumer electronics division, which we currently do. It's only because of the cloud native landscape and the tools and the features that are available that we can provide such a fantastic cloud platform for all the developers and divisions that are trying to enable some pretty amazing experiences."
{{< /case-studies/quote >}}
<p>In the coming year, the team wants to work on service mesh and serverless, as well as expansion around the world. "Getting our latency down by going multi-region is going to be a big focus for us," says O'Mahony. "In order to make sure that our customers in Japan, Australia, and everywhere else are having a good experience, we want to have points of presence closer to them. It's never been done at Bose before."</p>
<p>That won't stop them, because the team is all about lofty goals. "We want to get to billions of connected products!" says West. "We have a lot going on to support many more of our business units at Bose in addition to the consumer electronics division, which we currently do. It's only because of the cloud native landscape and the tools and the features that are available that we can provide such a fantastic cloud platform for all the developers and divisions that are trying to enable some pretty amazing experiences."</p>
<p>In fact, given the scale the platform is already supporting, says O'Mahony, "doing anything other than Kubernetes, I think, would be folly at this point."</p>

Binary file not shown.

View File

@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#0071f7;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.55202" y="-3.16104" width="223.25536" height="134.51136"/><path class="cls-2" d="M174.27862,87.85821a5.22494,5.22494,0,0,1-.67887,7.12815,5.66277,5.66277,0,0,1-7.46759-.67888l-11.88024-15.2746L142.71111,93.968c-1.69718,2.37606-5.09153,2.37606-7.46758.67888a4.96011,4.96011,0,0,1-1.01831-7.12815l13.57742-17.65065L134.22522,52.21747a5.32925,5.32925,0,0,1,8.48589-6.44927l11.54081,15.2746,11.88024-14.59574c1.69718-2.376,4.75211-2.71548,7.46759-1.0183,2.376,1.69717,2.376,5.09153.67887,7.46758l-13.238,17.31121Zm-61.77727-2.03662a15.64926,15.64926,0,0,1-15.95348-15.614,15.95716,15.95716,0,0,1,31.907,0A16.08546,16.08546,0,0,1,112.50135,85.82159Zm-46.84212,0a15.64926,15.64926,0,0,1-15.95347-15.614,15.95716,15.95716,0,0,1,31.907,0A15.64926,15.64926,0,0,1,65.65923,85.82159Zm46.84212-41.41114A26.28086,26.28086,0,0,0,89.41973,57.98787,26.42342,26.42342,0,0,0,65.99867,44.41045,27.90043,27.90043,0,0,0,50.0452,49.502V27.77811a5.22038,5.22038,0,0,0-5.09153-5.09154,5.297,5.297,0,0,0-5.431,5.09154V70.547A26.00829,26.00829,0,0,0,65.65923,96.00466,26.718,26.718,0,0,0,89.08029,82.0878a26.58818,26.58818,0,0,0,23.08162,13.91686,26.22192,26.22192,0,0,0,26.476-26.13654C138.97732,55.95126,127.09708,44.41045,112.50135,44.41045Z"/></svg>

Binary file not shown.

Binary file not shown.

View File

@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 102 60"><defs><style>.cls-1{fill:#0071f7;}</style></defs><title>box</title><path class="cls-1" d="M96.96648,49.45314a3.5963,3.5963,0,0,1-.46727,4.90627,3.89764,3.89764,0,0,1-5.1399-.46727L83.1822,43.37871l-7.94348,10.2798c-1.16816,1.63543-3.50448,1.63543-5.1399.46727a3.414,3.414,0,0,1-.7009-4.90627l9.34527-12.14886L69.39792,24.9218a3.926,3.926,0,0,1,.7009-5.1399,3.926,3.926,0,0,1,5.1399.7009L83.1822,30.99623l8.17711-10.04617c1.16816-1.63543,3.27085-1.86906,5.1399-.7009,1.63543,1.16816,1.63543,3.50448.46727,5.1399L87.85483,37.30428Zm-42.521-1.40179a10.77131,10.77131,0,0,1-10.9807-10.74707,10.98323,10.98323,0,0,1,21.96139,0A11.07154,11.07154,0,0,1,54.44549,48.05135Zm-32.24119,0A10.77131,10.77131,0,0,1,11.2236,37.30428a10.98323,10.98323,0,0,1,21.96139,0A10.77131,10.77131,0,0,1,22.2043,48.05135ZM54.44549,19.54827a18.089,18.089,0,0,0-15.887,9.34527,18.18708,18.18708,0,0,0-16.12059-9.34528,19.20379,19.20379,0,0,0-10.9807,3.50448V8.10031A3.59315,3.59315,0,0,0,7.95276,4.59583,3.64592,3.64592,0,0,0,4.21465,8.10031V37.53792A17.90138,17.90138,0,0,0,22.2043,55.0603a18.38986,18.38986,0,0,0,16.12059-9.5789,18.30052,18.30052,0,0,0,15.887,9.5789A18.04842,18.04842,0,0,0,72.43514,37.07065C72.66877,27.49175,64.49166,19.54827,54.44549,19.54827Z"/></svg>

View File

@ -0,0 +1,100 @@
---
title: Box Case Study
case_study_styles: true
cid: caseStudies
video: https://www.youtube.com/embed/of45hYbkIZs?autoplay=1
quote: >
Kubernetes has the opportunity to be the new cloud platform. The amount of innovation that's going to come from being able to standardize on Kubernetes as a platform is incredibly exciting - more exciting than anything I've seen in the last 10 years of working on the cloud.
new_case_study_styles: true
heading_background: /images/case-studies/box/banner1.jpg
heading_title_logo: /images/box_logo.png
subheading: >
An Early Adopter Envisions a New Cloud Platform
case_study_details:
- Company: Box
- Location: Redwood City, California
- Industry: Technology
---
<h2>Challenge</h2>
<p>Founded in 2005, the enterprise content management company allows its more than 50 million users to manage content in the cloud. <a href="https://www.box.com/home">Box</a> was built primarily with bare metal inside the company's own data centers, with a monolithic PHP code base. As the company was expanding globally, it needed to focus on "how we run our workload across many different cloud infrastructures from bare metal to public cloud," says Sam Ghods, Cofounder and Services Architect of Box. "It's been a huge challenge because different clouds, especially bare metal, have very different interfaces."</p>
<h2>Solution</h2>
<p>Over the past couple of years, Box has been decomposing its infrastructure into microservices, and became an early adopter of, as well as contributor to, <a href="http://kubernetes.io/">Kubernetes</a> container orchestration. Kubernetes, Ghods says, has allowed Box's developers to "target a universal set of concepts that are portable across all clouds."</p>
<h2>Impact</h2>
<p>"Before Kubernetes," Ghods says, "our infrastructure was so antiquated it was taking us more than six months to deploy a new microservice. Today, a new microservice takes less than five days to deploy. And we're working on getting it to an hour."</p>
{{< case-studies/quote author="SAM GHOUDS, CO-FOUNDER AND SERVICES ARCHITECT OF BOX" >}}
"We looked at a lot of different options, but Kubernetes really stood out....the fact that on day one it was designed to run on bare metal just as well as Google Cloud meant that we could actually migrate to it inside of our data centers, and then use those same tools and concepts to run across public cloud providers as&nbsp;well."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
In the summer of 2014, Box was feeling the pain of a decade's worth of hardware and software infrastructure that wasn't keeping up with the company's needs.
{{< /case-studies/lead >}}
<p>A platform that allows its more than 50 million users (including governments and big businesses like <a href="https://www.ge.com/">General Electric</a>) to manage and share content in the cloud, Box was originally a <a href="http://php.net/">PHP</a> monolith of millions of lines of code built exclusively with bare metal inside of its own data centers. It had already begun to slowly chip away at the monolith, decomposing it into microservices. And "as we've been expanding into regions around the globe, and as the public cloud wars have been heating up, we've been focusing a lot more on figuring out how we run our workload across many different environments and many different cloud infrastructure providers," says Box Cofounder and Services Architect Sam Ghods. "It's been a huge challenge thus far because all these different providers, especially bare metal, have very different interfaces and ways in which you work with them."</p>
<p>Box's cloud native journey accelerated that June, when Ghods attended <a href="https://www.docker.com/events/dockercon">DockerCon</a>. The company had come to the realization that it could no longer run its applications only off bare metal, and was researching containerizing with Docker, virtualizing with OpenStack, and supporting public cloud.</p>
<p>At that conference, Google announced the release of its Kubernetes container management system, and Ghods was won over. "We looked at a lot of different options, but Kubernetes really stood out, especially because of the incredibly strong team of <a href="https://research.google.com/pubs/pub43438.html">Borg</a> veterans and the vision of having a completely infrastructure-agnostic way of being able to run cloud software," he says, referencing Google's internal container orchestrator Borg. "The fact that on day one it was designed to run on bare metal just as well as <a href="https://cloud.google.com/">Google Cloud</a> meant that we could actually migrate to it inside of our data centers, and then use those same tools and concepts to run across public cloud providers as well."</p>
<p>Another plus: Ghods liked that <a href="https://kubernetes.io/">Kubernetes</a> has a universal set of API objects like pod, service, replica set and deployment object, which created a consistent surface to build tooling against. "Even PaaS layers like <a href="https://www.openshift.com/">OpenShift</a> or <a href="http://deis.io/">Deis</a> that build on top of Kubernetes still treat those objects as first-class principles," he says. "We were excited about having these abstractions shared across the entire ecosystem, which would result in a lot more momentum than we saw in other potential solutions."</p>
<p>Box deployed Kubernetes in a cluster in a production data center just six months later. Kubernetes was then still pre-beta, on version 0.11. They started small: The very first thing Ghods's team ran on Kubernetes was a Box API checker that confirms Box is up. "That was just to write and deploy some software to get the whole pipeline functioning," he says. Next came some daemons that process jobs, which was "nice and safe because if they experienced any interruptions, we wouldn't fail synchronous incoming requests from customers."</p>
{{< case-studies/quote image="/images/case-studies/box/banner3.jpg">}}
"As we've been expanding into regions around the globe, and as the public cloud wars have been heating up, we've been focusing a lot more on figuring out how we [can have Kubernetes help] run our workload across many different environments and many different cloud infrastructure providers."
{{< /case-studies/quote >}}
<p>The first live service, which the team could route to and ask for information, was launched a few months later. At that point, Ghods says, "We were comfortable with the stability of the Kubernetes cluster. We started to port some services over, then we would increase the cluster size and port a few more, and that's ended up to about 100 servers in each data center that are dedicated purely to Kubernetes. And that's going to be expanding a lot over the next 12 months, probably to many hundreds if not thousands."</p>
<p>While observing teams who began to use Kubernetes for their microservices, "we immediately saw an uptick in the number of microservices being released," Ghods&nbsp;notes. "There was clearly a pent-up demand for a better way of building software through microservices, and the increase in agility helped our developers be more productive and make better architectural choices."</p>
{{< case-studies/lead >}}
"There was clearly a pent-up demand for a better way of building software through microservices, and the increase in agility helped our developers be more productive and make better architectural choices."
{{< /case-studies/lead >}}
<p>Ghods reflects that as early adopters, Box had a different journey from what companies experience now. "We were definitely lock step with waiting for certain things to stabilize or features to get released," he says. "In the early days we were doing a lot of contributions [to components such as kubectl apply] and waiting for Kubernetes to release each of them, and then we'd upgrade, contribute more, and go back and forth several times. The entire project took about 18 months from our first real deployment on Kubernetes to having general availability. If we did that exact same thing today, it would probably be no more than six."</p>
<p>In any case, Box didn't have to make too many modifications to Kubernetes for it to work for the company. "The vast majority of the work our team has done to implement Kubernetes at Box has been making it work inside of our existing (and often legacy) infrastructure," says Ghods, "such as upgrading our base operating system from RHEL6 to RHEL7 or integrating it into <a href="https://www.nagios.org/">Nagios</a>, our monitoring infrastructure. But overall Kubernetes has been remarkably flexible with fitting into many of our constraints, and we've been running it very successfully on our bare metal infrastructure."</p>
<p>Perhaps the bigger challenge for Box was a cultural one. "Kubernetes, and cloud native in general, represents a pretty big paradigm shift, and it's not very incremental," Ghods says. "We're essentially making this pitch that Kubernetes is going to solve everything because it does things the right way and everything is just suddenly better. But it's important to keep in mind that it's not nearly as proven as many other solutions out there. You can't say how long this or that company took to do it because there just aren't that many yet. Our team had to really fight for resources because our project was a bit of a moonshot."</p>
{{< case-studies/quote image="/images/case-studies/box/banner4.jpg">}}
"The vast majority of the work our team has done to implement Kubernetes at Box has been making it work inside of our existing [and often legacy] infrastructure....overall Kubernetes has been remarkably flexible with fitting into many of our constraints, and we've been running it very successfully on our bare metal infrastructure."
{{< /case-studies/quote >}}
<p>Having learned from experience, Ghods offers these two pieces of advice for companies going through similar challenges:</p>
{{< case-studies/lead >}}
1. Deliver early and often.
{{< /case-studies/lead >}}
<p>Service discovery was a huge problem for Box, and the team had to decide whether to build an interim solution or wait for Kubernetes to natively satisfy Box's unique requirements. After much debate, "we just started focusing on delivering something that works, and then dealing with potentially migrating to a more native solution later," Ghods says. "The above-all-else target for the team should always be to serve real production use cases on the infrastructure, no matter how trivial. This helps keep the momentum going both for the team itself and for the organizational perception of the project."</p>
{{< case-studies/lead >}}
2. Keep an open mind about what your company has to abstract away from developers and what it doesn't.
{{< /case-studies/lead >}}
<p>Early on, the team built an abstraction on top of Docker files to help ensure that images had the right security updates. This turned out to be superfluous work, since container images are considered immutable and you can easily scan them post-build to ensure they do not contain vulnerabilities. Because managing infrastructure through containerization is such a discontinuous leap, it's better to start by interacting directly with the native tools and learning their unique advantages and caveats. An abstraction should be built only after a practical need for it arises.</p>
<p>In the end, the impact has been powerful. "Before Kubernetes," Ghods says, "our infrastructure was so antiquated it was taking us more than six months to deploy a new microservice. Now a new microservice takes less than five days to deploy. And we're working on getting it to an hour. Granted, much of that six months was due to how broken our systems were, but bare metal is intrinsically a difficult platform to support unless you have a system like Kubernetes to help manage it."</p>
<p>By Ghods's estimate, Box is still several years away from his goal of being a 90-plus percent Kubernetes shop. "We're very far along on having a mission-critical, stable Kubernetes deployment that provides a lot of value," he says. "Right now about five percent of all of our compute runs on Kubernetes, and I think in the next six months we'll likely be between 20 to 50 percent. We're working hard on enabling all stateless service use cases, and will shift our focus to stateful services after that."</p>
{{< case-studies/quote >}}
"Ghods predicts that Kubernetes has the opportunity to be the new cloud platform. '...because it's a never-before-seen level of automation and intelligence surrounding infrastructure that is portable and agnostic to every way you can run your infrastructure.'"
{{< /case-studies/quote >}}
<p>In fact, that's what he envisions across the industry: Ghods predicts that Kubernetes has the opportunity to be the new cloud platform. Kubernetes provides an API consistent across different cloud platforms including bare metal, and "I don't think people have seen the full potential of what's possible when you can program against one single interface," he says. "The same way <a href="https://aws.amazon.com/">AWS</a> changed infrastructure so that you don't have to think about servers or cabinets or networking equipment anymore, Kubernetes enables you to focus exclusively on the containers that you're running, which is pretty exciting. That's the vision."</p>
<p>Ghods points to projects that are already in development or recently released for Kubernetes as a cloud platform: cluster federation, the Dashboard UI, and <a href="https://coreos.com/">CoreOS</a>'s etcd operator. "I honestly believe it's the most exciting thing I've seen in cloud infrastructure," he says, "because it's a never-before-seen level of automation and intelligence surrounding infrastructure that is portable and agnostic to every way you can run your infrastructure."</p>
<p>Box, with its early decision to use bare metal, embarked on its Kubernetes journey out of necessity. But Ghods says that even if companies don't have to be agnostic about cloud providers today, Kubernetes may soon become the industry standard, as more and more tooling and extensions are built around the API.</p>
<p>"The same way it doesn't make sense to deviate from Linux because it's such a standard," Ghods says, "I think Kubernetes is going down the same path. It is still early days—the documentation still needs work and the user experience for writing and publishing specs to the Kubernetes clusters is still rough. When you're on the cutting edge you can expect to bleed a little. But the bottom line is, this is where the industry is going. Three to five years from now it's really going to be shocking if you run your infrastructure any other way."</p>

Binary file not shown.

Binary file not shown.

View File

@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fff;}.cls-2{fill:transparent;}.cls-3{fill:#192534;}.cls-4{mask:url(#mask);}.cls-5{mask:url(#mask-2-2);}.cls-6{mask:url(#mask-3);}</style><mask id="mask" x="14.75095" y="70.72244" width="32.48851" height="10.3304" maskUnits="userSpaceOnUse"><g id="mask-2"><polygon id="path-1" class="cls-1" points="14.751 81.053 14.751 70.722 47.239 70.722 47.239 81.053 14.751 81.053"/></g></mask><mask id="mask-2-2" x="14.75095" y="60.57418" width="32.48851" height="10.3304" maskUnits="userSpaceOnUse"><g id="mask-4"><polygon id="path-3" class="cls-1" points="14.751 70.905 14.751 60.574 47.239 60.574 47.239 70.905 14.751 70.905"/></g></mask><mask id="mask-3" x="14.75079" y="45.87467" width="32.48853" height="14.92678" maskUnits="userSpaceOnUse"><g id="mask-6"><polygon id="path-5" class="cls-1" points="14.751 60.801 14.751 45.875 47.239 45.875 47.239 60.801 14.751 60.801"/></g></mask></defs><title>kubernetes.io-logos</title><rect class="cls-2" x="-3.55202" y="-3.16104" width="223.25536" height="134.51136"/><g id="Symbols"><g id="Mobile-Header"><g id="Group-4"><g id="Mobile-Logo"><g id="Group"><g id="Group-3"><path id="Fill-1" class="cls-3" d="M190.28379,79.32487h-6.37906V54.34018h6.37906v3.41436a10.33414,10.33414,0,0,1,7.73637-4.03515v6.41449a8.21414,8.21414,0,0,0-1.75906-.15516c-2.10911,0-4.92212,1.24163-5.97731,2.84489ZM173.10344,64.47949a5.69574,5.69574,0,0,0-5.97731-5.53592c-3.96771,0-5.67637,3.05266-5.97732,5.53592ZM167.5278,79.94566c-7.38379,0-12.9595-5.12161-12.9595-13.13943,0-7.24181,5.174-13.08684,12.55783-13.08684,7.234,0,12.15607,5.58723,12.15607,13.76022v1.4481H161.24952c.40168,3.15535,2.86336,5.79371,6.98219,5.79371a10.15786,10.15786,0,0,0,6.47972-2.48319l2.813,4.24168c-2.41134,2.27549-6.22917,3.46575-9.99665,3.46575Zm-19.24929-.62079h-6.43068V60.08257h-4.018V54.34018h4.018V52.99592c0-5.32809,3.31543-8.68991,8.18843-8.68991a8.489,8.489,0,0,1,6.32989,2.32674l-2.41135,3.88a3.50681,3.50681,0,0,0-2.66312-1.03509c-1.75782,0-3.01317,1.19026-3.01317,3.51828v1.34426h4.92207v5.74239h-4.92207Zm-16.426,0H125.4218V60.08257h-4.0194V54.34018h4.0194V52.99592c0-5.32809,3.31541-8.68991,8.18843-8.68991a8.48418,8.48418,0,0,1,6.3286,2.32674l-2.41012,3.88a3.50683,3.50683,0,0,0-2.66312-1.03509c-1.75782,0-3.01318,1.19026-3.01318,3.51828v1.34426h4.92212v5.74239h-4.92212v19.2423Zm-14.43053,0h-6.38022V76.16952a11.21263,11.21263,0,0,1-8.53978,3.77613c-5.32385,0-7.83586-3.00012-7.83586-7.86259V54.34018h6.379v15.157c0,3.46574,1.75911,4.60345,4.4714,4.60345a7.07632,7.07632,0,0,0,5.52531-2.8449V54.34017H117.422v24.9847ZM71.87264,71.307a7.0348,7.0348,0,0,0,5.4762,2.79359c3.7171,0,6.17877-2.8975,6.17877-7.24181,0-4.34567-2.46168-7.29441-6.17877-7.29441a6.9786,6.9786,0,0,0-5.47621,2.8975V71.307Zm0,8.01788h-6.379V43.734h6.379V57.54807a9.2539,9.2539,0,0,1,7.48587-3.82868c6.17748,0,10.74961,4.96639,10.74961,13.13944,0,8.32821-4.62118,13.08683-10.74961,13.08683a9.33043,9.33043,0,0,1-7.48587-3.82745v3.20666Z"/></g><g id="Group-6"><g class="cls-4"><path id="Fill-4" class="cls-3" d="M47.00136,72.68049l-3.69521-1.83918a1.4583,1.4583,0,0,0-1.15329,0L31.57117,76.108a1.455,1.455,0,0,1-1.15194,0L19.83748,70.84131a1.45828,1.45828,0,0,0-1.15328,0L14.989,72.68049c-.31738.15841-.31738.416,0,.57308L30.41923,80.934a1.45553,1.45553,0,0,0,1.15194,0l15.43019-7.68046c.31738-.15706.31738-.41467,0-.57308"/></g></g><g id="Group-9"><g class="cls-5"><path id="Fill-7" class="cls-3" 
d="M47.00136,62.53223l-3.69521-1.83917a1.45872,1.45872,0,0,0-1.15329,0L31.57117,65.95984a1.46694,1.46694,0,0,1-1.15194,0L19.83748,60.69306a1.4587,1.4587,0,0,0-1.15328,0L14.989,62.53223c-.31738.15842-.31738.416,0,.5745l15.43024,7.67905a1.45553,1.45553,0,0,0,1.15194,0l15.43019-7.67905c.31738-.15848.31738-.41608,0-.5745"/></g></g><g id="Group-12"><g class="cls-6"><path id="Fill-10" class="cls-3" d="M14.98887,53.6031l15.43018,7.08965a1.58033,1.58033,0,0,0,1.15194,0L47.00123,53.6031c.31744-.14628.31744-.38408,0-.52907L31.571,45.98438a1.56676,1.56676,0,0,0-1.15194,0L14.98887,53.074c-.31744.145-.31744.38279,0,.52907"/></g></g></g></g></g></g></g></svg>

Binary file not shown.

View File

@ -0,0 +1,83 @@
---
title: Buffer Case Study
case_study_styles: true
cid: caseStudies
new_case_study_styles: true
heading_background: /images/case-studies/buffer/banner3.jpg
heading_title_logo: /images/buffer.png
subheading: >
Making Deployments Easy for a Small, Distributed Team
case_study_details:
- Company: Buffer
- Location: Around the World
- Industry: Social Media Technology
---
<h2>Challenge</h2>
<p>With a small but fully distributed team of 80 working across almost a dozen time zones, Buffer—which offers social media management to agencies and marketers—was looking to solve its "classic monolithic code base problem," says Architect Dan Farrelly. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary."</p>
<h2>Solution</h2>
<p>Embracing containerization, Buffer moved its infrastructure from Amazon Web Services' Elastic Beanstalk to Docker on AWS, orchestrated with Kubernetes.</p>
<h2>Impact</h2>
<p>The new system "leveled up our ability with deployment and rolling out new changes," says Farrelly. "Building something on your computer and knowing that it's going to work has shortened things up a lot. Our feedback cycles are a lot faster now too."</p>
{{< case-studies/quote author="DAN FARRELLY, BUFFER ARCHITECT" >}}
"It's amazing that we can use the Kubernetes solution off the shelf with our team. And it just keeps getting better. Before we even know that we need something, it's there in the next release or it's coming in the next few months."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
Dan Farrelly uses a carpentry analogy to explain the problem his company, <a href="https://buffer.com">Buffer</a>, began having as its team of developers grew over the past few years.
{{< /case-studies/lead >}}
<p>"If you're building a table by yourself, it's fine," the company's architect says. "If you bring in a second person to work on the table, maybe that person can start sanding the legs while you're sanding the top. But when you bring a third or fourth person in, someone should probably work on a different table." Needing to work on more and more different tables led Buffer on a path toward microservices and containerization made possible by Kubernetes.</p>
<p>Since around 2012, Buffer had already been using <a href="https://aws.amazon.com/elasticbeanstalk/">Elastic Beanstalk</a>, the orchestration service for deploying infrastructure offered by <a href="https://aws.amazon.com">Amazon Web Services</a>. "We were deploying a single monolithic <a href="http://php.net/manual/en/intro-whatis.php">PHP</a> application, and it was the same application across five or six environments," says Farrelly. "We were very much a product-driven company. It was all about shipping new features quickly and getting things out the door, and if something was not broken, we didn't spend too much time on it. If things were getting a little bit slow, we'd maybe use a faster server or just scale up one instance, and it would be good enough. We'd move on."</p>
<p>But things came to a head in 2016. With the growing number of committers on staff, Farrelly and Buffer's then-CTO, Sunil Sadasivan, decided it was time to re-architect and rethink their infrastructure. "It was a classic monolithic code base problem," says Farrelly.<br><br>Some of the company's team was already successfully using <a href="https://www.docker.com">Docker</a> in their development environment, but the only application running on Docker in production was a marketing website that didn't see real user traffic. They wanted to go further with Docker, and the next step was looking at options for orchestration.</p>
{{< case-studies/quote image="/images/case-studies/buffer/banner1.jpg" >}}
And all the things Kubernetes did well suited Buffer's needs. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary," says Farrelly. "We quickly used some scripts to set up a couple of test clusters, we built some small proof-of-concept applications in containers, and we deployed things within an hour. We had very little experience in running containers in production. It was amazing how quickly we could get a handle on it [Kubernetes]."
{{< /case-studies/quote >}}
<p>First they considered <a href="https://mesosphere.com">Mesosphere</a>, <a href="https://dcos.io">DC/OS</a> and <a href="https://aws.amazon.com/ecs/">Amazon Elastic Container Service</a> (which their data systems team was already using for some data pipeline jobs). While they were impressed by these offerings, they ultimately went with Kubernetes. "We run on AWS still, so spinning up, creating services and creating load balancers on demand for us without having to configure them manually was a great way for our team to get into this," says Farrelly. "We didn't need to figure out how to configure this or that, especially coming from a former Elastic Beanstalk environment that gave us an automatically-configured load balancer. I really liked Kubernetes' controls of the command line. It just took care of ports. It was a lot more flexible. Kubernetes was designed for doing what it does, so it does it very well."</p>
<p>And all the things Kubernetes did well suited Buffer's needs. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary," says Farrelly. "We quickly used some scripts to set up a couple of test clusters, we built some small proof-of-concept applications in containers, and we deployed things within an hour. We had very little experience in running containers in production. It was amazing how quickly we could get a handle on it [Kubernetes]."</p>
<p>Above all, it provided a powerful solution for one of the company's most distinguishing characteristics: their remote team that's spread across a dozen different time zones. "The people with deep knowledge of our infrastructure live in time zones different from our peak traffic time zones, and most of our product engineers live in other places," says Farrelly. "So we really wanted something where anybody could get a grasp of the system early on and utilize it, and not have to worry that the deploy engineer is asleep. Otherwise people would sit around for 12 to 24 hours for something. It's been really cool to see people moving much faster."</p>
<p>With a relatively small engineering team—just 25 people, and only a handful working on infrastructure, with the majority front-end developers—Buffer needed "something robust for them to deploy whatever they wanted," says Farrelly. Before, "it was only a couple of people who knew how to set up everything in the old way. With this system, it was easy to review documentation and get something out extremely quickly. It lowers the bar for us to get everything in production. We don't have the big team to build all these tools or manage the infrastructure like other larger companies might."</p>
{{< case-studies/quote image="/images/case-studies/buffer/banner4.jpg" >}}
"In our old way of working, the feedback loop was a lot longer, and it was delicate because if you deployed something, the risk was high to potentially break something else," Farrelly says. "With the kind of deploys that we built around Kubernetes, we were able to detect bugs and fix them, and get them deployed super fast. The second someone is fixing [a bug], it's out the door."
{{< /case-studies/quote >}}
<p>To help with this, Buffer developers wrote a deploy bot that wraps the Kubernetes deploy process and can be used by every team. "Before, our data analysts would update, say, a <a href="https://www.python.org">Python</a> analysis script and have to wait for the lead on that team to click the button and deploy it," Farrelly explains. "Now our data analysts can make a change, enter a <a href="https://slack.com">Slack</a> command, '/deploy,' and it goes out instantly. They don't need to wait on these slow turnaround times. They don't even know where it's running; it doesn't matter."</p>
<p>One of the first applications the team built from scratch using Kubernetes was a new image resizing service. As a social media management tool that allows marketing teams to collaborate on posts and send updates across multiple social media profiles and networks, Buffer has to be able to resize photographs as needed to meet the varying limitations of size and format posed by different social networks. "We always had these hacked together solutions," says Farrelly.</p>
<p>To create this new service, one of the senior product engineers was assigned to learn Docker and Kubernetes, then build the service, test it, deploy it and monitor it—which he was able to do relatively quickly. "In our old way of working, the feedback loop was a lot longer, and it was delicate because if you deployed something, the risk was high to potentially break something else," Farrelly says. "With the kind of deploys that we built around Kubernetes, we were able to detect bugs and fix them, and get them deployed super fast. The second someone is fixing [a bug], it's out the door."</p>
<p>Plus, unlike with their old system, they could scale things horizontally with one command. "As we rolled it out," Farrelly says, "we could anticipate and just click a button. This allowed us to deal with the demand that our users were placing on the system and easily scale it to handle it."</p>
<p>Another thing they weren't able to do before was a canary deploy. This new capability "made us so much more confident in deploying big changes," says Farrelly. "Before, it took a lot of testing, which is still good, but it was also a lot of 'fingers crossed.' And this is something that gets run 800,000 times a day, the core of our business. If it doesn't work, our business doesn't work. In a Kubernetes world, I can do a canary deploy to test it for 1 percent and I can shut it down very quickly if it isn't working. This has leveled up our ability to deploy and roll out new changes quickly while reducing risk."</p>
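<p>The 1 percent canary Farrelly mentions is the standard Kubernetes pattern of running a small canary Deployment next to the stable one behind a shared Service. The manifest below is a generic sketch (the image name and labels are hypothetical, not Buffer's actual setup): with 99 stable replicas and 1 canary replica sharing the same app label, roughly 1 percent of requests hit the new version, and deleting the canary Deployment rolls it back almost instantly.</p>

```yaml
# Generic canary sketch, not Buffer's real manifests: the Service selects on
# the shared "app" label, so traffic splits across stable and canary pods in
# proportion to their replica counts (99:1 here, roughly 1% to the canary).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: image-resizer-canary
spec:
  replicas: 1              # alongside a 99-replica stable Deployment
  selector:
    matchLabels:
      app: image-resizer
      track: canary
  template:
    metadata:
      labels:
        app: image-resizer
        track: canary
    spec:
      containers:
        - name: app
          image: example/image-resizer:candidate   # hypothetical image tag
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: image-resizer
spec:
  selector:
    app: image-resizer     # matches both the stable and the canary pods
  ports:
    - port: 80
      targetPort: 8080
```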
{{< case-studies/quote >}}
"If you want to run containers in production, with nearly the power that Google uses internally, this [Kubernetes] is a great way to do that," Farrelly says. "We're a relatively small team that's actually running Kubernetes, and we've never run anything like it before. So it's more approachable than you might think. That's the one big thing that I tell people who are experimenting with it. Pick a couple of things, roll it out, kick the tires on this for a couple of months and see how much it can handle. You start learning a lot this way."
{{< /case-studies/quote >}}
<p>By October 2016, 54 percent of Buffer's traffic was going through their Kubernetes cluster. "There's a lot of our legacy functionality that still runs alright, and those parts might move to Kubernetes or stay in our old setup forever," says Farrelly. But the company made the commitment at that time that going forward, "all new development, all new features, will be running on Kubernetes."</p>
<p>The plan for 2017 is to move all the legacy applications to a new Kubernetes cluster, and run everything they've pulled out of their old infrastructure, plus the new services they're developing in Kubernetes, on another cluster. "I want to bring all the benefits that we've seen on our early services to everyone on the team," says Farrelly.</p>
{{< case-studies/lead >}}
For Buffer's engineers, it's an exciting process. "Every time we're deploying a new service, we need to figure out: OK, what's the architecture? How do these services communicate? What's the best way to build this service?" Farrelly says. "And then we use the different features that Kubernetes has to glue all the pieces together. It's enabling us to experiment as we're learning how to design a service-oriented architecture. Before, we just wouldn't have been able to do it. This is actually giving us a blank white board so we can do whatever we want on it."
{{< /case-studies/lead >}}
<p>Part of that blank slate is the flexibility that Kubernetes offers should the time come when Buffer may want or need to change its cloud. "It's cloud agnostic so maybe one day we could switch to Google or somewhere else," Farrelly says. "We're very deep in Amazon but it's nice to know we could move away if we need to."</p>
<p>At this point, the team at Buffer can't imagine running their infrastructure any other way—and they're happy to spread the word. "If you want to run containers in production, with nearly the power that Google uses internally, this [Kubernetes] is a great way to do that," Farrelly says. "We're a relatively small team that's actually running Kubernetes, and we've never run anything like it before. So it's more approachable than you might think. That's the one big thing that I tell people who are experimenting with it. Pick a couple of things, roll it out, kick the tires on this for a couple of months and see how much it can handle. You start learning a lot this way."</p>

Binary file not shown.


File diff suppressed because one or more lines are too long


View File

@@ -0,0 +1,62 @@
---
title: Capital One Case Study
case_study_styles: true
cid: caseStudies
new_case_study_styles: true
heading_background: /images/case-studies/capitalone/banner1.jpg
heading_title_logo: /images/capitalone-logo.png
subheading: >
Supporting Fast Decisioning Applications with Kubernetes
case_study_details:
- Company: Capital One
- Location: McLean, Virginia
- Industry: Retail banking
---
<h2>Challenge</h2>
<p>The team set out to build a provisioning platform for <a href="https://www.capitalone.com/">Capital One</a> applications deployed on AWS that use streaming, big-data decisioning, and machine learning. One of these applications handles millions of transactions a day; some deal with critical functions like fraud detection and credit decisioning. The key considerations: resilience and speed—as well as full rehydration of the cluster from base AMIs.</p>
<h2>Solution</h2>
<p>The decision to run <a href="https://kubernetes.io/">Kubernetes</a> "is very strategic for us," says John Swift, Senior Director Software Engineering. "We use Kubernetes as a substrate or an operating system, if you will. There's a degree of affinity in our product development."</p>
<h2>Impact</h2>
<p>"Kubernetes is a significant productivity multiplier," says Lead Software Engineer Keith Gasser, adding that to run the platform without Kubernetes would "easily see our costs triple, quadruple what they are now for the amount of pure AWS expense." Time to market has been improved as well: "Now, a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer." Deployments increased by several orders of magnitude. Plus, the rehydration/cluster-rebuild process, which took a significant part of a day to do manually, now takes a couple hours with Kubernetes automation and declarative configuration.</p>
{{< case-studies/quote author="Jamil Jadallah, Scrum Master" >}}
<iframe width="560" height="315" src="https://www.youtube.com/embed/UHVW01ksg-s" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
<br>
"With the scalability, the management, the coordination, Kubernetes really empowers us and gives us more time back than we had before."
{{< /case-studies/quote >}}
<p>As a top 10 U.S. retail bank, Capital One has applications that handle millions of transactions a day. Big-data decisioning—for fraud detection, credit approvals and beyond—is core to the business. To support the teams that build applications with those functions for the bank, the cloud team led by Senior Director Software Engineering John Swift embraced Kubernetes for its provisioning platform. "Kubernetes and its entire ecosystem are very strategic for us," says Swift. "We use Kubernetes as a substrate or an operating system, if you will. There's a degree of affinity in our product development."</p>
<p>Almost two years ago, the team embarked on this journey by first working with Docker. Then came Kubernetes. "We wanted to put streaming services into Kubernetes as one feature of the workloads for fast decisioning, and to be able to do batch alongside it," says Lead Software Engineer Keith Gasser. "Once the data is streamed and batched, there are so many tool sets in <a href="https://flink.apache.org/">Flink</a> that we use for decisioning. We want to provide the tools in the same ecosystem, in a consistent way, rather than have a large custom snowflake ecosystem where every tool needs its own custom deployment. Kubernetes gives us the ability to bring all of these together, so the richness of the open source and even the license community dealing with big data can be corralled."</p>
{{< case-studies/quote image="/images/case-studies/capitalone/banner3.jpg" >}}
"We want to provide the tools in the same ecosystem, in a consistent way, rather than have a large custom snowflake ecosystem where every tool needs its own custom deployment. Kubernetes gives us the ability to bring all of these together, so the richness of the open source and even the license community dealing with big data can be corralled."
{{< /case-studies/quote >}}
<p>In this first year, the impact has already been great. "Time to market is really huge for us," says Gasser. "Especially with fraud, you have to be very nimble in the way you respond to threats in the marketplace—being able to add and push new rules, detect new patterns of behavior, detect anomalies in account and transaction flows." With Kubernetes, "a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer. Kubernetes is a manifold productivity multiplier."</p>
<p>Teams now have the tools to be autonomous in their deployments, and as a result, deployments have increased by two orders of magnitude. "And that was with just seven dedicated resources, without needing a whole group sitting there watching everything," says Scrum Master Jamil Jadallah. "That's a huge cost savings. With the scalability, the management, the coordination, Kubernetes really empowers us and gives us more time back than we had before."</p>
{{< case-studies/quote image="/images/case-studies/capitalone/banner4.jpg" >}}
With Kubernetes, "a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer. Kubernetes is a manifold productivity multiplier."
{{< /case-studies/quote >}}
<p>Kubernetes has also been a great time-saver for Capital One's required periodic "rehydration" of clusters from base AMIs. To minimize the attack vulnerability profile for applications in the cloud, "Our entire clusters get rebuilt from scratch periodically, with new fresh instances and virtual server images that are patched with the latest and greatest security patches," says Gasser. This process used to take the better part of a day, plus dedicated personnel, to do manually. It's now a quick Kubernetes job.</p>
<p>Savings extend to both capital and operating expenses. "It takes very little to get into Kubernetes because it's all open source," Gasser points out. "We went the DIY route for building our cluster, and we definitely like the flexibility of being able to embrace the latest from the community immediately without waiting for a downstream company to do it. There's capex related to those licenses that we don't have to pay for. Moreover, there's capex savings for us from some of the proprietary software that we get to sunset in our particular domain. So that goes onto our ledger in a positive way as well." (Some of those open source technologies include Prometheus, Fluentd, gRPC, Istio, CNI, and Envoy.)</p>
{{< case-studies/quote >}}
"If we had to do all of this without Kubernetes, on underlying cloud services, I could easily see our costs triple, quadruple what they are now for the amount of pure AWS expense. That doesn't account for personnel to deploy and maintain all the additional infrastructure."
{{< /case-studies/quote >}}
<p>And on the opex side, Gasser says, the savings are high. "We run dozens of services, we have scores of pods, many daemon sets, and since we're data-driven, we take advantage of EBS-backed volume claims for all of our stateful services. If we had to do all of this without Kubernetes, on underlying cloud services, I could easily see our costs triple, quadruple what they are now for the amount of pure AWS expense. That doesn't account for personnel to deploy and maintain all the additional infrastructure."</p>
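<p>The "EBS-backed volume claims" Gasser mentions map onto a standard Kubernetes object: a PersistentVolumeClaim bound to a StorageClass provisioned from EBS. The following is a minimal, hypothetical sketch of such a claim; the claim name, size, and the "gp2" StorageClass are illustrative assumptions rather than Capital One's actual settings.</p>

```yaml
# Hypothetical EBS-backed claim for a stateful service; assumes the cluster has a
# StorageClass named "gp2" backed by the AWS EBS provisioner.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: decisioning-data
spec:
  accessModes:
    - ReadWriteOnce        # an EBS volume attaches to one node at a time
  storageClassName: gp2
  resources:
    requests:
      storage: 100Gi       # illustrative size
```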
<p>The team is confident that the benefits will continue to multiply—without a steep learning curve for the engineers being exposed to the new technology. "As we onboard additional tenants in this ecosystem, I think the need for folks to understand Kubernetes may not necessarily go up. In fact, I think it goes down, and that's good," says Gasser. "Because that really demonstrates the scalability of the technology. You start to reap the benefits, and they can concentrate on all the features they need to build for great decisioning in the business— fraud decisions, credit decisions—and not have to worry about, 'Is my AWS server broken? Is my pod not running?'"</p>

Binary file not shown.


View File

@@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 52.34 57.73"><defs><style>.cls-1{fill:#3b97d2;}.cls-2{fill:#3b97d3;}</style></defs><title>cern</title><path class="cls-1" d="M47.29,10.36s-18.61-.15-24.54-.14c-.93,0-1.55.07-1.75.08A14,14,0,0,0,8.13,24.21a28.07,28.07,0,0,0,1.42,7.43c1.16,4.1,2.53,8.79,2.53,8.79h.74l-2.75-9.27h0a13.7,13.7,0,0,0,12,7,13.1,13.1,0,0,0,7.57-2.27l0,0-12.22,13h1l11.5-12.23A33.36,33.36,0,0,0,33.76,32,13.09,13.09,0,0,0,36,24.6h0L41.4,48.94h.77S37.68,29,36.81,24.85s-1.87-6.64-3-8.09a6.22,6.22,0,0,0-1.24-.51A13.17,13.17,0,1,1,22.14,11a13.21,13.21,0,0,1,8.55,3.14,8,8,0,0,1,1.4.33h0a13.84,13.84,0,0,0-6-3.53v0L47.29,11Z"/><path class="cls-1" d="M43.34,13.69,42.06,26.86h0a14.35,14.35,0,0,0-3-7.33,14,14,0,0,0-21.39-.47l.59.46A13.19,13.19,0,0,1,39.17,20.9a14.57,14.57,0,0,1,2.17,9.72,13.31,13.31,0,0,1-1.78,4.93l.22,1c1.56-2.42,2.36-4.58,3-10.1.48-4.23,1.29-12.74,1.29-12.74Z"/><path class="cls-2" d="M16.08,22.1l-.38.49a2.34,2.34,0,0,0-1.65-.7,2.2,2.2,0,1,0,0,4.4,2.43,2.43,0,0,0,1.65-.67l.39.44a3.08,3.08,0,0,1-2.07.86,2.81,2.81,0,1,1,0-5.63A3,3,0,0,1,16.08,22.1Z"/><path class="cls-2" d="M17.87,21.92v1.83h2.81v.61H17.87v1.91h3.25v.61H17.23V21.3H21v.61Z"/><path class="cls-2" d="M24.92,25.11l-.37,0H23.06v1.76h-.64V21.3h2.12c1.39,0,2.2.69,2.2,1.87A1.75,1.75,0,0,1,25.49,25l1.31,1.9h-.73Zm-.37-.6c1,0,1.59-.45,1.59-1.32s-.57-1.28-1.59-1.28H23.06v2.59Z"/><path class="cls-2" d="M32.17,26.88,28.82,22.4v4.48h-.64V21.3h.66l3.35,4.49V21.3h.63v5.58Z"/></svg>


View File

@@ -0,0 +1,81 @@
---
title: CERN Case Study
linkTitle: cern
case_study_styles: true
cid: caseStudies
logo: cern_featured_logo.png
new_case_study_styles: true
heading_background: /images/case-studies/cern/banner1.jpg
heading_title_text: CERN
subheading: >
CERN: Processing Petabytes of Data More Efficiently with Kubernetes
case_study_details:
- Company: CERN
- Location: Geneva, Switzerland
- Industry: Particle physics research
---
<h2>Challenge</h2>
<p>At CERN, the European Organization for Nuclear Research, physicists conduct experiments to learn about fundamental science. In its particle accelerators, "we accelerate protons to very high energy, close to the speed of light, and we make the two beams of protons collide," says CERN Software Engineer Ricardo Rocha. "The end result is a lot of data that we have to process." CERN currently stores 330 petabytes of data in its data centers, and an upgrade of its accelerators expected in the next few years will drive that number up by 10x. Additionally, the organization experiences extreme peaks in its workloads during periods prior to big conferences, and needs its infrastructure to scale to those peaks. "We want to have a more hybrid infrastructure, where we have our on premise infrastructure but can make use of public clouds temporarily when these peaks come up," says Rocha. "We've been looking to new technologies that can help improve our efficiency in our infrastructure so that we can dedicate more of our resources to the actual processing of the data."</p>
<h2>Solution</h2>
<p>CERN's technology team embraced containerization and cloud native practices, choosing Kubernetes for orchestration, Helm for deployment, Prometheus for monitoring, and CoreDNS for DNS resolution inside the clusters. Kubernetes federation has allowed the organization to run some production workloads both on premise and in public clouds.</p>
<h2>Impact</h2>
<p>"Kubernetes gives us the full automation of the application," says Rocha. "It comes with built-in monitoring and logging for all the applications and the workloads that deploy in Kubernetes. This is a massive simplification of our current deployments." The time to deploy a new cluster for a complex distributed storage system has gone from more than 3 hours to less than 15 minutes. Adding new nodes to a cluster used to take more than an hour; now it takes less than 2 minutes. The time it takes to autoscale replicas for system components has decreased from more than an hour to less than 2 minutes. Initially, virtualization gave 20% overhead, but with tuning this was reduced to ~5%. Moving to Kubernetes on bare metal would get this to 0%. Not having to host virtual machines is expected to also get 10% of memory capacity back.</p>
{{< case-studies/quote author="Ricardo Rocha, Software Engineer, CERN" >}}
"Kubernetes is something we can relate to very much because it's naturally distributed. What it gives us is a uniform API across heterogeneous resources to define our workloads. This is something we struggled with a lot in the past when we want to expand our resources outside our infrastructure."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
With a mission of researching fundamental science, and a stable of extremely large machines, the European Organization for Nuclear Research (CERN) operates at what can only be described as hyperscale.
{{< /case-studies/lead >}}
<p>Experiments are conducted in particle accelerators, the biggest of which is 27 kilometers in circumference. "We accelerate protons to very high energy, to close to the speed of light, and we make the two beams of protons collide in well-defined places," says CERN Software Engineer Ricardo Rocha. "We build experiments around these places where we do the collisions. The end result is a lot of data that we have to process."</p>
<p>And he does mean a lot: CERN currently stores and processes 330 petabytes of data—gathered from 4,300 projects and 3,300 users—using 10,000 hypervisors and 320,000 cores in its data centers.</p>
<p>Over the years, the CERN technology department has built a large computing infrastructure, based on OpenStack private clouds, to help the organization's physicists analyze and treat all this data. The organization experiences extreme peaks in its workloads. "Very often, just before conferences, physicists want to do an enormous amount of extra analysis to publish their papers, and we have to scale to these peaks, which means overcommitting resources in some cases," says Rocha. "We want to have a more hybrid infrastructure, where we have our on premise infrastructure but can make use of public clouds temporarily when these peaks come up."</p>
<p>Additionally, a few years ago, CERN announced that it would be doing a big upgrade of its accelerators, which will mean a ten-fold increase in the amount of data that can be collected. "So we've been looking to new technologies that can help improve our efficiency in our infrastructure, so that we can dedicate more of our resources to the actual processing of the data," says Rocha.</p>
{{< case-studies/quote
image="/images/case-studies/cern/banner3.jpg"
author="Ricardo Rocha, Software Engineer, CERN"
>}}
"Before, the tendency was always: 'I need this, I get a couple of developers, and I implement it.' Right now it's 'I need this, I'm sure other people also need this, so I'll go and ask around.' The CNCF is a good source because there's a very large catalog of applications available. It's very hard right now to justify developing a new product in-house. There is really no real reason to keep doing that. It's much easier for us to try it out, and if we see it's a good solution, we try to reach out to the community and start working with that community."
{{< /case-studies/quote >}}
<p>Rocha's team started looking at Kubernetes and containerization in the second half of 2015. "We've been using distributed infrastructures for decades now," says Rocha. "Kubernetes is something we can relate to very much because it's naturally distributed. What it gives us is a uniform API across heterogeneous resources to define our workloads. This is something we struggled with a lot in the past when we want to expand our resources outside our infrastructure."</p>
<p>The team created a prototype system for users to deploy their own Kubernetes cluster in CERN's infrastructure, and spent six months validating the use cases and making sure that Kubernetes integrated with CERN's internal systems. The main use case is batch workloads, which represent more than 80% of resource usage at CERN. (One project alone, which does most of the physics data processing and analysis, consumes 250,000 cores.) "This is something where the investment in simplification of the deployment, logging, and monitoring pays off very quickly," says Rocha. Other use cases include Spark-based data analysis and machine learning to improve physics analysis. "The fact that most of these technologies integrate very well with Kubernetes makes our lives easier," he adds.</p>
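<p>Batch work like this maps naturally onto the Kubernetes Job API, which runs pods to completion and retries them on failure. The manifest below is a minimal, hypothetical illustration of that pattern rather than one of CERN's actual workloads; the image, command, and parallelism figures are placeholders.</p>

```yaml
# Hypothetical batch step expressed as a Kubernetes Job: run 100 pod completions,
# at most 10 in parallel, and never restart a failed pod in place.
apiVersion: batch/v1
kind: Job
metadata:
  name: physics-analysis-step
spec:
  completions: 100
  parallelism: 10
  backoffLimit: 4            # retry failed pods a few times before giving up
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: analysis
        image: registry.example.org/analysis:latest   # placeholder image
        command: ["/opt/run-analysis", "--input", "/data/input"]
```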
<p>The system went into production in October 2016, also using Helm for deployment, Prometheus for monitoring, and CoreDNS for DNS resolution within the cluster. "One thing that Kubernetes gives us is the full automation of the application," says Rocha. "So it comes with built-in monitoring and logging for all the applications and the workloads that deploy in Kubernetes. This is a massive simplification of our current deployments." The time to deploy a new cluster for a complex distributed storage system has gone from more than 3 hours to less than 15 minutes.</p>
<p>Adding new nodes to a cluster used to take more than an hour; now it takes less than 2 minutes. The time it takes to autoscale replicas for system components has decreased from more than an hour to less than 2 minutes.</p>
{{< case-studies/quote
image="/images/case-studies/cern/banner4.jpg"
author="Ricardo Rocha, Software Engineer, CERN"
>}}
"With Kubernetes, there's a well-established technology and a big community that we can contribute to. It allows us to do our physics analysis without having to focus so much on the lower level software. This is just exciting. We are looking forward to keep contributing to the community and collaborating with everyone."
{{< /case-studies/quote >}}
<p>Rocha points out that the metric used in the particle accelerators may be events per second, but in reality "it's how fast and how much of the data we can process that actually counts." And efficiency has certainly been improved with Kubernetes. Initially, virtualization gave 20% overhead, but with tuning this was reduced to ~5%. Moving to Kubernetes on bare metal would get this to 0%. Not having to host virtual machines is also expected to free up 10% of memory capacity.</p>
<p>Kubernetes federation, which CERN has been using for a portion of its production workloads since February 2018, has allowed the organization to adopt a hybrid cloud strategy. And it was remarkably simple to do. "We had a summer intern working on federation," says Rocha. "For many years, I've been developing distributed computing software, which took like a decade and a lot of effort from a lot of people to stabilize and make sure it works. And for our intern, in a couple of days he was able to demo to me and my team that we had a cluster at CERN and a few clusters outside in public clouds that were federated together and that we could submit workloads to. This was shocking for us. It really shows the power of using this kind of well-established technologies."</p>
<p>With such results, adoption of Kubernetes has made rapid gains at CERN, and the team is eager to give back to the community. "If we look back into the '90s and early 2000s, there were not a lot of companies focusing on systems that have to scale to this kind of size, storing petabytes of data, analyzing petabytes of data," says Rocha. "The fact that Kubernetes is supported by such a wide community and different backgrounds, it motivates us to contribute back."</p>
{{< case-studies/quote author="Ricardo Rocha, Software Engineer, CERN" >}}
"This means that the physicist can build his or her analysis and publish it in a repository, share it with colleagues, and in 10 years redo the same analysis with new data. If we looked back even 10 years, this was just a dream."
{{< /case-studies/quote >}}
<p>These new technologies aren't just enabling infrastructure improvements. CERN also uses the Kubernetes-based <a href="https://github.com/recast-hep">Reana/Recast</a> platform for reusable analysis, which is "the ability to define physics analysis as a set of workflows that are fully containerized in one single entry point," says Rocha. "This means that the physicist can build his or her analysis and publish it in a repository, share it with colleagues, and in 10 years redo the same analysis with new data. If we looked back even 10 years, this was just a dream."</p>
<p>All of these things have changed the culture at CERN considerably. A decade ago, "The tendency was always: 'I need this, I get a couple of developers, and I implement it,'" says Rocha. "Right now it's 'I need this, I'm sure other people also need this, so I'll go and ask around.' The CNCF is a good source because there's a very large catalog of applications available. It's very hard right now to justify developing a new product in-house. There is really no real reason to keep doing that. It's much easier for us to try it out, and if we see it's a good solution, we try to reach out to the community and start working with that community."</p>

Binary file not shown.


File diff suppressed because one or more lines are too long


View File

@@ -0,0 +1,77 @@
---
title: China Unicom Case Study
linkTitle: chinaunicom
case_study_styles: true
cid: caseStudies
featured: false
new_case_study_styles: true
heading_background: /images/case-studies/chinaunicom/banner1.jpg
heading_title_logo: /images/chinaunicom_logo.png
subheading: >
China Unicom: How China Unicom Leveraged Kubernetes to Boost Efficiency and Lower IT Costs
case_study_details:
- Company: China Unicom
- Location: Beijing, China
- Industry: Telecom
---
<h2>Challenge</h2>
<p>China Unicom is one of the top three telecom operators in China, and to serve its 300 million users, the company runs several data centers with thousands of servers in each, using <a href="https://www.docker.com/">Docker</a> containerization and <a href="https://www.vmware.com/">VMware</a> and <a href="https://www.openstack.org/">OpenStack</a> infrastructure since 2016. Unfortunately, "the resource utilization rate was relatively low," says Chengyu Zhang, Group Leader of Platform Technology R&D, "and we didn't have a cloud platform to accommodate our hundreds of applications." Formerly an entirely state-owned company, China Unicom has in recent years taken private investment from BAT (Baidu, Alibaba, Tencent) and JD.com, and is now focusing on internal development using open source technology, rather than commercial products. As such, Zhang's China Unicom Lab team began looking for open source orchestration for its cloud infrastructure.</p>
<h2>Solution</h2>
<p>Because of its rapid growth and mature open source community, Kubernetes was a natural choice for China Unicom. The company's Kubernetes-enabled cloud platform now hosts 50 microservices and all new development going forward. "Kubernetes has improved our experience using cloud infrastructure," says Zhang. "There is currently no alternative technology that can replace it." China Unicom also uses <a href="https://istio.io/">Istio</a> for its microservice framework, <a href="https://www.envoyproxy.io/">Envoy</a>, <a href="https://coredns.io/">CoreDNS</a>, and <a href="https://www.fluentd.org/">Fluentd</a>.</p>
<h2>Impact</h2>
<p>At China Unicom, Kubernetes has improved both operational and development efficiency. Resource utilization has increased by 20-50%, lowering IT infrastructure costs, and deployment time has gone from a couple of hours to 5-10 minutes. "This is mainly because of the self-healing and scalability, so we can increase our efficiency in operation and maintenance," Zhang says. "For example, we currently have only five people maintaining our multiple systems. We could never imagine we can achieve this scalability in such a short time."</p>
{{< case-studies/quote author="Chengyu Zhang, Group Leader of Platform Technology R&D, China Unicom" >}}
"Kubernetes has improved our experience using cloud infrastructure. There is currently no alternative technology that can replace it."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
With more than 300 million users, China Unicom is one of the country's top three telecom operators.
{{< /case-studies/lead >}}
<p>Behind the scenes, the company runs multiple data centers with thousands of servers in each, using Docker containerization and VMware and OpenStack infrastructure since 2016. Unfortunately, "the resource utilization rate was relatively low," says Chengyu Zhang, Group Leader of Platform Technology R&D, "and we didn't have a cloud platform to accommodate our hundreds of applications."</p>
<p>Zhang's team, which is responsible for new technology, R&D and platforms, set out to find an IT management solution. Formerly an entirely state-owned company, China Unicom has in recent years taken private investment from BAT (Baidu, Alibaba, Tencent) and JD.com, and is now focusing on homegrown development using open source technology, rather than commercial products. For that reason, the team began looking for open source orchestration for its cloud infrastructure.</p>
{{< case-studies/quote
image="/images/case-studies/chinaunicom/banner3.jpg"
author="Chengyu Zhang, Group Leader of Platform Technology R&D, China Unicom"
>}}
"We could never imagine we can achieve this scalability in such a short time."
{{< /case-studies/quote >}}
<p>Though China Unicom was already using Mesos for a core telecom operator system, the team felt that Kubernetes was a natural choice for the new cloud platform. "The main reason was that it has a mature community," says Zhang. "It grows very rapidly, and so we can learn a lot from others' best practices." China Unicom also uses Istio for its microservice framework, Envoy, CoreDNS, and Fluentd.</p>
<p>The company's Kubernetes-enabled cloud platform now hosts 50 microservices and all new development going forward. China Unicom developers can easily leverage the technology through APIs, without doing the development work themselves. The cloud platform provides 20-30 services connected to the company's data center PaaS platform, and also supports big data analysis for internal users in the branch offices across the 31 provinces in China.</p>
<p>"Kubernetes has improved our experience using cloud infrastructure," says Zhang. "There is currently no alternative technology that can replace it."</p>
{{< case-studies/quote
image="/images/case-studies/chinaunicom/banner4.jpg"
author="Jie Jia, Member of Platform Technology R&D, China Unicom"
>}}
"This technology is relatively complicated, but as long as developers get used to it, they can enjoy all the benefits."
{{< /case-studies/quote >}}
<p>In fact, Kubernetes has boosted both operational and development efficiency at China Unicom. Resource utilization has increased by 20-50%, lowering IT infrastructure costs, and deployment time has gone from a couple of hours to 5-10 minutes. "This is mainly because of the self-healing and scalability of Kubernetes, so we can increase our efficiency in operation and maintenance," Zhang says. "For example, we currently have only five people maintaining our multiple systems."</p>
<p>With the wins China Unicom has experienced with Kubernetes, Zhang and his team are eager to give back to the community. That starts with participating in meetups and conferences, and offering advice to other companies that are considering a similar path. "Especially for those companies who have had traditional cloud computing system, I really recommend them to join the cloud native computing community," says Zhang.</p>
{{< case-studies/quote author="Jie Jia, Member of Platform Technology R&D, China Unicom" >}}
"Companies can use the managed services offered by companies like Rancher, because they have already customized this technology, you can easily leverage this technology."
{{< /case-studies/quote >}}
<p>Platform Technology R&D team member Jie Jia adds that though "this technology is relatively complicated, as long as developers get used to it, they can enjoy all the benefits." And Zhang points out that in his own experience with virtual machine cloud, "Kubernetes and these cloud native technologies are relatively simpler."</p>
<p>Plus, "companies can use the managed services offered by companies like <a href="https://rancher.com/">Rancher</a>, because they have already customized this technology," says Jia. "You can easily leverage this technology."</p>
<p>Looking ahead, China Unicom plans to develop more applications on Kubernetes, focusing on big data and machine learning. The team is continuing to optimize the cloud platform that it built, and hopes to pass the conformance test to join CNCF's <a href="https://www.cncf.io/announcement/2017/11/13/cloud-native-computing-foundation-launches-certified-kubernetes-program-32-conformant-distributions-platforms/">Certified Kubernetes Conformance Program</a>. They're also hoping to someday contribute code back to the community.</p>
<p>If that sounds ambitious, it's because the results they've gotten from adopting Kubernetes have been beyond even their greatest expectations. Says Zhang: "We could never imagine we can achieve this scalability in such a short time."</p>

Binary file not shown.


File diff suppressed because one or more lines are too long


View File

@@ -0,0 +1,81 @@
---
title: City of Montreal Case Study
linkTitle: city-of-montreal
case_study_styles: true
cid: caseStudies
featured: false
new_case_study_styles: true
heading_background: /images/case-studies/montreal/banner1.jpg
heading_title_logo: /images/montreal_logo.png
subheading: >
City of Montréal - How the City of Montréal Is Modernizing Its 30-Year-Old, Siloed Architecture with Kubernetes
case_study_details:
- Company: City of Montréal
- Location: Montréal, Québec, Canada
- Industry: Government
---
<h2>Challenge</h2>
<p>Like many governments, Montréal has a number of legacy systems, and "we have systems that are older than some developers working here," says the city's CTO, Jean-Martin Thibault. "We have mainframes, all flavors of Windows, various flavors of Linux, old and new Oracle systems, Sun servers, all kinds of databases. Like all big corporations, some of the most important systems, like Budget and Human Resources, were developed on mainframes in-house over the past 30 years." There are over 1,000 applications in all, and most of them were running on different ecosystems. In 2015, a new management team decided to break down those silos, and invest in IT in order to move toward a more integrated governance for the city. They needed to figure out how to modernize the architecture.</p>
<h2>Solution</h2>
<p>The first step was containerization. The team started with a small Docker farm with four or five servers, with Rancher for providing access to the Docker containers and their logs and Jenkins to deploy. "We based our effort on the new trends; we understood the benefits of immutability and deployments without downtime and such things," says Solutions Architect Marc Khouzam. They soon realized they needed orchestration as well, and opted for Kubernetes. Says Enterprise Architect Morgan Martinet: "Kubernetes offered concepts on how you would describe an architecture for any kind of application, and based on those concepts, deploy what's required to run the infrastructure. It was becoming a de facto standard."</p>
<h2>Impact</h2>
<p>The time to market has improved drastically, from many months to a few weeks. Deployments went from months to hours. "In the past, you would have to ask for virtual machines, and that alone could take weeks, easily," says Thibault. "Now you don't even have to ask for anything. You just create your project and it gets deployed." Kubernetes has also improved the efficiency of how the city uses its compute resources: "Before, the 200 application components we currently run on Kubernetes would have required hundreds of virtual machines, and now, if we're talking about a single environment of production, we are able to run them on 8 machines, counting the masters of Kubernetes," says Martinet. And it's all done with a small team of just 5 people operating the Kubernetes clusters.</p>
{{< case-studies/quote author="JEAN-MARTIN THIBAULT, CTO, CITY OF MONTRÉAL" >}}
"We realized the limitations of having a non-orchestrated Docker environment. Kubernetes came to the rescue, bringing in all these features that make it a lot easier to manage and give a lot more benefits to the users."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
The second biggest municipality in Canada, Montréal has a large number of legacy systems keeping the government running. And while they don't quite date back to the city's founding in 1642, "we have systems that are older than some developers working here," jokes the city's CTO, Jean-Martin Thibault.
{{< /case-studies/lead >}}
<p>"We have mainframes, all flavors of Windows, various flavors of Linux, old and new Oracle systems, Sun servers, all kinds of databases. Some of the most important systems, like Budget and Human Resources, were developed on mainframes in-house over the past 30 years."</p>
<p>In recent years, that fact became a big pain point. There are over 1,000 applications in all, running on almost as many different ecosystems. In 2015, a new city management team decided to break down those silos, and invest in IT in order to move toward a more integrated governance. "The organization was siloed, so as a result the architecture was siloed," says Thibault. "Once we got integrated into one IT team, we decided to redo an overall enterprise architecture."</p>
<p>The first step to modernize the architecture was containerization. "We based our effort on the new trends; we understood the benefits of immutability and deployments without downtime and such things," says Solutions Architect Marc Khouzam. The team started with a small Docker farm with four or five servers, with Rancher for providing access to the Docker containers and their logs and Jenkins for deployment.</p>
{{< case-studies/quote
image="/images/case-studies/montreal/banner3.jpg"
author="MARC KHOUZAM, SOLUTIONS ARCHITECT, CITY OF MONTRÉAL"
>}}
"Getting a project running in Kubernetes is entirely dependent on how long you need to program the actual software. It's no longer dependent on deployment. Deployment is so fast that it's negligible."
{{< /case-studies/quote >}}
<p>But this Docker farm setup had some limitations, including the lack of self-healing and dynamic scaling based on traffic, and the effort required to optimize server resources and scale to multiple instances of the same container. The team soon realized they needed orchestration as well. "Kubernetes came to the rescue," says Thibault, "bringing in all these features that make it a lot easier to manage and give a lot more benefits to the users."</p>
<p>The team had evaluated several orchestration solutions, but Kubernetes stood out because it addressed all of the pain points. (They were also inspired by Yahoo! Japan's use case, which the team members felt came close to their vision.) "Kubernetes offered concepts on how you would describe an architecture for any kind of application, and based on those concepts, deploy what's required to run the infrastructure," says Enterprise Architect Morgan Martinet. "It was becoming a de facto standard. It also promised portability across cloud providers. The choice of Kubernetes now gives us many options such as running clusters in-house or in any IaaS provider, or even using Kubernetes-as-a-service in any of the major cloud providers."</p>
<p>Another important factor in the decision was vendor neutrality. "As a government entity, it is essential for us to be neutral in our selection of products and providers," says Thibault. "The independence of the Cloud Native Computing Foundation from any company provides this."</p>
{{< case-studies/quote
image="/images/case-studies/montreal/banner4.jpg"
author="MORGAN MARTINET, ENTERPRISE ARCHITECT, CITY OF MONTRÉAL"
>}}
"Kubernetes has been great. It's been stable, and it provides us with elasticity, resilience, and robustness. While re-architecting for Kubernetes, we also benefited from the monitoring and logging aspects, with centralized logging, Prometheus logging, and Grafana dashboards. We have enhanced visibility of what's being deployed."
{{< /case-studies/quote >}}
<p>The Kubernetes implementation began with the deployment of a small cluster using an internal Ansible playbook, which was soon replaced by the Kismatic distribution. Given the complexity they saw in operating a Kubernetes platform, they decided to provide development groups with an automated CI/CD solution based on Helm. "An integrated CI/CD solution on Kubernetes standardized how the various development teams designed and deployed their solutions, but allowed them to remain independent," says Khouzam.</p>
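<p>The chart itself isn't shown in the case study, but the idea behind such a setup is that each team keeps a small values file in its repository while the shared chart and the CI/CD pipeline do the rest. A hypothetical per-application values.yaml might look like the sketch below; every name and number here is illustrative, not the city's actual configuration.</p>

```yaml
# Hypothetical Helm values.yaml overriding a shared organization-wide chart.
image:
  repository: registry.example.ca/permits-api   # team-owned application image
  tag: "1.4.2"
replicaCount: 3
ingress:
  enabled: true
  host: permits-api.example.ca
resources:
  requests:
    cpu: 100m
    memory: 256Mi
```

<p>On each merge, the pipeline can then run something like <code>helm upgrade --install permits-api ./chart -f values.yaml</code> against the target cluster, so deployment stays uniform across teams while each team remains independent.</p>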
<p>During the re-architecting process, the team also added Prometheus for monitoring and alerting, Fluentd for logging, and Grafana for visualization. "We have enhanced visibility of what's being deployed," says Martinet. Adds Khouzam: "The big benefit is we can track anything, even things that don't run inside the Kubernetes cluster. It's our way to unify our monitoring effort."</p>
<p>All together, the cloud native solution has had a positive impact on velocity as well as administrative overhead. With standardization, code generation, automatic deployments into Kubernetes, and standardized monitoring through Prometheus, the time to market has improved drastically, from many months to a few weeks. Deployments went from months and weeks of planning down to hours. "In the past, you would have to ask for virtual machines, and that alone could take weeks to properly provision," says Thibault. Plus, for dedicated systems, experts often had to be brought in to install them with their own recipes, which could take weeks and months.</p>
<p>Now, says Khouzam, "we can deploy pretty much any application that's been Dockerized without any help from anybody. Getting a project running in Kubernetes is entirely dependent on how long you need to program the actual software. It's no longer dependent on deployment. Deployment is so fast that it's negligible."</p>
{{< case-studies/quote author="MORGAN MARTINET, ENTERPRISE ARCHITECT, CITY OF MONTRÉAL">}}
"We're working with the market when possible, to put pressure on our vendors to support Kubernetes, because it's a much easier solution to manage"
{{< /case-studies/quote >}}
<p>Kubernetes has also improved the efficiency of how the city uses its compute resources: "Before, the 200 application components we currently run in Kubernetes would have required hundreds of virtual machines, and now, if we're talking about a single environment of production, we are able to run them on 8 machines, counting the masters of Kubernetes," says Martinet. And it's all done with a small team of just five people operating the Kubernetes clusters. Adds Martinet: "It's a dramatic improvement no matter what you measure."</p>
<p>So it should come as no surprise that the team's strategy going forward is to target Kubernetes as much as they can. "If something can't run inside Kubernetes, we'll wait for it," says Thibault. That means they haven't moved any of the city's Windows systems onto Kubernetes, though it's something they would like to do. "We're working with the market when possible, to put pressure on our vendors to support Kubernetes, because it's a much easier solution to manage," says Martinet.</p>
<p>Thibault sees a near future where 60% of the city's workloads are running on a Kubernetes platform—basically any and all of the use cases that they can get to work there. "It's so much more efficient than the way we used to do things," he says. "There's no looking back."</p>

Binary file not shown.


View File

@@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2,.cls-3{fill-rule:evenodd;}.cls-2{fill:url(#linear-gradient);}.cls-3{fill:url(#linear-gradient-2);}</style><linearGradient id="linear-gradient" x1="-3137.65754" y1="1644.36414" x2="-3137.70622" y2="1644.36292" gradientTransform="matrix(2793, 0, 0, -441.00006, 8763675.71297, 725229.10048)" gradientUnits="userSpaceOnUse"><stop offset="0" stop-color="#f5286e"/><stop offset="1" stop-color="#fa461e"/></linearGradient><linearGradient id="linear-gradient-2" x1="-3134.50784" y1="1645.52546" x2="-3134.55653" y2="1645.47965" gradientTransform="matrix(800, 0, 0, -776, 2507658.98376, 1276974.5)" xlink:href="#linear-gradient"/></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.55202" y="-3.16104" width="223.25536" height="134.51136"/><path class="cls-2" d="M73.12738,75.22779a11.43076,11.43076,0,0,0,7.99043-3.25709L77.915,69.25159a6.54848,6.54848,0,0,1-4.78767,2.18113,6.27512,6.27512,0,1,1,0-12.54865A6.54851,6.54851,0,0,1,77.915,61.0652l3.20276-2.71911A11.30858,11.30858,0,0,0,73.12738,55.089c-5.89573,0-10.92371,4.58875-10.92371,10.05486C62.20367,70.61115,67.23166,75.22779,73.12738,75.22779ZM87.2185,62.96274V60.24363H82.847V74.8885h4.37024V67.89326c0-2.74823,2.06429-4.504,5.29615-4.504V59.90555A5.81225,5.81225,0,0,0,87.21723,62.964ZM101.398,75.22779c4.57892,0,8.41035-3.51278,8.41035-7.67626s-3.83144-7.64715-8.41035-7.64715c-4.60807,0-8.46857,3.48368-8.46857,7.64715S96.79,75.22779,101.398,75.22779Zm0-3.48372a4.17862,4.17862,0,1,1,4.25-4.19254A4.2883,4.2883,0,0,1,101.398,71.74407Zm13.67233,3.14321h3.92121l2.90169-9.20544,2.87388,9.20544h3.91994l5.267-14.64366H129.675l-2.93333,9.12063-3.05221-9.12063H120.068l-3.05221,9.12063-2.93333-9.12063h-4.27915l5.267,14.64366Zm31.95657-19.7995v7.25216a6.48235,6.48235,0,0,0-5.20633-2.43556c-4.12993,0-7.12271,3.17228-7.12271,7.64715,0,4.50393,2.99273,7.67626,7.12144,7.67626a6.48139,6.48139,0,0,0,5.20758-2.43556V74.8885h4.37024V55.089h-4.37024v-.00121Zm-3.95028,16.65629a4.17829,4.17829,0,0,1,0-8.356,4.18457,4.18457,0,0,1,0,8.356ZM159.533,59.45116a1.52877,1.52877,0,0,1,1.466-1.6431,2.32868,2.32868,0,0,1,1.49515.48107l1.0486-2.52038a5.82958,5.82958,0,0,0-3.562-1.27474,4.36676,4.36676,0,0,0-4.63845,4.47492v1.27469h-2.125v3.34317h2.125V74.88728h4.18943V63.58679h3.352V60.24363h-3.352v-.79247h.00127Zm7.47558-1.21778a2.24007,2.24007,0,1,0,0-4.47609,2.24316,2.24316,0,1,0,0,4.47609Zm-2.21356,16.6539h4.369V60.24363h-4.37024V74.8885l.00126-.00122ZM176.46128,62.964V60.24362H172.091V74.88849h4.37024V67.89326c0-2.74823,2.06428-4.504,5.29615-4.504V59.90555a5.81228,5.81228,0,0,0-5.29615,3.05845Zm14.24029,8.61049a3.72844,3.72844,0,0,1-3.6809-2.66336h11.16279c0-5.523-2.8435-9.0067-7.69063-9.0067-4.52078,0-7.96131,3.20012-7.96131,7.6193,0,4.50393,3.59107,7.70411,8.1991,7.70411a9.76074,9.76074,0,0,0,6.31568-2.26594l-2.78279-2.69a5.79593,5.79593,0,0,1-3.562,1.30259Zm-.03038-8.01677a3.204,3.204,0,0,1,3.32165,2.37987h-6.91269a3.73543,3.73543,0,0,1,3.59228-2.37987h-.00124Z"/><path class="cls-3" d="M48.815,64.51747H41.02529v-7.5563H33.23554v-7.5563H48.815Zm3.89487,0H44.92016A11.51324,11.51324,0,0,1,33.23554,75.85073a11.33911,11.33911,0,1,1,0-22.66768v-7.5563c-10.75594,0-19.47437,8.45767-19.47437,18.89072S22.4796,83.407,33.23554,83.407,52.70991,74.94936,52.70991,64.5163Z"/></svg>


View File

@@ -0,0 +1,85 @@
---
title: Crowdfire Case Study
case_study_styles: true
cid: caseStudies
new_case_study_styles: true
heading_background: /images/case-studies/crowdfire/banner1.jpg
heading_title_logo: /images/crowdfire_logo.png
subheading: >
How to Keep Iterating a Fast-Growing App With a Cloud-Native Approach
case_study_details:
- Company: Crowdfire
- Location: Mumbai, India
- Industry: Social Media Software
---
<h2>Challenge</h2>
<p><a href="https://www.crowdfireapp.com/">Crowdfire</a> helps content creators create their content anywhere on the Internet and publish it everywhere else in the right format. Since its launch in 2010, it has grown to 16 million users. The product began as a monolith app running on <a href="https://cloud.google.com/appengine/">Google App Engine</a>, and in 2015, the company began a transformation to microservices running on Amazon Web Services <a href="https://aws.amazon.com/elasticbeanstalk/">Elastic Beanstalk</a>. "It was okay for our use cases initially, but as the number of services, development teams and scale increased, the deploy times, self-healing capabilities and resource utilization started to become problems for us," says Software Engineer Amanpreet Singh, who leads the infrastructure team for Crowdfire.</p>
<h2>Solution</h2>
<p>"We realized that we needed a more cloud-native approach to deal with these issues," says Singh. The team decided to implement a custom setup of Kubernetes based on <a href="https://www.terraform.io/">Terraform</a> and <a href="https://www.ansible.com/">Ansible</a>.</p>
<h2>Impact</h2>
<p>"Kubernetes has helped us reduce the deployment time from 15 minutes to less than a minute," says Singh. "Due to Kubernetes's self-healing nature, the operations team doesn't need to do any manual intervention in case of a node or pod failure." Plus, he says, "Dev-Prod parity has improved since developers can experiment with options in dev/staging clusters, and when it's finalized, they just commit the config changes in the respective code repositories. These changes automatically get replicated on the production cluster via CI/CD pipelines."</p>
{{< case-studies/quote author="Amanpreet Singh, Software Engineer at Crowdfire" >}}
"In the 15 months that we've been using Kubernetes, it has been amazing for us. It enabled us to iterate quickly, increase development speed, and continuously deliver new features and bug fixes to our users, while keeping our operational costs and infrastructure management overhead under control."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
"If you build it, they will come."
{{< /case-studies/lead >}}
<p>For most content creators, only half of that movie quote may ring true. Sure, platforms like Wordpress, YouTube and Shopify have made it simple for almost anyone to start publishing new content online, but attracting an audience isn't as easy. Crowdfire "helps users publish their content to all possible places where their audience exists," says Amanpreet Singh, a Software Engineer at the company based in Mumbai, India. Crowdfire has gained more than 16 million users—from bloggers and artists to makers and small businesses—since its launch in 2010.</p>
<p>With that kind of growth—and a high demand from users for new features and continuous improvements—the Crowdfire team struggled to keep up behind the scenes. In 2015, they moved their monolith Java application to Amazon Web Services <a href="https://aws.amazon.com/elasticbeanstalk/">Elastic Beanstalk</a> and started breaking it down into microservices.</p>
<p>It was a good first step, but the team soon realized they needed to go further down the cloud-native path, which would lead them to Kubernetes. "It was okay for our use cases initially, but as the number of services and development teams increased and we scaled further, deploy times, self-healing capabilities and resource utilization started to become problematic," says Singh, who leads the infrastructure team at Crowdfire. "We realized that we needed a more cloud-native approach to deal with these issues."</p>
<p>As he looked around for solutions, Singh had a checklist of what Crowdfire needed. "We wanted to keep some things separate so they could be shipped independent of other things; this would help remove blockers and let different teams work at their own pace," he says. "We also make a lot of data-driven decisions, so shipping a feature and its iterations quickly was a must."</p>
<p>Kubernetes checked all the boxes and then some. "One of the best things was the built-in service discovery," he says. "When you have a bunch of microservices that need to call each other, having internal DNS readily available and service IPs and ports automatically set as environment variables help a lot." Plus, he adds, "Kubernetes's opinionated approach made it easier to get started."</p>
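<p>Those two discovery mechanisms come for free with an ordinary Service object. As an illustration (the service name and ports below are hypothetical, not Crowdfire's), a Service like the following lets other pods in the same namespace reach it at the DNS name <code>user-service</code>, and pods created after the Service also receive <code>USER_SERVICE_SERVICE_HOST</code> and <code>USER_SERVICE_SERVICE_PORT</code> environment variables.</p>

```yaml
# Hypothetical Service: cluster DNS resolves "user-service" to its ClusterIP, and
# pods started after the Service get USER_SERVICE_SERVICE_HOST/_PORT injected.
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service     # route to pods carrying this label
  ports:
  - port: 80              # port other services call
    targetPort: 8080      # port the container listens on
```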
{{< case-studies/quote image="/images/case-studies/crowdfire/banner3.jpg" >}}
"We realized that we needed a more cloud-native approach to deal with these issues," says Singh. The team decided to implement a custom setup of Kubernetes based on Terraform and Ansible."
{{< /case-studies/quote >}}
<p>There was another compelling business reason for the cloud-native approach. "In today's world of ever-changing business requirements, using cloud native technology provides a variety of options to choose from—even the ability to run services in a hybrid cloud environment," says Singh. "Businesses can keep services in a region closest to the users, and thus benefit from high-availability and resiliency."</p>
<p>So in February 2016, Singh set up a test Kubernetes cluster using the kube-up scripts provided. "I explored the features and was able to deploy an application pretty easily," he says. "However, it seemed like a black box since I didn't understand the components completely, and had no idea what the kube-up script did under the hood. So when it broke, it was hard to find the issue and fix it."</p>
<p>To get a better understanding, Singh dove into the internals of Kubernetes, reading the docs and even some of the code. And he looked to the Kubernetes community for more insight. "I used to stay up a little late every night (a lot of users were active only when it's night here in India) and would try to answer questions on the Kubernetes community Slack from users who were getting started," he says. "I would also follow other conversations closely. I must admit I was able to avoid a lot of issues in our setup because I knew others had faced the same issues."</p>
<p>Based on the knowledge he gained, Singh decided to implement a custom setup of Kubernetes based on <a href="https://www.terraform.io/">Terraform</a> and <a href="https://www.ansible.com/">Ansible</a>. "I wrote Terraform to launch Kubernetes master and nodes (Auto Scaling Groups) and an Ansible playbook to install the required components," he says. (The company recently switched to using prebaked <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html">AMIs</a> to make the node bringup faster, and is planning to change its networking layer.)</p>
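<p>Crowdfire's actual playbook isn't published in this case study, but a stripped-down sketch of that kind of node-preparation play might look like the following; it assumes Ubuntu hosts with the upstream Kubernetes apt repository already configured, and the group name and package list are illustrative.</p>

```yaml
# Hypothetical Ansible play preparing Kubernetes nodes; assumes the Kubernetes apt
# repository is already configured on the hosts in the kube_nodes group.
- hosts: kube_nodes
  become: true
  tasks:
    - name: Install container runtime and Kubernetes components
      ansible.builtin.apt:
        name:
          - docker.io
          - kubelet
          - kubeadm
          - kubectl
        state: present
        update_cache: true

    - name: Enable and start the kubelet
      ansible.builtin.systemd:
        name: kubelet
        enabled: true
        state: started
```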
{{< case-studies/quote image="/images/case-studies/crowdfire/banner4.jpg" >}}
"Kubernetes helped us reduce the deployment time from 15 minutes to less than a minute. Due to Kubernetes's self-healing nature, the operations team doesn't need to do any manual intervention in case of a node or pod failure."
{{< /case-studies/quote >}}
<p>First, the team migrated a few staging services from Elastic Beanstalk to the new Kubernetes staging cluster, and then set up a production cluster a month later to deploy some services. The results were convincing. "By the end of March 2016, we established that all the new services must be deployed on Kubernetes," says Singh. "Kubernetes helped us reduce the deployment time from 15 minutes to less than a minute. Due to Kubernetes's self-healing nature, the operations team doesn't need to do any manual intervention in case of a node or pod failure." On top of that, he says, "Dev-Prod parity has improved since developers can experiment with options in dev/staging clusters, and when it's finalized, they just commit the config changes in the respective code repositories. These changes automatically get replicated on the production cluster via CI/CD pipelines. This brings more visibility into the changes being made, and keeping an audit trail."</p>
<p>Over the next six months, the team worked on migrating all the services from Elastic Beanstalk to Kubernetes, except for the few that were deprecated and would soon be terminated anyway. The services were moved one at a time, and their performance was monitored for two to three days each. Today, "We're completely migrated and we run all new services on Kubernetes," says Singh.</p>
<p>The impact has been considerable: With Kubernetes, the company has experienced a 90% cost savings on Elastic Load Balancer, which is now only used for their public, user-facing services. Their EC2 operating expenses have decreased by as much as 50%.</p>
<p>All 30 engineers at Crowdfire were onboarded at once. "I gave an internal talk where I shared the basic components and demoed the usage of kubectl," says Singh. "Everyone was excited and happy about using Kubernetes. Developers have more control and visibility into their applications running in production now. Most of all, they're happy with the low deploy times and self-healing services."</p>
<p>And they're much more productive, too. "Where we used to do about 5 deployments per day," says Singh, "now we're doing 30+ production and 50+ staging deployments almost every day."</p>
{{< case-studies/quote >}}
The impact has been considerable: With Kubernetes, the company has experienced a 90% cost savings on Elastic Load Balancer, which is now only used for their public, user-facing services. Their EC2 operating expenses have decreased by as much as 50%.
{{< /case-studies/quote >}}
<p>Singh notes that almost all of the engineers interact with the staging cluster on a daily basis, and that has created a cultural change at Crowdfire. "Developers are more aware of the cloud infrastructure now," he says. "They've started following cloud best practices like better health checks, structured logs to stdout [standard output], and config via files or environment variables."</p>
<p>With Crowdfire's commitment to Kubernetes, Singh is looking to expand the company's cloud-native stack. The team already uses <a href="https://prometheus.io/">Prometheus</a> for monitoring, and he says he is evaluating <a href="https://linkerd.io/">Linkerd</a> and <a href="https://envoyproxy.github.io/">Envoy Proxy</a> as a way to "get more metrics about request latencies and failures, and handle them better." Other CNCF projects, including <a href="http://opentracing.io/">OpenTracing</a> and <a href="https://grpc.io/">gRPC</a> are also on his radar.</p>
<p>Singh has found that the cloud-native community is growing in India, too, particularly in Bangalore. "A lot of startups and new companies are starting to run their infrastructure on Kubernetes," he says.</p>
<p>And when people ask him about Crowdfire's experience, he has this advice to offer: "Kubernetes is a great piece of technology, but it might not be right for you, especially if you have just one or two services or your app isn't easy to run in a containerized environment," he says. "Assess your situation and the value that Kubernetes provides before going all in. If you do decide to use Kubernetes, make sure you understand the components that run under the hood and what role they play in smoothly running the cluster. Another thing to consider is if your apps are 'Kubernetes-ready,' meaning if they have proper health checks and handle termination signals to shut down gracefully."</p>
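<p>In Kubernetes terms, that checklist usually translates into readiness and liveness probes plus graceful handling of SIGTERM. The snippet below is a hypothetical pod spec illustrating those pieces; the paths, port, and timings are placeholders rather than anything Crowdfire runs.</p>

```yaml
# Hypothetical "Kubernetes-ready" pod: health checks gate traffic and trigger restarts,
# and terminationGracePeriodSeconds gives the app time to drain after SIGTERM.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: registry.example.com/app:1.0
    ports:
    - containerPort: 8080
    readinessProbe:            # pod only receives Service traffic once this passes
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
    livenessProbe:             # kubelet restarts the container if this keeps failing
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
```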
<p>And if your company fits that profile, go for it. Crowdfire clearly did—and is now reaping the benefits. "In the 15 months that we've been using Kubernetes, it has been amazing for us," says Singh. "It enabled us to iterate quickly, increase development speed and continuously deliver new features and bug fixes to our users, while keeping our operational costs and infrastructure management overhead under control."</p>

Binary file not shown.

View File

@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 450 450"><defs><style>.cls-1{fill:#0fd15d;}.cls-2{fill:#232323;}</style></defs><title>DaoCloud_logo</title><g id="Layer_9" data-name="Layer 9"><polygon class="cls-1" points="225 105.24 279.4 78.41 224.75 54.57 170.11 78.17 225 105.24"/><polygon class="cls-1" points="169.11 178.76 166.38 134.55 206.12 114.68 149.99 86.86 113.72 102.76 118.44 145.23 169.11 178.76"/><polygon class="cls-1" points="331.06 144.98 336.28 102.76 299.52 86.86 243.88 114.68 284.12 134.55 280.89 178.76 331.06 144.98"/><polygon class="cls-1" points="279.4 199.88 275.42 257.5 321.62 222.73 328.58 166.84 279.4 199.88"/><polygon class="cls-1" points="170.35 199.88 121.17 166.84 127.63 222.98 174.08 257.5 170.35 199.88"/><polygon class="cls-1" points="262.01 211.55 225.25 236.39 187.99 211.55 191.72 270.67 225 295.51 257.79 270.67 262.01 211.55"/></g><g id="Layer_3" data-name="Layer 3"><path class="cls-2" d="M50.54,405.08V339.86H74.48c13.33,0,21.62,8.2,21.62,21.25v22.73c0,13-8.29,21.24-21.62,21.24ZM63,394H74.48c5.87,0,9-3.82,9-11.27v-20.5c0-7.45-3.17-11.27-9-11.27H63Z"/><path class="cls-2" d="M154.15,375.36c0-12.2,7.74-19.66,20.31-19.66s20.22,7.46,20.22,19.66v11C194.77,398.56,187,406,174.46,406s-20.31-7.45-20.31-19.66Zm11.93,10c0,6.8,3,10.34,8.38,10.34s8.3-3.54,8.3-10.34v-8.95c0-6.8-3-10.34-8.3-10.34s-8.29,3.54-8.29,10.34Z"/><path class="cls-2" d="M144.32,405.08H134.07l-.67-4.65c-2.84,3.22-8.05,5.6-13.76,5.6-9.3,0-16.23-6.63-16.23-14.55,0-20.6,28.5-20.76,28.5-20.76,0-1.59-2.26-6.46-12.52-6.46a30.26,30.26,0,0,0-8.86,1.63l-2.32-6.4a39.34,39.34,0,0,1,16.12-3.8c12.62,0,20,5.27,20,17.31Zm-22.18-8.41a12.57,12.57,0,0,0,10-5.55l.05-13.21c-1.94,0-16.61,1.26-16.61,12.41A6.14,6.14,0,0,0,122.14,396.67Z"/><path class="cls-2" d="M226.32,406.36c-12.94,0-22.38-8.16-22.38-21V360.51c0-12.84,9.44-20.91,22.38-20.91,8.16,0,13,2.29,18.52,6.14l-4.77,8.62c-4.67-2.57-7.7-4-13.75-4s-10,4.31-10,11.28v22.65c0,7.25,4,11.28,10,11.28s9.08-1.47,13.75-4l4.77,8.71C239.34,404.07,234.48,406.36,226.32,406.36Z"/><path class="cls-2" d="M252.73,405.08V336.29h11.74v68.79Z"/><path class="cls-2" d="M274.74,375.82c0-12,7.62-19.35,20-19.35s19.9,7.34,19.9,19.35v10.82c.09,12-7.61,19.36-19.9,19.36s-20-7.34-20-19.36Zm11.74,9.82c0,6.69,2.94,10.18,8.26,10.18s8.16-3.49,8.16-10.18v-8.81c0-6.69-2.93-10.18-8.16-10.18s-8.17,3.49-8.17,10.18Z"/><path class="cls-2" d="M339.22,406c-10.91,0-14.49-6.61-14.49-18.71v-29.9h11.74v28.8c0,6.78,1.38,9.72,6,9.72a10.28,10.28,0,0,0,8.9-5.41V357.39h11.83v47.69H352.89l-.92-4.49C348.58,403.7,343.72,406,339.22,406Z"/><path class="cls-2" d="M402.78,401a18.88,18.88,0,0,1-12.93,5c-10.73,0-16.32-7.34-16.32-19.36V375c0-12.11,5.41-18.53,16-18.53a28.87,28.87,0,0,1,11.65,2.2V336.29H413v68.79h-9.63Zm-1.65-32.28a20.7,20.7,0,0,0-9-1.83c-4.49,0-7,2.75-7,9v9.73c0,6.69,1.84,10.18,7.16,10.18,4.12,0,7.15-1.84,8.8-4.86Z"/></g></svg>

View File

@ -0,0 +1,114 @@
---
title: DaoCloud Case Study
linkTitle: DaoCloud
case_study_styles: true
cid: caseStudies
logo: daocloud_featured_logo.svg
css: /css/style_daocloud.css
new_case_study_styles: true
heading_background: /images/case-studies/daocloud/banner1.jpg
heading_title_logo: /images/daocloud-light.svg
subheading: >
  Seeking Globally Optimal Solutions for the Digital World
case_study_details:
- Company: DaoCloud
- Location: Shanghai, China
- Industry: Cloud Native
---
<h2>Challenges</h2>
<p><a href="https://www.daocloud.io/en/">DaoCloud</a>, founded in 2014, is an innovation leader in the field of cloud native. It holds independent intellectual property rights to the core technologies behind its open cloud platform, which is built to power the digital transformation of enterprises.</p>
<p>DaoCloud has been engaged in cloud native since its inception. Because containerization is crucial to any cloud native business, a cloud platform that does not offer containers as infrastructure is unlikely to attract potential users. The first challenge confronting DaoCloud, therefore, was how to efficiently manage and schedule large numbers of containers while maintaining stable connectivity between them.</p>
<p>As cloud native technology gains momentum, cloud native solutions have proliferated. Having more choices is not always a good thing, though: picking from a crowd of products in a way that maximizes overall benefit and minimizes cost is challenging and demanding. The second obstacle for DaoCloud was therefore how to select the strongest option in each area and combine them into one platform that achieves a global optimum for cloud native.</p>
<h2>Solutions</h2>
<p>As the de facto standard for container orchestration, Kubernetes is undoubtedly the preferred container solution. Paco Xu, head of the Open Source and Advanced Development team at DaoCloud, stated, "Kubernetes is a fundamental tool in the current container ecosystem. Most services or applications are deployed and managed in Kubernetes clusters."</p>
<p>As for finding globally optimal solutions for cloud native technology, Peter Pan, R&D Vice President of DaoCloud, believes that "the right way is to focus on Kubernetes, coordinate relevant best practices and advanced technologies, and build a widely applicable platform."</p>
<h2>Results</h2>
<p>In the process of embracing cloud native technology, DaoCloud continues to learn from Kubernetes and other excellent CNCF open source projects. It has formed a product architecture centered on DaoCloud Enterprise, a platform for cloud native applications. Using Kubernetes and other cutting-edge cloud native technologies as a foundation, DaoCloud provides solid cloud native solutions for military, finance, manufacturing, energy, government, and retail clients. It has helped drive the digital transformation of many companies, including SPD Bank, Huatai Securities, Fullgoal Fund, SAIC Motor, Haier, Fudan University, Watsons, Genius Auto Finance, and State Grid Corporation of China.</p>
{{< case-studies/quote
image="/images/case-studies/daocloud/banner2.jpg"
author="Kebe Liu, Service Mesh Expert, DaoCloud"
>}}
"As DaoCloud Enterprise becomes more powerful and attracts more users, some customers need to use Kubernetes instead of Swarm for application orchestration. We, as providers, need to meet the needs of our users."
{{< /case-studies/quote >}}
<p>DaoCloud was founded to help traditional enterprises move their applications to the cloud and realize digital transformation. The first product released after the company's establishment, DaoCloud Enterprise 1.0, is a Docker-based container engine platform that can easily build images and run them in containers.</p>
<p>However, as applications and containers increase in number, coordinating and scheduling these containers became a bottleneck that restricted product performance. DaoCloud Enterprise 2.0 used Docker Swarm to manage containers, but the increasingly complex container scheduling system gradually went beyond the competence of Docker Swarm.</p>
<p>Fortunately, Kubernetes began to stand out at this time. It rapidly grew into the industrial standard for container orchestration with its competitive rich functions, stable performance, timely community support, and strong compatibility. Paco Xu said, "Enterprise container platforms need container orchestration to standardize the process of moving to the cloud. Kubernetes was accepted as the de facto standard for container orchestration around 2016 and 2017. Our products started to support it in 2017."</p>
<p>After thorough comparisons and evaluations, DaoCloud Enterprise 2.8, which debuted in 2017, officially adopted Kubernetes (v1.6.7) as its container orchestration tool. Since then, DaoCloud Enterprise 3.0 (2018) used Kubernetes v1.10, and DaoCloud Enterprise 4.0 (2021) adopted Kubernetes v1.18. The latest version, DaoCloud Enterprise 5.0 (2022), supports Kubernetes v1.23 to v1.26.</p>
<p>Kubernetes has been an inseparable part of these four releases across six years, which speaks volumes about how right the choice to build DaoCloud Enterprise on Kubernetes turned out to be. Through its own experience and actions, DaoCloud has proven that Kubernetes is the best choice for container orchestration, and it remains a loyal fan of the project.</p>
{{< case-studies/quote
image="/images/case-studies/daocloud/banner3.jpg"
author="Ting Ye, Vice President of Product Innovation, DaoCloud"
>}}
"Kubernetes is the cornerstone for refining our products towards world-class software."
{{< /case-studies/quote >}}
<p>Kubernetes helped our product and research teams automate the test, build, check, and release processes, ensuring the quality of deliverables. It also helped us build collaboration systems around product requirements and definition, multilingual product materials, debugging, and other challenges, improving the efficiency of collaboration within and across departments.</p>
<p>On the one hand, Kubernetes makes our products more performant and competitive. DaoCloud integrates relevant practices and technologies around Kubernetes to polish its flagship offering, DaoCloud Enterprise. The latest release, version 5.0 from 2022, covers application stores, application delivery, microservice governance, observability, data services, multi-cloud management, cloud-edge collaboration, and other functions. DaoCloud Enterprise 5.0 is an inclusive integration of cloud native technologies.</p>
<p>DaoCloud deployed a Kubernetes platform for SPD Bank, improving its application deployment efficiency by 82%, shortening its delivery cycle from half a year to one month, and promoting its transaction success rate to 99.999%.</p>
<p>For Sichuan Tianfu Bank, scaling time was reduced from several hours to an average of 2 minutes, the product iteration cycle was shortened from two months to two weeks, and application rollout time was cut by 76.76%.</p>
<p>For a joint-venture carmaker, the delivery cycle shrank from two months to one or two weeks, the success rate of application deployment increased by 53%, and application rollout became ten times more efficient. For a multinational retailer, application deployment issues were reduced by 46%, and fault-location efficiency rose by more than 90%.</p>
<p>For a large securities firm, business process efficiency improved by 30%, and resource costs were lowered by about 35%.</p>
<p>With this product, Fullgoal Fund shortened its middleware deployment time from hours to minutes, improved middleware operation and maintenance capabilities by 50%, containerization by 60%, and resource utilization by 40%.</p>
<p>On the other hand, our own product development is also based on Kubernetes. DaoCloud deployed GitLab on Kubernetes and established a product development process of "GitLab -> PR -> Auto Tests -> Builds & Releases", which significantly improved development efficiency, reduced repetitive testing, and enabled automatic release of applications. This approach greatly reduces operation and maintenance costs, letting engineers invest more time and energy in product development to offer better cloud native products.</p>
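<p>A rough sketch of what such a pipeline can look like in GitLab CI follows; the stage names, images, and scripts are illustrative assumptions, not DaoCloud's actual configuration:</p>

```yaml
# Hypothetical .gitlab-ci.yml: test on every push, build an image, release to Kubernetes.
stages:
  - test
  - build
  - release

run-tests:
  stage: test
  image: golang:1.21
  script:
    - go test ./...

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker build -t registry.example.com/app:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/app:$CI_COMMIT_SHORT_SHA

release:
  stage: release
  image: bitnami/kubectl:latest
  rules:
    - if: $CI_COMMIT_BRANCH == "main"   # release only from the default branch
  script:
    # Roll the Deployment forward to the image built from this commit.
    - kubectl set image deployment/app app=registry.example.com/app:$CI_COMMIT_SHORT_SHA
```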
{{< case-studies/quote
image="/images/case-studies/daocloud/banner4.jpg"
author="Paco Xu, Header of Open Source & Advanced Development Team, DaoCloud"
>}}
"Our developers actively contribute to open source projects and build technical expertise. DaoCloud has established a remarkable presence in the Kubernetes and Istio communities."
{{< /case-studies/quote >}}
<p>DaoCloud is deeply involved in contributing to Kubernetes and other cloud native open source projects. Our participation and contributions in these communities continue to grow. In 2022, DaoCloud ranked third globally in cumulative contributions to Kubernetes (data from Stackalytics as of January 5, 2023).</p>
<p>In August 2022, Kubernetes officially organized an interview with community contributors, and four outstanding contributors from the Asia-Pacific region were invited. Half of them came from DaoCloud, namely <a href="https://github.com/wzshiming">Shiming Zhang</a> and <a href="https://github.com/pacoxu">Paco Xu</a>. Both are Reviewers of SIG Node. Furthermore, at KubeCon + CloudNativeCon North America 2022, <a href="https://github.com/kerthcet">Kante Yin</a> from DaoCloud won the 2022 Kubernetes Contributor Award.</p>
<p>In addition, DaoCloud continues to practice its cloud native beliefs and contributes to the Kubernetes ecosystem by sharing the source code of several excellent projects on GitHub, including <a href="https://clusterpedia.io/">Clusterpedia</a>, <a href="https://github.com/kubean-io/kubean">Kubean</a>, <a href="https://github.com/cloudtty/cloudtty">CloudTTY</a>, <a href="https://github.com/klts-io/kubernetes-lts">KLTS</a>, <a href="https://merbridge.io/">Merbridge</a>, <a href="https://hwameistor.io/">HwameiStor</a>, <a href="https://github.com/spidernet-io/spiderpool">Spiderpool</a>, and <a href="https://github.com/kubernetes-sigs/kwok">KWOK</a>.</p>
<p>In particular:</p>
<ul type="disc">
<li><strong>Clusterpedia:</strong> Designed for resource synchronization across clusters, Clusterpedia is compatible with Kubernetes OpenAPIs and offers a powerful search function for quick and effective retrieval of all resources in clusters.</li>
<li><strong>Kubean:</strong> With Kubean, it's possible to quickly create production-ready Kubernetes clusters and integrate clusters from other providers.</li>
<li><strong>CloudTTY:</strong> CloudTTY is a web terminal and cloud shell operator for Kubernetes cloud native environments, allowing for management of Kubernetes clusters on a web page from anywhere and at any time.</li>
<li><strong>KLTS:</strong> Providing long-term free maintenance for earlier versions of Kubernetes, KLTS ensures stability and support for older Kubernetes deployments. Additionally, Piraeus is an easy and secure storage solution for Kubernetes with high performance and availability.</li>
<li><strong>KWOK:</strong> Short for Kubernetes WithOut Kubelet, KWOK is a toolkit that enables the setup of a cluster of thousands of nodes in seconds. All nodes are simulated to behave like real ones, resulting in low resource usage that makes it easy to experiment on a laptop.</li>
</ul>
<p>DaoCloud utilizes its practical experience across industries to contribute to Kubernetes-related open source projects, with the aim of making cloud native technologies, represented by Kubernetes, function better in production environments.</p>
{{< case-studies/quote
image="/images/case-studies/daocloud/banner5.jpg"
author="Song Zheng, Technology GM, DaoCloud"
>}}
"DaoCloud, as one of the first cloud native technology training partners certified by CNCF, will continue to carry out trainings to help more companies find their best ways for going to the cloud."
{{< /case-studies/quote >}}
<p>Enterprise users need a globally optimal solution, which can be understood as an inclusive platform that maximizes the advantages of multi-cloud management, application delivery, observability, cloud-edge collaboration, microservice governance, application store, and data services. In today's cloud native ecosystem, these functions cannot be achieved without Kubernetes as the underlying container orchestration tool. Therefore, Kubernetes is crucial to DaoCloud's mission of finding the optimal solution in the digital world, and all future product development will continue to be based on Kubernetes.</p>
<p>DaoCloud has always attached great importance to Kubernetes training and promotion. In 2017, the company took the lead in passing CNCF's Certified Kubernetes Conformance Program with its flagship product, DaoCloud Enterprise. In 2018, it became a CNCF-certified Kubernetes service provider and training partner.</p>
<p>On November 18, 2022, the "Kubernetes Community Days" event was successfully held in Chengdu, organized by CNCF, DaoCloud, Huawei Cloud, Sichuan Tianfu Bank, and OPPO. The event brought together end-users, contributors, and technical experts from open-source communities to share best practices and innovative ideas about Kubernetes and cloud native. In the future, DaoCloud will continue to contribute to Kubernetes projects, and expand the influence of Kubernetes through project training, community contributions and other activities.</p>

View File

@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 215 127"><defs><style>.cls-1{fill:none;}.cls-2{fill:transparent;}.cls-3{clip-path:url(#clip-path);}.cls-4{fill:#ee3248;fill-rule:evenodd;}</style><clipPath id="clip-path"><rect class="cls-1" x="4.70501" y="-8.81268" width="206.90403" height="145.3255"/></clipPath></defs><rect class="cls-2" x="-3.37342" y="-3.34411" width="223.25536" height="134.51135"/><g class="cls-3"><g class="cls-3"><path class="cls-4" d="M28.02166,50.76936l7.69058-.0617c6.9525.12305,7.75228,4.1837,4.61435,13.41275-2.64567,7.87531-6.76768,12.67414-14.58165,12.61279h-6.091ZM169.22325,61.044c4.55274-11.01316,10.52065-15.19686,22.76431-15.50454,10.9518-.24609,15.81224,4.79908,11.813,17.41187C199.49376,76.67151,191.67987,82.209,179.86693,82.209c-12.98175.06134-16.24263-7.87514-10.64368-21.165m8.429.73843c2.83006-7.01411,6.95215-11.87463,13.04345-11.93633,5.47555-.06125,6.46007,3.938,3.99905,12.36706-3.13794,10.82842-7.69068,15.5659-13.90466,15.44286-6.89106-.06135-6.768-6.82946-3.13784-15.87359m-15.07382-8.67536,2.09172-6.02932a34.76316,34.76316,0,0,0-10.95146-1.66134c-7.62924-.0616-13.35114,2.33806-15.6892,7.69066-3.69162,8.183,1.4766,10.70564,7.69084,13.59749,9.10583,4.245,3.876,11.56684-4.86069,10.82842-3.50688-.3077-6.58311-1.90724-9.65961-3.38384l-1.96859,6.15264A33.79646,33.79646,0,0,0,142.64393,82.209c9.0444-.55369,14.64308-4.184,16.91972-9.90595,2.584-6.52159-1.41525-10.52064-7.38324-12.42789-12.42831-3.99939-4.98338-15.75088,10.398-6.76811M95.57659,46.15492a19.153,19.153,0,0,1,2.215,3.62993L87.76306,81.47059H93.854l8.36766-26.45618,9.59791,26.45618h8.61375l-.36939-.98451,10.89012-34.33116h-6.15273l-7.99827,25.34837-9.22879-25.34837c-3.999,0-7.99836.06135-11.99767,0m-34.39259,0a14.395,14.395,0,0,1,2.21468,3.62993L53.43173,81.47059H79.826L81.05656,77.533H63.21437L67.152,65.16635H78.96492l1.23051-3.87627H68.3825L71.95081,49.9695H89.17822L90.347,46.15492c-9.72121,0-19.44208.06135-29.163,0m-42.26817,0a16.4482,16.4482,0,0,1,2.21468,3.62993L11.102,81.47059h8.61366l5.04517-.06135c14.766,0,21.77988-7.32179,24.73333-16.85828,4.245-13.5973-.92316-18.33469-13.47418-18.33469Z"/></g></g></svg>

View File

@ -0,0 +1,82 @@
---
title: Denso Case Study
linkTitle: Denso
case_study_styles: true
cid: caseStudies
logo: denso_featured_logo.svg
featured: true
weight: 4
quote: >
We got Kubernetes experts involved on our team, and it dramatically accelerated development speed.
new_case_study_styles: true
heading_background: /images/case-studies/denso/banner2.jpg
heading_title_text: Denso
use_gradient_overlay: true
subheading: >
How DENSO Is Fueling Development on the Vehicle Edge with Kubernetes
case_study_details:
- Company: Denso
- Location: Japan
- Industry: Automotive, Edge
---
<h2>Challenge</h2>
<p>DENSO Corporation is one of the biggest automotive components suppliers in the world. With the advent of connected cars, the company launched a Digital Innovation Department to expand into software, working on vehicle edge and vehicle cloud products. But there were several technical challenges to creating an integrated vehicle edge/cloud platform: "the amount of computing resources, the occasional lack of mobile signal, and an enormous number of distributed vehicles," says R&D Product Manager Seiichi Koizumi.</p>
<h2>Solution</h2>
<p>Koizumi's team realized that because mobility services evolve every day, they needed the flexibility of the cloud native ecosystem for their platform. After considering other orchestrators, DENSO went with Kubernetes for orchestration and added Prometheus, Fluentd, Envoy, Istio, and Helm to the platform. Today, DENSO is using a vehicle edge computer, a private Kubernetes cloud, and managed Kubernetes (GKE, EKS, AKS).</p>
<h2>Impact</h2>
<p>Critical layer features can take 2-3 years to implement in the traditional, waterfall model of development at DENSO. With the Kubernetes platform and agile methods, there's a 2-month development cycle for non-critical software. Now, ten new applications are released a year, and a new prototype is introduced every week. "By utilizing Kubernetes managed services, such as GKE/EKS/AKS, we can unify the environment and simplify our maintenance operation," says Koizumi.</p>
{{< case-studies/quote
image="/images/case-studies/denso/banner1.png"
author="SEIICHI KOIZUMI, R&D PRODUCT MANAGER, DIGITAL INNOVATION DEPARTMENT AT DENSO"
>}}
"Another disruptive innovation is coming, so to survive in this situation, we need to change our culture."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
Spun off from Toyota in 1949, DENSO Corporation is one of the top automotive suppliers in the world today, with consolidated net revenue of $48.3 billion.
{{< /case-studies/lead >}}
<p>The company's mission is "contributing to a better world by creating value together with a vision for the future"—and part of that vision in recent years has been development on the vehicle edge and vehicle cloud.</p>
<p>With the advent of connected cars, DENSO established a Digital Innovation Department to expand its business beyond the critical layer of the engine, braking systems, and other automotive parts into the non-critical analytics and entertainment layer. Comparing connected cars to smartphones, R&D Product Manager Seiichi Koizumi says DENSO wants the ability to quickly and easily develop and install apps for the "blank slate" of the car, and iterate them based on the driver's preferences. Thus "we need a flexible application platform," he says.</p>
<p>But working on vehicle edge and vehicle cloud products meant there were several technical challenges: "the amount of computing resources, the occasional lack of mobile signal, and an enormous number of distributed vehicles," says Koizumi. "We are tackling these challenges to create an integrated vehicle edge/cloud platform."</p>
{{< case-studies/quote author="SEIICHI KOIZUMI, R&D PRODUCT MANAGER, DIGITAL INNOVATION DEPARTMENT AT DENSO" >}}
"We got Kubernetes experts involved on our team, and it dramatically accelerated development speed."
{{< /case-studies/quote >}}
<p>Koizumi's team realized that because mobility services evolve every day, they needed the flexibility of the cloud native ecosystem for their platform. As they evaluated technologies, they were led by these criteria: Because their service-enabler business needed to support multiple cloud and on-premise environments, the solution needed to be cloud agnostic, with no vendor lock-in and open governance. It also had to support an edge-cloud integrated environment.</p>
<p>After considering other orchestrators, DENSO went with Kubernetes for orchestration and added Prometheus, Fluentd, Envoy, Istio, and Helm to the platform. During implementation, the team used "design thinking to clarify use cases and their value proposition," says Koizumi. Next, an agile development team worked on a POC, then an MVP, in DevOps style. "Even in the development phase, we are keeping a channel to end users," he adds.</p>
<p>One lesson learned during this process was the value of bringing in experts. "We tried to learn Kubernetes and cloud native technologies from scratch, but it took more time than expected," says Koizumi. "We got Kubernetes experts involved on our team, and it dramatically accelerated development speed."</p>
{{< case-studies/quote
image="/images/case-studies/denso/banner4.jpg"
author="SEIICHI KOIZUMI, R&D PRODUCT MANAGER, DIGITAL INNOVATION DEPARTMENT AT DENSO"
>}}
"By utilizing Kubernetes managed services, such as GKE/EKS/AKS, we can unify the environment and simplify our maintenance operation."
{{< /case-studies/quote >}}
<p>Today, DENSO is using a vehicle edge computer, a private Kubernetes cloud, and managed Kubernetes on GKE, EKS, and AKS. "We are developing a vehicle edge/cloud integrated platform based on a microservice and service mesh architecture," says Koizumi. "We extend cloud into multiple vehicle edges and manage it as a unified platform."</p>
<p>Cloud native has enabled DENSO to deliver applications via its new dash cam, which has a secure connection that collects data to the cloud. "It's like a smartphone," he says. "We are installing new applications and getting the data through the cloud, and we can keep updating new applications all through the dash cam."</p>
<p>The unified cloud native platform, combined with agile development, has had a positive impact on productivity. Critical layer features—those involving engines or braking systems, for example—can take 2-3 years to implement at DENSO, because of the time needed to test safety, but also because of the traditional, waterfall model of development. With the Kubernetes platform and agile methods, there's a 2-month development cycle for non-critical software. Now, ten new applications are released a year, and with the department's scrum-style development, a new prototype is introduced every week.</p>
<p>Application portability has also led to greater developer efficiency. "There's no need to care about differences in the multi-cloud platform anymore," says Koizumi. Now, "we are also trying to have the same portability between vehicle edge and cloud platform."</p>
<p>Another improvement: Automotive Tier-1 suppliers like DENSO always have multiple Tier-2 suppliers. "To provide automotive-grade high-availability services, we tried to do the same thing on a multi-cloud platform," says Koizumi. Before Kubernetes, maintaining two different systems simultaneously was difficult. "By utilizing Kubernetes managed services, such as GKE/EKS/AKS, we can unify the environment and simplify our maintenance operation," he says.</p>
<p>Cloud native has also profoundly changed the culture at DENSO. The Digital Innovation Department is known as "Noah's Ark," and it has grown from 2 members to 70—with plans to more than double in the next year. The way they operate is completely different from the traditional Japanese automotive culture. But just as the company embraced change brought by hybrid cars in the past decade, Koizumi says, they're doing it again now, as technology companies have moved into the connected car space. "Another disruptive innovation is coming," he says, "so to survive in this situation, we need to change our culture."</p>
<p>Looking ahead, Koizumi and his team are expecting serverless and zero-trust security architecture to be important enhancements of Kubernetes. They are glad DENSO has come along for the ride. "Mobility service businesses require agility and flexibility," he says. "DENSO is trying to bring cloud native flexibility into the vehicle infrastructure."</p>

Binary file not shown.

File diff suppressed because one or more lines are too long

Binary file not shown.

View File

@ -0,0 +1,89 @@
---
title: GolfNow Case Study
case_study_styles: true
cid: caseStudies
new_case_study_styles: true
heading_background: /images/case-studies/golfnow/banner1.jpg
heading_title_logo: /images/golfnow_logo.png
subheading: >
Saving Time and Money with Cloud Native Infrastructure
case_study_details:
- Company: GolfNow
- Location: Orlando, Florida
- Industry: Golf Industry Technology and Services Provider
---
<h2>Challenge</h2>
<p>A member of the <a href="http://www.nbcunicareers.com/our-businesses/nbc-sports-group">NBC Sports Group</a>, <a href="https://www.golfnow.com/">GolfNow</a> is the golf industry's technology and services leader, managing 10 different products, as well as the largest e-commerce tee time marketplace in the world. As its business began expanding rapidly and globally, GolfNow's monolithic application became problematic. "We kept growing our infrastructure vertically rather than horizontally, and the cost of doing business became problematic," says Sheriff Mohamed, GolfNow's Director, Architecture. "We wanted the ability to more easily expand globally."</p>
<h2>Solution</h2>
<p>Turning to microservices and containerization, GolfNow began moving its applications and databases from third-party services to its own clusters running on <a href="https://www.docker.com/">Docker</a> and <a href="http://kubernetes.io/">Kubernetes.</a></p>
<h2>Impact</h2>
<p>The results were immediate. While maintaining the same capacity—and beyond, during peak periods—GolfNow saw its infrastructure costs for the first application virtually cut in half.</p>
{{< case-studies/quote author="SHERIFF MOHAMED, DIRECTOR, ARCHITECTURE AT GOLFNOW" >}}
"With our growth we obviously needed to expand our infrastructure, and we kept growing vertically rather than horizontally. We were basically wasting money and doubling the cost of our infrastructure."
{{< /case-studies/quote >}}
{{< case-studies/lead >}}
It's not every day that you can say you've slashed an operating expense by half.
{{< /case-studies/lead >}}
<p>But Sheriff Mohamed and Josh Chandler did just that when they helped lead their company, <a href="https://www.golfnow.com/">GolfNow</a>, on a journey from a monolithic to a containerized, cloud native infrastructure managed by Kubernetes.</p>
<p>A top-performing business within the NBC Sports Group, GolfNow is a technology and services company with the largest tee time marketplace in the world. GolfNow serves 5 million active golfers across 10 different products. In recent years, the business had grown so fast that the infrastructure supporting their giant monolithic application (written in C#.NET and backed by SQL Server database management system) could not keep up. "With our growth we obviously needed to expand our infrastructure, and we kept growing vertically rather than horizontally," says Sheriff, GolfNow's Director, Architecture. "Our costs were growing exponentially. And on top of that, we had to build a Disaster Recovery (DR) environment, which then meant we'd have to copy exactly what we had in our original data center to another data center that was just the standby. We were basically wasting money and doubling the cost of our infrastructure."</p>
<p>In moving just the first of GolfNow's important applications—a booking engine for golf courses and B2B marketing platform—from third-party services to their own Kubernetes environment, "our bill went down drastically," says Sheriff.</p>
<p>The path to those stellar results began in late 2014. In order to support GolfNow's global growth, the team decided that the company needed to have multiple data centers and the ability to quickly and easily re-route traffic as needed. "From there we knew that we needed to go in a direction of breaking things apart, microservices, and containerization," says Sheriff. "At the time we were trying to get away from <a href="https://www.microsoft.com/net">C#.NET</a> and <a href="https://www.microsoft.com/en-cy/sql-server/sql-server-2016">SQL Server</a> since it didn't run very well on Linux, where everything containerized was running smoothly."</p>
<p>To that end, the team shifted to working with <a href="https://nodejs.org/">Node.js</a>, the open-source, cross-platform JavaScript runtime environment for developing tools and applications, and <a href="https://www.mongodb.com/">MongoDB</a>, the open-source database program. At the time, <a href="https://www.docker.com/">Docker</a>, the platform for deploying applications in containers, was still new. But once the team began experimenting with it, Sheriff says, "we realized that was the way we wanted to go, especially since that's the way the industry is heading."</p>
{{< case-studies/quote image="/images/case-studies/golfnow/banner3.jpg" >}}
"The team migrated the rest of the application into their Kubernetes cluster. And the impact was immediate: On top of cutting monthly costs by a large percentage, says Sheriff, 'Running at the same capacity and during our peak time, we were able to horizontally grow. Since we were using our VMs more efficiently with containers, we didn't have to pay extra money at all.'"
{{< /case-studies/quote >}}
<p>GolfNow's dev team ran an "internal, low-key" proof of concept and were won over. "We really liked how easy it was to be able to pass containers around to each other and have them up and running in no time, exactly the way it was running on my machine," says Sheriff. "Because that is always the biggest gripe that Ops has with developers, right? 'It worked on my machine!' But then we started getting to the point of, 'How do we make sure that these things stay up and running?'"</p>
<p>That led the team on a quest to find the right orchestration system for the company's needs. Sheriff says the first few options they tried were either too heavy or "didn't feel quite right." In late summer 2015, they discovered the just-released <a href="http://kubernetes.io/">Kubernetes</a>, which Sheriff immediately liked for its ease of use. "We did another proof of concept," he says, "and Kubernetes won because of the fact that the community backing was there, built on top of what Google had already done."</p>
<p>But before they could go with Kubernetes, <a href="http://www.nbc.com/">NBC</a>, GolfNow's parent company, also asked them to comparison shop with another company. Sheriff and his team liked the competing company's platform user interface, but didn't like that its platform would not allow containers to run natively on Docker. With no clear decision in sight, Sheriff's VP at GolfNow, Steve McElwee, set up a three-month trial during which a GolfNow team (consisting of Sheriff and Josh, who's now Lead Architect, Open Platforms) would build out a Kubernetes environment, and a large NBC team would build out one with the other company's platform.</p>
<p>"We spun up the cluster and we tried to get everything to run the way we wanted it to run," Sheriff says. "The biggest thing that we took away from it is that not only did we want our applications to run within Kubernetes and Docker, we also wanted our databases to run there. We literally wanted our entire infrastructure to run within Kubernetes."</p>
<p>At the time there was nothing in the community to help them get Kafka and MongoDB clusters running within a Kubernetes and Docker environment, so Sheriff and Josh figured it out on their own, taking a full month to get it right. "Everything started rolling from there," Sheriff says. "We were able to get all our applications connected, and we finished our side of the proof of concept a month in advance. My VP was like, 'Alright, it's over. Kubernetes wins.'"</p>
<p>The next step, beginning in January 2016, was getting everything working in production. The team focused first on one application that was already written in Node.js and MongoDB. A booking engine for golf courses and B2B marketing platform, the application was already going in the microservice direction but wasn't quite finished yet. At the time, it was running in Heroku and <a href="https://devcenter.heroku.com/articles/mongohq">Compose</a> and other third-party services—resulting in a large monthly bill.</p>
{{< case-studies/quote image="/images/case-studies/golfnow/banner4.jpg" >}}
"'The time I spent actually moving the applications was under 30 seconds! We can move data centers in just incredible amounts of time. If you haven't come from the Kubernetes world you wouldn't believe me.' Sheriff puts it in these terms: 'Before Kubernetes I wasn't sleeping at night, literally. I was woken up all the time, because things were down. After Kubernetes, I've been sleeping at night.'"
{{< /case-studies/quote >}}
<p>"The goal was to take all of that out and put it within this new platform we've created with Kubernetes on <a href="https://cloud.google.com/compute/">Google Compute Engine (GCE)</a>," says Sheriff. "So we ended up building piece by piece, in parallel, what was out in Heroku and Compose, in our Kubernetes cluster. Then, literally, just switched configs in the background. So in Heroku we had the app running hitting a Compose database. We'd take the config, change it and make it hit the database that was running in our cluster."</p>
<p>Using this procedure, they were able to migrate piecemeal, without any downtime. The first migration was done during off hours, but to test the limits, the team migrated the second database in the middle of the day, when lots of users were running the application. "We did it," Sheriff says, "and again it was successful. Nobody noticed."</p>
<p>After three weeks of monitoring to make sure everything was running stable, the team migrated the rest of the application into their Kubernetes cluster. And the impact was immediate: On top of cutting monthly costs by a large percentage, says Sheriff, "Running at the same capacity and during our peak time, we were able to horizontally grow. Since we were using our VMs more efficiently with containers, we didn't have to pay extra money at all."</p>
<p>Not only were they saving money, but they were also saving time. "I had a meeting this morning about migrating some applications from one cluster to another," says Josh. "I spent about 2 hours explaining the process. The time I spent actually moving the applications was under 30 seconds! We can move data centers in just incredible amounts of time. If you haven't come from the Kubernetes world you wouldn't believe me." Sheriff puts it in these terms: "Before Kubernetes I wasn't sleeping at night, literally. I was woken up all the time, because things were down. After Kubernetes, I've been sleeping at night."</p>
<p>A small percentage of the applications on GolfNow have been migrated over to the Kubernetes environment. "Our Core Team is rewriting a lot of the .NET applications into <a href="https://www.microsoft.com/net/core">.NET Core</a> [which is compatible with Linux and Docker] so that we can run them within containers," says Sheriff.</p>
<p>Looking ahead, Sheriff and his team want to spend 2017 continuing to build a whole platform around Kubernetes with <a href="https://github.com/drone/drone">Drone</a>, an open-source continuous delivery platform, to make it more developer-centric. "Now they're able to manage configuration, they're able to manage their deployments and things like that, making all these subteams that are now creating all these microservices, be self sufficient," he says. "So it can pull us away from applications and allow us to just make sure the cluster is running and healthy, and then actually migrate that over to our Ops team."</p>
{{< case-studies/quote >}}
"Having gone from complete newbies to production-ready in three months, the GolfNow team is eager to encourage other companies to follow their lead. 'This is The Six Million Dollar Man of the cloud right now,' adds Josh. 'Just try it out, watch it happen. I feel like the proof is in the pudding when you look at these kinds of application stacks. They're faster, they're more resilient.'"
{{< /case-studies/quote >}}
<p>And long-term, Sheriff has an even bigger goal for getting more people into the Kubernetes fold. "We're actually trying to make this platform generic enough so that any of our sister companies can use it if they wish," he says. "Most definitely I think it can be used as a model. I think the way we migrated into it, the way we built it out, are all ways that I think other companies can learn from, and should not be afraid of."</p>
<p>The GolfNow team is also giving back to the Kubernetes community by open-sourcing a bot framework that Josh built. "We noticed that the dashboard user interface is actually moving a lot faster than when we started," says Sheriff. "However we realized what we needed was something that's more of a bot that really helps us administer Kubernetes as a whole through Slack." Josh explains: "With the Kubernetes-Slack integration, you can essentially hook into a cluster and then issue commands and edit configurations. We've tried to simplify the security configuration as much as possible. We hope this will be our major thank you to Kubernetes, for everything you've given us."</p>
<p>Having gone from complete newbies to production-ready in three months, the GolfNow team is eager to encourage other companies to follow their lead. The lessons they've learned: "You've got to have buy-in from your boss," says Sheriff. "Another big deal is having two to three people dedicated to this type of endeavor. You can't have people who are half in, half out." And if you don't have buy-in from the get go, proving it out will get you there.</p>
<p>"This is The Six Million Dollar Man of the cloud right now," adds Josh. "Just try it out, watch it happen. I feel like the proof is in the pudding when you look at these kinds of application stacks. They're faster, they're more resilient."</p>

Binary file not shown.

View File

@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#005da5;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-4.55738" y="-3.79362" width="223.25536" height="134.51136"/><path d="M70.73011,68.57377c-7.72723,0-10.07255-2.04907-10.07255-4.666V54.5512h5.061v8.83817c0,2.14782,2.24658,2.74033,5.25847,2.74033a39.0795,39.0795,0,0,0,4.71533-.1975V54.57589h5.061V68.006A79.45679,79.45679,0,0,1,70.73011,68.57377Z"/><polygon points="25.502 68.228 25.502 62.575 15.529 62.575 15.529 68.228 10.468 68.228 10.468 54.527 15.529 54.527 15.529 60.131 25.502 60.131 25.502 54.527 30.563 54.527 30.563 68.228 25.502 68.228"/><path d="M92.40584,57.26684h13.677V54.52651H91.34428c-3.9994,0-5.3819.71594-5.3819,2.7897V68.25283h5.061V62.5253H106.0581V60.08123H91.02333V58.1309C91.048,57.61246,91.34428,57.26684,92.40584,57.26684Z"/><path d="M116.501,57.26684c-1.08626,0-1.35782.34562-1.35782.83938v1.99969h15.03476v2.34533H115.14315v2.1972c0,.49375.29625.83938,1.35782.83938h13.67694v2.74032H115.4394c-3.9994,0-5.3819-.71594-5.3819-2.7897V57.29153c0-2.07377,1.3825-2.78971,5.3819-2.78971h14.73852v2.74033H116.501Z"/><path class="cls-2" d="M135.28825,65.66063c0-.69125.395-.93813,1.58-.93813h2.02439c1.16032,0,1.58.24688,1.58.93813V67.3147c0,.69126-.395.93813-1.58.93813h-2.02439c-1.16032,0-1.58-.24687-1.58-.93813Z"/><path d="M45.72154,59.785a50.47135,50.47135,0,0,0-8.1716.64187c-1.45657.22219-2.09845.79-2.09845,2.02439v3.45627c0,1.23438.64188,1.77751,2.09845,2.02438a50.47023,50.47023,0,0,0,8.1716.64188,66.931,66.931,0,0,0,9.85037-.61719V59.069c0-4.29564-1.65407-4.54252-7.25816-4.54252H37.05619v2.37H48.1903c2.09845,0,2.29595.56782,2.29595,1.82689V59.785Zm0,6.51753a24.08331,24.08331,0,0,1-4.1722-.32094c-.71595-.12344-1.03688-.395-1.03688-.98751V63.38937c0-.61719.32094-.88876,1.03688-.98751a24.08232,24.08232,0,0,1,4.1722-.32093h4.74V66.1297C49.49875,66.22845,47.62249,66.30251,45.72154,66.30251Z"/><path d="M152.17459,68.42564c-3.9994,0-7.1841-2.32063-7.1841-6.98659,0-4.22159,2.81438-6.9866,7.38159-6.9866a9.10468,9.10468,0,0,1,3.82659.74063l-.64188,1.13563a7.75929,7.75929,0,0,0-3.06127-.59251c-3.30814,0-5.1844,2.22189-5.1844,5.77691,0,3.82658,2.24657,5.62878,4.78941,5.62878a7.32732,7.32732,0,0,0,2.1972-.27157V61.76h-3.11065V60.62435h5.28316V67.7097A12.10491,12.10491,0,0,1,152.17459,68.42564Z"/><path d="M165.259,58.7481a4.6438,4.6438,0,0,0-1.35782-.17282,4.308,4.308,0,0,0-1.65407.27157v9.38129h-2.17251V58.15559a12.37618,12.37618,0,0,1,4.51783-.74063c.49375,0,1.03688.04938,1.16032.04938Z"/><path d="M172.44312,68.42564c-3.5797,0-5.20908-2.32063-5.20908-5.53s1.62938-5.50534,5.20908-5.50534,5.20908,2.32063,5.20908,5.50534S176.02283,68.42564,172.44312,68.42564Zm0-9.92442c-2.29594,0-3.06127,1.9997-3.06127,4.36971s.76531,4.3944,3.06127,4.3944,3.06127-2.02439,3.06127-4.3944S174.76376,58.50122,172.44312,58.50122Z"/><path d="M185.57694,68.42564c-3.16,0-4.81409-1.08625-4.81409-3.481V57.58778h2.14783v7.431c0,1.48126.86407,2.22189,2.691,2.22189A7.23093,7.23093,0,0,0,187.947,66.895V57.58778h2.14783V67.63564A12.37028,12.37028,0,0,1,185.57694,68.42564Z"/><path 
d="M197.89607,68.401a9.11009,9.11009,0,0,1-2.04907-.24688v4.32034h-2.12313V58.1309a10.93672,10.93672,0,0,1,4.09814-.74062c3.77721,0,5.851,2.09844,5.851,5.33252C203.64829,66.27782,201.377,68.401,197.89607,68.401Zm-.17281-9.94912a5.46342,5.46342,0,0,0-1.87626.27157v8.34441a5.51753,5.51753,0,0,0,1.58.22219c2.691,0,4.07346-1.62938,4.07346-4.46846C201.50046,60.15529,200.48827,58.45184,197.72326,58.45184Z"/></svg>

Binary file not shown.

Some files were not shown because too many files have changed in this diff.