Add case studies from cncf.io (#14520)

* Add case studies from cncf.io

* Delete index.html

* Update index.html

* Update index.html

* Update index.html

* Update index.html

* remove Financial Times

* update links
pull/14747/head
Alex Contini 2019-06-05 15:08:12 -04:00 committed by Kubernetes Prow Robot
parent 83d5e1e77f
commit 82baf65a88
70 changed files with 1056 additions and 0 deletions

@@ -0,0 +1,96 @@
---
title: Ant Financial Case Study
linkTitle: ant-financial
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
featured: false
---
<div class="banner1" style="background-image: url('/images/CaseStudy_antfinancial_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/antfinancial_logo.png" class="header_logo" style="width:20%;margin-bottom:-2.5%"><br> <div class="subhead" style="margin-top:1%">Ant Financial's Hypergrowth Strategy Using Kubernetes
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>Ant Financial</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Hangzhou, China</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Financial Services</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%">
<h2>Challenge</h2>
Officially founded in October 2014, <a href="https://www.antfin.com/index.htm?locale=en_us">Ant Financial</a> originated from <a href="https://global.alipay.com/">Alipay</a>, the world's largest online payment platform, which launched in 2004. The company also offers numerous other services leveraging technology innovation. With the volume of transactions Alipay handles for its 900+ million users worldwide (through its local and global partners)—256,000 transactions per second at the peak of Double 11 Singles Day 2017, and total gross merchandise value of $31 billion for Singles Day 2018—not to mention that of its other services, Ant Financial faces a “data processing challenge in a whole new way,” says Haojie Hang, who is responsible for Product Management for the Storage and Compute Group. “We see three major problems of operating at that scale: how to provide real-time compute, storage, and processing capability, for instance to make real-time recommendations for fraud detection; how to provide intelligence on top of this data, because there's too much data and then we're not getting enough insight; and how to apply security in the application level, in the middleware level, the system level, even the chip level.” In order to provide reliable and consistent services to its customers, Ant Financial embraced containers in early 2014, and soon needed an orchestration solution for the tens-of-thousands-of-node clusters in its data centers.
<h2>Solution</h2>
After investigating several technologies, the team chose <a href="https://kubernetes.io/">Kubernetes</a> for orchestration, as well as a number of other CNCF projects, including <a href="https://prometheus.io/">Prometheus</a>, <a href="https://opentracing.io/">OpenTracing</a>, <a href="https://coreos.com/etcd/">etcd</a> and <a href="https://coredns.io/">CoreDNS</a>. “In late 2016, we decided that Kubernetes will be the de facto standard,” says Hang. “Looking back, we made the right bet on the right technology. But then we needed to move the production workload from the legacy infrastructure to the latest Kubernetes-enabled platform, and that took some time, because we are very careful in terms of reliability and consistency.” All core financial systems were containerized by November 2017, and the migration to Kubernetes is ongoing.
<br>
<h2>Impact</h2>
“We've seen at least tenfold improvement in terms of operations with cloud native technology, which means you can have a tenfold increase in terms of output,” says Hang. Ant also provides its fully integrated financial cloud platform to business partners around the world, and hopes to power the next generation of digital banking with deep experience in service innovation and technology expertise. Hang says the team hasn't begun to focus on optimizing the Kubernetes platform, either: “Because we're still in the hyper growth stage, we're not in a mode where we do cost saving yet.”
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"In late 2016, we decided that Kubernetes will be the de facto standard. Looking back, we made the right bet on the right technology."
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- HAOJIE HANG, PRODUCT MANAGEMENT, ANT FINANCIAL</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>A spinoff of the multinational conglomerate Alibaba, Ant Financial boasts a $150+ billion valuation and the scale to match. The fintech startup, launched in 2014, comprises Alipay, the world's largest online payment platform, and numerous other services leveraging technology innovation.</h2>
And the volume of transactions that Alipay handles for over 900 million users worldwide (through its local and global partners) is staggering: 256,000 per second at the peak of Double 11 Singles Day 2017, and total gross merchandise value of $31 billion for Singles Day 2018. With the mission of “bringing the world equal opportunities,” Ant Financial is dedicated to creating an open, shared credit system and financial services platform through technology innovations.
<br><br>
Combine that with the operations of its other properties—such as the Huabei online credit system, Jiebei lending service, and the 350-million-user <a href="https://en.wikipedia.org/wiki/Ant_Forest">Ant Forest</a> green energy mobile app—and Ant Financial faces a “data processing challenge in a whole new way,” says Haojie Hang, who is responsible for Product Management for the Storage and Compute Group. “We see three major problems of operating at that scale: how to provide real-time compute, storage, and processing capability, for instance to make real-time recommendations for fraud detection; how to provide intelligence on top of this data, because there's too much data and we're not getting enough insight; and how to apply security in the application level, in the middleware level, the system level, even the chip level.”
<br><br>
To address those challenges and provide reliable and consistent services to its customers, Ant Financial embraced <a href="https://www.docker.com/">Docker</a> containerization in 2014. But they soon realized that they needed an orchestration solution for some tens-of-thousands-of-node clusters in the company's data centers.
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_antfinancial_banner3.jpg')">
<div class="banner3text">
"On Double 11 this year, we had plenty of nodes on Kubernetes, but compared to the whole scale of our infrastructure, this is still in progress."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- RANGER YU, GLOBAL TECHNOLOGY PARTNERSHIP & DEVELOPMENT, ANT FINANCIAL</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
The team investigated several technologies, including Docker Swarm and Mesos. “We did a lot of POCs, but we're very careful in terms of production systems, because we want to make sure we don't lose any data,” says Hang. “You cannot afford to have a service downtime for one minute; even one second has a very, very big impact. We operate every day under pressure to provide reliable and consistent services to consumers and businesses in China and globally.”
<br><br>
Ultimately, Hang says Ant chose Kubernetes because it checked all the boxes: a strong community, technology that “will be relevant in the next three to five years,” and a good match for the company's engineering talent. “In late 2016, we decided that Kubernetes will be the de facto standard,” says Hang. “Looking back, we made the right bet on the right technology. But then we needed to move the production workload from the legacy infrastructure to the latest Kubernetes-enabled platform. We spent a lot of time learning and then training our people to build applications on Kubernetes well.”
<br><br>
All core financial systems were containerized by November 2017, and the migration to Kubernetes is ongoing. Ant's platform also leverages a number of other CNCF projects, including <a href="https://prometheus.io/">Prometheus</a>, <a href="https://opentracing.io/">OpenTracing</a>, <a href="https://coreos.com/etcd/">etcd</a> and <a href="https://coredns.io/">CoreDNS</a>. “On Double 11 this year, we had plenty of nodes on Kubernetes, but compared to the whole scale of our infrastructure, this is still in progress,” says Ranger Yu, Global Technology Partnership & Development.
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_antfinancial_banner4.jpg')">
<div class="banner4text">
"We're very grateful for CNCF and this amazing technology, which we need as we continue to scale globally. We're definitely embracing the community and open sourcing more in the future." <br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- HAOJIE HANG, PRODUCT MANAGEMENT, ANT FINANCIAL</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
Still, there has already been an impact. “Cloud native technology has benefited us greatly in terms of efficiency,” says Hang. “In general, we want to make sure our infrastructure is nimble and flexible enough for the work that could happen tomorrow. That's the goal. And with cloud native technology, we've seen at least tenfold improvement in operations, which means you can have a tenfold increase in terms of output. Let's say you are operating 10 nodes with one person. With cloud native, tomorrow you can have 100 nodes.”
<br><br>
Ant also provides its financial cloud platform to partners around the world, and hopes to power the next generation of digital banking with deep experience in service innovation and technology expertise. Hang says the team hasn't begun to focus on optimizing the Kubernetes platform, either: “Because we're still in the hyper growth stage, we're not in a mode where we do cost-saving yet.”
<br><br>
The CNCF community has also been a valuable asset during Ant Financial's move to cloud native. “If you are applying a new technology, it's very good to have a community to discuss technical problems with other users,” says Hang. “We're very grateful for CNCF and this amazing technology, which we need as we continue to scale globally. We're definitely embracing the community and open sourcing more in the future.”
</div>
<div class="banner5" >
<div class="banner5text">
"In China, we are the North Star in terms of innovation in financial and other related services. We definitely want to make sure we're still leading in the next 5 to 10 years with our investment in technology."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- HAOJIE HANG, PRODUCT MANAGEMENT, ANT FINANCIAL</span></div>
</div>
<div class="fullcol">
In fact, the company has already started to open source some of its <a href="https://github.com/alipay">cloud native middleware</a>. “We are going to be very proactive about that,” says Yu. “CNCF provided a platform so everyone can plug in or contribute components. This is very good open source governance.”
<br><br>
Looking ahead, the Ant team will continue to evaluate many other CNCF projects. Building a service mesh community in China, the team has brought together many China-based companies and developers to discuss the potential of that technology. “Service mesh is very attractive for Chinese developers and end users because we have a lot of legacy systems running now, and it's an ideal mid-layer to glue everything together, both new and legacy,” says Hang. “For new technologies, we look very closely at whether they will last.”
<br><br>
At Ant, Kubernetes passed that test with flying colors, and the team hopes other companies will follow suit. “In China, we are the North Star in terms of innovation in financial and other related services,” says Hang. “We definitely want to make sure we're still leading in the next 5 to 10 years with our investment in technology.”
</div>
</section>

@@ -0,0 +1,99 @@
---
title: City of Montreal Case Study
linkTitle: city-of-montreal
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
featured: false
---
<div class="banner1" style="background-image: url('/images/CaseStudy_montreal_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/montreal_logo.png" class="header_logo" style="width:20%;margin-bottom:-1.2%"><br> <div class="subhead" style="margin-top:1%">City of Montréal - How the City of Montréal Is Modernizing Its 30-Year-Old, Siloed&nbsp;Architecture&nbsp;with&nbsp;Kubernetes
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>City of Montréal</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Montréal, Québec, Canada</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Government</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%">
<h2>Challenge</h2>
Like many governments, Montréal has a number of legacy systems, and “we have systems that are older than some developers working here,” says the city's CTO, Jean-Martin Thibault. “We have mainframes, all flavors of Windows, various flavors of Linux, old and new Oracle systems, Sun servers, all kinds of databases. Like all big corporations, some of the most important systems, like Budget and Human Resources, were developed on mainframes in-house over the past 30 years.” There are over 1,000 applications in all, and most of them were running on different ecosystems. In 2015, a new management team decided to break down those silos and invest in IT in order to move toward a more integrated governance for the city. They needed to figure out how to modernize the architecture.
<h2>Solution</h2>
The first step was containerization. The team started with a small Docker farm of four or five servers, with Rancher providing access to the Docker containers and their logs and Jenkins handling deployment. “We based our effort on the new trends; we understood the benefits of immutability and deployments without downtime and such things,” says Solutions Architect Marc Khouzam. They soon realized they needed orchestration as well, and opted for Kubernetes. Says Enterprise Architect Morgan Martinet: “Kubernetes offered concepts on how you would describe an architecture for any kind of application, and based on those concepts, deploy what's required to run the infrastructure. It was becoming a de facto standard.”
<br>
<h2>Impact</h2>
The time to market has improved drastically, from many months to a few weeks. Deployments went from months to hours. “In the past, you would have to ask for virtual machines, and that alone could take weeks, easily,” says Thibault. “Now you don't even have to ask for anything. You just create your project and it gets deployed.” Kubernetes has also improved the efficiency of how the city uses its compute resources: “Before, the 200 application components we currently run on Kubernetes would have required hundreds of virtual machines, and now, if we're talking about a single environment of production, we are able to run them on 8 machines, counting the masters of Kubernetes,” says Martinet. And it's all done with a small team of just 5 people operating the Kubernetes clusters.
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"We realized the limitations of having a non-orchestrated Docker environment. Kubernetes came to the rescue, bringing in all these features that make it a lot easier to manage and give a lot more benefits to the users."
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- JEAN-MARTIN THIBAULT, CTO, CITY OF MONTRÉAL</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>The second biggest municipality in Canada, Montréal has a large number of legacy systems keeping the government running. And while they don't quite date back to the city's founding in 1642, “we have systems that are older than some developers working here,” jokes the city's CTO, Jean-Martin Thibault.</h2>
“We have mainframes, all flavors of Windows, various flavors of Linux, old and new Oracle systems, Sun servers, all kinds of databases. Some of the most important systems, like Budget and Human Resources, were developed on mainframes in-house over the past 30 years.”
<br><br>
In recent years, that fact became a big pain point. There are over 1,000 applications in all, running on almost as many different ecosystems. In 2015, a new city management team decided to break down those silos, and invest in IT in order to move toward a more integrated governance. “The organization was siloed, so as a result the architecture was siloed,” says Thibault. “Once we got integrated into one IT team, we decided to redo an overall enterprise architecture.”
<br><br>
The first step to modernize the architecture was containerization. “We based our effort on the new trends; we understood the benefits of immutability and deployments without downtime and such things,” says Solutions Architect Marc Khouzam. The team started with a small Docker farm with four or five servers, with Rancher for providing access to the Docker containers and their logs and Jenkins for deployment.
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_montreal_banner3.jpg')">
<div class="banner3text">
"Getting a project running in Kubernetes is entirely dependent on how long you need to program the actual software. It's no longer dependent on deployment. Deployment is so fast that it's negligible."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- MARC KHOUZAM, SOLUTIONS ARCHITECT, CITY OF MONTRÉAL</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
But this Docker farm setup had some limitations, including the lack of self-healing and dynamic scaling based on traffic, and the effort required to optimize server resources and scale to multiple instances of the same container. The team soon realized they needed orchestration as well. “Kubernetes came to the rescue,” says Thibault, “bringing in all these features that make it a lot easier to manage and give a lot more benefits to the users.”
<br><br>
The team had evaluated several orchestration solutions, but Kubernetes stood out because it addressed all of the pain points. (They were also inspired by Yahoo! Japan's use case, which the team members felt came close to their vision.) “Kubernetes offered concepts on how you would describe an architecture for any kind of application, and based on those concepts, deploy what's required to run the infrastructure,” says Enterprise Architect Morgan Martinet. “It was becoming a de facto standard. It also promised portability across cloud providers. The choice of Kubernetes now gives us many options such as running clusters in-house or in any IaaS provider, or even using Kubernetes-as-a-service in any of the major cloud providers.”
<br><br>
Another important factor in the decision was vendor neutrality. “As a government entity, it is essential for us to be neutral in our selection of products and providers,” says Thibault. “The independence of the Cloud Native Computing Foundation from any company provides this.”
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_montreal_banner4.jpg')">
<div class="banner4text">
"Kubernetes has been great. It's been stable, and it provides us with elasticity, resilience, and robustness. While re-architecting for Kubernetes, we also benefited from the monitoring and logging aspects, with centralized logging, Prometheus logging, and Grafana dashboards. We have enhanced visibility of what's being deployed." <br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- MORGAN MARTINET, ENTERPRISE ARCHITECT, CITY OF MONTRÉAL</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
The Kubernetes implementation began with the deployment of a small cluster using an internal Ansible playbook, which was soon replaced by the Kismatic distribution. Given the complexity they saw in operating a Kubernetes platform, they decided to provide development groups with an automated CI/CD solution based on Helm. “An integrated CI/CD solution on Kubernetes standardized how the various development teams designed and deployed their solutions, but allowed them to remain independent,” says Khouzam.
<br><br>
During the re-architecting process, the team also added Prometheus for monitoring and alerting, Fluentd for logging, and Grafana for visualization. “We have enhanced visibility of what's being deployed,” says Martinet. Adds Khouzam: “The big benefit is we can track anything, even things that don't run inside the Kubernetes cluster. It's our way to unify our monitoring effort.”
<br><br>
All together, the cloud native solution has had a positive impact on velocity as well as administrative overhead. With standardization, code generation, automatic deployments into Kubernetes, and standardized monitoring through Prometheus, the time to market has improved drastically, from many months to a few weeks. Deployments went from months and weeks of planning down to hours. “In the past, you would have to ask for virtual machines, and that alone could take weeks to properly provision,” says Thibault. Plus, for dedicated systems, experts often had to be brought in to install them with their own recipes, which could take weeks and months.
<br><br>
Now, says Khouzam, “we can deploy pretty much any application that's been Dockerized without any help from anybody. Getting a project running in Kubernetes is entirely dependent on how long you need to program the actual software. It's no longer dependent on deployment. Deployment is so fast that it's negligible.”
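The speed Khouzam describes comes from Kubernetes' declarative model: a deploy is just submitting desired state and letting the cluster reconcile toward it. As a minimal sketch (the helper function, app name, and image are invented for illustration, not taken from the city's actual setup), the entire "deployment artifact" can be a few lines of declared state:

```python
import json

def deployment_manifest(name, image, replicas=3):
    """Build a minimal Kubernetes apps/v1 Deployment as plain data.

    Hypothetical helper for illustration: once this declared state is
    applied, the cluster creates and heals the pods on its own, which
    is why deployment time stops depending on provisioning.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }

# Invented example app; piped to `kubectl apply -f -`, this JSON would
# be the whole deployment step.
manifest = deployment_manifest("permit-portal", "registry.example/permit-portal:1.4", replicas=2)
print(json.dumps(manifest, indent=2))
```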
</div>
<div class="banner5" >
<div class="banner5text">
"We're working with the market when possible, to put pressure on our vendors to support Kubernetes, because it's a much easier solution to manage."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- MORGAN MARTINET, ENTERPRISE ARCHITECT, CITY OF MONTRÉAL</span></div>
</div>
<div class="fullcol">
Kubernetes has also improved the efficiency of how the city uses its compute resources: “Before, the 200 application components we currently run in Kubernetes would have required hundreds of virtual machines, and now, if we're talking about a single environment of production, we are able to run them on 8 machines, counting the masters of Kubernetes,” says Martinet. And it's all done with a small team of just five people operating the Kubernetes clusters. Adds Martinet: “It's a dramatic improvement no matter what you measure.”
<br><br>
So it should come as no surprise that the team's strategy going forward is to target Kubernetes as much as they can. “If something can't run inside Kubernetes, we'll wait for it,” says Thibault. That means they haven't moved any of the city's Windows systems onto Kubernetes, though it's something they would like to do. “We're working with the market when possible, to put pressure on our vendors to support Kubernetes, because it's a much easier solution to manage,” says Martinet.
<br><br>
Thibault sees a near future where 60% of the city's workloads are running on a Kubernetes platform—basically any and all of the use cases that they can get to work there. “It's so much more efficient than the way we used to do things,” he says. “There's no looking back.”
</div>
</section>

@@ -0,0 +1,97 @@
---
title: JD.com Case Study
linkTitle: jd-com
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
featured: false
---
<div class="banner1" style="background-image: url('/images/CaseStudy_jdcom_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/jdcom_logo.png" class="header_logo" style="width:17%;margin-bottom:-1%"><br> <div class="subhead" style="margin-top:1%">JD.com: How JD.com Pioneered Kubernetes for E-Commerce at Hyperscale
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>JD.com</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Beijing, China</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>eCommerce</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%">
<h2>Challenge</h2>
With more than 300 million active users and total 2017 revenue of more than $55 billion, <a href="https://corporate.JD.com/home">JD.com</a> is China's largest retailer, and its operations are the epitome of hyperscale. For example, there are more than a trillion images in JD.com's product databases—with 100 million being added daily—and this enormous amount of data needs to be instantly accessible. In 2014, JD.com moved its applications to containers running on bare metal machines using OpenStack and Docker to "speed up the delivery of our computing resources and make the operations much simpler," says Haifeng Liu, JD.com's Chief Architect. But by the end of 2015, with tens of thousands of nodes running in multiple data centers, "we encountered a lot of problems because our platform was not strong enough, and we suffered from bottlenecks and scalability issues," says Liu. "We needed infrastructure for the next five years of development, now."
<h2>Solution</h2>
JD.com turned to Kubernetes to accommodate its clusters. At the beginning of 2016, the company began to transition from OpenStack to Kubernetes, and today, JD.com runs the world's largest Kubernetes cluster. "Kubernetes has provided a strong foundation on top of which we have customized the solution to suit our needs as China's largest retailer," says Liu.
<br>
<h2>Impact</h2>
"We have greater data center efficiency, better managed resources, and smarter deployment with the Kubernetes platform," says Liu. Deployment time went from several hours to tens of seconds. Efficiency has improved by 20-30%, measured in IT costs. With the further optimizations the team is working on, Liu believes there is the potential to save hundreds of millions of dollars a year. But perhaps the best indication of success was the annual Singles Day shopping event, which ran on the Kubernetes platform for the first time in 2018. Over 11 days, transaction volume on JD.com was $23 billion, and "our e-commerce platforms did great," says Liu. "Infrastructure led the way to prep for 11.11. We took the approach of predicting volume, emulating the behavior of customers to prepare beforehand, and drilled for malfunctions. Because of Kubernetes's scalability, we were able to handle an extremely high level of demand."
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"Kubernetes helped us reduce the complexity of operations to make distributed systems stable and scalable. Most importantly, we can leverage Kubernetes for scheduling resources to reduce hardware costs. That's the&nbsp;big&nbsp;win."
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- HAIFENG LIU, CHIEF ARCHITECT, JD.com</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>With more than 300 million active users and $55.7 billion in annual revenues last year, JD.com is China's largest retailer, and its operations are the epitome of hyperscale.</h2>
For example, there are more than a trillion images in JD.com's product databases for customers, with 100 million being added daily. And this enormous amount of data needs to be instantly accessible to enable a smooth online customer experience.
<br><br>
In 2014, JD.com moved its applications to containers running on bare metal machines using OpenStack and Docker to "speed up the delivery of our computing resources and make the operations much simpler," says Haifeng Liu, JD.com's Chief Architect. But by the end of 2015, with hundreds of thousands of nodes in multiple data centers, "we encountered a lot of problems because our platform was not strong enough, and we suffered from bottlenecks and scalability issues," Liu adds. "We needed infrastructure for the next five years of development, now."
<br><br>
After considering a number of orchestration technologies, JD.com decided to adopt Kubernetes to accommodate its ever-growing clusters. "The main reason is because Kubernetes can give us more efficient, scalable and much simpler application deployments, plus we can leverage it to do flexible platform scheduling," says Liu.
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_jdcom_banner3.jpg')">
<div class="banner3text">
"We customized Kubernetes and built a modern system on top of it. This entire ecosystem of Kubernetes plus our own optimizations have helped us save costs and time."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- HAIFENG LIU, CHIEF ARCHITECT, JD.com</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
The fact that Kubernetes is based on Google's Borg also gave the company confidence. The team liked that Kubernetes has a clear and simple architecture, and that it's developed mostly in Go, which is a popular language within JD.com. Though he felt that at the time Kubernetes "was not mature enough," Liu says, "we adopted it anyway."
<br><br>
The team spent a year developing the new container engine platform based on Kubernetes, and at the end of 2016, began promoting it within the company. "We wanted the cluster to be the default way for creating services, so scalability is easier," says Liu. "We talked to developers, interest grew, and we solved problems together." Some of these problems included networking performance and etcd scalability. "But during the past two years, Kubernetes has become more mature and very stable," he adds.
<br><br>
Today, the company runs the world's largest Kubernetes cluster. "We customized Kubernetes and built a modern system on top of it," says Liu. "This entire ecosystem of Kubernetes plus our own optimizations have helped us save costs and time. We have greater data center efficiency, better managed resources, and smarter deployment with the Kubernetes platform."
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_jdcom_banner4.jpg');width:100%">
<div class="banner4text">
"My advice is first you need to combine this technology with your own businesses, and the second is you need clear goals. You cannot just use the technology because others are using it. You need to consider your own objectives." <br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- HAIFENG LIU, CHIEF ARCHITECT, JD.com</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
The results are clear: Deployment time went from several hours to tens of seconds. Efficiency has improved by 20-30%, measured in IT costs. But perhaps the best indication of success was the annual <a href="https://jdcorporateblog.com/shoppers-snap-up-quality-and-imported-products-on-jd-com-for-record-breaking-singles-day-festival/">Singles Day</a> shopping event, which ran on the Kubernetes platform for the first time in 2018. Over 11 days, transaction volume on JD.com was $23 billion, and "our e-commerce platforms did great," says Liu. "Infrastructure led the way to prep for 11.11. We took the approach of predicting volume, emulating the behavior of customers to prepare beforehand, and drilled for malfunctions. Because of Kubernetes's scalability, we were able to handle an extremely high level of demand."
<br><br>
JD.com is now in its second stage with Kubernetes: The platform is already stable, scalable, and flexible, so the focus is on how to run things much more efficiently to further reduce costs. With the optimizations the team is working on with resource management, Liu believes there is the potential to save hundreds of millions of dollars a year.
<br><br>
"We run Kubernetes and container clusters on roughly tens of thousands of physical bare metal nodes," he says. "Using Kubernetes and leveraging our own machine learning pipeline to predict how many resources we need for each application we use, and our own intelligent scaling algorithm, we can improve our resource usage. If we boost the resource usage, for example, by several percent, that means we can reduce huge hardware costs. Then we don't need that many servers to get that same amount of workload. That can save us a lot of resources."
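The idea Liu describes, predicting demand per application and then sizing deployments with just enough headroom, can be illustrated with a toy capacity rule. This is a hypothetical sketch for illustration only, not JD.com's actual algorithm; the function name, request rates, and headroom factor are all invented.

```python
import math

def required_replicas(predicted_peak_rps, rps_per_replica, headroom=0.2, min_replicas=2):
    """Toy sizing rule: enough replicas to serve a predicted peak request
    rate with a safety headroom, never dropping below a small floor."""
    raw = predicted_peak_rps / rps_per_replica  # replicas at exactly 100% utilization
    return max(min_replicas, math.ceil(raw * (1 + headroom)))

# A model predicting a 9,000 req/s peak, with each replica handling ~350 req/s:
print(required_replicas(9000, 350))  # 31 replicas, rather than a static worst-case fleet
```

Replacing a static worst-case fleet with a prediction-driven rule like this is what lets a few percent of utilization improvement translate directly into fewer servers.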
</div>
<div class="banner5" style="width:100%">
<div class="banner5text">
"We can share our successful experience with the community, and we also receive good feedback from others. So it's mutually beneficial."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- HAIFENG LIU, CHIEF ARCHITECT, JD.com</span></div>
</div>
<div class="fullcol">
JD.com, which won CNCF's 2018 End User Award, is also using <a href="https://helm.sh/">Helm</a>, <a href="https://github.com/containernetworking">CNI</a>, <a href="https://goharbor.io/">Harbor</a>, and <a href="https://vitess.io/">Vitess</a> on its platform. JD.com developers have made considerable contributions to Vitess, the CNCF project for scalable MySQL cluster management, and the company hopes to donate its own project to CNCF in the near future. Community participation is a priority for JD.com. "We have a good partnership with this community," says Liu. "We can share our successful experience with the community, and we also receive good feedback from others. So it's mutually beneficial."
<br><br>
To that end, Liu offers this advice for other companies considering adopting cloud native technology. "First you need to combine this technology with your own businesses, and the second is you need clear goals," he says. "You cannot just use the technology because others are using it. You need to consider your own objectives."
<br><br>
For JD.com's objectives, these cloud native technologies have been an ideal fit with the company's own homegrown innovation. "Kubernetes helped us reduce the complexity of operations to make distributed systems stable and scalable," says Liu. "Most importantly, we can leverage Kubernetes for scheduling resources to reduce hardware costs. That's the big win."
</div>
</section>

---
title: Nerdalize Case Study
linkTitle: nerdalize
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
featured: false
---
<div class="banner1" style="background-image: url('/images/CaseStudy_nerdalize_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/nerdalize_logo.png" class="header_logo" style="width:25%;margin-bottom:-1%"><br> <div class="subhead" style="margin-top:1%">Nerdalize: Providing Affordable and Sustainable Cloud Hosting with Kubernetes
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>Nerdalize</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Delft, Netherlands </b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Cloud Provider</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%">
<h2>Challenge</h2>
Nerdalize offers affordable cloud hosting for customers—and free heat and hot water for people who sign up to house the heating devices that contain the company's servers. The savings Nerdalize realizes by not running data centers are passed on to its customers. When the team began using Docker to make its software more portable, it realized it also needed a container orchestration solution. "As a cloud provider, we have internal services for hosting our backends and billing our customers, but we also need to offer our compute to our end users," says Digital Product Engineer Ad van der Veer. "Since we have these heating devices spread across the Netherlands, we need some way of tying that all together."
<h2>Solution</h2>
After briefly using a basic scheduling setup with another open source tool, Nerdalize switched to Kubernetes. “On top of our heating devices throughout the Netherlands, we have a virtual machine layer, and on top of that we run Kubernetes clusters for our customers,” says van der Veer. “As a small company, we have to provide a rock solid story in terms of the technology. Kubernetes allows us to offer a hybrid solution: You can run this on our cloud, but you can run it on other clouds as well. It runs in your internal hardware if you like. And together with the Docker image standard and our multi-cloud dashboard, that allows them peace of mind.”
<h2>Impact</h2>
Nerdalize prides itself on being a Kubernetes-native cloud provider that charges its customers prices 40% below that of other cloud providers. "Every euro that we have to invest for licensing of software that's not open source comes from that 40%," says van der Veer. If they had used a non-open source orchestration platform instead of Kubernetes, "that would reduce this proposition that we have of 40% less cost to like 30%. Kubernetes directly allows us to have this business model and this strategic advantage." Nerdalize customers also benefit from time savings: One went from spending a day to set up VMs, network, and software, to spinning up a Kubernetes cluster in minutes. And for households using the heating devices, they save an average of 200 euro a year on their heating bill. The environmental impact? The annual reduction in CO2 emissions comes out to 2 tons per Nerdalize household, which is equivalent to a car driving 8,000 km.
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
“We can walk into a boardroom and put a Kubernetes logo up, and people accept it as an established technology. It becomes this centerpiece where other cloud native projects can tie in, so there's a network effect that each project empowers each other. This is something that has a lot of value when we have to talk to customers and convince them that our cloud fits their&nbsp;needs.”
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>— AD VAN DER VEER, PRODUCT ENGINEER, NERDALIZE</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>Nerdalize is a cloud hosting provider that has no data centers. Instead, the four-year-old startup places its servers in homes across the Netherlands, inside heating devices it developed to turn the heat produced by the servers into heating and hot water for the residents.
</h2>
“Households save on their gas bills, and cloud users have a much more sustainable cloud solution,” says Maaike Stoops, Customer Experience Queen at Nerdalize. “And we don't have the overhead of building a data center, so our cloud is up to 40% more affordable.”
<br><br>
That business model has been enabled by the companys adoption of containerization and Kubernetes. “When we just got started, Docker was just introduced,” says Digital Product Engineer Ad van der Veer. “We began with a very basic bare metal setup, but once we developed the business, we saw that containerization technology was super useful to help our customers. As a cloud provider, we have internal services for hosting our backends and billing our customers, but we also need to offer our compute to our end users. Since we have these heating devices spread across the Netherlands, we need some way of tying that all together.”
<br><br>
After trying to develop its own scheduling system using another open source tool, Nerdalize found Kubernetes. “Kubernetes provided us with more functionality out of the gate,” says van der Veer.
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_nerdalize_banner3.jpg')">
<div class="banner3text">
“We always try to get a working version online first, like minimal viable products, and then move to stabilize that,” says van der Veer. “And I think that these kinds of day-two problems are now immediately solved. The rapid prototyping we saw internally is a very valuable aspect of Kubernetes.”<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>— AD VAN DER VEER, PRODUCT ENGINEER, NERDALIZE</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
The team first experimented with a basic use case to run customers' workloads on Kubernetes. “Getting the data working was kind of difficult, and at the time the installation wasn't that simple,” says van der Veer. “Then CNCF started, we saw the community grow, these problems got solved, and from there it became a very easy decision.”
<br><br>
The first Nerdalize product that was launched in 2017 was “100% containerized and Kubernetes native,” says van der Veer. “On top of our heating devices throughout the Netherlands, we have a virtual machine layer, and on top of that we run Kubernetes clusters for our customers. As a small company, we have to provide a rock solid story in terms of the technology. Kubernetes allows us to offer a hybrid solution: You can run this on our cloud, but you can run it on other clouds as well. It runs in your internal hardware if you like. And together with the Docker image standard and our multi-cloud dashboard, that gives them peace of mind.”
<br><br>
Not to mention the 40% cost savings. “Every euro that we have to invest for licensing of software that's not open source comes from that 40%,” says van der Veer. If Nerdalize had used a non-open source orchestration platform instead of Kubernetes, “that would reduce our cost savings proposition to like 30%. Kubernetes directly allows us to have this business model and this strategic advantage.”
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_nerdalize_banner4.jpg');width:100%">
<div class="banner4text">
“One of our customers used to spend up to a day setting up the virtual machines, network and software every time they wanted to run a project in the cloud. On our platform, with Docker and Kubernetes, customers can have their projects running in a couple of minutes.”
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- MAAIKE STOOPS, CUSTOMER EXPERIENCE QUEEN, NERDALIZE</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
Nerdalize now has customers, from individual engineers to data-intensive startups and established companies, all around the world. (For the time being, though, the heating devices are exclusive to the Netherlands.) One of the most common use cases is batch workloads used by data scientists and researchers, and the time savings for these end users is profound. “One of our customers used to spend up to a day setting up the virtual machines, network and software every time they wanted to run a project in the cloud,” says Stoops. “On our platform, with Docker and Kubernetes, customers can have their projects running in a couple of minutes.”
<br><br>
As for households using the heating devices, they save an average of 200 euro a year on their heating bill. The environmental impact? The annual reduction in CO2 emissions comes out to 2 tons per Nerdalize household, which is equivalent to a car driving 8,000 km.
<br><br>
For the Nerdalize team, feature development—such as the accessible command line interface called Nerd, which recently went live—has also been sped up by Kubernetes. “We always try to get a working version online first, like minimal viable products, and then move to stabilize that,” says van der Veer. “And I think that these kinds of day-two problems are now immediately solved. The rapid prototyping we saw internally is a very valuable aspect of Kubernetes.”
<br><br>
Another unexpected benefit has been the growing influence and reputation of Kubernetes. “We can walk into a boardroom and put a Kubernetes logo up, and people accept it as an established technology,” says van der Veer. “It becomes this centerpiece where other cloud native projects can tie in, so there's a network effect that each project empowers each other. This is something that has a lot of value when we have to convince customers that our cloud fits their needs.”
</div>
<div class="banner5" >
<div class="banner5text">
“It shouldn't be too big of a hassle and too large of a commitment. It should be fun and easy for end users. So we really love Kubernetes in that way.”<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- MAAIKE STOOPS, CUSTOMER EXPERIENCE QUEEN, NERDALIZE</span></div>
</div>
<div class="fullcol">
In fact, Nerdalize is currently looking into implementing other CNCF projects, such as <a href="https://prometheus.io/">Prometheus</a> for monitoring and <a href="https://rook.io/">Rook</a>, “which should help us with some of the data problems that we want to solve for our customers,” says van der Veer.
<br><br>
In the coming year, Nerdalize will scale up the number of households running its hardware to 50, or the equivalent of a small scale data center. Geographic redundancy and greater server ability for customers are two main goals. Spreading the word about Kubernetes is also in the game plan. “We offer a free namespace on our sandbox, multi-tenant Kubernetes cluster for anyone to try,” says van der Veer. “What's more cool than trying your first Kubernetes project on houses, to warm a shower?”
<br><br>
Ultimately, this ties into Nerdalize's mission of supporting affordable and sustainable cloud hosting. “We want to be the disrupter of the cloud space, showing organizations that running in the cloud is easy and affordable,” says Stoops. “It shouldn't be too big of a hassle and too large of a commitment. It should be fun and easy for end users. So we really love Kubernetes in that way.”
</div>
</section>

---
title: PingCAP Case Study
linkTitle: pingcap
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
featured: false
---
<div class="banner1" style="background-image: url('/images/CaseStudy_pingcap_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/pingcap_logo.png" class="header_logo" style="width:20%;margin-bottom:-1.5%"><br> <div class="subhead" style="margin-top:1%">PingCAP Bets on Cloud Native for Its TiDB Database Platform
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>PingCAP</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Beijing, China, and San Mateo, CA</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Software</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%">
<h2>Challenge</h2>
PingCAP is the company leading the development of the popular open source NewSQL database <a href="https://github.com/pingcap/tidb">TiDB</a>, which is MySQL-compatible, can handle hybrid transactional and analytical processing (HTAP) workloads, and has a cloud native architectural design. "Having a hybrid multi-cloud product is an important part of our global go-to-market strategy," says Kevin Xu, General Manager of Global Strategy and Operations. In order to achieve that, the team had to address two challenges: "how to deploy, run, and manage a distributed stateful application, such as a distributed database like TiDB, in a containerized world," Xu says, and "how to deliver an easy-to-use, consistent, and reliable experience for our customers when they use TiDB in the cloud, any cloud, whether that's one cloud provider or a combination of different cloud environments." Knowing that using a distributed system isn't easy, they began looking for the right orchestration layer to help reduce some of that complexity for end users.
<h2>Solution</h2>
The team started looking at Kubernetes for orchestration early on. "We knew Kubernetes had the promise of helping us solve our problems," says Xu. "We were just waiting for it to mature." In early 2018, PingCAP began integrating Kubernetes into its internal development as well as in its TiDB product. At that point, the team already had experience using other cloud native technologies, having integrated both <a href="https://prometheus.io/">Prometheus</a> and <a href="https://grpc.io/">gRPC</a> as parts of the TiDB platform earlier on.
<br>
<h2>Impact</h2>
Xu says that PingCAP customers have had a "very positive" response so far to Kubernetes being the tool to deploy and manage TiDB. Prometheus, with <a href="https://grafana.com/">Grafana</a> as the dashboard, is installed by default when customers deploy TiDB, so that they can monitor performance and make any adjustments needed to reach their target before and while deploying TiDB in production. That monitoring layer "makes the evaluation process and communication much smoother," says Xu.
<br><br>
With the company's <a href="https://github.com/pingcap/tidb-operator">Kubernetes-based Operator implementation</a>, which is open sourced, customers are now able to deploy, run, manage, upgrade, and maintain their TiDB clusters in the cloud with no downtime, and reduced workload, burden and overhead. And internally, says Xu, "we've completely switched to Kubernetes for our own development and testing, including our data center infrastructure and <a href="https://thenewstack.io/chaos-tools-and-techniques-for-testing-the-tidb-distributed-newsql-database/">Schrodinger</a>, an automated testing platform for TiDB. With Kubernetes, our resource usage is greatly improved. Our developers can allocate and deploy clusters themselves, and the deploying process has gone from hours to minutes, so we can devote fewer people to manage IDC resources. The productivity improvement is about 15%, and as we gain more Kubernetes knowledge on the debugging and diagnosis front, the productivity should improve to more than 20%."
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"We knew Kubernetes had the promise of helping us solve our problems. We were just waiting for it to mature, so we can fold it into our own development and product roadmap."
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- KEVIN XU, GENERAL MANAGER OF GLOBAL STRATEGY AND OPERATIONS, PINGCAP</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>Since it was introduced in 2015, the open source NewSQL database TiDB has gained a following for its compatibility with MySQL, its ability to handle hybrid transactional and analytical processing (HTAP) workloads—and its cloud native architectural design.</h2>
PingCAP, the company behind TiDB, designed the platform with cloud in mind from day one, says Kevin Xu, General Manager of Global Strategy and Operations, and "having a hybrid multi-cloud product is an important part of our global go-to-market strategy."
<br><br>
In order to achieve that, the team had to address two challenges: "how to deploy, run, and manage a distributed stateful application, such as a distributed database like TiDB, in a containerized world," Xu says, and "how to deliver an easy-to-use, consistent, and reliable experience for our customers when they use TiDB in the cloud, any cloud, whether that's one cloud provider or a combination of different cloud environments."
<br><br>
Knowing that using a distributed system isn't easy, the PingCAP team began looking for the right orchestration layer to help reduce some of that complexity for end users. Kubernetes had been on their radar for quite some time. "We knew Kubernetes had the promise of helping us solve our problems," says Xu. "We were just waiting for it to mature."
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_pingcap_banner3.jpg')">
<div class="banner3text">
"With the governance process being so open, it's not hard to find out what's the latest development in the technology and community, or figure out who to reach out to if we have problems or issues."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- KEVIN XU, GENERAL MANAGER OF GLOBAL STRATEGY AND OPERATIONS, PINGCAP</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
That time came in early 2018, when PingCAP began integrating Kubernetes into its internal development as well as in its TiDB product. "Having Kubernetes be part of the CNCF, as opposed to having only the backing of one individual company, was valuable in having confidence in the longevity of the technology," says Xu. Plus, "with the governance process being so open, it's not hard to find out what's the latest development in the technology and community, or figure out who to reach out to if we have problems or issues."
<br><br>
TiDB's cloud native architecture consists of a stateless SQL layer (also called TiDB) and a persistent key-value storage layer that supports distributed transactions (<a href="https://github.com/tikv/tikv">TiKV</a>, which is now in the CNCF Sandbox), which are loosely coupled. "You can scale both out or in depending on your computation and storage needs, and the two scaling processes can happen independent of each other," says Xu. The PingCAP team also built the <a href="https://github.com/pingcap/tidb-operator">TiDB Operator</a> based on Kubernetes, which helps bootstrap a TiDB cluster on any cloud environment and simplifies and automates deployment, scaling, scheduling, upgrades, and maintenance. The company also recently previewed its fully-managed <a href="https://www.pingcap.com/blog/announcing-tidb-cloud-managed-as-a-service-and-in-the-marketplace/">TiDB Cloud</a> offering.
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_pingcap_banner4.jpg')">
<div class="banner4text">
"A cloud native infrastructure will not only save you money and allow you to be more in control of the infrastructure resources you consume, but also empower new product innovation, new experience for your users, and new business possibilities. It's both a cost reducer and a money maker." <br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- KEVIN XU, GENERAL MANAGER OF GLOBAL STRATEGY AND OPERATIONS, PINGCAP</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
The entire TiDB platform leverages Kubernetes and other cloud native technologies, including <a href="https://prometheus.io/">Prometheus</a> for monitoring and <a href="https://grpc.io/">gRPC</a> for interservice communication.
<br><br>
So far, the customer response to the Kubernetes-enabled platform has been "very positive." Prometheus, with <a href="https://grafana.com/">Grafana</a> as the dashboard, is installed by default when customers deploy TiDB, so that they can monitor and make any adjustments needed to reach their performance requirements before deploying TiDB in production. That monitoring layer "makes the evaluation process and communication much smoother," says Xu. With the company's Kubernetes-based Operator implementation, customers are now able to deploy, run, manage, upgrade, and maintain their TiDB clusters in the cloud with no downtime, and reduced workload, burden and overhead.
<br><br>
These technologies have also had an impact internally. "We've completely switched to Kubernetes for our own development and testing, including our data center infrastructure and <a href="https://thenewstack.io/chaos-tools-and-techniques-for-testing-the-tidb-distributed-newsql-database/">Schrodinger</a>, an automated testing platform for TiDB," says Xu. "With Kubernetes, our resource usage is greatly improved. Our developers can allocate and deploy clusters themselves, and the deploying process takes less time, so we can devote fewer people to manage IDC resources.
</div>
<div class="banner5" >
<div class="banner5text">
"The entire cloud native community, whether it's Kubernetes, CNCF in general, or cloud native vendors like us, have all gained enough experience—and have the battle scars to prove it—and are ready to help you succeed."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- KEVIN XU, GENERAL MANAGER OF GLOBAL STRATEGY AND OPERATIONS, PINGCAP</span></div>
</div>
<div class="fullcol">
The productivity improvement is about 15%, and as we gain more Kubernetes knowledge on the debugging and diagnosis front, the productivity should improve to more than 20%."
<br><br>
Kubernetes is now a crucial part of PingCAP's product roadmap. For anyone else considering going cloud native, Xu has this advice: "There's no better time to get started," he says. "The entire cloud native community, whether it's Kubernetes, CNCF in general, or cloud native vendors like us, have all gained enough experience—and have the battle scars to prove it—and are ready to help you succeed."
<br><br>
In fact, the PingCAP team has seen more and more customers moving toward a cloud native approach, and for good reason. "IT infrastructure is quickly evolving from a cost-center and afterthought, to the core competency and competitiveness of any company," says Xu. "A cloud native infrastructure will not only save you money and allow you to be more in control of the infrastructure resources you consume, but also empower new product innovation, new experience for your users, and new business possibilities. It's both a cost reducer and a money maker."
</div>
</section>

---
title: Prowise Case Study
linkTitle: prowise
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
featured: false
---
<div class="banner1" style="background-image: url('/images/CaseStudy_prowise_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/prowise_logo.png" class="header_logo" style="width:25%;margin-bottom:-1%"><br> <div class="subhead" style="margin-top:1%">Prowise: How Kubernetes is Enabling the Edtech Solution's Global Expansion
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>Prowise</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Budel, The Netherlands </b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Edtech</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%">
<h2>Challenge</h2>
A Dutch company that produces educational devices and software used around the world, <a href="https://www.prowise.com/en/">Prowise</a> had an infrastructure based on Linux services with multiple availability zones in Europe, Australia, and the U.S. “We've grown a lot in the past couple of years, and we started to encounter problems with versioning and flexible scaling,” says Senior DevOps Engineer Victor van den Bosch, “not only scaling in demands, but also in being able to deploy multiple products which all have their own versions, their own development teams, and their own problems that they're trying to solve. To be able to put that all on the same platform without much resistance is what we were looking for. We wanted to future proof our infrastructure, and also solve some of the problems that are associated with just running a normal Linux service.”
<h2>Solution</h2>
The Prowise team adopted containerization, spent time improving its CI/CD pipelines, and chose Microsoft Azure's managed Kubernetes service, <a href="https://azure.microsoft.com/en-us/services/kubernetes-service/">AKS</a>, for orchestration. “Kubernetes solves things like networking really well, in a way that fits our business model,” says van den Bosch. “We want to focus on our core products, and that's the software that runs on it and not necessarily the infrastructure itself.”
<h2>Impact</h2>
With its first web-based applications now running in beta on Prowise's Kubernetes platform, the team is seeing the benefits of rapid and smooth deployments. “The old way of deploying took half an hour of preparations and half an hour deploying it. With Kubernetes, it's a couple of seconds,” says Senior Developer Bart Haalstra. As a result, adds van den Bosch, “We've gone from quarterly releases to a release every month in production. We're pretty much deploying every hour or just when we find that a feature is ready for production; before, our releases were mostly done on off-hours, where it couldn't impact our customers, as our confidence in the process was relatively low. Kubernetes has also enabled us to follow up quickly on bugs and implement tweaks to our users with zero downtime between versions. For some bugs we've pushed code fixes to production minutes after detection.” Recently, the team launched a new single sign-on solution for use in an internal application. “Due to the resource-based architecture of the Kubernetes platform, we were able to bring that application into an entirely new production environment in less than a day, most of that time used for testing after applying the already well-known resource definitions from staging to the new environment,” says van den Bosch. “On a traditional VM this would have likely cost a day or two, and then probably a few weeks to iron out the kinks in our provisioning scripts as we apply updates.”
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"Because of Kubernetes, things have been much easier, our individual applications are better, and we can spend more time on functional implementation. We do not want to go back."
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>— VICTOR VAN DEN BOSCH, SENIOR DEVOPS ENGINEER, PROWISE</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>If you haven't set foot in a school in a while, you might be surprised by what you'd see in a digitally connected classroom these days: touchscreen monitors, laptops, tablets, touch tables, and more.</h2>
One of the leaders in the space, the Dutch company Prowise, offers an integrated solution of hardware and software to help educators create a more engaging learning environment.
<br><br>
As the company expanded its offerings beyond the Netherlands in recent years—creating multiple availability zones in Europe, Australia, and the U.S., with as many as nine servers per zone—its Linux service-based infrastructure struggled to keep up. “We've grown a lot in the past couple of years, and we started to encounter problems with versioning and flexible scaling,” says Senior DevOps Engineer Victor van den Bosch, who was hired by the company in late 2017 to build a new platform.
<br><br>
Prowise's products support ten languages, so the problem wasn't just scaling in demands, he adds, “but also in being able to deploy multiple products which all have their own versions, their own development teams, and their own problems that they're trying to solve. To be able to put that all on the same platform without much resistance is what we were looking for. We wanted to future proof our infrastructure, and also solve some of the problems that are associated with just running a normal Linux service.”
<br><br>
The company's existing infrastructure on Microsoft Azure Cloud was all on virtual machines, “a pretty traditional setup,” van den Bosch says. “We decided that we want some features in our software that requires being able to scale quickly, being able to deploy new applications and versions on different versions of different programming languages quickly. And we didn't really want the hassle of trying to keep those servers in a particular state.”
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_prowise_banner3.jpg')">
<div class="banner3text">
"You don’t have to go all-in immediately. You can just take a few projects, a service, run it alongside your more traditional stack, and build it up from there. Kubernetes scales, so as you add applications and services to it, it will scale with you. You don’t have to do it all at once, and that’s really a secret to everything, but especially true to Kubernetes."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>— VICTOR VAN DEN BOSCH, SENIOR DEVOPS ENGINEER, PROWISE</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
After researching possible solutions, he opted for containerization and Kubernetes orchestration. “Containerization is the future,” van den Bosch says. “Kubernetes solves things like networking really well, in a way that fits our business model. We want to focus on our core products, and that’s the software that runs on it and not necessarily the infrastructure itself.” Plus, the Prowise team liked that there was no vendor lock-in. “We don’t want to be limited to one platform,” he says. “We try not to touch products that are very proprietary and can’t be ported easily to another vendor.”
<br><br>
The time to market with Kubernetes was very short: The first web-based applications on the platform went into beta within a few months. That was largely made possible by van den Bosch’s decision to use Azure’s managed Kubernetes service, AKS. The team then had to figure out which components to keep and which to replace. Monitoring tools like New Relic were taken out “because they tend to become very expensive when you scale it to different availability zones, and it’s just not very maintainable,” he says.
<br><br>
A lot of work also went into improving Prowise’s CI/CD pipelines. “We wanted to make sure that the pipelines are automated and easy to use,” he says. “We have a lot of settings and configurations figured out for the pipelines, and it’s just applying those scripts and those configurations to new projects from here on out.”
<br><br>
With its first web-based applications now running in beta on Prowise’s Kubernetes platform, the team is seeing the benefits of rapid and smooth deployments. “The old way of deploying took half an hour of preparations and half an hour deploying it. With Kubernetes, it’s a couple of seconds,” says Senior Developer Bart Haalstra. As a result, adds van den Bosch, “We’ve gone from quarterly releases to a release every month in production. We’re pretty much deploying every hour or just when we find that a feature is ready for production. Before, our releases were mostly done on off-hours, where it couldn’t impact our customers, as our confidence in the process itself was relatively low. With Kubernetes, we dare to deploy in the middle of a busy day with high confidence the deployment will succeed.”
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_prowise_banner4.jpg')" style="width:100%">
<div class="banner4text">
"Kubernetes allows us to really consider the best tools for a problem. Want to have a full-fledged analytics application developed by a third party that is just right for your use case? Run it. Dabbling in machine learning and AI algorithms but getting tired of waiting days for training to complete? It takes only seconds to scale it. Got a stubborn developer that wants to use a programming language no one has heard of? Let him, if it runs in a container, of course. And all of that while your operations team/DevOps get to sleep at night." <br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- VICTOR VAN DEN BOSCH, SENIOR DEVOPS ENGINEER, PROWISE</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
Plus, van den Bosch says, “Kubernetes has enabled us to follow up quickly on bugs and implement tweaks to our users with zero downtime between versions. For some bugs we’ve pushed code fixes to production minutes after detection.”
<br><br>
Recently, the team launched a new single sign-on solution for use in an internal application. “Due to the resource-based architecture of the Kubernetes platform, we were able to bring that application into an entirely new production environment in less than a day, most of that time used for testing after applying the already well-known resource definitions from staging to the new environment,” says van den Bosch. “On a traditional VM this would have likely cost a day or two, and then probably a few weeks to iron out the kinks in our provisioning scripts as we apply updates.”
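The environment promotion van den Bosch describes can be sketched with standard kubectl commands. This is an illustrative outline only; the namespace, manifest path, and deployment name here are hypothetical, not Prowise’s actual setup.

```shell
# Hypothetical sketch: promoting resource definitions already proven in
# staging to a brand-new production environment.
# (Namespace, path, and deployment name are illustrative.)
kubectl create namespace sso-production

# Apply the same resource definitions used in staging to the new namespace.
kubectl apply --namespace sso-production -f manifests/sso/

# Verify the rollout before directing traffic to it.
kubectl rollout status deployment/sso --namespace sso-production
```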
<br><br>
Legacy applications are also being moved to Kubernetes. Not long ago, the team needed to set up a Java-based application for compiling and running a frontend. “On a traditional VM, it would have taken quite a bit of time to set it up and keep it up to date, not to mention maintenance for that setup down the line,” says van den Bosch. Instead, it took less than half a day to Dockerize it and get it running on Kubernetes. “It was much easier, and we were able to save costs too because we didn’t have to spin up new VMs specially for it.”
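As a rough illustration of the half-day move described above, containerizing such a tool typically comes down to a build-push-run loop; the image, registry, and deployment names here are made up for the sketch.

```shell
# Illustrative sketch only: image name, registry, and tag are hypothetical.
# Build an image that wraps the Java-based frontend compiler.
docker build -t registry.example.com/frontend-compiler:1.0 .
docker push registry.example.com/frontend-compiler:1.0

# Run it on the cluster instead of provisioning a dedicated VM for it.
kubectl create deployment frontend-compiler \
  --image=registry.example.com/frontend-compiler:1.0
```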
</div>
<div class="banner5" >
<div class="banner5text">
"We’re really trying to deliver integrated solutions with our hardware and software and making it as easy as possible for users to use and collaborate from different places,” says van den Bosch. And, says Haalstra, “We cannot do it without Kubernetes."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- VICTOR VAN DEN BOSCH, SENIOR DEVOPS ENGINEER, PROWISE</span></div>
</div>
<div class="fullcol">
Perhaps most importantly, van den Bosch says, “Kubernetes allows us to really consider the best tools for a problem and take full advantage of microservices architecture. Got a library in Node.js that excels at solving a certain problem? Use it. Want to have a full-fledged analytics application developed by a third party that is just right for your use case? Run it. Dabbling in machine learning and AI algorithms but getting tired of waiting days for training to complete? It takes only seconds to scale it. Got a stubborn developer that wants to use a programming language no one has heard of? Let him, if it runs in a container, of course. And all of that while your operations team/DevOps get to sleep at night.”
<br><br>
Looking ahead, all new web development, platforms, and APIs at Prowise will be on Kubernetes. One of the big greenfield projects is a platform for teachers and students that is launching for back-to-school season in September. Users will be able to log in and access a wide variety of educational applications. With the <a href="https://www.prowise.com/en/press-release-largest-dutch-education-innovators-join-forces/">recent acquisition</a> of the software company Oefenweb, Prowise plans to provide adaptive software that allows teachers to get an accurate view of their students’ progress and weak points, and automatically adjusts the difficulty level of assignments to suit individual students. “We will be leveraging Kubernetes’ power to integrate, supplement, and support our combined application portfolio and bring our solutions to more classrooms,” says van den Bosch.
<br><br>
Collaborative software is also a priority. With the single sign-on software, users’ settings and credentials are saved in the cloud and can be used on any screen in the world. “We’re really trying to deliver integrated solutions with our hardware and software and making it as easy as possible for users to use and collaborate from different places,” says van den Bosch. And, says Haalstra, “We cannot do it without Kubernetes.”
</div>
</section>

---
title: ricardo.ch Case Study
linkTitle: ricardo-ch
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
featured: false
---
<div class="banner1" style="background-image: url('/images/CaseStudy_ricardoch_banner1.png')">
<h1> CASE STUDY:<img src="/images/ricardoch_logo.png" class="header_logo" style="width:25%;margin-bottom:-1%"><br> <div class="subhead" style="margin-top:1%">ricardo.ch: How Kubernetes Improved Velocity and DevOps Harmony
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>ricardo.ch</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Zurich, Switzerland </b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>E-commerce</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%">
<h2>Challenge</h2>
A Swiss online marketplace, <a href="https://www.ricardo.ch/de/">ricardo.ch</a> was experiencing problems with velocity, as well as a "classic gap" between Development and Operations, with the two sides unable to work well together. "They wanted to, but they didn’t have common ground," says Cedric Meury, Head of Platform Engineering. "This was one of the root causes that slowed us down." The company began breaking down the legacy monolith into microservices, and needed orchestration to support the new architecture in its own data centers—as well as bring together Dev and Ops.
<h2>Solution</h2>
The company adopted <a href="https://kubernetes.io/">Kubernetes</a> for cluster management, <a href="https://prometheus.io/">Prometheus</a> for monitoring, and <a href="https://www.fluentd.org/">Fluentd</a> for logging. The first cluster was deployed on premise in December 2016, with the first service in production three months later. The migration is about half done, and the company plans to move completely to <a href="https://cloud.google.com/">Google Cloud Platform</a> by the end of 2018.
<h2>Impact</h2>
Splitting up the monolith into microservices "allowed higher velocity, and Kubernetes was crucial to support that," says Meury. The number of deployments to production has gone from fewer than 10 a week to 30-60 per day. Before, "when there was a problem with something in production, tickets or complaints would be thrown over the wall to operations, the classical problem. Now, people have the chance to look into operations and troubleshoot for themselves first because everything is deployed in a standardized way," says Meury. He sees the impact in everyday interactions: "A couple of weeks ago, I saw a product manager doing a pull request for a JSON file that contains some variables, and someone else accepted it. And it was deployed after a couple of minutes or seconds even, which was unthinkable before. There used to be quite a chain of things that needed to happen, the whole monolith was difficult to understand, even for engineers. So, previously requests would go into large, inefficient Kanban boards and hopefully someone will have done the change after weeks and months." Before, infrastructure- and platform-related projects took months or years to complete; now developers and operators can work together to deploy infrastructure parts via Kubernetes in a matter of weeks and sometimes days. In the long run, the company also expects to notch 50% cost savings going from custom data center and virtual machines to containerized infrastructure and cloud services.
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"Splitting up the monolith allowed higher velocity, and Kubernetes was crucial to support that. Containerization and orchestration by Kubernetes helped us to drastically reduce the conflict between Dev and Ops and also allowed us to speak the same language on both sides of the aisle."
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>— CEDRIC MEURY, HEAD OF PLATFORM ENGINEERING, RICARDO.CH</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
<h2>When Cedric Meury joined ricardo.ch in 2016, he saw a clear divide between Operations and Development. In fact, there was literal distance between them: The engineering team worked in France, while the rest of the org was based in Switzerland.
</h2><br><br>
"It was a classic gap between those departments and even some anger and frustration here and there," says Meury. "They wanted to work together, but they didn’t have common ground. This was one of the root causes that slowed us down."
<br><br>
That gap was hurting velocity at ricardo.ch, a Swiss online marketplace. The website processes up to 2.6 million searches on a peak day from both web and mobile apps, serving 3.2 million members with its live auctions. The technology team’s main challenge was to make sure that "the bids for items come in the right order, and before the auction is finished, and that this works in a fair way," says Meury. "We have a real-time requirement. We also provide an automated system to bid, and it needs to be accurate and correct. With a distributed system, you have the challenge of making sure that the ordering is right. And that’s one of the things we’re currently dealing with."
<br><br>
To address the velocity issue, ricardo.ch CTO Jeremy Seitz established a new software factory called EPD, which consists of 65 engineers, 7 product managers and 2 designers. "We brought these three departments together so that they can kind of streamline this and talk to each other much more closely," says Meury.
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_ricardoch_banner3.png')">
<div class="banner3text">
"Being in the End User Community demonstrates that we stand behind these technologies. In Switzerland, if all the companies see that ricardo.ch’s using it, I think that will help adoption. I also like that we’re connected to the other end users, so if there is a really heavy problem, I could go to the Slack channel, and say, Hey, you guys… Like Reddit, Github and New York Times or whoever can give a recommendation on what to use here or how to solve that. So that’s kind of a superpower."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>— CEDRIC MEURY, HEAD OF PLATFORM ENGINEERING, RICARDO.CH</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
The company also began breaking down the legacy monolith into more than 100 microservices, and needed orchestration to support the new architecture in its own data centers. "Splitting up the monolith allowed higher velocity, and Kubernetes was crucial to support that," says Meury. "Containerization and orchestration by Kubernetes helped us to drastically reduce the conflict between Dev and Ops and also allowed us to speak the same language on both sides of the aisle."
<br><br>
Meury put together a platform engineering team to choose the tools—including Fluentd for logging and Prometheus for monitoring, with Grafana visualization—and lay the groundwork for the first Kubernetes cluster, which was installed on premise in December 2016. Within a few weeks, the new platform was available to teams, who were given training sessions and documentation. The platform engineering team then embedded with engineers to help them deploy their applications on the new platform. The first service in production was the ricardo.ch jobs page. "It was an exercise in front-end development, so the developers could experiment with a new stack," says Meury.
<br><br>
Meury estimates that half of the application has been migrated to Kubernetes. And the plan is to move everything to the Google Cloud Platform by the end of 2018. "We are still running some servers in our own data centers, but all of the containerization efforts and describing our services as Kubernetes manifests will allow us to quite easily make that shift," says Meury.
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_ricardoch_banner4.png')" style="width:100%">
<div class="banner4text">
"One of the core moments was when a front-end developer asked me how to do a port forward from his laptop to a front-end application to debug, and I told him the command. And he was like, Wow, that’s all I need to do? He was super excited and happy about it. That showed me that this power in the right hands can just accelerate development."
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- CEDRIC MEURY, HEAD OF PLATFORM ENGINEERING, RICARDO.CH</span>
</div>
</div>
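The command in the anecdote above is most likely a kubectl one-liner along these lines; the deployment name and ports here are hypothetical, not ricardo.ch’s actual configuration.

```shell
# Forward local port 8080 to port 80 of a pod in the deployment,
# so the front-end application can be debugged from the laptop.
# (Deployment name and ports are illustrative.)
kubectl port-forward deployment/frontend-app 8080:80
```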
<section class="section5" style="padding:0px !important">
<div class="fullcol">
The impact has been great. Moving from custom data center and virtual machines to containerized infrastructure and cloud services is expected to result in 50% cost savings for the company. The number of deployments to production has gone from fewer than 10 a week to 30-60 per day. Before, "when there was a problem with something in production, tickets or complaints would be thrown over the wall to operations, the classical problem," says Meury. "Now, people have the chance to look into operations and troubleshoot for themselves first because everything is deployed in a standardized way. That reduces time and uncertainty."
<br><br>
Meury also sees the impact in everyday interactions: "A couple of weeks ago, I saw a product manager doing a pull request for a JSON file that contains some variables, and someone else accepted it. And it was deployed after a couple of minutes or seconds even, which was unthinkable before. There used to be quite a chain of things that needed to happen, the whole monolith was difficult to understand, even for engineers. So, previously requests would go into large, inefficient Kanban boards and hopefully someone will have done the change after weeks and months."
<br><br>
The divide between Dev and Ops has also diminished. "After a couple of months, I got requests by people saying, Hey, could you help me install the Kubernetes client? I want to actually look at what’s going on," says Meury. "People were directly looking at the state of the system, bringing them much, much closer to the operations." Before, infrastructure- and platform-related projects took months or years to complete; now developers and operators can work together to deploy infrastructure parts via Kubernetes in a matter of weeks and sometimes days.
</div>
<div class="banner5" >
<div class="banner5text">
"One of my colleagues was listening to all the talks at KubeCon, and he was overwhelmed by all the tools, technologies, frameworks out there that are currently lacking on our platform, but at the same time, he’s very happy to know that in the future there is so much that we can still explore and we can improve and we can work on."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- CEDRIC MEURY, HEAD OF PLATFORM ENGINEERING, RICARDO.CH</span></div>
</div>
<div class="fullcol">
The ability to have insight into the system has extended to other parts of the company, too. "I found out that one of our customer support representatives looks at Grafana metrics to find out whether the system is running fine, which is fantastic," says Meury. "Prometheus is directly hooked into customer care."
<br><br>
The ricardo.ch cloud native journey has perhaps had the most impact on the Ops team. "We have an operations team that comes from a hardware-based background, and right now they are relearning how to operate in a more virtualized and cloud native world, with great success so far," says Meury. "So besides still operating on-site data center firewalls, they learn to code in Go or do some Python scripting at the same time. Former network administrators are writing Go code. It’s just really cool."
<br><br>
For Meury, the journey boils down to this. "One of my colleagues was listening to all the talks at KubeCon, and he was overwhelmed by all the tools, technologies, frameworks out there that are currently lacking on our platform," says Meury. "But at the same time, he’s very happy to know that in the future there is so much that we can still explore and we can improve and we can work on. We’re transitioning from seeing problems everywhere—like, This is broken or This is down, and we have to fix it—more to, How can we actually improve and automate more, and make it nicer for developers and ultimately for the end users?"
</div>
</section>

---
title: Slamtec Case Study
linkTitle: slamtec
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
featured: false
---
<div class="banner1" style="background-image: url('/images/CaseStudy_slamtec_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/slamtec_logo.png" class="header_logo" style="width:17%;margin-bottom:%"><br> </h1>
<br><br>
</div>
<div class="details">
Company &nbsp;<b>Slamtec</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Shanghai, China</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Robotics</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%">
<h2>Challenge</h2>
Founded in 2013, Slamtec provides service robot autonomous localization and navigation solutions. The company’s strength lies in its R&D team’s ability to quickly introduce, and continually iterate on, its core products. In the past few years, the company, which had a legacy infrastructure based on Alibaba Cloud and VMware vSphere, began looking to build its own stable and reliable container cloud platform to host its Internet of Things applications. "Our needs for the cloud platform included high availability, scalability and security; multi-granularity monitoring alarm capability; friendliness to containers and microservices; and perfect CI/CD support," says Benniu Ji, Director of Cloud Computing Business Division.
<h2>Solution</h2>
Ji’s team chose Kubernetes for orchestration. "CNCF brings quality assurance and a complete ecosystem for <a href="https://kubernetes.io/">Kubernetes</a>, which is very important for the wide application of Kubernetes," says Ji. Thus Slamtec decided to adopt other CNCF projects as well: <a href="https://prometheus.io/">Prometheus</a> monitoring, <a href="https://www.fluentd.org/">Fluentd</a> logging, <a href="https://goharbor.io/">Harbor</a> registry, and <a href="https://helm.sh/">Helm</a> package manager.
<br>
<h2>Impact</h2>
With the new platform, Ji reports that Slamtec has experienced "18+ months of 100% stability!" For users, there is now zero service downtime and seamless upgrades. "Kubernetes with third-party service mesh integration (Istio, along with Jaeger and Envoy) significantly reduced the microservice configuration and maintenance efforts by 50%," he adds. With centralized metrics monitoring and log aggregation provided by Prometheus on Fluentd, teams are saving 50% of time spent on troubleshooting and debugging. Harbor replication has allowed production/staging/testing environments to cross public cloud and the private Kubernetes cluster to share the same container registry, resulting in 30% savings of CI/CD efforts. Plus, Ji says, "Helm has accelerated prototype development and environment setup with its rich sharing charts."
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"Cloud native technology helps us ensure high availability of our business, while improving development and testing efficiency, shortening the research and development cycle and enabling rapid product delivery."
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- BENNIU JI, DIRECTOR OF CLOUD COMPUTING BUSINESS DIVISION</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>Founded in 2013, Slamtec provides service robot autonomous localization and navigation solutions. In this fast-moving space, the company built its success on the ability of its R&D team to quickly introduce, and continually iterate on, its core products.
</h2>
To sustain that development velocity, the company over the past few years began looking to build its own stable and reliable container cloud platform to host its Internet of Things applications. With a legacy infrastructure based on <a href="https://www.alibabacloud.com/">Alibaba Cloud</a> and <a href="https://www.vmware.com/products/vsphere.html">VMware vSphere</a>, Slamtec teams had already adopted microservice architecture and continuous delivery, for "fine granularity on-demand scaling, fault isolation, ease of development, testing, and deployment, and for facilitating high-speed iteration," says Benniu Ji, Director of Cloud Computing Business Division. So "our needs for the cloud platform included high availability, scalability and security; multi-granularity monitoring alarm capability; friendliness to containers and microservices; and perfect CI/CD support."
<br><br>
After an evaluation of existing technologies, Ji’s team chose <a href="https://kubernetes.io/">Kubernetes</a> for orchestration. "CNCF brings quality assurance and a complete ecosystem for Kubernetes, which is very important for the wide application of Kubernetes," says Ji. Plus, "avoiding binding to an infrastructure technology or provider can help us ensure that our business is deployed and migrated in cross-regional environments, and can serve users all over the world."
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_slamtec_banner3.jpg')">
<div class="banner3text">
"CNCF brings quality assurance and a complete ecosystem for Kubernetes, which is very important for the wide application of Kubernetes."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- BENNIU JI, DIRECTOR OF CLOUD COMPUTING BUSINESS DIVISION</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
Thus Slamtec decided to adopt other CNCF projects as well. "We built a monitoring and logging system based on <a href="https://prometheus.io/">Prometheus</a> and <a href="https://www.fluentd.org/">Fluentd</a>," says Ji. "The integration between Prometheus/Fluentd and Kubernetes is convenient, with multiple dimensions of data monitoring and log collection capabilities."
<br><br>
The company uses <a href="https://goharbor.io/">Harbor</a> as a container image repository. "Harbor’s replication function helps us implement CI/CD on both private and public clouds," says Ji. "In addition, multi-project support, certification and policy configuration, and integration with Kubernetes are also excellent functions." <a href="https://helm.sh/">Helm</a> is also being used as a package manager, and the team is evaluating the Istio framework. "We’re very pleased that Kubernetes and these frameworks can be seamlessly integrated," Ji adds.
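The chart-sharing workflow Ji describes generally looks like the following; this is a sketch using modern Helm 3 syntax, and the repository, chart, and release names are illustrative rather than Slamtec’s actual charts.

```shell
# Illustrative Helm workflow; repo, chart, and release names are hypothetical.
# Register a shared chart repository and refresh its index.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Stand up a prototype monitoring environment from a shared chart.
helm install demo-prometheus prometheus-community/prometheus \
  --namespace monitoring --create-namespace
```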
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_slamtec_banner4.jpg')">
<div class="banner4text">
"Cloud native is suitable for microservice architecture, it’s suitable for fast iteration and agile development, and it has a relatively perfect ecosystem and active community." <br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- BENNIU JI, DIRECTOR OF CLOUD COMPUTING BUSINESS DIVISION</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
With the new platform, Ji reports that Slamtec has experienced "18+ months of 100% stability!" For users, there is now zero service downtime and seamless upgrades. "We benefit from the abstraction of Kubernetes from network and storage," says Ji. "The dependence on external services can be decoupled from the service and placed under unified management in the cluster."
<br><br>
Using Kubernetes and Istio "significantly reduced the microservice configuration and maintenance efforts by 50%," he adds. With centralized metrics monitoring and log aggregation provided by Prometheus on Fluentd, teams are saving 50% of time spent on troubleshooting and debugging. Harbor replication has allowed production/staging/testing environments to cross public cloud and the private Kubernetes cluster to share the same container registry, resulting in 30% savings of CI/CD efforts. Plus, Ji adds, "Helm has accelerated prototype development and environment setup with its rich sharing charts."
<br><br>
In short, Ji says, Slamtec’s new platform is helping it achieve one of its primary goals: the quick and easy release of products. With multiple release models and a centralized control interface, the platform is changing developers’ lives for the better. Slamtec also offers a unified API for the development of automated deployment tools according to users’ specific needs.
</div>
<div class="banner5" style="width:100%">
<div class="banner5text">
"We benefit from the abstraction of Kubernetes from network and storage, the dependence on external services can be decoupled from the service and placed under unified management in the cluster."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- BENNIU JI, DIRECTOR OF CLOUD COMPUTING BUSINESS DIVISION</span></div>
</div>
<div class="fullcol">
Given its own success with cloud native, Slamtec has just one piece of advice for organizations considering making the leap. "For already containerized services, you should migrate them to the cloud native architecture as soon as possible and enjoy the advantages brought by the cloud native ecosystem," Ji says. "To migrate traditional, non-containerized services, in addition to the architecture changes of the service itself, you need to fully consider the operation and maintenance workload required to build the cloud native architecture."
<br><br>
That said, the cost-benefit analysis has been simple for Slamtec. "Cloud native technology is suitable for microservice architecture, it’s suitable for fast iteration and agile development, and it has a relatively perfect ecosystem and active community," says Ji. "It helps us ensure high availability of our business, while improving development and testing efficiency, shortening the research and development cycle and enabling rapid product delivery."
</div>
</section>

---
title: ThredUp Case Study
linkTitle: thredup
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
featured: false
---
<div class="banner1" style="background-image: url('/images/CaseStudy_thredup_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/thredup_logo.png" class="header_logo" style="width:17%;margin-bottom:%"><br> </h1>
<br><br>
</div>
<div class="details">
Company &nbsp;<b>ThredUp</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>San Francisco, CA</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>eCommerce</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%">
<h2>Challenge</h2>
The largest online consignment store for women’s and children’s clothes, ThredUP launched in 2009 with a monolithic application running on Amazon Web Services. Though the company began breaking up the monolith into microservices a few years ago, the infrastructure team was still dealing with handcrafted servers, which hampered productivity. "We’ve configured them just to get them out as fast as we could, but there was no standardization, and as we kept growing, that became a bigger and bigger chore to manage," says Cofounder/CTO Chris Homer. The infrastructure, they realized, needed to be modernized to enable the velocity the company needed. "It’s really important to a company like us who’s disrupting the retail industry to make sure that as we’re building software and getting it out in front of our users, we can do it on a fast cycle and learn a ton as we experiment," adds Homer. "We wanted to make sure that our engineers could embrace the DevOps mindset as they built software. It was really important to us that they could own the life cycle from end to end, from conception at design, through shipping it and running it in production, from marketing to ecommerce, the user experience and our internal distribution center operations."
<br><br>
<h2>Solution</h2>
In early 2017, the company adopted Kubernetes for container orchestration, and in the course of a year, the entire infrastructure was moved to Kubernetes.
<br><br>
<h2>Impact</h2>
Before, "even considering that we already have all the infrastructure in the cloud, databases and services, and all these good things," says Infrastructure Engineer Oleksandr Snagovskyi, setting up a new service meant waiting 2-4 weeks just to get the environment. With Kubernetes, new application roll-out time has decreased from several days or weeks to minutes or hours. Now, says Infrastructure Engineer Oleksii Asiutin, "our developers can experiment with existing applications and create new services, and do it all blazingly fast." In fact, deployment time has decreased about 50% on average for key services. "Lead time" for all applications is under 20 minutes, enabling engineers to deploy multiple times a day. Plus, 3,200+ Ansible scripts have been deprecated in favor of Helm charts. And impressively, hardware cost has decreased 56% while the number of services ThredUP runs has doubled.
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
<iframe width="504" height="296" src="https://www.youtube.com/embed/t0csOf-uDrk" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><br><br>
"Moving towards cloud native technologies like Kubernetes really unlocks our ability to experiment quickly and learn from customers along the way."
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- CHRIS HOMER, COFOUNDER/CTO, THREDUP</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>The largest online consignment store for womens and childrens clothes, ThredUP is focused on getting consumers to think second-hand first. "Were disrupting the retail industry, and its really important to us to make sure that as were building software and getting it out in front of our users, we can do it on a fast cycle and learn a ton as we experiment," says Cofounder/CTO Chris Homer.
</h2>
But over the past few years, ThredUP, which was launched in 2009 with a monolithic application running on Amazon Web Services, was feeling growing pains as its user base passed the 20-million mark. Though the company had begun breaking up the monolith into microservices, the infrastructure team was still dealing with handcrafted servers, which hampered productivity. "Weve configured them just to get them out as fast as we could, but there was no standardization, and as we kept growing, that became a bigger and bigger chore to manage," says Homer. The infrastructure, Homer realized, needed to be modernized to enable the velocity—and the culture—the company wanted.
<br><br>
"We wanted to make sure that our engineers could embrace the DevOps mindset as they built software," Homer says. "It was really important to us that they could own the life cycle from end to end, from conception at design, through shipping it and running it in production, from marketing to ecommerce, the user experience and our internal distribution center operations."
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_thredup_banner3.jpg')">
<div class="banner3text">
"Kubernetes enabled auto scaling in a seamless and easily manageable way on days like Black Friday. We no longer have to sit there adding instances, monitoring the traffic, doing a lot of manual work."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- CHRIS HOMER, COFOUNDER/CTO, THREDUP</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
In early 2017, Homer found the solution with Kubernetes container orchestration. In the course of a year, the company migrated its entire infrastructure to Kubernetes, starting with its website applications and concluding with its operations backend. Teams are now also using Fluentd and Helm. "Initially there were skeptics about the value that this move to cloud native technologies would bring, but as we went through the process, people very quickly started to realize the benefit of having seamless upgrades and easy rollbacks without having to worry about what was happening," says Homer. "It unlocks the developers confidence in being able to deploy quickly, learn, and if you make a mistake, you can roll it back without any issue."
<br><br>
According to the infrastructure team, the key improvement was the consistent experience Kubernetes enabled for developers. "It lets developers work in the same environment that their application will be running in production," says Infrastructure Engineer Oleksandr Snagovskyi. Plus, "It became easier to test, easier to refine, and easier to deploy, because everythings done automatically," says Infrastructure Engineer Oleksii Asiutin. "One of the main goals of our team is to make developers lives more comfortable, and we are achieving this with Kubernetes. They can experiment with existing applications and create new services, and do it all blazingly fast."
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_thredup_banner4.jpg')">
<div class="banner4text">
"One of the main goals of our team is to make developers lives more comfortable, and we are achieving this with Kubernetes. They can experiment with existing applications and create new services, and do it all blazingly fast." <br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- OLEKSII ASIUTIN, INFRASTRUCTURE ENGINEER, THREDUP</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
Before, "even considering that we already have all the infrastructure in the cloud, databases and services, and all these good things," says Snagovskyi, setting up a new service meant waiting 2-4 weeks just to get the environment. With Kubernetes, because of simple configuration and minimal dependency on the infrastructure team, the roll-out time for new applications has decreased from several days or weeks to minutes or hours.
<br><br>
In fact, deployment time has decreased about 50% on average for key services. "Fast deployment and parallel test execution in Kubernetes keep a lead time for all applications under 20 minutes," allowing engineers to do multiple releases a day, says Director of Infrastructure Roman Chepurnyi. The infrastructure teams jobs, he adds, have become less burdensome, too: "We can execute seamless upgrades frequently and keep cluster performance and security up-to-date because OS-level hardening and upgrades of a Kubernetes cluster is a non-blocking activity for production operations and does not involve coordination with multiple engineering teams."
<br><br>
More than 3,200 Ansible scripts have been deprecated in favor of Helm charts. And impressively, hardware cost has decreased 56% while the number of services ThredUP runs has doubled.
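As a sketch of what replaces those per-service scripts: a Helm chart templates a Deployment once and substitutes values at install time. The chart, release, and image names below are illustrative assumptions, not ThredUP's actual configuration:

```yaml
# templates/deployment.yaml — a minimal Helm template; names and values
# are hypothetical, not ThredUP's actual charts
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

A single `helm upgrade --install` then stands in for what previously needed a hand-maintained script per service.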
</div>
<div class="banner5" style="width:100%">
<div class="banner5text">
"Our futures all about automation, and behind that, cloud native technologies are going to unlock our ability to embrace that and go full force towards the future."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- CHRIS HOMER, COFOUNDER/CTO, THREDUP</span></div>
</div>
<div class="fullcol">
Perhaps the impact is most evident on the busiest days in retail. "Kubernetes enabled auto scaling in a seamless and easily manageable way on days like Black Friday," says Homer. "We no longer have to sit there adding instances, monitoring the traffic, doing a lot of manual work. Thats handled for us, and instead we can actually have some turkey, drink some wine and enjoy our families."
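The seamless autoscaling Homer describes is typically configured with a HorizontalPodAutoscaler. A minimal sketch follows; the `web` Deployment name, replica bounds, and CPU threshold are assumptions for illustration, not ThredUP's actual settings:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 4          # everyday baseline capacity
  maxReplicas: 40         # headroom for peaks like Black Friday
  targetCPUUtilizationPercentage: 70
```

With this in place, Kubernetes adds and removes replicas as traffic moves, instead of someone "sitting there adding instances."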
<br><br>
For ThredUP, Kubernetes fits perfectly with the companys vision for how its changing retail. Some of what ThredUP does is still very manual: "As our customers send bags of items to our distribution centers, theyre photographed, inspected, tagged, and put online today," says Homer.
<br><br>
But in every other aspect, "we use different forms of technology to drive everything we do," Homer says. "We have machine learning algorithms to help predict the likelihood of sale for items, which drives our pricing algorithm. We have personalization algorithms that look at the images and try to determine style and match users preferences across our systems."
<br><br>
Count Kubernetes as one of those drivers. "Our futures all about automation," says Homer, "and behind that, cloud native technologies are going to unlock our ability to embrace that and go full force towards the future."
</div>
</section>

@ -0,0 +1,97 @@
---
title: vsco Case Study
linkTitle: vsco
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
featured: false
---
<div class="banner1" style="background-image: url('/images/CaseStudy_vsco_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/vsco_logo.png" class="header_logo" style="width:17%;margin-bottom:-2%"><br> <div class="subhead" style="margin-top:1%">VSCO: How a Mobile App Saved 70% on Its EC2 Bill with Cloud Native
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>VSCO</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Oakland, CA</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Photo Mobile App</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%">
<h2>Challenge</h2>
After moving from <a href="https://www.rackspace.com/">Rackspace</a> to <a href="https://aws.amazon.com/">AWS</a> in 2015, <a href="https://vsco.co/">VSCO</a> began building <a href="https://nodejs.org/en/">Node.js</a> and <a href="https://golang.org/">Go</a> microservices in addition to running its <a href="http://php.net/">PHP</a> monolith. The team containerized the microservices using <a href="https://www.docker.com/">Docker</a>, but "they were all in separate groups of <a href="https://aws.amazon.com/ec2/">EC2</a> instances that were dedicated per service," says Melinda Lu, Engineering Manager for the Machine Learning Team. Adds Naveen Gattu, Senior Software Engineer on the Community Team: "That yielded a lot of wasted resources. We started looking for a way to consolidate and be more efficient in the AWS EC2 instances."
<h2>Solution</h2>
The team began exploring the idea of a scheduling system, and looked at several solutions including Mesos and Swarm before deciding to go with <a href="https://kubernetes.io/">Kubernetes</a>. VSCO also uses <a href="https://grpc.io/">gRPC</a> and <a href="https://www.envoyproxy.io/">Envoy</a> in their cloud native stack.
<br>
<h2>Impact</h2>
Before, deployments required "a lot of manual tweaking, in-house scripting that we wrote, and because of our disparate EC2 instances, Operations had to babysit the whole thing from start to finish," says Senior Software Engineer Brendan Ryan. "We didn't really have a story around testing in a methodical way, and using reusable containers or builds in a standardized way." There's a faster onboarding process now. Before, the time to first deploy was two days' hands-on setup time; now it's two hours. By moving to continuous integration, containerization, and Kubernetes, velocity was increased dramatically. The time from code-complete to deployment in production on real infrastructure went from one to two weeks to two to four hours for a typical service. Adds Gattu: "In man hours, that's one person versus a developer and a DevOps individual at the same time." With an 80% decrease in time for a single deployment to happen in production, the number of deployments has increased as well, from 1200/year to 3200/year. There have been real dollar savings too: With Kubernetes, VSCO is running at 2x to 20x greater EC2 efficiency, depending on the service, adding up to about 70% overall savings on the company's EC2 bill. Ryan points to the company's ability to go from managing one large monolithic application to 50+ microservices with "the same size developer team, more or less. And we've only been able to do that because we have increased trust in our tooling and a lot more flexibility, so we don't need to employ a DevOps engineer to tune every service." With Kubernetes, gRPC, and Envoy in place, VSCO has seen an 88% reduction in total minutes of outage time, mainly due to the elimination of JSON-schema errors and service-specific infrastructure provisioning errors, and an increased speed in fixing outages.
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
"I've been really impressed seeing how our engineers have come up with creative solutions to things by just combining a lot of Kubernetes primitives. Exposing Kubernetes constructs as a service to our engineers as opposed to exposing higher order constructs has worked well for us. It lets you get familiar with the technology and do more interesting things with it."
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- MELINDA LU, ENGINEERING MANAGER FOR VSCO'S MACHINE LEARNING TEAM</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>A photography app for mobile, VSCO was born in the cloud in 2011. In the beginning, "we were using Rackspace and had one PHP monolith application talking to MySQL database, with FTP deployments, no containerization, no orchestration," says Software Engineer Brendan Ryan, "which was sufficient at the time."</h2>
After VSCO moved to AWS in 2015 and its user base passed the 30 million mark, the team quickly realized that set-up wouldn't work anymore. Developers had started building some Node and Go microservices, which the team tried containerizing with Docker. But "they were all in separate groups of EC2 instances that were dedicated per service," says Melinda Lu, Engineering Manager for the Machine Learning Team. Adds Naveen Gattu, Senior Software Engineer on the Community Team: "That yielded a lot of wasted resources. We started looking for a way to consolidate and be more efficient in the EC2 instances."
<br><br>
With a checklist that included ease of use and implementation, level of support, and whether it was open source, the team evaluated a few scheduling solutions, including Mesos and Swarm, before deciding to go with Kubernetes. "Kubernetes seemed to have the strongest open source community around it," says Lu. Plus, "We had started to standardize on a lot of the Google stack, with Go as a language, and gRPC for almost all communication between our own services inside the data center. So it seemed pretty natural for us to choose Kubernetes."
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_vsco_banner2.jpg')">
<div class="banner3text">
"Kubernetes seemed to have the strongest open source community around it, plus, we had started to standardize on a lot of the Google stack, with Go as a language, and gRPC for almost all communication between our own services inside the data center. So it seemed pretty natural for us to choose Kubernetes."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- MELINDA LU, ENGINEERING MANAGER FOR VSCO'S MACHINE LEARNING TEAM</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
At the time, there were few managed Kubernetes offerings and less tooling available in the ecosystem, so the team stood up its own cluster and built some custom components for its specific deployment needs, such as an automatic ingress controller and policy constructs for canary deploys. "We had already begun breaking up the monolith, so we moved things one by one, starting with pretty small, low-risk services," says Lu. "Every single new service was deployed there." The first service was migrated at the end of 2016, and after one year, 80% of the entire stack was on Kubernetes, including the rest of the monolith.
<br><br>
The impact has been great. Deployments used to require "a lot of manual tweaking, in-house scripting that we wrote, and because of our disparate EC2 instances, Operations had to babysit the whole thing from start to finish," says Ryan. "We didn't really have a story around testing in a methodical way, and using reusable containers or builds in a standardized way." There's a faster onboarding process now. Before, the time to first deploy was two days' hands-on setup time; now it's two hours.
<br><br>
By moving to continuous integration, containerization, and Kubernetes, velocity was increased dramatically. The time from code-complete to deployment in production on real infrastructure went from one to two weeks to two to four hours for a typical service. Plus, says Gattu, "In man hours, that's one person versus a developer and a DevOps individual at the same time." With an 80% decrease in time for a single deployment to happen in production, the number of deployments has increased as well, from 1200/year to 3200/year.
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_vsco_banner4.jpg')">
<div class="banner4text">
"I've been really impressed seeing how our engineers have come up with really creative solutions to things by just combining a lot of Kubernetes primitives. Exposing Kubernetes constructs as a service to our engineers as opposed to exposing higher order constructs has worked well for us. It lets you get familiar with the technology and do more interesting things with it." <br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- MELINDA LU, ENGINEERING MANAGER FOR VSCOS MACHINE LEARNING TEAM</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
There have been real dollar savings too: With Kubernetes, VSCO is running at 2x to 20x greater EC2 efficiency, depending on the service, adding up to about 70% overall savings on the companys EC2 bill.
<br><br>
Ryan points to the companys ability to go from managing one large monolithic application to 50+ microservices with “the same size developer team, more or less. And weve only been able to do that because we have increased trust in our tooling and a lot more flexibility when there are stress points in our system. You can increase CPU memory requirements of a service without having to bring up and tear down instances, and read through AWS pages just to be familiar with a lot of jargon, which isnt really tenable for a company at our scale.”
<br><br>
Envoy and gRPC have also had a positive impact at VSCO. “We get many benefits from gRPC out of the box: type safety across multiple languages, ease of defining services with the gRPC IDL, built-in architecture like interceptors, and performance improvements over HTTP/1.1 and JSON,” says Lu.
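The "ease of defining services with the gRPC IDL" that Lu mentions looks like this in practice. The proto3 definition below is a hypothetical example (VSCO's actual service schemas are not public); generated clients and servers are type-safe across Go, Node.js, and the other languages in their stack:

```protobuf
syntax = "proto3";

package media;

// Hypothetical service definition for illustration only.
service ImageService {
  rpc GetImage (GetImageRequest) returns (GetImageResponse);
}

message GetImageRequest {
  string image_id = 1;
}

message GetImageResponse {
  bytes data = 1;
  string content_type = 2;
}
```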
<br><br>
VSCO was one of the first users of Envoy, getting it in production five days after it was open sourced. “We wanted to serve gRPC and HTTP/2 directly to mobile clients through our edge load balancers, and Envoy was our only reasonable solution,” says Lu. “The ability to send consistent and detailed stats by default across all services has made observability and standardization of dashboards much easier.” The metrics that come built in with Envoy have also “greatly helped with debugging,” says DevOps Engineer Ryan Nguyen.
</div>
<div class="banner5" style="width:100%">
<div class="banner5text">
"Because theres now an organization that supports Kubernetes, does that build confidence? The answer is a resounding yes."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- NAVEEN GATTU, SENIOR SOFTWARE ENGINEER ON VSCOS COMMUNITY TEAM</span></div>
</div>
<div class="fullcol">
With Kubernetes, gRPC, and Envoy in place, VSCO has seen an 88% reduction in total minutes of outage time, mainly due to the elimination of JSON-schema errors and service-specific infrastructure provisioning errors, and an increased speed in fixing outages.
<br><br>
Given its success using CNCF projects, VSCO is starting to experiment with others, including <a href="https://github.com/containernetworking">CNI</a> and Prometheus. “To have a large organization backing these technologies, we have a lot more confidence trying this software and deploying to production,” says Nguyen.
<br><br>
The team has made contributions to gRPC and Envoy, and is hoping to be even more active in the CNCF community. “Ive been really impressed seeing how our engineers have come up with really creative solutions to things by just combining a lot of Kubernetes primitives,” says Lu. “Exposing Kubernetes constructs as a service to our engineers as opposed to exposing higher order constructs has worked well for us. It lets you get familiar with the technology and do more interesting things with it.”
</div>
</section>

@ -0,0 +1,96 @@
---
title: Woorank Case Study
linkTitle: woorank
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
featured: false
---
<div class="banner1" style="background-image: url('/images/CaseStudy_woorank_banner1.jpg')">
<h1> CASE STUDY:<img src="/images/woorank_logo.png" class="header_logo" style="width:25%;margin-bottom:-1%"><br> <div class="subhead" style="margin-top:1%">Woorank: How Kubernetes Helped a Startup Manage 50 Microservices with<br>12 Engineers—At 30% Less Cost
</div></h1>
</div>
<div class="details">
Company &nbsp;<b>Woorank</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Brussels, Belgium</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Digital marketing tool</b>
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1" style="width:100%">
<h2>Challenge</h2>
Founded in 2011, Woorank embraced microservices and containerization early on, so its core product, a tool that helps digital marketers improve their websites visibility on the internet, consists of 50 applications developed and maintained by a technical team of 12. For two years, the infrastructure ran smoothly on Mesos, but “there were still lots of our own libraries that we had to roll and applications that we had to bring in, so it was very cumbersome for us as a small team to keep those things alive and to update them,” says CTO/Cofounder Nils De Moor. So he began looking for a new solution with more automation and self-healing built in, that would better suit the companys human resources.
<h2>Solution</h2>
De Moor decided to switch to <a href="https://kubernetes.io/">Kubernetes</a> running on <a href="https://aws.amazon.com/">AWS</a>, which “allows us to just define applications, how they need to run, how scalable they need to be, and it takes pain away from the developers thinking about that,” he says. “When things fail and errors pop up, the system tries to heal itself, and thats really, for us, the key reason to work with Kubernetes.” The company now also uses <a href="https://www.fluentd.org/">Fluentd</a>, <a href="https://prometheus.io/">Prometheus</a>, and <a href="http://opentracing.io/">OpenTracing</a>.
<h2>Impact</h2>
The companys number one concern was immediately erased: Maintaining Kubernetes takes just one person on staff, and its not a full-time job. Infrastructure updates used to take two active working days; now its just a matter of “a few hours of passively following the process,” says De Moor. Implementing new tools—which once took weeks of planning, installing, and onboarding—now only takes a few days. “We were already pretty flexible in our costs and taking on traffic peaks and higher load in general,” adds De Moor, “but with Kubernetes and the other CNCF tools we use, we have achieved about 30% in cost savings.” Plus, the rate of deployments per day has nearly doubled.
</div>
</div>
</section>
<div class="banner2">
<div class="banner2text">
“It was definitely important for us to have CNCF as an umbrella above everything. Weve always been working with open source libraries and tools and technologies. It works very well for us, but sometimes things can drift, maintainers drop out, and projects go haywire. For us, it was indeed important to know that whatever project gets taken under this umbrella, its taken very seriously. Our way of contributing back is also by joining this community. Its, for us, a way to show our appreciation for whats going on in this framework.”
<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>— NILS DE MOOR, CTO/COFOUNDER, WOORANK</span>
</div>
</div>
<section class="section2">
<div class="fullcol">
<h2>Wooranks core product is a tool that enables digital marketers to improve their websites visibility on the internet.</h2>
“We help them acquire lots of data and then present it to them in meaningful ways so they can work with it,” says CTO/Cofounder Nils De Moor. In its seven years as a startup, the company followed a familiar technological path to build that product: starting with a monolithic application, breaking it down into microservices, and then embracing containerization. “Thats where our modern infrastructure started out,” says De Moor.
<br><br>
As new features have been added to the product, it has grown to consist of 50 applications under the hood. Though Docker had made things easier to deploy, and the team had been using Mesos as an orchestration framework on AWS since 2015, De Moor realized there was still too much overhead to managing the infrastructure, especially with a technical team of just 12.
<br><br>
“The pain point was that there were still lots of our own libraries that we had to roll and applications that we had to bring in, so it was very cumbersome for us as a small team to keep those things alive and to update them,” says De Moor. “When things went wrong during deployment, someone manually had to come in and figure it out. It wasnt necessarily that the technology or anything was wrong with Mesos; it was just not really fitting our model of being a small company, not having the human resources to make sure it all works and can be updated.”
</div>
</section>
<div class="banner3" style="background-image: url('/images/CaseStudy_woorank_banner3.jpg')">
<div class="banner3text">
"Cloud native technologies have brought to us a transparency on everything going on in our system, from the code to the server. It has brought huge cost savings and a better way of dealing with those costs and keeping them under control. And performance-wise, it has helped our team understand how we can make our code work better on the cloud native infrastructure."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>— NILS DE MOOR, CTO/COFOUNDER, WOORANK</span>
</div>
</div>
<section class="section3">
<div class="fullcol">
Around the time Woorank was grappling with these issues, Kubernetes was emerging as a technology. De Moor knew that he wanted a platform that would be more automated and self-healing, and when he began experimenting with Kubernetes, he found that it checked all those boxes. “Kubernetes allows us to just define applications, how they need to run, how scalable they need to be, and it takes pain away from the developers thinking about that,” he says. “When things fail and errors pop up, the system tries to heal itself, and thats really, for us, the key reason to work with Kubernetes. It allowed us to set up certain testing frameworks to just be alerted when things go wrong, instead of having to look at whether everything went right. Its made peoples lives much easier. Its quite a big mindset change.”
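Declaring “how applications need to run” and letting the system heal itself comes down to manifests like the following minimal sketch; the application name, image, and probe endpoint are hypothetical, not Woorank's actual configuration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crawler
spec:
  replicas: 3              # desired state; Kubernetes restores it after failures
  selector:
    matchLabels:
      app: crawler
  template:
    metadata:
      labels:
        app: crawler
    spec:
      containers:
        - name: crawler
          image: example/crawler:1.0   # illustrative image name
          livenessProbe:               # failed probes trigger automatic restarts
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
```

If a container crashes or its liveness probe fails, Kubernetes replaces it without anyone having to “manually come in and figure it out.”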
<br><br>
Once one small Kubernetes cluster was up and running, the team began moving over a few applications at a time, gradually increasing the load over the course of several months. By early 2017, Woorank was 100% deployed on Kubernetes.
<br><br>
The companys number one concern was immediately erased: Maintaining Kubernetes is the responsibility of just one person on staff, and its not his full-time job. Updating the old infrastructure “was always a pain,” says De Moor: It used to take two active working days, “and it was always a bit scary when we did that.” With Kubernetes, its just a matter of “a few hours of passively following the process.”
</div>
</section>
<div class="banner4" style="background-image: url('/images/CaseStudy_woorank_banner4.jpg');width:100%">
<div class="banner4text">
"When things fail and errors pop up, the system tries to heal itself, and thats really, for us, the key reason to work with Kubernetes. It allowed us to set up certain testing frameworks to just be alerted when things go wrong, instead of having to look at whether everything went right. Its made peoples lives much easier. Its quite a big mindset change." <br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- NILS DE MOOR, CTO/COFOUNDER, WOORANK</span>
</div>
</div>
<section class="section5" style="padding:0px !important">
<div class="fullcol">
Transparency on all levels, from the code to the servers, has also been a byproduct of the move to Kubernetes. “Its easier for the entire team to get a better understanding of the infrastructure, how its working, how it looks like, whats going on,” says De Moor. “Its not that thing thats running, and no one really knows how it works except this one person. Now its really a team effort of everyone knowing, Okay, when something goes wrong, its probably in this area or we need to check this.’”
<br><br>
To that end, Woorank has begun implementing other cloud native tools that help with visibility, such as Fluentd for logging, Prometheus for monitoring, and OpenTracing for distributed tracing. Implementing these new tools—which once took weeks of planning, installing, and onboarding—now only takes a few days. “With all the tools and projects under the CNCF umbrella, its easier for us to test and play with technology than it used to be,” says De Moor. “With Prometheus, we used it fairly early and couldnt get it fairly stable. A couple of months ago, the question reappeared, so we set it up in two days, and now everyone is using it.”
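A typical starting point for that kind of Prometheus setup on Kubernetes is pod discovery with an opt-in annotation — a minimal sketch of a common pattern, not Woorank's actual configuration:

```yaml
# prometheus.yml — scrape only pods annotated prometheus.io/scrape: "true"
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

Services then opt in to monitoring simply by carrying the annotation, which keeps the two-day setup De Moor describes close to zero for each new application.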
<br><br>
Deployments, too, have been impacted: The rate has more than doubled, which De Moor partly attributes to the transparency of the new process. “With Kubernetes, you see that these three containers didnt start for this reason,” he says. Plus, “now we bring deployment messages into Slack. If you see deployments rolling by every day, it does somehow indirectly enforce you, okay, I need to be part of this train, so I also need to deploy.”
</div>
<div class="banner5" >
<div class="banner5text">
"We can plan those things over a certain timeline, try to fit our resource usage to that, and then bring in spot instances, which will hopefully drive the costs&nbsp;down&nbsp;more."<br style="height:25px"><span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;"><br>- NILS DE MOOR, CTO/COFOUNDER, WOORANK</span></div>
</div>
<div class="fullcol">
Perhaps the biggest impact, though, has been on the bottom line. “We were already pretty flexible in our costs and taking on traffic peaks and higher load in general, but with Kubernetes and the other CNCF tools we use, we have achieved about 30% in cost savings,” says De Moor.
<br><br>
And theres room for even greater savings. Currently, most of Wooranks infrastructure is running on AWS on demand; the company pays a fixed price and makes some reservations for its planned amount of resources needed. De Moor is planning to experiment more with spot instances with certain resource-heavy workloads such as web crawls: “We can plan those things over a certain timeline, try to fit our resource usage to that, and then bring in spot instances, which will hopefully drive the costs down more.”
<br><br>
Moving to Kubernetes has been so beneficial to Woorank that the company is doubling down on both cloud native technologies and the community. "It was definitely important for us to have CNCF as an umbrella above everything," says De Moor. "We've always been working with open source libraries and tools and technologies. It works very well for us, but sometimes things can drift, maintainers drop out, and projects go haywire. For us, it was indeed important to know that whatever project gets taken under this umbrella, it's taken very seriously. Our way of contributing back is also by joining this community. It's, for us, a way to show our appreciation for what's going on in this framework."
</div>
</section>