diff --git a/content/en/case-studies/ant-financial/ant-financial_featured_logo.png b/content/en/case-studies/ant-financial/ant-financial_featured_logo.png new file mode 100644 index 0000000000..cb40345027 Binary files /dev/null and b/content/en/case-studies/ant-financial/ant-financial_featured_logo.png differ diff --git a/content/en/case-studies/ant-financial/index.html b/content/en/case-studies/ant-financial/index.html new file mode 100644 index 0000000000..92b46526de --- /dev/null +++ b/content/en/case-studies/ant-financial/index.html @@ -0,0 +1,96 @@ +--- +title: Ant Financial Case Study +linkTitle: ant-financial +case_study_styles: true +cid: caseStudies +css: /css/style_case_studies.css +featured: false +--- + +
+

CASE STUDY:
Ant Financial’s Hypergrowth Strategy Using Kubernetes + +

+ +
+ +
+ Company  Ant Financial     Location  Hangzhou, China     Industry  Financial Services +
+ +
+
+
+
+

Challenge

+ Officially founded in October 2014, Ant Financial originated from Alipay, the world’s largest online payment platform that launched in 2004. The company also offers numerous other services leveraging technology innovation. With the volume of transactions Alipay handles for its 900+ million users worldwide (through its local and global partners)—256,000 transactions per second at the peak of Double 11 Singles Day 2017, and total gross merchandise value of $31 billion for Singles Day 2018—not to mention that of its other services, Ant Financial faces “data processing challenge in a whole new way,” says Haojie Hang, who is responsible for Product Management for the Storage and Compute Group. “We see three major problems of operating at that scale: how to provide real-time compute, storage, and processing capability, for instance to make real-time recommendations for fraud detection; how to provide intelligence on top of this data, because there’s too much data and we’re not getting enough insight; and how to apply security in the application level, in the middleware level, the system level, even the chip level.” In order to provide reliable and consistent services to its customers, Ant Financial embraced containers in early 2014, and soon needed an orchestration solution for the tens-of-thousands-of-node clusters in its data centers.

Solution

+ After investigating several technologies, the team chose Kubernetes for orchestration, as well as a number of other CNCF projects, including Prometheus, OpenTracing, etcd and CoreDNS. “In late 2016, we decided that Kubernetes will be the de facto standard,” says Hang. “Looking back, we made the right bet on the right technology. But then we needed to move the production workload from the legacy infrastructure to the latest Kubernetes-enabled platform, and that took some time, because we are very careful in terms of reliability and consistency.” All core financial systems were containerized by November 2017, and the migration to Kubernetes is ongoing. +
+

Impact

+ “We’ve seen at least tenfold improvement in operations with cloud native technology, which means you can have tenfold increase in terms of output,” says Hang. Ant also provides its fully integrated financial cloud platform to business partners around the world, and hopes to power the next generation of digital banking with deep experience in service innovation and technology expertise. Hang says the team hasn’t begun to focus on optimizing the Kubernetes platform, either: “Because we’re still in the hyper growth stage, we’re not in a mode where we do cost saving yet.”
+ +
+
+
+
+ "In late 2016, we decided that Kubernetes will be the de facto standard. Looking back, we made the right bet on the right technology." +

- HAOJIE HANG, PRODUCT MANAGEMENT, ANT FINANCIAL
+
+
+
+
+

A spinoff of the multinational conglomerate Alibaba, Ant Financial boasts a $150+ billion valuation and the scale to match. The fintech startup, launched in 2014, comprises Alipay, the world’s largest online payment platform, and numerous other services leveraging technology innovation.<br><br>

+ And the volume of transactions that Alipay handles for over 900 million users worldwide (through its local and global partners) is staggering: 256,000 per second at the peak of Double 11 Singles Day 2017, and total gross merchandise value of $31 billion for Singles Day 2018. With the mission of “bringing the world equal opportunities,” Ant Financial is dedicated to creating an open, shared credit system and financial services platform through technology innovations. +

+ Combine that with the operations of its other properties—such as the Huabei online credit system, Jiebei lending service, and the 350-million-user Ant Forest green energy mobile app—and Ant Financial faces “data processing challenge in a whole new way,” says Haojie Hang, who is responsible for Product Management for the Storage and Compute Group. “We see three major problems of operating at that scale: how to provide real-time compute, storage, and processing capability, for instance to make real-time recommendations for fraud detection; how to provide intelligence on top of this data, because there’s too much data and we’re not getting enough insight; and how to apply security in the application level, in the middleware level, the system level, even the chip level.” +

+ To address those challenges and provide reliable and consistent services to its customers, Ant Financial embraced Docker containerization in 2014. But the team soon realized it needed an orchestration solution for the tens-of-thousands-of-node clusters in the company’s data centers.
+
+
+
+ "On Double 11 this year, we had plenty of nodes on Kubernetes, but compared to the whole scale of our infrastructure, this is still in progress."

- RANGER YU, GLOBAL TECHNOLOGY PARTNERSHIP & DEVELOPMENT, ANT FINANCIAL
+ +
+
+
+
+ The team investigated several technologies, including Docker Swarm and Mesos. “We did a lot of POCs, but we’re very careful in terms of production systems, because we want to make sure we don’t lose any data,” says Hang. “You cannot afford to have a service downtime for one minute; even one second has a very, very big impact. We operate every day under pressure to provide reliable and consistent services to consumers and businesses in China and globally.” +

+ Ultimately, Hang says Ant chose Kubernetes because it checked all the boxes: a strong community, technology that “will be relevant in the next three to five years,” and a good match for the company’s engineering talent. “In late 2016, we decided that Kubernetes will be the de facto standard,” says Hang. “Looking back, we made the right bet on the right technology. But then we needed to move the production workload from the legacy infrastructure to the latest Kubernetes-enabled platform. We spent a lot of time learning and then training our people to build applications on Kubernetes well.” +

+ All core financial systems were containerized by November 2017, and the migration to Kubernetes is ongoing. Ant’s platform also leverages a number of other CNCF projects, including Prometheus, OpenTracing, etcd and CoreDNS. “On Double 11 this year, we had plenty of nodes on Kubernetes, but compared to the whole scale of our infrastructure, this is still in progress,” says Ranger Yu, Global Technology Partnership & Development. +
+
+
+
+ "We’re very grateful for CNCF and this amazing technology, which we need as we continue to scale globally. We’re definitely embracing the community and open sourcing more in the future."<br><br>

- HAOJIE HANG, PRODUCT MANAGEMENT, ANT FINANCIAL
+
+
+ +
+
+ Still, there has already been an impact. “Cloud native technology has benefited us greatly in terms of efficiency,” says Hang. “In general, we want to make sure our infrastructure is nimble and flexible enough for the work that could happen tomorrow. That’s the goal. And with cloud native technology, we’ve seen at least tenfold improvement in operations, which means you can have tenfold increase in terms of output. Let’s say you are operating 10 nodes with one person. With cloud native, tomorrow you can have 100 nodes.” +

+ Ant also provides its financial cloud platform to partners around the world, and hopes to power the next generation of digital banking with deep experience in service innovation and technology expertise. Hang says the team hasn’t begun to focus on optimizing the Kubernetes platform, either: “Because we’re still in the hyper growth stage, we’re not in a mode where we do cost-saving yet.” +

+ The CNCF community has also been a valuable asset during Ant Financial’s move to cloud native. “If you are applying a new technology, it’s very good to have a community to discuss technical problems with other users,” says Hang. “We’re very grateful for CNCF and this amazing technology, which we need as we continue to scale globally. We’re definitely embracing the community and open sourcing more in the future.” +
+ +
+
+"In China, we are the North Star in terms of innovation in financial and other related services,” says Hang. “We definitely want to make sure we’re still leading in the next 5 to 10 years with our investment in technology."

- RANGER YU, GLOBAL TECHNOLOGY PARTNERSHIP & DEVELOPMENT, ANT FINANCIAL
+
+ +
+ In fact, the company has already started to open source some of its cloud native middleware. “We are going to be very proactive about that,” says Yu. “CNCF provided a platform so everyone can plug in or contribute components. This is very good open source governance.” +

+ Looking ahead, the Ant team will continue to evaluate many other CNCF projects. Building a service mesh community in China, the team has brought together many China-based companies and developers to discuss the potential of that technology. “Service mesh is very attractive for Chinese developers and end users because we have a lot of legacy systems running now, and it’s an ideal mid-layer to glue everything together, both new and legacy,” says Hang. “For new technologies, we look very closely at whether they will last.” +

+ At Ant, Kubernetes passed that test with flying colors, and the team hopes other companies will follow suit. “In China, we are the North Star in terms of innovation in financial and other related services,” says Hang. “We definitely want to make sure we’re still leading in the next 5 to 10 years with our investment in technology.” + +
+
diff --git a/content/en/case-studies/city-of-montreal/city-of-montreal_featured_logo.png b/content/en/case-studies/city-of-montreal/city-of-montreal_featured_logo.png new file mode 100644 index 0000000000..be2af029f0 Binary files /dev/null and b/content/en/case-studies/city-of-montreal/city-of-montreal_featured_logo.png differ diff --git a/content/en/case-studies/city-of-montreal/index.html b/content/en/case-studies/city-of-montreal/index.html new file mode 100644 index 0000000000..151ce44b21 --- /dev/null +++ b/content/en/case-studies/city-of-montreal/index.html @@ -0,0 +1,99 @@ +--- +title: City of Montreal Case Study +linkTitle: city-of-montreal +case_study_styles: true +cid: caseStudies +css: /css/style_case_studies.css +featured: false +--- + +
+

CASE STUDY:
City of Montréal - How the City of Montréal Is Modernizing Its 30-Year-Old, Siloed Architecture with Kubernetes + +

+ +
+ +
+ Company  City of Montréal     Location  Montréal, Québec, Canada     Industry  Government +
+ +
+
+
+
+

Challenge

+ Like many governments, Montréal has a number of legacy systems, and “we have systems that are older than some developers working here,” says the city’s CTO, Jean-Martin Thibault. “We have mainframes, all flavors of Windows, various flavors of Linux, old and new Oracle systems, Sun servers, all kinds of databases. Like all big corporations, some of the most important systems, like Budget and Human Resources, were developed on mainframes in-house over the past 30 years.” There are over 1,000 applications in all, most of them running on different ecosystems. In 2015, a new management team decided to break down those silos, and invest in IT in order to move toward a more integrated governance for the city. They needed to figure out how to modernize the architecture.

Solution

+ The first step was containerization. The team started with a small Docker farm with four or five servers, with Rancher providing access to the Docker containers and their logs, and Jenkins for deployment. “We based our effort on the new trends; we understood the benefits of immutability and deployments without downtime and such things,” says Solutions Architect Marc Khouzam. They soon realized they needed orchestration as well, and opted for Kubernetes. Says Enterprise Architect Morgan Martinet: “Kubernetes offered concepts on how you would describe an architecture for any kind of application, and based on those concepts, deploy what’s required to run the infrastructure. It was becoming a de facto standard.”
+

Impact

+ The time to market has improved drastically, from many months to a few weeks. Deployments went from months to hours. “In the past, you would have to ask for virtual machines, and that alone could take weeks, easily,” says Thibault. “Now you don’t even have to ask for anything. You just create your project and it gets deployed.” Kubernetes has also improved the efficiency of how the city uses its compute resources: “Before, the 200 application components we currently run on Kubernetes would have required hundreds of virtual machines, and now, if we’re talking about a single environment of production, we are able to run them on 8 machines, counting the masters of Kubernetes,” says Martinet. And it’s all done with a small team of just five people operating the Kubernetes clusters.
+ +
+
+
+
+ "We realized the limitations of having a non-orchestrated Docker environment. Kubernetes came to the rescue, bringing in all these features that make it a lot easier to manage and give a lot more benefits to the users." +

- JEAN-MARTIN THIBAULT, CTO, CITY OF MONTRÉAL
+
+
+
+
+

The second-largest municipality in Canada, Montréal has a large number of legacy systems keeping the government running. And while they don’t quite date back to the city’s founding in 1642, “we have systems that are older than some developers working here,” jokes the city’s CTO, Jean-Martin Thibault.<br><br>

+ “We have mainframes, all flavors of Windows, various flavors of Linux, old and new Oracle systems, Sun servers, all kinds of databases. Some of the most important systems, like Budget and Human Resources, were developed on mainframes in-house over the past 30 years.” +

+ In recent years, that fact became a big pain point. There are over 1,000 applications in all, running on almost as many different ecosystems. In 2015, a new city management team decided to break down those silos, and invest in IT in order to move toward a more integrated governance. “The organization was siloed, so as a result the architecture was siloed,” says Thibault. “Once we got integrated into one IT team, we decided to redo an overall enterprise architecture.” +

+ The first step to modernize the architecture was containerization. “We based our effort on the new trends; we understood the benefits of immutability and deployments without downtime and such things,” says Solutions Architect Marc Khouzam. The team started with a small Docker farm with four or five servers, with Rancher providing access to the Docker containers and their logs, and Jenkins for deployment.
+
+
+
+ "Getting a project running in Kubernetes is entirely dependent on how long you need to program the actual software. It’s no longer dependent on deployment. Deployment is so fast that it’s negligible."

- MARC KHOUZAM, SOLUTIONS ARCHITECT, CITY OF MONTRÉAL
+ +
+
+
+
+ But this Docker farm setup had some limitations, including the lack of self-healing and dynamic scaling based on traffic, and the effort required to optimize server resources and scale to multiple instances of the same container. The team soon realized they needed orchestration as well. “Kubernetes came to the rescue,” says Thibault, “bringing in all these features that make it a lot easier to manage and give a lot more benefits to the users.” +

+ The team had evaluated several orchestration solutions, but Kubernetes stood out because it addressed all of the pain points. (They were also inspired by Yahoo! Japan’s use case, which the team members felt came close to their vision.) “Kubernetes offered concepts on how you would describe an architecture for any kind of application, and based on those concepts, deploy what’s required to run the infrastructure,” says Enterprise Architect Morgan Martinet. “It was becoming a de facto standard. It also promised portability across cloud providers. The choice of Kubernetes now gives us many options such as running clusters in-house or in any IaaS provider, or even using Kubernetes-as-a-service in any of the major cloud providers.” +

+ Another important factor in the decision was vendor neutrality. “As a government entity, it is essential for us to be neutral in our selection of products and providers,” says Thibault. “The independence of the Cloud Native Computing Foundation from any company provides this.” +
+
+
+
+ "Kubernetes has been great. It’s been stable, and it provides us with elasticity, resilience, and robustness. While re-architecting for Kubernetes, we also benefited from the monitoring and logging aspects, with centralized logging, Prometheus logging, and Grafana dashboards. We have enhanced visibility of what’s being deployed."

- MORGAN MARTINET, ENTERPRISE ARCHITECT, CITY OF MONTRÉAL
+
+
+ +
+
+ The Kubernetes implementation began with the deployment of a small cluster using an internal Ansible playbook, which was soon replaced by the Kismatic distribution. Given the complexity they saw in operating a Kubernetes platform, they decided to provide development groups with an automated CI/CD solution based on Helm. “An integrated CI/CD solution on Kubernetes standardized how the various development teams designed and deployed their solutions, but allowed them to remain independent,” says Khouzam. +

+ During the re-architecting process, the team also added Prometheus for monitoring and alerting, Fluentd for logging, and Grafana for visualization. “We have enhanced visibility of what’s being deployed,” says Martinet. Adds Khouzam: “The big benefit is we can track anything, even things that don’t run inside the Kubernetes cluster. It’s our way to unify our monitoring effort.” +

+ Altogether, the cloud native solution has had a positive impact on velocity as well as administrative overhead. With standardization, code generation, automatic deployments into Kubernetes, and standardized monitoring through Prometheus, the time to market has improved drastically, from many months to a few weeks. Deployments went from months and weeks of planning down to hours. “In the past, you would have to ask for virtual machines, and that alone could take weeks to properly provision,” says Thibault. Plus, for dedicated systems, experts often had to be brought in to install them with their own recipes, which could take weeks or months.

+ Now, says Khouzam, “we can deploy pretty much any application that’s been Dockerized without any help from anybody. Getting a project running in Kubernetes is entirely dependent on how long you need to program the actual software. It’s no longer dependent on deployment. Deployment is so fast that it’s negligible.” + +
+ +
+
"We’re working with the market when possible, to put pressure on our vendors to support Kubernetes, because it’s a much easier solution to manage."<br><br>

- MORGAN MARTINET, ENTERPRISE ARCHITECT, CITY OF MONTRÉAL
+
+ +
+ Kubernetes has also improved the efficiency of how the city uses its compute resources: “Before, the 200 application components we currently run in Kubernetes would have required hundreds of virtual machines, and now, if we’re talking about a single environment of production, we are able to run them on 8 machines, counting the masters of Kubernetes,” says Martinet. And it’s all done with a small team of just five people operating the Kubernetes clusters. Adds Martinet: “It’s a dramatic improvement no matter what you measure.” +

+ So it should come as no surprise that the team’s strategy going forward is to target Kubernetes as much as they can. “If something can’t run inside Kubernetes, we’ll wait for it,” says Thibault. That means they haven’t moved any of the city’s Windows systems onto Kubernetes, though it’s something they would like to do. “We’re working with the market when possible, to put pressure on our vendors to support Kubernetes, because it’s a much easier solution to manage,” says Martinet. +

+ Thibault sees a near future where 60% of the city’s workloads are running on a Kubernetes platform—basically any and all of the use cases that they can get to work there. “It’s so much more efficient than the way we used to do things,” he says. “There’s no looking back.” + +
+
diff --git a/content/en/case-studies/jd-com/index.html b/content/en/case-studies/jd-com/index.html new file mode 100644 index 0000000000..636f226339 --- /dev/null +++ b/content/en/case-studies/jd-com/index.html @@ -0,0 +1,97 @@ +--- +title: JD.com Case Study +linkTitle: jd-com +case_study_styles: true +cid: caseStudies +css: /css/style_case_studies.css +featured: false +--- + +
+

CASE STUDY:
JD.com: How JD.com Pioneered Kubernetes for E-Commerce at Hyperscale + +

+ +
+ +
+ Company  JD.com     Location  Beijing, China     Industry  eCommerce +
+ +
+
+
+
+

Challenge

+ With more than 300 million active users and total 2017 revenue of more than $55 billion, JD.com is China’s largest retailer, and its operations are the epitome of hyperscale. For example, there are more than a trillion images in JD.com’s product databases—with 100 million being added daily—and this enormous amount of data needs to be instantly accessible. In 2014, JD.com moved its applications to containers running on bare metal machines using OpenStack and Docker to "speed up the delivery of our computing resources and make the operations much simpler," says Haifeng Liu, JD.com’s Chief Architect. But by the end of 2015, with hundreds of thousands of nodes running in multiple data centers, "we encountered a lot of problems because our platform was not strong enough, and we suffered from bottlenecks and scalability issues," says Liu. "We needed infrastructure for the next five years of development, now."

Solution

+ JD.com turned to Kubernetes to accommodate its clusters. At the beginning of 2016, the company began to transition from OpenStack to Kubernetes, and today, JD.com runs the world’s largest Kubernetes cluster. "Kubernetes has provided a strong foundation on top of which we have customized the solution to suit our needs as China’s largest retailer," says Liu.
+

Impact

+ "We have greater data center efficiency, better managed resources, and smarter deployment with the Kubernetes platform," says Liu. Deployment time went from several hours to tens of seconds. Efficiency has improved by 20-30%, measured in IT costs. With the further optimizations the team is working on, Liu believes there is the potential to save hundreds of millions of dollars a year. But perhaps the best indication of success was the annual Singles Day shopping event, which ran on the Kubernetes platform for the first time in 2018. Over 11 days, transaction volume on JD.com was $23 billion, and "our e-commerce platforms did great," says Liu. "Infrastructure led the way to prep for 11.11. We took the approach of predicting volume, emulating the behavior of customers to prepare beforehand, and drilled for malfunctions. Because of Kubernetes’s scalability, we were able to handle an extremely high level of demand." +
+ +
+
+
+
+ "Kubernetes helped us reduce the complexity of operations to make distributed systems stable and scalable. Most importantly, we can leverage Kubernetes for scheduling resources to reduce hardware costs. That’s the big win." +

- HAIFENG LIU, CHIEF ARCHITECT, JD.com
+
+
+
+
+

With more than 300 million active users and $55.7 billion in annual revenues last year, JD.com is China’s largest retailer, and its operations are the epitome of hyperscale.

+ For example, there are more than a trillion images in JD.com’s product databases for customers, with 100 million being added daily. And this enormous amount of data needs to be instantly accessible to enable a smooth online customer experience. +

+ In 2014, JD.com moved its applications to containers running on bare metal machines using OpenStack and Docker to "speed up the delivery of our computing resources and make the operations much simpler," says Haifeng Liu, JD.com’s Chief Architect. But by the end of 2015, with hundreds of thousands of nodes in multiple data centers, "we encountered a lot of problems because our platform was not strong enough, and we suffered from bottlenecks and scalability issues," Liu adds. "We needed infrastructure for the next five years of development, now." +

+ After considering a number of orchestration technologies, JD.com decided to adopt Kubernetes to accommodate its ever-growing clusters. "The main reason is because Kubernetes can give us more efficient, scalable and much simpler application deployments, plus we can leverage it to do flexible platform scheduling," says Liu. + +
+
+
+
+ "We customized Kubernetes and built a modern system on top of it. This entire ecosystem of Kubernetes plus our own optimizations have helped us save costs and time."

- HAIFENG LIU, CHIEF ARCHITECT, JD.com
+ +
+
+
+
+ The fact that Kubernetes is based on Google’s Borg also gave the company confidence. The team liked that Kubernetes has a clear and simple architecture, and that it’s developed mostly in Go, which is a popular language within JD.com. Though he felt that at the time Kubernetes "was not mature enough," Liu says, "we adopted it anyway." +

+ The team spent a year developing the new container engine platform based on Kubernetes, and at the end of 2016, began promoting it within the company. "We wanted the cluster to be the default way for creating services, so scalability is easier," says Liu. "We talked to developers, interest grew, and we solved problems together." Some of these problems included networking performance and etcd scalability. "But during the past two years, Kubernetes has become more mature and very stable," he adds. +

+ Today, the company runs the world’s largest Kubernetes cluster. "We customized Kubernetes and built a modern system on top of it," says Liu. "This entire ecosystem of Kubernetes plus our own optimizations have helped us save costs and time. We have greater data center efficiency, better managed resources, and smarter deployment with the Kubernetes platform." + +
+
+
+
+ "My advice is first you need to combine this technology with your own businesses, and the second is you need clear goals. You cannot just use the technology because others are using it. You need to consider your own objectives."

- HAIFENG LIU, CHIEF ARCHITECT, JD.com
+
+
+ +
+
+ The results are clear: Deployment time went from several hours to tens of seconds. Efficiency has improved by 20-30%, measured in IT costs. But perhaps the best indication of success was the annual Singles Day shopping event, which ran on the Kubernetes platform for the first time in 2018. Over 11 days, transaction volume on JD.com was $23 billion, and "our e-commerce platforms did great," says Liu. "Infrastructure led the way to prep for 11.11. We took the approach of predicting volume, emulating the behavior of customers to prepare beforehand, and drilled for malfunctions. Because of Kubernetes’s scalability, we were able to handle an extremely high level of demand." +

+ JD.com is now in its second stage with Kubernetes: The platform is already stable, scalable, and flexible, so the focus is on how to run things much more efficiently to further reduce costs. With the optimizations the team is working on with resource management, Liu believes there is the potential to save hundreds of millions of dollars a year. +

+ "We run Kubernetes and container clusters on roughly tens of thousands of physical bare metal nodes," he says. "Using Kubernetes and leveraging our own machine learning pipeline to predict how many resources we need for each application we use, and our own intelligent scaling algorithm, we can improve our resource usage. If we boost the resource usage, for example, by several percent, that means we can reduce huge hardware costs. Then we don’t need that many servers to get that same amount of workload. That can save us a lot of resources." +
+ +
+
+"We can share our successful experience with the community, and we also receive good feedback from others. So it’s mutually beneficial."

- HAIFENG LIU, CHIEF ARCHITECT, JD.com
+
+ +
+ JD.com, which won CNCF’s 2018 End User Award, is also using Helm, CNI, Harbor, and Vitess on its platform. JD.com developers have made considerable contributions to Vitess, the CNCF project for scalable MySQL cluster management, and the company hopes to donate its own project to CNCF in the near future. Community participation is a priority for JD.com. "We have a good partnership with this community," says Liu. "We can share our successful experience with the community, and we also receive good feedback from others. So it’s mutually beneficial." +

+ To that end, Liu offers this advice for other companies considering adopting cloud native technology. "First you need to combine this technology with your own businesses, and the second is you need clear goals," he says. "You cannot just use the technology because others are using it. You need to consider your own objectives." +

+ For JD.com’s objectives, these cloud native technologies have been an ideal fit with the company’s own homegrown innovation. "Kubernetes helped us reduce the complexity of operations to make distributed systems stable and scalable," says Liu. "Most importantly, we can leverage Kubernetes for scheduling resources to reduce hardware costs. That’s the big win." +
+
diff --git a/content/en/case-studies/jd-com/jd-com_featured_logo.png b/content/en/case-studies/jd-com/jd-com_featured_logo.png new file mode 100644 index 0000000000..e897998429 Binary files /dev/null and b/content/en/case-studies/jd-com/jd-com_featured_logo.png differ diff --git a/content/en/case-studies/nerdalize/index.html b/content/en/case-studies/nerdalize/index.html new file mode 100644 index 0000000000..127d95c375 --- /dev/null +++ b/content/en/case-studies/nerdalize/index.html @@ -0,0 +1,96 @@ +--- +title: Nerdalize Case Study +linkTitle: nerdalize +case_study_styles: true +cid: caseStudies +css: /css/style_case_studies.css +featured: false +--- +
+

CASE STUDY:
Nerdalize: Providing Affordable and Sustainable Cloud Hosting with Kubernetes +

+ +
+ +
+ Company  Nerdalize     Location  Delft, Netherlands      Industry  Cloud Provider +
+ +
+
+
+
+ +

Challenge

+ Nerdalize offers affordable cloud hosting for customers—and free heat and hot water for people who sign up to house the heating devices that contain the company’s servers. The savings Nerdalize realizes by not running data centers are passed on to its customers. When the team began using Docker to make its software more portable, it realized it also needed a container orchestration solution. “As a cloud provider, we have internal services for hosting our backends and billing our customers, but we also need to offer our compute to our end users,” says Digital Product Engineer Ad van der Veer. “Since we have these heating devices spread across the Netherlands, we need some way of tying that all together.” +

Solution

+ After briefly using a basic scheduling setup with another open source tool, Nerdalize switched to Kubernetes. “On top of our heating devices throughout the Netherlands, we have a virtual machine layer, and on top of that we run Kubernetes clusters for our customers,” says van der Veer. “As a small company, we have to provide a rock solid story in terms of the technology. Kubernetes allows us to offer a hybrid solution: ‘You can run this on our cloud, but you can run it on other clouds as well. It runs in your internal hardware if you like.’ And together with the Docker image standard and our multi-cloud dashboard, that allows them peace of mind.” +

Impact

+ Nerdalize prides itself on being a Kubernetes-native cloud provider that charges its customers prices 40% below those of other cloud providers. “Every euro that we have to invest for licensing of software that’s not open source comes from that 40%,” says van der Veer. If Nerdalize had used a non-open source orchestration platform instead of Kubernetes, “that would reduce this proposition that we have of 40% less cost to like 30%. Kubernetes directly allows us to have this business model and this strategic advantage.” Nerdalize customers also benefit from time savings: one went from spending a day setting up VMs, networking, and software to spinning up a Kubernetes cluster in minutes. Households using the heating devices save an average of 200 euros a year on their heating bills. The environmental impact? The annual reduction in CO2 emissions comes out to 2 tons per Nerdalize household, which is equivalent to a car driving 8,000 km.
+
+
+
+
+ “We can walk into a boardroom and put a Kubernetes logo up, and people accept it as an established technology. It becomes this centerpiece where other cloud native projects can tie in, so there’s a network effect that each project empowers each other. This is something that has a lot of value when we have to talk to customers and convince them that our cloud fits their needs.” +

— AD VAN DER VEER, PRODUCT ENGINEER, NERDALIZE
+
+
+
+
+

Nerdalize is a cloud hosting provider that has no data centers. Instead, the four-year-old startup places its servers in homes across the Netherlands, inside heating devices it developed to turn the heat produced by the servers into heating and hot water for the residents. +

+ “Households save on their gas bills, and cloud users have a much more sustainable cloud solution,” says Maaike Stoops, Customer Experience Queen at Nerdalize. “And we don’t have the overhead of building a data center, so our cloud is up to 40% more affordable.” +

+ That business model has been enabled by the company’s adoption of containerization and Kubernetes. “When we just got started, Docker was just introduced,” says Digital Product Engineer Ad van der Veer. “We began with a very basic bare metal setup, but once we developed the business, we saw that containerization technology was super useful to help our customers. As a cloud provider, we have internal services for hosting our backends and billing our customers, but we also need to offer our compute to our end users. Since we have these heating devices spread across the Netherlands, we need some way of tying that all together.” +

+ After trying to develop its own scheduling system using another open source tool, Nerdalize found Kubernetes. “Kubernetes provided us with more functionality out of the gate,” says van der Veer. +
+
+
+
+ “We always try to get a working version online first, like minimal viable products, and then move to stabilize that,” says van der Veer. “And I think that these kinds of day-two problems are now immediately solved. The rapid prototyping we saw internally is a very valuable aspect of Kubernetes.”

— AD VAN DER VEER, PRODUCT ENGINEER, NERDALIZE
+ +
+
+
+
+ The team first experimented with a basic use case to run customers’ workloads on Kubernetes. “Getting the data working was kind of difficult, and at the time the installation wasn’t that simple,” says van der Veer. “Then CNCF started, we saw the community grow, these problems got solved, and from there it became a very easy decision.” +

+ The first Nerdalize product that was launched in 2017 was “100% containerized and Kubernetes native,” says van der Veer. “On top of our heating devices throughout the Netherlands, we have a virtual machine layer, and on top of that we run Kubernetes clusters for our customers. As a small company, we have to provide a rock solid story in terms of the technology. Kubernetes allows us to offer a hybrid solution: ‘You can run this on our cloud, but you can run it on other clouds as well. It runs in your internal hardware if you like.’ And together with the Docker image standard and our multi-cloud dashboard, that gives them peace of mind.” +

+ Not to mention the 40% cost savings. “Every euro that we have to invest for licensing of software that’s not open source comes from that 40%,” says van der Veer. If Nerdalize had used a non-open source orchestration platform instead of Kubernetes, “that would reduce our cost savings proposition to like 30%. Kubernetes directly allows us to have this business model and this strategic advantage.” +
+
+
+
+ “One of our customers used to spend up to a day setting up the virtual machines, network and software every time they wanted to run a project in the cloud. On our platform, with Docker and Kubernetes, customers can have their projects running in a couple of minutes.” +

— MAAIKE STOOPS, CUSTOMER EXPERIENCE QUEEN, NERDALIZE
+
+
+
+
+ Nerdalize now has customers, from individual engineers to data-intensive startups and established companies, all around the world. (For the time being, though, the heating devices are exclusive to the Netherlands.) One of the most common use cases is batch workloads used by data scientists and researchers, and the time savings for these end users are profound. “One of our customers used to spend up to a day setting up the virtual machines, network and software every time they wanted to run a project in the cloud,” says Stoops. “On our platform, with Docker and Kubernetes, customers can have their projects running in a couple of minutes.”

+ As for households using the heating devices, they save an average of 200 euro a year on their heating bill. The environmental impact? The annual reduction in CO2 emissions comes out to 2 tons per Nerdalize household, which is equivalent to a car driving 8,000 km. +
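The CO2 comparison is easy to sanity-check. A back-of-the-envelope sketch, assuming an average passenger-car emission factor of roughly 0.25 kg CO2 per km (an assumption for illustration, not a Nerdalize figure):

```python
# Back-of-the-envelope check of the CO2 comparison quoted above.
# The emission factor is an assumed average for a passenger car,
# not a figure from the case study.
CAR_EMISSIONS_KG_PER_KM = 0.25   # assumed average: ~0.25 kg CO2 per km driven
distance_km = 8_000              # distance quoted in the case study

co2_kg = CAR_EMISSIONS_KG_PER_KM * distance_km
co2_tonnes = co2_kg / 1000
print(f"{co2_tonnes:.1f} t CO2 per household per year")  # 2.0 t CO2 per household per year
```

Under that assumed factor, 8,000 km of driving works out to the 2 tons quoted above.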

+ For the Nerdalize team, feature development—such as the accessible command line interface called Nerd, which recently went live—has also been sped up by Kubernetes. “We always try to get a working version online first, like minimal viable products, and then move to stabilize that,” says van der Veer. “And I think that these kinds of day-two problems are now immediately solved. The rapid prototyping we saw internally is a very valuable aspect of Kubernetes.” +

+ Another unexpected benefit has been the growing influence and reputation of Kubernetes. “We can walk into a boardroom and put a Kubernetes logo up, and people accept it as an established technology,” says van der Veer. “It becomes this centerpiece where other cloud native projects can tie in, so there’s a network effect that each project empowers each other. This is something that has a lot of value when we have to convince customers that our cloud fits their needs.” +
+ +
+
+“It shouldn’t be too big of a hassle and too large of a commitment. It should be fun and easy for end users. So we really love Kubernetes in that way.”

— MAAIKE STOOPS, CUSTOMER EXPERIENCE QUEEN, NERDALIZE
+
+ +
+ + In fact, Nerdalize is currently looking into implementing other CNCF projects, such as Prometheus for monitoring and Rook, “which should help us with some of the data problems that we want to solve for our customers,” says van der Veer. +

+ In the coming year, Nerdalize will scale up the number of households running its hardware to 50, or the equivalent of a small-scale data center. Geographic redundancy and greater server availability for customers are two main goals. Spreading the word about Kubernetes is also in the game plan. “We offer a free namespace on our sandbox, multi-tenant Kubernetes cluster for anyone to try,” says van der Veer. “What’s more cool than trying your first Kubernetes project on houses, to warm a shower?”

+ Ultimately, this ties into Nerdalize’s mission of supporting affordable and sustainable cloud hosting. “We want to be the disrupter of the cloud space, showing organizations that running in the cloud is easy and affordable,” says Stoops. “It shouldn’t be too big of a hassle and too large of a commitment. It should be fun and easy for end users. So we really love Kubernetes in that way.” +
+ +
diff --git a/content/en/case-studies/nerdalize/nerdalize_featured_logo.png b/content/en/case-studies/nerdalize/nerdalize_featured_logo.png new file mode 100644 index 0000000000..eb959b8ecf Binary files /dev/null and b/content/en/case-studies/nerdalize/nerdalize_featured_logo.png differ diff --git a/content/en/case-studies/pingcap/index.html b/content/en/case-studies/pingcap/index.html new file mode 100644 index 0000000000..637f891b3e --- /dev/null +++ b/content/en/case-studies/pingcap/index.html @@ -0,0 +1,96 @@ +--- +title: PingCAP Case Study +linkTitle: pingcap +case_study_styles: true +cid: caseStudies +css: /css/style_case_studies.css +featured: false +--- + +
+

CASE STUDY:
PingCAP Bets on Cloud Native for Its TiDB Database Platform + +

+ +
+ +
+ Company  PingCAP     Location  Beijing, China, and San Mateo, CA     Industry  Software +
+ +
+
+
+
+

Challenge

+ PingCAP is the company leading the development of the popular open source NewSQL database TiDB, which is MySQL-compatible, can handle hybrid transactional and analytical processing (HTAP) workloads, and has a cloud native architectural design. "Having a hybrid multi-cloud product is an important part of our global go-to-market strategy," says Kevin Xu, General Manager of Global Strategy and Operations. In order to achieve that, the team had to address two challenges: "how to deploy, run, and manage a distributed stateful application, such as a distributed database like TiDB, in a containerized world," Xu says, and "how to deliver an easy-to-use, consistent, and reliable experience for our customers when they use TiDB in the cloud, any cloud, whether that’s one cloud provider or a combination of different cloud environments." Knowing that using a distributed system isn’t easy, they began looking for the right orchestration layer to help reduce some of that complexity for end users. +

Solution

+ The team started looking at Kubernetes for orchestration early on. "We knew Kubernetes had the promise of helping us solve our problems," says Xu. "We were just waiting for it to mature." In early 2018, PingCAP began integrating Kubernetes into its internal development as well as its TiDB product. At that point, the team already had experience using other cloud native technologies, having integrated both Prometheus and gRPC as parts of the TiDB platform earlier on.
+

Impact

+ Xu says that PingCAP customers have had a "very positive" response so far to Kubernetes being the tool to deploy and manage TiDB. Prometheus, with Grafana as the dashboard, is installed by default when customers deploy TiDB, so that they can monitor performance and make any adjustments needed to reach their target before and while deploying TiDB in production. That monitoring layer "makes the evaluation process and communication much smoother," says Xu. +

+ With the company’s Kubernetes-based Operator implementation, which is open sourced, customers are now able to deploy, run, manage, upgrade, and maintain their TiDB clusters in the cloud with no downtime and with reduced workload and overhead. And internally, says Xu, "we’ve completely switched to Kubernetes for our own development and testing, including our data center infrastructure and Schrodinger, an automated testing platform for TiDB. With Kubernetes, our resource usage is greatly improved. Our developers can allocate and deploy clusters themselves, and the deploying process has gone from hours to minutes, so we can devote fewer people to manage IDC resources. The productivity improvement is about 15%, and as we gain more Kubernetes knowledge on the debugging and diagnosis front, the productivity should improve to more than 20%."
+ +
+
+
+
+ "We knew Kubernetes had the promise of helping us solve our problems. We were just waiting for it to mature, so we can fold it into our own development and product roadmap." +

- KEVIN XU, GENERAL MANAGER OF GLOBAL STRATEGY AND OPERATIONS, PINGCAP
+
+
+
+
+

Since it was introduced in 2015, the open source NewSQL database TiDB has gained a following for its compatibility with MySQL, its ability to handle hybrid transactional and analytical processing (HTAP) workloads—and its cloud native architectural design.

+ PingCAP, the company behind TiDB, designed the platform with cloud in mind from day one, says Kevin Xu, General Manager of Global Strategy and Operations, and "having a hybrid multi-cloud product is an important part of our global go-to-market strategy." +

+ In order to achieve that, the team had to address two challenges: "how to deploy, run, and manage a distributed stateful application, such as a distributed database like TiDB, in a containerized world," Xu says, and "how to deliver an easy-to-use, consistent, and reliable experience for our customers when they use TiDB in the cloud, any cloud, whether that’s one cloud provider or a combination of different cloud environments." +

+ Knowing that using a distributed system isn’t easy, the PingCAP team began looking for the right orchestration layer to help reduce some of that complexity for end users. Kubernetes had been on their radar for quite some time. "We knew Kubernetes had the promise of helping us solve our problems," says Xu. "We were just waiting for it to mature." +
+
+
+
+ "With the governance process being so open, it’s not hard to find out what’s the latest development in the technology and community, or figure out who to reach out to if we have problems or issues."

- KEVIN XU, GENERAL MANAGER OF GLOBAL STRATEGY AND OPERATIONS, PINGCAP
+ +
+
+
+
+ That time came in early 2018, when PingCAP began integrating Kubernetes into its internal development as well as in its TiDB product. "Having Kubernetes be part of the CNCF, as opposed to having only the backing of one individual company, was valuable in having confidence in the longevity of the technology," says Xu. Plus, "with the governance process being so open, it’s not hard to find out what’s the latest development in the technology and community, or figure out who to reach out to if we have problems or issues." +

+ TiDB’s cloud native architecture consists of two loosely coupled layers: a stateless SQL layer (also called TiDB) and a persistent key-value storage layer that supports distributed transactions (TiKV, now in the CNCF Sandbox). "You can scale both out or in depending on your computation and storage needs, and the two scaling processes can happen independent of each other," says Xu. The PingCAP team also built the Kubernetes-based TiDB Operator, which bootstraps a TiDB cluster in any cloud environment and simplifies and automates deployment, scaling, scheduling, upgrades, and maintenance. The company also recently previewed its fully managed TiDB Cloud offering.
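The Operator pattern Xu describes centers on a single custom resource that declares the whole cluster. The fragment below is an illustrative sketch only, loosely following the shape of the TiDB Operator's `TidbCluster` custom resource definition; the component names and replica counts are for illustration, and a real deployment also needs versions, images, and storage settings:

```yaml
# Illustrative sketch of a TiDB Operator custom resource, not a
# production-ready manifest: one TidbCluster object declares all three
# components, and the Operator reconciles the running cluster to match it.
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: demo-cluster
spec:
  pd:          # placement driver: cluster metadata and scheduling
    replicas: 3
  tikv:        # persistent, transactional key-value storage layer
    replicas: 3
  tidb:        # stateless, MySQL-compatible SQL layer
    replicas: 2
```

Scaling the SQL and storage layers independently, as Xu describes, then amounts to editing the `tidb` or `tikv` replica count and letting the Operator reconcile the difference.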
+
+
+
+ "A cloud native infrastructure will not only save you money and allow you to be more in control of the infrastructure resources you consume, but also empower new product innovation, new experience for your users, and new business possibilities. It’s both a cost reducer and a money maker."

- KEVIN XU, GENERAL MANAGER OF GLOBAL STRATEGY AND OPERATIONS, PINGCAP
+
+
+ +
+
+ The entire TiDB platform leverages Kubernetes and other cloud native technologies, including Prometheus for monitoring and gRPC for interservice communication. +

+ So far, the customer response to the Kubernetes-enabled platform has been "very positive." Prometheus, with Grafana as the dashboard, is installed by default when customers deploy TiDB, so that they can monitor and make any adjustments needed to reach their performance requirements before deploying TiDB in production. That monitoring layer "makes the evaluation process and communication much smoother," says Xu. With the company’s Kubernetes-based Operator implementation, customers are now able to deploy, run, manage, upgrade, and maintain their TiDB clusters in the cloud with no downtime and with reduced workload and overhead.

+ These technologies have also had an impact internally. "We’ve completely switched to Kubernetes for our own development and testing, including our data center infrastructure and Schrodinger, an automated testing platform for TiDB," says Xu. "With Kubernetes, our resource usage is greatly improved. Our developers can allocate and deploy clusters themselves, and the deploying process takes less time, so we can devote fewer people to manage IDC resources. +
+ +
+
+"The entire cloud native community, whether it’s Kubernetes, CNCF in general, or cloud native vendors like us, have all gained enough experience—and have the battle scars to prove it—and are ready to help you succeed."

- KEVIN XU, GENERAL MANAGER OF GLOBAL STRATEGY AND OPERATIONS, PINGCAP
+
+ +
+ The productivity improvement is about 15%, and as we gain more Kubernetes knowledge on the debugging and diagnosis front, the productivity should improve to more than 20%." +

+ Kubernetes is now a crucial part of PingCAP’s product roadmap. For anyone else considering going cloud native, Xu has this advice: "There’s no better time to get started," he says. "The entire cloud native community, whether it’s Kubernetes, CNCF in general, or cloud native vendors like us, have all gained enough experience—and have the battle scars to prove it—and are ready to help you succeed." +

+ In fact, the PingCAP team has seen more and more customers moving toward a cloud native approach, and for good reason. "IT infrastructure is quickly evolving from a cost-center and afterthought, to the core competency and competitiveness of any company," says Xu. "A cloud native infrastructure will not only save you money and allow you to be more in control of the infrastructure resources you consume, but also empower new product innovation, new experience for your users, and new business possibilities. It’s both a cost reducer and a money maker." + +
+
diff --git a/content/en/case-studies/pingcap/pingcap_featured_logo.png b/content/en/case-studies/pingcap/pingcap_featured_logo.png new file mode 100644 index 0000000000..8b57f417ae Binary files /dev/null and b/content/en/case-studies/pingcap/pingcap_featured_logo.png differ diff --git a/content/en/case-studies/prowise/index.html b/content/en/case-studies/prowise/index.html new file mode 100644 index 0000000000..03bbc51173 --- /dev/null +++ b/content/en/case-studies/prowise/index.html @@ -0,0 +1,99 @@ +--- +title: Prowise Case Study +linkTitle: prowise +case_study_styles: true +cid: caseStudies +css: /css/style_case_studies.css +featured: false +--- + + +
+

CASE STUDY:
Prowise: How Kubernetes is Enabling the Edtech Solution’s Global Expansion +

+ +
+ +
+ Company  Prowise     Location  Budel, The Netherlands      Industry  Edtech +
+ +
+
+
+
+ +

Challenge

+ A Dutch company that produces educational devices and software used around the world, Prowise had an infrastructure based on Linux services with multiple availability zones in Europe, Australia, and the U.S. “We’ve grown a lot in the past couple of years, and we started to encounter problems with versioning and flexible scaling,” says Senior DevOps Engineer Victor van den Bosch, “not only scaling in demands, but also in being able to deploy multiple products which all have their own versions, their own development teams, and their own problems that they’re trying to solve. To be able to put that all on the same platform without much resistance is what we were looking for. We wanted to future proof our infrastructure, and also solve some of the problems that are associated with just running a normal Linux service.” +

Solution

+ The Prowise team adopted containerization, spent time improving its CI/CD pipelines, and chose Microsoft Azure’s managed Kubernetes service, AKS, for orchestration. “Kubernetes solves things like networking really well, in a way that fits our business model,” says van den Bosch. “We want to focus on our core products, and that’s the software that runs on it and not necessarily the infrastructure itself.” +

Impact

+ With its first web-based applications now running in beta on Prowise’s Kubernetes platform, the team is seeing the benefits of rapid and smooth deployments. “The old way of deploying took half an hour of preparations and half an hour deploying it. With Kubernetes, it’s a couple of seconds,” says Senior Developer Bart Haalstra. As a result, adds van den Bosch, “We’ve gone from quarterly releases to a release every month in production. We’re pretty much deploying every hour or just when we find that a feature is ready for production; before, our releases were mostly done on off-hours, where it couldn’t impact our customers, as our confidence in the process was relatively low. Kubernetes has also enabled us to follow up quickly on bugs and implement tweaks to our users with zero downtime between versions. For some bugs we’ve pushed code fixes to production minutes after detection.” Recently, the team launched a new single sign-on solution for use in an internal application. “Due to the resource based architecture of the Kubernetes platform, we were able to bring that application into an entirely new production environment in less than a day, most of that time used for testing after applying the already well-known resource definitions from staging to the new environment,” says van den Bosch. “On a traditional VM this would have likely cost a day or two, and then probably a few weeks to iron out the kinks in our provisioning scripts as we apply updates.” +
+
+
+
+
+ "Because of Kubernetes, things have been much easier, our individual applications are better, and we can spend more time on functional implementation. We do not want to go back." +

— VICTOR VAN DEN BOSCH, SENIOR DEVOPS ENGINEER, PROWISE
+
+
+
+
+

If you haven’t set foot in a school in a while, you might be surprised by what you’d see in a digitally connected classroom these days: touchscreen monitors, laptops, tablets, touch tables, and more.

+ One of the leaders in the space, the Dutch company Prowise, offers an integrated solution of hardware and software to help educators create a more engaging learning environment. +

+ As the company expanded its offerings beyond the Netherlands in recent years—creating multiple availability zones in Europe, Australia, and the U.S., with as many as nine servers per zone—its Linux service-based infrastructure struggled to keep up. “We’ve grown a lot in the past couple of years, and we started to encounter problems with versioning and flexible scaling,” says Senior DevOps Engineer Victor van den Bosch, who was hired by the company in late 2017 to build a new platform. +

+ Prowise’s products support ten languages, so the problem wasn’t just scaling in demands, he adds, “but also in being able to deploy multiple products which all have their own versions, their own development teams, and their own problems that they’re trying to solve. To be able to put that all on the same platform without much resistance is what we were looking for. We wanted to future proof our infrastructure, and also solve some of the problems that are associated with just running a normal Linux service.” +

+ The company’s existing infrastructure on Microsoft Azure Cloud was all on virtual machines, “a pretty traditional setup,” van den Bosch says. “We decided that we want some features in our software that requires being able to scale quickly, being able to deploy new applications and versions on different versions of different programming languages quickly. And we didn’t really want the hassle of trying to keep those servers in a particular state.” +
+
+
+
+ "You don’t have to go all-in immediately. You can just take a few projects, a service, run it alongside your more traditional stack, and build it up from there. Kubernetes scales, so as you add applications and services to it, it will scale with you. You don’t have to do it all at once, and that’s really a secret to everything, but especially true to Kubernetes."

— VICTOR VAN DEN BOSCH, SENIOR DEVOPS ENGINEER, PROWISE
+ +
+
+
+
+ After researching possible solutions, he opted for containerization and Kubernetes orchestration. “Containerization is the future,” van den Bosch says. “Kubernetes solves things like networking really well, in a way that fits our business model. We want to focus on our core products, and that’s the software that runs on it and not necessarily the infrastructure itself.” Plus, the Prowise team liked that there was no vendor lock-in. “We don’t want to be limited to one platform,” he says. “We try not to touch products that are very proprietary and can’t be ported easily to another vendor.” +

+ The time to market with Kubernetes was very short: The first web-based applications on the platform went into beta within a few months. That was largely made possible by van den Bosch’s decision to use Azure’s managed Kubernetes service, AKS. The team then had to figure out which components to keep and which to replace. Monitoring tools like New Relic were taken out “because they tend to become very expensive when you scale it to different availability zones, and it’s just not very maintainable,” he says. +

+ A lot of work also went into improving Prowise’s CI/CD pipelines. “We wanted to make sure that the pipelines are automated and easy to use,” he says. “We have a lot of settings and configurations figured out for the pipelines, and it’s just applying those scripts and those configurations to new projects from here on out.” +

+ With its first web-based applications now running in beta on Prowise’s Kubernetes platform, the team is seeing the benefits of rapid and smooth deployments. “The old way of deploying took half an hour of preparations and half an hour deploying it. With Kubernetes, it’s a couple of seconds,” says Senior Developer Bart Haalstra. As a result, adds van den Bosch, “We’ve gone from quarterly releases to a release every month in production. We’re pretty much deploying every hour or just when we find that a feature is ready for production. Before, our releases were mostly done on off-hours, where it couldn’t impact our customers, as our confidence in the process itself was relatively low. With Kubernetes, we dare to deploy in the middle of a busy day with high confidence the deployment will succeed.”
+
+
+
+ "Kubernetes allows us to really consider the best tools for a problem. Want to have a full-fledged analytics application developed by a third party that is just right for your use case? Run it. Dabbling in machine learning and AI algorithms but getting tired of waiting days for training to complete? It takes only seconds to scale it. Got a stubborn developer that wants to use a programming language no one has heard of? Let him, if it runs in a container, of course. And all of that while your operations team/DevOps get to sleep at night."

— VICTOR VAN DEN BOSCH, SENIOR DEVOPS ENGINEER, PROWISE
+
+
+ +
+ +
+ Plus, van den Bosch says, “Kubernetes has enabled us to follow up quickly on bugs and implement tweaks to our users with zero downtime between versions. For some bugs we’ve pushed code fixes to production minutes after detection.” +

+ Recently, the team launched a new single sign-on solution for use in an internal application. “Due to the resource based architecture of the Kubernetes platform, we were able to bring that application into an entirely new production environment in less than a day, most of that time used for testing after applying the already well-known resource definitions from staging to the new environment,” says van den Bosch. “On a traditional VM this would have likely cost a day or two, and then probably a few weeks to iron out the kinks in our provisioning scripts as we apply updates.” +

+ Legacy applications are also being moved to Kubernetes. Not long ago, the team needed to set up a Java-based application for compiling and running a frontend. “On a traditional VM, it would have taken quite a bit of time to set it up and keep it up to date, not to mention maintenance for that setup down the line,” says van den Bosch. Instead, it took less than half a day to Dockerize it and get it running on Kubernetes. “It was much easier, and we were able to save costs too because we didn’t have to spin up new VMs specially for it.” +
+ +
+
+"We’re really trying to deliver integrated solutions with our hardware and software and making it as easy as possible for users to use and collaborate from different places,” says van den Bosch. And, says Haalstra, “We cannot do it without Kubernetes."

— VICTOR VAN DEN BOSCH, SENIOR DEVOPS ENGINEER, PROWISE
+
+ +
+ Perhaps most importantly, van den Bosch says, “Kubernetes allows us to really consider the best tools for a problem and take full advantage of microservices architecture. Got a library in Node.js that excels at solving a certain problem? Use it. Want to have a full-fledged analytics application developed by a third party that is just right for your use case? Run it. Dabbling in machine learning and AI algorithms but getting tired of waiting days for training to complete? It takes only seconds to scale it. Got a stubborn developer that wants to use a programming language no one has heard of? Let him, if it runs in a container, of course. And all of that while your operations team/DevOps get to sleep at night.” +

+ Looking ahead, all new web development, platforms, and APIs at Prowise will be on Kubernetes. One of the big greenfield projects is a platform for teachers and students that is launching for back-to-school season in September. Users will be able to log in and access a wide variety of educational applications. With the recent acquisition of the software company Oefenweb, Prowise plans to provide adaptive software that allows teachers to get an accurate view of their students’ progress and weak points, and automatically adjusts the difficulty level of assignments to suit individual students. “We will be leveraging Kubernetes’ power to integrate, supplement, and support our combined application portfolio and bring our solutions to more classrooms,” says van den Bosch. +

+ Collaborative software is also a priority. With the single sign-in software, users’ settings and credentials are saved in the cloud and can be used on any screen in the world. “We’re really trying to deliver integrated solutions with our hardware and software and making it as easy as possible for users to use and collaborate from different places,” says van den Bosch. And, says Haalstra, “We cannot do it without Kubernetes.” +
+ +
diff --git a/content/en/case-studies/prowise/prowise_featured_logo.png b/content/en/case-studies/prowise/prowise_featured_logo.png new file mode 100644 index 0000000000..e6dc1a35ec Binary files /dev/null and b/content/en/case-studies/prowise/prowise_featured_logo.png differ diff --git a/content/en/case-studies/ricardo-ch/index.html b/content/en/case-studies/ricardo-ch/index.html new file mode 100644 index 0000000000..62501c4f5b --- /dev/null +++ b/content/en/case-studies/ricardo-ch/index.html @@ -0,0 +1,98 @@ +--- +title: ricardo.ch Case Study +linkTitle: ricardo-ch +case_study_styles: true +cid: caseStudies +css: /css/style_case_studies.css +featured: false +--- + +
+

CASE STUDY:
ricardo.ch: How Kubernetes Improved Velocity and DevOps Harmony +

+ +
+ +
+ Company  ricardo.ch     Location  Zurich, Switzerland      Industry  E-commerce +
+ +
+
+
+
+ +

Challenge

+ A Swiss online marketplace, ricardo.ch was experiencing problems with velocity, as well as a "classic gap" between Development and Operations, with the two sides unable to work well together. "They wanted to, but they didn’t have common ground," says Cedric Meury, Head of Platform Engineering. "This was one of the root causes that slowed us down." The company began breaking down the legacy monolith into microservices, and needed orchestration to support the new architecture in its own data centers—as well as bring together Dev and Ops. +

Solution

+ The company adopted Kubernetes for cluster management, Prometheus for monitoring, and Fluentd for logging. The first cluster was deployed on-premises in December 2016, with the first service in production three months later. The migration is about half done, and the company plans to move completely to Google Cloud Platform by the end of 2018. +

Impact

+ Splitting up the monolith into microservices "allowed higher velocity, and Kubernetes was crucial to support that," says Meury. The number of deployments to production has gone from fewer than 10 a week to 30-60 per day. Before, "when there was a problem with something in production, tickets or complaints would be thrown over the wall to operations, the classical problem. Now, people have the chance to look into operations and troubleshoot for themselves first because everything is deployed in a standardized way," says Meury. He sees the impact in everyday interactions: "A couple of weeks ago, I saw a product manager doing a pull request for a JSON file that contains some variables, and someone else accepted it. And it was deployed after a couple of minutes or seconds even, which was unthinkable before. There used to be quite a chain of things that needed to happen, the whole monolith was difficult to understand, even for engineers. So, previously requests would go into large, inefficient Kanban boards and hopefully someone will have done the change after weeks and months." Before, infrastructure- and platform-related projects took months or years to complete; now developers and operators can work together to deploy infrastructure parts via Kubernetes in a matter of weeks and sometimes days. In the long run, the company also expects to notch 50% cost savings going from custom data center and virtual machines to containerized infrastructure and cloud services. +
+
+
+
+
+ "Splitting up the monolith allowed higher velocity, and Kubernetes was crucial to support that. Containerization and orchestration by Kubernetes helped us to drastically reduce the conflict between Dev and Ops and also allowed us to speak the same language on both sides of the aisle." +

— CEDRIC MEURY, HEAD OF PLATFORM ENGINEERING, RICARDO.CH
+
+
+
+
+

When Cedric Meury joined ricardo.ch in 2016, he saw a clear divide between Operations and Development. In fact, there was literal distance between them: The engineering team worked in France, while the rest of the org was based in Switzerland. +



+ "It was a classic gap between those departments and even some anger and frustration here and there," says Meury. "They wanted to work together, but they didn’t have common ground. This was one of the root causes that slowed us down." +

+ That gap was hurting velocity at ricardo.ch, a Swiss online marketplace. The website processes up to 2.6 million searches on a peak day from both web and mobile apps, serving 3.2 million members with its live auctions. The technology team’s main challenge was to make sure that "the bids for items come in the right order, and before the auction is finished, and that this works in a fair way," says Meury. "We have a real-time requirement. We also provide an automated system to bid, and it needs to be accurate and correct. With a distributed system, you have the challenge of making sure that the ordering is right. And that’s one of the things we’re currently dealing with." +

+ To address the velocity issue, ricardo.ch CTO Jeremy Seitz established a new software factory called EPD, which consists of 65 engineers, 7 product managers and 2 designers. "We brought these three departments together so that they can kind of streamline this and talk to each other much more closely," says Meury. +
+
+
+
+ "Being in the End User Community demonstrates that we stand behind these technologies. In Switzerland, if all the companies see that ricardo.ch’s using it, I think that will help adoption. I also like that we’re connected to the other end users, so if there is a really heavy problem, I could go to the Slack channel, and say, ‘Hey, you guys…’ Like Reddit, GitHub and New York Times or whoever can give a recommendation on what to use here or how to solve that. So that’s kind of a superpower."

— CEDRIC MEURY, HEAD OF PLATFORM ENGINEERING, RICARDO.CH
+ +
+
+
+
+ + The company also began breaking down the legacy monolith into more than 100 microservices, and needed orchestration to support the new architecture in its own data centers. "Splitting up the monolith allowed higher velocity, and Kubernetes was crucial to support that," says Meury. "Containerization and orchestration by Kubernetes helped us to drastically reduce the conflict between Dev and Ops and also allowed us to speak the same language on both sides of the aisle." +

+ Meury put together a platform engineering team to choose the tools—including Fluentd for logging and Prometheus for monitoring, with Grafana visualization—and lay the groundwork for the first Kubernetes cluster, which was installed on-premises in December 2016. Within a few weeks, the new platform was available to teams, who were given training sessions and documentation. The platform engineering team then embedded with engineers to help them deploy their applications on the new platform. The first service in production was the ricardo.ch jobs page. "It was an exercise in front-end development, so the developers could experiment with a new stack," says Meury. +

+ Meury estimates that half of the application has been migrated to Kubernetes. And the plan is to move everything to the Google Cloud Platform by the end of 2018. "We are still running some servers in our own data centers, but all of the containerization efforts and describing our services as Kubernetes manifests will allow us to quite easily make that shift," says Meury. +
+
+
+
+ "One of the core moments was when a front-end developer asked me how to do a port forward from his laptop to a front-end application to debug, and I told him the command. And he was like, ‘Wow, that’s all I need to do?’ He was super excited and happy about it. That showed me that this power in the right hands can just accelerate development." +

- CEDRIC MEURY, HEAD OF PLATFORM ENGINEERING, RICARDO.CH
+
+
+ +
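The moment Meury describes rests on Kubernetes' built-in port forwarding. A sketch of that workflow, with a hypothetical deployment name and ports rather than ricardo.ch's actual services (it requires a live cluster and a configured kubectl):

```shell
# Forward local port 8080 to port 3000 inside a running pod of the
# (hypothetical) "frontend" deployment; runs until interrupted.
kubectl port-forward deployment/frontend 8080:3000

# In a second terminal, the remote app now answers locally,
# so it can be debugged straight from a laptop.
curl http://localhost:8080/
```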
+
+ The impact has been great. Moving from custom data center and virtual machines to containerized infrastructure and cloud services is expected to result in 50% cost savings for the company. The number of deployments to production has gone from fewer than 10 a week to 30-60 per day. Before, "when there was a problem with something in production, tickets or complaints would be thrown over the wall to operations, the classical problem," says Meury. "Now, people have the chance to look into operations and troubleshoot for themselves first because everything is deployed in a standardized way. That reduces time and uncertainty." +

+ Meury also sees the impact in everyday interactions: "A couple of weeks ago, I saw a product manager doing a pull request for a JSON file that contains some variables, and someone else accepted it. And it was deployed after a couple of minutes or seconds even, which was unthinkable before. There used to be quite a chain of things that needed to happen, the whole monolith was difficult to understand, even for engineers. So, previously requests would go into large, inefficient Kanban boards and hopefully someone will have done the change after weeks and months." +

+ The divide between Dev and Ops has also diminished. "After a couple of months, I got requests by people saying, ‘Hey, could you help me install the Kubernetes client? I want to actually look at what’s going on,’" says Meury. "People were directly looking at the state of the system, bringing them much, much closer to the operations." Before, infrastructure- and platform-related projects took months or years to complete; now developers and operators can work together to deploy infrastructure parts via Kubernetes in a matter of weeks and sometimes days. +
+ +
+
+"One of my colleagues was listening to all the talks at KubeCon, and he was overwhelmed by all the tools, technologies, frameworks out there that are currently lacking on our platform, but at the same time, he’s very happy to know that in the future there is so much that we can still explore and we can improve and we can work on."

- CEDRIC MEURY, HEAD OF PLATFORM ENGINEERING, RICARDO.CH
+
+ +
+ + + The ability to have insight into the system has extended to other parts of the company, too. "I found out that one of our customer support representatives looks at Grafana metrics to find out whether the system is running fine, which is fantastic," says Meury. "Prometheus is directly hooked into customer care." +

+ The ricardo.ch cloud native journey has perhaps had the most impact on the Ops team. "We have an operations team that comes from a hardware-based background, and right now they are relearning how to operate in a more virtualized and cloud native world, with great success so far," says Meury. "So besides still operating on-site data center firewalls, they learn to code in Go or do some Python scripting at the same time. Former network administrators are writing Go code. It’s just really cool." +

+ For Meury, the journey boils down to this. "One of my colleagues was listening to all the talks at KubeCon, and he was overwhelmed by all the tools, technologies, frameworks out there that are currently lacking on our platform," says Meury. "But at the same time, he’s very happy to know that in the future there is so much that we can still explore and we can improve and we can work on. We’re transitioning from seeing problems everywhere—like, ‘This is broken’ or ‘This is down, and we have to fix it’—more to, ‘How can we actually improve and automate more, and make it nicer for developers and ultimately for the end users?’" +
+ +
diff --git a/content/en/case-studies/ricardo-ch/ricardo-ch_featured_logo.png b/content/en/case-studies/ricardo-ch/ricardo-ch_featured_logo.png new file mode 100644 index 0000000000..c462c7ba56 Binary files /dev/null and b/content/en/case-studies/ricardo-ch/ricardo-ch_featured_logo.png differ diff --git a/content/en/case-studies/slamtec/index.html b/content/en/case-studies/slamtec/index.html new file mode 100644 index 0000000000..4a99d28fb3 --- /dev/null +++ b/content/en/case-studies/slamtec/index.html @@ -0,0 +1,88 @@ +--- +title: Slamtec Case Study +linkTitle: slamtec +case_study_styles: true +cid: caseStudies +css: /css/style_case_studies.css +featured: false +--- + +
+

CASE STUDY:

+

+
+ +
+ Company  Slamtec     Location  Shanghai, China     Industry  Robotics +
+ +
+
+
+
+

Challenge

+ Founded in 2013, Slamtec provides service robot autonomous localization and navigation solutions. The company’s strength lies in its R&D team’s ability to quickly introduce, and continually iterate on, its core products. In the past few years, the company, which had a legacy infrastructure based on Alibaba Cloud and VMware vSphere, began looking to build its own stable and reliable container cloud platform to host its Internet of Things applications. "Our needs for the cloud platform included high availability, scalability and security; multi-granularity monitoring alarm capability; friendliness to containers and microservices; and perfect CI/CD support," says Benniu Ji, Director of Cloud Computing Business Division. + +

Solution

+ Ji’s team chose Kubernetes for orchestration. "CNCF brings quality assurance and a complete ecosystem for Kubernetes, which is very important for the wide application of Kubernetes," says Ji. Thus Slamtec decided to adopt other CNCF projects as well: Prometheus monitoring, Fluentd logging, Harbor registry, and Helm package manager. +
+

Impact

+ With the new platform, Ji reports that Slamtec has experienced "18+ months of 100% stability!" For users, there is now zero service downtime and seamless upgrades. "Kubernetes with third-party service mesh integration (Istio, along with Jaeger and Envoy) significantly reduced the microservice configuration and maintenance efforts by 50%," he adds. With centralized metrics monitoring and log aggregation provided by Prometheus and Fluentd, teams are saving 50% of time spent on troubleshooting and debugging. Harbor replication has allowed production, staging, and testing environments across the public cloud and the private Kubernetes cluster to share the same container registry, resulting in 30% savings of CI/CD efforts. Plus, Ji says, "Helm has accelerated prototype development and environment setup with its rich sharing charts." +
+ +
+
+
+
+ "Cloud native technology helps us ensure high availability of our business, while improving development and testing efficiency, shortening the research and development cycle and enabling rapid product delivery." +

- BENNIU JI, DIRECTOR OF CLOUD COMPUTING BUSINESS DIVISION
+
+
+
+
+

Founded in 2013, Slamtec provides service robot autonomous localization and navigation solutions. In this fast-moving space, the company built its success on the ability of its R&D team to quickly introduce, and continually iterate on, its core products. +

+ To sustain that development velocity, the company over the past few years began looking to build its own stable and reliable container cloud platform to host its Internet of Things applications. With a legacy infrastructure based on Alibaba Cloud and VMware vSphere, Slamtec teams had already adopted microservice architecture and continuous delivery, for "fine granularity on-demand scaling, fault isolation, ease of development, testing, and deployment, and for facilitating high-speed iteration," says Benniu Ji, Director of Cloud Computing Business Division. So "our needs for the cloud platform included high availability, scalability and security; multi-granularity monitoring alarm capability; friendliness to containers and microservices; and perfect CI/CD support." +

+ After an evaluation of existing technologies, Ji’s team chose Kubernetes for orchestration. "CNCF brings quality assurance and a complete ecosystem for Kubernetes, which is very important for the wide application of Kubernetes," says Ji. Plus, "avoiding binding to an infrastructure technology or provider can help us ensure that our business is deployed and migrated in cross-regional environments, and can serve users all over the world." +
+
+
+
+ "CNCF brings quality assurance and a complete ecosystem for Kubernetes, which is very important for the wide application of Kubernetes."

- BENNIU JI, DIRECTOR OF CLOUD COMPUTING BUSINESS DIVISION
+ +
+
+
+
+ Thus Slamtec decided to adopt other CNCF projects as well. "We built a monitoring and logging system based on Prometheus and Fluentd," says Ji. "The integration between Prometheus/Fluentd and Kubernetes is convenient, with multiple dimensions of data monitoring and log collection capabilities." +
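Ji doesn't detail the wiring, but a common way to hook Kubernetes workloads into Prometheus is annotation-driven scraping via a `kubernetes_sd_configs` scrape job. The pod below uses the conventional `prometheus.io` annotations; all names and images are illustrative, not Slamtec's actual services.

```yaml
# Hypothetical pod spec: with an annotation-driven scrape config,
# these annotations tell Prometheus to collect metrics from this
# pod on port 9090 at /metrics.
apiVersion: v1
kind: Pod
metadata:
  name: robot-api                      # hypothetical service name
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
    prometheus.io/path: "/metrics"
spec:
  containers:
    - name: robot-api
      image: example/robot-api:1.0     # hypothetical image
      ports:
        - containerPort: 9090
```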

+ The company uses Harbor as a container image repository. "Harbor’s replication function helps us implement CI/CD on both private and public clouds," says Ji. "In addition, multi-project support, certification and policy configuration, and integration with Kubernetes are also excellent functions." Helm is also being used as a package manager, and the team is evaluating the Istio framework. "We’re very pleased that Kubernetes and these frameworks can be seamlessly integrated," Ji adds. +
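To illustrate the kind of packaging Helm provides (a generic sketch, not one of Slamtec's charts), a chart's templates parameterize standard Kubernetes manifests with per-environment values:

```yaml
# templates/deployment.yaml in a minimal chart; .Values come from
# values.yaml or --set flags, .Release.Name from `helm install`.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Installed with, say, `helm install robot-service ./chart`, the same chart can target development, staging, and production simply by swapping values files.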
+
+
+
+ "Cloud native is suitable for microservice architecture, it’s suitable for fast iteration and agile development, and it has a relatively perfect ecosystem and active community."

- BENNIU JI, DIRECTOR OF CLOUD COMPUTING BUSINESS DIVISION
+
+
+ +
+
+ With the new platform, Ji reports that Slamtec has experienced "18+ months of 100% stability!" For users, there is now zero service downtime and seamless upgrades. "We benefit from the abstraction of Kubernetes from network and storage," says Ji. "The dependence on external services can be decoupled from the service and placed under unified management in the cluster." +

+ Using Kubernetes and Istio "significantly reduced the microservice configuration and maintenance efforts by 50%," he adds. With centralized metrics monitoring and log aggregation provided by Prometheus and Fluentd, teams are saving 50% of time spent on troubleshooting and debugging. Harbor replication has allowed production, staging, and testing environments across the public cloud and the private Kubernetes cluster to share the same container registry, resulting in 30% savings of CI/CD efforts. Plus, Ji adds, "Helm has accelerated prototype development and environment setup with its rich sharing charts." +

+In short, Ji says, Slamtec’s new platform is helping it achieve one of its primary goals: the quick and easy release of products. With multiple release models and a centralized control interface, the platform is changing developers’ lives for the better. Slamtec also offers a unified API for the development of automated deployment tools according to users’ specific needs. +
+ +
+
+"We benefit from the abstraction of Kubernetes from network and storage, the dependence on external services can be decoupled from the service and placed under unified management in the cluster."

- BENNIU JI, DIRECTOR OF CLOUD COMPUTING BUSINESS DIVISION
+
+ +
+ Given its own success with cloud native, Slamtec has just one piece of advice for organizations considering making the leap. "For already containerized services, you should migrate them to the cloud native architecture as soon as possible and enjoy the advantages brought by the cloud native ecosystem," Ji says. "To migrate traditional, non-containerized services, in addition to the architecture changes of the service itself, you need to fully consider the operation and maintenance workload required to build the cloud native architecture." +

+ That said, the cost-benefit analysis has been simple for Slamtec. "Cloud native technology is suitable for microservice architecture, it’s suitable for fast iteration and agile development, and it has a relatively perfect ecosystem and active community," says Ji. "It helps us ensure high availability of our business, while improving development and testing efficiency, shortening the research and development cycle and enabling rapid product delivery." +
+
diff --git a/content/en/case-studies/slamtec/slamtec_featured_logo.png b/content/en/case-studies/slamtec/slamtec_featured_logo.png new file mode 100644 index 0000000000..598db9fe43 Binary files /dev/null and b/content/en/case-studies/slamtec/slamtec_featured_logo.png differ diff --git a/content/en/case-studies/thredup/index.html b/content/en/case-studies/thredup/index.html new file mode 100644 index 0000000000..0a35de2b1a --- /dev/null +++ b/content/en/case-studies/thredup/index.html @@ -0,0 +1,94 @@ +--- +title: ThredUp Case Study +linkTitle: thredup +case_study_styles: true +cid: caseStudies +css: /css/style_case_studies.css +featured: false +--- + +
+

CASE STUDY:

+

+
+ +
+ Company  ThredUp     Location  San Francisco, CA     Industry  eCommerce +
+ +
+
+
+
+

Challenge

+ The largest online consignment store for women’s and children’s clothes, ThredUP launched in 2009 with a monolithic application running on Amazon Web Services. Though the company began breaking up the monolith into microservices a few years ago, the infrastructure team was still dealing with handcrafted servers, which hampered productivity. "We’ve configured them just to get them out as fast as we could, but there was no standardization, and as we kept growing, that became a bigger and bigger chore to manage," says Cofounder/CTO Chris Homer. The infrastructure, they realized, needed to be modernized to enable the velocity the company needed. "It’s really important to a company like us who’s disrupting the retail industry to make sure that as we’re building software and getting it out in front of our users, we can do it on a fast cycle and learn a ton as we experiment," adds Homer. "We wanted to make sure that our engineers could embrace the DevOps mindset as they built software. It was really important to us that they could own the life cycle from end to end, from conception at design, through shipping it and running it in production, from marketing to ecommerce, the user experience and our internal distribution center operations." +

+ +

Solution

+ In early 2017, the company adopted Kubernetes for container orchestration, and in the course of a year, the entire infrastructure was moved to Kubernetes. +

+

Impact

+ Before, "even considering that we already have all the infrastructure in the cloud, databases and services, and all these good things," says Infrastructure Engineer Oleksandr Snagovskyi, setting up a new service meant waiting 2-4 weeks just to get the environment. With Kubernetes, new application roll-out time has decreased from several days or weeks to minutes or hours. Now, says Infrastructure Engineer Oleksii Asiutin, "our developers can experiment with existing applications and create new services, and do it all blazingly fast." In fact, deployment time has decreased about 50% on average for key services. "Lead time" for all applications is under 20 minutes, enabling engineers to deploy multiple times a day. Plus, 3,200+ Ansible scripts have been deprecated in favor of Helm charts. And impressively, hardware cost has decreased 56% while the number of services ThredUP runs has doubled. +
+ +
+
+
+
+

+ "Moving towards cloud native technologies like Kubernetes really unlocks our ability to experiment quickly and learn from customers along the way." +

- CHRIS HOMER, COFOUNDER/CTO, THREDUP
+
+
+
+
+

The largest online consignment store for women’s and children’s clothes, ThredUP is focused on getting consumers to think second-hand first. "We’re disrupting the retail industry, and it’s really important to us to make sure that as we’re building software and getting it out in front of our users, we can do it on a fast cycle and learn a ton as we experiment," says Cofounder/CTO Chris Homer. +

+ But over the past few years, ThredUP, which was launched in 2009 with a monolithic application running on Amazon Web Services, was feeling growing pains as its user base passed the 20 million mark. Though the company had begun breaking up the monolith into microservices, the infrastructure team was still dealing with handcrafted servers, which hampered productivity. "We’ve configured them just to get them out as fast as we could, but there was no standardization, and as we kept growing, that became a bigger and bigger chore to manage," says Homer. The infrastructure, Homer realized, needed to be modernized to enable the velocity—and the culture—the company wanted. +

+ "We wanted to make sure that our engineers could embrace the DevOps mindset as they built software," Homer says. "It was really important to us that they could own the life cycle from end to end, from conception at design, through shipping it and running it in production, from marketing to ecommerce, the user experience and our internal distribution center operations." +
+
+
+
+ "Kubernetes enabled auto scaling in a seamless and easily manageable way on days like Black Friday. We no longer have to sit there adding instances, monitoring the traffic, doing a lot of manual work."

- CHRIS HOMER, COFOUNDER/CTO, THREDUP
+ +
+
+
+
+ In early 2017, Homer found the solution with Kubernetes container orchestration. In the course of a year, the company migrated its entire infrastructure to Kubernetes, starting with its website applications and concluding with its operations backend. Teams are now also using Fluentd and Helm. "Initially there were skeptics about the value that this move to cloud native technologies would bring, but as we went through the process, people very quickly started to realize the benefit of having seamless upgrades and easy rollbacks without having to worry about what was happening," says Homer. "It unlocks the developers’ confidence in being able to deploy quickly, learn, and if you make a mistake, you can roll it back without any issue." +

+ According to the infrastructure team, the key improvement was the consistent experience Kubernetes enabled for developers. "It lets developers work in the same environment that their application will be running in production," says Infrastructure Engineer Oleksandr Snagovskyi. Plus, "It became easier to test, easier to refine, and easier to deploy, because everything’s done automatically," says Infrastructure Engineer Oleksii Asiutin. "One of the main goals of our team is to make developers’ lives more comfortable, and we are achieving this with Kubernetes. They can experiment with existing applications and create new services, and do it all blazingly fast." +
+
+
+
+ "One of the main goals of our team is to make developers’ lives more comfortable, and we are achieving this with Kubernetes. They can experiment with existing applications and create new services, and do it all blazingly fast."

- OLEKSII ASIUTIN, INFRASTRUCTURE ENGINEER, THREDUP
+
+
+ +
+
+ Before, "even considering that we already have all the infrastructure in the cloud, databases and services, and all these good things," says Snagovskyi, setting up a new service meant waiting 2-4 weeks just to get the environment. With Kubernetes, because of simple configuration and minimal dependency on the infrastructure team, the roll-out time for new applications has decreased from several days or weeks to minutes or hours. +

+ In fact, deployment time has decreased about 50% on average for key services. "Fast deployment and parallel test execution in Kubernetes keep a ‘lead time’ for all applications under 20 minutes," allowing engineers to do multiple releases a day, says Director of Infrastructure Roman Chepurnyi. The infrastructure team’s jobs, he adds, have become less burdensome, too: "We can execute seamless upgrades frequently and keep cluster performance and security up-to-date because OS-level hardening and upgrades of a Kubernetes cluster is a non-blocking activity for production operations and does not involve coordination with multiple engineering teams." +

+ More than 3,200 Ansible scripts have been deprecated in favor of Helm charts. And impressively, hardware cost has decreased 56% while the number of services ThredUP runs has doubled. +
+ +
+
+"Our future’s all about automation, and behind that, cloud native technologies are going to unlock our ability to embrace that and go full force towards the future."

- CHRIS HOMER, COFOUNDER/CTO, THREDUP
+
+ +
+ Perhaps the impact is most evident on the busiest days in retail. "Kubernetes enabled auto scaling in a seamless and easily manageable way on days like Black Friday," says Homer. "We no longer have to sit there adding instances, monitoring the traffic, doing a lot of manual work. That’s handled for us, and instead we can actually have some turkey, drink some wine and enjoy our families." +
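The hands-off Black Friday scaling Homer describes is typically expressed as a HorizontalPodAutoscaler. The resource below is a generic `autoscaling/v2` sketch with hypothetical names and thresholds, not ThredUP's actual configuration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: storefront            # hypothetical deployment name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: storefront
  minReplicas: 4              # steady-state floor
  maxReplicas: 40             # peak-traffic headroom
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods above 70% average CPU
```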

+ For ThredUP, Kubernetes fits perfectly with the company’s vision for how it’s changing retail. Some of what ThredUP does is still very manual: "As our customers send bags of items to our distribution centers, they’re photographed, inspected, tagged, and put online today," says Homer. +

+ But in every other aspect, "we use different forms of technology to drive everything we do," Homer says. "We have machine learning algorithms to help predict the likelihood of sale for items, which drives our pricing algorithm. We have personalization algorithms that look at the images and try to determine style and match users’ preferences across our systems." +

+ Count Kubernetes as one of those drivers. "Our future’s all about automation," says Homer, "and behind that, cloud native technologies are going to unlock our ability to embrace that and go full force towards the future." +
+
diff --git a/content/en/case-studies/thredup/thredup_featured_logo.png b/content/en/case-studies/thredup/thredup_featured_logo.png new file mode 100644 index 0000000000..3961f761b1 Binary files /dev/null and b/content/en/case-studies/thredup/thredup_featured_logo.png differ diff --git a/content/en/case-studies/vsco/index.html b/content/en/case-studies/vsco/index.html new file mode 100644 index 0000000000..4ca7aa1bbc --- /dev/null +++ b/content/en/case-studies/vsco/index.html @@ -0,0 +1,97 @@ +--- +title: vsco Case Study +linkTitle: vsco +case_study_styles: true +cid: caseStudies +css: /css/style_case_studies.css +featured: false +--- + +
+

CASE STUDY:
VSCO: How a Mobile App Saved 70% on Its EC2 Bill with Cloud Native +

+ +
+ +
+ Company  VSCO     Location  Oakland, CA     Industry  Photo Mobile App +
+ +
+
+
+
+

Challenge

+ After moving from Rackspace to AWS in 2015, VSCO began building Node.js and Go microservices in addition to running its PHP monolith. The team containerized the microservices using Docker, but "they were all in separate groups of EC2 instances that were dedicated per service," says Melinda Lu, Engineering Manager for the Machine Learning Team. Adds Naveen Gattu, Senior Software Engineer on the Community Team: "That yielded a lot of wasted resources. We started looking for a way to consolidate and be more efficient in the AWS EC2 instances." +

Solution

+ The team began exploring the idea of a scheduling system, and looked at several solutions including Mesos and Swarm before deciding to go with Kubernetes. VSCO also uses gRPC and Envoy in their cloud native stack. +
+

Impact

+ Before, deployments required "a lot of manual tweaking, in-house scripting that we wrote, and because of our disparate EC2 instances, Operations had to babysit the whole thing from start to finish," says Senior Software Engineer Brendan Ryan. "We didn't really have a story around testing in a methodical way, and using reusable containers or builds in a standardized way." There's a faster onboarding process now. Before, the time to first deploy was two days' hands-on setup time; now it's two hours. By moving to continuous integration, containerization, and Kubernetes, velocity was increased dramatically. The time from code-complete to deployment in production on real infrastructure went from one to two weeks to two to four hours for a typical service. Adds Gattu: "In man hours, that's one person versus a developer and a DevOps individual at the same time." With an 80% decrease in time for a single deployment to happen in production, the number of deployments has increased as well, from 1200/year to 3200/year. There have been real dollar savings too: With Kubernetes, VSCO is running at 2x to 20x greater EC2 efficiency, depending on the service, adding up to about 70% overall savings on the company's EC2 bill. Ryan points to the company's ability to go from managing one large monolithic application to 50+ microservices with "the same size developer team, more or less. And we've only been able to do that because we have increased trust in our tooling and a lot more flexibility, so we don't need to employ a DevOps engineer to tune every service." With Kubernetes, gRPC, and Envoy in place, VSCO has seen an 88% reduction in total minutes of outage time, mainly due to the elimination of JSON-schema errors and service-specific infrastructure provisioning errors, and an increased speed in fixing outages. + +
+ +
+
+
+
+ "I've been really impressed seeing how our engineers have come up with creative solutions to things by just combining a lot of Kubernetes primitives. Exposing Kubernetes constructs as a service to our engineers as opposed to exposing higher order constructs has worked well for us. It lets you get familiar with the technology and do more interesting things with it." +

- MELINDA LU, ENGINEERING MANAGER FOR VSCO'S MACHINE LEARNING TEAM
+
+
+
+
+

A photography app for mobile, VSCO was born in the cloud in 2011. In the beginning, "we were using Rackspace and had one PHP monolith application talking to MySQL database, with FTP deployments, no containerization, no orchestration," says Software Engineer Brendan Ryan, "which was sufficient at the time."

+ After VSCO moved to AWS in 2015 and its user base passed the 30 million mark, the team quickly realized that setup wouldn't work anymore. Developers had started building some Node and Go microservices, which the team tried containerizing with Docker. But "they were all in separate groups of EC2 instances that were dedicated per service," says Melinda Lu, Engineering Manager for the Machine Learning Team. Adds Naveen Gattu, Senior Software Engineer on the Community Team: "That yielded a lot of wasted resources. We started looking for a way to consolidate and be more efficient in the EC2 instances." +

+ With a checklist that included ease of use and implementation, level of support, and whether it was open source, the team evaluated a few scheduling solutions, including Mesos and Swarm, before deciding to go with Kubernetes. "Kubernetes seemed to have the strongest open source community around it," says Lu. Plus, "We had started to standardize on a lot of the Google stack, with Go as a language, and gRPC for almost all communication between our own services inside the data center. So it seemed pretty natural for us to choose Kubernetes." + +
+
+
+
+ "Kubernetes seemed to have the strongest open source community around it, plus, we had started to standardize on a lot of the Google stack, with Go as a language, and gRPC for almost all communication between our own services inside the data center. So it seemed pretty natural for us to choose Kubernetes."

— MELINDA LU, ENGINEERING MANAGER FOR VSCO'S MACHINE LEARNING TEAM
+ +
+
+
+
+ At the time, there were few managed Kubernetes offerings and less tooling available in the ecosystem, so the team stood up its own cluster and built some custom components for its specific deployment needs, such as an automatic ingress controller and policy constructs for canary deploys. "We had already begun breaking up the monolith, so we moved things one by one, starting with pretty small, low-risk services," says Lu. "Every single new service was deployed there." The first service was migrated at the end of 2016, and after one year, 80% of the entire stack was on Kubernetes, including the rest of the monolith. +

+ The impact has been great. Deployments used to require "a lot of manual tweaking, in-house scripting that we wrote, and because of our disparate EC2 instances, Operations had to babysit the whole thing from start to finish," says Ryan. "We didn't really have a story around testing in a methodical way, and using reusable containers or builds in a standardized way." There's a faster onboarding process now. Before, the time to first deploy was two days' hands-on setup time; now it's two hours. +

+ Moving to continuous integration, containerization, and Kubernetes increased velocity dramatically. The time from code-complete to deployment in production on real infrastructure went from one to two weeks to two to four hours for a typical service. Plus, says Gattu, "In man hours, that's one person versus a developer and a DevOps individual at the same time." With an 80% decrease in time for a single deployment to happen in production, the number of deployments has increased as well, from 1200/year to 3200/year. +
+
+
+
+ "I've been really impressed seeing how our engineers have come up with really creative solutions to things by just combining a lot of Kubernetes primitives, exposing Kubernetes constructs as a service to our engineers as opposed to exposing higher order constructs has worked well for us. It lets you get familiar with the technology and do more interesting things with it."

— MELINDA LU, ENGINEERING MANAGER FOR VSCO’S MACHINE LEARNING TEAM
+
+
+ +
+
+ There have been real dollar savings too: With Kubernetes, VSCO is running at 2x to 20x greater EC2 efficiency, depending on the service, adding up to about 70% overall savings on the company’s EC2 bill. +

+ Ryan points to the company’s ability to go from managing one large monolithic application to 50+ microservices with “the same size developer team, more or less. And we’ve only been able to do that because we have increased trust in our tooling and a lot more flexibility when there are stress points in our system. You can increase CPU and memory requirements of a service without having to bring up and tear down instances, and read through AWS pages just to be familiar with a lot of jargon, which isn’t really tenable for a company at our scale.” +

+ Envoy and gRPC have also had a positive impact at VSCO. “We get many benefits from gRPC out of the box: type safety across multiple languages, ease of defining services with the gRPC IDL, built-in architecture like interceptors, and performance improvements over HTTP/1.1 and JSON,” says Lu. +
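Service definitions in the gRPC IDL (protocol buffers) are short and language-neutral; a hypothetical sketch, not VSCO's actual schema, might look like this:

```protobuf
// community.proto — illustrative example only; all names are hypothetical.
syntax = "proto3";

package demo;

// The IDL declares typed request and response messages...
message FollowRequest {
  string user_id = 1;
  string target_id = 2;
}

message FollowReply {
  bool ok = 1;
}

// ...and the RPCs that exchange them. Client and server stubs for Go,
// Node, and other languages are generated from this one file, which is
// where the cross-language type safety Lu mentions comes from.
service Community {
  rpc Follow(FollowRequest) returns (FollowReply);
}
```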

+ VSCO was one of the first users of Envoy, getting it in production five days after it was open sourced. “We wanted to serve gRPC and HTTP/2 directly to mobile clients through our edge load balancers, and Envoy was our only reasonable solution,” says Lu. “The ability to send consistent and detailed stats by default across all services has made observability and standardization of dashboards much easier.” The metrics that come built in with Envoy have also “greatly helped with debugging,” says DevOps Engineer Ryan Nguyen. +
+ +
+
+"Because there’s now an organization that supports Kubernetes, does that build confidence? The answer is a resounding yes."

— NAVEEN GATTU, SENIOR SOFTWARE ENGINEER ON VSCO’S COMMUNITY TEAM
+
+ +
+ With Kubernetes, gRPC, and Envoy in place, VSCO has seen an 88% reduction in total minutes of outage time, mainly due to the elimination of JSON-schema errors and service-specific infrastructure provisioning errors, and an increased speed in fixing outages. +

+ Given its success using CNCF projects, VSCO is starting to experiment with others, including CNI and Prometheus. “To have a large organization backing these technologies, we have a lot more confidence trying this software and deploying to production,” says Nguyen. +

+ The team has made contributions to gRPC and Envoy, and is hoping to be even more active in the CNCF community. “I’ve been really impressed seeing how our engineers have come up with really creative solutions to things by just combining a lot of Kubernetes primitives,” says Lu. “Exposing Kubernetes constructs as a service to our engineers as opposed to exposing higher order constructs has worked well for us. It lets you get familiar with the technology and do more interesting things with it.” + +
+
diff --git a/content/en/case-studies/vsco/vsco_featured_logo.png b/content/en/case-studies/vsco/vsco_featured_logo.png new file mode 100644 index 0000000000..e01e2e4e8f Binary files /dev/null and b/content/en/case-studies/vsco/vsco_featured_logo.png differ diff --git a/content/en/case-studies/woorank/index.html b/content/en/case-studies/woorank/index.html new file mode 100644 index 0000000000..aa41b7cb44 --- /dev/null +++ b/content/en/case-studies/woorank/index.html @@ -0,0 +1,96 @@ +--- +title: Woorank Case Study +linkTitle: woorank +case_study_styles: true +cid: caseStudies +css: /css/style_case_studies.css +featured: false +--- + + +
+

CASE STUDY:
Woorank: How Kubernetes Helped a Startup Manage 50 Microservices with
12 Engineers—At 30% Less Cost +

+ +
+ +
+ Company  Woorank     Location  Brussels, Belgium     Industry  Digital marketing tool +
+ +
+
+
+
+ +

Challenge

+ Founded in 2011, Woorank embraced microservices and containerization early on, so its core product, a tool that helps digital marketers improve their websites’ visibility on the internet, consists of 50 applications developed and maintained by a technical team of 12. For two years, the infrastructure ran smoothly on Mesos, but “there were still lots of our own libraries that we had to roll and applications that we had to bring in, so it was very cumbersome for us as a small team to keep those things alive and to update them,” says CTO/Cofounder Nils De Moor. So he began looking for a new solution with more automation and self-healing built in, one that would better suit the company’s human resources. +

Solution

+ De Moor decided to switch to Kubernetes running on AWS, which “allows us to just define applications, how they need to run, how scalable they need to be, and it takes pain away from the developers thinking about that,” he says. “When things fail and errors pop up, the system tries to heal itself, and that’s really, for us, the key reason to work with Kubernetes.” The company now also uses Fluentd, Prometheus, and OpenTracing. +

Impact

+ The company’s number one concern was immediately erased: Maintaining Kubernetes takes just one person on staff, and it’s not a full-time job. Infrastructure updates used to take two active working days; now it’s just a matter of “a few hours of passively following the process,” says De Moor. Implementing new tools—which once took weeks of planning, installing, and onboarding—now only takes a few days. “We were already pretty flexible in our costs and taking on traffic peaks and higher load in general,” adds De Moor, “but with Kubernetes and the other CNCF tools we use, we have achieved about 30% in cost savings.” Plus, the rate of deployments per day has nearly doubled. +
+
+
+
+
+ “It was definitely important for us to have CNCF as an umbrella above everything. We’ve always been working with open source libraries and tools and technologies. It works very well for us, but sometimes things can drift, maintainers drop out, and projects go haywire. For us, it was indeed important to know that whatever project gets taken under this umbrella, it’s taken very seriously. Our way of contributing back is also by joining this community. It’s, for us, a way to show our appreciation for what’s going on in this framework.” +

— NILS DE MOOR, CTO/COFOUNDER, WOORANK
+
+
+
+
+

Woorank’s core product is a tool that enables digital marketers to improve their websites’ visibility on the internet.

+ “We help them acquire lots of data and then present it to them in meaningful ways so they can work with it,” says CTO/Cofounder Nils De Moor. In its seven years as a startup, the company followed a familiar technological path to build that product: starting with a monolithic application, breaking it down into microservices, and then embracing containerization. “That’s where our modern infrastructure started out,” says De Moor. +

+ As new features have been added to the product, it has grown to consist of 50 applications under the hood. Though Docker had made things easier to deploy, and the team had been using Mesos as an orchestration framework on AWS since 2015, De Moor realized there was still too much overhead in managing the infrastructure, especially with a technical team of just 12. +

+ “The pain point was that there were still lots of our own libraries that we had to roll and applications that we had to bring in, so it was very cumbersome for us as a small team to keep those things alive and to update them,” says De Moor. “When things went wrong during deployment, someone manually had to come in and figure it out. It wasn’t necessarily that the technology or anything was wrong with Mesos; it was just not really fitting our model of being a small company, not having the human resources to make sure it all works and can be updated.” + +
+
+
+
+ "Cloud native technologies have brought to us a transparency on everything going on in our system, from the code to the server. It has brought huge cost savings and a better way of dealing with those costs and keeping them under control. And performance-wise, it has helped our team understand how we can make our code work better on the cloud native infrastructure."

— NILS DE MOOR, CTO/COFOUNDER, WOORANK
+ +
+
+ +
+
+ Around the time Woorank was grappling with these issues, Kubernetes was emerging as a technology. De Moor knew that he wanted a platform that would be more automated and self-healing, and when he began experimenting with Kubernetes, he found that it checked all those boxes. “Kubernetes allows us to just define applications, how they need to run, how scalable they need to be, and it takes pain away from the developers thinking about that,” he says. “When things fail and errors pop up, the system tries to heal itself, and that’s really, for us, the key reason to work with Kubernetes. It allowed us to set up certain testing frameworks to just be alerted when things go wrong, instead of having to look at whether everything went right. It’s made people’s lives much easier. It’s quite a big mindset change.” +
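In Kubernetes terms, that definition is a short declarative manifest; the sketch below is a minimal, purely illustrative example (the name, image, and numbers are hypothetical, not Woorank's actual configuration):

```yaml
# deployment.yaml — hypothetical example of declaring "how an app needs to run"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crawler                 # illustrative service name
spec:
  replicas: 3                   # "how scalable it needs to be"
  selector:
    matchLabels:
      app: crawler
  template:
    metadata:
      labels:
        app: crawler
    spec:
      containers:
      - name: crawler
        image: registry.example.com/crawler:1.0   # hypothetical image
        livenessProbe:          # lets Kubernetes detect failures...
          httpGet:
            path: /healthz
            port: 8080
# ...and restart the container automatically — the self-healing De Moor
# describes, with no one manually stepping in when a deployment goes wrong.
```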

+ Once one small Kubernetes cluster was up and running, the team began moving over a few applications at a time, gradually increasing the load over the course of several months. By early 2017, Woorank was 100% deployed on Kubernetes. +

+ The company’s number one concern was immediately erased: Maintaining Kubernetes is the responsibility of just one person on staff, and it’s not his full-time job. Updating the old infrastructure “was always a pain,” says De Moor: It used to take two active working days, “and it was always a bit scary when we did that.” With Kubernetes, it’s just a matter of “a few hours of passively following the process.” +
+
+
+
+ "When things fail and errors pop up, the system tries to heal itself, and that’s really, for us, the key reason to work with Kubernetes. It allowed us to set up certain testing frameworks to just be alerted when things go wrong, instead of having to look at whether everything went right. It’s made people’s lives much easier. It’s quite a big mindset change."

— NILS DE MOOR, CTO/COFOUNDER, WOORANK
+
+
+ +
+
+ Transparency on all levels, from the code to the servers, has also been a byproduct of the move to Kubernetes. “It’s easier for the entire team to get a better understanding of the infrastructure, how it’s working, what it looks like, what’s going on,” says De Moor. “It’s not that thing that’s running, and no one really knows how it works except this one person. Now it’s really a team effort of everyone knowing, ‘Okay, when something goes wrong, it’s probably in this area or we need to check this.’” +

+ To that end, Woorank has begun implementing other cloud native tools that help with visibility, such as Fluentd for logging, Prometheus for monitoring, and OpenTracing for distributed tracing. Implementing these new tools—which once took weeks of planning, installing, and onboarding—now only takes a few days. “With all the tools and projects under the CNCF umbrella, it’s easier for us to test and play with technology than it used to be,” says De Moor. “We used Prometheus fairly early and couldn’t get it stable. A couple of months ago, the question reappeared, so we set it up in two days, and now everyone is using it.” +

+ Deployments, too, have been impacted: The rate has more than doubled, which De Moor partly attributes to the transparency of the new process. “With Kubernetes, you see that these three containers didn’t start for this reason,” he says. Plus, “now we bring deployment messages into Slack. If you see deployments rolling by every day, it does somehow indirectly enforce you, okay, I need to be part of this train, so I also need to deploy.” +
+ +
+
+"We can plan those things over a certain timeline, try to fit our resource usage to that, and then bring in spot instances, which will hopefully drive the costs down more."

— NILS DE MOOR, CTO/COFOUNDER, WOORANK
+
+ +
+ Perhaps the biggest impact, though, has been on the bottom line. “We were already pretty flexible in our costs and taking on traffic peaks and higher load in general, but with Kubernetes and the other CNCF tools we use, we have achieved about 30% in cost savings,” says De Moor. +

+ And there’s room for even greater savings. Currently, most of Woorank’s infrastructure is running on AWS on demand; the company pays a fixed price and makes some reservations for its planned amount of resources needed. De Moor is planning to experiment more with spot instances with certain resource-heavy workloads such as web crawls: “We can plan those things over a certain timeline, try to fit our resource usage to that, and then bring in spot instances, which will hopefully drive the costs down more.” +

+ Moving to Kubernetes has been so beneficial to Woorank that the company is doubling down on both cloud native technologies and the community. “It was definitely important for us to have CNCF as an umbrella above everything,” says De Moor. “We’ve always been working with open source libraries and tools and technologies. It works very well for us, but sometimes things can drift, maintainers drop out, and projects go haywire. For us, it was indeed important to know that whatever project gets taken under this umbrella, it’s taken very seriously. Our way of contributing back is also by joining this community. It’s, for us, a way to show our appreciation for what’s going on in this framework.” +
+
diff --git a/content/en/case-studies/woorank/woorank_featured_logo.png b/content/en/case-studies/woorank/woorank_featured_logo.png new file mode 100644 index 0000000000..f7d6ed300f Binary files /dev/null and b/content/en/case-studies/woorank/woorank_featured_logo.png differ diff --git a/static/images/CaseStudy_antfinancial_banner1.jpg b/static/images/CaseStudy_antfinancial_banner1.jpg new file mode 100644 index 0000000000..f3eda7abac Binary files /dev/null and b/static/images/CaseStudy_antfinancial_banner1.jpg differ diff --git a/static/images/CaseStudy_antfinancial_banner3.jpg b/static/images/CaseStudy_antfinancial_banner3.jpg new file mode 100644 index 0000000000..4e2482c90e Binary files /dev/null and b/static/images/CaseStudy_antfinancial_banner3.jpg differ diff --git a/static/images/CaseStudy_antfinancial_banner4.jpg b/static/images/CaseStudy_antfinancial_banner4.jpg new file mode 100644 index 0000000000..67d1ff2fd2 Binary files /dev/null and b/static/images/CaseStudy_antfinancial_banner4.jpg differ diff --git a/static/images/CaseStudy_ft_banner1.jpg b/static/images/CaseStudy_ft_banner1.jpg new file mode 100644 index 0000000000..d6b7f7fa09 Binary files /dev/null and b/static/images/CaseStudy_ft_banner1.jpg differ diff --git a/static/images/CaseStudy_ft_banner3.jpg b/static/images/CaseStudy_ft_banner3.jpg new file mode 100644 index 0000000000..ef1bda2825 Binary files /dev/null and b/static/images/CaseStudy_ft_banner3.jpg differ diff --git a/static/images/CaseStudy_ft_banner4.jpg b/static/images/CaseStudy_ft_banner4.jpg new file mode 100644 index 0000000000..69cd051c60 Binary files /dev/null and b/static/images/CaseStudy_ft_banner4.jpg differ diff --git a/static/images/CaseStudy_jdcom_banner1.jpg b/static/images/CaseStudy_jdcom_banner1.jpg new file mode 100644 index 0000000000..a01d2bdef7 Binary files /dev/null and b/static/images/CaseStudy_jdcom_banner1.jpg differ diff --git a/static/images/CaseStudy_jdcom_banner3.jpg b/static/images/CaseStudy_jdcom_banner3.jpg 
new file mode 100644 index 0000000000..1b04d83488 Binary files /dev/null and b/static/images/CaseStudy_jdcom_banner3.jpg differ diff --git a/static/images/CaseStudy_jdcom_banner4.jpg b/static/images/CaseStudy_jdcom_banner4.jpg new file mode 100644 index 0000000000..7da2f3cb57 Binary files /dev/null and b/static/images/CaseStudy_jdcom_banner4.jpg differ diff --git a/static/images/CaseStudy_montreal_banner1.jpg b/static/images/CaseStudy_montreal_banner1.jpg new file mode 100644 index 0000000000..19c28999c8 Binary files /dev/null and b/static/images/CaseStudy_montreal_banner1.jpg differ diff --git a/static/images/CaseStudy_montreal_banner3.jpg b/static/images/CaseStudy_montreal_banner3.jpg new file mode 100644 index 0000000000..d918d13667 Binary files /dev/null and b/static/images/CaseStudy_montreal_banner3.jpg differ diff --git a/static/images/CaseStudy_montreal_banner4.jpg b/static/images/CaseStudy_montreal_banner4.jpg new file mode 100644 index 0000000000..7d41407d0b Binary files /dev/null and b/static/images/CaseStudy_montreal_banner4.jpg differ diff --git a/static/images/CaseStudy_nerdalize_banner1.jpg b/static/images/CaseStudy_nerdalize_banner1.jpg new file mode 100644 index 0000000000..e664276efa Binary files /dev/null and b/static/images/CaseStudy_nerdalize_banner1.jpg differ diff --git a/static/images/CaseStudy_nerdalize_banner3.jpg b/static/images/CaseStudy_nerdalize_banner3.jpg new file mode 100644 index 0000000000..5fdd2e1659 Binary files /dev/null and b/static/images/CaseStudy_nerdalize_banner3.jpg differ diff --git a/static/images/CaseStudy_nerdalize_banner4.jpg b/static/images/CaseStudy_nerdalize_banner4.jpg new file mode 100644 index 0000000000..a824872cd5 Binary files /dev/null and b/static/images/CaseStudy_nerdalize_banner4.jpg differ diff --git a/static/images/CaseStudy_pingcap_banner1.jpg b/static/images/CaseStudy_pingcap_banner1.jpg new file mode 100644 index 0000000000..c98bf076ee Binary files /dev/null and 
b/static/images/CaseStudy_pingcap_banner1.jpg differ diff --git a/static/images/CaseStudy_pingcap_banner3.jpg b/static/images/CaseStudy_pingcap_banner3.jpg new file mode 100644 index 0000000000..32c599810f Binary files /dev/null and b/static/images/CaseStudy_pingcap_banner3.jpg differ diff --git a/static/images/CaseStudy_pingcap_banner4.jpg b/static/images/CaseStudy_pingcap_banner4.jpg new file mode 100644 index 0000000000..954832dc5c Binary files /dev/null and b/static/images/CaseStudy_pingcap_banner4.jpg differ diff --git a/static/images/CaseStudy_prowise_banner1.jpg b/static/images/CaseStudy_prowise_banner1.jpg new file mode 100644 index 0000000000..a54519df10 Binary files /dev/null and b/static/images/CaseStudy_prowise_banner1.jpg differ diff --git a/static/images/CaseStudy_prowise_banner3.jpg b/static/images/CaseStudy_prowise_banner3.jpg new file mode 100644 index 0000000000..c4126de691 Binary files /dev/null and b/static/images/CaseStudy_prowise_banner3.jpg differ diff --git a/static/images/CaseStudy_prowise_banner4.jpg b/static/images/CaseStudy_prowise_banner4.jpg new file mode 100644 index 0000000000..fa4b7ba6d7 Binary files /dev/null and b/static/images/CaseStudy_prowise_banner4.jpg differ diff --git a/static/images/CaseStudy_ricardoch_banner1.png b/static/images/CaseStudy_ricardoch_banner1.png new file mode 100644 index 0000000000..3107c07a6c Binary files /dev/null and b/static/images/CaseStudy_ricardoch_banner1.png differ diff --git a/static/images/CaseStudy_ricardoch_banner3.png b/static/images/CaseStudy_ricardoch_banner3.png new file mode 100644 index 0000000000..7059d2507e Binary files /dev/null and b/static/images/CaseStudy_ricardoch_banner3.png differ diff --git a/static/images/CaseStudy_ricardoch_banner4.png b/static/images/CaseStudy_ricardoch_banner4.png new file mode 100644 index 0000000000..f545ed5b15 Binary files /dev/null and b/static/images/CaseStudy_ricardoch_banner4.png differ diff --git a/static/images/CaseStudy_slamtec_banner1.jpg 
b/static/images/CaseStudy_slamtec_banner1.jpg new file mode 100644 index 0000000000..e2293d8b5d Binary files /dev/null and b/static/images/CaseStudy_slamtec_banner1.jpg differ diff --git a/static/images/CaseStudy_slamtec_banner3.jpg b/static/images/CaseStudy_slamtec_banner3.jpg new file mode 100644 index 0000000000..c5541b3d60 Binary files /dev/null and b/static/images/CaseStudy_slamtec_banner3.jpg differ diff --git a/static/images/CaseStudy_slamtec_banner4.jpg b/static/images/CaseStudy_slamtec_banner4.jpg new file mode 100644 index 0000000000..567db1f39c Binary files /dev/null and b/static/images/CaseStudy_slamtec_banner4.jpg differ diff --git a/static/images/CaseStudy_thredup_banner1.jpg b/static/images/CaseStudy_thredup_banner1.jpg new file mode 100644 index 0000000000..f31ea36ab1 Binary files /dev/null and b/static/images/CaseStudy_thredup_banner1.jpg differ diff --git a/static/images/CaseStudy_thredup_banner3.jpg b/static/images/CaseStudy_thredup_banner3.jpg new file mode 100644 index 0000000000..f27d74f180 Binary files /dev/null and b/static/images/CaseStudy_thredup_banner3.jpg differ diff --git a/static/images/CaseStudy_thredup_banner4.jpg b/static/images/CaseStudy_thredup_banner4.jpg new file mode 100644 index 0000000000..c15afa2f00 Binary files /dev/null and b/static/images/CaseStudy_thredup_banner4.jpg differ diff --git a/static/images/CaseStudy_vsco_banner1.jpg b/static/images/CaseStudy_vsco_banner1.jpg new file mode 100644 index 0000000000..d675936171 Binary files /dev/null and b/static/images/CaseStudy_vsco_banner1.jpg differ diff --git a/static/images/CaseStudy_vsco_banner2.jpg b/static/images/CaseStudy_vsco_banner2.jpg new file mode 100644 index 0000000000..a235490a7c Binary files /dev/null and b/static/images/CaseStudy_vsco_banner2.jpg differ diff --git a/static/images/CaseStudy_vsco_banner4.jpg b/static/images/CaseStudy_vsco_banner4.jpg new file mode 100644 index 0000000000..e884a1dbe5 Binary files /dev/null and 
b/static/images/CaseStudy_vsco_banner4.jpg differ diff --git a/static/images/CaseStudy_woorank_banner1.jpg b/static/images/CaseStudy_woorank_banner1.jpg new file mode 100644 index 0000000000..b20a34b990 Binary files /dev/null and b/static/images/CaseStudy_woorank_banner1.jpg differ diff --git a/static/images/CaseStudy_woorank_banner3.jpg b/static/images/CaseStudy_woorank_banner3.jpg new file mode 100644 index 0000000000..e5572abbb6 Binary files /dev/null and b/static/images/CaseStudy_woorank_banner3.jpg differ diff --git a/static/images/CaseStudy_woorank_banner4.jpg b/static/images/CaseStudy_woorank_banner4.jpg new file mode 100644 index 0000000000..93956b173c Binary files /dev/null and b/static/images/CaseStudy_woorank_banner4.jpg differ diff --git a/static/images/antfinancial_logo.png b/static/images/antfinancial_logo.png new file mode 100644 index 0000000000..37cd480d7f Binary files /dev/null and b/static/images/antfinancial_logo.png differ diff --git a/static/images/ft_logo.png b/static/images/ft_logo.png new file mode 100644 index 0000000000..9c76f5dc17 Binary files /dev/null and b/static/images/ft_logo.png differ diff --git a/static/images/jdcom_logo.png b/static/images/jdcom_logo.png new file mode 100644 index 0000000000..22bcb1910c Binary files /dev/null and b/static/images/jdcom_logo.png differ diff --git a/static/images/montreal_logo.png b/static/images/montreal_logo.png new file mode 100644 index 0000000000..56af7f018f Binary files /dev/null and b/static/images/montreal_logo.png differ diff --git a/static/images/nerdalize_logo.png b/static/images/nerdalize_logo.png new file mode 100644 index 0000000000..55bae791ca Binary files /dev/null and b/static/images/nerdalize_logo.png differ diff --git a/static/images/pingcap_logo.png b/static/images/pingcap_logo.png new file mode 100644 index 0000000000..af292c3eb7 Binary files /dev/null and b/static/images/pingcap_logo.png differ diff --git a/static/images/prowise_logo.png b/static/images/prowise_logo.png new 
file mode 100644 index 0000000000..4ee17fe038 Binary files /dev/null and b/static/images/prowise_logo.png differ diff --git a/static/images/ricardoch_logo.png b/static/images/ricardoch_logo.png new file mode 100644 index 0000000000..005329d4b9 Binary files /dev/null and b/static/images/ricardoch_logo.png differ diff --git a/static/images/slamtec_logo.png b/static/images/slamtec_logo.png new file mode 100644 index 0000000000..19667f57ab Binary files /dev/null and b/static/images/slamtec_logo.png differ diff --git a/static/images/thredup_logo.png b/static/images/thredup_logo.png new file mode 100644 index 0000000000..ab32a526a3 Binary files /dev/null and b/static/images/thredup_logo.png differ diff --git a/static/images/vsco_logo.png b/static/images/vsco_logo.png new file mode 100644 index 0000000000..aa6e56394e Binary files /dev/null and b/static/images/vsco_logo.png differ diff --git a/static/images/woorank_logo.png b/static/images/woorank_logo.png new file mode 100644 index 0000000000..ac62de874e Binary files /dev/null and b/static/images/woorank_logo.png differ