fixed CPU benchmarking typos and small formatting

pull/11279/head
Steven Powell 2021-05-04 12:48:04 -07:00
parent ce01c06fef
commit 2be9109797
4 changed files with 42 additions and 42 deletions

View File

@@ -27,7 +27,7 @@ elif [[ ${OS} == "Linux" ]]; then
fi
# calc average each non-autopause test target
calcAvarageNonAutopause() {
calcAverageNonAutopause() {
for target in ${TESTS_TARGETS[@]}; do
nap_count=0;
nap_total=0;
@@ -50,7 +50,7 @@ calcAvarageNonAutopause() {
}
# calc average each autopause test target
calcAvarageAutopause() {
calcAverageAutopause() {
for target in ${TESTS_TARGETS[@]}; do
if [[ "${target}" == "minikube."* ]]; then
ap_count=0;
@@ -85,8 +85,8 @@ updateAutopauseSummary() {
}
}
calcAvarageNonAutopause
calcAverageNonAutopause
updateNonAutopauseSummary
calcAvarageAutopause
calcAverageAutopause
updateAutopauseSummary
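The hunk above shows only the renamed function headers and call sites. As context, here is a minimal sketch of what an averaging helper like `calcAverageNonAutopause` could look like; the CSV layout (`results.csv` with `target,cpu_busy_pct` rows), the `bc` arithmetic, and the sample `TESTS_TARGETS` values are assumptions for illustration, not the actual minikube script.

```bash
#!/usr/bin/env bash
# Sketch only: assumed input is results.csv with "target,cpu_busy_pct" rows.
TESTS_TARGETS=("minikube.docker" "kind" "k3d")

calcAverageNonAutopause() {
  for target in "${TESTS_TARGETS[@]}"; do
    nap_count=0
    nap_total=0
    # accumulate every CPU sample recorded for this target
    while IFS=, read -r name value; do
      if [[ "${name}" == "${target}" ]]; then
        nap_total=$(echo "${nap_total} + ${value}" | bc -l)
        nap_count=$((nap_count + 1))
      fi
    done < results.csv
    if [[ ${nap_count} -gt 0 ]]; then
      printf '%s average: %.2f%%\n' "${target}" "$(echo "${nap_total} / ${nap_count}" | bc -l)"
    fi
  done
}

calcAverageNonAutopause
```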

View File

@@ -26,7 +26,7 @@ elif [[ ${OS} == "Linux" ]]; then
fi
# calc average each test target
calcAvarage() {
calcAverage() {
for target in ${TESTS_TARGETS[@]}; do
count=0;
total=0;
@@ -51,5 +51,5 @@ updateSummary() {
}
}
calcAvarage
calcAverage
updateSummary
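The second script follows the same shape with a single `calcAverage`/`updateSummary` pair. The core "average one target" step can also be written as a small awk helper; the `target,cpu_busy_pct` column layout is again only an assumed format, not the benchmark's real schema.

```bash
# Average the CPU column of all rows belonging to one target.
# Assumed row format: "target,cpu_busy_pct" (illustrative only).
average_for_target() {
  local target="$1" csv="$2"
  awk -F, -v t="${target}" '$1 == t { sum += $2; n++ } END { if (n) printf "%.2f\n", sum / n }' "${csv}"
}

# usage: average_for_target "minikube.docker" results.csv
```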

View File

@@ -1,45 +1,45 @@
---
title: "CPU Usage Benchmarks(Linux)"
linkTitle: "CPU Usage Benchmarks(Linux)"
title: "CPU Usage Benchmarks (Linux)"
linkTitle: "CPU Usage Benchmarks (Linux)"
weight: 1
---
## CPU% Busy Overhead - Avarage first 5 minutes only
## CPU% Busy Overhead - Average first 5 minutes only
This chart shows each tool's CPU busy overhead percentage.
After starting each tool, we measured its idle CPU usage for 5 minutes.
This chart was measured only after the start without deploying any pods.
1. start each local Kubernetes tool
2. measure its CPU usage with [cstat](https://github.com/tstromberg/cstat) (see the sketch below)
![idleOnly](/images/benchmarks/cpuUsage/idleOnly/linux.png)
NOTE: the benchmark environment uses GCE with nested virtualization. This may affect the virtual machine's overhead.
https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances
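Concretely, the idle benchmark described above boils down to starting a tool and letting cstat sample the CPU for five minutes. The sketch below uses minikube with the docker driver as one example; the cstat flags are assumptions from memory, so check `cstat --help` before relying on them.

```bash
#!/usr/bin/env bash
set -euo pipefail

# 1. start a local Kubernetes tool (minikube with the docker driver shown here)
minikube start --driver=docker

# 2. sample idle CPU usage for ~5 minutes with cstat
#    (the --for/--poll/--busy flags are assumptions; see the cstat README)
cstat --for 5m --poll 5s --busy | tee idle_linux.txt

minikube delete
```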
## CPU% Busy Overhead - With Auto Pause vs. Non Auto Pause
This chart shows each tool's CPU busy overhead percentage with the auto-pause addon.
Auto-pause is a mechanism that reduces CPU busy usage by pausing the kube-apiserver.
We compare CPU usage after deploying sample application(nginx deployment) to all tools(including minikube and other tools).
We compare CPU usage after deploying sample application (nginx deployment) to all tools (including minikube and other tools).
This chart was measured with the following steps (a shell sketch of them follows below).
These steps let us compare CPU usage with auto-pause vs. non-auto-pause.
1. start each local Kubernetes tool
2. deploy sample application(nginx deployment) to each tool
2. deploy sample application (nginx deployment) to each tool
3. wait 1 minute without doing anything
4. measure No.3 idle CPU usage with [cstat](https://github.com/tstromberg/cstat)
5. if the tool is minikube, enable the auto-pause addon, which pauses the control plane
6. if tool is minikube, wait 1 minute so that control plane will become Paused status(It takes 1 minute to become Pause status from Stopped status)
6. if tool is minikube, wait 1 minute so that control plane will become Paused status (It takes 1 minute to become Pause status from Stopped status)
7. if the tool is minikube, verify that the minikube control plane is paused
8. if the tool is minikube, wait 3 minutes without doing anything
9. if tool is minikube, measure No.8 idle CPU usage with [cstat](https://github.com/tstromberg/cstat)
No.1-4: Initial start CPU usage with sample(nginx) deployment
No.5-9: Auto Paused CPU usage with sample(nginx) deployment
No.1-4: Initial start CPU usage with sample (nginx) deployment
No.5-9: Auto Paused CPU usage with sample (nginx) deployment
![autopause](/images/benchmarks/cpuUsage/autoPause/linux.png)
NOTE: the benchmark environment uses GCE with nested virtualization. This may affect the virtual machine's overhead.
https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances
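As referenced in the steps above, the auto-pause comparison can be sketched as the shell script below. The minikube and kubectl commands are real, but the sleeps, output file names, and cstat flags are illustrative assumptions rather than the actual benchmark harness.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Steps 1-4: initial-start CPU usage with the sample (nginx) deployment
minikube start --driver=docker
kubectl create deployment nginx --image=nginx
sleep 60                                  # step 3: wait 1 minute
cstat --for 1m --busy > before_pause.txt  # step 4 (cstat flags are assumed)

# Steps 5-9: auto-paused CPU usage
minikube addons enable auto-pause         # step 5
sleep 60                                  # step 6: allow the apiserver to reach Paused
minikube status || true                   # step 7: should show "apiserver: Paused"
sleep 180                                 # step 8: wait 3 minutes
cstat --for 1m --busy > after_pause.txt   # step 9 (cstat flags are assumed)
```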

View File

@@ -1,14 +1,14 @@
---
title: "CPU Usage Benchmarks(macOS)"
linkTitle: "CPU Usage Benchmarks(macOS)"
title: "CPU Usage Benchmarks (macOS)"
linkTitle: "CPU Usage Benchmarks (macOS)"
weight: 1
---
## CPU% Busy Overhead - Avarage first 5 minutes only
## CPU% Busy Overhead - Average first 5 minutes only
This chart shows each tool's CPU busy overhead percentage.
After starting each tool, we measured its idle CPU usage for 5 minutes.
This chart was measured only after the start without deploying any pods.
1. start each local Kubernetes tool
2. measure its CPU usage with [cstat](https://github.com/tstromberg/cstat)
@@ -17,23 +17,23 @@ This chart was measured only after the start without deploying any pods.
## CPU% Busy Overhead - With Auto Pause vs. Non Auto Pause
This chart shows each tool's CPU busy overhead percentage with the auto-pause addon.
Auto-pause is a mechanism that reduces CPU busy usage by pausing the kube-apiserver.
We compare CPU usage after deploying sample application(nginx deployment) to all tools(including minikube and other tools).
We compare CPU usage after deploying sample application (nginx deployment) to all tools (including minikube and other tools).
This chart was measured with the following steps.
These steps let us compare CPU usage with auto-pause vs. non-auto-pause.
1. start each local Kubernetes tool
2. deploy sample application(nginx deployment) to each tool
2. deploy sample application (nginx deployment) to each tool
3. wait 1 minute without doing anything
4. measure No.3 idle CPU usage with [cstat](https://github.com/tstromberg/cstat)
5. if the tool is minikube, enable the auto-pause addon, which pauses the control plane
6. if tool is minikube, wait 1 minute so that control plane will become Paused status(It takes 1 minute to become Pause status from Stopped status)
6. if tool is minikube, wait 1 minute so that control plane will become Paused status (It takes 1 minute to become Pause status from Stopped status)
7. if the tool is minikube, verify that the minikube control plane is paused (see the sketch below)
8. if the tool is minikube, wait 3 minutes without doing anything
9. if tool is minikube, measure No.8 idle CPU usage with [cstat](https://github.com/tstromberg/cstat)
No.1-4: Initial start CPU usage with sample(nginx) deployment
No.5-9: Auto Paused CPU usage with sample(nginx) deployment
No.1-4: Initial start CPU usage with sample (nginx) deployment
No.5-9: Auto Paused CPU usage with sample (nginx) deployment
![autopause](/images/benchmarks/cpuUsage/autoPause/mac.png)
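Step 7 ("verify that the minikube control plane is paused") can be scripted; a minimal check is sketched below. The exact `apiserver: Paused` wording in `minikube status` output is assumed from current releases and may vary, so adjust the grep if needed.

```bash
# minikube status may exit non-zero while the apiserver is paused,
# so inspect its output rather than its exit code.
if minikube status | grep -q "apiserver: Paused"; then
  echo "control plane is paused"
else
  echo "control plane is NOT paused"
fi
```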