spelling: Fix spelling errors with codespell (#21273)

Fix trivial spelling errors using codespell[1]:

    codespell --skip '*.yaml,*.tmpl,*.json,*.html,*.patch,go.sum' -w

Then rejecting the fixes for these false positives:

    ./CHANGELOG.md:907: fliter ==> filter
    ./third_party/go9p/clnt_write.go:48: Writen ==> Written
    ./third_party/kubeadm/app/features/features.go:69: AtLeast ==> at least
    ./site/content/en/docs/contrib/translations.md:106: certificats ==> certificates
    ./site/content/en/docs/contrib/translations.md:113: espace ==> escape
    ./site/content/en/docs/tutorials/amd.md:75: HSA ==> HAS
    ./site/content/en/docs/tutorials/amd.md:87: HSA ==> HAS
    ./pkg/minikube/config/extra_options_test.go:143: expRes ==> express
    ./pkg/minikube/config/extra_options_test.go:151: expRes ==> express
    ./pkg/minikube/config/extra_options_test.go:152: expRes ==> express
    ./pkg/minikube/config/extra_options_test.go:168: expRes ==> express
    ./pkg/minikube/config/extra_options_test.go:177: expRes ==> express
    ./pkg/minikube/config/extra_options_test.go:178: expRes ==> express

There are more spelling errors that need a manually selected correction:

    ./CHANGELOG.md:234: issuse ==> issue, issues
    ./CHANGELOG.md:543: Pris ==> Prise, Prism
    ./hack/benchmark/time-to-k8s/page.go:73: readin ==> reading, read in
    ./hack/benchmark/image-build/generate-chart.go:82: INTERATIVE ==> INTERACTIVE, ITERATIVE
    ./hack/benchmark/image-build/generate-chart.go:87: INTERATIVE ==> INTERACTIVE, ITERATIVE
    ./hack/benchmark/image-build/generate-chart.go:137: INTERATIVE ==> INTERACTIVE, ITERATIVE
    ./hack/benchmark/image-build/generate-chart.go:162: interative ==> interactive, iterative
    ./hack/benchmark/image-build/generate-chart.go:195: INTERATIVE ==> INTERACTIVE, ITERATIVE
    ./third_party/go9p/fmt.go:132: Tread ==> Thread, Treat
    ./third_party/go9p/fmt.go:133: Tread ==> Thread, Treat
    ./third_party/go9p/p9.go:33: Tread ==> Thread, Treat
    ./third_party/go9p/p9.go:170: Tread ==> Thread, Treat
    ./third_party/go9p/p9.go:171: Tread ==> Thread, Treat
    ./third_party/go9p/p9.go:225: Tread ==> Thread, Treat
    ./third_party/go9p/p9.go:263: Tread ==> Thread, Treat
    ./third_party/go9p/packt.go:165: Tread ==> Thread, Treat
    ./third_party/go9p/packt.go:168: Tread ==> Thread, Treat
    ./third_party/go9p/srv_srv.go:305: Tread ==> Thread, Treat
    ./third_party/go9p/srv_srv.go:349: Tread ==> Thread, Treat
    ./third_party/go9p/unpack.go:170: Tread ==> Thread, Treat
    ./site/content/en/docs/tutorials/multi_control_plane_ha_clusters.md:145: Virual ==> Virtual, Visual, Viral
    ./pkg/drivers/krunkit/krunkit.go:392: Terminte ==> Terminate, Termite
    ./pkg/drivers/common/common.go:283: drawin ==> drawing, draw in, drawn
    ./pkg/drivers/kic/oci/oci.go:175: stroed ==> stored, stroked, strode
    ./pkg/minikube/out/out.go:412: isT ==> is, it, its, it's, sit, list
    ./pkg/minikube/out/out.go:413: isT ==> is, it, its, it's, sit, list
    ./pkg/minikube/out/out.go:414: isT ==> is, it, its, it's, sit, list
    ./pkg/minikube/shell/shell_test.go:152: writed ==> wrote, written, write, writer
    ./pkg/minikube/bootstrapper/kubeadm/kubeadm.go:710: wil ==> will, well

If we find a way to suppress the false positives, we can use this command
for spell checking in CI.

[1] https://github.com/codespell-project/codespell
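One possible way to suppress the false positives for CI (an untested sketch; the option names come from codespell's documentation, and the ignore list below is illustrative, not verified against each flagged file) is a checked-in config file that codespell picks up automatically:

```ini
; Hypothetical .codespellrc at the repo root; codespell reads this
; automatically, so CI can run plain `codespell` with no flags.
[codespell]
skip = *.yaml,*.tmpl,*.json,*.html,*.patch,go.sum
; lower-case tokens codespell would flag but that are intentional here
ignore-words-list = fliter,writen,atleast,certificats,espace,hsa,expres
```

With something like this in place, a CI job could run `codespell` and fail the build on any newly introduced typo, while the known false positives stay suppressed.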
Nir Soffer 2025-08-11 21:27:20 +03:00 committed by GitHub
parent bcf69bf6f0
commit 62529ec03e
50 changed files with 66 additions and 66 deletions


@ -485,7 +485,7 @@ Bugs:
* Fix starting kvm2 clusters using Linux on arm64 Mac [#18239](https://github.com/kubernetes/minikube/pull/18239)
* Fix displaying error when deleting non-existing cluster [#17713](https://github.com/kubernetes/minikube/pull/17713)
* Fix no-limit not being respected on restart [#17598](https://github.com/kubernetes/minikube/pull/17598)
-* Fix not applying `kubeadm.applyNodeLabels` label to nodes added after inital start [#16416](https://github.com/kubernetes/minikube/pull/16416)
+* Fix not applying `kubeadm.applyNodeLabels` label to nodes added after initial start [#16416](https://github.com/kubernetes/minikube/pull/16416)
* Fix logs delimiter output [#17734](https://github.com/kubernetes/minikube/pull/17734)
Version Upgrades:
@ -1268,7 +1268,7 @@ Features (Experimental):
* QEMU Driver: Add support for dedicated network on macOS (socket_vmnet) [#14989](https://github.com/kubernetes/minikube/pull/14989)
* QEMU Driver: Add support minikube service and tunnel on macOS [#14989](https://github.com/kubernetes/minikube/pull/14989)
-Minor Imprevements:
+Minor Improvements:
* Check if context is invalid during update-context command [#15032](https://github.com/kubernetes/minikube/pull/15032)
* Use SSH tunnel if user specifies bindAddress [#14951](https://github.com/kubernetes/minikube/pull/14951)
* Warn QEMU users if DNS issue detected [#15073](https://github.com/kubernetes/minikube/pull/15073)
@ -1582,7 +1582,7 @@ Thank you to our triage members for this release!
## Version 1.26.0-beta.0 - 2022-05-13
-Featues:
+Features:
* Add support for the QEMU driver [#13639](https://github.com/kubernetes/minikube/pull/13639)
* Add support for building aarch64 ISO [#13762](https://github.com/kubernetes/minikube/pull/13762)
* Support rootless Podman driver (Usage: `minikube config set rootless true`) [#13829](https://github.com/kubernetes/minikube/pull/13829)
@ -2348,7 +2348,7 @@ Bugs:
Version Upgrades:
* bump default k8s version to v1.20.7 and newest to v1.22.0-alpha.2 [#11525](https://github.com/kubernetes/minikube/pull/11525)
-* containerd: upgrade `io.containerd.runtime.v1.linux` to `io.containerd.runc.v2` (suppot cgroup v2) [#11325](https://github.com/kubernetes/minikube/pull/11325)
+* containerd: upgrade `io.containerd.runtime.v1.linux` to `io.containerd.runc.v2` (support cgroup v2) [#11325](https://github.com/kubernetes/minikube/pull/11325)
* metallb-addon: Update metallb from 0.8.2 to 0.9.6 [#11410](https://github.com/kubernetes/minikube/pull/11410)
For a more detailed changelog, including changes occurring in pre-release versions, see [CHANGELOG.md](https://github.com/kubernetes/minikube/blob/master/CHANGELOG.md).
@ -2662,7 +2662,7 @@ Minor Improvements:
* disable minikube-scheduled-stop.service until a user schedules a stop [#10548](https://github.com/kubernetes/minikube/pull/10548)
* docker/podman: add crun for running on cgroups v2 [#10426](https://github.com/kubernetes/minikube/pull/10426)
* Specify mount point for cri-o config [#10528](https://github.com/kubernetes/minikube/pull/10528)
-* Esnure addon integrity by adding Image SHA [#10527](https://github.com/kubernetes/minikube/pull/10527)
+* Ensure addon integrity by adding Image SHA [#10527](https://github.com/kubernetes/minikube/pull/10527)
* improve kvm network delete/cleanup [#10479](https://github.com/kubernetes/minikube/pull/10479)
* docker/podman: avoid creating overlapping networks with other tools (KVM,...) [#10439](https://github.com/kubernetes/minikube/pull/10439)
* Improve insecure registry validation [#10493](https://github.com/kubernetes/minikube/pull/10493)
@ -4976,7 +4976,7 @@ Thank you to the folks who contributed to this bugfix release:
* Include pod output in 'logs' command & display detected problems during start [#3673](https://github.com/kubernetes/minikube/pull/3673)
* Upgrade Docker, from 18.06.1-ce to 18.06.2-ce [#3666](https://github.com/kubernetes/minikube/pull/3666)
* Upgrade opencontainers/runc to 0a012df [#3669](https://github.com/kubernetes/minikube/pull/3669)
-* Clearer output when re-using VM's so that users know what they are waiting on [#3659](https://github.com/kubernetes/minikube/pull/3659)
+* Clearer output when reusing VM's so that users know what they are waiting on [#3659](https://github.com/kubernetes/minikube/pull/3659)
* Disable kubelet disk eviction by default [#3671](https://github.com/kubernetes/minikube/pull/3671)
* Run poweroff before delete, only call uninstall if driver is None [#3665](https://github.com/kubernetes/minikube/pull/3665)
* Add DeleteCluster to bootstrapper [#3656](https://github.com/kubernetes/minikube/pull/3656)


@ -79,7 +79,7 @@ var dashboardCmd = &cobra.Command{
enabled := addon.IsEnabled(co.Config)
if !enabled {
-// Send status messages to stderr for folks re-using this output.
+// Send status messages to stderr for folks reusing this output.
out.ErrT(style.Enabling, "Enabling dashboard ...")
// Enable the dashboard add-on
err = addons.SetAndSave(cname, "dashboard", "true")


@ -647,7 +647,7 @@ func killProcess(path string) error {
return errors.New("multiple errors encountered while closing mount processes")
}
-// if no errors were encoutered, it's safe to delete pidFile
+// if no errors were encountered, it's safe to delete pidFile
if err := os.Remove(pidPath); err != nil {
return errors.Wrap(err, "while closing mount-pids file")
}


@ -853,7 +853,7 @@ func TestValidateAutoPause(t *testing.T) {
t.Errorf("interval of %q failed validation; expected it to pass: %v", input, err)
}
if err == nil && tc.shouldError {
t.Errorf("interval of %q passed validataion; expected it to fail: %v", input, err)
t.Errorf("interval of %q passed validation; expected it to fail: %v", input, err)
}
}
}


@ -67,7 +67,7 @@ rules:
- apiGroups: [""]
resources: ["services"]
# list is needed by network-policy gadget
-# watch is needed by operators enriching with service informations
+# watch is needed by operators enriching with service information
verbs: ["list", "watch"]
- apiGroups: ["gadget.kinvolk.io"]
resources: ["traces", "traces/status"]


@ -29,7 +29,7 @@ SYSDIG_DEPENDENCIES = \
zlib
# sysdig creates the module Makefile from a template, which contains a
-# single place-holder, KBUILD_FLAGS, wich is only replaced with two
+# single place-holder, KBUILD_FLAGS, which is only replaced with two
# things:
# - debug flags, which we don't care about here,
# - 'sysdig-feature' flags, which are never set, so always empty


@ -62,7 +62,7 @@ validate_userns() {
overlayfs_preferrable() {
if [[ -z "$userns" ]]; then
-# If we are outside userns, we can always assume overlayfs is preferrable
+# If we are outside userns, we can always assume overlayfs is preferable
return 0
fi
@ -107,7 +107,7 @@ configure_containerd() {
if [[ -n "$userns" ]]; then
# enable restrict_oom_score_adj
sed -i 's/restrict_oom_score_adj = false/restrict_oom_score_adj = true/' /etc/containerd/config.toml
-# Use fuse-overlayfs if overlayfs is not preferrable: https://github.com/kubernetes-sigs/kind/issues/2275
+# Use fuse-overlayfs if overlayfs is not preferable: https://github.com/kubernetes-sigs/kind/issues/2275
if [[ -z "$snapshotter" ]] && ! overlayfs_preferrable; then
snapshotter="fuse-overlayfs"
fi


@ -28,7 +28,7 @@ Until now minikube has always been a local single node Kubernetes cluster. Havin
## Design Details
-Since minikube was designed with only a single node cluster in mind, we need to make some fairly significant refactors, the biggest of which is the introduction of the Node object. Each cluster config will be able to have an abitrary number of Node objects, each of which will have attributes that can define it, similar to what [tstromberg proposed](https://github.com/kubernetes/minikube/pull/5874) but with better backwards compatibility with current config.
+Since minikube was designed with only a single node cluster in mind, we need to make some fairly significant refactors, the biggest of which is the introduction of the Node object. Each cluster config will be able to have an arbitrary number of Node objects, each of which will have attributes that can define it, similar to what [tstromberg proposed](https://github.com/kubernetes/minikube/pull/5874) but with better backwards compatibility with current config.
Each node will correspond to one VM (or container) and will connect back to the primary control plane via `kubeadm join`.


@ -55,7 +55,7 @@ As a `keep-alive` implementation, tools will repeat the command to reset the clo
Advantages:
-* Able to re-use all of the existing `pause` and `stop` implementation within minikube.
+* Able to reuse all of the existing `pause` and `stop` implementation within minikube.
* Built-in handling for multiple architectures
* Does not consume memory reserved for the VM


@ -77,7 +77,7 @@ cp -r test/integration/testdata out/
rm -rf out/buildroot
# At this point, the out directory contains the jenkins scripts (populated by jenkins),
-# testdata, and our build output. Push the changes to GCS so that worker nodes can re-use them.
+# testdata, and our build output. Push the changes to GCS so that worker nodes can reuse them.
# -d: delete remote files that don't exist (removed test files, for instance)
# -J: gzip compression


@ -37,7 +37,7 @@ grep -E "^VERSION_MAJOR \\?=" Makefile | grep "${VERSION_MAJOR}"
grep -E "^VERSION_MINOR \\?=" Makefile | grep "${VERSION_MINOR}"
grep -E "^VERSION_BUILD \\?=" Makefile | grep "${VERSION_BUILD}"
-# Force go packages to the Jekins home directory
+# Force go packages to the Jenkins home directory
# export GOPATH=$HOME/go
./hack/jenkins/installers/check_install_golang.sh "/usr/local"
@ -120,7 +120,7 @@ cp "out/minikube-${RPM_VERSION}-0.s390x.rpm" out/minikube-latest.s390x.rpm
echo "Generating tarballs for kicbase images"
# first get the correct tag of the kic base image
KIC_VERSION=$(grep -E "Version =" pkg/drivers/kic/types.go | cut -d \" -f 2 | cut -d "-" -f 1)
-# then generate tarballs for all achitectures
+# then generate tarballs for all architectures
for ARCH in "amd64" "arm64" "arm/v7" "ppc64le" "s390x"
do
SUFFIX=$(echo $ARCH | sed 's/\///g')


@ -142,7 +142,7 @@ func main() {
os.Stdout.Write(submatches[1])
}
-// some components have _ or - in their names vs their make folders, standarizing for automation such as as update-all
+// some components have _ or - in their names vs their make folders, standardizing for automation such as as update-all
func standrizeComponentName(name string) string {
// Convert the component name to lowercase and replace underscores with hyphens
name = strings.ToLower(name)


@ -279,7 +279,7 @@ func parseDHCPdLeasesFile(file io.Reader) ([]DHCPEntry, error) {
return dhcpEntries, scanner.Err()
}
-// parseMAC parse both standard fixeed size MAC address "%02x:..." and the
+// parseMAC parse both standard fixed size MAC address "%02x:..." and the
// variable size MAC address on drawin "%x:...".
func parseMAC(mac string) (net.HardwareAddr, error) {
hw := make(net.HardwareAddr, 6)


@ -99,7 +99,7 @@ func ValidateHelper() error {
out.ErrT(style.Indent, "To configure vment-helper to run without a password, please check the documentation:")
out.ErrT(style.Indent, "https://github.com/nirs/vmnet-helper/#granting-permission-to-run-vmnet-helper")
-// Authenticate the user, updateing the user's cached credentials.
+// Authenticate the user, updating the user's cached credentials.
cmd = exec.Command("sudo", executablePath, "--version")
stdout, err = cmd.Output()
if err != nil {
@ -117,7 +117,7 @@ func ValidateHelper() error {
}
// Start the vmnet-helper child process, creating the vmnet interface for the
-// machine. The helper will create a unix datagram socket at the specfied path.
+// machine. The helper will create a unix datagram socket at the specified path.
// The client (e.g. vfkit) will connect to this socket.
func (h *Helper) Start(socketPath string) error {
args := []string{


@ -458,7 +458,7 @@ func (d *Driver) Stop() error {
crMgr, err := cruntime.New(cruntime.Config{Type: d.NodeConfig.ContainerRuntime, Runner: d.exec})
if err != nil { // won't return error because:
-// even though we can't stop the cotainers inside, we still wanna stop the minikube container itself
+// even though we can't stop the containers inside, we still wanna stop the minikube container itself
klog.Errorf("unable to get container runtime: %v", err)
} else {
containers, err := crMgr.ListContainers(cruntime.ListContainersOptions{Namespaces: constants.DefaultNamespaces})


@ -182,7 +182,7 @@ func CreateContainerNode(p CreateParams) error { //nolint to suppress cyclomatic
"--label", p.ClusterLabel,
// label the node with the role ID
"--label", fmt.Sprintf("%s=%s", nodeRoleLabelKey, p.Role),
-// label th enode wuth the node ID
+// label th enode with the node ID
"--label", p.NodeLabel,
}
// to provide a static IP


@ -45,7 +45,7 @@ func mapsEqual(a, b map[string]string) bool {
func TestParseMapString(t *testing.T) {
cases := map[string]map[string]string{
"Ardvark=1,B=2,Cantaloupe=3": {"Ardvark": "1", "B": "2", "Cantaloupe": "3"},
"Aardvark=1,B=2,Cantaloupe=3": {"Aardvark": "1", "B": "2", "Cantaloupe": "3"},
"A=,B=2,C=": {"A": "", "B": "2", "C": ""},
"": {},
"malformed,good=howdy,manyequals==,": {"good": "howdy"},


@ -97,7 +97,7 @@ func (e *ErrNetworkNotReady) Error() string {
return fmt.Sprintf(errTextFormat, e.Type, e.Reason, e.Message)
}
-// NodePressure verfies that node is not under disk, memory, pid or network pressure.
+// NodePressure verifies that node is not under disk, memory, pid or network pressure.
func NodePressure(cs *kubernetes.Clientset) error {
klog.Info("verifying NodePressure condition ...")
start := time.Now()


@ -526,7 +526,7 @@ func (k *Bootstrapper) WaitForNode(cfg config.ClusterConfig, n config.Node, time
return nil
}
-// if extra waiting for system pods to be ready is required, we need node to be ready beforehands
+// if extra waiting for system pods to be ready is required, we need node to be ready beforehand
if cfg.VerifyComponents[kverify.NodeReadyKey] || cfg.VerifyComponents[kverify.ExtraKey] {
name := bsutil.KubeNodeName(cfg, n)
if err := kverify.WaitNodeCondition(client, name, core.NodeReady, timeout); err != nil {


@ -61,12 +61,12 @@ type StartedCmd struct {
// Runner represents an interface to run commands.
type Runner interface {
// RunCmd runs a cmd of exec.Cmd type. allowing user to set cmd.Stdin, cmd.Stdout,...
-// not all implementors are guaranteed to handle all the properties of cmd.
+// not all implementers are guaranteed to handle all the properties of cmd.
RunCmd(cmd *exec.Cmd) (*RunResult, error)
// StartCmd starts a cmd of exec.Cmd type.
// This func in non-blocking, use WaitCmd to block until complete.
-// Not all implementors are guaranteed to handle all the properties of cmd.
+// Not all implementers are guaranteed to handle all the properties of cmd.
StartCmd(cmd *exec.Cmd) (*StartedCmd, error)
// WaitCmd will prevent further execution until the started command has completed.


@ -40,7 +40,7 @@ type execRunner struct {
sudo bool
}
-// NewExecRunner returns a kicRunner implementor of runner which runs cmds inside a container
+// NewExecRunner returns a kicRunner implementer of runner which runs cmds inside a container
func NewExecRunner(sudo bool) Runner {
return &execRunner{sudo: sudo}
}


@ -43,7 +43,7 @@ type kicRunner struct {
ociBin string
}
-// NewKICRunner returns a kicRunner implementor of runner which runs cmds inside a container
+// NewKICRunner returns a kicRunner implementer of runner which runs cmds inside a container
func NewKICRunner(containerNameOrID string, ociName string) Runner {
return &kicRunner{
nameOrID: containerNameOrID,


@ -294,11 +294,11 @@ func CleanUpOlderPreloads() {
}
for _, file := range files {
-splited := strings.Split(file.Name(), "-")
-if len(splited) < 4 {
+split := strings.Split(file.Name(), "-")
+if len(split) < 4 {
continue
}
-ver := splited[3]
+ver := split[3]
if ver != PreloadVersion {
fn := path.Join(targetDir(), file.Name())
klog.Infof("deleting older generation preload %s", fn)


@ -34,7 +34,7 @@ type Extension struct {
LastUpdate string `json:"last-update"`
}
-// NewExtension returns a minikube formatted kubeconfig's extension block to idenity clusters and contexts
+// NewExtension returns a minikube formatted kubeconfig's extension block to identity clusters and contexts
func NewExtension() *Extension {
return &Extension{
Provider: "minikube.sigs.k8s.io",


@ -106,7 +106,7 @@ func StartHost(api libmachine.API, cfg *config.ClusterConfig, n *config.Node) (*
func engineOptions(cfg config.ClusterConfig) *engine.Options {
// get docker env from user's proxy settings
dockerEnv := proxy.SetDockerEnv()
-// get docker env from user specifiec config
+// get docker env from user specific config
dockerEnv = append(dockerEnv, cfg.DockerEnv...)
uniqueEnvs := util.RemoveDuplicateStrings(dockerEnv)


@ -253,7 +253,7 @@ func Name(index int) string {
// ID returns the appropriate node id from the node name.
// ID of first (primary control-plane) node (with empty name) is 1, so next one would be "m02", etc.
// Eg, "m05" should return "5", regardles if any preceded nodes were deleted.
// Eg, "m05" should return "5", regardless if any preceded nodes were deleted.
func ID(name string) (int, error) {
if name == "" {
return 1, nil


@ -142,7 +142,7 @@ func Start(starter Starter) (*kubeconfig.Settings, error) { // nolint:gocyclo
if err != nil {
return nil, err
}
-// configure CoreDNS concurently from primary control-plane node only and only on first node start
+// configure CoreDNS concurrently from primary control-plane node only and only on first node start
if !starter.PreExists {
wg.Add(1)
go func() {
@ -393,7 +393,7 @@ func Provision(cc *config.ClusterConfig, n *config.Node, delOnFail bool) (comman
beginCacheKubernetesImages(&cacheGroup, cc.KubernetesConfig.ImageRepository, n.KubernetesVersion, cc.KubernetesConfig.ContainerRuntime, cc.Driver)
}
-// Abstraction leakage alert: startHost requires the config to be saved, to satistfy pkg/provision/buildroot.
+// Abstraction leakage alert: startHost requires the config to be saved, to satisfy pkg/provision/buildroot.
// Hence, SaveProfile must be called before startHost, and again afterwards when we know the IP.
if err := config.SaveProfile(viper.GetString(config.ProfileName), cc); err != nil {
return nil, false, nil, nil, errors.Wrap(err, "Failed to save config")


@ -35,7 +35,7 @@ func applyStyle(st style.Enum, useColor bool, format string) (string, bool, bool
format = translate.T(format)
s, ok := style.Config[st]
-// becaue of https://github.com/kubernetes/minikube/issues/21148
+// because of https://github.com/kubernetes/minikube/issues/21148
// will handle making new lines with spinner library itself
if !s.ShouldSpin {
format += "\n"


@ -54,7 +54,7 @@ func TestStep(t *testing.T) {
{style.Fatal, "Fatal: {{.error}}", V{"error": "ugh"}, "💣 Fatal: ugh\n", "X Fatal: ugh\n"},
{style.Issue, "http://i/{{.number}}", V{"number": 10000}, " ▪ http://i/10000\n", " - http://i/10000\n"},
{style.Usage, "raw: {{.one}} {{.two}}", V{"one": "'%'", "two": "%d"}, "💡 raw: '%' %d\n", "* raw: '%' %d\n"},
-// spining steps do not support being unit tested with fake file writer, since passing the fake writer to the spininer library is not testable.
+// spinning steps do not support being unit tested with fake file writer, since passing the fake writer to the spininer library is not testable.
{style.Provisioning, "Installing Kubernetes version {{.version}} ...", V{"version": "v1.13"}, "🌱 ... v1.13 تثبيت Kubernetes الإصدار\n", "* ... v1.13 تثبيت Kubernetes الإصدار\n"},
}
for _, tc := range testCases {


@ -74,7 +74,7 @@ func Exists(pid int, executable string) (bool, error) {
}
// Terminate a process with pid and matching name. Returns os.ErrProcessDone if
-// the process does not exist, or nil if termiation was requested. Caller need
+// the process does not exist, or nil if termination was requested. Caller need
// to wait until the process disappears.
func Terminate(pid int, executable string) error {
exists, err := Exists(pid, executable)


@ -72,7 +72,7 @@ func configure(cc config.ClusterConfig, n config.Node) (interface{}, error) {
}
func status() registry.State {
-// Re-use this function as it's particularly helpful for Windows
+// Reuse this function as it's particularly helpful for Windows
tryPath := driver.VBoxManagePath()
path, err := exec.LookPath(tryPath)
if err != nil {


@ -75,7 +75,7 @@ func daemonize(profiles []string, duration time.Duration) error {
// to start the systemd service, we first have to tell the systemd service how long to sleep for
// before shutting down minikube from within
-// we do this by settig the SLEEP environment variable in the environment file to the users
+// we do this by setting the SLEEP environment variable in the environment file to the users
// requested duration
func startSystemdService(profile string, duration time.Duration) error {
// get ssh runner


@ -96,7 +96,7 @@ func TestFreeSubnet(t *testing.T) {
t.Fatalf("expected to fail since IP non-private but no error thrown")
}
if !strings.Contains(err.Error(), startingSubnet) {
t.Errorf("expected starting subnet of %q to be included in error, but intead got: %v", startingSubnet, err)
t.Errorf("expected starting subnet of %q to be included in error, but instead got: %v", startingSubnet, err)
}
})
}


@ -41,7 +41,7 @@ and minikube binary will be available in ./out/minikube
## build minikube in docker
-if you have issues runninng make due to tooling issue you can run the make in "docker"
+if you have issues running make due to tooling issue you can run the make in "docker"
```shell
MINIKUBE_BUILD_IN_DOCKER=y make
```
@ -74,12 +74,12 @@ make clean
make gomodtidy
```
-## Run Short intergration test (functional test)
+## Run Short integration test (functional test)
```shell
make functional
```
-To see HTML report of the fucntional test you can install [gopogh](https://github.com/medyagh/gopogh)
+To see HTML report of the functional test you can install [gopogh](https://github.com/medyagh/gopogh)
and run
```shell
make html_report


@ -486,7 +486,7 @@ ensures ha (multi-control plane) cluster can start.
deploys an app to ha (multi-control plane) cluster and ensures all nodes can serve traffic.
#### validateHAPingHostFromPods
-uses app previously deplyed by validateDeployAppToHACluster to verify its pods, located on different nodes, can resolve "host.minikube.internal".
+uses app previously deployed by validateDeployAppToHACluster to verify its pods, located on different nodes, can resolve "host.minikube.internal".
#### validateHAAddWorkerNode
uses the minikube node add command to add a worker node to an existing ha (multi-control plane) cluster.
@ -641,7 +641,7 @@ tests that the node name verification works as expected
deploys an app to a multinode cluster and makes sure all nodes can serve traffic
#### validatePodsPingHost
-uses app previously deplyed by validateDeployAppToMultiNode to verify its pods, located on different nodes, can resolve "host.minikube.internal".
+uses app previously deployed by validateDeployAppToMultiNode to verify its pods, located on different nodes, can resolve "host.minikube.internal".
## TestNetworkPlugins
tests all supported CNI options


@ -32,7 +32,7 @@ brew install krunkit
To use the krunkit driver you must install
[vmnet-helper](https://github.com/nirs/vmnet-helper), see installation
-instructions bellow.
+instructions below.
### Install vment-helper


@ -23,7 +23,7 @@ The `nat` network is always available, but it does not provide access
between minikube clusters. To access other clusters or run multi-node
cluster, you need the `vmnet-shared` network. The `vmnet-shared` network
requires [vmnet-helper](https://github.com/nirs/vmnet-helper), see
-installation instructions bellow.
+installation instructions below.
{{% tabs %}}
{{% tab vmnet-shared %}}


@ -22,7 +22,7 @@ Once minikube is installed, start the cluster, by running the *minikube start* c
minikube start
```
-Great! You now have a runnning Kubernetes cluster in your terminal. minikube started a virtual environment for you, and a Kubernetes cluster is now running in that environment.
+Great! You now have a running Kubernetes cluster in your terminal. minikube started a virtual environment for you, and a Kubernetes cluster is now running in that environment.
## Step 2 - Cluster version


@ -56,7 +56,7 @@ Pods that are running inside Kubernetes are running on a private, isolated netwo
We will cover other options on how to expose your application outside the Kubernetes cluster in Module 4.
-The `kubectl` command can create a proxy that will forward communications into the cluster-wide, private network. The proxy can be terminiated by pressing control-C and won't show any output while its running.
+The `kubectl` command can create a proxy that will forward communications into the cluster-wide, private network. The proxy can be terminated by pressing control-C and won't show any output while its running.
We will open a second terminal window to run the proxy.


@ -31,7 +31,7 @@ minikube will try to automatically enable control-plane load-balancing if these
## Caveat
-While a minikube HA cluster will continue to operate (although in degraded mode) after loosing any one control-plane node, keep in mind that there might be some components that are attached only to the primary control-plane node, like the storage-provisioner.
+While a minikube HA cluster will continue to operate (although in degraded mode) after losing any one control-plane node, keep in mind that there might be some components that are attached only to the primary control-plane node, like the storage-provisioner.
## Tutorial
@ -245,7 +245,7 @@ minikube ssh -p ha-demo -- 'find /var/lib/minikube/binaries -iname kubectl -exec
+------------------+---------+-------------+---------------------------+---------------------------+------------+
```
-- Loosing a control-plane node - degrades cluster, but not a problem!
+- Losing a control-plane node - degrades cluster, but not a problem!
```shell
minikube node delete m02 -p ha-demo


@ -38,7 +38,7 @@ xattr -d com.apple.quarantine /Applications/minikube-gui.app
3. If you see the following, click `More info` and then `Run anyway`
-![Windows unreconized app](/images/gui/windows.png)
+![Windows unrecognized app](/images/gui/windows.png)
{{% /windowstab %}}
{{% linuxtab %}}
1. [Download minikube-gui](https://github.com/kubernetes-sigs/minikube-gui/releases/latest/download/minikube-gui-linux.tar.gz)


@ -250,7 +250,7 @@ func validateIngressAddon(ctx context.Context, t *testing.T, profile string) {
}
if _, err := PodWait(ctx, t, profile, "default", "run=nginx", Minutes(8)); err != nil {
t.Fatalf("failed waiting for ngnix pod: %v", err)
t.Fatalf("failed waiting for nginx pod: %v", err)
}
if err := kapi.WaitForService(client, "default", "nginx", true, time.Millisecond*500, Minutes(10)); err != nil {
t.Errorf("failed waiting for nginx service to be up: %v", err)


@ -1595,7 +1595,7 @@ func validateServiceCmdURL(ctx context.Context, t *testing.T, profile string) {
// isUnexpectedServiceError is used to prevent failing ServiceCmd tests on Docker Desktop due to DeadlineExceeded errors.
// Due to networking constraints Docker Desktop requires creating an SSH tunnel to connect to a service. This command has
// to be left running to keep the SSH tunnel connected, so for the ServiceCmd tests we set a timeout context so we can
-// check the output and then the command is terminated, otherwise it would keep runnning forever. So if using Docker
+// check the output and then the command is terminated, otherwise it would keep running forever. So if using Docker
// Desktop and the DeadlineExceeded, consider it an expected error.
func isUnexpectedServiceError(ctx context.Context, err error) bool {
if err == nil {


@ -180,7 +180,7 @@ func validateMountCmd(ctx context.Context, t *testing.T, profile string) { // no
// test that file written from host was read in by the pod via cat /mount-9p/fromhost;
rr, err := Run(t, exec.CommandContext(ctx, Target(), "-p", profile, "ssh", "stat", gp))
if err != nil {
t.Errorf("failed to stat the file %q iniside minikube : args %q: %v", gp, rr.Command(), err)
t.Errorf("failed to stat the file %q inside minikube : args %q: %v", gp, rr.Command(), err)
}
if runtime.GOOS == "windows" {


@ -75,7 +75,7 @@ func TestGuestEnvironment(t *testing.T) {
mount := mount
t.Run(mount, func(t *testing.T) {
t.Parallel()
-rr, err := Run(t, exec.CommandContext(ctx, Targt(), "-p", profile, "ssh", fmt.Sprintf("df -t ext4 %s | grep %s", mount, mount)))
+rr, err := Run(t, exec.CommandContext(ctx, Target(), "-p", profile, "ssh", fmt.Sprintf("df -t ext4 %s | grep %s", mount, mount)))
if err != nil {
t.Errorf("failed to verify existence of %q mount. args %q: %v", mount, rr.Command(), err)
}


@ -193,7 +193,7 @@ func validateHADeployApp(ctx context.Context, t *testing.T, profile string) {
}
}
-// validateHAPingHostFromPods uses app previously deplyed by validateDeployAppToHACluster to verify its pods, located on different nodes, can resolve "host.minikube.internal".
+// validateHAPingHostFromPods uses app previously deployed by validateDeployAppToHACluster to verify its pods, located on different nodes, can resolve "host.minikube.internal".
func validateHAPingHostFromPods(ctx context.Context, t *testing.T, profile string) {
// get Pod names
rr, err := Run(t, exec.CommandContext(ctx, Target(), "-p", profile, "kubectl", "--", "get", "pods", "-o", "jsonpath='{.items[*].metadata.name}'"))


@ -476,7 +476,7 @@ func showPodLogs(ctx context.Context, t *testing.T, profile string, ns string, n
// MaybeParallel sets that the test should run in parallel
func MaybeParallel(t *testing.T) {
t.Helper()
-// TODO: Allow paralellized tests on "none" that do not require independent clusters
+// TODO: Allow parallelized tests on "none" that do not require independent clusters
if NoneDriver() {
return
}


@ -558,7 +558,7 @@ func validateDeployAppToMultiNode(ctx context.Context, t *testing.T, profile str
}
}
-// validatePodsPingHost uses app previously deplyed by validateDeployAppToMultiNode to verify its pods, located on different nodes, can resolve "host.minikube.internal".
+// validatePodsPingHost uses app previously deployed by validateDeployAppToMultiNode to verify its pods, located on different nodes, can resolve "host.minikube.internal".
func validatePodsPingHost(ctx context.Context, t *testing.T, profile string) {
// get Pod names
rr, err := Run(t, exec.CommandContext(ctx, Target(), "kubectl", "-p", profile, "--", "get", "pods", "-o", "jsonpath='{.items[*].metadata.name}'"))


@ -33,7 +33,7 @@ func CreateTarStream(srcPath, dockerfilePath string) (io.ReadCloser, error) {
// then make sure we send both files over to the daemon
// because Dockerfile is, obviously, needed no matter what, and
// .dockerignore is needed to know if either one needs to be
-// removed. The deamon will remove them for us, if needed, after it
+// removed. The daemon will remove them for us, if needed, after it
// parses the Dockerfile.
//
// https://github.com/docker/docker/issues/8330


@ -193,7 +193,7 @@ type SrvReq struct {
prev, next *SrvReq
}
-// The Start method should be called once the file server implementor
+// The Start method should be called once the file server implementer
// initializes the Srv struct with the preferred values. It sets default
// values to the fields that are not initialized and creates the goroutines
// required for the server's operation. The method receives an empty