---
title: Testing of CSI drivers
date: 2020-01-08
author: >
  Patrick Ohly (Intel)
---

When developing a [Container Storage Interface (CSI) driver](https://kubernetes-csi.github.io/docs/), it is useful to leverage as much prior work as possible. This includes source code (like the [sample CSI hostpath driver](https://github.com/kubernetes-csi/csi-driver-host-path/)) but also existing tests. Besides saving time, using tests written by someone else has the advantage that they can point out aspects of the specification that might otherwise have been overlooked.

An earlier blog post about [end-to-end testing](https://kubernetes.io/blog/2019/03/22/kubernetes-end-to-end-testing-for-everyone/) already showed how to use the [Kubernetes storage tests](https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/testsuites) for testing of a third-party CSI driver. That approach makes sense when the goal is to also add custom E2E tests, but it requires quite a bit of effort for setting up and maintaining a test suite. When the goal is to merely run the existing tests, then there are simpler approaches. This blog post introduces them.

## Sanity testing

[csi-test sanity](https://github.com/kubernetes-csi/csi-test/tree/master/pkg/sanity) ensures that a CSI driver conforms to the CSI specification by calling the gRPC methods in various ways and checking that the outcome is as required. Despite its current hosting under the Kubernetes-CSI organization, it is completely independent of Kubernetes. Tests connect to a running CSI driver through its Unix domain socket, so although the tests are written in Go, the driver itself can be implemented in any language.

The main [README](https://github.com/kubernetes-csi/csi-test/blob/master/pkg/sanity/README.md) explains how to include those tests in an existing Go test suite. The simpler alternative is to just invoke the [csi-sanity](https://github.com/kubernetes-csi/csi-test/tree/master/cmd/csi-sanity) command.

### Installation

Starting with csi-test v3.0.0, you can build the `csi-sanity` command with `go get github.com/kubernetes-csi/csi-test/cmd/csi-sanity` and you'll find the compiled binary in `$GOPATH/bin/csi-sanity`.

`go get` always builds the latest revision from the master branch. To build a certain release, [get the source code](https://github.com/kubernetes-csi/csi-test/releases) and run `make -C cmd/csi-sanity`. This produces `cmd/csi-sanity/csi-sanity`.

### Usage

The `csi-sanity` binary is a full [Ginkgo test suite](http://onsi.github.io/ginkgo/) and thus has the usual `-ginkgo.*` command line flags. In particular, `-ginkgo.focus` and `-ginkgo.skip` can be used to select which tests are run and which are skipped.

During a test run, `csi-sanity` simulates the behavior of a container orchestrator (CO) by creating staging and target directories as required by the CSI spec and calling a CSI driver via gRPC. The driver must be started before invoking `csi-sanity`. Although the tests currently only check the gRPC return codes, that might change, so the driver really should make the changes requested by a call, like mounting a filesystem. That may mean that it has to run as root.

At least one [gRPC endpoint](https://github.com/grpc/grpc/blob/master/doc/naming.md) must be specified via the `-csi.endpoint` parameter when invoking `csi-sanity`, either as an absolute path (`unix:/tmp/csi.sock`) for a Unix domain socket or as host name plus port (`dns:///my-machine:9000`) for TCP. `csi-sanity` then uses that endpoint for both node and controller operations.
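For a driver that runs on the same host, a minimal invocation can therefore look like the following sketch; the socket path is only an example and has to match wherever your driver actually listens:

```
# Run the whole sanity suite verbosely against a locally running driver.
# unix:/tmp/csi.sock is a placeholder; adjust it for your driver.
$ csi-sanity -ginkgo.v \
             -csi.endpoint unix:/tmp/csi.sock
```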
A separate endpoint for controller operations can be specified with `-csi.controllerendpoint`. Directories are created in `/tmp` by default. This can be changed via `-csi.mountdir` and `-csi.stagingdir`.

Some drivers cannot be deployed such that everything is guaranteed to run on the same host. In such a case, custom scripts have to be used to handle directories: they log into the host where the CSI node controller runs and create or remove the directories there.

For example, during CI testing the [CSI hostpath example driver](https://github.com/kubernetes-csi/csi-driver-host-path) gets deployed on a real Kubernetes cluster before invoking `csi-sanity`, and then `csi-sanity` connects to it through port forwarding provided by [`socat`](https://github.com/kubernetes-csi/csi-driver-host-path/blob/v1.2.0/deploy/kubernetes-1.16/hostpath/csi-hostpath-testing.yaml). [Scripts](https://github.com/kubernetes-csi/csi-driver-host-path/blob/v1.2.0/release-tools/prow.sh#L808-L859) are used to create and remove the directories.

Here's how one can replicate that, using the v1.2.0 release of the CSI hostpath driver:

```
$ cd csi-driver-host-path
$ git describe --tags HEAD
v1.2.0
$ kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
127.0.0.1   Ready    <none>   42m   v1.16.0
$ deploy/kubernetes-1.16/deploy-hostpath.sh
applying RBAC rules
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-provisioner/v1.4.0/deploy/kubernetes/rbac.yaml
...
deploying hostpath components
deploy/kubernetes-1.16/hostpath/csi-hostpath-attacher.yaml
   using image: quay.io/k8scsi/csi-attacher:v2.0.0
service/csi-hostpath-attacher created
statefulset.apps/csi-hostpath-attacher created
deploy/kubernetes-1.16/hostpath/csi-hostpath-driverinfo.yaml
csidriver.storage.k8s.io/hostpath.csi.k8s.io created
deploy/kubernetes-1.16/hostpath/csi-hostpath-plugin.yaml
   using image: quay.io/k8scsi/csi-node-driver-registrar:v1.2.0
   using image: quay.io/k8scsi/hostpathplugin:v1.2.0
   using image: quay.io/k8scsi/livenessprobe:v1.1.0
...
service/hostpath-service created
statefulset.apps/csi-hostpath-socat created
07:38:46 waiting for hostpath deployment to complete, attempt #0
deploying snapshotclass
volumesnapshotclass.snapshot.storage.k8s.io/csi-hostpath-snapclass created
$ cat >mkdir_in_pod.sh <<EOF
#!/bin/sh
kubectl exec csi-hostpathplugin-0 -c hostpath -- mkdir "\$@"
EOF
$ cat >rmdir_in_pod.sh <<EOF
#!/bin/sh
kubectl exec csi-hostpathplugin-0 -c hostpath -- rmdir "\$@"
EOF
$ chmod u+x *_in_pod.sh
$ csi-sanity -ginkgo.v \
             -csi.endpoint dns:///127.0.0.1:$(kubectl get "services/hostpath-service" -o "jsonpath={..nodePort}") \
             -csi.createstagingpathcmd ./mkdir_in_pod.sh \
             -csi.createmountpathcmd ./mkdir_in_pod.sh \
             -csi.removestagingpathcmd ./rmdir_in_pod.sh \
             -csi.removemountpathcmd ./rmdir_in_pod.sh
...
```

## End-to-end testing

While the sanity tests exercise a driver in isolation through its gRPC interface, the Kubernetes end-to-end (E2E) storage tests verify a driver that is deployed in a real Kubernetes cluster, the same way that users would use it.

### Installation

For each Kubernetes release, test tar archives get published. They are not referenced in the release notes, so one has to construct the URL from the release version: `https://dl.k8s.io/<version>/kubernetes-test-linux-amd64.tar.gz` (like for [v1.16.0](https://dl.k8s.io/v1.16.0/kubernetes-test-linux-amd64.tar.gz)). These include an `e2e.test` binary for Linux on x86-64. Archives for other platforms are also available, see [this KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-testing/20190118-breaking-apart-the-kubernetes-test-tarball.md#proposal). The `e2e.test` binary is completely self-contained, so one can "install" it and the [`ginkgo` test runner](https://onsi.github.io/ginkgo/) with:

```
curl --location https://dl.k8s.io/v1.16.0/kubernetes-test-linux-amd64.tar.gz | \
   tar --strip-components=3 -zxf - kubernetes/test/bin/e2e.test kubernetes/test/bin/ginkgo
```

Each `e2e.test` binary contains tests that match the features available in the corresponding release. In particular, the `[Feature: xyz]` tags change between releases: they separate tests of alpha features from tests of non-alpha features. Also, the tests from an older release might rely on APIs that were removed in more recent Kubernetes releases. To avoid problems, it's best to simply use the `e2e.test` binary that matches the Kubernetes release that is used for testing.
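One way to follow that advice is to derive the download URL from the version that the cluster itself reports. Here is a sketch, assuming that `jq` is installed and that the cluster runs an unmodified Kubernetes release (a `gitVersion` with a vendor suffix would not map to a published archive):

```
# Extract the server version (something like "v1.16.0") and download the
# matching e2e.test and ginkgo binaries.
version=$(kubectl version -o json | jq -r .serverVersion.gitVersion)
curl --location https://dl.k8s.io/$version/kubernetes-test-linux-amd64.tar.gz | \
   tar --strip-components=3 -zxf - kubernetes/test/bin/e2e.test kubernetes/test/bin/ginkgo
```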
### Usage

Not all features of a CSI driver can be discovered through the Kubernetes API. Therefore a configuration file in YAML or JSON format is needed which describes the driver that is to be tested. That file is used to populate [the driverDefinition struct](https://github.com/kubernetes/kubernetes/blob/v1.16.0/test/e2e/storage/external/external.go#L142-L211) and [the DriverInfo struct](https://github.com/kubernetes/kubernetes/blob/v1.16.0/test/e2e/storage/testsuites/testdriver.go#L139-L185) that is embedded inside it. For detailed usage instructions of individual fields refer to those structs.

A word of warning: tests are often only run when certain fields are set, and the file parser does not warn about unknown fields, so always check that the file really matches those structs.

Here is an example that tests the [`csi-driver-host-path`](https://github.com/kubernetes-csi/csi-driver-host-path):

```
$ cat >test-driver.yaml <<EOF
StorageClass:
  FromName: true
SnapshotClass:
  FromName: true
DriverInfo:
  Name: hostpath.csi.k8s.io
  Capabilities:
    block: true
    controllerExpansion: true
    exec: true
    multipods: true
    nodeExpansion: true
    persistence: true
    singleNodeVolume: true
    snapshotDataSource: true
EOF
$ KUBECONFIG=/var/run/kubernetes/admin.kubeconfig ./e2e.test -ginkgo.focus='External.Storage' -storage.testdriver=test-driver.yaml
...
INFO: >>> kubeConfig: /var/run/kubernetes/admin.kubeconfig
Oct  8 17:17:42.241: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
...
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: hostpath.csi.k8s.io] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  should access to two volumes with different volume mode and retain data across pod recreation on the same node
  /workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:191
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
...
```

You can use `ginkgo` to run some kinds of tests in parallel. Tests of alpha features or tests that by design have to run sequentially then need to be run separately:

```
$ ./ginkgo -p -v \
        -focus='External.Storage' \
        -skip='\[Feature:|\[Disruptive\]|\[Serial\]' \
        ./e2e.test \
        -- \
        -storage.testdriver=test-driver.yaml
$ ./ginkgo -v \
        -focus='External.Storage.*(\[Feature:|\[Disruptive\]|\[Serial\])' \
        ./e2e.test \
        -- \
        -storage.testdriver=test-driver.yaml
```

## Getting involved

Both the Kubernetes storage tests and the sanity tests are meant to be applicable to arbitrary CSI drivers. But perhaps the tests are based on additional assumptions, and your driver does not pass the testing even though it complies with the CSI specification. If that happens, then please file issues (links below). These are open source projects which depend on the help of those using them, so once a problem has been acknowledged, a pull request addressing it will be highly welcome. The same applies to writing new tests.

The following searches in the issue trackers select issues that have been marked specifically as something that needs someone's help:

- [csi-test](https://github.com/kubernetes-csi/csi-test/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22)
- [Kubernetes](https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22+label%3Asig%2Fstorage+)

Happy testing! May the issues it finds be few and easy to fix.