---
layout: blog
title: "Introducing Windows CSI support alpha for Kubernetes"
date: 2020-04-03
slug: kubernetes-1-18-feature-windows-csi-support-alpha
---

**Authors:** Deep Debroy [Docker], Jing Xu [Google], Krishnakumar R (KK) [Microsoft]

The alpha version of [CSI Proxy][csi-proxy] for Windows is being released with Kubernetes 1.18. CSI Proxy enables CSI drivers on Windows by allowing containers on Windows to perform privileged storage operations.

## Background

Container Storage Interface (CSI) for Kubernetes went GA in the Kubernetes 1.13 release. CSI has become the standard for exposing block and file storage to containerized workloads on Container Orchestration systems (COs) like Kubernetes. It enables third-party storage providers to write and deploy plugins without the need to alter the core Kubernetes codebase. Since all new storage features will utilize CSI, it is important to get CSI drivers working on Windows.

A CSI driver in Kubernetes has two main components: a controller plugin and a node plugin. The controller plugin generally does not need direct access to the host and can perform all of its operations through the Kubernetes API and external control plane services (e.g. a cloud storage service). The node plugin, however, requires direct access to the host to make block devices and/or file systems available to the Kubernetes kubelet. This was previously not possible for containers on Windows. With the release of [CSI Proxy][csi-proxy], CSI drivers can now perform storage operations on the node, which in turn enables containerized CSI drivers to run on Windows.

## CSI support for Windows clusters

CSI drivers (e.g. AzureDisk, GCE PD, etc.) are recommended to be deployed as containers. A CSI driver's node plugin typically runs on every worker node in the cluster (as a DaemonSet). Node plugin containers need to run with elevated privileges to perform storage related operations, but Windows currently does not support privileged containers. [CSI Proxy][csi-proxy] solves this problem: node plugins can now be deployed as unprivileged pods that use the proxy to perform privileged storage operations on the node.

## Node plugin interactions with CSI Proxy

The design of the CSI proxy is captured in this [KEP][kep]. The following diagram depicts the interactions between the CSI node plugin and the CSI proxy.
The CSI proxy exposes each API group over a dedicated Windows named pipe, and communication over these pipes uses gRPC. For example, the filesystem APIs are served under the pipe \\.\pipe\csi-proxy-filesystem-v1alpha1, the volume APIs under the \\.\pipe\csi-proxy-volume-v1alpha1, and so on.
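To make this concrete, here is a minimal sketch of how a node plugin could establish a gRPC connection over one of these pipes. It assumes Microsoft's go-winio package for named pipe dialing together with standard gRPC-Go; the CSI proxy project also ships a Go client library that wraps this plumbing, so a real driver would typically not hand-roll it.

```go
package main

import (
	"context"
	"log"
	"net"
	"time"

	winio "github.com/Microsoft/go-winio"
	"google.golang.org/grpc"
)

func main() {
	const pipe = `\\.\pipe\csi-proxy-filesystem-v1alpha1`

	// gRPC dials TCP or unix sockets by default; the custom dialer routes
	// the connection over the Windows named pipe instead.
	conn, err := grpc.Dial(pipe,
		grpc.WithInsecure(),
		grpc.WithContextDialer(func(ctx context.Context, target string) (net.Conn, error) {
			timeout := 30 * time.Second
			return winio.DialPipe(target, &timeout)
		}),
	)
	if err != nil {
		log.Fatalf("connecting to CSI proxy: %v", err)
	}
	defer conn.Close()

	// The generated client for the corresponding API group (filesystem,
	// volume, disk, ...) can now be layered on top of this connection.
	log.Printf("connected to %s", pipe)
}
```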
From each API group service, calls are routed to the host API layer. The host API layer calls into the host Windows OS through either PowerShell or Go standard library calls. For example, when the filesystem API [Rmdir][rmdir] is called, the API group service decodes the gRPC structure [RmdirRequest][rmdir-req], determines the directory to be removed, and calls into the host API layer. This results in a call to [os.Remove][os-rem], a Go standard library function, which performs the remove operation.
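The server side of that flow can be pictured with a simplified sketch. The types below are illustrative stand-ins for the generated gRPC structures, not the actual CSI proxy source; the point is that the API group service validates the decoded request, and the host API layer bottoms out in a standard library call.

```go
package filesystem

import (
	"context"
	"fmt"
	"os"
)

// RmdirRequest is a simplified stand-in for the decoded gRPC request; the
// directory to remove travels in the Path field.
type RmdirRequest struct {
	Path string
}

// RmdirResponse is the (empty) reply on success.
type RmdirResponse struct{}

// Server plays the role of the filesystem API group service.
type Server struct{}

// Rmdir validates the decoded request and delegates to the host API layer.
func (s *Server) Rmdir(ctx context.Context, req *RmdirRequest) (*RmdirResponse, error) {
	if req.Path == "" {
		return nil, fmt.Errorf("rmdir: empty path")
	}
	// Host API layer: the removal bottoms out in os.Remove, a Go standard
	// library call executed on the host.
	if err := os.Remove(req.Path); err != nil {
		return nil, err
	}
	return &RmdirResponse{}, nil
}
```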
## Control flow details
The following figure uses the CSI call NodeStageVolume as an example to explain the interaction between the kubelet, the CSI plugin, and the CSI proxy when provisioning a fresh volume. After the node plugin receives a CSI RPC call, it makes a few calls to the CSI proxy accordingly. As a result of the NodeStageVolume call, the required disk is first identified using one of the Disk API calls: ListDiskLocations (in the AzureDisk driver) or GetDiskNumberByName (in the GCE PD driver). If the disk is not partitioned, PartitionDisk (Disk API group) is called. Subsequently, Volume API calls such as ListVolumesOnDisk, FormatVolume, and MountVolume perform the rest of the required operations. Similar operations are performed for NodeUnstageVolume, NodePublishVolume, NodeUnpublishVolume, etc.
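That sequence can be condensed into a short sketch. The DiskAPI and VolumeAPI interfaces below are illustrative stand-ins for the generated CSI proxy clients rather than their real signatures, and IsDiskPartitioned is a hypothetical helper for the "is the disk partitioned" check; the call names otherwise follow the APIs mentioned above.

```go
package driver

import (
	"context"
	"fmt"
)

// DiskAPI stands in for the CSI proxy Disk API group client.
// IsDiskPartitioned is a hypothetical helper for the partition check.
type DiskAPI interface {
	GetDiskNumberByName(ctx context.Context, name string) (uint32, error)
	IsDiskPartitioned(ctx context.Context, diskNumber uint32) (bool, error)
	PartitionDisk(ctx context.Context, diskNumber uint32) error
}

// VolumeAPI stands in for the CSI proxy Volume API group client.
type VolumeAPI interface {
	ListVolumesOnDisk(ctx context.Context, diskNumber uint32) ([]string, error)
	FormatVolume(ctx context.Context, volumeID string) error
	MountVolume(ctx context.Context, volumeID, targetPath string) error
}

// NodeStageVolume drives the control flow described above for a fresh volume.
func NodeStageVolume(ctx context.Context, disks DiskAPI, vols VolumeAPI, diskName, stagingPath string) error {
	// 1. Identify the required disk (GetDiskNumberByName here, as in the
	//    GCE PD driver; the AzureDisk driver uses ListDiskLocations).
	diskNumber, err := disks.GetDiskNumberByName(ctx, diskName)
	if err != nil {
		return fmt.Errorf("locating disk %q: %w", diskName, err)
	}

	// 2. Partition the disk if it has not been partitioned yet.
	partitioned, err := disks.IsDiskPartitioned(ctx, diskNumber)
	if err != nil {
		return err
	}
	if !partitioned {
		if err := disks.PartitionDisk(ctx, diskNumber); err != nil {
			return err
		}
	}

	// 3. Find the volume on the disk, format it, and mount it at the
	//    staging path.
	volumeIDs, err := vols.ListVolumesOnDisk(ctx, diskNumber)
	if err != nil {
		return err
	}
	if len(volumeIDs) == 0 {
		return fmt.Errorf("no volumes found on disk %d", diskNumber)
	}
	if err := vols.FormatVolume(ctx, volumeIDs[0]); err != nil {
		return err
	}
	return vols.MountVolume(ctx, volumeIDs[0], stagingPath)
}
```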