This section describes how to resolve common problems you might encounter when installing and using the Container Storage Interface (CSI) Storage Plugin.
Run the following commands to get the pods deployed for the CSI plugin and provisioner:
Loopback:
kubectl get pods -n mapr-csi
NFS:
kubectl get pods -n mapr-nfscsi
The installation is considered successful if the above command shows the pods in the "Running" state. For example, your output should look similar to the following when the CSI plugin is deployed on three worker nodes:
Loopback:
mapr-csi   csi-controller-kdf-0       5/5   Running   0   4h25m
mapr-csi   csi-nodeplugin-kdf-2kfrf   3/3   Running   0   4h25m
mapr-csi   csi-nodeplugin-kdf-lq5nw   3/3   Running   0   4h25m
mapr-csi   csi-nodeplugin-kdf-pkrzt   3/3   Running   0   4h25m
NFS:
csi-controller-nfskdf-0       7/7   Running   0   22h
csi-nodeplugin-nfskdf-5rjt2   3/3   Running   0   18h
csi-nodeplugin-nfskdf-7d9cs   3/3   Running   0   22h
csi-nodeplugin-nfskdf-qw7kg   3/3   Running   0   22h
The above output shows the following:
csi-nodeplugin-kdf-*: DaemonSet pods deployed on all the Kubernetes worker nodes
csi-controller-kdf-0: StatefulSet pod deployed on a single Kubernetes worker node
csi-nodeplugin-nfskdf-*: DaemonSet pods deployed on all the Kubernetes worker nodes
csi-controller-nfskdf-0: StatefulSet pod deployed on a single Kubernetes worker node
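If any pod is not in the Running state, two quick checks usually narrow down the cause; a minimal sketch, using the Loopback namespace and a node plugin pod name from the sample output above:
# show which worker node each pod was scheduled on
kubectl get pods -n mapr-csi -o wide
# show events and per-container state for a failing pod
kubectl describe pod csi-nodeplugin-kdf-2kfrf -n mapr-csi
The Events section at the end of the describe output typically explains failures such as image pull errors or unsatisfied scheduling constraints.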
If the pods show a failure in the deployment, run the following kubectl command to see the container logs:
Loopback:
kubectl logs <csi-nodeplugin-*> -n mapr-csi -c <nodeplugin-pod-container>
NFS:
kubectl logs <csi-nodeplugin-*> -n mapr-nfscsi -c <nodeplugin-pod-container>
Here, replace <nodeplugin-pod-container> with the container which is failing. You can also run the following kubectl command to see the controller logs:
Loopback:
kubectl logs csi-controller-kdf-0 -n mapr-csi -c <controller-pod-container>
NFS:
kubectl logs csi-controller-nfskdf-0 -n mapr-nfscsi -c <controller-pod-container>
Here, replace <controller-pod-container> with the container which is failing.
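Both the node plugin and controller pods run several containers, so it helps to list the container names before choosing a value for -c; a sketch against the Loopback controller pod:
# print the names of all containers in the controller pod
kubectl get pod csi-controller-kdf-0 -n mapr-csi -o jsonpath='{.spec.containers[*].name}'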
Check the provisioner log for any provisioner errors:
Loopback:
tail -100f /var/log/csi-maprkdf/csi-provisioner-<version>.log
NFS:
tail -100f /var/log/csi-maprkdf/csi-nfsprovisioner-<version>.log
Check the CSI Storage Plugin log for any mount/unmount errors:
Loopback:
tail -100f /var/log/csi-maprkdf/csi-plugin-<version>.log
NFS:
tail -100f /var/log/csi-maprkdf/csi-nfsplugin-<version>.log
If you don't see any errors, check the kubelet logs on the node where the pod is scheduled to run, and look for MapR CSI Storage Plugin entries that indicate specific errors.
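On nodes where the kubelet runs as a systemd service, its logs are usually available through journalctl; a sketch, assuming a systemd-managed kubelet:
# follow the kubelet log and filter for CSI-related messages
journalctl -u kubelet -f | grep -i csi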
Determine the kubelet path for the Kubernetes deployment from the kubelet process running with --root-dir. The --root-dir option is a string that contains the directory path for managing kubelet files (such as volume mounts) and defaults to /var/lib/kubelet. If the Kubernetes environment uses a different kubelet path, modify the CSI driver deployment .yaml file with the new path and redeploy the CSI Storage Plugin.
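One way to confirm the kubelet root directory on a worker node is to inspect the running kubelet command line; a minimal sketch (the bracketed pattern keeps grep from matching its own process):
# print the kubelet command line; if --root-dir is absent, the default /var/lib/kubelet applies
ps -ef | grep '[k]ubelet'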
To troubleshoot snapshot provisioning, check the provisioner log for errors:
tail -100f /var/log/csi-maprkdf/csi-provisioner-<version>.log
If there are no errors, run the following kubectl command to check the snapshot:
kubectl describe volumesnapshot.snapshot.storage.k8s.io <snapshot-name> -n <namespace-name>
Here:
<snapshot-name>: Name of the VolumeSnapshot object defined in the yaml
<namespace-name>: Namespace where the VolumeSnapshot object is created
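If the describe output shows no obvious error, inspecting the cluster-scoped VolumeSnapshotContent object bound to the snapshot can surface provisioning failures; a sketch, assuming the external snapshotter CRDs are installed in the cluster:
# locate the content object bound to the snapshot, then inspect its status and events
kubectl get volumesnapshotcontent
kubectl describe volumesnapshotcontent <volumesnapshotcontent-name>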
"no space left on device" errors when writing to new directories for a new volume mount
request. If --maxvolumepernode is configured to be greater than 20 and
underlying docker is using devicemapper storagedriver, do the following to increase the
storage size:
1. In the /etc/sysconfig/docker-storage file, add --storage-opt dm.basesize=50G under the DOCKER_STORAGE_OPTIONS section.
2. Restart Docker so the new base size takes effect.
3. Confirm the change:
docker info | grep "Base Device Size"
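The dm.basesize workaround applies only when Docker is actually using the devicemapper storage driver; a quick way to confirm the driver in use on a node (a sketch; output wording can vary by Docker version):
# confirm the storage driver in use
docker info | grep -i 'storage driver'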