Thibault Martin: TIL that Minikube mounts volumes as root
news.movim.eu / PlanetGnome • 7:00 • 1 minute
When I have to play with a container image I have never met before, I like to deploy it on a test cluster to poke and prod it. I usually did that on a k3s cluster, but recently I've moved to Minikube to bring my test cluster with me when I'm on the go.
Minikube is a tiny one-node Kubernetes cluster meant to run on development machines. It's useful to test `Deployments` or `StatefulSets` with images you are not familiar with and build proper Helm charts from them.

It provides volumes of the `hostPath` type by default. The major caveat of `hostPath` volumes is that they're **mounted as root by default**.
I usually handle mismatched ownership with a `securityContext` like the following to instruct the container to run with a specific UID and GID, and to make the volume owned by a specific group. Typically in a `StatefulSet` it looks like this:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
  # [...]
spec:
  # [...]
  template:
    # [...]
    spec:
      securityContext:
        runAsUser: 10001
        runAsGroup: 10001
        fsGroup: 10001
      containers:
        - name: myapp
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        # [...]
```
In this configuration:

- Processes in the Pod `myapp` will run with UID 10001 and GID 10001.
- The `/data` directory mounted from the `data` volume will belong to group 10001 as well.
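On a cluster where the `securityContext` is honoured, this is easy to confirm from inside the Pod. A quick check, assuming the `StatefulSet` above produced a Pod named `myapp-0`:

```shell
# Print the UID/GID the container's processes actually run with.
# With the securityContext above, this should report uid=10001 gid=10001.
kubectl exec myapp-0 -- id
```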
The `securityContext` usually solves the problem, but that's not how `hostPath` works. For `hostPath` volumes, the `securityContext.fsGroup` property is silently ignored.
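You can see the symptom for yourself by inspecting the mount from inside the Pod (again assuming the Pod is named `myapp-0`): despite `fsGroup: 10001`, the directory keeps its host ownership.

```shell
# List the mount point with numeric owner/group IDs.
# On Minikube's hostPath volumes, this still shows UID 0 / GID 0 (root),
# even though the Pod's securityContext sets fsGroup: 10001.
kubectl exec myapp-0 -- ls -ldn /data
```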
> [!success] Init Container to the Rescue!
> The solution in this specific case is to run an `initContainer` as root to `chown` the volume mounts to the unprivileged user.

In practice it will look like this:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
  # [...]
spec:
  # [...]
  template:
    # [...]
    spec:
      securityContext:
        runAsUser: 10001
        runAsGroup: 10001
        fsGroup: 10001
      initContainers:
        - name: fix-perms
          image: busybox
          command: ["sh", "-c", "chown -R 10001:10001 /data"]
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: data
              mountPath: /data
      containers:
        - name: myapp
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        # [...]
```
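After the `initContainer` has run, the same ownership check should now show the unprivileged IDs (Pod name `myapp-0` assumed, as before):

```shell
# The fix-perms initContainer chowns /data before the main container starts,
# so the numeric listing should now report UID 10001 / GID 10001.
kubectl exec myapp-0 -- ls -ldn /data
```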
It took me a little while to figure it out, because I was used to testing my `StatefulSets` on k3s. K3s uses a local path provisioner, which gives me `local` volumes, not `hostPath` ones like Minikube.
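A quick way to spot which behaviour you'll get is to look at the cluster's default StorageClass. On a stock install, k3s ships Rancher's `local-path` provisioner while Minikube ships its own hostPath-based one (names below are the defaults as I understand them):

```shell
# k3s default:      local-path  (rancher.io/local-path)
# Minikube default: standard    (k8s.io/minikube-hostpath)
kubectl get storageclass
```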
In production I don't need the `initContainer` to fix permissions, since I'm deploying this on an EKS cluster.