Why GitLab Fails with “Operation Not Permitted” on Windows Using Podman

If you run GitLab (or any application that modifies file permissions or ownership of files in volume mounts) in a container, you may see the installation fail with an error like:

chgrp: changing group of '/var/opt/gitlab/git-data/repositories': Operation not permitted

This error prevents GitLab from starting. Here’s why it happens—and the simplest way to fix it.

A local GitLab installation was needed to troubleshoot and verify several production-critical queries. This setup is not intended for production use; it is for testing and troubleshooting only.

Also, Windows is not officially supported: the GitLab images have known compatibility issues with volume permissions and potentially other unknown issues (although I haven’t noticed any problems during a week of use).

Both Podman Desktop and Docker Desktop run containers inside WSL2.
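
Both engines show up as WSL distributions on the Windows host, which you can list with:

wsl --list --verbose

With both desktops installed you will typically see docker-desktop and podman-machine-default among the listed version 2 distributions.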

The problem appears when you bind-mount a Windows directory (NTFS) into the container, for example:

E:\volumes\gitlab\data → /var/opt/gitlab
podman run --detach --hostname gitlab.example.com `
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://gitlab.example.com'" `
--publish 443:443 --publish 80:80 --publish 22:22 `
--name gitlab --restart always `
--volume /e/volumes/gitlab/config:/etc/gitlab `
--volume /e/volumes/gitlab/logs:/var/log/gitlab `
--volume /e/volumes/gitlab/data:/var/opt/gitlab `
gitlab/gitlab-ce:18.5.4-ce.0

The same command works fine with Docker Desktop (E: is an external disk drive available to the Windows host).

What goes wrong

The failure flow looks like this:

  • GitLab requires real Linux filesystem permissions and ownership
  • During startup, it runs chown and chgrp on its data directories
  • Windows filesystems (NTFS) do not support Linux UID/GID ownership
  • WSL2 cannot translate these permission changes correctly
  • The operation fails, and GitLab refuses to start

If both Podman and Docker are based on WSL2, why does Docker run GitLab on an E: drive without breaking a sweat? The root cause is the difference in how Docker and Podman translate file permissions.

Docker: if GitLab calls chgrp, WSL’s drvfs layer intercepts the call. It doesn’t actually change the Windows folder, but it records the “permission change” in a hidden metadata area (NTFS Extended Attributes).

/etc/wsl.conf content of the Docker Desktop engine:

[automount]
root = /mnt/host
options = "metadata"
[interop]
enabled = true

When metadata is enabled as a mount option in WSL, extended attributes on Windows NT files can be added and interpreted to supply Linux file system permissions.

Podman: mounts Windows drives using the same standard WSL2 9p protocol and drvfs driver as Docker, but without the metadata mapping enabled by default. When GitLab (or your app) tries to set its required ownership, the mount simply refuses, and the container crashes.

Here is the mount output for the E: drive from inside the Podman machine:

mount | grep " /mnt/e "
E:\ on /mnt/e type 9p (rw,noatime,aname=drvfs;path=E:\;uid=1000;gid=1000;symlinkroot=/mnt/,cache=5,access=client,msize=65536,trans=fd,rfd=5,wfd=5)

There is no metadata option on the mount because the Podman machine’s wsl.conf is minimal:

[user]
default=user
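
You can reproduce the failure without GitLab by trying the same kind of operation directly inside the Podman machine (a quick check, assuming the E: drive is mounted at /mnt/e as in the output above):

podman machine ssh
touch /mnt/e/perm-test
chgrp 1000 /mnt/e/perm-test   # expected to fail with "Operation not permitted" on the 9p mount
rm /mnt/e/perm-test
exit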

Solution

The easiest solution is to use named volumes (universal and faster). Alternatively, keep the bind mount: with Docker it works as is (but is slower); with Podman it additionally requires a custom wsl.conf (also slower).

Named volumes:

podman run --detach --hostname gitlab.example.com `
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://gitlab.example.com'" `
--publish 443:443 --publish 80:80 --publish 22:22 `
--name gitlab --restart always `
--volume gitlab-config:/etc/gitlab `
--volume gitlab-logs:/var/log/gitlab `
--volume gitlab-data:/var/opt/gitlab `
gitlab/gitlab-ce:18.5.4-ce.0

With named volumes, the data is stored at /var/lib/containers/storage/volumes (inside the Podman machine in this example).
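
You can confirm the exact location with podman volume inspect (using the volume names from the command above):

podman volume inspect gitlab-data --format '{{ .Mountpoint }}'
/var/lib/containers/storage/volumes/gitlab-data/_data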

The volume data can be accessed from Windows Explorer as well:

  • Docker: \\wsl$\docker-desktop\mnt\docker-desktop-disk\data\docker\volumes
  • Podman: \\wsl$\podman-machine-default\var\lib\containers\storage\volumes

Bind mounts:

docker run --detach `
  --hostname gitlab.example.com `
  --publish 443:443 --publish 80:80 --publish 22:22 `
  --name gitlab-bind-mount `
  --restart always `
  --volume /e/volumes/gitlab/config:/etc/gitlab `
  --volume /e/volumes/gitlab/logs:/var/log/gitlab `
  --volume /e/volumes/gitlab/data:/var/opt/gitlab `
  gitlab/gitlab-ce:18.5.4-ce.0

Custom wsl.conf (podman):

[automount]
options = "metadata"

[user]
default=user

[interop] enabled=true is not actually required since it is true by default. After changing wsl.conf, restart the Podman machine and try podman run again.
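
One way to apply this change (a sketch; the exact steps may vary depending on your Podman machine image):

podman machine ssh
# append the [automount] section to the machine's /etc/wsl.conf
sudo sh -c 'printf "\n[automount]\noptions = \"metadata\"\n" >> /etc/wsl.conf'
exit
# restart the machine (or run wsl --shutdown) so the drives are remounted with the new options
podman machine stop
podman machine start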


Docker and Podman use different WSL default configurations. Docker tolerates emulated ownership changes by enabling the metadata option out of the box.

Podman, on the other hand, does not rely on this metadata and expects real Linux filesystem behavior. It is also daemonless and lighter than Docker—but that’s a story for another blog post.

Solving Docker Hub rate limits in Kubernetes with containerd registry mirrors

When running Kubernetes workloads in AWS EKS (or any other environment), you may encounter the Docker Hub rate limit error:

429 Too Many Requests – Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading

Why are Docker Hub rate limits a problem? Docker Hub imposes strict pull rate limits: authenticated users get up to 40 pulls per hour, anonymous users get up to 10 pulls per hour, and only paid authenticated users have no pull rate limit.

To check your current limit state, you first need to get a token:

For anonymous pulls:

TOKEN=$(curl "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

For authenticated pulls:

TOKEN=$(curl --user 'username:password' "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

and then make a HEAD request and inspect the response headers:

curl --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest

You should see headers like the following (excerpt); the w value is the window length in seconds, so ratelimit-limit: 100;w=21600 means 100 pulls per 6-hour window:

date: Sun, 26 Jan 2025 08:17:36 GMT
strict-transport-security: max-age=31536000
ratelimit-limit: 100;w=21600
ratelimit-remaining: 100;w=21600
docker-ratelimit-source: <hidden>

Possible solutions

  • A DaemonSet that applies a predefined configuration to each of your k8s nodes
  • For AWS-based clusters, an EC2 Launch Template and its user data input (see the sketch after this list)
  • For AWS-based clusters, AWS Systems Manager and the aws:runShellScript action
  • Updating the config manually; however, in most cases the cluster nodes are short-lived due to the autoscaler (use the shell script from the DaemonSet below; a containerd service restart is not required)
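
For example, with the Launch Template option, the user data can write the same mirror configuration at node boot (a sketch; <your private registry> is the same placeholder used in the manifest below, and on EKS-optimized AMIs the user data may need to be wrapped in the MIME multi-part format expected by the node bootstrap):

#!/bin/bash
# create the docker.io registry host namespace and point it at the mirror
mkdir -p /etc/containerd/certs.d/docker.io
cat << 'EOF' > /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://registry-1.docker.io"

[host."https://<your private registry>"]
  capabilities = ["pull", "resolve"]
EOF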

In this guide, we will use the DaemonSet approach in AWS EKS with containerd (containerd-1.7.11-1.amzn2.0.1.x86_64) and Kubernetes 1.30.

  1. Check your containerd config at /etc/containerd/config.toml and make sure that the following line is present: config_path = "/etc/containerd/certs.d"
  2. The containerd registry host namespace configuration is stored at /etc/containerd/certs.d/<registry host>/hosts.toml (for Docker Hub: /etc/containerd/certs.d/docker.io/hosts.toml)
  3. The following manifest adds the required files and folders. Existing and future nodes will be automatically configured with the mirror by the DaemonSet. The initContainer updates the node’s configuration, and the wait container keeps the DaemonSet pods running on the nodes. Use taints and tolerations, change the priority class, or adjust other fields to fit your requirements.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    name: containerd-registry-mirror
    cluster: clustername
    otherlabel: labelvalue
  name: containerd-registry-mirror
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: containerd-registry-mirror
  template:
    metadata:
      labels:
        name: containerd-registry-mirror
    spec:
      nodeSelector:
        eks.amazonaws.com/nodegroup: poolname
      priorityClassName: system-node-critical
      initContainers:
      - image: alpine:3.21
        imagePullPolicy: IfNotPresent
        name: change-hosts-file-init
        command:
          - /bin/sh
          - -c
          - |
            #!/bin/sh
            set -euo pipefail
            TARGET="/etc/containerd/certs.d/docker.io"
            cat << EOF > "$TARGET/hosts.toml"
            server = "https://registry-1.docker.io"

            [host."https://<your private registry>"]
              capabilities = ["pull", "resolve"]
            EOF
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 50m
            memory: 100Mi
        volumeMounts:
        - mountPath: /etc/containerd/certs.d/docker.io/
          name: docker-mirror
      containers:
      - name: wait
        image: registry.k8s.io/pause:3.9
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 50m
            memory: 100Mi
          requests:
            cpu: 10m
            memory: 20Mi
      volumes:
      - name: docker-mirror
        hostPath:
          path: /etc/containerd/certs.d/docker.io/
          type: DirectoryOrCreate

4. Save the manifest as containerd-registry-mirror.yaml and apply it to your cluster: kubectl apply -f containerd-registry-mirror.yaml. Then monitor the DaemonSet status: kubectl get daemonset containerd-registry-mirror -n kube-system

5. To double-check, SSH to the node and check the content of /etc/containerd/certs.d/docker.io/hosts.toml

6. If you need to set up a default mirror for ALL registries, use the path /etc/containerd/certs.d/_default/hosts.toml
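
A minimal _default/hosts.toml could look like this (reusing the same placeholder):

[host."https://<your private registry>"]
  capabilities = ["pull", "resolve"]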

Hope this is helpful for someone facing the same issue.

Got questions or need help? Drop a comment or share your experience

Have a smooth containerization!