Why GitLab Fails with “Operation Not Permitted” on Windows Using Podman

If you run GitLab (or any application that modifies file permissions or ownership of files in volume mounts) in a container, you may see the installation fail with an error like:

chgrp: changing group of '/var/opt/gitlab/git-data/repositories': Operation not permitted

This error prevents GitLab from starting. Here’s why it happens—and the simplest way to fix it.

A local GitLab installation was required to troubleshoot and verify several production-critical queries. This setup is clearly not intended for production use and should be used only for testing and troubleshooting purposes.

Also, Windows is not officially supported, as the images have known compatibility issues with volume permissions and potentially other unknown issues (although I haven’t noticed any other problems during a week of use).

Both Podman Desktop and Docker Desktop run containers inside WSL2.

The problem appears when you bind-mount a Windows directory (NTFS) into the container, for example:

E:\volumes\gitlab\data → /var/opt/gitlab
podman run --detach --hostname gitlab.example.com `
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://gitlab.example.com'" `
--publish 443:443 --publish 80:80 --publish 22:22 `
--name gitlab --restart always `
--volume /e/volumes/gitlab/config:/etc/gitlab `
--volume /e/volumes/gitlab/logs:/var/log/gitlab `
--volume /e/volumes/gitlab/data:/var/opt/gitlab `
gitlab/gitlab-ce:18.5.4-ce.0

The same command works fine with Docker Desktop (E: is an external disk drive available to the Windows host).

What goes wrong

Here is the chain of events (you can reproduce the failing step by hand, as shown after the list):

  • GitLab requires real Linux filesystem permissions and ownership
  • During startup, it runs chown and chgrp on its data directories
  • Windows filesystems (NTFS) do not support Linux UID/GID ownership
  • WSL2 cannot translate these permission changes correctly
  • The operation fails, and GitLab refuses to start
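A minimal sketch of reproducing the failing step from inside the Podman machine, assuming the E: drive is mounted at /mnt/e and using an arbitrary numeric GID (the exact group does not matter):

podman machine ssh

# Inside the machine: try to change ownership of a file on the 9p-mounted Windows drive
touch /mnt/e/perm-test
chgrp 1001 /mnt/e/perm-test
# chgrp: changing group of '/mnt/e/perm-test': Operation not permitted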

If both Podman and Docker are based on WSL2, why does Docker run GitLab on an E: drive without breaking a sweat? The root cause is the difference in how the WSL distributions used by Docker and Podman mount Windows drives and translate file permissions.

Docker: if GitLab calls chgrp, WSL’s drvfs layer intercepts the call. It doesn’t actually change the Windows folder, but it records the “permission change” in a hidden metadata area (NTFS Extended Attributes).

The /etc/wsl.conf content of the Docker Desktop engine:

[automount]
root = /mnt/host
options = "metadata"
[interop]
enabled = true

When metadata is enabled as a mount option in WSL, extended attributes on Windows NT files can be added and interpreted to supply Linux file system permissions.
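You can observe this behaviour by hand by remounting a drive with the metadata option in any WSL2 distribution. A sketch, assuming the E: drive is mounted at /mnt/e and sudo is available:

# Remount the Windows drive so Linux permissions are stored in NTFS extended attributes
sudo umount /mnt/e
sudo mount -t drvfs E: /mnt/e -o metadata,uid=1000,gid=1000

# Permission and ownership changes now stick instead of being rejected
touch /mnt/e/perm-test
chmod 600 /mnt/e/perm-test
ls -l /mnt/e/perm-test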

Podman: mounts Windows drives with the same WSL2 9p protocol and drvfs driver that Docker uses, but without the metadata mapping enabled by default. When GitLab (or your application) tries to set its required ownership, the mount simply refuses, and the container fails to start.

Here is the mount output for the E: drive from inside the Podman machine:

mount | grep " /mnt/e "
E:\ on /mnt/e type 9p (rw,noatime,aname=drvfs;path=E:\;uid=1000;gid=1000;symlinkroot=/mnt/,cache=5,access=client,msize=65536,trans=fd,rfd=5,wfd=5)

There is no metadata option on the mount because the Podman machine ships with a minimal wsl.conf:

[user]
default=user

Solution

The easiest solution is to use named volumes (universal and faster). If you need a bind mount, it works out of the box with Docker (but is slower); with Podman it additionally requires a custom wsl.conf (also slower).

Named volumes:

podman run --detach --hostname gitlab.example.com `
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://gitlab.example.com'" `
--publish 443:443 --publish 80:80 --publish 22:22 `
--name gitlab --restart always `
--volume gitlab-config:/etc/gitlab `
--volume gitlab-logs:/var/log/gitlab `
--volume gitlab-data:/var/opt/gitlab `
gitlab/gitlab-ce:18.5.4-ce.0

and the data will be stored under /var/lib/containers/storage/volumes (inside the Podman machine in this example).
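Podman creates the named volumes automatically on first use; you can also create and locate them explicitly. A sketch, assuming the volume names from the command above:

podman volume create gitlab-data            # optional, volumes are created on first use
podman volume inspect gitlab-data --format '{{ .Mountpoint }}'
# e.g. /var/lib/containers/storage/volumes/gitlab-data/_data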

The volumes can be accessed from Windows Explorer as well:

  • Docker: \\wsl$\docker-desktop\mnt\docker-desktop-disk\data\docker\volumes
  • Podman: \\wsl$\podman-machine-default\var\lib\containers\storage\volumes

Bind mounts:

docker run --detach `
  --hostname gitlab.example.com `
  --publish 443:443 --publish 80:80 --publish 22:22 `
  --name gitlab-bind-mount `
  --restart always `
  --volume /e/volumes/gitlab/config:/etc/gitlab `
  --volume /e/volumes/gitlab/logs:/var/log/gitlab `
  --volume /e/volumes/gitlab/data:/var/opt/gitlab `
  gitlab/gitlab-ce:18.5.4-ce.0

Custom wsl.conf (podman):

[automount]
options = "metadata"

[user]
default=user

[interop] enabled=true is not actually required since it is true by default. After saving wsl.conf, restart the Podman machine and run podman run again.
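A minimal sketch of applying the change, assuming the default machine name:

# Open a shell inside the Podman machine and add the [automount] section above to /etc/wsl.conf (requires sudo)
podman machine ssh podman-machine-default

# Restart the machine so WSL remounts the Windows drives with the new options
podman machine stop
podman machine start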


Docker and Podman use different WSL default configurations. Docker tolerates emulated ownership changes by enabling the metadata option out of the box.

Podman, on the other hand, does not rely on this metadata and expects real Linux filesystem behavior. It is also daemonless and lighter than Docker—but that’s a story for another blog post.

Fixing GitLab Runner Issues on Kubernetes

Recently, I encountered and resolved several issues that were causing job failures and instability in an AWS-based GitLab Runner setup on Kubernetes. In this post, I’ll walk through the errors, their causes, and the solutions that worked for me.

Job Timeout While Waiting for Pod to Start

ERROR: Job failed (system failure): prepare environment: waiting for pod running: timed out waiting for pod to start

Adjusting poll_interval and poll_timeout helped resolve the issue:

  • poll_timeout (default: 180s) – The maximum time the runner waits before timing out while connecting to a newly created pod.
  • poll_interval (default: 3s) – How frequently the runner checks the pod’s status.

Increasing poll_timeout from 180s to 360s gave the pods more time to start and prevented these failures.
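For reference, a minimal config.toml fragment with the adjusted values (all other [runners.kubernetes] settings omitted):

[[runners]]
  [runners.kubernetes]
    # wait up to 6 minutes for the job pod to reach the Running state
    poll_timeout = 360
    # check the pod status every 3 seconds (the default)
    poll_interval = 3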

ErrImagePull: pull QPS exceeded

When the GitLab Runner starts multiple jobs that require pulling the same images (e.g., for services and builders), it can exceed the kubelet’s default pull rate limits:

  • registryPullQPS (default: 5) – Limits the number of image pulls per second.
  • registryBurst (default: 10) – Allows temporary bursts above registryPullQPS.
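For context, these are kubelet-level settings; raising them would mean changing the kubelet configuration on every node, roughly like this hypothetical KubeletConfiguration fragment:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# allow more image pulls per second and a larger burst
registryPullQPS: 10
registryBurst: 20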

Instead of modifying kubelet parameters, I resolved this issue by changing the runner’s image pull policy from always to if-not-present to prevent unnecessary pulls:

pull_policy = ["if-not-present"]                      # default policy for jobs
allowed_pull_policies = ["always", "if-not-present"]  # jobs may override with "always" from the pipeline if necessary
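A job that genuinely needs a fresh image can then opt back in from the pipeline. A sketch of a hypothetical .gitlab-ci.yml job using the image:pull_policy keyword:

build:
  image:
    name: alpine:3.20
    # overrides the runner default; must be listed in allowed_pull_policies
    pull_policy: always
  script:
    - echo "pulled with the always policy"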

TLS Error When Preparing Environment

ERROR: Job failed (system failure): prepare environment: setting up trapping scripts on emptyDir: error dialing backend: remote error: tls: internal error

GitLab Runner communicates securely with the Kubernetes API to create executor pods. A TLS failure can occur due to API slowness, network issues, or misconfigured certificates.

Setting the feature flag FF_WAIT_FOR_POD_TO_BE_REACHABLE to true helped resolve the issue by ensuring that the runner waits until the pod is fully reachable before proceeding. This can be set in the GitLab Runner configuration:

[runners.feature_flags]
  FF_WAIT_FOR_POD_TO_BE_REACHABLE = true
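If the runner is deployed with the official gitlab-runner Helm chart, the same flag can go into the config.toml template in values.yaml; a sketch, assuming the chart’s runners.config field:

runners:
  config: |
    [[runners]]
      [runners.feature_flags]
        FF_WAIT_FOR_POD_TO_BE_REACHABLE = true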

DNS Timeouts

dial tcp: lookup on <coredns ip>:53: read udp i/o timeout

While CoreDNS logs and network communication appeared normal, there was an unexpected spike in DNS load after launching more GitLab jobs than usual.

Scaling the CoreDNS deployment resolved the issue. Ideally, enabling automatic DNS horizontal autoscaling is preferred for handling load variations (check out the Kubernetes docs or your cloud provider’s specific solution; they all share the same approach: add more replicas as load increases).

kubectl scale deployments -n kube-system coredns --replicas=4
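For the autoscaling route, the Kubernetes docs use the cluster-proportional-autoscaler: once its kube-dns-autoscaler deployment is installed, scaling is driven by a ConfigMap along these lines (values are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns-autoscaler
  namespace: kube-system
data:
  # one CoreDNS replica per 16 nodes or per 256 cores, whichever is more, minimum 2
  linear: '{"coresPerReplica":256,"nodesPerReplica":16,"min":2,"preventSinglePointFailure":true}'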

If you encountered other GitLab Runner issues, share them in comments 🙂

Cheers!