Why Old Docker Clients No Longer Work In Your Pipelines

If your pipelines suddenly started failing with an error like:

Error response from daemon: client version 1.40 is too old. 
Minimum supported API version is 1.44

…even though you didn’t change anything, this is not a CI issue and not a random failure: a breaking change was recently introduced in Docker.

Let’s say you have a Docker-in-Docker setup in GitLab CI, for example:

image: docker:stable # gets the latest stable 
services:
  - docker:dind # pulls the latest dind version
script:
  - docker login/build/push ...

…and this has worked for years.

The problem is that docker:stable has not actually been updated in a long time. Its effective version is Docker 19.03.14.

The docker:stable, docker:test, and related “channel” tags have been deprecated since June 2020 and have not been updated since December 2020 (when Docker 20.10 was released).

At the same time, docker:dind is actively updated and may now be running Docker 29.2.0 (as of February 1, 2026).

The docker login command is executed inside the job container (docker:stable), which contains the Docker CLI. That CLI sends requests to the Docker daemon running in the docker:dind service. With this version mismatch, the request now fails. Why?

Starting with Docker Engine 29, the Docker daemon enforces a minimum supported Docker API version and drops support for older clients entirely. This is a real breaking change, and it has a significant impact on CI systems, especially GitLab CI setups using the Docker executor with docker:dind.

The daemon now requires API version v1.44 or later (Docker v25.0+).
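A quick way to see the mismatch from inside a job is to print both API versions (these are standard docker version template fields; with the old docker:stable client the server query may itself fail with the error above, which already points at the outdated side):

docker version --format 'client API: {{.Client.APIVersion}}'
docker version --format 'server API: {{.Server.APIVersion}}'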

This would not have been an issue if best practices had been followed. GitLab documentation (and many other sources) clearly states:

You should always pin a specific version of the image, like docker:24.0.5. If you use a tag like docker:latest, you have no control over which version is used. This can cause incompatibility problems when new versions are released.

This is another case illustrating why you should not use latest, or any other tag that doesn’t let you control which version is used.

Solution 1 – use specific versions (25+ in this case; recommended)

image: docker:29.2.0
services:
  - docker:29.2.0-dind
script:
  - docker login/build/push ...
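
A fuller job might look like the sketch below; the job name and registry commands are illustrative, and DOCKER_TLS_CERTDIR is the variable GitLab’s Docker-in-Docker guide uses to enable TLS between the CLI and the dind daemon:

build-image:
  image: docker:29.2.0            # pinned CLI
  services:
    - docker:29.2.0-dind          # pinned daemon
  variables:
    DOCKER_TLS_CERTDIR: "/certs"  # TLS for the client-daemon connection
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"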

Solution 2 – set the minimum API version for the dind daemon:

  services:
    - name: docker:dind
      variables:
        DOCKER_MIN_API_VERSION: "1.40"

Solution 3 – set the minimum API version in daemon.json:

{
  "min-api-version": "1.40"
}
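
For the dind service to see this file, it has to end up at /etc/docker/daemon.json inside the service container. On a self-managed runner, one way to do that (a sketch, assuming the Docker executor and a daemon.json placed on the runner host at an illustrative path) is to mount it through the runner’s config.toml, since volumes defined there are also attached to service containers:

[[runners]]
  executor = "docker"
  [runners.docker]
    # host-side file mounted read-only into job and service containers
    volumes = ["/srv/gitlab-runner/daemon.json:/etc/docker/daemon.json:ro", "/cache"]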

1.40 is the API version of the docker:stable client; in theory, Docker can break something again in the future, so Solution 1 is always preferable (for me, at least).

Hope it was helpful to someone.

Why GitLab Fails with “Operation Not Permitted” on Windows Using Podman

If you run GitLab (or any application that modifies file permissions or ownership of files in volume mounts) in a container, you may see the installation fail with an error like:

chgrp: changing group of '/var/opt/gitlab/git-data/repositories': Operation not permitted

This error prevents GitLab from starting. Here’s why it happens, and the simplest way to fix it.

A local GitLab installation was required to troubleshoot and verify several production-critical queries. This setup is clearly not intended for production use and should be used only for testing and troubleshooting purposes.

Also, Windows is not officially supported, as the images have known compatibility issues with volume permissions and potentially other unknown issues (although I haven’t noticed any problems during a week of use).

Both Podman Desktop and Docker Desktop run containers by using WSL2.

The problem appears when you bind-mount a Windows directory (NTFS) into the container, for example:

E:\volumes\gitlab\data → /var/opt/gitlab

podman run --detach --hostname gitlab.example.com `
  --env GITLAB_OMNIBUS_CONFIG="external_url 'http://gitlab.example.com'" `
  --publish 443:443 --publish 80:80 --publish 22:22 `
  --name gitlab --restart always `
  --volume /e/volumes/gitlab/config:/etc/gitlab `
  --volume /e/volumes/gitlab/logs:/var/log/gitlab `
  --volume /e/volumes/gitlab/data:/var/opt/gitlab `
  gitlab/gitlab-ce:18.5.4-ce.0

The same command works fine with Docker Desktop (E: is an external disk drive available to the Windows host).

What goes wrong

So far, we have the following flow:

  • GitLab requires real Linux filesystem permissions and ownership
  • During startup, it runs chown and chgrp on its data directories
  • Windows filesystems (NTFS) do not support Linux UID/GID ownership
  • WSL2 cannot translate these permission changes correctly
  • The operation fails, and GitLab refuses to start
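
You can reproduce the failure without GitLab at all, using a throwaway container on the same kind of mount (the directory under /e/ and the alpine image are only for illustration):

# try to change ownership of a file on a bind-mounted Windows (NTFS) directory
podman run --rm -v /e/volumes/test:/mnt docker.io/library/alpine `
  sh -c "touch /mnt/probe && chown 1000:1000 /mnt/probe"
# expected on a plain 9p/drvfs mount: chown: /mnt/probe: Operation not permitted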

If both Podman and Docker are based on WSL2, why does Docker run GitLab on an E: drive without breaking a sweat? The root cause is the difference in how Docker and Podman translate file permissions.

Docker: if GitLab calls chgrp, WSL’s drvfs layer intercepts the call. It doesn’t actually change the Windows folder, but it records the “permission change” in a hidden metadata area (NTFS Extended Attributes).

The /etc/wsl.conf of the Docker Desktop engine (the docker-desktop WSL distro):

[automount]
root = /mnt/host
options = "metadata"
[interop]
enabled = true

When metadata is enabled as a mount option in WSL, extended attributes on Windows NT files can be added and interpreted to supply Linux file system permissions.
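
You can watch this emulation at work from a shell inside the docker-desktop distro (wsl -d docker-desktop; the file name below is illustrative): the ownership change is accepted and stored as extended attributes instead of being rejected:

# on a metadata-enabled drvfs mount inside the docker-desktop distro
touch /mnt/host/e/probe
chgrp 100 /mnt/host/e/probe && chmod 640 /mnt/host/e/probe   # no "Operation not permitted"
ls -l /mnt/host/e/probe                                       # shows the emulated group/mode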

Podman: mounts Windows drives using the standard WSL2 9p protocol and drvfs driver (just as Docker does), but without the metadata mapping enabled by default. When GitLab (or your application) tries to set the ownership it requires, the mount simply refuses, and the container crashes.

Here is the output for the E: drive mount from inside the Podman machine:

mount | grep " /mnt/e "
E:\ on /mnt/e type 9p (rw,noatime,aname=drvfs;path=E:\;uid=1000;gid=1000;symlinkroot=/mnt/,cache=5,access=client,msize=65536,trans=fd,rfd=5,wfd=5)

There is no metadata option on that mount, because the Podman machine ships only a minimal wsl.conf:

[user]
default=user

Solution

The easiest solution here is to use named volumes (universal and faster). Bind mounts also work but are slower: with Docker they work out of the box, while Podman additionally needs a custom wsl.conf.

Named volumes:

podman run --detach --hostname gitlab.example.com `
  --env GITLAB_OMNIBUS_CONFIG="external_url 'http://gitlab.example.com'" `
  --publish 443:443 --publish 80:80 --publish 22:22 `
  --name gitlab --restart always `
  --volume gitlab-config:/etc/gitlab `
  --volume gitlab-logs:/var/log/gitlab `
  --volume gitlab-data:/var/opt/gitlab `
  gitlab/gitlab-ce:18.5.4-ce.0

…and the data will be stored under /var/lib/containers/storage/volumes (inside the Podman machine in this example).

The volumes can be accessed from Windows Explorer as well:

  • Docker: \\wsl$\docker-desktop\mnt\docker-desktop-disk\data\docker\volumes
  • Podman: \\wsl$\podman-machine-default\var\lib\containers\storage\volumes
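
If you need the exact path of a particular volume, Podman can report it directly (the volume name matches the one from the run command above):

podman volume inspect gitlab-data --format '{{.Mountpoint}}'
# /var/lib/containers/storage/volumes/gitlab-data/_data  (inside the podman machine)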

Bind mounts:

docker run --detach `
  --hostname gitlab.example.com `
  --publish 443:443 --publish 80:80 --publish 22:22 `
  --name gitlab-bind-mount `
  --restart always `
  --volume /e/volumes/gitlab/config:/etc/gitlab `
  --volume /e/volumes/gitlab/logs:/var/log/gitlab `
  --volume /e/volumes/gitlab/data:/var/opt/gitlab `
  gitlab/gitlab-ce:18.5.4-ce.0

Custom wsl.conf (podman):

[automount]
options = "metadata"

[user]
default=user

[interop] enabled=true is not actually required since it is true by default. Then restart the Podman machine and try podman run again.
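
One way to apply the change, assuming the default podman-machine-default machine (use any editor you like inside it):

podman machine ssh                # shell into the podman machine
sudo vi /etc/wsl.conf             # add the [automount] options = "metadata" section
exit
podman machine stop
podman machine start              # Windows drives are remounted with metadata enabled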


Docker and Podman use different WSL default configurations. Docker tolerates emulated ownership changes by enabling the metadata option out of the box.

Podman, on the other hand, does not rely on this metadata and expects real Linux filesystem behavior. It is also daemonless and lighter than Docker, but that’s a story for another blog post.