Why Old Docker Clients No Longer Work In Your Pipelines

If your pipelines suddenly started failing with an error like:

Error response from daemon: client version 1.40 is too old. 
Minimum supported API version is 1.44

—even though you didn’t change anything, this is neither a CI glitch nor a random failure. A breaking change was introduced recently.

Let’s say you have a Docker-in-Docker setup in GitLab CI, for example:

image: docker:stable # gets the latest stable 
services:
  - docker:dind # pulls the latest dind version
script:
  - docker login/build/push ...

…and this has worked for years.

The problem is that docker:stable has not actually been updated for a long time. Its effective version is Docker 19.03.14.

The docker:stable, docker:test, and related “channel” tags have been deprecated since June 2020 and have not been updated since December 2020 (when Docker 20.10 was released).

At the same time, docker:dind is actively updated and may now be running Docker 29.2.0 (as of February 1, 2026).

The docker login command is executed inside the job container (docker:stable), which contains the Docker CLI. That CLI sends requests to the Docker daemon running in the docker:dind service. With this version mismatch, the request now fails. Why?

Starting with Docker Engine 29, the Docker daemon enforces a minimum supported Docker API version and drops support for older clients entirely. This is a real breaking change, and it has a significant impact on CI systems — especially GitLab CI setups using the Docker executor with docker:dind.

The daemon now requires API version v1.44 or later (Docker v25.0+).

This would not have been an issue if best practices had been followed. GitLab documentation (and many other sources) clearly states:

You should always pin a specific version of the image, like docker:24.0.5. If you use a tag like docker:latest, you have no control over which version is used. This can cause incompatibility problems when new versions are released.

This is yet another case illustrating why you should not use latest or any other tag that doesn’t let you control which version is used.

Solution 1 – use specific versions (25+ in this case; recommended)

image: docker:29.2.0
services:
  - docker:29.2.0-dind
script:
  - docker login/build/push ...

Solution 2 – set the minimum API version for the daemon:

  services:
    - name: docker:dind
      variables:
        DOCKER_MIN_API_VERSION: "1.40"

Solution 3 – set the minimum API version in daemon.json:

{
  "min-api-version": "1.40"
}

1.40 is the API version of the docker:stable client. In theory, Docker can break something again, so Solution 1 is always preferable (for me, at least).

Hope it was helpful to someone.

Using the GitLab Rails Console: Practical Examples for Managing Container Registry

While the GitLab UI and API are the standard tools for day-to-day operations, they often hit a wall when dealing with massive datasets. API rate limits can slow down bulk cleanup tasks, and the UI is ineffective for managing thousands of objects.

In such situations, we sometimes need direct, fast access to GitLab’s internal data. This is where the GitLab Rails console comes in. Built on the Ruby on Rails framework, the Rails console provides a command-line interface for interacting directly with GitLab’s application models using the Ruby language.

The Rails console is fast and flexible, but also unforgiving: a single command can modify or delete production data instantly.

It is strongly recommended to test all commands in a staging or test environment before executing them in production, and to ensure data backups are in place. Check out this post to run GitLab locally using Docker/Podman.

At the core of the GitLab Rails console is ActiveRecord, the Object–Relational Mapping (ORM) layer provided by Ruby on Rails. ActiveRecord acts as a bridge between Ruby objects and database tables, allowing you to work with database records using plain Ruby code instead of raw SQL (“just a wrapper”).
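To make the “wrapper” idea concrete, here is a toy sketch of the principle: a Ruby method call is translated into a SQL string. This is not ActiveRecord itself (the real library adds connection handling, type casting, escaping, and much more); the `to_sql` helper and its arguments are made up for illustration.

```ruby
# Toy illustration of the ORM idea: a Ruby call becomes a SQL string.
# (Hypothetical helper; real ActiveRecord is far more sophisticated.)
def to_sql(table, conditions)
  where = conditions.map { |column, value| "#{column} = '#{value}'" }.join(" AND ")
  "SELECT * FROM #{table} WHERE #{where}"
end

puts to_sql("projects", path: "test2")
# => SELECT * FROM projects WHERE path = 'test2'
```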

Basics

Run the Rails console on your GitLab server:

sudo gitlab-rails console

Once loaded, enable debug logging so that SQL queries are printed to stdout:

ActiveRecord::Base.logger = Logger.new($stdout)

Consider the following command:

prj = Project.find_by_full_path("test/test2")
  1. Project is a model mapped to the projects database table
  2. find_by_full_path is a model method that builds a SQL query from the provided project path (type Project. and press Tab to trigger autocompletion; the console will display all available methods of the Project model)
  3. prj is a variable that stores the resulting Ruby object, or nil if no project is found
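The nil case is worth handling explicitly before calling methods on the result. A plain-Ruby simulation of the lookup (the hash and the paths are made up; in the console the real call is Project.find_by_full_path):

```ruby
# Simulated lookup: a missing key returns nil, just like
# find_by_full_path returns nil for a nonexistent project path.
projects = { "test/test2" => "project record" }

prj = projects["missing/path"]   # nil, like a failed find_by_full_path
message = prj.nil? ? "No project found" : "Found #{prj}"
puts message
# => No project found
```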

Project.methods.grep(/find/) will show the Project model’s methods whose names contain “find”. Use <model>.methods and <object>.attributes to list all available methods and attributes of an object (e.g. prj.attributes and prj.methods).
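The same introspection works on any Ruby class, not just GitLab models. A quick standalone illustration with a made-up class:

```ruby
# Example is a made-up class; .methods returns symbols,
# and .grep filters them with a regular expression.
class Example
  def self.find_by_name(name); end
  def self.find_all; end
  def self.create; end
end

puts Example.methods(false).grep(/find/).sort.inspect
# => [:find_all, :find_by_name]
```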

Next, if we run prj.container_repositories, we will see the actual SQL query executed by the code, which shows we are interacting with the container_repositories table:

 D, [2025-12-24T09:47:41.526263 #4889] DEBUG -- : ContainerRepository Load (1.8ms) /*application:console,db_config_database:gitlabhq_production,db_config_name:main,console_hostname:gitlab.example.com*/ SELECT "container_repositories".* FROM "container_repositories" WHERE "container_repositories"."project_id" = 1 /* loading for pp */ LIMIT 11

Suppose you need to find all projects with at least one tag in the container registry:

count = 0
Project.find_each do |project|
  if project.has_container_registry_tags?
    count += 1
  end
end
puts "Total projects: #{count}"

and the result is Total projects: 2567

Thousands of projects were “queried” in just a few seconds. That’s the magic of the GitLab Rails console that the API doesn’t have.

Some explanations:

  • count = 0 is simply a local variable: our counter
  • Project.find_each loads projects in batches instead of all at once
  • do |project| starts the loop and assigns each record to the variable project
  • if project.has_container_registry_tags? is the main logic; the name speaks for itself
  • count += 1 increments the counter
  • puts "Total projects: #{count}" prints the result once the loop is finished
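To see the control flow in isolation, the same pattern can be run against plain-Ruby stand-ins (FakeProject and its data are made up; in the console you would iterate real Project records with find_each):

```ruby
# Stand-in records simulating Project objects (made-up data).
FakeProject = Struct.new(:full_path, :tags) do
  def has_container_registry_tags?
    tags
  end
end

projects = [
  FakeProject.new("test/project1", false),
  FakeProject.new("test/test2", true),
  FakeProject.new("test2/test2", false),
]

count = 0
projects.each do |project|   # Project.find_each iterates the same way, in batches
  if project.has_container_registry_tags?
    count += 1
  end
end
puts "Total projects: #{count}"
# => Total projects: 1
```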

If you need to show project paths and print projects with image tags:

count = 0
Project.find_each do |project|
  puts project.full_path
  if project.has_container_registry_tags?
    puts "  -> Project has container registry tags"
    count += 1
  end
end
puts "Total projects: #{count}"

Result:

test/project1
test/test2
  -> Project has container registry tags
test2/test2

Try out different models, such as User, Group, Ci::Pipeline, and others.

Zombies

An interesting behavior to note is that destroying a repository object via repo.destroy! does not automatically purge the physical data from the storage backend. If the Container Registry is active, GitLab’s background processes or a simple page refresh may trigger a “re-sync.”

  • You run repo.destroy! on repository ID 16.
  • The database record is deleted, but the physical tags remain in storage.
  • GitLab detects the existing data at that path and automatically creates a new record in the database
  • A new repository appears with a new ID 17, containing all the old tags.

To prevent this “zombie” effect, you should always delete the tags through the GitLab API or Rails methods before destroying the repository record, and follow up with a registry garbage collection to reclaim the disk space.

gitlab-ctl registry-garbage-collect -m
# removes untagged manifests and unreferenced layers as well
# won't work if the container registry is disabled

You can write a simple command to delete the repo and tags in the test/test2 project (DANGER):

prj = Project.find_by_full_path("test/test2")
prj.container_repositories.each do |repo|
  repo.delete_tags!
  repo.destroy!
end

The delete_tags! and has_container_registry_tags? methods require the GitLab Container Registry to be available. Otherwise, the following error is shown: “Failed to open TCP connection to localhost:5000 (Connection refused – connect(2) for “localhost” port 5000) (Faraday::ConnectionFailed)”

Other examples

To delete image tags and container registry repositories (DANGER):

Project.find_each do |p|
  next unless p.has_container_registry_tags?
  p.container_repositories.each do |repo|
    puts "Cleaning tags for #{p.full_path}"
    repo.delete_tags!
    puts ":::Destroying the repo for #{p.full_path}"
    repo.destroy!
  end
end

next unless can be read as “if the project doesn’t have any container registry tags, skip it and move on to the next one”; it runs faster and reads more cleanly than a nested if.
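A standalone sketch of the next unless pattern, outside of any GitLab model (the numbers are just example data):

```ruby
# `next unless` skips elements that fail a condition
# and continues with the next iteration of the loop.
numbers = [1, 2, 3, 4, 5, 6]
evens = []
numbers.each do |n|
  next unless n.even?   # odd numbers skip the rest of this iteration
  evens << n
end
puts evens.inspect
# => [2, 4, 6]
```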

If your container registry has been disabled and you need to clean up container registry repositories, you can either:

  • temporarily enable the container registry service (recommended), OR
  • destroy the registry repositories, then manually clean the file storage (based on my testing, we need to interact with /var/opt/gitlab/gitlab-rails/shared/registry/docker/registry/v2/ after deleting registry repos via the Rails console, and it likely doesn’t break anything; the blobs and repositories folders will be recreated automatically once you push any images again)
  • In any case, do your own research before making any change!

To find projects with container registry repositories when the registry is disabled:

count = 0
Project.find_each do |project|
  if project.container_repositories.count > 0
    puts "  -> Project #{project.name} has container repos"
    count += 1
  end
end
puts "Total projects: #{count}"

To delete container registry repositories when the registry is disabled (DANGER):

Project.find_each do |p|
  next unless p.container_repositories.count > 0
  p.container_repositories.each do |repo|
    puts ":::Destroying the repo #{repo.name} for #{p.full_path}"
    repo.destroy!
  end
end

Use case: deleting registry repositories while the registry is disabled lets you transfer a project if you need to. Otherwise, GitLab won’t allow you to change the path or namespace of the project.

While the scripts above are powerful, they are also permanent. Treat the Rails console like a sharp blade: incredibly useful, but dangerous if handled carelessly. And don’t forget these rules:
  1. Do your own research.
  2. Audit with puts before executing destructive queries (destroy and similar methods).
  3. Test every query on a single project before running a batch.