Solving Docker Hub rate limits in Kubernetes with containerd registry mirrors

When running Kubernetes workloads in AWS EKS (or any other environment), you may encounter the Docker Hub rate limit error:

429 Too Many Requests – Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading

Why are Docker Hub rate limits a problem? Docker Hub imposes strict pull rate limits: authenticated users get up to 40 pulls per hour, anonymous users up to 10 pulls per hour, and only paid authenticated users have no pull rate limit.

To check your current limit state, you first need to get a token:

For anonymous pulls:

TOKEN=$(curl "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

For authenticated pulls:

TOKEN=$(curl --user 'username:password' "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

and then make a HEAD request and inspect the response headers:

curl --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest

You should see the following headers (excerpt); ratelimit-limit and ratelimit-remaining use the format count;w=window-in-seconds, so 100;w=21600 means 100 pulls per 6-hour window:


date: Sun, 26 Jan 2025 08:17:36 GMT
strict-transport-security: max-age=31536000
ratelimit-limit: 100;w=21600
ratelimit-remaining: 100;w=21600
docker-ratelimit-source: <hidden>
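
If you only care about the counters, you can filter the response down to the rate limit headers, for example:

curl -s --head -H "Authorization: Bearer $TOKEN" \
  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest \
  | grep -i ratelimit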

Possible solutions

  • A DaemonSet that applies a predefined configuration to each of your k8s nodes
  • For AWS-based clusters, an EC2 Launch Template and its user data input
  • For AWS-based clusters, AWS Systems Manager and the aws:runShellScript action
  • You can also update the config manually; however, in most cases cluster nodes are short-lived due to the autoscaler (see the node-side sketch right after this list; a containerd service restart is not required)
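
For the Launch Template, SSM, or manual options, the node-side step boils down to writing the same hosts.toml that the DaemonSet below creates. A minimal sketch (the registry URL is a placeholder, and the path assumes the default config_path shown in step 1 below):

#!/bin/sh
# Sketch: point docker.io pulls at a private mirror on this node.
# Replace <your private registry> with your mirror endpoint.
set -eu

MIRROR_DIR="/etc/containerd/certs.d/docker.io"
mkdir -p "$MIRROR_DIR"

cat << 'EOF' > "$MIRROR_DIR/hosts.toml"
server = "https://registry-1.docker.io"

[host."https://<your private registry>"]
  capabilities = ["pull", "resolve"]
EOF

# No containerd restart is required: configs under certs.d are read per pull.

With AWS Systems Manager you would run this through the AWS-RunShellScript document; with a Launch Template, the same commands go into the instance user data.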

In this guide, we will walk through the DaemonSet approach on AWS EKS with containerd (containerd-1.7.11-1.amzn2.0.1.x86_64) and Kubernetes 1.30.

  1. Check your containerd config at /etc/containerd/config.toml and make sure the following line is present: config_path = "/etc/containerd/certs.d" (see the sample config.toml excerpt after the steps below)
  2. Per-registry host namespace configuration is stored at /etc/containerd/certs.d/<registry host>/hosts.toml (for Docker Hub: /etc/containerd/certs.d/docker.io/hosts.toml)
  3. The following manifest adds the required files and folders. Existing and future nodes will be automatically configured with the mirror by the DaemonSet. An initContainer updates the node's configuration, and the wait container keeps the DaemonSet pods running on the nodes. Use taints and tolerations, change the priority class, or adjust other fields to fit your requirements.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    name: containerd-registry-mirror
    cluster: clustername
    otherlabel: labelvalue
  name: containerd-registry-mirror
spec:
  selector:
    matchLabels:
      name: containerd-registry-mirror
  template:
    metadata:
      labels:
        name: containerd-registry-mirror
    spec:
      nodeSelector:
        eks.amazonaws.com/nodegroup: poolname
      priorityClassName: system-node-critical
      initContainers:
      - image: alpine:3.21
        imagePullPolicy: IfNotPresent
        name: change-hosts-file-init
        command:
          - /bin/sh
          - -c
          - |
            #!/bin/sh
            set -euo pipefail
            TARGET="/etc/containerd/certs.d/docker.io" 
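            # write the docker.io hosts.toml: pulls try the mirror first and fall back to Docker Hub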
            cat << EOF > $TARGET/hosts.toml
            server = "https://registry-1.docker.io"

            [host."https://<your private registry>"]
              capabilities = ["pull", "resolve"]
            EOF
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 50m
            memory: 100Mi
        volumeMounts:
        - mountPath: /etc/containerd/certs.d/docker.io/
          name: docker-mirror
      containers:
      - name: wait
        image: registry.k8s.io/pause:3.9
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 50m
            memory: 100Mi
          requests:
            cpu: 10m
            memory: 20Mi
      volumes:
      - name: docker-mirror
        hostPath:
          path: /etc/containerd/certs.d/docker.io/
          type: DirectoryOrCreate

4. Apply the manifest to your cluster: kubectl apply -f containerd-registry-mirror.yaml -n kube-system, and then monitor the DaemonSet status: kubectl get daemonset containerd-registry-mirror -n kube-system

5. To double-check, SSH to a node and inspect the content of /etc/containerd/certs.d/docker.io/hosts.toml

6. If you need to set up a default mirror for ALL registries, use the path /etc/containerd/certs.d/_default/hosts.toml instead
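
To illustrate steps 1 and 6, here is roughly what the config.toml excerpt and a catch-all _default/hosts.toml could look like (the mirror URL is a placeholder, adjust it to your registry):

# /etc/containerd/config.toml (excerpt): enable the certs.d host namespace configuration
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"

# /etc/containerd/certs.d/_default/hosts.toml: catch-all mirror for all registries
[host."https://<your private registry>"]
  capabilities = ["pull", "resolve"]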

Hope it's helpful for someone facing the same issue.

Got questions or need help? Drop a comment or share your experience

Have a smooth containerization!

Argo CD Login and API: received unexpected content-type or TLS handshake timeout

My Argo CD is running on AWS EKS and is exposed via a standard Kubernetes Ingress (traefik class), meaning it also interacts with AWS ELB. Additionally, the server.insecure parameter in the Argo CD server is set to “true” (configured in the argocd-cmd-params-cm ConfigMap in Kubernetes), with TLS termination happening on the ingress side.

There are no issues with the Argo CD UI. However, I am unable to access the Argo CD API using simple curl requests or the Argo CD CLI. I keep receiving errors related to content-type and TLS handshake failures:

argocd login argocd.example.com --grpc-web --insecure --skip-test-tls
FATA[0036] rpc error: code = Unknown desc = Post "https://argocd.example.com/session.SessionService/Create": net/http: TLS handshake timeout
FATA[0045] rpc error: code = Unimplemented desc = unexpected HTTP status code received from server: 404 (Not Found); transport: received unexpected content-type "text/plain; charset=utf-8"

All requests are being sent from a WSL instance (Ubuntu 22.04). Note that I have no issues accessing the API when using port forwarding or when connecting from the management partition (local machine).

I was about to give up, but then I decided to check the MTU size.

Get-NetIPInterface -AddressFamily IPv4 | Sort-Object -Property NlMtu | Select ifIndex, InterfaceAlias, NlMtu -first 5
ifIndex InterfaceAlias            NlMtu
------- --------------            -----
     19 Ethernet 3                 1392
     20 Local Area Connection* 1   1500
      8 Ethernet (WSL)             1500

Ethernet 3 is my VPN interface, and the API is only reachable through this interface.

Ethernet (WSL) is the interface that WSL uses, and its MTU is 1500, so there is an MTU mismatch: packets larger than the VPN's 1392-byte MTU get dropped on the path, which breaks the TLS handshake.
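
To confirm the mismatch from inside WSL, you can run a quick path MTU check with ping and the do-not-fragment flag against any host that is reachable through the VPN and answers ICMP (the hostname is a placeholder; the ICMP payload size is the MTU minus 28 bytes of IP/ICMP headers):

ping -c 3 -M do -s 1472 <host-behind-vpn>   # 1472 + 28 = 1500 bytes, expect no replies or "frag needed"
ping -c 3 -M do -s 1364 <host-behind-vpn>   # 1364 + 28 = 1392 bytes, expect normal replies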

The solution is to set the WSL interface MTU to 1392 so that it matches the VPN interface (the exact value may vary).

In your WSL instance, run the following:

ifconfig # to list interfaces. note your main interface (eth0 typically)
sudo ifconfig eth0 mtu 1392 # to change MTU size
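
Note that this change does not survive a WSL restart. The iproute2 equivalent, plus one way to re-apply it automatically on recent WSL builds via the [boot] section of /etc/wsl.conf, could look like this (adjust the interface name and MTU to your setup):

sudo ip link set dev eth0 mtu 1392   # same change with iproute2

# /etc/wsl.conf
[boot]
command = ip link set dev eth0 mtu 1392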

I hope it helps!