When running Kubernetes workloads in AWS EKS (or any other environment), you may encounter the Docker Hub rate limit error:
429 Too Many Requests – Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading
Why are Docker Hub rate limits a problem? Docker Hub imposes strict pull rate limits: anonymous users get up to 10 pulls per hour, authenticated users up to 40 pulls per hour, and only paid authenticated accounts are exempt from pull rate limits.
To check your current limit state, you first need to obtain a token:
For anonymous pulls:
TOKEN=$(curl "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
For authenticated pulls:
TOKEN=$(curl --user 'username:password' "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
and then make a request and inspect the response headers:
curl --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest
You should see the following headers (excerpt):
date: Sun, 26 Jan 2025 08:17:36 GMT
strict-transport-security: max-age=31536000
ratelimit-limit: 100;w=21600
ratelimit-remaining: 100;w=21600
docker-ratelimit-source: <hidden>
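For convenience, the two ratelimit headers can be condensed into a one-line summary with a small helper. This is a sketch: `parse_ratelimit` is a hypothetical name, and it simply reads raw HTTP response headers from stdin.

```shell
# Hypothetical helper: summarize Docker Hub pull-limit headers.
# Reads raw HTTP response headers on stdin and prints
# "<remaining>/<limit> pulls left (window <seconds>s)".
parse_ratelimit() {
  tr -d '\r' | awk -F': ' '
    tolower($1) == "ratelimit-limit"     { split($2, a, ";"); limit = a[1] }
    tolower($1) == "ratelimit-remaining" { split($2, a, ";"); rem = a[1]
                                           split(a[2], w, "="); win = w[2] }
    END { printf "%s/%s pulls left (window %ss)\n", rem, limit, win }
  '
}

# Pipe the HEAD request from above into it:
# curl -s --head -H "Authorization: Bearer $TOKEN" \
#   https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | parse_ratelimit
```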
Possible solutions
- A DaemonSet that applies a predefined configuration to each of your k8s nodes
- For AWS-based clusters, an EC2 Launch Template and its user data input
- For AWS-based clusters, AWS Systems Manager and the aws:runShellScript action
- Updating the config manually; however, in most cases cluster nodes are short-lived due to the autoscaler (use the shell script from the DaemonSet below; a containerd service restart is not required)
In this guide, we will define a DaemonSet in AWS EKS with containerd (containerd-1.7.11-1.amzn2.0.1.x86_64) and Kubernetes 1.30.
1. Check your containerd config at /etc/containerd/config.toml.
2. Make sure that the following setting is present: config_path = "/etc/containerd/certs.d"
3. Containerd registry host namespace configuration is stored per registry; for Docker Hub it lives at /etc/containerd/certs.d/docker.io/hosts.toml.
4. The following manifest adds the required files and folders. Existing and future nodes will automatically be configured with the mirror by the DaemonSet. The initContainer updates the node’s configuration, and the wait container is required to keep the DaemonSet active on the nodes. Use taints and tolerations, change the priority class, or adjust other fields to fit your requirements.
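For reference, the registry section of /etc/containerd/config.toml typically looks like this on containerd 1.7 (a sketch; the exact plugin key may differ between containerd versions):

```toml
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"
```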
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: containerd-registry-mirror
  namespace: kube-system
  labels:
    name: containerd-registry-mirror
    cluster: clustername
    otherlabel: labelvalue
spec:
  selector:
    matchLabels:
      name: containerd-registry-mirror
  template:
    metadata:
      labels:
        name: containerd-registry-mirror
    spec:
      nodeSelector:
        eks.amazonaws.com/nodegroup: poolname
      priorityClassName: system-node-critical
      initContainers:
        - name: change-hosts-file-init
          image: alpine:3.21
          imagePullPolicy: IfNotPresent
          command:
            - /bin/sh
            - -c
            - |
              #!/bin/sh
              set -euo pipefail
              TARGET="/etc/containerd/certs.d/docker.io"
              cat << EOF > "$TARGET/hosts.toml"
              server = "https://registry-1.docker.io"
              [host."https://<your private registry>"]
                capabilities = ["pull", "resolve"]
              EOF
          resources:
            limits:
              cpu: 100m
              memory: 200Mi
            requests:
              cpu: 50m
              memory: 100Mi
          volumeMounts:
            - mountPath: /etc/containerd/certs.d/docker.io/
              name: docker-mirror
      containers:
        - name: wait
          image: registry.k8s.io/pause:3.9
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 50m
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 20Mi
      volumes:
        - name: docker-mirror
          hostPath:
            path: /etc/containerd/certs.d/docker.io/
            type: DirectoryOrCreate
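Before deploying, you can dry-run the init container’s script locally to see the exact hosts.toml it will generate — here TARGET points at a temporary directory instead of the real node path:

```shell
# Local dry run of the DaemonSet init script: write hosts.toml into a
# temp dir instead of /etc/containerd/certs.d/docker.io on the node.
TARGET="$(mktemp -d)"
cat << EOF > "$TARGET/hosts.toml"
server = "https://registry-1.docker.io"

[host."https://<your private registry>"]
  capabilities = ["pull", "resolve"]
EOF
cat "$TARGET/hosts.toml"
```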
5. Apply the manifest to your cluster: kubectl apply -f containerd-registry-mirror.yaml, and then monitor the DaemonSet status: kubectl get daemonset containerd-registry-mirror -n kube-system
6. To double-check, SSH to a node and inspect the content of /etc/containerd/certs.d/docker.io/hosts.toml
7. If you need to set up a default mirror for ALL registries, use the following path instead: /etc/containerd/certs.d/_default/hosts.toml
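A minimal sketch of such a catch-all hosts.toml, reusing the same placeholder mirror as above:

```toml
# /etc/containerd/certs.d/_default/hosts.toml
# Fallback mirror applied to any registry without its own hosts.toml.
[host."https://<your private registry>"]
  capabilities = ["pull", "resolve"]
```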
Hope it’s helpful for anyone facing the same issue.
Got questions or need help? Drop a comment or share your experience.
Have a smooth containerization!