Solving Docker Hub rate limits in Kubernetes with containerd registry mirrors

When running Kubernetes workloads in AWS EKS (or any other environment), you may encounter the Docker Hub rate limit error:

429 Too Many Requests – Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading

Why are Docker Hub rate limits a problem? Docker Hub imposes strict pull limits: anonymous users get up to 10 pulls per hour, authenticated users up to 40 pulls per hour, and only paid authenticated users have no pull rate limit.

To check your current limit state, you first need to obtain a token:

For anonymous pulls:

TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

For authenticated pulls:

TOKEN=$(curl -s --user 'username:password' "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

and then make a HEAD request to read the rate-limit headers:

curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest

You should see the following headers (excerpt):


date: Sun, 26 Jan 2025 08:17:36 GMT
strict-transport-security: max-age=31536000
ratelimit-limit: 100;w=21600
ratelimit-remaining: 100;w=21600
docker-ratelimit-source: <hidden>
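If you check these headers regularly, a small helper can pull the numeric values out of them. The function below is hypothetical (not part of any registry API); it reads header text from stdin, so the demo runs on canned data without touching the network — for a real check, pipe the output of the curl --head command above into it:

```shell
# Hypothetical helper: extract the numeric value from a "ratelimit-*" header.
# Header form: "ratelimit-remaining: 100;w=21600", where w is the window
# in seconds (21600 s = 6 h).
parse_ratelimit() {
  grep -i "^ratelimit-$1:" | cut -d' ' -f2 | cut -d';' -f1
}

# Canned sample headers so the demo works offline
HEADERS='ratelimit-limit: 100;w=21600
ratelimit-remaining: 37;w=21600'

echo "$HEADERS" | parse_ratelimit limit       # prints 100
echo "$HEADERS" | parse_ratelimit remaining   # prints 37
```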

Possible solutions

  • A DaemonSet that applies a predefined configuration to each of your Kubernetes nodes
  • For AWS-based clusters, an EC2 launch template and its user data input
  • For AWS-based clusters, AWS Systems Manager and the aws:runShellScript action
  • Manual updates are possible, but in most cases cluster nodes are short-lived due to the autoscaler (use the shell script from the DaemonSet below; a containerd service restart is not required)
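For the launch-template route, the user data could carry a script along these lines. This is a sketch: mirror.example.com is a placeholder for your registry, and CERTS_DIR defaults to a local directory here so it can be dry-run safely — on a real node, set CERTS_DIR=/etc/containerd/certs.d:

```shell
#!/bin/sh
# Hypothetical EC2 user-data sketch. mirror.example.com is a placeholder;
# CERTS_DIR defaults to a local dir for a safe dry-run. On a real node,
# set CERTS_DIR=/etc/containerd/certs.d (no containerd restart is needed).
set -eu
CERTS_DIR="${CERTS_DIR:-./certs.d}"
mkdir -p "$CERTS_DIR/docker.io"
cat > "$CERTS_DIR/docker.io/hosts.toml" <<'EOF'
server = "https://registry-1.docker.io"

[host."https://mirror.example.com"]
  capabilities = ["pull", "resolve"]
EOF
```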

In this guide, we will define a DaemonSet in AWS EKS with containerd (containerd-1.7.11-1.amzn2.0.1.x86_64) and Kubernetes 1.30.

  1. Check your containerd config at /etc/containerd/config.toml and make sure that config_path = "/etc/containerd/certs.d" is present (under the [plugins."io.containerd.grpc.v1.cri".registry] section)
  2. Containerd registry host namespace configuration is stored at /etc/containerd/certs.d/<registry>/hosts.toml (for Docker Hub, /etc/containerd/certs.d/docker.io/hosts.toml)
  3. The following manifest adds the required files and folders. Existing and future nodes are automatically configured with the mirror by the DaemonSet. An initContainer updates the node’s configuration, and a wait container keeps the DaemonSet active on the nodes. Use taints and tolerations, change the priority class, or adjust other fields to fit your requirements.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    name: containerd-registry-mirror
    cluster: clustername
    otherlabel: labelvalue
  name: containerd-registry-mirror
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: containerd-registry-mirror
  template:
    metadata:
      labels:
        name: containerd-registry-mirror
    spec:
      nodeSelector:
        eks.amazonaws.com/nodegroup: poolname
      priorityClassName: system-node-critical
      initContainers:
      - image: alpine:3.21
        imagePullPolicy: IfNotPresent
        name: change-hosts-file-init
        command:
          - /bin/sh
          - -c
          - |
            set -eu
            TARGET="/etc/containerd/certs.d/docker.io"
            mkdir -p "$TARGET"
            cat << EOF > "$TARGET/hosts.toml"
            server = "https://registry-1.docker.io"

            [host."https://<your private registry>"]
              capabilities = ["pull", "resolve"]
            EOF
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 50m
            memory: 100Mi
        volumeMounts:
        - mountPath: /etc/containerd/certs.d/docker.io/
          name: docker-mirror
      containers:
      - name: wait
        image: registry.k8s.io/pause:3.9
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 50m
            memory: 100Mi
          requests:
            cpu: 10m
            memory: 20Mi
      volumes:
      - name: docker-mirror
        hostPath:
          path: /etc/containerd/certs.d/docker.io/

4. Apply the manifest to your cluster: kubectl apply -f containerd-registry-mirror.yaml, and then monitor the DaemonSet status: kubectl get daemonset containerd-registry-mirror -n kube-system

5. To double-check, SSH to a node and inspect the content of /etc/containerd/certs.d/docker.io/hosts.toml

6. If you need to set up a default mirror for ALL registries, use the path /etc/containerd/certs.d/_default/hosts.toml
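A catch-all configuration could look like this (a sketch; mirror.example.com is a placeholder for your registry):

```toml
# Hypothetical /etc/containerd/certs.d/_default/hosts.toml; applies to any
# registry that does not have its own directory under certs.d
[host."https://mirror.example.com"]
  capabilities = ["pull", "resolve"]
```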

Hope this helps anyone facing the same issue.

Got questions or need help? Drop a comment or share your experience

Have a smooth containerization!

How to enable ACL in Kafka running on Kubernetes


This brief article is intended for individuals encountering challenges with ACL configuration in Kafka, regardless of whether it is deployed on Kubernetes or as a stand-alone setup.

Assume you have Kafka configured with both internal and external listeners: the internal one may be left unsecured, while external access is protected through a SASL method such as PLAIN, SSL, or SCRAM. Each SASL authentication mechanism requires a valid JAAS configuration file. For PLAIN, the configuration file looks like this:

KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin"
  user_client="client123"
}; 
Client {};

When an application or administrator attempts to access Kafka through an external listener, they are required to provide a username and password.
This is why enabling ACL in Kafka becomes necessary — to offer authorization in addition to the authentication provided by SASL.
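For illustration, a client connecting through the external listener could use properties like the following. This is a sketch: it assumes a SASL_PLAINTEXT external listener and reuses the user_client credentials from the JAAS file above; the file name client.properties is arbitrary:

```properties
# Hypothetical client.properties for the external listener
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="client" \
  password="client123";
```

It can then be passed to the standard CLI tools, e.g. kafka-console-producer.sh --bootstrap-server <ext listener>:<port> --topic topicname --producer.config client.properties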

To enable ACL, you just need to slightly change your Kafka brokers configuration. Add the following environment variables:

KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "false"
KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
KAFKA_SUPER_USERS: User:ANONYMOUS;User:admin

KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND is quite straightforward: when no ACL is configured for a group or topic, access is either allowed (true) or denied (false). Since I’ve configured multiple brokers that use internal non-secured listeners for interconnection, I set this environment variable to false to secure external access.

The true value makes sense only during the ACL configuration phase, because once ACLs are enabled, all unauthorized connections on all listeners are denied by default. Additionally, I added the ANONYMOUS user to KAFKA_SUPER_USERS so that brokers can connect to each other even with ACLs enabled; internal traffic relies on the pod-to-pod network.
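As an illustration, the listener layout assumed throughout this setup might look like this in broker configuration (the listener names and addresses are examples, not taken from a specific deployment):

```properties
# Example only: internal listener stays PLAINTEXT for broker-to-broker
# traffic; the external listener requires SASL/PLAIN
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
inter.broker.listener.name=INTERNAL
sasl.enabled.mechanisms=PLAIN
```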

Please note that this setup is suitable for development environments. In a production environment, I would recommend using SASL_SSL for inter-broker authentication and end-to-end SSL for the entire Kubernetes environment.

Note: if your Kafka cluster is KRaft-based, use org.apache.kafka.metadata.authorizer.StandardAuthorizer. If your Kafka is not running on Kubernetes, add the corresponding settings to server.properties (for instance, authorizer.class.name=kafka.security.authorizer.AclAuthorizer).
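In server.properties terms, the three environment variables above map to:

```properties
allow.everyone.if.no.acl.found=false
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
super.users=User:ANONYMOUS;User:admin
```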

Then, on the first Kafka broker you need to create your ACL rules:

/opt/kafka/bin/kafka-acls.sh --authorizer kafka.security.authorizer.AclAuthorizer \
--authorizer-properties zookeeper.connect=zookeeper:2181 \
--add --allow-principal User:client --operation WRITE \
--operation DESCRIBE --operation CREATE --topic topicname

Check that ACL rules have been created:

/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9093 --list

Verify access to topic using kcat/kafkacat tool (change security protocol and mechanism if necessary):

docker run -it --network=host edenhill/kcat:1.7.1 -L \
-b <ext listener>:<port> -X security.protocol=SASL_PLAINTEXT \
-X sasl.mechanism=PLAIN -X sasl.username=username \
-X sasl.password=pass -t topicname

ACL rules are stored in Zookeeper, so it’s not necessary to repeat these steps on other brokers. If you are using a Kafka operator, the steps might differ slightly.

The list of supported ACL operations can be found here: https://github.com/apache/kafka/blob/24f664aa1621dc70794fd6576ac99547e41d2113/clients/src/main/java/org/apache/kafka/common/acl/AclOperation.java#L44

If you have any questions, the comments are open. The gist for this post is here: https://gist.github.com/rlevchenko/8811080c7bbeb060b0a2c3f2a90c9ee9