How to enable ACL in Kafka running on Kubernetes



This brief article is intended for individuals encountering challenges with ACL configuration in Kafka, regardless of whether it is deployed on Kubernetes or as a stand-alone setup.

Assume you have Kafka configured with both internal and external listeners: the internal one may be left unsecured, while external access is protected by a SASL mechanism such as PLAIN or SCRAM, optionally over SSL. Each SASL authentication mechanism requires a valid JAAS configuration file. For PLAIN, the configuration file looks like this:

KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin"
  user_client="client123"
}; 
Client {};

When an application or administrator attempts to access Kafka through an external listener, they are required to provide a username and password.
This is why enabling ACL in Kafka becomes necessary — to offer authorization in addition to the authentication provided by SASL.
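For reference, a client connecting through the external listener supplies credentials matching the user_client entry in the JAAS file above (user "client", password "client123"). A minimal client configuration sketch, with the file name and listener assumed, could be:

# client.properties — a sketch, assuming the SASL/PLAIN listener above
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="client" \
  password="client123";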

To enable ACL, you only need a small change to your Kafka brokers' configuration. Add the following environment variables:

KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "false"
KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
KAFKA_SUPER_USERS: User:ANONYMOUS;User:admin

KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND is quite straightforward: it controls whether access is allowed or denied when no ACL is configured for a resource such as a topic or a group. Since my brokers use the unsecured internal listeners to talk to each other, I set this environment variable to false to lock down external access.

Setting it to true only makes sense during the ACL configuration phase, because once the authorizer is enabled it denies, by default, all requests without a matching ACL on every listener. I also added the ANONYMOUS user to KAFKA_SUPER_USERS so that brokers can still connect to each other when ACL is enabled; internal traffic relies on the pod-to-pod network.

Please note that this setup is suitable for development environments. In production, I would recommend using SASL_SSL for inter-broker authentication and end-to-end SSL across the entire Kubernetes environment.

Note: if your Kafka cluster is KRaft-based, use org.apache.kafka.metadata.authorizer.StandardAuthorizer instead. If your Kafka is not running on Kubernetes, add the equivalent settings to server.properties (for instance, authorizer.class.name=kafka.security.authorizer.AclAuthorizer).
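For a non-Kubernetes, KRaft-based broker, the equivalent server.properties fragment would look roughly like this (a sketch, not a drop-in config):

# server.properties — sketch for a KRaft-based broker
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
allow.everyone.if.no.acl.found=false
super.users=User:ANONYMOUS;User:admin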

Then, on the first Kafka broker you need to create your ACL rules:

/opt/kafka/bin/kafka-acls.sh --authorizer kafka.security.authorizer.AclAuthorizer \
--authorizer-properties zookeeper.connect=zookeeper:2181 \
--add --allow-principal User:client --operation WRITE \
--operation DESCRIBE --operation CREATE --topic topicname
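The rule above covers producing. A consumer additionally needs READ on both the topic and its consumer group; a sketch of that command, with the group name groupname assumed:

/opt/kafka/bin/kafka-acls.sh --authorizer kafka.security.authorizer.AclAuthorizer \
--authorizer-properties zookeeper.connect=zookeeper:2181 \
--add --allow-principal User:client --operation READ \
--topic topicname --group groupname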

Check that ACL rules have been created:

/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9093 --list

Verify access to the topic using the kcat/kafkacat tool (change the security protocol and mechanism if necessary):

docker run -it --network=host edenhill/kcat:1.7.1 -L \
-b <ext listener>:<port> -X security.protocol=SASL_PLAINTEXT \
-X sasl.mechanism=PLAIN -X sasl.username=username \
-X sasl.password=pass -t topicname
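The -L flag above only lists cluster metadata. To actually consume and confirm the READ path works, the same kcat image can be run in consumer mode (-C), with the listener, credentials, and topic assumed as above:

docker run -it --network=host edenhill/kcat:1.7.1 -C \
-b <ext listener>:<port> -X security.protocol=SASL_PLAINTEXT \
-X sasl.mechanism=PLAIN -X sasl.username=username \
-X sasl.password=pass -t topicname -o beginning -e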

ACL rules are stored in ZooKeeper, so there is no need to repeat these steps on the other brokers. If you are using a Kafka operator, the steps might differ slightly.

List of supported ACL operations can be found here: https://github.com/apache/kafka/blob/24f664aa1621dc70794fd6576ac99547e41d2113/clients/src/main/java/org/apache/kafka/common/acl/AclOperation.java#L44

If you have any questions, the comments are open. The gist for this post is here: https://gist.github.com/rlevchenko/8811080c7bbeb060b0a2c3f2a90c9ee9
