How to enable ACL in Kafka running on Kubernetes


This brief article is intended for individuals encountering challenges with ACL configuration in Kafka, regardless of whether it is deployed on Kubernetes or as a stand-alone setup.

Assume you have Kafka configured with both internal and external listeners, where the internal listener may be left unsecured while external access is protected by SASL (for example PLAIN or SCRAM) or by SSL. Each SASL mechanism requires a valid JAAS configuration file. For PLAIN, the configuration file looks as follows:

KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin"
  user_client="client123"
}; 
Client {};
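
The broker only picks this file up if it is referenced in the JVM options. In containerized images this is usually done through an environment variable; a sketch, assuming the JAAS file is mounted at /etc/kafka/kafka_jaas.conf (the path is an assumption for this example):

```yaml
# Assumed mount path for the JAAS file shown above
KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/kafka_jaas.conf"
```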

When an application or administrator attempts to access Kafka through an external listener, they are required to provide a username and password.
This is why enabling ACL in Kafka becomes necessary — to offer authorization in addition to the authentication provided by SASL.

To enable ACLs, you only need a small change to your Kafka brokers' configuration. Add the following environment variables:

KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "false"
KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
KAFKA_SUPER_USERS: User:ANONYMOUS;User:admin
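
In a Kubernetes manifest, these variables land in the broker container's env section. A minimal sketch (container and image names are placeholders):

```yaml
# Fragment of a broker StatefulSet spec; names are placeholders
containers:
  - name: kafka
    image: my-kafka-image:latest
    env:
      - name: KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND
        value: "false"
      - name: KAFKA_AUTHORIZER_CLASS_NAME
        value: kafka.security.authorizer.AclAuthorizer
      - name: KAFKA_SUPER_USERS
        value: User:ANONYMOUS;User:admin
```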

KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND is quite straightforward: when no ACL matches a resource (a topic or a group, for example), access is allowed if the value is true and denied if it is false. Since I configured multiple brokers that interconnect over internal, non-secured listeners, I set this variable to false to secure external access.

Setting it to true makes sense only during the ACL configuration phase, because the authorizer denies all unauthorized connections on all listeners by default. Additionally, I added the ANONYMOUS user to KAFKA_SUPER_USERS so that brokers can connect to each other even with ACLs enabled; internal traffic relies on the pod-to-pod network.

Please note that this setup is suitable for development environments. In production, I would recommend SASL_SSL for inter-broker authentication and end-to-end SSL across the entire Kubernetes environment.

Note: if your Kafka cluster is KRaft-based, use org.apache.kafka.metadata.authorizer.StandardAuthorizer. If your Kafka is not running on Kubernetes, add the equivalent settings to server.properties (for instance, authorizer.class.name=kafka.security.authorizer.AclAuthorizer).
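
For reference, the server.properties equivalents of the three environment variables are:

```properties
allow.everyone.if.no.acl.found=false
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
super.users=User:ANONYMOUS;User:admin
```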

Then, on the first Kafka broker, create your ACL rules:

/opt/kafka/bin/kafka-acls.sh --authorizer kafka.security.authorizer.AclAuthorizer \
--authorizer-properties zookeeper.connect=zookeeper:2181 \
--add --allow-principal User:client --operation WRITE \
--operation DESCRIBE --operation CREATE --topic topicname
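
A consuming application typically also needs READ on the topic and on its consumer group. A sketch along the same lines (the group name consumer-group is a placeholder):

```shell
/opt/kafka/bin/kafka-acls.sh --authorizer kafka.security.authorizer.AclAuthorizer \
--authorizer-properties zookeeper.connect=zookeeper:2181 \
--add --allow-principal User:client --operation READ \
--topic topicname --group consumer-group
```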

Check that ACL rules have been created:

/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9093 --list

Verify access to the topic using the kcat/kafkacat tool (change the security protocol and mechanism if necessary):

docker run -it --network=host edenhill/kcat:1.7.1 -L \
-b <ext listener>:<port> -X security.protocol=SASL_PLAINTEXT \
-X sasl.mechanism=PLAIN -X sasl.username=username \
-X sasl.password=pass -t topicname
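
If the metadata listing succeeds, you can also exercise the WRITE ACL by producing a test message with kcat in producer mode (-P); credentials and placeholders match the command above:

```shell
echo "acl test" | docker run -i --network=host edenhill/kcat:1.7.1 -P \
-b <ext listener>:<port> -X security.protocol=SASL_PLAINTEXT \
-X sasl.mechanism=PLAIN -X sasl.username=username \
-X sasl.password=pass -t topicname
```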

ACL rules are stored in Zookeeper, so it’s not necessary to repeat steps on other brokers. If you are using Kafka operator, steps might be slightly different.

List of supported ACL operations can be found here: https://github.com/apache/kafka/blob/24f664aa1621dc70794fd6576ac99547e41d2113/clients/src/main/java/org/apache/kafka/common/acl/AclOperation.java#L44

If you have any questions, the comments are open. The gist for this post is here: https://gist.github.com/rlevchenko/8811080c7bbeb060b0a2c3f2a90c9ee9

Application Gateway: Incorrect certificate chain or order

SSL management is always a pain. We should check SSL certificates periodically or implement a solution that handles all management tasks for us (Let's Encrypt and cert-manager, for instance). When there is an issue with a certificate, it is almost always a cause of downtime, so we have to find a solution as quickly as possible. Furthermore, all websites should meet requirements to pass tests and get a "green" mark from Mozilla Observatory or SSL Shopper's checker, for example. In this post, we'll discuss two issues you may face during an SSL check: "incorrect certificate chain" and "incorrect order. contains anchor".

Please note that my setup includes Azure Application Gateway and Azure Kubernetes Service. The following steps are general; however, they may require different certificate formats or signature algorithms. Check your environment's requirements beforehand.

  • In my case, it was a wrong intermediate certificate provided by GoDaddy. So I went to the GoDaddy site, clicked on the certificate, and copied the intermediate certificate to a file named intermediate.cer
Godaddy.com > Intermediate certificate
  • Make sure you have openssl on your computer and create a new pfx that contains a certificate, private key and intermediate certificate:
    openssl pkcs12 -export -out appgw-cert.pfx -inkey .\pk.key -in .\ssl.crt -certfile .\intermediate.cer
  • If you have an old pfx with a valid certificate and key, run these commands instead:
    openssl pkcs12 -in old.pfx -nocerts -nodes -out pk.key
    openssl pkcs12 -in old.pfx -clcerts -nokeys -out cert.crt
    openssl pkcs12 -export -out new.pfx -inkey .\pk.key -in .\cert.crt -certfile .\intermediate.cer
  • Type password for the pfx, and then update azure application gateway if needed:
    $appGW = Get-AzApplicationGateway -Name "ApplicationGatewayName" -ResourceGroupName "ResourceGroupName"
    $password = ConvertTo-SecureString $passwordPlainString -AsPlainText -Force
    $cert = Set-AzApplicationGatewaySslCertificate -ApplicationGateway $AppGW -Name "CertName" -CertificateFile "D:\certname.pfx" -Password $password
  • Also, import the pfx certificate into your personal certificate store and check that the correct chain is used, or verify the already-updated certificate on ssllabs.com.
ssllabs.com and certificate chain
  • …and finally my certificate is “green”
ssllabs.com and overall rating
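
To see why the intermediate matters, here is a self-contained sketch that builds a throwaway root → intermediate → leaf chain with openssl, verifies it, and bundles it into a pfx the same way as the appgw-cert.pfx command above. All names (DemoRootCA, demo.example.com, demo.pfx, the pass:demo password) are made up for the demo:

```shell
#!/bin/sh
# Throwaway demo chain: root CA -> intermediate -> leaf (all names are made up)
set -e
tmp=$(mktemp -d); cd "$tmp"

# 1. Self-signed root CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=DemoRootCA" -days 1

# 2. Intermediate CA signed by the root (must carry CA:TRUE)
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
  -subj "/CN=DemoIntermediate"
printf "basicConstraints=CA:TRUE" > int.ext
openssl x509 -req -in int.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out int.crt -days 1 -extfile int.ext

# 3. Leaf certificate signed by the intermediate
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
  -subj "/CN=demo.example.com"
openssl x509 -req -in leaf.csr -CA int.crt -CAkey int.key -CAcreateserial \
  -out leaf.crt -days 1

# 4. The leaf only verifies when the intermediate is supplied
openssl verify -CAfile ca.crt -untrusted int.crt leaf.crt   # prints "leaf.crt: OK"

# 5. Bundle leaf + key + intermediate into a pfx, as in the appgw-cert.pfx command
openssl pkcs12 -export -out demo.pfx -inkey leaf.key -in leaf.crt \
  -certfile int.crt -passout pass:demo
```

On a live endpoint, `openssl s_client -connect host:443 -showcerts` shows the chain actually served, so an out-of-order chain or an included trust anchor is easy to spot in the s:/i: subject and issuer lines.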