Technical Review: Practical Automation with PowerShell

It’s becoming increasingly difficult to find a standout book on PowerShell in today’s crowded market. I’m sure everyone is familiar with such books as:

  • “Learn PowerShell in a Month of Lunches” (best for newbies)
  • “Learn PowerShell Scripting in a Month of Lunches” (best for learners)
  • “Windows PowerShell in Action” (best handbook)

Let’s assume you have read the first two and are looking for the next one to help you master your PowerShell skills, get more practice, and gain insights. Allow me to introduce “Practical Automation with PowerShell” by Matthew Dowst.

To my surprise, this book became my favorite (despite my having read several bestsellers, some of which are mentioned above), and I thoroughly enjoyed both reading and reviewing it. The main reason is its comprehensive table of contents, which addresses everything one encounters on a daily basis: automation of clouds, on-premises servers, databases, and other essential tasks.

Table of contents:
  • 1. POWERSHELL AUTOMATION
  • 2. GET STARTED AUTOMATING
  • 3. SCHEDULING AUTOMATION SCRIPTS
  • 4. HANDLING SENSITIVE DATA
  • 5. POWERSHELL REMOTE EXECUTION
  • 6. MAKING ADAPTABLE AUTOMATIONS
  • 7. WORKING WITH SQL
  • 8. CLOUD-BASED AUTOMATION
  • 9. WORKING OUTSIDE OF POWERSHELL
  • 10. AUTOMATION CODING BEST PRACTICES
  • 11. END-USER SCRIPTS AND FORMS
  • 12. SHARING SCRIPTS AMONG A TEAM
  • 13. TESTING YOUR SCRIPTS
  • 14. MAINTAINING YOUR CODE
  • APPENDIX A: DEVELOPMENT ENVIRONMENT SET UP

The book teaches you how to design, write, test, and maintain your scripts. If you work as part of a team, this book is also for you: the “Handling sensitive data” and “Sharing scripts among a team” chapters are awesome and extremely helpful. Additionally, it covers integration with Jenkins, Azure Automation, and Azure Functions. Consequently, after reading the book, you will be able to run automations in mixed environments with different sets of services.

I highly recommend this book to anyone passionate about PowerShell. However, if you’re just starting out, I suggest beginning with the “Month of Lunches” books before diving into this one to refine your skills and develop an automation engineer’s mindset.

Kudos to the author for an excellent book!

How to enable ACL in Kafka running on Kubernetes


This brief article is intended for individuals encountering challenges with ACL configuration in Kafka, regardless of whether it is deployed on Kubernetes or as a stand-alone setup.

Assume you have Kafka configured with both internal and external listeners, where the internal one may be left unsecured while external access is safeguarded through SSL or a SASL mechanism such as PLAIN or SCRAM. Each SASL authentication mechanism requires a valid JAAS configuration. For PLAIN, the configuration file looks like this:

KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin"
  user_client="client123";
};
Client {};
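
Here, username and password are the credentials the broker itself presents when connecting over SASL, while each user_<name>="<password>" entry defines a user that clients may authenticate as (so the entry above defines the user client with password client123). Brokers usually pick this file up through the java.security.auth.login.config JVM property; a minimal sketch, with an illustrative file path:

# point the broker JVM at the JAAS file (path is illustrative)
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_jaas.conf"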

When an application or administrator attempts to access Kafka through an external listener, they are required to provide a username and password.
This is why enabling ACL in Kafka becomes necessary — to offer authorization in addition to the authentication provided by SASL.
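
For illustration, a client could supply those credentials with a configuration like the following (a minimal sketch assuming SASL_PLAINTEXT on the external listener and the user_client entry from the JAAS file above):

# client.properties (illustrative)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="client" \
  password="client123";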

To enable ACL, you just need to make a small change to your Kafka brokers’ configuration. Add the following environment variables:

KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "false"
KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
KAFKA_SUPER_USERS: User:ANONYMOUS;User:admin
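
On Kubernetes, these variables go into the broker container spec; a minimal sketch, assuming an image that maps KAFKA_* environment variables to broker properties (Confluent- or Bitnami-style):

# excerpt from a broker StatefulSet/Deployment container spec (illustrative)
env:
  - name: KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND
    value: "false"
  - name: KAFKA_AUTHORIZER_CLASS_NAME
    value: "kafka.security.authorizer.AclAuthorizer"
  - name: KAFKA_SUPER_USERS
    value: "User:ANONYMOUS;User:admin"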

KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND is quite straightforward: if no ACL is defined for a resource such as a topic or a group, access is allowed when the value is true and denied when it is false. Since I’ve configured multiple brokers to use the internal, non-secured listeners for inter-broker communication, I set this environment variable to false to secure external access.

Setting it to true makes sense only during the ACL configuration phase, because once the authorizer is enabled, it denies by default any connection without a matching ACL on all listeners. Additionally, I added the ANONYMOUS user to KAFKA_SUPER_USERS so that the brokers can still connect to each other when ACL is enabled, since internal traffic relies on the pod-to-pod network.

Please note that this setup is suitable for development environments. In a production environment, I would recommend using SASL_SSL for inter-broker authentication and end-to-end SSL for the entire Kubernetes environment.

Note: If your Kafka cluster is KRaft-based, use org.apache.kafka.metadata.authorizer.StandardAuthorizer instead. If your Kafka is not running on Kubernetes, add the equivalent settings to server.properties (for instance, authorizer.class.name=kafka.security.authorizer.AclAuthorizer); the full set is sketched below.
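
For reference, the server.properties equivalents of the environment variables above:

# server.properties (ZooKeeper-based cluster)
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
super.users=User:ANONYMOUS;User:admin

# for a KRaft-based cluster, use instead:
# authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer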

Then, on the first Kafka broker, you need to create your ACL rules:

/opt/kafka/bin/kafka-acls.sh --authorizer kafka.security.authorizer.AclAuthorizer \
--authorizer-properties zookeeper.connect=zookeeper:2181 \
--add --allow-principal User:client --operation WRITE \
--operation DESCRIBE --operation CREATE --topic topicname
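
If a consumer also needs to read from that topic, it will additionally need READ on both the topic and its consumer group; for example (the group name is illustrative):

/opt/kafka/bin/kafka-acls.sh --authorizer kafka.security.authorizer.AclAuthorizer \
--authorizer-properties zookeeper.connect=zookeeper:2181 \
--add --allow-principal User:client --operation READ \
--topic topicname --group mygroup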

Check that ACL rules have been created:

/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9093 --list
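
If the listener behind --bootstrap-server is itself SASL-protected, the command may need admin credentials passed via --command-config (admin.properties here is a hypothetical file containing the same security.protocol, sasl.mechanism, and sasl.jaas.config settings as the client example above, but with the admin credentials):

/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9093 \
--command-config /tmp/admin.properties --list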

Verify access to the topic using the kcat/kafkacat tool (change the security protocol and mechanism if necessary):

docker run -it --network=host edenhill/kcat:1.7.1 -L \
-b <ext listener>:<port> -X security.protocol=SASL_PLAINTEXT \
-X sasl.mechanism=PLAIN -X sasl.username=username \
-X sasl.password=pass -t topicname
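
To also confirm the WRITE permission granted earlier, you can try producing a test message with kcat (client/client123 are the illustrative credentials from the JAAS example; adjust them to your setup):

echo "hello" | docker run -i --network=host edenhill/kcat:1.7.1 \
-b <ext listener>:<port> -X security.protocol=SASL_PLAINTEXT \
-X sasl.mechanism=PLAIN -X sasl.username=client \
-X sasl.password=client123 -t topicname -P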

ACL rules are stored in Zookeeper, so it’s not necessary to repeat these steps on the other brokers. If you are using a Kafka operator, the steps might be slightly different.

A list of the supported ACL operations can be found here: https://github.com/apache/kafka/blob/24f664aa1621dc70794fd6576ac99547e41d2113/clients/src/main/java/org/apache/kafka/common/acl/AclOperation.java#L44

If you have any questions, the comments are open. The gist for this post is here: https://gist.github.com/rlevchenko/8811080c7bbeb060b0a2c3f2a90c9ee9