Technical Review: Practical Automation with PowerShell

It’s becoming increasingly difficult to find a standout book on PowerShell in today’s crowded market. I’m sure everyone is familiar with such books as:

  • “Learn PowerShell in a Month of Lunches” (best for newbies)
  • “Learn PowerShell Scripting in a Month of Lunches” (best for learners)
  • “Windows PowerShell in Action” (best handbook)

Let’s assume you have read the first two and are looking for the next one to master your PowerShell skills, get more practice, and gain deeper insights. Allow me to introduce “Practical Automation with PowerShell” by Matthew Dowst.

To my surprise, this book became my favorite (despite my having read several bestsellers, some of which are mentioned above), and I thoroughly enjoyed both reading and reviewing it. The main reason is its comprehensive table of contents, which addresses everything one encounters on a daily basis: automation of clouds, on-premises servers, databases, and other essential tasks.

Table of contents:
  • 1. POWERSHELL AUTOMATION
  • 2. GET STARTED AUTOMATING
  • 3. SCHEDULING AUTOMATION SCRIPTS
  • 4. HANDLING SENSITIVE DATA
  • 5. POWERSHELL REMOTE EXECUTION
  • 6. MAKING ADAPTABLE AUTOMATIONS
  • 7. WORKING WITH SQL
  • 8. CLOUD-BASED AUTOMATION
  • 9. WORKING OUTSIDE OF POWERSHELL
  • 10. AUTOMATION CODING BEST PRACTICES
  • 11. END-USER SCRIPTS AND FORMS
  • 12. SHARING SCRIPTS AMONG A TEAM
  • 13. TESTING YOUR SCRIPTS
  • 14. MAINTAINING YOUR CODE
  • APPENDIX A: DEVELOPMENT ENVIRONMENT SET UP

The book teaches you how to design, write, test, and maintain your scripts. If you work as part of a team, this book is also for you: the “Handling sensitive data” and “Sharing scripts among a team” chapters are excellent and extremely helpful. It also covers integration with Jenkins, Azure Automation, and Azure Functions, so after reading the book you will be able to run automations in mixed environments with different sets of services.

I highly recommend this book to anyone passionate about PowerShell. However, if you’re just starting out, I suggest beginning with the “Month of Lunches” books before diving into this one to refine your skills and develop an automation engineer’s mindset.

Kudos to the author for excellent work!

Create diagrams as code in Python

In the previous post, we explored my custom ClickHouse backup agent, built upon the clickhouse-backup tool, logrotate, cron, and Bash scripts. I also shared all the resources needed to test the agent on your local machine using Docker and Docker Compose, or to deploy it in a production environment. Let’s update the agent’s repo with some Python code.

You may be familiar with the main GitOps principle: use Git as the single source of truth; store your application and infrastructure configuration in a Git repository along with the application code. Kubernetes manifests (yaml), Terraform (tf), Docker and Compose files, Jenkinsfiles, and even diagrams are good examples of files kept in such repositories. But how should diagrams be represented? As png, vsd, or jpeg files? Let’s pretend we’re developers and draw diagrams using code.

The diagrams project brings this approach to life. I opted for Diagrams (mingrammer) because it’s free and built on Python and Graphviz, a widely used language and tool that let you create anything from flowcharts to cloud architecture diagrams. Another advantage is that the project is actively maintained and continuously developed. You can also check out other tools such as pyflowchart, mermaid, plantuml, or terrastruct.

Let’s get started and draw a flowchart for the ClickHouse backup agent using Diagrams (mingrammer). First, install Python (>3.7; mine is 3.11) and Graphviz (9.0.0 on Windows in my environment), then install the diagrams module (0.23.4).

Diagrams includes the following objects: node (shapes: programming, azure, custom, and others), edge (connection lines: the linkage between nodes), cluster (a group of isolated nodes), and diagram (which represents your entire chart). Each object has its own attributes; descriptions of all the attributes can be found in the Graphviz docs. Also, check out the basic examples to understand what we’re going to “build”. I won’t describe every attribute; DYOR.

The first line of your code might look like this:

# import required modules
from diagrams import Diagram, Edge, Cluster, Node

Then we define attributes for each object (excerpt):

# define attributes for graphviz components
graph_attributes = {
    "fontsize": "9",
    "orientation": "portrait",
    "splines": "spline"
}
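The later excerpts also pass **node_attributes and **edge_attributes, which are elided here. A plausible sketch of what such dictionaries could contain (the keys are standard Graphviz attributes, but the exact values are my assumptions, not the repo’s):

```python
# assumed node/edge attribute dictionaries; keys are standard Graphviz names
node_attributes = {
    "fontsize": "9",
    "shape": "none",        # no outline, so the custom PNG is the whole node
    "imagescale": "true",   # scale the image to the node's width/height
    "fixedsize": "true",
}

edge_attributes = {
    "fontsize": "8",
    "arrowsize": "0.6",
    "penwidth": "1.0",
}
```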

Next, we need to describe the diagram object and its attributes (excerpt):

with Diagram(show=False, outformat="png", graph_attr=graph_attributes, direction="TB"):
    # nodes and icons
    start_end_icon = "diagram/custom-images/start-end.png"
    start = Node(label="Start", image=start_end_icon, labelloc="c", height="0.4", width="0.45", **node_attributes)

I use the general Node class with custom images, which were taken from the programming nodes and then optimized for my flowchart (I deleted the canvas and resized the images). You could safely use the diagrams.programming.flowchart node classes instead, but be ready to play with the nodes’ height/width attributes. Another way to add your own images as nodes is the Custom node class.

We have described the icons and shared nodes. Now we add the first group of nodes, representing the main process of the agent and flowchart (creating and uploading FULL backups):

# cluster/full backup
    with Cluster("main", graph_attr=graph_attributes):
        diff_or_full = Node(label="TYPE?", image=decision_icon, height="0.7", labelloc="c", **node_attributes)

Subroutine processes (diff backups, etc.) are clusters (excerpt):

# cluster/diff backup
    with Cluster("diff", graph_attr=graph_attributes):
        create_diff_backup = Node(label="Create DIFF", labelloc="c", height="0.5", width="4", image=action_icon, **node_attributes)

Edges or connections between nodes are defined at the bottom (excerpt):

# Log connections
    diff_or_full - Edge(label="\n\n wrong type", tailport="e", headport="n", **edge_attributes) - write_error

As a result, I’ve updated the repo with the diagram as code and slightly modified the GitHub Actions workflow, adding a new step to “draw” the diagram and check the Python code. When I push new commits to the repo, the diagram is created and published as an artifact, with nodes (start, end, condition, action, catch, input/output), four clusters (main, diff, log, upload log), and edges between the nodes.
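For reference, such a workflow could look roughly like this. This is a hedged sketch: the workflow, job, and file names, as well as the linter choice, are my assumptions, not the actual contents of the repo:

```yaml
# .github/workflows/diagram.yml (hypothetical name)
name: draw-diagram
on: [push]
jobs:
  diagram:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install Graphviz and Python dependencies
        run: |
          sudo apt-get update && sudo apt-get install -y graphviz
          pip install diagrams flake8
      - name: Check the Python code
        run: flake8 diagram/
      - name: Draw the diagram
        run: python diagram/flowchart.py
      - name: Publish the diagram as an artifact
        uses: actions/upload-artifact@v4
        with:
          name: flowchart
          path: "*.png"
```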

Looks pretty good, doesn’t it?