Deploy Azure Data Services with Terraform

Terraform-based deployment of almost all Azure Data Services (default deployment settings are given in parentheses):

  • Azure Service Bus (Standard, namespace, topic, subscription, auth. rules)
  • Azure Data Lake Storage (ZRS, Hot, Secured, StandardV2)
  • Azure Data Factory (w/Git or without)
  • Azure Data Factory linked with Data Lake Storage
  • Azure Data Factory Pipeline
  • Azure Databricks Workspace (Standard)
  • Azure EventHub (Standard, namespace)
  • Azure Functions (Dynamic, LRS storage, Python, w/App.Insights or without)
  • Azure Data Explorer (Kusto, Standard_D11_v2, 2 nodes)
  • Azure Analysis Services (backup-enabled, S0, LRS, Standard)
  • Azure Event Grid (domain, EventGridSchema)

Properties and content

  • 831 lines in total
  • Written about 1 year ago, updated a day ago to fix deprecated expressions
  • Tested with Terraform 0.13.2 (the latest at the time of writing) and AzureRM provider 2.27.0 (in fact, works fine with Terraform >= 0.12 and AzureRM provider >= 1.35)
  • auth.tf – provider authentication and version settings (a minimal sketch follows this list)
  • main.tf – the desired Azure infrastructure
  • terraform.tfvars – controls the deployment settings
  • variables.tf – the variables list
  • outputs.tf – outputs useful information
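
For reference, a minimal sketch of what auth.tf could look like with Service Principal authentication; the variable names are illustrative assumptions, not necessarily those used in the repository, and the required_providers syntax shown requires Terraform 0.13:

terraform {
  required_version = ">= 0.13"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.27.0" # the provider version the repository was tested with
    }
  }
}

provider "azurerm" {
  features {}                          # required by azurerm 2.x
  subscription_id = var.subscription_id
  client_id       = var.client_id     # the Service Principal's application (client) ID
  client_secret   = var.client_secret # the principal secret
  tenant_id       = var.tenant_id
}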

Deployment settings (excerpt)

#--------------------------------------------------------------
# What should be deployed?
#--------------------------------------------------------------
servicebus       = true  # Azure Service Bus
datafactory      = true  # Azure Data Factory
datafactory_git  = false # Enable GIT for Data Factory? (don't forget to set Git settings in the Data Factory section)
databricks       = true  # Azure DataBricks
eventhub         = true  # Azure EventHub
functions        = true  # Azure Functions 
functions_appins = true  # Integrate App.Insights with Azure Functions?
eventgrid        = true  # Azure EventGrid
kusto            = true  # Azure Data Explorer (kusto)
analysis         = true  # Azure Analysis Server

Resource block (excerpt)

resource "azurerm_function_app" "rlmvp-svc-function-appins" {
  count                      = var.functions == "true" && var.functions_appins == "true" ? 1 : 0
  name                       = "${var.prefix}function${random_string.rndstr.result}"
  location                   = var.az_region
  resource_group_name        = azurerm_resource_group.az_rg.name
  app_service_plan_id        = azurerm_app_service_plan.rlmvp-svc-appplan[count.index].id
  storage_account_name       = azurerm_storage_account.rlmvp-svc-storacc[count.index].name
  storage_account_access_key = azurerm_storage_account.rlmvp-svc-storacc[count.index].primary_access_key
  # storage_connection_string = azurerm_storage_account.rlmvp-svc-storacc[count.index].primary_connection_string (deprecated; works though)
  app_settings = {
    "FUNCTIONS_WORKER_RUNTIME"       = var.az_funcapp_runtime
    "APPINSIGHTS_INSTRUMENTATIONKEY" = azurerm_application_insights.rlmvp-svc-appins[count.index].instrumentation_key
  }
  tags = var.az_tags
}
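
A note on the pattern above: count = condition ? 1 : 0 is the usual Terraform idiom for conditional resources, and the comparison against the string "true" implies the toggles are declared as strings rather than booleans (a bare true in terraform.tfvars is converted to "true" automatically in that case). A minimal sketch of such a declaration in variables.tf, assuming the variable names used above:

variable "functions" {
  description = "Deploy Azure Functions?"
  type        = string # a string toggle, hence the var.functions == "true" comparison
  default     = "true"
}

variable "functions_appins" {
  description = "Integrate Application Insights with Azure Functions?"
  type        = string
  default     = "true"
}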

Usage guide

  • Open the terraform.tfvars file
  • Locate the “What should be deployed?” section
  • Use true/false to set your desired configuration
  • Check or change the Azure service settings in the appropriate sections (naming convention (prefix/suffix), location, SKUs, etc.)
  • Run terraform init to download the required Terraform providers
  • Run terraform plan to run the pre-deployment check
  • Run terraform apply to start the deployment
  • (optional) Run terraform destroy to delete the deployed Azure resources
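
Put together, a typical run from the repository root looks like this (the optional -out flag keeps plan and apply consistent):

terraform init
terraform plan -out tfplan
terraform apply tfplan
terraform destroy   # optional: remove the deployed resources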

Requirements

  • The script uses Service Principal authentication, so define the subscription ID, client ID, tenant ID and principal secret in auth.tf (or use another authentication type – Managed Identity, for instance, if your CI runs on Azure VMs)
  • If you are going to deploy Azure Analysis Services (enabled by default), provide valid Azure AD user UPN(s) to set them as Analysis Services administrators (the az_ansrv_users variable in terraform.tfvars; an example follows this list)
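
For example, the administrators setting in terraform.tfvars might look like the following (assuming the variable takes a list; the UPN is a placeholder, not a value from the repository):

az_ansrv_users = ["john.doe@contoso.com"] # Azure AD UPN(s) of the Analysis Services administrators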

Result

Screenshots: the deployed Azure resources (all in one resource group) and the Terraform output.

P.S. Feel free to share/commit/fork/slam/sell/copy and do anything else your conscience allows 🙂

Kubernetes: ASP.NET Core HTTPS and mac verify failure

You are trying to configure HTTPS for ASP.NET Core running on Kubernetes: the secret data volumes are mounted and the ASP.NET Core environment variables are defined, yet the following error appears in the pod’s log:

error:23076071:PKCS12 routines:PKCS12_parse:mac verify failure
at Internal.Cryptography.Pal.OpenSslPkcs12Reader.Decrypt(SafePasswordHandle password)

The reason is quite simple – a wrong password. Check out the manifest examples below to understand the behavior.

<doesn’t work> Kubernetes deployment manifest:

env:
  - name: "Kestrel__Certificates__Default__Path"
    value: /var/secrets/cert # a data volume must be used here
  - name: "ASPNETCORE_Kestrel__Certificates__Default__Password"
    value: /var/secrets/password # wrong! assigns the literal path string, not the password itself

<works> Kubernetes deployment manifest:

env:
  - name: "Kestrel__Certificates__Default__Path"
    value: /var/secrets/cert # a data volume must be used here
  - name: "ASPNETCORE_Kestrel__Certificates__Default__Password"
    valueFrom: # works!
      secretKeyRef:
        name: backend-tls
        key: password

Noticed the difference? Instead of pointing the variable at the data volume path of the secret key “password” (cat /var/secrets/password prints the password without any issues, by the way), you need to define the env value explicitly by referencing the secret’s key. In the first manifest, the literal string “/var/secrets/password” (plain text, not the secret’s content!) was assigned to the variable’s value, which was unexpected.

In short, check that the password is correct, and define it as an environment variable sourced from the secret rather than pointing the variable at a file on a data volume.
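
For completeness, here is a minimal sketch of how the pieces could fit together; the cert key name and the mount layout are inferred from the manifests above, not taken verbatim from the original setup. The secret itself can be created with kubectl create secret generic backend-tls --from-file=cert=./backend.pfx --from-literal=password='<your password>'.

apiVersion: v1
kind: Secret
metadata:
  name: backend-tls
type: Opaque
data:
  cert: <base64-encoded PFX>          # mounted as the file /var/secrets/cert
  password: <base64-encoded password> # consumed via secretKeyRef, not via the volume
---
# Deployment excerpt: mount only the certificate as a file
spec:
  template:
    spec:
      containers:
        - name: backend
          volumeMounts:
            - name: tls
              mountPath: /var/secrets
              readOnly: true
      volumes:
        - name: tls
          secret:
            secretName: backend-tls
            items:
              - key: cert
                path: cert # becomes /var/secrets/cert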