Deploy Azure Data Services with Terraform

Terraform-based deployment of almost all Azure Data Services (default deployment settings are in parentheses):

  • Azure Service Bus (Standard, namespace, topic, subscription, auth. rules)
  • Azure Data Lake Storage (ZRS, Hot, Secured, StorageV2)
  • Azure Data Factory (w/Git or without)
  • Azure Data Factory linked with Data Lake Storage
  • Azure Data Factory Pipeline
  • Azure Databricks Workspace (Standard)
  • Azure EventHub (Standard, namespace)
  • Azure Functions (Dynamic, LRS storage, Python, w/App.Insights or without)
  • Azure Data Explorer (Kusto, Standard_D11_v2, 2 nodes)
  • Azure Analysis Server (backup-enabled, S0, LRS, Standard)
  • Azure Event Grid (domain, EventGridSchema)

Properties and content

  • 831 lines in total
  • Written about a year ago, updated a day ago to fix deprecated expressions
  • Tested with the latest Terraform 0.13.2 and Azure provider 2.27.0 (in fact, it works fine with Terraform >= 0.12 and Azure provider >= 1.35)
  • auth.tf – provider authentication and version settings
  • main.tf – the desired Azure infrastructure
  • terraform.tfvars – controls deployment settings
  • variables.tf – variables list
  • outputs.tf – outputs useful information
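For orientation, auth.tf presumably wires up the provider roughly like this (a sketch; the variable names such as az_client_id are assumptions, not the repository's actual identifiers, and the features {} block is required for azurerm provider 2.x):

```hcl
# auth.tf -- provider authentication and version settings (illustrative sketch)
terraform {
  required_version = ">= 0.12"
}

provider "azurerm" {
  version = "~> 2.27" # the post reports testing with azurerm 2.27.0

  # Service Principal credentials; variable names are hypothetical
  subscription_id = var.az_subscription_id
  client_id       = var.az_client_id
  client_secret   = var.az_client_secret
  tenant_id       = var.az_tenant_id

  features {} # mandatory (may stay empty) since provider 2.x
}
```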

Deployment settings (excerpt)

#--------------------------------------------------------------
# What should be deployed?
#--------------------------------------------------------------
servicebus       = true  # Azure Service Bus
datafactory      = true  # Azure Data Factory
datafactory_git  = false # Enable Git for Data Factory? (don't forget to set Git settings in the Data Factory section)
databricks       = true  # Azure DataBricks
eventhub         = true  # Azure EventHub
functions        = true  # Azure Functions 
functions_appins = true  # Integrate App.Insights with Azure Functions?
eventgrid        = true  # Azure EventGrid
kusto            = true  # Azure Data Explorer (kusto)
analysis         = true  # Azure Analysis Server
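Each toggle above corresponds to a variable declared in variables.tf, presumably along these lines (a sketch, assuming bool-typed variables with defaults; two of the toggles shown for brevity):

```hcl
# variables.tf -- feature toggles (illustrative sketch)
variable "functions" {
  type        = bool
  default     = true
  description = "Deploy Azure Functions?"
}

variable "functions_appins" {
  type        = bool
  default     = true
  description = "Integrate Application Insights with Azure Functions?"
}
```

With bool-typed variables, resource blocks can gate creation via the `condition ? 1 : 0` count pattern without string comparisons.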

Resource block (excerpt)

resource "azurerm_function_app" "rlmvp-svc-function-appins" {
  count                      = var.functions && var.functions_appins ? 1 : 0 # bool variables; comparing them to the string "true" fails on Terraform 0.12+
  name                       = "${var.prefix}function${random_string.rndstr.result}"
  location                   = var.az_region
  resource_group_name        = azurerm_resource_group.az_rg.name
  app_service_plan_id        = azurerm_app_service_plan.rlmvp-svc-appplan[count.index].id
  storage_account_name       = azurerm_storage_account.rlmvp-svc-storacc[count.index].name
  storage_account_access_key = azurerm_storage_account.rlmvp-svc-storacc[count.index].primary_access_key
  # storage_connection_string = azurerm_storage_account.rlmvp-svc-storacc[count.index].primary_connection_string (deprecated; works though)
  app_settings = {
    "FUNCTIONS_WORKER_RUNTIME"       = var.az_funcapp_runtime
    "APPINSIGHTS_INSTRUMENTATIONKEY" = azurerm_application_insights.rlmvp-svc-appins[count.index].instrumentation_key
  }
  tags = var.az_tags
}
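The app_settings block above references an azurerm_application_insights resource created with the same conditional count pattern; it presumably looks roughly like this (a sketch; the application_type value and the exact name format are assumptions):

```hcl
# Application Insights instance, created only when both toggles are on (sketch)
resource "azurerm_application_insights" "rlmvp-svc-appins" {
  count               = var.functions && var.functions_appins ? 1 : 0
  name                = "${var.prefix}appins${random_string.rndstr.result}"
  location            = var.az_region
  resource_group_name = azurerm_resource_group.az_rg.name
  application_type    = "web" # assumed; "other" is also common for Python function apps
  tags                = var.az_tags
}
```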

Usage guide

  • Open the terraform.tfvars file
  • Find the “What should be deployed?” section
  • Use true/false to set your desired configuration
  • Check or change the Azure service settings in the appropriate sections (naming convention (prefix/suffix), location, SKUs, etc.)
  • Run terraform init to get required Terraform providers
  • Run terraform plan to initiate pre-deployment check
  • Run terraform apply to start a deployment
  • (optional) terraform destroy to delete Azure resources

Requirements

  • The script uses Service Principal authentication, so define the subscription ID, client ID, tenant ID and principal secret in auth.tf (or use another authentication type – Managed Identity, for instance, if your CI runs on Azure VMs)
  • If you are going to deploy Analysis Server (enabled by default), provide valid Azure AD user UPN(s) to set them as administrators of Analysis Server (az_ansrv_users variable, file – terraform.tfvars)
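In terraform.tfvars, the Analysis Server administrators would be set roughly like this (a sketch; the UPN is a placeholder, not a real account):

```hcl
# terraform.tfvars -- Analysis Server administrators (illustrative)
az_ansrv_users = ["admin@contoso.onmicrosoft.com"]
```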

Result

Deployed Azure Resources (all in one resource group)
Terraform output

P.S. feel free to share/commit/fork/slam/sell/copy and do anything that your conscience allows you 🙂

Not a Microsoft Cloud and Datacenter Management MVP Anymore

I’ve got some good news and some bad news…

Goodbye………

The bad news is that I am no longer a Cloud and Datacenter Management MVP. About 6 years ago, I received my first email saying that I had been awarded Microsoft MVP in the Hyper-V category. And it was completely unexpected!

I remember chatting with Russian TechNet members when one of them sent me a private message: “hey, can you please check your email?”. I asked him “for what?”, and then I realized… my hard work over almost 2.5 years on the TechNet forums and offline had finally been appreciated!

I never requested a nomination and truly believed that you had to be recognized by technical leaders to get your first MVP award. I am still the same person and haven’t changed my beliefs, so if you want to be a Microsoft MVP, do a lot of great stuff and you will be spotted either way! Later, the Hyper-V expertise was merged into Cloud & Datacenter Management, to which I had been added before the good news came.

Today, I’ve been re-awarded as an Azure MVP! It’s my 6th award in a row. I’ve been working with Azure and related technologies for almost 5 years, and this year all my activities have been connected with Azure. If you go to my About page, you will see that I’ve shifted and extended my expertise and efforts toward public clouds, Azure and DevOps. So it’s only natural that I’ve become an Azure MVP.

Times always change; you don’t have to limit yourself to just one product or technology. Keep track of needs and trends instead, and you will succeed (sounds like an IT law).

As usual, I’d like to report on activities for 2019-2020 :):