Featured

Python Coding: FizzBuzz challenge

FizzBuzz is a very common task asked in Dev/DevOps interviews. You are given a range of numbers and need to write an algorithm using the following rules: if the number is divisible by 3, print “Fizz”; if the number is divisible by 5, print “Buzz”; if the number is divisible by both 3 and 5, print “FizzBuzz” (check the both-divisible case first, or the other two rules will shadow it).

The main goal of the task is to check your understanding of loops, conditionals and simple math in a programming or scripting language of your choice. I solved the task with PowerShell years ago: check this gist.

As I started to learn Python, I decided to share a FizzBuzz implementation in this language to show how simple and “elegant” the solution can be.

I used matplotlib and colorama to draw a pie chart and to color the text output, respectively. I defined a function fizz_buzz with two arguments, and then used try/except/finally statements to catch exceptions. Inside the try block, a for loop and if conditionals implement the task’s rules. As a result, the function prints the numbers and their categories, and draws a pie chart showing the percentage of Fizz, Buzz, FizzBuzz and remaining numbers found. Calling fizz_buzz(1, 101), for example, covers the classic 1–100 range, since range() excludes the upper bound.

import matplotlib.pyplot as plt
import colorama
from colorama import Fore, Back, Style
colorama.init()

def fizz_buzz(x, y):
    """Python version of popular Fizz Buzz task"""
    fb = 0; b = 0; f = 0; rest = 0 # start values
    fb_type = ['fizzbuzz','fizz','buzz','rest'] # plot labels
    fb_colors = ['r','y','c','g'] # plot colors
    fb_explode = [0.2, 0.1, 0.1, 0.1] # plot fraction of the radius
    try:
        for n in range(x, y):
            if n % 3 == 0 and n % 5 == 0:
                fb += 1
                print(Fore.RED + f"Found FizzBuzz: {n}")
            elif n % 3 == 0:
                f += 1
                print(Fore.WHITE + f"Found Fizz: {n}")
            elif n % 5 == 0:
                b += 1
                print(Fore.GREEN + f"Found Buzz: {n}")
            else: 
                rest += 1
                print(Style.BRIGHT + f"The rest is {n}")
            print(Style.RESET_ALL)
        fb_array = [fb, f, b, rest]
        plt.pie(fb_array, colors=fb_colors, explode=fb_explode, shadow=True, radius=1.1, autopct='%1.1f%%') # form a pie
        plt.legend(fb_type, loc='upper right') # show legend
        plt.show() # show the pie
    except TypeError: # e.g. non-integer bounds passed to range()
        print(Style.BRIGHT + Fore.RED + "You provided wrong x and y")
        print(Style.RESET_ALL)
    finally:
        print(Style.BRIGHT + Fore.GREEN + "Author: github.com/rlevchenko")
        print(Style.RESET_ALL)

Result

Available at Gist

Featured

Simple PostgreSQL Backup Agent

Dockerized cron job to back up a PostgreSQL database or multiple databases on different hosts. It’s based on the Alpine docker image, so the image size is less than 11 MB. The script can also be used without Docker and Docker Compose, or as a base for your own dockerized cron jobs. My general recommendation is to run the docker container on your backup host to provide a kind of isolation from the management partition.

The script or “agent” does the following:

  • Reads the content of /config/passfile to get pg_dump connection parameters (a sample passfile is shown after this list)
  • Verifies that the backup can be done by executing a dry run for each database
  • If the dry run succeeds and the plain format is set, produces a plain-text SQL script and compresses it with gzip
  • If the dry run succeeds and the custom format is set, outputs a custom-format backup archive (more flexible; the default)
  • Cleans up the storage folder: files older than 30 days are deleted
  • Redirects all cron job statuses to stdout
  • Keeps backup files under ./psql/backups/{hostname}/{dbname}/ on your host
  • Default settings: twice a day at 8:30 and 20:30 UTC; custom format; clean up backups older than 30 days
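
The passfile follows the PostgreSQL .pgpass format: hostname:port:database:username:password, one line per database. A minimal sample with hypothetical hosts and credentials:

dbhost1.example.com:5432:appdb:backup_user:S3cretPass
dbhost2.example.com:5432:salesdb:backup_user:An0therPass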

Current limitations:

  • no encryption for specific databases (on the to-do list)
  • no handling of wildcards in the passfile (on the to-do list)

Content

  • Dockerfile – describes docker image
  • docker-compose.yml – docker compose file to build and run agent service
  • /config/cronfile – cron job schedule settings
  • /config/passfile – essentially a PostgreSQL .pgpass file
  • /config/psql_backup.sh – the script itself

Usage guide

  • check out the passfile and provide your own connection parameters
  • verify the cron job settings in /config/cronfile
  • change the make_backup function argument to set the output format (plain/custom)
  • update the cleaner function argument at the bottom of the script if necessary
  • edit the Dockerfile/docker-compose.yml or the script itself if necessary
  • run docker compose build
  • run docker compose up -d
  • check the stdout of the container to get the job’s status
  • TO RESTORE: use psql (if the plain format is set) or pg_restore (if the custom format is set); see the examples below
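
For reference, restore commands for both formats might look like this (host, user, database and file names are hypothetical):

# custom format: restore with pg_restore (the target database must exist)
pg_restore -h dbhost1.example.com -U backup_user -d appdb ./psql/backups/dbhost1.example.com/appdb/appdb.custom

# plain format: decompress and feed the SQL script to psql
gunzip -c ./psql/backups/dbhost1.example.com/appdb/appdb.sql.gz | psql -h dbhost1.example.com -U backup_user -d appdb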

Dockerfile

FROM alpine:3.16.2
LABEL AUTHOR="Roman Levchenko"
LABEL WEBSITE="rlevchenko.com"
RUN mkdir /etc/periodic/custom \
    && mkdir -p /backup/config \ 
    && touch /var/log/cron.log \
    && apk --no-cache add \
    postgresql14-client=14.5-r0 \
    bash=5.1.16-r2
COPY /config/cronfile /etc/crontabs/root
COPY /config/psql_backup.sh /etc/periodic/custom/backup
COPY ["/config/psql_backup.sh","/config/passfile","/backup/config/"]
RUN chmod 755 /etc/periodic/custom/backup \
    && chmod 0600 /backup/config/passfile
CMD ["-f","-l","8", "-L", "/dev/stdout"]
ENTRYPOINT ["crond"]

Script (excerpt)

# Clean old backup files
function cleaner()
{
set -o pipefail -e
	if [[ -n $(find "$BACKUP_DIR" \( -name "*.sql.gz" -o -name "*.custom" \) -type f -mtime +"$1") ]]; 
	then
		echo -e "\n${GREEN}[INFO]${OFF} ${BOLD}There are backup files older than $1 days. Cleaning up the following files:${OFF}"
		find "$BACKUP_DIR" \( -name "*.sql.gz" -o -name "*.custom" \) -type f -mtime +"$1" -print -exec rm {} \;
	else 
		echo -e "\n${GREEN}[INFO]${OFF} ${BOLD}There are no backup files older than $1 days. \nHave a nice day!${OFF}"
	fi
set +o pipefail +e
}
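
Per the usage guide above, the function is invoked at the bottom of the script with the retention period in days as its argument; a minimal sketch:

# keep backups for 30 days (the default)
cleaner 30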

Result

Sample Output (w/error and success messages):

Featured

Azure DevOps: Update service connection expired secret

If you’re reading this post, you’re trying to find a way to edit an existing service connection with a new service principal secret/key.

It’s weird that the UI and the DevOps CLI don’t allow us to quickly change service connection details if the connection was created automatically by Azure DevOps (“creationMode”: “Automatic”; will talk about it a bit later).

So, how do you change a secret? Answer: the Azure DevOps REST API.
Note: if you have correct permissions, try out the steps at the bottom of the post. The steps below are for those who don’t have Owner permissions.

  • Create a new Personal Access Token (full access, all scopes, expiration 1 day)
  • Go to Project Settings – Service Connections, choose your connection and click on Manage Service Principal. Add a new secret and note its value.
  • Choose a tool to work with REST API. It could be either PowerShell or Postman, for instance. I will show both.
  • [Postman] Install Postman and create a new HTTP Request
Postman – File – New – HTTP Request
  • [Postman] Go to Authorization and paste the PAT into the password field
The PAT should be used as the password for any REST API requests
  • [Postman] Using the following GET request, get the service endpoint details in JSON format. Organization Name, Project Name and Endpoint Name are parts of the URI (they can be taken from the service connections list in the Azure DevOps UI):

    https://dev.azure.com/<orgName>/<ProjectName>/_apis/serviceendpoint/endpoints?endpointNames=<Endpoint Name>&api-version=6.0-preview.4
  • [Postman] Copy everything from the response under the value key, as shown below
{
            "data": {
                "subscriptionId": "",
                "subscriptionName": "",
                "environment": "AzureCloud",
                "scopeLevel": "Subscription",
                "creationMode": "Automatic",
                "azureSpnRoleAssignmentId": ""
            },
            ...............
                }
            ]
}
  • [Postman] Using a PUT request, update the service connection. Make sure you set Body – Raw to JSON, and then paste the JSON copied in the previous step into the Body
Body – RAW should be set to JSON
  • Here is the tricky part. Prior to sending the PUT request, change creationMode from “Automatic” to “Manual”. Also, in my case, I had to delete the spnObjectId and appObjectId parameters (data section). Plus, I added serviceprincipalkey with the value set to the new secret (authorization section)
    A short excerpt is provided below:
{
    "data": {
        "subscriptionId": "",
        "subscriptionName": "",
        "environment": "AzureCloud",
        "scopeLevel": "Subscription",
        "creationMode": "Manual",  # changed
        "azureSpnRoleAssignmentId": "",
        "azureSpnPermissions": ""
         spnObjectId # deleted
         appObjectId # deleted
    },
    "description": "",
    "authorization": {
        "parameters": {
            "tenantid": "",
            "serviceprincipalid": "",
            "authenticationType": "spnKey",
            "serviceprincipalkey": "secret here" # added
        },
        "scheme": "ServicePrincipal"
}
}
  • [Postman] URI used for a PUT request: https://dev.azure.com/OrganizationName/_apis/serviceendpoint/endpoints/EndpointId?api-version=6.0-preview.4
  • [Postman] Go back to Azure DevOps and make sure that the service connection has been updated and is ready to use.

  • [PowerShell] Use the following example
$token ="PAT Token"
$orgName = "Organization Name"
$projectName = "Project Name"
$endpointName = "your endpoint"
$endpointId = "your endpoint ID, use GET request or UI"
$header = @{Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$($token)")) }

# Get Endpoint details

Invoke-RestMethod -Method GET -URI "https://dev.azure.com/$($orgName)/$($projectName)/_apis/serviceendpoint/endpoints?endpointNames=$($endpointName)&api-version=6.0-preview.4" -Headers $header -ContentType "application/json"

# Update Endpoint
$json = @{ json here } | ConvertTo-Json -Depth <your depth>
Invoke-RestMethod -Method PUT -URI "https://dev.azure.com/$($orgName)/_apis/serviceendpoint/endpoints/$($endpointId)?api-version=6.0-preview.4" -Headers $header -ContentType "application/json" -Body $json


That’s it. Now you know how to update a service connection with a new secret without removing the connection and reconfiguring all pipelines in a project.

P.S. If you have Owner permissions on the app registration/service principal used by the connection, try to edit the connection by adding a description, and then click on Save. Azure DevOps should create a new secret and update the connection automatically.

Featured

Deploy Azure Data Services with Terraform

June, 2021 Update: see details below

Terraform-based deployment of almost all Azure Data Services (default deployment settings are in parentheses):

  • Azure Service Bus (Standard, namespace, topic, subscription, auth. rules)
  • Azure Data Lake Storage (ZRS, Hot, Secured, StandardV2)
  • Azure Data Factory (w/Git or without)
  • Azure Data Factory linked with Data Lake Storage
  • Azure Data Factory Pipeline
  • Azure DataBricks WorkSpace (Standard)
  • Azure EventHub (Standard, namespace)
  • Azure Functions (Dynamic, LRS storage, Python, w/App.Insights or without)
  • Azure Data Explorer (Kusto, Standard_D11_v2, 2 nodes)
  • Azure Analysis Server (backup-enabled, S0, LRS, Standard)
  • Azure Event Grid (domain, EventGridSchema)
  • Azure SQL Server (version 12.0)
  • Azure SQL Database (ElasticPool SKU name, 5 GB max data size)
  • Azure SQL Elastic Pool (StandardPool, LicenseIncluded, 50 eDTU/50GB)

Properties and content

  • Over 1,000 lines and 26 terraform resources in total
  • Almost every line is commented; there are multiple conditions in each resource and variable validations that check values before the deployment. So it’s flexible, not hardcoded, and lets you create infrastructure with your own set of resources.
  • Written a few years ago, updated once since then to fix deprecated features
  • June, 2021 Update: added SQL Server, Database and Elastic Pool; added variable validations (for example, the SQL password must be longer than 8 symbols and contain upper-case letters, digits and special characters); added a sensitive variable (just as a sample); adopted new Terraform 0.15.5 syntax/features; multiple minor changes
  • Tested with the latest Terraform 0.15.5 and Azure provider 2.62.0 (the first version of the script worked fine with >=0.12 and AzureRM >=1.35; just check the syntax and try it out)
  • auth.tf – provider authentication and version settings
  • main.tf – a desired Azure infrastructure
  • terraform.tfvars – controls deployment settings
  • variables.tf – variables list
  • outputs.tf – outputs useful information

Deployment settings (excerpt)

#--------------------------------------------------------------
# What should be deployed?
#--------------------------------------------------------------
servicebus       = true  # Azure Service Bus
datafactory      = true  # Azure Data Factory
datafactory_git  = false # Enable GIT for Data Factory? (don't forget to set Git settings in the Data Factory section)
databricks       = true  # Azure DataBricks
eventhub         = true  # Azure EventHub
functions        = true  # Azure Functions 
functions_appins = true  # Integrate App.Insights with Azure Functions?
eventgrid        = true  # Azure EventGrid
kusto            = true  # Azure Data Explorer (kusto)
analysis         = true  # Azure Analysis Server
sqlserver        = true  # Azure SQL Server 
sqlep            = true  # Azure SQL Elastic Pool
sqldb            = true  # Azure SQL Database

Resource block (excerpt)

#Azure SQL Database
resource "azurerm_mssql_database" "rlmvp-svc-sql-db" {
  count           = var.sqlserver && var.sqldb ? 1 : 0
  name            = "${var.prefix}sqldb${random_string.rndstr.result}"
  elastic_pool_id = var.sqlep ? azurerm_mssql_elasticpool.rlmvp-svc-sql-elastic-pool[count.index].id : null
  server_id       = azurerm_sql_server.rlmvp-svc-sql-server[count.index].id
  max_size_gb     = var.az_sql_db_maxsize
  sku_name        = var.az_sql_db_sku_name
  tags            = var.az_tags
}

Variable Conditions

variable "az_sqlserver_password" {
  type        = string
  description = "Azure SQL Server Admin's Password"
  validation {
    condition     = length(var.az_sqlserver_password) > 8 && can(regex("(^.*[A-Z0-9].*[[:punct:]].*$)", var.az_sqlserver_password)) # meets Azure SQL password's policy
    error_message = "SQL Server Admin's password must contain more than 6 symbols (lowercase + upper-case and special/punctuation characters!)."
  }
}

Usage guide

  • Open the terraform.tfvars file
  • Find the “What should be deployed?” section
  • Use true/false to set your desired configuration
  • Check or change the Azure services settings in the appropriate sections (naming convention (prefix/suffix), location, SKUs, etc.)
  • Run terraform init to get required Terraform providers
  • Run terraform plan to initiate pre-deployment check
  • Run terraform apply to start a deployment
  • (optional) terraform destroy to delete Azure resources

Requirements

  • The script uses Service Principal authentication, so define the subscription ID, client ID, tenant ID and principal secret in auth.tf (or use another authentication type – Managed Identity, if your CI is running on Azure VMs, for instance; environment variables also work, as sketched after this list)
  • If you are going to deploy Analysis Server (enabled, by default), provide valid Azure AD user(s) UPN(s) to set them as administrators of Analysis Server (az_ansrv_users variable, file – terraform.tfvars)
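
If you prefer to keep the credentials out of auth.tf, the AzureRM provider also reads them from standard environment variables; a minimal sketch with placeholder values:

# Service Principal credentials picked up by the AzureRM provider
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="<principal secret>"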

Result

P.S. feel free to share/commit/fork/slam/sell/copy and do anything that your conscience allows you 🙂

Featured

Playing with Kubernetes running on Azure (AKS)

Heptio (its founders co-created Kubernetes) polled about 400 IT decision makers from different sectors and company sizes to find out whether they use Kubernetes or not, and to understand obstacles and the overall experience. About 60% of respondents use Kubernetes today, and 65% expect to be using the technology in the next year. More surprisingly, about 77% of companies with more than 1,000 developers that run Kubernetes use it in production.

Furthermore, VMware has recently announced Project Pacific, which completely rebuilds the vSphere architecture. Kubernetes is going to be a control plane in future vSphere versions. Sounds amazing, doesn’t it?

The supervisor cluster is a Kubernetes cluster of ESXi instead of Linux

I hope you have warmed up and taken an interest in something you may not have been familiar with – containers and Kubernetes. I recommend reading about Kubernetes and Docker concepts before we get started.

We’re gonna do some tasks in Azure Kubernetes Service (a managed Kubernetes service in Azure) to help you dive into Kubernetes and get hands-on experience with related services such as Azure Container Registry, AKS, Terraform and Docker.

The tasks are described in the sections that follow.

Notes:
  • This GitHub Repo includes everything covered in this blog post
  • This lab uses a custom and simple ASP.NET Core web application that we will deploy and then publish by using Kubernetes (K8S)
  • A Docker multi-stage image build packs up the application
  • Azure Container Registry stores the Docker image
  • Terraform automates the deployment of Azure Kubernetes Service and Azure Container Registry. The scripts are stored in a different repo
  • Azure Kubernetes Service provides a managed Kubernetes master node in the cloud with the ability to scale up worker nodes. AKS will host our PODs (roughly speaking, PODs represent processes/containers running on a K8S cluster)
  • Azure CLI, PowerShell, docker, terraform and kubectl (the command line interface for running commands against Kubernetes clusters) are the main tools for completing the tasks. Make sure you have them installed on your machine or use Azure Cloud Shell instead.
  • Create a Service Principal beforehand (Contributor role); a sample command is shown after this list
  • Azure DevOps is used for CI/CD (optional)
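
Creating the Service Principal takes a single CLI call; a minimal sketch (the name is hypothetical; pass your own subscription scope):

#Create a Service Principal with the Contributor role
az ad sp create-for-rbac --name k8sDemoSp --role Contributor --scopes /subscriptions/<subscription-id>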

Deploy Kubernetes cluster in Azure

Although we can use the Azure Portal, CLI or PowerShell to deploy new Azure services, using the Infrastructure as Code (IaC) approach is more forward-looking. We will use Terraform, so check out my repo and the comments inside. Terraform creates an Azure Container Registry:

#Get RG (create a new one if necessary by using "resource azurerm..")
data "azurerm_resource_group" "Rg" {
  name = "kubRg"
}

............

#Create a container registry
resource "azurerm_container_registry" "cr" {
  name                = "cr${random_string.randomName.result}"
  resource_group_name = "${data.azurerm_resource_group.Rg.name}"
  location            = "${data.azurerm_resource_group.Rg.location}"
  admin_enabled       = true
  sku                 = "Basic"
  # Only for classic SKU (deprecated)
  # storage_account_id  = "${azurerm_storage_account.storacc.id}" (Classic)
}
..............

Terraform then configures the Kubernetes cluster with the Azure Container Network Interface (CNI), which lets you access PODs directly, as every POD gets an IP from the Azure subnet rather than using kubenet. At the end of the configuration file, terraform enables K8S RBAC (it’s disabled by default in Azure), which we’ll use later during the service account creation.

resource "azurerm_kubernetes_cluster" "k8sClu" {
  name                = "rlk8sclu-${random_string.randomName.result}"
  location            = "${data.azurerm_resource_group.Rg.location}"
  resource_group_name = "${data.azurerm_resource_group.Rg.name}"
  dns_prefix          = "${var.dnsPrefix}"

  .......

  network_profile {
    network_plugin = "azure"
  }

  role_based_access_control {
    enabled = true
  }
  .........

Apply the configuration, and then check the output (in my case, the resources had already been deployed, so there was nothing to add). Note the ACR and AKS resource names (/managedClusters/…; registries/…)
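
The standard Terraform workflow applies; for example:

#Download providers, preview and deploy
terraform init
terraform plan
terraform apply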

For an additional deployment check, open up the Azure Cloud Shell or Azure CLI and type the following to open the Kubernetes dashboard:

#Get Azure AKS Credentials
az login
az aks get-credentials --resource-group kubRg --name rlk8sclu-l3y5

#Open K8S dashboard
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
az aks browse --resource-group kubRg --name rlk8sclu-l3y5

#View kubectl config (optional, %HOMEPATH%/.kube/config)
kubectl config view

If your browser opens a new page, there likely aren’t any issues with the deployment. Let’s jump into the second task.

TIP: get the addresses of the master and services by running kubectl cluster-info

Make Docker image with the application

Let’s create a docker image with the application, and then push the image to the Azure Container Registry. The Dockerfile is located at the root of the aspnetapp folder (check out my repo) and describes the multi-stage image build process. There is also a .dockerignore file that defines the folders to be excluded from the image.

Run the Docker CLI and build the image (docker build <dir>):
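
A build command might look like this, assuming you run it from the repo root (the aspnetapp tag is an assumption, chosen to match the push commands below):

#Build the multi-stage image from the aspnetapp folder
docker build -t aspnetapp ./aspnetapp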

Push the image to Azure Container Registry:

az acr login --name acrName
docker login acrFQDN
docker tag aspnetapp acrFQDN/aspnetapp
docker push acrFQDN/aspnetapp

TIP: get the attributes of the image by running az acr repository show -n crl3y5 --image aspnetapp:latest

Make two fully isolated namespaces within the Kubernetes cluster

Once the docker image is uploaded to ACR, we are ready to proceed with the Kubernetes tasks. When you need to change something in K8S, you can either use kubectl to define operations sequentially or manifest files (YAML) that describe multiple requests to the K8S API Server in a declarative form.

If you look at my repo, you can see two folders, ns1 and ns2, that store the yaml/manifest files for the respective namespaces. We’ll use those files in conjunction with kubectl to make some changes on the AKS cluster. Because the manifest files are almost the same, only the manifests for NS1 will be shown.

#Apply the manifest (example)
kubectl apply -f pathToTheManifestFile

Create a new namespace:

#Create a namespace 1
apiVersion: v1
kind: Namespace
metadata:
  name: ns1
  labels:
    author: rlevchenko

To deny ingress traffic from PODs running in other namespaces:

#NS1
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
  namespace: ns1
spec:
  podSelector:
    matchLabels: {}
  ingress:
    - from:
        - podSelector: {}

TIP: use kubectl get namespaces to list namespaces and kubectl get networkpolicy -n ns1 to get the policy

Configure anti-affinity for PODs

To make sure that a group of PODs (the labelSelector section) runs on particular nodes in the cluster, we need to configure affinity/anti-affinity rules. This anti-affinity “rule” ensures that no two PODs with the app=aspcore label co-locate on a single node.

.....
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - aspcore
              topologyKey: "kubernetes.io/hostname"
......

TIP: use kubectl get pods -o wide -n ns1 to get info about the assigned nodes, and then kubectl get nodes --show-labels to check the node labels

Configure network policy to deny egress traffic from PODs (except DNS requests)

This task shows how you can filter network traffic from PODs in the namespace. All PODs with the app=aspcore label in the first namespace can make DNS requests (out) and reach PODs in the cluster’s namespaces; any other egress traffic is denied.

#Deny all traffic (except DNS) from PODs
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-external-egress
  namespace: ns1
spec:
  podSelector:
    matchLabels:
      app: aspcore
  policyTypes:
    - Egress
  egress:
    # allow DNS TCP/UDP 53 ports
    - ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
    - to:
        - namespaceSelector: {}

TIP: get the list of network policies: kubectl get networkpolicy -n ns1

Create a service account with read permission on PODs in the first namespace

A service account provides an identity for processes that run in a Pod. This excerpt of the manifest file describes a service account read-sa-ns that has read-only permissions on PODs in the NS1 namespace (the rules section/verbs). Also, note the RBAC Role used here – we enabled RBAC earlier while applying the terraform configuration.

#New SA - ns level
apiVersion: v1
kind: ServiceAccount
metadata:
  name: read-sa-ns
---
#New Role - ns level
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only-ns
  namespace: ns1
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
#Binding the role to the sa -NS1
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-ns1-pods
  namespace: ns1
subjects:
  - kind: ServiceAccount
    name: read-sa-ns
    apiGroup: ""
    namespace: default
roleRef:
  kind: Role
  name: read-only-ns
  apiGroup: rbac.authorization.k8s.io

TIP: get the roles in the NS1 namespace with kubectl get role -n ns1, and then check the service accounts in the K8S cluster with kubectl get serviceaccounts --all-namespaces

Set CPU and RAM limits for each pod

If a container is created in the ns1 namespace and does not specify its own values for memory request and memory limit, the container is given a default memory request of 128 MiB and a default memory limit of 256 MiB (per the LimitRange below). In addition, you can define limits at the POD level.

#Define mem-cpu limits
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-cpu-limit-range
  namespace: ns1
spec:
  limits:
  - default:
      memory: 256Mi
      cpu: "400M"
    defaultRequest:
      memory: 128Mi
      cpu: "200M"
    type: Container

TIP: check the limits by running kubectl describe pod podname -n ns1

Configure PODs scalability based on CPU metric

Kubernetes allows you to automatically scale PODs based on CPU/RAM metrics (horizontal pod autoscaler). If the average CPU utilization is equal to or greater than 70%, K8S deploys additional replicas (spec stanza, maxReplicas).

#Scale pods automatically (cpu metric)
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: aspcore-load-cpu
  namespace: ns1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: asp-deployment1
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 70

TIP: to list the NS limits, run kubectl describe namespace ns1

Publish the application

Now it’s time to publish the application running in a container. First, create a POD that will use the docker image that we’ve already pushed to the Azure Container Registry. One Pod with the latest image will be created under the ns1 namespace. Check the labels (one of the most important things in K8S, actually 🙂), the pod name and the number of replicas.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: asp-deployment1
  namespace: ns1
  labels:
    app: web
    release: stable
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspcore
  template:
    metadata:
      labels:
        app: aspcore
    spec:
      containers:
        - name: web-aspcore
          image: crl3y5.azurecr.io/aspnetapp:latest

TIP: use kubectl get pods -n ns1 -o wide to check the pod state in the ns1

If the Pod’s status is Running, you can publish it via a LoadBalancer service:

#Publish the deployment through the Service

apiVersion: v1
kind: Service
metadata:
  name: demo-service1
  namespace: ns1
spec:
  selector:
    app: aspcore
  type: LoadBalancer
  ports:
    - name: name-of-the-port
      port: 80
      targetPort: 80

Then check the deployment status and get the public IP of the service:

#Get deployments in the NS1
kubectl get deployments -n ns1

#Get Service's Public IP
kubectl get service -n ns1 -o jsonpath='{.items[].status.loadBalancer.ingress[0].ip}'

Open up the browser and navigate to http://publicip/api/values to verify that the application is published and works:
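
You can also verify it from the terminal; for example:

#Query the published API (replace publicip with the service's external IP)
curl http://publicip/api/values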

What’s next?

Complete the following homework tasks to boost your skills:

  • Make a test POD with a static volume (use Azure storage)
  • Make sure that PODs are running under a non-root account
  • Create a service account with read permission on all PODs in the cluster
  • Add context details about service accounts to your configuration file (kubeconfig), and then verify service accounts permissions
  • Configure PODs scalability based on RAM or network metrics
  • Check yourself – the answers are at my repo
  • Create CI/CD Azure DevOps pipelines to automate docker image build and environment deployment (use terraform/kubectl/docker)

Thanks for reading, stars and commits!
