
Passed Certified Kubernetes Administrator exam

About 2 weeks ago, I wrote about my LFCS experience and mentioned that it was probably the hardest exam I’ve ever taken. In fact, the LFCS exam was my first step toward the Certified Kubernetes Administrator (CKA) exam. Why would you need the LFCS, or at least strong Linux skills, before taking the CKA?

Well, the answer is quite simple: the exams are Linux-based, and Kubernetes control plane and worker nodes also run on Linux, so it’s quite logical to build, check and confirm your Linux skills first, even if you work with Linux every day. Anyway, let’s talk a bit about the CKA exam.

About the exam

  1. Exam duration is still 2 hours. I’ve been working with k8s for 3 years, and 2 hours were quite enough to finish and even verify my answers. I completed the exam in an hour and a half and had about 20 minutes to think about the tasks I wasn’t sure of. In contrast, the LFCS exam didn’t give me that opportunity – I spent the whole 2 hours solving the tasks.
  2. 100% performance-based. The CKA exam requires all work to be done on the command line. In my case, the environment was based on Ubuntu 18.04 and Kubernetes 1.20. One of the main differences between the CKA and LFCS: you are allowed to have ONE tab with kubernetes.io open! That’s logical too – 2 hours would not be enough to write YAML files without API/doc references. But again, you should be familiar with all Kubernetes objects to pass the exam; otherwise, the time limit will work against you (read docs at home, write YAML at the exam 🙂 )
  3. Online, proctored, and the certificate is valid for 3 years – good news, as I’m not ready to spend $300 every year and don’t have that much time to take exams frequently.

Requirements and Preparation

  1. At least 1-2 years of experience with Kubernetes in commercial projects (from setting up to troubleshooting/logging and monitoring)
  2. I ordered both the LFCS and CKA exams a year ago and passed them on the last day before expiration. Both exams were purchased along with the official online training courses, which I actually don’t recommend. They are 80% text plus 20% lab tasks available as PDF files, and I don’t see a significant difference between them and the official docs (kubernetes.io/docs).
  3. Complete this FREE Linux Foundation course – Introduction to Kubernetes
  4. If you prefer books, I’d recommend Kubernetes in Action (the 2nd edition is going to be published later this year; you can start with the 1st) and Core Kubernetes. I reviewed them both, and they make great companions on your journey.
  5. I’m a fan of video courses and find that they make the learning process more interactive and easier. Pluralsight is one of the best choices for everything you need – Certified Kubernetes Administrator path
  6. One of the perfect preparations for the CKA is Killer Shell – a test environment containing 25 scenarios and their solutions. This exam simulator is much more difficult than the real CKA exam – a great option for those who don’t have everyday practice with k8s. (I haven’t tried this service myself but have heard great feedback about it)
  7. Because of point 2, and as the last step, use the docs as the main source of truth, then do the tasks available at https://kubernetes.io/docs/tasks/ to check your knowledge.
  8. In summary, if you have worked with Kubernetes on a daily basis for years, you will pass the exam for sure (some preparation is still needed, though). For everyone else, complete the “from zero to hero” path and try your luck.

I wish you good luck and don’t ever give up. Cheers.


Passed Linux Foundation Certified System Administrator

Howdy,

It’s been a long time since the last blog post – weeks and months. I was, and still am, extremely busy working on some projects and preparing for my first ever Linux exam – the Linux Foundation Certified System Administrator, or just LFCS.

If you have ever looked at the About section, you may already know that my main experience has always been associated with Microsoft products/hardware; however, I have been working with Linux for 2+ years now, since I started as a DevOps engineer. Being a DevOps engineer is always about dealing with different environments, 3rd-party tools, clouds and automation. So, after years of experience with Linux and its terminal, I decided to prove and even broaden my knowledge by taking one of the most highly valued and respected exams from the Linux Foundation.

About the exam

  1. Exam duration is 2 hours. In fact, you won’t be able to read the man pages, which are available during the exam, because of the lack of time. Really. I managed to read and complete all 24 questions and had nearly no time left. My advice: use grep and man’s search features to save some time (see the sketch after this list). If you spend more than 1-2 minutes reading a man page, you will probably fail 🙂
  2. 100% performance-based. The LFCS exam requires all work to be done on the command line. I chose Ubuntu 18.04; however, it will be updated to Ubuntu 20.04 and CentOS Stream 8 soon (at the end of April, as far as I know).
    I really enjoyed this format and would like to see the same from Microsoft. Reasons? It checks your real experience and skills and completely protects against “dumpers” (LF exams don’t have any dumps available as far as I know – my respect to LF!); the exams are proctored with strict requirements (voice/video and screen sharing – I was even asked to turn off my speakers and then take them off the table); and the duration is an additional challenge. Overall, I’d rate the exam 10/10.
Exam interface: you just get a terminal window, a few servers already prepared and 24 performance-based tasks (all must be done using the CLI)
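For example, here is a minimal sketch of the man/grep tricks I mean (the keywords are just placeholders):

#Search man page names and descriptions for a keyword (same as apropos)
man -k "partition"

#Open a specific section, then search inside with /pattern (n/N to jump between matches)
man 5 fstab

#Grep a man page without opening it
man lvcreate | grep -A 2 -- "--size"

A couple of saved minutes per question quickly adds up over 24 tasks.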

Requirements and preparation

  1. At least 1 year of real experience with Ubuntu Server or CentOS
  2. Initially, I purchased a bundle (the exam and the official companion course, about $500 in total). However, I’d recommend not wasting time on the official course because it’s mostly text with only a bit of practice (boring and not efficient). Instead, buy this course, read man pages/official docs and do A LOT of practice; learn CLI tips and tricks (key bindings, etc.)
  3. Read each question carefully and spend no more than 5-7 minutes per question. Note the weight of the question (it can be 2%, 3% or even 8%, so if you can’t quickly solve a question with a 2% weight, move on and try the questions that weigh more). I attempted all 24 questions (partly or completely), so I wasn’t worried about the final grade.
  4. If you like reading books, I’d recommend the latest edition of the Linux Bible by Christopher Negus, which is actually one of the best books available on the market. What makes it unique? Its structure – every chapter has a summary followed by practice tasks, and it maps to the exam’s domains and competencies. Just start reading from the first page, do the tasks and master your CLI skills. Although the book was initially written for CentOS, the author has made changes in recent editions to fit Ubuntu as well. In short, a fundamental work that deserves a purchase.
  5. Sleep well, don’t drink the day before the exam 🙂

I wish you good luck and don’t ever give up.

LFCS: Linux Foundation Certified Systems Administrator

Deploy Azure Data Services with Terraform

June 2021 update: see details below

Terraform-based deployment of almost all Azure Data Services (default deployment settings are shown in parentheses):

  • Azure Service Bus (Standard, namespace, topic, subscription, auth. rules)
  • Azure Data Lake Storage (ZRS, Hot, Secured, StandardV2)
  • Azure Data Factory (w/Git or without)
  • Azure Data Factory linked with Data Lake Storage
  • Azure Data Factory Pipeline
  • Azure DataBricks WorkSpace (Standard)
  • Azure EventHub (Standard, namespace)
  • Azure Functions (Dynamic, LRS storage, Python, w/App.Insights or without)
  • Azure Data Explorer (Kusto, Standard_D11_v2, 2 nodes)
  • Azure Analysis Server (backup-enabled, S0, LRS, Standard)
  • Azure Event Grid (domain, EventGridSchema)
  • Azure SQL Server (version 12.0)
  • Azure SQL Database (ElasticPool SKU name, 5 GB max data size)
  • Azure SQL Elastic Pool (StandardPool, LicenseIncluded, 50 eDTU/50GB)

Properties and content

  • Over 1,000 lines and 26 Terraform resources in total
  • Almost every line is commented; each resource has multiple conditions, and variable validations check values before deployment. So it’s flexible, not hardcoded, and allows you to create infrastructure with your own set of resources.
  • Written a few years ago, updated once since then to fix deprecated features
  • June 2021 update: added SQL Server, Database and Elastic Pool; added variable validations (for example, the SQL password must be longer than 8 characters and contain upper-case letters, digits and special characters); added a sensitive variable (just as a sample); adopted new Terraform 0.15.5 syntax/features; multiple minor changes
  • Tested with the latest Terraform 0.15.5 and Azure provider 2.62.0 (the first version of the script worked fine with Terraform >=0.12 and AzureRM >=1.35 – just check the syntax and try it out)
  • auth.tf – provider authentication and version settings
  • main.tf – the desired Azure infrastructure
  • terraform.tfvars – controls deployment settings
  • variables.tf – variables list
  • outputs.tf – outputs useful information

Deployment settings (excerpt)

#--------------------------------------------------------------
# What should be deployed?
#--------------------------------------------------------------
servicebus       = true  # Azure Service Bus
datafactory      = true  # Azure Data Factory
datafactory_git  = false # Enable GIT for Data Factory? (don't forget to set Git settings in the Data Factory section)
databricks       = true  # Azure DataBricks
eventhub         = true  # Azure EventHub
functions        = true  # Azure Functions 
functions_appins = true  # Integrate App.Insights with Azure Functions?
eventgrid        = true  # Azure EventGrid
kusto            = true  # Azure Data Explorer (kusto)
analysis         = true  # Azure Analysis Server
sqlserver        = true  # Azure SQL Server 
sqlep            = true  # Azure SQL Elastic Pool
sqldb            = true  # Azure SQL Database

Resource block (excerpt)

#Azure SQL Database
resource "azurerm_mssql_database" "rlmvp-svc-sql-db" {
  count           = var.sqlserver == "true" && var.sqldb == "true" ? 1 : 0
  name            = "${var.prefix}sqldb${random_string.rndstr.result}"
  elastic_pool_id = var.sqlep == "true" ? azurerm_mssql_elasticpool.rlmvp-svc-sql-elastic-pool[count.index].id : null
  server_id       = azurerm_sql_server.rlmvp-svc-sql-server[count.index].id
  max_size_gb     = var.az_sql_db_maxsize
  sku_name        = var.az_sql_db_sku_name
  tags            = var.az_tags
}

Variable Conditions

variable "az_sqlserver_password" {
  type        = string
  description = "Azure SQL Server Admin's Password"
  validation {
    condition     = length(var.az_sqlserver_password) > 8 && can(regex("(^.*[A-Z0-9].*[[:punct:]].*$)", var.az_sqlserver_password)) # meets Azure SQL password's policy
    error_message = "SQL Server Admin's password must contain more than 6 symbols (lowercase + upper-case and special/punctuation characters!)."
  }
}

Usage guide

  • Open the terraform.tfvars file
  • Go to the “What should be deployed?” section
  • Use true/false to set your desired configuration
  • Check or change the Azure service settings in the appropriate sections (naming convention (prefix/suffix), location, SKUs, etc.)
  • Run terraform init to get the required Terraform providers
  • Run terraform plan to initiate a pre-deployment check
  • Run terraform apply to start the deployment (a consolidated sample run is shown after this list)
  • (optional) Run terraform destroy to delete the Azure resources
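Putting the steps above together, a typical run from the repo root looks like this (terraform.tfvars is loaded automatically from the working directory):

#Download the required providers and initialize the working directory
terraform init

#Preview the changes and save the plan
terraform plan -out=tfplan

#Apply the saved plan
terraform apply tfplan

#(optional) Tear everything down
terraform destroy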

Requirements

  • The script uses Service Principal authentication, so define the subscription ID, client ID, tenant ID and principal secret in auth.tf (or use another authentication type – Managed Identity, if your CI runs on Azure VMs, for instance); alternatively, export the credentials as environment variables, as shown after this list
  • If you are going to deploy Analysis Server (enabled by default), provide valid Azure AD user UPN(s) to set them as administrators of the Analysis Server (the az_ansrv_users variable in terraform.tfvars)
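As an alternative to hardcoding credentials in auth.tf, the azurerm provider also reads Service Principal credentials from environment variables; the values below are placeholders:

#Service Principal credentials for the azurerm provider
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="<principal-secret>"

With these set, no secrets have to be stored in the repo.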

Result

P.S. Feel free to share/commit/fork/slam/sell/copy and do anything your conscience allows 🙂


Playing with Kubernetes running on Azure (AKS)

Heptio (its founders co-created Kubernetes) polled about 400 IT decision-makers from different sectors and company sizes to find out whether they use Kubernetes or not, and to understand the obstacles and overall experience. About 60% of respondents are using Kubernetes today, and 65% expect to be using the technology in the next year. More surprisingly, about 77% of companies with more than 1,000 developers that run Kubernetes use it in production.

Furthermore, VMware has recently announced Project Pacific, which completely rebuilds the vSphere architecture: Kubernetes is going to be the control plane in future vSphere versions. Sounds amazing, doesn’t it?

The supervisor cluster is a Kubernetes cluster that uses ESXi as its worker nodes instead of Linux

I hope you have warmed up and taken an interest in something you may not have been familiar with – containers and Kubernetes. I believe so, and I recommend reading about Kubernetes and Docker concepts before we get started.

We’re gonna do some tasks in Azure Kubernetes Service (the managed Kubernetes service in Azure) to help you dive into Kubernetes and get hands-on experience with related services such as Container Registry, AKS, Terraform and Docker.

Notes:
  • This GitHub Repo includes everything covered in this blog post 
  • This lab uses a custom and simple ASP.NET Core web application that we will deploy and then publish using Kubernetes (K8S)
  • A Docker multi-stage image build packs up the application
  • Azure Container Registry stores the Docker image
  • Terraform automates the deployment of Azure Kubernetes Service and Azure Container Registry. The scripts are stored in a different repo
  • Azure Kubernetes Service provides a managed Kubernetes master node in the cloud with the ability to scale out worker nodes. AKS will host our PODs (roughly speaking, a POD represents one or more containers/processes running on a k8s cluster)
  • Azure CLI, PowerShell, docker, terraform and kubectl (the command-line interface for running commands against Kubernetes clusters) are the main tools for completing the tasks. Make sure you have them installed on your machine, or use Azure Cloud Shell instead.
  • Create a Service Principal beforehand (Contributor role) – a sample command is shown right after this list
  • Azure DevOps is used for CI/CD (optional)
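A sample Service Principal creation (the name and subscription ID are placeholders; note the appId/password values returned – Terraform will need them):

#Create a Service Principal with the Contributor role
az ad sp create-for-rbac --name "aks-lab-sp" --role Contributor --scopes "/subscriptions/00000000-0000-0000-0000-000000000000"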

Deploy Kubernetes cluster in Azure

Although we can use the Azure Portal, CLI or PowerShell to deploy new Azure services, the Infrastructure as Code (IaC) approach is more far-sighted. We will use Terraform, so check out my repo and the comments inside. Terraform creates the Azure Container Registry:

#Get RG (create a new one if necessary by using "resource azurerm..")
data "azurerm_resource_group" "Rg" {
  name = "kubRg"
}

............

#Create a container registry
resource "azurerm_container_registry" "cr" {
  name                = "cr${random_string.randomName.result}"
  resource_group_name = "${data.azurerm_resource_group.Rg.name}"
  location            = "${data.azurerm_resource_group.Rg.location}"
  admin_enabled       = true
  sku                 = "Basic"
  # Only for classic SKU (deprecated)
  # storage_account_id  = "${azurerm_storage_account.storacc.id}" (Classic)
}
..............

It then configures the Kubernetes cluster with Azure Container Network Interface (CNI), which lets you access PODs directly, as every POD gets an IP from the Azure subnet instead of using kubenet. At the end of the configuration file, Terraform enables K8S RBAC (disabled by default in Azure), which we’ll use later when creating service accounts.

resource "azurerm_kubernetes_cluster" "k8sClu" {
  name                = "rlk8sclu-${random_string.randomName.result}"
  location            = "${data.azurerm_resource_group.Rg.location}"
  resource_group_name = "${data.azurerm_resource_group.Rg.name}"
  dns_prefix          = "${var.dnsPrefix}"

  .......

  network_profile {
    network_plugin = "azure"
  }

  role_based_access_control {
    enabled = true
  }
  .........

Apply the configuration, and then check the output (in my case, the resources had already been deployed, so there was nothing to add). Note the ACR and AKS resource names (/managedClusters/…; registries/…)

For an additional deployment check, open up Azure Cloud Shell or Azure CLI and type the following to open the Kubernetes dashboard:

#Get Azure AKS Credentials
az login
az aks get-credentials --resource-group kubRg --name rlk8sclu-l3y5

#Open K8S dashboard
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
az aks browse --resource-group kubRg --name rlk8sclu-l3y5

#View kubectl config (optional, %HOMEPATH%/.kube/config)
kubectl config view

If your browser shows a new page, there most likely aren’t any issues with the deployment. Let’s jump into the second task.

TIP: get addresses of the master and services by running kubectl cluster-info

Make Docker image with the application

Let’s create a Docker image with the application, and then push the image to the Azure Container Registry. The Dockerfile is located at the root of the aspnetapp folder (check out my repo) and describes the multi-stage image build process. There is also a .dockerignore file that defines the folders to be excluded from the image.

Run the Docker CLI and build the image (docker build <dir>):
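For example, from the aspnetapp folder (the -t flag tags the image with the name used in the next step):

#Build the image from the Dockerfile in the current directory
docker build -t aspnetapp .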

Push the image to Azure Container Registry:

#Log in to the registry (az acr login or docker login both work)
az acr login --name acrName
docker login acrFQDN

#Tag the image with the registry FQDN and push it
docker tag aspnetapp acrFQDN/aspnetapp
docker push acrFQDN/aspnetapp

TIP: get the attributes of the image by running az acr repository show -n crl3y5 --image aspnetapp:latest

Make two fully isolated namespaces within the Kubernetes cluster

Once the docker image is uploaded to ACR, we are ready to proceed with the Kubernetes tasks. When you need to change something in K8S, you can either use kubectl to perform operations one by one, or use manifest files (YAML) that describe multiple requests to the K8S API server in a declarative form.

If you look at my repo, you can see two folders, ns1 and ns2, that store the yaml/manifest files for the respective namespaces. We’ll use those files in conjunction with kubectl to make some changes on the AKS cluster. Because the manifest files are almost identical, only the manifests for NS1 are shown.

#Apply the manifest (example)
kubectl apply -f pathToTheManifestFile

Create a new namespace:

#Create a namespace 1
apiVersion: v1
kind: Namespace
metadata:
  name: ns1
  labels:
    author: rlevchenko

To deny ingress traffic from PODs running in other namespaces:

#NS1
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
  namespace: ns1
spec:
  podSelector:
    matchLabels: {}
  ingress:
    - from:
        - podSelector: {}

TIP: use kubectl get namespaces to list namespaces and kubectl get networkpolicy -n ns1 to get the policy

Configure anti-affinity for PODs

To make sure that a group of PODs (the labelSelector section) is spread across particular nodes in the cluster, we need to configure affinity/anti-affinity rules. This anti-affinity rule ensures that PODs with the app=aspcore label do not co-locate on a single node.

.....
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - aspcore
              topologyKey: "kubernetes.io/hostname"
......

TIP: use kubectl get pods -o wide -n ns1 to get info about assigned nodes, and then kubectl get nodes --show-labels to check node labels

Configure network policy to deny egress traffic from PODs (except DNS requests)

This task shows how you can filter network traffic from PODs in a namespace. All PODs with the app=aspcore label in the first namespace can only make DNS requests externally; all other egress is denied (the second rule still allows in-cluster traffic to any namespace).

#Deny all traffic (except DNS) from PODs
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-external-egress
  namespace: ns1
spec:
  podSelector:
    matchLabels:
      app: aspcore
  policyTypes:
    - Egress
  egress:
    # allow DNS TCP/UDP 53 ports
    - ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
    - to:
        - namespaceSelector: {}

TIP: get the list of network policies with kubectl get networkpolicy -n ns1

Create a service account with read permission on PODs in the first namespace

A service account provides an identity for processes that run in a Pod. This excerpt of the manifest file describes a service account read-sa-ns that has read-only permissions on PODs in the NS1 namespace (see the rules section/verbs). Also note that it relies on RBAC, which we enabled while applying the Terraform configuration.

#New SA - ns level
apiVersion: v1
kind: ServiceAccount
metadata:
  name: read-sa-ns
---
#New Role - ns level
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only-ns
  namespace: ns1
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
#Binding the role to the sa -NS1
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-ns1-pods
  namespace: ns1
subjects:
  - kind: ServiceAccount
    name: read-sa-ns
    apiGroup: ""
    namespace: default
roleRef:
  kind: Role
  name: read-only-ns
  apiGroup: rbac.authorization.k8s.io

TIP: get the roles in the NS1 namespace with kubectl get role -n ns1, and then check the service accounts in the K8S cluster with kubectl get serviceaccounts --all-namespaces
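You can also verify the permissions without switching contexts – kubectl auth can-i supports impersonating a service account (the default namespace here matches the RoleBinding subject above):

#Should return "yes" (read access in ns1)
kubectl auth can-i list pods -n ns1 --as=system:serviceaccount:default:read-sa-ns

#Should return "no" (the role grants no write verbs)
kubectl auth can-i delete pods -n ns1 --as=system:serviceaccount:default:read-sa-ns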

Set CPU and RAM limits for each pod

If a container is created in the ns1 namespace and does not specify its own values for resource requests and limits, it is given a default memory request of 128 MiB and a default memory limit of 256 MiB, plus a default CPU request/limit of 200m/400m, per the manifest below. In addition, you can define limits at the POD level.

#Define mem-cpu limits
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-cpu-limit-range
  namespace: ns1
spec:
  limits:
  - default:
      memory: 256Mi
      cpu: "400M"
    defaultRequest:
      memory: 128Mi
      cpu: "200M"
    type: Container

TIP: check the limits by running kubectl describe pod podname -n ns1
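A quick way to see the defaults in action is to start a throwaway pod without any resource settings and inspect it (nginx here is just a sample image):

#Run a pod without resource requests/limits
kubectl run limits-test --image=nginx -n ns1

#The container should show the default requests/limits injected by the LimitRange
kubectl describe pod limits-test -n ns1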

Configure PODs scalability based on CPU metric

Kubernetes allows you to automatically scale PODs based on CPU/RAM metrics (the horizontal pod autoscaler). If average CPU utilization is equal to or greater than 70%, K8S deploys additional replicas (see the spec stanza, maxReplicas).

#Scale pods automatically (cpu metric)
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: aspcore-load-cpu
  namespace: ns1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: asp-deployment1
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 70

TIP: to list the NS limits kubectl describe namespace ns1
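To watch the autoscaler itself (current vs. target CPU and the replica count):

#Watch the HPA status
kubectl get hpa aspcore-load-cpu -n ns1 --watch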

Publish the application

Now it’s time to publish the application running in a container. First, create a Deployment that uses the docker image we’ve already pushed to the Azure Container Registry. One Pod with the latest image will be created in the ns1 namespace. Check the labels (one of the most important things in K8S 🙂 ), the pod name and the number of replicas.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: asp-deployment1
  namespace: ns1
  labels:
    app: web
    release: stable
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspcore
  template:
    metadata:
      labels:
        app: aspcore
    spec:
      containers:
        - name: web-aspcore
          image: crl3y5.azurecr.io/aspnetapp:latest

TIP: use kubectl get pods -n ns1 -o wide to check the pod state in the ns1

If the Pod’s status is Running, you can publish it via a LoadBalancer service:

#Publish the deployment through the Service

apiVersion: v1
kind: Service
metadata:
  name: demo-service1
  namespace: ns1
spec:
  selector:
    app: aspcore
  type: LoadBalancer
  ports:
    - name: name-of-the-port
      port: 80
      targetPort: 80

Then check the deployment status and get the public IP of the service:

#Get deployments in the NS1
kubectl get deployments -n ns1

#Get Service's Public IP
kubectl get service -n ns1 -o jsonpath='{.items[].status.loadBalancer.ingress[0].ip}'

Open up a browser and navigate to http://publicip/api/values to verify that the application is published and works:
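The same check works from the command line (reusing the jsonpath trick from above; the lookup assumes the service is named demo-service1, as in the manifest):

#Get the public IP and call the API
SERVICE_IP=$(kubectl get service demo-service1 -n ns1 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl "http://$SERVICE_IP/api/values"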

What’s next?

Complete the following homework tasks to boost your skills:

  • Make a test POD with a static volume (use Azure storage)
  • Make sure that PODs are running under a non-root account
  • Create a service account with read permission on all PODs in the cluster
  • Add context details about service accounts to your configuration file (kubeconfig), and then verify service accounts permissions
  • Configure PODs scalability based on RAM or network metrics
  • Check yourself – the answers are at my repo
  • Create CI/CD Azure DevOps pipelines to automate docker image build and environment deployment (use terraform/kubectl/docker)

Thanks for reading, stars and commits!

Webinar – Your 5 Most Critical M365 Vulnerabilities Revealed and How to Fix Them

Microsoft 365 is an incredibly powerful software suite for businesses, but it is becoming increasingly targeted by people trying to steal your data. The good news is that there are plenty of ways admins can fight back and safeguard their Microsoft 365 infrastructure against attack.

This free upcoming webinar, produced by Hornetsecurity/Altaro and taking place on June 23, features two enterprise security experts from the leading security consultancy Truesec – Security Team Leader Fabio Viggiani and Principal Cyber Security Advisor Hasain Alshakarti. They will explain the 5 most critical vulnerabilities in your M365 environment and what you can do to mitigate the risks they pose. To help attendees fully understand the situation, a series of live demonstrations will reveal the threats and their solutions, covering:

  • O365 Credential Phishing
  • Insufficient or Incorrectly Configured MFA Settings
  • Malicious Application Registrations
  • External Forwarding and Business Email Compromise Attacks
  • Insecure AD Synchronization in Hybrid Environments

This is truly an unmissable event for all Microsoft 365 admins!

The webinar will be presented live twice on June 23 to enable as many people as possible to join the event live and ask questions directly to the expert panel of presenters. It will be presented at 2pm CEST/8am EDT/5am PDT and 7pm CEST/1pm EDT/10am PDT.

Don’t miss out – Save your seat now!