Onboarding: Azure Infrastructure Deployment

Scenarios

Keywords

  • Azure Resource Manager (ARM)

Available provisioning solutions

Discover the services and tools available to automate the deployment and configuration of your Azure infrastructure

Scenario: A clothing manufacturer that’s moving several product design applications to Azure virtual machines. The company needs to scale out to many virtual machines now and in the future. Their current manual process is time consuming and error prone. They want to automate the scale-out process to improve operational abilities. They’re unsure about the tools that are available on Azure to provision compute resources, and where each fits into the overall provisioning process.

Available provisioning solutions are:

  • Custom scripts (VMs)
  • Desired State Configuration Extensions (VMs)
  • Chef Server
  • Terraform (all resources)
  • Azure Automation State Configuration
  • Azure Resource Manager templates (all resources)
Custom Script Extension (VMs)
  • the Custom Script Extension downloads and runs scripts on VMs
  • useful for post-deployment configuration and software installation
  • the script can be a PowerShell script hosted on
    • a local file server,
    • GitHub,
    • Azure Storage,
    • any other location that is accessible to the VM
  • available via PowerShell, the Azure CLI, or an ARM template
{
    "apiVersion": "2019-06-01",
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "[concat(variables('vmName'),'/', 'InstallWebServer')]",
    "location": "[parameters('location')]",
    "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/',variables('vmName'))]"
    ],
    "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.7",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "fileUris": [
                "https://your-potential-file-location.com/your-script-file.ps1"
            ],
            "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File your-script-file.ps1"
        }
    }
}

Note: Take care if your configuration or management task requires a restart. A custom script extension won’t continue after a restart.

How to extend a Resource Manager template

There are several ways:

  • create multiple templates, each defining one piece of the system, then link or nest them together to build a more complete system
  • modify an existing template (often the fastest way to get started writing your own templates)

Example

  1. Create a VM.
  2. Open port 80 through the network firewall.
  3. Install and configure web server software on your VM.
# Requirements:
# Create a VM.
# Open port 80 through the network firewall.
# Install and configure web server software on your VM.

az vm extension set \
  --resource-group $RESOURCEGROUP \
  --vm-name SimpleWinVM \
  --name CustomScriptExtension \
  --publisher Microsoft.Compute \
  --version 1.9 \
  --settings '{"fileUris":["https://raw.githubusercontent.com/MicrosoftDocs/mslearn-welcome-to-azure/master/configure-iis.ps1"]}' \
  --protected-settings '{"commandToExecute": "powershell -ExecutionPolicy Unrestricted -File configure-iis.ps1"}' # the script to enable IIS


# This is the content of the configure-iis.ps1 file
#--------------------------------------------------------------
# Install IIS.
dism /online /enable-feature /featurename:IIS-WebServerRole

# Set the home page.
Set-Content `
  -Path "C:\\inetpub\\wwwroot\\Default.htm" `
  -Value "<html><body><h2>Welcome to Azure! My name is $($env:computername).</h2></body></html>"
#--------------------------------------------------------------

Source

Desired State Configuration Extensions (VMs)
  • DSC extensions are for more complex configuration and installation tasks
  • the configuration that describes the desired state can be located in Blob storage or internal file storage
  • DSC can reboot the machine and continue execution after the reboot completes
{
	"type": "Microsoft.Compute/virtualMachines/extensions",
	"name": "Microsoft.Powershell.DSC",
	"apiVersion": "2018-06-30",
	"location": "your-region",
	"dependsOn": [
		"[concat('Microsoft.Compute/virtualMachines/', parameters('virtual machineName'))]"
	],
	"properties": {
		"publisher": "Microsoft.Powershell",
		"type": "DSC",
		"typeHandlerVersion": "2.77",
		"autoUpgradeMinorVersion": true,
		"settings": {
			"configuration": {
				"url": "https://demo.blob.core.windows.net/iisinstall.zip",
				"script": "IisInstall.ps1",
				"function": "IISInstall"
			}
		},
		"protectedSettings": {
			"configurationUrlSasToken": "odLPL/U1p9lvcnp..."
		}
	}
}
Chef Automate Server
  • a Chef server can handle roughly 10,000 nodes/machines at a time
  • works on-premises and in the cloud
  • it can also be hosted for you and consumed as a service
  • Use Chef’s knife tool to deploy virtual machines and simultaneously apply recipes to them. You install the knife tool on your admin workstation, which is the machine where you create policies and execute commands. Then run your knife commands from your admin workstation.
# The following example shows how a knife command can be used to create a virtual machine on Azure. The command
# simultaneously applies a recipe that installs a web server on the machine.

knife azurerm server create `
    --azure-resource-group-name rg-chefdeployment `
    --azure-storage-account store `
    --azure-vm-name chefvm `
    --azure-vm-size 'Standard_DS2_v2' `
    --azure-service-location 'eastus' `
    --azure-image-reference-offer 'WindowsServer' `
    --azure-image-reference-publisher 'MicrosoftWindowsServer' `
    --azure-image-reference-sku '2016-Datacenter' `
    --azure-image-reference-version 'latest' `
    -x myuser `
    -P yourPassword `
    --tcp-endpoints '80,3389' `
    --chef-daemon-interval 1 `
    -r "recipe[webserver]"

You can also use the Chef extension to apply recipes to the target machines. The following example defines a Chef extension for a virtual machine in an Azure Resource Manager template. It points to a Chef server by using the chef_server_url property. It points to a recipe to run on the virtual machine to put it in the desired state.

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('virtual machineName'),'/', variables('virtual machineExtensionName'))]",
  "apiVersion": "2015-05-01-preview",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('virtual machineName'))]"
  ],
  "properties": {
    "publisher": "Chef.Bootstrap.WindowsAzure",
    "type": "LinuxChefClient",
    "typeHandlerVersion": "1210.12",
    "settings": {
      "bootstrap_options": {
        "chef_node_name": "chef_node_name",
        "chef_server_url": "chef_server_url",
        "validation_client_name": "validation_client_name"
      },
      "runlist": "recipe[your-recipe]",
      "validation_key_format": "validation_key_format",
      "chef_service_interval": "chef_service_interval",
      "bootstrap_version": "bootstrap_version",
      "bootstrap_channel": "bootstrap_channel",
      "daemon": "service"
    },
    "protectedSettings": {
      "validation_key": "validation_key",
      "secret": "secret"
    }
  }
}

A recipe might look like the one that follows. The recipe installs an IIS web server.

#install IIS on the node.
powershell_script 'Install IIS' do
     action :run
     code 'add-windowsfeature Web-Server'
end

service 'w3svc' do
     action [ :enable, :start ]
end

Terraform
  • uses the HashiCorp Configuration Language (HCL)
# Configure the Microsoft Azure provider
provider "azurerm" {
    subscription_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    client_id       = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    client_secret   = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    tenant_id       = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}

# Create a resource group
resource "azurerm_resource_group" "myterraformgroup" {
    name     = "myResourceGroup"
    location = "eastus"

    tags = {
        environment = "Terraform Demo"
    }
}
# Create the virtual machine
resource "azurerm_virtual_machine" "myterraformvirtual machine" {
    name                  = "myvirtual machine"
    location              = "eastus"
    resource_group_name   = "${azurerm_resource_group.myterraformgroup.name}"
    network_interface_ids = ["${azurerm_network_interface.myterraformnic.id}"]
    virtual machine_size               = "Standard_DS1_v2"

    storage_os_disk {
        name              = "myOsDisk"
        caching           = "ReadWrite"
        create_option     = "FromImage"
        managed_disk_type = "Premium_LRS"
    }

    storage_image_reference {
        publisher = "Canonical"
        offer     = "UbuntuServer"
        sku       = "16.04.0-LTS"
        version   = "latest"
    }

    os_profile {
        computer_name  = "myVM"
        admin_username = "azureuser"
    }

    os_profile_linux_config {
        disable_password_authentication = true
        ssh_keys {
            path     = "/home/azureuser/.ssh/authorized_keys"
            key_data = "ssh-rsa AAAAB3Nz{snip}hwhaa6h"
        }
    }

    boot_diagnostics {
        enabled     = "true"
        storage_uri = "${azurerm_storage_account.mystorageaccount.primary_blob_endpoint}"
    }

    tags = {
        environment = "Terraform Demo"
    }
}

To deploy a Terraform configuration, the following commands are used:

  • terraform init
  • terraform plan
  • terraform apply
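
A minimal end-to-end sketch of that workflow, assuming the configuration above is saved as main.tf in the current directory:

# Initialize the working directory and download the azurerm provider plugin
terraform init

# Preview the changes and save the reviewed plan to a file
terraform plan -out=tfplan

# Apply exactly the plan that was reviewed
terraform apply tfplan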

Source

Azure Automation State Configuration (DSC)
Azure Resource Manager templates (all resources)
  • Azure Resource Manager (ARM) template
    • structured into sections, each with specific properties
    • the version of the template language matters, e.g. “2019-04-01”
    • Resource Manager templates express your deployments as code
    • Azure Resource Manager is the interface for managing and organizing cloud resources
    • Resource Manager organizes the resource groups that let you deploy, manage, and delete all of the resources together in a single action
    • a Resource Manager template is a JSON file
    • a form of declarative automation (you define what resources you need, not how to create them)
    • Templates make your deployments faster and more repeatable
    • Templates improve consistency
    • Templates help express complex deployments
    • Templates reduce manual, error-prone tasks
    • Templates are code
    • Templates promote reuse
    • Templates are linkable

Sections of the ARM template

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", # required
  "contentVersion": "", # required : any value is acceptable
  "apiProfile": "",
  "parameters": {  },
  "variables": {  },
  "functions": [  ],
  "resources": [  ], # required
  "outputs": {  }
}

Parameters

The parameters section defines the input values that are supplied at deployment time.

A template is limited to 256 parameters.

You can use objects that contain multiple properties to stay under this limit, instead of passing each value as its own parameter.

"parameters": {
  "<parameter-name>" : { # required
    "type" : "<type-of-parameter-value>", # required : [string|securestring|int|bool|object|secureObject|array]
    "defaultValue": "<default-value-of-parameter>",
    "allowedValues": [ "<array-of-allowed-values>" ],
    "minValue": <minimum-value-for-int>,
    "maxValue": <maximum-value-for-int>,
    "minLength": <minimum-length-for-string-or-array>,
    "maxLength": <maximum-length-for-string-or-array-parameters>,
    "metadata": {
      "description": "<description-of-the parameter>"
    }
  }
}
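
At deployment time, parameter values can be passed inline or from a parameter file. A sketch with the Azure CLI (resource group, template, and file names are placeholder values):

# Pass parameter values inline ...
az deployment group create \
  --resource-group myResourceGroup \
  --template-file template.json \
  --parameters adminUsername=serveradmin vmName=hostVM1

# ... or reference a parameter file instead
az deployment group create \
  --resource-group myResourceGroup \
  --template-file template.json \
  --parameters @parameters.json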

Variables

  • Variables reduce complexity by defining a value once and reusing it throughout the template
"variables": {
  "<variable-name>": "<variable-value>",
  "<variable-name>": {
    <variable-complex-type-value>
  },
  "<variable-object-name>": {
    "copy": [
      {
        "name": "<name-of-array-property>",
        "count": <number-of-iterations>,
        "input": <object-or-value-to-repeat>
      }
    ]
  },
  "copy": [
    {
      "name": "<variable-array-name>",
      "count": <number-of-iterations>,
      "input": <object-or-value-to-repeat>
    }
  ]
}

Functions

  • procedures that you don’t want to repeat throughout the template
  • This example creates a unique name for resources
"functions": [
  {
    "namespace": "contoso",
    "members": {
      "uniqueName": {
        "parameters": [
          {
            "name": "namePrefix",
            "type": "string"
          }
        ],
        "output": {
          "type": "string",
          "value": "[concat(toLower(parameters('namePrefix')), uniqueString(resourceGroup().id))]"
        }
      }
    }
  }
],

Outputs

  • any information you’d like to receive when the template runs
  • information you don’t know until the deployment runs (e.g. a VM’s IP address or FQDN)
"outputs": {
  "hostname": {
    "type": "string",
    "value": "[reference(variables('publicIPAddressName')).dnsSettings.fqdn]"
  }
}
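
After a deployment completes, the declared outputs can be queried back with the CLI. A sketch, assuming a deployment named MyDeployment that declares the hostname output above:

az deployment group show \
  --name MyDeployment \
  --resource-group myResourceGroup \
  --query properties.outputs.hostname.value \
  --output tsv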

How to write an ARM template

Example

{
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2018-10-01",
  "name": "[variables('virtual machineName')]",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]",
    "[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
  ],
  "properties": {
    "hardwareProfile": {
      "virtual machinesize": "Standard_A2"
    },
    "osProfile": {
      "computerName": "[variables('virtual machineName')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "MicrosoftWindowsServer",
        "offer": "WindowsServer",
        "sku": "[parameters('windowsOSVersion')]",
        "version": "latest"
      },
      "osDisk": {
        "createOption": "FromImage"
      },
      "dataDisks": [
        {
          "diskSizeGB": 1023,
          "lun": 0,
          "createOption": "Empty"
        }
      ]
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "[resourceId('Microsoft.Network/networkInterfaces',variables('nicName'))]"
        }
      ]
    },
    "diagnosticsProfile": {
      "bootDiagnostics": {
        "enabled": true,
        "storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))).primaryEndpoints.blob]"
      }
    }
  }
}
First-party solution comparison

Ease of setup
  • Custom script: built into the Azure portal, so setup is easy.
  • DSC extensions: configurations are easy to read, update, and store. They define what state you want to achieve; the author doesn’t need to know how that state is reached.
  • Automation State Configuration: isn’t difficult to set up, but requires familiarity with the Azure portal.
  • Resource Manager templates: easy to create. Many templates are available from the GitHub community to use or build upon; alternatively, you can create your own templates from the Azure portal.

Management
  • Custom script: can get tricky as your infrastructure grows and you accumulate different custom scripts for different resources.
  • DSC extensions: democratize configuration management across servers.
  • Automation State Configuration: the service manages all of the virtual machines for you automatically. Each virtual machine can send detailed reports about its state, which you can use to draw insights. It also helps you manage your DSC configurations more easily.
  • Resource Manager templates: straightforward, because you manage JavaScript Object Notation (JSON) files.

Interoperability
  • Custom script: can be added to an Azure Resource Manager template, or deployed through Azure PowerShell or the Azure CLI.
  • DSC extensions: are used with Azure Automation State Configuration. They can be configured through the Azure portal, Azure PowerShell, or Azure Resource Manager templates.
  • Automation State Configuration: requires DSC configurations. It works with your Azure virtual machines automatically, and with any virtual machines you have on-premises or on another cloud provider.
  • Resource Manager templates: can be provisioned with other tools, such as the Azure CLI, the Azure portal, PowerShell, and Terraform.

Configuration language
  • Custom script: many types of commands, e.g. PowerShell, Bash.
  • DSC extensions: PowerShell.
  • Automation State Configuration: PowerShell.
  • Resource Manager templates: JSON.

Limitations and drawbacks
  • Custom script: not suitable for long-running scripts or scripts that need reboots.
  • DSC extensions: only PowerShell can be used to define configurations. Without Azure Automation State Configuration, you have to take care of your own orchestration and management.
  • Automation State Configuration: PowerShell only.
  • Resource Manager templates: JSON has a strict syntax and grammar, and mistakes can easily render a template invalid. The requirement to know all of the resource providers in Azure and their options can be onerous.
[Source]

Scenario for custom script: The organization you work for has been given a new contract to work for a new client. They have a handful of virtual machines that run on Azure. The development team decides they need to install a small application they’ve written to help increase their team’s productivity and make sure they can meet new deadlines. This application doesn’t require a restart.

Custom script advantages: The custom script extension is good for small configurations after provisioning. It’s also good if you need to add or update some applications on a target machine quickly, and it’s handy for ad-hoc cross-platform scripting.

Scenario for Azure Desired State Configuration: The organization you work for is testing a new application, which requires new virtual machines to be identical so that the application can be accurately tested. The company wants to ensure that the virtual machines have the exact same configuration settings. You notice that some of these settings require multiple restarts of each virtual machine. Your company wants a singular state configuration for all machines at the point of provisioning. Any error handling to achieve the state should be abstracted as much as possible from the state configuration. Configurations should be easy to read.

Azure Desired State Configuration advantages: DSC is easy to read, update, and store. DSC configurations help you declare the state your machines should be in at the point they are provisioned, rather than having instructions that detail how to put the machines in a certain state. Without Azure Automation State Configuration, you have to manage your own DSC configurations and orchestration. DSC can achieve more when it’s coupled with Azure Automation State Configuration.

Scenario for Azure State Configuration: You learn that the company you work for wants to be able to create hundreds of virtual machines, with identical configurations. They want to report back on these configurations. They want to be able to see which machines accept which configurations without problems. They also want to see those problems when a machine doesn’t achieve a desired state. In addition, they want to be able to feed all of this data into a monitoring tool so they can analyze all of the data and learn from it.

Azure State Configuration advantages: The Azure Automation State Configuration service is good for automating your DSC configurations, along with the management of machines that need those configurations, and getting centralized reporting back from each machine. You can use DSC without Azure Automation State Configuration, particularly if you want to administer a smaller number of machines. For larger and more complicated scenarios that need orchestration, Azure Automation State Configuration is the solution you need. All of the configurations and features that you need can be pushed to all of the machines, and applied equally, with minimal effort.

Scenario for ARM Templates: Each developer should be able to automatically provision an entire group of virtual machines that are identical to what everyone else on the team creates. The developers want to be sure they’re all working in the same environment. The developers are familiar with JSON, but they don’t necessarily know how to administer infrastructure. They need to be able to provision all of the resources they need to run these virtual machines in an easy and rapid manner.

ARM Template advantages: Resource Manager templates can be used for small ad-hoc infrastructures. They’re also ideal for deploying larger infrastructures with multiple services along with their dependencies. Resource templates can fit well into developers’ workflows. You use the same template to deploy your application repeatedly during every stage of the application lifecycle.

Third-party solution comparison

Ease of setup
  • Chef: the Chef server runs on the master machine, and Chef clients run as agents on each of your client machines. You can also use hosted Chef and get started much faster, instead of running your own server.
  • Terraform: download the version that corresponds with your operating system and install it.

Management
  • Chef: can be difficult because it uses a Ruby-based domain-specific language. You might need a Ruby developer to manage the configuration.
  • Terraform: its files are designed to be easy to manage.

Interoperability
  • Chef: the Chef server only works under Linux and Unix, but the Chef client can run on Windows.
  • Terraform: supports Azure, Amazon Web Services, and Google Cloud Platform.

Configuration language
  • Chef: uses a Ruby-based domain-specific language.
  • Terraform: uses HashiCorp Configuration Language (HCL). You can also use JSON.

Limitations and drawbacks
  • Chef: the language can take time to learn, especially for developers who aren’t familiar with Ruby.
  • Terraform: because it is managed separately from Azure, you might find that you can’t provision some types of services or resources.
[Source]

Scenario for Chef Server: Your organization has decided to let the developers create some virtual machines for their own testing purposes. The development team knows various programming languages and recently started writing Ruby applications. They’d like to scale these applications and run them on test environments. They’re familiar with Linux. The developers run only Linux-based machines and destroy them after testing is finished.

Chef Server advantages: Chef is suitable for large-scale infrastructure deployment and configuration. Chef makes it easy for you to automate the deployment of an entire infrastructure, such as in the workflow of a development team.

Scenario for Terraform: Your organization has gained a new client who wants to create multiple virtual machines across several cloud providers. The client has asked you to create three new virtual machines in Azure and one other in the public cloud. The client wants the virtual machines to be similar. They should be created by using a script that works with both providers. This approach will help the client have a better idea of what they’ve provisioned across providers.

Terraform advantages: With Terraform, you can plan the infrastructure as code and see a preview of what the code will create. You can have that code peer reviewed to minimize errors in configuration. Terraform supports infrastructure configurations across different cloud service providers.

Example

# Source : https://docs.microsoft.com/en-us/learn/modules/choose-compute-provisioning/5-exercise-deploy-template
# Clone the configuration and template
git clone https://github.com/MicrosoftDocs/mslearn-choose-compute-provisioning.git

cd mslearn-choose-compute-provisioning
code Webserver.ps1

# file content 
Configuration Webserver
{
    param ($MachineName)

    Node $MachineName
    {
        #Install the IIS Role
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name = "Web-Server"
        }

        #Install ASP.NET 4.5
        WindowsFeature ASP
        {
            Ensure = "Present"
            Name = "Web-Asp-Net45"
        }

        WindowsFeature WebServerManagementConsole
        {
            Name = "Web-Mgmt-Console"
            Ensure = "Present"
        }
    }
}

# configure template
code template.json

# replace modulesUrl parameter in template
"modulesUrl": {
    "type": "string",
    "metadata": {
        "description": "URL for the DSC configuration module."
    }
},

# Validate your template
az deployment group validate \
    --resource-group learn-46d7acf0-e3c7-48c8-9416-bf9f3875659c \
    --template-file template.json \
    --parameters vmName=hostVM1 adminUsername=serveradmin

# Deploy your template
az deployment group create \
    --resource-group learn-46d7acf0-e3c7-48c8-9416-bf9f3875659c \
    --template-file template.json \
    --parameters vmName=hostVM1 adminUsername=serveradmin

az resource list \
    --resource-group learn-46d7acf0-e3c7-48c8-9416-bf9f3875659c \
    --output table \
    --query "[*].{Name:name, Type:type}"

echo http://$(az vm show \
    --show-details \
    --resource-group learn-46d7acf0-e3c7-48c8-9416-bf9f3875659c \
    --name hostVM1 \
    --query publicIps \
    --output tsv)

Source

Deploy an ARM template via PowerShell

  1. verify the template
  2. visualize the template http://armviz.io/designer

PowerShell

New-AzResourceGroup -Name <resource-group-name> -Location <resource-group-location> #use this command when you need to create a new resource group for your deployment
New-AzResourceGroupDeployment -ResourceGroupName <resource-group-name> -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.json

CLI

az group create --name <resource-group-name> --location <resource-group-location> #use this command when you need to create a new resource group for your deployment
az deployment group create --resource-group <my-resource-group> --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.json

Example

# define parameters for ARM template
RESOURCEGROUP=learn-quickstart-vm-rg
LOCATION=eastus
USERNAME=azureuser
PASSWORD=$(openssl rand -base64 32)
DNS_LABEL_PREFIX=mydeployment-$RANDOM   # example value; must be globally unique

# create resource group
az group create --name $RESOURCEGROUP --location $LOCATION

# validate the template
az deployment group validate \
  --resource-group $RESOURCEGROUP \
  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.json" \
  --parameters adminUsername=$USERNAME \
  --parameters adminPassword=$PASSWORD \
  --parameters dnsLabelPrefix=$DNS_LABEL_PREFIX

# deploy the template
az deployment group create \
  --name MyDeployment \
  --resource-group $RESOURCEGROUP \
  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.json" \
  --parameters adminUsername=$USERNAME \
  --parameters adminPassword=$PASSWORD \
  --parameters dnsLabelPrefix=$DNS_LABEL_PREFIX

# verify the deployment
az deployment group show \
  --name MyDeployment \
  --resource-group $RESOURCEGROUP

# list the VMs
az vm list \
  --resource-group $RESOURCEGROUP \
  --output table

Add additional customizations to template

The solution is to use the Custom Script Extension, which is explained at the top of this document.

Source




Azure Storage and Best Practices

Topics

  • Call Storage Rest API
  • How to authenticate with Azure Storage
  • How to secure the authentication values

This document presents best practices for Azure Storage.

Call the Storage REST API

The Storage REST API can be called over HTTP/HTTPS as follows. The response is XML, so the pre-built client libraries can help when working with the output.

GET https://[url-for-service-account]/?comp=list&include=metadata

# A custom domain can be used as well; the default endpoints are:
# https://[StorageName].blob.core.windows.net/
# https://[StorageName].queue.core.windows.net/
# https://[StorageName].table.core.windows.net/
# https://[StorageName].file.core.windows.net/
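
For illustration, here is the account-level List Containers call made with curl; $STORAGE_ACCOUNT and $SAS (an account-level SAS token) are placeholders you must supply, and the response body is XML:

# List the containers in the account, including metadata (returns XML)
curl "https://$STORAGE_ACCOUNT.blob.core.windows.net/?comp=list&include=metadata&$SAS"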

How to authenticate with Azure Storage

  1. Storage connection string: DefaultEndpointsProtocol=https;AccountName={your-storage};AccountKey={your-access-key};EndpointSuffix=core.windows.net
  2. Access key & API endpoint: each storage account has unique access keys.
  3. Shared Access Signature (SAS): supports fine-grained permissions (see the sketch after this list).
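
A SAS token can be generated from the CLI. A sketch that creates a read-only SAS for a single container (resource group, account, container name, and expiry are example values):

# Fetch one of the account keys
STORAGE_KEY=$(az storage account keys list \
    --resource-group myResourceGroup \
    --account-name parisalsnstorage \
    --query "[0].value" --output tsv)

# Create a read-only SAS for one container, valid until the given expiry
az storage container generate-sas \
    --account-name parisalsnstorage \
    --account-key "$STORAGE_KEY" \
    --name photoblobs \
    --permissions r \
    --expiry 2025-01-01T00:00Z \
    --output tsv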

How to secure the authentication values

  1. Using a key/value secret store (for example, Azure Key Vault)
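
Assuming the key/value store is Azure Key Vault, a sketch of storing and reading the connection string (vault and secret names are hypothetical):

# Store the connection string as a secret
az keyvault create --resource-group myResourceGroup --name my-secrets-vault
az keyvault secret set \
    --vault-name my-secrets-vault \
    --name StorageConnectionString \
    --value "DefaultEndpointsProtocol=https;AccountName={your-storage};AccountKey={your-access-key};EndpointSuffix=core.windows.net"

# Read it back at run time instead of hard-coding it in the application
az keyvault secret show \
    --vault-name my-secrets-vault \
    --name StorageConnectionString \
    --query value --output tsv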

Best Practice 1

Scenario

You’re building a photo-sharing application. Every day, thousands of users take pictures and rely on your application to keep them safe and make them accessible across all their devices. Storing these photos is critical to your business, and you would like to ensure that the system used in your application is fast, reliable, and secure. Ideally, this would be done without you having to build all these aspects into the app. [Source]

  1. Create a Storage
  2. Create an Application
  3. Configure Application
1. Create a Storage

--kind [BlobStorage|Storage|StorageV2]

--sku [Premium_LRS|Standard_GRS|Standard_RAGRS|Standard_ZRS]

--access-tier [Cool|Hot]

# Create an Azure Storage
az storage account create \
        --resource-group learn-242f907f-37b3-454d-a023-dae97958e5d9 \
        --kind StorageV2 \
        --sku Standard_LRS \
        --access-tier Cool \
        --name parisalsnstorage

# Get the connection string of the storage account
az storage account show-connection-string \
    --resource-group learn-242f907f-37b3-454d-a023-dae97958e5d9 \
    --name parisalsnstorage \
    --query connectionString
2. Create an Application
# Create a .NET Core application
# (the project can be created in a specific folder with -o / --output <folder-name>)
dotnet new console --name PhotoSharingApp

# Change to project folder
cd PhotoSharingApp

# Run the project
dotnet run

# Create an appsettings.json file; the storage connection string is kept here.
# This is the simplest version
touch appsettings.json
3. Configure Application
# Add Azure Storage NuGet Package
dotnet add package WindowsAzure.Storage

# Run to test the project
dotnet run

# Edit the appsettings.json
code .

After the appsettings.json file is opened in the editor, change the content as follows:

{
  "StorageAccountConnectionString": "The Storage Connection String must be placed here"
}

The next file is PhotoSharingApp.csproj. It has to be changed as follows:

<Project Sdk="Microsoft.NET.Sdk">
   ...
    <PropertyGroup>
      <OutputType>Exe</OutputType>
      <LangVersion>7.1</LangVersion>
      <TargetFramework>netcoreapp2.2</TargetFramework>
    </PropertyGroup>
...
    <ItemGroup>
        <None Update="appsettings.json">
          <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
        </None>
    </ItemGroup>
    ...
</Project>

The last file is the Program.cs file:

using System;
using Microsoft.Extensions.Configuration;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using System.Threading.Tasks;

namespace PhotoSharingApp
{
    class Program
    {
        static async Task Main(string[] args)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile("appsettings.json");

            var configuration = builder.Build();
            var connectionString = configuration["StorageAccountConnectionString"];

            // Simplest way to initialize the object model, via either .TryParse or .Parse
            if (!CloudStorageAccount.TryParse(connectionString, out CloudStorageAccount storageAccount))
            {
                Console.WriteLine("Unable to parse connection string");
                return;
            }

            var blobClient = storageAccount.CreateCloudBlobClient();
            var blobContainer = blobClient.GetContainerReference("photoblobs");
            bool created = await blobContainer.CreateIfNotExistsAsync();

            Console.WriteLine(created ? "Created the Blob container" : "Blob container already exists.");
        }
    }
}

Best Practice 2

Best Practice n

I’m working on the content… it will be published soon 🙂

AWS : Serverless

Topics

Related topics

Lambda

Create a simple Lambda function

The function looks like this after creation (screenshot).

To call the function

  1. First select a test event and configure the test values
  2. Click the Test button
Select a test event (screenshot)
Configure the test values (screenshot)
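
The same steps can be reproduced with the AWS CLI. A sketch (function name, role ARN, zip file, and payload are example values; --cli-binary-format is needed on AWS CLI v2):

# Create the function from a zip package
aws lambda create-function \
    --function-name my-simple-function \
    --runtime python3.9 \
    --role arn:aws:iam::123456789012:role/lambda-execution-role \
    --handler lambda_function.lambda_handler \
    --zip-file fileb://function.zip

# Invoke it with a test event and print the response
aws lambda invoke \
    --function-name my-simple-function \
    --cli-binary-format raw-in-base64-out \
    --payload '{"key1": "value1"}' \
    response.json
cat response.json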

For monitoring the lambda refer to AWS : Monitor, React, and Recover document.

This is a list of available serverless services on Amazon cloud.

[Source]

AWS Lambda is one of the serverless options. We can bring our code and have it run in response to an event.

The handler is the primary entry point, and the rest of the code can follow OOP standards.

Lambda function and object oriented design

This is the usual layering of a Lambda function. Business logic must not be developed in the handler.

Business logic belongs in a controller class.

The services layer is for interfacing with external services.

Each time a Lambda function runs in a new execution context, there’s a bootstrap component to the startup time: the runtime itself needs to be bootstrapped, and all of the function’s dependencies have to be loaded from S3 and initialized. To optimize cold starts for your function, consider reducing the overall size of your libraries.

That means removing any unnecessary dependencies from your dependency graph, as well as being conscious of the startup time of the libraries you’re using. Certain libraries, like Spring in the Java context, take a long time to initialize, and that time impacts the cold start times for your function. Also, if you’re using any compiled libraries within your function, consider statically linking them instead of using dynamic link libraries.

Managing the Developer Workflow

AWS : DynamoDB

Amazon DynamoDB is a fast NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed database and supports both document and key-value data models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, IoT, and many other applications. [Source]

Create DynamoDB

Create a Global table

To create a global table, DynamoDB Streams must be enabled on the table.

Then use the Global Tables tab and Add region.

After adding a new item to the “Oregon” table, the values are replicated to the “California” table as well.
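
A sketch with the AWS CLI, using the legacy (2017) global tables API; the table name is an example, and a table with the same name, schema, and streams enabled must already exist in each listed region:

# Enable streams on the table (required for global tables)
aws dynamodb update-table \
    --table-name Music \
    --region us-west-2 \
    --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES

# Create the global table across Oregon (us-west-2) and N. California (us-west-1)
aws dynamodb create-global-table \
    --global-table-name Music \
    --replication-group RegionName=us-west-2 RegionName=us-west-1 \
    --region us-west-2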

Resources

Onboarding: Resilient and scalable application

Key components for scalable and resilient applications

  • Application Gateway
  • Azure Load balancer
  • Availability Set
    • logical grouping for isolating VM resources from each other (run across multiple physical servers, racks, storage units, and network switches)
    • For building reliable cloud solutions
  • Availability Zone
    • Groups of data centers that have independent power, cooling, and networking
    • VMs in an availability zone are placed in a different physical location within the same region
    • Not all VM sizes are supported
    • It isn’t available in all regions
Availability sets in Azure (diagram) [Source]
Availability zones in Azure (diagram) [Source]
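
A sketch with the Azure CLI showing both options (resource group, names, and image are example values; a VM can join an availability set or a zone, not both):

# Availability set: VMs are spread across fault and update domains
az vm availability-set create \
    --resource-group myResourceGroup \
    --name myAvailabilitySet \
    --platform-fault-domain-count 2 \
    --platform-update-domain-count 5

az vm create \
    --resource-group myResourceGroup \
    --name myVM1 \
    --image UbuntuLTS \
    --availability-set myAvailabilitySet \
    --generate-ssh-keys

# Availability zone: the VM is pinned to a physical zone within the region
az vm create \
    --resource-group myResourceGroup \
    --name myZonalVM \
    --image UbuntuLTS \
    --zone 1 \
    --generate-ssh-keys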
  • Traffic Manager: provides DNS load balancing to your application, so you improve your ability to distribute your application around the world. Use Traffic Manager to improve the performance and availability of your application.

Application Gateway vs. Traffic Manager: Traffic Manager only directs clients to the IP address of the service they want to reach and never sees the traffic itself, whereas Application Gateway sits in the traffic path and sees the traffic.

Load balancing the web service with the application gateway

Improve application resilience by distributing the load across multiple servers and using path-based routing to direct web traffic.

  • Application Gateway operates at Layer 7 (the application layer)

Scenario: you work for the motor vehicle department of a governmental organization. The department runs several public websites that enable drivers to register their vehicles and renew their driver’s licenses online. The vehicle registration website has been running on a single server and has suffered multiple outages because of server failures.

Application Gateway features

  • Application delivery controller
  • Load balancing HTTP traffic
  • Web Application Firewall
  • SSL support
  • Encrypt end-to-end traffic with TLS

Microsoft Learn offers many different learning materials: one learning module covers the Application Gateway theory, another is the practical part, and a third covers Application Gateway and encryption.

Source code

Link to a sample code
– Terraform implementation of Azure Application Gateway
– Terraform implementation of Azure Application Gateway’s backend pool with VMs
– Terraform implementation of Azure Application Gateway’s HTTPS with Key Vault as certificate store

Load balancing with Azure Load Balancer

  • Azure Load Balancer makes applications resilient against failure and easy to scale
  • Azure Load Balancer works at Layer 4 (the transport layer)
  • the LB automatically spreads requests across multiple VMs and services (users are still served even when a VM fails)
  • the LB provides high availability
  • the LB uses a hash-based distribution algorithm (5-tuple)
  • the 5-tuple hash maps traffic to available services (source IP, source port, destination IP, destination port, protocol type)
  • supports inbound and outbound scenarios
  • low latency, high throughput; scales up to millions of flows for all TCP and UDP applications
  • isn’t a physical instance, but only an object for configuring the infrastructure
  • for high availability, combine the LB with availability sets (protection against hardware failure) and availability zones (protection against data center failure)

Scenario: You work for a healthcare organization that’s launching a new portal application in which patients can schedule appointments. The application has a patient portal and web application front end and a business-tier database. The database is used by the front end to retrieve and save patient information.
The new portal needs to be available around the clock to handle failures. The portal must adjust to fluctuations in load by adding and removing resources to match the load. The organization needs a solution that distributes work to virtual machines across the system as virtual machines are added. The solution should detect failures and reroute jobs to virtual machines as needed. Improved resiliency and scalability help ensure that patients can schedule appointments from any location [Source].

Source code

Link to a sample code to deploy simple Nginx web servers with Availability Set and Public Load Balancer.

Load Balancer SKU
  • Basic Load Balancer
    • Port forwarding
    • Automatic reconfiguration
    • Health Probe
    • Outbound connections through source network address translation (SNAT)
    • Diagnostics through Azure log analytics for public-facing load balancers
    • Can be used only with availability set
  • Standard Load Balancer
    • Supports all the basic LB features
    • Https health probe
    • Availability zone
    • Diagnostics through Azure monitor, for multidimensional metrics
    • High availability (HA) ports
    • outbound rules
    • guaranteed SLA (99.99% for two or more VMs)
Load Balancer Types

Internal LB

  • distributes the load from internal Azure resources to other Azure resources
  • no traffic from the internet is allowed

External/Public LB

  • distributes client traffic across multiple VMs
  • permits traffic from the internet (browser, mobile app, other resources)
  • public LB maps the public IP and port of incoming traffic to the private IP address and port number of the VM in the back-end pool.
  • Distribute traffic by applying the load-balancing rule
Distribution modes
  • by default, the LB distributes traffic equally among VMs
  • distribution modes allow for different behavior
  • when you create the load balancer endpoint, you must specify the distribution mode in the load balancer rule
  • prerequisites for a load balancer rule (see the sketch after this list)
    • must have at least one backend
    • must have at least one health probe
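
A sketch of both prerequisites with the Azure CLI, assuming an existing load balancer named myLoadBalancer with a backend pool myBackendPool and frontend LoadBalancerFrontEnd (all names are examples):

# Health probe: checks each backend VM on port 80
az network lb probe create \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myHealthProbe \
    --protocol tcp \
    --port 80

# Load-balancing rule tying the frontend, the backend pool, and the probe together
az network lb rule create \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myHTTPRule \
    --protocol tcp \
    --frontend-port 80 \
    --backend-port 80 \
    --frontend-ip-name LoadBalancerFrontEnd \
    --backend-pool-name myBackendPool \
    --probe-name myHealthProbe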

Five tuple hash

  • the default distribution mode of the LB
  • As the source port is included in the hash and can be changed for each session, the client might be directed to a different VM for each session.

Source IP affinity / Session affinity / Client IP affinity

  • this distribution mode is also known as session affinity or client IP affinity (see the sketch after this list)
  • to map traffic to a server, a 2-tuple hash (source IP, destination IP) or a 3-tuple hash (source IP, destination IP, protocol) is used
  • the hash ensures that requests from a specific client are always sent to the same VM
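
Session affinity is selected per rule through the load-distribution setting. A sketch that switches the rule above from the default 5-tuple hash (same example names as before):

# SourceIP = 2-tuple affinity; SourceIPProtocol = 3-tuple affinity
az network lb rule update \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myHTTPRule \
    --load-distribution SourceIP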

Scenario: Remote Desktop Protocol is incompatible with 5-tuple hash

Scenario: For uploading media files, this distribution mode must be used, because the same TCP session is used to monitor progress while a separate UDP session uploads the file.

Scenario: The requirement of the presentation tier is to use in-memory sessions to store the logged user’s profile as the user interacts with the portal. In this scenario, the load balancer must provide source IP affinity to maintain a user’s session. The profile is stored only on the virtual machine that the client first connects to because that IP address is directed to the same server.

Enhance service availability and data locality with Traffic Manager

Scenario: You work for a company that provides a global music streaming web application. You want your customers, wherever they are in the world, to experience near-zero downtime. The application needs to be responsive. You know that poor performance might drive your customers to your competitors. You’d also like to have customized experiences for customers who are in specific regions for user interface, legal, and operational reasons.
Your customers require 24×7 availability of your company’s streaming music application. Cloud services in one region might become unavailable because of technical issues, such as planned maintenance or scheduled security updates. In these scenarios, your company wants to have a failover endpoint so your customers can continue to access its services.

  • Traffic Manager is a DNS-based traffic load balancer
  • Traffic Manager distributes traffic to different regions for high availability, resilience, and responsiveness
  • it resolves the DNS name of the service to an IP address (directing the client to a service endpoint based on the rules of the traffic-routing method)
  • it is not a proxy or a gateway
  • it doesn’t see the traffic that a client sends to a server
  • it only gives the client the IP address of where they need to go
  • it’s created as a Global resource only; a location cannot be specified
Traffic Manager Profile’s routing methods
  • each profile has only one routing method
Weighted routing
  • distributes traffic across a set of endpoints, either evenly or based on different weights (see the sketch after this list)
  • weights range from 1 to 1000
  • for each DNS query received, Traffic Manager randomly chooses an available endpoint
  • the probability of choosing an endpoint is based on the weights assigned to all endpoints
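
A sketch of a weighted profile with the Azure CLI (profile name, DNS label, and endpoint targets are example values; the DNS name must be globally unique):

# Profile that answers DNS queries using the Weighted routing method
az network traffic-manager profile create \
    --resource-group myResourceGroup \
    --name myTrafficManagerProfile \
    --routing-method Weighted \
    --unique-dns-name my-streaming-app-demo

# Two external endpoints; about two thirds of queries resolve to the first one
az network traffic-manager endpoint create \
    --resource-group myResourceGroup \
    --profile-name myTrafficManagerProfile \
    --name eastus-endpoint \
    --type externalEndpoints \
    --target eastus-app.example.com \
    --weight 200

az network traffic-manager endpoint create \
    --resource-group myResourceGroup \
    --profile-name myTrafficManagerProfile \
    --name westus-endpoint \
    --type externalEndpoints \
    --target westus-app.example.com \
    --weight 100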
Performance routing
  • with endpoints in different geographic locations, the user is sent to the endpoint with the best performance
  • it uses an internet latency table, which actively tracks network latencies to the endpoints
Performance routing (diagram): a client connects to Traffic Manager, and traffic is routed based on the relative performance of three endpoints.
Geographic routing
  • based on where the DNS query originates, the user is sent to the specific endpoint for that region
  • it’s good for geo-fencing content, e.g. for countries with specific terms and conditions, or for regional compliance
Geographic routing (diagram): a client connects to Traffic Manager, and traffic is routed based on the geographic location of four endpoints.
Multivalue routing
  • returns multiple healthy endpoints in a single DNS query
  • the caller can make client-side retries if an endpoint is unresponsive
  • this can increase the availability of a service and reduce the latency associated with a new DNS query
Subnet routing
  • maps sets of user IP addresses to specific endpoints, e.g. for testing an app before release (internal testing) or for blocking users from specific ISPs
Priority routing
  • the Traffic Manager profile contains a prioritized list of service endpoints; traffic fails over to the next endpoint when the primary is unavailable
Priority routing (diagram): a client connects to Traffic Manager, and traffic is routed based on the priority given to three endpoints.
Traffic Manager Profile’s endpoints
  • an endpoint is the destination location that is returned to the client
  • Types are
    • Azure endpoints: for services hosted in Azure
      • Azure App Service
      • public IP resources that are associated with load balancers or VMs
    • External endpoints
      • for IPv4/IPv6 addresses
      • FQDNs
      • services hosted outside Azure, either on-premises or at another cloud provider
    • Nested endpoints: are used to combine Traffic Manager profiles to create more flexible traffic-routing schemes to support the needs of larger, more complex deployments.
Endpoints Types/Targets
  • Each Traffic Manager profile can have several endpoints of different types

Source code

Link to a sample code to deploy a Traffic Manager.

Source: https://docs.microsoft.com/en-us/learn/modules/distribute-load-with-traffic-manager/


Resources

Azure Service Fabric

Related words for the Service Fabric in Microservice Architecture:

  • Docker
  • DC/OS
  • Mesos
  • Kubernetes

Microservices Development possibilities on Azure Cloud:

  • Azure function
  • Kubernetes Service
  • Service Fabric
Azure Function
  • They are micro-microservices.
  • A function reacts to an external change or event, e.g. a blob created or a message arriving on a queue or Service Bus queue.
  • They can be called as REST services from another application.
  • Azure Functions & serverless computing are a great choice for applications that respond to events.
  • A good alternative to ASF (Azure Service Fabric) and AKS (Azure Kubernetes Service).
  • Doesn’t need infrastructure at all.
  • Suitable for background tasks with some upfront design.

Kubernetes Service
  • Microsoft’s implementation of the open-source container orchestrator, based on Docker containers.
  • A container is faster => a lighter virtual machine.
  • Run Docker containers on Azure to manage the environment (upgrade, scale, versioning, network exposure, load balancing, and more).
  • Docker is a technology to manage and run multiple containers in production.
  • Azure supports Kubernetes natively; no installation needed.
  • Similar to other containerization technologies.
  • Focuses on microservices.
  • Kubernetes solves the orchestration problems so the developer should only develop the services -> it’s not easy to write a scalable application that runs on multiple, distributed clusters.
  • Container orchestration addresses the microservice challenges:
    • Service communication => services and instances of the services.
    • Service discovery => how to talk to another microservice when there are thousands of instances.
    • Monitoring the application => telemetry and log collection, provisioning and upgrading microservices.
    • Testing locally.
    • Managing and recovering from downtime.
    • Scaling in & out.

Azure Service Fabric
  • For building a fully fledged microservices solution.
  • It focuses on business objectives, not infrastructure.
  • For an easily scalable architecture.

Programming models of Azure Service Fabric:

Reliable Services: They are like Windows services or Linux daemon applications; a reliable service is like a console application.
These services divide into two subtypes:
– Stateless services
– Stateful services -> for co-locating compute and data.
Reliable Actors: uses the Virtual Actor design pattern and is built on the stateful Reliable Services framework.
For massive amounts of requests.
Guest executable: for existing projects, without changing too much.
Containers: like a guest executable; still runs on the host OS but is a completely isolated piece of deployment.

Advantages of stateful services:

  • Reduced latency
  • Resiliency, by replicating & persisting data across several different nodes.

The entry point from the outside to the back-end application is as follows:

Web API
– has no state (stateless service)
– must scale
Microservice
– an independent part of the business logic, perfect for the Actor model.

Advantages of stateless services:

  • They are application proxies and gateways
  • They are easy and cheaper to scale

Start to work with the Azure Service Fabric:

  • Installing Service Fabric is necessary for local development
  • Start Visual Studio with “Run as administrator” because of the Service Fabric cluster: it runs under a low-privileged user called “Network Service”, which has no privileges on the local system, so an admin user is needed.

Normal App vs. Reliable App

Normal App
– an application
– easy to write (established framework)
– great choice of libraries
– no learning curve

Reliable App
– a reliable service
– easy to write (established framework)
– a reliable service is like an exe file that can be run without Service Fabric
– great choice of libraries (x64 only)
– none-to-little learning curve
– access to the ASF API for microservice scaling, health reporting, and discovering other services
– pluggable communication models (built into Azure Service Fabric) via HTTPS, TCP, WebSocket, custom TCP
– access to reliable storage: stateful, low-latency, high-speed local storage, replicated across machines

References

Lambda Architecture in Google & Azure Cloud

Lambda Architecture Definition

Lambda Architecture is a data-processing architecture designed to handle massive quantities of data by taking advantage of both batch and stream-processing methods, to design robust, scalable, and fault-tolerant (against both human and machine errors) big data systems.

The Lambda Architecture also tries to balance latency and accuracy.

Lambda Architecture Layers
Master Layer (batch layer)
Serving Layer
Speed Layer

Lambda Architecture Properties:

  • A paradigm for Big Data
  • balances throughput, latency, fault tolerance, and scalability in data processing
  • suited for the modern data warehouse

Applying the Lambda Architecture with Spark, Kafka, and Cassandra

The tooling is the following:

  • Spark Data Frame & Spark SQL in addition to Spark’s Data Source API to load, store and manipulate data.
  • Spark Streaming & Spark-Kafka Integration techniques -> for reliability and speed
  • Develop a Kafka Data Producer -> to simulate the real-time data stream feed into streaming application.
  • Stateful Spark Streaming Application -> to preserve global state and use memory efficiently with approximate algorithms.
  • Errors & code updates -> a production stateful Spark streaming application isn’t complete without the ability to handle errors and code updates.
  • Persist Data to Cassandra & HDFS -> for working with the scalable NoSQL database and persist the data to Cassandra and HDFS.

Lambda Architecture on Azure, Google and AWS


Related links

References:

How to build a Big Data Pipeline

Onboarding: Azure Serverless

Topics

  • Key concepts
  • Azure Function App
  • Azure LogicApp

Key concepts

  • serverless: a form of Platform as a Service (PaaS)
  • functionapp
  • logicapp

Azure Function App

  • function app runs based on triggers
  • function app can be triggered by
    • webhook
    • API
    • Timer
    • Data Processing
  • can have multiple triggers
  • it’s event-driven
  • project files are
    • host.json
    • local.settings.json
  • runs code on-demand without explicitly provisioning or managing the infrastructure
  • hosting plans are
    • consumption plan: Azure provisions all the necessary resources for running the function, and we pay only while the function is running
    • app service plan: just like a web app; we can use the same plan with no additional costs
  • runtime stack
    • node js
    • .net core
    • java
    • powershell
  • <function-app-name>.azurewebsites.net
  • for stateful functions
  • function needs trigger, integration, price plan
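
A sketch of creating a function app on the consumption plan with the Azure CLI (all names are example values; the app name becomes <function-app-name>.azurewebsites.net and must be globally unique):

# A function app needs a storage account behind it
az storage account create \
    --resource-group myResourceGroup \
    --name myfuncstorage123 \
    --sku Standard_LRS

# Create the function app on the consumption plan
az functionapp create \
    --resource-group myResourceGroup \
    --name my-function-app-demo \
    --storage-account myfuncstorage123 \
    --consumption-plan-location eastus \
    --runtime dotnet \
    --functions-version 3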

Integration

  • Azure Cosmos DB
  • Event Hub
  • Event Grid
  • Notification Hub
  • Service Bus (Queue & Topic)
  • Storage (Blob, Table)
  • On-prem (using Service Bus)
  • Twilio (SMS Message)

Event & triggers

  • Http
  • timer
  • cosmosdb
  • blob
  • queue
  • event grid
  • event hub (iot)
  • service bus queue
  • service bus topic
Consumption Plan (pay for what you use)
  • Scaling is integrated into the service
  • Pay per execution
  • Pay for CPU time & RAM
  • Timeout after 5 minutes, increasable to 10 minutes
  • 400,000 GB-s free grant per month

App Service Plan (predictable monthly cost)
  • Basic or higher tier
  • Scaling must be configured

Premium Plan
  • Improved performance
  • VNet support

Azure LogicApp

  • can have only one trigger
  • it’s event-driven

SQL

