Onboarding : Serverless and message passing architecture

The onboarding posts go through the important points that an Azure Cloud Solution Architect has to know to get started with certificate preparation. The preparation for the certificate has the following steps:

  • Get familiar with Azure services
  • Know the keywords
  • Get familiar with the key concept of each Azure service
  • Know how to use them in practice
  • Know the Cloud Architecture Design Patterns

Topics

Keywords

  • Serverless
  • Azure Function
  • Logic App
  • Azure queue technologies
    • Azure queue storage
    • Azure Service bus queue
    • Azure service bus topic
  • Azure event technologies
    • Event grid
    • Event hub
  • Notification Hub

Serverless logic with Azure function

Scenario: Imagine you work for an escalator company that has invested in IoT technology to monitor its product in the field. You oversee the processing of temperature sensor data from the drive gears of the escalators. You monitor the temperature data and add a data flag to indicate when the gears are too hot. In downstream systems, this data helps determine when maintenance is required.

Your company receives sensor data from several locations and from different escalator models. The data arrives in different formats, including batch file uploads, scheduled database pulls, messages on a queue, and incoming data from an event hub. You want to develop a reusable service that can process your temperature data from all these sources. [Source]

Like every function that we develop, an Azure Function has three components:

  • Input(s): configured via a JSON file, without writing code.
  • Logic: the part that you develop in the language you like.
  • Output(s): configured via a JSON file, without writing code.
  • Azure function
    • Can be considered as the Function as a Service (FaaS)
    • A function can be a microservice (but don't use Azure Functions for long-running workloads)
    • Auto scale infrastructure (scale out or down) based on load
    • Automatic provisioning by cloud provider
    • Use the language of your choice.
    • Fewer administrative tasks and more focus on business logic
    • Important characteristics of serverless solutions
      • Avoid over-allocation of infrastructure (you pay only when the function is running)
      • Stateless logic (as the work around the states can be stored in associated storage services)
      • Event driven (they run only in response to an event e.g. receive an HTTP request, message being added to a queue,… No need to develop a code for listening or watching the queue). Refer to Azure function triggers to see a list of supported services.
    • Drawbacks of serverless solutions
      • Execution time: a function has a default timeout of 5 minutes, configurable up to 10 minutes. With Azure Durable Functions we can work around the timeout problem.
      • Execution frequency: if the function is executed continuously, it may be cheaper to host this service on a VM; otherwise the serverless solution will get expensive.
    • Function App
      • It logically groups functions and resources together.
    • Service Plan (Azure Functions is a serverless service, but that doesn't mean there is no server on which the function has to be hosted and run. An Azure Function has a server where it's hosted and run, but the cloud provider provisions the resources for you)
    • Service Plan Types
      • Consumption Service Plan
        • Timeout 5-10 min
        • Automatic scaling
        • Bills you when function is running
      • App Service Plan (Not serverless anymore)
        • Avoid timeout periods + continuously run
    • An Azure Function also uses a storage account for logging function executions
    • Azure Functions can be tested as well; refer to the screenshot below. For automated tests, use the deployment slots and the deployment center. They are explained in the next sections.
    • Use the Monitor option in the screenshot below to check the executions.
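As a minimal sketch (resource and app names are placeholders, not from the original post), a Function App on the Consumption plan can be created with the Azure CLI; it needs a storage account, which it uses for state and logging:

# storage account used internally by the Function App
az storage account create --name mystorageacct --resource-group mygroup --location westeurope --sku Standard_LRS

# Function App on the serverless Consumption plan
az functionapp create \
    --name myfunctionapp \
    --resource-group mygroup \
    --storage-account mystorageacct \
    --consumption-plan-location westeurope \
    --runtime dotnet \
    --functions-version 4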

Azure function triggers

  • Blob storage
  • Microsoft Graph events
  • Azure Cosmos DB
  • Queue storage
  • Event Grid
  • Service Bus
  • HTTP
  • Timer

Azure function binding

An Azure Function connects to data and services through input and output bindings.

{
  "bindings": [
    {
      "name": "order",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "myqueue-items",
      "connection": "MY_STORAGE_ACCT_APP_SETTING"
    },
    {
      "name": "$return",
      "type": "table",
      "direction": "out",
      "tableName": "outTable",
      "connection": "MY_TABLE_STORAGE_ACCT_APP_SETTING"
    }
  ]
}
  • Binding is a declarative way to connect data to your function.
  • Bindings contain (Azure Doc)
    • input/s
    • output/s

Scenario: Let’s say we want to write a new row to Azure Table storage whenever a new message appears in Azure Queue storage. This scenario can be implemented using an Azure Queue storage trigger and an Azure Table storage output binding.

Each function contains

  • function.json -> bindings are configured here
  • run.csx -> logic is developed here

Source code

This sample code follows a microservice architecture with the "database per service" design pattern. When a new product is added, a message is pushed to the storage queue for each image of the product. Pushing the message(s) to the queue triggers the function, which gets the original image(s) from a storage container, generates the thumbnail, and saves it to another container.

When you create a Function App, some resources are created by default.

The following figures demonstrate testing an Azure Function.

Test the function
Check the expected result

Monitor the execution

Secure the Azure Function backend

  • Use an API key to block unknown callers.
  • To use this feature, the function's HTTP trigger authorization level must require a key (for example the Function or Admin level) instead of Anonymous.
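As a sketch (the gateway URL and key are placeholders), a caller then passes the function key either as a query string parameter or as a request header:

# pass the key as the code query string parameter
curl "https://myfunctionapp.azurewebsites.net/api/MyFunction?code=<function-key>"

# or pass the key in the x-functions-key header
curl -H "x-functions-key: <function-key>" https://myfunctionapp.azurewebsites.net/api/MyFunction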

Source

Route and Processing via Logic App

Scenario: For example, in the shoe-company scenario we want to monitor social media reaction to our new product. We’ll build a logic app to integrate Twitter, Azure Cognitive Services, SQL Server, and Outlook email.

  • Azure Logic Apps
    • Make diverse services work together
    • Provide pre-built components that can connect to hundreds of services
    • Steps of designing a logic app
      • Plan the business process (step based)
      • Identify the type of each step
    • Logic apps operations
      • Trigger -> responds to external events. Triggers launch the logic app.
      • Action -> process or store data
      • Control action -> make decision based on data

Example:

  1. detect tweets about the product -> Trigger
  2. analyze the sentiment -> Action
  3. If logic -> Control
  4. store a link to positive tweets -> Action
  5. email customer service for negative tweets -> Action

An external service must have a connector to be usable in a logic app.

The figure illustrates the Twitter connector. A group of related triggers and actions are packaged inside a connector.

Trigger types

Polling trigger: periodically checks an external service for new data, e.g. checks an RSS feed for new posts. For a polling trigger we have to set the frequency (second, minute, hour) and the interval, e.g. frequency = minute and interval = 5 means the polling trigger runs every 5 minutes.

Polling triggers force you to make a choice between how much they cost and how quickly they respond to new data. There is often a delay between when new data becomes available and when it is detected by the app. The following illustration shows the issue.

In the worst case, the potential delay for detecting new data is equal to the polling interval. So why not use a smaller interval? To check for new data, the Logic Apps execution engine needs to run your app, which means you incur a cost. In general, the shorter the interval, the higher the cost but the quicker you respond to new data. The best polling interval for your logic app depends on your business process and its tolerance for delay.

Polling triggers are perfect for the “route and process data” scenarios.

  • Push trigger
    • notifies immediately when data is available e.g. the trigger that detects when a message is added to an Azure Service Bus queue is a push trigger.
    • Push triggers are implemented using webhooks.
    • The Logic Apps infrastructure generates a callback URL for you and registers it with the external service when the app is first created and on each later update
    • Logic Apps de-registers the callback for you as needed e.g. if you disable or delete your app.
    • The nice thing about push triggers is that they don’t incur any costs polling for data when none is available
    • If push triggers respond more quickly and cost less than polling triggers, then why not use them all the time? The reason is that not every connector offers a push trigger.
    • Sometimes the trigger author chose not to implement push and sometimes the external service didn’t support push

Scenarios for the logic app

  • When a message is received in a Service Bus queue
  • When an HTTP request is received
  • When a new tweet is posted
  • When an Event Grid resource event occurs
  • Recurrence
  • When a new email is received in Outlook
  • When a new file is created in OneDrive
  • When a file is added to an FTP server

Source

Azure messaging model to loosely connect services

When a solution consists of several different services/programs, this solution is a distributed solution. In distributed solutions the components have to communicate with each other via messages.

Even on the same server or in the same data center, loosely coupled architectures require mechanisms for components to communicate. Reliable messaging is often a critical problem.

As the cloud solution architect you have to

  • understand each individual communication that the components of the application exchange
  • understand whether the communication sends a message or an event
  • then you can decide to choose an event-based or message-based architecture
  • Each communication can use different technologies

In both event-based and message-based communication, there's a sender and a receiver, but the difference is the content of what they send.

Message

  • Contains raw data
  • This data is produced by sender
  • This data is consumed by receiver
  • It contains data/payload itself not just the reference to that data
  • The sender expects the destination component to process this data in a certain way

E.g. a mobile app expects the web API to save the sent data to storage.

Available technologies

  • Azure Queue Storage
  • Azure Service Bus
    • Message Queue
    • Topics

Event

  • Lightweight notification that indicates something has happened
  • Doesn’t contain raw data
  • May reference where the data lives
  • Sender has no expectations of receiver

E.g. the Web API informs the web app or mobile app about a new file.

Available technologies

  • Azure Event Grid
  • Azure Event Hubs
Azure queue technologies

The section explains more about Azure Queue Storage, Azure Service Bus Queue, and Azure Service Bus Topic and when which technology can be used in the solution.

  • Azure queue storage
    • This service is integrated in Azure storage account
    • Can contain millions of messages
    • The queue size is limited only by the capacity of the storage account
  • Azure service bus queue
    • It’s a message broker system intended for enterprise applications
    • For higher security requirements
    • have different data contracts
    • utilize multiple communication protocols
    • include both cloud and on-prem services

In the message queues of Azure Queue Storage and Azure Service Bus, each queue has a sender and a subscriber. The subscriber takes the message and processes it as the sender expects.

Both of these services are based on the idea of a “queue” which holds sent messages until the target is ready to receive them.
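As a small sketch with placeholder names, a queue in Azure Queue Storage can be created and a message enqueued from the Azure CLI:

# create a queue inside an existing storage account
az storage queue create --name myqueue-items --account-name mystorageacct

# enqueue a message; a receiver will later dequeue and process it
az storage message put --queue-name myqueue-items --content "order-4711" --account-name mystorageacct

# peek at the messages without removing them from the queue
az storage message peek --queue-name myqueue-items --account-name mystorageacct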

  • Azure service bus topics
    • They're like queues
    • Can have multiple subscribers
    • Each subscriber receives its own copy of the message
    • Topics use queues
    • When you post to a topic, the message is copied and dropped into the queue of each subscription.
    • The queue means that the message copy will stay around to be processed by each subscription branch even if the component processing that subscription is too busy to keep up.
  • Benefits of queues
    • Increased reliability
      • For exchanging messages (at times of high demand, messages can simply wait until a destination component is ready to process them)
    • Message delivery guarantees
      • There are different message delivery guarantees
        • At-Least-Once delivery
          •  each message is guaranteed to be delivered to at least one of the components that retrieve messages from the queue
          • Example: in certain circumstances, it is possible that the same message may be delivered more than once. For example, if there are two instances of a web app retrieving messages from a queue, ordinarily each message goes to only one of those instances. However, if one instance takes a long time to process the message, and a time-out expires, the message may be sent to the other instance as well. Your web app code should be designed with this possibility in mind.
        • At-Most-Once delivery
          • each message is not guaranteed to be delivered, and there is a very small chance that it may not arrive.
          • unlike At-Least-Once delivery, there is no chance that the message will be delivered twice. This is sometimes referred to as “automatic duplicate detection”.
        • First-In-First-Out (FIFO) delivery
          • If your distributed application requires that messages are processed in precisely the correct order, you must choose a queue system that includes a FIFO guarantee.
    • Transactional support
      • It's useful, for example, in e-commerce systems. When the buy button is clicked, a series of messages is sent off to different destinations, e.g. an order details system, a payment details system, and an invoicing system. If the delivery of the credit card details message fails, then the order details message fails with it.

How to decide for a queue technique

Queue Storage

  • Need audit trail of all messages
  • Queue may exceed 80 GB
  • Track processing progress inside queue
  • It’s for simple solutions

Service bus queue

  • Need At-Most-Once delivery
  • Need FIFO guarantee
  • Need group messages into transactions
  • Want to receive messages without polling queue
  • Need Role-based access model to the queue
  • Need to handle messages larger than 64 KB but less than 256 KB
  • Queue size will not grow larger than 80 GB
  • Need batches of messages

Service bus topic

  • If you need multiple receivers to handle each message
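To illustrate the Service Bus entities mentioned above (a sketch; namespace and entity names are placeholders), a namespace, a queue, a topic, and a subscription can be created with the Azure CLI:

az servicebus namespace create --resource-group mygroup --name my-sb-namespace --location westeurope --sku Standard

# a queue: one sender, one receiver per message
az servicebus queue create --resource-group mygroup --namespace-name my-sb-namespace --name orders

# a topic with a subscription: every subscription receives its own copy of each message
az servicebus topic create --resource-group mygroup --namespace-name my-sb-namespace --name product-updates
az servicebus topic subscription create --resource-group mygroup --namespace-name my-sb-namespace --topic-name product-updates --name mobile-apps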
Azure event technologies

Scenario: Suppose you have a music-sharing application with a Web API that runs in Azure. When a user uploads a new song, you need to notify all the mobile apps installed on user devices around the world that are interested in that genre [Source]. Event Grid is the perfect solution for this scenario.

  • Many applications use the publish-subscribe model to notify distributed components that something happened.

Event grid

  • It’s a one-to-many relationship
  • Fully-managed event routing service running on top of Azure Service Fabric.
  • Event Grid distributes events from different sources,
  • to different handlers,
  • to build event-based and serverless applications
  • supports most Azure services as a publisher or subscriber
  • can be used with third-party services
  • provides a dynamically scalable, low-cost, messaging system that allows publishers to notify subscribers about a status change
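A sketch of wiring up an Event Grid subscription with the Azure CLI (the storage account and endpoint are placeholders):

# resource id of the event source (here: a storage account)
SOURCE_ID=$(az storage account show --name mystorageacct --resource-group mygroup --query id --output tsv)

# route its events to an HTTPS endpoint, e.g. a webhook or an Azure Function
az eventgrid event-subscription create \
    --name new-blob-handler \
    --source-resource-id $SOURCE_ID \
    --endpoint https://myhandler.example.com/api/updates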

Event hub

Source

Notification Hub

It's a multi-platform, scalable push engine to quickly send millions of messages to applications running on various types of registered devices.

PNS: Platform Notification System

Used to push notifications to multiple platforms


Success is achieved by perseverance and motivation.

Parisa Moosavinezhad


Develop frontend for backend via Vue

Let’s have fun with developing a sample together

Related topics

Scenario: When the caller and the called application are not in the same origin, the CORS policy doesn't allow the called application / backend to respond to the caller application.

It's strongly recommended to specify the allowed origins in your backend. In the following video I explain how we can do it.

Solve CORS policy
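If the backend runs on Azure App Service or Azure Functions, the allowed origins can also be configured at the platform level (a sketch with placeholder names; the video above shows the code-level approach):

# allow the Vue dev server origin to call an App Service backend
az webapp cors add --name mybackend --resource-group mygroup --allowed-origins http://localhost:8080

# the equivalent for a Function App backend
az functionapp cors add --name myfunctionapp --resource-group mygroup --allowed-origins http://localhost:8080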

After developing the website, use the Security Header to test the security of your website.


You owe your dreams your courage.

Koleka Putuma


Develop containerized microservices in VS

Let’s have fun with developing a sample together

Related topics

Containerize project

It's really simple to containerize your projects, especially when you have an API project.
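A minimal sketch of the usual loop (image name, tag, and ports are placeholders, assuming the project already contains a Dockerfile):

# build the image from the Dockerfile in the current directory
docker build -t ecommerceapiproducts:v1 .

# run it locally and map container port 80 to host port 8080
docker run -d -p 8080:80 --name products-api ecommerceapiproducts:v1

# inspect the logs of the running container
docker logs products-api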

Change controller route in containerized microservices project


You owe your dreams your courage.

Koleka Putuma


Develop Azure Function App in VS

Let’s have fun with developing a sample together

Related topics

Scenario: Assume you developed an online shop. When a new product is added to the shop, a message is sent to a storage queue. You want to develop an Azure Function that's triggered when a message is pushed to the storage queue and creates a thumbnail image for the newly added product from the product's image.

coming soon ooo

About Azure Function Console in Visual Studio

When you develop an Azure Function App and you want to test and run it on your local machine, a console always opens as follows. You can follow the progress of your function via this console if you use the ILogger interface.

# For example
log.LogInformation($"C# Queue trigger function processed:{queueMessage.ImageName}");

But this console stays open after you stop debugging. Therefore go to Tools > Options and select the option to close the console automatically when debugging stops.


Listen to your inspirations, they know more about the future than you.

Parisa Moosavinezhad


Onboarding : Azure Access Management

Topics

  • Key concept
  • RBAC
    • When to elevate access

Key concept

  • Role-based access control (RBAC)

RBAC

  • To grant access to a subscription, identify the appropriate role to assign to an employee

Scenario: Requirement of the presentation tier is to use in-memory sessions to store the logged user’s profile as the user interacts with the portal. In this scenario, the load balancer must provide source IP affinity to maintain a user’s session. The profile is stored only on the virtual machine that the client first connects to because that IP address is directed to the same server.

Azure RBAC roles vs. Azure AD Roles

  • RBAC roles apply to Azure resources; AD roles apply to Azure AD resources (particularly users, groups, and domains).
  • RBAC role scope covers management groups, subscriptions, resource groups, and resources; AD roles have only one scope, the directory.
  • An Azure AD Global Administrator can elevate their access to manage all Azure subscriptions and management groups; this greater access grants them the Azure RBAC User Access Administrator role for all subscriptions of their directory.
  • Through the User Access Administrator role, the Global Administrator can give other users access to Azure resources.
When to elevate access
  • By default, a Global Administrator doesn’t have access to Azure resources
  • The Global Administrator for Azure Active Directory (Azure AD) can temporarily elevate their permissions to the Azure role-based access control (RBAC) role of User Access Administrator, which is assigned at the root scope (this action grants the Azure RBAC permissions that are needed to manage Azure resources)
  • Global administrator (AD role) + User Access Administrator (RBAC role) -> can view all resources in, and assign access to, any subscription or management group in that Azure AD organization

As Global Administrator, you might need to elevate your permissions to:

  • Regain lost access to an Azure subscription or management group.
  • Grant another user or yourself access to an Azure subscription or management group.
  • View all Azure subscriptions or management groups in an organization.
  • Grant an automation app access to all Azure subscriptions or management groups.

Assign a user administrative access to an Azure subscription

To assign a user administrative access to a subscription, you must have Microsoft.Authorization/roleAssignments/write and Microsoft.Authorization/roleAssignments/delete permissions at the subscription scope. Users with the subscription Owner or User Access Administrator role have these permissions.

# Assign the role by using Azure PowerShell
New-AzRoleAssignment `
    -SignInName rbacuser@example.com `
    -RoleDefinitionName "Owner" `
    -Scope "/subscriptions/<subscriptionID>"

# Assign the role by using the Azure CLI
az role assignment create \
    --assignee rbacuser@example.com \
    --role "Owner" \
    --subscription <subscription_name_or_id>

Get Access to an Azure subscription

  1. Elevate access
  2. Verify access
  3. Assign a user as an administrator of a subscription
  4. Revoke your elevated access
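A sketch of these steps with the Azure CLI (the elevate call uses the documented REST endpoint; the user name is a placeholder):

# 1. elevate access (assigns the User Access Administrator role at root scope "/")
az rest --method post --url "/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01"

# 2. verify the assignment at root scope
az role assignment list --role "User Access Administrator" --scope "/"

# 3. assign a user as administrator of a subscription (see the role assignment commands above)

# 4. revoke the elevated access again
az role assignment delete --assignee rbacuser@example.com --role "User Access Administrator" --scope "/"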

After revoking the elevated access, the role assignments on the subscription are as follows

Source


You owe your dreams your courage.

Koleka Putuma


Onboarding : Modern Applications

Topics

  • Key concepts
  • Using Azure container for containerized web application
    • Azure Container Registry
      • Azure Container Registry Tasks
    • Azure Container Instance (ACI)
      • Create an Azure Container Instance (ACI)
      • ACI restart-policies
      • ACI check log, state, events
      • ACI set environment variables
      • ACI data volumes
  • Azure Kubernetes Service
  • Using Azure App Service
  • background task in an App Service Web App with WebJobs

Related topics

Key concepts

  • Docker : Docker is a technology that enables you to deploy applications and services quickly and easily.
  • Docker app : A Docker app runs using a Docker image
  • Docker image : A Docker image is a prepackaged environment containing the application code and the environment in which the code executes [more].
  • Container
  • Docker registries / Docker Hub: a repository of Docker images, https://hub.docker.com/
  • Azure container registry
  • Containerized web application : A web app packaged so that it can be deployed as a Docker image and run from an Azure Container Instance
  • Azure Container instance

Using Azure container for containerized web application

  • Rapid deployment is key to business agility
  • Containerization saves time and reduces costs.
  • Multiple apps can run in their isolated containers on the same hardware.

Scenario: Suppose you work for an online clothing retailer that is planning the development of a handful of internal apps but hasn't yet decided how to host them. You're looking for maximum compatibility, and the apps may be hosted on-premises, in Azure, or with another cloud provider. Some of the apps might share IaaS infrastructure. In these cases, the company requires the apps to be isolated from each other. Apps can share the hardware resources, but an app shouldn't be able to interfere with the files, memory space, or other resources used by other apps. The company values the efficiency of its resources and wants something with a compelling app development story. Docker seems an ideal solution to these requirements. With Docker, you can quickly build and deploy an app and run it in its tailored environment, either locally or in the cloud.

To build a customized Docker image for your application, refer to the Docker, container, Kubernetes post. In this post we focus on working with Azure Container Registry.

  • Azure Container Instance loads and runs Docker images on demand.
  • The Azure Container Instance service can retrieve the image from a registry such as Docker Hub or Azure Container Registry.
Azure Container Registry
  • It has a unique URL
  • These registries are private
  • Need authentication to push/pull images
  • Pull and push only with the Docker CLI or the Azure CLI
  • Has a replication feature in the Premium SKU (geo-replicated images)
    • The Standard SKU doesn't support replication
    • After changing the SKU to Premium, geo-replication can be used
#-----------------------------------------------------------
# Deploy a Docker image to an Azure Container Instance
#-----------------------------------------------------------

az login

az account list

az account set --subscription="subscription-id"

az account list-locations --output table

az group create --name mygroup --location westeurope

# Different SKUs provide varying levels of scalability and storage.
az acr create --name parisaregistry --resource-group mygroup --sku [standard|Premium] --admin-enabled true
# output -> "loginServer": "parisaregistry.azurecr.io"

# for a username and password.
az acr credential show --name parisaregistry

# specify the URL of the login server for the registry.
docker login parisaregistry.azurecr.io --password ":)" --username ":O" # or use --password-stdin

# you must create an alias for the image that specifies the repository and tag to be created in the Docker registry
# The repository name must be of the form <login_server>/<image_name>:<tag>.
docker tag myapp:v1 myregistry.azurecr.io/myapp:v1 # myregistry.azurecr.io/myapp:v1 is the alias for myapp:v1

# Upload the image to the registry in Azure Container Registry.
docker push myregistry.azurecr.io/myapp:v1


# Verify that the image has been uploaded
az acr repository list --name myregistry


az acr repository show --repository myapp --name myregistry
Azure Container Registry Tasks [Source]
# Dockerfile with azure container registry tasks

FROM    node:9-alpine
ADD     https://raw.githubusercontent.com/Azure-Samples/acr-build-helloworld-node/master/package.json /
ADD     https://raw.githubusercontent.com/Azure-Samples/acr-build-helloworld-node/master/server.js /
RUN     npm install
EXPOSE  80
CMD     ["node", "server.js"]

After creating the Dockerfile, run the following commands

az acr build --registry $ACR_NAME --image helloacrtasks:v1 .

# Verify the image
az acr repository list --name $ACR_NAME --output table

# Enable the registry admin account
az acr update -n $ACR_NAME --admin-enabled true

az acr credential show --name $ACR_NAME

# Deploy a container with Azure CLI
az container create \
    --resource-group learn-deploy-acr-rg \
    --name acr-tasks \
    --image $ACR_NAME.azurecr.io/helloacrtasks:v1 \
    --registry-login-server $ACR_NAME.azurecr.io \
    --ip-address Public \
    --location <location> \
    --registry-username [username] \
    --registry-password [password]

az container show --resource-group  learn-deploy-acr-rg --name acr-tasks --query ipAddress.ip --output table

# place a container registry in each region where images are run
# This strategy will allow for network-close operations, enabling fast, reliable image layer transfers.
# Geo-replication enables an Azure container registry to function as a single registry, serving several regions with multi-master regional registries.
# A geo-replicated registry provides the following benefits:
#       Single registry/image/tag names can be used across multiple regions
#       Network-close registry access from regional deployments
#       No additional egress fees, as images are pulled from a local, replicated registry in the same region as your container host
#       Single management of a registry across multiple regions

az acr replication create --registry $ACR_NAME --location japaneast

az acr replication list --registry $ACR_NAME --output table

Azure Container Registry doesn't support unauthenticated access and requires authentication for all operations. Registries support two types of identities:

  • Azure Active Directory identities, including both user and service principals. Access to a registry with an Azure Active Directory identity is role-based, and identities can be assigned one of three roles: reader (pull access only), contributor (push and pull access), or owner (pull, push, and assign roles to other users).
  • The admin account included with each registry. The admin account is disabled by default.

The admin account provides a quick option to try a new registry. You enable the account and use its username and password in workflows and apps that need access. Once you’ve confirmed the registry works as expected, you should disable the admin account and use Azure Active Directory identities exclusively to ensure the security of your registry.

Azure Container Instance (ACI)
  • Azure Container Instance service can load an image from Azure Container Registry and run it in Azure
  • instance will have an ip address to be accessible
  • dns name can be used for a friendly label
  • image url can be azure container registry or docker hub
  • runs a container in Azure without managing virtual machines and without a higher-level service
  • Fast startup: Launch containers in seconds.
  • Per second billing: Incur costs only while the container is running.
  • Hypervisor-level security: Isolate your application as completely as it would be in a VM.
  • Custom sizes: Specify exact values for CPU cores and memory.
  • Persistent storage: Mount Azure Files shares directly to a container to retrieve and persist state.
  • Linux and Windows: Schedule both Windows and Linux containers using the same API.
  • The ease and speed of deploying containers in Azure Container Instances makes it a great fit for executing run-once tasks like image rendering or building and testing applications.
  • provide a DNS name to expose your container to the Internet (dns must be unique)

For scenarios where you need full container orchestration, including service discovery across multiple containers, automatic scaling, and coordinated application upgrades, we recommend Azure Kubernetes Service (AKS).

Create an Azure Container Instance (ACI)
#--------------------------------------------------------------
# Using Azure Container Instance to run a docker image
#--------------------------------------------------------------

# use to generate random dns name
DNS_NAME_LABEL=aci-demo-$RANDOM

# use these image for quick start/ or demo
--image microsoft/aci-helloworld         # -> basic Node.js web application on docker hub
--image microsoft/aci-wordcount:latest   # -> This container runs a Python script that analyzes the text of Shakespeare's Hamlet, writes the 10 most common words to standard output, and then exits

# create a container instance and start the image running
# for a user friendly url -> --dns-name-label mydnsname
az container create --resource-group mygroup --name ecommerceapiproducts --image parisaregistry.azurecr.io/ecommerceapiproducts:latest --os-type Windows --dns-name-label ecommerceapiproducts  --registry-username ":)" --registry-password ":O"

az container create --resource-group mygroup --name myproducts1 --image parisaregistry.azurecr.io/ecommerceapiproducts:latest --os-type Windows  --registry-login-server parisaregistry.azurecr.io --registry-username ":)" --registry-password ":O" --dns-name-label myproducts --ports 9000 --environment-variables 'PORT'='9000'

ACI restart-policies

Azure Container Instances has three restart-policy options [Source]:

  • Always: Containers in the container group are always restarted. This policy makes sense for long-running tasks such as a web server. This is the default setting applied when no restart policy is specified at container creation.
  • Never: Containers in the container group are never restarted. The containers run one time only.
  • OnFailure: Containers in the container group are restarted only when the process executed in the container fails (when it terminates with a nonzero exit code). The containers are run at least once. This policy works well for containers that run short-lived tasks.

Azure Container Instances starts the container and then stops it when its process (a script, in this case) exits. When Azure Container Instances stops a container whose restart policy is Never or OnFailure, the container’s status is set to Terminated.

az container create \
  --resource-group learn-deploy-aci-rg \
  --name mycontainer-restart-demo \
  --image microsoft/aci-wordcount:latest \
  --restart-policy OnFailure \
  --location eastus

az container show \
  --resource-group learn-deploy-aci-rg \
  --name mycontainer-restart-demo \
  --query containers[0].instanceView.currentState.state

az container logs \
  --resource-group learn-deploy-aci-rg \
  --name mycontainer-restart-demo
ACI check log, state, events
az container delete --resource-group mygroup --name myproducts1

az container logs --resource-group mygroup --name myproducts1

az container attach --resource-group mygroup --name myproducts1

# find the fully qualified domain name of the instance by querying the IP address of the instance or Azure UI > Azure Container Instance > Overview: FQDN
az container show --resource-group mygroup --name myproducts --query ipAddress.fqdn

# another variant 
--query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" \
--out table

# get the status of the container
--query containers[0].instanceView.currentState.state

# Execute a command in your container
az container exec \
  --resource-group learn-deploy-aci-rg \
  --name mycontainer \
  --exec-command /bin/sh

# Monitor CPU and memory usage on your container
CONTAINER_ID=$(az container show \
  --resource-group learn-deploy-aci-rg \
  --name mycontainer \
  --query id \
  --output tsv)

az monitor metrics list \
  --resource $CONTAINER_ID \
  --metric CPUUsage \
  --output table

az monitor metrics list \
  --resource $CONTAINER_ID \
  --metric MemoryUsage \
  --output table

ACI set environment variables [Source]
# create an Azure Cosmos DB name
COSMOS_DB_NAME=aci-cosmos-db-$RANDOM

# create the cosmos db, get endpoint and masterkey
COSMOS_DB_ENDPOINT=$(az cosmosdb create \
  --resource-group learn-deploy-aci-rg \
  --name $COSMOS_DB_NAME \
  --query documentEndpoint \
  --output tsv)

COSMOS_DB_MASTERKEY=$(az cosmosdb keys list \
  --resource-group learn-deploy-aci-rg \
  --name $COSMOS_DB_NAME \
  --query primaryMasterKey \
  --output tsv)

# create a container and set environments variables
az container create \
  --resource-group learn-deploy-aci-rg \
  --name aci-demo \
  --image microsoft/azure-vote-front:cosmosdb \
  --ip-address Public \
  --location eastus \
  --environment-variables \
    COSMOS_DB_ENDPOINT=$COSMOS_DB_ENDPOINT \
    COSMOS_DB_MASTERKEY=$COSMOS_DB_MASTERKEY

# get environment variables (by default are plaintext)
az container show \
  --resource-group learn-deploy-aci-rg \
  --name aci-demo \
  --query containers[0].environmentVariables

# secure environment variables to hide
az container create \
  --resource-group learn-deploy-aci-rg \
  --name aci-demo-secure \
  --image microsoft/azure-vote-front:cosmosdb \
  --ip-address Public \
  --location eastus \
  --secure-environment-variables \
    COSMOS_DB_ENDPOINT=$COSMOS_DB_ENDPOINT \
    COSMOS_DB_MASTERKEY=$COSMOS_DB_MASTERKEY
ACI data volumes [Source]
  • By default, Azure Container Instances are stateless.
  • If the container crashes or stops, all of its state is lost.
  • To persist state beyond the lifetime of the container, you must mount a volume from an external store.
  • mount an Azure file share to an Azure container instance so you can store data and access it later
STORAGE_ACCOUNT_NAME=mystorageaccount$RANDOM

az storage account create \
  --resource-group learn-deploy-aci-rg \
  --name $STORAGE_ACCOUNT_NAME \
  --sku Standard_LRS \
  --location eastus

# AZURE_STORAGE_CONNECTION_STRING is a special environment variable that's understood by the Azure CLI. 
# The export part makes this variable accessible to other CLI commands you'll run shortly.
export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string \
  --resource-group learn-deploy-aci-rg \
  --name $STORAGE_ACCOUNT_NAME \
  --output tsv)

# create a file share
az storage share create --name aci-share-demo

# To mount an Azure file share as a volume in Azure Container Instances, you need these three values:
# The storage account name
# The share name
# The storage account access key
STORAGE_KEY=$(az storage account keys list \
  --resource-group learn-deploy-aci-rg \
  --account-name $STORAGE_ACCOUNT_NAME \
  --query "[0].value" \
  --output tsv)

# check the value
echo $STORAGE_KEY

# Deploy a container and mount the file share (mount /aci/logs/ to your file share)
az container create \
  --resource-group learn-deploy-aci-rg \
  --name aci-demo-files \
  --image microsoft/aci-hellofiles \
  --location eastus \
  --ports 80 \
  --ip-address Public \
  --azure-file-volume-account-name $STORAGE_ACCOUNT_NAME \
  --azure-file-volume-account-key $STORAGE_KEY \
  --azure-file-volume-share-name aci-share-demo \
  --azure-file-volume-mount-path /aci/logs/

# check the storage
az storage file list -s aci-share-demo -o table

az storage file download -s aci-share-demo -p <filename>

Source

Azure Kubernetes Service

  • The task of automating, managing, and interacting with a large number of containers is known as orchestration.
  • Azure Kubernetes Service (AKS) is a complete orchestration service for containers with distributed architectures with multiple containers.
  • You can move existing applications to containers and run them within AKS.
  • You can control access via integration with Azure Active Directory (Azure AD) and access Service Level Agreement (SLA)–backed Azure services, such as Azure Database for MySQL for any data needs, via Open Service Broker for Azure (OSBA).
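A minimal sketch of creating a cluster and connecting to it (names and node count are placeholders):

az aks create --resource-group mygroup --name mycluster --node-count 2 --generate-ssh-keys

# download the credentials so that kubectl talks to the new cluster
az aks get-credentials --resource-group mygroup --name mycluster

kubectl get nodes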

Source

Using Azure App Service

  • platform as a service (PaaS) 
  • Azure takes care of the infrastructure to run and scale your applications.
  • a prerequisite for pushing code to Azure App Service is a staging deployment slot (see the CLI sketch after this list)
    • easily add deployment slots to an App Service web app (for creating a staging)
    • swap the staging deployment slot with the production slot
  • Azure portal provides out-of-the-box continuous integration and deployment with Azure DevOps, GitHub, Bitbucket, FTP, or a local Git repository on your development machine
  • Mode: the Free pricing tier doesn't support deployment slots
Deployment slots are listed under the "Deployment slots" menu
  • Continuous integration/deployment support
    • Connect your web app with any of the above sources and App Service will do the rest for you by automatically syncing your code and any future changes on the code into the web app
    • with Azure DevOps, you can define your own build and release process that compiles your source code, runs the tests, builds a release, and finally deploys the release into your web app every time you commit the code
out-of-the-box continuous integration and deployment
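A short sketch of the slot workflow with the Azure CLI (app, group, and slot names are placeholders):

# add a staging slot to an existing App Service web app
az webapp deployment slot create --name mywebapp --resource-group mygroup --slot staging

# after deploying to and testing the staging slot, swap it with production
az webapp deployment slot swap --name mywebapp --resource-group mygroup --slot staging --target-slot production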

Automated deployment

Automated deployment, or continuous integration, is a process used to push out new features and bug fixes in a fast and repetitive pattern with minimal impact on end users.

Azure supports automated deployment directly from several sources. The following options are available:

  • Azure DevOps: You can push your code to Azure DevOps (previously known as Visual Studio Team Services), build your code in the cloud, run the tests, generate a release from the code, and finally, push your code to an Azure Web App.
  • GitHub: Azure supports automated deployment directly from GitHub. When you connect your GitHub repository to Azure for automated deployment, any changes you push to your production branch on GitHub will be automatically deployed for you.
  • Bitbucket: With its similarities to GitHub, you can configure an automated deployment with Bitbucket.
  • OneDrive: Microsoft’s cloud-based storage. You must have a Microsoft Account linked to a OneDrive account to deploy to Azure.
  • Dropbox: Azure supports deployment from Dropbox, which is a popular cloud-based storage system that is similar to OneDrive.

Manual deployment

There are a few options that you can use to manually push your code to Azure:

  • Git: App Service web apps feature a Git URL that you can add as a remote repository. Pushing to the remote repository will deploy your app.
  • az webapp up: webapp up is a feature of the az command-line interface that packages your app and deploys it. Unlike other deployment methods, az webapp up can create a new App Service web app for you if you haven't already created one.
  • Zipdeploy: Use az webapp deployment source config-zip to send a ZIP of your application files to App Service. Zipdeploy can also be accessed via basic HTTP utilities such as curl.
  • Visual Studio: Visual Studio features an App Service deployment wizard that can walk you through the deployment process.
  • FTP/S: FTP or FTPS is a traditional way of pushing your code to many hosting environments, including App Service.
# using SDK version 3.1.102. 
wget -q -O - https://dot.net/v1/dotnet-install.sh | bash -s -- --version 3.1.102
export PATH="~/.dotnet:$PATH"
echo "export PATH=~/.dotnet:\$PATH" >> ~/.bashrc

# create a new ASP.NET Core MVC application
dotnet new mvc --name BestBikeApp

# build and run the application to verify it is complete.
cd BestBikeApp
dotnet run
# output
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: https://localhost:5001
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /home/user/BestBikeApp

# Deploy with zipdeploy
dotnet publish -o pub
cd pub
zip -r site.zip *

# perform the deployment
az webapp deployment source config-zip \
    --src site.zip \
    --resource-group learn-6126217c-08a6-4509-a288-2941d4b96a27 \
    --name <your-unique-app-name>

Sources

background task in an App Service Web App with WebJobs

  • Automate a task for a Web App that should run in the background without affecting the performance of the Web App
  • small automated task, which executes automatically in response to some events

Scenario: Suppose you are a senior web developer in a research role for an online luxury watch dealer. You have a production website that uses Azure web apps. You’ve built a small script that checks stock levels and reports them to an external service. You consider this script to be part of the website, but it’s meant to run in the background, not in response to a user’s actions on the site.

You’d like the website and the script code to be closely associated. They should be stored together as part of the same project in the same source control repository. The script may grow and change as the site changes, so you’d like to always deploy them at the same time, to the same set of cloud resources.

  • WebJobs are a feature of Azure App Service
  • WebJobs can be used to run any script or console application that can be run on a Windows computer, with some functionality limitations
  • To run a WebJob, you’ll need an existing Azure App Service web app, web API, or mobile app
  • You can run multiple WebJobs in a single App Service plan along with multiple apps or APIs.
  • Your WebJobs can be written as scripts of several different kinds including Windows batch files, PowerShell scripts, or Bash shell scripts
  • You can upload such scripts and executables directly to the web app in the Azure portal
  • you can write WebJobs using a framework such as Python or Node.js
  • This approach enables you to use the WebJobs tools in Visual Studio to ease development.

Types of WebJob

  • Continuous: starts when it is deployed and continues to run in an endless loop. For a continuous WebJob, the code must be written as a loop (for example, to poll a message queue for new items and process their contents).
  • Triggered: only starts when scheduled or manually triggered. Use this kind of WebJob, for example, to create daily summaries of messages in a queue.

Webjob vs. Azure function (to know more about Azure Serverless Services/Architecture)

Source:


You owe your dreams your courage.

Koleka Putuma


Onboarding : Azure API Performance and secure backend

Topics

  • Key concepts
    • API Management Components
  • Improve performance by API Management caching
  • Configure caching policy in API Management
  • Caching possibilities
  • Authentication possibilities
  • Expose multiple Azure Function apps as a consistent API
  • Azure Front Door

Related topics

Key concepts

  • Azure API Management
  • API
  • API definition
  • API Gateway (APIM component): see the API Management Components section below
  • Cache
  • Policies
  • Redis cache
  • Front Door
API Management Components
API gateway

The API gateway is the endpoint that:

  • Accepts API calls and routes them to the backend.
  • Verifies API keys, JWT tokens, certificates, and other credentials.
  • Enforces usage quotas and rate limits.
  • Transforms your API on the fly without code modifications.
  • Caches backend responses where set up.
  • Logs call metadata for analytics purposes.
Azure portal

The Azure portal is the administrative interface where you set up your API program. You can also use it to:

  • Define or import API schema.
  • Package APIs into products.
  • Set up policies such as quotas or transformations on the APIs.
  • Get insights from analytics.
  • Manage users.
Developer portal

The Developer portal serves as the main web presence for developers. From here they can:

  • Read API documentation.
  • Try out an API via the interactive console.
  • Create an account and subscribe to get API keys.
  • Access analytics on their own usage.

Source: https://docs.microsoft.com/en-us/learn/modules/control-authentication-with-apim/1a-understand-apim
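To have an instance to experiment with, an API Management service can be created from the CLI (a sketch; the name, publisher details, and the Consumption SKU are placeholder choices):

az apim create \
    --name my-apim-instance \
    --resource-group mygroup \
    --publisher-name "Contoso" \
    --publisher-email admin@contoso.com \
    --sku-name Consumption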

Improve performance by API Management caching

Scenario: Suppose you are a developer for a board game company. A product line produced by your company has recently become popular. The volume of requests from your retail partners to your inventory API is growing quickly: much faster than the rate that your inventory actually changes. You’d like your API to respond to requests rapidly without incurring load on your API. You use Azure API Management to host your API. You’re considering using an API Management policy to cache compiled responses to requests. 

  • API Management is for changing the behavior of the API without changing the code
    • policies for setting limits
    • for changing the response format
    • for checking mandatory headers
    • for authenticating callers / enforcing security requirements
    • for certificate verification
    • policies have an XML format
    • policy sections
      • inbound
      • backend
      • outbound
      • on-error
    • policy scopes
      • global
      • api
      • operation
      • product
  • It exposes the APIs of a company to the API customers
  • It is used as an API inventory
<policies>
    <inbound>
        <base />
        <!-- <base /> means the policy of the inherited (higher-level) scope is applied first -->
        <check-header name="Authorization" failed-check-httpcode="401" failed-check-error-message="Not authorized" ignore-case="false">
        </check-header>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <json-to-xml apply="always" consider-accept-header="false" parse-date="false" />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
  • policies for
    • restricting access e.g. Check Http Header, Limit call rate by subscription, Limit call rate by key, Restrict caller Ips, Policies for Authentication, Cross domain policies, Transformation policies
  • Cross domain policies
    • Cross domain requests are considered a security threat and denied by browsers and APIs
    • Cross-Origin Resource Sharing (CORS), use the CORS policy
    • Some AJAX code, which runs on the browser, uses JSON with padding to make cross-domain calls securely. Use the JSONP policy to permit clients to use this technique
  • Caching policies
    • better performance for caching the compiled responses
  • Advanced policies
    • apply a policy only when the response passes a specific test, use the Control flow policy
    • Use the Forward request policy to forward a request to a backend server
    • To control what happens when an action fails, use the Retry policy
    • The Send one-way request policy can send a request to a URL without waiting for a response
    • If you want to store a value for use in a later calculation or test, use the Set variable policy to persist a value in a named variable

Source : https://docs.microsoft.com/en-us/learn/modules/improve-api-performance-with-apim-caching-policy/1-introduction

Configure caching policy in API Management

  • using a cache of compiled responses
<policies>
    <inbound>
        <base />
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" downstream-caching-type="none" must-revalidate="true" caching-type="internal" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <cache-store duration="60" />
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
  • store individual values in the cache, instead of a complete response
  • with an identifying key
  • Retrieve the value from the cache by using the cache-lookup-value policy
  • want to remove a value before it expires, use the cache-remove-value policy
<policies>
    <inbound>
        <cache-lookup-value key="12345"
            default-value="$0.00"
            variable-name="boardPrice"
            caching-type="internal" />
        <base />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <cache-store-value key="12345"
            value="$3.60"
            duration="3600"
            caching-type="internal" />
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
  • We can use vary-by tags/attributes in the cache-lookup policy.
    • vary-by-query-parameter (tag): if all users have to see the same price/result for a specific product, then we set vary-by-query-parameter to partnumber. APIM groups the cached responses based on partnumber.
    • vary-by-developer (attribute): because vary-by-developer="false", APIM understands that different subscription keys don't alter the response. If this attribute is true, APIM serves a response from the cache only if it was originally requested with the same subscription key.
    • If a header can make a significant difference to a response, use the <vary-by-header> tag
<policies>
    <inbound>
        <base />
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" downstream-caching-type="none" must-revalidate="true" caching-type="internal">
            <vary-by-query-parameter>partnumber</vary-by-query-parameter>
        </cache-lookup>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <cache-store duration="60" />
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

Source: https://docs.microsoft.com/en-us/learn/modules/improve-api-performance-with-apim-caching-policy/4-configure-a-caching-policy

Caching possibilities

  • Internal Cache -> API Management
  • External Cache -> Azure Cache for Redis service

Why using external cache

  • you want to avoid the cache being cleared when the API Management service is updated.
  • you want to have greater control over the cache configuration than the internal cache allows
  • You want to cache more data than can be stored in the internal cache.
  • If you use APIM with the Consumption pricing tier, then you have to use an external cache, because this pricing tier follows the serverless design principle (it is meant for serverless web APIs) and has no internal cache.

Example:

# Create a Redis cache
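# A minimal sketch with placeholder names: create an Azure Cache for Redis instance
# that can then be registered as the external cache of the APIM instance.
az redis create --name my-apim-cache --resource-group mygroup --location westeurope --sku Basic --vm-size c0

# host name and access key, which APIM needs to connect to the cache
az redis show --name my-apim-cache --resource-group mygroup --query hostName
az redis list-keys --name my-apim-cache --resource-group mygroup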

Source

Authentication possibilities

  • OAuth 2.0
  • API keys / subscriptions (query string / header parameter)
    • The default header name is Ocp-Apim-Subscription-Key, and the default query string is subscription-key.
  • client certificate

Scenario: Suppose you work for a meteorological company, which has an API that customers use to access weather data for forecasts and research. There is proprietary information in this data, and you would like to ensure that only paying customers have access. You want to use Azure API Management to properly secure this API from unauthorized use.

Scenario: Businesses are extending their operations as a digital platform by creating new channels, finding new customers, and driving deeper engagement with existing ones. APIM provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. You can use APIM to take any backend and launch a full-fledged API program based on it.

Use Subscription key to secure access to an API

  • The Azure API Management service helps to expose the APIs
  • Developers must subscribe to the API / product (these are two different scopes)
    • used to secure the api / product with a subscription key / API key
    • preventing denial of service attacks (DoS) by using throttling
    • or using advanced security policies like JSON Web Token (JWT) validation
  • Enabling independent software vendor (ISV) partner ecosystems by offering fast partner onboarding through the developer portal
  • We can define who can access the API through the API gateway (by issuing subscription keys, only customers who have subscribed to your service can access the API and use your forecast data)
# how you can pass a key in the request header using curl
curl --header "Ocp-Apim-Subscription-Key: <key string>" https://<apim gateway>.azure-api.net/api/path

# example curl command that passes a key in the URL as a query string
curl https://<apim gateway>.azure-api.net/api/path?subscription-key=<key string>

# If the key is not passed in the header, or as a query string in the URL, you'll get a 401 Access Denied response from the API gateway.

# call without subscription key
curl -X GET https://[Name Of Gateway].azure-api.net/api/Weather/53/-1
# output
{ "statusCode": 401, "message": "Access denied due to missing subscription key. Make sure to include subscription key when making requests to an API." }

# call with subscription key as header
curl -X GET https://[Name Of Gateway].azure-api.net/api/Weather/53/-1 \
  -H 'Ocp-Apim-Subscription-Key: [Subscription Key]'

# output : {"mainOutlook":{"temperature":32,"humidity":34},"wind":{"speed":11,"direction":239.0},"date":"2019-05-16T00:00:00+00:00","latitude":53.0,"longitude":-1.0}

Use client certificates to secure access to an API

  • used to provide TLS mutual authentication between the client and the API gateway
  • allow only requests with certificates containing a specific thumbprint (through inbound policies)
  • With TLS client authentication, the API Management gateway can inspect the certificate contained within the client request for the following properties:
  • Certificate Authority (CA): only allow certificates signed by a particular CA
  • Thumbprint: allow certificates containing a specified thumbprint
  • Subject: only allow certificates with a specified subject
  • Expiration Date: only allow certificates that have not expired
  • two common ways to verify a certificate
    • Check who issued the certificate. If the issuer was a certificate authority that you trust, you can use the certificate. You can configure the trusted certificate authorities in the Azure portal to automate this process.
    • If the certificate is issued by the partner, verify that it came from them. For example, if they deliver the certificate in person, you can be sure of its authenticity. These are known as self-signed certificates.
  • APIM Consumption tier
    • this tier is for serverless APIs, e.g. Azure Functions
    • in this tier, client certificates must be explicitly enabled: APIM instance > Custom domains > Request client certificate: Yes
    • this step is not necessary in other tiers

Check the thumbprint of a client certificate in policies

# Every client certificate includes a thumbprint, which is a hash, calculated from other certificate properties

<choose>
    <when condition="@(context.Request.Certificate == null || context.Request.Certificate.Thumbprint != "desired-thumbprint")" >
        <return-response>
            <set-status code="403" reason="Invalid client certificate" />
        </return-response>
    </when>
</choose>

Check the thumbprint against certificates uploaded to API Management

In the previous example, only one thumbprint would work, so only one certificate would be validated. Usually, each customer or partner company would pass a different certificate with a different thumbprint. To support this scenario, obtain the certificates from your partners and use the Client certificates page in the Azure portal to upload them to the API Management resource. Then add this code to your policy:

<choose>
    <when condition="@(context.Request.Certificate == null || !context.Request.Certificate.Verify()  || !context.Deployment.Certificates.Any(c => c.Value.Thumbprint == context.Request.Certificate.Thumbprint))" >
        <return-response>
            <set-status code="403" reason="Invalid client certificate" />
        </return-response>
    </when>
</choose>

Check the issuer and subject of a client certificate

<choose>
    <when condition="@(context.Request.Certificate == null || context.Request.Certificate.Issuer != "trusted-issuer" || context.Request.Certificate.SubjectName.Name != "expected-subject-name")" >
        <return-response>
            <set-status code="403" reason="Invalid client certificate" />
        </return-response>
    </when>
</choose>
Create a self-signed certificate [Source] and use it in APIM
# create a private key and certificate
pwd='Pa$$w0rd'
pfxFilePath='selfsigncert.pfx'
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out selfsigncert.crt -subj /CN=localhost

# export the key and certificate to a password-protected PFX file, then convert it to PEM format
openssl pkcs12 -export -out $pfxFilePath -inkey privateKey.key -in selfsigncert.crt -password pass:$pwd
openssl pkcs12 -in selfsigncert.pfx -out selfsigncert.pem -nodes

# When you are prompted for a password, type Pa$$w0rd and then press Enter.

# Get the thumbprint for the certificate
Fingerprint="$(openssl x509 -in selfsigncert.pem -noout -fingerprint)"
Fingerprint="${Fingerprint//:}"
echo ${Fingerprint#*=}

# output is hexadecimal string without any accompanying text and no colons
  1. create the self-signed certificate
  2. apim > custom domains > Request client certificates: yes
  3. configure the inbound policy on any scope as follows
<inbound>
    <choose>
        <when condition="@(context.Request.Certificate == null || context.Request.Certificate.Thumbprint != "desired-thumbprint")" >
            <return-response>
                <set-status code="403" reason="Invalid client certificate" />
            </return-response>
        </when>
    </choose>
    <base />
</inbound>

4. call an operation

curl -X GET https://[api-gateway-name].azure-api.net/api/Weather/53/-1 \
  -H 'Ocp-Apim-Subscription-Key: [Subscription Key]'

# output: the call returns a 403 Client certificate error, and no data is returned.

then test this

curl -X GET https://[gateway-name].azure-api.net/api/Weather/53/-1 \
  -H 'Ocp-Apim-Subscription-Key: [subscription-key]' \
  --cert-type pem \
  --cert selfsigncert.pem

# output: {"mainOutlook":{"temperature":32,"humidity":34},"wind":{"speed":11,"direction":239.0},"date":"2019-05-16T00:00:00+00:00","latitude":53.0,"longitude":-1.0}

Source

Expose multiple Azure Function apps as a consistent API by using APIM

Combine multiple Azure Functions apps into a unified interface by importing them into a single Azure API Management instance.

Scenario: Suppose you work for an online store with a successful and busy web site. Your developers have written the business logic for the site as microservices in the form of Azure Functions. Now, you want to enable partners to interact with your online store from their own code by creating a web API that they can call over HTTP. You want to find an easy way to assemble your functions into a single API.

In your online store, you have implemented each part of the application as a microservice – one for the product details, one for order details, and so on. A separate team manages each microservice and each team uses continuous development and delivery to update and deploy their code on a regular basis. You want to find a way to assemble these microservices into a single product and then manage that product centrally.

  • use Azure Functions and Azure API Management to build complete APIs with a microservices architecture
  • Microservices have become a popular approach to the architecture of distributed applications
  • we can develop distributed systems with a serverless architecture, e.g. Azure Functions
  • serverless architecture uses stateless computing resources
  • azure function
    • enables serverless architecture
    • we can use NuGet or Node Package Manager (NPM) for development
    • authenticate users with OAuth
    • you can select a template for how you want to trigger your code
clone the functions project,
git clone https://github.com/MicrosoftDocs/mslearn-apim-and-functions.git ~/OnlineStoreFuncs

cd ~/OnlineStoreFuncs
bash setup.sh


After the Azure Functions are created, we can add them to APIM.
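One possible way to do this from the CLI is az apim api import, assuming your function app publishes an OpenAPI description; the resource names and the openapi.json URL below are placeholders and not part of the original module:

# placeholders: adjust resource group, APIM instance, and function app names
RG=<resource-group>
APIM=<apim-instance-name>
FUNCAPP=<function-app-name>

# import the OpenAPI definition published by the function app as an API in APIM
az apim api import \
  --resource-group $RG \
  --service-name $APIM \
  --api-id product-details \
  --path products \
  --display-name "Product Details" \
  --specification-format OpenApi \
  --specification-url https://$FUNCAPP.azurewebsites.net/api/openapi.json

The Learn module itself performs this step through the portal (APIM > APIs > Add API > Function App), so treat the commands above as a sketch rather than the exact procedure.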

Source : https://docs.microsoft.com/en-us/learn/modules/build-serverless-api-with-functions-api-management/

Azure Front Door

  • a secure entry point for delivering performant, hyperscale apps globally
  • application acceleration at Microsoft's edge network

Source: https://www.e-apostolidis.gr/microsoft/azure/deliver-your-app-at-global-scale-with-security-resiliency-with-azure-front-doo/


Let the light of your talent lighten your road to success.

Parisa Moosavinezhad


Onboarding : Azure Management Features

Topics

  • Key concepts
  • Azure scopes
  • Policies
  • Role-based access control (RBAC)

Key concepts

  • Azure AD Group
  • Policy
  • Role-based access control

Azure scopes

Azure provides four levels of management:

  • Level 1 : Management Groups
    • Level 2: Subscriptions
      • Level 3: Resource Groups
        • Level 4 : Resources

Note: lower levels inherit settings from higher levels.

Apply critical settings at higher levels and project-specific requirements at lower levels.

Policies

Policies are like guard rails: they keep the usage of Azure resources within a defined frame and help you meet your requirements.

Examples:

  • Allowed locations for resources.
  • Allowed locations for specific resource types.
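A minimal sketch of assigning the built-in "Allowed locations" policy at resource group scope (the resource group name and region list are placeholders; listOfAllowedLocations is the parameter name the built-in definition is expected to use, so verify it against your tenant):

RG=<resource-group>

# look up the built-in definition by display name instead of hard-coding its GUID
POLICY=$(az policy definition list \
  --query "[?displayName=='Allowed locations'].name" -o tsv)

az policy assignment create \
  --name allowed-locations-rg \
  --policy $POLICY \
  --resource-group $RG \
  --params '{ "listOfAllowedLocations": { "value": ["westeurope", "northeurope"] } }'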

Role-based access control (RBAC)

The user who manages role-based access control needs the following permission:

  • Microsoft.Authorization/roleAssignments/* (this permission is granted through the Owner or User Access Administrator role)


You owe your dreams your courage.

Koleka Putuma


Onboarding : Azure Compute

Topics

  • Keywords
  • Manage VM
  • Availability Set
  • Scale Set
  • Snapshot
  • Image
  • Deploy VM from VHD
    • Generalize a server
  • Azure Batch
  • Automate business processes

Related topics

Keywords

  • Virtual Machine (VM)
  • CLI
  • VM
  • Availability Set
  • Scale Set
  • Snapshot (from disk)
  • Image (from vm)
  • Azure Batch: Azure Batch is an Azure service that enables you to run large-scale parallel and high-performance computing (HPC) applications efficiently in the cloud.
  • High-performance computing (HPC)
  • MPI: Message Passing Interface
  • Workflow: Business processes modeled in software are often called workflows.
  • Design-first approach: include user interfaces in which you can draw out the workflow
  • Azure compute: is an on-demand computing service for running cloud-based applications
    • Virtual machines
    • Containers
    • Azure App Service
    • Serverless computing

Source

Manage VM

VM management roles (RBAC)
  • Virtual Machine Contributor
  • Network Contributor
  • Storage Account Contributor

Note: The roles have to be assigned to an Azure AD Group instead of a user
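A minimal sketch of that recommendation (placeholder names): create an Azure AD group for VM operators and assign the three roles listed above to the group at resource group scope.

RG=<resource-group>

# create the group; newer Azure CLI versions return "id", older ones "objectId"
GROUP_ID=$(az ad group create \
  --display-name "vm-operators" \
  --mail-nickname "vm-operators" \
  --query id -o tsv)

for role in "Virtual Machine Contributor" "Network Contributor" "Storage Account Contributor"
do
  az role assignment create \
    --assignee-object-id $GROUP_ID \
    --assignee-principal-type Group \
    --role "$role" \
    --resource-group $RG
done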

To manage VMs properly, different management options have to be used.

Available VM commands
az vm [subcommands]
Sub-command | Description
create | Create a new virtual machine
deallocate | Deallocate a virtual machine
delete | Delete a virtual machine
list | List the created virtual machines in your subscription
open-port | Open a specific network port for inbound traffic
restart | Restart a virtual machine
show | Get the details for a virtual machine
start | Start a stopped virtual machine
stop | Stop a running virtual machine
update | Update a property of a virtual machine
# Create a Linux virtual machine
az vm create \
  --resource-group [sandbox resource group name] \
  --location westus \
  --name SampleVM \
  --image UbuntuLTS \
  --admin-username azureuser \
  --generate-ssh-keys \
  --verbose
# --verbose : the Azure CLI tool waits while the VM is being created.
# --no-wait : alternatively, tell the Azure CLI tool to return immediately and have Azure continue creating the VM in the background.
  
# output
{
  "fqdns": "",
  "id": "/subscriptions/<subscription-id>/resourceGroups/Learn-2568d0d0-efe3-4d04-a08f-df7f009f822a/providers/Microsoft.Compute/virtualMachines/SampleVM",
  "location": "westus",
  "macAddress": "00-0D-3A-58-F8-45",
  "powerState": "VM running",
  "privateIpAddress": "10.0.0.4",
  "publicIpAddress": "40.83.165.85",
  "resourceGroup": "2568d0d0-efe3-4d04-a08f-df7f009f822a",
  "zones": ""
}

  # generate-ssh-keys flag: This parameter is used for Linux distributions and creates 
  # a pair of security keys so we can use the ssh tool to access the virtual machine remotely. 
  # The two files are placed into the .ssh folder on your machine and in the VM. If you already 
  # have an SSH key named id_rsa in the target folder, then it will be used rather than having a new key generated.

# Connecting to the VM with SSH
ssh azureuser@<public-ip-address>

# for exit
logout

# Listing images
az vm image list --output table

# Getting all images
az vm image list --sku WordPress --output table --all # It is helpful to filter the list with the --publisher, --sku or --offer options.

# Location-specific images
az vm image list --location eastus --output table


Pre-defined VM sizes

Azure defines a set of pre-defined VM sizes for Linux and Windows to choose from based on the expected usage.

Type | Sizes | Description
General purpose | Dsv3, Dv3, DSv2, Dv2, DS, D, Av2, A0-7 | Balanced CPU-to-memory. Ideal for dev/test and small to medium applications and data solutions.
Compute optimized | Fs, F | High CPU-to-memory. Good for medium-traffic applications, network appliances, and batch processes.
Memory optimized | Esv3, Ev3, M, GS, G, DSv2, DS, Dv2, D | High memory-to-core. Great for relational databases, medium to large caches, and in-memory analytics.
Storage optimized | Ls | High disk throughput and IO. Ideal for big data, SQL, and NoSQL databases.
GPU optimized | NV, NC | Specialized VMs targeted for heavy graphic rendering and video editing.
High performance | H, A8-11 | The most powerful CPU VMs with optional high-throughput network interfaces (RDMA).
# get a list of the available sizes
az vm list-sizes --location eastus --output table

# output
MaxDataDiskCount    MemoryInMb  Name                      NumberOfCores    OsDiskSizeInMb    ResourceDiskSizeInMb
------------------  ------------  ----------------------  ---------------  ----------------  ----------------------
                 2          2048  Standard_B1ms                         1           1047552                    4096
                 2          1024  Standard_B1s                          1           1047552                    2048
                 4          8192  Standard_B2ms                         2           1047552                   16384
                 4          4096  Standard_B2s                          2           1047552                    8192
                 8         16384  Standard_B4ms                         4           1047552                   32768
                16         32768  Standard_B8ms                         8           1047552                   65536
                 4          3584  Standard_DS1_v2 (default)             1           1047552                    7168
                 8          7168  Standard_DS2_v2                       2           1047552                   14336
                16         14336  Standard_DS3_v2                       4           1047552                   28672
                32         28672  Standard_DS4_v2                       8           1047552                   57344
                64         57344  Standard_DS5_v2                      16           1047552                  114688
        ....
                64       3891200  Standard_M128-32ms                  128           1047552                 4096000
                64       3891200  Standard_M128-64ms                  128           1047552                 4096000
                64       3891200  Standard_M128ms                     128           1047552                 4096000
                64       2048000  Standard_M128s                      128           1047552                 4096000
                64       1024000  Standard_M64                         64           1047552                 8192000
                64       1792000  Standard_M64m                        64           1047552                 8192000
                64       2048000  Standard_M128                       128           1047552                16384000
                64       3891200  Standard_M128m                      128           1047552                16384000

# Specify a size during VM creation
az vm create \
    --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
    --name SampleVM2 \
    --image UbuntuLTS \
    --admin-username azureuser \
    --generate-ssh-keys \
    --verbose \
    --size "Standard_DS5_v2"

# Get available VM Size
# Before a resize is requested, we must check to see if the desired size is available in the cluster our VM is part of.
az vm list-vm-resize-options \
    --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
    --name SampleVM \
    --output table

# Resize an existing VM 
az vm resize \
    --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
    --name SampleVM \
    --size Standard_D2s_v3

This will return a list of all the possible size configurations available in the resource group. If the size we want isn’t available in our cluster, but is available in the region, we can deallocate the VM. This command will stop the running VM and remove it from the current cluster without losing any resources. Then we can resize it, which will re-create the VM in a new cluster where the size configuration is available.
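A sketch of that flow, reusing SampleVM and the sandbox resource group from the commands above (the target size is only an example):

# stop and deallocate the VM so it can be re-created in a cluster that offers the new size
az vm deallocate \
    --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
    --name SampleVM

# resize while deallocated
az vm resize \
    --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
    --name SampleVM \
    --size Standard_DS3_v2

# start the VM again
az vm start \
    --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
    --name SampleVM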

# List VMs
az vm list

# Output types
az vm list --output table|json|jsonc|tsv

# Getting the IP address
az vm list-ip-addresses -n SampleVM -o table
# output
VirtualMachine    PublicIPAddresses    PrivateIPAddresses
----------------  -------------------  --------------------
SampleVM          168.61.54.62         10.0.0.4

# Getting VM details
az vm show --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 --name SampleVM
# we could change to a table format, but that omits almost all of the interesting data. Instead, we can turn to a built-in query language for JSON called JMESPath.
# https://jmespath.org/


# Adding filters to queries with JMESPath
{
  "people": [
    {
      "name": "Fred",
      "age": 28
    },
    {
      "name": "Barney",
      "age": 25
    },
    {
      "name": "Wilma",
      "age": 27
    }
  ]
}

# people is an array
people[1]
# output
{
    "name": "Barney",
    "age": 25
}


people[?age > '25'] 
# output
[
  {
    "name": "Fred",
    "age": 28
  },
  {
    "name": "Wilma",
    "age": 27
  }
]

people[?age > '25'].[name]
# output
[
  [
    "Fred"
  ],
  [
    "Wilma"
  ]
]

# Filtering our Azure CLI queries
az vm show \
    --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
    --name SampleVM \
    --query "osProfile.adminUsername"

az vm show \
    --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
    --name SampleVM \
    --query hardwareProfile.vmSize

az vm show \
    --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
    --name SampleVM \
    --query "networkProfile.networkInterfaces[].id"

az vm show \
    --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
    --name SampleVM \
    --query "networkProfile.networkInterfaces[].id" -o tsv

# Stopping a VM
az vm stop \
    --name SampleVM \
    --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844

# We can verify it has stopped by attempting to ping the public IP address, using ssh, or through the vm get-instance-view command.
az vm get-instance-view \
    --name SampleVM \
    --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
    --query "instanceView.statuses[?starts_with(code, 'PowerState/')].displayStatus" -o tsv

# Starting a VM    
az vm start \
    --name SampleVM \
    --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844

# Restarting a VM
az vm restart \
    --name SampleVM \
    --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
    --no-wait

# Install NGINX web server
# 1.
az vm list-ip-addresses --name SampleVM --output table

# 2.
ssh azureuser@<PublicIPAddress>

# 3.
sudo apt-get -y update && sudo apt-get -y install nginx

# 4.
exit

# Retrieve our default page
# Either
curl -m 10 <PublicIPAddress>
# Or
# in browser try the public ip address

# This command will fail because the Linux virtual machine doesn't expose
# port 80 (http) through the network security group that secures the network 
# connectivity to the virtual machine. We can change this with the Azure CLI command vm open-port.

# open port 80
az vm open-port \
    --port 80 \
    --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
    --name SampleVM

# output of curl command
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
    width: 35em;
    margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Source: https://docs.microsoft.com/en-us/learn/modules/manage-virtual-machines-with-azure-cli/

Availability Set

  • An availability set is a logical grouping of two or more VMs
  • keep your application available during planned or unplanned maintenance.
  • planned maintenance event is when the underlying Azure fabric that hosts VMs is updated by Microsoft.
    • to patch security vulnerabilities,
    • improve performance,
    • and add or update features
  • When the VM is part of an availability set, the Azure fabric updates are sequenced so not all of the associated VMs are rebooted at the same time.
  • VMs are put into different update domains.
  • Update domains indicate groups of VMs and underlying physical hardware that can be rebooted at the same time.
  • Update domains are a logical part of each data center and are implemented with software and logic.
  • Unplanned maintenance events involve a hardware failure in the data center,
    • such as a server power outage
    • or disk failure
  • VMs that are part of an availability set automatically switch to a working physical server so the VM continues to run.
  • The group of virtual machines that share common hardware are in the same fault domain.
  • A fault domain is essentially a rack of servers.
  • It provides the physical separation of your workload across different power, cooling, and network hardware that support the physical servers in the data center server racks. 
  • With an availability set, you get:
    • Up to three fault domains that each have a server rack with dedicated power and network resources
    • Five logical update domains which then can be increased to a maximum of 20
[Diagram: availability set update and fault domains duplicated across servers]
Your VMs are then sequentially placed across the fault and update domains. The following diagram shows an example where you have six VMs in two availability sets distributed across the two fault domains and five update domains.
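A minimal sketch (placeholder names) of creating an availability set with three fault domains and five update domains, and placing a VM into it at creation time:

RG=<resource-group>

# up to 3 fault domains, depending on the region
az vm availability-set create \
  --resource-group $RG \
  --name WebAvailabilitySet \
  --platform-fault-domain-count 3 \
  --platform-update-domain-count 5

# a VM can only join an availability set when it is created
az vm create \
  --resource-group $RG \
  --name WebVM1 \
  --image UbuntuLTS \
  --availability-set WebAvailabilitySet \
  --admin-username azureuser \
  --generate-ssh-keys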

Source

Scale Set

Scenario: Imagine that you work for a domestic shipping company. Your customers use one of the company’s websites to manage and check the status of their shipments. This website is deployed to virtual machines and hosted on-premises. You’ve noticed that increased usage on the site is straining the virtual machines’ resources. However, you can’t adjust to load fluctuations without manually intervening and creating or deallocating virtual machines.

  • Scale set is for scalable applications ( automatically adjust to changes in load while minimizing costs with virtual machine scale sets)
  • adjust your virtual machine resources to match demands
  • keep the virtual machine configuration consistent to ensure application stability
  • VMs in this type of scale set all have the same configuration and run the same applications
  • for scenarios that include compute workloads, big-data workloads, and container workloads
  • to deploy and manage many load-balanced, identical VMs
  • it scales up and down automatically
  • it can even resize the vm
  • A scale set uses a load balancer to distribute requests across the VM instances
  • It uses a health probe to determine the availability of each instance (The health probe pings the instance)
  • keep in mind that you’re limited to running 1,000 VMs on a single scale set
  • support both Linux and Windows VMs
  • are designed for cost-effectiveness
  • scaling options
    • horizontal: adding or removing several VMs, by using rules, The rules are based on metrics.
    • vertical: adding resources such as memory, CPU power, or disk space to VMs, i.e. increasing the size of the VMs in the scale set, by using rules.
  • How to scale
    • Scheduled scaling: You can proactively schedule the scale set to deploy one or N number of additional instances to accommodate a spike in traffic and then scale back down when the spike ends.
    • Autoscaling: If the workload is variable and can’t always be scheduled, you can use metric-based threshold scaling. Autoscaling horizontally scales out based on node usage. It then scales back in when the resources return to a baseline.
  • Reducing costs by using low-priority scale sets
    • allows you to use Azure compute resources at cost savings of up to 80 percent.
    • A low-priority scale set provisions VMs through this underused compute capability.
    • Keep in mind that these VMs are temporary. Availability depends on size, region, time of day, and so on. These VMs have no SLA.
    • When Azure needs the computing power again, you’ll receive a notification about the VM that will be removed from your scale set
    • you can use Azure Scheduled Events to react to the notification within the VM. 
  • In a low-priority scale set, you specify one of two kinds of removal:
    • Delete: The entire VM is removed, including all of the underlying disks.
    • Deallocate: The VM is stopped. The processing and memory resources are deallocated. Disks are left intact and data is kept. You’re charged for the disk space while the VM isn’t running.
  • if the workload increases in complexity rather than in volume, and this complexity demands more of your resources, you might prefer to scale vertically.
# create custom data to config scale set
code cloud-init.yaml

# custom data 
#cloud-config
package_upgrade: true
packages:
  - nginx
write_files:
  - owner: www-data:www-data
    path: /var/www/html/index.html
    content: |
        Hello world from Virtual Machine Scale Set !
runcmd:
  - service nginx restart

# create resource group
az group create \
  --location westus \
  --name scalesetrg

# create scale set
az vmss create \
  --resource-group scalesetrg \
  --name webServerScaleSet \
  --image UbuntuLTS \
  --upgrade-policy-mode automatic \
  --custom-data cloud-init.yaml \
  --admin-username azureuser \
  --generate-ssh-keys

# More about scaling : https://docs.microsoft.com/en-us/learn/modules/build-app-with-scale-sets/4-configure-virtual-machine-scale-set

By default, the new virtual machine scale set has two instances and a load balancer.

The custom-data flag specifies that the VM configuration should use the settings in the cloud-init.yaml file after the VM has been created. You can use a cloud-init file to install additional packages, configure security, and write to files when the machine is first installed.

Configure vm scale set

# add a health probe to the load balancer
az network lb probe create \
  --lb-name webServerScaleSetLB \
  --resource-group scalesetrg \
  --name webServerHealth \
  --port 80 \
  --protocol Http \
  --path /

The health probe pings the root of the website through port 80. If the website doesn't respond, the server is considered unavailable. The load balancer won't route traffic to the server.

# configure the load balancer to route HTTP traffic to the instances in the scale set
az network lb rule create \
  --resource-group scalesetrg \
  --name webServerLoadBalancerRuleWeb \
  --lb-name webServerScaleSetLB \
  --probe-name webServerHealth \
  --backend-pool-name webServerScaleSetLBBEPool \
  --backend-port 80 \
  --frontend-ip-name loadBalancerFrontEnd \
  --frontend-port 80 \
  --protocol tcp

# change the number of instances in a virtual machine scale set
az vmss scale \
    --name MyVMScaleSet \
    --resource-group MyResourceGroup \
    --new-capacity 6
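
For the metric-based autoscaling described above, a hedged sketch against the same scale set (the CPU thresholds are only illustrative):

# create an autoscale profile for the scale set
az monitor autoscale create \
  --resource-group scalesetrg \
  --resource webServerScaleSet \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name webServerAutoscale \
  --min-count 2 \
  --max-count 10 \
  --count 2

# scale out by one instance when average CPU exceeds 70% over 5 minutes
az monitor autoscale rule create \
  --resource-group scalesetrg \
  --autoscale-name webServerAutoscale \
  --condition "Percentage CPU > 70 avg 5m" \
  --scale out 1

# scale back in when average CPU drops below 30%
az monitor autoscale rule create \
  --resource-group scalesetrg \
  --autoscale-name webServerAutoscale \
  --condition "Percentage CPU < 30 avg 5m" \
  --scale in 1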



  • a mechanism that updates your application consistently, across all instances in the scale set
    • Azure custom script extension downloads and runs a script on an Azure VM. It can automate the same tasks on all the VMs in a scale set.
    • create a configuration file that defines the files to get and the commands to run. This file is in JSON format.
    • to know more about custom script refer to Onboarding : Azure Infrastructure deployment.
# custom script configuration that downloads an application from a repository in GitHub and installs it on a host instance by running a script named custom_application_v1.sh
# yourConfigV1.json 
{
  "fileUris": ["https://raw.githubusercontent.com/yourrepo/master/custom_application_v1.sh"],
  "commandToExecute": "./custom_application_v1.sh"
}


# To deploy this configuration on the scale set, you use a custom script extension
az vmss extension set \
  --publisher Microsoft.Azure.Extensions \
  --version 2.0 \
  --name CustomScript \
  --resource-group myResourceGroup \
  --vmss-name yourScaleSet \
  --settings @yourConfigV1.json

# view the current upgrade policy for the scale set
az vmss show \
    --name webServerScaleSet \
    --resource-group scalesetrg \
    --query upgradePolicy.mode

# apply the update script
az vmss extension set \
    --publisher Microsoft.Azure.Extensions \
    --version 2.0 \
    --name CustomScript \
    --vmss-name webServerScaleSet \
    --resource-group scalesetrg \
    --settings "{\"commandToExecute\": \"echo This is the updated app installed on the Virtual Machine Scale Set ! > /var/www/html/index.html\"}"

# retrieve the IP address
az network public-ip show \
    --name webServerScaleSetLBPublicIP \
    --resource-group scalesetrg \
    --output tsv \
    --query ipAddress

Source

Snapshot

Image

  • Managed disks support creating a managed custom image
  • We can create an image from a custom VHD in a storage account or directly from a generalized (sysprepped) VM
    • This process captures a single image
    • this image contains all managed disks associated with the VM, including both the OS and data disks.

Image vs. Snapshot

Image | Snapshot
With managed disks, you can take an image of a generalized VM that has been deallocated. | It's a copy of a disk at a specific point in time.
This image includes all managed disks attached to this VM. | It applies only to one disk.
This image can be used to create a VM. | A snapshot doesn't have awareness of any disk except the one it contains.

If a VM has only one OS disk, we can take a snapshot of the disk or an image of the VM, and create a new VM from either the snapshot or the image.
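A minimal CLI sketch of both options (placeholder names; the VM must be generalized and deallocated before az image create):

RG=<resource-group>

# snapshot: point-in-time copy of a single managed disk
OS_DISK_ID=$(az vm show --resource-group $RG --name MyVM \
  --query "storageProfile.osDisk.managedDisk.id" -o tsv)

az snapshot create \
  --resource-group $RG \
  --name MyVM-osdisk-snapshot \
  --source $OS_DISK_ID

# image: captures all managed disks of a generalized, deallocated VM
az image create \
  --resource-group $RG \
  --name MyVM-image \
  --source MyVM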

Deploy VM from VHD

  • a VM can have custom configuration, such as installed software -> we can create a new Virtual Hard Disk (VHD) from this VM.
  • VHD
    • is like physical hard disk
    • A VHD can also hold databases and other user-defined folders, files, and data
    • A virtual machine can contain multiple VHDs
    • Typically, a virtual machine has an operating system VHD on which the operating system is installed. 
    • It also has one or more data VHDs that contain the applications and other user-specific data used by the virtual machine.
  • VHD advantages
    • high availability
    • physical security
    • Durability
    • scalability
    • cost and performance
  • VM image
    • vm image is an original image without preconfigured items
    • VHD contains configurations
    • vm image and vhds can be created via Microsoft Hyper-V -> then upload to cloud
  • Generalized image
    • it's a customized VM image
    • from which some server-specific information must be removed to create a general image:
      • The host name of your virtual machine.
      • The username and credentials that you provided when you installed the operating system on the virtual machine.
      • Log files.
      • Security identifiers for various operating system services.
    • The process of resetting this data is called generalization, and the result is a generalized image.
    •  For Windows, use the Microsoft System Preparation (Sysprep) tool. For Linux, use the Windows Azure Linux Agent (waagent) tool.
  • specialized virtual image
    • use a specialized virtual image as a backup of your system at a particular point in time. If you need to recover after a catastrophic failure, or you need to roll back the virtual machine, you can restore your virtual machine from this image.
    • is snapshot of vm at a point in time
Generalize a server
  1. use a generalized image to build pre-configured virtual machines (VMs)
  2. To generalize a Windows VM, follow these steps:
    • Sign in to the Windows virtual machine.
    • Open a command prompt as an administrator.
    • Browse to the directory \windows\system32\sysprep.
    • Run sysprep.exe.
    • In the System Preparation Tool dialog box, select the following settings, and then select OK:
      Property | Value
      System Cleanup Action | Enter System Out-of-Box Experience (OOBE)
      Generalize | Select
      Shutdown Options | Shutdown

Running Sysprep is a destructive process, and you can’t easily reverse its effects. Back up your virtual machine first.

When you create a virtual machine image in this way, the original virtual machine becomes unusable. You can’t restart it. Instead, you must create a new virtual machine from the image, as described later in this unit.
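A hedged sketch of the Azure-side steps once the OS has been generalized (Sysprep on Windows, or waagent deprovisioning inside a Linux VM); the names are placeholders:

RG=<resource-group>

az vm deallocate --resource-group $RG --name SourceVM

# mark the VM as generalized in Azure
az vm generalize --resource-group $RG --name SourceVM

# capture an image from the generalized VM
az image create --resource-group $RG --name SourceVM-image --source SourceVM

# create a new VM from that image
az vm create \
  --resource-group $RG \
  --name NewVM \
  --image SourceVM-image \
  --admin-username azureuser \
  --generate-ssh-keys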

Source

High-performance computing

Scenario: Suppose you work for an engineering organization that has an application that creates 3D models of the facilities they design. Your organization also has another system that stores a large amount of project-related statistical data. They want to use Azure to modernize the aging high-performance compute platforms that support these applications. Your organization needs to understand the solutions available on Azure, and how they fit into their plans.

  • Azure HPC choices
    • Azure batch
    • Azure VM HPC Instances
    • Microsoft HPC Pack
  • they are for specialized tasks
    • In genetic sciences, gene sequencing.
    • In oil and gas exploration, reservoir simulations.
    • In finance, market modeling.
    • In engineering, physical system modeling.
    • In meteorology, weather modeling.
  • Azure batch
    • for working with large-scale parallel and computationally intensive tasks 
    • batch is managed service
    • The Batch scheduling and management service is free
    • batch components
      • batch account
        • pools of VMs / nodes
        • batch job
          • tasks / units of work
    • Batch can associate with storage for input/output
    • the scheduling and management engine determines the optimal plan for allocating and scheduling tasks across the specified compute capacity
    • suggested for embarrassingly parallel tasks (https://www.youtube.com/watch?v=cadoD0aSQoM)
  • Azure VM HPC
    • H-series
    • HB-Series
    • HC-series
    • N -> NVIDIA GPUs
    • NC -> NVIDIA GPUs + CUDA
    • ND -> optimized for AI and deep learning workloads; they are fast at running single-precision floating point operations, which are used by AI frameworks including Microsoft Cognitive Toolkit, TensorFlow, and Caffe.
  • Microsoft HPC Pack
    • for migrating from on-premises to Azure
    • have full control of the management and scheduling of your clusters of VMs
    • HPC Pack has the flexibility to deploy to on-premises and the cloud.
    • HPC Pack offers a series of installers for Windows that allows you to configure your own control and management plane, and highly flexible deployments of on-premises and cloud nodes.
    •  Deployment of HPC Pack requires Windows Server 2012 or later, and takes careful consideration to implement.
    • Prerequisites:
      • You need SQL Server, an Active Directory controller, and a topology
      • specify the count of heads/controller nodes and workers
      • pre-provision Azure nodes as part of the cluster
      • The size of the main machines that make up the control plane (head and control nodes, SQL Server, and Active Directory domain controller) will depend on the projected cluster size
      • install HPC Pack -> then you have a job scheduler for both HPC and parallel jobs
      • scheduler appears in the Microsoft Message Passing Interface
      • HPC Pack is highly integrated with Windows
      • can see all the application, networking, and operating system events from the compute nodes in the cluster in a single, debugger view.

Source

Azure Batch

Scenario: Imagine you’re a software developer at a non-profit organization whose mission is to give every human on the planet access to clean water. To reach this goal, every citizen is asked to take a picture of their water purification meter and text it to you. Each day, you have to scan pictures from over 500,000 households, and record each reading against the sender phone number. The data is used to detect water quality trends and to dispatch the mobile water quality team to investigate the worst cases across each region. Time is of the essence, but processing each image with Optical Character Recognition (OCR) is time-intensive. With Azure Batch, you can scale out the amount of compute needed to handle this task on a daily basis, saving your non-profit the expense of fixed resources.

  • Azure Batch is an Azure service that enables you to run large-scale parallel and high-performance computing (HPC) applications efficiently in the cloud.
  • no need to manage infrastructure
  • Azure Batch to execute large-scale, high-intensity computation jobs
  • for running parallel tasks
  • flexible and scalable compute solution, such as Azure Batch, to provide the computational power
  • for compute-intensive tasks
    • heavy workloads can be broken down into separate subtasks and run in parallel
  • components
    • azure batch account
    • a Batch account is a container for all Batch resources
    • batch account contains many batch pools
    • azure batch workflow
# define variables
RESOURCE_GROUP=<your resource group>
BATCH_ACCOUNT=batchaccount$RANDOM
LOCATION=westeurope

# create azure batch account
az batch account create \
 --name $BATCH_ACCOUNT \
 --resource-group $RESOURCE_GROUP \
 --location $LOCATION

# login to azure batch account
az batch account login \
 --name $BATCH_ACCOUNT \
 --resource-group $RESOURCE_GROUP \
 --shared-key-auth

# create an azure batch pool
az batch pool create \
 --id mypool --vm-size Standard_A1_v2 \
 --target-dedicated-nodes 3 \
 --image canonical:ubuntuserver:16.04-LTS \
 --node-agent-sku-id "batch.node.ubuntu 16.04"

# verify the nodes
az batch pool show --pool-id mypool \
 --query "allocationState"

# create a job
az batch job create \
 --id myjob \
 --pool-id mypool

# create tasks
for i in {1..10}
do
   az batch task create \
    --task-id mytask$i \
    --job-id myjob \
    --command-line "/bin/bash -c 'echo \$(printenv | grep \AZ_BATCH_TASK_ID) processed by; echo \$(printenv | grep \AZ_BATCH_NODE_ID)'"
done


# delete batch job
az batch job delete --job-id myjob -y

Source

Monitor Azure Batch job
  • to monitor the progress of the tasks
# create a job for monitoring
az batch job create \
 --id myjob2 \
 --pool-id mypool

# create tasks of the job
for i in {1..10}
do
   az batch task create \
    --task-id mytask$i \
    --job-id myjob2 \
    --command-line "/bin/bash -c 'echo \$(printenv | grep \AZ_BATCH_TASK_ID) processed by; echo \$(printenv | grep \AZ_BATCH_NODE_ID)'"
done

# check status
az batch task show \
 --job-id myjob2 \
 --task-id mytask1

# list tasks output
az batch task file list \
 --job-id myjob2 \
 --task-id mytask5 \
 --output table

# create a folder for output and change to this folder
mkdir taskoutputs && cd taskoutputs

# download tasks output
for i in {1..10}
do
az batch task file download \
    --job-id myjob2 \
    --task-id mytask$i \
    --file-path stdout.txt \
    --destination ./stdout$i.txt
done

# show content
cat stdout1.txt && cat stdout2.txt

# delete the job
az batch job delete --job-id myjob2 -y

Automate business processes

  • Modern businesses run on multiple applications and services
  • sending the right data to the right task impacts efficiency
  • azure features to build and implement workflows that integrate multiple systems
    • Logic Apps
    • Microsoft Power Automate
    • WebJobs
    • Azure Functions
  • similarities of them
    • They can all accept inputs. An input is a piece of data or a file that is supplied to the workflow.
    • They can all run actions. An action is a simple operation that the workflow executes and may often modify data or cause another action to be performed.
    • They can all include conditions. A condition is a test, often run against an input, that may decide which action to execute next.
    • They can all produce outputs. An output is a piece of data or a file that is created by the workflow.
    • In addition, workflows created with these technologies can either start based on a schedule or they can be triggered by some external event.
    • They have design-first approach
      • Logic app
      • Power automate
    • They have code-first technology
      • webjob
      • Azure functions

Logic Apps

  • to automate, orchestrate, and integrate disparate components of a distributed application.
  • Visual designer / Json Code Editor
  • over 200 connectors to external services
  • If you have an unusual or unique system that you want to call from a Logic App, you can create your own connector if your system exposes a REST API.

Microsoft Power Automate

  • create workflows even when you have no development or IT Pro experience
  • support four different types of flow
  • is built on Logic Apps
  • support same connectors and custom connectors

Webjobs

  • runs background tasks as part of an App Service application
  • Onboarding : Modern Applications
  • kinds
    • continuous
    • triggered
  • webjob can be written in several languages.
  • The WebJobs SDK only supports C# and the NuGet package manager.

Azure Functions

  • small pieces of code
  • pay for the time when the code runs
  • Azure automatically scales the function 
  • has available template
  • Microsoft Power Automate supported flows
    • Automated: A flow that is started by a trigger from some event. For example, the event could be the arrival of a new tweet or a new file being uploaded.
    • Button: Use a button flow to run a repetitive task with a single click from your mobile device.
    • Scheduled: A flow that executes on a regular basis such as once a week, on a specific date, or after 10 hours.
    • Business process: A flow that models a business process such as the stock ordering process or the complaints procedure.
  • Azure function available templates
    • HTTPTrigger. Use this template when you want the code to execute in response to a request sent through the HTTP protocol.
    • TimerTrigger. Use this template when you want the code to execute according to a schedule.
    • BlobTrigger. Use this template when you want the code to execute when a new blob is added to an Azure Storage account.
    • CosmosDBTrigger. Use this template when you want the code to execute in response to new or updated documents in a NoSQL database.
  • WebJobs for these reasons
    • You want the code to be a part of an existing App Service application and to be managed as part of that application, for example in the same Azure DevOps environment.
    • You need close control over the object that listens for events that trigger the code. This object in question is the JobHost class, and you have more flexibility to modify its behavior in WebJobs

design-first comparison

Criterion | Microsoft Power Automate | Logic Apps
Intended users | Office workers and business analysts | Developers and IT pros
Intended scenarios | Self-service workflow creation | Advanced integration projects
Design tools | GUI only. Browser and mobile app | Browser and Visual Studio designer. Code editing is possible
Application Lifecycle Management | Power Automate includes testing and production environments | Logic Apps source code can be included in Azure DevOps and source code management systems

code-first comparison

Criterion | Azure WebJobs | Azure Functions
Supported languages | C# if you are using the WebJobs SDK | C#, Java, JavaScript, PowerShell, etc.
Automatic scaling | No | Yes
Development and testing in a browser | No | Yes
Pay-per-use pricing | No | Yes
Integration with Logic Apps | No | Yes
Package managers | NuGet if you are using the WebJobs SDK | NuGet and NPM
Can be part of an App Service application | Yes | No
Provides close control of JobHost | Yes | No
[Diagram: decision flow chart for choosing between Power Automate, Logic Apps, WebJobs, and Azure Functions]
[Source]

Source


You owe your dreams your courage.

Koleka Putuma


AWS : Monitor, React, and Recover

Key concepts

  • Monitoring : is for understanding what is happening in your system.
  • Alerting: a CloudWatch component and the counterpart to monitoring; it allows the platform to let us know when something is wrong.
  • Recovering : is for identifying the cause of the issue and rectifying it.
  • Automating
  • Alert:
  • Simple Notification Service (SNS):
  • CloudTrail: by enabling CloudTrail on your AWS account, you ensure that you have the data necessary to look at the history of your AWS account and determine what happened and when.
  • Amazon Athena: which lets you filter through large amounts of data with ease.
  • SSL certificate: Cryptographic certificate for encrypting traffic between two computers.
  • Source of truth: When data is stored in multiple places or ways, the “source of truth” is the one that is used when there is a discrepancy between the multiple sources.
  • Chaos Engineering: Intentionally causing issues in order to validate that a system can respond appropriately to problems.

Monitoring concept

Without monitoring, you are blind to what is happening in your systems. Without having knowledgeable folks alerted when things go wrong, you're deaf to system failures. Creating systems that reach out to you and ask you for help when they need it, or better yet, let you know that they might need help soon, is critical to meeting your business goals and sleeping easier at night.

Once you have mastered monitoring and alerting, you can begin to think about how your systems can fix themselves. At least for routine problems, automation can be a fantastic tool for keeping your platform running seamlessly [Source].

Monitoring and responding are core to every vital system. When you architect a platform, you should always think, early in the design process, about how you will know if something is wrong with that platform. There are many different kinds of monitoring that can be applied to many different facets of the system, and knowing which types to apply where can be the difference between success and failure.

CloudWatch

  • CloudWatch is the primary AWS service for monitoring
  • it has different pieces that work together
  • CloudWatch Metrics is the main repository of monitoring metrics, e.g. what the CPU utilization looks like on your RDS database, or how many messages are currently in SQS (Simple Queue Service)
  • we can create custom metrics
  • CloudWatch Logs is a service for storing and viewing text-based logs e.g. Lambda, API Gateway,…
  • CloudWatch Synthetics provides canaries (scripted health checks) for monitoring HTTP endpoints
  • CloudWatch Dashboard
  • CloudWatch Alarms

List of AWS services that push metrics into CloudWatch: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/aws-services-cloudwatch-metrics.html

Refer to the AWS : Serverless post to create a simple Lambda for testing CloudWatch.

How to use CloudWatch

This is the Overview > Metrics view.

You see the list of namespaces. Lambda is one of them.

CloudWatch Alert [Source]

  • CloudWatch metrics alone don't alert you
  • CloudWatch Alarms inform you when something is wrong
  • Proper alerting will help you keep tabs on your systems and will help you meet your SLAs
  • Alerting in ways that bring attention to important issues will keep everyone informed and prevent your customers from being the ones to inform you of problems
  • CloudWatch Alarms integrates with CloudWatch Metrics
  • Any metric in CloudWatch can be used as the basis for an alarm
  • These alarms are sent to SNS topics, and from there, you have a whole variety of options for distributing information such as email, text message, Lambda invocation or third party integration.
  • Alerting when problems occur is critical, but alerting when problems are about to occur is far better.
  • Understanding the design and architecture of your platform is key to being able to set thresholds correctly
  • You want to set your thresholds so that your systems are quiet when the load is within their capacity, but to start speaking up when they head toward exceeding their capacity. You will need to determine how much advanced warning you will need to fix issues.
Always try to configure the alert so that you have enough lead time (for example, a weekend) to solve a utilization problem before it becomes critical.

Example: create a Lambda function and set up an alert on the Lambda function's invocations in CloudWatch Alarms to email you anytime the Lambda is run.

Solution has been recorded in video
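
For reference, a hedged AWS CLI sketch of the same setup (topic name, e-mail address, region, account ID, and function name are placeholders):

# SNS topic plus an e-mail subscription that the alarm will notify
aws sns create-topic --name lambda-invocation-alerts
aws sns subscribe \
  --topic-arn arn:aws:sns:<region>:<account-id>:lambda-invocation-alerts \
  --protocol email \
  --notification-endpoint you@example.com

# alarm on the Lambda Invocations metric: fire as soon as the function runs at least once in a minute
aws cloudwatch put-metric-alarm \
  --alarm-name lambda-invoked \
  --namespace AWS/Lambda \
  --metric-name Invocations \
  --dimensions Name=FunctionName,Value=<your-lambda-name> \
  --statistic Sum \
  --period 60 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:<region>:<account-id>:lambda-invocation-alerts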

Recovering From Failure by using CloudTrail 

The key to recovering from failure is identifying the root cause as well as how and who/what triggered the incident.

We can log

  • management events (the first copy of management events is free of charge, but extra copies cost $2 per 100,000 write management events [Source])
  • data events (pay $0.10 per 100,000 data events)

You will be able to refer to this CloudTrail log for a complete history of the actions taken in your AWS account. You can also query these logs with Amazon Athena, which lets you filter through large amounts of data with ease.
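
A hedged sketch of enabling such a trail and querying recent events from the CLI (trail and bucket names are placeholders; the S3 bucket needs a bucket policy that allows CloudTrail to write to it):

aws cloudtrail create-trail \
  --name account-audit-trail \
  --s3-bucket-name <your-cloudtrail-bucket> \
  --is-multi-region-trail

aws cloudtrail start-logging --name account-audit-trail

# look up recent events directly, e.g. who stopped an EC2 instance
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=StopInstances \
  --max-results 10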

Automating recovery

Automating service recovery and creating “self-healing” systems can take you to the next level of system architecture. Some solutions are quite simple. Using autoscaling within AWS, you can handle single instance/server failures without missing a beat. These solutions will automatically replace a failed server or will create or delete servers based on the demand at any given point in time.

Beyond the simple tasks, many types of failure can be automatically recovered from, but this can involve significant work. Many failure events can generate notifications, either directly from the service, or via an alarm generated out of CloudWatch. These events can have a Lambda function attached to them, and from there, you can do anything you need to in order to recover the system. Do be cautious with this type of automation where you are, in essence, turning over some control of the platform – to the platform. Just like with a business application, there can be defects. However, as with any software, proper and thorough testing can help ensure a high-quality product.

Some AWS services can autoscale to help with automated recovery.

Chaos engineering

Chaos Engineering is the practice of intentionally breaking things in production. If your systems are supposed to handle these failures, why not deliberately trigger them to prove that they can?

Set rational alerting levels for your system so that for foreseeable issues, you get alerted so that you can take care of issues before they become critical.

Edge cases [Source]

Many applications and services lend themselves to being monitored and maintained. When you run into an application that does not, it is no less important (it's likely more important) to monitor, alert on, and maintain it. You may find yourself needing to go to extremes in order to pull these systems into your monitoring framework, but if you do not, you are putting yourself at risk of letting faults go undetected. Ensuring coverage of all of the components of your platform, documenting and training staff to understand the platform, and practicing what to do in the case of outages will help ensure the highest uptime for your company.



You owe your dreams your courage.

Koleka Putuma