Scenario: When the caller and the called application are not in the same origin, the CORS policy doesn't allow the called application/backend to respond to the caller application.
It's strongly recommended to specify the allowed origins in your backend. In the following video I explain how we can do it.
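If the backend runs on Azure App Service, the allowed origins can also be configured from the CLI. A minimal sketch, assuming a hypothetical backend app named mybackendapp in resource group mygroup:
# allow a specific caller origin on the App Service backend
az webapp cors add --resource-group mygroup --name mybackendapp --allowed-origins https://caller.example.com
# list the configured origins
az webapp cors show --resource-group mygroup --name mybackendapp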
Scenario: Assume you developed an online shop. When a new product is added to the shop, a message is sent to a storage queue. You want to develop an Azure Function that is triggered when a message is pushed to the storage queue and creates a thumbnail for the newly added product from the product's image.
Coming soon.
About Azure Function Console in Visual Studio
When you develop an Azure Function App and you want to test and run it on your local machine, a console always opens as follows. You can follow the progress of your function via this console if you use the ILogger interface.
# For example
log.LogInformation($"C# Queue trigger function processed: {queueMessage.ImageName}");
But this console stays open after you stop debugging. To close it automatically, go to Tools > Options and enable the setting that closes the console automatically when debugging stops.
To grant access to a subscription, identify the appropriate role to assign to an employee
Scenario: A requirement of the presentation tier is to use in-memory sessions to store the logged-in user's profile as the user interacts with the portal. In this scenario, the load balancer must provide source IP affinity to maintain a user's session. The profile is stored only on the virtual machine that the client first connects to, because that IP address is directed to the same server.
Azure RBAC roles vs. Azure AD Roles
RBAC roles: apply to Azure resources; their scope covers management groups, subscriptions, resource groups, and resources.
AD roles: apply to Azure AD resources (particularly users, groups, and domains); they have only one scope, the directory.
An Azure AD Global Administrator can elevate their access to manage all Azure subscriptions and management groups. This greater access grants them the Azure RBAC User Access Administrator role for all subscriptions of their directory.
Through the User Access Administrator role, the Global Administrator can give other users access to Azure resources.
By default, a Global Administrator doesn’t have access to Azure resources
The Global Administrator for Azure Active Directory (Azure AD) can temporarily elevate their permissions to the Azure role-based access control (RBAC) role of User Access Administrator, which is assigned at root scope. This action grants the Azure RBAC permissions needed to manage Azure resources.
Global administrator (AD role) + User Access Administrator (RBAC role) -> can view all resources in, and assign access to, any subscription or management group in that Azure AD organization
As Global Administrator, you might need to elevate your permissions to:
Regain lost access to an Azure subscription or management group.
Grant another user or yourself access to an Azure subscription or management group.
View all Azure subscriptions or management groups in an organization.
Grant an automation app access to all Azure subscriptions or management groups.
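In the portal this is done from the Azure AD properties page; it can also be triggered by calling the documented elevateAccess REST endpoint. A minimal sketch via az rest, run as the Global Administrator:
# elevate the signed-in Global Administrator to User Access Administrator at root scope
az rest --method post --url "/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01"
# verify the assignment at root scope
az role assignment list --role "User Access Administrator" --scope "/"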
Assign a user administrative access to an Azure subscription
To assign a user administrative access to a subscription, you must have Microsoft.Authorization/roleAssignments/write and Microsoft.Authorization/roleAssignments/delete permissions at the subscription scope. Users with the subscription Owner or User Access Administrator role have these permissions.
# Assign the role by using Azure PowerShell
New-AzRoleAssignment `
-SignInName rbacuser@example.com `
-RoleDefinitionName "Owner" `
-Scope "/subscriptions/<subscriptionID>"
# Assign the role by using the Azure CLI
az role assignment create \
--assignee rbacuser@example.com \
--role "Owner" \
--subscription <subscription_name_or_id>
Containerized web application: a web app packaged as a Docker image so that it can be deployed and run from an Azure Container Instance
Azure Container instance
Using Azure Container Instances for a containerized web application
Rapid deployment is key to business agility
Containerization saves time and reduces costs.
Multiple apps can run in their isolated containers on the same hardware.
Scenario: Suppose you work for an online clothing retailer that is planning the development of a handful of internal apps but hasn't yet decided how to host them. You're looking for maximum compatibility, and the apps may be hosted on-prem, in Azure, or in another cloud provider. Some of the apps might share IaaS infrastructure. In these cases, the company requires the apps to be isolated from each other. Apps can share the hardware resources, but an app shouldn't be able to interfere with the files, memory space, or other resources used by other apps. The company values the efficiency of its resources and wants something with a compelling app development story. Docker seems an ideal solution to these requirements. With Docker, you can quickly build and deploy an app and run it in its tailored environment, either locally or in the cloud.
To build a customized Docker image for your application, refer to the Docker, container, Kubernetes post. In this post we focus on working with Azure Container Registry.
Azure Container Instance loads and runs Docker images on demand.
The Azure Container Instance service can retrieve the image from a registry such as Docker Hub or Azure Container Registry.
Azure Container Registry
it has a unique URL
these registries are private
authentication is needed to push/pull images
pull and push work only with the Docker CLI or the Azure CLI
the Premium SKU has a replication feature (geo-replicated images)
the Standard SKU doesn't support replication
after changing the SKU to Premium, geo-replication can be used
#-----------------------------------------------------------
# Deploy a Docker image to an Azure Container Instance
#-----------------------------------------------------------
az login
az account list
az account set --subscription="subscription-id"
az account list-locations --output table
az group create --name mygroup --location westeurope
# Different SKUs provide varying levels of scalability and storage.
az acr create --name parisaregistry --resource-group mygroup --sku [Standard|Premium] --admin-enabled true
# output -> "loginServer": "parisaregistry.azurecr.io"
# retrieve the username and password for the registry
az acr credential show --name parisaregistry
# specify the URL of the login server for the registry.
docker login parisaregistry.azurecr.io --password ":)" --username ":O" # or use --password-stdin
# you must create an alias for the image that specifies the repository and tag to be created in the Docker registry
# The repository name must be of the form <login_server>/<image_name>:<tag>.
docker tag myapp:v1 myregistry.azurecr.io/myapp:v1 # myregistry.azurecr.io/myapp:v1 is the alias for myapp:v1
# Upload the image to the registry in Azure Container Registry.
docker push myregistry.azurecr.io/myapp:v1
# Verify that the image has been uploaded
az acr repository list --name myregistry
az acr repository show --repository myapp --name myregistry
# Dockerfile with azure container registry tasks
FROM node:9-alpine
ADD https://raw.githubusercontent.com/Azure-Samples/acr-build-helloworld-node/master/package.json /
ADD https://raw.githubusercontent.com/Azure-Samples/acr-build-helloworld-node/master/server.js /
RUN npm install
EXPOSE 80
CMD ["node", "server.js"]
After creating the Dockerfile, run the following commands
az acr build --registry $ACR_NAME --image helloacrtasks:v1 .
# Verify the image
az acr repository list --name $ACR_NAME --output table
# Enable the registry admin account
az acr update -n $ACR_NAME --admin-enabled true
az acr credential show --name $ACR_NAME
# Deploy a container with Azure CLI
az container create \
--resource-group learn-deploy-acr-rg \
--name acr-tasks \
--image $ACR_NAME.azurecr.io/helloacrtasks:v1 \
--registry-login-server $ACR_NAME.azurecr.io \
--ip-address Public \
--location <location> \
--registry-username [username] \
--registry-password [password]
az container show --resource-group learn-deploy-acr-rg --name acr-tasks --query ipAddress.ip --output table
# place a container registry in each region where images are run
# This strategy will allow for network-close operations, enabling fast, reliable image layer transfers.
# Geo-replication enables an Azure container registry to function as a single registry, serving several regions with multi-master regional registries.
# A geo-replicated registry provides the following benefits:
# Single registry/image/tag names can be used across multiple regions
# Network-close registry access from regional deployments
# No additional egress fees, as images are pulled from a local, replicated registry in the same region as your container host
# Single management of a registry across multiple regions
az acr replication create --registry $ACR_NAME --location japaneast
az acr replication list --registry $ACR_NAME --output table
Azure Container Registry doesn't support unauthenticated access and requires authentication for all operations. Registries support two types of identities:
Azure Active Directory identities, including both user and service principals. Access to a registry with an Azure Active Directory identity is role-based, and identities can be assigned one of three roles: reader (pull access only), contributor (push and pull access), or owner (pull, push, and assign roles to other users).
The admin account included with each registry. The admin account is disabled by default.
The admin account provides a quick option to try a new registry. You enable the account and use its username and password in workflows and apps that need access. Once you’ve confirmed the registry works as expected, you should disable the admin account and use Azure Active Directory identities exclusively to ensure the security of your registry.
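Following that advice, the admin account can be switched back off from the CLI once Azure AD identities are set up; a minimal sketch:
# disable the registry admin account
az acr update --name $ACR_NAME --admin-enabled false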
Azure Container Instance (ACI)
Azure Container Instance service can load an image from Azure Container Registry and run it in Azure
the instance gets an IP address so it is accessible
a DNS name can be used as a friendly label
the image URL can point to Azure Container Registry or Docker Hub
runs a container in Azure without managing virtual machines and without adopting a higher-level service
Fast startup: Launch containers in seconds.
Per second billing: Incur costs only while the container is running.
Hypervisor-level security: Isolate your application as completely as it would be in a VM.
Custom sizes: Specify exact values for CPU cores and memory.
Persistent storage: Mount Azure Files shares directly to a container to retrieve and persist state.
Linux and Windows: Schedule both Windows and Linux containers using the same API.
The ease and speed of deploying containers in Azure Container Instances makes it a great fit for executing run-once tasks like image rendering or building and testing applications.
provide a DNS name to expose your container to the Internet (dns must be unique)
For scenarios where you need full container orchestration, including service discovery across multiple containers, automatic scaling, and coordinated application upgrades, we recommend Azure Kubernetes Service (AKS).
Create an Azure Container Instance (ACI)
#--------------------------------------------------------------
# Using Azure Container Instance to run a docker image
#--------------------------------------------------------------
# used to generate a random DNS name
DNS_NAME_LABEL=aci-demo-$RANDOM
# use these images for a quick start or demo
--image microsoft/aci-helloworld # -> basic Node.js web application on docker hub
--image microsoft/aci-wordcount:latest # -> This container runs a Python script that analyzes the text of Shakespeare's Hamlet, writes the 10 most common words to standard output, and then exits
# create a container instance and start the image running
# for a user friendly url -> --dns-name-label mydnsname
az container create \
--resource-group mygroup \
--name ecommerceapiproducts \
--image parisaregistry.azurecr.io/ecommerceapiproducts:latest \
--os-type Windows \
--dns-name-label ecommerceapiproducts \
--registry-username ":)" \
--registry-password ":O"
az container create \
--resource-group mygroup \
--name myproducts1 \
--image parisaregistry.azurecr.io/ecommerceapiproducts:latest \
--os-type Windows \
--registry-login-server parisaregistry.azurecr.io \
--registry-username ":)" \
--registry-password ":O" \
--dns-name-label myproducts \
--ports 9000 \
--environment-variables 'PORT'='9000'
ACI restart-policies
Azure Container Instances has three restart-policy options [Source]:
Always: Containers in the container group are always restarted. This policy makes sense for long-running tasks such as a web server. This is the default setting applied when no restart policy is specified at container creation.
Never: Containers in the container group are never restarted. The containers run one time only.
OnFailure: Containers in the container group are restarted only when the process executed in the container fails (when it terminates with a nonzero exit code). The containers are run at least once. This policy works well for containers that run short-lived tasks.
Azure Container Instances starts the container and then stops it when its process (a script, in this case) exits. When Azure Container Instances stops a container whose restart policy is Never or OnFailure, the container’s status is set to Terminated.
az container create \
--resource-group learn-deploy-aci-rg \
--name mycontainer-restart-demo \
--image microsoft/aci-wordcount:latest \
--restart-policy OnFailure \
--location eastus
az container show \
--resource-group learn-deploy-aci-rg \
--name mycontainer-restart-demo \
--query containers[0].instanceView.currentState.state
az container logs \
--resource-group learn-deploy-aci-rg \
--name mycontainer-restart-demo
Check ACI logs, state, and events
az container logs --resource-group mygroup --name myproducts1
az container attach --resource-group mygroup --name myproducts1
az container delete --resource-group mygroup --name myproducts1
# find the fully qualified domain name of the instance by querying the IP address of the instance, or in the Azure portal > Azure Container Instance > Overview: FQDN
az container show --resource-group mygroup --name myproducts --query ipAddress.fqdn
# another variant
--query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" \
--out table
# get the status of the container
--query containers[0].instanceView.currentState.state
# Execute a command in your container
az container exec \
--resource-group learn-deploy-aci-rg \
--name mycontainer \
--exec-command /bin/sh
# Monitor CPU and memory usage on your container
CONTAINER_ID=$(az container show \
--resource-group learn-deploy-aci-rg \
--name mycontainer \
--query id \
--output tsv)
az monitor metrics list \
--resource $CONTAINER_ID \
--metric CPUUsage \
--output table
az monitor metrics list \
--resource $CONTAINER_ID \
--metric MemoryUsage \
--output table
By default, Azure Container Instances are stateless.
If the container crashes or stops, all of its state is lost.
To persist state beyond the lifetime of the container, you must mount a volume from an external store.
mount an Azure file share to an Azure container instance so you can store data and access it later
STORAGE_ACCOUNT_NAME=mystorageaccount$RANDOM
az storage account create \
--resource-group learn-deploy-aci-rg \
--name $STORAGE_ACCOUNT_NAME \
--sku Standard_LRS \
--location eastus
# AZURE_STORAGE_CONNECTION_STRING is a special environment variable that's understood by the Azure CLI.
# The export part makes this variable accessible to other CLI commands you'll run shortly.
export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string \
--resource-group learn-deploy-aci-rg \
--name $STORAGE_ACCOUNT_NAME \
--output tsv)
# create a file share
az storage share create --name aci-share-demo
# To mount an Azure file share as a volume in Azure Container Instances, you need these three values:
# The storage account name
# The share name
# The storage account access key
STORAGE_KEY=$(az storage account keys list \
--resource-group learn-deploy-aci-rg \
--account-name $STORAGE_ACCOUNT_NAME \
--query "[0].value" \
--output tsv)
# check the value
echo $STORAGE_KEY
# Deploy a container and mount the file share (mount /aci/logs/ to your file share)
az container create \
--resource-group learn-deploy-aci-rg \
--name aci-demo-files \
--image microsoft/aci-hellofiles \
--location eastus \
--ports 80 \
--ip-address Public \
--azure-file-volume-account-name $STORAGE_ACCOUNT_NAME \
--azure-file-volume-account-key $STORAGE_KEY \
--azure-file-volume-share-name aci-share-demo \
--azure-file-volume-mount-path /aci/logs/
# check the storage
az storage file list -s aci-share-demo -o table
az storage file download -s aci-share-demo -p <filename>
The task of automating, managing, and interacting with a large number of containers is known as orchestration.
Azure Kubernetes Service (AKS) is a complete orchestration service for distributed architectures with multiple containers.
You can move existing applications to containers and run them within AKS.
You can control access via integration with Azure Active Directory (Azure AD) and access Service Level Agreement (SLA)–backed Azure services, such as Azure Database for MySQL for any data needs, via Open Service Broker for Azure (OSBA).
Azure takes care of the infrastructure to run and scale your applications.
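A minimal sketch of standing up an AKS cluster and connecting kubectl to it (resource and cluster names are placeholders):
# create a two-node AKS cluster
az aks create --resource-group mygroup --name myAKSCluster --node-count 2 --generate-ssh-keys
# merge the cluster credentials into the local kubeconfig so kubectl can talk to it
az aks get-credentials --resource-group mygroup --name myAKSCluster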
a prerequisite for pushing code to Azure App Service this way is a staging deployment slot
you can easily add deployment slots to an App Service web app (for creating a staging environment)
swap the staging deployment slot with the production slot
the Azure portal provides out-of-the-box continuous integration and deployment with Azure DevOps, GitHub, Bitbucket, FTP, or a local Git repository on your development machine
the Free tier doesn't support deployment slots
deployment slots are listed under the "Deployment slots" menu
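A minimal CLI sketch of the slot workflow above, assuming a hypothetical web app named myapp in resource group mygroup:
# create a staging slot (requires the Standard tier or higher)
az webapp deployment slot create --resource-group mygroup --name myapp --slot staging
# swap staging into production once the new version has been verified
az webapp deployment slot swap --resource-group mygroup --name myapp --slot staging --target-slot production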
Continuous integration/deployment support
Connect your web app with any of the above sources and App Service will do the rest for you by automatically syncing your code and any future changes on the code into the web app
with Azure DevOps, you can define your own build and release process that compiles your source code, runs the tests, builds a release, and finally deploys the release into your web app every time you commit the code
out-of-the-box continuous integration and deployment
Automated deployment
Automated deployment, or continuous integration, is a process used to push out new features and bug fixes in a fast and repetitive pattern with minimal impact on end users.
Azure supports automated deployment directly from several sources. The following options are available:
Azure DevOps: You can push your code to Azure DevOps (previously known as Visual Studio Team Services), build your code in the cloud, run the tests, generate a release from the code, and finally, push your code to an Azure Web App.
GitHub: Azure supports automated deployment directly from GitHub. When you connect your GitHub repository to Azure for automated deployment, any changes you push to your production branch on GitHub will be automatically deployed for you.
Bitbucket: With its similarities to GitHub, you can configure an automated deployment with Bitbucket.
OneDrive: Microsoft’s cloud-based storage. You must have a Microsoft Account linked to a OneDrive account to deploy to Azure.
Dropbox: Azure supports deployment from Dropbox, which is a popular cloud-based storage system that is similar to OneDrive.
Manual deployment
There are a few options that you can use to manually push your code to Azure:
Git: App Service web apps feature a Git URL that you can add as a remote repository. Pushing to the remote repository will deploy your app.
az webapp up: webapp up is a feature of the az command-line interface that packages your app and deploys it. Unlike other deployment methods, az webapp up can create a new App Service web app for you if you haven’t already created one.
Zipdeploy: Use az webapp deployment source config-zip to send a ZIP of your application files to App Service. Zipdeploy can also be accessed via basic HTTP utilities such as curl.
Visual Studio: Visual Studio features an App Service deployment wizard that can walk you through the deployment process.
FTP/S: FTP or FTPS is a traditional way of pushing your code to many hosting environments, including App Service.
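For example, the az webapp up path can be sketched as follows (app name, group, and location are placeholders); run it from the folder containing the app:
# package and deploy the current folder, creating the web app if it doesn't exist yet
az webapp up --name <your-unique-app-name> --resource-group mygroup --location westeurope --sku F1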
# using SDK version 3.1.102.
wget -q -O - https://dot.net/v1/dotnet-install.sh | bash -s -- --version 3.1.102
export PATH="~/.dotnet:$PATH"
echo "export PATH=~/.dotnet:\$PATH" >> ~/.bashrc
# create a new ASP.NET Core MVC application
dotnet new mvc --name BestBikeApp
# build and run the application to verify it is complete.
cd BestBikeApp
dotnet run
# output
info: Microsoft.Hosting.Lifetime[0]
Now listening on: https://localhost:5001
info: Microsoft.Hosting.Lifetime[0]
Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
Content root path: /home/user/BestBikeApp
# Deploy with zipdeploy
dotnet publish -o pub
cd pub
zip -r site.zip *
# perform the deployment
az webapp deployment source config-zip \
--src site.zip \
--resource-group learn-6126217c-08a6-4509-a288-2941d4b96a27 \
--name <your-unique-app-name>
Run a background task in an App Service Web App with WebJobs
Automate a task for a Web App that should run in the background without affecting the performance of the Web App
a small automated task that executes automatically in response to some event
Scenario: Suppose you are a senior web developer in a research role for an online luxury watch dealer. You have a production website that uses Azure web apps. You’ve built a small script that checks stock levels and reports them to an external service. You consider this script to be part of the website, but it’s meant to run in the background, not in response to a user’s actions on the site.
You’d like the website and the script code to be closely associated. They should be stored together as part of the same project in the same source control repository. The script may grow and change as the site changes, so you’d like to always deploy them at the same time, to the same set of cloud resources.
WebJobs are a feature of Azure App Service
WebJobs can be used to run any script or console application that can be run on a Windows computer, with some functionality limitations
To run a WebJob, you’ll need an existing Azure App Service web app, web API, or mobile app
You can run multiple WebJobs in a single App Service plan along with multiple apps or APIs.
Your WebJobs can be written as scripts of several different kinds including Windows batch files, PowerShell scripts, or Bash shell scripts
You can upload such scripts and executables directly to the web app in the Azure portal
you can write WebJobs using a framework such as Python or Node.js
This approach enables you to use the WebJobs tools in Visual Studio to ease development.
Types of WebJob
Continuous: starts when it is deployed and continues to run in an endless loop, so the code must be written as a loop (for example, to poll a message queue for new items and process their contents).
Triggered: only starts when scheduled or manually triggered; use this kind of WebJob, for example, to create daily summaries of messages in a queue.
Webjob vs. Azure function (to know more about Azure Serverless Services/Architecture)
Azure Batch: Azure Batch is an Azure service that enables you to run large-scale parallel and high-performance computing (HPC) applications efficiently in the cloud.
High-performance computing (HPC)
MPI: Message Passing Interface
Workflow: Business processes modeled in software are often called workflows.
Design-first approach: include user interfaces in which you can draw out the workflow
Azure compute: an on-demand computing service for running cloud-based applications
list: List the created virtual machines in your subscription
open-port: Open a specific network port for inbound traffic
restart: Restart a virtual machine
show: Get the details for a virtual machine
start: Start a stopped virtual machine
stop: Stop a running virtual machine
update: Update a property of a virtual machine
# Create a Linux virtual machine
az vm create \
--resource-group [sandbox resource group name] \
--location westus \
--name SampleVM \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys \
--verbose # Azure CLI tool waits while the VM is being created.
# Or
--no-wait # option to tell the Azure CLI tool to return immediately and have Azure continue creating the VM in the background.
# output
{
"fqdns": "",
"id": "/subscriptions/<subscription-id>/resourceGroups/Learn-2568d0d0-efe3-4d04-a08f-df7f009f822a/providers/Microsoft.Compute/virtualMachines/SampleVM",
"location": "westus",
"macAddress": "00-0D-3A-58-F8-45",
"powerState": "VM running",
"privateIpAddress": "10.0.0.4",
"publicIpAddress": "40.83.165.85",
"resourceGroup": "2568d0d0-efe3-4d04-a08f-df7f009f822a",
"zones": ""
}
# generate-ssh-keys flag: This parameter is used for Linux distributions and creates
# a pair of security keys so we can use the ssh tool to access the virtual machine remotely.
# The two files are placed into the .ssh folder on your machine and in the VM. If you already
# have an SSH key named id_rsa in the target folder, then it will be used rather than having a new key generated.
# Connecting to the VM with SSH
ssh azureuser@<public-ip-address>
# for exit
logout
# Listing images
az vm image list --output table
# Getting all images
az vm image list --sku WordPress --output table --all # It is helpful to filter the list with the --publisher, --sku, or --offer options.
# Location-specific images
az vm image list --location eastus --output table
Pre-defined VM sizes
Azure defines a set of pre-defined VM sizes for Linux and Windows to choose from based on the expected usage.
General purpose (Dsv3, Dv3, DSv2, Dv2, DS, D, Av2, A0-7): Balanced CPU-to-memory. Ideal for dev/test and small to medium applications and data solutions.
Compute optimized (Fs, F): High CPU-to-memory. Good for medium-traffic applications, network appliances, and batch processes.
Memory optimized (Esv3, Ev3, M, GS, G, DSv2, DS, Dv2, D): High memory-to-core. Great for relational databases, medium to large caches, and in-memory analytics.
Storage optimized (Ls): High disk throughput and IO. Ideal for big data, SQL, and NoSQL databases.
GPU optimized (NV, NC): Specialized VMs targeted for heavy graphic rendering and video editing.
High performance (H, A8-11): The most powerful CPU VMs, with optional high-throughput network interfaces (RDMA).
# get a list of the available sizes
az vm list-sizes --location eastus --output table
# output
MaxDataDiskCount MemoryInMb Name NumberOfCores OsDiskSizeInMb ResourceDiskSizeInMb
------------------ ------------ ---------------------- --------------- ---------------- ----------------------
2 2048 Standard_B1ms 1 1047552 4096
2 1024 Standard_B1s 1 1047552 2048
4 8192 Standard_B2ms 2 1047552 16384
4 4096 Standard_B2s 2 1047552 8192
8 16384 Standard_B4ms 4 1047552 32768
16 32768 Standard_B8ms 8 1047552 65536
4 3584 Standard_DS1_v2 (default) 1 1047552 7168
8 7168 Standard_DS2_v2 2 1047552 14336
16 14336 Standard_DS3_v2 4 1047552 28672
32 28672 Standard_DS4_v2 8 1047552 57344
64 57344 Standard_DS5_v2 16 1047552 114688
....
64 3891200 Standard_M128-32ms 128 1047552 4096000
64 3891200 Standard_M128-64ms 128 1047552 4096000
64 3891200 Standard_M128ms 128 1047552 4096000
64 2048000 Standard_M128s 128 1047552 4096000
64 1024000 Standard_M64 64 1047552 8192000
64 1792000 Standard_M64m 64 1047552 8192000
64 2048000 Standard_M128 128 1047552 16384000
64 3891200 Standard_M128m 128 1047552 16384000
# Specify a size during VM creation
az vm create \
--resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
--name SampleVM2 \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys \
--verbose \
--size "Standard_DS5_v2"
# Get available VM Size
# Before a resize is requested, we must check to see if the desired size is available in the cluster our VM is part of.
az vm list-vm-resize-options \
--resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
--name SampleVM \
--output table
# Resize an existing VM
az vm resize \
--resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
--name SampleVM \
--size Standard_D2s_v3
This will return a list of all the possible size configurations available in the resource group. If the size we want isn’t available in our cluster, but is available in the region, we can deallocate the VM. This command will stop the running VM and remove it from the current cluster without losing any resources. Then we can resize it, which will re-create the VM in a new cluster where the size configuration is available.
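A minimal sketch of that deallocate-resize-restart flow, reusing the resource group from above:
# stop and deallocate the VM, releasing it from its current cluster
az vm deallocate --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 --name SampleVM
# resize while deallocated; the VM is re-created in a cluster that offers the new size
az vm resize --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 --name SampleVM --size Standard_D2s_v3
# start the VM again
az vm start --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 --name SampleVM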
# List VMs
az vm list
# Output types
az vm list --output table|json|jsonc|tsv
# Getting the IP address
az vm list-ip-addresses -n SampleVM -o table
# output
VirtualMachine PublicIPAddresses PrivateIPAddresses
---------------- ------------------- --------------------
SampleVM 168.61.54.62 10.0.0.4
# Getting VM details
az vm show --resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 --name SampleVM
# we could change to a table format, but that omits almost all of the interesting data. Instead, we can turn to a built-in query language for JSON called JMESPath.
# https://jmespath.org/
# Adding filters to queries with JMESPath
{
"people": [
{
"name": "Fred",
"age": 28
},
{
"name": "Barney",
"age": 25
},
{
"name": "Wilma",
"age": 27
}
]
}
# people is an array
people[1]
# output
{
"name": "Barney",
"age": 25
}
people[?age > '25']
# output
[
{
"name": "Fred",
"age": 28
},
{
"name": "Wilma",
"age": 27
}
]
people[?age > '25'].[name]
# output
[
[
"Fred"
],
[
"Wilma"
]
]
# Filtering our Azure CLI queries
az vm show \
--resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
--name SampleVM \
--query "osProfile.adminUsername"
az vm show \
--resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
--name SampleVM \
--query hardwareProfile.vmSize
az vm show \
--resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
--name SampleVM \
--query "networkProfile.networkInterfaces[].id"
az vm show \
--resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
--name SampleVM \
--query "networkProfile.networkInterfaces[].id" -o tsv
# Stopping a VM
az vm stop \
--name SampleVM \
--resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844
# We can verify it has stopped by attempting to ping the public IP address, using ssh, or through the vm get-instance-view command.
az vm get-instance-view \
--name SampleVM \
--resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
--query "instanceView.statuses[?starts_with(code, 'PowerState/')].displayStatus" -o tsv
# Starting a VM
az vm start \
--name SampleVM \
--resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844
# Restarting a VM
az vm restart \
--name SampleVM \
--resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
--no-wait
# Install NGINX web server
# 1.
az vm list-ip-addresses --name SampleVM --output table
# 2.
ssh azureuser@<PublicIPAddress>
# 3.
sudo apt-get -y update && sudo apt-get -y install nginx
# 4.
exit
# Retrieve our default page
# Either
curl -m 10 <PublicIPAddress>
# Or
# in browser try the public ip address
# This command will fail because the Linux virtual machine doesn't expose
# port 80 (http) through the network security group that secures the network
# connectivity to the virtual machine. We can change this with the Azure CLI command vm open-port.
# open port
az vm open-port \
--port 80 \
--resource-group learn-5d4bcefe-17c2-4db6-aba8-3f25d2c54844 \
--name SampleVM
# output of curl command
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
An availability set is a logical grouping of two or more VMs
keep your application available during planned or unplanned maintenance.
A planned maintenance event is when the underlying Azure fabric that hosts VMs is updated by Microsoft.
to patch security vulnerabilities,
improve performance,
and add or update features
When the VM is part of an availability set, the Azure fabric updates are sequenced so not all of the associated VMs are rebooted at the same time.
VMs are put into different update domains.
Update domains indicate groups of VMs and underlying physical hardware that can be rebooted at the same time.
Update domains are a logical part of each data center and are implemented with software and logic.
Unplanned maintenance events involve a hardware failure in the data center,
such as a server power outage
or disk failure
VMs that are part of an availability set automatically switch to a working physical server so the VM continues to run.
The group of virtual machines that share common hardware are in the same fault domain.
A fault domain is essentially a rack of servers.
It provides the physical separation of your workload across different power, cooling, and network hardware that support the physical servers in the data center server racks.
With an availability set, you get:
Up to three fault domains that each have a server rack with dedicated power and network resources
Five logical update domains which then can be increased to a maximum of 20
Your VMs are then sequentially placed across the fault and update domains. The following diagram shows an example where you have six VMs in two availability sets distributed across the two fault domains and five update domains.
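A minimal sketch of creating an availability set with those domain counts and placing a VM into it (names are placeholders):
# availability set with 3 fault domains and 5 update domains
az vm availability-set create --resource-group mygroup --name myAvailabilitySet --platform-fault-domain-count 3 --platform-update-domain-count 5
# create a VM inside the availability set
az vm create --resource-group mygroup --name myVM1 --image UbuntuLTS --availability-set myAvailabilitySet --admin-username azureuser --generate-ssh-keys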
Scenario: Imagine that you work for a domestic shipping company. Your customers use one of the company’s websites to manage and check the status of their shipments. This website is deployed to virtual machines and hosted on-premises. You’ve noticed that increased usage on the site is straining the virtual machines’ resources. However, you can’t adjust to load fluctuations without manually intervening and creating or deallocating virtual machines.
Scale sets are for scalable applications (automatically adjust to changes in load while minimizing costs with virtual machine scale sets)
adjust your virtual machine resources to match demand
keep the virtual machine configuration consistent to ensure application stability
VMs in this type of scale set all have the same configuration and run the same applications
for scenarios that include compute workloads, big-data workloads, and container workloads
to deploy and manage many load-balanced, identical VMs
it scales up and down automatically
it can even resize the vm
A scale set uses a load balancer to distribute requests across the VM instances
It uses a health probe to determine the availability of each instance (The health probe pings the instance)
keep in mind that you’re limited to running 1,000 VMs on a single scale set
support both Linux and Windows VMs
are designed for cost-effectiveness
scaling options
horizontal: adding or removing VMs by using rules; the rules are based on metrics
vertical: adding resources such as memory, CPU power, or disk space to VMs by increasing the size of the VMs in the scale set, also driven by rules
How to scale
Scheduled scaling: You can proactively schedule the scale set to deploy one or N number of additional instances to accommodate a spike in traffic and then scale back down when the spike ends.
Autoscaling: If the workload is variable and can’t always be scheduled, you can use metric-based threshold scaling. Autoscaling horizontally scales out based on node usage. It then scales back in when the resources return to a baseline.
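A minimal autoscale sketch for a scale set, assuming the webServerScaleSet created later in this post; the CPU threshold and instance counts are illustrative:
# define an autoscale profile for the scale set
az monitor autoscale create --resource-group scalesetrg --resource webServerScaleSet --resource-type Microsoft.Compute/virtualMachineScaleSets --name autoscale --min-count 2 --max-count 10 --count 2
# scale out by 3 instances when average CPU exceeds 70% over 5 minutes
az monitor autoscale rule create --resource-group scalesetrg --autoscale-name autoscale --condition "Percentage CPU > 70 avg 5m" --scale out 3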
Reducing costs by using low-priority VMs
allows you to use Azure compute resources at cost savings of up to 80 percent.
A low-priority scale set provisions VMs through this underused compute capability.
These VMs are temporary: availability depends on size, region, time of day, and so on. They have no SLA.
When Azure needs the computing power again, you’ll receive a notification about the VM that will be removed from your scale set
you can use Azure Scheduled Events to react to the notification within the VM.
For a low-priority scale set, you specify one of two kinds of removal:
Delete: The entire VM is removed, including all of the underlying disks.
Deallocate: The VM is stopped. The processing and memory resources are deallocated. Disks are left intact and data is kept. You’re charged for the disk space while the VM isn’t running.
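A minimal sketch of creating such a scale set; this assumes the subscription still exposes the low-priority tier (newer subscriptions may call it Spot), and --eviction-policy picks between the two removal kinds above:
# low-priority scale set whose evicted VMs are deallocated rather than deleted
az vmss create --resource-group scalesetrg --name lowPriorityScaleSet --image UbuntuLTS --priority Low --eviction-policy Deallocate --admin-username azureuser --generate-ssh-keys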
if the workload increases in complexity rather than in volume, and this complexity demands more of your resources, you might prefer to scale vertically.
# create custom data to config scale set
code cloud-init.yaml
# custom data
#cloud-config
package_upgrade: true
packages:
  - nginx
write_files:
  - owner: www-data:www-data
  - path: /var/www/html/index.html
    content: |
      Hello world from Virtual Machine Scale Set !
runcmd:
  - service nginx restart
# create resource group
az group create \
--location westus \
--name scalesetrg
# create scale set
az vmss create \
--resource-group scalesetrg \
--name webServerScaleSet \
--image UbuntuLTS \
--upgrade-policy-mode automatic \
--custom-data cloud-init.yaml \
--admin-username azureuser \
--generate-ssh-keys
# More about scaling : https://docs.microsoft.com/en-us/learn/modules/build-app-with-scale-sets/4-configure-virtual-machine-scale-set
By default, the new virtual machine scale set has two instances and a load balancer.
The --custom-data flag specifies that the VM configuration should use the settings in the cloud-init.yaml file after the VM has been created. You can use a cloud-init file to install additional packages, configure security, and write to files when the machine is first installed.
Configure vm scale set
# add a health probe to the load balancer
az network lb probe create \
--lb-name webServerScaleSetLB \
--resource-group scalesetrg \
--name webServerHealth \
--port 80 \
--protocol Http \
--path /
The health probe pings the root of the website through port 80. If the website doesn't respond, the server is considered unavailable. The load balancer won't route traffic to the server.
# configure the load balancer to route HTTP traffic to the instances in the scale set
az network lb rule create \
--resource-group scalesetrg \
--name webServerLoadBalancerRuleWeb \
--lb-name webServerScaleSetLB \
--probe-name webServerHealth \
--backend-pool-name webServerScaleSetLBBEPool \
--backend-port 80 \
--frontend-ip-name loadBalancerFrontEnd \
--frontend-port 80 \
--protocol tcp
# change the number of instances in a virtual machine scale set
az vmss scale \
--name MyVMScaleSet \
--resource-group MyResourceGroup \
--new-capacity 6
a mechanism that updates your application consistently across all instances in the scale set
The Azure Custom Script Extension downloads and runs a script on an Azure VM. It can automate the same tasks on all the VMs in a scale set.
Create a configuration file that defines the files to fetch and the commands to run. This file is in JSON format.
# custom script configuration that downloads an application from a repository in GitHub and installs it on a host instance by running a script named custom_application_v1.sh
# yourConfigV1.json
{
"fileUris": ["https://raw.githubusercontent.com/yourrepo/master/custom_application_v1.sh"],
"commandToExecute": "./custom_application_v1.sh"
}
# To deploy this configuration on the scale set, you use a custom script extension
az vmss extension set \
--publisher Microsoft.Azure.Extensions \
--version 2.0 \
--name CustomScript \
--resource-group myResourceGroup \
--vmss-name yourScaleSet \
--settings @yourConfigV1.json
# view the current upgrade policy for the scale set
az vmss show \
--name webServerScaleSet \
--resource-group scalesetrg \
--query upgradePolicy.mode
# apply the update script
az vmss extension set \
--publisher Microsoft.Azure.Extensions \
--version 2.0 \
--name CustomScript \
--vmss-name webServerScaleSet \
--resource-group scalesetrg \
--settings "{\"commandToExecute\": \"echo This is the updated app installed on the Virtual Machine Scale Set ! > /var/www/html/index.html\"}"
# retrieve the IP address
az network public-ip show \
--name webServerScaleSetLBPublicIP \
--resource-group scalesetrg \
--output tsv \
--query ipAddress
Managed disks support creating a managed custom image.
We can create an image from a custom VHD in a storage account or directly from a generalized (sysprepped) VM.
This process captures a single image.
This image contains all managed disks associated with a VM, including both the OS disk and data disks.
Image vs. Snapshot
Image: with managed disks, you can take an image of a generalized VM that has been deallocated. This image includes all managed disks attached to the VM and can be used to create a VM.
Snapshot: a copy of a disk at a specific point in time. It applies only to one disk and has no awareness of any disk except the one it contains.
If a VM has only one OS disk, we can take a snapshot of the disk or an image of the VM and create a VM from either the snapshot or the image.
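A minimal sketch of both operations (disk, image, and VM names are placeholders):
# snapshot of a single managed disk
az snapshot create --resource-group mygroup --name myOsDiskSnapshot --source myVM_OsDisk
# managed image of a deallocated, generalized VM (includes all attached disks)
az image create --resource-group mygroup --name myVMImage --source myVM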
Deploy VM from VHD
a VM can carry configuration such as installed software -> we can create a new Virtual Hard Disk (VHD) from this VM.
VHD
is like physical hard disk
A VHD can also hold databases and other user-defined folders, files, and data
A virtual machine can contain multiple VHDs
Typically, a virtual machine has an operating system VHD on which the operating system is installed.
It also has one or more data VHDs that contain the applications and other user-specific data used by the virtual machine.
VHD advantages
high availability
physical security
durability
scalability
cost and performance
VM image
a VM image is an original image without preconfigured items
a VHD contains configurations
VM images and VHDs can be created via Microsoft Hyper-V -> then uploaded to the cloud
Generalized image
it's a customized VM image
from which some server-specific information must be removed to create a general image:
The host name of your virtual machine.
The username and credentials that you provided when you installed the operating system on the virtual machine.
Log files.
Security identifiers for various operating system services.
The process of resetting this data is called generalization, and the result is a generalized image.
For Windows, use the Microsoft System Preparation (Sysprep) tool. For Linux, use the Windows Azure Linux Agent (waagent) tool.
specialized virtual image
use a specialized virtual image as a backup of your system at a particular point in time. If you need to recover after a catastrophic failure, or you need to roll back the virtual machine, you can restore your virtual machine from this image.
use a generalized image to build pre-configured virtual machines (VMs)
To generalize a Windows VM, follow these steps:
Sign in to the Windows virtual machine.
Open a command prompt as an administrator.
Browse to the directory \windows\system32\sysprep.
Run sysprep.exe.
In the System Preparation Tool dialog box, select the following settings, and then select OK: System Cleanup Action: Enter System Out-of-Box Experience (OOBE); Generalize: selected; Shutdown Options: Shutdown.
Running Sysprep is a destructive process, and you can’t easily reverse its effects. Back up your virtual machine first.
When you create a virtual machine image in this way, the original virtual machine becomes unusable. You can’t restart it. Instead, you must create a new virtual machine from the image, as described later in this unit.
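After Sysprep shuts the VM down, the Azure side of the capture can be sketched like this (resource names are placeholders):
# deallocate and mark the VM as generalized, then capture a managed image
az vm deallocate --resource-group mygroup --name myWindowsVM
az vm generalize --resource-group mygroup --name myWindowsVM
az image create --resource-group mygroup --name myGeneralizedImage --source myWindowsVM
# create a new VM from the captured image
az vm create --resource-group mygroup --name myNewVM --image myGeneralizedImage --admin-username azureuser --admin-password <password>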
Scenario: Suppose you work for an engineering organization that has an application that creates 3D models of the facilities they design. Your organization also has another system that stores a large amount of project-related statistical data. They want to use Azure to modernize the aging high-performance compute platforms that support these applications. Your organization needs to understand the solutions available on Azure, and how they fit into their plans.
Azure HPC choices
Azure batch
Azure VM HPC Instances
Microsoft HPC Pack
they are for specialized tasks
In genetic sciences, gene sequencing.
In oil and gas exploration, reservoir simulations.
In finance, market modeling.
In engineering, physical system modeling.
In meteorology, weather modeling.
Azure batch
for working with large-scale parallel and computationally intensive tasks
Batch is a managed service
The Batch scheduling and management service is free
batch components
batch account
pools of VMs / nodes
batch job
tasks / units of work
Batch can be associated with storage for input/output
the scheduling and management engine determines the optimal plan for allocating and scheduling tasks across the specified compute capacity
ND -> optimized for AI and deep learning workloads; these are fast at running single-precision floating point operations, which are used by AI frameworks including Microsoft Cognitive Toolkit, TensorFlow, and Caffe.
have full control of the management and scheduling of your clusters of VMs
HPC Pack has the flexibility to deploy to on-premises and the cloud.
HPC Pack offers a series of installers for Windows that allows you to configure your own control and management plane, and highly flexible deployments of on-premises and cloud nodes.
Deployment of HPC Pack requires Windows Server 2012 or later, and takes careful consideration to implement.
Prerequisites:
You need SQL Server, an Active Directory controller, and a topology
specify the count of heads/controller nodes and workers
pre-provision Azure nodes as part of the cluster
The size of the main machines that make up the control plane (head and control nodes, SQL Server, and Active Directory domain controller) will depend on the projected cluster size
install HPC Pack -> then you have a job scheduler for both HPC and parallel jobs
the scheduler works with the Microsoft Message Passing Interface (MPI)
HPC Pack is highly integrated with Windows
can see all the application, networking, and operating system events from the compute nodes in the cluster in a single, debugger view.
Scenario: Imagine you’re a software developer at a non-profit organization whose mission is to give every human on the planet access to clean water. To reach this goal, every citizen is asked to take a picture of their water purification meter and text it to you. Each day, you have to scan pictures from over 500,000 households, and record each reading against the sender phone number. The data is used to detect water quality trends and to dispatch the mobile water quality team to investigate the worst cases across each region. Time is of the essence, but processing each image with Optical Character Recognition (OCR) is time-intensive. With Azure Batch, you can scale out the amount of compute needed to handle this task on a daily basis, saving your non-profit the expense of fixed resources.
Azure Batch is an Azure service that enables you to run large-scale parallel and high-performance computing (HPC) applications efficiently in the cloud.
no need to manage infrastructure
Azure Batch to execute large-scale, high-intensity computation jobs
for running parallel tasks
flexible and scalable compute solution, such as Azure Batch, to provide the computational power
for compute-intensive tasks
heavy workloads can be broken down into separate subtasks and run in parallel
components
Azure Batch account
a Batch account is the container for all Batch resources
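The job and tasks below assume a Batch account and a pool named mypool already exist. A minimal sketch of that setup (account name, region, and pool size are placeholders):
# create a Batch account and log the CLI into it
az batch account create --resource-group mygroup --name mybatchaccount --location westeurope
az batch account login --resource-group mygroup --name mybatchaccount --shared-key-auth
# create a pool of three small Ubuntu nodes
az batch pool create --id mypool --vm-size Standard_A1_v2 --target-dedicated-nodes 3 --image canonical:ubuntuserver:16.04-LTS --node-agent-sku-id "batch.node.ubuntu 16.04"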
# create a job for monitoring
az batch job create \
--id myjob2 \
--pool-id mypool
# create tasks of the job
for i in {1..10}
do
az batch task create \
--task-id mytask$i \
--job-id myjob2 \
--command-line "/bin/bash -c 'echo \$(printenv | grep \AZ_BATCH_TASK_ID) processed by; echo \$(printenv | grep \AZ_BATCH_NODE_ID)'"
done
# check status
az batch task show \
--job-id myjob2 \
--task-id mytask1
# list tasks output
az batch task file list \
--job-id myjob2 \
--task-id mytask5 \
--output table
# create a folder for output and change to this folder
mkdir taskoutputs && cd taskoutputs
# download tasks output
for i in {1..10}
do
az batch task file download \
--job-id myjob2 \
--task-id mytask$i \
--file-path stdout.txt \
--destination ./stdout$i.txt
done
# show content
cat stdout1.txt && cat stdout2.txt
# delete job
az batch job delete --job-id myjob2 -y
Automate business processes
Modern businesses run on multiple applications and services
getting the right data to the right task at the right time impacts efficiency
Azure provides features to build and implement workflows that integrate multiple systems:
Logic Apps
Microsoft Power Automate
WebJobs
Azure Functions
similarities of them
They can all accept inputs. An input is a piece of data or a file that is supplied to the workflow.
They can all run actions. An action is a simple operation that the workflow executes and may often modify data or cause another action to be performed.
They can all include conditions. A condition is a test, often run against an input, that may decide which action to execute next.
They can all produce outputs. An output is a piece of data or a file that is created by the workflow.
In addition, workflows created with these technologies can either start based on a schedule or they can be triggered by some external event.
Design-first approach
Logic Apps
Microsoft Power Automate
Code-first approach
WebJobs
Azure Functions
Logic Apps
to automate, orchestrate, and integrate disparate components of a distributed application.
Visual designer / Json Code Editor
over 200 connectors to external services
If you have an unusual or unique system that you want to call from a Logic App, you can create your own connector if your system exposes a REST API.
Microsoft Power Automate
create workflows even when you have no development or IT Pro experience
The WebJobs SDK only supports C# and the NuGet package manager.
Azure Functions
small pieces of code
pay for the time when the code runs
Azure automatically scales the function
has available template
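A minimal sketch of creating a function app on the pay-per-execution consumption plan (names, runtime, and version are placeholders):
# function app on the serverless consumption plan
az functionapp create --resource-group mygroup --name myfunctionapp --storage-account mystorageaccount --consumption-plan-location westeurope --runtime dotnet --functions-version 3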
Microsoft Power Automate supported flows
Automated: A flow that is started by a trigger from some event. For example, the event could be the arrival of a new tweet or a new file being uploaded.
Button: Use a button flow to run a repetitive task with a single click from your mobile device.
Scheduled: A flow that executes on a regular basis such as once a week, on a specific date, or after 10 hours.
Business process: A flow that models a business process such as the stock ordering process or the complaints procedure.
Azure function available templates
HTTPTrigger. Use this template when you want the code to execute in response to a request sent through the HTTP protocol.
TimerTrigger. Use this template when you want the code to execute according to a schedule.
BlobTrigger. Use this template when you want the code to execute when a new blob is added to an Azure Storage account.
CosmosDBTrigger. Use this template when you want the code to execute in response to new or updated documents in a NoSQL database.
WebJobs for these reasons
You want the code to be a part of an existing App Service application and to be managed as part of that application, for example in the same Azure DevOps environment.
You need close control over the object that listens for events that trigger the code. This object in question is the JobHost class, and you have more flexibility to modify its behavior in WebJobs
design-first comparison
Intended users: Microsoft Power Automate targets office workers and business analysts; Logic Apps targets developers and IT pros.
Intended scenarios: Power Automate is for self-service workflow creation; Logic Apps is for advanced integration projects.
Design tools: Power Automate offers a GUI only (browser and mobile app); Logic Apps offers a browser and Visual Studio designer, and code editing is possible.
Application Lifecycle Management: Power Automate includes testing and production environments; Logic Apps source code can be included in Azure DevOps and source code management systems.
When architecting a solution, we should ask ourselves these questions while designing its monitoring:
how would you diagnose issues with an application
how would you understand its health
what are its choke points
how would you identify them, and what would you do when something breaks
Like the fire drill that must be executed half-yearly or yearly in each company, we have to use the "chaos engineering" technique: intentionally cause breakage in the environments in a controlled manner to test the monitoring, the alerts, the reaction of the architecture, and the resiliency of our solution.
Decide on the right resources and architecture for your product
Choose the appropriate architecture based on your requirements
Know which compute options is right for your workload
Identify the right storage solution that meets your needs
Decide how you’re going to manage all your resources
Automation: The use of software to create repeatable instructions and processes to replace or reduce human interaction with IT systems
Cloud Governance: The people, process, and technology associated with your cloud infrastructure, security, and operations. It involves a framework with a set of policies and standard practices
Infrastructure as Code: The process of managing and provisioning computer resources through human and machine-readable definition files, rather than physical hardware configuration or interactive configuration tools like the AWS console
IT Audit: The examination and evaluation of an organization’s information technology infrastructure, policies and operations
CloudFormation
CloudFormation is an AWS service for creating infrastructure as code.
a template is a YAML (or JSON) file
How to start with CloudFormation
Services -> CloudFormation
Create stack “With new resources (standard)”
Template is ready
Upload a template file
Click “Choose file” button
Select provided YAML file
Next
CloudFormation Template sections
Format version
Description
Parameters
Resources
Outputs
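Once a template exists, deploying it from the CLI can be sketched like this (stack and file names are placeholders):
# validate, create, and inspect a stack from a template file
aws cloudformation validate-template --template-body file://template.yaml
aws cloudformation create-stack --stack-name my-demo-stack --template-body file://template.yaml
aws cloudformation describe-stacks --stack-name my-demo-stack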
Each AWS Account has its own AWS Identity & Access Management (IAM) Service.
If you know Azure: on Microsoft Azure, we have a Subscription. The AWS Account can be seen as the equivalent of the Azure Subscription, with one difference: each AWS Account can have its own IAM users, whereas in Azure we have a central IAM service, called Azure Active Directory (AAD). Each of these services is a huge topic, but we won't do a deep dive right now.
The AWS IAM User can be used
Only for CLI purposes. This user can’t log in to the AWS Portal.
Only for working with the AWS Portal. This user can’t be used for CLI.
Both purposes. This user can be used to log in to the AWS Portal and CLI.
Pipeline User
The first question is why do we need a Pipeline User?
Automated deployment (CI/CD) pipeline and prevent manual or per-click deployment.
We can grant the pipeline user only some specific permissions and audit the logs of this user.
This user can work with AWS Services only via CLI. Therefore it has an Access Key ID and a Key Secret.
If you know Azure: it's used like a Service Principal, where you have a client-id and a client-secret.
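A minimal sketch of creating such a CLI-only user and its access key (the user name and the attached policy are placeholders; scope the policy as narrowly as possible):
# create a CLI-only IAM user for the pipeline
aws iam create-user --user-name pipeline-user
# attach a policy (example managed policy; prefer a narrowly scoped custom policy)
aws iam attach-user-policy --user-name pipeline-user --policy-arn arn:aws:iam::aws:policy/AWSCloudFormationFullAccess
# generate the Access Key ID / Secret Access Key used by the CI/CD system
aws iam create-access-key --user-name pipeline-user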
Expose the API/Service Products for external customers (exposes an OpenAPI endpoint)
Includes a secure API gateway
The Premium tier includes an Azure Traffic Manager
Throttling the requests to prevent resource exhaustion
Set policies
Set Cache
Key concepts
Secure and isolate access to azure resources by using Network Security Group and Application Security Group
This section is only “what should we know about NSG and ASG”. To see the configuration refer to “Configure NSG and ASG“.
By using a Network Security Group (NSG), you can specify which computers can connect to an application server [Source].
– Network Security Group: secures network traffic for virtual machines
– Virtual Network Service Endpoint: controls network traffic to and from Azure services, e.g. storage, database
– Application Security Group: see below
Network security group
filter network traffic to or from azure resources
contains security rules that are configured to allow or deny inbound and outbound traffic.
can be used to filter traffic between virtual machines or subnets, both within a vnet and from the internet.
The allowed IP addresses can be configured in NSG as well.
NSG rules are applied to connections from on-premises to a vnet or from vnet to vnet.
NSG of a subnet is applied to all NIC in this subnet
NSG of subnet and NIC are evaluated separately
NSG on subnet instead of NIC reduces administration and management effort.
Each subnet and NIC can habe only one NSG
NSG supports TCP, UDP, ICMP, and operates at layer 4 of the OSI model.
Vnet and NSG must be in the same region
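A minimal Azure PowerShell sketch for creating an NSG and attaching it to a subnet (the resource group, names, location, and address range are assumptions):
# Create an NSG and associate it with an existing subnet
$nsg = New-AzNetworkSecurityGroup -ResourceGroupName "rg-demo" -Location "westeurope" -Name "nsg-app"
$vnet = Get-AzVirtualNetwork -ResourceGroupName "rg-demo" -Name "vnet-demo"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Subnet1" -AddressPrefix "10.0.1.0/24" -NetworkSecurityGroup $nsg
$vnet | Set-AzVirtualNetwork
The later sketches in this section reuse this $nsg variable.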
Network security group security rules
An NSG contains one or more rules
Rules either allow or deny traffic
Rule properties
Name
Priority [100–4096]
Source [Any|IP Addresses|Service Tag|Application Security Group]
Source port range
Protocol [Any|TCP|UDP|ICMP]
Destination [Any|IP Addresses|Service Tag|Application Security Group]
Destination port range
Action [Allow|Deny]
Rules are evaluated by priority using the 5-tuple (Source, Source Port, Destination, Destination Port, Protocol)
The rule with the lower priority number takes effect, e.g. given 200 (Allow 3389 RDP) and 150 (Deny 3389 RDP), rule 150 wins and RDP is denied.
With NSGs, connections are stateful: return traffic is automatically allowed for the same TCP/UDP session. For example, an inbound rule that allows traffic on port 80 also allows the VM to respond to the request; a corresponding outbound rule is not needed.
Add Inbound rule pane
A service tag can allow or deny traffic to a specific Azure service, either globally or per region. You therefore don't need to know the IP address and port of the service, because Azure resolves them for you.
Microsoft creates the service tags (you cannot create your own).
Some examples of the tags are:
VirtualNetwork – This tag represents all virtual network addresses anywhere in Azure, and in your on-premises network if you’re using hybrid connectivity.
AzureLoadBalancer – This tag denotes Azure’s infrastructure load balancer. The tag translates to the virtual IP address of the host (168.63.129.16) where Azure health probes originate.
Internet – This tag represents anything outside the virtual network address that is publicly reachable, including resources that have public IP addresses. One such resource is the Web Apps feature of Azure App Service.
AzureTrafficManager – This tag represents the IP address for Azure Traffic Manager.
Storage – This tag represents the IP address space for Azure Storage. You can specify whether traffic is allowed or denied. You can also specify if access is allowed only to a specific region, but you can’t select individual storage accounts.
SQL – This tag represents the address for Azure SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL, and Azure SQL Data Warehouse services. You can specify whether traffic is allowed or denied, and you can limit to a specific region.
AppService – This tag represents address prefixes for Azure App Service.
Service Tag
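For example, a service tag can stand in for an address prefix in a rule. A sketch using the Storage tag (rule name, priority, and port are assumptions; $nsg comes from the earlier sketch):
# Allow outbound HTTPS traffic from the virtual network to Azure Storage
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
  -Name "Allow-Storage-Outbound" -Priority 200 -Direction Outbound -Access Allow `
  -Protocol Tcp -SourceAddressPrefix VirtualNetwork -SourcePortRange * `
  -DestinationAddressPrefix Storage -DestinationPortRange 443
Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg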
Scenario: We have a web server in Subnet1 and a SQL Server in Subnet2. The NSG must allow only port 1433 to SQL, as sketched below.
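A sketch of the matching inbound rule (the subnet address ranges are assumptions):
# Allow only SQL traffic (1433) from Subnet1 (10.0.1.0/24) to Subnet2 (10.0.2.0/24)
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
  -Name "Allow-SQL-From-WebServer" -Priority 100 -Direction Inbound -Access Allow `
  -Protocol Tcp -SourceAddressPrefix "10.0.1.0/24" -SourcePortRange * `
  -DestinationAddressPrefix "10.0.2.0/24" -DestinationPortRange 1433
Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg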
Scenario: Suppose your company wants to restrict access to resources in your datacenter, spread across several network address ranges. With augmented rules, you can add all these ranges into a single rule, reducing the administrative overhead and complexity in your network security groups.
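An augmented rule accepts multiple address prefixes and ports in a single rule; a sketch with hypothetical datacenter ranges:
# One rule covering several on-premises address ranges at once
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
  -Name "Allow-Datacenter-Ranges" -Priority 120 -Direction Inbound -Access Allow `
  -Protocol Tcp -SourceAddressPrefix "172.16.0.0/24","172.17.0.0/24","192.168.10.0/24" `
  -SourcePortRange * -DestinationAddressPrefix VirtualNetwork -DestinationPortRange 80,443
Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg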
Network security group default rules
Default rules cannot be deleted or changed, but they can be overridden by rules with a higher priority (i.e. a lower priority number).
NSG Overview
Application Security Group (ASG)
Scenario: your company has a number of front-end servers in a virtual network. The web servers must be accessible over ports 80 and 8080. Database servers must be accessible over port 1433. You assign the network interfaces for the web servers to one application security group, and the network interfaces for the database servers to another application security group. You then create two inbound rules in your network security group. One rule allows HTTP traffic to all servers in the web server application security group. The other rule allows SQL traffic to all servers in the database server application security group.
An application security group lets you configure network security for the resources used by a specific application.
It groups VMs logically, regardless of their IP addresses or the subnets they are assigned to.
Use an ASG within an NSG to apply a security rule to a group of resources; afterwards, you only need to add resources to the ASG.
ASGs let us group network interfaces together, and an ASG can be used as the Source or Destination of an NSG rule, as sketched below.
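A sketch of the web-server half of the scenario above (resource group, names, and location are assumptions; $nsg comes from the earlier sketch):
# Create the ASG and reference it as the destination of an NSG rule
$webAsg = New-AzApplicationSecurityGroup -ResourceGroupName "rg-demo" -Name "webServersAsg" -Location "westeurope"
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
  -Name "Allow-HTTP-To-WebServers" -Priority 110 -Direction Inbound -Access Allow `
  -Protocol Tcp -SourceAddressPrefix Internet -SourcePortRange * `
  -DestinationApplicationSecurityGroup $webAsg -DestinationPortRange 80,8080
Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg
# The web servers' NICs are then added to the ASG on their IP configurations (not shown here).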
Secure and isolate access to Azure resources by using Service Endpoints
Scenario: The agency has created an API to make recent and historical census data available. They want to prevent any unnecessary back-end information from being exposed that could be used in malicious attacks. They would also like to prevent abuse of the APIs in the form of a large volume of requests and need a mechanism to throttle requests if they exceed an allowed amount. They are serving their APIs on the Azure API Management service and would like to implement policies to address these concerns.
Add a policy to the outbound section that removes the X-Powered-By header from responses, as sketched below.
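A sketch of that policy; the documented set-header policy with exists-action="delete" strips the header:
# Outbound section: remove the X-Powered-By header from every response
<outbound>
    <set-header name="X-Powered-By" exists-action="delete" />
</outbound>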
Convert JSON to XML – Converts a request or response body from JSON to XML.
Convert XML to JSON – Converts a request or response body from XML to JSON.
Find and replace string in body – Finds a request or response substring and replaces it with a different substring.
Mask URLs in content – Rewrites links in the response body so that they point to the equivalent link through the gateway. By adding <redirect-content-urls /> to the outbound section, all backend URLs are replaced with the APIM endpoint URL.
Set backend service – Changes the backend service for an incoming request.
Set body – Sets the message body for incoming and outgoing requests.
Set HTTP header – Assigns a value to an existing response or request header, or adds a new response or request header.
Set query string parameter – Adds, replaces the value of, or deletes a request query string parameter.
Rewrite URL – Converts a request URL from its public form to the form expected by the web service.
Transform XML using an XSLT – Applies an XSL transformation to the XML in the request or response body.
Throttling policies
Throttle API requests – A few users might over-use an API to the extent that you incur extra costs or that responsiveness for other users is reduced. You can use throttling to limit access to API endpoints by capping the number of times an API can be called within a specified period of time, e.g. <rate-limit calls="3" renewal-period="15" />. The caller receives a 429 (Too Many Requests) error when that limit is reached.
# applies to all API operations
<rate-limit calls="3" renewal-period="15" />
# target a particular API operation
<rate-limit calls="number" renewal-period="seconds">
<api name="API name" id="API id" calls="number" renewal-period="seconds" />
<operation name="operation name" id="operation id" calls="number" renewal-period="seconds" />
</api>
</rate-limit>
# Applies the limit to a specified request key, often the client IP address, giving every client equal bandwidth for calling the API
<rate-limit-by-key calls="number"
renewal-period="seconds"
increment-condition="condition"
counter-key="key value" />
# Limit the rate by the request's IP address
<rate-limit-by-key calls="10"
renewal-period="60"
increment-condition="@(context.Response.StatusCode == 200)"
counter-key="@(context.Request.IpAddress)"/>
When you choose to throttle by key, you need to decide on specific requirements for rate limiting. For example, three common ways of specifying the counter-key are:
context.Request.IpAddress – rate limited by client IP address
context.Subscription.Id – rate limited by subscription ID
context.Request.Headers.GetValue("My-Custom-Header-Value") – rate limited by a specified client request header value
Note: The <rate-limit-by-key> policy is not available when your API Management gateway is in the Consumption tier. You can use <rate-limit> instead.