I assume you are familiar with Kubernetes cluster concepts at an elementary level, so I won’t do a deep dive into the elementary components. This post focuses on the following topics:
Azure-related Kubernetes components
Deploying AKS with Terraform
The control plane (Kubernetes core component)
It’s the core of the Kubernetes cluster, no matter on which cloud provider platform you provision the cluster. In AKS the control plane is provided and managed by Azure, and its main OS is Linux based.
Node pool (AKS component)
AKS has two types of Node pools:
System Node Pool: contains the nodes that host the critical system pods which keep the cluster running (the AKS control plane itself is managed by Azure). For high availability it is recommended to have at least 3 nodes in the System Node Pool.
User Node Pool: contains the nodes on which your applications, APIs, apps, or services run. This node pool can have one of the following host operating systems:
Linux
Windows
An AKS cluster can have both Windows- and Linux-based user node pools in parallel. We can use a nodeSelector in the YAML manifest to specify on which user node pool an application should be deployed. See more in the video below.
Note: The important point is that all nodes in a node pool (whether System or User) have the same VM size, because we can specify only one VM size per node pool (see the sketch below).
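Purely as an illustration (all names and sizes below are placeholders, and a Windows pool assumes the cluster was created with a network profile that supports Windows nodes), adding a Windows user node pool with its own VM size and targeting it with a nodeSelector could look roughly like this:
# Hypothetical example: add a Windows user node pool with its own VM size
az aks nodepool add \
  --resource-group myRG \
  --cluster-name myAKSCluster \
  --name winnp \
  --os-type Windows \
  --node-vm-size Standard_D4s_v3 \
  --node-count 2
# Target the Windows pool from a pod manifest via the standard kubernetes.io/os node label
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sample-windows-app
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: sample
    image: mcr.microsoft.com/windows/servercore/iis
EOF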
Node components
Each node in the Node Pool is a VM. Kubernetes uses the following components to orchestrate the nodes and pods that are running on the nodes.
Kubelet: the Kubernetes agent that processes orchestration requests from the control plane and manages the containers (pods) running on the node
Kube-Proxy: manages the node’s networking
Container runtime: allows the container images to run
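A quick way to see these components on a running cluster (a minimal sketch, assuming kubectl is already connected to the AKS cluster):
# List the nodes with their OS image, kernel version, and container runtime version
kubectl get nodes -o wide
# Inspect one node; the output includes the kubelet version, node conditions, and running pods
kubectl describe node <node-name>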
This video walks through the AKS core concepts and components and their implementation in Terraform.
The PowerPoint slides of the video are available here.
AKS security (service principal or managed identity)
The AKS cluster needs access to other Azure resources, e.g. for autoscaling it must be able to expand the VM Scale Set and assign an IP address to the new VM. Therefore the AKS cluster identity needs the Network Contributor RBAC role.
The kubelet needs to pull images from Azure Container Registry, therefore it needs the AcrPull RBAC role.
Only an identity can obtain a role. In Azure, we have two possibilities:
Associate a Service Principal to a Service (old solution in 2022) and give RBAC roles to the service principal.
Assign an identity to a service (new solution in 2022) and give RBAC roles to this identity. Here we have two types of identities:
System Managed Identity: is created automatically, is assigned to a service, and is deleted when the service is deleted.
User Managed Identity: is created by the user, must be assigned to a service by the user, and is not deleted when the service is deleted.
In this video, I explain how to configure the Terraform implementation to assign user managed identities to the AKS cluster and the kubelet. In addition, it explains how to assign RBAC roles to them and which RBAC role should be assigned for which purpose.
The PowerPoint slides of the video are available here.
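The video shows the Terraform implementation; purely as an illustration, the equivalent Azure CLI steps could look roughly like this (all names, IDs, and scopes below are placeholders):
# User managed identities for the control plane and the kubelet
az identity create --resource-group myRG --name aks-identity
az identity create --resource-group myRG --name kubelet-identity
# Create the cluster with both identities assigned
az aks create \
  --resource-group myRG \
  --name myAKSCluster \
  --enable-managed-identity \
  --assign-identity <aks-identity-resource-id> \
  --assign-kubelet-identity <kubelet-identity-resource-id>
# Network Contributor on the subnet, so the cluster can manage IPs on the VM Scale Set
az role assignment create \
  --assignee <aks-identity-principal-id> \
  --role "Network Contributor" \
  --scope <subnet-resource-id>
# AcrPull on the registry, so the kubelet can pull images
az role assignment create \
  --assignee <kubelet-identity-principal-id> \
  --role "AcrPull" \
  --scope <acr-resource-id>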
The onboarding posts go through the important points that an Azure Cloud Solution Architect has to know to get started with certificate preparation. The preparation for the certificate has the following steps:
Get familiar with Azure services
Know the keywords
Get familiar with the key concept of each Azure service
Scenario: Imagine you work for an escalator company that has invested in IoT technology to monitor its product in the field. You oversee the processing of temperature sensor data from the drive gears of the escalators. You monitor the temperature data and add a data flag to indicate when the gears are too hot. In downstream systems, this data helps determine when maintenance is required.
Your company receives sensor data from several locations and from different escalator models. The data arrives in different formats, including batch file uploads, scheduled database pulls, messages on a queue, and incoming data from an event hub. You want to develop a reusable service that can process your temperature data from all these sources. [Source]
An Azure Function has three components, like every function that we develop:
Input(s): defined by a JSON configuration (binding), without writing code.
Logic: the part that you have to develop in the language you like.
Output(s): defined by a JSON configuration (binding), without writing code.
Azure function
Can be considered Function as a Service (FaaS)
A function can be a microservice (but don’t use Azure Functions for long-running workloads)
Auto-scales the infrastructure (scale out or down) based on load
Automatic provisioning by the cloud provider
Use the language of your choice.
Fewer administrative tasks and more focus on business logic
Important characteristics of serverless solutions
Avoid over-allocation of infrastructure (you pay only when the function is running)
Stateless logic (as a workaround, state can be stored in associated storage services)
Event driven (they run only in response to an event, e.g. receiving an HTTP request or a message being added to a queue; no need to write code that listens to or watches the queue). Refer to Azure Function triggers to see a list of supported services.
Drawbacks of serverless solutions
Execution time: a function has a timeout of 5 minutes by default, configurable up to 10 minutes (on the Consumption plan). With Azure Durable Functions we can work around the timeout problem.
Execution frequency: if the function is executed continuously, it’s prudent to host this service on a VM, otherwise it will get expensive.
Function APP
It’s for logically grouping the functions and resources.
Service Plan (Azure Functions is a serverless service, but that doesn’t mean there is no server on which it has to be hosted and run. An Azure Function has a server where it’s hosted and run, but the cloud provider provisions the resources for you.)
Service Plan Types
Consumption Service Plan
Timeout 5-10 min
Automatic scaling
Bills you only while the function is running
App Service Plan (Not serverless anymore)
Avoids timeout periods and allows functions to run continuously
An Azure Function also uses a storage account, e.g. for logging function executions.
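As a minimal sketch (the names below are placeholders), creating a Function App on the Consumption plan together with its backing storage account could look like this:
# Storage account used by the Function App (e.g. for logging executions)
az storage account create --resource-group myRG --name mystorageacct123 --location westeurope --sku Standard_LRS
# Function App on the serverless Consumption plan
az functionapp create \
  --resource-group myRG \
  --name my-function-app \
  --storage-account mystorageacct123 \
  --consumption-plan-location westeurope \
  --runtime dotnet \
  --functions-version 4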
Azure Functions can be tested as well; refer to the screenshot below. For automated tests, use deployment slots and the Deployment Center; they are explained in the next sections.
Use the Monitor option in the screenshot below to check the executions.
Azure function triggers
Blob storage
Microsoft Graph events
Azure Cosmos DB
Queue storage
Event Grid
Service Bus
HTTP
Timer
Azure function binding
An Azure Function can have input and output bindings (a function has exactly one trigger; additional bindings are optional).
Scenario: Let’s say we want to write a new row to Azure Table storage whenever a new message appears in Azure Queue storage. This scenario can be implemented using an Azure Queue storage trigger and an Azure Table storage output binding.
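A minimal sketch of the bindings for this scenario; the binding names, queue name, table name, and connection setting are placeholders:
# function.json: Queue storage trigger (input) plus Table storage output binding
cat > function.json <<'EOF'
{
  "bindings": [
    {
      "name": "queueItem",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "incoming-messages",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "outputRow",
      "type": "table",
      "direction": "out",
      "tableName": "Messages",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
EOF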
This sample code shows a microservice architecture with the “database per service” design pattern. When a new product is added, a message is pushed to the storage queue for each image of the product. Pushing the message(s) to the queue runs the function, which gets the original image(s) from a storage container, generates the thumbnail, and saves it to another container.
When you create a Function App, some resources are created by default.
The following figures demonstrate testing an Azure Function.
Scenario: For example, in the shoe-company scenario we want to monitor social media reaction to our new product. We’ll build a logic app to integrate Twitter, Azure Cognitive Services, SQL Server, and Outlook email.
Azure Logic Apps
Make diverse services work together
Provide pre-built components that can connect to hundreds of services
Steps of designing a logic app
Plan the business process (step based)
Identify the type of each step
Logic apps operations
Trigger -> responds to external events. Triggers are for launching the logic app.
Action -> process or store data
Control action -> make decision based on data
Example:
detect tweets about the product -> Trigger
analyze the sentiment -> Action
If logic -> Control
store a link to positive tweets -> Action
email customer service for negative tweets -> Action
Polling trigger: periodically checks an external service for new data, e.g. checks an RSS feed for new posts. For a polling trigger we have to set the frequency (second, minute, hour) and the interval, e.g. frequency = minute & interval = 5 means the polling trigger runs every 5 minutes.
Polling triggers force you to make a choice between how much they cost and how quickly they respond to new data. There is often a delay between when new data becomes available and when it is detected by the app. The following illustration shows the issue.
In the worst case, the potential delay for detecting new data is equal to the polling interval. So why not use a smaller interval? To check for new data, the Logic Apps execution engine needs to run your app, which means you incur a cost. In general, the shorter the interval, the higher the cost but the quicker you respond to new data. The best polling interval for your logic app depends on your business process and its tolerance for delay.
Polling triggers are perfect for the “route and process data” scenarios.
Push trigger
notifies immediately when data is available e.g. the trigger that detects when a message is added to an Azure Service Bus queue is a push trigger.
Push triggers are implemented using webhooks.
The Logic Apps infrastructure generates a callback URL for you and registers it with the external service when the app is first created and on each later update
Logic Apps de-registers the callback for you as needed e.g. if you disable or delete your app.
The nice thing about push triggers is that they don’t incur any costs polling for data when none is available
If push triggers respond more quickly and cost less than polling triggers, then why not use them all the time? The reason is that not every connector offers a push trigger.
Sometimes the trigger author chose not to implement push and sometimes the external service didn’t support push
When a solution consists of several different services/programs, this solution is a distributed solution. In distributed solutions the components have to communicate with each other via messages.
Even on the same server or in the same data center, loosely coupled architectures require mechanisms for components to communicate. Reliable messaging is often a critical problem.
As the cloud solution architect you have to
understand each individual communication that the components of the application exchange
understand whether the communication sends a message or an event
then you can decide to choose an event-based or message-based architecture
Each communication can use different technologies
In both event-based and message-based communication, there’s a sender and a receiver. But the difference is the content of what they send.
Message
Contains raw data
This data is produced by sender
This data is consumed by receiver
It contains data/payload itself not just the reference to that data
The sender expects that the destination component processes this data in a certain way
E.g. a mobile app expects that the web API saves the sent data to a storage service.
Available technologies
Azure Queue Storage
Azure Service Bus
Message Queue
Topics
Event
Lightweight notification that indicates something has happened
Doesn’t contain raw data
May reference where the data lives
Sender has no expectations of receiver
E.g. a Web API informs the web app or mobile app about a new file.
Available technologies
Azure Event Grid
Azure Event Hubs
Azure queue technologies
This section explains more about Azure Queue Storage, Azure Service Bus queues, and Azure Service Bus topics, and when each technology can be used in a solution.
Azure queue storage
This service is integrated into an Azure storage account
Can contain millions of messages
The queue size is limited only by the capacity of the storage account (see the CLI sketch below)
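A minimal CLI sketch (the account and queue names are placeholders):
# Create a queue inside an existing storage account, then send and peek a message
az storage queue create --name orders --account-name mystorageacct123
az storage message put --queue-name orders --content "hello" --account-name mystorageacct123
az storage message peek --queue-name orders --account-name mystorageacct123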
Azure service bus queue
It’s a message broker system intended for enterprise applications
For higher security requirements
have different data contracts
utilize multiple communication protocols
include both cloud and on-prem services
In the message queues of Azure Queue Storage and Azure Service Bus, each queue has a sender and a subscriber. The subscriber takes the message and processes it as the sender expects.
Both of these services are based on the idea of a “queue” which holds sent messages until the target is ready to receive them.
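Creating a Service Bus queue with the CLI could look roughly like this (names are placeholders):
# Namespace first, then the queue inside it
az servicebus namespace create --resource-group myRG --name my-sb-namespace --sku Standard
az servicebus queue create --resource-group myRG --namespace-name my-sb-namespace --name orders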
Azure service bus topics
They’re like queues
Can have multiple subscribers
Each subscriber receives its own copy of the message
Topics use queues
When you post to a topic, the message is copied and dropped into the queue of each subscription.
The queue means that the message copy will stay around to be processed by each subscription branch even if the component processing that subscription is too busy to keep up.
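As a sketch (names are placeholders), a topic with two subscriptions, where each subscription receives its own copy of every message:
az servicebus topic create --resource-group myRG --namespace-name my-sb-namespace --name product-updates
az servicebus topic subscription create --resource-group myRG --namespace-name my-sb-namespace --topic-name product-updates --name mobile-app
az servicebus topic subscription create --resource-group myRG --namespace-name my-sb-namespace --topic-name product-updates --name audit-log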
Benefits of queues
Increased reliability
For exchanging messages (at times of high demand, messages can simply wait until a destination component is ready to process them)
Message delivery guarantees
There are different message delivery guarantees
At-Least-Once delivery
each message is guaranteed to be delivered to at least one of the components that retrieve messages from the queue
Example: in certain circumstances, it is possible that the same message may be delivered more than once. For example, if there are two instances of a web app retrieving messages from a queue, ordinarily each message goes to only one of those instances. However, if one instance takes a long time to process the message, and a time-out expires, the message may be sent to the other instance as well. Your web app code should be designed with this possibility in mind.
At-Most-Once delivery
each message is not guaranteed to be delivered, and there is a very small chance that it may not arrive.
unlike At-Least-Once delivery, there is no chance that the message will be delivered twice. This is sometimes referred to as “automatic duplicate detection”.
First-In-First-Out (FIFO) delivery
If your distributed application requires that messages are processed in precisely the correct order, you must choose a queue system that includes a FIFO guarantee.
Transactional support
It’s useful, e.g., for e-commerce systems. By clicking the buy button, a series of messages is sent off to different destinations, e.g. an order details system, a total sum and payment details system, and an invoice generation system. If the credit card details message delivery fails, then so will the order details message.
How to decide for a queue technique
Queue Storage
Need audit trail of all messages
Queue may exceed 80 GB
Track processing progress inside queue
It’s for simple solutions
Service bus queue
Need At-Most-Once delivery
Need FIFO guarantee
Need group messages into transactions
Want to receive messages without polling queue
Need Role-based access model to the queue
Need to handle messages larger than 64 KB but smaller than 256 KB
Queue size will not grow larger than 80 GB
Need batches of messages
Service bus topic
If you need multiple receivers to handle each message
Azure event technologies
Scenario: Suppose you have a music-sharing application with a Web API that runs in Azure. When a user uploads a new song, you need to notify all the mobile apps installed on user devices around the world that are interested in that genre [Source]. Event Grid is the perfect solution for this scenario.
Many applications use the publish-subscribe model to notify distributed components that something happened.
Event grid
It’s a one-to-many relationship
Fully-managed event routing service running on top of Azure Service Fabric.
Event Grid distributes events from different sources to different handlers (subscribers); a sketch follows below.
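As a hedged sketch for the music-sharing scenario, subscribing a webhook endpoint to the events of a storage account could look like this (the resource ID and endpoint URL are placeholders):
az eventgrid event-subscription create \
  --name new-song-uploaded \
  --source-resource-id <storage-account-resource-id> \
  --endpoint https://example.com/api/notify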
Scenario: When the caller and the called application are not in the same origin, the CORS policy doesn’t allow the called application/backend to respond to the caller application.
It’s strongly recommended to specify the allowed origins in your backend. In the following video I explain how we can do it.
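Besides the backend code, for an App Service or Functions backend the allowed origins can also be configured with the CLI (a sketch; the app name and origin are placeholders):
# Allow the caller application's origin on the backend and verify the setting
az webapp cors add --resource-group myRG --name my-backend-api --allowed-origins https://www.contoso.com
az webapp cors show --resource-group myRG --name my-backend-api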
To grant access to a subscription, identify the appropriate role to assign to an employee
Scenario: Requirement of the presentation tier is to use in-memory sessions to store the logged user’s profile as the user interacts with the portal. In this scenario, the load balancer must provide source IP affinity to maintain a user’s session. The profile is stored only on the virtual machine that the client first connects to because that IP address is directed to the same server.
Azure RBAC roles vs. Azure AD roles
RBAC roles: apply to Azure resources; their scope covers management groups, subscriptions, resource groups, and resources.
AD roles: apply to Azure AD resources (particularly users, groups, and domains); they have only one scope, the directory.
By default, a Global Administrator doesn’t have access to Azure resources. An Azure AD Global Administrator can elevate their access to manage all Azure subscriptions and management groups; this greater access grants them the Azure RBAC User Access Administrator role for all subscriptions of their directory. Through the User Access Administrator role, the Global Administrator can give other users access to Azure resources.
The Global Administrator for Azure Active Directory (Azure AD) can temporarily elevate their permissions to the Azure role-based access control (RBAC) role of User Access Administrator, which is assigned at root scope (this action grants the Azure RBAC permissions that are needed to manage Azure resources).
Global administrator (AD role) + User Access Administrator (RBAC role) -> can view all resources in, and assign access to, any subscription or management group in that Azure AD organization
As Global Administrator, you might need to elevate your permissions to:
Regain lost access to an Azure subscription or management group.
Grant another user or yourself access to an Azure subscription or management group.
View all Azure subscriptions or management groups in an organization.
Grant an automation app access to all Azure subscriptions or management groups.
Assign a user administrative access to an Azure subscription
To assign a user administrative access to a subscription, you must have Microsoft.Authorization/roleAssignments/write and Microsoft.Authorization/roleAssignments/delete permissions at the subscription scope. Users with the subscription Owner or User Access Administrator role have these permissions.
# Assign the role by using Azure PowerShell
New-AzRoleAssignment `
-SignInName rbacuser@example.com `
-RoleDefinitionName "Owner" `
-Scope "/subscriptions/<subscriptionID>"
# Assign the role by using the Azure CLI
az role assignment create \
--assignee rbacuser@example.com \
--role "Owner" \
--subscription <subscription_name_or_id>
Containerized web application: a web app packaged as a Docker image so that it can be deployed and run from an Azure Container Instance
Azure Container instance
Using Azure Container Instances for a containerized web application
Rapid deployment is key to business agility
Containerization saves time and reduces costs.
Multiple apps can run in their isolated containers on the same hardware.
Scenario: Suppose you work for an online clothing retailer that is planning the development of a handful of internal apps but hasn’t yet decided how to host them. You’re looking for maximum compatibility, and the apps may be hosted on-prem, in Azure or another cloud provider. Some of the apps might share IaaS infrastructure. In these cases, the company requires the apps isolated from each other. Apps can share the hardware resources, but an app shouldn’t be able to interfere with the files, memory space, or other resources used by other apps. The company values the efficiency of its resources and wants something with a compelling app development story. Docker seems an ideal solution to these requirements. With Docker, you can quickly build and deploy an app and run it in its tailored environment, either locally or in the cloud.
To build a customized Docker image for your application, refer to the Docker, container, Kubernetes post. In this post we focus on working with Azure Container Registry.
Azure Container Instance loads and runs Docker images on demand.
The Azure Container Instance service can retrieve the image from a registry such as Docker Hub or Azure Container Registry.
Azure Container Registry
It has a unique URL
These registries are private
Authentication is needed to push/pull images
Pull and push only with the Docker CLI or the Azure CLI
Has a replication feature in the Premium SKU (geo-replicated images)
The Standard SKU doesn’t support replication
After changing the SKU to Premium, geo-replication can be used
#-----------------------------------------------------------
# Deploy a Docker image to an Azure Container Instance
#-----------------------------------------------------------
az login
az account list
az account set --subscription="subscription-id"
az account list-locations --output table
az group create --name mygroup --location westeurope
# Different SKUs provide varying levels of scalability and storage.
az acr create --name parisaregistry --resource-group mygroup --sku [Standard|Premium] --admin-enabled true
# output -> "loginServer": "parisaregistry.azurecr.io"
# for a username and password.
az acr credential show --name parisaregistry
# specify the URL of the login server for the registry.
docker login parisaregistry.azurecr.io --password ":)" --username ":O" # or using--password-stdin
# you must create an alias for the image that specifies the repository and tag to be created in the Docker registry
# The repository name must be of the form <login_server>/<image_name>:<tag>.
docker tag myapp:v1 myregistry.azurecr.io/myapp:v1 # myregistry.azurecr.io/myapp:v1 is the alias for myapp:v1
# Upload the image to the registry in Azure Container Registry.
docker push myregistry.azurecr.io/myapp:v1
# Verify that the image has been uploaded
az acr repository list --name myregistry
az acr repository show --repository myapp --name myregistry
# Dockerfile with azure container registry tasks
FROM node:9-alpine
ADD https://raw.githubusercontent.com/Azure-Samples/acr-build-helloworld-node/master/package.json /
ADD https://raw.githubusercontent.com/Azure-Samples/acr-build-helloworld-node/master/server.js /
RUN npm install
EXPOSE 80
CMD ["node", "server.js"]
After creating the Dockerfile, run the following commands
az acr build --registry $ACR_NAME --image helloacrtasks:v1 .
# Verify the image
az acr repository list --name $ACR_NAME --output table
# Enable the registry admin account
az acr update -n $ACR_NAME --admin-enabled true
az acr credential show --name $ACR_NAME
# Deploy a container with Azure CLI
az container create \
--resource-group learn-deploy-acr-rg \
--name acr-tasks \
--image $ACR_NAME.azurecr.io/helloacrtasks:v1 \
--registry-login-server $ACR_NAME.azurecr.io \
--ip-address Public \
--location <location> \
--registry-username [username] \
--registry-password [password]
az container show --resource-group learn-deploy-acr-rg --name acr-tasks --query ipAddress.ip --output table
# place a container registry in each region where images are run
# This strategy will allow for network-close operations, enabling fast, reliable image layer transfers.
# Geo-replication enables an Azure container registry to function as a single registry, serving several regions with multi-master regional registries.
# A geo-replicated registry provides the following benefits:
# Single registry/image/tag names can be used across multiple regions
# Network-close registry access from regional deployments
# No additional egress fees, as images are pulled from a local, replicated registry in the same region as your container host
# Single management of a registry across multiple regions
az acr replication create --registry $ACR_NAME --location japaneast
az acr replication list --registry $ACR_NAME --output table
Azure Container Registry doesn’t support unauthenticated access and requires authentication for all operations. Registries support two types of identities:
Azure Active Directory identities, including both user and service principals. Access to a registry with an Azure Active Directory identity is role-based, and identities can be assigned one of three roles: reader (pull access only), contributor (push and pull access), or owner (pull, push, and assign roles to other users).
The admin account included with each registry. The admin account is disabled by default.
The admin account provides a quick option to try a new registry. You enable the account and use its username and password in workflows and apps that need access. Once you’ve confirmed the registry works as expected, you should disable the admin account and use Azure Active Directory identities exclusively to ensure the security of your registry.
Azure Container Instance (ACI)
Azure Container Instance service can load an image from Azure Container Registry and run it in Azure
The instance will have an IP address so it can be accessed
A DNS name can be used as a friendly label
The image URL can point to Azure Container Registry or Docker Hub
runs a container in Azure without managing virtual machines and without a higher-level service
Fast startup: Launch containers in seconds.
Per second billing: Incur costs only while the container is running.
Hypervisor-level security: Isolate your application as completely as it would be in a VM.
Custom sizes: Specify exact values for CPU cores and memory.
Persistent storage: Mount Azure Files shares directly to a container to retrieve and persist state.
Linux and Windows: Schedule both Windows and Linux containers using the same API.
The ease and speed of deploying containers in Azure Container Instances makes it a great fit for executing run-once tasks like image rendering or building and testing applications.
provide a DNS name to expose your container to the Internet (the DNS name must be unique)
For scenarios where you need full container orchestration, including service discovery across multiple containers, automatic scaling, and coordinated application upgrades, we recommend Azure Kubernetes Service (AKS).
Create an Azure Container Instance (ACI)
#--------------------------------------------------------------
# Using Azure Container Instance to run a docker image
#--------------------------------------------------------------
# use to generate random dns name
DNS_NAME_LABEL=aci-demo-$RANDOM
# use these images for a quick start or a demo
--image microsoft/aci-helloworld # -> basic Node.js web application on docker hub
--image microsoft/aci-wordcount:latest # -> This container runs a Python script that analyzes the text of Shakespeare's Hamlet, writes the 10 most common words to standard output, and then exits
# create a container instance and start the image running
# for a user friendly url -> --dns-name-label mydnsname
az container create --resource-group mygroup --name ecommerceapiproducts --image parisaregistry.azurecr.io/ecommerceapiproducts:latest --os-type Windows --dns-name-label ecommerceapiproducts --registry-username ":)" --registry-password ":O"
az container create --resource-group mygroup --name myproducts1 --image parisaregistry.azurecr.io/ecommerceapiproducts:latest --os-type Windows --registry-login-server parisaregistry.azurecr.io --registry-username ":)" --registry-password ":O" --dns-name-label myproducts --ports 9000 --environment-variables 'PORT'='9000'
ACI restart-policies
Azure Container Instances has three restart-policy options [Source]:
Always: containers in the container group are always restarted. This policy makes sense for long-running tasks such as a web server. This is the default setting applied when no restart policy is specified at container creation.
Never: containers in the container group are never restarted. The containers run one time only.
OnFailure: containers in the container group are restarted only when the process executed in the container fails (when it terminates with a nonzero exit code). The containers are run at least once. This policy works well for containers that run short-lived tasks.
Azure Container Instances starts the container and then stops it when its process (a script, in this case) exits. When Azure Container Instances stops a container whose restart policy is Never or OnFailure, the container’s status is set to Terminated.
az container create \
--resource-group learn-deploy-aci-rg \
--name mycontainer-restart-demo \
--image microsoft/aci-wordcount:latest \
--restart-policy OnFailure \
--location eastus
az container show \
--resource-group learn-deploy-aci-rg \
--name mycontainer-restart-demo \
--query containers[0].instanceView.currentState.state
az container logs \
--resource-group learn-deploy-aci-rg \
--name mycontainer-restart-demo
ACI: check logs, state, and events
az container delete --resource-group mygroup --name myproducts1
az container logs --resource-group mygroup --name myproducts1
az container attach --resource-group mygroup --name myproducts1
# find the fully qualified domain name of the instance by querying the instance, or in the Azure portal > Azure Container Instance > Overview: FQDN
az container show --resource-group mygroup --name myproducts --query ipAddress.fqdn
# another variant
--query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" \
--out table
# get the status of the container
--query containers[0].instanceView.currentState.state
# Execute a command in your container
az container exec \
--resource-group learn-deploy-aci-rg \
--name mycontainer \
--exec-command /bin/sh
# Monitor CPU and memory usage on your container
CONTAINER_ID=$(az container show \
--resource-group learn-deploy-aci-rg \
--name mycontainer \
--query id \
--output tsv)
az monitor metrics list \
--resource $CONTAINER_ID \
--metric CPUUsage \
--output table
az monitor metrics list \
--resource $CONTAINER_ID \
--metric MemoryUsage \
--output table
By default, Azure Container Instances are stateless.
If the container crashes or stops, all of its state is lost.
To persist state beyond the lifetime of the container, you must mount a volume from an external store.
mount an Azure file share to an Azure container instance so you can store data and access it later
STORAGE_ACCOUNT_NAME=mystorageaccount$RANDOM
az storage account create \
--resource-group learn-deploy-aci-rg \
--name $STORAGE_ACCOUNT_NAME \
--sku Standard_LRS \
--location eastus
# AZURE_STORAGE_CONNECTION_STRING is a special environment variable that's understood by the Azure CLI.
# The export part makes this variable accessible to other CLI commands you'll run shortly.
export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string \
--resource-group learn-deploy-aci-rg \
--name $STORAGE_ACCOUNT_NAME \
--output tsv)
# create a file share
az storage share create --name aci-share-demo
# To mount an Azure file share as a volume in Azure Container Instances, you need these three values:
# The storage account name
# The share name
# The storage account access key
STORAGE_KEY=$(az storage account keys list \
--resource-group learn-deploy-aci-rg \
--account-name $STORAGE_ACCOUNT_NAME \
--query "[0].value" \
--output tsv)
# check the value
echo $STORAGE_KEY
# Deploy a container and mount the file share (mount /aci/logs/ to your file share)
az container create \
--resource-group learn-deploy-aci-rg \
--name aci-demo-files \
--image microsoft/aci-hellofiles \
--location eastus \
--ports 80 \
--ip-address Public \
--azure-file-volume-account-name $STORAGE_ACCOUNT_NAME \
--azure-file-volume-account-key $STORAGE_KEY \
--azure-file-volume-share-name aci-share-demo \
--azure-file-volume-mount-path /aci/logs/
# check the storage
az storage file list -s aci-share-demo -o table
az storage file download -s aci-share-demo -p <filename>
The task of automating, managing, and interacting with a large number of containers is known as orchestration.
Azure Kubernetes Service (AKS) is a complete orchestration service for containers with distributed architectures with multiple containers.
You can move existing applications to containers and run them within AKS.
You can control access via integration with Azure Active Directory (Azure AD) and access Service Level Agreement (SLA)–backed Azure services, such as Azure Database for MySQL for any data needs, via Open Service Broker for Azure (OSBA).
Azure takes care of the infrastructure to run and scale your applications.
A prerequisite for pushing code to a staging environment in Azure App Service is a staging deployment slot
Easily add deployment slots to an App Service web app (for creating a staging environment)
Swap the staging deployment slot with the production slot
The Azure portal provides out-of-the-box continuous integration and deployment with Azure DevOps, GitHub, Bitbucket, FTP, or a local Git repository on your development machine
Mode: the Free tier doesn’t support deployment slots
Deployment slots are listed under the “Deployment slots” menu (see the CLI sketch below)
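A minimal CLI sketch for slots (the web app name is a placeholder; slots require the Standard tier or higher):
# Create a staging slot and later swap it with production
az webapp deployment slot create --resource-group myRG --name my-web-app --slot staging
az webapp deployment slot swap --resource-group myRG --name my-web-app --slot staging --target-slot production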
Continuous integration/deployment support
Connect your web app with any of the above sources and App Service will do the rest for you by automatically syncing your code and any future changes on the code into the web app
with Azure DevOps, you can define your own build and release process that compiles your source code, runs the tests, builds a release, and finally deploys the release into your web app every time you commit the code
out-of-the-box continuous integration and deployment
Automated deployment
Automated deployment, or continuous integration, is a process used to push out new features and bug fixes in a fast and repetitive pattern with minimal impact on end users.
Azure supports automated deployment directly from several sources. The following options are available:
Azure DevOps: You can push your code to Azure DevOps (previously known as Visual Studio Team Services), build your code in the cloud, run the tests, generate a release from the code, and finally, push your code to an Azure Web App.
GitHub: Azure supports automated deployment directly from GitHub. When you connect your GitHub repository to Azure for automated deployment, any changes you push to your production branch on GitHub will be automatically deployed for you.
Bitbucket: With its similarities to GitHub, you can configure an automated deployment with Bitbucket.
OneDrive: Microsoft’s cloud-based storage. You must have a Microsoft Account linked to a OneDrive account to deploy to Azure.
Dropbox: Azure supports deployment from Dropbox, which is a popular cloud-based storage system that is similar to OneDrive.
Manual deployment
There are a few options that you can use to manually push your code to Azure:
Git: App Service web apps feature a Git URL that you can add as a remote repository. Pushing to the remote repository will deploy your app.
az webapp up: webapp up is a feature of the az command-line interface that packages your app and deploys it. Unlike other deployment methods, az webapp up can create a new App Service web app for you if you haven’t already created one.
Zipdeploy: Use az webapp deployment source config-zip to send a ZIP of your application files to App Service. Zipdeploy can also be accessed via basic HTTP utilities such as curl.
Visual Studio: Visual Studio features an App Service deployment wizard that can walk you through the deployment process.
FTP/S: FTP or FTPS is a traditional way of pushing your code to many hosting environments, including App Service.
# using SDK version 3.1.102.
wget -q -O - https://dot.net/v1/dotnet-install.sh | bash -s -- --version 3.1.102
export PATH="~/.dotnet:$PATH"
echo "export PATH=~/.dotnet:\$PATH" >> ~/.bashrc
# create a new ASP.NET Core MVC application
dotnet new mvc --name BestBikeApp
# build and run the application to verify it is complete.
cd BestBikeApp
dotnet run
# output
info: Microsoft.Hosting.Lifetime[0]
Now listening on: https://localhost:5001
info: Microsoft.Hosting.Lifetime[0]
Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
Content root path: /home/user/BestBikeApp
# Deploy with zipdeploy
dotnet publish -o pub
cd pub
zip -r site.zip *
# perform the deployment
az webapp deployment source config-zip \
--src site.zip \
--resource-group learn-6126217c-08a6-4509-a288-2941d4b96a27 \
--name <your-unique-app-name>
background task in an App Service Web App with WebJobs
Automate a task for a Web App that should run in the background without affecting the performance of the Web App
small automated task, which executes automatically in response to some events
Scenario: Suppose you are a senior web developer in a research role for an online luxury watch dealer. You have a production website that uses Azure web apps. You’ve built a small script that checks stock levels and reports them to an external service. You consider this script to be part of the website, but it’s meant to run in the background, not in response to a user’s actions on the site.
You’d like the website and the script code to be closely associated. They should be stored together as part of the same project in the same source control repository. The script may grow and change as the site changes, so you’d like to always deploy them at the same time, to the same set of cloud resources.
WebJobs are a feature of Azure App Service
WebJobs can be used to run any script or console application that can be run on a Windows computer, with some functionality limitations
To run a WebJob, you’ll need an existing Azure App Service web app, web API, or mobile app
You can run multiple WebJobs in a single App Service plan along with multiple apps or APIs.
Your WebJobs can be written as scripts of several different kinds including Windows batch files, PowerShell scripts, or Bash shell scripts
You can upload such scripts and executables directly to the web app in the Azure portal
you can also write WebJobs using a language such as Python or Node.js
This approach enables you to use the WebJobs tools in Visual Studio to ease development.
Types of WebJob
Continuous: starts when it is deployed and continues to run in an endless loop; the code must be written as a loop, e.g. to poll a message queue for new items and process their contents.
Triggered: only starts when scheduled or manually triggered; use this kind of WebJob, for example, to create daily summaries of messages in a queue.
Webjob vs. Azure function (to know more about Azure Serverless Services/Architecture)
The following figure demonstrates what we implement in the following code [Source].
# Define variable
rg=<resource group name>
# create a resource group
az group create --name $rg --location <location>
# Create a virtual network and subnet for application servers and database servers
az network vnet create \
--resource-group $rg \
--name ERP-servers \
--address-prefix 10.0.0.0/16 \
--subnet-name Applications \
--subnet-prefix 10.0.0.0/24
az network vnet subnet create \
--resource-group $rg \
--vnet-name ERP-servers \
--address-prefix 10.0.1.0/24 \
--name Databases
# Create Network Security Group
az network nsg create \
--resource-group $rg \
--name ERP-SERVERS-NSG
# Create virtual machines running Ubuntu (build the AppServer virtual machine)
# NSG is assigned to NIC of the VM
wget -N https://raw.githubusercontent.com/MicrosoftDocs/mslearn-secure-and-isolate-with-nsg-and-service-endpoints/master/cloud-init.yml && \
az vm create \
--resource-group $rg \
--name AppServer \
--vnet-name ERP-servers \
--subnet Applications \
--nsg ERP-SERVERS-NSG \
--image UbuntuLTS \
--size Standard_DS1_v2 \
--admin-username azureuser \
--custom-data cloud-init.yml \
--no-wait \
--admin-password <password>
# build the DataServer virtual machine
az vm create \
--resource-group $rg \
--name DataServer \
--vnet-name ERP-servers \
--subnet Databases \
--nsg ERP-SERVERS-NSG \
--size Standard_DS1_v2 \
--image UbuntuLTS \
--admin-username azureuser \
--custom-data cloud-init.yml \
--admin-password <password>
# To confirm that the virtual machines are running
az vm list \
--resource-group $rg \
--show-details \
--query "[*].{Name:name, Provisioned:provisioningState, Power:powerState}" \
--output table
# To connect to your virtual machines, use SSH directly from Cloud Shell. To do this, you need the public IP addresses that have been assigned to your virtual machines
az vm list \
--resource-group $rg \
--show-details \
--query "[*].{Name:name, PrivateIP:privateIps, PublicIP:publicIps}" \
--output table
# To make it easier to connect to your virtual machines during the rest of this exercise, assign the public IP addresses to variables
APPSERVERIP="$(az vm list-ip-addresses \
--resource-group $rg \
--name AppServer \
--query "[].virtualMachine.network.publicIpAddresses[*].ipAddress" \
--output tsv)"
DATASERVERIP="$(az vm list-ip-addresses \
--resource-group $rg \
--name DataServer \
--query "[].virtualMachine.network.publicIpAddresses[*].ipAddress" \
--output tsv)"
# to check whether you can connect to your AppServer virtual machine
ssh azureuser@$APPSERVERIP -o ConnectTimeout=5
# You'll get a Connection timed out message.
# to check whether you can connect to your DataServer virtual machine
ssh azureuser@$DATASERVERIP -o ConnectTimeout=5
# You'll get the same connection failure message.
Remember that the default rules deny all inbound traffic into a virtual network, unless this traffic is coming from another virtual network. The Deny All Inbound rule blocked the inbound SSH connections
Inbound default rules
Allow VNet Inbound: priority 65000, source IP VIRTUAL_NETWORK, destination IP VIRTUAL_NETWORK, access Allow
Deny All Inbound: priority 65500, source IP *, destination IP *, access Deny
Create a security rule for SSH
# Create a security rule for SSH
az network nsg rule create \
--resource-group $rg \
--nsg-name ERP-SERVERS-NSG \
--name AllowSSHRule \
--direction Inbound \
--priority 100 \
--source-address-prefixes '*' \
--source-port-ranges '*' \
--destination-address-prefixes '*' \
--destination-port-ranges 22 \
--access Allow \
--protocol Tcp \
--description "Allow inbound SSH"
# check whether you can now connect to your AppServer virtual machine
ssh azureuser@$APPSERVERIP -o ConnectTimeout=5
ssh azureuser@$DATASERVERIP -o ConnectTimeout=5
# You will be asked "are you sure to continue?", you answer with yes, and enter password
# for exit enter exit
Create a security rule to prevent web access
Server name and IP address
AppServer: 10.0.0.4
DataServer: 10.0.1.4
# Now add a rule so that AppServer can communicate with DataServer over HTTP, but DataServer can't communicate with AppServer over HTTP
az network nsg rule create \
--resource-group $rg \
--nsg-name ERP-SERVERS-NSG \
--name httpRule \
--direction Inbound \
--priority 150 \
--source-address-prefixes 10.0.1.4 \
--source-port-ranges '*' \
--destination-address-prefixes 10.0.0.4 \
--destination-port-ranges 80 \
--access Deny \
--protocol Tcp \
--description "Deny from DataServer to AppServer on port 80"
# to connect to your AppServer virtual machine, and check if AppServer can communicate with DataServer over HTTP.
ssh -t azureuser@$APPSERVERIP 'wget http://10.0.1.4; exit; bash'
# The response should include a 200 OK message.
# to connect to your DataServer virtual machine, and check if DataServer can communicate with AppServer over HTTP
ssh -t azureuser@$DATASERVERIP 'wget http://10.0.0.4; exit; bash'
# This shouldn't succeed, because you've blocked access over port 80. Press Ctrl+C to stop the command prior to the timeout.
Configure Application Security Group (ASG)
The following figure demonstrates what we implement in this section.
Create an application security group for database servers, so that all servers in this group can be assigned the same settings. You’re planning to deploy more database servers, and want to prevent these servers from accessing application servers over HTTP. By assigning sources in the application security group, you don’t need to manually maintain a list of IP addresses in the network security group. Instead, you assign the network interfaces of the virtual machines you want to manage to the application security group.
# create a new application security group called ERP-DB-SERVERS-ASG
az network asg create \
--resource-group $rg \
--name ERP-DB-SERVERS-ASG
# to associate DataServer with the application security group
az network nic ip-config update \
--resource-group $rg \
--application-security-groups ERP-DB-SERVERS-ASG \
--name ipconfigDataServer \
--nic-name DataServerVMNic \
--vnet-name ERP-servers \
--subnet Databases
# to update the HTTP rule in the ERP-SERVERS-NSG network security group. It should reference the ERP-DB-Servers application security group
az network nsg rule update \
--resource-group $rg \
--nsg-name ERP-SERVERS-NSG \
--name httpRule \
--direction Inbound \
--priority 150 \
--source-address-prefixes "" \
--source-port-ranges '*' \
--source-asgs ERP-DB-SERVERS-ASG \
--destination-address-prefixes 10.0.0.4 \
--destination-port-ranges 80 \
--access Deny \
--protocol Tcp \
--description "Deny from DataServer to AppServer on port 80 using application security group"
# to connect to your AppServer virtual machine, and check if AppServer can communicate with DataServer over HTTP.
ssh -t azureuser@$APPSERVERIP 'wget http://10.0.1.4; exit; bash'
# the response should include a 200 OK message.
# to connect to your DataServer virtual machine, and check if DataServer can communicate with AppServer over HTTP.
ssh -t azureuser@$DATASERVERIP 'wget http://10.0.0.4; exit; bash'
# you should get a Connection timed out message. Press Ctrl+C to stop the command prior to the timeout.
Configure Service Firewall
Storage
Storage has a layered security model
The layered model enables us to secure storage to a specific set of supported networks
To use network security, the network rules must be configured.
Only applications requesting data from over specific networks can access storage.
The application request can pass the network rules, but the application must also have authorization on the storage account
Authorization can be done via a storage account access key or a shared access signature (SAS) token (for blobs and queues), or via Azure Active Directory credentials
Network rules are enforced on all network protocols for Azure Storage, including REST and SMB
How network rules must be configured
Deny access to traffic from all networks (this is done automatically after the first configuration).
Grant access to the traffic of specific VNets (for a secure application boundary).
Then, if needed, grant access to public internet IPs/IP ranges or on-prem networks.
Configure network rules for Azure Portal, Storage Explorer, and AZCopy
VM disk traffic (mount, unmount, disk IO) is not affected by network rules
REST access is affected by network rules
Classic storage accounts don’t support firewalls and VNets (see the CLI sketch below)
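A minimal CLI sketch of these network rules (the account, VNet, subnet, and IP below are placeholders):
# Enable the storage service endpoint on the subnet
az network vnet subnet update --resource-group myRG --vnet-name my-vnet --name my-subnet --service-endpoints Microsoft.Storage
# Deny all traffic by default, then allow the subnet and one public IP
az storage account update --resource-group myRG --name mystorageacct123 --default-action Deny
az storage account network-rule add --resource-group myRG --account-name mystorageacct123 --vnet-name my-vnet --subnet my-subnet
az storage account network-rule add --resource-group myRG --account-name mystorageacct123 --ip-address 203.0.113.10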
Shared Access Signature (SAS)
This access token is not related to securing storage via vnet
The IP address that has some authorization on storage can work with storage again even after configuring network rules.
Discover the services and tools available to automate the deployment and configuration of your Azure infrastructure
Scenario: A clothing manufacturer that’s moving several product design applications to Azure virtual machines. The company needs to scale out to many virtual machines now and in the future. Their current manual process is time consuming and error prone. They want to automate the scale-out process to improve operational abilities. They’re unsure about the tools that are available on Azure to provision compute resources, and where each fits into the overall provisioning process.
Available provisioning solutions are:
Custom scripts (VMs)
Desired State Configuration Extensions (VMs)
Chef Server
Terraform (all resources)
Azure Automation State Configuration
Azure Resource Manager templates (all resources)
Custom Script Extension (VMs)
The custom script extension downloads and runs scripts on VMs
Useful for post-deployment configuration and software installation
Note: Take care if your configuration or management task requires a restart. A custom script extension won’t continue after a restart.
How to extend a Resource Manager template
There are several ways
create multiple templates, each defining one piece of the system (then link or nest them together to build a more complete system)
modify an existing template ( that’s often the fastest way to get started writing your own templates)
Example
Create a VM.
Open port 80 through the network firewall.
Install and configure web server software on your VM.
# Requirements:
# Create a VM.
# Open port 80 through the network firewall.
# Install and configure web server software on your VM.
az vm extension set \
--resource-group $RESOURCEGROUP \
--vm-name SimpleWinVM \
--name CustomScriptExtension \
--publisher Microsoft.Compute \
--version 1.9 \
--settings '{"fileUris":["https://raw.githubusercontent.com/MicrosoftDocs/mslearn-welcome-to-azure/master/configure-iis.ps1"]}' \
--protected-settings '{"commandToExecute": "powershell -ExecutionPolicy Unrestricted -File configure-iis.ps1"}' # the script to enable IIS
# This is the content of the configure-iis.ps1 file
#--------------------------------------------------------------
# Install IIS.
dism /online /enable-feature /featurename:IIS-WebServerRole
# Set the home page.
Set-Content `
-Path "C:\\inetpub\\wwwroot\\Default.htm" `
-Value "<html><body><h2>Welcome to Azure! My name is $($env:computername).</h2></body></html>"
#--------------------------------------------------------------
Use Chef’s knife tool to deploy virtual machines and simultaneously apply recipes to them. You install the knife tool on your admin workstation, which is the machine where you create policies and execute commands. Then run your knife commands from your admin workstation.
# The following example shows how a knife command can be used to create a virtual machine on Azure. The command
# simultaneously applies a recipe that installs a web server on the machine.
knife azurerm server create `
--azure-resource-group-name rg-chefdeployment `
--azure-storage-account store `
--azure-vm-name chefvm `
--azure-vm-size 'Standard_DS2_v2' `
--azure-service-location 'eastus' `
--azure-image-reference-offer 'WindowsServer' `
--azure-image-reference-publisher 'MicrosoftWindowsServer' `
--azure-image-reference-sku '2016-Datacenter' `
--azure-image-reference-version 'latest' `
-x myuser `
-P yourPassword `
--tcp-endpoints '80,3389' `
--chef-daemon-interval 1 `
-r "recipe[webserver]"
You can also use the Chef extension to apply recipes to the target machines. The following example defines a Chef extension for a virtual machine in an Azure Resource Manager template. It points to a Chef server by using the chef_server_url property. It points to a recipe to run on the virtual machine to put it in the desired state.
A recipe might look like the one that follows. The recipe installs an IIS web server.
#install IIS on the node.
powershell_script 'Install IIS' do
action :run
code 'add-windowsfeature Web-Server'
end
service 'w3svc' do
action [ :enable, :start ]
end
Comparison: custom script extensions vs. Azure Desired State Configuration (DSC) extensions vs. Azure Automation State Configuration vs. Resource Manager templates
Ease of setup
Custom script extensions: built into the Azure portal, so setup is easy.
DSC extensions: configurations are easy to read, update, and store. Configurations define what state you want to achieve; the author doesn’t need to know how that state is reached.
Automation State Configuration: isn’t difficult to set up, but it requires the user to be familiar with the Azure portal.
Resource Manager templates: you can create them easily. You have many templates available from the GitHub community, which you can use or build upon. Alternatively, you can create your own templates from the Azure portal.
Management
Custom script extensions: can get tricky as your infrastructure grows and you accumulate different custom scripts for different resources.
DSC extensions: democratize configuration management across servers.
Automation State Configuration: the service manages all of the virtual machines for you automatically. Each virtual machine can send you detailed reports about its state, which you can use to draw insights from this data. Automation State Configuration also helps you to manage your DSC configurations more easily.
Resource Manager templates: management is straightforward because you manage JavaScript Object Notation (JSON) files.
Interoperability
Custom script extensions: can be added into an Azure Resource Manager template; can also be deployed through Azure PowerShell or the Azure CLI.
DSC extensions: are used with Azure Automation State Configuration. They can be configured through the Azure portal, Azure PowerShell, or Azure Resource Manager templates.
Automation State Configuration: requires DSC configurations. It works with your Azure virtual machines automatically, and any virtual machines that you have on-premises or on another cloud provider.
Resource Manager templates: you can use other tools to provision them, such as the Azure CLI, the Azure portal, PowerShell, and Terraform.
Configuration language
Custom script extensions: write scripts by using many types of commands, e.g. PowerShell or Bash.
DSC extensions: PowerShell.
Automation State Configuration: PowerShell.
Resource Manager templates: JSON.
Limitations and drawbacks
Custom script extensions: aren’t suitable for long-running scripts or scripts that need reboots.
DSC extensions: only use PowerShell to define configurations. If you use DSC without Azure Automation State Configuration, you have to take care of your own orchestration and management.
Automation State Configuration: uses PowerShell only.
Resource Manager templates: JSON has a strict syntax and grammar, and mistakes can easily render a template invalid. The requirement to know all of the resource providers in Azure and their options can be onerous.
Scenario for custom script: The organization you work for has been given a new contract to work for a new client. They have a handful of virtual machines that run on Azure. The development team decides they need to install a small application they’ve written to help increase their team’s productivity and make sure they can meet new deadlines. This application doesn’t require a restart.
Custom script advantages: The custom script extension is good for small configurations after provisioning. It’s also good if you need to add or update some applications on a target machine quickly, and it’s well suited for ad-hoc cross-platform scripting.
Scenario for Azure Desired State Configuration: The organization you work for is testing a new application, which requires new virtual machines to be identical so that the application can be accurately tested. The company wants to ensure that the virtual machines have the exact same configuration settings. You notice that some of these settings require multiple restarts of each virtual machine. Your company wants a singular state configuration for all machines at the point of provisioning. Any error handling to achieve the state should be abstracted as much as possible from the state configuration. Configurations should be easy to read.
Azure Desired State Configuration advantages: DSC is easy to read, update, and store. DSC configurations help you declare the state your machines should be in at the point they are provisioned, rather than having instructions that detail how to put the machines in a certain state. Without Azure Automation State Configuration, you have to manage your own DSC configurations and orchestration. DSC can achieve more when it’s coupled with Azure Automation State Configuration.
Scenario for Azure Automation State Configuration: You learn that the company you work for wants to be able to create hundreds of virtual machines, with identical configurations. They want to report back on these configurations. They want to be able to see which machines accept which configurations without problems. They also want to see those problems when a machine doesn’t achieve a desired state. In addition, they want to be able to feed all of this data into a monitoring tool so they can analyze all of the data and learn from it.
Azure Automation State Configuration advantages: The Azure Automation State Configuration service is good for automating your DSC configurations, along with the management of machines that need those configurations, and getting centralized reporting back from each machine. You can use DSC without Azure Automation State Configuration, particularly if you want to administer a smaller number of machines. For larger and more complicated scenarios that need orchestration, Azure Automation State Configuration is the solution you need. All of the configurations and features that you need can be pushed to all of the machines, and applied equally, with minimal effort.
Scenario for ARM Templates: Each developer should be able to automatically provision an entire group of virtual machines that are identical to what everyone else on the team creates. The developers want to be sure they’re all working in the same environment. The developers are familiar with JSON, but they don’t necessarily know how to administer infrastructure. They need to be able to provision all of the resources they need to run these virtual machines in an easy and rapid manner.
ARM Template advantages: Resource Manager templates can be used for small ad-hoc infrastructures. They’re also ideal for deploying larger infrastructures with multiple services along with their dependencies. Resource templates can fit well into developers’ workflows. You use the same template to deploy your application repeatedly during every stage of the application lifecycle.
Third-party solution comparison: Chef vs. Terraform
Ease of setup
Chef: the Chef server runs on the master machine, and Chef clients run as agents on each of your client machines. You can also use hosted Chef and get started much faster, instead of running your own server.
Terraform: download the version that corresponds with your operating system and install it.
Management
Chef: can be difficult because it uses a Ruby-based domain-specific language. You might need a Ruby developer to manage the configuration.
Terraform: configuration files are designed to be easy to manage.
Interoperability
Chef: the Chef server only works under Linux and Unix, but the Chef client can run on Windows.
Terraform: supports Azure, Amazon Web Services, and Google Cloud Platform.
Configuration language
Chef: uses a Ruby-based domain-specific language.
Terraform: uses HashiCorp Configuration Language (HCL). You can also use JSON.
Limitations and drawbacks
Chef: the language can take time to learn, especially for developers who aren’t familiar with Ruby.
Terraform: because Terraform is managed separately from Azure, you might find that you can’t provision some types of services or resources.
Scenario for Chef Server: Your organization has decided to let the developers create some virtual machines for their own testing purposes. The development team knows various programming languages and recently started writing Ruby applications. They’d like to scale these applications and run them on test environments. They’re familiar with Linux. The developers run only Linux-based machines and destroy them after testing is finished.
Chef Server advantages: Chef is suitable for large-scale infrastructure deployment and configuration. Chef makes it easy for you to automate the deployment of an entire infrastructure, such as in the workflow of a development team.
Scenario for Terraform: Your organization has gained a new client who wants to create multiple virtual machines across several cloud providers. The client has asked you to create three new virtual machines in Azure and one other in the public cloud. The client wants the virtual machines to be similar. They should be created by using a script that works with both providers. This approach will help the client have a better idea of what they’ve provisioned across providers.
Terraform advantages: With Terraform, you can plan the infrastructure as code and see a preview of what the code will create. You can have that code peer reviewed to minimize errors in configuration. Terraform supports infrastructure configurations across different cloud service providers.
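A minimal sketch of the plan/preview/apply workflow described above (run from a directory that already contains the Terraform configuration for the VMs; the plan file name is an assumption):
# Download the required providers (e.g. azurerm)
terraform init
# Preview what the code will create and save the plan so it can be peer reviewed
terraform plan -out main.tfplan
# Apply exactly the reviewed plan
terraform apply main.tfplan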
Example
# Source : https://docs.microsoft.com/en-us/learn/modules/choose-compute-provisioning/5-exercise-deploy-template
# Clone the configuration and template
git clone https://github.com/MicrosoftDocs/mslearn-choose-compute-provisioning.git
cd mslearn-choose-compute-provisioning
code Webserver.ps1
# file content
Configuration Webserver
{
    param ($MachineName)

    Node $MachineName
    {
        # Install the IIS role
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }

        # Install ASP.NET 4.5
        WindowsFeature ASP
        {
            Ensure = "Present"
            Name   = "Web-Asp-Net45"
        }

        WindowsFeature WebServerManagementConsole
        {
            Name   = "Web-Mgmt-Console"
            Ensure = "Present"
        }
    }
}
# configure template
code template.json
# replace the modulesUrl parameter in template.json
"modulesUrl": {
    "type": "string",
    "metadata": {
        "description": "URL for the DSC configuration module."
    }
},
# Validate your template
az deployment group validate \
--resource-group learn-46d7acf0-e3c7-48c8-9416-bf9f3875659c \
--template-file template.json \
--parameters vmName=hostVM1 adminUsername=serveradmin
# Deploy your template
az deployment group create \
--resource-group learn-46d7acf0-e3c7-48c8-9416-bf9f3875659c \
--template-file template.json \
--parameters vmName=hostVM1 adminUsername=serveradmin
az resource list \
--resource-group learn-46d7acf0-e3c7-48c8-9416-bf9f3875659c \
--output table \
--query "[*].{Name:name, Type:type}"
echo http://$(az vm show \
--show-details \
--resource-group learn-46d7acf0-e3c7-48c8-9416-bf9f3875659c \
--name hostVM1 \
--query publicIps \
--output tsv)
PowerShell
New-AzResourceGroup -Name <resource-group-name> -Location <resource-group-location> # use this command when you need to create a new resource group for your deployment
New-AzResourceGroupDeployment -ResourceGroupName <resource-group-name> -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.json
CLI
az group create --name <resource-group-name> --location <resource-group-location> #use this command when you need to create a new resource group for your deployment
az deployment group create --resource-group <my-resource-group> --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.json
Example
# define parameters for ARM template
RESOURCEGROUP=learn-quickstart-vm-rg
LOCATION=eastus
USERNAME=azureuser
PASSWORD=$(openssl rand -base64 32)
DNS_LABEL_PREFIX=mydeployment-$RANDOM # referenced below; must be a globally unique DNS label (this value is only an example)
# create resource group
az group create --name $RESOURCEGROUP --location $LOCATION
# validate the template
az deployment group validate \
--resource-group $RESOURCEGROUP \
--template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.json" \
--parameters adminUsername=$USERNAME \
--parameters adminPassword=$PASSWORD \
--parameters dnsLabelPrefix=$DNS_LABEL_PREFIX
# deploy the template
az deployment group create \
--name MyDeployment \
--resource-group $RESOURCEGROUP \
--template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.json" \
--parameters adminUsername=$USERNAME \
--parameters adminPassword=$PASSWORD \
--parameters dnsLabelPrefix=$DNS_LABEL_PREFIX
# verify the deployment
az deployment group show \
--name MyDeployment \
--resource-group $RESOURCEGROUP
# list the vms
az vm list \
--resource-group $RESOURCEGROUP \
--output table
This section presents Azure Storage best practices.
Call the Storage REST API
The Storage REST API can be called over HTTP/HTTPS as shown below. The response body is XML, so the pre-built client libraries are helpful for working with the output instead of parsing the XML yourself.
GET https://[url-for-service-account]/?comp=list&include=metadata
# A custom domain can be used as well. The default service endpoints are:
# https://[StorageName].blob.core.windows.net/
# https://[StorageName].queue.core.windows.net/
# https://[StorageName].table.core.windows.net/
# https://[StorageName].file.core.windows.net/
Access key & API endpoint: each storage account has its own access keys, which grant full access to the account.
Shared Access Signature (SAS): grants fine-grained, time-limited permissions (see the sketch below).
How to secure the authentication values
Keep keys and connection strings out of the code, e.g. in key/value configuration such as the appsettings.json file used below.
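A minimal sketch of generating a SAS token with the Azure CLI (the account and container names are reused from the example further below; the permissions and expiry are arbitrary assumptions, and the account key is assumed to be available, e.g. via AZURE_STORAGE_KEY):
# Generate a read-only SAS token for the photoblobs container, valid until the given expiry
az storage container generate-sas \
--account-name parisalsnstorage \
--name photoblobs \
--permissions r \
--expiry 2030-01-01T00:00Z \
--output tsv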
Best Practice 1
Scenario
You’re building a photo-sharing application. Every day, thousands of users take pictures and rely on your application to keep them safe and make them accessible across all their devices. Storing these photos is critical to your business, and you would like to ensure that the system used in your application is fast, reliable, and secure. Ideally, this would be done without you having to build all these aspects into the app. [Source]
# Create an Azure Storage
az storage account create \
--resource-group learn-242f907f-37b3-454d-a023-dae97958e5d9 \
--kind StorageV2 \
--sku Standard_LRS \
--access-tier Cool \
--name parisalsnstorage
# Get the ConnectionString of the Storage
az storage account show-connection-string \
--resource-group learn-242f907f-37b3-454d-a023-dae97958e5d9 \
--name parisalsnstorage \
--query connectionString \
--output tsv
2. Create an Application
# Create a DotNet Core Application
# Create the project in a specific folder with -o / --output <folder-name>
dotnet new console --name PhotoSharingApp
# Change to project folder
cd PhotoSharingApp
# Run the project
dotnet run
# Create an appsettings.json file. The Storage connection string is kept here.
# This is the simple version
touch appsettings.json
3. Configure Application
# Add Azure Storage NuGet Package
dotnet add package WindowsAzure.Storage
# Run to test the project
dotnet run
# Edit the appsettings.json
code .
After the appsettings.json file is opened in the editor, change its content as follows:
{
"StorageAccountConnectionString": "The Storage Connection String must be placed here"
}
The next file to be changed is Program.cs. Replace its content as follows:
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;
using Microsoft.WindowsAzure.Storage;

namespace PhotoSharingApp
{
    class Program
    {
        static async Task Main(string[] args)
        {
            // Read the connection string from appsettings.json
            var builder = new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile("appsettings.json");
            var configuration = builder.Build();
            var connectionString = configuration["StorageAccountConnectionString"];

            // Simplest way to initialize the object model: CloudStorageAccount.TryParse or .Parse
            if (!CloudStorageAccount.TryParse(connectionString, out CloudStorageAccount storageAccount))
            {
                Console.WriteLine("Unable to parse connection string");
                return;
            }

            // Create the Blob container if it doesn't already exist
            var blobClient = storageAccount.CreateCloudBlobClient();
            var blobContainer = blobClient.GetContainerReference("photoblobs");
            bool created = await blobContainer.CreateIfNotExistsAsync();
            Console.WriteLine(created ? "Created the Blob container" : "Blob container already exists.");
        }
    }
}
Best Practice 2
Best Practice n
I’m working on the content..it will be published soon 🙂
Traffic Manager: provides DNS load balancing to your application, so you improve your ability to distribute your application around the world. Use Traffic Manager to improve the performance and availability of your application.
Application Gateway vs. Traffic Manager: Traffic Manager only directs clients to the IP address of the service they want to reach; it never sees the traffic itself. Application Gateway, in contrast, sits in the request path and does see (and can route based on) the traffic.
Load balancing the web service with the application gateway
Improve application resilience by distributing the load across multiple servers and using path-based routing to direct web traffic.
Application Gateway operates at Layer 7 (application layer, HTTP/HTTPS)
Scenario: you work for the motor vehicle department of a governmental organization. The department runs several public websites that enable drivers to register their vehicles and renew their driver’s licenses online. The vehicle registration website has been running on a single server and has suffered multiple outages because of server failures.
Link to a sample code – Terraform implementation of Azure Application Gateway – Terraform implementation of Azure Application Gateway’s Backend pool with VM – Terraform implementation of Azure Application Gateway’s HTTPS with Keyvault as Certificate Store
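As a rough CLI sketch (all names, the subnet, and the backend addresses are assumptions, and flag availability can vary slightly between Azure CLI versions), a basic Application Gateway that load balances HTTP traffic to two web servers can look like this:
# Create an Application Gateway with a public frontend and two backend servers
az network application-gateway create \
--resource-group my-rg \
--name myAppGateway \
--sku Standard_v2 \
--capacity 2 \
--vnet-name myVNet \
--subnet appGatewaySubnet \
--public-ip-address myAppGwPublicIP \
--frontend-port 80 \
--http-settings-port 80 \
--http-settings-protocol Http \
--servers 10.0.1.4 10.0.1.5 \
--priority 100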
Load balancing with Azure Load Balancer
Azure Load Balancer helps make applications resilient to failure and easy to scale
Azure Load Balancer operates at Layer 4 (transport layer, TCP/UDP)
The LB automatically spreads/distributes requests across multiple VMs and services, so users are still served even when a VM fails
The LB provides high availability
The LB uses a hash-based distribution algorithm (5-tuple)
The 5-tuple hash maps traffic to the available servers (Source IP, Source Port, Destination IP, Destination Port, Protocol Type)
Supports both inbound and outbound scenarios
Low latency, high throughput; scales up to millions of flows for all TCP and UDP applications
Isn’t a physical instance, only a configuration object for the infrastructure
For high availability, combine the LB with an availability set (protects against hardware failure) or availability zones (protect against data center failure)
Scenario: You work for a healthcare organization that’s launching a new portal application in which patients can schedule appointments. The application has a patient portal and web application front end and a business-tier database. The database is used by the front end to retrieve and save patient information. The new portal needs to be available around the clock to handle failures. The portal must adjust to fluctuations in load by adding and removing resources to match the load. The organization needs a solution that distributes work to virtual machines across the system as virtual machines are added. The solution should detect failures and reroute jobs to virtual machines as needed. Improved resiliency and scalability help ensure that patients can schedule appointments from any location [Source].
Link to a sample code to deploy simple Nginx web servers with Availability Set and Public Load Balancer.
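A minimal sketch of the same idea with the Azure CLI (resource, frontend, and pool names are assumptions; the backend VMs/NICs would still have to be added to the pool separately):
# Create a Standard public load balancer with a frontend, a backend pool, a health probe, and a rule
az network lb create \
--resource-group my-rg \
--name myLoadBalancer \
--sku Standard \
--public-ip-address myPublicIP \
--frontend-ip-name myFrontEnd \
--backend-pool-name myBackEndPool
# Health probe used to detect failed instances
az network lb probe create \
--resource-group my-rg \
--lb-name myLoadBalancer \
--name myHealthProbe \
--protocol tcp \
--port 80
# Rule that distributes TCP port 80 traffic across the backend pool
az network lb rule create \
--resource-group my-rg \
--lb-name myLoadBalancer \
--name myHTTPRule \
--protocol tcp \
--frontend-port 80 \
--backend-port 80 \
--frontend-ip-name myFrontEnd \
--backend-pool-name myBackEndPool \
--probe-name myHealthProbe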
Load Balancer SKU
Basic Load Balancer
Port forwarding
Automatic reconfiguration
Health Probe
Outbound connections through source network address translation (SNAT)
Diagnostics through Azure log analytics for public-facing load balancers
Can be used only with availability set
Standard Load Balancer
Supports all the basic LB features
HTTPS health probes
Availability zones
Diagnostics through Azure Monitor, with multidimensional metrics
High availability (HA) ports
Outbound rules
Guaranteed SLA (99.99% for two or more VMs)
Load Balancer Types
Internal LB
distributes the load from internal Azure resources to other Azure resources
no traffic from the internet is allowed
External/Public LB
Distributes client traffic across multiple VMs
Permits traffic from the internet (browsers, mobile apps, other resources)
The public LB maps the public IP and port of incoming traffic to the private IP address and port number of a VM in the back-end pool
Distributes traffic by applying the load-balancing rules
Distribution modes
By default, the LB distributes traffic equally among the VMs
Distribution modes let you change that default behavior
When you create the load balancer endpoint, you must specify the distribution mode in the load balancer rule
Prerequisites for load balancer rule
must have at least one backend
must have at least one health probe
Five tuple hash
the default mode of the LB
Because the source port is included in the hash and can change for each session, a client might be directed to a different VM for each session.
Source IP affinity / Session affinity / Client IP affinity
this distribution mode is also known as session affinity or client IP affinity
to map traffic to the servers, a 2-tuple hash (Source IP, Destination IP) or a 3-tuple hash (Source IP, Destination IP, Protocol) is used
the hash ensures that requests from a specific client are always sent to the same VM (a minimal CLI sketch follows the scenarios below)
Scenario: Remote Desktop Gateway is incompatible with the 5-tuple hash, so source IP affinity must be used instead.
Scenario: for uploading media files this distribution must be used because for uploading a file the same TCP session is used to monitor the progress and a separate UDP session uploads the file.
Scenario: The requirement of the presentation tier is to use in-memory sessions to store the logged user’s profile as the user interacts with the portal. In this scenario, the load balancer must provide source IP affinity to maintain a user’s session. The profile is stored only on the virtual machine that the client first connects to because that IP address is directed to the same server.
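A minimal sketch (reusing the hypothetical rule names from the load-balancer sketch above) of switching an existing rule to source IP affinity:
# Change the rule's distribution mode from the 5-tuple default to source IP affinity (2-tuple)
az network lb rule update \
--resource-group my-rg \
--lb-name myLoadBalancer \
--name myHTTPRule \
--load-distribution SourceIP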
Enhance service availability and data locality with Traffic Manager
Scenario: a company that provides a global music streaming web application. You want your customers, wherever they are in the world, to experience near-zero downtime. The application needs to be responsive. You know that poor performance might drive your customers to your competitors. You’d also like to have customized experiences for customers who are in specific regions for user interface, legal, and operational reasons. Your customers require 24×7 availability of your company’s streaming music application. Cloud services in one region might become unavailable because of technical issues, such as planned maintenance or scheduled security updates. In these scenarios, your company wants to have a failover endpoint so your customers can continue to access its services.
traffic manager is a DNS-based traffic load balancer
Traffic Manager distributes traffic to different regions for high availability, resilience, and responsiveness
it resolves the DNS name of the service as an IP address (directs to the service endpoint based on the rules of the traffic routing method)
it is not a proxy or a gateway
it doesn’t see the traffic that a client sends to a server
it only gives the client the IP address of where they need to go
it’s always created as a global resource, so no location/region can be specified
Traffic Manager Profile’s routing methods
each profile has only one routing method
Weighted routing
distributes traffic across a set of endpoints, either evenly or based on different weights
weights between 1 to 1000
for each DNS query received, the traffic manager randomly chooses an available endpoint
probability of choosing an endpoint is based on the weights assigned to endpoints
Performance routing
with endpoints in different geographic locations, the user is sent to the endpoint with the best performance (lowest latency)
it uses an internet latency table, which actively tracks network latencies to the endpoints
Geographic routing
based on where the DNS query originated, the specific endpoint of the region is sent to the user
it’s good for geo-fencing content, e.g. for countries with specific terms and conditions or regional compliance requirements
Multivalue routing
to obtain multiple healthy endpoints in a single DNS query
caller can make client-side retries if endpoint is unresponsive
it can increase availability of service and reduce latency associated with a new DNS query
Subnet routing
maps a set of user ip addresses to specific endpoints e.g. can be used for testing an app before release (internal test), or to block users from specific ISPs.
Priority routing
the Traffic Manager profile contains a prioritized list of service endpoints; traffic goes to the highest-priority endpoint that is healthy and fails over to the next one when it isn’t
Traffic Manager Profile’s endpoints
endpoint is the destination location that is returned to the client
Types are
Azure endpoints: for services hosted in azure
Azure App Service
Public IP resources that are associated with load balancers or VMs
External endpoints
for IPv4/IPv6 addresses
FQDNs
for services hosted outside Azure, either on-premises or with another cloud provider
Nested endpoints: are used to combine Traffic Manager profiles to create more flexible traffic-routing schemes to support the needs of larger, more complex deployments.
Endpoints Types/Targets
Each Traffic Manager profile can have several endpoints of different types (see the sketch below).
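As a rough sketch (the profile name, DNS prefix, and target resource are assumptions), creating a priority-routed profile with one Azure endpoint can look like this:
# Create a Traffic Manager profile that uses the Priority routing method
az network traffic-manager profile create \
--resource-group my-rg \
--name myTrafficManagerProfile \
--routing-method Priority \
--unique-dns-name my-streaming-app-12345 \
--ttl 30 \
--protocol HTTP \
--port 80 \
--path "/"
# Add an Azure endpoint (e.g. a web app) as the highest-priority target
az network traffic-manager endpoint create \
--resource-group my-rg \
--profile-name myTrafficManagerProfile \
--name primaryEndpoint \
--type azureEndpoints \
--target-resource-id <web-app-resource-id> \
--priority 1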
A resource group is the unit for managing resources in Azure.
How to create a resource group: search for “Resource groups” in the portal > use the Create/Add button > fill in the form and create the resource group.
AWS
Coming soon…
GCP
A project is the unit for managing resources in GCP.
How to create a project: search for the “Manage resources” page > use the Create Project button to create a new project > you can select an organization if you’re not on the free trial (see the CLI sketch below).
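A minimal sketch of the same step with the gcloud CLI (the project ID and organization ID are assumptions):
# Create a new GCP project; --organization can be omitted on a free-trial account
gcloud projects create my-sample-project-12345 --organization=123456789012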