When a policy is set on an organization (top) node, all descendants of that node inherit the policy by default. If you set a policy at the root organization node or root account, the restrictions defined by that policy are passed down through all descendant folders, projects, services, and resources.
My opinion
AWS advantage: In some scenarios it is necessary to have only one VPC for the whole organization, and the projects must use this VPC from different accounts. This is possible in AWS because we have cross-account resource sharing (as sketched below). In Azure, a single VNet cannot be shared between two subscriptions (although VNets in different subscriptions can be peered); in GCP, sharing a VPC between projects requires the Shared VPC feature, with a host project and service projects inside the same organization.
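As an illustration, here is a minimal sketch, assuming Python and boto3, of sharing a subnet of a central VPC with a second account through AWS Resource Access Manager (RAM). The subnet ARN, account ID, and share name are placeholders, not values from a real setup.

```python
import boto3

# RAM is the service behind cross-account resource sharing.
ram = boto3.client("ram", region_name="eu-central-1")

response = ram.create_resource_share(
    name="shared-vpc-subnets",  # hypothetical share name
    resourceArns=[
        # Placeholder ARN of a subnet in the central VPC (owner account).
        "arn:aws:ec2:eu-central-1:111111111111:subnet/subnet-0abc1234"
    ],
    principals=["222222222222"],    # consumer account ID (placeholder)
    allowExternalPrincipals=False,  # keep sharing inside the AWS Organization
)
print(response["resourceShare"]["resourceShareArn"])
```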
Configure access to the resources, e.g., servers
Be responsible for operating system hardening of the servers
Ensure the disk volumes are encrypted
Determine the identity and access permissions of specific resources
Who should take care of security?
In companies that run services and applications in the cloud, the responsible teams have to have sufficient knowledge of cloud security.
Developers and Enterprise Architects
Ensure the cloud services they use are designed and deployed with security in mind.
DevOps and SRE Teams
Ensure security is introduced into the infrastructure build pipeline and that the environments remain secure post-production.
InfoSec Team
Secure systems
In which phase of the project does security have to be applied?
When architecting a solution and designing its monitoring, we should ask ourselves these questions:
How would you diagnose issues with the application?
How would you understand its health?
What are its choke points?
How would you identify them, and what would you do when something breaks?
Like the fire drill that must be carried out every six or twelve months in each company, we have to use the chaos-engineering technique to intentionally cause breakage in the environments in a controlled manner, in order to test the monitoring, the alerts, the reaction of the architecture, and the resiliency of our solution.
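To make the idea concrete, here is a minimal, hedged sketch of such a controlled breakage, assuming Python and boto3, and assuming non-production instances are opted in with a tag named "chaos" set to "allowed" (the tag name and region are assumptions, not a standard):

```python
import random
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Only instances explicitly opted in to chaos experiments are candidates.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:chaos", "Values": ["allowed"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]
if instances:
    victim = random.choice(instances)
    print(f"Stopping {victim} to test monitoring, alerting, and resiliency")
    ec2.stop_instances(InstanceIds=[victim])
```

After the experiment, check whether the alerts fired and whether the architecture recovered as designed.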
Decide on the right resources and architecture for your product
Choose the appropriate architecture based on your requirements
Know which compute option is right for your workload
Identify the right storage solution that meets your needs
Decide how you’re going to manage all your resources
Azure: Select a region for the VNet; note that Azure region names can look alike, for example East US and East US 2, which are two separate regions. A subnet is created in the VNet's region (see the sketch after this list).
AWS: Select a region for the VPC. Subnets are created in different Availability Zones of that region.
GCP: coming soon..
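To make the Azure row concrete, here is a minimal sketch, assuming the azure-identity and azure-mgmt-network Python packages; the subscription ID, resource group, names, and region are placeholders. It creates a VNet pinned to one region and a subnet inside it:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
network = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# The VNet is created in one region; its subnets live in that same region.
poller = network.virtual_networks.begin_create_or_update(
    "rg-demo",    # hypothetical resource group
    "vnet-demo",  # hypothetical VNet name
    {
        "location": "eastus",
        "address_space": {"address_prefixes": ["10.0.0.0/16"]},
        "subnets": [{"name": "default", "address_prefix": "10.0.0.0/24"}],
    },
)
vnet = poller.result()
print(vnet.name, vnet.location)
```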
Multi-cloud: Public IP
Azure: Static IP
AWS: Elastic IP
GCP: Dynamic IP
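For the AWS column, reserving a static public IP means allocating an Elastic IP. A minimal sketch, assuming Python and boto3 (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Allocate an Elastic IP and attach it to an existing instance.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    InstanceId="i-0abc12345def67890",  # placeholder instance ID
)
print(eip["PublicIp"])
```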
Multi-cloud
You can configure a VPN between cloud providers (it is straightforward), and it works the same as a VPN between on-premises and the cloud: set up a gateway on each side, and you get an encrypted tunnel for the traffic between the cloud providers.
Azure, GCP, and AWS all support IKEv2 for their virtual private network gateways.
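As a sketch of the AWS side of such a tunnel (assuming Python and boto3; the peer's public IP and ASN are placeholders, and attaching the VPN gateway to a VPC and configuring routes are omitted for brevity):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# The "customer gateway" represents the peer's VPN endpoint,
# e.g., an Azure VPN Gateway or a GCP Cloud VPN gateway.
cgw = ec2.create_customer_gateway(
    BgpAsn=65000,             # peer's BGP ASN (assumption)
    PublicIp="203.0.113.10",  # peer gateway's public IP (placeholder)
    Type="ipsec.1",
)
vgw = ec2.create_vpn_gateway(Type="ipsec.1")  # must still be attached to a VPC

vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    Type="ipsec.1",
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```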
A resource group is the unit for managing resources in Azure.
How to create a resource group: search for "Resource groups" in the portal > use the Add button > fill in the form and create the resource group.
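The same can be done programmatically; a minimal sketch using the azure-identity and azure-mgmt-resource Python packages (subscription ID, name, and location are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Create (or update) a resource group in a chosen region.
rg = client.resource_groups.create_or_update(
    "rg-demo",                    # hypothetical resource-group name
    {"location": "westeurope"},
)
print(rg.name, rg.location)
```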
AWS
Coming soon…
GCP
A project is the unit for managing resources in GCP.
How to create a project: search for the "Manage resources" page > use the "Create Project" button to create a new project > you can select an organization if you are not a free-trial user.
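Programmatically, the same can be sketched with the google-cloud-resource-manager package (v3 API); the project ID below is a placeholder and must be globally unique, and the exact client surface may differ between library versions:

```python
from google.cloud import resourcemanager_v3

client = resourcemanager_v3.ProjectsClient()

# create_project returns a long-running operation.
operation = client.create_project(
    project=resourcemanager_v3.Project(
        project_id="demo-project-123456",  # placeholder, must be unique
        display_name="Demo Project",
    )
)
project = operation.result()
print(project.project_id)
```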
Azure API Management is a hybrid, multi-cloud management platform for APIs across all environments. Nowadays, enterprises are API producers and expose their services to their customers via APIs.
With the Azure API Management service, enterprises can selectively expose their services to their partners and consumers in a secure manner.
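From the consumer's point of view, a service exposed through API Management is typically called with a subscription key. A minimal sketch, assuming Python and the requests package; the endpoint URL and key are placeholders:

```python
import requests

resp = requests.get(
    "https://contoso.azure-api.net/orders/v1/orders",  # hypothetical APIM endpoint
    headers={"Ocp-Apim-Subscription-Key": "<your-subscription-key>"},
)
resp.raise_for_status()
print(resp.json())
```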
Enterprise-level benefits of Azure API Management
Exposing services/APIs in a secure manner.
A framework for API Management can be approved by the compliance gate once, and teams can then use it without repeating the same compliance-gate process.
A list of exposed APIs/services is always available for monitoring by the CTO.
Must-haves for an enterprise-level implementation of Azure API Management:
Define a secure framework for API Management
On-board teams so that they are able to use this framework
Support and monitor the teams' activities
Enterprise-level limitation
An enterprise that decides to use custom role assignments must pay attention to the limit of 2,000 RBAC role assignments per subscription.
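One way to keep an eye on that limit is to count the existing assignments; a hedged sketch using the azure-identity and azure-mgmt-authorization Python packages (the method name varies between SDK versions, and the subscription ID is a placeholder):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# In older SDK versions this method is called list() instead.
count = sum(1 for _ in client.role_assignments.list_for_subscription())
print(f"{count} / 2000 role assignments used in this subscription")
```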
Framework for Azure API Management
In the framework document, we must define at least two teams, and the functional and non-functional requirements must be clarified and explained in great detail.
Service Provider Team: the team that defines the framework and performs the compliance-gate process for the service they want to provide.
Consumer Team: uses the provided service, because:
They need this service in their solution.
They receive an on-boarding and can get started with this service more easily from a technical point of view.
They can use the support for this service instead of spending their own resources.
They do not need a compliance-gate process for this service.
Functional requirements
Non-functional requirements
By which cloud provider is it offered?
How can teams request this service?
Is it a private or public cloud?
How can they get on-boarding?
How can they get access to resources?
How can they get support?
How are the dev/QA/prod environments determined?
What are the SLAs?
How can a team access its resources?
What are the service provider team's responsibilities?
How can they add/remove/configure their resources?
What are the consumer team's responsibilities?
Are there any automated flows? If yes, what are they?
How can the automated flows be considered in CI/CD? (if necessary for the consumer team)
Most organizations choose to work with multiple cloud providers, because it is a struggle for an enterprise to find a single public cloud infrastructure provider that meets all their requirements. [reference]
The following figure demonstrates that multi-cloud solutions are a sub-concept of hybrid-cloud computing.
Multi-cloud scenarios
1-Strategic advantages vs. partitioned complexity
To avoid committing to a single vendor, you spread applications across multiple cloud providers. Best practice: weigh the strategic advantages against the partitioned complexity this setup brings. Achieving workload portability and consistent tooling across multiple cloud environments increases development, testing, and operations work. [1]
2-For regulatory reasons
For regulatory reasons, you serve a certain segment of your user base and data from a country where a vendor does not yet have any presence. Best practice: use a multi-cloud environment only for mission-critical workloads or if, for legal or regulatory reasons, a single public cloud environment cannot accommodate the workloads. [1]
3-Choose the best services that the providers offer
You deploy applications across multiple cloud providers in a way that allows you to choose among the best services that the providers offer. Best practice: minimize dependencies between systems running in different public cloud environments, particularly when communication is handled synchronously. These dependencies can slow performance and decrease overall availability. [1]
4-To have data autonomy
To have data autonomy in the future, so that companies can take their data with them wherever they end up going.
Advantages of multi-cloud scenarios
To avoid vendor lock-in: multi-cloud helps lower strategic risk and provides you with the flexibility to change plans or partnerships later. [1]
If the workload has been kept portable, you can optimize your operations by shifting workloads between computing environments. [1]
Hybrid-cloud scenarios
Hybrid-cloud description by the National Institute of Standards and Technology (NIST)
Hybrid cloud is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability. [2]
Cloud and on-premises environments, which were previously distinct entities with cumbersome interaction configuration, are now converging to provide a more efficient, less costly, and more flexible operating model for workflows.
1-Backup & Archive [2]
2-Data Protection [2]
3-Lifecycle Partitioning [4]
Lifecycle partitioning is the process of moving parts of the application-development lifecycle to the cloud while the rest remains on-premises. The most popular approach is deployment and testing in the cloud, while moving to on-premises for the production deployment.
4-Application Partitioning [4]
A part of an application runs in the cloud and the other part runs on-premises. For example, Sony PlayStation runs databases for individual games in the cloud but takes care of user authentication on-premises.
5-Application spanning [3]
Application spanning happens when the same application runs on-premises and in the cloud. "Best Buy" is an example of application spanning: the entire online store application runs across multiple cloud regions and multiple on-premises data centers to allow it to quickly adjust to demand.
Application Programming Interface Management (API Management) consists of a set of tools and services that enable developers and companies to build, analyse, operate, and scale APIs in a secure environment.
Service:
Azure: API Management Service
AWS: Amazon API Gateway
GCP: API Gateway, Developer Portal

Features:
Azure: API Access Control, API Protection, API Creation and design, Support for hybrid models, High performance, Customizable developer portal
AWS: ???
API Management tools overview
API Management can be delivered on-premises, through the cloud, or using a hybrid on-premises and SaaS (Software as a Service) approach.
For migration from on-premises to the cloud, we have the following possibilities on the different platforms.
Lift and shift:
Azure: Yes
AWS: Yes
GCP: ?
Lift and shift means a virtual machine is taken from a hypervisor and migrated to the cloud with the same configuration it had on-premises. The app is migrated to the cloud without refactoring or changing its architecture.
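On AWS, one way to sketch this is EC2 VM Import/Export: the VM disk is exported from the on-premises hypervisor, uploaded to S3, and imported as an image. A minimal sketch, assuming Python and boto3 (bucket and key are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Import a VMDK disk image exported from an on-prem hypervisor.
task = ec2.import_image(
    Description="On-prem web server, lift and shift",
    DiskContainers=[
        {
            "Format": "vmdk",
            "UserBucket": {
                "S3Bucket": "my-migration-bucket",   # placeholder bucket
                "S3Key": "exports/web-server.vmdk",  # placeholder key
            },
        }
    ],
)
print(task["ImportTaskId"])
```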