Migrate Data from AWS to Azure

Azure and AWS both offer reliable, scalable and secure hosting environments for enterprise workloads in the cloud. Many organisations have already adopted a “cloud first” policy to leverage these benefits and have gone all-in with either Azure or AWS. But what if something changes and a company wants to leave that cloud service provider?

Why Move from One Cloud to Another?

Reasons why users in Azure or AWS would want to switch to the competing cloud service provider include:

1. Changes in the terms and conditions: Initial cloud adoption for enterprises often depends on a unique value proposition offered by a vendor. However, changes in the terms and conditions of a cloud service provider over time could lead to cloud lock-in concerns for organizations.

2. Application portability: Another lock-in risk stems from the heterogeneous platforms used by different cloud vendors, which can affect application portability. For example, workloads that use AWS community-contributed Amazon Machine Images (AMIs) or applications configured to make Amazon S3 API calls might limit an enterprise's ability to use services outside of AWS. In such a case, it might be desirable to migrate out. Another lock-in example is how Azure Site Recovery provides automated mechanisms for moving workloads from AWS to Azure, but migrating in the opposite direction requires multiple complex manual steps or third-party tools.

3. Contract renewal: Organizations often reevaluate hosting options during the contract renewal period to explore differentiating features offered by competing service providers. With new products and features being introduced by cloud service providers, customers have more options than ever for selecting an optimal hosting platform for their applications.

4. Cost-benefits: Services offered at premium rates by one service provider could be available at competitive rates from a different provider. For example, Azure Hybrid Benefit along with reserved instances can provide up to 80% cost savings and offers a great value proposition for organizations with pre-existing investments in Microsoft licenses. AWS, on the other hand, provides Microsoft Licensing on AWS, in which customers can use their Microsoft licenses with or without Software Assurance to reduce cloud hosting charges.

5. Compliance standards: The compliance standards to be met for hosting data and applications with a cloud service provider or on-premises vary across industry sectors. Any instance of non-compliance flagged during an audit could lead to re-hosting or migration of applications and data to a compliant platform.

6. Data consolidation: In hybrid cloud architectures, a company’s data could exist across public/private clouds or on-premises deployments. Consolidation of data and seamless management are important for optimizing the spend on data storage and operations. One example of this is mergers and acquisitions (M&A), where companies with different platforms need to consolidate.

Cloud Migration Challenges between AWS and Azure

Data is the nexus of enterprise IT, and migration from AWS to Azure and vice versa is one of the most challenging aspects when implementing multicloud architectures. Let’s look at some of the challenges.

1. Data Migration: The fact that Azure and AWS use proprietary storage offerings and APIs makes the data migration process complex. Leveraging third-party tools for data transfer could lead to integration challenges, as the two platforms use diverse technologies in the backend. And the entire process of transitioning between the two clouds may not be a feasible option for business-critical applications, due to the time and cost constraints involved.

2. Secure Data Transfer: Secure transfer of data between Azure and AWS should be done using a process that meets industry-specific governance and compliance standards. Direct download and upload of data can lead to security concerns, as data at rest and in transit should always be encrypted. While Azure Site Recovery offers a feasible solution for large-scale secure migration between AWS and Azure, it requires additional infrastructure to be set up in AWS, which may not be practical in cost-sensitive environments.

3. Access Control Privileges: When data is migrated between AWS and Azure platforms, administrators need to ensure that consistent data access and protection policies are applied in the destination as well. Security and access control are configured using different sets of tools and policies in AWS and Azure. While AWS depends on IAM user policies and resource-based policies for Amazon S3 access, Azure storage uses RBAC assigned to Azure AD users. Hence, redesign and reconfiguration of the entire system might be required to maintain the same level of security after migration. Management of data across AWS and Azure environments using unified tools and interfaces is also a major challenge.

4. Other Challenges: There are a few additional challenges to the migration process between platforms. You will need a way to evaluate and compare costs between the two platforms, and a way to measure and maintain the same or acceptable performance and SLAs for the various devices, instances, VMs, storage types, etc. on the new platform.


Supporting Azure in the Terraform Module Registry

After this was announced last year, I had been itching to find the time to contribute. Now I finally have, with a module that supports Azure.

My module creates a VM and installs Active Directory; it can be found here.

For those of you who haven’t heard of it, The HashiCorp Terraform Module Registry gives Terraform users easy access to templates for setting up and running their infrastructure with verified and community modules.
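Consuming a registry module from your own configuration is a one-liner: you reference the module by its registry path and pin a version. The namespace and version below are placeholders to illustrate the syntax, not the real path of my module:

```hcl
# Hypothetical registry path and version, shown only to illustrate the syntax.
module "ad_vm" {
  source  = "<namespace>/active-directory/azurerm"
  version = "0.1.0"
}
```

Running `terraform init` then downloads the pinned module from the registry before you plan and apply.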

 


Build an FTP Site in Azure with Azure Storage File Share

If you want to host an FTP site in Azure, there’s currently no dedicated resource for this, so the next best option is to spin up a virtual machine and run the FTP site with IIS. It’s also possible to point the FTP site at an Azure Storage file share to host the files.

Virtual Machine

When you create your VM, you will also need to allow traffic on port 3389 so that you can connect to it over Remote Desktop.

Once your VM has been provisioned, go to its networking settings in the Azure portal and add port 21 and the range 9990-10000 to its inbound ports.
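If you prefer to script these firewall openings rather than click through the portal, the same inbound rules can be sketched in Terraform. The resource group and NSG names below are assumptions; point them at your own VM's network security group:

```hcl
# Sketch only: assumes an existing NSG named "ftp-vm-nsg" in resource group "ftp-rg".
resource "azurerm_network_security_rule" "ftp-control" {
  name                        = "allow-ftp-21"
  priority                    = 110
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "21"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = "ftp-rg"
  network_security_group_name = "ftp-vm-nsg"
}

resource "azurerm_network_security_rule" "ftp-passive" {
  name                        = "allow-ftp-passive"
  priority                    = 120
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "9990-10000"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = "ftp-rg"
  network_security_group_name = "ftp-vm-nsg"
}
```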

Azure Storage File Share

Within the Azure Storage account that’s created when you provision the VM, go to Files and add a file share. Once it has been created, click on it in the portal and then click Connect. This opens a blade containing PowerShell commands for mounting the file share as a UNC drive on a Windows machine; copy this code and save it somewhere, as you’ll need it soon.

IIS

Log into the VM using the admin credentials set at creation, open PowerShell, and run the code you copied from the file share blade in the Azure portal to add the share as a UNC drive.

Next you need to install IIS on your server. This can be done from the Server Manager dashboard by choosing Add Roles and Features from the Manage menu.

  • Proceed to Installation Type step and confirm Role-based or feature-based installation.
  • Proceed to Server Roles step and check Web Server (IIS) role. Note that it is checked already, if you had IIS installed as a Web Server previously. Confirm installing IIS Management Console tool.
  • Proceed to Web Server Role (IIS) > Role Services step and check FTP Server role service. Uncheck Web Server role service, if you do not need it.
  • Proceed to the end of the wizard and click Install.
  • Wait for the installation to complete.

In order for your FTP server to play nicely with the Azure Storage file share, you need to create a user capable of logging into the file share, because the UNC path must be referenced rather than the mapped drive added by the PowerShell commands above, as explained here.

Users can be added through Tools > Computer Management in the Server Manager. The username should be the name of the storage account; because usernames can’t be longer than 20 characters or match the name of the VM, this is the reason for the naming restrictions on the storage account earlier. The password should be the access key for the storage account, and “User cannot change password” and “Password never expires” should be selected. This user should then be added to the IIS_IUSRS group.

Once the connecting user has been added, you need to create the FTP site. This is done from Tools > Internet Information Services (IIS) Manager in the Server Manager.

First, add the ports that you opened in the Azure firewall to the FTP Firewall Support settings at the server level; the external IP address should be that of your VM.

Next, right-click on Sites and add a new FTP site. The physical path parameter should be the UNC path to your file share, rather than the drive alias used by Windows.

When creating the FTP site you should disallow anonymous authentication and use basic authentication. Users can be granted access by adding them in the local users step above and either assigning them to a relevant group or simply granting all users of the machine access to the FTP site.

You will now have an FTP site set up and available, but if you try to connect to it you’ll get an access denied error. This is because FTP on IIS fails to pass through the credentials, so you need to set them explicitly. This is done from the Basic Settings dialog in the right-hand menu of the FTP site: in the Connect As section, enter the username and password of the user you created earlier (the name of your storage account and the access key), then save and test the connection settings.

The FTP site should now be up and running, with uploaded files saved to the Azure file share!

 


Serverless on Azure – Deploying Azure Function using Terraform

Why?

The idea of running our own web servers, sizing VMs, and patching OSes seems so old school. For simple web apps, and for finding out whether our new service will be successful, we want hosting that is as low-cost as possible, but we also want the ability to scale elastically should we turn into the next big thing!

How?

In this example, we’ll use Azure Functions within an App Service Plan.

We’ll manage this whole stack with one Terraform configuration, practicing what we preach with Infrastructure as Code.

Prerequisites

The below example assumes you have Terraform configured for use with your Azure Subscription.
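If you haven't done that yet, a minimal provider configuration looks like the sketch below (the same shape appears in the multi-tier example later in this blog); the four values are your own subscription and service principal credentials, supplied as variables:

```hcl
# Supply these values for your own subscription and service principal.
provider "azurerm" {
  subscription_id = "${var.subscription_id}"
  client_id       = "${var.client_id}"
  client_secret   = "${var.client_secret}"
  tenant_id       = "${var.tenant_id}"
}
```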

Terraform definition

The desired resource is an Azure Function Application. There’s a handy Terraform template here.

Unfortunately, this Terraform template doesn’t include Azure Application Insights, which has its own template here.

Create a new file named "azure_function.tf" and place in it the code below, which is a combination of the two templates above.

resource "azurerm_resource_group" "test" {
  name = "tf-azfunc-test"
  location = "WestEurope"
}

resource "random_id" "server" {
  keepers = {
    # Generate a new id each time we switch to a new Azure Resource Group
    rg_id = "${azurerm_resource_group.test.name}"
  }

  byte_length = 8
}

resource "azurerm_storage_account" "test" {
  name = "${random_id.server.hex}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  location = "${azurerm_resource_group.test.location}"
  account_tier = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_app_service_plan" "test" {
  name = "azure-functions-test-service-plan"
  location = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  kind = "FunctionApp"

  sku {
    tier = "Dynamic"
    size = "Y1"
  }
}

resource "azurerm_application_insights" "test" {
  name = "test-terraform-insights"
  location = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  application_type = "Web"
}

resource "azurerm_function_app" "test" {
  name = "test-terraform"
  location = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  app_service_plan_id = "${azurerm_app_service_plan.test.id}"
  storage_connection_string = "${azurerm_storage_account.test.primary_connection_string}"

  app_settings {
    "AppInsights_InstrumentationKey" = "${azurerm_application_insights.test.instrumentation_key}"
  }
}

This Azure Function and Application Insights template only differs from the Terraform documentation in two ways.

1. An Azure Function is associated with an Application Insights instance by adding the Instrumentation Key to the App Settings of the Azure Function application.

app_settings {
  "AppInsights_InstrumentationKey" = "${azurerm_application_insights.test.instrumentation_key}"
}

2. Using a random ID for the Azure Storage Account name gives it a better chance of being globally unique, since the name forms part of a public URL.

resource "random_id" "server" {
  keepers = {
    # Generate a new id each time we switch to a new Azure Resource Group
    rg_id = "${azurerm_resource_group.test.name}"
  }

  byte_length = 8
}
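If you want Terraform to print the function's URL once the deployment completes, you could also add an output along these lines. The output name is my own; `default_hostname` is an attribute exported by the `azurerm_function_app` resource:

```hcl
# Prints the function app's hostname after `terraform apply`.
output "function_app_hostname" {
  value = "${azurerm_function_app.test.default_hostname}"
}
```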

Testing that the Function works with App Insights

Once the above code is deployed via Terraform, open the Azure Function in the portal and create a new JavaScript webhook.

Azure Function

Run the default function a few times as-is.

Go look at the App Insights resource and see that the function was run a few times.

App Insights
Summary

The few lines of Terraform code above give us a working Azure Functions resource group, complete with storage and Application Insights.

You have to love the awesome Terraform Azure Integration and I hope this inspires you to deploy your own Azure Function today!


Upload to Azure Blob Storage using a PowerShell GUI

The reason I put this together was because of a requirement for internal users to upload content to Azure Blob Storage. However, the following requirements were mandated:

  • The data they were uploading needed to be put in a specific container
  • Not to give the users keys / permissions to the Azure Blob Storage

Andrews-Super-Uploader.ps1 is a GUI wrapper for the Microsoft Azure AZCopy tool (AZCopy.exe) that simplifies the process of uploading data to Azure Blob Storage.

Requirements:
  • The script will work natively in PowerShell 2.0+
  • The script requires the Microsoft Azure AZCopy Tool with default installation path – get it here
Usage:

There are no parameters or switches; simply execute the script.

The main section you will need to edit in the code is this:

$DestList = [collections.arraylist]@(
  [pscustomobject]@{Name='CONTENT / MANIFEST CHINA';Value="https://XXX.blob.core.windows.net/tests-data/Products?SASKEY"}
  [pscustomobject]@{Name='CONTENT / MANIFEST QA1';Value="https://XXX.blob.core.windows.net/tests-data/Products?SASKEY"}
  [pscustomobject]@{Name='CONTENT / MANIFEST UAT1';Value="https://XXX.blob.core.windows.net/tests-data/Products?SASKEY"}
)
$DropDownBox = New-Object System.Windows.Forms.ComboBox
$DropDownBox.Location = New-Object System.Drawing.Size(109,126)
$DropDownBox.Size = New-Object System.Drawing.Size(479,20)
$DropDownBox.DropDownHeight = 200
$Form.Controls.Add($DropDownBox)
$DropDownBox.DataSource = $DestList
$DropDownBox.DisplayMember = 'Name'

$SourceList1 = [collections.arraylist]@(
  [pscustomobject]@{Name='PRODUCT FOLDER 1';Value="D:\TFS\ProductDownloads\PRODUCT FOLDER 1"}
  [pscustomobject]@{Name='PRODUCT FOLDER 2';Value="D:\TFS\ProductDownloads\PRODUCT FOLDER 2"}
  [pscustomobject]@{Name='PRODUCT FOLDER 3';Value="D:\TFS\ProductDownloads\PRODUCT FOLDER 3"}
)

Just add your Azure Blob Storage SAS key(s), your local source(s), and your destination container(s), and amend the names as required.

Screenshot:

Azure Blob Uploader

Once you have it configured the way you want, hide the config away from end users by converting it to an EXE using PS2EXE

The full code for this can be found in my GitHub Repo

Inspired by MVP Chris Goosen's PST Import Tool


A Multi-Tier Azure Environment with Terraform including Active Directory – PART 5

In PART 4 we got Terraform to deploy a secondary Domain Controller for resiliency.

In PART 5 I am going to be showing you how to deploy Microsoft SQL VM(s) behind an Azure Internal Load Balancer and install Windows Server Failover Clustering so it is ready for AlwaysOn capabilities.

MODULES/sql-vm

This all happens in the SQL-VM module. First of all we create the Azure Internal Load Balancer with an AlwaysOn endpoint listener; your soon-to-be-created VM(s) are added to the backend pool.

1-lb.TF

resource "azurerm_lb" "sql-loadbalancer" {
  name = "${var.prefix}-sql-loadbalancer"
  resource_group_name = "${var.resource_group_name}"
  location = "${var.location}"
  sku = "Standard"

  frontend_ip_configuration {
    name = "LoadBalancerFrontEnd"
    subnet_id = "${var.subnet_id}"
    private_ip_address_allocation = "static"
    private_ip_address = "${var.lbprivate_ip_address}"
  }
}

resource "azurerm_lb_backend_address_pool" "loadbalancer_backend" {
  name = "loadbalancer_backend"
  resource_group_name = "${var.resource_group_name}"
  loadbalancer_id = "${azurerm_lb.sql-loadbalancer.id}"
}

resource "azurerm_lb_probe" "loadbalancer_probe" {
  resource_group_name = "${var.resource_group_name}"
  loadbalancer_id = "${azurerm_lb.sql-loadbalancer.id}"
  name = "SQLAlwaysOnEndPointProbe"
  protocol = "tcp"
  port = 59999
  interval_in_seconds = 5
  number_of_probes = 2
}

resource "azurerm_lb_rule" "SQLAlwaysOnEndPointListener" {
  resource_group_name = "${var.resource_group_name}"
  loadbalancer_id = "${azurerm_lb.sql-loadbalancer.id}"
  name = "SQLAlwaysOnEndPointListener"
  protocol = "Tcp"
  frontend_port = 1433
  backend_port = 1433
  frontend_ip_configuration_name = "LoadBalancerFrontEnd"
  backend_address_pool_id = "${azurerm_lb_backend_address_pool.loadbalancer_backend.id}"
  probe_id = "${azurerm_lb_probe.loadbalancer_probe.id}"
}
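The module's variables file isn't shown in this post (it's in the full example on GitHub); a minimal sketch of the declarations the load balancer config relies on might look like this. The defaults shown are illustrative assumptions, not the real values:

```hcl
# Illustrative declarations; the real defaults live in the module's variables file.
variable "prefix" {}
variable "resource_group_name" {}
variable "location" {}
variable "subnet_id" {}

variable "lbprivate_ip_address" {
  default = "10.100.50.5"
}

variable "sqlvmcount" {
  default = "2"
}
```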

Next we create the NIC to be attached to your soon-to-be-created VM(s). This includes a static public and private IP address in the appropriate "dbsubnet" created in PART 1, and this is where the NIC is attached to the Azure Load Balancer backend pool.

Please note that this also creates an Azure NSG allowing RDP on port 3389. This is because a Standard SKU Load Balancer defaults to blocking all traffic (I don't think this is the case with the Basic SKU).

2-NETWORK-INTERFACE.TF

resource "azurerm_network_security_group" "allow-rdp" {
  name = "allow-rdp"
  location = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
}

resource "azurerm_network_security_rule" "allow-rdp" {
  name = "allow-rdp"
  priority = 100
  direction = "Inbound"
  access = "Allow"
  protocol = "Tcp"
  source_port_range = "*"
  destination_port_range = "3389"
  source_address_prefix = "*"
  destination_address_prefix = "*"
  resource_group_name = "${var.resource_group_name}"
  network_security_group_name = "${azurerm_network_security_group.allow-rdp.name}"
}

resource "azurerm_public_ip" "static" {
  name = "${var.prefix}-sql${1 + count.index}-ext"
  location = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
  public_ip_address_allocation = "static"
  count = "${var.sqlvmcount}"
  sku = "Standard"
}

resource "azurerm_network_interface" "primary" {
  name = "${var.prefix}-sql${1 + count.index}-int"
  location = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
  internal_dns_name_label = "${var.prefix}-sql${1 + count.index}"
  network_security_group_id = "${azurerm_network_security_group.allow-rdp.id}"
  count = "${var.sqlvmcount}"

  ip_configuration {
    name = "primary"
    subnet_id = "${var.subnet_id}"
    private_ip_address_allocation = "static"
    private_ip_address = "10.100.50.${10 + count.index}"
    public_ip_address_id = "${azurerm_public_ip.static.*.id[count.index]}"
    load_balancer_backend_address_pools_ids = ["${azurerm_lb_backend_address_pool.loadbalancer_backend.id}"]
  }
}

The next step is to create our database VM(s). This example deploys a 2012-R2-Datacenter image with SQL 2014 SP2 Enterprise installed. The VMs are deployed into an availability set for resiliency, and you can deploy as many as you want using the "sqlvmcount" variable. Each VM also has separate disks for OS, data, and logs, as per Microsoft best practice.

3-VIRTUAL-MACHINE.TF

resource "azurerm_availability_set" "sqlavailabilityset" {
  name = "sqlavailabilityset"
  resource_group_name = "${var.resource_group_name}"
  location = "${var.location}"
  platform_fault_domain_count = 3
  platform_update_domain_count = 5
  managed = true
}

resource "azurerm_virtual_machine" "sql" {
  name = "${var.prefix}-sql${1 + count.index}"
  location = "${var.location}"
  availability_set_id = "${azurerm_availability_set.sqlavailabilityset.id}"
  resource_group_name = "${var.resource_group_name}"
  network_interface_ids = ["${element(azurerm_network_interface.primary.*.id, count.index)}"]
  vm_size = "Standard_B1s"
  delete_os_disk_on_termination = true
  count = "${var.sqlvmcount}"

  storage_image_reference {
    publisher = "MicrosoftSQLServer"
    offer = "SQL2014SP2-WS2012R2"
    sku = "Enterprise"
    version = "latest"
  }

  storage_os_disk {
    name = "${var.prefix}-sql${1 + count.index}-disk1"
    caching = "ReadWrite"
    create_option = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name = "${var.prefix}-sql${1 + count.index}"
    admin_username = "${var.admin_username}"
    admin_password = "${var.admin_password}"
  }

  os_profile_windows_config {
    provision_vm_agent = true
    enable_automatic_upgrades = false
  }

  storage_data_disk {
    name = "${var.prefix}-sql${1 + count.index}-data-disk1"
    disk_size_gb = "2000"
    caching = "ReadWrite"
    create_option = "Empty"
    managed_disk_type = "Standard_LRS"
    lun = "2"
  }

  storage_data_disk {
    name = "${var.prefix}-sql${1 + count.index}-log-disk1"
    disk_size_gb = "500"
    caching = "ReadWrite"
    create_option = "Empty"
    managed_disk_type = "Standard_LRS"
    lun = "3"
  }

  depends_on = ["azurerm_network_interface.primary"]
}

We now join the VM(s) to the domain using a Virtual Machine Extension. Note the use of the Splat Operator (*) with count.

4-join-domain.TF

resource "azurerm_virtual_machine_extension" "join-domain" {
  name = "join-domain"
  location = "${element(azurerm_virtual_machine.sql.*.location, count.index)}"
  resource_group_name = "${var.resource_group_name}"
  virtual_machine_name = "${element(azurerm_virtual_machine.sql.*.name, count.index)}"
  publisher = "Microsoft.Compute"
  type = "JsonADDomainExtension"
  type_handler_version = "1.3"
  count = "${var.sqlvmcount}"

  # NOTE: the `OUPath` field is intentionally blank, to put it in the Computers OU
  settings = <<SETTINGS
{
  "Name": "${var.active_directory_domain}",
  "OUPath": "",
  "User": "${var.active_directory_domain}\\${var.active_directory_username}",
  "Restart": "true",
  "Options": "3"
}
SETTINGS

  protected_settings = <<SETTINGS
{
  "Password": "${var.active_directory_password}"
}
SETTINGS
}

Finally we install Windows Server Failover Clustering so it can easily be added to an AlwaysOn Availability Group if required.

5-install-wsfc.TF

resource "azurerm_virtual_machine_extension" "wsfc" {
  count = "${var.sqlvmcount}"
  name = "create-cluster"
  resource_group_name = "${var.resource_group_name}"
  location = "${var.location}"
  virtual_machine_name = "${element(azurerm_virtual_machine.sql.*.name, count.index)}"
  publisher = "Microsoft.Compute"
  type = "CustomScriptExtension"
  type_handler_version = "1.9"

  settings = <<SETTINGS
{
  "commandToExecute": "powershell Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools"
}
SETTINGS

  depends_on = ["azurerm_virtual_machine_extension.join-domain"]
}

Your MAIN.TF file should now look like this

main.tf

# Configure the Microsoft Azure Provider
provider “azurerm” {
subscription_id = “${var.subscription_id}”
client_id = “${var.client_id}”
client_secret = “${var.client_secret}”
tenant_id = “${var.tenant_id}”
}

##########################################################
## Create Resource group Network & subnets
##########################################################
module “network” {
source = “..\\modules\\network”
address_space = “${var.address_space}”
dns_servers = [“${var.dns_servers}”]
environment_name = “${var.environment_name}”
resource_group_name = “${var.resource_group_name}”
location = “${var.location}”
dcsubnet_name = “${var.dcsubnet_name}”
dcsubnet_prefix = “${var.dcsubnet_prefix}”
wafsubnet_name = “${var.wafsubnet_name}”
wafsubnet_prefix = “${var.wafsubnet_prefix}”
rpsubnet_name = “${var.rpsubnet_name}”
rpsubnet_prefix = “${var.rpsubnet_prefix}”
issubnet_name = “${var.issubnet_name}”
issubnet_prefix = “${var.issubnet_prefix}”
dbsubnet_name = “${var.dbsubnet_name}”
dbsubnet_prefix = “${var.dbsubnet_prefix}”
}

##########################################################
## Create DC VM & AD Forest
##########################################################

module “active-directory” {
source = “..\\modules\\active-directory”
resource_group_name = “${module.network.out_resource_group_name}”
location = “${var.location}”
prefix = “${var.prefix}”
subnet_id = “${module.network.dc_subnet_subnet_id}”
active_directory_domain = “${var.prefix}.local”
active_directory_netbios_name = “${var.prefix}”
private_ip_address = “${var.private_ip_address}”
admin_username = “${var.admin_username}”
admin_password = “${var.admin_password}”
}

##########################################################
## Create IIS VM’s & Join domain
##########################################################

module “iis-vm” {
source = “..\\modules\\iis-vm”
resource_group_name = “${module.active-directory.out_resource_group_name}”
location = “${module.active-directory.out_dc_location}”
prefix = “${var.prefix}”
subnet_id = “${module.network.is_subnet_subnet_id}”
active_directory_domain = “${var.prefix}.local”
active_directory_username = “${var.admin_username}”
active_directory_password = “${var.admin_password}”
admin_username = “${var.admin_username}”
admin_password = “${var.admin_password}”
vmcount = “${var.vmcount}”
}

##########################################################
## Create Secondary Domain Controller VM & Join domain
##########################################################
module “dc2-vm” {
source = “..\\modules\\dc2-vm”
resource_group_name = “${module.active-directory.out_resource_group_name}”
location = “${module.active-directory.out_dc_location}”
dcavailability_set_id = “${module.active-directory.out_dcavailabilityset}”
prefix = “${var.prefix}”
subnet_id = “${module.network.dc_subnet_subnet_id}”
active_directory_domain = “${var.prefix}.local”
active_directory_username = “${var.admin_username}”
active_directory_password = “${var.admin_password}”
active_directory_netbios_name = “${var.prefix}”
dc2private_ip_address = “${var.dc2private_ip_address}”
admin_username = “${var.admin_username}”
admin_password = “${var.admin_password}”
domainadmin_username = “${var.domainadmin_username}”
}

##########################################################
## Create SQL Server VM Join domain
##########################################################
module “sql-vm” {
source = “..\\modules\\sql-vm”
resource_group_name = “${module.active-directory.out_resource_group_name}”
location = “${module.active-directory.out_dc_location}”
prefix = “${var.prefix}”
subnet_id = “${module.network.db_subnet_subnet_id}”
active_directory_domain = “${var.prefix}.local”
active_directory_username = “${var.admin_username}”
active_directory_password = “${var.admin_password}”
admin_username = “${var.admin_username}”
admin_password = “${var.admin_password}”
sqlvmcount = “${var.sqlvmcount}”
lbprivate_ip_address = “${var.lbprivate_ip_address}”
}
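Notice that main.tf refers to module outputs such as `module.network.db_subnet_subnet_id` and `module.network.out_resource_group_name`, so the network module must declare matching output blocks. A sketch of what those declarations might look like (the resource names inside the module are my assumptions) is:

```hcl
# Sketch: assumes the network module creates resources with these names.
output "out_resource_group_name" {
  value = "${azurerm_resource_group.network.name}"
}

output "dc_subnet_subnet_id" {
  value = "${azurerm_subnet.dcsubnet.id}"
}

output "db_subnet_subnet_id" {
  value = "${azurerm_subnet.dbsubnet.id}"
}
```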

This brings us to the end of this example. I have tried to showcase lots of different options for what you can deploy to Azure with Terraform, using a mixture of IaaS and PaaS options.

You don’t have to use all of it but hopefully it gives you a few ideas and inspires you to start using Terraform to spin up resources in Azure.

To get the complete example, including variables and output files, head to GitHub, where it has also been contributed to the HashiCorp official repo.

 


A Multi-Tier Azure Environment with Terraform including Active Directory – PART 4

In PART 3 we got Terraform to deploy an IIS web server(s) and join to your newly configured Active Directory Domain.

In PART 4 I am going to be showing you how to deploy a secondary Domain Controller for resiliency.

MODULES/dc2-vm

This all happens in the DC2-VM module. First of all we create the NIC to be attached to your soon-to-be-created VM. This includes a static public and private IP address in the appropriate "dcsubnet" created in PART 1.

1-NETWORK-INTERFACE.TF

resource "azurerm_public_ip" "dc2-external" {
  name = "${var.prefix}-dc2-ext"
  location = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
  public_ip_address_allocation = "Static"
  idle_timeout_in_minutes = 30
}

resource "azurerm_network_interface" "dc2primary" {
  name = "${var.prefix}-dc2-primary"
  location = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
  internal_dns_name_label = "${local.dc2virtual_machine_name}"

  ip_configuration {
    name = "primary"
    subnet_id = "${var.subnet_id}"
    private_ip_address_allocation = "static"
    private_ip_address = "${var.dc2private_ip_address}"
    public_ip_address_id = "${azurerm_public_ip.dc2-external.id}"
  }
}

The next step is to create our secondary Domain Controller VM. This example deploys a 2012-R2-Datacenter image.

2-VIRTUAL-MACHINE.TF

locals {
  dc2virtual_machine_name = "${var.prefix}-dc2"
  dc2virtual_machine_fqdn = "${local.dc2virtual_machine_name}.${var.active_directory_domain}"
  dc2custom_data_params = "Param($RemoteHostName = \"${local.dc2virtual_machine_fqdn}\", $ComputerName = \"${local.dc2virtual_machine_name}\")"
  dc2custom_data_content = "${local.dc2custom_data_params} ${file("${path.module}/files/winrm.ps1")}"
}

resource "azurerm_virtual_machine" "domain-controller2" {
  name = "${local.dc2virtual_machine_name}"
  location = "${var.location}"
  availability_set_id = "${var.dcavailability_set_id}"
  resource_group_name = "${var.resource_group_name}"
  network_interface_ids = ["${azurerm_network_interface.dc2primary.id}"]
  vm_size = "Standard_A1"
  delete_os_disk_on_termination = false

  storage_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer = "WindowsServer"
    sku = "2012-R2-Datacenter"
    version = "latest"
  }

  storage_os_disk {
    name = "${local.dc2virtual_machine_name}-disk1"
    caching = "ReadWrite"
    create_option = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name = "${local.dc2virtual_machine_name}"
    admin_username = "${var.admin_username}"
    admin_password = "${var.admin_password}"
    custom_data = "${local.dc2custom_data_content}"
  }

  os_profile_windows_config {
    provision_vm_agent = true
    enable_automatic_upgrades = false

    additional_unattend_config {
      pass = "oobeSystem"
      component = "Microsoft-Windows-Shell-Setup"
      setting_name = "AutoLogon"
      content = "<AutoLogon><Password><Value>${var.admin_password}</Value></Password><Enabled>true</Enabled><LogonCount>1</LogonCount><Username>${var.admin_username}</Username></AutoLogon>"
    }

    # Unattend config is to enable basic auth in WinRM, required for the provisioner stage.
    additional_unattend_config {
      pass = "oobeSystem"
      component = "Microsoft-Windows-Shell-Setup"
      setting_name = "FirstLogonCommands"
      content = "${file("${path.module}/files/FirstLogonCommands.xml")}"
    }
  }

  depends_on = ["azurerm_network_interface.dc2primary"]
}

We now join the VM(s) to the domain using a Virtual Machine Extension.

3-join-domain.TF

resource "azurerm_virtual_machine_extension" "join-domain" {
  name = "join-domain"
  location = "${azurerm_virtual_machine.domain-controller2.location}"
  resource_group_name = "${var.resource_group_name}"
  virtual_machine_name = "${azurerm_virtual_machine.domain-controller2.name}"
  publisher = "Microsoft.Compute"
  type = "JsonADDomainExtension"
  type_handler_version = "1.3"

  # NOTE: the `OUPath` field is intentionally blank, to put it in the Computers OU
  settings = <<SETTINGS
{
  "Name": "${var.active_directory_domain}",
  "OUPath": "",
  "User": "${var.active_directory_domain}\\${var.active_directory_username}",
  "Restart": "true",
  "Options": "3"
}
SETTINGS

  protected_settings = <<SETTINGS
{
  "Password": "${var.active_directory_password}"
}
SETTINGS
}

Finally, we promote this VM to a Domain Controller:

4-promote-dc.TF

// the `exit_code_hack` is to keep the VM Extension resource happy
locals {
dc2import_command = "Import-Module ADDSDeployment"
dc2user_command = "$dc2user = ${var.domainadmin_username}"
dc2password_command = "$password = ConvertTo-SecureString ${var.admin_password} -AsPlainText -Force"
dc2creds_command = "$mycreds = New-Object System.Management.Automation.PSCredential -ArgumentList $dc2user, $password"
dc2install_ad_command = "Add-WindowsFeature -name ad-domain-services -IncludeManagementTools"
dc2configure_ad_command = "Install-ADDSDomainController -Credential $mycreds -CreateDnsDelegation:$false -DomainName ${var.active_directory_domain} -InstallDns:$true -SafeModeAdministratorPassword $password -Force:$true"
dc2shutdown_command = "shutdown -r -t 10"
dc2exit_code_hack = "exit 0"
dc2powershell_command = "${local.dc2import_command}; ${local.dc2user_command}; ${local.dc2password_command}; ${local.dc2creds_command}; ${local.dc2install_ad_command}; ${local.dc2configure_ad_command}; ${local.dc2shutdown_command}; ${local.dc2exit_code_hack}"
}

resource "azurerm_virtual_machine_extension" "promote-dc" {
name = "promote-dc"
location = "${azurerm_virtual_machine_extension.join-domain.location}"
resource_group_name = "${var.resource_group_name}"
virtual_machine_name = "${azurerm_virtual_machine.domain-controller2.name}"
publisher = "Microsoft.Compute"
type = "CustomScriptExtension"
type_handler_version = "1.9"

settings = <<SETTINGS
{
"commandToExecute": "powershell.exe -Command \"${local.dc2powershell_command}\""
}
SETTINGS
}

Your MAIN.TF file should now look like this:

main.tf

# Configure the Microsoft Azure Provider
provider "azurerm" {
subscription_id = "${var.subscription_id}"
client_id = "${var.client_id}"
client_secret = "${var.client_secret}"
tenant_id = "${var.tenant_id}"
}

##########################################################
## Create Resource group Network & subnets
##########################################################
module "network" {
source = "..\\modules\\network"
address_space = "${var.address_space}"
dns_servers = ["${var.dns_servers}"]
environment_name = "${var.environment_name}"
resource_group_name = "${var.resource_group_name}"
location = "${var.location}"
dcsubnet_name = "${var.dcsubnet_name}"
dcsubnet_prefix = "${var.dcsubnet_prefix}"
wafsubnet_name = "${var.wafsubnet_name}"
wafsubnet_prefix = "${var.wafsubnet_prefix}"
rpsubnet_name = "${var.rpsubnet_name}"
rpsubnet_prefix = "${var.rpsubnet_prefix}"
issubnet_name = "${var.issubnet_name}"
issubnet_prefix = "${var.issubnet_prefix}"
dbsubnet_name = "${var.dbsubnet_name}"
dbsubnet_prefix = "${var.dbsubnet_prefix}"
}

##########################################################
## Create DC VM & AD Forest
##########################################################

module "active-directory" {
source = "..\\modules\\active-directory"
resource_group_name = "${module.network.out_resource_group_name}"
location = "${var.location}"
prefix = "${var.prefix}"
subnet_id = "${module.network.dc_subnet_subnet_id}"
active_directory_domain = "${var.prefix}.local"
active_directory_netbios_name = "${var.prefix}"
private_ip_address = "${var.private_ip_address}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
}

##########################################################
## Create IIS VMs & Join domain
##########################################################

module "iis-vm" {
source = "..\\modules\\iis-vm"
resource_group_name = "${module.active-directory.out_resource_group_name}"
location = "${module.active-directory.out_dc_location}"
prefix = "${var.prefix}"
subnet_id = "${module.network.is_subnet_subnet_id}"
active_directory_domain = "${var.prefix}.local"
active_directory_username = "${var.admin_username}"
active_directory_password = "${var.admin_password}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
vmcount = "${var.vmcount}"
}

##########################################################
## Create Secondary Domain Controller VM & Join domain
##########################################################
module "dc2-vm" {
source = "..\\modules\\dc2-vm"
resource_group_name = "${module.active-directory.out_resource_group_name}"
location = "${module.active-directory.out_dc_location}"
dcavailability_set_id = "${module.active-directory.out_dcavailabilityset}"
prefix = "${var.prefix}"
subnet_id = "${module.network.dc_subnet_subnet_id}"
active_directory_domain = "${var.prefix}.local"
active_directory_username = "${var.admin_username}"
active_directory_password = "${var.admin_password}"
active_directory_netbios_name = "${var.prefix}"
dc2private_ip_address = "${var.dc2private_ip_address}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
domainadmin_username = "${var.domainadmin_username}"
}

This is the end of PART 4. By now you should have Terraform configured and building a resource group containing a network with 5 subnets in Azure. Within the dcsubnet you should have two VMs running the Domain Controller role, with an Active Directory Domain configured. Within the issubnet you should have at least one web server running IIS, placed in an availability set and joined to the domain.

Join me again soon for PART 5, where we will be adding database VM(s) running SQL Server and joined to the domain.

P.S. If you can't wait and just want to jump to the complete example, you can find it on GitHub, where it has also been contributed to the HashiCorp official repo.


A Multi-Tier Azure Environment with Terraform including Active Directory – PART 3

In PART 2 we got Terraform to deploy a Domain Controller into your newly configured network.

In PART 3 I am going to be showing you how to deploy a web server (IIS) and join it to your newly configured Active Directory Domain.

MODULES/iis-vm

This all happens in the iis-vm module. First we create the NIC to attach to the soon-to-be-created VM, including a static public and private IP address in the appropriate "issubnet" created in PART 1.

1-NETWORK-INTERFACE.TF

resource "azurerm_public_ip" "static" {
name = "${var.prefix}-iis${1 + count.index}-ext"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
public_ip_address_allocation = "static"
count = "${var.vmcount}"
}

resource "azurerm_network_interface" "primary" {
name = "${var.prefix}-iis${1 + count.index}-int"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
internal_dns_name_label = "${var.prefix}-iis${1 + count.index}"
count = "${var.vmcount}"

ip_configuration {
name = "primary"
subnet_id = "${var.subnet_id}"
private_ip_address_allocation = "static"
private_ip_address = "10.100.30.${10 + count.index}"
public_ip_address_id = "${azurerm_public_ip.static.*.id[count.index]}"
}
}

The next step is to create our web server VM. This example deploys a 2012-R2-Datacenter image into an availability set for resiliency; you can deploy as many VMs as you want using the "vmcount" variable.

2-VIRTUAL-MACHINE.TF

resource "azurerm_availability_set" "isavailabilityset" {
name = "isavailabilityset"
resource_group_name = "${var.resource_group_name}"
location = "${var.location}"
platform_fault_domain_count = 3
platform_update_domain_count = 5
managed = true
}

resource "azurerm_virtual_machine" "iis" {
name = "${var.prefix}-iis${1 + count.index}"
location = "${var.location}"
availability_set_id = "${azurerm_availability_set.isavailabilityset.id}"
resource_group_name = "${var.resource_group_name}"
network_interface_ids = ["${element(azurerm_network_interface.primary.*.id, count.index)}"]
vm_size = "Standard_A1"
delete_os_disk_on_termination = true
count = "${var.vmcount}"

storage_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2012-R2-Datacenter"
version = "latest"
}

storage_os_disk {
name = "${var.prefix}-iis${1 + count.index}-disk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}

os_profile {
computer_name = "${var.prefix}-iis${1 + count.index}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
}

os_profile_windows_config {
provision_vm_agent = true
enable_automatic_upgrades = false
}

depends_on = ["azurerm_network_interface.primary"]
}

We now join the VM(s) to the domain using a Virtual Machine Extension. Note the use of the Splat Operator (*) with count.
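To make the splat-plus-count pattern concrete, here is a small illustrative output (not part of the module itself) showing how the splat expression collects an attribute from every instance of a counted resource:

```hcl
# Illustration only -- with vmcount = 2 and prefix = "devad", the iis
# resource is a 2-element list, and the splat expression gathers the
# "name" attribute from every instance into a list such as
# ["devad-iis1", "devad-iis2"]. element(list, count.index) then picks
# the entry that matches the extension's own count.index.
output "iis_vm_names" {
  value = "${azurerm_virtual_machine.iis.*.name}"
}
```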

3-join-domain.TF

resource "azurerm_virtual_machine_extension" "join-domain" {
name = "join-domain"
location = "${element(azurerm_virtual_machine.iis.*.location, count.index)}"
resource_group_name = "${var.resource_group_name}"
virtual_machine_name = "${element(azurerm_virtual_machine.iis.*.name, count.index)}"
publisher = "Microsoft.Compute"
type = "JsonADDomainExtension"
type_handler_version = "1.3"
count = "${var.vmcount}"

# NOTE: the `OUPath` field is intentionally blank, to put it in the Computers OU
settings = <<SETTINGS
{
"Name": "${var.active_directory_domain}",
"OUPath": "",
"User": "${var.active_directory_domain}\\${var.active_directory_username}",
"Restart": "true",
"Options": "3"
}
SETTINGS

protected_settings = <<SETTINGS
{
"Password": "${var.active_directory_password}"
}
SETTINGS
}

Finally we install IIS and some common features to help manage it.

4-install-iis.TF

resource "azurerm_virtual_machine_extension" "iis" {
count = "${var.vmcount}"
name = "install-iis"
resource_group_name = "${var.resource_group_name}"
location = "${var.location}"
virtual_machine_name = "${element(azurerm_virtual_machine.iis.*.name, count.index)}"
publisher = "Microsoft.Compute"
type = "CustomScriptExtension"
type_handler_version = "1.9"

settings = <<SETTINGS
{
"commandToExecute": "powershell Add-WindowsFeature Web-Asp-Net45;Add-WindowsFeature NET-Framework-45-Core;Add-WindowsFeature Web-Net-Ext45;Add-WindowsFeature Web-ISAPI-Ext;Add-WindowsFeature Web-ISAPI-Filter;Add-WindowsFeature Web-Mgmt-Console;Add-WindowsFeature Web-Scripting-Tools;Add-WindowsFeature Search-Service;Add-WindowsFeature Web-Filtering;Add-WindowsFeature Web-Basic-Auth;Add-WindowsFeature Web-Windows-Auth;Add-WindowsFeature Web-Default-Doc;Add-WindowsFeature Web-Http-Errors;Add-WindowsFeature Web-Static-Content;"
}
SETTINGS

depends_on = ["azurerm_virtual_machine_extension.join-domain"]
}

Your MAIN.TF file should now look like this:

main.tf

# Configure the Microsoft Azure Provider
provider "azurerm" {
subscription_id = "${var.subscription_id}"
client_id = "${var.client_id}"
client_secret = "${var.client_secret}"
tenant_id = "${var.tenant_id}"
}

##########################################################
## Create Resource group Network & subnets
##########################################################
module "network" {
source = "..\\modules\\network"
address_space = "${var.address_space}"
dns_servers = ["${var.dns_servers}"]
environment_name = "${var.environment_name}"
resource_group_name = "${var.resource_group_name}"
location = "${var.location}"
dcsubnet_name = "${var.dcsubnet_name}"
dcsubnet_prefix = "${var.dcsubnet_prefix}"
wafsubnet_name = "${var.wafsubnet_name}"
wafsubnet_prefix = "${var.wafsubnet_prefix}"
rpsubnet_name = "${var.rpsubnet_name}"
rpsubnet_prefix = "${var.rpsubnet_prefix}"
issubnet_name = "${var.issubnet_name}"
issubnet_prefix = "${var.issubnet_prefix}"
dbsubnet_name = "${var.dbsubnet_name}"
dbsubnet_prefix = "${var.dbsubnet_prefix}"
}

##########################################################
## Create DC VM & AD Forest
##########################################################

module "active-directory" {
source = "..\\modules\\active-directory"
resource_group_name = "${module.network.out_resource_group_name}"
location = "${var.location}"
prefix = "${var.prefix}"
subnet_id = "${module.network.dc_subnet_subnet_id}"
active_directory_domain = "${var.prefix}.local"
active_directory_netbios_name = "${var.prefix}"
private_ip_address = "${var.private_ip_address}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
}

##########################################################
## Create IIS VMs & Join domain
##########################################################

module "iis-vm" {
source = "..\\modules\\iis-vm"
resource_group_name = "${module.active-directory.out_resource_group_name}"
location = "${module.active-directory.out_dc_location}"
prefix = "${var.prefix}"
subnet_id = "${module.network.is_subnet_subnet_id}"
active_directory_domain = "${var.prefix}.local"
active_directory_username = "${var.admin_username}"
active_directory_password = "${var.admin_password}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
vmcount = "${var.vmcount}"
}

This is the end of PART 3. By now you should have Terraform configured and building a resource group containing a network with 5 subnets in Azure. Within the dcsubnet you should have a VM running the Domain Controller role, with an Active Directory Domain configured. Within the issubnet you should have at least one web server running IIS, placed in an availability set and joined to the domain.

Join me again soon for PART 4, where we will be adding a secondary Domain Controller VM for resiliency.

P.S. If you can't wait and just want to jump to the complete example, you can find it on GitHub, where it has also been contributed to the HashiCorp official repo.

 


A Multi-Tier Azure Environment with Terraform including Active Directory – PART 2

In PART 1 we got Terraform configured and deployed a Resource Group to Azure containing a Network with 5 subnets.

In PART 2 I am going to be showing you how to deploy a Domain Controller into your newly configured network.

MODULES/ACTIVE-DIRECTORY

This all happens in the Active-Directory module. First we create the NIC to attach to the soon-to-be-created VM, including a static public and private IP address in the appropriate "dcsubnet" created in PART 1.

1-NETWORK-INTERFACE.TF

resource "azurerm_public_ip" "dc1-external" {
name = "${var.prefix}-dc1-external"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
public_ip_address_allocation = "Static"
idle_timeout_in_minutes = 30
}

resource "azurerm_network_interface" "primary" {
name = "${var.prefix}-dc1-primary"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
internal_dns_name_label = "${local.virtual_machine_name}"

ip_configuration {
name = "primary"
subnet_id = "${var.subnet_id}"
private_ip_address_allocation = "static"
private_ip_address = "${var.private_ip_address}"
public_ip_address_id = "${azurerm_public_ip.dc1-external.id}"
}
}

The next step is to create our first Domain Controller VM. This example deploys a 2012-R2-Datacenter image.

2-VIRTUAL-MACHINE.TF

locals {
virtual_machine_name = "${var.prefix}-dc1"
virtual_machine_fqdn = "${local.virtual_machine_name}.${var.active_directory_domain}"
custom_data_params = "Param($RemoteHostName = \"${local.virtual_machine_fqdn}\", $ComputerName = \"${local.virtual_machine_name}\")"
custom_data_content = "${local.custom_data_params} ${file("${path.module}/files/winrm.ps1")}"
}

resource "azurerm_availability_set" "dcavailabilityset" {
name = "dcavailabilityset"
resource_group_name = "${var.resource_group_name}"
location = "${var.location}"
platform_fault_domain_count = 3
platform_update_domain_count = 5
managed = true
}

resource "azurerm_virtual_machine" "domain-controller" {
name = "${local.virtual_machine_name}"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
availability_set_id = "${azurerm_availability_set.dcavailabilityset.id}"
network_interface_ids = ["${azurerm_network_interface.primary.id}"]
vm_size = "Standard_A1"
delete_os_disk_on_termination = false

storage_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2012-R2-Datacenter"
version = "latest"
}

storage_os_disk {
name = "${local.virtual_machine_name}-disk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}

os_profile {
computer_name = "${local.virtual_machine_name}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
custom_data = "${local.custom_data_content}"
}

os_profile_windows_config {
provision_vm_agent = true
enable_automatic_upgrades = false

additional_unattend_config {
pass = "oobeSystem"
component = "Microsoft-Windows-Shell-Setup"
setting_name = "AutoLogon"
content = "<AutoLogon><Password><Value>${var.admin_password}</Value></Password><Enabled>true</Enabled><LogonCount>1</LogonCount><Username>${var.admin_username}</Username></AutoLogon>"
}

# Unattend config is to enable basic auth in WinRM, required for the provisioner stage.
additional_unattend_config {
pass = "oobeSystem"
component = "Microsoft-Windows-Shell-Setup"
setting_name = "FirstLogonCommands"
content = "${file("${path.module}/files/FirstLogonCommands.xml")}"
}
}
}

Now we provision the Active Directory Domain using the Custom Script Extension:

3-PROVISION-DOMAIN

// the `exit_code_hack` is to keep the VM Extension resource happy
locals {
import_command = "Import-Module ADDSDeployment"
password_command = "$password = ConvertTo-SecureString ${var.admin_password} -AsPlainText -Force"
install_ad_command = "Add-WindowsFeature -name ad-domain-services -IncludeManagementTools"
configure_ad_command = "Install-ADDSForest -CreateDnsDelegation:$false -DomainMode Win2012R2 -DomainName ${var.active_directory_domain} -DomainNetbiosName ${var.active_directory_netbios_name} -ForestMode Win2012R2 -InstallDns:$true -SafeModeAdministratorPassword $password -Force:$true"
shutdown_command = "shutdown -r -t 10"
exit_code_hack = "exit 0"
powershell_command = "${local.import_command}; ${local.password_command}; ${local.install_ad_command}; ${local.configure_ad_command}; ${local.shutdown_command}; ${local.exit_code_hack}"
}

resource "azurerm_virtual_machine_extension" "create-active-directory-forest" {
name = "create-active-directory-forest"
location = "${azurerm_virtual_machine.domain-controller.location}"
resource_group_name = "${var.resource_group_name}"
virtual_machine_name = "${azurerm_virtual_machine.domain-controller.name}"
publisher = "Microsoft.Compute"
type = "CustomScriptExtension"
type_handler_version = "1.9"

settings = <<SETTINGS
{
"commandToExecute": "powershell.exe -Command \"${local.powershell_command}\""
}
SETTINGS
}

Your MAIN.TF file should now look like this:

MAIN.TF

# Configure the Microsoft Azure Provider
provider "azurerm" {
subscription_id = "${var.subscription_id}"
client_id = "${var.client_id}"
client_secret = "${var.client_secret}"
tenant_id = "${var.tenant_id}"
}

##########################################################
## Create Resource group Network & subnets
##########################################################
module "network" {
source = "..\\modules\\network"
address_space = "${var.address_space}"
dns_servers = ["${var.dns_servers}"]
environment_name = "${var.environment_name}"
resource_group_name = "${var.resource_group_name}"
location = "${var.location}"
dcsubnet_name = "${var.dcsubnet_name}"
dcsubnet_prefix = "${var.dcsubnet_prefix}"
wafsubnet_name = "${var.wafsubnet_name}"
wafsubnet_prefix = "${var.wafsubnet_prefix}"
rpsubnet_name = "${var.rpsubnet_name}"
rpsubnet_prefix = "${var.rpsubnet_prefix}"
issubnet_name = "${var.issubnet_name}"
issubnet_prefix = "${var.issubnet_prefix}"
dbsubnet_name = "${var.dbsubnet_name}"
dbsubnet_prefix = "${var.dbsubnet_prefix}"
}

##########################################################
## Create DC VM & AD Forest
##########################################################

module "active-directory" {
source = "..\\modules\\active-directory"
resource_group_name = "${module.network.out_resource_group_name}"
location = "${var.location}"
prefix = "${var.prefix}"
subnet_id = "${module.network.dc_subnet_subnet_id}"
active_directory_domain = "${var.prefix}.local"
active_directory_netbios_name = "${var.prefix}"
private_ip_address = "${var.private_ip_address}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
}

This is the end of PART 2. By now you should have Terraform configured and building a resource group containing a network with 5 subnets in Azure. Within the dcsubnet you should have a VM running the Domain Controller role, with an Active Directory Domain configured.

Join me again soon for PART 3, where we will be adding web server VM(s) running IIS and joined to the domain.

P.S. If you can't wait and just want to jump to the complete example, you can find it on GitHub, where it has also been contributed to the HashiCorp official repo.


A Multi-Tier Azure Environment with Terraform including Active Directory – PART 1

Infrastructure as Code (IaC) is common in DevOps cultures and gives us the ability to manage configurations and automatically provision infrastructure alongside deployments. IaC is an approach of defining infrastructure and network components through descriptive or high-level code; i.e., programmable infrastructure. Various tools such as Vagrant, Ansible, Docker, Chef, Terraform, and Puppet, independently or in combination, make life easy by automating infrastructure provisioning and deployment.

Terraform is one such tool. In this guide, we will use Terraform to provision a multi-tier infrastructure on Azure, including Active Directory. We will create all the components from scratch, including the resource group, VNet, subnets, NICs, security groups, VMs, etc.

Terraform must be installed on your system. You can find installation instructions in the article: Install and configure Terraform.

For Terraform to provision resources in Azure, it must be able to authenticate. In this example we will create a Client ID and Client Secret in Azure AD that Terraform can use as credentials to provision resources in Azure.
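One way to obtain these credentials is with the Azure CLI; a sketch is shown below (the service principal name is a placeholder, and flag defaults vary slightly between CLI versions):

```shell
# Create a service principal for Terraform ("terraform-sp" is a placeholder name).
# The output's appId, password and tenant values map to client_id,
# client_secret and tenant_id in the Terraform provider block.
az ad sp create-for-rbac --name "terraform-sp" --role Contributor

# The subscription id itself:
az account show --query id --output tsv
```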

Using Terraform, we will provision the following things:

  • A resource group with one virtual network in it
  • wafsubnet – to contain a WAF of your choosing (not created as part of this example)
  • rpsubnet – to contain a reverse proxy of your choosing (not created as part of this example)
  • issubnet – to contain the IIS VM(s) (created as part of this example)
  • dbsubnet – to contain the MS SQL VM(s) (created as part of this example)
  • dcsubnet – to contain a pair of Domain Controllers
  • DC1 – primary Domain Controller holding the FSMO roles, with static public & private IP addresses
  • DC2 – secondary Domain Controller joined to the domain, with static public & private IP addresses
  • IIS VM(s) – scalable using the count function; IIS & management tools installed; Windows Server 2012 R2; added to an availability set; static public & private IP addresses; joined to the domain
  • SQL VM(s) – scalable using the count function; SQL 2014 SP2 & SSMS installed; Windows Server 2012 R2; Windows Failover Clustering installed; added to an availability set; static public & private IP addresses; joined to the domain
  • An Azure internal load balancer (ILB) with the SQL VM(s) added to the backend pool, so it can be expanded to use AlwaysOn capability

This example also includes an environments folder containing a .tfvars file, allowing this infrastructure to be deployed throughout a pipeline, e.g. Dev, QA, UAT, etc.
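A hypothetical layout for that environments folder might look like this (only devci1.tfvars is shown in this article; the other file names are illustrative):

```
environments/
  devci1.tfvars
  qa.tfvars
  uat.tfvars
```

Each pipeline stage then points Terraform at its own values file with the -var-file flag.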

First, we will create the variables file. In the VARIABLES.TF file, we configure the Azure Provider and declare all the variables used across our Terraform configurations.

# Provider info
variable subscription_id {}

variable client_id {}
variable client_secret {}
variable tenant_id {}

# Generic info
variable location {}

variable resource_group_name {}
variable environment_name {}

# Network
variable address_space {}

variable dns_servers {
type = "list"
}

variable wafsubnet_name {}
variable wafsubnet_prefix {}
variable rpsubnet_name {}
variable rpsubnet_prefix {}
variable issubnet_name {}
variable issubnet_prefix {}
variable dbsubnet_name {}
variable dbsubnet_prefix {}
variable dcsubnet_name {}
variable dcsubnet_prefix {}

# Active Directory & Domain Controller
variable prefix {}
variable private_ip_address {}
variable admin_username {}
variable admin_password {}

# IIS Servers
variable vmcount {}

# Domain Controller 2
variable "dc2private_ip_address" {}
variable "domainadmin_username" {}

# SQL LB
variable "lbprivate_ip_address" {}
# SQL DB Servers
variable sqlvmcount {}

In the VARIABLES.TF file we haven't specified any default values for the variables. We will assign values to them by declaring them in another file: ENVIRONMENTNAME.TFVARS.

# Provider info
subscription_id = "XXXXXXXXXXXXXXX"
client_id = "XXXXXXXXXXXXXXX"
client_secret = "XXXXXXXXXXXXXXX"
tenant_id = "XXXXXXXXXXXXXXX"

# Generic info
location = "West Europe"
resource_group_name = "productname"
environment_name = "devci1"

# Network
address_space = "10.100.0.0/16"
dns_servers = ["10.100.1.4", "10.100.1.5"]
dcsubnet_name = "sndc"
dcsubnet_prefix = "10.100.1.0/24"
wafsubnet_name = "snwf"
wafsubnet_prefix = "10.100.10.0/24"
rpsubnet_name = "snrp"
rpsubnet_prefix = "10.100.20.0/24"
issubnet_name = "snis"
issubnet_prefix = "10.100.30.0/24"
dbsubnet_name = "sndb"
dbsubnet_prefix = "10.100.50.0/24"

# Active Directory & Domain Controller 1
prefix = "devad"
private_ip_address = "10.100.1.4"
dc2private_ip_address = "10.100.1.5"
admin_username = "AdminTest"
admin_password = "Password123"

# IIS Servers
vmcount = "1"

# Domain Controller 2
domainadmin_username = "'AdminTest@devad.local'"

# SQL LB
lbprivate_ip_address = "10.100.50.20"

# SQL DB Servers
sqlvmcount = "1"

This example makes use of 5 modules:

  • modules/active-directory
    This module creates an Active Directory Forest on a single Virtual Machine.
  • modules/network
    This module creates the Network with 5 subnets.
    In a Production environment there would be Network Security Rules limiting which ports can be used between these subnets; to keep this demonstration simple, these have been omitted.
  • modules/dc2-vm
    This module creates a secondary Domain Controller VM for resiliency, bound to the Active Directory Domain created by the active-directory module above.
  • modules/iis-vm
    This module creates the IIS VMs – choose how many you want using count.
  • modules/sql-vm
    This module creates the SQL VMs – choose how many you want using count. It also creates the ILB so you could scale out to use AlwaysOn.

The modules are all called from a MAIN.TF file:

# Configure the Microsoft Azure Provider
provider "azurerm" {
subscription_id = "${var.subscription_id}"
client_id = "${var.client_id}"
client_secret = "${var.client_secret}"
tenant_id = "${var.tenant_id}"
}

##########################################################
## Create Resource group Network & subnets
##########################################################
module "network" {
source = "..\\modules\\network"
address_space = "${var.address_space}"
dns_servers = ["${var.dns_servers}"]
environment_name = "${var.environment_name}"
resource_group_name = "${var.resource_group_name}"
location = "${var.location}"
dcsubnet_name = "${var.dcsubnet_name}"
dcsubnet_prefix = "${var.dcsubnet_prefix}"
wafsubnet_name = "${var.wafsubnet_name}"
wafsubnet_prefix = "${var.wafsubnet_prefix}"
rpsubnet_name = "${var.rpsubnet_name}"
rpsubnet_prefix = "${var.rpsubnet_prefix}"
issubnet_name = "${var.issubnet_name}"
issubnet_prefix = "${var.issubnet_prefix}"
dbsubnet_name = "${var.dbsubnet_name}"
dbsubnet_prefix = "${var.dbsubnet_prefix}"
}

When you start from scratch, first and foremost you need a resource group. So in this module we create the resource group, the virtual network, and the subnets.

resource "azurerm_resource_group" "network" {
name = "${var.resource_group_name}-${var.environment_name}"
location = "${var.location}"
}

resource "azurerm_virtual_network" "main" {
name = "${var.resource_group_name}-${var.environment_name}-net"
address_space = ["${var.address_space}"]
location = "${var.location}"
resource_group_name = "${var.resource_group_name}-${var.environment_name}"
dns_servers = ["${var.dns_servers}"]

depends_on = ["azurerm_resource_group.network"]
}

resource "azurerm_subnet" "dc-subnet" {
name = "${var.resource_group_name}-${var.dcsubnet_name}-${var.environment_name}"
resource_group_name = "${var.resource_group_name}-${var.environment_name}"
virtual_network_name = "${azurerm_virtual_network.main.name}"
address_prefix = "${var.dcsubnet_prefix}"
}

resource "azurerm_subnet" "waf-subnet" {
name = "${var.resource_group_name}-${var.wafsubnet_name}-${var.environment_name}"
resource_group_name = "${var.resource_group_name}-${var.environment_name}"
virtual_network_name = "${azurerm_virtual_network.main.name}"
address_prefix = "${var.wafsubnet_prefix}"
}

resource "azurerm_subnet" "rp-subnet" {
name = "${var.resource_group_name}-${var.rpsubnet_name}-${var.environment_name}"
resource_group_name = "${var.resource_group_name}-${var.environment_name}"
virtual_network_name = "${azurerm_virtual_network.main.name}"
address_prefix = "${var.rpsubnet_prefix}"
}

resource "azurerm_subnet" "is-subnet" {
name = "${var.resource_group_name}-${var.issubnet_name}-${var.environment_name}"
resource_group_name = "${var.resource_group_name}-${var.environment_name}"
virtual_network_name = "${azurerm_virtual_network.main.name}"
address_prefix = "${var.issubnet_prefix}"
}

resource "azurerm_subnet" "db-subnet" {
name = "${var.resource_group_name}-${var.dbsubnet_name}-${var.environment_name}"
resource_group_name = "${var.resource_group_name}-${var.environment_name}"
virtual_network_name = "${azurerm_virtual_network.main.name}"
address_prefix = "${var.dbsubnet_prefix}"
}
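With the configuration in place, a typical workflow to deploy it looks something like this (a sketch; the .tfvars path is illustrative and should point at your environments folder):

```shell
terraform init                                              # download the azurerm provider and modules
terraform plan -var-file="..\environments\devci1.tfvars"    # preview the changes
terraform apply -var-file="..\environments\devci1.tfvars"   # create the resources in Azure
```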

This is the end of PART 1. By now you should have Terraform configured and building a resource group containing a network with 5 subnets in Azure.

Join me again soon for PART 2, where we will be adding a VM that will be our primary Domain Controller.

P.S. If you can't wait and just want to jump to the complete example, you can find it on GitHub, where it has also been contributed to the HashiCorp official repo.
