Which Azure Messaging System Should I Use?

Azure currently offers a number of messaging services:

  • Storage Queue
  • Service Bus Queue
  • Service Bus Topic
  • Event Hubs
  • Event Grid
  • IoT Hub

This article gives you a general overview.

Storage Queue

Azure Queue storage is a service for storing large numbers of messages that can be accessed from anywhere in the world via authenticated calls using HTTP or HTTPS. A single queue message can be up to 64 KB in size, and a queue can contain millions of messages, up to the total capacity limit of a storage account. The maximum time that a message can remain in the queue is 7 days.

Common uses of Queue storage include:

  • Creating a backlog of work to process asynchronously
  • Passing messages from an Azure web role to an Azure worker role
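
The backlog pattern above can be sketched with a toy in-memory queue. This is plain Python, not the Azure SDK; the class and names are purely illustrative, and only the 64 KB message limit comes from the service itself.

```python
from collections import deque

class WorkQueue:
    """Minimal in-memory model of the queue-backlog pattern: a producer
    enqueues work items and a worker drains them later, asynchronously.
    (Hypothetical stand-in for an Azure Storage queue.)"""

    MAX_MESSAGE_BYTES = 64 * 1024  # Storage queue messages are capped at 64 KB

    def __init__(self):
        self._messages = deque()

    def enqueue(self, body: str) -> None:
        if len(body.encode("utf-8")) > self.MAX_MESSAGE_BYTES:
            raise ValueError("message exceeds 64 KB limit")
        self._messages.append(body)

    def dequeue(self):
        return self._messages.popleft() if self._messages else None

# A producer (e.g. a web role) creates a backlog of work...
q = WorkQueue()
for i in range(3):
    q.enqueue(f"resize-image-{i}")

# ...and a worker role drains it later, decoupled from the producer.
processed = []
msg = q.dequeue()
while msg is not None:
    processed.append(msg)
    msg = q.dequeue()
print(processed)  # ['resize-image-0', 'resize-image-1', 'resize-image-2']
```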

Service Bus Queue

Messages are sent to and received from queues. Queues enable you to store messages until the receiving application is available to receive and process them.

Messages in queues are ordered and timestamped on arrival. Once accepted, the message is held safely in redundant storage. Messages are delivered in pull mode, which delivers messages on request.

Service Bus Topic

In contrast to queues, in which each message is processed by a single consumer, topics and subscriptions provide a one-to-many form of communication in a publish/subscribe pattern. Useful for scaling to large numbers of recipients, each published message is made available to every subscription registered with the topic, subject to filter rules that can be set on a per-subscription basis.

Messages are sent to a topic in the same way they are sent to a queue, but they are not received from the topic directly. Instead, they are received from subscriptions. A topic subscription resembles a virtual queue that receives copies of the messages sent to the topic, and messages are received from a subscription in exactly the same way they are received from a queue.

By way of comparison, the message-sending functionality of a queue maps directly to a topic and its message-receiving functionality maps to a subscription. Among other things, this feature means that subscriptions support the same patterns described earlier in this section with regard to queues: competing consumer, temporal decoupling, load levelling, and load balancing.
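
The topic/subscription fan-out described above can be sketched as a toy in-memory model. This is not the Service Bus API; the class, subscription names, and filters are all illustrative.

```python
class Topic:
    """Toy model of Service Bus pub/sub: every subscription has its own
    virtual queue, and a per-subscription filter decides which published
    messages it receives a copy of."""

    def __init__(self):
        self._subscriptions = {}

    def subscribe(self, name, filter_fn=lambda msg: True):
        self._subscriptions[name] = {"filter": filter_fn, "queue": []}

    def publish(self, msg):
        # One send, many deliveries: each matching subscription gets a copy.
        for sub in self._subscriptions.values():
            if sub["filter"](msg):
                sub["queue"].append(dict(msg))

    def receive(self, name):
        q = self._subscriptions[name]["queue"]
        return q.pop(0) if q else None

orders = Topic()
orders.subscribe("all-orders")                                 # no filter
orders.subscribe("high-value", lambda m: m["amount"] >= 1000)  # filtered

orders.publish({"id": 1, "amount": 50})
orders.publish({"id": 2, "amount": 5000})

print(orders.receive("all-orders")["id"])   # 1 -- receives every message
print(orders.receive("high-value")["id"])   # 2 -- only the high-value one
```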

Event Hubs

Azure Event Hubs is a Big Data streaming platform and event ingestion service, capable of receiving and processing millions of events per second. Event Hubs can process and store events, data, or telemetry produced by distributed software and devices. Data sent to an event hub can be transformed and stored using any real-time analytics provider or batching/storage adapters.
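
One idea that makes this scale is partitioning: events carrying the same partition key always land on the same partition, preserving per-sender ordering while spreading load. The sketch below illustrates the concept only; Event Hubs' actual hash function is an internal detail, and the names here are hypothetical.

```python
import hashlib

def partition_for(key: str, partition_count: int = 4) -> int:
    """Map a partition key to a partition index, as an ingestion
    service might (illustrative hash, not Event Hubs' real one)."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % partition_count

# Events with the same key always land on the same partition,
# so readings from one device stay in order.
partitions = [[] for _ in range(4)]
for device, reading in [("device-1", 20.1), ("device-2", 19.8), ("device-1", 20.4)]:
    partitions[partition_for(device)].append((device, reading))

p = partition_for("device-1")
print([r for d, r in partitions[p] if d == "device-1"])  # [20.1, 20.4]
```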

Common scenarios for Event Hubs include:

  • Anomaly detection (fraud/outliers)
  • Application logging
  • Analytics pipelines, such as clickstreams
  • Live dashboarding
  • Archiving data
  • Transaction processing
  • User telemetry processing
  • Device telemetry streaming

Event Grid

Azure Event Grid allows you to easily build applications with event-based architectures. You select the Azure resource you would like to subscribe to, and give the event handler or WebHook endpoint to send the event to. Event Grid has built-in support for events coming from Azure services, like storage blobs and resource groups. Event Grid also has custom support for application and third-party events, using custom topics and custom webhooks.

You can use filters to route specific events to different endpoints, multicast to multiple endpoints, and ensure your events are reliably delivered.
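
The filter-based routing can be pictured with a small sketch. The event-type and subject prefix/suffix fields mirror real Event Grid filtering concepts, but the function and dictionary shapes below are simplified illustrations, not the Event Grid schema.

```python
def matches(subscription, event):
    """Toy version of Event Grid's built-in filters: event type plus
    subject begins-with / ends-with matching (a sketch, not the real
    evaluation logic)."""
    allowed = subscription.get("event_types")
    if allowed and event["eventType"] not in allowed:
        return False
    if not event["subject"].startswith(subscription.get("subject_begins_with", "")):
        return False
    return event["subject"].endswith(subscription.get("subject_ends_with", ""))

# A subscription that only wants new JPEGs in the "images" container.
sub = {
    "event_types": ["Microsoft.Storage.BlobCreated"],
    "subject_begins_with": "/blobServices/default/containers/images/",
    "subject_ends_with": ".jpg",
}
event = {
    "eventType": "Microsoft.Storage.BlobCreated",
    "subject": "/blobServices/default/containers/images/cat.jpg",
}
print(matches(sub, event))  # True
```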

IoT Hub

IoT Hub is a managed service, hosted in the cloud, that acts as a central message hub for bi-directional communication between your IoT application and the devices it manages. You can use Azure IoT Hub to build IoT solutions with reliable and secure communications between millions of IoT devices and a cloud-hosted solution backend. You can connect virtually any device to IoT Hub.

IoT Hub supports communications both from the device to the cloud and from the cloud to the device. IoT Hub supports multiple messaging patterns such as device-to-cloud telemetry, file upload from devices, and request-reply methods to control your devices from the cloud. IoT Hub monitoring helps you maintain the health of your solution by tracking events such as device creation, device failures, and device connections.
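
The request-reply pattern mentioned above (modelled on IoT Hub direct methods) can be sketched in-memory. The class, method names, and status codes below are illustrative, not the IoT Hub SDK.

```python
class Device:
    """Toy device that registers handlers for cloud-to-device
    request-reply commands, in the spirit of IoT Hub direct methods."""

    def __init__(self):
        self._methods = {}

    def on_method(self, name, handler):
        self._methods[name] = handler

    def invoke(self, name, payload):
        # The solution backend calls a named method and waits for a reply.
        handler = self._methods.get(name)
        if handler is None:
            return {"status": 404, "payload": None}
        return {"status": 200, "payload": handler(payload)}

device = Device()
device.on_method("reboot", lambda p: {"rebooted": True, "delay": p["delay"]})

reply = device.invoke("reboot", {"delay": 30})
print(reply["status"], reply["payload"]["rebooted"])  # 200 True
```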

IoT Hub’s capabilities help you build scalable, full-featured IoT solutions such as managing industrial equipment used in manufacturing, tracking valuable assets in healthcare, and monitoring office building usage.

What to use when?

Azure provides many options for messaging and for decoupling applications. Which one should you use, and when?

|                           | Event Grid | Event Hubs | IoT Hub | Service Bus Topic | Service Bus Queue | Storage Queue |
| Event ingestion           | X          | X          | X       |                   |                   |               |
| Device management         |            |            | X       |                   |                   |               |
| Messaging                 | X          | X          | X       | X                 | X                 | X             |
| Multiple consumers        | X          | X          | X       | X                 |                   |               |
| Multiple senders          | X          | X          | X       | X                 | X                 | X             |
| Use for decoupling        | X          | X          |         | X                 | X                 | X             |
| Use for publish/subscribe | X          |            |         | X                 |                   |               |
| Max message size          | 64 KB      | 64 KB      | 256 KB  | 1 MB              | 256 KB / 1 MB     | 64 KB         |

Azure Troubleshooting – VM agent is unable to communicate with the Azure Backup Service

When enabling Backup and Recovery services for an Azure VM, you may get a deployment failed error message:

VM agent is unable to communicate with the Azure Backup Service

With the following error code:

UserErrorGuestAgentStatusUnavailable

This is often because the Azure VM Agent is in a failed provisioning state.

Check Azure VM Agent is Installed

Azure services such as Azure Backup require the Azure VM Agent to be installed.

Windows

To check whether the Azure VM agent is installed on a Windows VM, log on to the VM, open Task Manager, click the Details tab, and look for a process named WindowsAzureGuestAgent.exe. The presence of this process indicates the VM agent is installed.

The Azure VM Agent is installed by default on any Windows VM deployed from an Azure Marketplace image. Manual installation may be necessary when you create a custom VM image that is deployed to Azure. You can manually install the Azure VM Agent with a Windows installer package.

To manually install the Windows VM Agent, download the latest VM Agent installer from this location.

The VM Agent can be installed by double-clicking the Windows installer file. For an automated or unattended installation of the VM agent, change the name of the downloaded installer if necessary and run the following command:

msiexec.exe /i WindowsAzureVmAgent.2.7.41491.885_180531-1125.fre.msi /quiet

As of this writing, the Windows Azure VM Agent is version 2.7.41491.885.

Linux

NOTE: The following commands are based upon the CentOS 6 operating system.

SSH into the Linux VM.

Check to see if the Azure VM Agent is installed by running the following command:
sudo yum list WALinuxAgent

If the Azure VM Agent is installed, it should return a result similar to the below:

Loaded plugins: security
Installed Packages
WALinuxAgent.noarch 2.2.18-1.el6 @openlogic

Check for available updates to the Azure VM Agent with the following command:

sudo yum check-update WALinuxAgent

If necessary, install the latest package version:

sudo yum install WALinuxAgent

Enable Azure VM Agent via PowerShell

 

Once the Azure VM Agent has been installed on the virtual machine, you must use Azure PowerShell to update the ProvisionGuestAgent property so Azure knows the VM has the agent installed.

Azure RM

Open PowerShell as Administrator

Connect to Azure RM account and enter credentials:

Connect-AzureRMAccount

Run the following commands to update the ProvisionGuestAgent property to be set as True:

$vm = Get-AzureRmVM -ResourceGroupName <resource group name> -Name <VM name> -DisplayHint Expand

$vm.OSProfile.WindowsConfiguration.ProvisionVMAgent = $true

Update-AzureRmVM -ResourceGroupName <resource group name> -VM $vm

If you run the Get-AzureRmVM command again, the -DisplayHint Expand will show the Windows Configuration -> ProvisionVMAgent property set as True.

Get-AzureRmVM -ResourceGroupName <resource group name> -Name <VM name> -DisplayHint Expand

Azure Classic

Classic Azure deployments will not be accessible through the Azure RM PowerShell. You must use the Classic Azure PowerShell module instead.

NOTE: User must be a Co-Administrator on Azure Subscription to be able to connect to Azure Classic PowerShell.

Open PowerShell as Administrator

Connect to Azure Classic PowerShell and enter credentials:

Add-AzureAccount

Run the following commands to update the ProvisionGuestAgent property to be set as True:

$vm = Get-AzureVM -ServiceName <cloud service name> -Name <VM name>

$vm.VM.ProvisionGuestAgent = $true

Update-AzureVM -Name <VM name> -VM $vm.VM -ServiceName <cloud service name>

The command should say it was successful once complete.

NOTE: Classic Azure PowerShell cannot display the value of the ProvisionGuestAgent property, so you cannot see whether it is True or False; you have to rely on the message saying the update succeeded.

Migrate Data from AWS to Azure

Azure and AWS both offer reliable, scalable and secure hosting environments for enterprise workloads in the cloud. Many organisations have already adopted a “cloud first” policy to leverage these benefits and have gone all-in with either Azure or AWS. But what if something changes and a company wants to leave that cloud service provider?

Why Move from One Cloud to Another?

Reasons why users in Azure or AWS would want to switch to the competing cloud service provider include:

1. Changes in the terms and conditions: Initial cloud adoption for enterprises often depends on a unique value proposition offered by a vendor. However, changes in the terms and conditions of a cloud service provider over time could lead to cloud lock-in concerns for organizations.

2. Application portability: Another cloud lock-in possibility is the heterogeneous platforms used by different cloud vendors, which can affect application portability. For example, workloads that use AWS community-contributed Amazon Machine Images (AMIs) or applications configured to make Amazon S3 API calls might limit the ability of enterprises to use services outside of AWS. In such a case, it might be desirable to migrate out. Another lock-in example is how Azure Site Recovery provides automated mechanisms for moving workloads from AWS to Azure, while migrating in the opposite direction requires multiple complex manual steps or third-party tools.

3. Contract renewal: Organizations often reevaluate hosting options during the contract renewal period to explore differentiating features offered by competing service providers. With new products and features being introduced by cloud service providers, customers have more choice than ever when selecting an optimal hosting platform for their applications.

4. Cost-benefits: Services offered at premium rates by one service provider could be available at competitive rates with a different provider. For example, Azure Hybrid Benefit along with reserved instances can provide up to 80% cost saving and offers a great value proposition for organizations with pre-existing investments in Microsoft licenses. AWS, on the other hand, provides Microsoft Licensing on AWS, in which customers can use their Microsoft licenses with or without Software Assurance to reduce cloud hosting charges.

5. Compliance standards: The compliance standards to be met for hosting data and application with a cloud service provider or on-premises varies across different industry sectors. Any instance of non-compliance flagged during an audit could lead to re-hosting or migration of application/data to a compliant platform.

6. Data consolidation: In hybrid cloud architectures, a company’s data could exist across public/private clouds or on-premises deployments. Consolidation of data and seamless management is important for optimizing the spend on data storage and operations. One example of this is in the case of M&A (Mergers & acquisitions), where companies with different platforms need to consolidate.

Cloud Migration Challenges between AWS and Azure

Data is the nexus of enterprise IT, and migration from AWS to Azure and vice versa is one of the most challenging aspects when implementing multicloud architectures. Let’s look at some of the challenges.

1. Data Migration: The fact that Azure and AWS use proprietary storage offerings and APIs makes the data migration process complex. Leveraging third-party tools for data transfer can lead to integration challenges, as the two platforms use diverse technologies in the backend. And the entire process of transitioning between the two clouds may not be feasible for business-critical applications due to the time and cost constraints involved.

2. Secure Data Transfer: Secure transfer of data between Azure and AWS should be done using a process that meets industry-specific governance and compliance standards. Direct download and upload of data can lead to security concerns as the data at rest and in transit should always be encrypted. While Azure Site Recovery offers a feasible solution for large scale secure migration between AWS and Azure, it requires additional infrastructure to be set up in AWS, which may not be feasible in cost-sensitive environments.

3. Access Control Privileges: When data is migrated between AWS and Azure platforms, administrators need to ensure that consistent data access and protection policies are applied in the destination as well. Security and access control are configured using different sets of tools and policies in AWS and Azure. While AWS depends on IAM user policies and resource-based policies for Amazon S3 access, Azure storage uses RBAC assigned to Azure AD users. Hence, redesign and reconfiguration of the entire system might be required to maintain the same level of security after migration. Management of data across AWS and Azure environments using unified tools and interfaces is also a major challenge.

4. Other Challenges: There are a few additional challenges to the migration process between platforms. You will need a way to evaluate the costs and calculate the differences, as well as a way to measure and maintain the same (or acceptable) performance and SLAs across the different devices, instances, VMs, storage types, and so on in the new platform.

Build an FTP Site in Azure with Azure Storage File Share

If you want to host an FTP site in Azure, there is currently no dedicated resource for this, so the next best option is to spin up a virtual machine and use IIS to run the FTP site. It's also possible to point the FTP site at an Azure Storage file share to host the files.

Virtual Machine

When you create your VM, you will also need to allow traffic on port 3389 so that you can connect to it over Remote Desktop.

Once your VM has been provisioned, go to its networking settings in the Azure portal and add port 21 and the range 9990-10000 to its inbound ports.

Azure Storage File Share

Within the Azure Storage account created when you provisioned the VM, go to Files and add a file share. Once it has been created, click on it in the portal and then click Connect. This opens a blade containing PowerShell commands for adding the file share as a mapped drive on a Windows machine; copy this code and save it somewhere, as you'll need it soon.

IIS

Log into the VM using the admin credentials set at creation and open PowerShell. Run the code you copied from the file share blade in the Azure portal to map the share as a drive.

Next, install IIS on the server. This can be done from the Server Manager dashboard by choosing Add Roles and Features from the Manage menu.

  • Proceed to the Installation Type step and confirm Role-based or feature-based installation.
  • Proceed to the Server Roles step and check the Web Server (IIS) role. Note that it will already be checked if IIS was previously installed. Confirm installing the IIS Management Console tool.
  • Proceed to the Web Server Role (IIS) > Role Services step and check the FTP Server role service. Uncheck the Web Server role service if you do not need it.
  • Proceed to the end of the wizard and click Install.
  • Wait for the installation to complete.

In order for your FTP server to play nicely with the Azure Storage file share, you need to create a user capable of logging into the file share, because the UNC path needs to be referenced rather than the mapped drive added by the PowerShell commands above, as explained here.

Users can be added through Tools > Computer Management in the Server Manager. The username should be the name of the storage account; because usernames can't be over 20 characters long or share the name of the VM, this is the reason for the restrictions in naming the storage account earlier. The password should be the access key for the storage account, and "User cannot change password" and "Password never expires" should be selected. This user should then be added to the IIS_IUSRS group.

Once the connecting user has been added, you need to create the FTP site. This is done from Tools > Internet Information Services (IIS) Manager in the Server Manager.

First, add the ports that you opened in the Azure firewall to the FTP Firewall Support setting at the server level; the external IP address should be that of your VM.

Next, right-click on Sites and add a new FTP site. The physical path parameter should be the UNC path to your file share, rather than the drive alias used by Windows.

When creating the FTP site, disallow anonymous authentication and use basic authentication. Users can be granted access by adding them in the local users step above and either assigning them to a relevant group or granting all users of the machine access to the FTP site.

You will now have an FTP site set up and available, but if you try to connect to it you'll get an access denied error. This is because FTP on IIS fails to pass through the credentials, so you need to set them explicitly. This is done from the Basic Settings dialog in the right-hand menu bar of the FTP site: within the Connect As section, enter the username and password of the user you created earlier (the name of your storage account and the access key), then save and test the connection settings.

The FTP site should now be up and running and uploaded files saved on the Azure file share!

 

Serverless on Azure – Deploying Azure Function using Terraform

Why?

The idea of running our own web servers, sizing VMs and patching OSes seems so old school. For simple web apps, and for seeing whether our new service will be successful, we want hosting that is as low-cost as possible, but we also want the ability to scale elastically should we turn into the next big thing!

How?

In this example, we'll use Azure Functions within an App Service Plan.

We’ll manage this whole stack with one Terraform configuration, practicing what we preach with Infrastructure as Code.

Prerequisites

The below example assumes you have Terraform configured for use with your Azure Subscription.

Terraform definition

The desired resource is an Azure Function Application. There’s a handy Terraform template here.

Unfortunately, this Terraform template doesn’t include Azure Application Insights, which has its own template here.

Create a new file named "azure_function.tf" and place in it this code, which is a combination of the two templates above.

resource "azurerm_resource_group" "test" {
  name     = "tf-azfunc-test"
  location = "WestEurope"
}

resource "random_id" "server" {
  keepers = {
    # Generate a new id each time we switch to a new Azure Resource Group
    rg_id = "${azurerm_resource_group.test.name}"
  }

  byte_length = 8
}

resource "azurerm_storage_account" "test" {
  name                     = "${random_id.server.hex}"
  resource_group_name      = "${azurerm_resource_group.test.name}"
  location                 = "${azurerm_resource_group.test.location}"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_app_service_plan" "test" {
  name                = "azure-functions-test-service-plan"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  kind                = "FunctionApp"

  sku {
    tier = "Dynamic"
    size = "Y1"
  }
}

resource "azurerm_application_insights" "test" {
  name                = "test-terraform-insights"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  application_type    = "Web"
}

resource "azurerm_function_app" "test" {
  name                      = "test-terraform"
  location                  = "${azurerm_resource_group.test.location}"
  resource_group_name       = "${azurerm_resource_group.test.name}"
  app_service_plan_id       = "${azurerm_app_service_plan.test.id}"
  storage_connection_string = "${azurerm_storage_account.test.primary_connection_string}"

  app_settings {
    "AppInsights_InstrumentationKey" = "${azurerm_application_insights.test.instrumentation_key}"
  }
}

This Azure Function and Application Insight template only differs from the Terraform documentation in two ways.

1. An Azure Function is associated with an Application Insights instance by adding the Instrumentation Key to the App Settings of the Azure Function application.

app_settings {
  "AppInsights_InstrumentationKey" = "${azurerm_application_insights.test.instrumentation_key}"
}

2. Using a random ID for the Azure Storage Account gives it a better chance of being a unique URL.

resource "random_id" "server" {
  keepers = {
    # Generate a new id each time we switch to a new Azure Resource Group
    rg_id = "${azurerm_resource_group.test.name}"
  }

  byte_length = 8
}

Testing that the Function works with App Insights

Once the above code is deployed via Terraform, open the Azure Function and create a new JavaScript webhook.

Azure Function

Run the default function a few times as-is.

Go look at the App Insights resource and see that the function was run a few times.

App Insights


Summary

The few lines of Terraform code above give us a working Azure Functions resource group, complete with storage and Application Insights.

You have to love the awesome Terraform Azure Integration and I hope this inspires you to deploy your own Azure Function today!

Upload to Azure Blob Storage using a PowerShell GUI

I put this together because internal users needed to upload content to Azure Blob Storage. However, the following requirements were mandated:

  • The data they were uploading needed to be put in a specific container
  • The users were not to be given keys or permissions to the Azure Blob Storage account

Andrews-Super-Uploader.ps1 is a GUI wrapper for the Microsoft Azure AZCopy tool (AZCopy.exe) to simplify the process of uploading data to Azure Blob Storage.

Requirements:
  • The script will work natively in PowerShell 2.0+
  • The script requires the Microsoft Azure AZCopy Tool in its default installation path (get it here)
Usage:

There are no parameters or switches; simply execute the script.

The main section you will need to edit in the code is this:

$DestList = [collections.arraylist]@(
    [pscustomobject]@{Name='CONTENT / MANIFEST CHINA';Value="https://XXX.blob.core.windows.net/tests-data/Products?SASKEY"}
    [pscustomobject]@{Name='CONTENT / MANIFEST QA1';Value="https://XXX.blob.core.windows.net/tests-data/Products?SASKEY"}
    [pscustomobject]@{Name='CONTENT / MANIFEST UAT1';Value="https://XXX.blob.core.windows.net/tests-data/Products?SASKEY"}
)
$DropDownBox = New-Object System.Windows.Forms.ComboBox
$DropDownBox.Location = New-Object System.Drawing.Size(109,126)
$DropDownBox.Size = New-Object System.Drawing.Size(479,20)
$DropDownBox.DropDownHeight = 200
$Form.Controls.Add($DropDownBox)
$DropDownBox.DataSource = $DestList
$DropDownBox.DisplayMember = 'Name'

$SourceList1 = [collections.arraylist]@(
    [pscustomobject]@{Name='PRODUCT FOLDER 1';Value="D:\TFS\ProductDownloads\PRODUCT FOLDER 1"}
    [pscustomobject]@{Name='PRODUCT FOLDER 2';Value="D:\TFS\ProductDownloads\PRODUCT FOLDER 2"}
    [pscustomobject]@{Name='PRODUCT FOLDER 3';Value="D:\TFS\ProductDownloads\PRODUCT FOLDER 3"}
)

Just add your Azure Blob Storage SAS key(s), your local source(s) and your destination container(s), and amend the names as required.

Screenshot:

Azure Blob Uploader

Once you have it configured the way you want, hide the config away from end users by converting the script to an EXE using PS2EXE.

The full code for this can be found in my GitHub Repo

Inspired by MVP Chris Goosen’s PST Import Tool


A Multi-Tier Azure Environment with Terraform including Active Directory – PART 5

In PART 4 we got Terraform to deploy a secondary Domain Controller for resiliency.

In PART 5 I am going to be showing you how to deploy Microsoft SQL VM(s) behind an Azure Internal Load Balancer and install Failover Cluster Manager so it is ready for AlwaysOn capabilities.

MODULES/sql-vm

This all happens in the SQL-VM module. First of all, we create the Azure Internal Load Balancer with an AlwaysOn endpoint listener. Your soon-to-be-created VM(s) are added to the backend pool.

1-lb.TF

resource "azurerm_lb" "sql-loadbalancer" {
  name                = "${var.prefix}-sql-loadbalancer"
  resource_group_name = "${var.resource_group_name}"
  location            = "${var.location}"
  sku                 = "Standard"

  frontend_ip_configuration {
    name                          = "LoadBalancerFrontEnd"
    subnet_id                     = "${var.subnet_id}"
    private_ip_address_allocation = "static"
    private_ip_address            = "${var.lbprivate_ip_address}"
  }
}

resource "azurerm_lb_backend_address_pool" "loadbalancer_backend" {
  name                = "loadbalancer_backend"
  resource_group_name = "${var.resource_group_name}"
  loadbalancer_id     = "${azurerm_lb.sql-loadbalancer.id}"
}

resource "azurerm_lb_probe" "loadbalancer_probe" {
  resource_group_name = "${var.resource_group_name}"
  loadbalancer_id     = "${azurerm_lb.sql-loadbalancer.id}"
  name                = "SQLAlwaysOnEndPointProbe"
  protocol            = "tcp"
  port                = 59999
  interval_in_seconds = 5
  number_of_probes    = 2
}

resource "azurerm_lb_rule" "SQLAlwaysOnEndPointListener" {
  resource_group_name            = "${var.resource_group_name}"
  loadbalancer_id                = "${azurerm_lb.sql-loadbalancer.id}"
  name                           = "SQLAlwaysOnEndPointListener"
  protocol                       = "Tcp"
  frontend_port                  = 1433
  backend_port                   = 1433
  frontend_ip_configuration_name = "LoadBalancerFrontEnd"
  backend_address_pool_id        = "${azurerm_lb_backend_address_pool.loadbalancer_backend.id}"
  probe_id                       = "${azurerm_lb_probe.loadbalancer_probe.id}"
}

Next we create the NIC to be attached to your soon-to-be-created VM. This includes a static public and private IP address in the appropriate "dbsubnet" created in PART 1, and this is where the NIC is attached to the Azure Load Balancer backend pool.

Please note that this also creates an Azure NSG allowing RDP on port 3389. This is because a Standard Load Balancer defaults to blocking all traffic (I don't think this is the case when using the Basic SKU).

2-NETWORK-INTERFACE.TF

resource "azurerm_network_security_group" "allow-rdp" {
  name                = "allow-rdp"
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
}

resource "azurerm_network_security_rule" "allow-rdp" {
  name                        = "allow-rdp"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "3389"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = "${var.resource_group_name}"
  network_security_group_name = "${azurerm_network_security_group.allow-rdp.name}"
}

resource "azurerm_public_ip" "static" {
  name                         = "${var.prefix}-sql${1 + count.index}-ext"
  location                     = "${var.location}"
  resource_group_name          = "${var.resource_group_name}"
  public_ip_address_allocation = "static"
  count                        = "${var.sqlvmcount}"
  sku                          = "Standard"
}

resource "azurerm_network_interface" "primary" {
  name                      = "${var.prefix}-sql${1 + count.index}-int"
  location                  = "${var.location}"
  resource_group_name       = "${var.resource_group_name}"
  internal_dns_name_label   = "${var.prefix}-sql${1 + count.index}"
  network_security_group_id = "${azurerm_network_security_group.allow-rdp.id}"
  count                     = "${var.sqlvmcount}"

  ip_configuration {
    name                                    = "primary"
    subnet_id                               = "${var.subnet_id}"
    private_ip_address_allocation           = "static"
    private_ip_address                      = "10.100.50.${10 + count.index}"
    public_ip_address_id                    = "${azurerm_public_ip.static.*.id[count.index]}"
    load_balancer_backend_address_pools_ids = ["${azurerm_lb_backend_address_pool.loadbalancer_backend.id}"]
  }
}

The next step is to create our database VM(s). This example deploys a 2012-R2-Datacenter image with SQL Server 2014 SP2 Enterprise installed. The VMs are deployed into an availability set for resiliency, and you can deploy as many as you want using the "sqlvmcount" variable. Each VM also has separate disks for OS, data and logs, as per Microsoft best practice.

3-VIRTUAL-MACHINE.TF

resource "azurerm_availability_set" "sqlavailabilityset" {
  name                         = "sqlavailabilityset"
  resource_group_name          = "${var.resource_group_name}"
  location                     = "${var.location}"
  platform_fault_domain_count  = 3
  platform_update_domain_count = 5
  managed                      = true
}

resource "azurerm_virtual_machine" "sql" {
  name                          = "${var.prefix}-sql${1 + count.index}"
  location                      = "${var.location}"
  availability_set_id           = "${azurerm_availability_set.sqlavailabilityset.id}"
  resource_group_name           = "${var.resource_group_name}"
  network_interface_ids         = ["${element(azurerm_network_interface.primary.*.id, count.index)}"]
  vm_size                       = "Standard_B1s"
  delete_os_disk_on_termination = true
  count                         = "${var.sqlvmcount}"

  storage_image_reference {
    publisher = "MicrosoftSQLServer"
    offer     = "SQL2014SP2-WS2012R2"
    sku       = "Enterprise"
    version   = "latest"
  }

  storage_os_disk {
    name              = "${var.prefix}-sql${1 + count.index}-disk1"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "${var.prefix}-sql${1 + count.index}"
    admin_username = "${var.admin_username}"
    admin_password = "${var.admin_password}"
  }

  os_profile_windows_config {
    provision_vm_agent        = true
    enable_automatic_upgrades = false
  }

  storage_data_disk {
    name              = "${var.prefix}-sql${1 + count.index}-data-disk1"
    disk_size_gb      = "2000"
    caching           = "ReadWrite"
    create_option     = "Empty"
    managed_disk_type = "Standard_LRS"
    lun               = "2"
  }

  storage_data_disk {
    name              = "${var.prefix}-sql${1 + count.index}-log-disk1"
    disk_size_gb      = "500"
    caching           = "ReadWrite"
    create_option     = "Empty"
    managed_disk_type = "Standard_LRS"
    lun               = "3"
  }

  depends_on = ["azurerm_network_interface.primary"]
}

We now join the VM(s) to the domain using a virtual machine extension. Note the use of the splat operator (*) with count.

4-join-domain.TF

resource "azurerm_virtual_machine_extension" "join-domain" {
  name                 = "join-domain"
  location             = "${element(azurerm_virtual_machine.sql.*.location, count.index)}"
  resource_group_name  = "${var.resource_group_name}"
  virtual_machine_name = "${element(azurerm_virtual_machine.sql.*.name, count.index)}"
  publisher            = "Microsoft.Compute"
  type                 = "JsonADDomainExtension"
  type_handler_version = "1.3"
  count                = "${var.sqlvmcount}"

  # NOTE: the `OUPath` field is intentionally blank, to put it in the Computers OU
  settings = <<SETTINGS
{
  "Name": "${var.active_directory_domain}",
  "OUPath": "",
  "User": "${var.active_directory_domain}\\${var.active_directory_username}",
  "Restart": "true",
  "Options": "3"
}
SETTINGS

  protected_settings = <<SETTINGS
{
  "Password": "${var.active_directory_password}"
}
SETTINGS
}

Finally we install Windows Server Failover Clustering so it can easily be added to an AlwaysOn Availability Group if required.

5-install-wsfc.TF

resource "azurerm_virtual_machine_extension" "wsfc" {
  count                = "${var.sqlvmcount}"
  name                 = "create-cluster"
  resource_group_name  = "${var.resource_group_name}"
  location             = "${var.location}"
  virtual_machine_name = "${element(azurerm_virtual_machine.sql.*.name, count.index)}"
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.9"

  settings = <<SETTINGS
{
  "commandToExecute": "powershell Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools"
}
SETTINGS

  depends_on = ["azurerm_virtual_machine_extension.join-domain"]
}

Your main.tf file should now look like this:

main.tf

# Configure the Microsoft Azure Provider
provider "azurerm" {
  subscription_id = "${var.subscription_id}"
  client_id = "${var.client_id}"
  client_secret = "${var.client_secret}"
  tenant_id = "${var.tenant_id}"
}

##########################################################
## Create Resource group, Network & subnets
##########################################################
module "network" {
  source = "..\\modules\\network"
  address_space = "${var.address_space}"
  dns_servers = ["${var.dns_servers}"]
  environment_name = "${var.environment_name}"
  resource_group_name = "${var.resource_group_name}"
  location = "${var.location}"
  dcsubnet_name = "${var.dcsubnet_name}"
  dcsubnet_prefix = "${var.dcsubnet_prefix}"
  wafsubnet_name = "${var.wafsubnet_name}"
  wafsubnet_prefix = "${var.wafsubnet_prefix}"
  rpsubnet_name = "${var.rpsubnet_name}"
  rpsubnet_prefix = "${var.rpsubnet_prefix}"
  issubnet_name = "${var.issubnet_name}"
  issubnet_prefix = "${var.issubnet_prefix}"
  dbsubnet_name = "${var.dbsubnet_name}"
  dbsubnet_prefix = "${var.dbsubnet_prefix}"
}

##########################################################
## Create DC VM & AD Forest
##########################################################
module "active-directory" {
  source = "..\\modules\\active-directory"
  resource_group_name = "${module.network.out_resource_group_name}"
  location = "${var.location}"
  prefix = "${var.prefix}"
  subnet_id = "${module.network.dc_subnet_subnet_id}"
  active_directory_domain = "${var.prefix}.local"
  active_directory_netbios_name = "${var.prefix}"
  private_ip_address = "${var.private_ip_address}"
  admin_username = "${var.admin_username}"
  admin_password = "${var.admin_password}"
}

##########################################################
## Create IIS VMs & Join domain
##########################################################
module "iis-vm" {
  source = "..\\modules\\iis-vm"
  resource_group_name = "${module.active-directory.out_resource_group_name}"
  location = "${module.active-directory.out_dc_location}"
  prefix = "${var.prefix}"
  subnet_id = "${module.network.is_subnet_subnet_id}"
  active_directory_domain = "${var.prefix}.local"
  active_directory_username = "${var.admin_username}"
  active_directory_password = "${var.admin_password}"
  admin_username = "${var.admin_username}"
  admin_password = "${var.admin_password}"
  vmcount = "${var.vmcount}"
}

##########################################################
## Create Secondary Domain Controller VM & Join domain
##########################################################
module "dc2-vm" {
  source = "..\\modules\\dc2-vm"
  resource_group_name = "${module.active-directory.out_resource_group_name}"
  location = "${module.active-directory.out_dc_location}"
  dcavailability_set_id = "${module.active-directory.out_dcavailabilityset}"
  prefix = "${var.prefix}"
  subnet_id = "${module.network.dc_subnet_subnet_id}"
  active_directory_domain = "${var.prefix}.local"
  active_directory_username = "${var.admin_username}"
  active_directory_password = "${var.admin_password}"
  active_directory_netbios_name = "${var.prefix}"
  dc2private_ip_address = "${var.dc2private_ip_address}"
  admin_username = "${var.admin_username}"
  admin_password = "${var.admin_password}"
  domainadmin_username = "${var.domainadmin_username}"
}

##########################################################
## Create SQL Server VM & Join domain
##########################################################
module "sql-vm" {
  source = "..\\modules\\sql-vm"
  resource_group_name = "${module.active-directory.out_resource_group_name}"
  location = "${module.active-directory.out_dc_location}"
  prefix = "${var.prefix}"
  subnet_id = "${module.network.db_subnet_subnet_id}"
  active_directory_domain = "${var.prefix}.local"
  active_directory_username = "${var.admin_username}"
  active_directory_password = "${var.admin_password}"
  admin_username = "${var.admin_username}"
  admin_password = "${var.admin_password}"
  sqlvmcount = "${var.sqlvmcount}"
  lbprivate_ip_address = "${var.lbprivate_ip_address}"
}

This brings us to the end of this example. I have tried to showcase a wide range of what you can deploy to Azure with Terraform, using a mixture of IaaS and PaaS options.

You don’t have to use all of it, but hopefully it gives you a few ideas and inspires you to start using Terraform to spin up resources in Azure.

To get the full example, including the variables and output files, you can find it on GitHub, where it has also been contributed to the official HashiCorp repo.

 

A Multi-Tier Azure Environment with Terraform including Active Directory – PART 4

In PART 3 we got Terraform to deploy IIS web server(s) and join them to your newly configured Active Directory Domain.

In PART 4 I am going to be showing you how to deploy a secondary Domain Controller for resiliency.

MODULES/dc2-vm

This all happens in the DC2-VM module. First of all, we create the NIC to be attached to your soon-to-be-created VM. This includes a static public and private IP address in the “dcsubnet” created in PART 1.

1-NETWORK-INTERFACE.TF

resource "azurerm_public_ip" "dc2-external" {
  name = "${var.prefix}-dc2-ext"
  location = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
  public_ip_address_allocation = "Static"
  idle_timeout_in_minutes = 30
}

resource "azurerm_network_interface" "dc2primary" {
  name = "${var.prefix}-dc2-primary"
  location = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
  internal_dns_name_label = "${local.dc2virtual_machine_name}"

  ip_configuration {
    name = "primary"
    subnet_id = "${var.subnet_id}"
    private_ip_address_allocation = "static"
    private_ip_address = "${var.dc2private_ip_address}"
    public_ip_address_id = "${azurerm_public_ip.dc2-external.id}"
  }
}

The next step is to create our secondary Domain Controller VM. This example deploys a 2012-R2-Datacenter image.

2-VIRTUAL-MACHINE.TF

locals {
  dc2virtual_machine_name = "${var.prefix}-dc2"
  dc2virtual_machine_fqdn = "${local.dc2virtual_machine_name}.${var.active_directory_domain}"
  dc2custom_data_params = "Param($RemoteHostName = \"${local.dc2virtual_machine_fqdn}\", $ComputerName = \"${local.dc2virtual_machine_name}\")"
  dc2custom_data_content = "${local.dc2custom_data_params} ${file("${path.module}/files/winrm.ps1")}"
}

resource "azurerm_virtual_machine" "domain-controller2" {
  name = "${local.dc2virtual_machine_name}"
  location = "${var.location}"
  availability_set_id = "${var.dcavailability_set_id}"
  resource_group_name = "${var.resource_group_name}"
  network_interface_ids = ["${azurerm_network_interface.dc2primary.id}"]
  vm_size = "Standard_A1"
  delete_os_disk_on_termination = false

  storage_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer = "WindowsServer"
    sku = "2012-R2-Datacenter"
    version = "latest"
  }

  storage_os_disk {
    name = "${local.dc2virtual_machine_name}-disk1"
    caching = "ReadWrite"
    create_option = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name = "${local.dc2virtual_machine_name}"
    admin_username = "${var.admin_username}"
    admin_password = "${var.admin_password}"
    custom_data = "${local.dc2custom_data_content}"
  }

  os_profile_windows_config {
    provision_vm_agent = true
    enable_automatic_upgrades = false

    additional_unattend_config {
      pass = "oobeSystem"
      component = "Microsoft-Windows-Shell-Setup"
      setting_name = "AutoLogon"
      content = "<AutoLogon><Password><Value>${var.admin_password}</Value></Password><Enabled>true</Enabled><LogonCount>1</LogonCount><Username>${var.admin_username}</Username></AutoLogon>"
    }

    # Unattend config is to enable basic auth in WinRM, required for the provisioner stage.
    additional_unattend_config {
      pass = "oobeSystem"
      component = "Microsoft-Windows-Shell-Setup"
      setting_name = "FirstLogonCommands"
      content = "${file("${path.module}/files/FirstLogonCommands.xml")}"
    }
  }

  depends_on = ["azurerm_network_interface.dc2primary"]
}
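To make the custom_data interpolation in the locals block concrete: assuming a prefix of myprefix (illustrative value only), the locals resolve as follows, and the rendered script begins with the Param line before the body of winrm.ps1:

```hcl
# Illustration only, with prefix = "myprefix" assumed:
#
#   dc2virtual_machine_name = "myprefix-dc2"
#   dc2virtual_machine_fqdn = "myprefix-dc2.myprefix.local"
#
# so the rendered custom_data starts with:
#
#   Param($RemoteHostName = "myprefix-dc2.myprefix.local", $ComputerName = "myprefix-dc2")
#
# followed by the contents of files/winrm.ps1, which the FirstLogonCommands
# unattend step executes to configure WinRM on first boot.
```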

We now join the VM(s) to the domain using a Virtual Machine Extension.

3-JOIN-DOMAIN.TF

resource "azurerm_virtual_machine_extension" "join-domain" {
  name = "join-domain"
  location = "${azurerm_virtual_machine.domain-controller2.location}"
  resource_group_name = "${var.resource_group_name}"
  virtual_machine_name = "${azurerm_virtual_machine.domain-controller2.name}"
  publisher = "Microsoft.Compute"
  type = "JsonADDomainExtension"
  type_handler_version = "1.3"

  # NOTE: the `OUPath` field is intentionally blank, to put it in the Computers OU
  settings = <<SETTINGS
{
  "Name": "${var.active_directory_domain}",
  "OUPath": "",
  "User": "${var.active_directory_domain}\\${var.active_directory_username}",
  "Restart": "true",
  "Options": "3"
}
SETTINGS

  protected_settings = <<SETTINGS
{
  "Password": "${var.active_directory_password}"
}
SETTINGS
}

Finally, we promote this VM to a Domain Controller.

4-PROMOTE-DC.TF

// the `exit_code_hack` is to keep the VM Extension resource happy
locals {
dc2import_command = “Import-Module ADDSDeployment”
dc2user_command = “$dc2user = ${var.domainadmin_username}”
dc2password_command = “$password = ConvertTo-SecureString ${var.admin_password} -AsPlainText -Force”
dc2creds_command = “$mycreds = New-Object System.Management.Automation.PSCredential -ArgumentList $dc2user, $password”
dc2install_ad_command = “Add-WindowsFeature -name ad-domain-services -IncludeManagementTools”
dc2configure_ad_command = “Install-ADDSDomainController -Credential $mycreds -CreateDnsDelegation:$false -DomainName ${var.active_directory_domain} -InstallDns:$true -SafeModeAdministratorPassword $password -Force:$true”
dc2shutdown_command = “shutdown -r -t 10”
dc2exit_code_hack = “exit 0”
dc2powershell_command = “${local.dc2import_command}; ${local.dc2user_command}; ${local.dc2password_command}; ${local.dc2creds_command}; ${local.dc2install_ad_command}; ${local.dc2configure_ad_command}; ${local.dc2shutdown_command}; ${local.dc2exit_code_hack}”
}

resource "azurerm_virtual_machine_extension" "promote-dc" {
  name = "promote-dc"
  location = "${azurerm_virtual_machine_extension.join-domain.location}"
  resource_group_name = "${var.resource_group_name}"
  virtual_machine_name = "${azurerm_virtual_machine.domain-controller2.name}"
  publisher = "Microsoft.Compute"
  type = "CustomScriptExtension"
  type_handler_version = "1.9"

  settings = <<SETTINGS
{
  "commandToExecute": "powershell.exe -Command \"${local.dc2powershell_command}\""
}
SETTINGS
}
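The locals defined for the promote-dc step simply join the individual PowerShell statements with semicolons into a single command line. For illustration (the angle-bracket placeholders stand for the interpolated variable values, not literal text), commandToExecute expands to something along these lines:

```hcl
# Illustration only; <...> placeholders are the interpolated variable values:
#
#   powershell.exe -Command "Import-Module ADDSDeployment;
#     $dc2user = <domainadmin_username>;
#     $password = ConvertTo-SecureString <admin_password> -AsPlainText -Force;
#     $mycreds = New-Object System.Management.Automation.PSCredential -ArgumentList $dc2user, $password;
#     Add-WindowsFeature -name ad-domain-services -IncludeManagementTools;
#     Install-ADDSDomainController -Credential $mycreds -CreateDnsDelegation:$false
#       -DomainName <active_directory_domain> -InstallDns:$true
#       -SafeModeAdministratorPassword $password -Force:$true;
#     shutdown -r -t 10; exit 0"
#
# The trailing `exit 0` is the exit_code_hack: the restart would otherwise cause
# the extension to report a failure even though promotion succeeded.
```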

Your main.tf file should now look like this:

main.tf

# Configure the Microsoft Azure Provider
provider "azurerm" {
  subscription_id = "${var.subscription_id}"
  client_id = "${var.client_id}"
  client_secret = "${var.client_secret}"
  tenant_id = "${var.tenant_id}"
}

##########################################################
## Create Resource group, Network & subnets
##########################################################
module "network" {
  source = "..\\modules\\network"
  address_space = "${var.address_space}"
  dns_servers = ["${var.dns_servers}"]
  environment_name = "${var.environment_name}"
  resource_group_name = "${var.resource_group_name}"
  location = "${var.location}"
  dcsubnet_name = "${var.dcsubnet_name}"
  dcsubnet_prefix = "${var.dcsubnet_prefix}"
  wafsubnet_name = "${var.wafsubnet_name}"
  wafsubnet_prefix = "${var.wafsubnet_prefix}"
  rpsubnet_name = "${var.rpsubnet_name}"
  rpsubnet_prefix = "${var.rpsubnet_prefix}"
  issubnet_name = "${var.issubnet_name}"
  issubnet_prefix = "${var.issubnet_prefix}"
  dbsubnet_name = "${var.dbsubnet_name}"
  dbsubnet_prefix = "${var.dbsubnet_prefix}"
}

##########################################################
## Create DC VM & AD Forest
##########################################################
module "active-directory" {
  source = "..\\modules\\active-directory"
  resource_group_name = "${module.network.out_resource_group_name}"
  location = "${var.location}"
  prefix = "${var.prefix}"
  subnet_id = "${module.network.dc_subnet_subnet_id}"
  active_directory_domain = "${var.prefix}.local"
  active_directory_netbios_name = "${var.prefix}"
  private_ip_address = "${var.private_ip_address}"
  admin_username = "${var.admin_username}"
  admin_password = "${var.admin_password}"
}

##########################################################
## Create IIS VMs & Join domain
##########################################################
module "iis-vm" {
  source = "..\\modules\\iis-vm"
  resource_group_name = "${module.active-directory.out_resource_group_name}"
  location = "${module.active-directory.out_dc_location}"
  prefix = "${var.prefix}"
  subnet_id = "${module.network.is_subnet_subnet_id}"
  active_directory_domain = "${var.prefix}.local"
  active_directory_username = "${var.admin_username}"
  active_directory_password = "${var.admin_password}"
  admin_username = "${var.admin_username}"
  admin_password = "${var.admin_password}"
  vmcount = "${var.vmcount}"
}

##########################################################
## Create Secondary Domain Controller VM & Join domain
##########################################################
module "dc2-vm" {
  source = "..\\modules\\dc2-vm"
  resource_group_name = "${module.active-directory.out_resource_group_name}"
  location = "${module.active-directory.out_dc_location}"
  dcavailability_set_id = "${module.active-directory.out_dcavailabilityset}"
  prefix = "${var.prefix}"
  subnet_id = "${module.network.dc_subnet_subnet_id}"
  active_directory_domain = "${var.prefix}.local"
  active_directory_username = "${var.admin_username}"
  active_directory_password = "${var.admin_password}"
  active_directory_netbios_name = "${var.prefix}"
  dc2private_ip_address = "${var.dc2private_ip_address}"
  admin_username = "${var.admin_username}"
  admin_password = "${var.admin_password}"
  domainadmin_username = "${var.domainadmin_username}"
}

This is the end of PART 4. By now you should have Terraform building a resource group containing a network with 5 subnets in Azure. Within the dcsubnet you should have two VMs running the Domain Controller role, with an Active Directory Domain configured. Within the issubnet you should have at least one web server running IIS, in an availability set and joined to the domain.

Join me again soon for PART 5, where we will be adding database VM(s) running SQL Server and joined to the domain.

P.S. If you can’t wait and just want to jump to the complete example, you can find it on GitHub, where it has also been contributed to the official HashiCorp repo.

A Multi-Tier Azure Environment with Terraform including Active Directory – PART 3

In PART 2 we got Terraform to deploy a Domain Controller into your newly configured network.

In PART 3 I am going to be showing you how to deploy a web server (IIS) and join it to your newly configured Active Directory Domain.

MODULES/iis-vm

This all happens in the IIS-VM module. First of all, we create the NIC(s) to be attached to your soon-to-be-created VM(s). This includes a static public and private IP address in the “issubnet” created in PART 1.

1-NETWORK-INTERFACE.TF

resource "azurerm_public_ip" "static" {
  name = "${var.prefix}-iis${1 + count.index}-ext"
  location = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
  public_ip_address_allocation = "static"
  count = "${var.vmcount}"
}

resource "azurerm_network_interface" "primary" {
  name = "${var.prefix}-iis${1 + count.index}-int"
  location = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
  internal_dns_name_label = "${var.prefix}-iis${1 + count.index}"
  count = "${var.vmcount}"

  ip_configuration {
    name = "primary"
    subnet_id = "${var.subnet_id}"
    private_ip_address_allocation = "static"
    private_ip_address = "10.100.30.${10 + count.index}"
    public_ip_address_id = "${azurerm_public_ip.static.*.id[count.index]}"
  }
}
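Note how the interpolation arithmetic gives each instance a sequential name and address in the issubnet. Sketched out for the first few values of count.index (prefix shown as a placeholder):

```hcl
# count.index = 0  ->  name "<prefix>-iis1-int", private IP 10.100.30.10
# count.index = 1  ->  name "<prefix>-iis2-int", private IP 10.100.30.11
# count.index = 2  ->  name "<prefix>-iis3-int", private IP 10.100.30.12
```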

The next step is to create our web server VM(s). This example deploys a 2012-R2-Datacenter image. The VMs are deployed into an availability set for resiliency, and you can deploy as many as you want using the “vmcount” variable.

2-VIRTUAL-MACHINE.TF

resource "azurerm_availability_set" "isavailabilityset" {
  name = "isavailabilityset"
  resource_group_name = "${var.resource_group_name}"
  location = "${var.location}"
  platform_fault_domain_count = 3
  platform_update_domain_count = 5
  managed = true
}

resource "azurerm_virtual_machine" "iis" {
  name = "${var.prefix}-iis${1 + count.index}"
  location = "${var.location}"
  availability_set_id = "${azurerm_availability_set.isavailabilityset.id}"
  resource_group_name = "${var.resource_group_name}"
  network_interface_ids = ["${element(azurerm_network_interface.primary.*.id, count.index)}"]
  vm_size = "Standard_A1"
  delete_os_disk_on_termination = true
  count = "${var.vmcount}"

  storage_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer = "WindowsServer"
    sku = "2012-R2-Datacenter"
    version = "latest"
  }

  storage_os_disk {
    name = "${var.prefix}-iis${1 + count.index}-disk1"
    caching = "ReadWrite"
    create_option = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name = "${var.prefix}-iis${1 + count.index}"
    admin_username = "${var.admin_username}"
    admin_password = "${var.admin_password}"
  }

  os_profile_windows_config {
    provision_vm_agent = true
    enable_automatic_upgrades = false
  }

  depends_on = ["azurerm_network_interface.primary"]
}

We now join the VM(s) to the domain using a Virtual Machine Extension. Note the use of the splat operator (*) together with count, so each instance gets its own copy of the extension.

3-JOIN-DOMAIN.TF

resource "azurerm_virtual_machine_extension" "join-domain" {
  name = "join-domain"
  location = "${element(azurerm_virtual_machine.iis.*.location, count.index)}"
  resource_group_name = "${var.resource_group_name}"
  virtual_machine_name = "${element(azurerm_virtual_machine.iis.*.name, count.index)}"
  publisher = "Microsoft.Compute"
  type = "JsonADDomainExtension"
  type_handler_version = "1.3"
  count = "${var.vmcount}"

  # NOTE: the `OUPath` field is intentionally blank, to put it in the Computers OU
  settings = <<SETTINGS
{
  "Name": "${var.active_directory_domain}",
  "OUPath": "",
  "User": "${var.active_directory_domain}\\${var.active_directory_username}",
  "Restart": "true",
  "Options": "3"
}
SETTINGS

  protected_settings = <<SETTINGS
{
  "Password": "${var.active_directory_password}"
}
SETTINGS
}

Finally, we install IIS and some common features to help manage it.

4-INSTALL-IIS.TF

resource "azurerm_virtual_machine_extension" "iis" {
  count = "${var.vmcount}"
  name = "install-iis"
  resource_group_name = "${var.resource_group_name}"
  location = "${var.location}"
  virtual_machine_name = "${element(azurerm_virtual_machine.iis.*.name, count.index)}"
  publisher = "Microsoft.Compute"
  type = "CustomScriptExtension"
  type_handler_version = "1.9"

  settings = <<SETTINGS
{
  "commandToExecute": "powershell Add-WindowsFeature Web-Asp-Net45;Add-WindowsFeature NET-Framework-45-Core;Add-WindowsFeature Web-Net-Ext45;Add-WindowsFeature Web-ISAPI-Ext;Add-WindowsFeature Web-ISAPI-Filter;Add-WindowsFeature Web-Mgmt-Console;Add-WindowsFeature Web-Scripting-Tools;Add-WindowsFeature Search-Service;Add-WindowsFeature Web-Filtering;Add-WindowsFeature Web-Basic-Auth;Add-WindowsFeature Web-Windows-Auth;Add-WindowsFeature Web-Default-Doc;Add-WindowsFeature Web-Http-Errors;Add-WindowsFeature Web-Static-Content;"
}
SETTINGS

  depends_on = ["azurerm_virtual_machine_extension.join-domain"]
}

Your main.tf file should now look like this:

main.tf

# Configure the Microsoft Azure Provider
provider "azurerm" {
  subscription_id = "${var.subscription_id}"
  client_id = "${var.client_id}"
  client_secret = "${var.client_secret}"
  tenant_id = "${var.tenant_id}"
}

##########################################################
## Create Resource group, Network & subnets
##########################################################
module "network" {
  source = "..\\modules\\network"
  address_space = "${var.address_space}"
  dns_servers = ["${var.dns_servers}"]
  environment_name = "${var.environment_name}"
  resource_group_name = "${var.resource_group_name}"
  location = "${var.location}"
  dcsubnet_name = "${var.dcsubnet_name}"
  dcsubnet_prefix = "${var.dcsubnet_prefix}"
  wafsubnet_name = "${var.wafsubnet_name}"
  wafsubnet_prefix = "${var.wafsubnet_prefix}"
  rpsubnet_name = "${var.rpsubnet_name}"
  rpsubnet_prefix = "${var.rpsubnet_prefix}"
  issubnet_name = "${var.issubnet_name}"
  issubnet_prefix = "${var.issubnet_prefix}"
  dbsubnet_name = "${var.dbsubnet_name}"
  dbsubnet_prefix = "${var.dbsubnet_prefix}"
}

##########################################################
## Create DC VM & AD Forest
##########################################################
module "active-directory" {
  source = "..\\modules\\active-directory"
  resource_group_name = "${module.network.out_resource_group_name}"
  location = "${var.location}"
  prefix = "${var.prefix}"
  subnet_id = "${module.network.dc_subnet_subnet_id}"
  active_directory_domain = "${var.prefix}.local"
  active_directory_netbios_name = "${var.prefix}"
  private_ip_address = "${var.private_ip_address}"
  admin_username = "${var.admin_username}"
  admin_password = "${var.admin_password}"
}

##########################################################
## Create IIS VMs & Join domain
##########################################################
module "iis-vm" {
  source = "..\\modules\\iis-vm"
  resource_group_name = "${module.active-directory.out_resource_group_name}"
  location = "${module.active-directory.out_dc_location}"
  prefix = "${var.prefix}"
  subnet_id = "${module.network.is_subnet_subnet_id}"
  active_directory_domain = "${var.prefix}.local"
  active_directory_username = "${var.admin_username}"
  active_directory_password = "${var.admin_password}"
  admin_username = "${var.admin_username}"
  admin_password = "${var.admin_password}"
  vmcount = "${var.vmcount}"
}

This is the end of PART 3. By now you should have Terraform building a resource group containing a network with 5 subnets in Azure. Within the dcsubnet you should have a VM running the Domain Controller role, with an Active Directory Domain configured. Within the issubnet you should have at least one web server running IIS, in an availability set and joined to the domain.

Join me again soon for PART 4, where we will be adding a secondary Domain Controller VM for resiliency.

P.S. If you can’t wait and just want to jump to the complete example, you can find it on GitHub, where it has also been contributed to the official HashiCorp repo.