Enable logging to Log Analytics Workspace on Azure SQL Managed Instances using PowerShell


Just a quick post to share an easy way to enable logging on your Azure SQL Managed Instances using a simple PowerShell script:

#Log Analytics workspace resource group name
$OMSRG = "prd-rg01"
#Log Analytics workspace name
$OMSWSName = "log-prd01"
#Get the Log Analytics workspace ID
$WS = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName $OMSRG -Name $OMSWSName
$WSId = $WS.ResourceId
$SQLInstanceName = "prd-msql01"
$RGName = "prd-rg01"
#Get the SQL managed instance server
$SQLMI = Get-AzureRmSqlInstance -Name $SQLInstanceName -ResourceGroupName $RGName
$SQLMIID = $SQLMI.Id
$sqlserverdiagname = $SQLInstanceName + "-diag"
#Enable diagnostic settings to Log Analytics for the SQL instance
Set-AzureRmDiagnosticSetting -ResourceId $SQLMIID -WorkspaceId $WSId -Enabled $true -Name $sqlserverdiagname
#Get the managed SQL instance DB names
$SQLManagedInstanceDBS = Get-AzureRmSqlInstanceDatabase -InstanceName $SQLInstanceName -ResourceGroupName $RGName
#Iterate through each DB to enable logging to the Log Analytics workspace
foreach ($db in $SQLManagedInstanceDBS)
{
    $SQLMIDBID = $db.Id
    $diagname = $db.Name + "-diag"
    $SQLMIDBID
    $diagname
    Set-AzureRmDiagnosticSetting -ResourceId $SQLMIDBID -WorkspaceId $WSId -Enabled $true -Name $diagname
}
#It can take a while for the portal to show the config change. To check it with PowerShell, run the command below
#with the resource ID of the managed instance server or DB and it will show you what is enabled and the workspace it is configured to use.
#Get-AzureRmDiagnosticSetting -ResourceId <resource ID>

Create a build pipeline for Angular apps in Azure DevOps

I wanted to show you how I created a Build Pipeline for an Angular App in Azure DevOps.

As always I have this as a task group so I can reuse it across projects.

My Task Group for the build consists of five steps, including tagging the build using PowerShell.


My Install Node step installs the required version of Node.js and also adds it to the PATH.


My npm install step comes next and restores the app's dependencies.


My npm run build step then runs the build script.


This calls the build script defined in my package.json file.
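For context, the scripts section of a typical Angular package.json looks something like this minimal sketch (the script names and flags here are illustrative rather than copied from my project):

```json
{
  "name": "my-angular-app",
  "scripts": {
    "ng": "ng",
    "start": "ng serve",
    "build": "ng build --prod"
  }
}
```

The build step above simply invokes the "build" entry, so whatever Angular CLI flags you need belong there rather than in the pipeline.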


I now tag the build using a PowerShell script. This isn't required, but I thought I would show it as it might be useful for some of you.
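The script itself only needs a couple of lines; a minimal sketch (the tag value here is just an example, using the predefined Build.SourceBranchName variable) uses the build.addbuildtag logging command:

```powershell
# Illustrative sketch - tags the current build via an Azure DevOps
# logging command. The tag value is an example; use whatever suits you.
$tag = "angular-$env:BUILD_SOURCEBRANCHNAME"
Write-Host "##vso[build.addbuildtag]$tag"
```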


Finally, I publish the build artifact.


I hope this shows how easy it is to use Azure DevOps Build Pipelines to build an Angular application, with the added bonus of tagging using PowerShell.

You could then have a Release Pipeline use the artifact to deploy to an Azure WebApp or wherever else you wanted.

Hope this was helpful!

Any questions, just get in touch via Twitter.


Use Git Short Hash in Azure DevOps Release Pipeline


I wanted to include the Git Short Hash in the release name of my Azure DevOps Release Pipeline.

I am sure there may be other ways to do this but wanted to show how I did it using PowerShell.

As always I have this as a task group so I can reuse it across projects. However, it only has one step.


The PowerShell within this step looks like this:

$commitId = "$env:BUILD_BUILDNUMBER"
$definitionName = "1.0.0-"
$deploymentId = "$env:RELEASE_DEPLOYMENTID"
$releaseName = $definitionName + $commitId + "-" + $deploymentId
Write-Host ("##vso[release.updatereleasename]$releaseName")
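As an aside, if your build number doesn't already contain the short hash, you could derive it yourself from the predefined Build.SourceVersion variable, which holds the full commit SHA. A rough sketch (my own variation, not part of the original setup):

```powershell
# Illustrative: Build.SourceVersion contains the full Git commit SHA,
# so the first 7 characters give the conventional short hash.
$shortHash = "$env:BUILD_SOURCEVERSION".Substring(0, 7)
$releaseName = "1.0.0-" + $shortHash
Write-Host ("##vso[release.updatereleasename]$releaseName")
```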


One issue I have found with this is that it obviously only updates the name once the release has been successfully deployed.


This is because it runs as part of the Release Pipeline itself; you can see it in action in the release logs.


I figured out the command required to update the release name from the Microsoft Docs, at the very bottom under "Release Logging Commands".

Hope you find this post useful and that it helps you build your infrastructure in Azure using Azure DevOps with custom release names.

Any questions, just get in touch via Twitter.


Managing Database Schemas using Azure DevOps


A data model changes during development and gets out of sync with the database. You can drop the database and let Entity Framework create a new one that matches the model, but this procedure results in the loss of data. The migrations feature in EF Core provides a way to incrementally update the database schema to keep it in sync with the application’s data model while preserving existing data in the database.

I am using a Task Group for this to keep it as general as possible and to allow it to be used across multiple projects.

My Task Group for the build consists of three steps: a restore of the solution, the build of the migrations, and then a publish of the artifact.

My NuGet restore step also uses the Azure DevOps Artifacts feed.

My Build EF Core Migrations step runs the command below; more info on these scripts can be found here:

ef migrations script -v -i -o $(build.artifactstagingdirectory)\Migrations\$(Build.DefinitionName).sql --startup-project ../$(Build.DefinitionName).API

The final step takes the output of the previous step and publishes it to Azure Pipelines (Artifact Feed):

I use this artifact within a Release Pipeline after I deploy my Web App:

The settings for this look like this:

To give you an idea of the structure, the linked artifacts look like this. This has the app code and the SQL script generated in our steps above in separate folders. (The Deploy Azure App Service step above would just look at the "platform" folder.)


Hope you find this post useful and that it helps you build your infrastructure in Azure using Azure DevOps & Entity Framework Core migrations.

Any questions, just get in touch via Twitter.


Using Terraform in Azure DevOps Task Group

I am using Terraform to build my Infrastructure in Azure DevOps using the Task Group feature to keep it generalised. To do this I am using the Terraform Extension from Peter Groenewegen and remote state in Azure Blob Storage.

Thought I would share my setup in the hope that it would be useful for others.

My Task Group consists of two steps: the Plan and the Apply.

The first part of my Plan setup looks like the below:


In the Terraform template path I have used a mixture of built-in system variables and a custom parameter that can be defined in your pipeline:

$(System.DefaultWorkingDirectory)/$(RELEASE.PRIMARYARTIFACTSOURCEALIAS)/$(Terraform.Template.Path)

The system variables can be found here.

In the Terraform arguments section I have entered the path to my environment var file. Again I have used the system variable $(Release.EnvironmentName) and ensured that the folder within my Terraform Repo is named exactly the same, along with the tfvars file. This ensures you can keep the Task Group generalised throughout the pipeline.

plan -var-file="./environments/$(Release.EnvironmentName)/$(Release.EnvironmentName).tfvars"

I have ticked the Install Terraform box to ensure Terraform is installed on the Build Agent.

I have also ticked the Use Azure Service Principal Endpoint Box as per the Terraform best practice guidelines.

The last part of my Plan setup looks like the below:

In this section I declare that I want the state initialised in Blob storage and give the details of where I want it to be stored. Again I keep it generalised by using the system variables.

Within my Terraform code, the main.tf file has a backend block at the start so it knows to use the remote state.
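As a minimal sketch, that block is just an empty partial configuration; the actual storage details are supplied from backend.tfvars when terraform init runs (which is my assumption of how it is wired up here):

```hcl
# Partial backend configuration - the storage account details are
# passed in at init time, e.g.:
#   terraform init -backend-config="./environments/devci1/backend.tfvars"
terraform {
  backend "azurerm" {}
}
```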


The structure of my Terraform is as below, with the backend.tfvars sitting alongside my $environment.tfvars file. This means you could have completely different settings for each remote state if you wanted. I have it this way as devci1 & qa1 sit in a different subscription to uat1 & prod, so they use different blob storage. I also separate each state into different containers within the blob storage to keep it organised.


The backend.tfvars file holds the remote state settings. The main thing to note is that I change the container name to match the environment (I have hard coded this for this post but would usually just use the variable to pass the value in to avoid mistakes).
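For illustration, a backend.tfvars along those lines would hold the standard azurerm backend settings (the names below are made up for this post):

```hcl
# Illustrative values only - container_name matches the environment.
resource_group_name  = "terraform-state-rg"
storage_account_name = "terraformstate01"
container_name       = "devci1"
key                  = "terraform.tfstate"
```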

The Terraform Apply section is exactly the same apart from the Terraform arguments section. It has apply rather than plan and, most importantly, the -auto-approve flag means it doesn't require any human intervention:

apply -var-file="./environments/$(Release.EnvironmentName)/$(Release.EnvironmentName).tfvars" -auto-approve

Hope you find this post useful and that it helps you build your infrastructure in Azure using Azure DevOps & Terraform.

Any questions, just get in touch via Twitter.


Supporting Azure in the Terraform Module Registry

After this was announced last year I had been itching to find the time to contribute. Now I finally have, with a module supporting Azure.

My Module creates a VM and installs Active Directory, it can be found here.

For those of you who haven’t heard of it, The HashiCorp Terraform Module Registry gives Terraform users easy access to templates for setting up and running their infrastructure with verified and community modules.



Serverless on Azure – Deploying Azure Function using Terraform

Why?

The idea of running our own web servers, sizing VMs and patching OSes seems so old school. For simple web apps, and for seeing if our new service will be successful, we want hosting that is as low-cost as possible, but we also want the ability to scale elastically should we turn into the next big thing!

How?

In this example, we'll use Azure Functions within an App Service Plan.

We’ll manage this whole stack with one Terraform configuration, practicing what we preach with Infrastructure as Code.

Prerequisites

The below example assumes you have Terraform configured for use with your Azure Subscription.
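For reference, that configuration is just the azurerm provider block pointed at a service principal; a minimal sketch (the variable names here are illustrative) looks like this:

```hcl
# Assumes a service principal has already been created; the
# variable names are illustrative, not part of this example's code.
provider "azurerm" {
  subscription_id = "${var.subscription_id}"
  client_id       = "${var.client_id}"
  client_secret   = "${var.client_secret}"
  tenant_id       = "${var.tenant_id}"
}
```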

Terraform definition

The desired resource is an Azure Function Application. There’s a handy Terraform template here.

Unfortunately, this Terraform template doesn’t include Azure Application Insights, which has its own template here.

Create a new file named "azure_function.tf" and place this code in it, which is a combination of the two above templates.

resource "azurerm_resource_group" "test" {
  name     = "tf-azfunc-test"
  location = "WestEurope"
}

resource "random_id" "server" {
  keepers = {
    # Generate a new id each time we switch to a new Azure Resource Group
    rg_id = "${azurerm_resource_group.test.name}"
  }

  byte_length = 8
}

resource "azurerm_storage_account" "test" {
  name                     = "${random_id.server.hex}"
  resource_group_name      = "${azurerm_resource_group.test.name}"
  location                 = "${azurerm_resource_group.test.location}"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_app_service_plan" "test" {
  name                = "azure-functions-test-service-plan"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  kind                = "FunctionApp"

  sku {
    tier = "Dynamic"
    size = "Y1"
  }
}

resource "azurerm_application_insights" "test" {
  name                = "test-terraform-insights"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  application_type    = "Web"
}

resource "azurerm_function_app" "test" {
  name                      = "test-terraform"
  location                  = "${azurerm_resource_group.test.location}"
  resource_group_name       = "${azurerm_resource_group.test.name}"
  app_service_plan_id       = "${azurerm_app_service_plan.test.id}"
  storage_connection_string = "${azurerm_storage_account.test.primary_connection_string}"

  app_settings {
    "AppInsights_InstrumentationKey" = "${azurerm_application_insights.test.instrumentation_key}"
  }
}

This Azure Function and Application Insight template only differs from the Terraform documentation in two ways.

1. An Azure Function is associated with an Application Insights instance by adding the Instrumentation Key to the App Settings of the Azure Function application.

app_settings {
  "AppInsights_InstrumentationKey" = "${azurerm_application_insights.test.instrumentation_key}"
}

2. Using a random ID for the Azure Storage Account gives it a better chance of being a unique URL.

resource "random_id" "server" {
  keepers = {
    # Generate a new id each time we switch to a new Azure Resource Group
    rg_id = "${azurerm_resource_group.test.name}"
  }

  byte_length = 8
}

Testing that the Function works with App Insights

Once the above code is deployed via Terraform, open up the Azure Function and create a new JavaScript webhook.

Azure Function

Run the default function a few times as-is.

Go look at the App Insights resource and see that the function was run a few times.

App Insights


Summary

The few lines of Terraform code above give us a working Azure Functions resource group, complete with storage & Application Insights.

You have to love the awesome Terraform Azure Integration and I hope this inspires you to deploy your own Azure Function today!


A Multi-Tier Azure Environment with Terraform including Active Directory – PART 5

In PART 4 we got Terraform to deploy a secondary Domain Controller for resiliency.

In PART 5 I am going to be showing you how to deploy Microsoft SQL VM(s) behind an Azure Internal Load Balancer and install Failover Cluster Manager so it is ready for AlwaysOn capabilities.

MODULES/sql-vm

This all happens in the SQL-VM module. First of all we create the Azure Internal Load Balancer with an AlwaysOn Endpoint Listener. Your soon to be created VM(s) are added to the backend pool.

1-lb.TF

resource "azurerm_lb" "sql-loadbalancer" {
  name                = "${var.prefix}-sql-loadbalancer"
  resource_group_name = "${var.resource_group_name}"
  location            = "${var.location}"
  sku                 = "Standard"

  frontend_ip_configuration {
    name                          = "LoadBalancerFrontEnd"
    subnet_id                     = "${var.subnet_id}"
    private_ip_address_allocation = "static"
    private_ip_address            = "${var.lbprivate_ip_address}"
  }
}

resource "azurerm_lb_backend_address_pool" "loadbalancer_backend" {
  name                = "loadbalancer_backend"
  resource_group_name = "${var.resource_group_name}"
  loadbalancer_id     = "${azurerm_lb.sql-loadbalancer.id}"
}

resource "azurerm_lb_probe" "loadbalancer_probe" {
  resource_group_name = "${var.resource_group_name}"
  loadbalancer_id     = "${azurerm_lb.sql-loadbalancer.id}"
  name                = "SQLAlwaysOnEndPointProbe"
  protocol            = "tcp"
  port                = 59999
  interval_in_seconds = 5
  number_of_probes    = 2
}

resource "azurerm_lb_rule" "SQLAlwaysOnEndPointListener" {
  resource_group_name            = "${var.resource_group_name}"
  loadbalancer_id                = "${azurerm_lb.sql-loadbalancer.id}"
  name                           = "SQLAlwaysOnEndPointListener"
  protocol                       = "Tcp"
  frontend_port                  = 1433
  backend_port                   = 1433
  frontend_ip_configuration_name = "LoadBalancerFrontEnd"
  backend_address_pool_id        = "${azurerm_lb_backend_address_pool.loadbalancer_backend.id}"
  probe_id                       = "${azurerm_lb_probe.loadbalancer_probe.id}"
}

Next we create the NIC to be attached to your soon-to-be-created VM. This includes a static public & private IP address in the appropriate "dbsubnet" created in PART 1. This is where it is attached to the Azure Load Balancer backend pool.

Please note that this also creates an Azure NSG allowing RDP on port 3389. This is because when using a Standard Load Balancer it defaults to blocking all traffic (I don't think this is the case when using a Basic SKU).

2-NETWORK-INTERFACE.TF

resource "azurerm_network_security_group" "allow-rdp" {
  name                = "allow-rdp"
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
}

resource "azurerm_network_security_rule" "allow-rdp" {
  name                        = "allow-rdp"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "3389"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = "${var.resource_group_name}"
  network_security_group_name = "${azurerm_network_security_group.allow-rdp.name}"
}

resource "azurerm_public_ip" "static" {
  name                         = "${var.prefix}-sql${1 + count.index}-ext"
  location                     = "${var.location}"
  resource_group_name          = "${var.resource_group_name}"
  public_ip_address_allocation = "static"
  count                        = "${var.sqlvmcount}"
  sku                          = "Standard"
}

resource "azurerm_network_interface" "primary" {
  name                      = "${var.prefix}-sql${1 + count.index}-int"
  location                  = "${var.location}"
  resource_group_name       = "${var.resource_group_name}"
  internal_dns_name_label   = "${var.prefix}-sql${1 + count.index}"
  network_security_group_id = "${azurerm_network_security_group.allow-rdp.id}"
  count                     = "${var.sqlvmcount}"

  ip_configuration {
    name                                    = "primary"
    subnet_id                               = "${var.subnet_id}"
    private_ip_address_allocation           = "static"
    private_ip_address                      = "10.100.50.${10 + count.index}"
    public_ip_address_id                    = "${azurerm_public_ip.static.*.id[count.index]}"
    load_balancer_backend_address_pools_ids = ["${azurerm_lb_backend_address_pool.loadbalancer_backend.id}"]
  }
}

The next step is to create our database VM(s). This example deploys a 2012-R2-Datacenter image with SQL 2014 SP2 Enterprise installed. The VM(s) are deployed into an availability set for resiliency, and you can deploy as many as you want using the "sqlvmcount" variable. Each VM also has separate disks for OS, Data & Logs as per Microsoft best practice.

3-VIRTUAL-MACHINE.TF

resource "azurerm_availability_set" "sqlavailabilityset" {
  name                         = "sqlavailabilityset"
  resource_group_name          = "${var.resource_group_name}"
  location                     = "${var.location}"
  platform_fault_domain_count  = 3
  platform_update_domain_count = 5
  managed                      = true
}

resource "azurerm_virtual_machine" "sql" {
  name                          = "${var.prefix}-sql${1 + count.index}"
  location                      = "${var.location}"
  availability_set_id           = "${azurerm_availability_set.sqlavailabilityset.id}"
  resource_group_name           = "${var.resource_group_name}"
  network_interface_ids         = ["${element(azurerm_network_interface.primary.*.id, count.index)}"]
  vm_size                       = "Standard_B1s"
  delete_os_disk_on_termination = true
  count                         = "${var.sqlvmcount}"

  storage_image_reference {
    publisher = "MicrosoftSQLServer"
    offer     = "SQL2014SP2-WS2012R2"
    sku       = "Enterprise"
    version   = "latest"
  }

  storage_os_disk {
    name              = "${var.prefix}-sql${1 + count.index}-disk1"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "${var.prefix}-sql${1 + count.index}"
    admin_username = "${var.admin_username}"
    admin_password = "${var.admin_password}"
  }

  os_profile_windows_config {
    provision_vm_agent        = true
    enable_automatic_upgrades = false
  }

  storage_data_disk {
    name              = "${var.prefix}-sql${1 + count.index}-data-disk1"
    disk_size_gb      = "2000"
    caching           = "ReadWrite"
    create_option     = "Empty"
    managed_disk_type = "Standard_LRS"
    lun               = "2"
  }

  storage_data_disk {
    name              = "${var.prefix}-sql${1 + count.index}-log-disk1"
    disk_size_gb      = "500"
    caching           = "ReadWrite"
    create_option     = "Empty"
    managed_disk_type = "Standard_LRS"
    lun               = "3"
  }

  depends_on = ["azurerm_network_interface.primary"]
}

We now join the VM(s) to the domain using a Virtual Machine Extension. Note the use of the Splat Operator (*) with count.

4-join-domain.TF

resource "azurerm_virtual_machine_extension" "join-domain" {
  name                 = "join-domain"
  location             = "${element(azurerm_virtual_machine.sql.*.location, count.index)}"
  resource_group_name  = "${var.resource_group_name}"
  virtual_machine_name = "${element(azurerm_virtual_machine.sql.*.name, count.index)}"
  publisher            = "Microsoft.Compute"
  type                 = "JsonADDomainExtension"
  type_handler_version = "1.3"
  count                = "${var.sqlvmcount}"

  # NOTE: the `OUPath` field is intentionally blank, to put it in the Computers OU
  settings = <<SETTINGS
{
  "Name": "${var.active_directory_domain}",
  "OUPath": "",
  "User": "${var.active_directory_domain}\\${var.active_directory_username}",
  "Restart": "true",
  "Options": "3"
}
SETTINGS

  protected_settings = <<SETTINGS
{
  "Password": "${var.active_directory_password}"
}
SETTINGS
}

Finally we install Windows Server Failover Clustering so it can easily be added to an AlwaysOn Availability Group if required.

5-install-wsfc.TF

resource “azurerm_virtual_machine_extension” “wsfc” {
count = “${var.sqlvmcount}”
name = “create-cluster”
resource_group_name = “${var.resource_group_name}”
location = “${var.location}”
virtual_machine_name = “${element(azurerm_virtual_machine.sql.*.name, count.index)}”
publisher = “Microsoft.Compute”
type = “CustomScriptExtension”
type_handler_version = “1.9”

settings = <<SETTINGS
{
“commandToExecute”: “powershell Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools”
}
SETTINGS

depends_on = [“azurerm_virtual_machine_extension.join-domain”]
}

Your MAIN.TF file should now look like this

main.tf

# Configure the Microsoft Azure Provider
provider "azurerm" {
  subscription_id = "${var.subscription_id}"
  client_id       = "${var.client_id}"
  client_secret   = "${var.client_secret}"
  tenant_id       = "${var.tenant_id}"
}

##########################################################
## Create Resource group Network & subnets
##########################################################
module "network" {
  source              = "..\\modules\\network"
  address_space       = "${var.address_space}"
  dns_servers         = ["${var.dns_servers}"]
  environment_name    = "${var.environment_name}"
  resource_group_name = "${var.resource_group_name}"
  location            = "${var.location}"
  dcsubnet_name       = "${var.dcsubnet_name}"
  dcsubnet_prefix     = "${var.dcsubnet_prefix}"
  wafsubnet_name      = "${var.wafsubnet_name}"
  wafsubnet_prefix    = "${var.wafsubnet_prefix}"
  rpsubnet_name       = "${var.rpsubnet_name}"
  rpsubnet_prefix     = "${var.rpsubnet_prefix}"
  issubnet_name       = "${var.issubnet_name}"
  issubnet_prefix     = "${var.issubnet_prefix}"
  dbsubnet_name       = "${var.dbsubnet_name}"
  dbsubnet_prefix     = "${var.dbsubnet_prefix}"
}

##########################################################
## Create DC VM & AD Forest
##########################################################

module "active-directory" {
  source                        = "..\\modules\\active-directory"
  resource_group_name           = "${module.network.out_resource_group_name}"
  location                      = "${var.location}"
  prefix                        = "${var.prefix}"
  subnet_id                     = "${module.network.dc_subnet_subnet_id}"
  active_directory_domain       = "${var.prefix}.local"
  active_directory_netbios_name = "${var.prefix}"
  private_ip_address            = "${var.private_ip_address}"
  admin_username                = "${var.admin_username}"
  admin_password                = "${var.admin_password}"
}

##########################################################
## Create IIS VMs & Join domain
##########################################################

module "iis-vm" {
  source                    = "..\\modules\\iis-vm"
  resource_group_name       = "${module.active-directory.out_resource_group_name}"
  location                  = "${module.active-directory.out_dc_location}"
  prefix                    = "${var.prefix}"
  subnet_id                 = "${module.network.is_subnet_subnet_id}"
  active_directory_domain   = "${var.prefix}.local"
  active_directory_username = "${var.admin_username}"
  active_directory_password = "${var.admin_password}"
  admin_username            = "${var.admin_username}"
  admin_password            = "${var.admin_password}"
  vmcount                   = "${var.vmcount}"
}

##########################################################
## Create Secondary Domain Controller VM & Join domain
##########################################################
module "dc2-vm" {
  source                        = "..\\modules\\dc2-vm"
  resource_group_name           = "${module.active-directory.out_resource_group_name}"
  location                      = "${module.active-directory.out_dc_location}"
  dcavailability_set_id         = "${module.active-directory.out_dcavailabilityset}"
  prefix                        = "${var.prefix}"
  subnet_id                     = "${module.network.dc_subnet_subnet_id}"
  active_directory_domain       = "${var.prefix}.local"
  active_directory_username     = "${var.admin_username}"
  active_directory_password     = "${var.admin_password}"
  active_directory_netbios_name = "${var.prefix}"
  dc2private_ip_address         = "${var.dc2private_ip_address}"
  admin_username                = "${var.admin_username}"
  admin_password                = "${var.admin_password}"
  domainadmin_username          = "${var.domainadmin_username}"
}

##########################################################
## Create SQL Server VM Join domain
##########################################################
module "sql-vm" {
  source                    = "..\\modules\\sql-vm"
  resource_group_name       = "${module.active-directory.out_resource_group_name}"
  location                  = "${module.active-directory.out_dc_location}"
  prefix                    = "${var.prefix}"
  subnet_id                 = "${module.network.db_subnet_subnet_id}"
  active_directory_domain   = "${var.prefix}.local"
  active_directory_username = "${var.admin_username}"
  active_directory_password = "${var.admin_password}"
  admin_username            = "${var.admin_username}"
  admin_password            = "${var.admin_password}"
  sqlvmcount                = "${var.sqlvmcount}"
  lbprivate_ip_address      = "${var.lbprivate_ip_address}"
}

This brings us to the end of this example. I have tried to showcase lots of different options for what you can deploy to Azure with Terraform, using a mixture of IaaS and PaaS.

You don’t have to use all of it but hopefully it gives you a few ideas and inspires you to start using Terraform to spin up resources in Azure.

To get the full, complete example including the variables & output files, you can find it on GitHub, where it has also been contributed to the HashiCorp official repo.



A Multi-Tier Azure Environment with Terraform including Active Directory – PART 4

In PART 3 we got Terraform to deploy an IIS web server(s) and join to your newly configured Active Directory Domain.

In PART 4 I am going to be showing you how to deploy a secondary Domain Controller for resiliency.

MODULES/dc2-vm

This all happens in the DC2-VM module. First of all we create the NIC to be attached to your soon-to-be-created VM. This includes a static public & private IP address in the appropriate "dcsubnet" created in PART 1.

1-NETWORK-INTERFACE.TF

resource "azurerm_public_ip" "dc2-external" {
  name                         = "${var.prefix}-dc2-ext"
  location                     = "${var.location}"
  resource_group_name          = "${var.resource_group_name}"
  public_ip_address_allocation = "Static"
  idle_timeout_in_minutes      = 30
}

resource "azurerm_network_interface" "dc2primary" {
  name                    = "${var.prefix}-dc2-primary"
  location                = "${var.location}"
  resource_group_name     = "${var.resource_group_name}"
  internal_dns_name_label = "${local.dc2virtual_machine_name}"

  ip_configuration {
    name                          = "primary"
    subnet_id                     = "${var.subnet_id}"
    private_ip_address_allocation = "static"
    private_ip_address            = "${var.dc2private_ip_address}"
    public_ip_address_id          = "${azurerm_public_ip.dc2-external.id}"
  }
}

The next step is to create our secondary Domain Controller VM. This example deploys a 2012-R2-Datacenter image.

2-VIRTUAL-MACHINE.TF

locals {
  dc2virtual_machine_name = "${var.prefix}-dc2"
  dc2virtual_machine_fqdn = "${local.dc2virtual_machine_name}.${var.active_directory_domain}"
  dc2custom_data_params   = "Param($RemoteHostName = \"${local.dc2virtual_machine_fqdn}\", $ComputerName = \"${local.dc2virtual_machine_name}\")"
  dc2custom_data_content  = "${local.dc2custom_data_params} ${file("${path.module}/files/winrm.ps1")}"
}

resource "azurerm_virtual_machine" "domain-controller2" {
  name                          = "${local.dc2virtual_machine_name}"
  location                      = "${var.location}"
  availability_set_id           = "${var.dcavailability_set_id}"
  resource_group_name           = "${var.resource_group_name}"
  network_interface_ids         = ["${azurerm_network_interface.dc2primary.id}"]
  vm_size                       = "Standard_A1"
  delete_os_disk_on_termination = false

  storage_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2012-R2-Datacenter"
    version   = "latest"
  }

  storage_os_disk {
    name              = "${local.dc2virtual_machine_name}-disk1"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "${local.dc2virtual_machine_name}"
    admin_username = "${var.admin_username}"
    admin_password = "${var.admin_password}"
    custom_data    = "${local.dc2custom_data_content}"
  }

  os_profile_windows_config {
    provision_vm_agent        = true
    enable_automatic_upgrades = false

    additional_unattend_config {
      pass         = "oobeSystem"
      component    = "Microsoft-Windows-Shell-Setup"
      setting_name = "AutoLogon"
      content      = "<AutoLogon><Password><Value>${var.admin_password}</Value></Password><Enabled>true</Enabled><LogonCount>1</LogonCount><Username>${var.admin_username}</Username></AutoLogon>"
    }

    # Unattend config is to enable basic auth in WinRM, required for the provisioner stage.
    additional_unattend_config {
      pass         = "oobeSystem"
      component    = "Microsoft-Windows-Shell-Setup"
      setting_name = "FirstLogonCommands"
      content      = "${file("${path.module}/files/FirstLogonCommands.xml")}"
    }
  }

  depends_on = ["azurerm_network_interface.dc2primary"]
}

We now join the VM(s) to the domain using a Virtual Machine Extension.

3-join-domain.TF

resource "azurerm_virtual_machine_extension" "join-domain" {
  name                 = "join-domain"
  location             = "${azurerm_virtual_machine.domain-controller2.location}"
  resource_group_name  = "${var.resource_group_name}"
  virtual_machine_name = "${azurerm_virtual_machine.domain-controller2.name}"
  publisher            = "Microsoft.Compute"
  type                 = "JsonADDomainExtension"
  type_handler_version = "1.3"

  # NOTE: the `OUPath` field is intentionally blank, to put it in the Computers OU
  settings = <<SETTINGS
{
  "Name": "${var.active_directory_domain}",
  "OUPath": "",
  "User": "${var.active_directory_domain}\\${var.active_directory_username}",
  "Restart": "true",
  "Options": "3"
}
SETTINGS

  protected_settings = <<SETTINGS
{
  "Password": "${var.active_directory_password}"
}
SETTINGS
}

Finally we promote this VM to a Domain Controller.

4-promote-dc.TF

// the `exit_code_hack` is to keep the VM Extension resource happy
locals {
  dc2import_command       = "Import-Module ADDSDeployment"
  dc2user_command         = "$dc2user = ${var.domainadmin_username}"
  dc2password_command     = "$password = ConvertTo-SecureString ${var.admin_password} -AsPlainText -Force"
  dc2creds_command        = "$mycreds = New-Object System.Management.Automation.PSCredential -ArgumentList $dc2user, $password"
  dc2install_ad_command   = "Add-WindowsFeature -name ad-domain-services -IncludeManagementTools"
  dc2configure_ad_command = "Install-ADDSDomainController -Credential $mycreds -CreateDnsDelegation:$false -DomainName ${var.active_directory_domain} -InstallDns:$true -SafeModeAdministratorPassword $password -Force:$true"
  dc2shutdown_command     = "shutdown -r -t 10"
  dc2exit_code_hack       = "exit 0"
  dc2powershell_command   = "${local.dc2import_command}; ${local.dc2user_command}; ${local.dc2password_command}; ${local.dc2creds_command}; ${local.dc2install_ad_command}; ${local.dc2configure_ad_command}; ${local.dc2shutdown_command}; ${local.dc2exit_code_hack}"
}

resource "azurerm_virtual_machine_extension" "promote-dc" {
  name                 = "promote-dc"
  location             = "${azurerm_virtual_machine_extension.join-domain.location}"
  resource_group_name  = "${var.resource_group_name}"
  virtual_machine_name = "${azurerm_virtual_machine.domain-controller2.name}"
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.9"

  settings = <<SETTINGS
{
  "commandToExecute": "powershell.exe -Command \"${local.dc2powershell_command}\""
}
SETTINGS
}

Your MAIN.TF file should now look like this

main.tf

# Configure the Microsoft Azure Provider
provider “azurerm” {
subscription_id = “${var.subscription_id}”
client_id = “${var.client_id}”
client_secret = “${var.client_secret}”
tenant_id = “${var.tenant_id}”
}

##########################################################
## Create Resource group Network & subnets
##########################################################
module "network" {
source = "..\\modules\\network"
address_space = "${var.address_space}"
dns_servers = ["${var.dns_servers}"]
environment_name = "${var.environment_name}"
resource_group_name = "${var.resource_group_name}"
location = "${var.location}"
dcsubnet_name = "${var.dcsubnet_name}"
dcsubnet_prefix = "${var.dcsubnet_prefix}"
wafsubnet_name = "${var.wafsubnet_name}"
wafsubnet_prefix = "${var.wafsubnet_prefix}"
rpsubnet_name = "${var.rpsubnet_name}"
rpsubnet_prefix = "${var.rpsubnet_prefix}"
issubnet_name = "${var.issubnet_name}"
issubnet_prefix = "${var.issubnet_prefix}"
dbsubnet_name = "${var.dbsubnet_name}"
dbsubnet_prefix = "${var.dbsubnet_prefix}"
}

##########################################################
## Create DC VM & AD Forest
##########################################################

module "active-directory" {
source = "..\\modules\\active-directory"
resource_group_name = "${module.network.out_resource_group_name}"
location = "${var.location}"
prefix = "${var.prefix}"
subnet_id = "${module.network.dc_subnet_subnet_id}"
active_directory_domain = "${var.prefix}.local"
active_directory_netbios_name = "${var.prefix}"
private_ip_address = "${var.private_ip_address}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
}

##########################################################
## Create IIS VMs & Join domain
##########################################################

module "iis-vm" {
source = "..\\modules\\iis-vm"
resource_group_name = "${module.active-directory.out_resource_group_name}"
location = "${module.active-directory.out_dc_location}"
prefix = "${var.prefix}"
subnet_id = "${module.network.is_subnet_subnet_id}"
active_directory_domain = "${var.prefix}.local"
active_directory_username = "${var.admin_username}"
active_directory_password = "${var.admin_password}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
vmcount = "${var.vmcount}"
}

##########################################################
## Create Secondary Domain Controller VM & Join domain
##########################################################
module "dc2-vm" {
source = "..\\modules\\dc2-vm"
resource_group_name = "${module.active-directory.out_resource_group_name}"
location = "${module.active-directory.out_dc_location}"
dcavailability_set_id = "${module.active-directory.out_dcavailabilityset}"
prefix = "${var.prefix}"
subnet_id = "${module.network.dc_subnet_subnet_id}"
active_directory_domain = "${var.prefix}.local"
active_directory_username = "${var.admin_username}"
active_directory_password = "${var.admin_password}"
active_directory_netbios_name = "${var.prefix}"
dc2private_ip_address = "${var.dc2private_ip_address}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
domainadmin_username = "${var.domainadmin_username}"
}

This is the end of PART 4. By now you should have Terraform configured, building a resource group containing a network with 5 subnets in Azure. Within the dcsubnet you should have 2 VMs running the Domain Controller role with an Active Directory domain configured. Within the issubnet you should have at least one web server running IIS, placed in an availability set and joined to the domain.

Join me again soon for PART 5, where we will be adding database VM(s) running SQL Server and joined to the domain.

P.S. If you can't wait and just want to jump to the complete example, you can find it on GitHub, where it has also been contributed to the HashiCorp official repo.


A Multi-Tier Azure Environment with Terraform including Active Directory – PART 3

In PART 2 we got Terraform to deploy a Domain Controller into your newly configured network.

In PART 3 I am going to show you how to deploy a web server (IIS) and join it to your newly configured Active Directory domain.

MODULES/iis-vm

This all happens in the iis-vm module. First, we create the NIC to be attached to the soon-to-be-created VM. This includes a static public and private IP address in the appropriate "issubnet" created in PART 1.

1-NETWORK-INTERFACE.TF

resource "azurerm_public_ip" "static" {
name = "${var.prefix}-iis${1 + count.index}-ext"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
public_ip_address_allocation = "static"
count = "${var.vmcount}"
}

resource "azurerm_network_interface" "primary" {
name = "${var.prefix}-iis${1 + count.index}-int"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
internal_dns_name_label = "${var.prefix}-iis${1 + count.index}"
count = "${var.vmcount}"

ip_configuration {
name = "primary"
subnet_id = "${var.subnet_id}"
private_ip_address_allocation = "static"
private_ip_address = "10.100.30.${10 + count.index}"
public_ip_address_id = "${azurerm_public_ip.static.*.id[count.index]}"
}
}
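Since the public IPs are allocated statically, it can be handy to surface them once Terraform finishes. Here is a minimal sketch of an output for this module, assuming the resource names above (the output name itself is illustrative, not part of the original module):

```hcl
# Illustrative only: expose the static public IPs of all IIS NICs
# created above, one list entry per count index.
output "iis_public_ip_addresses" {
  value = ["${azurerm_public_ip.static.*.ip_address}"]
}
```

With this in place, `terraform output iis_public_ip_addresses` would list the addresses without digging through the portal.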

The next step is to create our web server VM. This example deploys a 2012-R2-Datacenter image. It is deployed into an availability set for resiliency, and you can deploy as many instances as you want using the "vmcount" variable.

2-VIRTUAL-MACHINE.TF

resource "azurerm_availability_set" "isavailabilityset" {
name = "isavailabilityset"
resource_group_name = "${var.resource_group_name}"
location = "${var.location}"
platform_fault_domain_count = 3
platform_update_domain_count = 5
managed = true
}

resource "azurerm_virtual_machine" "iis" {
name = "${var.prefix}-iis${1 + count.index}"
location = "${var.location}"
availability_set_id = "${azurerm_availability_set.isavailabilityset.id}"
resource_group_name = "${var.resource_group_name}"
network_interface_ids = ["${element(azurerm_network_interface.primary.*.id, count.index)}"]
vm_size = "Standard_A1"
delete_os_disk_on_termination = true
count = "${var.vmcount}"

storage_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2012-R2-Datacenter"
version = "latest"
}

storage_os_disk {
name = "${var.prefix}-iis${1 + count.index}-disk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}

os_profile {
computer_name = "${var.prefix}-iis${1 + count.index}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
}

os_profile_windows_config {
provision_vm_agent = true
enable_automatic_upgrades = false
}

depends_on = ["azurerm_network_interface.primary"]
}
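The "vmcount" variable drives how many of these VMs (and their NICs) get created. A minimal sketch of its declaration for the module's variables file might look like this; the default of two instances is an assumption for illustration, not taken from the original:

```hcl
# Hypothetical default of two IIS VMs; override per environment.
variable "vmcount" {
  description = "Number of IIS web server VMs to create"
  default     = "2"
}
```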

We now join the VM(s) to the domain using a Virtual Machine Extension. Note the use of the splat operator (*) with count.

3-JOIN-DOMAIN.TF

resource "azurerm_virtual_machine_extension" "join-domain" {
name = "join-domain"
location = "${element(azurerm_virtual_machine.iis.*.location, count.index)}"
resource_group_name = "${var.resource_group_name}"
virtual_machine_name = "${element(azurerm_virtual_machine.iis.*.name, count.index)}"
publisher = "Microsoft.Compute"
type = "JsonADDomainExtension"
type_handler_version = "1.3"
count = "${var.vmcount}"

# NOTE: the `OUPath` field is intentionally blank, to put it in the Computers OU
settings = <<SETTINGS
{
"Name": "${var.active_directory_domain}",
"OUPath": "",
"User": "${var.active_directory_domain}\\${var.active_directory_username}",
"Restart": "true",
"Options": "3"
}
SETTINGS

protected_settings = <<SETTINGS
{
"Password": "${var.active_directory_password}"
}
SETTINGS
}
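The splat/element pattern used above generalises to any per-instance attribute. As an illustrative sketch (not part of the module itself), these expressions show the difference between grabbing the whole list and picking a single element by index:

```hcl
# Illustrative only: the splat (*) expression returns the full list
# of VM names, while element() picks one entry by index.
locals {
  all_iis_names  = ["${azurerm_virtual_machine.iis.*.name}"]
  first_iis_name = "${element(azurerm_virtual_machine.iis.*.name, 0)}"
}
```

Inside a counted resource, passing count.index to element() is what lets each extension instance target its matching VM.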

Finally, we install IIS and some common features to help manage it.

4-INSTALL-IIS.TF

resource "azurerm_virtual_machine_extension" "iis" {
count = "${var.vmcount}"
name = "install-iis"
resource_group_name = "${var.resource_group_name}"
location = "${var.location}"
virtual_machine_name = "${element(azurerm_virtual_machine.iis.*.name, count.index)}"
publisher = "Microsoft.Compute"
type = "CustomScriptExtension"
type_handler_version = "1.9"

settings = <<SETTINGS
{
"commandToExecute": "powershell Add-WindowsFeature Web-Asp-Net45;Add-WindowsFeature NET-Framework-45-Core;Add-WindowsFeature Web-Net-Ext45;Add-WindowsFeature Web-ISAPI-Ext;Add-WindowsFeature Web-ISAPI-Filter;Add-WindowsFeature Web-Mgmt-Console;Add-WindowsFeature Web-Scripting-Tools;Add-WindowsFeature Search-Service;Add-WindowsFeature Web-Filtering;Add-WindowsFeature Web-Basic-Auth;Add-WindowsFeature Web-Windows-Auth;Add-WindowsFeature Web-Default-Doc;Add-WindowsFeature Web-Http-Errors;Add-WindowsFeature Web-Static-Content;"
}
SETTINGS

depends_on = ["azurerm_virtual_machine_extension.join-domain"]
}

Your main.tf file should now look like this:

main.tf

# Configure the Microsoft Azure Provider
provider "azurerm" {
subscription_id = "${var.subscription_id}"
client_id = "${var.client_id}"
client_secret = "${var.client_secret}"
tenant_id = "${var.tenant_id}"
}

##########################################################
## Create Resource group Network & subnets
##########################################################
module "network" {
source = "..\\modules\\network"
address_space = "${var.address_space}"
dns_servers = ["${var.dns_servers}"]
environment_name = "${var.environment_name}"
resource_group_name = "${var.resource_group_name}"
location = "${var.location}"
dcsubnet_name = "${var.dcsubnet_name}"
dcsubnet_prefix = "${var.dcsubnet_prefix}"
wafsubnet_name = "${var.wafsubnet_name}"
wafsubnet_prefix = "${var.wafsubnet_prefix}"
rpsubnet_name = "${var.rpsubnet_name}"
rpsubnet_prefix = "${var.rpsubnet_prefix}"
issubnet_name = "${var.issubnet_name}"
issubnet_prefix = "${var.issubnet_prefix}"
dbsubnet_name = "${var.dbsubnet_name}"
dbsubnet_prefix = "${var.dbsubnet_prefix}"
}

##########################################################
## Create DC VM & AD Forest
##########################################################

module "active-directory" {
source = "..\\modules\\active-directory"
resource_group_name = "${module.network.out_resource_group_name}"
location = "${var.location}"
prefix = "${var.prefix}"
subnet_id = "${module.network.dc_subnet_subnet_id}"
active_directory_domain = "${var.prefix}.local"
active_directory_netbios_name = "${var.prefix}"
private_ip_address = "${var.private_ip_address}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
}

##########################################################
## Create IIS VMs & Join domain
##########################################################

module "iis-vm" {
source = "..\\modules\\iis-vm"
resource_group_name = "${module.active-directory.out_resource_group_name}"
location = "${module.active-directory.out_dc_location}"
prefix = "${var.prefix}"
subnet_id = "${module.network.is_subnet_subnet_id}"
active_directory_domain = "${var.prefix}.local"
active_directory_username = "${var.admin_username}"
active_directory_password = "${var.admin_password}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
vmcount = "${var.vmcount}"
}

This is the end of PART 3. By now you should have Terraform configured, building a resource group containing a network with 5 subnets in Azure. Within the dcsubnet you should have a VM running the Domain Controller role with an Active Directory domain configured. Within the issubnet you should have at least one web server running IIS, placed in an availability set and joined to the domain.

Join me again soon for PART 4, where we will be adding a secondary Domain Controller VM for resiliency.

P.S. If you can't wait and just want to jump to the complete example, you can find it on GitHub, where it has also been contributed to the HashiCorp official repo.
