Enable logging to Log Analytics Workspace on Azure SQL Managed Instances using PowerShell


 

Just a quick post to share an easy way to enable logging on your Azure SQL Managed Instances using a simple PowerShell script:

# Log Analytics workspace resource group name
$OMSRG = "prd-rg01"
# Log Analytics workspace name
$OMSWSName = "log-prd01"
# Get the Log Analytics workspace ID
$WS = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName $OMSRG -Name $OMSWSName
$WSId = $WS.ResourceId
$SQLInstanceName = "prd-msql01"
$RGName = "prd-rg01"
# Get the SQL Managed Instance server
$SQLMI = Get-AzureRmSqlInstance -Name $SQLInstanceName -ResourceGroupName $RGName
$SQLMIID = $SQLMI.Id
$sqlserverdiagname = $SQLInstanceName + "-diag"
# Enable diagnostic settings to Log Analytics for the SQL instance
Set-AzureRmDiagnosticSetting -ResourceId $SQLMIID -WorkspaceId $WSId -Enabled $true -Name $sqlserverdiagname
# Get the Managed Instance database names
$SQLManagedInstanceDBS = Get-AzureRmSqlInstanceDatabase -InstanceName $SQLInstanceName -ResourceGroupName $RGName
# Iterate through each DB to enable logging to the Log Analytics workspace
foreach ($db in $SQLManagedInstanceDBS)
{
    $SQLMIDBID = $db.Id
    $diagname = $db.Name + "-diag"
    $SQLMIDBID
    $diagname
    Set-AzureRmDiagnosticSetting -ResourceId $SQLMIDBID -WorkspaceId $WSId -Enabled $true -Name $diagname
}
# It can take a while for the portal to show the config change. To check it with PowerShell, run
# Get-AzDiagnosticSetting -ResourceId <resource ID> with the resource ID of the Managed Instance
# server or a database, and it will show what is enabled and the workspace it is configured to use.
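Note that the AzureRm cmdlets used above have since been superseded by the Az module. A minimal sketch of the same verification step with Az cmdlets (assuming an authenticated session via Connect-AzAccount; the resource names are the placeholders from the script above):

```powershell
# Sketch only: assumes Connect-AzAccount has been run and the Az.Sql / Az.Monitor modules are installed.
$sqlmi = Get-AzSqlInstance -Name "prd-msql01" -ResourceGroupName "prd-rg01"
# Shows which diagnostic settings are enabled and the workspace they send to
Get-AzDiagnosticSetting -ResourceId $sqlmi.Id
```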

Managing Database Schemas using Azure DevOps

 

A data model changes during development and gets out of sync with the database. You can drop the database and let Entity Framework create a new one that matches the model, but this procedure results in the loss of data. The migrations feature in EF Core provides a way to incrementally update the database schema to keep it in sync with the application’s data model while preserving existing data in the database.
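For context, the EF Core CLI commands behind that workflow, run locally during development, look something like this (the migration name here is illustrative):

```shell
# Add a migration after changing the data model (name is illustrative)
dotnet ef migrations add AddCustomerEmail

# Apply any pending migrations to the target database
dotnet ef database update

# Or generate an idempotent SQL script to hand to a deployment pipeline
dotnet ef migrations script --idempotent -o migrations.sql
```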

I am using a Task Group for this to keep it as general as possible, allowing it to be used across multiple projects.

My Task Group for the build comprises three steps: a restore of the solution, the build of the migrations, and then a publish of the artifact:

My NuGet Restore step looks like this and also uses the Azure DevOps Artifacts Feed:

My Build EF Core Migrations step looks like this; more info on these scripts can be found here:

ef migrations script -v -i -o $(build.artifactstagingdirectory)\Migrations\$(Build.DefinitionName).sql --startup-project ../$(Build.DefinitionName).API
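The -i flag is worth calling out: it makes the generated script idempotent, wrapping each migration in a guard against the __EFMigrationsHistory table so the script can be re-run safely. The output looks roughly like this (the migration ID, DDL, and product version are illustrative):

```sql
IF NOT EXISTS (SELECT * FROM [__EFMigrationsHistory] WHERE [MigrationId] = N'20190101000000_InitialCreate')
BEGIN
    -- the migration's actual DDL goes here, e.g. CREATE TABLE statements
    INSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])
    VALUES (N'20190101000000_InitialCreate', N'2.2.0');
END;
GO
```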

The final step takes the output of the previous step and publishes it to Azure Pipelines (Artifact Feed):

I use this artifact within a Release Pipeline after I deploy my Web App:

The settings for this look like this:

To give you an idea of the structure, the linked artifacts look like this, with the app code and the SQL script generated in the steps above in separate folders. (The Deploy Azure App Service step above would just look at the "platform" folder.)

Hope you find this post useful and that it helps you build your infrastructure in Azure using Azure DevOps & Entity Framework Core Migrations.

Any questions, just get in touch via Twitter.


A Multi-Tier Azure Environment with Terraform including Active Directory – PART 5

In PART 4 we got Terraform to deploy a secondary Domain Controller for resiliency.

In PART 5 I am going to show you how to deploy Microsoft SQL VM(s) behind an Azure Internal Load Balancer and install Windows Server Failover Clustering so they are ready for AlwaysOn availability groups.

MODULES/sql-vm

This all happens in the sql-vm module. First of all we create the Azure Internal Load Balancer with an AlwaysOn endpoint listener. The soon-to-be-created VM(s) are added to the backend pool.

1-lb.TF

resource "azurerm_lb" "sql-loadbalancer" {
  name                = "${var.prefix}-sql-loadbalancer"
  resource_group_name = "${var.resource_group_name}"
  location            = "${var.location}"
  sku                 = "Standard"

  frontend_ip_configuration {
    name                          = "LoadBalancerFrontEnd"
    subnet_id                     = "${var.subnet_id}"
    private_ip_address_allocation = "static"
    private_ip_address            = "${var.lbprivate_ip_address}"
  }
}

resource "azurerm_lb_backend_address_pool" "loadbalancer_backend" {
  name                = "loadbalancer_backend"
  resource_group_name = "${var.resource_group_name}"
  loadbalancer_id     = "${azurerm_lb.sql-loadbalancer.id}"
}

resource "azurerm_lb_probe" "loadbalancer_probe" {
  resource_group_name = "${var.resource_group_name}"
  loadbalancer_id     = "${azurerm_lb.sql-loadbalancer.id}"
  name                = "SQLAlwaysOnEndPointProbe"
  protocol            = "tcp"
  port                = 59999
  interval_in_seconds = 5
  number_of_probes    = 2
}

resource "azurerm_lb_rule" "SQLAlwaysOnEndPointListener" {
  resource_group_name            = "${var.resource_group_name}"
  loadbalancer_id                = "${azurerm_lb.sql-loadbalancer.id}"
  name                           = "SQLAlwaysOnEndPointListener"
  protocol                       = "Tcp"
  frontend_port                  = 1433
  backend_port                   = 1433
  frontend_ip_configuration_name = "LoadBalancerFrontEnd"
  backend_address_pool_id        = "${azurerm_lb_backend_address_pool.loadbalancer_backend.id}"
  probe_id                       = "${azurerm_lb_probe.loadbalancer_probe.id}"
}

Next we create the NIC to be attached to each soon-to-be-created VM. This includes a static public & private IP address in the appropriate "dbsubnet" created in PART 1. This is also where the NIC is attached to the Azure Load Balancer backend pool.

Please note that this also creates an Azure NSG allowing RDP on port 3389. This is needed because a Standard Load Balancer defaults to blocking all traffic (I don't think this is the case when using the Basic SKU).

2-NETWORK-INTERFACE.TF

resource "azurerm_network_security_group" "allow-rdp" {
  name                = "allow-rdp"
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
}

resource "azurerm_network_security_rule" "allow-rdp" {
  name                        = "allow-rdp"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "3389"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = "${var.resource_group_name}"
  network_security_group_name = "${azurerm_network_security_group.allow-rdp.name}"
}

resource "azurerm_public_ip" "static" {
  name                         = "${var.prefix}-sql${1 + count.index}-ext"
  location                     = "${var.location}"
  resource_group_name          = "${var.resource_group_name}"
  public_ip_address_allocation = "static"
  count                        = "${var.sqlvmcount}"
  sku                          = "Standard"
}

resource "azurerm_network_interface" "primary" {
  name                      = "${var.prefix}-sql${1 + count.index}-int"
  location                  = "${var.location}"
  resource_group_name       = "${var.resource_group_name}"
  internal_dns_name_label   = "${var.prefix}-sql${1 + count.index}"
  network_security_group_id = "${azurerm_network_security_group.allow-rdp.id}"
  count                     = "${var.sqlvmcount}"

  ip_configuration {
    name                                    = "primary"
    subnet_id                               = "${var.subnet_id}"
    private_ip_address_allocation           = "static"
    private_ip_address                      = "10.100.50.${10 + count.index}"
    public_ip_address_id                    = "${azurerm_public_ip.static.*.id[count.index]}"
    load_balancer_backend_address_pools_ids = ["${azurerm_lb_backend_address_pool.loadbalancer_backend.id}"]
  }
}

The next step is to create our database VM(s). This example deploys a 2012-R2-Datacenter image with SQL Server 2014 SP2 Enterprise installed. The VM(s) are deployed into an availability set for resiliency, and you can deploy as many as you want using the "sqlvmcount" variable. Each VM also has separate disks for OS, Data & Logs as per Microsoft best practice.

3-VIRTUAL-MACHINE.TF

resource "azurerm_availability_set" "sqlavailabilityset" {
  name                         = "sqlavailabilityset"
  resource_group_name          = "${var.resource_group_name}"
  location                     = "${var.location}"
  platform_fault_domain_count  = 3
  platform_update_domain_count = 5
  managed                      = true
}

resource "azurerm_virtual_machine" "sql" {
  name                          = "${var.prefix}-sql${1 + count.index}"
  location                      = "${var.location}"
  availability_set_id           = "${azurerm_availability_set.sqlavailabilityset.id}"
  resource_group_name           = "${var.resource_group_name}"
  network_interface_ids         = ["${element(azurerm_network_interface.primary.*.id, count.index)}"]
  vm_size                       = "Standard_B1s"
  delete_os_disk_on_termination = true
  count                         = "${var.sqlvmcount}"

  storage_image_reference {
    publisher = "MicrosoftSQLServer"
    offer     = "SQL2014SP2-WS2012R2"
    sku       = "Enterprise"
    version   = "latest"
  }

  storage_os_disk {
    name              = "${var.prefix}-sql${1 + count.index}-disk1"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "${var.prefix}-sql${1 + count.index}"
    admin_username = "${var.admin_username}"
    admin_password = "${var.admin_password}"
  }

  os_profile_windows_config {
    provision_vm_agent        = true
    enable_automatic_upgrades = false
  }

  storage_data_disk {
    name              = "${var.prefix}-sql${1 + count.index}-data-disk1"
    disk_size_gb      = "2000"
    caching           = "ReadWrite"
    create_option     = "Empty"
    managed_disk_type = "Standard_LRS"
    lun               = "2"
  }

  storage_data_disk {
    name              = "${var.prefix}-sql${1 + count.index}-log-disk1"
    disk_size_gb      = "500"
    caching           = "ReadWrite"
    create_option     = "Empty"
    managed_disk_type = "Standard_LRS"
    lun               = "3"
  }

  depends_on = ["azurerm_network_interface.primary"]
}

We now join the VM(s) to the domain using a Virtual Machine Extension. Note the use of the Splat Operator (*) with count.

4-join-domain.TF

resource "azurerm_virtual_machine_extension" "join-domain" {
  name                 = "join-domain"
  location             = "${element(azurerm_virtual_machine.sql.*.location, count.index)}"
  resource_group_name  = "${var.resource_group_name}"
  virtual_machine_name = "${element(azurerm_virtual_machine.sql.*.name, count.index)}"
  publisher            = "Microsoft.Compute"
  type                 = "JsonADDomainExtension"
  type_handler_version = "1.3"
  count                = "${var.sqlvmcount}"

  # NOTE: the `OUPath` field is intentionally blank, to put it in the Computers OU
  settings = <<SETTINGS
{
  "Name": "${var.active_directory_domain}",
  "OUPath": "",
  "User": "${var.active_directory_domain}\\${var.active_directory_username}",
  "Restart": "true",
  "Options": "3"
}
SETTINGS

  protected_settings = <<SETTINGS
{
  "Password": "${var.active_directory_password}"
}
SETTINGS
}

Finally we install Windows Server Failover Clustering so it can easily be added to an AlwaysOn Availability Group if required.

5-install-wsfc.TF

resource "azurerm_virtual_machine_extension" "wsfc" {
  count                = "${var.sqlvmcount}"
  name                 = "create-cluster"
  resource_group_name  = "${var.resource_group_name}"
  location             = "${var.location}"
  virtual_machine_name = "${element(azurerm_virtual_machine.sql.*.name, count.index)}"
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.9"

  settings = <<SETTINGS
{
  "commandToExecute": "powershell Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools"
}
SETTINGS

  depends_on = ["azurerm_virtual_machine_extension.join-domain"]
}

Your MAIN.TF file should now look like this:

main.tf

# Configure the Microsoft Azure Provider
provider "azurerm" {
  subscription_id = "${var.subscription_id}"
  client_id       = "${var.client_id}"
  client_secret   = "${var.client_secret}"
  tenant_id       = "${var.tenant_id}"
}

##########################################################
## Create Resource group Network & subnets
##########################################################
module "network" {
  source              = "..\\modules\\network"
  address_space       = "${var.address_space}"
  dns_servers         = ["${var.dns_servers}"]
  environment_name    = "${var.environment_name}"
  resource_group_name = "${var.resource_group_name}"
  location            = "${var.location}"
  dcsubnet_name       = "${var.dcsubnet_name}"
  dcsubnet_prefix     = "${var.dcsubnet_prefix}"
  wafsubnet_name      = "${var.wafsubnet_name}"
  wafsubnet_prefix    = "${var.wafsubnet_prefix}"
  rpsubnet_name       = "${var.rpsubnet_name}"
  rpsubnet_prefix     = "${var.rpsubnet_prefix}"
  issubnet_name       = "${var.issubnet_name}"
  issubnet_prefix     = "${var.issubnet_prefix}"
  dbsubnet_name       = "${var.dbsubnet_name}"
  dbsubnet_prefix     = "${var.dbsubnet_prefix}"
}

##########################################################
## Create DC VM & AD Forest
##########################################################

module "active-directory" {
  source                        = "..\\modules\\active-directory"
  resource_group_name           = "${module.network.out_resource_group_name}"
  location                      = "${var.location}"
  prefix                        = "${var.prefix}"
  subnet_id                     = "${module.network.dc_subnet_subnet_id}"
  active_directory_domain       = "${var.prefix}.local"
  active_directory_netbios_name = "${var.prefix}"
  private_ip_address            = "${var.private_ip_address}"
  admin_username                = "${var.admin_username}"
  admin_password                = "${var.admin_password}"
}

##########################################################
## Create IIS VMs & Join domain
##########################################################

module "iis-vm" {
  source                    = "..\\modules\\iis-vm"
  resource_group_name       = "${module.active-directory.out_resource_group_name}"
  location                  = "${module.active-directory.out_dc_location}"
  prefix                    = "${var.prefix}"
  subnet_id                 = "${module.network.is_subnet_subnet_id}"
  active_directory_domain   = "${var.prefix}.local"
  active_directory_username = "${var.admin_username}"
  active_directory_password = "${var.admin_password}"
  admin_username            = "${var.admin_username}"
  admin_password            = "${var.admin_password}"
  vmcount                   = "${var.vmcount}"
}

##########################################################
## Create Secondary Domain Controller VM & Join domain
##########################################################
module "dc2-vm" {
  source                        = "..\\modules\\dc2-vm"
  resource_group_name           = "${module.active-directory.out_resource_group_name}"
  location                      = "${module.active-directory.out_dc_location}"
  dcavailability_set_id         = "${module.active-directory.out_dcavailabilityset}"
  prefix                        = "${var.prefix}"
  subnet_id                     = "${module.network.dc_subnet_subnet_id}"
  active_directory_domain       = "${var.prefix}.local"
  active_directory_username     = "${var.admin_username}"
  active_directory_password     = "${var.admin_password}"
  active_directory_netbios_name = "${var.prefix}"
  dc2private_ip_address         = "${var.dc2private_ip_address}"
  admin_username                = "${var.admin_username}"
  admin_password                = "${var.admin_password}"
  domainadmin_username          = "${var.domainadmin_username}"
}

##########################################################
## Create SQL Server VM Join domain
##########################################################
module "sql-vm" {
  source                    = "..\\modules\\sql-vm"
  resource_group_name       = "${module.active-directory.out_resource_group_name}"
  location                  = "${module.active-directory.out_dc_location}"
  prefix                    = "${var.prefix}"
  subnet_id                 = "${module.network.db_subnet_subnet_id}"
  active_directory_domain   = "${var.prefix}.local"
  active_directory_username = "${var.admin_username}"
  active_directory_password = "${var.admin_password}"
  admin_username            = "${var.admin_username}"
  admin_password            = "${var.admin_password}"
  sqlvmcount                = "${var.sqlvmcount}"
  lbprivate_ip_address      = "${var.lbprivate_ip_address}"
}

This brings us to the end of this example. I have tried to showcase a lot of different options of what you can deploy to Azure with Terraform, using a mixture of IaaS and PaaS.

You don’t have to use all of it but hopefully it gives you a few ideas and inspires you to start using Terraform to spin up resources in Azure.

The full example, including variables & output files, can be found on GitHub, where it has also been contributed to the HashiCorp official repo.

 
